1
Gorgas DL, Joldersma KB, Ankel FK, Carter WA, Barton MA, Reisdorff EJ. Emergency Medicine Milestones Final Ratings Are Often Subpar. West J Emerg Med 2024; 25:735-738. [PMID: 39319804] [PMCID: PMC11418869] [DOI: 10.5811/westjem.18703]
Abstract
Background
The emergency medicine (EM) milestones are objective behaviors categorized into thematic domains called "subcompetencies" (eg, emergency stabilization). The rating scale is predicated on the assumption that a rating (level) of 1.0 corresponds to an incoming EM-1 resident and that a rating of 4.0 is the "target rating" (albeit not an expectation) for a graduating resident. Our aim in this study was to determine the frequency with which graduating residents received the target milestone ratings.
Methods
This retrospective, cross-sectional study was a secondary analysis of a dataset used in a prior study; the findings reported here were not reported previously. We analyzed milestone subcompetency ratings from April 25-June 24, 2022 for categorical EM residents in their final year of training. Ratings were dichotomized as meeting the expected level at the time of program completion (≥3.5) or not meeting it (≤3.0). We calculated the number of residents who did not achieve target ratings for each subcompetency.
Results
Of the 2,637 residents in the spring of their final year of training in 2022, 1,613 (61.2%) achieved a rating of ≥3.5 on every subcompetency and 1,024 (38.8%) failed to achieve that rating on at least one subcompetency. There were 250 residents (9.5%) who failed to achieve the target rating in half of the subcompetencies and 105 (4.0%) who failed to achieve it (ie, rating ≤3.0) on every subcompetency.
Conclusion
When using an EM milestone rating threshold of 3.5, only 61.2% of physicians achieved the target ratings for program graduation; 4.0% failed to achieve the target rating on any milestone subcompetency; and 9.5% failed to achieve it in half of the subcompetencies.
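The ≥3.5 / ≤3.0 dichotomization in the Methods is simple arithmetic over each resident's subcompetency ratings. A minimal sketch with hypothetical residents and ratings (the study's dataset is not public, so all values below are invented):

```python
# Dichotomize milestone subcompetency ratings at the paper's 3.5 target and
# count residents by outcome. All resident data below are hypothetical.
TARGET = 3.5

def meets_target(ratings):
    """True if every subcompetency rating meets the graduation target."""
    return all(r >= TARGET for r in ratings)

def misses_target_everywhere(ratings):
    """True if every subcompetency rating falls at or below 3.0."""
    return all(r <= 3.0 for r in ratings)

residents = {
    "res_a": [4.0, 3.5, 4.0],  # meets target on every subcompetency
    "res_b": [3.5, 3.0, 4.0],  # misses target on one subcompetency
    "res_c": [3.0, 2.5, 3.0],  # misses target on every subcompetency
}

met_all = sum(meets_target(r) for r in residents.values())
missed_all = sum(misses_target_everywhere(r) for r in residents.values())
```

Because milestone levels are reported in 0.5 increments, the paper's ≥3.5 versus ≤3.0 split partitions every possible rating.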
Affiliation(s)
- Diane L. Gorgas
- Ohio State University Wexner Medical Center, Department of Emergency Medicine, Columbus, Ohio
- Felix K. Ankel
- Regions Hospital, Department of Emergency Medicine, St. Paul, Minnesota
- Wallace A. Carter
- Weill Cornell Medicine, Department of Emergency Medicine, New York, New York
2
Walters J, Paradise Black N, Yurttutan Engin N, Cohen DE, Ben Khallouq B, Chen JG. Race and Gender Differences in Pediatric Milestone Levels: A Multi-Institutional Study. Clin Pediatr (Phila) 2024; 63:977-985. [PMID: 37735881] [DOI: 10.1177/00099228231200985]
Abstract
The Accreditation Council for Graduate Medical Education milestones assess resident competency in 6 domains. We hypothesized that disparities in milestones exist across race and gender in pediatric residencies. This is a retrospective, cross-sectional, multi-institutional study (3 pediatric residencies; 1,446 scores; 316 residents). African American residents received the lowest scores in patient care (PC) (P = .030), medical knowledge (MK) (P = .005), practice-based learning and improvement (PBLI) (P = .003), professionalism (PROF) (P < .001), and interpersonal communication skills (ICS) (P = .005). Differences were most pronounced in PROF (African American mean 3.35 [SD 0.75], Asian 3.51 [0.66], Hispanic 3.58 [0.66], white 3.59 [0.67]). Female residents received higher scores than male residents in PC (P = .002) and systems-based practice (SBP) (P = .049). Female interns received higher MK scores (2.53 [0.44] versus 2.48 [0.48]; P = .044) but lower scores as third-years (4.00 [0.43] versus 4.14 [0.45]; P = .030). In this study, pediatric milestones differed based on race and gender.
Affiliation(s)
- Jamee Walters
- Johns Hopkins All Children's Hospital Pediatric Residency Program, St. Petersburg, FL, USA
- Nicole Paradise Black
- Division of Medical Education, Department of Pediatrics, University of Florida Pediatric Residency, Gainesville, FL, USA
- Nesrin Yurttutan Engin
- Studer Family Children's Hospital, Ascension Sacred Heart, Community Health Northwest Florida-Trinity Pediatrics, University of Florida Pediatric Residency Program, Pensacola, FL, USA
- Debra E Cohen
- Studer Family Children's Hospital, Ascension Sacred Heart, University of Florida Pediatric Residency Program, Pensacola, FL, USA
- Bertha Ben Khallouq
- Department of Gynecology and Obstetrics, Orlando Health Winnie Palmer Hospital, Orlando, FL, USA
- J Gene Chen
- Department of Pediatric Medical Education, University of Florida Pediatric Residency Program at Orlando Health, Orlando, FL, USA
3
Nowicki KD, Balboni IM, Cidon MJ, Dhanrajani AD, Driest KD, Fair DC, Imundo LF, Mehta JJ, Tarvin SE, Walters HM, Woolnough LC, Edgar LC, Curran ML. Assessing Pediatric Rheumatology Fellow Competence in the Milestone Era: Past, Present, and Future. Arthritis Care Res (Hoboken) 2024; 76:600-607. [PMID: 38108087] [DOI: 10.1002/acr.25276]
Abstract
Starting in 2015, pediatric rheumatology fellowship training programs were required by the Accreditation Council for Graduate Medical Education (ACGME) to assess fellows' academic performance within 21 subcompetencies falling under six competency domains. Each subcompetency had four or five milestone levels describing the developmental progression of knowledge and skill acquisition, and milestones were standardized across all pediatric subspecialties. As part of the Milestones 2.0 revision project, the ACGME convened a workgroup in 2022 to write pediatric rheumatology-specific milestones. Using adult rheumatology's Milestones 2.0 as a starting point, the workgroup revised the patient care and medical knowledge subcompetencies and milestones to reflect the requirements and nuances of pediatric rheumatology care. Milestones within the four remaining competency domains (professionalism, interpersonal and communication skills, practice-based learning and improvement, and systems-based practice) remained standardized across all pediatric subspecialties and were therefore not revised. The workgroup created a supplemental guide with explanations of the intent of each subcompetency (25 in total) and examples for each milestone level. The new milestones are an important step forward for competency-based medical education in pediatric rheumatology. However, challenges remain: milestone level assignment is meant to be informed by the results of multiple assessment methods, but the lack of pediatric rheumatology-specific assessment tools typically results in clinical competency committees determining trainee milestone levels without such collated results as the foundation of their assessments. Although further advances in pediatric rheumatology fellowship competency-based medical education are needed, Milestones 2.0 establishes the first pediatric-specific rheumatology milestones to assess fellow performance during training and help measure readiness for independent practice.
Affiliation(s)
- Katherine D Nowicki
- University of Colorado, Denver, and Children's Hospital Colorado, Aurora, Colorado
- Michal J Cidon
- Children's Hospital Los Angeles, Los Angeles, California
- Anita D Dhanrajani
- The University of Mississippi Medical Center, Jackson, Mississippi, Tulane University School of Medicine, New Orleans, and Children's Hospital of New Orleans, New Orleans, Louisiana
- Kyla D Driest
- Nationwide Children's Hospital and The Ohio State University, Columbus
- Lisa F Imundo
- Columbia University Medical Center, New York City, New York
- Jay J Mehta
- Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Stacey E Tarvin
- Riley Hospital for Children at Indiana University, Indianapolis
- Heather M Walters
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, and Cohen Children's Medical Center of New York, New Hyde Park
- Laura C Edgar
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Megan L Curran
- University of Colorado, Denver, and Children's Hospital Colorado, Aurora, Colorado
4
Frank AK, Lin JJ, Warren SB, Bullock JL, O'Sullivan P, Malishchak LE, Berman RA, Yialamas MA, Hauer KE. Stereotype Threat and Gender Bias in Internal Medicine Residency: It is Still Hard to be in Charge. J Gen Intern Med 2024; 39:636-642. [PMID: 37985610] [DOI: 10.1007/s11606-023-08498-5]
Abstract
BACKGROUND
Despite similar numbers of women and men in internal medicine (IM) residency, women face unique challenges. Stereotype threat is hypothesized to contribute to the underrepresentation of women in academic leadership, and exploring how it manifests in residency may provide insight into forces that perpetuate gender disparities.
OBJECTIVE
To quantify the prevalence of stereotype threat in IM residency and explore experiences contributing to that stereotype threat.
DESIGN
We used a mixed methods study design. First, we surveyed IM residents using the Stereotype Vulnerability Scale (SVS) to screen for stereotype threat. Second, we conducted focus groups with women who scored high on the SVS to understand experiences that led to stereotype threat.
PARTICIPANTS
The survey was sent to all IM residents at the University of California, San Francisco (UCSF), in September-November 2019. Focus groups were conducted at UCSF in Spring 2020.
APPROACH
The survey included an adapted version of the SVS. For focus groups, we developed a focus group guide informed by the literature on stereotype threat and used a thematic approach to data analysis. The mixed methods design enabled us to draw metainferences by integrating the two data sources.
KEY RESULTS
The survey response rate was 61% (110/181). Women were significantly more likely than men to have a score indicating stereotype threat vulnerability (77% vs 0%, p < 0.001). Four themes from focus groups characterized women's experiences of gender bias and stereotype threat: gender norm tension, microaggressions and sexual harassment, authority questioned, and support and allyship.
CONCLUSIONS
Gender-based stereotype threat is highly prevalent among women IM residents. This phenomenon threatens confidence and the ability to execute patient care responsibilities, detracting from well-being and professional development. These findings indicate that, despite robust representation of women in IM training, further attention is needed to address gendered experiences and contributors to women's vulnerability to stereotype threat.
Affiliation(s)
- Annabel K Frank
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Jackie J Lin
- School of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Justin L Bullock
- Department of Medicine, University of Washington, Seattle, WA, USA
- Patricia O'Sullivan
- School of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Rebecca A Berman
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
- Maria A Yialamas
- Department of Medicine, Brigham and Women's Hospital, Boston, MA, USA
- Karen E Hauer
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
5
Kwan B, Engel J, Steele B, Oyama L, Longhurst CA, El-Kareh R, Daniel M, Goldberg C, Clay B. An Automated System for Physician Trainee Procedure Logging via Electronic Health Records. JAMA Netw Open 2024; 7:e2352370. [PMID: 38265802] [PMCID: PMC10809018] [DOI: 10.1001/jamanetworkopen.2023.52370]
Abstract
Importance
Procedural proficiency is a core competency for graduate medical education; however, procedural reporting often relies on manual workflows that are duplicative and generate data whose validity and accuracy are difficult to assess. Failure to accurately gather these data can impede learner progression, delay procedures, and negatively affect patient safety.
Objective
To examine the accuracy and procedure-logging completeness of a system that extracts procedural data from an electronic health record system and uploads these data securely to an application used by many residency programs for accreditation.
Design, Setting, and Participants
This quality improvement study of all emergency medicine resident physicians at University of California, San Diego Health was performed from May 23, 2023, to June 25, 2023.
Exposures
Automated system for procedure data extraction and upload to a residency management software application.
Main Outcomes and Measures
The number of procedures captured by the automated system when running silently compared with manually logged procedures in the same timeframe, as well as the accuracy of the data upload.
Results
Forty-seven residents participated in the initial silent assessment of the extraction component of the system. During a 1-year period (May 23, 2022, to May 7, 2023), 4,291 procedures were manually logged by residents, compared with 7,617 procedures captured by the automated system during the same period, representing a 78% increase. During assessment of the upload component of the system (May 8, 2023, to June 25, 2023), a total of 1,353 procedures and patient encounters were evaluated, with the system operating with a sensitivity of 97.4%, a specificity of 100%, and an overall accuracy of 99.5%.
Conclusions and Relevance
In this quality improvement study of emergency medicine resident physicians, an automated system demonstrated that reliance on self-reported procedure logging resulted in significant procedural underreporting compared with the use of data obtained at the point of performance. The system also afforded a degree of reliability and validity previously absent from after-the-fact procedure logging workflows while using a novel application programming interface (API)-based approach. To our knowledge, this system constitutes the first generalizable implementation of an automated solution to a problem that has existed in graduate medical education for decades.
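The reported operating characteristics follow from standard confusion-matrix arithmetic. A sketch with made-up counts (the paper reports only the resulting percentages for its 1,353 evaluated procedures and encounters, so the counts below are purely illustrative):

```python
# Standard sensitivity / specificity / accuracy from confusion-matrix counts.
# The counts passed in below are illustrative, not the study's actual tallies.
def performance(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                # share of true procedures captured
    specificity = tn / (tn + fp)                # share of non-procedures correctly skipped
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall agreement with the reference
    return sensitivity, specificity, accuracy

sens, spec, acc = performance(tp=75, fn=2, tn=23, fp=0)
```

With these invented counts, sensitivity is 75/77 (about 97.4%) and specificity is 23/23 (100%), mirroring the form, though not the exact totals, of the results above.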
Affiliation(s)
- Brian Kwan
- Department of Emergency Medicine, University of California, San Diego, School of Medicine, San Diego
- Department of Biomedical Informatics, University of California, San Diego Health, San Diego
- Jeffery Engel
- Department of Information Services, University of California, San Diego Health, San Diego
- Brian Steele
- Office of Graduate Medical Education, University of California, San Diego Health, San Diego
- Leslie Oyama
- Department of Emergency Medicine, University of California, San Diego, School of Medicine, San Diego
- Christopher A. Longhurst
- Office of the Chief Medical Officer and Chief Digital Officer, University of California, San Diego Health, San Diego
- Department of Pediatrics, University of California, San Diego, School of Medicine, San Diego
- Robert El-Kareh
- Division of Hospital Medicine, Department of Medicine, University of California, San Diego School of Medicine, San Diego
- Office of the Associate Chief Medical Officer for Transformation and Learning, University of California, San Diego Health, San Diego
- Michelle Daniel
- Department of Emergency Medicine, University of California, San Diego, School of Medicine, San Diego
- Office of the Vice Dean for Medical Education, University of California, San Diego, School of Medicine, San Diego
- Charles Goldberg
- Office of Graduate Medical Education, University of California, San Diego Health, San Diego
- Division of Hospital Medicine, Department of Medicine, University of California, San Diego School of Medicine, San Diego
- Office of the Associate Dean for Graduate Medical Education, University of California, San Diego, School of Medicine, San Diego
- Brian Clay
- Division of Hospital Medicine, Department of Medicine, University of California, San Diego School of Medicine, San Diego
- Office of the Associate Chief Medical Officer, University of California, San Diego Health, San Diego
6
Choo EK, Woods R, Walker ME, O’Brien JM, Chan TM. The Quality of Assessment for Learning score for evaluating written feedback in anesthesiology postgraduate medical education: a generalizability and decision study. Can Med Educ J 2023; 14:78-85. [PMID: 38226296] [PMCID: PMC10787859] [DOI: 10.36834/cmej.75876]
Abstract
Background
Competency-based residency programs depend on high-quality feedback from the assessment of entrustable professional activities (EPAs). The Quality of Assessment for Learning (QuAL) score is a tool developed to rate the quality of narrative comments in workplace-based assessments. It has validity evidence for scoring the quality of narrative feedback provided to emergency medicine residents, but it is unknown whether the QuAL score is reliable for assessing narrative feedback in other postgraduate programs.
Methods
Fifty sets of EPA narratives from a single academic year at our competency-based medical education postgraduate anesthesia program were selected by stratified sampling within defined parameters (e.g., resident gender and stage of training, assessor gender, Competency By Design training level, and word count [≥17 or <17 words]). Two competency committee members and two medical students rated the quality of narrative feedback using a utility score and the QuAL score. We used Kendall's tau-b coefficient to compare the perceived utility of the written feedback with the quality assessed by the QuAL score, and generalizability and decision studies to estimate the reliability and generalizability coefficients.
Results
Both the faculty's utility and QuAL scores (r = 0.646, p < 0.001) and the trainees' utility and QuAL scores (r = 0.667, p < 0.001) were moderately correlated. The generalizability studies showed that utility scores were reliable with two raters for both faculty (Epsilon = 0.87, Phi = 0.86) and trainees (Epsilon = 0.88, Phi = 0.88).
Conclusions
The QuAL score correlates with faculty- and trainee-rated utility of anesthesia EPA feedback, and both faculty and trainees can reliably apply it to anesthesia EPA narrative feedback. The tool has the potential to be used for faculty development and program evaluation in competency-based medical education. Other programs could consider replicating our study in their specialty.
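Kendall's tau-b, the correlation used in the study above, corrects for the tied values that Likert-style utility and QuAL ratings inevitably produce. A minimal pure-Python sketch with invented score vectors (the study's 50 rated narrative sets are not reproduced here):

```python
# Kendall's tau-b between two rating vectors: (concordant - discordant) pairs,
# with the denominator adjusted for ties in either vector.
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    concordant = discordant = ties_x = ties_y = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        dx, dy = x1 - x2, y1 - y2
        if dx == 0:
            ties_x += 1            # pair tied on x (counts for y too if dy == 0)
        if dy == 0:
            ties_y += 1
        if dx != 0 and dy != 0:
            if dx * dy > 0:
                concordant += 1    # both differences point the same way
            else:
                discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / sqrt((n_pairs - ties_x) * (n_pairs - ties_y))

utility = [1, 2, 2, 3, 4, 4, 5, 5]  # hypothetical rater utility scores
qual =    [0, 1, 2, 2, 3, 4, 4, 5]  # hypothetical QuAL scores (0-5 scale)

tau = kendall_tau_b(utility, qual)
```

If SciPy is available, `scipy.stats.kendalltau` computes the same tau-b variant by default and also returns a p-value.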
Affiliation(s)
- Eugene K Choo
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Rob Woods
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Mary Ellen Walker
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Jennifer M O’Brien
- Department of Anesthesiology, College of Medicine, University of Saskatchewan, Saskatchewan, Canada
- Teresa M Chan
- Department of Medicine (Division of Emergency Medicine; Division of Education & Innovation), Michael G. DeGroote School of Medicine, Faculty of Health Sciences, McMaster University and Office of Continuing Professional Development & McMaster Education Research, Innovation, and Theory (MERIT) Program, Faculty of Health Sciences, McMaster University, Ontario, Canada
7
Park YS, Ryan MS, Hogan SO, Berg K, Eickmeyer A, Fancher TL, Farnan J, Lawson L, Turner L, Westervelt M, Holmboe E, Santen SA. Transition to Residency: National Study of Factors Contributing to Variability in Learner Milestones Ratings in Emergency Medicine and Family Medicine. Acad Med 2023; 98:S123-S132. [PMID: 37983405] [DOI: 10.1097/acm.0000000000005366]
Abstract
PURPOSE
The developmental trajectory of learning during residency may be attributed to multiple factors, including variation in individual trainee performance, program-level factors, graduating medical school effects, and the learning environment. Understanding the relationship between medical school and learner performance during residency is important for prioritizing undergraduate curricular strategies and educational approaches for an effective transition to residency and postgraduate training. This study explores factors contributing to longitudinal and developmental variability in resident Milestones ratings, focusing on variability due to graduating medical school, training program, and learners, using national cohort data from emergency medicine (EM) and family medicine (FM).
METHOD
Data from programs with residents entering training in July 2016 were used (EM: n = 1,645 residents, 178 residency programs; FM: n = 3,997 residents, 487 residency programs). Descriptive statistics were used to examine data trends, and cross-classified mixed-effects regression models were used to decompose variance components in Milestones ratings.
RESULTS
During postgraduate year (PGY)-1, graduating medical school accounted for 5% and 6% of the variability in Milestones ratings, decreasing to 2% and 5% by PGY-3 for EM and FM, respectively. Residency program accounted for substantial variability during PGY-1 (EM = 70%, FM = 53%) but less during PGY-3 (EM = 62%, FM = 44%), with greater variability across the training period in patient care (PC), medical knowledge (MK), and systems-based practice (SBP). Learner variance increased significantly between PGY-1 (EM = 23%, FM = 34%) and PGY-3 (EM = 34%, FM = 44%), with greater variability in practice-based learning and improvement (PBLI), professionalism (PROF), and interpersonal communication skills (ICS).
CONCLUSIONS
The greatest variance in Milestones ratings can be attributed to the residency program and, to a lesser degree, to learners and graduating medical school. The impact of program-level factors on learners shifts during the first year and across the duration of residency training, highlighting the influence of curricular, instructional, and programmatic factors on resident performance throughout residency.
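Percentages like those reported above are the final step of a variance decomposition: each source's variance component from the fitted cross-classified model, divided by the total. A sketch of that step only, with hypothetical component values (the fitting itself requires a mixed-effects package and the original data):

```python
# Convert variance components (as estimated by a fitted cross-classified
# mixed-effects model) into the share of variance each source explains.
# The component values below are invented for illustration.
def variance_shares(components):
    total = sum(components.values())
    return {source: value / total for source, value in components.items()}

# Hypothetical PGY-1 components, loosely echoing the program-dominated
# pattern described for early training.
pgy1 = {"medical_school": 1, "program": 14, "learner": 5}
shares = variance_shares(pgy1)  # program accounts for most early variance
```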
Affiliation(s)
- Yoon Soo Park
- Y.S. Park is head, Department of Medical Education, and The Ilene B. Harris Endowed Professor, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0001-8583-4335
- Michael S Ryan
- M.S. Ryan is associate dean for assessment, evaluation, research, and innovation, and professor of pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Sean O Hogan
- S.O. Hogan is director of outcomes research and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0009-0008-9006-1857
- Katherine Berg
- K. Berg is associate dean of assessment, director, Rector Clinical Skills and Simulation Center, and professor of medicine, Sidney Kimmel Medical College, Philadelphia, Pennsylvania
- Adam Eickmeyer
- A. Eickmeyer is director of medical school education, University of Chicago Pritzker School of Medicine, Chicago, Illinois, and a PhD candidate, Maastricht University School of Health Professions Education, Maastricht, the Netherlands
- Tonya L Fancher
- T.L. Fancher is associate dean for workforce innovation and education quality improvement and professor of medicine, University of California, Davis, School of Medicine, Sacramento, California
- Jeanne Farnan
- J. Farnan is associate dean for undergraduate medical education and professor of medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois
- Luan Lawson
- L. Lawson is senior associate dean of medical education and student affairs and professor of emergency medicine, Virginia Commonwealth University, Richmond, Virginia
- Laurah Turner
- L. Turner is assistant dean for evaluation and assessment and assistant professor of medical education, The University of Cincinnati College of Medicine, Cincinnati, Ohio
- Marjorie Westervelt
- M. Westervelt is director of educational assessment, scholarship, improvement, and innovation, Office of Medical Education, University of California, Davis, School of Medicine, Sacramento, California
- Eric Holmboe
- E. Holmboe is chief, research, milestones development and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Sally A Santen
- S.A. Santen is senior associate dean, Virginia Commonwealth University, Richmond, Virginia, and professor, emergency medicine and medical education, University of Cincinnati, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-8327-8002
8
Huh DD, Yamazaki K, Holmboe E, Bartley GB, Schnabel SD, Levine RB, Srikumaran D. Gender Bias and Ophthalmology Accreditation Council for Graduate Medical Education Milestones Evaluations. JAMA Ophthalmol 2023; 141:982-988. [PMID: 37707837] [PMCID: PMC10502694] [DOI: 10.1001/jamaophthalmol.2023.4138]
Abstract
Importance
Women remain underrepresented in ophthalmology, and gender-based disparities exist in salary, grant receipt, publication rates, and surgical volume throughout training and in practice. Although studies in emergency medicine and general surgery showed mixed findings regarding gender differences in Accreditation Council for Graduate Medical Education (ACGME) Milestones ratings, limited data exist examining such differences within ophthalmology.
Objective
To examine gender differences in ophthalmology ACGME Milestones.
Design, Setting, and Participants
This was a retrospective cross-sectional study of postgraduate year 4 (PGY-4) residents from 120 ophthalmology programs graduating in 2019.
Main Outcomes and Measures
PGY-4 midyear and year-end medical knowledge (MK) and patient care (PC) ratings and Written Qualifying Examination (WQE) scaled scores for residents graduating in 2019 were included. Differential prediction techniques using generalized estimating equations models were performed to identify differences by gender.
Results
Of 452 residents (median [IQR] age, 30.0 [29.0-32.0] years), 275 (61%) identified as men and 177 (39%) as women. There were no differences in PC domain average between women and men for either the midyear (-0.07; 95% CI, -0.11 to 0; P = .06) or year-end (-0.04; 95% CI, -0.07 to 0.03; P = .51) assessment period. For the MK domain average in the midyear assessment period, women (mean [SD], 3.76 [0.50]) were rated lower than men (mean [SD], 3.88 [0.47]; P = .006), with a difference in means of -0.12 (95% CI, -0.18 to -0.03). For the year-end assessment, however, the average MK ratings did not differ for women (mean [SD], 4.10 [0.47]) compared with men (mean [SD], 4.18 [0.47]; P = .20), with a difference in means of -0.08 (95% CI, -0.13 to 0.03).
Conclusions and Relevance
Results suggest that the ACGME ophthalmology Milestones in these 2 general competencies did not demonstrate major gender bias on a national level at the time of graduation. There were, however, differences in MK ratings at the midyear mark, and because low ratings on evaluations and examinations may adversely affect career opportunities for trainees, it is important to continue examining other competencies and performance measures for potential biases.
Affiliation(s)
- Dana D. Huh
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Kenji Yamazaki
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Eric Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- George B. Bartley
- American Board of Ophthalmology, Doylestown, Pennsylvania
- Department of Ophthalmology, Mayo Clinic, Rochester, Minnesota
- Rachel B. Levine
- Division of General Internal Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Divya Srikumaran
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland
9
Lett E, Tran NK, Nweke N, Nguyen M, Kim JG, Holmboe E, McDade W, Boatright D. Intersectional Disparities in Emergency Medicine Residents' Performance Assessments by Race, Ethnicity, and Sex. JAMA Netw Open 2023; 6:e2330847. [PMID: 37733347] [PMCID: PMC10514741] [DOI: 10.1001/jamanetworkopen.2023.30847]
Abstract
Importance
Previous studies have demonstrated sex-specific disparities in performance assessments among emergency medicine (EM) residents. However, less work has focused on intersectional disparities by ethnoracial identity and sex in resident performance assessments.
Objective
To estimate intersectional sex-specific ethnoracial disparities in standardized EM resident assessments.
Design, Setting, and Participants
This retrospective cohort study used data from the Association of American Medical Colleges and the Accreditation Council for Graduate Medical Education (ACGME) Milestones assessments to evaluate ratings for EM residents at 128 EM training programs in the US. Statistical analyses were conducted from June 2020 to January 2023.
Exposure
Training and assessment environments in EM residency programs across comparison groups defined by ethnoracial identity (Asian, White, or groups underrepresented in medicine [URM], ie, African American/Black, American Indian/Alaska Native, Hispanic/Latine, and Native Hawaiian/Other Pacific Islander) and sex (female/male).
Main Outcomes and Measures
Mean Milestone scores (scale, 0-9) across 6 core competency domains: interpersonal and communication skills, medical knowledge, patient care, practice-based learning and improvement, professionalism, and systems-based practice. Overall assessment scores were calculated as the mean of the 6 competency scores.
Results
The study sample comprised 128 ACGME-accredited programs and 16,634 assessments for 2,708 EM residents, of whom 1,913 (70.6%) were in 3-year and 795 (29.4%) in 4-year programs. Most of the residents were White (n = 2,012; 74.3%), followed by Asian (n = 477; 17.6%), Hispanic or Latine (n = 213; 7.9%), African American or Black (n = 160; 5.9%), American Indian or Alaska Native (n = 24; 0.9%), and Native Hawaiian or Other Pacific Islander (n = 4; 0.1%). Approximately 14.3% (n = 386) were of URM groups and 34.6% (n = 936) were female. Compared with White male residents, URM female residents in 3-year programs were rated increasingly lower in the medical knowledge (URM female score, -0.47; 95% CI, -0.77 to -0.17), patient care (-0.18; 95% CI, -0.35 to -0.01), and practice-based learning and improvement (-0.37; 95% CI, -0.65 to -0.09) domains by the postgraduate year 3 year-end assessment; URM female residents in 4-year programs were rated lower in all 6 competencies over the assessment period.
Conclusions and Relevance
This retrospective cohort study found that URM female residents were consistently rated lower than White male residents on Milestone assessments, findings that may reflect intersectional discrimination in physician competency evaluation. Eliminating sex-specific ethnoracial disparities in resident assessments may contribute to equitable health care by removing barriers to the retention and promotion of underrepresented and minoritized trainees and by facilitating diversity and representation in the emergency physician workforce.
Affiliation(s)
- Elle Lett
- Health Systems and Population Health, University of Washington School of Public Health, Seattle
- Center for Anti-Racism and Community Health, University of Washington School of Public Health, Seattle
- Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Nguyen Khai Tran
- The PRIDE Study/PRIDEnet, Stanford University School of Medicine, Palo Alto, California
- Nkemjika Nweke
- St George’s University School of Medicine, St George, Grenada
- Jung G. Kim
- Department of Health System Science, Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California
- Eric Holmboe
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- William McDade
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Dowin Boatright
- Ronald O. Perelman Department of Emergency Medicine, New York University, New York
10
Hauer KE, Park YS, Bullock JL, Tekian A. "My Assessments Are Biased!" Measurement and Sociocultural Approaches to Achieve Fairness in Assessment in Medical Education. Acad Med 2023; 98:S16-S27. [PMID: 37094278] [DOI: 10.1097/acm.0000000000005245]
Abstract
Assessing learners is foundational to their training and developmental growth throughout the medical education continuum. However, growing evidence shows the prevalence and impact of harmful bias in assessments in medical education, accelerating the urgency to identify solutions. Assessment bias presents a critical problem for all stages of learning and the broader educational system. Bias poses significant challenges to learners, disrupts the learning environment, and threatens the pathway and transition of learners into health professionals. While the topic of assessment bias has been examined within the context of measurement literature, limited guidance and solutions exist for learners in medical education, particularly in the clinical environment. This article presents an overview of assessment bias, focusing on clinical learners. A definition of bias and its manifestations in assessments are presented. Consequences of assessment bias are discussed within the contexts of validity and fairness and their impact on learners, patients/caregivers, and the broader field of medicine. Messick's unified validity framework is used to contextualize assessment bias; in addition, perspectives from sociocultural contexts are incorporated into the discussion to elaborate the nuanced implications in the clinical training environment. Discussions of these topics are conceptualized within the literature and the interventions used to date. The article concludes with practical recommendations to overcome bias and to develop an ideal assessment system. Recommendations address articulating values to guide assessment, designing assessment to foster learning and outcomes, attending to assessment procedures, promoting continuous quality improvement of assessment, and fostering equitable learning and assessment environments.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean for competency assessment and professional standards, and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0002-8812-4045
- Yoon Soo Park
- Y.S. Park is associate professor and associate head, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Justin L Bullock
- J.L. Bullock is a fellow, Department of Medicine, Division of Nephrology, University of Washington School of Medicine, Seattle, Washington; ORCID: http://orcid.org/0000-0003-4240-9798
- Ara Tekian
- A. Tekian is professor and associate dean for international education, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0002-9252-1588
11
Vallejo MC, Imler LE, Price SS, Lilly CL, Elmo RM, Shapiro RE, Nield LS. Identifying Gender-Related Differences in Graduate Medical Education with the Use of a Web-Based Professionalism Monitoring Tool. South Med J 2023; 116:395-399. [PMID: 37137472] [PMCID: PMC10167550] [DOI: 10.14423/smj.0000000000001555]
Abstract
OBJECTIVES Medical education is required to ensure a healthy training and learning environment for resident physicians. Trainees are expected to demonstrate professionalism with patients, faculty, and staff. West Virginia University Graduate Medical Education (GME) initiated a Web-based professionalism and mistreatment form ("button") on our Web site for reporting professionalism breaches, mistreatment, and exemplary behavior events. The purpose of this study was to identify characteristics in resident trainees who had a "button push" activation about their behavior to better understand ways to improve professionalism in GME. METHODS This West Virginia University institutional review board-approved quality improvement study is a descriptive analysis of GME button push activations from July 2013 through June 2021. We compared characteristics of all of those trainees who had specific button activation(s) about their behavior. Data are reported as frequency and percentage. Nominal data and interval data were analyzed using the χ2 and the t test, respectively. P < 0.05 was significant. Logistic regression was used to analyze those differences that were significant. RESULTS In the 8-year study period, there were 598 button activations, and 54% (n = 324) of the activations were anonymous. Nearly all of the button reports (n = 586, 98%) were constructively resolved within 14 days. Of the 598 button activations, 95% (n = 569) were identified as involving one sex, with 66.3% (n = 377) identified as men and 33.7% (n = 192) as women. Of the 598 activations, 83.7% (n = 500) involved residents and 16.3% (n = 98) involved attendings. One-time offenders comprised 90% (n = 538), and 10% (n = 60) involved individuals who had previous button pushes about their behavior. 
CONCLUSIONS Implementation of a professionalism-monitoring tool, such as our Web-based button push, identified gender differences in the reporting of professionalism breaches: twice as many men as women were identified as the instigator of a professionalism breach. The tool also facilitated timely interventions and exemplary behavior recognition.
Affiliation(s)
- Manuel C. Vallejo
- Department of Graduate Medical Education, West Virginia University School of Medicine, Morgantown
- Christa L. Lilly
- Department of Epidemiology and Biostatistics, West Virginia University School of Medicine, Morgantown
- Rebecca M. Elmo
- Department of Medical Education, West Virginia University School of Medicine, Morgantown
- Robert E. Shapiro
- Department of Obstetrics & Gynecology, West Virginia University School of Medicine, Morgantown
- Linda S. Nield
- Department of Pediatrics, West Virginia University School of Medicine, Morgantown
12
Cai P, Ye P, Zhang Y, Dai R, Fan J, Hambly BD, Bao S, Tao K. The outcomes of lockdown in the higher education sector during the COVID-19 pandemic. PLoS One 2023; 18:e0282907. [PMID: 37098014] [PMCID: PMC10128953] [DOI: 10.1371/journal.pone.0282907]
Abstract
To control the COVID-19 pandemic, a complete lockdown was initiated in 2020. We investigated the impact of lockdown on tertiary-level academic performance by comparing educational outcomes among first-year students during the second semester of their medical course before and during lockdown. Evidence: The demographics, including educational outcomes, of the two groups were not significantly different during semester one (prior to the lockdown). Academic performance among women was better than among men prior to lockdown. However, scores improved significantly for both sexes during the 2020 lockdown, following the move to fully online teaching, compared with 2019, with no significant difference between men and women in 2020 for English and Chinese History. Scores differed significantly between men and women in lab-based Histology Practice in both 2019 (in-person tuition) and 2020 (online digital tuition), although a significant improvement between 2019 and 2020 was observed only in women. Implication: The forced change to online delivery of the second semester of the first-year medical program in 2020 due to the COVID-19 pandemic did not result in any decline in assessment outcomes in any of the subjects undertaken. We believe extensive online digital media should continue to be available to students in the future.
Affiliation(s)
- Peiling Cai
- School of Preclinical Medicine, Chengdu University, Chengdu, China
- Peng Ye
- School of Preclinical Medicine, Chengdu University, Chengdu, China
- Yihao Zhang
- School of Preclinical Medicine, Chengdu University, Chengdu, China
- Rui Dai
- School of Preclinical Medicine, Chengdu University, Chengdu, China
- Jingchun Fan
- School of Public Health, Centre for Evidence-Based Medicine, Gansu University of Chinese Medicine, Lanzhou, Gansu, China
- Brett D Hambly
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Shisan Bao
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
- Kun Tao
- Department of Pathology, Tongji Hospital, School of Medicine, Tongji University, Shanghai, China
13
Mamtani M, Shofer F, Scott K, Kaminstein D, Eriksen W, Takacs M, Hall AK, Weiss A, Walter LA, Gallahue F, Yarris L, Abbuhl SB, Aysola J. Gender Differences in Emergency Medicine Attending Physician Comments to Residents: A Qualitative Analysis. JAMA Netw Open 2022; 5:e2243134. [PMID: 36409494] [PMCID: PMC9679878] [DOI: 10.1001/jamanetworkopen.2022.43134]
Abstract
IMPORTANCE Prior studies have revealed gender differences in the milestone and clinical competency committee assessment of emergency medicine (EM) residents. OBJECTIVE To explore gender disparities and the reasons for such disparities in the narrative comments from EM attending physicians to EM residents. DESIGN, SETTING, AND PARTICIPANTS This multicenter qualitative analysis examined 10 488 narrative comments among EM faculty and EM residents from 2015 to 2018 in 5 EM training programs in the US. Data were analyzed from 2019 to 2021. MAIN OUTCOMES AND MEASURES Differences in narrative comments by gender and study site. Qualitative analysis included deidentification and iterative coding of the data set using an axial coding approach, with double coding of 20% of the comments at random to assess intercoder reliability (κ, 0.84). The authors reviewed the unmasked coded data set to identify emerging themes. Summary statistics were calculated for the number of narrative comments and their coded themes by gender and study site. χ2 tests were used to determine differences in the proportion of narrative comments by gender of faculty and resident. RESULTS In this study of 283 EM residents, of whom 113 (40%) identified as women, and 277 EM attending physicians, of whom 95 (34%) identified as women, there were notable gender differences in the content of the narrative comments from faculty to residents. Men faculty, compared with women faculty, were more likely to provide either nonspecific comments (115 of 182 [63.2%] vs 40 of 95 [42.1%]) or no comments (3387 of 10 496 [32.3%] vs 1169 of 4548 [25.7%]; P < .001) to men and women residents. Compared with men residents, more women residents were told that they were performing below level by men and women faculty (36 of 113 [31.9%] vs 43 of 170 [25.3%]), with the most common theme including lack of confidence with procedural skills.
CONCLUSIONS AND RELEVANCE In this qualitative study of narrative comments provided by EM attending physicians to residents, multiple modifiable contributors to gender disparities in assessment were identified, including the presence, content, and specificity of comments. Among women residents, procedural competency tended to be conflated with procedural confidence. These findings can inform interventions to improve parity in assessment across graduate medical education.
Affiliation(s)
- Mira Mamtani
- Department of Emergency Medicine, Penn Medicine, Philadelphia, Pennsylvania
- FOCUS on Health and Leadership for Women, Penn Medicine, Philadelphia, Pennsylvania
- Frances Shofer
- Director of Epidemiology and Biostatistics, Department of Emergency Medicine, Penn Medicine, Philadelphia, Pennsylvania
- Kevin Scott
- Department of Emergency Medicine, Penn Medicine, Philadelphia, Pennsylvania
- Dana Kaminstein
- Co-Director of the Educational Research Program, Penn Graduate School of Education, Philadelphia, Pennsylvania
- Masters in Medical Education Program, Penn Graduate School of Education, Philadelphia, Pennsylvania
- Whitney Eriksen
- Mixed Methods Research Lab, Penn Medicine, Philadelphia, Pennsylvania
- Michael Takacs
- Department of Emergency Medicine, University of Iowa Carver College of Medicine, Iowa City
- Andrew K. Hall
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
- Anna Weiss
- Department of Pediatrics, Penn Medicine, Philadelphia, Pennsylvania
- Division of Emergency Medicine, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
- Lauren A. Walter
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham
- Fiona Gallahue
- Department of Emergency Medicine, University of Washington, Seattle
- Lainie Yarris
- Department of Emergency Medicine, Oregon Health and Sciences University, Portland
- Jaya Aysola
- Division of General Internal Medicine, Department of Internal Medicine, Penn Medicine, Philadelphia, Pennsylvania
- Penn Medicine Center for Health Equity Advancement, Penn Medicine, Philadelphia, Pennsylvania
14
Farthing A, Burkhardt J. Moving Beyond the Binary: How Language and Common Research Practices Can Make Emergency Medicine Less Welcoming for Some Learners and Physicians. West J Emerg Med 2022; 23:890-892. [DOI: 10.5811/westjem.2022.8.58646]
Affiliation(s)
- Alex Farthing
- University of Pittsburgh Medical Center, Department of Emergency Medicine, Pittsburgh, Pennsylvania
- John Burkhardt
- University of Michigan Medical School, Departments of Emergency Medicine and Learning Health Sciences, Ann Arbor, Michigan
15
Menchetti I, Eagles D, Ghanem D, Leppard J, Fournier K, Cheung WJ. Gender differences in emergency medicine resident assessment: A scoping review. AEM Educ Train 2022; 6:e10808. [PMID: 36189450] [PMCID: PMC9513437] [DOI: 10.1002/aet2.10808]
Abstract
Background Growing literature within postgraduate medical education demonstrates that female resident physicians experience gender bias throughout their training and future careers. This scoping review aims to describe the current body of literature on gender differences in emergency medicine (EM) resident assessment. Methods We conducted a scoping review that adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. We included research involving resident physicians or fellows in EM (population and context) that focused on the impact of gender on assessments (concept). We searched seven databases from the databases' inception to April 4, 2022. Two reviewers independently screened citations, completed full-text review, and abstracted data. A third reviewer resolved any discrepancies. Results A total of 667 unique citations were identified; 10 studies were included, all conducted within the United States. Four studies reported differences in EM resident assessments attributable to gender within workplace-based assessments (qualitative comments and quantitative scores) by both attending physicians and nonphysicians. Six studies investigating clinical competency committee scores, procedural scores, and simulation-based assessments did not report any significant differences attributable to gender. Conclusions This scoping review found that gender bias exists within EM resident assessment, most notably at the level of narrative comments typically received via workplace-based assessments. Because female EM residents receive higher rates of negative or critical comments and discordant feedback on assessments, these findings raise concern about added barriers female EM residents may face while progressing through residency and about the impact on their clinical and professional development.
Affiliation(s)
- Debra Eagles
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Dana Ghanem
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Jennifer Leppard
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Karine Fournier
- Health Sciences Library, University of Ottawa, Ottawa, Ontario, Canada
- Warren J. Cheung
- Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada
16
Yamazaki K, Holmboe ES, Hamstra SJ. An Empirical Investigation Into Milestones Factor Structure Using National Data Derived From Clinical Competency Committees. Acad Med 2022; 97:569-576. [PMID: 34192718] [DOI: 10.1097/acm.0000000000004218]
Abstract
PURPOSE To investigate whether milestone data obtained from clinical competency committee (CCC) ratings in a single specialty reflected the 6 general competency domains framework. METHOD The authors examined milestone ratings from all 275 U.S. Accreditation Council for Graduate Medical Education-accredited categorical obstetrics and gynecology (OBGYN) programs from July 1, 2018, to June 30, 2019. The sample size ranged from 1,371 to 1,438 residents from 275 programs across 4 postgraduate years (PGYs), each with 2 assessment periods. The OBGYN milestones reporting form consisted of 28 subcompetencies under the 6 general competency domains. Milestone ratings were determined by each program's CCC. Intraclass correlations (ICCs) and design effects were calculated for each subcompetency by PGY and assessment period. A multilevel confirmatory factor analysis (CFA) perspective was used, and the pooled within-program covariance matrix was obtained to compare the fit of the 6-domain factor model against 3 other plausible models. RESULTS Milestone ratings from 5,618 OBGYN residents were examined. Moderate to high ICCs and design effects greater than 2.0 were prevalent among all subcompetencies for both assessment periods, warranting the use of the multilevel approach in applying CFA to the milestone data. The theory-aided split-patient care (PC) factor model, which used the 6 general competency domains but also included 3 factors within the PC domain (obstetric technical skills, gynecology technical skills, and ambulatory care), was consistently shown as the best-fitting model across all PGYs by assessment period conditions, except for one. CONCLUSIONS The findings indicate that in addition to using the 6 general competency domains framework in their rating process, CCCs may have further distinguished the PC competency domain into 3 meaningful factors. 
This study provides internal structure validity evidence for the milestones within a single specialty and may shed light on CCCs' understanding of the distinctive content embedded within the milestones.
Affiliation(s)
- Kenji Yamazaki
- K. Yamazaki is senior analyst, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7039-4717
- Eric S Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation Officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- Stanley J Hamstra
- S.J. Hamstra is research consultant, Accreditation Council for Graduate Medical Education, Chicago, Illinois, professor, Department of Surgery, University of Toronto, Toronto, Ontario, Canada, and adjunct professor, Department of Medical Education, Feinberg School of Medicine, Northwestern University, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-0680-366X
17
Sheffield V, Hartley S, Stansfield RB, Mack M, Blackburn S, Vaughn VM, Heidemann L, Chang R, Lukela JR. Gendered Expectations: the Impact of Gender, Evaluation Language, and Clinical Setting on Resident Trainee Assessment of Faculty Performance. J Gen Intern Med 2022; 37:714-722. [PMID: 34405349] [PMCID: PMC8904706] [DOI: 10.1007/s11606-021-07093-w]
Abstract
BACKGROUND Gender inequity is pervasive in academic medicine. Factors contributing to these gender disparities must be examined. A significant body of literature indicates men and women are assessed differently in teaching evaluations. However, limited data exist on how faculty gender affects resident evaluation of faculty performance based on the skill being assessed or the clinical practice settings in which the trainee-faculty interaction occurs. OBJECTIVE Evaluate for gender-based differences in the assessment of general internal medicine (GIM) faculty physicians by trainees in inpatient and outpatient settings. DESIGN Retrospective cohort study. SUBJECTS Inpatient and outpatient GIM faculty physicians in an Internal Medicine residency training program from July 1, 2015, to December 31, 2018. MAIN MEASURES Faculty scores on trainee teaching evaluations including overall teaching ability and Accreditation Council for Graduate Medical Education (ACGME) competencies (medical knowledge [MK], patient care [PC], professionalism [PROF], interpersonal and communication skills [ICS], practice-based learning and improvement [PBLI], and systems-based practice [SBP]) based on the institutional faculty assessment form. KEY RESULTS In total, 3581 evaluations by 445 trainees (55.1% men, 44.9% women) assessing 161 GIM faculty physicians (50.3% men, 49.7% women) were included. Male faculty were rated higher in overall teaching ability (male=4.69 vs. female=4.63, p=0.003) and in four of the six ACGME competencies (MK, PROF, PBLI, and SBP) based on our institutional evaluation form. In the inpatient setting, male faculty were rated more favorably for overall teaching (male = 4.70, female = 4.53, p<0.001) and across all ACGME competencies. The only observed gender difference in the outpatient setting favored female faculty in PC (male = 4.65, female = 4.71, p=0.01). CONCLUSIONS Male and female GIM faculty performance was assessed differently by trainees.
Gender-based differences were impacted by the setting of evaluation, with the greatest difference by gender noted in the inpatient setting.
Affiliation(s)
- Virginia Sheffield
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Sarah Hartley
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Megan Mack
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Staci Blackburn
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Valerie M Vaughn
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Department of Internal Medicine, University of Utah School of Medicine, Salt Lake City, UT, USA
- Lauren Heidemann
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Robert Chang
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
18
Billick M, Rassos J, Ginsburg S. Dressing the Part: Gender Differences in Residents' Experiences of Feedback in Internal Medicine. Acad Med 2022; 97:406-413. [PMID: 34709203] [DOI: 10.1097/acm.0000000000004487]
Abstract
PURPOSE Multiple studies demonstrate that assessment of residents differs by gender, yet little is known about how these differences are experienced by women and men. The authors sought to understand whether the experience of being assessed and receiving feedback differs between men and women internal medicine (IM) residents and how women respond to these experiences. METHOD A constructivist grounded theory approach to data collection and interpretation was used. The authors invited all IM residents in postgraduate years 1-3 at the University of Toronto to participate in semistructured focus groups (August-October 2019). Twenty-two residents participated (8 men, 14 women). Focus groups were divided by gender and training level. RESULTS The authors found a profound difference in experiences of receiving feedback between men and women residents. The themes of challenges to power and authority, tactics to reestablish power and authority, conflicting feedback from attendings, and ways of moving forward all diverged between men and women residents. Women repeatedly brought up feedback outside of official assessment moments and relied on symbols, such as a white coat, stethoscope, and demure clothing, to "dress the part" of a physician. Women also encountered conflicting feedback from supervisors regarding confidence and assertiveness (e.g., sometimes told to be more assertive, other times to be less), often resulting in self-censorship; similar feedback was rarely noted by men. CONCLUSIONS Gendered differences in the experiences of being assessed and receiving feedback are not always reflected in standard measures. Gender and medicine can be considered performative, and these findings demonstrate women IM residents integrate multiple forms of feedback to create the persona of the woman physician. 
The authors believe this research contributes a unique vantage point to the experience of women residents interpreting explicit and implicit feedback in IM and highlights the socialization that occurs to become a woman physician.
Affiliation(s)
- Maxime Billick
- M. Billick is a postgraduate year 4 chief medical resident in internal medicine, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- James Rassos
- J. Rassos is assistant professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, and staff physician, Unity Health, Toronto, Ontario, Canada
- Shiphra Ginsburg
- S. Ginsburg is professor of medicine, Department of Medicine, Faculty of Medicine, University of Toronto, scientist, Wilson Centre for Research in Education, University Health Network, University of Toronto, and Canada Research Chair in Health Professions Education, Toronto, Ontario, Canada; ORCID: http://orcid.org/0000-0002-4595-6650
19
See A, Pallaci M, Aluisio AR, Beck-Esmay J, Menchine M, Weinstock M, Lam CN, Riddell J. Assessment of Implicit Gender Bias During Evaluation of Procedural Competency Among Emergency Medicine Residents. JAMA Netw Open 2022; 5:e2147351. [PMID: 35129594] [PMCID: PMC8822382] [DOI: 10.1001/jamanetworkopen.2021.47351]
Abstract
IMPORTANCE Gender disparities exist throughout medicine. Recent studies have highlighted an attainment gap between male and female residents in performance evaluations on Accreditation Council for Graduate Medical Education (ACGME) milestones. Because of difficulties in blinding evaluators to gender, it remains unclear whether these observed disparities are because of implicit bias or other causes. OBJECTIVE To estimate the magnitude of implicit gender bias in assessments of procedural competency in emergency medicine residents and whether the gender of the evaluator is associated with identified implicit gender bias. DESIGN, SETTING, AND PARTICIPANTS A cross-sectional study was performed from 2018 to 2020 in which emergency medicine residency faculty assessed procedural competency by evaluating videos of residents performing 3 procedures in a simulated environment. They were blinded to the intent of the study. Proceduralists were filmed performing each procedure from 2 different viewpoints simultaneously by 2 different cameras. One was a gender-blinded (ie, hands-only) view, and the other a wide-angled gender-evident (ie, whole-body) view. The faculty evaluators viewed videos in a random order and assessed procedural competency on a global rating scale with extensive validity evidence for the evaluation of video-recorded procedural performance. MAIN OUTCOMES AND MEASURES The primary outcome was to determine if there was a difference in the evaluation of procedural competency based on gender. The secondary outcome was to determine if there was a difference in the evaluations based on the gender of the evaluator. RESULTS Fifty-one faculty evaluators enrolled from 19 states, with 22 male participants (43.1%), 29 female participants (56.9%), and a mean (SD) age of 37 (6.4) years. Each evaluator assessed all 60 procedures: 30 gender-blinded (hands-only view) videos and 30 identical gender-evident (wide angle) videos. 
There were no statistically significant differences in the study evaluators' scores of the proceduralists based on their gender, and the gender of the evaluator was not associated with the difference in mean scores. CONCLUSIONS AND RELEVANCE In this study, we did not identify a difference in the evaluation of procedural competency based upon the gender of the resident proceduralist or the gender of the faculty evaluator.
Affiliation(s)
- Ashley See
  - Department of Emergency Medicine, Adena Health System, Chillicothe, Ohio
  - Department of Emergency Medicine, Kettering Health, Dayton, Ohio
- Michael Pallaci
  - Department of Emergency Medicine, Adena Health System, Chillicothe, Ohio
  - Department of Emergency Medicine, Summa Health System, Akron, Ohio
- Adam R. Aluisio
  - Department of Emergency Medicine, Alpert Medical School, Brown University, Providence, Rhode Island
- Jenny Beck-Esmay
  - Department of Emergency Medicine, Mt Sinai Morningside, New York, New York
- Michael Menchine
  - Department of Emergency Medicine, Keck School of Medicine, University of Southern California, Los Angeles
- Michael Weinstock
  - Department of Emergency Medicine, Adena Health System, Chillicothe, Ohio
  - Department of Emergency Medicine, The Wexner Medical Center, Ohio State University, Columbus
- Chun Nok Lam
  - Department of Emergency Medicine, Keck School of Medicine, University of Southern California, Los Angeles
  - Department of Population and Public Health Science, Keck School of Medicine, University of Southern California, Los Angeles
- Jeff Riddell
  - Department of Emergency Medicine, Keck School of Medicine, University of Southern California, Los Angeles
20
Athy S, Talmon G, Samson K, Martin K, Nelson K. Faculty Versus Resident Self-Assessment Using Pathology Milestones: How Aligned Are We? Acad Pathol 2021; 8:23742895211060526. [PMID: 34926794] [PMCID: PMC8679011] [DOI: 10.1177/23742895211060526]
Abstract
Competent physicians must be able to self-assess skill level; however, previous studies suggest that medical trainees may not accurately self-assess. We utilized Pathology Milestones (PM) data to determine whether there were discrepancies in self- versus Clinical Competency Committee (CCC) ratings by sex, program year (PGY), time of evaluation, question category (Patient Care, Medical Knowledge, Systems-Based Practice [SBP], Practice-Based Learning and Improvement [PBL], Professionalism [PRO], and Interpersonal and Communication Skills), and Residency In-Service Examination (RISE) score. We completed retrospective analyses of PM evaluation scores from 2016 to 2019 (n = 23 residents), collected twice per year. Discrepancies in evaluation scores were calculated by subtracting CCC scores from resident self-evaluation scores. There was no significant difference in discrepancy scores between male and female residents (P = .94). Discrepancy scores among all PGYs were significantly different (P < .0001), with PGY1 tending to overrate the most, followed by PGY2. PGY3 and PGY4 underrated themselves on average compared to CCC ratings, with PGY4 having significantly lower self-ratings relative to CCC ratings than any other PGY. In January, residents underscored themselves, and in July residents overscored themselves, compared to CCC (P < .0001 for both). Question types resulted in variable discrepancy scores, with SBP significantly lower than, and PRO significantly higher than, all other categories (P < .05 for both). Increases in RISE score correlated with increases in both self- and CCC-scoring. These discrepancies can help trainees improve self-assessment and indicate potential areas for amelioration, such as curriculum adjustments or Milestones verbiage.
Affiliation(s)
- Sienna Athy
  - Department of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Geoffrey Talmon
  - Department of Pathology and Microbiology and Associate Dean for Medical Education, University of Nebraska Medical Center, Omaha, NE, USA
- Kaeli Samson
  - Department of Biostatistics, University of Nebraska Medical Center, Omaha, NE, USA
- Kari Nelson
  - Department of Surgery and Graduate Medical Education Research and Education Program Manager, University of Nebraska Medical Center, Omaha, NE, USA
21
Park YS, Hamstra SJ, Yamazaki K, Holmboe E. Longitudinal Reliability of Milestones-Based Learning Trajectories in Family Medicine Residents. JAMA Netw Open 2021; 4:e2137179. [PMID: 34874406] [PMCID: PMC8652607] [DOI: 10.1001/jamanetworkopen.2021.37179]
Abstract
IMPORTANCE Longitudinal Milestones data reported to the Accreditation Council for Graduate Medical Education (ACGME) can be used to measure the developmental and educational progression of learners. Learning trajectories illustrate the pattern and rate at which learners acquire competencies toward unsupervised practice. OBJECTIVE To investigate the reliability of learning trajectories and patterns of learning progression that can support meaningful intervention and remediation for residents. DESIGN, SETTING, AND PARTICIPANTS This national retrospective cohort study included Milestones data from residents in family medicine, representing 6 semi-annual reporting periods from July 2016 to June 2019. INTERVENTIONS Longitudinal formative assessment using the Milestones assessment system reported to the ACGME. MAIN OUTCOMES AND MEASURES To estimate longitudinal consistency, growth rate reliability (GRR) and growth curve reliability (GCR) for 22 subcompetencies in the ACGME family medicine Milestones were used, incorporating clustering effects at the program level. Latent class growth curve models were used to examine longitudinal learning trajectories. RESULTS This study included Milestones ratings from 3872 residents in 514 programs. The Milestones reporting system reliably differentiated individual longitudinal patterns for formative purposes (mean [SD] GRR, 0.63 [0.03]); there was also evidence of precision for model-based rates of change (mean [SD] GCR, 0.91 [0.02]). Milestones ratings increased significantly across training years and reporting periods (mean [SD] of 0.55 [0.04] Milestones units per reporting period; P < .001); patterns of developmental progress varied by subcompetency. There were 3 or 4 distinct patterns of learning trajectories for each of the 22 subcompetencies. 
For example, for the professionalism subcompetency, residents were classified into 4 groups of learning trajectories; during the 3-year family medicine training period, trajectories diverged further after postgraduate year (PGY) 1, indicating a potential remediation point between the end of PGY 1 and the beginning of PGY 2 for struggling learners, who represented 16% of learners (620 residents). Similar inferences for learning trajectories were found for practice-based learning and improvement, systems-based practice, and interpersonal and communication skills. Subcompetencies in medical knowledge and patient care demonstrated more consistent patterns of upward growth. CONCLUSIONS AND RELEVANCE These findings suggest that the Milestones reporting system provides reliable longitudinal data for individualized tracking of progress in all subcompetencies. Learning trajectories with supporting reliability evidence could be used to understand residents' developmental progress and to tailor individualized learning plans and remediation.
Affiliation(s)
- Yoon Soo Park
  - Harvard Medical School, Boston, Massachusetts
  - Massachusetts General Hospital, Boston
  - University of Illinois at Chicago College of Medicine, Chicago
- Stanley J. Hamstra
  - Accreditation Council for Graduate Medical Education, Chicago, Illinois
  - Department of Surgery, University of Toronto, Toronto, Ontario, Canada
  - Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Kenji Yamazaki
  - Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Eric Holmboe
  - Accreditation Council for Graduate Medical Education, Chicago, Illinois
  - Feinberg School of Medicine, Northwestern University, Chicago, Illinois
22
Weber DE, Kinnear B, Kelleher M, Klein M, Sall D, Schumacher DJ, Zhang N, Warm E, Schauer DP. Effect of resident and assessor gender on entrustment-based observational assessment in an internal medicine residency program. MedEdPublish 2021. [DOI: 10.12688/mep.17410.1]
Abstract
Background: Implicit gender bias leads to differences in assessment. Studies examining gender differences in resident milestone assessment data demonstrate variable results. The purpose of this study was to determine if observational entrustment scores differ by resident and assessor gender in a program of assessment based on discrete, observable skills. Methods: We analyzed overall entrustment scores and entrustment scores by Accreditation Council for Graduate Medical Education (ACGME) core competency for 238 residents (49% female) from 396 assessors (38% female) in one internal medicine residency program from July 2012 to June 2019. We conducted analyses at 1-12 months, 1-36 months, 1-6 months, 7-12 months, and 31-36 months. We used linear mixed-effects models to assess the role of resident and assessor gender, with resident-specific and assessor-specific random effects to account for repeated measures. Results: Statistically significant interactions existed between resident and assessor gender for overall entrustment at 1-12 months (p < 0.001), 1-36 months (p < 0.001), 1-6 months (p < 0.001), 7-12 months (p = 0.04), and 31-36 months (p < 0.001). However, group differences were not statistically significant. In several instances the interaction between resident and assessor gender was significant within an individual ACGME core competency, but there were no statistically significant group differences for any competency at any time point. When applicable, subsequent analysis of the main effects of resident and assessor gender independently of one another revealed no statistically significant differences. Conclusions: No significant differences in entrustment scores were found based on resident or assessor gender in our large, robust entrustment-based program of assessment. Determining the reasons for our findings may help identify ways to mitigate gender bias in assessment.
23
Dundon KM, Powell WT, Wilder JL, King B, Schwartz A, McPhillips H, Best JA. Parenthood and Parental Leave Decisions in Pediatric Residency. Pediatrics 2021; 148:peds.2021-050107. [PMID: 34584002] [DOI: 10.1542/peds.2021-050107]
Abstract
OBJECTIVES The demands of residency training may impact trainees' decision to have children. We examined characteristics of pediatric residents' decisions regarding childbearing, determinants of resident parental leave, and associations with well-being. METHODS A survey of 845 pediatric residents at 13 programs was conducted between October 2019 and May 2020. Survey items included demographics, desire for future children, and logistics of parental leave. Outcomes included parental leave length, burnout and depression screening results, satisfaction with duration of breastfeeding, and satisfaction with parental leave and parenthood decisions. RESULTS Seventy-six percent (639 of 845) of residents responded to the survey. Fifty-two percent (330) of respondents reported delaying having children during residency, and 29% (97) of those were dissatisfied with their decision to do so. Busy work schedule (89.7%), finances (50.9%), and a desire not to extend residency (41.2%) were the most common reasons for delay. Of respondents, 16% were parents and 4% were pregnant or had pregnant partners. Sixty-one parental leaves were reported, and 67% of parents reported dissatisfaction with leave length. The most frequently self-reported determinant of leave duration was the desire not to extend residency training (74%). Program mean leave length was negatively associated with burnout, measured as a dichotomous outcome (odds ratio = 0.81 [95% confidence interval 0.68-0.98]; P = .02). CONCLUSIONS Many pediatric trainees delay parenthood during residency and are not satisfied with their decision to do so. Pediatric resident parental leave remains short and variable in duration, despite the positive association between longer leaves and overall well-being.
Affiliation(s)
- Weston T Powell
  - Pediatric Pulmonary and Sleep Medicine, Seattle Children's Hospital and University of Washington, Seattle, Washington
- Jayme L Wilder
  - Department of Pediatrics, Boston Children's Hospital and Harvard Medical School, Harvard University, Boston, Massachusetts
- Beth King
  - Association of Pediatric Program Directors, McLean, Virginia
- Alan Schwartz
  - Association of Pediatric Program Directors, McLean, Virginia
  - Departments of Medical Education and Pediatrics, University of Illinois at Chicago, Chicago, Illinois
24
Landau SI, Syvyk S, Wirtalla C, Aarons CB, Butts S, Holmboe E, Kelz RR. Trainee Sex and Accreditation Council for Graduate Medical Education Milestone Assessments During General Surgery Residency. JAMA Surg 2021; 156:925-931. [PMID: 34232269] [DOI: 10.1001/jamasurg.2021.3005]
Abstract
Importance In evaluating the effectiveness of general surgery (GS) training, an unbiased assessment of the progression of residents with attention to individual learner factors is imperative. Objective To evaluate the role of trainee sex in milestone achievement over the course of GS residency using national data from the Accreditation Council for Graduate Medical Education (ACGME). Design, Setting, and Participants This cross-sectional study evaluated female and male GS residents enrolled in ACGME-accredited programs in the US from 2014 to 2018 with reported variation in milestones performance across years in training and representation. Data were analyzed from November 2019 to June 2021. Main Outcomes and Measures Mean reported milestone score at initial and final assessment, and predicted time-to-attainment of equivalent performance by sex. Results Among 4476 GS residents from 250 programs who had milestone assessments at any point in their clinical training, 1735 were female (38.8%). Initially, female and male residents received similar mean (SD) milestone scores (1.95 [0.50] vs 1.94 [0.50]; P = .69). At the final assessment, female trainees received overall lower mean milestone scores than male trainees (4.25 vs 4.31; P < .001). Significantly lower mean milestone scores were reported for female residents at the final assessment for several subcompetencies in both univariate and multivariate analyses, with only medical knowledge 1 (pathophysiology, diagnosis, and initial management) common to both. Multilevel mixed-effects linear modeling demonstrated that female trainees had significantly lower rates of monthly milestone attainment in the subcompetency of medical knowledge 1, which was associated with a significant difference in training time of approximately 1.8 months. 
Conclusions and Relevance Both female and male GS trainees achieved the competency scores necessary to transition to independence after residency as measured by the milestones assessment system. Initially, there were no sex differences in milestone score. By graduation, there were differences in the measured assessment of female and male trainees across several subcompetencies. Careful monitoring for sex bias in the evaluation of trainees and scrutiny of the training process is needed to ensure that surgical residency programs support the educational needs of both female and male trainees.
Affiliation(s)
- Sarah I Landau
  - Department of Surgery, Center for Surgery and Health Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Solomiya Syvyk
  - Department of Surgery, Center for Surgery and Health Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Chris Wirtalla
  - Department of Surgery, Center for Surgery and Health Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Cary B Aarons
  - Department of Surgery, Center for Surgery and Health Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Samantha Butts
  - Department of Obstetrics and Gynecology, Penn State College of Medicine, Hershey, Pennsylvania
- Eric Holmboe
  - Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Rachel R Kelz
  - Department of Surgery, Center for Surgery and Health Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
25
Huang IA, Tillou A, Hines OJ. Sex Differences in Milestones Achievement-An Issue of Learning or Assessment Bias? JAMA Surg 2021; 156:932. [PMID: 34232278] [DOI: 10.1001/jamasurg.2021.3049]
Affiliation(s)
- Ivy A Huang
  - Department of Surgery, University of California Los Angeles David Geffen School of Medicine, Los Angeles
- Areti Tillou
  - Department of Surgery, University of California Los Angeles David Geffen School of Medicine, Los Angeles
- O Joe Hines
  - Department of Surgery, University of California Los Angeles David Geffen School of Medicine, Los Angeles
26
Vu TT, Rose JA, Shabanova V, Kou M, Zuckerbraun NS, Roskind CG, Baghdassarian A, Levasseur K, Leonard K, Langhan ML. Milestones comparisons from residency to pediatric emergency medicine fellowship: Resetting expectations. AEM Educ Train 2021; 5:e10600. [PMID: 34124529] [PMCID: PMC8171776] [DOI: 10.1002/aet2.10600]
Abstract
BACKGROUND Pediatric emergency medicine (PEM) fellowships accept trainees who have completed a residency in either emergency medicine (EM) or pediatrics and have adopted 17 subcompetencies, with an accompanying set of milestones, from these two residency programs. This study aims to examine the changes in milestone scores among common subcompetencies from the end of EM or pediatrics residency to early PEM fellowship and to evaluate the time to reattainment of scores for subcompetencies in which a decline was noted. METHODS This is a national, retrospective cohort study of trainees enrolled in PEM fellowship programs from July 2014 to June 2018. PEM fellowship program directors voluntarily submitted deidentified milestone reports within the study time frame, including end-of-residency reports. Descriptive analyses of milestone scores between the end of residency and PEM fellowship were performed. RESULTS Forty-eight U.S. PEM fellowship programs (65%) provided fellowship milestone data on 638 fellows, 218 (34%) of whom also had end-of-residency milestone scores submitted. Of the 218 fellows eligible for analysis, 210 (96%) had completed a pediatrics residency and eight (4%) had completed an EM residency. Pediatrics-trained fellows had statistically significant decreases in mean milestone scores in all 10 shared subcompetencies. Reattainment of milestone scores across all common subcompetencies for both EM- and pediatrics-trained PEM fellows occurred by the end of fellowship. CONCLUSIONS This study demonstrated declines in milestone scores from the end of primary residency training in pediatrics to early PEM fellowship in shared subcompetencies, which may suggest that performance expectations are reset at the beginning of PEM fellowship. Changes in subcompetency milestone anchors to provide subspecialty-specific context may be needed to more accurately define skills acquisition in the residency-to-fellowship transition.
Affiliation(s)
- Tien T. Vu
  - Children's Hospital Colorado, University of Colorado School of Medicine, Denver, Colorado, USA
- Jerri A. Rose
  - Rainbow Babies & Children's Hospital, Case Western Reserve University School of Medicine, Cleveland, Ohio, USA
- Maybelle Kou
  - Inova Children's Hospital, Virginia Commonwealth University School of Medicine, Falls Church, Virginia, USA
- Noel S. Zuckerbraun
  - Children's Hospital of Pittsburgh, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Aline Baghdassarian
  - Children's Hospital of Richmond, Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA
- Kelly Levasseur
  - Beaumont Children's Hospital, Oakland University William Beaumont School of Medicine, Royal Oak, Michigan, USA
- Kathryn Leonard
  - Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
27
Zuckerbraun NS, Levasseur K, Kou M, Rose JA, Roskind CG, Vu T, Baghdassarian A, Leonard K, Shabanova V, Langhan ML. Gender Differences Among Milestone Assessments in a National Sample of Pediatric Emergency Medicine Fellowship Programs. AEM Educ Train 2021; 5:e10543. [PMID: 34099991] [PMCID: PMC8166301] [DOI: 10.1002/aet2.10543]
Abstract
BACKGROUND Understanding gender gaps in trainee evaluations is critical because these may ultimately determine the duration of training. Currently, no studies describe the influence of gender on the evaluation of pediatric emergency medicine (PEM) fellows. OBJECTIVE The objective of our study was to compare milestone scores of female versus male PEM fellows. METHODS This is a multicenter retrospective cohort study of a national sample of PEM fellows from July 2014 to June 2018. Accreditation Council for Graduate Medical Education (ACGME) subcompetencies are scored on a 5-point scale and span six domains: patient care (PC), medical knowledge, systems-based practice, practice-based learning and improvement, professionalism, and interpersonal and communication skills (ICS). Summative assessments of the 23 PEM subcompetencies are assigned by each program's clinical competency committee and submitted semiannually for each fellow. Program directors voluntarily provided deidentified ACGME milestone reports. Demographics including sex, program region, and type of residency were collected. Descriptive analysis of milestones was performed for each year of fellowship. Multivariate analyses evaluated the difference in scores by sex for each of the subcompetencies. RESULTS Forty-eight geographically diverse programs participated, yielding data for 639 fellows (66% of all PEM fellows nationally); sex was recorded for 604 fellows, of whom 67% were female. When comparing the mean milestone scores in each of the six domains, there were no differences by sex in any year of training. When comparing scores within each of the 23 subcompetencies and correcting the significance level for comparison of multiple milestones, the scores for PC3 and ICS2 were significantly, albeit not meaningfully, higher for females. CONCLUSION In a national sample of PEM fellows, we found no major differences in milestone scores between females and males.
Affiliation(s)
- Noel S. Zuckerbraun
  - UPMC Children's Hospital of Pittsburgh, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Kelly Levasseur
  - Beaumont Children's Hospital, Oakland University William Beaumont School of Medicine, Royal Oak, MI, USA
- Maybelle Kou
  - Inova Children's Hospital, VCU School of Medicine, Falls Church, VA, USA
- Jerri A. Rose
  - Rainbow Babies & Children's Hospital, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Tien Vu
  - Children's Hospital Colorado, University of Colorado School of Medicine, Denver, CO, USA
- Aline Baghdassarian
  - Children's Hospital of Richmond at VCU, Virginia Commonwealth University School of Medicine, Richmond, VA, USA
- Kathryn Leonard
  - Washington University in St. Louis School of Medicine, St. Louis, MO, USA
28
Chan TM, Sebok-Syer SS, Cheung WJ, Pusic M, Stehman C, Gottlieb M. Workplace-based Assessment Data in Emergency Medicine: A Scoping Review of the Literature. AEM Educ Train 2021; 5:e10544. [PMID: 34099992] [PMCID: PMC8166307] [DOI: 10.1002/aet2.10544]
Abstract
OBJECTIVE In the era of competency-based medical education (CBME), accrediting bodies such as the Accreditation Council for Graduate Medical Education and the Royal College of Physicians and Surgeons of Canada are mandating the collection of ever-larger volumes of trainee data. However, few efforts have been made to synthesize the literature on current issues surrounding workplace-based assessment (WBA) data. This scoping review seeks to synthesize the landscape of literature on data collection and utilization for trainees' WBAs in emergency medicine (EM). METHODS The authors conducted a scoping review in the style of Arksey and O'Malley, seeking to synthesize and map literature on collecting, aggregating, and reporting WBA data. The authors extracted, mapped, and synthesized literature that describes, supports, and substantiates effective data collection and utilization in the context of the CBME movement within EM. RESULTS Our literature search retrieved 189 potentially relevant references (after removing duplicates), which were screened to 29 abstracts and papers relevant to collecting, aggregating, and reporting WBAs. Our analysis shows an increasing temporal trend of contributions on these topics, with the majority of the papers (16/29) published in the past 3 years alone. CONCLUSION There is increasing interest in data collection and utilization in the age of CBME. The field, however, is only beginning to emerge, leaving more work that can and should be done in this area.
Affiliation(s)
- Teresa M. Chan
  - Department of Medicine, Division of Emergency Medicine and the Division of Education & Innovation, McMaster University, Hamilton, Ontario, Canada
  - Program for Faculty Development, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
  - McMaster Program for Education Research, Innovation, and Theory, McMaster University, Hamilton, Ontario, Canada
- Warren J. Cheung
  - Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Martin Pusic
  - Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Michael Gottlieb
  - Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
29
Hauer KE, Jurich D, Vandergrift J, Lipner RS, McDonald FS, Yamazaki K, Chick D, McAllister K, Holmboe ES. Gender Differences in Milestone Ratings and Medical Knowledge Examination Scores Among Internal Medicine Residents. Acad Med 2021; 96:876-884. [PMID: 33711841] [DOI: 10.1097/acm.0000000000004040]
Abstract
PURPOSE To examine whether there are group differences in milestone ratings submitted by program directors working with clinical competency committees (CCCs) based on gender for internal medicine (IM) residents and whether women and men rated similarly on milestones perform comparably on subsequent in-training and certification examinations. METHOD This national retrospective study examined end-of-year medical knowledge (MK) and patient care (PC) milestone ratings and IM In-Training Examination (IM-ITE) and IM Certification Examination (IM-CE) scores for 2 cohorts (2014-2017, 2015-2018) of U.S. IM residents at ACGME-accredited programs. It included 20,098/21,440 (94%) residents, with 9,424 women (47%) and 10,674 men (53%). Descriptive statistics and differential prediction techniques using hierarchical linear models were performed. RESULTS For MK milestone ratings in PGY-1, men and women showed no statistical difference at a significance level of .01 (P = .02). In PGY-2 and PGY-3, men received statistically higher average MK ratings than women (P = .002 and P < .001, respectively). In contrast, men and women received equivalent average PC ratings in each PGY (P = .47, P = .72, and P = .80, for PGY-1, PGY-2, and PGY-3, respectively). Men slightly outperformed women with similar MK or PC ratings in PGY-1 and PGY-2 on the IM-ITE by about 1.7 and 1.5 percentage points, respectively, after adjusting for covariates. For PGY-3 ratings, women and men with similar milestone ratings performed equivalently on the IM-CE. CONCLUSIONS Milestone ratings were largely similar for women and men. Generally, women and men with similar MK or PC milestone ratings performed similarly on future examinations. Although there were small differences favoring men on earlier examinations, these differences disappeared by the final training year. It is questionable whether these small differences are educationally or clinically meaningful. 
The findings suggest that program directors and CCCs generated fair, unbiased milestone ratings when assessing residents.
Collapse
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Daniel Jurich
- D. Jurich is manager, psychometrics, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Jonathan Vandergrift
- J. Vandergrift is senior research associate, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Rebecca S Lipner
- R.S. Lipner is senior vice president for assessment and research, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Furman S McDonald
- F.S. McDonald is senior vice president for academic and medical affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Kenji Yamazaki
- K. Yamazaki is senior analyst, milestones research and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Davoren Chick
- D. Chick is senior vice president for medical education, American College of Physicians, Philadelphia, Pennsylvania
- Kevin McAllister
- K. McAllister is assessment officer, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Eric S Holmboe
- E.S. Holmboe is chief research, milestone development, and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
30
Heath JK, Davis JE, Dine CJ, Padmore JS. Faculty Development for Milestones and Clinical Competency Committees. J Grad Med Educ 2021;13:127-131. [PMID: 33936547] [PMCID: PMC8078076] [DOI: 10.4300/jgme-d-20-00851.1]
Affiliation(s)
- Janae K. Heath
- Janae K. Heath, MD, MSCE, is Assistant Professor of Medicine, Division of Pulmonary and Critical Care, University of Pennsylvania
- Jonathan E. Davis
- Jonathan E. Davis, MD, is Professor and Academic Chair, Emergency Medicine, Georgetown University Medical Center, and System Physician Chair for GME, MedStar Health
- C. Jessica Dine
- C. Jessica Dine, MD, MSHP, is Associate Professor of Medicine, Division of Pulmonary and Critical Care, and Associate Dean of Faculty Development, Perelman School of Medicine, University of Pennsylvania
- Jamie S. Padmore
- Jamie S. Padmore, DM, is Professor and Senior Associate Dean for Medical Education, Georgetown University Medical Center, and Vice President, Academic Affairs, and Designated Institutional Official, MedStar Health
31
Hamstra SJ, Yamazaki K. A Validity Framework for Effective Analysis and Interpretation of Milestones Data. J Grad Med Educ 2021;13:75-80. [PMID: 33936537] [PMCID: PMC8078069] [DOI: 10.4300/jgme-d-20-01039.1]
Affiliation(s)
- Stanley J. Hamstra
- At the time of research, Stanley J. Hamstra, PhD, was Vice President, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education (ACGME), and is now Professor, Department of Surgery, University of Toronto, Adjunct Professor, Department of Medical Education, Feinberg School of Medicine, Northwestern University, and Research Consultant, ACGME
- Kenji Yamazaki
- Kenji Yamazaki, PhD, is Senior Analyst, Milestones Research and Evaluation, ACGME
32
Golden BP, Henschen BL, Liss DT, Kiely SL, Didwania AK. Association Between Internal Medicine Residency Applicant Characteristics and Performance on ACGME Milestones During Intern Year. J Grad Med Educ 2021;13:213-222. [PMID: 33897955] [PMCID: PMC8054584] [DOI: 10.4300/jgme-d-20-00603.1]
Abstract
BACKGROUND Residency programs apply varying criteria to the resident selection process. However, it is unclear which applicant characteristics reflect preparedness for residency. OBJECTIVE We determined the applicant characteristics associated with first-year performance in internal medicine residency as assessed by performance on Accreditation Council for Graduate Medical Education (ACGME) Milestones. METHODS We examined the association between applicant characteristics and performance on ACGME Milestones during intern year for individuals entering Northwestern University's internal medicine residency between 2013 and 2018. We used bivariate analysis and a multivariable linear regression model to determine the association between individual factors and Milestone performance. RESULTS Of 203 eligible residents, 198 (98%) were included in the final sample. One hundred fourteen residents (58%) were female, and 116 residents (59%) were White. Mean Step 1 and Step 2 CK scores were 245.5 (SD 12.0) and 258 (SD 10.8) respectively. Step 1 scores, Alpha Omega Alpha membership, medicine clerkship grades, and interview scores were not associated with Milestone performance in the bivariate analysis and were not included in the multivariable model. In the multivariable model, overall clerkship grades, ranking of the medical school, and year entering residency were significantly associated with Milestone performance (P ≤ .04). CONCLUSIONS Most traditional metrics used in residency selection were not associated with early performance on ACGME Milestones during internal medicine residency.
Affiliation(s)
- Blair P. Golden
- At the time of writing, Blair P. Golden, MD, MS, was Chief Resident, Internal Medicine Residency, and Clinical Instructor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine, and currently is Assistant Professor, Division of Hospital Medicine, Department of Medicine, University of Wisconsin School of Medicine and Public Health
- Bruce L. Henschen
- Bruce L. Henschen, MD, MPH, is Assistant Professor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine
- David T. Liss
- David T. Liss, PhD, is Research Associate Professor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine
- Sara L. Kiely
- Sara L. Kiely, MS, is Accreditation Council for Graduate Medical Education Liaison, McGaw Medical Center of Northwestern University Feinberg School of Medicine
- Aashish K. Didwania
- Aashish K. Didwania, MD, is Associate Professor, Division of General Internal Medicine and Geriatrics, Program Director, Internal Medicine Residency, and Vice Chair for Education, Department of Medicine, Northwestern University Feinberg School of Medicine
33
Heath JK, Edgar L, Guralnick S. Assessment of Learning, for Learning: Operationalizing Milestones Data for Program-Level Improvement. J Grad Med Educ 2021;13:120-123. [PMID: 33936545] [PMCID: PMC8078066] [DOI: 10.4300/jgme-d-20-00849.1]
Affiliation(s)
- Janae K. Heath
- Janae K. Heath, MD, MSCE, is Assistant Professor of Medicine, Division of Pulmonary and Critical Care, University of Pennsylvania
- Laura Edgar
- Laura Edgar, EdD, CAE, is Vice President, Milestones Development, Accreditation Council for Graduate Medical Education
- Susan Guralnick
- Susan Guralnick, MD, FAAP, is Associate Dean for Graduate Medical Education, Designated Institutional Official, and Professor of Pediatrics, University of California, Davis
34
Riddell JC, Robins L, Sherbino J, Brown A, Ilgen J. Residents' Perceptions of Effective Features of Educational Podcasts. West J Emerg Med 2020;22:26-32. [PMID: 33439799] [PMCID: PMC7806333] [DOI: 10.5811/westjem.2020.10.49135]
Abstract
INTRODUCTION Educational podcasts are used by emergency medicine (EM) trainees to supplement clinical learning and to foster a sense of connection to broader physician communities. Yet residents report difficulties remembering what they learned from listening, and the features of podcasts that residents find most effective for learning remain poorly understood. Therefore, we sought to explore residents' perceptions of the design features of educational podcasts that they felt most effectively promoted learning. METHODS We used a qualitative approach to explore EM trainees' experiences with educational podcasts, focusing on design features that they found beneficial to their learning. We conducted 16 semi-structured interviews with residents from three institutions from March 2016-August 2017. Interview transcripts were analyzed line-by-line using constant comparison and organized into focused codes, conceptual categories, and then key themes. RESULTS The five canons of classical rhetoric provided a framework for thematically grouping the disparate features of podcasts that residents reported enhanced their learning. Specifically, they reported valuing the following: 1) Invention: clinically relevant material presented from multiple perspectives with explicit learning points; 2) Arrangement: efficient communication; 3) Style: narrative incorporating humor and storytelling; 4) Memory: repetition of key content; and 5) Delivery: short episodes with good production quality. CONCLUSION This exploratory study describes features that residents perceived as effective for learning from educational podcasts and provides foundational guidance for ongoing research into the most effective ways to structure medical education podcasts.
Affiliation(s)
- Jeffrey C. Riddell
- Keck School of Medicine, University of Southern California, Department of Emergency Medicine, Los Angeles, California
- Lynne Robins
- University of Washington, Department of Biomedical Informatics and Medical Education, Seattle, Washington
- Alisha Brown
- Virginia Mason Hospital and Medical Center, Department of Emergency Medicine, Seattle, Washington
- Jonathan Ilgen
- University of Washington, Department of Emergency Medicine, Seattle, Washington
35
Burkhardt JC, Parekh KP, Gallahue FE, London KS, Edens MA, Humbert AJ, Pillow MT, Santen SA, Hopson LR. A Critical Disconnect: Residency Selection Factors Lack Correlation With Intern Performance. J Grad Med Educ 2020;12:696-704. [PMID: 33391593] [PMCID: PMC7771600] [DOI: 10.4300/jgme-d-20-00013.1]
Abstract
BACKGROUND Emergency medicine (EM) residency programs want to employ a selection process that will rank best possible applicants for admission into the specialty. OBJECTIVE We tested if application data are associated with resident performance using EM milestone assessments. We hypothesized that a weak correlation would exist between some selection factors and milestone outcomes. METHODS Utilizing data from 5 collaborating residency programs, a secondary analysis was performed on residents trained from 2013 to 2018. Factors in the model were gender, underrepresented in medicine status, United States Medical Licensing Examination Step 1 and 2 Clinical Knowledge (CK), Alpha Omega Alpha (AOA), grades (EM, medicine, surgery, pediatrics), advanced degree, Standardized Letter of Evaluation global assessment, rank list position, and controls for year assessed and program. The primary outcomes were milestone level achieved in the core competencies. Multivariate linear regression models were fitted for each of the 23 competencies with comparisons made between each model's results. RESULTS For the most part, academic performance in medical school (Step 1, 2 CK, grades, AOA) was not associated with residency clinical performance on milestones. Isolated correlations were found between specific milestones (eg, higher surgical grade increased wound care score), but most had no correlation with residency performance. CONCLUSIONS Our study did not find consistent, meaningful correlations between the most common selection factors and milestones at any point in training. This may indicate our current selection process cannot consistently identify the medical students who are most likely to be high performers as residents.
Affiliation(s)
- John C Burkhardt
- Assistant Professor, Departments of Emergency Medicine and Learning Health Sciences, University of Michigan Medical School
- Kendra P Parekh
- Associate Professor, Department of Emergency Medicine, Vanderbilt University School of Medicine
- Fiona E Gallahue
- Residency Program Director and Associate Professor, Department of Emergency Medicine, University of Washington
- Kory S London
- Associate Residency Program Director, Director of Clinical Operations, Jefferson Methodist ED, Associate Director of Quality Assurance and Practice Improvement, and Assistant Professor, Department of Emergency Medicine, Thomas Jefferson University
- Mary A Edens
- Residency Program Director and Associate Professor, Department of Emergency Medicine, Louisiana State University Health Sciences Center Shreveport
- A J Humbert
- Residency Program Director and Associate Professor of Clinical Emergency Medicine, Indiana University School of Medicine
- M Tyson Pillow
- Vice Chair of Education, and Associate Professor, Department of Emergency Medicine, Baylor College of Medicine
- Sally A Santen
- Senior Associate Dean for Assessment, Evaluation and Scholarship, and Professor, Department of Emergency Medicine, Virginia Commonwealth University School of Medicine
- Laura R Hopson
- Associate Chair of Education, Emergency Medicine Residency Program, and Associate Professor of Emergency Medicine, University of Michigan Medical School
36
O'Connor DM, Dayal A, Arora VM. Differences in Milestone Evaluations of Men and Women: The Devil Is in the Details. Acad Med 2020;95:1465. [PMID: 33002899] [DOI: 10.1097/acm.0000000000003600]
Affiliation(s)
- Daniel M O'Connor
- Third-year resident, Harvard Combined Dermatology Residency Training Program, Boston, Massachusetts; ORCID: http://orcid.org/0000-0001-5464-2031
- Arjun Dayal
- Third-year resident, Dermatology Residency Training Program, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7024-2078
- Vineet M Arora
- Herbert T. Abelson Professor of medicine, assistant dean for scholarship and discovery, and associate chief medical officer-clinical learning environment, University of Chicago, Chicago, Illinois; Twitter: @FutureDocs; ORCID: https://orcid.org/0000-0002-4745-7599
37
Thoma B, Hall AK, Clark K, Meshkat N, Cheung WJ, Desaulniers P, Ffrench C, Meiwald A, Meyers C, Patocka C, Beatty L, Chan TM. Evaluation of a National Competency-Based Assessment System in Emergency Medicine: A CanDREAM Study. J Grad Med Educ 2020;12:425-434. [PMID: 32879682] [PMCID: PMC7450748] [DOI: 10.4300/jgme-d-19-00803.1]
Abstract
BACKGROUND In 2018, Canadian postgraduate emergency medicine (EM) programs began implementing a competency-based medical education (CBME) assessment program. Studies evaluating these programs have focused on broad outcomes using data from national bodies and lack data to support program-specific improvement. OBJECTIVE We evaluated the implementation of a CBME assessment program within and across programs to identify successes and opportunities for improvement at the local and national levels. METHODS Program-level data from the 2018 resident cohort were amalgamated and analyzed. The number of entrustable professional activity (EPA) assessments (overall and for each EPA) and the timing of resident promotion through program stages were compared between programs and to the guidelines provided by the national EM specialty committee. Total EPA observations from each program were correlated with the number of EM and pediatric EM rotations. RESULTS Data from 15 of 17 (88%) programs containing 9842 EPA observations from 68 of 77 (88%) EM residents in the 2018 cohort were analyzed. Average numbers of EPAs observed per resident in each program varied from 92.5 to 229.6, correlating with the number of blocks spent on EM and pediatric EM (r = 0.83, P < .001). Relative to the specialty committee's guidelines, residents were promoted later than expected (eg, one-third of residents had a 2-month delay to promotion from the first to second stage) and with fewer EPA observations than suggested. CONCLUSIONS There was demonstrable variation in EPA-based assessment numbers and promotion timelines between programs and with national guidelines.
38
Klein R, Ufere NN, Rao SR, Koch J, Volerman A, Snyder ED, Schaeffer S, Thompson V, Warner AS, Julian KA, Palamara K. Association of Gender With Learner Assessment in Graduate Medical Education. JAMA Netw Open 2020;3:e2010888. [PMID: 32672831] [PMCID: PMC7366188] [DOI: 10.1001/jamanetworkopen.2020.10888]
Abstract
IMPORTANCE Gender bias may affect assessment in competency-based medical education. OBJECTIVE To evaluate the association of gender with assessment of internal medicine residents. DESIGN, SETTING, AND PARTICIPANTS This multisite, retrospective, cross-sectional study included 6 internal medicine residency programs in the United States. Data were collected from July 1, 2016, to June 30, 2017, and analyzed from June 7 to November 6, 2019. EXPOSURES Faculty assessments of resident performance during general medicine inpatient rotations. MAIN OUTCOMES AND MEASURES Standardized scores were calculated based on rating distributions for the Accreditation Council for Graduate Medical Education's core competencies and internal medicine Milestones at each site. Standardized scores are expressed as SDs from the mean. The interaction of gender and postgraduate year (PGY) with standardized scores was assessed, adjusting for site, time of year, resident In-Training Examination percentile rank, and faculty rank and specialty. RESULTS Data included 3600 evaluations for 703 residents (387 male [55.0%]) by 605 faculty (318 male [52.6%]). Interaction between resident gender and PGY was significant in 6 core competencies. In PGY2, female residents scored significantly higher than male residents in 4 of 6 competencies, including patient care (mean standardized score [SE], 0.10 [0.04] vs 0.22 [0.05]; P = .04), systems-based practice (mean standardized score [SE], -0.06 [0.05] vs 0.13 [0.05]; P = .003), professionalism (mean standardized score [SE], -0.04 [0.06] vs 0.21 [0.06]; P = .001), and interpersonal and communication skills (mean standardized score [SE], 0.06 [0.05] vs 0.32 [0.06]; P < .001). 
In PGY3, male residents scored significantly higher than female residents in 5 of 6 competencies, including patient care (mean standardized score [SE], 0.47 [0.05] vs 0.32 [0.05]; P = .03), medical knowledge (mean standardized score [SE], 0.47 [0.05] vs 0.24 [0.06]; P = .003), systems-based practice (mean standardized score [SE], 0.30 [0.05] vs 0.12 [0.06]; P = .02), practice-based learning (mean standardized score [SE], 0.39 [0.05] vs 0.16 [0.06]; P = .004), and professionalism (mean standardized score [SE], 0.35 [0.05] vs 0.18 [0.06]; P = .03). There was a significant increase in male residents' competency scores between PGY2 and PGY3 (range of difference in mean adjusted standardized scores between PGY2 and PGY3, 0.208-0.391; P ≤ .002) that was not seen in female residents' scores (range of difference in mean adjusted standardized scores between PGY2 and PGY3, -0.117 to 0.101; P ≥ .14). There was a significant increase in male residents' scores between PGY2 and PGY3 cohorts in 6 competencies with female faculty and in 4 competencies with male faculty. There was no significant change in female residents' competency scores between PGY2 and PGY3 cohorts with male or female faculty. Interaction between faculty-resident gender dyad and PGY was significant in the patient care competency (β estimate [SE] for female vs male dyad in PGY1 vs PGY3, 0.184 [0.158]; β estimate [SE] for female vs male dyad in PGY2 vs PGY3, 0.457 [0.181]; P = .04). CONCLUSIONS AND RELEVANCE In this study, resident gender was associated with differences in faculty assessments of resident performance, and differences were linked to PGY. In contrast to male residents' scores, female residents' scores displayed a peak-and-plateau pattern whereby assessment scores peaked in PGY2. Notably, the peak-and-plateau pattern was seen in assessments by male and female faculty. Further study of factors that influence gender-based differences in assessment is needed.
Affiliation(s)
- Robin Klein
- Division of General Internal Medicine and Geriatrics, Department of Internal Medicine, Emory University School of Medicine, Atlanta, Georgia
- Nneka N. Ufere
- Division of Gastroenterology, Department of Medicine, Massachusetts General Hospital, Boston
- Sowmya R. Rao
- Massachusetts General Hospital Biostatistics Center, Boston, Massachusetts
- Department of Global Health, Boston University School of Public Health, Boston, Massachusetts
- Jennifer Koch
- Department of Medicine, University of Louisville, Louisville, Kentucky
- Anna Volerman
- Department of Medicine, University of Chicago, Chicago, Illinois
- Department of Pediatrics, University of Chicago, Chicago, Illinois
- Erin D. Snyder
- Division of General Internal Medicine, Department of Medicine, University of Alabama at Birmingham School of Medicine
- Sarah Schaeffer
- Division of Hospital Medicine, Department of Medicine, University of California, San Francisco
- Vanessa Thompson
- Division of General Internal Medicine, Department of Medicine, University of California, San Francisco
- Ana Sofia Warner
- Division of General Internal Medicine, Department of Medicine, Massachusetts General Hospital, Boston
- Katherine A. Julian
- Division of General Internal Medicine, Department of Medicine, University of California, San Francisco
- Kerri Palamara
- Division of General Internal Medicine, Department of Medicine, Massachusetts General Hospital, Boston
39
Abstract
Milestones specific to orthopaedic surgical training document individual resident progress through skill development in multiple dimensions. Residents increasingly interact with and are assessed by surgeons in both academic and private practice environments. Milestones describe the skills that support competence. One of the primary goals of milestones is to provide continuous data for educational quality improvement of residency programs. They provide a dialogue between surgeons who supervise residents or fellows and the program's Clinical Competency Committee throughout a resident's education. The orthopaedic milestones were developed jointly by the Accreditation Council for Graduate Medical Education and the American Board of Orthopaedic Surgery. The working team was designed with broad representation within the specialty. The milestones were introduced to orthopaedic residencies in 2013. Orthopaedics is a 5-year training program; the first comprehensive longitudinal data set is now available for study. This summary provides historical perspective on the development of the milestones, state of current milestone implementation, attempts to establish validity, challenges with the milestones, and the development of next-generation assessment tools.