1. Nowicki KD, Balboni IM, Cidon MJ, Dhanrajani AD, Driest KD, Fair DC, Imundo LF, Mehta JJ, Tarvin SE, Walters HM, Woolnough LC, Edgar LC, Curran ML. Assessing Pediatric Rheumatology Fellow Competence in the Milestone Era: Past, Present, and Future. Arthritis Care Res (Hoboken) 2024; 76:600-607. [PMID: 38108087 DOI: 10.1002/acr.25276]
Abstract
Starting in 2015, pediatric rheumatology fellowship training programs were required by the Accreditation Council for Graduate Medical Education to assess fellows' academic performance within 21 subcompetencies falling under six competency domains. Each subcompetency had four or five milestone levels describing developmental progression of knowledge and skill acquisition. Milestones were standardized across all pediatric subspecialties. As part of the Milestones 2.0 revision project, the Accreditation Council for Graduate Medical Education convened a workgroup in 2022 to write pediatric rheumatology-specific milestones. Using adult rheumatology's Milestones 2.0 as a starting point, the workgroup revised the patient care and medical knowledge subcompetencies and milestones to reflect requirements and nuances of pediatric rheumatology care. Milestones within the four remaining competency domains (professionalism, interpersonal and communication skills, practice-based learning and improvement, and systems-based practice) were standardized across all pediatric subspecialties and therefore were not revised. The workgroup created a supplemental guide with explanations of the intent of each subcompetency, 25 in total, and examples for each milestone level. The new milestones are an important step forward for competency-based medical education in pediatric rheumatology. However, challenges remain. Milestone level assignment is meant to be informed by the results of multiple assessment methods, but because pediatric rheumatology-specific assessment tools are lacking, clinical competency committees typically determine trainee milestone levels without such collated results as the foundation of their assessments. Although further advances in pediatric rheumatology fellowship competency-based medical education are needed, Milestones 2.0 importantly establishes the first pediatric-specific rheumatology Milestones to assess fellow performance during training and help measure readiness for independent practice.
Affiliation(s)
- Katherine D Nowicki
- University of Colorado, Denver, and Children's Hospital Colorado, Aurora, Colorado
- Michal J Cidon
- Children's Hospital Los Angeles, Los Angeles, California
- Anita D Dhanrajani
- The University of Mississippi Medical Center, Jackson, Mississippi, Tulane University School of Medicine, New Orleans, and Children's Hospital of New Orleans, New Orleans, Louisiana
- Kyla D Driest
- Nationwide Children's Hospital and The Ohio State University, Columbus
- Lisa F Imundo
- Columbia University Medical Center, New York City, New York
- Jay J Mehta
- Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Stacey E Tarvin
- Riley Hospital for Children at Indiana University, Indianapolis
- Heather M Walters
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, and Cohen Children's Medical Center of New York, New Hyde Park
- Laura C Edgar
- Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Megan L Curran
- University of Colorado, Denver, and Children's Hospital Colorado, Aurora, Colorado
2. Perez S, Schwartz A, Hauer KE, Karani R, Hirshfield LE, McNamara M, Henry D, Lupton KL, Woods M, Teherani A. Developing Evidence for Equitable Assessment Characteristics Based on Clinical Learner Preferences Using Discrete Choice Experiments. Acad Med 2023; 98:S108-S115. [PMID: 37983403 DOI: 10.1097/acm.0000000000005360]
Abstract
PURPOSE Medical education is only beginning to explore the factors that contribute to equitable assessment in clinical settings. Increasing knowledge about equitable assessment ensures a quality medical education experience that produces an excellent, diverse physician workforce equipped to address the health care disparities facing patients and communities. Through the lens of the Anti-Deficit Achievement framework, the authors aimed to obtain evidence for a model for equitable assessment in clinical training. METHOD A discrete choice experiment approach was used, with an instrument presenting 6 attributes, each at 2 levels, to reveal learner preferences for the inclusion of each attribute in equitable assessment. Fourth-year medical students and senior residents (N = 306) in medicine, pediatrics, and surgery at 9 institutions across the United States, who self-identified as underrepresented in medicine (UIM) or not underrepresented in medicine (non-UIM), completed the instrument. A mixed-effects logit model was used to determine the attributes learners valued most. RESULTS Participants valued the inclusion of all assessment attributes provided except for peer comparison. The most valued attribute of an equitable assessment was how learner identity, background, and trajectory were appreciated by clinical supervisors. The next most valued attributes were assessment of growth, supervisor bias training, narrative assessments, and assessment of the learner's patient care, with participants willing to trade off any of the attributes to get several others. There were no significant differences between UIM and non-UIM learners in the value placed on assessment attributes. Residents placed greater value than medical students on clinical supervisors valuing learner identity, background, and trajectory and on clinical supervisor bias training. CONCLUSIONS This study offers support for the components of an antideficit-focused model for equity in assessment and informs efforts to promote UIM learner success and guide equity, diversity, and inclusion initiatives in medical education.
Affiliation(s)
- Sandra Perez
- S. Perez is a resident, Department of Pathology, University of California, San Francisco, School of Medicine, San Francisco, California
- Alan Schwartz
- A. Schwartz is the Michael Reese Endowed Professor of Medical Education, Department of Medical Education, and research professor, Department of Pediatrics, University of Illinois at Chicago, Chicago, Illinois, and director, Association of Pediatric Program Directors Longitudinal Educational Assessment Research Network (APPD LEARN), McLean, Virginia; ORCID: http://orcid.org/0000-0003-3809-6637
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, and associate dean for competency assessment and professional standards, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Reena Karani
- R. Karani is professor, Departments of Medicine, Medical Education, and Geriatrics and Palliative Medicine, and director, Institute for Medical Education, Icahn School of Medicine at Mount Sinai, New York, New York
- Laura E Hirshfield
- L.E. Hirshfield is the Dr. Georges Bordage Medical Education Faculty Scholar, associate professor, PhD program codirector, and associate director of graduate studies, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0894-2994
- Margaret McNamara
- M. McNamara is professor, Department of Pediatrics, and pediatric residency program director, University of California, San Francisco, School of Medicine, San Francisco, California
- Duncan Henry
- D. Henry is associate professor, Department of Pediatrics, University of California, San Francisco, School of Medicine, San Francisco, California
- Katherine L Lupton
- K.L. Lupton is professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Majka Woods
- M. Woods holds the Dibrell Family Professorship in the Art of Medicine, and is assistant professor, Department of Surgery, and vice dean for academic affairs, John Sealy School of Medicine at the University of Texas Medical Branch, Galveston, Texas
- Arianne Teherani
- A. Teherani is professor, Department of Medicine, education scientist, Center for Faculty Educators, director of program evaluation and education continuous quality improvement, and founding codirector, University of California Center for Climate Health and Equity, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0003-2936-9832
3. Reed S, Mink R, Stanek J, Tyrrell L, Li STT. Are Final Residency Milestones Correlated With Early Fellowship Performance in Pediatrics? Acad Med 2023; 98:1069-1075. [PMID: 36972134 DOI: 10.1097/acm.0000000000005215]
Abstract
PURPOSE Milestones have been used to assess trainees across graduate medical education programs and reflect a developmental continuum from novice to expert. This study examined whether residency milestones are correlated with initial fellowship milestone performance in pediatrics. METHOD This retrospective cohort study used descriptive statistics to assess milestone scores from pediatric fellows who began fellowship training between July 2017 and July 2020. Milestone scores were obtained at the end of residency (R), middle of the first fellowship year (F1), and end of the first fellowship year (F2). RESULTS Data represent 3,592 unique trainees. High composite R scores, much lower F1 scores, and slightly higher F2 scores were found over time for all pediatric subspecialties. R scores were positively correlated with F1 scores (Spearman ρ = 0.12, P < .001) and F2 scores (Spearman ρ = 0.15, P < .001). Although scores were negligibly different when trainees graduated from residency, there were differences in F1 and F2 scores among fellows in different specialties. Those who trained at the same institution for residency and fellowship had higher composite milestone F1 and F2 scores compared with those who trained at different institutions (P < .001). The strongest associations were between R and F2 scores for the professionalism and communication milestones, although associations were still relatively weak overall (Spearman ρ = 0.13-0.20). CONCLUSIONS This study found high R scores and low F1 and F2 scores across all shared milestones, with weak association of scores within competencies, indicating that milestones are context dependent. Although the professionalism and communication milestones had a higher correlation than the other competencies, the association was still weak. Residency milestones may be useful for individualized education in early fellowship, but fellowship programs should be cautious about overreliance on R scores given their weak correlation with F1 and F2 scores.
Affiliation(s)
- Suzanne Reed
- S. Reed is associate professor and pediatric residency associate program director, Department of Pediatrics, The Ohio State University College of Medicine/Nationwide Children's Hospital, Columbus, Ohio
- Richard Mink
- R. Mink is professor, University of California, Los Angeles, Los Angeles, California, and director, Association of Pediatric Program Directors Subspecialty Pediatrics Investigator Network, McLean, Virginia
- Joseph Stanek
- J. Stanek is a biostatistician, Division of Hematology, Oncology, and Bone Marrow Transplantation and the Biostatistics Resource, Nationwide Children's Hospital, Columbus, Ohio
- Laura Tyrrell
- L. Tyrrell is a pediatric hematologist and medical education specialist, Indiana Hemophilia & Thrombosis Center, Indianapolis, Indiana
- Su-Ting T Li
- S.-T.T. Li is professor, vice chair of education, and residency program director, Department of Pediatrics, University of California, Davis, Sacramento, California
4. Kendrick DE, Thelen AE, Chen X, Gupta T, Yamazaki K, Krumm AE, Bandeh-Ahmadi H, Clark M, Luckoscki J, Fan Z, Wnuk GM, Ryan AM, Mukherjee B, Hamstra SJ, Dimick JB, Holmboe ES, George BC. Association of Surgical Resident Competency Ratings With Patient Outcomes. Acad Med 2023; 98:813-820. [PMID: 36724304 DOI: 10.1097/acm.0000000000005157]
Abstract
PURPOSE Accurate assessment of clinical performance is essential to ensure graduating residents are competent for unsupervised practice. The Accreditation Council for Graduate Medical Education milestones framework is the most widely used competency-based framework in the United States. However, the relationship between residents' milestone competency ratings and their subsequent early career clinical outcomes has not been established. It is therefore important to examine the association between milestone competency ratings of U.S. general surgery residents and those surgeons' patient outcomes in early career practice. METHOD A retrospective, cross-sectional study was conducted using a sample of national Medicare claims for 23 common, high-risk inpatient general surgical procedures performed between July 1, 2015, and November 30, 2018 (n = 12,400 cases) by nonfellowship-trained U.S. general surgeons. Milestone ratings collected during those surgeons' last year of residency (n = 701 residents) were compared with their risk-adjusted rates of mortality, any complication, or severe complication within 30 days of the index operation during their first 2 years of practice. RESULTS There were no associations between mean milestone competency ratings of graduating general surgery residents and their subsequent early career patient outcomes, including any complication (23% proficient vs 22% not yet proficient; relative risk [RR], 0.97 [95% CI, 0.88-1.08]); severe complication (9% vs 9%, respectively; RR, 1.01 [95% CI, 0.86-1.19]); and mortality (5% vs 5%; RR, 1.07 [95% CI, 0.88-1.30]). Secondary analyses yielded no associations between patient outcomes and milestone ratings specific to technical performance, or between patient outcomes and composites of operative performance, professionalism, or leadership milestone ratings (P ranged from .32 to .97). CONCLUSIONS Milestone ratings of graduating general surgery residents were not associated with the patient outcomes of those surgeons when they performed common, higher-risk procedures in a Medicare population. Efforts to improve how milestone ratings are generated might strengthen their association with early career outcomes.
Affiliation(s)
- Daniel E Kendrick
- D.E. Kendrick is assistant professor, Department of Surgery, University of Minnesota, Minneapolis, Minnesota
- Angela E Thelen
- A.E. Thelen is research fellow, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Xilin Chen
- X. Chen is research analyst, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Tanvi Gupta
- T. Gupta is research analyst, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Kenji Yamazaki
- K. Yamazaki is senior data analyst, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Andrew E Krumm
- A.E. Krumm is assistant professor, Department of Learning Health Sciences, University of Michigan, Ann Arbor, Michigan
- Hoda Bandeh-Ahmadi
- H. Bandeh-Ahmadi is project manager, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Michael Clark
- M. Clark is a biostatistician, Consulting for Statistics, Computing, and Analytics Research, University of Michigan, Ann Arbor, Michigan
- John Luckoscki
- J. Luckoscki is research fellow, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Zhaohui Fan
- Z. Fan is research analyst, Center for Healthcare Outcomes and Policy, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Greg M Wnuk
- G.M. Wnuk is program manager, Center for Surgical Training and Research, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Andrew M Ryan
- A.M. Ryan is professor, Department of Health Management and Policy, School of Public Health, University of Michigan, Ann Arbor, Michigan
- Bhramar Mukherjee
- B. Mukherjee is professor and chair, Division of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, Michigan
- Stanley J Hamstra
- S.J. Hamstra is professor, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Justin B Dimick
- J.B. Dimick is professor and chair, Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Eric S Holmboe
- E.S. Holmboe is chief research, milestone development, and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Brian C George
- B.C. George is director, Center for Surgical Training and Research, and assistant professor, Department of Surgery, University of Michigan, Ann Arbor, Michigan
5. Moeller J, Salas RME. Neurology Education in 2035: The Neurology Future Forecasting Series. Neurology 2023; 100:579-586. [PMID: 36564205 PMCID: PMC10033166 DOI: 10.1212/wnl.0000000000201669]
Abstract
In the past decade, there have been dramatic changes in all aspects of neurologic care, and along with this, neurology education has transformed. These changes have affected all aspects of education across the educational continuum, including learners, teachers, educators, content, delivery methods, assessments, and outcomes. Health systems science, health humanities, diversity, equity, and inclusion, and health disparities are becoming core components of neurology curricula and, in the future, will be integrated into every aspect of our educational mission. The ways in which material is taught and learned have been influenced by technologic innovations and a growing understanding of the science of learning. We forecast that this trend will continue, with learners choosing from an array of electronic resources to engage with fundamental topics, allowing front-line clinical teachers to spend more time supporting critical reasoning and teaching students how to learn. There has been a growing differentiation of educational roles (i.e., teachers, educators, and scholars). We forecast that these roles will become more distinct, each with an individualized pattern of support and expectations. Assessment has become more aligned with the work of the learners, and there are growing calls to focus more on the impact of educational programs on patient care. We forecast that there will be an increased emphasis on educational outcomes and public accountability for training programs. In this article, we reflect on the history of medical education in neurology and explore the current state to forecast the future of neurology education and discuss ways in which we can prepare.
Affiliation(s)
- Jeremy Moeller
- Department of Neurology, Yale University, New Haven, CT
- Rachel Marie E Salas
- Department of Neurology and Neurosurgery, Johns Hopkins School of Medicine, Baltimore, MD
6. Tyrrell LJ, Stanek JR, Stewart C, Reed S. Accreditation Council for Graduate Medical Education Milestone Scores in Pediatrics: Pilot Study Exploring the Relationship Between Residency and Early Fellowship Scores. Acad Pediatr 2023; 23:178-184. [PMID: 35934278 DOI: 10.1016/j.acap.2022.06.015]
Abstract
BACKGROUND The Accreditation Council for Graduate Medical Education requires Milestone-based assessments of residents and fellows. In pediatrics, 11 subcompetencies are common to both residency and subspecialty fellowship training. It is unknown whether Milestone scores achieved during residency are related to Milestone scores achieved in early fellowship. OBJECTIVE To assess the relationship between final residency Milestone scores and first-year fellowship Milestone scores in the 11 common subcompetencies (CSCs) across pediatric subspecialties. METHODS This was a retrospective single-institution cohort study of pediatric fellows beginning fellowship training between July 2016 and July 2019. De-identified Milestone score sets for final residency scores (R), mid-year first-year fellowship scores (F1), and final first-year fellowship scores (F2) were collected. Spearman correlation and regression analyses were used to assess score relationships. RESULTS Data for 85 of 98 eligible fellows were collected. Consistently, the F1 scores were lowest and the R scores were highest. There was a weak positive correlation between the composite R scores and the composite F1 scores. There was a weak positive correlation between residency and fellowship scores for 6 CSCs and no significant correlation for the remaining 5. CONCLUSION For the 11 pediatric CSCs, final residency Milestone scores are consistently higher than and only weakly associated with early fellowship Milestone scores. Residency scores may therefore be of limited use to fellowship program directors in guiding individualized education for early fellows. This study provides groundwork for additional study of Milestone relationships and may help inform the next iteration of the pediatric subspecialty Milestones.
Affiliation(s)
- Laura J Tyrrell
- Division of Hematology/Oncology/BMT, Nationwide Children's Hospital, Columbus, Ohio
- Joseph R Stanek
- Division of Hematology/Oncology/BMT and the Biostatistics Resource, Nationwide Children's Hospital, Columbus, Ohio
- Claire Stewart
- Division of Pediatric Critical Care Medicine, Nationwide Children's Hospital, Columbus, Ohio
- Suzanne Reed
- Division of Hematology/Oncology/BMT, Nationwide Children's Hospital, Columbus, Ohio
7. Tavares W, Hodwitz K, Rowland P, Ng S, Kuper A, Friesen F, Shwetz K, Brydges R. Implicit and inferred: on the philosophical positions informing assessment science. Adv Health Sci Educ Theory Pract 2021; 26:1597-1623. [PMID: 34370126 DOI: 10.1007/s10459-021-10063-w]
Abstract
Assessment practices have been increasingly informed by a range of philosophical positions. While generally beneficial, the addition of options can lead to misalignment in the philosophical assumptions associated with different features of assessment (e.g., the nature of constructs and competence, ways of assessing, validation approaches). Such incompatibility can threaten the quality and defensibility of researchers' claims, especially when left implicit. We investigated how authors state and use their philosophical positions when designing and reporting on performance-based assessments (PBA) of intrinsic roles, as well as the (in)compatibility of assumptions across assessment features. Using a representative sample of studies examining PBA of intrinsic roles, we used qualitative content analysis to extract data on how authors enacted their philosophical positions across three key assessment features: (1) construct conceptualizations, (2) assessment activities, and (3) validation methods. We also examined patterns in philosophical positioning across features and studies. In reviewing 32 papers from established peer-reviewed journals, we found that (a) authors rarely reported their philosophical positions, meaning underlying assumptions could only be inferred; (b) authors approached features of assessment in variable ways that could be informed by or associated with different philosophical assumptions; and (c) we experienced uncertainty in determining the (in)compatibility of philosophical assumptions across features. Authors' philosophical positions were often vague or absent in the selected contemporary assessment literature. Leaving such details implicit may lead to misinterpretation by knowledge users wishing to implement, build on, or evaluate the work. As such, assessing the quality and defensibility of claims may increasingly depend more on who is interpreting than on what is being interpreted.
Affiliation(s)
- Walter Tavares
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Institute for Health Policy, Management and Evaluation, University of Toronto/University Health Network, Toronto, Ontario, Canada.
- Kathryn Hodwitz
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Ontario, Canada
- Paula Rowland
- The Wilson Centre, Temerty Faculty of Medicine, Department of Occupational Therapy and Occupational Science, University of Toronto/University Health Network, Toronto, Ontario, Canada
- Stella Ng
- The Wilson Centre, Temerty Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto, and Centre for Faculty Development, Unity Health Toronto, Toronto, Ontario, Canada
- Ayelet Kuper
- The Wilson Centre, University Health Network/University of Toronto, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Farah Friesen
- Centre for Faculty Development, Temerty Faculty of Medicine, University of Toronto at Unity Health Toronto, Toronto, Ontario, Canada
- Katherine Shwetz
- Department of English, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges
- The Wilson Centre, Temerty Faculty of Medicine, Department of Medicine, Unity Health Toronto, University of Toronto, Toronto, Ontario, Canada
8. Heath JK, Wang T, Santhosh L, Denson JL, Holmboe E, Yamazaki K, Clay AS, Carlos WG. Longitudinal Milestone Assessment Extending Through Subspecialty Training: The Relationship Between ACGME Internal Medicine Residency Milestones and Subsequent Pulmonary and Critical Care Fellowship Milestones. Acad Med 2021; 96:1603-1608. [PMID: 34010863 DOI: 10.1097/acm.0000000000004165]
Abstract
PURPOSE Accreditation Council for Graduate Medical Education (ACGME) milestones were implemented across medical subspecialties in 2015. Although milestones were proposed as a longitudinal assessment tool potentially providing opportunities for early implementation of individualized fellowship learning plans, the association of subspecialty fellowship ratings with prior residency ratings remains unclear. This study aimed to assess the relationship between internal medicine (IM) residency milestones and pulmonary and critical care medicine (PCCM) fellowship milestones. METHOD A multicenter retrospective cohort analysis was conducted for all PCCM trainees in ACGME-accredited PCCM fellowship programs, 2017-2018, who had complete prior IM milestone ratings from 2014 to 2017. Only professionalism and interpersonal and communication skills (ICS) were included based on shared anchors between IM and PCCM milestones. Using a generalized estimating equations model, the association of PCCM milestones ≤ 2.5 during the first fellowship year with corresponding IM subcompetencies was assessed at each time point, nested by program. Statistical significance was determined using logistic regression. RESULTS The study included 354 unique PCCM fellows. For ICS and professionalism subcompetencies, fellows with higher IM ratings were less likely to obtain PCCM ratings ≤ 2.5 during the first fellowship year. Each ICS subcompetency was significantly associated with future lapses in fellowship (ICS01: β = -0.67, P = .003; ICS02: β = -0.70, P = .001; ICS03: β = -0.60, P = .004) at various residency time points. Similar associations were noted for PROF03 (β = -0.57, P = .007). CONCLUSIONS Findings demonstrated an association between IM milestone ratings and low milestone ratings during PCCM fellowship. IM trainees with low ratings in several professionalism and ICS subcompetencies were more likely to be rated ≤ 2.5 during the first PCCM fellowship year. This highlights a potential use of longitudinal milestones to target educational gaps at the beginning of PCCM fellowship.
Affiliation(s)
- Janae K Heath
- J.K. Heath is assistant professor, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-0533-3088
- Tisha Wang
- T. Wang is associate professor, Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California
- Lekshmi Santhosh
- L. Santhosh is assistant professor, Department of Medicine, University of California, San Francisco, San Francisco, California
- Joshua L Denson
- J.L. Denson is assistant professor, Section of Pulmonary, Critical Care, and Environmental Medicine, Tulane University School of Medicine, New Orleans, Louisiana; ORCID: https://orcid.org/0000-0002-8654-7765
- Eric Holmboe
- E. Holmboe is adjunct professor, Department of Medicine, Yale University, New Haven, Connecticut, and Chief Research, Milestone Development, and Evaluation Officer for the Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Kenji Yamazaki
- K. Yamazaki is senior analyst, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Alison S Clay
- A.S. Clay is assistant professor, Department of Medicine, Duke University, Durham, North Carolina
- W Graham Carlos
- W.G. Carlos is associate professor, Department of Medicine, Indiana University, Indianapolis, Indiana
9. Van Melle E, Hall AK, Schumacher DJ, Kinnear B, Gruppen L, Thoma B, Caretta-Weyer H, Cooke LJ, Frank JR. Capturing outcomes of competency-based medical education: The call and the challenge. Med Teach 2021; 43:794-800. [PMID: 34121596 DOI: 10.1080/0142159x.2021.1925640]
Abstract
There is an urgent need to capture the outcomes of the ongoing global implementation of competency-based medical education (CBME). However, the measurement of downstream outcomes following educational innovations such as CBME is fraught with challenges stemming from the complexities of medical training, the breadth and variability of inputs, and the difficulties attributing outcomes to specific educational elements. In this article, we present a logic model for CBME to conceptualize an impact pathway relating to CBME and facilitate outcomes evaluation. We further identify six strategies to mitigate the challenges of outcomes measurement: (1) clearly identify the outcome of interest, (2) distinguish between outputs and outcomes, (3) carefully consider attribution versus contribution, (4) connect outcomes to the fidelity and integrity of implementation, (5) pay attention to unanticipated outcomes, and (6) embrace methodological pluralism. Embracing these challenges, we argue that careful and thoughtful evaluation strategies will move us forward in answering the all-important question: Are the desired outcomes of CBME being achieved?
Affiliation(s)
- Elaine Van Melle
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Family Medicine, Queen's University, Kingston, Canada
- Andrew K Hall
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, Queen's University, Kingston, Canada
- Daniel J Schumacher
- Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Larry Gruppen
- Department of Learning Health Sciences, University of Michigan, Ann Arbor, MI, USA
- Brent Thoma
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Saskatchewan, Saskatoon, Canada
- Holly Caretta-Weyer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Lara J Cooke
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Clinical Neurosciences, Division of Neurology, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, Ottawa, Canada
- Department of Emergency Medicine, University of Ottawa, Ottawa, Canada
10
Tanaka P, Park YS, Roby J, Ahn K, Kakazu C, Udani A, Macario A. Milestone Learning Trajectories of Residents at Five Anesthesiology Residency Programs. Teaching and Learning in Medicine 2021; 33:304-313. [PMID: 33327788 DOI: 10.1080/10401334.2020.1842210]
Abstract
Construct: Every six months, residency programs report their trainees' Milestones Level achievement to the Accreditation Council for Graduate Medical Education (ACGME). Milestones should enable the learner and training program to know an individual's competency development trajectory. Background: Milestone Level ratings for residents grouped by specialty (e.g., Internal Medicine and Emergency Medicine) show that, in aggregate, senior residents receive higher ratings than junior residents. Anesthesiology Milestones, as assessed by both residents and faculty, also have a positive linear relationship with postgraduate year. However, these studies have been cross-sectional rather than longitudinal cohort studies, and studies of how individual residents progress during the course of training are needed. Longitudinal data analysis of performance assessment trajectories addresses a relevant validity question for the Next Accreditation System. We explored the application of learning analytics to longitudinal Milestones data to: 1) measure the frequency of "straight-lining"; 2) assess the proportion of residents that reach "Level 4" (ready for unsupervised practice) by graduation for each subcompetency; 3) identify variability among programs and individual residents in their baseline Milestone Level and rates of improvement; and 4) determine how hypothetically constructed growth curve models fit to the Milestones data reported to ACGME. Approach: De-identified Milestone Level ratings in each of the 25 subcompetencies submitted semiannually to the ACGME from July 1, 2014 to June 30, 2017 were retrospectively analyzed for graduating residents (n = 67) from a convenience sample of five anesthesia residency programs. The data reflected longitudinal resident Milestone progression from the beginning of the first year to the end of the third and final year of clinical anesthesiology training. 
The frequency of straight-lining, defined as a resident receiving the exact same Milestone Level rating for all 25 subcompetencies on a given 6-month report, was calculated for each program. Every resident was evaluated six times during training, with the possibility of six straight-lined ratings. Findings: The number of residents in each program ranged from 5 to 21 (median 13). Mean Milestone Level ratings for subcompetencies were significantly different at each six-month assessment (p < 0.001). The frequency of straight-lining varied significantly by program, from 9% to 57% (median 22%). Depending on the program, 53%-100% (median 86%) of residents reached the graduation target of Level 4 or higher in all 25 anesthesiology subcompetencies. Nine to 18% of residents did not achieve a Level 4 rating for at least one subcompetency at any time during their residency. Across programs, significant variability was found in first-year clinical anesthesia training Milestone Levels, as well as in the rate of improvement for five of the six core competencies. Conclusions: Anesthesia residents' Milestone Level growth trajectories as reported to the ACGME vary significantly across individual residents as well as by program. The present study offers a case example that raises concerns regarding the validity of the Next Accreditation System as it is currently used by some residency programs.
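The straight-lining metric defined in this abstract reduces to a simple computation. A minimal sketch, assuming a hypothetical data layout (one list of 25 ratings per semiannual report); this is not the study's code:

```python
# Illustration of the "straight-lining" metric: a report is straight-lined
# when all 25 subcompetency ratings on it are identical.
from typing import List

def is_straight_lined(report: List[float]) -> bool:
    """True if every subcompetency on this report received the same rating."""
    return len(set(report)) == 1

def straight_lining_frequency(reports: List[List[float]]) -> float:
    """Fraction of a resident's (or program's) reports that are straight-lined."""
    if not reports:
        return 0.0
    return sum(is_straight_lined(r) for r in reports) / len(reports)

# Example: one resident's six semiannual reports of 25 ratings each.
reports = [[2.0] * 25,          # straight-lined
           [2.5] * 24 + [3.0],  # not straight-lined
           [3.0] * 25,          # straight-lined
           [3.5] * 25,          # straight-lined
           [4.0] * 24 + [3.5],  # not straight-lined
           [4.0] * 25]          # straight-lined
print(straight_lining_frequency(reports))  # 4 of 6 reports straight-lined
```

Aggregating this fraction over all residents in a program reproduces the per-program frequencies the study compares.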
Affiliation(s)
- Pedro Tanaka
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Jay Roby
- Department of Anesthesiology, University of Southern California, Los Angeles, California, USA
- Kyle Ahn
- Department of Anesthesiology, University of California Irvine, Irvine, California, USA
- Clinton Kakazu
- UCLA-Harbor Medical Center, Los Angeles, California, USA
- Ankeet Udani
- Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina, USA
- Alex Macario
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA
11
Hauer KE, Jurich D, Vandergrift J, Lipner RS, McDonald FS, Yamazaki K, Chick D, McAllister K, Holmboe ES. Gender Differences in Milestone Ratings and Medical Knowledge Examination Scores Among Internal Medicine Residents. Academic Medicine 2021; 96:876-884. [PMID: 33711841 DOI: 10.1097/acm.0000000000004040]
Abstract
PURPOSE To examine whether milestone ratings submitted by program directors working with clinical competency committees (CCCs) differ by gender for internal medicine (IM) residents, and whether women and men with similar milestone ratings perform comparably on subsequent in-training and certification examinations. METHOD This national retrospective study examined end-of-year medical knowledge (MK) and patient care (PC) milestone ratings and IM In-Training Examination (IM-ITE) and IM Certification Examination (IM-CE) scores for 2 cohorts (2014-2017, 2015-2018) of U.S. IM residents at ACGME-accredited programs. It included 20,098/21,440 (94%) residents, with 9,424 women (47%) and 10,674 men (53%). Descriptive statistics and differential prediction techniques using hierarchical linear models were performed. RESULTS For MK milestone ratings in PGY-1, men and women showed no statistical difference at a significance level of .01 (P = .02). In PGY-2 and PGY-3, men received statistically higher average MK ratings than women (P = .002 and P < .001, respectively). In contrast, men and women received equivalent average PC ratings in each PGY (P = .47, P = .72, and P = .80, for PGY-1, PGY-2, and PGY-3, respectively). Men slightly outperformed women with similar MK or PC ratings in PGY-1 and PGY-2 on the IM-ITE by about 1.7 and 1.5 percentage points, respectively, after adjusting for covariates. For PGY-3 ratings, women and men with similar milestone ratings performed equivalently on the IM-CE. CONCLUSIONS Milestone ratings were largely similar for women and men. Generally, women and men with similar MK or PC milestone ratings performed similarly on future examinations. Although there were small differences favoring men on earlier examinations, these differences disappeared by the final training year. It is questionable whether these small differences are educationally or clinically meaningful.
The findings suggest fair, unbiased milestone ratings generated by program directors and CCCs assessing residents.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Daniel Jurich
- D. Jurich is manager, psychometrics, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Jonathan Vandergrift
- J. Vandergrift is senior research associate, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Rebecca S Lipner
- R.S. Lipner is senior vice president for assessment and research, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Furman S McDonald
- F.S. McDonald is senior vice president for academic and medical affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Kenji Yamazaki
- K. Yamazaki is senior analyst, milestones research and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Davoren Chick
- D. Chick is senior vice president for medical education, American College of Physicians, Philadelphia, Pennsylvania
- Kevin McAllister
- K. McAllister is assessment officer, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Eric S Holmboe
- E.S. Holmboe is chief research, milestone development, and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
12
Using Learning Analytics to Examine Achievement of Graduation Targets for Systems-Based Practice and Practice-Based Learning and Improvement: A National Cohort of Vascular Surgery Fellows. Ann Vasc Surg 2021; 76:463-471. [PMID: 33905852 DOI: 10.1016/j.avsg.2021.03.046]
Abstract
BACKGROUND Surgeons provide patient care in complex health care systems and must be able to participate in improving both personal performance and the performance of the system. The Accreditation Council for Graduate Medical Education (ACGME) Vascular Surgery Milestones are utilized to assess vascular surgery fellows' (VSF) achievement of graduation targets in the competencies of Systems-Based Practice (SBP) and Practice-Based Learning and Improvement (PBLI). We investigated the predictive value of semiannual milestone ratings for final achievement within these competencies at the time of graduation. METHODS National ACGME milestones data were utilized for analysis. All trainees entering 2-year vascular surgery fellowship programs in July 2016 were included in the analysis (n = 122). Predictive probability values (PPVs) were obtained for each SBP and PBLI sub-competency by biannual review period, to estimate the probability of VSFs not reaching the recommended graduation target based on their previous milestone ratings. RESULTS The rate of nonachievement of the graduation target level 4.0 on the SBP and PBLI sub-competencies at the time of graduation for VSFs was 13.1-25.4%. At the first time point of assessment, 6 months into the fellowship program, the PPV of the SBP and PBLI milestones for nonachievement of level 4.0 upon graduation ranged from 16.3-60.2%. Six months prior to graduation, the PPVs across the 6 sub-competencies ranged from 14.6-82.9%. CONCLUSIONS A significant percentage of VSFs do not achieve the ACGME Vascular Surgery Milestone targets for graduation in the competencies of SBP and PBLI, suggesting a need to improve curricula and assessment strategies in these domains across vascular surgery fellowship programs. Reported milestone levels across all time points are predictive of ultimate achievement upon graduation and should be utilized to provide targeted feedback and individualized learning plans to ensure graduates are prepared to engage in personal and health care system improvement once in unsupervised practice.
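The predictive probability described in this abstract can be illustrated as a conditional frequency: among fellows below a given rating at an earlier review, the share who finished below the Level 4.0 graduation target. A minimal sketch with made-up values; the cutoff, data layout, and numbers are hypothetical, not the study's dataset or estimation method:

```python
# Hypothetical illustration of a predictive probability of non-achievement:
# P(final rating < target | earlier rating < cutoff), as an observed frequency.
from typing import List, Tuple

def ppv_nonachievement(records: List[Tuple[float, float]],
                       earlier_cutoff: float,
                       target: float = 4.0) -> float:
    """Among trainees rated below `earlier_cutoff` at an earlier review,
    the fraction whose final rating fell short of `target`."""
    flagged = [final for earlier, final in records if earlier < earlier_cutoff]
    if not flagged:
        return 0.0
    return sum(final < target for final in flagged) / len(flagged)

# (earlier rating, final rating) pairs -- illustrative values only.
records = [(2.0, 3.5), (2.5, 4.0), (3.0, 4.0), (2.0, 3.0), (3.5, 4.5)]
print(ppv_nonachievement(records, earlier_cutoff=3.0))  # 2 of 3 flagged fellows
```

A high value late in training, as the study reports for some sub-competencies, means an early low rating is a strong warning sign worth an individualized learning plan.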
13
Hamstra SJ, Yamazaki K. A Validity Framework for Effective Analysis and Interpretation of Milestones Data. J Grad Med Educ 2021; 13:75-80. [PMID: 33936537 PMCID: PMC8078069 DOI: 10.4300/jgme-d-20-01039.1]
Affiliation(s)
- Stanley J. Hamstra
- At the time of research, Stanley J. Hamstra, PhD, was Vice President, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education (ACGME), and is now Professor, Department of Surgery, University of Toronto, Adjunct Professor, Department of Medical Education, Feinberg School of Medicine, Northwestern University, and Research Consultant, ACGME
- Kenji Yamazaki
- Kenji Yamazaki, PhD, is Senior Analyst, Milestones Research and Evaluation, ACGME
14
Golden BP, Henschen BL, Liss DT, Kiely SL, Didwania AK. Association Between Internal Medicine Residency Applicant Characteristics and Performance on ACGME Milestones During Intern Year. J Grad Med Educ 2021; 13:213-222. [PMID: 33897955 PMCID: PMC8054584 DOI: 10.4300/jgme-d-20-00603.1]
Abstract
BACKGROUND Residency programs apply varying criteria to the resident selection process. However, it is unclear which applicant characteristics reflect preparedness for residency. OBJECTIVE We determined the applicant characteristics associated with first-year performance in internal medicine residency as assessed by performance on Accreditation Council for Graduate Medical Education (ACGME) Milestones. METHODS We examined the association between applicant characteristics and performance on ACGME Milestones during intern year for individuals entering Northwestern University's internal medicine residency between 2013 and 2018. We used bivariate analysis and a multivariable linear regression model to determine the association between individual factors and Milestone performance. RESULTS Of 203 eligible residents, 198 (98%) were included in the final sample. One hundred fourteen residents (58%) were female, and 116 residents (59%) were White. Mean Step 1 and Step 2 CK scores were 245.5 (SD 12.0) and 258 (SD 10.8) respectively. Step 1 scores, Alpha Omega Alpha membership, medicine clerkship grades, and interview scores were not associated with Milestone performance in the bivariate analysis and were not included in the multivariable model. In the multivariable model, overall clerkship grades, ranking of the medical school, and year entering residency were significantly associated with Milestone performance (P ≤ .04). CONCLUSIONS Most traditional metrics used in residency selection were not associated with early performance on ACGME Milestones during internal medicine residency.
Affiliation(s)
- Blair P. Golden
- At the time of writing, Blair P. Golden, MD, MS, was Chief Resident, Internal Medicine Residency, and Clinical Instructor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine, and currently is Assistant Professor, Division of Hospital Medicine, Department of Medicine, University of Wisconsin School of Medicine and Public Health
- Bruce L. Henschen
- Bruce L. Henschen, MD, MPH, is Assistant Professor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine
- David T. Liss
- David T. Liss, PhD, is Research Associate Professor, Division of General Internal Medicine and Geriatrics, Northwestern University Feinberg School of Medicine
- Sara L. Kiely
- Sara L. Kiely, MS, is Accreditation Council for Graduate Medical Education Liaison, McGaw Medical Center of Northwestern University Feinberg School of Medicine
- Aashish K. Didwania
- Aashish K. Didwania, MD, is Associate Professor, Division of General Internal Medicine and Geriatrics, Program Director, Internal Medicine Residency, and Vice Chair for Education, Department of Medicine, Northwestern University Feinberg School of Medicine
15
Hauer KE, Edgar L, Hogan SO, Kinnear B, Warm E. The Science of Effective Group Process: Lessons for Clinical Competency Committees. J Grad Med Educ 2021; 13:59-64. [PMID: 33936534 PMCID: PMC8078081 DOI: 10.4300/jgme-d-20-00827.1]
Affiliation(s)
- Karen E. Hauer
- Karen E. Hauer, MD, PhD, is Associate Dean, Competency Assessment and Professional Standards, and Professor of Medicine, University of California, San Francisco
- Laura Edgar
- Laura Edgar, EdD, CAE, is Vice President, Milestones Development, Accreditation Council for Graduate Medical Education (ACGME)
- Sean O. Hogan
- Sean O. Hogan, PhD, is Director, Outcomes Research and Evaluation, ACGME
- Benjamin Kinnear
- Benjamin Kinnear, MD, MEd, is Associate Professor of Internal Medicine and Pediatrics, Department of Pediatrics, University of Cincinnati College of Medicine
- Eric Warm
- Eric Warm, MD, is Program Director, Internal Medicine, Department of Medicine, University of Cincinnati College of Medicine
16
Vinagre R, Tanaka P, Tardelli MA. Competency-based anesthesiology teaching: comparison of programs in Brazil, Canada and the United States. Braz J Anesthesiol 2021; 71:162-170. [PMID: 33781575 PMCID: PMC9373559 DOI: 10.1016/j.bjane.2020.12.026]
Abstract
In 2017, the Brazilian Society of Anesthesiology (SBA) and the National Medical Residency Committee (CNRM) presented a joint competence matrix to train and evaluate physicians specializing in Anesthesiology, which took effect in 2019. The competency-based curriculum aims to train residents toward defined outcomes: residents are considered capable when they are able to act appropriately and effectively within certain standards of performance. Canada and the United States (US) also use competency-based curricula to train their professionals. In Canada, the format is the basis for an evaluation method known as Entrustable Professional Activities (EPA), in which the mentor assesses residents' capacity to perform certain tasks, classified into 5 levels. The US, in turn, uses Milestones for evaluation, in which competencies and sub-competencies are assessed according to residents' progress during training. The present article describes and compares the competency-based curricula and evaluation methods used in the three countries, and proposes a reflection on future paths for medical education in Anesthesiology in Brazil.
Affiliation(s)
- Rafael Vinagre
- Stanford University, School of Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford, California, USA; Lincoln Medical and Health Care Center, Department of Internal Medicine, Bronx, New York, USA.
- Pedro Tanaka
- Stanford University, School of Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford, California, USA
- Maria Angela Tardelli
- Universidade Federal de São Paulo (UNIFESP), Escola Paulista de Medicina, Departamento de Cirurgia, Disciplina de Anestesiologia, Dor e Medicina Intensiva, São Paulo, SP, Brazil
17
Francisco GE, Yamazaki K, Raddatz M, Sabharwal S, Robinson L, Kinney C, Holmboe E. Do Milestone Ratings Predict Physical Medicine and Rehabilitation Board Certification Examination Scores? Am J Phys Med Rehabil 2021; 100:S34-S39. [PMID: 33048889 DOI: 10.1097/phm.0000000000001613]
Abstract
ABSTRACT The Accreditation Council for Graduate Medical Education (ACGME) developed the Milestones to assist training programs in assessing resident physicians in the context of their participation in ACGME-accredited training programs. Biannual assessments are done over a resident's entire training period to define the trajectory in achieving specialty-specific competencies. As part of its process of initial certification, the American Board of Physical Medicine and Rehabilitation requires successful completion of two examinations administered approximately 9 months apart. The Part I Examination measures a single-dimensional construct, physical medicine and rehabilitation medical knowledge, whereas Part II assesses the application of medical and physiatric knowledge to multiple domains, including data acquisition, problem solving, patient management, systems-based practice, and interpersonal and communication skills through specific patient case scenarios. This study aimed to investigate the validity of the Milestones by demonstrating their association with performance on the American Board of Physical Medicine and Rehabilitation certifying examinations. A cohort of 233 physical medicine and rehabilitation trainees in 3-yr residency programs (postgraduate year 2 entry) in the United States from academic years 2014-2016, who also took the American Board of Physical Medicine and Rehabilitation Parts I and II certifying examinations between 2016 and 2018, were included in the study. Milestone ratings in four distinct observation periods were correlated with scores on the American Board of Physical Medicine and Rehabilitation Parts I and II Examinations. Milestone ratings of medical knowledge (but not patient care, professionalism, problem-based learning, interpersonal and communication skills, and systems-based practice) predicted performance on the subsequent Part I American Board of Physical Medicine and Rehabilitation Examination, but none of the Milestone ratings correlated with Part II Examination scaled scores.
Affiliation(s)
- Gerard E Francisco
- From the Department of Physical Medicine and Rehabilitation, The University of Texas at Houston McGovern Medical School and TIRR Memorial Hermann, Houston, Texas (GEF); Accreditation Council for Graduate Medical Education (ACGME), Chicago, Illinois (KY, EH); American Board of Physical Medicine and Rehabilitation, Rochester, Minnesota (MR, CK); Harvard Medical School and VA Boston Health Care System, Boston, Massachusetts (SS); and University of Toronto, Ontario, Canada (LR)
18
Teherani A, Perez S, Muller-Juge V, Lupton K, Hauer KE. A Narrative Study of Equity in Clinical Assessment Through the Antideficit Lens. Academic Medicine 2020; 95:S121-S130. [PMID: 33229956 DOI: 10.1097/acm.0000000000003690]
Abstract
PURPOSE Efforts to address inequities in medical education are centered on a dialogue of deficits that highlights negative experiences and lower performance outcomes of underrepresented in medicine (UIM) learners. An alternative narrative explores perspectives on achievement and equity in assessment. This study sought to understand UIM learner perceptions of successes and equitable assessment practices. METHOD Using narrative research, investigators selected a purposeful sample of self-identified UIM fourth-year medical students and senior-level residents and conducted semistructured interviews. Questions elicited personal stories of achievement during clinical training, clinical assessment practices that captured achievement, and equity in clinical assessment. Using re-storying and thematic analysis, investigators coded transcripts and synthesized data into themes and representative stories. RESULTS Twenty UIM learners (6 medical students and 14 residents) were interviewed. Learners often thought about equity during clinical training and provided personal definitions of equity in assessment. Learners shared stories that reflected their achievements in patient care, favorable assessment outcomes, and growth throughout clinical training. Sound assessments that captured achievements included frequent observations with real-time feedback on predefined expectations by supportive, longitudinal clinical supervisors. Finally, equitable assessment systems were characterized as sound assessment systems that also avoided comparison to peers, used narrative assessment, assessed patient care and growth, trained supervisors to avoid bias, and acknowledged learner identity. CONCLUSIONS UIM learners characterized equitable and sound assessment systems that captured achievements during clinical training. These findings guide future efforts to create an inclusive, fair, and equitable clinical assessment experience.
Affiliation(s)
- Arianne Teherani
- A. Teherani is professor, Department of Medicine, education scientist, Center for Faculty Educators, and director of program evaluation, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: http://orcid.org/0000-0003-2936-9832
- Sandra Perez
- S. Perez is a medical student, University of California, San Francisco, School of Medicine, San Francisco, California
- Virginie Muller-Juge
- V. Muller-Juge is associate specialist, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-2346-8904
- Katherine Lupton
- K. Lupton is associate professor, Department of Medicine, University of California, San Francisco, School of Medicine, San Francisco, California
- Karen E Hauer
- K.E. Hauer is professor, Department of Medicine, and associate dean for competency assessment and professional standards, University of California, San Francisco, School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
19
Hu K, Hicks PJ, Margolis M, Carraccio C, Osta A, Winward ML, Schwartz A. Reported Pediatrics Milestones (Mostly) Measure Program, Not Learner Performance. Academic Medicine 2020; 95:S89-S94. [PMID: 32769468 DOI: 10.1097/acm.0000000000003644]
Abstract
PURPOSE Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC, consisting of the National Board of Medical Examiners, American Board of Pediatrics, and Association of Pediatric Program Directors) developed workplace-based assessments of 2 inferences: readiness to serve as an intern with a supervisor present (D1) and readiness to care for patients with a supervisor nearby in the pediatric inpatient setting (D2). The authors compared learner and program variance in PMAC scores with ACGME milestones. METHOD The authors examined sources of variance in PMAC scores and milestones between November 2015 and May 2017 of 181 interns at 8 U.S. pediatrics residency programs using random effects models with program, competency, learner, and program × competency components. RESULTS Program-related milestone variance was substantial (54% D1, 68% D2), both in comparison to learner milestone variance (22% D1, 14% D2) and program variance in the PMAC scores (12% D1, 10% D2). In contrast, learner variance represented 44% (D1) or 26% (D2) of variance in PMAC scores. Within programs, PMAC scores were positively correlated with milestones for all but one competency. CONCLUSIONS PMAC assessments provided scores with little program-specific variance and were more sensitive to differences in learners within programs compared with milestones. Milestones reflected greater differences by program than by learner. This may represent program-based differences in intern performance or in use of milestones as a reporting scale. Comparing individual learner milestones without adjusting for programs is problematic.
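The program-versus-learner variance comparison in this abstract can be illustrated with a crude between-group variance share. The study used random effects models; this simplification, and the ratings in the example, are hypothetical:

```python
# Crude share of total variance attributable to group (e.g., program)
# membership: variance of group means divided by variance of all observations.
from statistics import mean, pvariance
from typing import List

def between_group_share(groups: List[List[float]]) -> float:
    """Fraction of total score variance explained by group membership."""
    all_scores = [s for g in groups for s in g]
    total = pvariance(all_scores)
    if total == 0:
        return 0.0
    return pvariance([mean(g) for g in groups]) / total

# Milestone-style ratings grouped by program (illustrative values only).
# Programs with very different internal scales: variance is mostly "program".
print(between_group_share([[2.0, 2.0, 2.0], [4.0, 4.0, 4.0]]))  # 1.0
# Programs with identical means: variance is attributable to learners.
print(between_group_share([[2.0, 4.0, 3.0], [3.0, 2.0, 4.0]]))  # 0.0
```

A high program share, as the study found for reported milestones, means comparing individual learners across programs without adjustment is misleading.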
Affiliation(s)
- Kimberly Hu
- K. Hu is an MD/MPH student, University of Illinois at Chicago, Chicago, Illinois
- Patricia J Hicks
- P.J. Hicks is professor of pediatrics, University of Texas Southwestern Medical School, Dallas, Texas
- Melissa Margolis
- M. Margolis is senior measurement scientist, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Carol Carraccio
- C. Carraccio is vice president, competency-based medical education, American Board of Pediatrics, Chapel Hill, North Carolina
- Amanda Osta
- A. Osta is associate professor and residency program director, pediatrics, University of Illinois College of Medicine, Chicago, Illinois
- Marcia L Winward
- M.L. Winward is measurement scientist, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Alan Schwartz
- A. Schwartz is the Michael Reese Endowed Professor of Medical Education, and research professor, pediatrics, University of Illinois College of Medicine, Chicago, Illinois, and network director, Association of Pediatric Program Directors (APPD) Longitudinal Educational Assessment Research Network (LEARN), McLean, Virginia
|
20
|
McDonald FS, Jurich D, Duhigg LM, Paniagua M, Chick D, Wells M, Williams A, Alguire P. Correlations Between the USMLE Step Examinations, American College of Physicians In-Training Examination, and ABIM Internal Medicine Certification Examination. Acad Med 2020; 95:1388-1395. [PMID: 32271224 DOI: 10.1097/acm.0000000000003382] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
PURPOSE To assess the correlations between United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables. METHOD The study included 9,676 postgraduate year (PGY)-1, 11,424 PGY-2, and 10,239 PGY-3 internal medicine (IM) residents from any Accreditation Council for Graduate Medical Education-accredited IM residency program who took the IM-ITE (2014 or 2015) and the IM-CE (2015-2018). USMLE scores, IM-ITE percent correct scores, and IM-CE scores were analyzed using multiple linear regression, and IM-CE pass/fail status was analyzed using multiple logistic regression, controlling for USMLE Step 1, Step 2 Clinical Knowledge, and Step 3 scores; averaged medical knowledge milestones; age at IM-ITE; gender; and medical school location (United States or Canada vs international). RESULTS All variables were significant predictors of passing the IM-CE with IM-ITE scores having the strongest association and USMLE Step scores being the next strongest predictors. Prediction curves for the probability of passing the IM-CE based solely on IM-ITE score for each PGY show that residents must score higher on the IM-ITE with each subsequent administration to maintain the same estimated probability of passing the IM-CE. CONCLUSIONS The findings from this study should support residents and program directors in their efforts to more precisely identify and evaluate knowledge gaps for both personal learning and program improvement. While no individual USMLE Step score was as strongly predictive of IM-CE score as IM-ITE score, the combined relative contribution of all 3 USMLE Step scores was of a magnitude similar to that of IM-ITE score.
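The prediction curves described in this abstract (probability of passing the IM-CE as a function of IM-ITE score, shifting with each postgraduate year) can be sketched with a simple logistic model. The coefficients below are illustrative assumptions, not values from the study; the sketch only shows the qualitative pattern reported, namely that the same IM-ITE score implies a lower estimated pass probability in later training years.

```python
# Hypothetical sketch of the kind of prediction curve the study reports:
# P(pass IM-CE) as a logistic function of IM-ITE percent correct, with the
# curve's midpoint shifting right for each postgraduate year (PGY).
# slope, midpoint_pgy1, and shift_per_year are assumed values for
# illustration only.
import math

def p_pass(ite_pct: float, pgy: int,
           slope: float = 0.15, midpoint_pgy1: float = 55.0,
           shift_per_year: float = 4.0) -> float:
    """Logistic model: the IM-ITE score needed for a given pass
    probability rises with each subsequent administration."""
    midpoint = midpoint_pgy1 + shift_per_year * (pgy - 1)
    return 1.0 / (1.0 + math.exp(-slope * (ite_pct - midpoint)))

# Same score, later year -> lower estimated probability of passing
for pgy in (1, 2, 3):
    print(f"PGY-{pgy}, ITE 65%: P(pass) = {p_pass(65.0, pgy):.2f}")
```

The rightward midpoint shift per PGY is the mechanism behind the study's observation that residents must score higher on each subsequent IM-ITE administration to maintain the same estimated probability of passing.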
Affiliation(s)
- Furman S McDonald
- F.S. McDonald is senior vice president for academic and medical affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania, adjunct professor of medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota, adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, and clinical associate, J. Edwin Wood Clinic, Pennsylvania Hospital, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-7952-3776
- Daniel Jurich
- D. Jurich is senior psychometrician, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-1870-2436
- Lauren M Duhigg
- L.M. Duhigg is senior research associate, American Board of Internal Medicine, Philadelphia, Pennsylvania
- Miguel Paniagua
- M. Paniagua is medical advisor, National Board of Medical Examiners, and adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-2307-4873
- Davoren Chick
- D. Chick is senior vice president of medical education, American College of Physicians, and adjunct professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0003-4477-1272
- Margaret Wells
- M. Wells is director of assessment and education programs, American College of Physicians, Philadelphia, Pennsylvania
- Amber Williams
- A. Williams is manager, Relationship Development, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Patrick Alguire
- P. Alguire is senior vice president emeritus, medical education, American College of Physicians, Philadelphia, Pennsylvania
|
21
|
Utility of Residency Milestones Reported to Fellowship Directors: A National Survey of Pediatric Fellowship Program Directors. Acad Pediatr 2020; 20:696-702. [PMID: 31978601 DOI: 10.1016/j.acap.2020.01.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 01/07/2020] [Accepted: 01/10/2020] [Indexed: 02/02/2023]
Abstract
OBJECTIVE The Accreditation Council for Graduate Medical Education recently made final residency Milestones for first-year fellows available to fellowship program directors (FPDs). The usefulness of residency Milestones for fellows is unknown. Our objective was to determine how many pediatric FPDs downloaded final residency Milestones for their first-year fellows and to characterize FPD perspectives on the usefulness of residency Milestones. METHODS We conducted a mixed methods survey of pediatric FPDs assessing their use of residency Milestones for first-year fellows and their opinions about the utility of residency Milestones for fellowship, including during fellow recruitment. Quantitative data were analyzed using descriptive statistics. Qualitative data were analyzed using content analysis. RESULTS The response rate was 67.8% (544 of 802). Only 39.3% (209 of 532) of FPDs downloaded final residency Milestones for their first-year fellows. Twenty-four percent (129 of 532) of all FPDs thought residency Milestones were useful. Forty-one percent (218 of 532) thought residency Milestones would be useful during recruitment; others believed this might harm applicants. Of FPDs who downloaded and reviewed residency Milestones, 27% (50 of 185) used them for individualized education. FPDs felt residency Milestones might allow for identification of trainee needs and baseline assessments, but thought that residency Milestones had limited usefulness during fellowship because of concerns about lack of validity evidence, relevance, and how Milestones are assessed and reported. CONCLUSIONS Most FPDs find residency Milestones to be of limited utility for their fellows and do not use residency Milestones to tailor education for their first-year fellows. Improving the relevance of residency Milestones to fellowship training, their validity, and how Milestones are assessed and reported may improve their usefulness for fellow training.
|