1. Elhilu A, Ghazwani S, Adawi EA, Abdelwahab SI. Interns' Perceived Level of Proficiency After General Surgery Rotation: A Cross-Sectional Study From Saudi Arabia. Cureus 2024; 16:e57412. PMID: 38694650; PMCID: PMC11061808; DOI: 10.7759/cureus.57412.
Abstract
BACKGROUND The role of interns during general surgical rotation is crucial in shaping their future careers as surgeons. Surgical rotation offers a unique opportunity to gain valuable hands-on experience in fast-paced and challenging environments. However, interns often face significant challenges in obtaining the necessary practical training to develop proficiency in surgical techniques. This article aims to analyze some aspects of the accumulated competency of interns during their general surgery rotation, focusing on the range of skills and knowledge gained, in addition to the challenges faced. SUBJECTS AND METHODS We conducted a cross-sectional study using an anonymous web-based self-assessment questionnaire. The target population of the study included all Jazan University medical interns enrolled in the academic year 2022-2023. RESULTS Most participants showed low-to-average levels of proficiency in monitoring clinical evolution and treatment plans, ranging from fundamental awareness (n = 17, 17.5%) to working knowledge (n = 51, 52.6%), with only three participants (3.1%) reporting an expert level of proficiency. The same pattern was observed in the documentation of patient records (range: 7.2%, n = 7 for fundamental awareness to 42.3%, n = 41 for working knowledge). However, a significant proportion saw themselves as either proficient (n = 23, 23.7%) or experts (n = 15, 15.5%) in this aspect. Regarding bedside procedures, such as venipuncture, proctoscopy, nasogastric tube insertion, and urethral catheterization, the participants showed different proficiency levels, with the lowest in proctoscopy, where 66 (68.0%) of the participants reported only fundamental awareness. 
The results also revealed low perceived proficiency in performing surgical skin incisions, wound suturing, knot tying, application of surgical skin clips, and abscess drainage, with the lowest proficiency observed in the excision of superficial lumps, where more than half of the participants reported only fundamental awareness (n = 51, 52.6%). CONCLUSION The results of this study indicate that documentation and monitoring of patient progress are the competencies mastered most by the majority of interns during their rotations in general surgery. However, the interns' overall level of proficiency in bedside procedures and basic surgical skills acquired during their rotation was low to average. Additionally, interns were dissatisfied with their training and with the opportunities provided for them to actively engage in performing procedures in the operating room. This low proficiency is unrelated to pre-internship academic achievement, sex, or interest in future surgical careers. This suggests that efforts are needed to develop strategies to enhance interns' satisfaction and engagement, ultimately improving their overall experience during internships.
Affiliation(s)
- Salman Ghazwani
- Surgery Department, Faculty of Medicine, Jazan University, Jazan, SAU
- Essa A Adawi
- Surgery Department, Faculty of Medicine, Jazan University, Jazan, SAU
2. Upchurch DA, Fox K. Students' Approaches to Learning During Pre-Clinical and Clinical Phases of a Veterinary Curriculum, Their Motivations, and Their Correlation with GPA. Journal of Veterinary Medical Education 2024; 51:58-71. PMID: 37014176; DOI: 10.3138/jvme-2022-0129.
Abstract
This study was conducted to determine if veterinary students adopt a different approach to learning in the clinical phase compared with the pre-clinical phase, and what factors motivate their approach. We also sought to determine if the learning approach adopted correlates with grade point average (GPA). Two questionnaires were administered to the same cohort of students (112 students), one at the end of the pre-clinical phase and one at the end of the clinical phase. A total of 87 students completed at least one questionnaire. The questionnaires included the Approaches and Study Skills Inventory for Students, which was used to provide scores for three learning approaches: surface (focus on memorization), strategic (focus on optimum grades), and deep (focus on understanding). The questionnaires also included open-ended questions probing for motivations behind adopting learning approaches. Statistical analyses were performed on the data to detect correlations between variables. Students were more likely to adopt a surface approach in the pre-clinical phase than in the clinical phase, although other learning approaches did not differ between phases. No strong correlations existed between learning approach and GPA. Students who adopted a deep approach were typically motivated by higher-level motivations than those who adopted a surface approach, especially in the clinical phase. Time constraints, the desire to get good grades, and passing classes were the main reasons for adopting the surface approach. The results of the study can benefit students by allowing them to identify those pressures that could prevent them from adopting a deeper approach earlier in the curriculum.
Affiliation(s)
- David A Upchurch
- College of Veterinary Medicine, Kansas State University, 1800 Denison Avenue, Manhattan, KS 66506, USA
- Kirsty Fox
- The Royal Veterinary College, Hawkshead Lane, North Mymms, Hatfield, Hertfordshire, AL9 7TA, UK
3. Bowe SN, Bly RA, Whipple ME, Gray ST. Residency Selection in Otolaryngology: Past, Present, & Future. Laryngoscope 2023; 133:S1-S13. PMID: 36951573; DOI: 10.1002/lary.30668.
Abstract
OBJECTIVES To examine the otolaryngology residency selection process, including past experiences based on the medical literature and survey analysis of our present practices, to generate recommendations for future selection system design. METHODS A mixed-methods study, including a scoping review and a cross-sectional survey, was completed. Four databases were assessed for articles on otolaryngology residency selection published from January 1, 2016 through December 31, 2020. A 36-question survey was developed and distributed to 114 otolaryngology program directors. Descriptive and thematic analysis was performed. RESULTS Ultimately, 67 of 168 articles underwent data abstraction and assessment. Three themes surfaced during the analysis: effectiveness, efficiency, and equity. Regarding the survey, there were 62 participants (54.4% response rate). The three most important goals for the selection process were: (1) to fit the program culture, (2) to make good colleagues, and (3) to contribute to the program's diversity. The three biggest 'pain points' were as follows: (1) large volume of applications, (2) lack of reliable information about personal characteristics, and (3) lack of reliable information about a genuine interest in the program. CONCLUSIONS Within this study, the depth and breadth of the literature on otolaryngology residency selection have been synthesized. Additionally, baseline data on selection practices within our specialty have been captured. With an informed understanding of our past and present, we can look to the future. Built upon the principles of person-environment fit theory, our proposed framework can guide research and policy discussions regarding the design of selection systems in otolaryngology, as we work to achieve more effective, efficient, and equitable outcomes. LEVEL OF EVIDENCE: N/A. Laryngoscope, 133:2929-2941, 2023.
Affiliation(s)
- Sarah N Bowe
- Department of Otolaryngology-Head & Neck Surgery, San Antonio Uniformed Services Health Education Consortium, JBSA-Ft. Sam Houston, Texas, U.S.A.
- Randall A Bly
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington, U.S.A.
- Seattle Children's Hospital and Research Institute, Seattle, Washington, U.S.A.
- Mark E Whipple
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington, U.S.A.
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington, U.S.A.
- Stacey T Gray
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear Infirmary, Boston, Massachusetts, U.S.A.
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, U.S.A.
4. Ryan MS, Lomis KD, Deiorio NM, Cutrer WB, Pusic MV, Caretta-Weyer HA. Competency-Based Medical Education in a Norm-Referenced World: A Root Cause Analysis of Challenges to the Competency-Based Paradigm in Medical School. Academic Medicine 2023; 98:1251-1260. PMID: 36972129; DOI: 10.1097/acm.0000000000005220.
Abstract
Competency-based medical education (CBME) requires a criterion-referenced approach to assessment. However, despite best efforts to advance CBME, there remains an implicit, and at times, explicit, demand for norm-referencing, particularly at the junction of undergraduate medical education (UME) and graduate medical education (GME). In this manuscript, the authors perform a root cause analysis to determine the underlying reasons for continued norm-referencing in the context of the movement toward CBME. The root cause analysis consisted of 2 processes: (1) identification of potential causes and effects organized into a fishbone diagram and (2) identification of the 5 whys. The fishbone diagram identified 2 primary drivers: the false notion that measures such as grades are truly objective and the importance of different incentives for different key constituents. From these drivers, the importance of norm-referencing for residency selection was identified as a critical component. Exploration of the 5 whys further detailed the reasons for continuation of norm-referenced grading to facilitate selection, including the need for efficient screening in residency selection, dependence upon rank-order lists, perception that there is a best outcome to the match, lack of trust between residency programs and medical schools, and inadequate resources to support progression of trainees. Based on these findings, the authors argue that the implied purpose of assessment in UME is primarily stratification for residency selection. Because stratification requires comparison, a norm-referenced approach is needed. To advance CBME, the authors recommend reconsideration of the approach to assessment in UME to maintain the purpose of selection while also advancing the purpose of rendering a competency decision. Changing the approach will require a collaboration between national organizations, accrediting bodies, GME programs, UME programs, students, and patients/societies. 
Details are provided regarding the specific approaches required of each key constituent group.
Affiliation(s)
- Michael S Ryan
- M.S. Ryan is professor and associate dean for assessment, evaluation, research and innovation, Department of Pediatrics, University of Virginia, Charlottesville, Virginia, and a PhD student, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands; ORCID: https://orcid.org/0000-0003-3266-9289
- Kimberly D Lomis
- K.D. Lomis is vice president, undergraduate medical education innovations, American Medical Association, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-3504-6776
- Nicole M Deiorio
- N.M. Deiorio is professor and associate dean for student affairs, Department of Emergency Medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8123-1112
- William B Cutrer
- W.B. Cutrer is associate professor of pediatrics and associate dean for undergraduate medical education, Vanderbilt University School of Medicine, Nashville, Tennessee; ORCID: https://orcid.org/0000-0003-1538-9779
- Martin V Pusic
- M.V. Pusic is associate professor of emergency medicine and pediatrics, Department of Pediatrics, Harvard Medical School, Boston, Massachusetts; ORCID: https://orcid.org/0000-0001-5236-6598
- Holly A Caretta-Weyer
- H.A. Caretta-Weyer is assistant professor and associate residency director, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, California; ORCID: https://orcid.org/0000-0002-9783-5797
5. Gaxiola-García MA, Villalpando-Casas JDJ, García-Minjares M, Martínez-González A. National examination for medical residency admission: academic performance in a high-stakes test and the need for continuing education. Postgrad Med J 2023; 99:599-604. PMID: 37319154; DOI: 10.1136/postgradmedj-2022-141607.
Abstract
INTRODUCTION We examined the performance and selection rate of non-newly graduated physicians on a medical residency admission test as an indicator of the need for continuing education. METHODS A database comprising 153,654 physicians who took a residency admission test in the period 2014-2018 was analysed. Performance and selection rates were assessed in relation to year of graduation and performance in medical school. RESULTS The whole sample scored a mean of 62.3 (SD ±8.9; range 1.11-91.11). Examinees who took the test in their year of graduation performed better (66.10) than those who took it after their year of graduation (61.84); p<0.001. Selection rates differed accordingly: 33.9% for newly graduated physicians compared with 24.8% for those who took the test at least 1 year after graduation; p<0.001. An association between selection test performance and medical school grades was established using Pearson's correlation: r=0.40 for newly graduated physicians and r=0.30 for non-newly graduated physicians. There were statistically significant differences in selection rates for every ranking group of medical school grades based on the χ2 test (p<0.001). Selection rates decline in the years after graduation, even for candidates with high grades in medical school. DISCUSSION There is an association between performance on a medical residency admission test and academic variables of the candidates: medical school grades and time elapsed from graduation to test taking. The evidence of a decrease in retention of medical knowledge after graduation highlights the pertinence of continuing education interventions.
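The abstract compares selection rates (33.9% vs. 24.8%) between grade-ranking groups using a χ2 test. As an illustration of that kind of computation only (the counts below are hypothetical, scaled to per-1,000 rates; they are not the study's actual contingency table), a minimal sketch:

```python
def chi_squared(table):
    """Chi-squared statistic for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts per 1,000 examinees: [selected, not selected]
table = [[339, 661],   # newly graduated (33.9% selected)
         [248, 752]]   # >= 1 year after graduation (24.8% selected)
print(chi_squared(table))
```

With 1 degree of freedom, a statistic above 3.84 corresponds to p < 0.05, consistent with the abstract's reported p < 0.001 at the study's much larger sample size.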
Affiliation(s)
- Adrián Martínez-González
- CUAIEED, Universidad Nacional Autónoma de México, UNAM, Mexico City, Mexico
- Public Health Department, Universidad Nacional Autónoma de México, UNAM, Mexico City, Mexico
6. Gazit N, Ben-Gal G, Eliashar R. Using Job Analysis for Identifying the Desired Competencies of 21st-Century Surgeons for Improving Trainees Selection. Journal of Surgical Education 2023; 80:81-92. PMID: 36175291; DOI: 10.1016/j.jsurg.2022.08.015.
Abstract
OBJECTIVE The current selection for surgical training is based on ineffective methods. To identify or develop more valid selection tools, it is necessary first to define which competencies are most important for success in contemporary surgery. Therefore, the current study aims to identify which competencies are required for success as a surgeon in the 21st century and to evaluate their relative importance for selection for surgical training. METHODS Job analysis was conducted using a mixed-methods design. First, 104 senior surgeons from all surgical fields from various hospitals in Israel were interviewed to query their perceptions of the competencies associated with success as a surgeon. Their answers were coded and analyzed to create a list of important competencies. Next, a larger sample of 1,102 surgeons and residents from all surgical fields completed a questionnaire in which they rated the importance of each competency in the list for success as a surgeon and for selection for surgical training in the 21st century. RESULTS Twenty-four competencies (five technical skills, six cognitive abilities, and 13 personality characteristics) were identified in the interview analysis. Analysis of the questionnaire data revealed that all 24 competencies were perceived as important both for success as a surgeon in the 21st century and for selection for surgical training. The perceived importance of personality characteristics was higher than that of both cognitive abilities (p < 0.001) and technical skills (p < 0.001). The results did not differ between surgical fields. CONCLUSIONS Twenty-four competencies were identified as important for 21st-century surgeons and for selection for surgical training. Although all competencies were perceived as important, personality characteristics were perceived as more important than technical skills and cognitive abilities.
This updated definition of required competencies may aid in developing more valid selection methods of candidates for surgical training.
Affiliation(s)
- Noa Gazit
- Department of Prosthodontics, Hadassah Medical Center, Faculty of Dental Medicine, Hebrew University of Jerusalem, Jerusalem, Israel; Department of Otolaryngology/HNS, Hadassah Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Gilad Ben-Gal
- Department of Prosthodontics, Hadassah Medical Center, Faculty of Dental Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Ron Eliashar
- Department of Otolaryngology/HNS, Hadassah Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
7. Berk GA, Ho TD, Stack-Pyle TJ, Zeatoun A, Kong KA, Chaskes MB, Thorp BD, Ebert CS, DeMason CE, Kimple AJ, Senior BA. The next step: Replacing Step 1 as a metric for residency application. Laryngoscope Investig Otolaryngol 2022; 7:1756-1761. PMID: 36544915; PMCID: PMC9764748; DOI: 10.1002/lio2.947.
Abstract
Objective As of January 2022, USMLE Step 1 scores are reported as pass/fail. Historically, Step 1 scores have been a critical component of residency applications, representing one of the few metrics standardized across all applicants independent of the school they attended. In competitive specialties, such as otolaryngology, programs routinely receive 100+ applicants for each residency spot and use Step 1 as a screening tool. This study seeks to assess quantifiable metrics in the application that highly competitive residency programs could use for screening in place of Step 1 scores. Methods Otolaryngology applications to an academic medical center for the 2019-20 and 2020-21 ERAS cycles were reviewed. Board scores and quantitative research data were extracted. The relationships between Step 1 score and the other metrics were examined by computing Pearson's correlation coefficients and building regression models. Similar analyses were done separately for three score tiers defined by Step 1 cutoffs at 220 points and 250 points. Results Step 2 score was the only variable that had a meaningful correlation with Step 1 score (R = .67, p < 2.2e-16). No other objective metric, such as journal articles, posters, or oral presentations, correlated with Step 1 scores. Conclusion Step 1 scores were moderately correlated with Step 2 scores; however, using a Step 2 cutoff instead of a Step 1 cutoff would identify a different cohort of applicants for interview. No other quantifiable application metric had a positive correlation. In future match cycles, highly competitive residency programs will need to adopt new methods to screen candidates. Level of Evidence: 3.
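The reported Step 1-Step 2 relationship (R = .67) is a Pearson correlation coefficient. As an illustration of how such a coefficient is computed (the score pairs below are made up, not the study's data), a minimal sketch:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical (Step 1, Step 2) score pairs for illustration only
step1 = [215, 228, 240, 252, 261]
step2 = [220, 233, 238, 255, 266]
print(round(pearson_r(step1, step2), 2))
```

A value near 1 indicates a strong positive linear relationship; the study's R = .67 is moderate, which is why the authors note a Step 2 cutoff would select a different applicant cohort than a Step 1 cutoff.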
Affiliation(s)
- Garrett A. Berk, Tiffany D. Ho, Taylor J. Stack-Pyle, Abdullah Zeatoun, Keonho A. Kong, Mark B. Chaskes, Brian D. Thorp, Charles S. Ebert, Christine E. DeMason, Adam J. Kimple, Brent A. Senior
- Department of Otolaryngology—Head & Neck Surgery, The University of North Carolina, Chapel Hill, North Carolina, USA (all authors)
8. Warm EJ, Kinnear B, Lance S, Schauer DP, Brenner J. What Behaviors Define a Good Physician? Assessing and Communicating About Noncognitive Skills. Academic Medicine 2022; 97:193-199. PMID: 34166233; DOI: 10.1097/acm.0000000000004215.
Abstract
Once medical students attain a certain level of medical knowledge, success in residency often depends on noncognitive attributes, such as conscientiousness, empathy, and grit. These traits are significantly more difficult to assess than cognitive performance, creating a potential gap in measurement. Despite its promise, competency-based medical education (CBME) has yet to bridge this gap, partly due to a lack of well-defined noncognitive observable behaviors that assessors and educators can use in formative and summative assessment. As a result, typical undergraduate to graduate medical education handovers stress standardized test scores, and program directors trust little of the remaining information they receive, sometimes turning to third-party companies to better describe potential residency candidates. The authors have created a list of noncognitive attributes, with associated definitions and noncognitive skills-called observable practice activities (OPAs)-written for learners across the continuum to help educators collect assessment data that can be turned into valuable information. OPAs are discrete work-based assessment elements collected over time and mapped to larger structures, such as milestones, entrustable professional activities, or competencies, to create learning trajectories for formative and summative decisions. Medical schools and graduate medical education programs could adapt these OPAs or determine ways to create new ones specific to their own contexts. Once OPAs are created, programs will have to find effective ways to assess them, interpret the data, determine consequence validity, and communicate information to learners and institutions. The authors discuss the need for culture change surrounding assessment-even for the adoption of behavior-based tools such as OPAs-including grounding the work in a growth mindset and the broad underpinnings of CBME. 
Ultimately, improving assessment of noncognitive capacity should benefit learners, schools, programs, and most importantly, patients.
Affiliation(s)
- Eric J Warm
- E.J. Warm is professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0002-6088-2434
- Benjamin Kinnear
- B. Kinnear is associate professor of medicine and pediatrics and associate program director, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-0052-4130
- Samuel Lance
- S. Lance is associate professor of plastic surgery and craniofacial surgery and program director of plastic surgery, Division of Plastic Surgery, University of California San Diego, San Diego, California; ORCID: https://orcid.org/0000-0002-5186-2677
- Daniel P Schauer
- D.P. Schauer is associate professor of medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; ORCID: https://orcid.org/0000-0003-3264-8154
- Judith Brenner
- J. Brenner is associate professor of science education and medicine and associate dean for curricular integration and assessment, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York; ORCID: https://orcid.org/0000-0002-8697-5401
9. Gudgel BM, Melson AT, Dvorak J, Ding K, Siatkowski RM. Correlation of Ophthalmology Residency Application Characteristics with Subsequent Performance in Residency. Journal of Academic Ophthalmology 2021. DOI: 10.1055/s-0041-1733932.
Abstract
Purpose From application review alone, it is difficult to identify which applicants will become successful ophthalmology residents. The change of USMLE Step 1 scoring to "Pass/Fail" removes another quantitative metric. We aimed to identify application attributes correlated with successful residency performance. This study also used artificial intelligence (AI) to evaluate letters of recommendation (LOR), the Dean's letter (MSPE), and the personal statement (PS).
Design Retrospective analysis of application characteristics versus residency performance was conducted.
Participants Residents who graduated from the Dean McGee Eye Institute/University of Oklahoma Ophthalmology residency from 2004 to 2019 were included in this study.
Methods Thirty-four attributes were recorded from each application. Residents were subjectively ranked into tertiles and top and bottom deciles based on residency performance by faculty present during their training. The Ophthalmic Knowledge Assessment Program (OKAP) examination scores were used as an objective performance metric. Analysis was performed to identify associations between application attributes and tertile/decile ranking. Additional analysis used AI and natural language processing to evaluate applicant LORs, MSPE, and PS.
Main Outcome Measures Characteristics from residency applications that correlate with resident performance were the primary outcome of this study.
Results Fifty-five residents and 21 faculty members were included. A grade of "A" or "Honors" in the obstetrics/gynecology (OB/GYN) clerkship and the presence of a home ophthalmology department were associated with ranking in the top tertile but not the top decile. Mean core clerkship grades, medical school ranking in the top 25 U.S. News & World Report (USNWR) primary care rankings, and postgraduate year (PGY)-2 and PGY-3 OKAP scores were predictive of being ranked in both the top tertile and the top decile. USMLE scores, Alpha Omega Alpha (AOA) status, and number of publications did not correlate with subjective resident performance. AI analysis of LORs, MSPE, and PS did not identify any text features that correlated with resident performance.
Conclusions Many metrics traditionally felt to be predictive of residency success (USMLE scores, AOA status, and research) did not predict resident success in our study. We did confirm the importance of core clerkship grades and medical school ranking. Objective measures of success such as PGY-2 and PGY-3 OKAP scores were associated with high subjective ranking.
Affiliation(s)
- Brett M. Gudgel, Andrew T. Melson, Justin Dvorak, Kai Ding
- University of Oklahoma Health Science Center, Oklahoma City, Oklahoma
10. Goggs R, Kerl M, Jandrey KE, Guillaumin J. Prospective investigation of factors associated with success on the American College of Veterinary Emergency and Critical Care certification examination (2016-2018). J Vet Emerg Crit Care (San Antonio) 2021; 32:196-206. PMID: 34714977; DOI: 10.1111/vec.13153.
Abstract
OBJECTIVE To assess the association of candidate attributes and residency training factors with success on the American College of Veterinary Emergency and Critical Care (ACVECC) board certification examination and to develop multivariable models of first-attempt success. DESIGN Prospective survey-based study. SETTING Post-assessment ACVECC examination candidates. ANIMALS None. INTERVENTIONS None. MEASUREMENTS AND MAIN RESULTS Comprehensive surveys were distributed to ACVECC examination candidates in 2016 to 2018 after completion of their assessments, but prior to publication of examination results. Unique anonymous candidate identification numbers were used to match survey responses to outcome data from the office of the ACVECC Executive Secretary. After curation to retain only the first response from each candidate, there were 97 unique candidate responses available for analysis. Univariate analyses identified multiple factors associated with first-attempt success and multiple differences between academic and private practice residency programs. Multivariable logistic regression modeling suggested that 5 factors were independently associated with first-attempt success on the ACVECC examination, specifically younger age, more weeks of study prior to the examination, training at a facility with more ACVECC Diplomates, training at a facility with more ACVECC residents, and having no requirement to manage both Emergency Room (ER) and Critical Care (CC) cases simultaneously. CONCLUSIONS Numerous resident and training center factors are associated with success in the ACVECC board certification examination. Residents and training centers might be able to use these data to enhance training, but caution must be exercised because these data are associative only.
Affiliation(s)
- Robert Goggs
- Department of Clinical Sciences, Cornell University College of Veterinary Medicine, Ithaca, New York, USA
- Marie Kerl
- Regional Operations, Heartland Group, VCA Inc., Los Angeles, California, USA
- Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, Missouri, USA
- Karl E Jandrey
- Department of Surgical and Radiological Sciences, University of California, Davis, California, USA
- Julien Guillaumin
- Department of Clinical Science, College of Veterinary Medicine and Biomedical Sciences, Colorado State University, Fort Collins, Colorado, USA
11
Affiliation(s)
- Eric J. Warm
- Eric J. Warm, MD, is Sue P. and Richard W. Vilter Professor of Medicine and Program Director, Department of Internal Medicine, University of Cincinnati College of Medicine
- Benjamin Kinnear
- Benjamin Kinnear, MD, MEd, is Associate Professor of Medicine and Pediatrics, and Associate Program Director, Department of Pediatrics, University of Cincinnati College of Medicine
- Anne Pereira
- Anne Pereira, MD, MPH, is Professor of Medicine and Assessment and Coaching Expert, Department of Internal Medicine, University of Minnesota Medical School
- David A. Hirsh
- David A. Hirsh, MD, is Associate Dean, Undergraduate Medical Education, and Associate Professor of Medicine, Department of Medicine, Harvard Medical School and Cambridge Health Alliance
12
Gundogan B, Dowlut N, Rajmohan S, Borrelli MR, Millip M, Iosifidis C, Udeaja YZ, Mathew G, Fowler A, Agha R. Assessing the compliance of systematic review articles published in leading dermatology journals with the PRISMA statement guidelines: A systematic review. JAAD Int 2021; 1:157-174. [PMID: 34409336] [PMCID: PMC8361930] [DOI: 10.1016/j.jdin.2020.07.007]
Abstract
Background Reporting quality of systematic reviews and meta-analyses is of critical importance in dermatology because of their key role in informing health care decisions. Objective To assess the compliance of systematic reviews and meta-analyses in leading dermatology journals with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement guidelines. Methods This review was carried out in accordance with PRISMA guidelines. Included studies were reviews published across 6 years in the top 4 highest-impact-factor dermatology journals of 2017. Records and full texts were screened independently. Data analysis was conducted with univariate multivariable linear regression. The primary outcome was to assess the compliance of systematic reviews and meta-analyses in leading dermatology journals with the PRISMA statement. Results A total of 166 studies were included and mean PRISMA compliance across all articles was 73%. Compliance significantly improved over time (β = .016; P < .001). The worst reported checklist item was item 5 (reporting on protocol existence), with a compliance of 15% of articles. Conclusion PRISMA compliance within leading dermatology journals could be improved; however, it is steadily improving.
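The time trend reported in this abstract (compliance improving with β = .016) is the kind of estimate produced by an ordinary least-squares slope of per-article compliance on publication time. A minimal sketch, with invented data rather than the study's:

```python
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical per-year mean PRISMA compliance (%) across reviews
years = [2012, 2013, 2014, 2015, 2016, 2017]
compliance = [68.0, 70.0, 71.5, 73.0, 75.0, 76.5]
slope = ols_slope(years, compliance)  # compliance points gained per year
```

The published β would additionally be adjusted for other covariates in the multivariable model; the sketch shows only the unadjusted trend computation.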
Affiliation(s)
- Buket Gundogan
- University College London Hospital, London, United Kingdom
- Naeem Dowlut
- Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Mimi R Borrelli
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, California
- Mirabel Millip
- Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Christos Iosifidis
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
- Yagazie Z Udeaja
- Luton and Dunstable University Hospital NHS Foundation Trust, Luton, United Kingdom
- Ginimol Mathew
- University College London Medical School, Gower Street, London, United Kingdom
- Riaz Agha
- Bart's Health NHS Foundation Trust, London, United Kingdom
13
Abstract
BACKGROUND Consistently selecting successful, productive applicants from an annual candidate pool is the goal of all resident selection practices. Efforts to routinely identify high-quality applicants involve scrutiny of multiple factors and formulation of an ordinal rank list. Linear modeling offers a quantified approach to applicant selection that is strongly supported by decades of psychological research. METHODS For the 2019 residency application process, the University of Wisconsin Plastic Surgery Residency Program used linear modeling in their evaluation and ranking process. A linear model was developed using United States Medical Licensing Examination Step 1 and Step 2 scores, letters of recommendation, publications, and extracurricular activities as inputs. RESULTS The applicant's total score was calculated from a maximum total score of 100. The mean and median scores were 49 and 48, respectively, and applicants were ranked according to total score. A separate rank list was maintained using our program's standard methodology for applicant ranking, which involves global intuitive scoring during the interview process. The Spearman rank correlation coefficient between the two lists was 0.532, and differences between the rank lists were used as a fulcrum for discussion before making the final rank list. CONCLUSIONS This article presents the first known instance of the use of linear modeling to improve consistency, increase fairness, and decrease bias in the plastic surgery residency selection process. Transparent sharing of methodology may be useful to other programs seeking to optimize their own ranking methodology. Furthermore, it indicates to applicants that they are being evaluated based on fair, quantifiable criteria.
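The scoring-and-comparison workflow described above can be sketched as follows. This is a hypothetical illustration: the factor weights, the 0-1 normalization, and the example data are invented, not the Wisconsin program's actual model.

```python
def total_score(applicant, weights):
    """Weighted sum of application factors, each normalized to 0-1.

    Weights sum to 100, so the maximum total score is 100.
    """
    return sum(weights[k] * applicant[k] for k in weights)

def spearman_rho(xs, ys):
    """Spearman rank correlation between two score lists (no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented weights and one invented applicant
weights = {"step1": 30, "step2": 20, "letters": 25, "pubs": 15, "extras": 10}
applicant = {"step1": 0.8, "step2": 0.7, "letters": 0.6, "pubs": 0.4, "extras": 0.5}
score = total_score(applicant, weights)  # on the 0-100 scale
```

Ranking applicants by `total_score` produces the model-based list; `spearman_rho` applied to the model ranks and the interview-based ranks yields the correlation (0.532 in the article) that flags applicants the two methods judge differently.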
14
Zastrow RK, Burk-Rafel J, London DA. Systems-Level Reforms to the US Resident Selection Process: A Scoping Review. J Grad Med Educ 2021; 13:355-370. [PMID: 34178261] [PMCID: PMC8207920] [DOI: 10.4300/jgme-d-20-01381.1]
Abstract
BACKGROUND Calls to reform the US resident selection process are growing, given increasing competition and inefficiencies of the current system. Though numerous reforms have been proposed, they have not been comprehensively cataloged. OBJECTIVE This scoping review was conducted to characterize and categorize literature proposing systems-level reforms to the resident selection process. METHODS Following Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches of Embase, MEDLINE, Scopus, and Web of Science databases were performed for references published from January 2005 to February 2020. Articles were included if they proposed reforms that were applicable or generalizable to all applicants, medical schools, or residency programs. An inductive approach to qualitative content analysis was used to generate codes and higher-order categories. RESULTS Of 10 407 unique references screened, 116 met our inclusion criteria. Qualitative analysis generated 34 codes that were grouped into 14 categories according to the broad stages of resident selection: application submission, application review, interviews, and the Match. The most commonly proposed reforms were implementation of an application cap (n = 28), creation of a standardized program database (n = 21), utilization of standardized letters of evaluation (n = 20), and pre-interview screening (n = 13). CONCLUSIONS This scoping review collated and categorized proposed reforms to the resident selection process, developing a common language and framework to facilitate national conversations and change.
Affiliation(s)
- Ryley K. Zastrow
- Ryley K. Zastrow, BS, is a Fourth-Year Medical Student, Department of Medical Education, Icahn School of Medicine at Mount Sinai
- Jesse Burk-Rafel
- Jesse Burk-Rafel, MD, MRes, is Assistant Professor, Department of Internal Medicine, and Assistant Director of UME-GME Innovation, Institute for Innovations in Medical Education, NYU Grossman School of Medicine
- Daniel A. London
- At the time of writing, Daniel A. London, MD, MS, was an Orthopaedic Surgery Resident, Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, and is currently a Hand Surgery Fellow, Mary S. Stern Hand Surgery Fellowship, TriHealth
15
Schnapp BH, Alvarez A, Bianchi R, Caretta-Weyer H, Jewell C, Kalantari A, Lee E, Miller D, Quinn A. Curated collection for clinician educators: Six key papers on residency recruitment. AEM Educ Train 2021; 5:e10597. [PMID: 33969251] [PMCID: PMC8086575] [DOI: 10.1002/aet2.10597]
Abstract
INTRODUCTION All emergency medicine (EM) residency programs must recruit new medical school graduates each year. The process is often overwhelming, with each program receiving far more applicants than available positions. We searched for evidence-based best practices to guide residency programs in screening, interviewing, and ranking applicants to ensure a high-performing and diverse residency class. METHODS A literature search was conducted on the topic of residency recruitment, utilizing a call on social media as well as multiple databases. After identifying relevant articles, we performed a modified Delphi process in three rounds, utilizing junior educators as well as more senior faculty. RESULTS We identified 51 relevant articles on the topic of residency recruitment. The Delphi process yielded six articles that were deemed most highly relevant over the three rounds. Transparency with selection criteria, holistic application review, standardized letters of evaluation, and blinding applicant files for interviewers were among noted best practices. CONCLUSIONS Well-supported evidence-based practices exist for residency recruitment, and programs may benefit from understanding which common recruitment practices offer the most value. The articles discussed here provide a foundation for faculty looking to improve their program's recruiting practices.
Affiliation(s)
- Al'ai Alvarez
- Department of Emergency Medicine, Stanford University, Stanford, California, USA
- Riccardo Bianchi
- Department of Physiology and Pharmacology, SUNY Downstate Health Sciences University, New York, New York, USA
- Corlin Jewell
- Department of Emergency Medicine, University of Wisconsin, Madison, Wisconsin, USA
- Annahieta Kalantari
- Department of Emergency Medicine, Milton S Hershey Medical Center, Penn State Health, Hershey, Pennsylvania, USA
- Eric Lee
- Department of Emergency Medicine, Maimonides Medical Center, Brooklyn, New York, USA
- Danielle Miller
- Department of Emergency Medicine, Stanford University, Stanford, California, USA
- Antonia Quinn
- SUNY Downstate Health Sciences University College of Medicine, New York, New York, USA
- Department of Emergency Medicine, SUNY Downstate, Brooklyn, New York, USA
16
Cangialosi PT, Chung BC, Thielhelm TP, Camarda ND, Eiger DS. Medical Students' Reflections on the Recent Changes to the USMLE Step Exams. Acad Med 2021; 96:343-348. [PMID: 33208676] [PMCID: PMC8081295] [DOI: 10.1097/acm.0000000000003847]
Abstract
The United States Medical Licensing Examination (USMLE) consists of Step 1, Step 2 Clinical Knowledge, Step 2 Clinical Skills, and Step 3. To be licensed to practice medicine in the United States, medical students must pass all parts of the USMLE. However, in addition to that pass/fail grade, students are currently given a numerical score for Step 1, Step 2 Clinical Knowledge, and Step 3. Residency program directors have come to use the Step 1 score to efficiently screen a growing number of residency applicants. As a result, a deleterious environment in undergraduate medical education has been created, given the importance of Step 1 to medical students matching to their preferred residency program. It was announced in February 2020 that the score-reporting protocol for Step 1 would be changed from a 3-digit numerical score to pass/fail only, beginning no earlier than January 1, 2022. This decision will undoubtedly impact medical students, medical schools, and residency program directors. Here, the authors discuss the impact that the change to Step 1 scoring will have on these key stakeholder groups, from their perspective as students at MD-granting medical schools in the United States. They also call attention to outstanding issues with the USMLE that must be addressed to improve undergraduate medical education for all stakeholders, and they offer advice for further improvements to the residency application process.
Affiliation(s)
- Peter T Cangialosi
- P.T. Cangialosi is a fourth-year student, Rutgers New Jersey Medical School, Newark, New Jersey; ORCID: https://orcid.org/0000-0002-2138-1493
- Brian C Chung
- B.C. Chung is a fourth-year student, Keck School of Medicine of the University of Southern California, Los Angeles, California; ORCID: https://orcid.org/0000-0002-7979-934X
- Torin P Thielhelm
- T.P. Thielhelm is a fourth-year student, University of Miami Miller School of Medicine, Miami, Florida; ORCID: https://orcid.org/0000-0002-1205-2209
- Nicholas D Camarda
- N.D. Camarda is a third-year student, Medical Scientist Training Program, Tufts University School of Medicine, Boston, Massachusetts; ORCID: https://orcid.org/0000-0002-1853-0056
- Dylan S Eiger
- D.S. Eiger is a fifth-year student, Medical Scientist Training Program, Duke University School of Medicine, Durham, North Carolina; ORCID: https://orcid.org/0000-0001-9572-6282
17
Filiberto AC, Cooper LA, Loftus TJ, Samant SS, Sarosi GA, Tan SA. Objective predictors of intern performance. BMC Med Educ 2021; 21:77. [PMID: 33499857] [PMCID: PMC7839184] [DOI: 10.1186/s12909-021-02487-0]
Abstract
BACKGROUND Residency programs select medical students for interviews and employment using metrics such as the United States Medical Licensing Examination (USMLE) scores, grade-point average (GPA), and class rank/quartile. It is unclear whether these metrics predict performance as an intern. This study tested the hypothesis that performance on these metrics would predict intern performance. METHODS This single institution, retrospective cohort analysis included 244 graduates from four classes (2015-2018) who completed an Accreditation Council for Graduate Medical Education (ACGME) certified internship and were evaluated by program directors (PDs) at the end of the year. PDs provided a global assessment rating and ratings addressing ACGME competencies (response rate = 47%) with five response options: excellent = 5, very good = 4, acceptable = 3, marginal = 2, unacceptable = 1. PDs also classified interns as outstanding = 4, above average = 3, average = 2, and below average = 1 relative to other interns from the same residency program. Mean USMLE scores (Step 1 and Step 2CK), third-year GPA, class rank, and core competency ratings were compared using Welch's ANOVA and follow-up pairwise t-tests. RESULTS Better performance on PD evaluations at the end of intern year was associated with higher USMLE Step 1 (p = 0.006), Step 2CK (p = 0.030), medical school GPA (p = 0.020) and class rank (p = 0.016). Interns rated as average had lower USMLE scores, GPA, and class rank than those rated as above average or outstanding; there were no significant differences between above average and outstanding interns. Higher rating in each of the ACGME core competencies was associated with better intern performance (p < 0.01). CONCLUSIONS Better performance as an intern was associated with higher USMLE scores, medical school GPA and class rank. 
When USMLE Step 1 reporting changes from numeric scores to pass/fail, residency programs can use other metrics to select medical students for interviews and employment.
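The group comparisons in this abstract rest on Welch's statistic, which does not assume equal variances between groups. A small sketch of the computation, with invented data and names rather than the study's:

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples.

    Suitable when the two groups (e.g., Step 1 scores of interns rated
    "average" vs "above average") may have unequal variances.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Welch's ANOVA generalizes the same idea to more than two rating groups; the pairwise follow-up tests the authors describe are of the two-sample form shown here.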
Affiliation(s)
- Amanda C Filiberto
- Department of Surgery, University of Florida Health, 1600 SW Archer Ave, PO Box 100109, Gainesville, FL, 32610, USA
- Lou Ann Cooper
- Office for Educational Affairs, University of Florida College of Medicine, Gainesville, FL, USA
- Tyler J Loftus
- Department of Surgery, University of Florida Health, 1600 SW Archer Ave, PO Box 100109, Gainesville, FL, 32610, USA
- Sonja S Samant
- University of Florida College of Medicine, Gainesville, FL, USA
- George A Sarosi
- Department of Surgery, University of Florida Health, 1600 SW Archer Ave, PO Box 100109, Gainesville, FL, 32610, USA
- Sanda A Tan
- Department of Surgery, University of Florida Health, 1600 SW Archer Ave, PO Box 100109, Gainesville, FL, 32610, USA
18
Maxfield CM, Grimm LJ. The Value of Numerical USMLE Step 1 Scores in Radiology Resident Selection. Acad Radiol 2020; 27:1475-1480. [PMID: 31445825] [DOI: 10.1016/j.acra.2019.08.007]
Abstract
RATIONALE AND OBJECTIVES In response to a recent proposal to change scoring on the United States Medical Licensing Examination (USMLE) Step 1 exam to pass/fail, we sought to determine the value of numerical Step 1 scores in predicting success in our radiology residency program. MATERIALS AND METHODS Residency applications for 157 residents entering the program between 2005 and 2017 were retrospectively reviewed. Biographical (gender, sports participation, advanced degree), undergraduate (school, major), and medical school (grades, USMLE Step 1 score, Alpha Omega Alpha membership, letters of recommendation, publications) data were recorded. Multivariate regression analysis was used to examine the relationship between these application factors and subsequent performance as a radiology resident, as determined by completion of the program without requiring corrective action, select Accreditation Council for Graduate Medical Education milestones, and selection as chief resident. RESULTS Corrective action was required for 7% (n = 12) of residents. Of the predictor variables, only Step 1 score was associated with the need for corrective action (p < 0.001). The interpretation of exams milestone was associated with higher Step 1 scores (p = 0.001) and number of medical school clerkship honors (p = 0.008). Selection as chief resident was associated with sports participation (p = 0.04) and clerkship honors (p = 0.02). CONCLUSION Numerical USMLE Step 1 scores are predictive of successful completion of radiology residency training without the need for corrective action, and of accelerated competence in the interpretation of exams milestone. Continued reporting of numerical Step 1 scores would be valuable in selection of radiology residents.
Affiliation(s)
- Charles M Maxfield
- Department of Radiology, Duke University Medical Center, Durham, North Carolina.
- Lars J Grimm
- Department of Radiology, Duke University Medical Center, Box 3808, Durham, NC 27710
19
Application Factors Associated With Clinical Performance During Pediatric Internship. Acad Pediatr 2020; 20:1007-1012. [PMID: 32268217] [DOI: 10.1016/j.acap.2020.03.010]
Abstract
OBJECTIVE Our goal was to identify aspects of residency applications predictive of subsequent performance during pediatric internship. METHODS We conducted a retrospective cohort study of graduates of US medical schools who began pediatric internship in a large pediatric residency program in the summers of 2013 to 2017. The primary outcome was the weighted average of subjects' Accreditation Council for Graduate Medical Education pediatric Milestone scores at the end of pediatric internship. To determine factors independently associated with performance, we conducted multivariate linear mixed-effects models controlling for match year and Milestone grading committee as random effects and the following application factors as fixed effects: letter of recommendation strength, clerkship grades, medical school reputation, master's or PhD degrees, gender, US Medical Licensing Examination Step 1 score, Alpha Omega Alpha membership, private medical school, and interview score. RESULTS Our study population included 195 interns. In multivariate analyses, the aspects of applications significantly associated with composite Milestone scores at the end of internship were letter of recommendation strength (estimate 0.09, 95% confidence intervals [CI]: 0.04, 0.15), numbers of clerkship honors (est. 0.05, 95% CI: 0.01-0.09), medical school ranking (est. 0.04, 95% CI: 0.08-0.01), having a master's degree (est. 0.19, 95% CI: 0.03-0.36), and not having a PhD (est. 0.14, 95% CI: 0.02-0.26). Overall, the final model explained 18% of the variance in milestone scoring. CONCLUSIONS Letter of recommendation strength, clerkship grades, medical school ranking, and having obtained a Master's degree were significantly associated with higher clinical performance during pediatric internship.
20
Rozenshtein A, Mullins ME, Marx MV. The USMLE Step 1 Pass/Fail Reporting Proposal: The APDR Position. Acad Radiol 2019; 26:1400-1402. [PMID: 31383545] [DOI: 10.1016/j.acra.2019.06.004]
Abstract
BACKGROUND The National Board of Medical Examiners (NBME) and the United States Medical Licensing Examination (USMLE) convened a conference of "key stakeholders" on March 11-12, 2019 to consider reporting the results of the USMLE Step 1 as pass/fail. DISCUSSION While the original purpose of the USMLE Step 1 was to provide an objective basis for medical licensing, the score is increasingly used in residency applicant screening and selection because it is an objective, nationally recognized metric allowing comparison across medical schools in and outside the United States. Excessive reliance on the Step 1 score in the matching process has led to a "Step 1 Culture" that drives medical schools to "teach to the test," increases medical student anxiety, and disadvantages minorities who have been shown to score lower on the USMLE Step 1 examination. The outsize role of the USMLE Step 1 score in resident selection is due to lack of standardization in medical school transcripts, grade inflation, and the lack of class standing in many summative assessments. Furthermore, the numeric score allows initial Electronic Residency Application Service filtering, commonly used by programs to limit the number of residency applications to review. CONCLUSION The Association of Program Directors in Radiology (APDR) is concerned that pass/fail reporting of the USMLE Step 1 score would take away an objective measure of medical students' knowledge and the incentive to acquire as much of it as possible. Although the APDR is not in favor of the Step 1 exam being used as a screening tool, in the absence of an equal or better metric for applicant comparison the APDR opposes the change in Step 1 reporting from the numeric score to pass/fail.
Affiliation(s)
- Anna Rozenshtein
- Department of Radiology, Westchester Medical Center-New York Medical College, 100 Woods Road, Valhalla, NY 10595.
- Mark E Mullins
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
- M Victoria Marx
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, California
21
Deiorio NM, Jarou ZJ, Alker A, Bird SB, Druck J, Gallahue FE, Hiller KM, Karl E, Pierce AE, Fletcher L, Dunleavy D. Applicant Reactions to the AAMC Standardized Video Interview During the 2018 Application Cycle. Acad Med 2019; 94:1498-1505. [PMID: 31219811] [DOI: 10.1097/acm.0000000000002842]
Abstract
PURPOSE This study examined applicant reactions to the Association of American Medical Colleges Standardized Video Interview (SVI) during its first year of operational use in emergency medicine (EM) residency program selection to identify strategies to improve applicants' SVI experience and attitudes. METHOD Individuals who self-classified as EM applicants applying in the Electronic Residency Application Service 2018 cycle and who completed the SVI in summer 2017 were invited to participate in 2 surveys. Survey 1, which focused on procedural issues, was administered immediately after SVI completion. Survey 2, which focused on applicants' SVI experience, was administered in fall 2017, after SVI scores were released. RESULTS The response rates for surveys 1 and 2 were 82.3% (2,906/3,532) and 58.7% (2,074/3,532), respectively. Applicant reactions varied by aspect of the SVI studied and their SVI total scores. Most applicants were satisfied with most procedural aspects of the SVI, but most applicants were not satisfied with the SVI overall or with their total SVI scores. About 20% to 30% of applicants had neutral opinions about most aspects of the SVI. Negative reactions to the SVI were stronger for applicants who scored lower on the SVI. CONCLUSIONS Applicants had generally negative reactions to the SVI. Most were skeptical of its ability to assess the target competencies and its potential to add value to the selection process. Applicant acceptance and appreciation of the SVI will be critical to the SVI's acceptance by the graduate medical education community.
Affiliation(s)
- Nicole M Deiorio
- N.M. Deiorio is associate dean for student affairs and professor, Department of Emergency Medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Z.J. Jarou is clinical associate, Section of Emergency Medicine, Department of Medicine, University of Chicago Medical Center, Chicago, Illinois
- A. Alker is a resident, Department of Emergency Medicine, University of California, San Diego School of Medicine, San Diego, California
- S.B. Bird is program director, Department of Emergency Medicine, and vice chair for education, University of Massachusetts Medical School, Worcester, Massachusetts
- J. Druck is associate professor and assistant program director, Department of Emergency Medicine, University of Colorado School of Medicine, Denver, Colorado
- F.E. Gallahue is associate professor and director, Department of Emergency Medicine, University of Washington, Seattle, Washington
- K.M. Hiller is professor and director of undergraduate education, Department of Emergency Medicine, University of Arizona College of Medicine-Tucson, Tucson, Arizona
- E. Karl is a resident, Department of Emergency Medicine, University of Nebraska Medical Center, Omaha, Nebraska
- A.E. Pierce is associate professor, Department of Emergency Medicine, University of Texas Southwestern Medical Center, Dallas, Texas
- L. Fletcher is an intern, Association of American Medical Colleges, Washington, DC
- D. Dunleavy is director of admissions and selection research and development, Association of American Medical Colleges, Washington, DC
22
Ray ME, Coon JM, Al-Jumaili AA, Fullerton M. Quantitative and Qualitative Factors Associated with Social Isolation Among Graduate and Professional Health Science Students. Am J Pharm Educ 2019; 83:6983. [PMID: 31619819] [PMCID: PMC6788151] [DOI: 10.5688/ajpe6983]
Abstract
Objective. To determine the prevalence of social isolation and associated factors in graduate and professional health science students. Methods. Quantitative and qualitative data were gathered via an online survey from graduate and professional students in the colleges of dentistry, medicine, nursing, pharmacy, and public health at a Midwestern university. Questions assessed students' demographics, weekly activity hours, support systems, and financial concerns, and included the 20-item UCLA Loneliness Scale. Logistic regression was performed using the binary outcome of feeling socially isolated (yes/no), and program-related respondent comments were examined using thematic analysis. Results. There were 427 survey respondents with 398 completing the full survey. Students answering the social isolation question (n=386) were included in the regression analysis. Nearly one-fifth (19.4%) of respondents indicated social isolation, with the highest percentage among nursing respondents (40.7%). Lacking strong support, being a non-native English speaker, having caregiving responsibilities, and experiencing "lonely" items described in the UCLA Loneliness Scale were positively associated with social isolation. The ability to discuss feelings with friends in their professional program and experiencing "non-lonely" items were negatively associated with social isolation. Ninety-six comments revealed nine risk factor themes in four categories: individual (feeling different from peers, personality, employment), interpersonal (competition/exclusionary atmosphere, faculty relationship), organization (too busy with coursework, isolating program), and community (relocation reduces social support). Student involvement in organizations (activities encouraging socialization) and community (support from outside the group) were protective factors. Conclusion. Understanding associated factors and designing strategies to reduce student social isolation may enhance the quality and well-being of future health professionals and scientists.
Affiliation(s)
- Ali Azeez Al-Jumaili
- The University of Iowa College of Pharmacy, Iowa City, Iowa
- University of Baghdad College of Pharmacy, Baghdad, Iraq
23
Schmit EO, Wu CL, Khodadadi RB, Herrera LN, Williams WL, Estrada CA. What Defines an Honors Student? Survey of Pediatric and Internal Medicine Faculty Perspectives. South Med J 2019; 112:450-454. [PMID: 31375843] [DOI: 10.14423/smj.0000000000001005]
Abstract
OBJECTIVE Although considerable emphasis is placed on the attainment of honors in core medical school clerkships, little is known about which student characteristics attending physicians use to award this designation. The purpose of this study was to evaluate the values and characteristics that attending physicians consider important when evaluating Pediatrics and Internal Medicine clerkship students for clinical honors designation. METHODS This cross-sectional survey study was framed around Accreditation Council for Graduate Medical Education (ACGME) competencies. It was administered at three tertiary care hospitals associated with one large medical school in an urban setting. Teaching ward attendings in Pediatrics and Internal Medicine who evaluated third-year medical students between 2013 and 2016 were surveyed. RESULTS Overall, Pediatric and Internal Medicine faculty demonstrated close agreement on which competencies were most important in designating clinical honors. Both groups believed that professionalism was the most important factor and that systems-based practice and patient care were among the least important factors. The only competency with a significant difference between the two groups was systems-based practice, with Internal Medicine placing more emphasis on the coordination of patient care and understanding social determinants of health. CONCLUSIONS Professionalism, communication skills, and medical knowledge are the most important characteristics when determining clinical honors on Pediatrics and Internal Medicine clerkships.
Affiliation(s)
- Erinn O Schmit, Chang L Wu, Ryan B Khodadadi, L Nicholas Herrera, Winter L Williams, Carlos A Estrada
- From the Department of Pediatrics, Division of Hospital Medicine, and the Department of Internal Medicine, Division of General Internal Medicine, University of Alabama at Birmingham, Birmingham, the University of Alabama School of Medicine, Birmingham, and the Birmingham Veterans Affairs Medical Center, Birmingham, Alabama
24
Sharma A, Schauer DP, Kelleher M, Kinnear B, Sall D, Warm E. USMLE Step 2 CK: Best Predictor of Multimodal Performance in an Internal Medicine Residency. J Grad Med Educ 2019; 11:412-419. [PMID: 31440335] [PMCID: PMC6699543] [DOI: 10.4300/jgme-d-19-00099.1]
Abstract
BACKGROUND Internal medicine (IM) residency programs receive information about applicants via academic transcripts, but studies demonstrate wide variability in satisfaction with and usefulness of this information. In addition, many studies compare application materials to only 1 or 2 assessment metrics, usually standardized test scores and work-based observational faculty assessments. OBJECTIVE We sought to determine which application materials best predict performance across a broad array of residency assessment outcomes generated by standardized testing and a yearlong IM residency ambulatory long block. METHODS In 2019, we analyzed available Electronic Residency Application Service data for 167 categorical IM residents, including advanced degree status, research experience, failures during medical school, undergraduate medical education award status, and United States Medical Licensing Examination (USMLE) scores. We compared these with post-match residency multimodal performance, including standardized test scores and faculty member, peer, allied health professional, and patient-level assessment measures. RESULTS In multivariate analyses, USMLE Step 2 Clinical Knowledge (CK) scores were most predictive of performance across all residency performance domains measured. Having an advanced degree was associated with higher patient-level assessments (eg, physician listens, physician explains, etc). USMLE Step 1 scores were associated with in-training examination scores only. None of the other measured application materials predicted performance. CONCLUSIONS USMLE Step 2 CK scores were the strongest predictors of residency performance across a broad array of performance measurements generated by standardized testing and an IM residency ambulatory long block.
25
Katzung KG, Ankel F, Clark M, Lawson LE, DeBlieux PMC, Cheaito MA, Hitti EA, Epter M, Kazzi A. What Do Program Directors Look for in an Applicant? J Emerg Med 2019; 56:e95-e101. [PMID: 30904381] [DOI: 10.1016/j.jemermed.2019.01.010]
Abstract
Program directors (PDs) are faced with an increasing number of applicants to emergency medicine (EM) and a limited number of positions. This article provides candidates with insight into what PDs look for in an applicant. We elaborate on performance in the emergency medicine clerkship, the interview, clinical rotations (apart from EM), board scores, Alpha Omega Alpha membership, letters of recommendation, the Medical Student Performance Evaluation or dean's letter, extracurricular activities, Gold Humanism Society membership, medical school attended, research and scholarly projects, the personal statement, and commitment to EM. We stress the National Resident Matching Program process and how, ultimately, selection of a residency is equally dependent on an applicant's selection process.
Affiliation(s)
- Felix Ankel
- University of Minnesota, Minneapolis, Minnesota
- Mark Clark
- Columbia University College of Physicians and Surgeons, New York City, New York; St. Luke's Roosevelt Hospital Center, Columbia University, New York City, New York
- Luan E Lawson
- Department of Emergency Medicine, East Carolina University, Brody School of Medicine, Greenville, North Carolina
- Peter M C DeBlieux
- Sections of Emergency Medicine and Pulmonary and Critical Care, Department of Medicine, Louisiana State University School of Medicine, New Orleans, Louisiana
- Mohamad Ali Cheaito
- Department of Emergency Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Eveline A Hitti
- Department of Emergency Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Amin Kazzi
- Department of Emergency Medicine, American University of Beirut Medical Center, Beirut, Lebanon
26
Grillo AC, Ghoneima AAM, Garetto LP, Bhamidipalli SS, Stewart KT. Predictors of orthodontic residency performance: An assessment of scholastic and demographic selection parameters. Angle Orthod 2019; 89:488-494. [PMID: 30605016] [DOI: 10.2319/062518-477.1]
Abstract
OBJECTIVE To evaluate the association between resident selection criteria, including Graduate Record Examination (GRE) scores, and student performance in an orthodontic residency program. MATERIALS AND METHODS This retrospective study evaluated the academic records of 70 orthodontic residency graduates from the Indiana University School of Dentistry. The following demographic and scholastic data were extracted from the student academic records: applicant age, gender, ethnicity, race, country of origin, dental school graduation year, GRE score, and graduate orthodontic grade point average (GPA). In addition, student American Board of Orthodontics (ABO) written examination quintiles were obtained from the ABO. Scatterplots, analysis of variance, and correlation coefficients were used to analyze the data. Statistical significance was established at .05 for the study. RESULTS No associations were found with any component of the GRE, except with the quantitative GRE section, which displayed a weak association with ABO module 2 scores. Dental school GPA demonstrated weak correlations with all ABO modules and moderate correlations with overall and didactic orthodontic GPAs. When assessing demographic factors, significant differences (P < .05) were observed, with the following groups demonstrating higher performance on certain ABO modules: age (younger), race (whites), and country of origin (US citizens). CONCLUSIONS Findings suggest the GRE has no association with student performance in an orthodontic residency. However, dental school GPA and/or class rank appear to be the strongest scholastic predictors of residency performance.
27
Lee M, Vermillion M. Comparative values of medical school assessments in the prediction of internship performance. Med Teach 2018; 40:1287-1292. [PMID: 29390938] [DOI: 10.1080/0142159x.2018.1430353]
Abstract
BACKGROUND Multiple undergraduate achievements have been used for graduate admission consideration. Their relative values in the prediction of residency performance are not clear. This study compared the contributions of major undergraduate assessments to the prediction of internship performance. METHODS Internship performance ratings of the graduates of a medical school were collected from 2012 to 2015. Hierarchical multiple regression analyses were used to examine the predictive values of undergraduate measures assessing basic and clinical sciences knowledge and clinical performances, after controlling for differences in the Medical College Admission Test (MCAT). RESULTS Four hundred eighty (75%) graduates' archived data were used in the study. Analyses revealed that clinical competencies, assessed by the USMLE Step 2 CK, NBME medicine exam, and an eight-station objective structured clinical examination (OSCE), were strong predictors of internship performance. Neither the USMLE Step 1 nor the inpatient internal medicine clerkship evaluation predicted internship performance. The undergraduate assessments as a whole showed a significant collective relationship with internship performance (ΔR2 = 0.12, p < 0.001). CONCLUSIONS The study supports the use of clinical competency assessments, instead of pre-clinical measures, in graduate admission consideration. It also provides validity evidence for OSCE scores in the prediction of workplace performance.
Affiliation(s)
- Ming Lee
- David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Michelle Vermillion
- David Geffen School of Medicine, University of California, Los Angeles, CA, USA
28
Ward MA, Palazzi DL, Lorin MI, Agrawal A, Frankenthal H, Turner TL. Impact of the final adjective in the Medical Student Performance Evaluation on determination of applicant desirability. Med Educ Online 2018; 23:1542922. [PMID: 30406730] [PMCID: PMC6225414] [DOI: 10.1080/10872981.2018.1542922]
Abstract
BACKGROUND The Medical Student Performance Evaluation (MSPE) is a primary source of information used by residency programs in their selection of trainees. The MSPE contains a narrative description of the applicant's performance during medical school. In 2002, the Association of American Medical Colleges' guideline for preparation of the MSPE recommended inclusion of a comparative summative assessment of the student's overall performance relative to his/her peers (final adjective). OBJECTIVE We hypothesized that the inclusion of a final adjective in the MSPE affects a reviewer's assessment of the applicant's desirability more than the narrative description of performance and designed a study to evaluate this hypothesis. DESIGN Fifty-six faculty members from the Departments of Pediatrics and Medicine with experience reviewing MSPEs as part of the intern selection process reviewed two pairs of mock MSPE letters. In each pair, the narrative in one letter was superior to that in the other. Two final adjectives describing relative class ranks were created. Each subject was first presented with a pair of letters with mismatched final adjectives (study), i.e., the letter with the stronger narrative was presented with the weaker final adjective and vice versa. The subject was then presented with a second pair of letters without final adjectives (control). Subjects ranked the relative desirability of the two applicants in each pair. RESULTS The proportions of rankings congruent with the strength of the narratives under study and control conditions were compared. Subjects were significantly less likely to rank the applicants congruent with the strength of the narratives when the strength of the final adjectives conflicted with that of the narratives; 42.9% of study letters were ranked congruent with the narrative versus 82.1% of controls (p = 0.0001). 
CONCLUSION The MSPE final adjective had a greater impact than the narrative description of performance on the determination of applicant desirability. ABBREVIATIONS MSPE: Medical Student Performance Evaluation; AAMC: Association of American Medical Colleges; BCM: Baylor College of Medicine.
Affiliation(s)
- Mark A. Ward
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Debra L. Palazzi
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Martin I. Lorin
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- CONTACT Martin I. Lorin, Department of Pediatrics, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Anoop Agrawal
- Department of Medicine, Baylor College of Medicine, Houston, TX, USA
- Hilel Frankenthal
- Department of Child Health, University of Missouri Hospital and Clinics, St. Louis, MO, USA
- Teri L. Turner
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
29
King A, Mayer C, Starnes A, Barringer K, Beier L, Sule H. Using the Association of American Medical Colleges Standardized Video Interview in a Holistic Residency Application Review. Cureus 2017; 9:e1913. [PMID: 29441247] [PMCID: PMC5800766] [DOI: 10.7759/cureus.1913]
Abstract
Each year, residency programs work diligently to identify the best applicants for their respective programs, given the increasing volume of applications. Interview offers are often based on a mix of subjective and objective measures, with different programs relying more or less on each. A holistic application review involves a flexible and individualized way of assessing an applicant's capabilities through a balanced consideration of experiences, attributes, and academic metrics. When considered collectively, these attributes may define how an individual may perform as a physician. One particular tool developed by the Association of American Medical Colleges (AAMC), the Standardized Video Interview (SVI), provides an objective measure of an applicant's professional behavior and interpersonal communication skills. The SVI may provide applicants with a chance to showcase the intangibles about themselves that are neither entered on their application nor reflected by their standardized examination scores.
Affiliation(s)
- Andrew King
- Emergency Medicine, The Ohio State University Wexner Medical Center
- Chad Mayer
- Emergency Medicine, The Ohio State University Wexner Medical Center
- Harsh Sule
- Emergency Medicine, Rutgers New Jersey Medical School
30
Whittaker A, Smith KP, Shan G. Pharmacy Residency School-wide Match Rates and Modifiable Predictors in ACPE-accredited Colleges and Schools of Pharmacy. Am J Pharm Educ 2017; 81:6109. [PMID: 29367773] [PMCID: PMC5774193] [DOI: 10.5688/ajpe6109]
Abstract
Objective. To analyze the modifiable predictors of institution-wide residency match rates. Methods. This was a retrospective analysis of data from ACPE-accredited colleges and schools of pharmacy and their school-wide PGY-1 pharmacy residency match rates for 2013 through 2015. Independent variables included NAPLEX passing rates, history of ACPE probation, NIH funding, academic health center affiliation, dual-degree availability, program length, admit-to-applicant ratio, class size, tuition, student-driven research, clinically focused academic tracks, residency affiliation, U.S. News & World Report rankings, and minority enrollment. Results. In a repeated measures model, predictors of match results were NAPLEX pass rate, class size, academic health center affiliation, admit-to-applicant ratio, U.S. News & World Report rankings, and minority enrollment. Conclusion. Indicators of student achievement, college/school reputation, affiliations, and class demographics were significant predictors of institution-wide residency match rates. Further research is needed to understand how changes in these factors may influence overall match rates.
Affiliation(s)
- Alana Whittaker
- Roseman University of Health Sciences College of Pharmacy, Las Vegas, Nevada
- Katherine P. Smith
- Roseman University of Health Sciences College of Pharmacy, Las Vegas, Nevada
- Guogen Shan
- School of Community Health Sciences, University of Nevada, Las Vegas, Nevada
31
Warm EJ, Englander R, Pereira A, Barach P. Improving Learner Handovers in Medical Education. Acad Med 2017; 92:927-931. [PMID: 27805952] [DOI: 10.1097/acm.0000000000001457]
Abstract
Multiple studies have demonstrated that the information included in the Medical Student Performance Evaluation fails to reliably predict medical students' future performance. This faulty transfer of information can lead to harm when poorly prepared students fail out of residency or, worse, are shuttled through the medical education system without an honest accounting of their performance. Such poor learner handovers likely arise from two root causes: (1) the absence of agreed-on outcomes of training and/or accepted assessments of those outcomes, and (2) the lack of standardized ways to communicate the results of those assessments. To improve the current learner handover situation, an authentic, shared mental model of competency is needed; high-quality tools to assess that competency must be developed and tested; and transparent, reliable, and safe ways to communicate this information must be created. To achieve these goals, the authors propose using a learner handover process modeled after a patient handover process. The CLASS model includes a description of the learner's Competency attainment, a summary of the Learner's performance, an Action list and statement of Situational awareness, and Synthesis by the receiving program. This model also includes coaching oriented towards improvement along the continuum of education and care. Just as studies have evaluated patient handover models using metrics that matter most to patients, studies must evaluate this learner handover model using metrics that matter most to providers, patients, and learners.
Affiliation(s)
- Eric J Warm
- E.J. Warm is the Sue P. and Richard W. Vilter Professor of Medicine and categorical medicine residency program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio. R. Englander is associate dean for undergraduate medical education, University of Minnesota Medical School, Minneapolis, Minnesota. A. Pereira is associate professor and assistant dean for clinical education, University of Minnesota Medical School, Minneapolis, Minnesota. P. Barach is clinical professor, Department of Pediatrics, Wayne State University School of Medicine, Detroit, Michigan.
32
Chen F, Arora H, Martinelli SM, Teeter E, Mayer D, Zvara DA, Passannante A, Smith KA. The predictive value of pre-recruitment achievement on resident performance in anesthesiology. J Clin Anesth 2017; 39:139-144. [PMID: 28494890] [DOI: 10.1016/j.jclinane.2017.03.052]
Abstract
STUDY OBJECTIVE Selecting candidates for residency positions is challenging, and there is little research on the correlation between commonly used selection criteria and subsequent performance in anesthesiology. This study examined the association between the selection measures and post-recruitment performance in residency. DESIGN Retrospective review of archival data. SETTING Anesthesiology residency program at a large academic anesthesiology department. SUBJECTS Residents who were matched to the anesthesiology program over 9 years (graduation classes of 2006 to 2014). INTERVENTIONS None. MEASUREMENTS The pre-recruitment achievements included a comprehensive list of measures obtained from residents' application portfolios in conjunction with interview performance. The post-recruitment examination outcomes consisted of the in-training examination (ITE) scores in the three clinical anesthesia (CA) years and first-attempt success on the written board certification examination administered by the American Board of Anesthesiology (ABA). Scholarly output during residency was measured by publication record. Clinical performance at the conclusion of residency was independently rated by three faculty members. Bivariate analysis and regression models were conducted to examine association between predictors and outcomes. MAIN RESULTS High United States Medical Licensing Examination (USMLE) scores, class rank in medical school, and interview performance were predictive of high examination scores in residency and good clinical performance. Class rank appeared to be the best predictor of scholarly publication and pursuing an academic career beyond residency. CONCLUSIONS Comparative performance with classmates (i.e., class rank) in medical school appeared to be an effective predictor of overall performance in residency, which warrants more attention in future study. Although interview performance is subject to recruitment team members' interpretation, it is an important measure to include in recruitment decisions.
Affiliation(s)
- Fei Chen, Harendra Arora, Susan M Martinelli, Emily Teeter, David Mayer, David A Zvara, Anthony Passannante, Kathleen A Smith
- Department of Anesthesiology, The University of North Carolina at Chapel Hill, N2198, CB7010, UNC Hospitals, Chapel Hill, NC 27599-7010, United States
33
Bowe SN, Laury AM, Gray ST. Improving Otolaryngology Residency Selection Using Principles from Personnel Psychology. Otolaryngol Head Neck Surg 2017; 156:981-984. [DOI: 10.1177/0194599817698432]
Abstract
There has been a heightened focus on improving the resident selection process, particularly within highly competitive specialties. Previous research, however, has generally lacked a theoretical background, leading to inconsistent and biased results. Our recently published systematic review examining applicant characteristics and performance in residency can provide historical insight into the predictors (ie, constructs) and outcomes (ie, criteria) previously deemed pertinent by the otolaryngology community. Personnel psychology uses evidence-based practices to identify the most qualified candidates for employment using a variety of selection methods. Extensive research in this discipline has shown that integrity tests, structured interviews, work samples, and conscientiousness offer the greatest increase in validity when combined with general cognitive ability. Blending past research knowledge with the principles of personnel selection can provide the necessary foundation with which to engage in theory-driven, longitudinal studies on otolaryngology resident selection moving forward.
Affiliation(s)
- Sarah N. Bowe
- Department of Otolaryngology, Massachusetts Eye & Ear Infirmary, Boston, Massachusetts, USA
- Adrienne M. Laury
- Department of Otolaryngology–Head and Neck Surgery, San Antonio Uniformed Services Health Education Consortium (SAUSHEC), Ft. Sam Houston, Texas, USA
- Stacey T. Gray
- Department of Otolaryngology, Massachusetts Eye & Ear Infirmary, Boston, Massachusetts, USA
- Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA
34
Bowe SN, Laury AM, Gray ST. Associations between Otolaryngology Applicant Characteristics and Future Performance in Residency or Practice: A Systematic Review. Otolaryngol Head Neck Surg 2017; 156:1011-1017. [DOI: 10.1177/0194599817698430]
Abstract
Objective This systematic review aims to evaluate which applicant characteristics available to an otolaryngology selection committee are associated with future performance in residency or practice. Data Sources PubMed, Scopus, ERIC, Health Business, Psychology and Behavioral Sciences Collection, and SocINDEX. Review Methods Study eligibility assessment was performed by 2 independent investigators in accordance with the PRISMA protocol (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Data obtained from each article included research questions, study design, predictors, outcomes, statistical analysis, and results/findings. Study bias was assessed with the Quality in Prognosis Studies tool. Results The initial search identified 439 abstracts. Six articles fulfilled all inclusion and exclusion criteria. All studies were retrospective cohort studies (level 4). Overall, the studies yielded relatively few criteria that correlated with residency success, with generally conflicting results. Most studies were found to have a high risk of bias. Conclusion Previous resident selection research has lacked a theoretical background, thus predisposing this work to inconsistent results and high risk of bias. The included studies provide historical insight into the predictors and criteria (eg, outcomes) previously deemed pertinent by the otolaryngology field. Additional research is needed, possibly integrating aspects of personnel selection, to engage in an evidence-based approach to identify highly qualified candidates who will succeed as future otolaryngologists.
Affiliation(s)
- Sarah N. Bowe
- Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Adrienne M. Laury
- Department of Otolaryngology–Head and Neck Surgery, San Antonio Uniformed Services Health Education Consortium, Ft Sam Houston, Texas, USA
- Stacey T. Gray
- Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts, USA
35
What Predicts Performance? A Multicenter Study Examining the Association Between Resident Performance, Rank List Position, and United States Medical Licensing Examination Step 1 Scores. J Emerg Med 2017; 52:332-340. [DOI: 10.1016/j.jemermed.2016.11.008]
36
Terry R, Hing W, Orr R, Milne N. Do coursework summative assessments predict clinical performance? A systematic review. BMC Med Educ 2017; 17:40. [PMID: 28209159] [PMCID: PMC5314623] [DOI: 10.1186/s12909-017-0878-3]
Abstract
BACKGROUND Two goals of summative assessment in health profession education programs are to ensure the robustness of high-stakes decisions such as progression and licensing, and to predict future performance. This systematic and critical review aims to investigate the ability of specific modes of summative assessment to predict the clinical performance of health profession education students. METHODS PubMed, CINAHL, SPORTDiscus, ERIC and EMBASE databases were searched using key terms, and retrieved articles were screened against dedicated inclusion criteria. Rigorous exclusion criteria were applied to ensure a consistent interpretation of 'summative assessment' and 'clinical performance'. Data were extracted using a pre-determined format, and papers were critically appraised by two independent reviewers using a modified Downs and Black checklist, with the level of agreement between reviewers determined through a kappa analysis. RESULTS Of the 4783 studies retrieved by the search strategy, 18 were included in the final review. Twelve were from the medical profession, and there was one from each of physiotherapy, pharmacy, dietetics, speech pathology, dentistry and dental hygiene. Objective Structured Clinical Examinations featured in 15 papers, written assessments in four, and problem-based learning evaluations, case-based learning evaluations and student portfolios each featured in one paper. Sixteen different measures of clinical performance were used. Two papers were identified as 'poor' quality and the remainder categorised as 'fair', with an almost perfect (κ = 0.852) level of agreement between raters. Objective Structured Clinical Examination scores accounted for 1.4-39.7% of the variance in student performance; multiple choice/extended matching questions and short answer written examinations accounted for 3.2-29.2%; problem-based or case-based learning evaluations accounted for 4.4-16.6%; and student portfolios accounted for 12.1%.
CONCLUSIONS Objective Structured Clinical Examinations and written examinations consisting of multiple choice/extended matching questions and short answer questions do have significant relationships with the clinical performance of health professional students. However, caution should be applied when using these assessments as predictive measures of clinical performance, owing to a small body of evidence and large variations in the predictive strength of the relationships identified. Based on the current evidence, the Objective Structured Clinical Examination may be the most appropriate summative assessment for educators to use to identify students who may be at risk of poor performance in a clinical workplace environment. Further research on this topic is needed to improve the strength of the predictive relationship.
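As an illustrative aside, the "almost perfect (κ = 0.852)" agreement reported above is a kappa statistic, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of Cohen's kappa on hypothetical reviewer ratings (illustrative data only, not the study's):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings of ten papers by two independent reviewers.
reviewer_1 = ["fair", "fair", "good", "poor", "fair", "good", "fair", "poor", "good", "fair"]
reviewer_2 = ["fair", "fair", "good", "poor", "fair", "good", "good", "poor", "good", "fair"]

print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # prints 0.844
```

Here the raters agree on 9 of 10 items (90%), but kappa discounts the agreement their marginal rating frequencies would produce by chance, which is why κ falls below the raw agreement rate.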
Affiliation(s)
- Rebecca Terry: Physiotherapy Program, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, 4226 Australia
- Wayne Hing: Physiotherapy Program, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, 4226 Australia
- Robin Orr: Physiotherapy Program, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, 4226 Australia
- Nikki Milne: Physiotherapy Program, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, 4226 Australia
|
37
|
Gauer JL, Jackson JB. The association of USMLE Step 1 and Step 2 CK scores with residency match specialty and location. MEDICAL EDUCATION ONLINE 2017; 22:1358579. [PMID: 28762297 PMCID: PMC5653932 DOI: 10.1080/10872981.2017.1358579] [Citation(s) in RCA: 54] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2017] [Accepted: 07/17/2017] [Indexed: 05/25/2023]
Abstract
BACKGROUND For future physicians, residency programs offer necessary extended training in specific medical specialties. Medical schools benefit from an understanding of the factors that lead their students to match into certain residency specialties. One such factor, often used during the residency application process, is the score on the United States Medical Licensing Examination (USMLE). OBJECTIVES To determine the relationship between USMLE Step 1 and Step 2 Clinical Knowledge (CK) scores and students' residency specialty match, and the association of both USMLE scores and state of legal residency (Minnesota) at the time of admission with staying in-state or leaving the state for a residency program. DESIGN USMLE scores and residency match data were analyzed from five graduating classes of students at the University of Minnesota Medical School (N = 1054). RESULTS A MANOVA found significant differences (p < 0.001) between residency specialties in both USMLE Step 1 and Step 2 CK scores, as well as in the combination of the two. Students who matched in Dermatology had the highest mean USMLE scores overall, while students who matched in Family Medicine had the lowest mean scores. Students who went out of state for residency had significantly higher Step 1 scores (p = 0.027) than students who stayed in-state for residency, while there was no significant difference between the groups for Step 2 scores. A significant positive association was found between applying as a legal resident of Minnesota and staying in Minnesota for the residency program. CONCLUSIONS Residency specialty match was significantly associated with USMLE Step 1 and USMLE Step 2 CK scores, as was staying in-state or leaving the state for residency. Students who were legal residents of the state at the time of application were more likely to stay in-state for residency, regardless of USMLE score.
ABBREVIATIONS CK: Clinical knowledge; COMLEX: Comprehensive Osteopathic Medical Licensing Examination; GME: Graduate medical education; NRMP: National Resident Matching Program; UME: Undergraduate medical education; USMLE: United States Medical Licensing Examination.
|
38
|
Van Meter M, Williams M, Banuelos R, Carlson P, Schneider JI, Shy BD, Babcock C, Spencer M, Chathampally Y. Does the National Resident Match Program Rank List Predict Success in Emergency Medicine Residency Programs? J Emerg Med 2016; 52:77-82.e1. [PMID: 27692649 DOI: 10.1016/j.jemermed.2016.06.059] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2016] [Revised: 06/06/2016] [Accepted: 06/29/2016] [Indexed: 11/17/2022]
Abstract
BACKGROUND Emergency medicine (EM) residency programs use nonstandardized criteria to create applicant rank lists. One implicit assumption is that predictive associations exist between an applicant's rank and their future performance as a resident. To date, these associations have not been sufficiently demonstrated. OBJECTIVES We hypothesized that a strong positive correlation exists between the National Resident Match Program (NRMP) match-list applicant rank, the United States Medical Licensing Examination (USMLE) Step 1 and In-Training Examination (ITE) scores, and the graduating resident rank. METHODS A total of 286 residents from five EM programs over a 5-year period were studied. The applicant rank (AR) was derived from the applicant's relative rank list position on each program's submitted NRMP rank list. The graduation rank (GR) was determined by a faculty consensus committee. GR was then correlated with AR using a Spearman's partial rank correlation. Additional correlations were sought with a ranking of the USMLE Step score (UR) and the ITE score (IR). RESULTS Combining data for all five programs, only weak positive correlations existed between GR and AR, UR, and IR. When comparing GR and AR, there was a weak correlation of 0.13 (p = 0.03). CONCLUSION Our study found only weak correlations between GR and AR, UR, and IR, suggesting that those variables may not be strong predictors of resident performance. This has important implications for EM programs considering the resources devoted to applicant evaluation and ranking.
Affiliation(s)
- Michael Van Meter: University of Texas Health Science Center McGovern Medical School, Houston, Texas
- Michael Williams: University of Texas Health Science Center McGovern Medical School, Houston, Texas
- Rosa Banuelos: University of Texas Health Science Center McGovern Medical School, Houston, Texas
- Peter Carlson: University of Texas Health Science Center McGovern Medical School, Houston, Texas
- Jeffrey I Schneider: Boston Medical Center and Boston University School of Medicine, Boston, Massachusetts
- Bradley D Shy: Icahn School of Medicine at Mount Sinai, New York, New York
- Christine Babcock: University of Chicago Pritzker School of Medicine, Chicago, Illinois
- Matthew Spencer: University of Rochester School Medical Center, Rochester, New York
|
39
|
Beninato T, Kleiman DA, Zarnegar R, Fahey TJ. Can Future Academic Surgeons be Identified in the Residency Ranking Process? JOURNAL OF SURGICAL EDUCATION 2016; 73:788-792. [PMID: 27137665 DOI: 10.1016/j.jsurg.2016.03.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2016] [Revised: 02/16/2016] [Accepted: 03/14/2016] [Indexed: 06/05/2023]
Abstract
OBJECTIVE The goal of surgical residency training programs is to train competent surgeons. Academic surgical training programs also have as a mission training future academicians: surgical scientists, teachers, and leaders. However, selection of surgical residents depends on a relatively unscientific process. Here we sought to determine how well the residency selection process is able to identify future academicians in surgery. DESIGN Rank lists from an academic surgical residency program from 1992 to 1997 were examined. All ranked candidates' career paths after residency were reviewed to determine whether they stayed in academics, were university affiliated, or were in private practice. SETTING The study was performed at New York Presbyterian Hospital-Weill Cornell Medical College, New York, NY. PARTICIPANTS A total of 663 applicants for general surgery residency participated in this study. RESULTS In total, 6 rank lists comprising 663 candidates were evaluated. Overall, 76% remained in a general surgery subspecialty. Of those who remained in general surgery, 49% were in private practice, 20% were university affiliated, and 31% had academic careers. Approximately 47% of candidates ranked in the top 20 had ≥20 publications, with decreasing percentages as rank number increased. There was a strong correlation between the candidates' rank position and pursuing an academic career (p < 0.001, R² = 0.89). CONCLUSIONS Graduates of surgical residency who were ranked highly at the time of the residency match were more likely to pursue an academic career. The residency selection process can identify candidates likely to be future academicians.
Affiliation(s)
- Toni Beninato: Department of Surgery, New York Presbyterian Hospital-Weill Cornell Medical Center, New York, New York
- David A Kleiman: Department of Surgery, New York Presbyterian Hospital-Weill Cornell Medical Center, New York, New York
- Rasa Zarnegar: Department of Surgery, New York Presbyterian Hospital-Weill Cornell Medical Center, New York, New York
- Thomas J Fahey: Department of Surgery, New York Presbyterian Hospital-Weill Cornell Medical Center, New York, New York
|
40
|
Birden H, Barker J, Wilson I. Effectiveness of a rural longitudinal integrated clerkship in preparing medical students for internship. MEDICAL TEACHER 2016; 38:946-956. [PMID: 26691824 DOI: 10.3109/0142159x.2015.1114594] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
BACKGROUND We interviewed graduates from the first two cohorts of a postgraduate medical program that had a senior-year longitudinal integrated clerkship (LIC) in a practice setting in rural New South Wales, Australia, to determine how well their training prepared them to be junior doctors (3-4 years after graduation), and what aspects of that training they thought were particularly useful. METHODS In-depth interviews. RESULTS Fourteen junior doctors were interviewed. Participants reported feeling well prepared in their ability to develop close relationships with clinical supervisors, clinical and procedural skills, ability to work autonomously and in teams, knowledge of health systems, ability to ensure self-care, and professionalism. The consensus view was that a rural placement was an excellent way to learn medicine for a variety of reasons, including relationships with clinicians, less competition for access to patients, and opportunities to extend their clinical skills and act up to intern level. CONCLUSION The advantages we found in these junior doctors' training, which prepared them well for internship, were integral both to the longitudinal, unstructured placement and to its rural setting. The two aspects of these placements appear to act synergistically, reinforcing the learning experience.
Affiliation(s)
- Hudson Birden: James Cook University, Australia; University of Sydney, Australia
- Jane Barker: University of Western Sydney School of Medicine, Australia
- Ian Wilson: University of Wollongong Graduate School of Medicine, Australia
|
41
|
Schaverien MV. Selection for Surgical Training: An Evidence-Based Review. JOURNAL OF SURGICAL EDUCATION 2016; 73:721-9. [PMID: 27133583 DOI: 10.1016/j.jsurg.2016.02.007] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2015] [Revised: 02/07/2016] [Accepted: 02/23/2016] [Indexed: 05/26/2023]
Abstract
PURPOSE The predictive relationship between candidate selection criteria for surgical training programs and future performance during and at the completion of training has been investigated for several surgical specialties; however, there is no interspecialty agreement regarding which selection criteria should be used. A better understanding of the predictive reliability between factors at selection and future performance may help to optimize the process and lead to greater standardization of the surgical selection process. METHODS PubMed and Ovid MEDLINE databases were searched. Over 560 potentially relevant publications were identified using the search strategy and screened using the Cochrane Collaboration Data Extraction and Assessment Template. RESULTS A total of 57 studies met the inclusion criteria. Several criteria used in the traditional selection process demonstrated inconsistent correlation with subsequent performance during and at the end of surgical training. The following selection criteria, however, demonstrated good predictive relationships with subsequent resident performance: USMLE examination scores, letters of recommendation (LOR) including the Medical Student Performance Evaluation (MSPE), academic performance during clinical clerkships, the interview process, displaying excellence in extracurricular activities, and the use of unadjusted rank lists. CONCLUSIONS This systematic review indicates that the current selection process needs to be further evaluated and improved. Multicenter studies using standardized outcome measures of success are now required to improve the reliability of the selection process and to select the best trainees.
Affiliation(s)
- Mark V Schaverien: Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas
|
42
|
Natt N, Chang AY, Berbari EF, Kennel KA, Kearns AE. SELECTION OF ENDOCRINOLOGY SUBSPECIALTY TRAINEES: WHICH APPLICANT CHARACTERISTICS ARE ASSOCIATED WITH PERFORMANCE DURING FELLOWSHIP TRAINING? Endocr Pract 2015; 22:45-50. [PMID: 26437219 DOI: 10.4158/ep15808.or] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE To determine which residency characteristics are associated with performance during endocrinology fellowship training as measured by competency-based faculty evaluation scores and faculty global ratings of trainee performance. METHODS We performed a retrospective review of interview applications from endocrinology fellows who graduated from a single academic institution between 2006 and 2013. Performance measures included competency-based faculty evaluation scores and faculty global ratings. The association between applicant characteristics and measures of performance during fellowship was examined by linear regression. RESULTS The presence of a laudatory comparative statement in the residency program director's letter of recommendation (LoR) or experience as a chief resident was significantly associated with competency-based faculty evaluation scores (β = 0.22, P = .001; and β = 0.24, P = .009, respectively) and faculty global ratings (β = 0.85, P = .006; and β = 0.96, P = .015, respectively). CONCLUSION The presence of a laudatory comparative statement in the residency program director's LoR or experience as a chief resident were significantly associated with overall performance during subspecialty fellowship training. Future studies are needed in other cohorts to determine the broader implications of these findings in the application and selection process.
|
43
|
Bhat R, Takenaka K, Levine B, Goyal N, Garg M, Visconti A, Oyama L, Castillo E, Broder J, Omron R, Hayden S. Predictors of a Top Performer During Emergency Medicine Residency. J Emerg Med 2015; 49:505-12. [PMID: 26242925 DOI: 10.1016/j.jemermed.2015.05.035] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2015] [Accepted: 05/29/2015] [Indexed: 11/29/2022]
Abstract
BACKGROUND Emergency medicine (EM) residency program directors and faculty spend significant time and effort creating a residency rank list. To date, however, there have been few studies to assist program directors in determining which pre-residency variables best predict performance during EM residency. OBJECTIVE To evaluate which pre-residency variables best correlated with an applicant's performance during residency. METHODS This was a retrospective multicenter sample of all residents in the three most recent graduating classes from nine participating EM residency programs. The outcome measure of top residency performance was defined as placement in the top third of a resident's graduating class based on the final semi-annual evaluation. RESULTS A total of 277 residents from nine institutions were evaluated. Eight of the predictors analyzed had a significant correlation with resident performance: applicants' grades during home and away EM rotations, designation as Alpha Omega Alpha (AOA), United States Medical Licensing Examination (USMLE) Step 1 score, interview scores, "global rating" and "competitiveness" on the nonprogram leadership standardized letter of recommendation (SLOR), and having five or more publications or presentations. CONCLUSION We identified several predictors of top performers in EM residency: an honors grade for an EM rotation, USMLE Step 1 score, AOA designation, interview score, high SLOR rankings from nonprogram leadership, and completion of five or more presentations and publications. EM program directors may consider utilizing these variables during the match process to choose applicants who have the highest chance of top performance during residency.
Affiliation(s)
- Rahul Bhat: Department of Emergency Medicine, MedStar Georgetown University Hospital/MedStar Washington Hospital Center, Washington, DC
- Katrin Takenaka: Department of Emergency Medicine, University of Texas, Houston, Texas
- Brian Levine: Department of Emergency Medicine, Christiana Care Health System, Newark, Delaware
- Nikhil Goyal: Department of Emergency Medicine, Henry Ford Hospital, Detroit, Michigan
- Manish Garg: Department of Emergency Medicine, Temple University Hospital, Philadelphia, Pennsylvania
- Annette Visconti: Department of Emergency Medicine, New York Methodist Hospital, Brooklyn, New York
- Leslie Oyama: Department of Emergency Medicine, University of California at San Diego, La Jolla, California
- Edward Castillo: Department of Emergency Medicine, University of California at San Diego, La Jolla, California
- Joshua Broder: Division of Emergency Medicine, Department of Surgery, Duke University Hospital, Durham, North Carolina
- Rodney Omron: Department of Emergency Medicine, Johns Hopkins University Hospital, Baltimore, Maryland
- Stephen Hayden: Department of Emergency Medicine, University of California at San Diego, La Jolla, California
|
44
|
O’Neill LD, Norberg K, Thomsen M, Jensen RD, Brøndt SG, Charles P, Mortensen LS, Christensen MK. Residents in difficulty--just slower learners? A case-control study. BMC MEDICAL EDUCATION 2014; 14:1047. [PMID: 25551465 PMCID: PMC4336469 DOI: 10.1186/s12909-014-0276-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2014] [Accepted: 12/15/2014] [Indexed: 05/12/2023]
Abstract
BACKGROUND Recent meta-analyses have found small to moderate positive associations between general performance in medical school and postgraduate medical education. In addition, a couple of studies have found an association between poor performance in medical school and disciplinary action against practicing doctors. The aim of this study was to examine whether a sample of Danish residents in difficulty tended to struggle already in medical school, and to determine whether administratively observable performance indicators in medical school could predict difficulties in residency. METHODS The study design was a cumulative incidence matched case-control study. The source population was all active specialist trainees in two Danish regions, in 2010 to June 2013, who were medical school graduates of Aarhus University. Cases were doctors who decelerated, transferred, or dropped out of residency. Cases and controls were matched for graduation year. Medical school exam failures, grades, completion time, and academic dispensations as predictors of case status were examined with conditional logistic regression. RESULTS In total, 89 cases and 343 controls were identified. The total number of medical school re-examinations and the time taken to complete medical school were significant individual predictors of subsequent difficulties (deceleration, transferral or dropout) in residency, whereas average medical school grades were not. CONCLUSIONS Residents in difficulty eventually reached similar competence levels as controls during medical school; however, they needed more exam attempts and longer to complete their studies, and so seemed to be slower learners. A change from "fixed-length variable-outcome programmes" to "fixed-outcome variable-length programmes" has been proposed as a way of dealing with the fact that not all learners reach the same level of competence for all activities at exactly the same time. This study seems to support the logic of such an approach for these residents in difficulty.
Affiliation(s)
- Lotte Dyhrberg O’Neill: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
- Karen Norberg: Postgraduate Medical Education in Region North, Skottenborg 26, 8800 Viborg, Denmark
- Maria Thomsen: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
- Rune Dall Jensen: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
- Signe Gjedde Brøndt: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
- Peder Charles: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
- Lene Stouby Mortensen: Department of Internal Medicine, Randers Regional Hospital, Skovlyvej 1, 8930 Randers, Denmark
- Mette Krogh Christensen: Centre for Medical Education, INCUBA Science Park Skejby, Brendstrupgårdsvej 102, Building B, 8200 Århus N, Denmark
|
45
|
Pittman T. Can We More Wisely Choose Residents? World Neurosurg 2014; 82:e553-4. [DOI: 10.1016/j.wneu.2014.06.028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2014] [Accepted: 06/11/2014] [Indexed: 11/28/2022]
|
46
|
Hegarty CB, Lane DR, Love JN, Doty CI, DeIorio NM, Ronan-Bentle S, Howell J. Council of emergency medicine residency directors standardized letter of recommendation writers' questionnaire. J Grad Med Educ 2014; 6:301-6. [PMID: 24949136 PMCID: PMC4054731 DOI: 10.4300/jgme-d-13-00299] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/20/2013] [Revised: 01/02/2014] [Accepted: 01/15/2014] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND The Council of Emergency Medicine Residency Directors (CORD) Standardized Letter of Recommendation (SLOR) has become the primary tool used by emergency medicine (EM) faculty to evaluate residency candidates. A survey was created to describe the training, beliefs, and usage patterns of SLOR writers. METHODS The SLOR Task Force created the survey, which was circulated to the CORD listserv in 2012. RESULTS Forty-six percent of CORD members (320 of 695) completed the survey. Of the respondents, 39% (125 of 320) had fewer than 5 years of experience writing SLOR letters. Most were aware of published guidelines, and most reported that they learned how to write a SLOR on their own (67.4%, 182 of 270). Sixty-eight percent (176 of 258) admitted to not following the instructions for certain questions. Respondents reported engaging in grade inflation "rarely" (36%, 97 of 269) or not at all (40%, 107 of 269). CONCLUSIONS The CORD SLOR has become the primary tool used by EM faculty to evaluate candidates applying for residency in EM. The SLOR has been in use in the EM community for 16 years. However, our study identified some problems with its use. Those issues may be overcome with a revised format for the SLOR and with faculty training in the writing and use of this document.
|
47
|
Klein U, Storey B, Hanson PD. Benefits of Externships with Pediatric Dentistry Programs for Potential Residents: Program Directors’ and Current Residents’ Perceptions. J Dent Educ 2014. [DOI: 10.1002/j.0022-0337.2014.78.3.tb05697.x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Ulrich Klein: Departments of Pediatric Dentistry, Children's Hospital Colorado, University of Colorado School of Dental Medicine
|
48
|
Ensor CR, Walker CL, Rider SK, Clemente EU, Ashby DM, Shermock KM. Streamlining the process for initial review of pharmacy residency applications: An analytic approach. Am J Health Syst Pharm 2013; 70:1670-5. [DOI: 10.2146/ajhp120769] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Affiliation(s)
- Christopher R. Ensor: Department of Pharmacy and Therapeutics, University of Pittsburgh School of Pharmacy, Pittsburgh, PA; at the time of writing he was Clinical Specialist, Cardiothoracic Transplantation and Mechanical Circulatory Support, Comprehensive Transplant Center, Johns Hopkins Hospital, Baltimore, MD
- Cathy L. Walker: Education and Training, and Program Director, Pharmacy Practice Postgraduate Year 1 Residency, Johns Hopkins Hospital
- Shyla K. Rider: College of Pharmacy, The Ohio State University, Columbus; at the time of writing, she was a summer intern at Johns Hopkins Hospital
- Estela Uy Clemente: College of Pharmacy, Harding University, Searcy, AR; at the time of writing, she was a summer intern at Johns Hopkins Hospital
- Daniel M. Ashby: Health-System Pharmacy Administration Postgraduate Year 1/2 Residency
- Kenneth M. Shermock: Center for Medication Quality and Outcomes, Department of Pharmacy, Johns Hopkins Hospital
|
49
|
Kenny S, McInnes M, Singh V. Associations between residency selection strategies and doctor performance: a meta-analysis. MEDICAL EDUCATION 2013; 47:790-800. [PMID: 23837425 DOI: 10.1111/medu.12234] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2012] [Revised: 02/04/2013] [Accepted: 03/21/2013] [Indexed: 05/17/2023]
Abstract
OBJECTIVES The purpose of this study was to use meta-analysis to establish which of the information available to the resident selection committee is associated with resident or doctor performance. METHODS Multiple electronic databases were searched to 4 September 2012. Two reviewers independently selected studies that met the present inclusion criteria and extracted data in duplicate; disagreement was resolved by consensus. Risk of bias was assessed using a customised bias assessment tool. Measures of association were converted to a common effect size (Hedges' g). Meta-analysis was performed using the random-effects model for each selection strategy and all outcomes without pooling. Sensitivity analysis for each selection strategy-outcome pair was performed with pooling of effect size. RESULTS Eighty studies involving a total of 41,704 participants were included in the meta-analysis. Seventeen different selection strategies and 17 outcomes were assessed across these studies. The strongest positive associations involved examination-based selection strategies, such as the US Medical Licensing Examination (USMLE) Step 1, and examination-based outcomes, such as scores on in-training examinations. Moderate positive associations were present for medical school marks with both examination-based and subjective outcomes. Minimal or no associations were seen for the selection tools represented by interviews, reference letters and deans' letters. CONCLUSIONS Standardised examination performance and medical school grades show the strongest associations with current measures of doctor performance. Deans' letters, reference letters and interviews all show a lower than expected strength of association given the relative value often assigned to them during resident doctor selection. Objective selection strategies are potentially the most useful to residency selection committees based on current evaluative methods. However, reports in the literature of validated long-term doctor performance outcomes are scant.
Affiliation(s)
- Stephanie Kenny
- Department of Medical Imaging, Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
|
50
|
Fryer JP, Corcoran N, George B, Wang E, Darosa D. Does resident ranking during recruitment accurately predict subsequent performance as a surgical resident? JOURNAL OF SURGICAL EDUCATION 2012; 69:724-30. [PMID: 23111037 DOI: 10.1016/j.jsurg.2012.06.010] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2012] [Revised: 05/31/2012] [Accepted: 06/11/2012] [Indexed: 05/25/2023]
Abstract
BACKGROUND While the primary goal of ranking applicants for surgical residency training positions is to identify the candidates who will subsequently perform best as surgical residents, the effectiveness of the ranking process has not been adequately studied. METHODS We evaluated our general surgery resident recruitment process between 2001 and 2011 inclusive, to determine whether our recruitment ranking parameters effectively predicted subsequent resident performance. We identified 3 candidate ranking parameters (United States Medical Licensing Examination [USMLE] Step 1 score, unadjusted ranking score [URS], and final adjusted ranking [FAR]) and 4 resident performance parameters (American Board of Surgery In-Training Examination [ABSITE] score, PGY1 resident evaluation grade [REG], overall REG, and independent faculty rating ranking [IFRR]), and assessed whether the former were predictive of the latter. Analyses used the Spearman correlation coefficient. RESULTS We found that the URS, which is based on objective and criterion-based parameters, was a better predictor of subsequent performance than the FAR, which is a modification of the URS based on subsequent determinations of the resident selection committee. USMLE score was a reliable predictor of ABSITE scores only. However, when we compared our worst resident performances with the performances of the other residents in this evaluation, the data did not produce convincing evidence that poor resident performances could be reliably predicted by any of the recruitment ranking parameters. Finally, stratifying candidates based on their rank range did not effectively define a ranking cut-off beyond which resident performance would drop off. CONCLUSIONS Based on these findings, we recommend that surgery programs may be better served by a more structured resident ranking process, and that subsequent adjustments to the rank list generated by this process be undertaken with caution.
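The analyses above rely on the Spearman correlation coefficient, which is the Pearson correlation computed on ranks (with ties assigned average ranks). A minimal self-contained sketch, with illustrative function names not drawn from the study:

```python
import math

def average_ranks(xs):
    """Assign 1-based ranks, averaging over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # Extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den
```

In practice a library routine such as `scipy.stats.spearmanr` would be used; the sketch only makes the rank-then-correlate logic explicit.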
Affiliation(s)
- Jonathan P Fryer
- Department of Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA.
|