1. Gumaste VV. Need for a More Objective, Inclusive, and Equitable Selection Process for Gastroenterology Fellowships. Dig Dis Sci 2024. PMID: 39361197. DOI: 10.1007/s10620-024-08592-6.
Abstract
Diseases of the digestive system account for a significant proportion of the disease burden in the United States, resulting in 36.8 million ambulatory visits, 3.8 million hospital admissions, and 22.2 million gastrointestinal endoscopies. To meet the challenge that this volume of gastroenterological disorders poses, we are obligated to select and train competent gastroenterologists. Admission into a Gastroenterology (GI) fellowship program is highly selective. In 2023, only 62.7% of candidates who applied matched into a fellowship program, making GI even more competitive than a cardiology fellowship (match rate of 68.4%). It is therefore imperative that the selection process be fair and transparent. Additionally, we need to be more socially responsible by emphasizing diversity and inclusivity, so as to produce gastroenterologists who reflect the changing society we live in. An analysis of current practices indicates that the selection process is not standardized and is more subjective than objective. This review attempts to identify deficiencies that can be rectified by introducing a standardized system that includes structured interviews, Standard Letters of Recommendation (SLOR), and objective scoring protocols, all of which would make selection more equitable, diverse, and inclusive. Newer methods such as the CASPer test, psychometric testing, and preference signaling can also be explored to this end.
Affiliation(s)
- Vivek V Gumaste
- Division of Gastroenterology, Bayonne Medical Center, 29 East 29th Street, Bayonne, NJ, 07002, USA.
2. Vasan V, Cheng CP, Lerner DK, Pascual K, Mercado A, Iloreta AM, Teng MS. Machine Learning for Predictive Analysis of Otolaryngology Residency Letters of Recommendation. Laryngoscope 2024; 134:4016-4022. PMID: 38602257. DOI: 10.1002/lary.31439.
Abstract
INTRODUCTION Letters of recommendation (LORs) are a highly influential yet subjective and often enigmatic aspect of the residency application process. This study hypothesizes that LORs contain valuable insights into applicants and can be used to predict outcomes. This pilot study applied natural language processing and machine learning (ML) models to LOR text to predict interview invitations for otolaryngology residency applicants. METHODS A total of 1642 LORs from the 2022-2023 application cycle were retrospectively retrieved from a single institution. LORs were preprocessed and vectorized using three different techniques that represent written prose in a form ML models can work with: CountVectorizer (CV), Term Frequency-Inverse Document Frequency (TF-IDF), and Word2Vec (WV). Five ML models were then trained and tested on the vectorized LORs: Logistic Regression (LR), Naive Bayes (NB), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM). RESULTS Of the 337 applicants, 67 were interviewed and 270 were not. In total, 1642 LORs (26.7% from interviewed applicants) were analyzed. The two best-performing combinations in predicting interview invitations were the TF-IDF-vectorized DT and CV-vectorized DT models. CONCLUSION This preliminary study revealed that combinations of ML models and vectorization techniques can provide better-than-chance predictions of interview invitations for otolaryngology residency applicants. The high-performing ML models extracted meaningful information from the LORs to predict applicant interview invitations. An automated process to help predict an applicant's likelihood of obtaining an interview invitation could become a valuable tool for training programs. LEVEL OF EVIDENCE N/A. Laryngoscope, 134:4016-4022, 2024.
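To make the vectorization step concrete, the following is a minimal pure-Python sketch of TF-IDF weighting. It is not the study's code, and production work would normally use a library such as scikit-learn's TfidfVectorizer (which adds smoothing and normalization); the sample letter snippets are invented for illustration.

```python
import math
from collections import Counter

def tfidf_vectorize(docs):
    """Toy TF-IDF: weight = raw term count * log(N / document frequency).
    Real implementations (e.g. scikit-learn's TfidfVectorizer) add
    smoothing and L2 normalization on top of this basic scheme."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    n = len(tokenized)
    df = {t: sum(t in doc for doc in tokenized) for t in vocab}
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts[t] * math.log(n / df[t]) for t in vocab])
    return vocab, vectors

# Hypothetical letter snippets, not real application data
letters = [
    "outstanding candidate strong work ethic",
    "solid candidate average clinical skills",
    "outstanding candidate excellent clinical skills",
]
vocab, X = tfidf_vectorize(letters)
# "candidate" appears in every letter, so its IDF (and weight) is 0
```

The resulting vectors are what a Decision Tree or other classifier would be trained on; terms common to every letter carry zero weight, which is exactly why TF-IDF can surface the discriminative language the study was looking for.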
Affiliation(s)
- Vikram Vasan
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- Christopher P Cheng
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- David K Lerner
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- Department of Otolaryngology-Head & Neck Surgery, University of Miami Miller School of Medicine, Miami, Florida, U.S.A
- Karen Pascual
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- Amanda Mercado
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- Alfred Marc Iloreta
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
- Marita S Teng
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, U.S.A
3. Clapp JT, Heins SJ, Gaulton TG, Kleid MA, Lane-Fall MB, Aysola J, Baranov DY, Fleisher LA, Gordon EKB. Does Masked Interviewing Encourage Holistic Review in Residency Selection? A Mixed-Methods Study. Teach Learn Med 2024; 36:369-380. PMID: 37097188. DOI: 10.1080/10401334.2023.2204074.
Abstract
Problem: Medical educators increasingly champion holistic review. However, in U.S. residency selection, holistic review has been difficult to implement, hindered by a reliance on standardized academic criteria such as board scores. Masking faculty interviewers to applicants' academic files is a potential means of promoting holistic residency selection by increasing the interview's ability to make a discrete contribution to evaluation. However, little research has directly analyzed the effects of masking on how residency selection committees evaluate applicants. This mixed-methods study examined how masking interviews altered residency selection in an anesthesiology program at a large U.S. academic medical center. Intervention: During the 2019-2020 residency selection season in the University of Pennsylvania's Department of Anesthesiology & Critical Care, we masked interviewers to the major academic components of candidates' application files (board scores, transcripts, letters) on approximately half of interview days. The intent of the masking intervention was to mitigate the tendency of interviewers to form predispositions about candidates based on standardized academic criteria and thereby allow the interview to make a more independent contribution to candidate evaluation. Context: Our examination of the masking intervention used a concurrent, partially mixed, equal-status mixed-methods design guided by a pragmatist approach. We audio-recorded selection committee meetings and qualitatively analyzed them to explore how masking affected the process of candidate evaluation. We also collected independent candidate ratings from interviewers and consensus committee ratings and statistically compared ratings of candidates interviewed on masked days to ratings from conventional days. 
Impact: In conventional committee meetings, interviewers focused on how to reconcile academic metrics and interviews, and their evaluations of interviews were framed according to predispositions about candidates formed through perusal of application files. In masked meetings, members instead spent considerable effort evaluating candidates' "fit" and whether they came off as tactful. Masked interviewers gave halting opinions of candidates and sometimes pushed for committee leaders to reveal academic information, leading to masking breaches. Higher USMLE Step 1 score and higher medical school ranking were statistically associated with more favorable consensus rating. We found no significant differences in rating outcomes between masked and conventional interview days. Lessons learned: Elimination of academic metrics during the residency interview phase does not straightforwardly promote holistic review. While critical reflection among medical educators about the fairness and utility of such metrics has been productive, research and intervention should focus on the more proximate topic of how programs apply academic and other criteria to evaluate applicants.
Affiliation(s)
- Justin T Clapp
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Penn Center for Perioperative Outcomes Research and Transformation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Sarah J Heins
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Timothy G Gaulton
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Penn Center for Perioperative Outcomes Research and Transformation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Melanie A Kleid
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Meghan B Lane-Fall
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Penn Center for Perioperative Outcomes Research and Transformation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jaya Aysola
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitry Y Baranov
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Lee A Fleisher
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Emily K B Gordon
- Department of Anesthesiology & Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
4. Stone CL, Dogbey GY, Falls J, Kuo YP. Key factors for residency interview selection from the National Resident Matching Program: analysis of residency Program Director surveys, 2016-2020. J Osteopath Med 2023; 123:523-530. PMID: 37615082. DOI: 10.1515/jom-2022-0144.
Abstract
CONTEXT As the number of medical school graduates continues to outpace the available residency training positions, applying for residency in the United States has become a highly competitive process, often associated with a low rate of selection and invitation for interview. The National Resident Matching Program (NRMP) Program Director survey provides data on the factors Program Directors (PDs) consider when selecting and inviting candidates for interview. Assessing how these factors have evolved over time can inform and guide prospective applicants in preparing their residency applications. OBJECTIVES We aim to synthesize NRMP data on the factors that PDs reported and rated as important in their decision to select and invite applicants for interview. METHODS Data from residency PD surveys from 2008 to 2021 were accessed; after applying inclusion/exclusion criteria, only the data from 2016 to 2020 were reviewed and analyzed. The NRMP survey reports provided two metrics characterizing PDs' evaluation of each factor: "percent citing factor" and "average rating" on a 0 to 5 Likert-type scale. These two metrics were combined into an aggregate measure of importance (AI), and a measure of relative importance (RI) was constructed by normalizing each factor's AI to the sum of AI values within each survey year. RESULTS The top-ranked factors were United States Medical Licensing Examination (USMLE) Step 1/Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 1 score, Letter of Recommendation (LOR) in the specialty, Medical Student Performance Evaluation (MSPE/Dean's Letter), USMLE Step 2 Clinical Knowledge (CK)/COMLEX Level 2 Cognitive Evaluation (CE) score, any failed attempt in USMLE/COMLEX, and perceived commitment to specialty.
Factors rising in importance were Audition Elective/Rotation Within Your Department, Personal Statement (PS), Perceived Commitment to Specialty, Perceived Interest in Program, LOR in the Specialty, Other Life Experience, and Personal Prior Knowledge of the Applicant. Factors declining in importance were Interest in Academic Career, Awards or Special Honors in Basic Sciences, Graduate of Highly Regarded US Medical School, Awards or Special Honors in Clinical Clerkships, Lack of Gaps in Medical Education, Awards or Special Honors in Clerkship in Desired Specialty, and Consistency of Grades. Compared with the 2021 PD survey, our findings show continued consistency, particularly for factors related to specialty and program commitment. CONCLUSIONS The factors identified for selecting medical school graduates for residency interviews indicate that PDs are moving toward a more integrated approach, placing increasing emphasis on subjective qualities over the more traditional quantitative, objective metrics. Medical students and educators should continually apprise themselves of NRMP data to guide students' preparation throughout medical school, strengthen their application portfolios, and enhance their competitiveness in the matching process.
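The AI/RI construction above can be sketched in a few lines. The abstract does not state exactly how "percent citing factor" and "average rating" are combined, so the product used below, like the survey numbers, is an assumption for illustration only:

```python
def relative_importance(factors):
    """Sketch of the paper's AI/RI metrics. AI is assumed here to be the
    product of 'percent citing factor' and 'average rating' (both from
    the NRMP PD survey); RI normalizes each factor's AI by the sum of
    AI values within the survey year, so RI values sum to 1."""
    ai = {name: pct * rating for name, (pct, rating) in factors.items()}
    total = sum(ai.values())
    return {name: value / total for name, value in ai.items()}

# Hypothetical survey values: (percent citing, mean 0-5 rating)
year = {
    "USMLE Step 1 score": (0.94, 4.1),
    "LOR in the specialty": (0.86, 4.2),
    "Personal statement": (0.78, 3.7),
}
ri = relative_importance(year)  # within-year shares summing to 1.0
```

Normalizing within each year is what lets the authors compare a factor's share of total importance across survey years even when the survey's factor list changes.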
Affiliation(s)
- Cooper L Stone
- Department of Psychiatry & Human Behavior, Thomas Jefferson University Hospital, Philadelphia, PA, USA
- Godwin Y Dogbey
- Jerry M. Wallace School of Osteopathic Medicine, Campbell University, Lillington, NC, USA
- John Falls
- Jerry M. Wallace School of Osteopathic Medicine, Campbell University, Lillington, NC, USA
- Yen-Ping Kuo
- Jerry M. Wallace School of Osteopathic Medicine, Campbell University, Lillington, NC, USA
5. Kibble J, Plochocki J. Comparing Machine Learning Models and Human Raters When Ranking Medical Student Performance Evaluations. J Grad Med Educ 2023; 15:488-493. PMID: 37637337. PMCID: PMC10449343. DOI: 10.4300/jgme-d-22-00678.1.
Abstract
Background The Medical Student Performance Evaluation (MSPE), a narrative summary of each student's academic and professional performance in US medical school, is a long document, which makes it challenging for residency programs to review large numbers of applicants. Objective To create a rubric for assessing MSPE narratives and to compare the ability of 3 commercially available machine learning models (MLMs) to rank MSPEs in order of positivity. Methods Thirty of a possible 120 MSPEs from the University of Central Florida class of 2020 were de-identified and subjected both to manual scoring and ranking by a pair of faculty members, using a new rubric based on the Accreditation Council for Graduate Medical Education competencies, and to global sentiment analysis by the MLMs. Correlation analysis was used to assess reliability and agreement between the student rank orders produced by faculty and by the MLMs. Results The intraclass correlation coefficient for faculty interrater reliability was 0.864 (P<.001; 95% CI 0.715-0.935) for total rubric scores and ranged from 0.402 to 0.768 for individual subscales; faculty rank orders were also highly correlated (rs=0.758; P<.001; 95% CI 0.539-0.881). The authors report good feasibility, as the rubric was easy to use and added minimal time to reading MSPEs. The MLMs correctly reported a positive sentiment for all 30 MSPE narratives, but their rank orders showed no significant correlations between different MLMs or with faculty rankings. Conclusions The rubric for manual grading provided reliable overall scoring and ranking of MSPEs. The MLMs accurately detected positive sentiment in the MSPEs but were unable to provide reliable rank ordering.
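The rank-order agreement statistic reported above (Spearman's rs between two raters' orderings) can be computed with a short stand-alone sketch; scipy.stats.spearmanr does the same thing, and the rubric scores below are invented for illustration:

```python
def rankdata(values):
    """Assign ranks 1..n, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation of rank-transformed data."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Invented rubric totals for five MSPEs from two faculty raters
rater_1 = [88, 72, 95, 60, 81]
rater_2 = [85, 70, 90, 65, 80]
rho = spearman(rater_1, rater_2)  # identical rank orders give rho = 1.0
```

Because rho depends only on orderings, it is the natural statistic for asking whether two faculty members, or a faculty member and an MLM, sort the same MSPEs into the same sequence.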
Affiliation(s)
- Jonathan Kibble
- Both authors are with University of Central Florida College of Medicine. Jonathan Kibble, PhD, is Professor of Medical Education; and
6. Compliance with CDIM-APDIM Guidelines for Department of Medicine Letters: an Opportunity to Improve Communication Across the Continuum. J Gen Intern Med 2022; 37:125-129. PMID: 33791934. PMCID: PMC8739400. DOI: 10.1007/s11606-021-06744-2.
Abstract
BACKGROUND With rising applications to internal medicine programs and pending changes in United States Medical Licensing Examination Step 1 score reporting, program directors desire transparent data for comparing applicants. The Department of Medicine Letters of Recommendation (DOM LORs) are frequently used to assess applicants and have the potential to provide clearly defined data on performance including stratification of a medical school class. Despite published guidelines on the expected content of the DOM LOR, these LORs do not always meet that need. OBJECTIVES To better understand the degree to which DOM LORs comply with published guidelines. METHODS We reviewed DOM LORs from 146 of 155 LCME-accredited medical schools in the 2019 Match cycle, assessing for compliance with published guidelines. RESULTS Adherence to the recommendation for DOM LORs to provide a final characterization of performance relative to peers was low (68/146, 47%). Of those that provided a final characterization, 19/68 (28%) provided a quantitative measure, and 49/68 (72%) provided a qualitative descriptor. Only 17/49 (35%) with qualitative terms described those terms, and thirteen distinct qualitative scales were identified. Ranking systems varied, with seven different titles given to highest performers. Explanations about determination of ranking groups were provided in 12% of cases. CONCLUSIONS Adherence to published guidelines for DOM LORs varies but is generally low. For program directors desiring transparent data to use in application review, clearly defined data on student performance, stratification groupings, and common language across schools could improve the utility of DOM LORs.
7. Sam AH, Bala L, Westacott RJ, Brown C. Is Academic Attainment or Situational Judgment Test Performance in Medical School Associated With the Likelihood of Disciplinary Action? A National Retrospective Cohort Study. Acad Med 2021; 96:1467-1475. PMID: 34133342. DOI: 10.1097/acm.0000000000004212.
Abstract
PURPOSE Disciplinary action imposed on physicians indicates their fitness to practice medicine is impaired and patient safety is potentially at risk. This national retrospective cohort study sought to examine whether there was an association between academic attainment or performance on a situational judgment test (SJT) in medical school and the risk of receiving disciplinary action within the first 5 years of professional practice in the United Kingdom. METHOD The authors included data from the UK Medical Education Database for 34,865 physicians from 33 U.K. medical schools that started the UK Foundation Programme (similar to internship) between 2014 and 2018. They analyzed data from 2 undergraduate medical assessments used in the United Kingdom: the Educational Performance Measure (EPM), which is based on academic attainment, and SJT, which is an assessment of professional attributes. The authors calculated hazard ratios (HRs) for EPM and SJT scores. RESULTS The overall rate of disciplinary action was low (65/34,865, 0.19%) and the mean time to discipline was 810 days (standard deviation [SD] = 440). None of the physicians with fitness to practice concerns identified as students went on to receive disciplinary action after they qualified as physicians. The multivariate survival analysis demonstrated that a score increase of 1 SD (approximately 7.6 percentage points) on the EPM reduced the hazard of disciplinary action by approximately 50% (HR = 0.51; 95% confidence interval [CI]: 0.38, 0.69; P < .001). There was not a statistically significant association between the SJT score and the hazard of disciplinary action (HR = 0.84; 95% CI: 0.62, 1.13; P = .24). CONCLUSIONS An increase in EPM score was significantly associated with a reduced hazard of disciplinary action, whereas performance on the SJT was not. Early identification of increased risk of disciplinary action may provide an opportunity for remediation and avoidance of patient harm.
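The hazard-ratio arithmetic behind figures like "HR = 0.51 (95% CI: 0.38, 0.69)" follows directly from the Cox model's log-hazard coefficient. The sketch below (function names are ours, not the paper's) recovers the coefficient and its standard error from the reported interval and reconstructs the interval from them:

```python
import math

def hr_to_beta_se(hr, ci_low, ci_high, z=1.96):
    """Recover the log-hazard coefficient and its standard error from a
    reported hazard ratio and a 95% CI symmetric on the log scale."""
    beta = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return beta, se

def beta_to_hr_ci(beta, se, z=1.96):
    """Forward direction: HR = exp(beta), CI bounds = exp(beta -/+ z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# EPM result reported in the abstract: HR 0.51 per 1-SD score increase
beta, se = hr_to_beta_se(0.51, 0.38, 0.69)
hr, lo, hi = beta_to_hr_ci(beta, se)  # reconstructs ~0.51 (0.38, 0.69)
```

Working on the log scale is what makes the interval multiplicative: an HR below 1 with an upper CI bound below 1 (as for the EPM) indicates a statistically significant protective association, whereas the SJT's interval (0.62, 1.13) crosses 1.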
Affiliation(s)
- Amir H Sam
- A.H. Sam is head, Imperial College School of Medicine, Imperial College London, London, United Kingdom; ORCID: https://orcid.org/0000-0002-9599-9069
- Laksha Bala
- L. Bala is a clinical research fellow in medical education, Faculty of Medicine, Imperial College London, London, United Kingdom; ORCID: https://orcid.org/0000-0002-8242-379X
- Rachel J Westacott
- R.J. Westacott is senior clinical lecturer, Birmingham Medical School, University of Birmingham, Birmingham, United Kingdom; ORCID: https://orcid.org/0000-0001-9846-1961
- Celia Brown
- C. Brown is associate professor in quantitative methods, Warwick Medical School, University of Warwick, Coventry, United Kingdom; ORCID: https://orcid.org/0000-0002-7526-0793
8. Nilsen KM, Walling A, Grothusen J, Irwin G, Meyer M, Unruh G. Time and Financial Costs for Students Participating in the National Residency Matching Program (the Match): 2015 to 2020. Kans J Med 2021; 14:53-63. PMID: 33763180. PMCID: PMC7984744. DOI: 10.17161/kjm.vol1414568.
Abstract
Introduction The purpose of this study was to provide information to assist students, faculty, and staff in making critical career-determining decisions regarding the NRMP residency Match process. Methods A 47-item survey questionnaire was developed and piloted on a regional medical school campus in 2015. The revised questionnaire was distributed each year from 2016 to 2020 to fourth-year medical students after rank lists had been submitted. The questionnaire requested comments about the interviewing experience and suggestions for improving the process; this narrative feedback was coded using thematic analysis. Results The overall response rate was 86.1% (897/1,042); annual response rates ranged from 70.0% in 2020 to 97.0% in 2018. Respondents' average age was 27.3 (± 2.7) years, and 50.0% (448/897) were male. Most applied to family medicine (164/897; 18.2%) and internal medicine (140/897; 15.6%). Eight specialties had fewer than ten applicants over the six-year period. The number of students applying to individual specialties fluctuated annually, but no specialty showed a consistent upward or downward trend over the study period. Conclusions This study found large differences in numbers of applications, expenses, and days spent interviewing. Students want more guidance, a more efficient system, transparent communication with programs, and less pressure during the process. Reducing the escalating volume of applications is central to improving the system. Despite efforts to better inform applicants, student behavior is unlikely to change until students feel confident that lower, more realistic numbers of applications and interviews will still secure an appropriate residency position.
Affiliation(s)
- Kari M Nilsen
- Department of Family and Community Medicine, University of Kansas School of Medicine-Wichita, Wichita, KS
- Anne Walling
- Department of Family and Community Medicine, University of Kansas School of Medicine-Wichita, Wichita, KS
- Jill Grothusen
- Department of Family Medicine and Community Health, University of Kansas School of Medicine-Kansas City, Kansas City, KS
- Gretchen Irwin
- Office of Graduate Medical Education, University of Kansas School of Medicine-Wichita, Wichita, KS
- Mark Meyer
- Office of Student Affairs, University of Kansas School of Medicine-Kansas City, KS
- Greg Unruh
- Office of Graduate Medical Education, University of Kansas School of Medicine-Kansas City, KS
9. Mullen M, Barnard A, Gavard JA, Miller C, Thomure M. Residency match interview scheduling: quantifying the applicant experience. Postgrad Med J 2021; 98:e12. PMID: 33707292. DOI: 10.1136/postgradmedj-2020-139514.
Abstract
BACKGROUND The process of offering and scheduling residency interviews varies widely among programmes. Applicants report distress and have advocated for reform. However, there is a paucity of quantitative data to characterise applicant concerns. OBJECTIVE We quantified the interview scheduling experience for US allopathic medical students in the 2020 main residency match. METHODS An anonymous, 13 question survey was sent to student representatives from each Association of American Medical Colleges member institution. Recipients were asked to forward the survey to their entire fourth-year class. RESULTS Of 4314 applicants to whom the survey was sent, 786 (18.2%) responded. Overall, 20.4% reported missing the opportunity to interview at a programme because they did not have adequate time to respond to an invitation; applicants into surgical specialties were significantly more likely than their non-surgical peers to report this experience (26.4% vs 18.4%, p<0.05). Most (57.4%) respondents scheduled an interview knowing they would likely cancel it in the future. The most commonly cited reason for this behaviour was concern that applicants would not receive invitations from other programmes (85.6%). A majority (56.4%) of respondents did not believe the match interview process functions based on equity and merit. CONCLUSIONS About one in five respondents missed the opportunity to interview at a programme because they did not respond to an invitation in time. Most respondents scheduled interviews knowing that they were likely to cancel them in the future. Standardisation of the interview invitation timeline would address these concerns.
Affiliation(s)
- Mark Mullen
- Department of Psychiatry, Creighton University, Omaha, Nebraska, USA
- Amanda Barnard
- Saint Louis University School of Education, Saint Louis, Missouri, USA
- Jeffrey A Gavard
- Department of Obstetrics and Gynecology, Saint Louis University School of Medicine, Saint Louis, Missouri, USA
- Chad Miller
- Office of Curricular Affairs, Saint Louis University, Saint Louis, Missouri, USA
- Michael Thomure
- Department of Obstetrics and Gynecology, Saint Louis University School of Medicine, Saint Louis, Missouri, USA
10. Hauer KE, Giang D, Kapp ME, Sterling R. Standardization in the MSPE: Key Tensions for Learners, Schools, and Residency Programs. Acad Med 2021; 96:44-49. PMID: 32167965. DOI: 10.1097/acm.0000000000003290.
Abstract
The Medical Student Performance Evaluation (MSPE), which summarizes a medical student's academic and professional undergraduate medical education performance and provides salient information during the residency selection process, faces persistent criticisms regarding heterogeneity and obscurity. Specifically, MSPEs do not always provide the same type or amount of information about students, especially from diverse schools, and important information is not always easy to find or interpret. To address these concerns, a key guiding principle from the Recommendations for Revising the MSPE Task Force of the Association of American Medical Colleges (AAMC) was to achieve "a level of standardization and transparency that facilitates the residency selection process." Benefits of standardizing the MSPE format include clarification of performance benchmarks or metrics, consistency across schools to enhance readability, and improved quality. In medical education, standardization may be an important mechanism to ensure accountability of the system for all learners, including those with varied backgrounds and socioeconomic resources. In this article, members of the aforementioned AAMC MSPE task force explore 5 tensions inherent in the pursuit of standardizing the MSPE: (1) presenting each student's individual characteristics and strengths in a way that is relevant, while also working with a standard format and providing standard content; (2) showcasing school-specific curricular strengths while also demonstrating standard evidence of readiness for internship; (3) defining and achieving the right amount of standardization so that the MSPE provides useful information, adds value to the residency selection process, and is efficient to read and understand; (4) balancing reporting with advocacy; and (5) maintaining standardization over time, especially given the tendency for the MSPE format and content to drift. Ongoing efforts to promote collaboration and trust across the undergraduate to graduate medical education continuum offer promise to reconcile these tensions and promote successful educational outcomes.
Affiliation(s)
- Karen E Hauer
- K.E. Hauer is associate dean, Assessment, and professor, Department of Medicine, University of California, San Francisco School of Medicine, San Francisco, California; ORCID: https://orcid.org/0000-0002-8812-4045
- Daniel Giang
- D. Giang is associate dean, Graduate Medical Education, and professor, Department of Neurology, Loma Linda University, Loma Linda, California
- Meghan E Kapp
- M.E. Kapp is assistant professor, Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee; ORCID: https://orcid.org/0000-0002-0252-3919
- Robert Sterling
- R. Sterling is associate professor, Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland; ORCID: https://orcid.org/0000-0003-2963-3162
11
Burk-Rafel J, Standiford TC. A Novel Ticket System for Capping Residency Interview Numbers: Reimagining Interviews in the COVID-19 Era. Acad Med 2021; 96:50-55. [PMID: 32910007] [DOI: 10.1097/acm.0000000000003745]
Abstract
The 2019 novel coronavirus (COVID-19) pandemic has led to dramatic changes in the 2020 residency application cycle, including halting away rotations and delaying the application timeline. These stressors are laid on top of a resident selection process already under duress from exploding application and interview numbers; the latter is likely to be exacerbated by the widespread shift to virtual interviewing. Leveraging their trainee perspective, the authors propose enforcing a cap on the number of interviews that applicants may attend through a novel interview ticket system (ITS). Specialties electing to participate in the ITS would select an evidence-based, specialty-specific interview cap. Applicants would then receive unique electronic tickets, equal in number to the cap, that would be given to participating programs at the time of an interview, when the tickets would be marked as used. The system would be self-enforcing and would ensure that each interview represents genuine interest between applicant and program, while potentially increasing the number of interviews, and thus the match rate, for less competitive applicants. Limitations of the ITS and alternative approaches to interview capping, including an honor code system, are also discussed. Finally, in the context of capped interview numbers, the authors emphasize the need for transparent preinterview data from programs to inform applicants and their advisors on which interviews to attend, learning from prior experiences and studies on virtual interviewing, adherence to best practices for interviewing, and careful consideration of how virtual interviews may shift inequities in the resident selection process.
Affiliation(s)
- Jesse Burk-Rafel
- J. Burk-Rafel is assistant professor of internal medicine and assistant director of UME-GME innovation, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, New York. At the time this article was written, he was an internal medicine resident, NYU Langone Health, New York, New York
- Taylor C Standiford
- T.C. Standiford is a fourth-year medical student, University of Michigan Medical School, Ann Arbor, Michigan
12
Saudek K, Treat R, Rogers A, Hahn D, Lauck S, Saudek D, Weisgerber M. A novel faculty development tool for writing a letter of recommendation. PLoS One 2020; 15:e0244016. [PMID: 33326489] [PMCID: PMC7743943] [DOI: 10.1371/journal.pone.0244016]
Abstract
OBJECTIVE Based on a national survey of program directors, we developed a letter of recommendation (LOR) scoring rubric (SR) to assess LORs submitted to a pediatric residency program. The objective was to use the SR to analyze the consistency of LOR ratings across raters and the LOR components that contributed to the impression of the LOR and the candidate. METHODS We graded 30 LORs submitted to a pediatric residency program that were evenly distributed based on final rank by our program. The SR contained 3 sections (letter features, phrases, and applicant abilities) and 2 questions, rated on a 5-point Likert scale, about the quality of the LOR (LORQ) and the impression of the candidate (IC) after reading the LOR. Inter-rater reliability was calculated with intraclass correlation coefficients (ICC(2,1)). Pearson (r) correlations and stepwise multivariate linear regression modeling predicted LORQ and IC. Mean scores of phrases, features, and applicant abilities were analyzed with ANOVA and Bonferroni correction. RESULTS Phrases (ICC(2,1) = 0.82, p<0.001) and features (ICC(2,1) = 0.60, p<0.001) were rated consistently, while applicant abilities were not (ICC(2,1) = 0.28, p<0.001). For features, LORQ (R2 = 0.75, p<0.001) and IC (R2 = 0.58, p<0.001) were best predicted by: writing about candidates' abilities, strength of recommendation, and depth of interaction with the applicant. For abilities, LORQ (R2 = 0.47, p<0.001) and IC (R2 = 0.51, p<0.001) were best predicted by: clinical reasoning, leadership, and communication skills. There were significant differences for phrases and features (p<0.05). CONCLUSIONS The SR was consistent across raters and correlated with impressions of LORQ and IC. This rubric has potential as a faculty development tool for writing LORs.
Affiliation(s)
- Kris Saudek
- Division of Neonatology, Department of Pediatrics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Robert Treat
- Division of Neonatology, Department of Pediatrics, Medical College of Wisconsin, Milwaukee, Wisconsin, United States of America
- Amanda Rogers
- Department of Pediatrics, Division of Hospital Medicine, Milwaukee, Wisconsin, United States of America
- Danita Hahn
- Department of Pediatrics, Division of Hospital Medicine, Milwaukee, Wisconsin, United States of America
- Sara Lauck
- Department of Pediatrics, Division of Hospital Medicine, Milwaukee, Wisconsin, United States of America
- David Saudek
- Department of Pediatrics, Division of Cardiology, Milwaukee, Wisconsin, United States of America
- Michael Weisgerber
- Department of Pediatrics, Division of Hospital Medicine, Milwaukee, Wisconsin, United States of America
13
Towaij C. Everyone Is Awesome: Analyzing Letters of Reference in a General Surgery Residency Selection Process. J Grad Med Educ 2020; 12:566-570. [PMID: 33149825] [PMCID: PMC7594789] [DOI: 10.4300/jgme-d-20-00034.1]
Abstract
BACKGROUND The resident selection process involves the analysis of multiple data points, including letters of reference (LORs), which are inherently subjective in nature. OBJECTIVE We assessed the frequency with which LORs use quantitative terms to describe applicants and whether the use of these terms reflects the ranking of trainees in the final selection process. METHODS A descriptive study analyzing LORs submitted by Canadian medical graduate applicants to the University of Ottawa General Surgery Program in 2019 was completed. We collected demographic information about applicants and referees and recorded the use of preidentified quantitative descriptors (eg, best, above average). A 10% audit of the data was performed. Descriptive statistics were used to analyze the demographics of our letters as well as the frequency of use of the quantitative descriptors. RESULTS Three hundred forty-three LORs for 114 applicants were analyzed. Eighty-five percent (291 of 343) of LORs used quantitative descriptors. Eighty-four percent (95 of 113) of applicants were described as above average, and 45% (51 of 113) were described as the "best" by at least 1 letter. The candidates described as the "best" ranked anywhere from second to 108th in our ranking system. CONCLUSIONS Most LORs use quantitative descriptors. These terms are generally positive, and although their use does discriminate between applicants, it was not helpful for ranking applicants during our file review process.
14
Poly-Specialty Application Practices of Medical Students Applying to Integrated Vascular Surgery Residency. Ann Vasc Surg 2020; 69:125-132. [DOI: 10.1016/j.avsg.2020.06.011]
Abstract
BACKGROUND It is not uncommon for medical students seeking surgical residencies to apply to and rank two or more surgical specialties. Level of interest in a specialty is consistently cited as one of the most important factors for program directors when evaluating applicants for 0 + 5 integrated vascular surgery (IVS) programs. The purpose of this study was to examine trends in poly-specialty application submission to IVS and poly-specialty ranking of IVS to determine the percentage of applicants to IVS residencies with vascular surgery as their true preference. METHODS Electronic Residency Application Service (ERAS) statistics for noninternational medical graduates from 2011 to 2017 were mined for trends in poly-specialty applications between IVS and other surgical disciplines. The poly-specialty application percentage, range, and standard deviation were determined. The National Resident Match Program (NRMP) results and data from 2011 to 2018 were also used to identify those U.S. seniors who ranked IVS programs as their preferred choice, defined as ranking vascular as the only choice or the first-choice specialty. This was compared with those who ranked a specialty other than vascular surgery first but had vascular surgery listed on their rank list. These data were also collected for applicants to orthopedic surgery, neurosurgery, otolaryngology, obstetrics and gynecology, integrated cardiothoracic surgery, and integrated plastic surgery. RESULTS Between 2011 and 2017, applicants who submitted ERAS applications to IVS most often poly-specialty applied to IVS and general surgery (87%) followed by IVS and the following: preliminary surgery (71%), plastic surgery (22%), orthopedic surgery (19%), neurosurgery (17%), otolaryngology (16%), obstetrics and gynecology (12%), and urology (3%). The percentage of the applicant pool submitting rank lists with multiple specialties fell over the study period from 94% in 2011 to 67% in 2018. Between 2011 and 2018, an average of 14% of IVS applicants (n = 463), who submitted rank lists to the NRMP, ranked a specialty other than vascular as their true preference (range 7-23, SD 5). Only integrated cardiothoracic surgery had a higher percentage of applicants listing a different specialty as their true preference, at 25% (range 18-36, SD 7). Nearly all (97-99%) applicants to orthopedic surgery, neurosurgery, otolaryngology, obstetrics and gynecology, and plastic surgery applied to that specialty as their true preference. CONCLUSIONS IVS residency applicants were most likely to apply for poly-specialty via the ERAS to general surgery and IVS. Compared to the other surgical specialties, those who submitted rank lists to the NRMP listing integrated cardiothoracic and IVS had the highest likelihood of ranking another specialty higher. Care must be taken when evaluating applications to IVS residencies to determine the applicant's level of interest in vascular surgery as a career.
15
Akers A, Blough C, Iyer MS. COVID-19 Implications on Clinical Clerkships and the Residency Application Process for Medical Students. Cureus 2020; 12:e7800. [PMID: 32461867] [PMCID: PMC7243841] [DOI: 10.7759/cureus.7800]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has caused significant disruption to undergraduate medical education (UME). Although the immediate scheduling challenges are being addressed, there has been less discourse regarding how this pandemic will impact medical students in their preparation for and application to residency programs. While some historical disasters and pandemics provide a loose precedent for the UME response during COVID-19, the impact of the current pandemic has surpassed that of any previous event. COVID-19 will likely affect UME through the suspension of clinical rotations, alterations in grading, suspension or elimination of away rotations, changes in medical licensing exams, and ramifications for mental health. This review assesses governing medical bodies' recommendations regarding UME during the COVID-19 pandemic and how this may impact preparation for residency. In particular, residency programs will likely have to create new guidelines for assessing applicants during this unique cycle.
Affiliation(s)
- Allison Akers
- Medicine, The Ohio State University Wexner Medical Center, Columbus, USA
- Christian Blough
- Medicine, The Ohio State University Wexner Medical Center, Columbus, USA
- Maya S Iyer
- Pediatrics, Nationwide Children's Hospital, Columbus, USA
- Pediatrics, The Ohio State University College of Medicine, Columbus, USA
16
Saudek K, Treat R, Goldblatt M, Saudek D, Toth H, Weisgerber M. Pediatric, Surgery, and Internal Medicine Program Director Interpretations of Letters of Recommendation. Acad Med 2019; 94:S64-S68. [PMID: 31365410] [DOI: 10.1097/acm.0000000000002919]
Abstract
PURPOSE Literature describing program director (PD) perceptions of letters of recommendation (LORs) and "code" used by letter writers is limited. In 2016, a survey instrument was distributed nationally to pediatric PDs asking them to rate their interpretations of components of LORs. The results confirmed that letter phrases convey code, but these results were not known to be generalizable outside of pediatrics. The purpose of this study was to expand the survey to surgery and internal medicine (IM) PDs looking for areas of agreement or variation between the 3 specialties. METHOD The survey was sent nationally to surgery and IM PDs asking them to rate LORs in 3 areas on a 5-point Likert scale: 14 commonly used phrases, 13 letter features, and 10 applicant abilities. The LOR phrases were grouped using principal component analysis (PCA). Mean scores of components were analyzed with repeated-measures analysis of variance. RESULTS Response rates: pediatrics 43% (486 of 1079), surgery 55% (151 of 277), and IM 42% (170 of 408). PCA generated groups of positive, neutral, and negative phrases with moderate to strong correlation with each other for all 3 specialties. There were significant differences between the mean Likert scores of the positive, neutral, and negative groups of phrases for all 3 specialties (all P < .001). "Showed improvement" was rated the most negative phrase by all 3 specialties. CONCLUSIONS Key elements of LORs include distinct phrases depicting different degrees of endorsement of candidates. Pediatric, surgery, and IM PDs interpret letter components differently.
Affiliation(s)
- Kris Saudek
- K. Saudek is associate professor of pediatrics and associate program director, pediatric residency program, Medical College of Wisconsin, Milwaukee, Wisconsin. R. Treat is associate professor and director of measurement and evaluation, Office of Academic Affairs, Medical College of Wisconsin, Milwaukee, Wisconsin. M. Goldblatt is professor of surgery and program director, Surgery Residency Program, Medical College of Wisconsin, Milwaukee, Wisconsin. D. Saudek is associate professor of pediatrics and director of quality improvement, Herma Heart Institute, Medical College of Wisconsin, Milwaukee, Wisconsin. H. Toth is associate professor of internal medicine-pediatrics and program director, Internal Medicine-Pediatric Residency Program, Medical College of Wisconsin, Milwaukee, Wisconsin. M. Weisgerber is professor of pediatrics and program director, Pediatric Residency Program, Medical College of Wisconsin, Milwaukee, Wisconsin
17
Rozenshtein A, Mullins ME, Marx MV. The USMLE Step 1 Pass/Fail Reporting Proposal: The APDR Position. Acad Radiol 2019; 26:1400-1402. [PMID: 31383545] [DOI: 10.1016/j.acra.2019.06.004]
Abstract
BACKGROUND The National Board of Medical Examiners (NBME) and the United States Medical Licensing Examination (USMLE) convened a conference of "key stakeholders" on March 11-12, 2019, to consider reporting the results of the USMLE Step 1 as pass/fail. DISCUSSION While the original purpose of the USMLE Step 1 was to provide an objective basis for medical licensing, the score is increasingly used in residency applicant screening and selection because it is an objective, nationally recognized metric allowing comparison across medical schools in and outside the United States. Excessive reliance on the Step 1 score in the matching process has led to a "Step 1 culture" that drives medical schools to "teach to the test," increases medical student anxiety, and disadvantages minorities, who have been shown to score lower on the USMLE Step 1 examination. The outsize role of the USMLE Step 1 score in resident selection is due to the lack of standardization in medical school transcripts, grade inflation, and the absence of class standing in many summative assessments. Furthermore, the numeric score allows initial Electronic Residency Application Service filtering, commonly used by programs to limit the number of residency applications to review. CONCLUSION The Association of Program Directors in Radiology (APDR) is concerned that pass/fail reporting of the USMLE Step 1 would take away an objective measure of a medical student's knowledge and the incentive to acquire as much of it as possible. Although the APDR is not in favor of the Step 1 exam being used as a screening tool, in the absence of an equal or better metric for applicant comparison the APDR opposes the change in Step 1 reporting from a numeric score to pass/fail.
Affiliation(s)
- Anna Rozenshtein
- Department of Radiology, Westchester Medical Center-New York Medical College, 100 Woods Road, Valhalla, NY 10595
- Mark E Mullins
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
- M Victoria Marx
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, California
18
Jhun P, Shoenberger J, Drigalla D, Johnson C, Stone S, DeBlieux PMC, Cheaito MA, Lotfipour S, Kazzi A. Are You Applying to More Than One Specialty? J Emerg Med 2019; 57:e157-e160. [PMID: 31279638] [DOI: 10.1016/j.jemermed.2019.05.034]
Abstract
Although the majority of U.S. medical students apply to only one specialty, some apply to more than one. When it comes to emergency medicine (EM), applicants may apply to additional specialties for several reasons: being international medical graduates, an inability to decide on a specialty, fear of the growing competitiveness of EM, or the desire to stay in a specific geographic area. Accordingly, in this article we aim to guide medical students through the process of applying to more than one specialty, including using the Electronic Residency Application Service application, writing a personal statement, obtaining letters of recommendation, and navigating an Early Match. Moreover, we elaborate on how applying to more than one specialty affects a student's application to a residency in EM.
Affiliation(s)
- Paul Jhun
- Department of Emergency Medicine, University of California San Francisco, San Francisco, California
- Jan Shoenberger
- Keck School of Medicine of the University of Southern California, Los Angeles, California; Los Angeles County + University of Southern California Medical Center, Los Angeles, California
- Dorian Drigalla
- Department of Emergency Medicine, Texas A&M College of Medicine, Temple, Texas; Department of Emergency Medicine, Baylor Scott & White Health, Temple, Texas
- Cherlin Johnson
- Keck School of Medicine of the University of Southern California, Los Angeles, California; Los Angeles County + University of Southern California Medical Center, Los Angeles, California
- Susan Stone
- Keck School of Medicine of the University of Southern California, Los Angeles, California; Los Angeles County + University of Southern California Medical Center, Los Angeles, California
- Peter M C DeBlieux
- Section of Emergency Medicine, Section of Pulmonary and Critical Care, Department of Medicine, Louisiana State University School of Medicine, New Orleans, Louisiana
- Mohamad Ali Cheaito
- Department of Emergency Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Shahram Lotfipour
- Department of Emergency Medicine, University of California, Irvine, California
- Amin Kazzi
- Department of Emergency Medicine, American University of Beirut Medical Center, Beirut, Lebanon; Department of Emergency Medicine, University of California, Irvine, California
19
Hartman ND. A Narrative Review of the Evidence Supporting Factors Used by Residency Program Directors to Select Applicants for Interviews. J Grad Med Educ 2019; 11:268-273. [PMID: 31210855] [PMCID: PMC6570461] [DOI: 10.4300/jgme-d-18-00979.3]
Abstract
BACKGROUND Residency applicants feel increasing pressure to maximize their chances of successfully matching into the program of their choice, and are applying to more programs than ever before. OBJECTIVE In this narrative review, we examined the most common and highly rated factors used to select applicants for interviews. We also examined the literature surrounding those factors to illuminate the advantages and disadvantages of using them as differentiating elements in interviewee selection. METHODS Using the 2018 NRMP Program Director Survey as a framework, we examined the last 10 years of literature to ascertain how residency directors are using these common factors to grant residency interviews, and whether these factors are predictive of success in residency. RESULTS Residency program directors identified 12 factors that contribute substantially to the decision to invite applicants for interviews. Although United States Medical Licensing Examination (USMLE) Step 1 is often used as a comparative factor, most studies do not demonstrate its predictive value for resident performance, except in the case of test failure. We also found that structured letters of recommendation from within a specialty carry increased benefit when compared with generic letters. Failing USMLE Step 1 or 2 and unprofessional behavior predicted lower performance in residency. CONCLUSIONS We found that the evidence basis for the factors most commonly used by residency directors is decidedly mixed in terms of predicting success in residency and beyond. Given these limitations, program directors should be skeptical of making summative decisions based on any one factor.
20
Saudek K. Dear Program Director: Deciphering Letters of Recommendation. J Grad Med Educ 2018; 10:261-266. [PMID: 29946380] [PMCID: PMC6008019] [DOI: 10.4300/jgme-d-17-00712.1]
Abstract
BACKGROUND Letters of recommendation (LORs) are an important part of applications for residency and fellowship programs. Despite anecdotal use of a "code" in LORs, research on program director (PD) perceptions of the value of these documents is sparse. OBJECTIVE We analyzed PD interpretations of LOR components and discriminated between perceived levels of applicant recommendations. METHODS We conducted a cross-sectional, descriptive study of pediatrics residency and fellowship PDs. We developed a survey asking PDs to rate 3 aspects of LORs: 13 letter features, 10 applicant abilities, and 11 commonly used phrases, using a 5-point Likert scale. The 11 phrases were grouped using principal component analysis. Mean scores of components were analyzed with repeated-measures analysis of variance. Median Likert score differences between groups were analyzed with Mann-Whitney U tests. RESULTS Our survey had a 43% response rate (468 of 1079). "I give my highest recommendation" was rated the most positive phrase, while "showed improvement" was rated the most negative. Principal component analysis generated 3 groups of phrases with moderate to strong correlation with each other. The mean Likert score for each group from the PD rating was calculated. Positive phrases had a mean (SD) of 4.4 (0.4), neutral phrases 3.4 (0.5), and negative phrases 2.6 (0.6). There was a significant difference among all 3 pairs of mean scores (all P < .001). CONCLUSIONS Commonly used phrases in LORs were interpreted consistently by PDs and influenced their impressions of candidates. Key elements of LORs include distinct phrases depicting different degrees of endorsement.
21
Conrad SS, Addams AN, Young GH. Holistic Review in Medical School Admissions and Selection: A Strategic, Mission-Driven Response to Shifting Societal Needs. Acad Med 2016; 91:1472-1474. [PMID: 27627631] [DOI: 10.1097/acm.0000000000001403]
Abstract
Medical schools and residency programs have always sought excellence in the areas of education, research, and clinical care. However, these pursuits are not accomplished within a vacuum; rather, they are continually and necessarily influenced by social, cultural, political, legal, and economic forces. Persistent demographic inequalities coupled with rapidly evolving biomedical research and a complex legal landscape heighten our collective awareness and emphasize the continued need to consider medicine's social contract when selecting, educating, and developing physicians and physician-scientists. Selection (who gains access to a medical education and to a career as a physician, researcher, and/or faculty member) is as much art as science. Quantitative assessments of applicants yield valuable information but fail to convey the full story of an applicant and the paths they have taken. Human judgment and evidence-based practice remain critical parts of implementing selection processes that yield the desired outcomes. Holistic review, in promoting the use of strategically designed, evidence-driven, mission-based, diversity-aware processes, provides a conceptual and practical framework for marrying the art with the science without sacrificing the unique value that each brings. In this Commentary, the authors situate medical student selection as both responsive to and informed by broader social context, health and health care needs, educational research and evidence, and state and federal law and policy. They propose that holistic review is a strategic, mission-driven, evidence-based process that recognizes diversity as critical to excellence, offers a flexible framework for selecting future physicians, and facilitates achieving institutional mission and addressing societal needs.
Affiliation(s)
- Sarah S Conrad
- S.S. Conrad is director, Advancing Holistic Review Initiative, Association of American Medical Colleges, Washington, DC. A.N. Addams is director, Student Affairs, Strategy & Alignment, Association of American Medical Colleges, Washington, DC. G.H. Young is senior director, Student Affairs and Programs, Association of American Medical Colleges, Washington, DC