1
Hope D, Kluth D, Homer M, Dewar A, Goddard-Fuller R, Jaap A, Cameron H. Exploring the use of Rasch modelling in "common content" items for multi-site and multi-year assessment. Adv Health Sci Educ Theory Pract 2024. doi:10.1007/s10459-024-10354-y. PMID: 38977526.
Abstract
Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty "common content" multiple choice items were offered to all UK medical schools in 2016-17, and a further sixty in 2017-18, with five available in both years. Thirty medical schools participated, for sixty total datasets across two sessions, and 14,342 individual sittings. Schools selected items to embed in written assessment near the end of their programmes. We applied Rasch modelling to evaluate unidimensionality, model fit statistics and item quality, horizontal equating to compare performance across schools, and vertical equating to compare item performance across time. Of the sixty sittings, three provided non-unidimensional data, and eight violated goodness of fit measures. Item-level statistics identified potential improvements in item construction and provided quality assurance. Horizontal equating demonstrated large differences in scores across schools, while vertical equating showed item characteristics were stable across sessions. Rasch modelling provides significant advantages in model- and item-level reporting compared to classical approaches. However, the complexity of the analysis and the smaller number of educators familiar with Rasch must be addressed locally for a programme to benefit. Furthermore, due to the comparative novelty of Rasch modelling, there is greater ambiguity on how to proceed when a Rasch model identifies misfitting or problematic data.
Affiliation(s)
- David Hope
- Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK
- David Kluth
- Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK
- Matthew Homer
- Leeds Institute of Medical Education, Leeds School of Medicine, Worsley Building, University of Leeds, Woodhouse, Leeds, LS2 9JT, UK
- Avril Dewar
- Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK
- Alan Jaap
- Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK
- Helen Cameron
- Aston Medical School, Aston University, 295 Aston Express Way, Birmingham, B4 7ET, UK
2
Norcini J, Grabovsky I, Barone MA, Anderson MB, Pandian RS, Mechaber AJ. The Associations Between United States Medical Licensing Examination Performance and Outcomes of Patient Care. Acad Med 2024; 99:325-330. doi:10.1097/acm.0000000000005480. PMID: 37816217.
Abstract
PURPOSE The United States Medical Licensing Examination (USMLE) comprises a series of assessments required for the licensure of U.S. MD-trained graduates as well as those who are trained internationally. Demonstration of a relationship between these examinations and outcomes of care is desirable for a process seeking to provide patients with safe and effective health care. METHOD This was a retrospective cohort study of 196,881 hospitalizations in Pennsylvania over a 3-year period (January 1, 2017 to December 31, 2019) for 5 primary diagnoses: heart failure, acute myocardial infarction, stroke, pneumonia, or chronic obstructive pulmonary disease. The 1,765 attending physicians for these hospitalizations self-identified as family physicians or general internists. A converted score based on USMLE Step 1, Step 2 Clinical Knowledge, and Step 3 scores was available, and the outcome measures were in-hospital mortality and log length of stay (LOS). The research team controlled for characteristics of patients, hospitals, and physicians. RESULTS For in-hospital mortality, the adjusted odds ratio was 0.94 (95% confidence interval [CI] = 0.90, 0.99; P < .02). Each standard deviation increase in the converted score was associated with a 5.51% reduction in the odds of in-hospital mortality. For log LOS, the adjusted estimate was 0.99 (95% CI = 0.98, 0.99; P < .001). Each standard deviation increase in the converted score was associated with a 1.34% reduction in log LOS. CONCLUSIONS Better provider USMLE performance was associated with lower in-hospital mortality and shorter log LOS for patients, although the magnitude of the latter is unlikely to be of practical significance. These findings add to the body of evidence that examines the validity of the USMLE licensure program.
3
Ellis R, Cleland J, Scrimgeour DSG, Lee AJ, Hines J, Brennan PA. Establishing the predictive validity of the Intercollegiate Membership of the Royal Colleges of Surgeons written examination: MRCS Part A. Surgeon 2023; 21:323-330. doi:10.1016/j.surge.2023.07.004. PMID: 37544852.
Abstract
Successful completion of the Intercollegiate Membership of the Royal Colleges of Surgeons (MRCS) examination is mandatory for surgical trainees entering higher specialist training in the United Kingdom. Despite its international reputation, and the value placed on the examination in surgical training, there has been little evidence of its predictive validity until recently. In this review, we present a summary of findings of four recent Intercollegiate studies assessing the predictive validity of the MRCS Part A (written) examination. Data from all four studies showed statistically significant positive correlations between the MRCS Part A and other written examinations taken by surgical trainees over the course of their education. The studies summarised in this review provide compelling evidence for the predictive validity of this gatekeeping examination. This review will be of interest to trainees, training institutions and the Royal Colleges given the value placed on the examination by surgical training programmes.
Affiliation(s)
- Ricky Ellis
- Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, AB25 2ZD, United Kingdom; Urology Department, Nottingham University Hospitals, Nottingham, United Kingdom
- Jennifer Cleland
- Medical Education Research and Scholarship Unit, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Duncan SG Scrimgeour
- Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, AB25 2ZD, United Kingdom; Department of Colorectal Surgery, Aberdeen Royal Infirmary, Aberdeen, AB25 2ZN, United Kingdom
- Amanda J Lee
- Medical Statistics Team, Institute of Applied Health Sciences, University of Aberdeen, AB25 2ZD, United Kingdom
- John Hines
- Urology Department, University College Hospital, London, W1G 8PH, United Kingdom
- Peter A Brennan
- Department of Maxillo-Facial Surgery, Queen Alexandra Hospital, Portsmouth, PO6 3LY, United Kingdom
4
Ellis R, Cleland J, Scrimgeour DSG, Lee AJ, Hines J, Brennan PA. Establishing the predictive validity of the Intercollegiate Membership of the Royal Colleges of Surgeons written examination: MRCS Part B. Surgeon 2023; 21:278-284. doi:10.1016/j.surge.2023.07.003. PMID: 37517979.
Abstract
The Intercollegiate Membership of the Royal Colleges of Surgeons (MRCS) is a high-stakes postgraduate examination taken by thousands of surgical trainees worldwide every year. The MRCS is a challenging assessment, highly regarded by surgical training programmes and valued as a gatekeeper to the surgical profession. The examination is taken at considerable personal, social and financial cost to surgical trainees, and failure has significant implications for career progression. Given the value placed on MRCS, it must be a reliable and valid assessment of the knowledge and skills of early-career surgeons. Our first article, 'Establishing the Predictive Validity of the Intercollegiate Membership of the Royal Colleges of Surgeons Written Examination: MRCS Part A', discussed the principles of assessment reliability and validity and outlined the mounting evidence supporting the predictive validity of the MRCS Part A (the multiple-choice questionnaire component of the examination). This, the second article in the series, discusses six recently published studies investigating the predictive validity of the MRCS Part B (the clinical component of the examination). All national longitudinal cohort studies reviewed have demonstrated significant correlations between MRCS Part B and other assessments taken during the UK surgical training pathway, supporting the predictive validity of MRCS Part B. This review will be of interest to trainees, trainers and Royal Colleges given the value placed on the examination by surgical training programmes.
Affiliation(s)
- Ricky Ellis
- Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, United Kingdom; Urology Department, Nottingham University Hospitals, Nottingham, United Kingdom
- Jennifer Cleland
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Duncan SG Scrimgeour
- Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, United Kingdom; Department of Colorectal Surgery, Aberdeen Royal Infirmary, Aberdeen, United Kingdom
- Amanda J Lee
- Medical Statistics Team, Institute of Applied Health Sciences, University of Aberdeen, United Kingdom
- John Hines
- University College Hospital London, United Kingdom
- Peter A Brennan
- Department of Maxillo-Facial Surgery, Queen Alexandra Hospital, Portsmouth, United Kingdom
5
Shirkhodaie C, Avila S, Seidel H, Gibbons RD, Arora VM, Farnan JM. The Association Between USMLE Step 2 Clinical Knowledge Scores and Residency Performance: A Systematic Review and Meta-Analysis. Acad Med 2023; 98:264-273. doi:10.1097/acm.0000000000005061. PMID: 36512984.
Abstract
PURPOSE With the change in Step 1 score reporting, Step 2 Clinical Knowledge (CK) may become a pivotal factor in resident selection. This systematic review and meta-analysis seeks to synthesize existing observational studies that assess the relationship between Step 2 CK scores and measures of resident performance. METHOD The authors searched MEDLINE, Web of Science, and Scopus databases using terms related to Step 2 CK in 2021. Two researchers identified studies investigating the association between Step 2 CK and measures of resident performance and included studies if they contained a bivariate analysis examining Step 2 CK scores' association with an outcome of interest: in-training examination (ITE) scores, board certification examination scores, select Accreditation Council for Graduate Medical Education core competency assessments, overall resident performance evaluations, or other subjective measures of performance. For outcomes that were investigated by 3 or more studies, pooled effect sizes were estimated with random-effects models. RESULTS Among 1,355 potential studies, 68 met inclusion criteria and 43 were able to be pooled. There was a moderate positive correlation between Step 2 CK and ITE scores (0.52, 95% CI 0.45-0.59, P < .01). There was a moderate positive correlation between Step 2 CK and ITE scores for both nonsurgical (0.59, 95% CI 0.51-0.66, P < .01) and surgical specialties (0.41, 95% CI 0.33-0.48, P < .01). There was a very weak positive correlation between Step 2 CK scores and subjective measures of resident performance (0.19, 95% CI 0.13-0.25, P < .01). CONCLUSIONS This study found Step 2 CK scores have a statistically significant moderate positive association with future examination scores and a statistically significant weak positive correlation with subjective measures of resident performance. These findings are increasingly relevant as Step 2 CK scores will likely become more important in resident selection.
Affiliation(s)
- Camron Shirkhodaie
- C. Shirkhodaie is a medical student, Pritzker School of Medicine, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-4279-3251
- Santiago Avila
- S. Avila is a medical student, Pritzker School of Medicine, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-3633-4304
- Henry Seidel
- H. Seidel is a medical student, Pritzker School of Medicine, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7364-1365
- Robert D Gibbons
- R.D. Gibbons is professor, Center for Health Statistics and Departments of Medicine and Public Health Sciences, University of Chicago, Chicago, Illinois
- Vineet M Arora
- V.M. Arora is professor, Department of Medicine, University of Chicago Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-4745-7599
- Jeanne M Farnan
- J.M. Farnan is professor, Department of Medicine, University of Chicago Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-1138-9416
6
Jeyaraju M, Linford H, Bosco Mendes T, Caufield-Noll C, Tackett S. Factors Leading to Successful Performance on U.S. National Licensure Exams for Medical Students: A Scoping Review. Acad Med 2023; 98:136-148. doi:10.1097/acm.0000000000004877. PMID: 35857389.
Abstract
PURPOSE To synthesize the evidence of the factors leading to successful performance on knowledge-based national licensure exams (NLEs) for medical students. METHOD The authors conducted a scoping review to summarize the peer-reviewed empiric literature that used United States Medical Licensing Examination (USMLE) Step 1 or Step 2 Clinical Knowledge or Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 1 or Level 2 Cognitive Evaluation scores as outcomes. The authors searched PubMed and Scopus without date restrictions through April 30, 2021. Two reviewers independently screened and selected studies for inclusion. Data were summarized narratively and with descriptive statistics. RESULTS The authors screened 1,185 unique citations and included 233 full-text studies in their review. Of these, 201 (86%) were studies of USMLE exams, 31 (13%) were studies of COMLEX exams, and 1 (0.4%) reported on both. The authors classified 29 studies (12%) as informing NLE preparation, 163 (70%) as attempting to identify predictive variables, and 76 (33%) as using NLE scores for program evaluation. Preparation studies found that the number of practice test items, practice exam scores, and less time in dedicated preparation correlated with higher NLE scores. Use of other commercial resources or study strategies was not consistently associated with higher scores. Predictive studies found the strongest relationships between individuals' performance on past assessments and their NLE scores. CONCLUSIONS The factors leading to successful performance on knowledge-based NLEs align with well-known principles from the cognitive sciences. Learners build on existing foundations of knowledge (reflected in their prior academic performance) and are likely to learn more efficiently with testing and spaced learning over time. While commercial test preparation resources are ubiquitous, there is no evidence that a single resource gives students a competitive advantage on NLEs. Developing habits of regular and continuous learning is necessary for clinical practice and successful NLE performance.
Affiliation(s)
- Maniraj Jeyaraju
- M. Jeyaraju was a medical student, University of Maryland School of Medicine, Baltimore, Maryland, at the time this study was completed. He is now a family medicine resident, University of North Carolina School of Medicine, Chapel Hill, North Carolina; ORCID: https://orcid.org/0000-0003-1170-2422
- Henry Linford
- H. Linford was a postgraduate year 1 transitional resident, Crozer Health, Upland, Pennsylvania, at the time this study was completed. He is now a psychiatry resident, Texas Institute for Graduate Medical Education and Research, San Antonio, Texas
- Thiago Bosco Mendes
- T. Bosco Mendes was an endocrinologist, Departamento de Medicina Interna, Universidade do Estado de São Paulo (Unesp), Botucatu, São Paulo, Brazil, at the time this study was completed. He is now an internal medicine resident, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; ORCID: https://orcid.org/0000-0001-8349-3303
- Christine Caufield-Noll
- C. Caufield-Noll was an informationist, National Institutes of Health Library, National Institutes of Health, Bethesda, Maryland, at the time this study was completed; ORCID: https://orcid.org/0000-0002-5637-3717
- Sean Tackett
- S. Tackett is associate professor of medicine and international medical education director, Division of General Internal Medicine, Johns Hopkins Bayview Medical Center, Baltimore, Maryland; ORCID: https://orcid.org/0000-0001-5369-7225
7
Cuddy MM, Liu C, Ouyang W, Barone MA, Young A, Johnson DA. An Examination of the Associations Among USMLE Step 3 Scores and the Likelihood of Disciplinary Action in Practice. Acad Med 2022; 97:1504-1510. doi:10.1097/acm.0000000000004775. PMID: 35675131.
Abstract
PURPOSE As the last examination in the United States Medical Licensing Examination (USMLE) sequence, Step 3 provides a safeguard before physicians enter into unsupervised practice. There is, however, little validity research focusing on Step 3 scores beyond examining its associations with other educational and professional assessments thought to cover similar content. This study examines the associations between Step 3 scores and subsequent receipt of disciplinary action taken by state medical boards for problematic behavior in practice. It analyzes Step 3 total, Step 3 computer-based case simulation (CCS), and Step 3 multiple-choice question (MCQ) scores. METHOD The final sample included 275,392 board-certified physicians who graduated from MD-granting medical schools and who passed Step 3 between 2000 and 2017. Cross-classified multilevel logistic regression models were used to examine the effects of Step 3 scores on the likelihood of receiving a disciplinary action, controlling for other USMLE scores and accounting for jurisdiction and specialty. RESULTS Results showed that physicians with higher Step 3 total, CCS, and MCQ scores tended to have lower chances of receiving a disciplinary action, after accounting for other USMLE scores. Specifically, a 1-standard-deviation increase in Step 3 total, CCS, and MCQ score was associated with a 23%, 11%, and 17% decrease in the odds of receiving a disciplinary action, respectively. The effect of Step 2 CK score on the likelihood of receiving a disciplinary action was statistically significant, while the effect of Step 1 score became statistically nonsignificant when other Step scores were included in the analysis. CONCLUSIONS Physicians who perform better on Step 3 are less likely to receive a disciplinary action from a state medical board for problematic behavior in practice. These findings provide some validity evidence for the use of Step 3 scores when making medical licensure decisions in the United States.
Affiliation(s)
- Monica M Cuddy
- M.M. Cuddy is measurement scientist, NBME, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-5756-9113
- Chunyan Liu
- C. Liu is senior psychometrician, NBME, Philadelphia, Pennsylvania
- Wenli Ouyang
- W. Ouyang is data analyst III, NBME, Philadelphia, Pennsylvania
- Michael A Barone
- M.A. Barone is vice president, Competency-Based Assessment, NBME, Philadelphia, Pennsylvania
- Aaron Young
- A. Young is vice president, Research and Data Integration, Federation of State Medical Boards, Euless, Texas
- David A Johnson
- D.A. Johnson is chief assessment officer, Federation of State Medical Boards, Euless, Texas
8
Wenghofer E, Boulet J. Medical Council of Canada Qualifying Examinations and performance in future practice. Can Med Educ J 2022; 13:53-61. doi:10.36834/cmej.73770. PMID: 36091726; PMCID: PMC9441123.
Abstract
The purpose of medical licensing examinations is to protect the public from practitioners who do not have adequate knowledge, skills, and abilities to provide acceptable patient care, and therefore evaluating the validity of these examinations is a matter of accountability. Our objective was to discuss the Medical Council of Canada's Qualifying Examinations (MCCQEs) Part I (QE1) and Part II (QE2) in terms of how well they reflect future performance in practice. We examined the supposition that satisfactory performance on the MCCQEs are important determinants of practice performance and, ultimately, patient outcomes. We examined the literature before the implementation of the QE2 (pre-1992), post QE2 but prior to the implementation of the new Blueprint (1992-2018), and post Blueprint (2018-present). The literature suggests that MCCQE performance is predictive of future physician behaviours, that the relationship between examination performance and outcomes did not attenuate with practice experience, and that associations between examination performance and outcomes made sense clinically. While the evidence suggests the MCC qualifying examinations measure the intended constructs and are predictive of future performance, the validity argument is never complete. As new competency requirements emerge, we will need to develop valid and reliable mechanisms for determining practice readiness in these areas.
Affiliation(s)
- Elizabeth Wenghofer
- School of Kinesiology and Health Sciences, Laurentian University; Division of Human Sciences, Northern Ontario School of Medicine, Ontario, Canada
- John Boulet
- National Board of Osteopathic Medical Examiners (NBOME); Uniformed Services University of the Health Sciences (USUHS), Bethesda, Maryland, USA
9
Girard AO, Qiu C, Lake IV, Chen J, Lopez CD, Yang R. US Medical Student Perspectives on the Impact of a Pass/Fail USMLE Step 1. J Surg Educ 2022; 79:397-408. doi:10.1016/j.jsurg.2021.09.010. PMID: 34602379.
Abstract
OBJECTIVE The purpose of this study is to (1) gather US medical student attitudes regarding pass or fail score reporting of the USMLE Step 1 exam and (2) investigate the impact of this new policy on specialty interest and redistribution of efforts to enhance individual competitiveness. DESIGN This is a cross-sectional analysis of US medical students surveyed from July to October 2020. Surveys were administered on social media and via medical school email listserv. Data were analyzed using Student t test and Chi-squared statistic, alpha = 0.01. SETTING Data analysis was conducted at Johns Hopkins University in Baltimore, Maryland. PARTICIPANTS This study included a sample of 852 students enrolled in US medical schools. RESULTS The plurality of students (39.0%) was in favor of the new policy; 30.9% of students were opposed. Students interested in highly competitive specialties (HCS) and students who scored 240 or higher on Step 1 ("high scorers") were more likely to oppose the policy compared with HCS-disinterested students and students who scored below 240 ("sub-240 scorers"). If students were to hypothetically take Step 1 with pass or fail scoring, most students report that they would dedicate less time studying than they had for the numerical exam (72.7%) and more time preparing for Step 2 CK (70.5%) and conducting research in HCS (59.6%). Sub-240 scorers would be more likely to apply to a more competitive specialty (44.4%). Nearly half of HCS-interested post-Step 1 students would be more likely to dual apply (48.7%), the majority of which were also high scorers (89.5%). CONCLUSIONS Students expressed polarized opinions regarding pass or fail Step 1 score reporting. Time spent studying for Step 1 may be displaced toward Step 2 CK and research. Residency programs in both HCS and non-HCS can expect an increase in applicant pool size and diversity.
Affiliation(s)
- Alisa O Girard
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland; Division of Plastic Surgery, Rutgers-Robert Wood Johnson Medical School, Piscataway, New Jersey
- Cecil Qiu
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Isabel V Lake
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jonlin Chen
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Christopher D Lopez
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Robin Yang
- Department of Plastic and Reconstructive Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
10
Mun F, Jeong S, Juliano PJ, Hennrikus WL. Perceptions of USMLE Step 1 Pass/Fail Score Reporting Among Orthopedic Surgery Residency Program Directors. Orthopedics 2022; 45:e30-e34. doi:10.3928/01477447-20211124-08. PMID: 34846244.
Abstract
The United States Medical Licensing Examination (USMLE) Step 1 examination will transition from graded to pass/fail scoring starting no earlier than January 2022. Orthopedic surgery residency programs will need to adapt to these changes. The goal of this study was to investigate the perceptions of orthopedic surgery residency program directors on the change of Step 1 from a graded to a pass/fail examination. We also investigated how the change would affect the other factors that are typically considered in the selection of orthopedic surgery residents. A survey was distributed to 161 directors of allopathic orthopedic surgery programs. Contact information was obtained from a national database. Of those contacted, 75 (46.6%) program directors responded. Most (85.3%) did not support the pass/fail change. Most believe that greater importance will be placed on the Step 2 Clinical Knowledge examination (96.0%), audition elective with their department (84.0%), personal knowledge of the applicant (78.7%), grades (74.7%), letters of recommendation from recognizable orthopedic surgeons (74.7%), and Alpha Omega Alpha status (69.3%). Most also believe that this change will advantage allopathic students who attend highly regarded schools (58.7%). Most of the program directors support a graded preclinical curriculum (69.3%) and caps on the number of orthopedic surgery residency applications (70.7%). Although most orthopedic surgery program directors disagree with the change to a pass/fail Step 1 examination, residency programs will need to reevaluate how they screen applicants for an interview once the scored Step 1 is no longer available. With this change, other factors, such as Step 2 score, audition rotations, and grades in clerkships, will be emphasized more heavily.
11
Nissan ME, Singh NP, Boyd CJ. Invited Commentary: Implementation of a Secondary Application to Increase Efficiency in the Plastic Surgery Match. Ann Plast Surg 2022; 88:129. doi:10.1097/sap.0000000000002738. PMID: 33587457.
12
Tamblyn R, Girard N, Boulet J, Dauphinee D, Habib B. Association of clinical competence, specialty and physician country of origin with opioid prescribing for chronic pain: a cohort study. BMJ Qual Saf 2021; 31:340-352. doi:10.1136/bmjqs-2021-013503. PMID: 34725228; PMCID: PMC9046738.
Abstract
Background Although little is known about why opioid prescribing practices differ between physicians, clinical competence, specialty training and country of origin may play a role. We hypothesised that physicians with stronger clinical competence and communication skills are less likely to prescribe opioids and prescribe lower doses, as do medical specialists and physicians from Asia. Methods Opioid prescribing practices were examined among international medical graduates (IMGs) licensed to practise in the USA who evaluated Medicare patients for chronic pain problems in 2014–2015. Clinical competence was assessed by the Educational Commission for Foreign Medical Graduates (ECFMG) Clinical Skills Assessment. Physicians in the ECFMG database were linked to the American Medical Association Masterfile. Patients evaluated for chronic pain were obtained by linkage to Medicare outpatient and prescription files. Opioid prescribing was measured within 90 days of evaluation visits. Prescribed dose was measured using morphine milligram equivalents (MMEs). Generalised estimating equation logistic and linear regression estimated the association of clinical competence, specialty, and country of origin with opioid prescribing and dose. Results 7373 IMGs evaluated 65 012 patients for chronic pain; 15.2% received an opioid prescription. Increased clinical competence was associated with reduced opioid prescribing, but only among female physicians. For every 10% increase in the clinical competence score, the odds of prescribing an opioid decreased by 16% for female physicians (OR 0.84, 95% CI 0.75 to 0.94) but not male physicians (OR 0.99, 95% CI 0.92 to 1.07). Country of origin was associated with prescribed opioid dose; US and Canadian citizens prescribed higher doses (adjusted MME difference +3.56). Primary care physicians were more likely to prescribe opioids, but surgical and hospital-based specialists prescribed higher doses. Conclusions Clinical competence at entry into US graduate training, physician gender, specialty and country of origin play a role in opioid prescribing practices.
Affiliation(s)
- Robyn Tamblyn: Department of Medicine and Department of Epidemiology and Biostatistics, McGill University, Montreal, Quebec, Canada
- Nadyne Girard: Clinical and Health Informatics Research Group, McGill University, Montreal, Quebec, Canada
- John Boulet: Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, Pennsylvania, USA
- Dale Dauphinee: Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, Pennsylvania, USA; McGill University, Montreal, Quebec, Canada
- Bettina Habib: Clinical and Health Informatics Research Group, McGill University, Montreal, Quebec, Canada
13
Sam AH, Bala L, Westacott RJ, Brown C. Is Academic Attainment or Situational Judgment Test Performance in Medical School Associated With the Likelihood of Disciplinary Action? A National Retrospective Cohort Study. Acad Med 2021; 96:1467-1475. [PMID: 34133342] [DOI: 10.1097/acm.0000000000004212]
Abstract
PURPOSE Disciplinary action imposed on physicians indicates their fitness to practice medicine is impaired and patient safety is potentially at risk. This national retrospective cohort study sought to examine whether there was an association between academic attainment or performance on a situational judgment test (SJT) in medical school and the risk of receiving disciplinary action within the first 5 years of professional practice in the United Kingdom. METHOD The authors included data from the UK Medical Education Database for 34,865 physicians from 33 U.K. medical schools that started the UK Foundation Programme (similar to internship) between 2014 and 2018. They analyzed data from 2 undergraduate medical assessments used in the United Kingdom: the Educational Performance Measure (EPM), which is based on academic attainment, and SJT, which is an assessment of professional attributes. The authors calculated hazard ratios (HRs) for EPM and SJT scores. RESULTS The overall rate of disciplinary action was low (65/34,865, 0.19%) and the mean time to discipline was 810 days (standard deviation [SD] = 440). None of the physicians with fitness to practice concerns identified as students went on to receive disciplinary action after they qualified as physicians. The multivariate survival analysis demonstrated that a score increase of 1 SD (approximately 7.6 percentage points) on the EPM reduced the hazard of disciplinary action by approximately 50% (HR = 0.51; 95% confidence interval [CI]: 0.38, 0.69; P < .001). There was not a statistically significant association between the SJT score and the hazard of disciplinary action (HR = 0.84; 95% CI: 0.62, 1.13; P = .24). CONCLUSIONS An increase in EPM score was significantly associated with a reduced hazard of disciplinary action, whereas performance on the SJT was not. Early identification of increased risk of disciplinary action may provide an opportunity for remediation and avoidance of patient harm.
Affiliation(s)
- Amir H Sam: head, Imperial College School of Medicine, Imperial College London, London, United Kingdom; ORCID: https://orcid.org/0000-0002-9599-9069
- Laksha Bala: clinical research fellow in medical education, Faculty of Medicine, Imperial College London, London, United Kingdom; ORCID: https://orcid.org/0000-0002-8242-379X
- Rachel J Westacott: senior clinical lecturer, Birmingham Medical School, University of Birmingham, Birmingham, United Kingdom; ORCID: https://orcid.org/0000-0001-9846-1961
- Celia Brown: associate professor in quantitative methods, Warwick Medical School, University of Warwick, Coventry, United Kingdom; ORCID: https://orcid.org/0000-0002-7526-0793
14
Arnhart KL, Cuddy MM, Johnson D, Barone MA, Young A. Multiple United States Medical Licensing Examination Attempts and the Estimated Risk of Disciplinary Actions Among Graduates of U.S. and Canadian Medical Schools. Acad Med 2021; 96:1319-1323. [PMID: 34133346] [DOI: 10.1097/acm.0000000000004210]
Abstract
PURPOSE The United States Medical Licensing Examination (USMLE) recently announced 2 policy changes: shifting from numeric score reporting on the Step 1 examination to pass/fail reporting and limiting examinees to 4 attempts for each Step component. In light of these policies, exam measures other than scores, such as the number of examination attempts, are of interest. Attempt limit policies are intended to ensure minimum standards of physician competency, yet little research has explored how Step attempts relate to physician practice outcomes. This study examined the relationship between USMLE attempts and the likelihood of receiving disciplinary actions from state medical boards. METHOD The sample population comprised 219,018 graduates of U.S. and Canadian MD-granting medical schools who passed all USMLE Step examinations by 2011 and obtained a medical license in the United States; data came from the NBME and the Federation of State Medical Boards. Logistic regressions estimated how attempts on the Step 1, Step 2 Clinical Knowledge (CK), and Step 3 examinations influenced the likelihood of receiving disciplinary actions by 2018, while accounting for physician characteristics. RESULTS A total of 3,399 physicians (2%) received at least 1 disciplinary action. Additional attempts needed to pass Steps 1, 2 CK, and 3 were associated with an increased likelihood of receiving disciplinary actions (odds ratio [OR]: 1.07, 95% confidence interval [CI]: 1.01, 1.13; OR: 1.09, 95% CI: 1.03, 1.16; OR: 1.11, 95% CI: 1.04, 1.17, respectively), after accounting for other factors. CONCLUSIONS Physicians who took multiple attempts to pass Steps 1, 2 CK, and 3 had a higher estimated likelihood of receiving disciplinary actions. This study offers support for licensure and practice standards that account for physicians' USMLE attempts. The relatively small effect sizes, however, caution policy makers against placing sole emphasis on this relationship.
Affiliation(s)
- Katie L Arnhart: senior research analyst, Research and Data Integration, Federation of State Medical Boards, Euless, Texas; ORCID: http://orcid.org/0000-0001-9975-6358
- Monica M Cuddy: measurement scientist, Center for Advanced Assessment, NBME, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-5756-9113
- David Johnson: chief assessment officer, Federation of State Medical Boards, Euless, Texas; ORCID: http://orcid.org/0000-0003-3669-1838
- Michael A Barone: vice president, Competency Based Assessment, NBME, Philadelphia, Pennsylvania; ORCID: http://orcid.org/0000-0002-4724-784X
- Aaron Young: vice president, Research and Data Integration, Federation of State Medical Boards, Euless, Texas; ORCID: http://orcid.org/0000-0002-5517-5874
15
Hamstra SJ, Cuddy MM, Jurich D, Yamazaki K, Burkhardt J, Holmboe ES, Barone MA, Santen SA. Exploring the Association Between USMLE Scores and ACGME Milestone Ratings: A Validity Study Using National Data From Emergency Medicine. Acad Med 2021; 96:1324-1331. [PMID: 34133345] [PMCID: PMC8378430] [DOI: 10.1097/acm.0000000000004207]
Abstract
PURPOSE The United States Medical Licensing Examination (USMLE) sequence and the Accreditation Council for Graduate Medical Education (ACGME) milestones represent 2 major components along the continuum of assessment from undergraduate through graduate medical education. This study examines associations between USMLE Step 1 and Step 2 Clinical Knowledge (CK) scores and ACGME emergency medicine (EM) milestone ratings. METHOD In February 2019, subject matter experts (SMEs) provided judgments of expected associations for each combination of Step examination and EM subcompetency. The resulting sets of subcompetencies with expected strong and weak associations were selected for convergent and discriminant validity analysis, respectively. National-level data for 2013-2018 were provided; the final sample included 6,618 EM residents from 158 training programs. Empirical bivariate correlations between milestone ratings and Step scores were calculated, then those correlations were compared with the SMEs' judgments. Multilevel regression analyses were conducted on the selected subcompetencies, in which milestone ratings were the dependent variable, and Step 1 score, Step 2 CK score, and cohort year were independent variables. RESULTS Regression results showed small but statistically significant positive relationships between Step 2 CK score and the subcompetencies (regression coefficients ranged from 0.02 [95% confidence interval (CI), 0.01-0.03] to 0.12 [95% CI, 0.11-0.13]; all P < .05), with the degree of association matching the SMEs' judgments for 7 of the 9 selected subcompetencies. For example, a 1 standard deviation increase in Step 2 CK score predicted a 0.12 increase in MK-01 milestone rating, when controlling for Step 1. Step 1 score showed a small statistically significant effect with only the MK-01 subcompetency (regression coefficient = 0.06 [95% CI, 0.05-0.07], P < .05). 
CONCLUSIONS These results provide incremental validity evidence supporting the use of Step 1 scores, Step 2 CK scores, and EM milestone ratings.
Affiliation(s)
- Stanley J. Hamstra: was vice president, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois, at the time of writing, and is now professor, Department of Surgery, University of Toronto, Toronto, Ontario, Canada, and adjunct professor, Department of Medical Education, Feinberg School of Medicine, Northwestern University, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-0680-366X
- Monica M. Cuddy: measurement scientist, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-5756-9113
- Daniel Jurich: manager, Psychometrics, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-1870-2436
- Kenji Yamazaki: senior analyst, Milestones Research and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7039-4717
- John Burkhardt: assistant professor, Emergency Medicine and Learning Health Sciences, University of Michigan, Ann Arbor, Michigan
- Eric S. Holmboe: chief, research, milestones development, and evaluation officer, Accreditation Council for Graduate Medical Education, Chicago, Illinois
- Michael A. Barone: vice president, Licensure Programs, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0002-4724-784X
- Sally A. Santen: senior associate dean and professor of emergency medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8327-8002
16
Rahil A, Hamamyh T, Al-Mohammed A, Kamel A, Abubeker I, Abu-Raddad L, Dargham S, Suliman S, Al Mohanadi D, Al Khal A. Do the selection criteria of internal medicine residency program predict resident performance? Qatar Med J 2021; 2021:20. [PMID: 34189112] [PMCID: PMC8216212] [DOI: 10.5339/qmj.2021.20]
Abstract
BACKGROUND A well-performing physician reflects the success of the residency program in selecting the best candidates for training. This study aimed to evaluate the selection criteria, mainly the United States Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) results and applicants' status as international or locally trained applicants, used by the medical education department and the internal medicine residency program at Hamad Medical Corporation in Qatar to predict residents' performance during their training. METHODS A retrospective chart review was performed for three batches of graduates who started residency training in 2011, 2012, and 2013. Each group completed 4 years of training. The USMLE Step 2 CK status of the applicant, in-training exam (ITE) scores, formative evaluation scores, Arab Board written and clinical exam pass rates, and other indicators were analyzed. Statistical analysis included chi-square tests and independent t-tests to identify associations. Multivariable analyses were conducted using logistic and linear regressions to test for adjusted associations. RESULTS The study included 118 (81 international/37 locally trained applicants) internal medicine residents. The ITE score correlated positively with the USMLE Step 2 CK score over the 4 years of training (r = 0.621, r = 0.587, r = 0.576, r = 0.571; p < 0.001) and among international compared with locally trained applicants (p < 0.001). The rate of passing the part 1 and part 2 written exams of the Arab Board was higher in international than in locally trained applicants, whereas the clinical Arab Board exam and formative evaluation were not associated with any criteria. CONCLUSIONS A higher USMLE Step 2 CK score correlated with better performance on the ITE but not with other performance indicators, whereas international applicants did better on both the ITE and the Arab Board written exam than local applicants. These variables may provide reasonable predictors of well-performing physicians.
Affiliation(s)
- Ali Rahil: Hamad General Hospital, Doha, Qatar
- Laith Abu-Raddad: Biomathematics Research Core, Weill Cornell Medical College, Qatar
- Soha Dargham: Biomathematics Research Core, Weill Cornell Medical College, Qatar
17
Ellis R, Cleland J, Scrimgeour DSG, Lee AJ, Brennan PA. A cross-sectional study examining the association between MRCS performance and surgeons receiving sanctions against their medical registration. Surgeon 2021; 20:211-215. [PMID: 34030984] [DOI: 10.1016/j.surge.2021.04.003]
Abstract
BACKGROUND Fitness to practice (FtP) investigations by the General Medical Council (GMC) safeguard patients and maintain the integrity of the medical profession. The likelihood of FtP sanctions is influenced by specialty and socio-demographic factors and can be predicted by performance at postgraduate examinations. This is the first study to characterise the prevalence of FtP sanctions in early-career surgeons and to examine the association with performance at the Membership of the Royal College of Surgeons (MRCS) examination. METHODS All UK graduates who attempted MRCS between September 2007 and January 2020 were matched to the GMC list of registered medical practitioners. Clinicians who had active FtP sanctions between 28th August 2018 and 28th August 2020 were identified. Data were anonymised by RCS England prior to analysis. RESULTS Of 11,660 candidates who attempted MRCS within the study period, only 31 (0.3%) had FtP sanctions between 2018 and 2020. Of these, 12 had active conditions on registration, seven had undertakings and 14 had warnings. There was no statistically significant difference in MRCS performance in either Part A or Part B of the examination between those with and those free from FtP sanctions (P > 0.05). CONCLUSIONS In this, the largest study of MRCS candidates to date, the prevalence of active FtP sanctions in early-career surgeons was 0.3%, significantly lower than the prevalence of sanctions across more experienced UK surgeons (0.9%). These data highlight early-career surgeons as a low-risk group for disciplinary action and should reassure patients and medical professionals of the rarity of FtP sanctions.
Affiliation(s)
- R Ellis: Institute of Applied Health Sciences, University of Aberdeen, AB25 2ZD, United Kingdom; Urology Department, Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom
- J Cleland: Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- D S G Scrimgeour: Institute of Applied Health Sciences, University of Aberdeen, AB25 2ZD, United Kingdom; Department of Colorectal Surgery, Aberdeen Royal Infirmary, Aberdeen, AB25 2ZN, United Kingdom
- A J Lee: Department of Medical Statistics, Institute of Applied Health Sciences, University of Aberdeen, AB25 2ZD, United Kingdom
- P A Brennan: Department of Maxillo-Facial Surgery, Queen Alexandra Hospital, Portsmouth, PO6 3LY, United Kingdom
18
Mun F, Scott AR, Cui D, Chisty A, Hennrikus WL, Hennrikus EF. Internal medicine residency program director perceptions of USMLE Step 1 pass/fail scoring: A cross-sectional survey. Medicine (Baltimore) 2021; 100:e25284. [PMID: 33847625] [PMCID: PMC8052063] [DOI: 10.1097/md.0000000000025284]
Abstract
The United States Medical Licensing Examination Step 1 will transition to a pass/fail exam starting no earlier than January 2022. Internal medicine residency programs will need to adapt to these changes. The purpose of this study was to investigate: 1. internal medicine residency program directors' perceptions of the change of Step 1 to a pass/fail exam, and 2. the impact on other factors considered for internal medicine residency selection. A validated REDCap survey was sent to 548 program directors at active Accreditation Council for Graduate Medical Education internal medicine residency programs. Contact information from the American Medical Association's Fellowship and Residency Electronic Interactive Database was used. The survey had 123 respondents (22.4%). Most internal medicine program directors do not support the pass/fail change. Greater importance will be placed on the Step 2 Clinical Knowledge exam, personal knowledge of the applicant, clerkship grades, and audition electives. Allopathic students from less highly regarded medical schools, as well as osteopathic and international students, will be disadvantaged. About half believe that schools should adopt a graded pre-clinical curriculum (51.2%) and that there should be residency application caps (54.5%). Internal medicine program directors mostly disagree with the pass/fail Step 1 transition. Residency programs will need to reevaluate how applicants are assessed, and other factors, such as Step 2 Clinical Knowledge score, personal knowledge of the applicant, clerkship grades, and audition rotations, will now be emphasized more heavily.
Affiliation(s)
- David Cui: Pennsylvania State University College of Medicine
- Alia Chisty: Pennsylvania State University College of Medicine, Department of Internal Medicine
- William L. Hennrikus: Pennsylvania State University College of Medicine; Bone and Joint Institute, Penn State Milton S. Hershey Medical Center, Hershey, PA, USA
- Eileen F. Hennrikus: Pennsylvania State University College of Medicine, Department of Internal Medicine
19
Wenghofer EF, Steele RS, Christiansen RG, Carter MH. Evaluation of a High Stakes Physician Competency Assessment: Lessons for Assessor Training, Program Accountability, and Continuous Improvement. J Contin Educ Health Prof 2021; 41:111-118. [PMID: 33929350] [DOI: 10.1097/ceh.0000000000000362]
Abstract
INTRODUCTION There is a dearth of evidence evaluating postlicensure high-stakes physician competency assessment programs. Our purpose was to contribute to this evidence by evaluating a high-stakes assessment for assessor inter-rater reliability and the relationship between performance on individual assessment components and overall performance. We did so to determine if the assessment tools identify specific competency needs of the assessed physicians and contribute to our understanding of physician dyscompetence more broadly. METHOD Four assessors independently reviewed 102 video-recorded assessments and scored physicians on seven assessment components and overall performance. Inter-rater reliability was measured using intraclass correlation coefficients with a multiple-rater, consistency, two-way random-effects model. Analysis of variance with least-significant-difference post-hoc analyses examined whether the mean component scores differed significantly by quartile ranges of overall performance. Linear regression analysis determined the extent to which each component score was associated with overall performance. RESULTS Intraclass correlation coefficients ranged between 0.756 and 0.876 for all components scored and were highest for overall performance. Regression indicated that individual component scores were positively associated with overall performance. Levels of variation in component scores differed significantly across quartile ranges, with higher variability among poorer performers. DISCUSSION High-stakes assessments can be conducted reliably and can identify performance gaps of potentially dyscompetent physicians. Physicians who performed well tended to do so in all aspects evaluated, whereas those who performed poorly demonstrated areas of strength and weakness. Understanding that dyscompetence rarely means a complete or catastrophic lapse in competence is vital to understanding how educational needs change through a physician's career.
Affiliation(s)
- Dr. Wenghofer: Full Professor, School of Rural and Northern Health, Laurentian University, Sudbury, Ontario, Canada
- Dr. Steele: Medical Director of Knowledge, Skills, Training, Assessment, and Training (KSTAR) Physician Programs, A&M Rural and Community Health Institute, Texas A&M University Health Science Center, College Station, TX
- Dr. Christiansen: Professor of Medicine, Department of Medicine, University of Illinois College of Medicine, Rockford, IL
- Dr. Carter: Clinical Assistant Professor of primary care medicine, Primary Care and Population Health, Texas A&M University Health Science Center, College Station, TX
20
Avery AC, Dowers KL, West AB, Graham BJ, Hellyer P, Avery PR, Ballweber LR, Hassel DM, Oaks JF, Frye MA. Student, faculty, and program outcomes associated with capstone examinations administered to veterinary students at Colorado State University. J Am Vet Med Assoc 2021; 257:165-175. [PMID: 32597728] [DOI: 10.2460/javma.257.2.165]
21
Rajesh A, Asaad M, Sridhar M. Binary Reporting of USMLE Step 1 Scores: Resident Perspectives. J Surg Educ 2021; 78:304-307. [PMID: 32600888] [DOI: 10.1016/j.jsurg.2020.06.013]
Abstract
The recent consensus from the Invitational Conference on USMLE Scoring recommended a transition to binary pass/fail reporting on the USMLE Step 1 exam, to be implemented from January 22, 2022. While this change was instituted in an effort to decrease medical student stress and reiterate the importance of Step 1 as merely a licensing or qualifying exam, the decision has profound implications for graduates of both United States and foreign medical schools. In addition to compounding the difficulties of resident selection by residency programs, the new system could exert significant mental and financial burden on medical students and potentially affect the diversity of graduate medical education in the United States. This article draws attention to the downstream effects of a pass/fail system on the future of medical and surgical education.
Affiliation(s)
- Aashish Rajesh: Department of Surgery, University of Texas Health Science Center, San Antonio, Texas
- Malke Asaad: Department of Plastic Surgery, MD Anderson Cancer Center, Houston, Texas
- Monica Sridhar: Department of Surgery, University of Texas Health Science Center, San Antonio, Texas
22
Rashid H, Coppola KM, Lebeau R. Three Decades Later: A Scoping Review of the Literature Related to the United States Medical Licensing Examination. Acad Med 2020; 95:S114-S121. [PMID: 33105189] [DOI: 10.1097/acm.0000000000003639]
Abstract
PURPOSE To conduct a scoping review of the timing, scope, and purpose of literature related to the United States Medical Licensing Examination (USMLE), given the recent impetus to revise USMLE scoring. METHOD The authors searched PubMed, PsycInfo, and ERIC for relevant articles published from 1990 to 2019. Articles selected for review were labeled as research or commentaries and coded by USMLE Step level, sample characteristics (e.g., year(s), single/multiple institutions), how scores were used (e.g., predictor/outcome/descriptor), and purpose (e.g., clarification/justification/description). RESULTS Of the 741 articles meeting inclusion criteria, 636 were research and 105 were commentaries. Publication totals in the past 5 years exceeded those of the first 20 years. Step 1 was the sole focus of 38% of all publications and was included in 84%. Approximately half of all research articles used scores as a predictor or outcome measure related to other curricular/assessment efforts, with a marked increase in the use of scores as predictors in the past 10 years. The overwhelming majority of studies were classified as descriptive in purpose. CONCLUSIONS Nearly 30 years after the inception of the USMLE, aspirations for its predictive utility are rising faster than the evidence supporting the manner in which the scores are used. A closer look is warranted to systematically review and analyze the contexts and purposes for which USMLE scores can productively be used. Future research should explore cognitive and noncognitive factors that can be used in conjunction with constrained use of USMLE results to inform evaluation of medical students and schools and to support the residency selection process.
Affiliation(s)
- Hanin Rashid: associate director, Office for Advancing Learning, Teaching, and Assessment, and assistant professor, Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey
- Kristen M Coppola: assistant professor, Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey
- Robert Lebeau: director, Office for Advancing Learning, Teaching, and Assessment, and Cognitive Skills Program, Rutgers Robert Wood Johnson Medical School, Piscataway, New Jersey
23
Integrated Plastic Surgery Applicant Review: Important Factors and Selection Criteria. Plast Reconstr Surg Glob Open 2020; 8:e2892. [PMID: 32802635] [PMCID: PMC7413791] [DOI: 10.1097/gox.0000000000002892]
Abstract
Matching into integrated plastic surgery residency is highly competitive. Applicants to these programs are among the most accomplished graduating medical students, consistently demonstrating some of the highest United States Medical Licensing Examination scores, mean numbers of research publications, and rates of Alpha Omega Alpha Honor Medical Society membership. The applicant review process requires programs to rely on a number of objective and subjective factors to determine which of these qualified applicants have the most potential for success. We outline these factors, discuss their correlation with resident performance, and provide our institution’s applicant review process both for applicants hoping to optimize their applications for success in the National Resident Matching Program and for program faculty hoping to optimize their resident selection process.
24
Krupat E, Dienstag JL, Padrino SL, Mayer JE, Shore MF, Young A, Chaudhry HJ, Pelletier SR, Reis BY. Do Professionalism Lapses in Medical School Predict Problems in Residency and Clinical Practice? Acad Med 2020; 95:888-895. [PMID: 31895703] [DOI: 10.1097/acm.0000000000003145]
Abstract
PURPOSE Recognizing that physicians must exhibit high levels of professionalism, researchers have attempted to identify the precursors of clinicians' professionalism difficulties, typically using retrospective designs that trace sanctioned physicians back to medical school. To better establish relative risk for professionalism lapses in practice, however, this relationship must also be studied prospectively. Therefore, this study investigated the sequelae of medical school professionalism lapses by following students with medical school professionalism problems into residency and practice. METHOD Beginning in 2014, 108 graduates from Harvard Medical School and Case Western Reserve University School of Medicine who appeared before their schools' review boards between 1993 and 2007 for professionalism-related reasons were identified, as well as 216 controls matched by sex, minority status, and graduation year. Prematriculation information and medical school performance data were collected for both groups. Outcomes for the groups were studied at 2 points in time: ratings by residency directors, and state medical board sanctions and malpractice suits during clinical practice. RESULTS Compared with controls, students who appeared before their schools' review boards were over 5 times more likely to undergo disciplinary review during residency (16% vs 3%, respectively) and almost 4 times more likely to require remediation or counseling (35% vs 9%, respectively). During clinical practice, 10% of those who had made review board appearances were sued or sanctioned vs 5% of controls. Logistic regression for these outcomes indicated, however, that professional lapses in medical school were not the only, or even the most important, predictor of problems in practice. 
CONCLUSIONS Students with professionalism lapses in medical school are significantly more likely to experience professionalism-related problems during residency and practice, although other factors may also play an important predictive role.
Affiliation(s)
- Edward Krupat
- E. Krupat is associate professor of medicine, Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts. J.L. Dienstag is interim dean for faculty affairs and professor of medicine, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts. S.L. Padrino is assistant dean for clinical sciences, and assistant professor, Departments of Medicine and Psychiatry, Case Western Reserve University School of Medicine, Cleveland, Ohio; ORCID: http://orcid.org/0000-0001-5637-5870. J.E. Mayer Jr is professor of surgery, Boston Children's Hospital, Boston, Massachusetts. M.F. Shore, deceased, was professor emeritus, Department of Psychiatry, McLean Hospital, Harvard Medical School, Belmont, Massachusetts. A. Young is assistant vice president, Research and Data Integration, Federation of State Medical Boards, Euless, Texas; ORCID: http://orcid.org/0000-0002-5517-5874. H.J. Chaudhry is president and chief executive officer, Federation of State Medical Boards, Euless, Texas; ORCID: http://orcid.org/0000-0003-3356-1106. S.R. Pelletier is senior project manager, Office of Educational Quality Improvement, Harvard Medical School, Boston, Massachusetts. B.Y. Reis is director, Predictive Medicine Group, Harvard Medical School and Computational Health Informatics Program, Boston Children's Hospital, Boston, Massachusetts
25
Roberts WL, Gross GA, Gimpel JR, Smith LL, Arnhart K, Pei X, Young A. An Investigation of the Relationship Between COMLEX-USA Licensure Examination Performance and State Licensing Board Disciplinary Actions. Acad Med 2020;95:925-930. [PMID: 31626002 DOI: 10.1097/acm.0000000000003046]
Abstract
PURPOSE Passing the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA) serves as a licensing requirement, yet little is known about the relationship between this high-stakes examination and performance outcomes. This study examined the relationship between COMLEX-USA scores and disciplinary actions received by osteopathic physicians. METHOD Data for osteopathic physicians (N = 26,383) who graduated from medical school between 2004 and 2013 were analyzed using multinomial logistic regression to assess the relationship between COMLEX-USA scores and placement into one of 3 disciplinary action categories relative to no action received, controlling for years in practice and gender. RESULTS Less than 1% of physicians in this study (n = 187) had a disciplinary action(s). Controlling for all COMLEX-USA levels, years in practice, and gender, higher Level 3 scores were associated with significantly decreased odds for all action categories: revoked license (odds ratio [OR] = 0.51, 95% confidence interval [CI] 0.36, 0.72; P < .001), imposed limitations to practice (OR = 0.59, 95% CI 0.41, 0.84; P < .01), and other action imposed (OR = 0.48, 95% CI 0.33, 0.69; P < .001), relative to not receiving an action. In these same models, higher Level 2 Performance Evaluation Biomedical/Biomechanical Domain scores decreased the odds for an action that revoked a license (OR = 0.75, 95% CI 0.58, 0.98; P < .05) and imposed limitations to practice (OR = 0.64, 95% CI 0.49, 0.84; P < .001). CONCLUSIONS These findings provide evidence that the COMLEX-USA delivers useful information regarding the likelihood of a practitioner receiving state board disciplinary actions.
Affiliation(s)
- William L Roberts
- W.L. Roberts is director, Psychometrics/Research, Clinical Skills Testing, National Board of Osteopathic Medical Examiners, Conshohocken, Pennsylvania; ORCID: https://orcid.org/0000-0001-6175-8059. G.A. Gross is vice president, Clinical Skills Testing, National Board of Osteopathic Medical Examiners, Conshohocken, Pennsylvania. J.R. Gimpel is president and chief executive officer, National Board of Osteopathic Medical Examiners, Conshohocken, Pennsylvania. L.L. Smith is senior psychometrician, Clinical Skills Testing, National Board of Osteopathic Medical Examiners, Conshohocken, Pennsylvania. K. Arnhart is senior research analyst, Research and Data Integration, Federation of State Medical Boards, Euless, Texas. X. Pei is senior research analyst, Research and Data Integration, Federation of State Medical Boards, Euless, Texas. A. Young is assistant vice president, Research and Data Integration, Federation of State Medical Boards, Euless, Texas
26
Chaudhry HJ, Katsufrakis PJ, Tallia AF. The USMLE Step 1 Decision: An Opportunity for Medical Education and Training. JAMA 2020;323:2017-2018. [PMID: 32142115 DOI: 10.1001/jama.2020.3198]
Affiliation(s)
- Alfred F Tallia
- Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey
27
Kopp JP, Ibáñez B, Jones AT, Pei X, Young A, Arnhart K, Rizzo AG, Buyske J. Association Between American Board of Surgery Initial Certification and Risk of Receiving Severe Disciplinary Actions Against Medical Licenses. JAMA Surg 2020;155:e200093. [PMID: 32186688 DOI: 10.1001/jamasurg.2020.0093]
Abstract
Importance Board certification is used as a marker of surgeon quality and professionalism. Although some research has linked certification in surgery to outcomes, more research is needed. Objective To measure associations between surgeons obtaining American Board of Surgery (ABS) certification and examination performance with receiving future severe disciplinary actions against their medical licenses. Design, Setting, and Participants Retrospective analysis of severe license action rates for surgeons who attempted ABS certification based on certification status and examination performance. Surgeons who attempted to become certified were classified as certified or failing to obtain certification. Additionally, groups were further categorized based on whether the surgeon had to repeat examinations and whether they ultimately passed. The study included surgeons who initially attempted certification between 1976 and 2017 (n = 44 290). Severe license actions from 1976 to 2018 were obtained from the Federation of State Medical Boards, and certification data were obtained from the ABS database. Main Outcomes and Measures Severe license action rates were analyzed across certified surgeons and those failing to obtain certification, as well as across examination performance groups. Results The final dataset included 36 197 men (81.7%) and 8093 women (18.3%). The incidence of severe license actions was significantly greater for surgeons who attempted and failed to obtain certification (incidence rate per 1000 person-years = 2.49; 95% CI, 2.13-2.85) than surgeons who were certified (incidence rate per 1000 person-years = 0.77; 95% CI, 0.71-0.83). Adjusting for sex and international medical graduate status, the risk of receiving a severe license action across time was also significantly greater for surgeons who failed to obtain certification.
Surgeons who progressed further in the certification examination sequence and had fewer repeated examinations had a lower incidence and less risk over time of receiving severe license actions. Conclusions and Relevance Obtaining board certification was associated with a lower rate of receiving severe license actions from a state medical board. Passing examinations in the certification examination process on the first attempt was also associated with lower severe license action rates. This study provides supporting evidence that board certification is 1 marker of surgeon quality and professionalism.
Affiliation(s)
- Jason P Kopp
- American Board of Surgery, Philadelphia, Pennsylvania
- Xiaomei Pei
- Federation of State Medical Boards, Euless, Texas
- Aaron Young
- Federation of State Medical Boards, Euless, Texas
- Jo Buyske
- American Board of Surgery, Philadelphia, Pennsylvania
28
George P, Santen S, Hammoud M, Skochelak S. Stepping Back: Re-evaluating the Use of the Numeric Score in USMLE Examinations. Med Sci Educ 2020;30:565-567. [PMID: 34457702 PMCID: PMC8368936 DOI: 10.1007/s40670-019-00906-y]
Abstract
There are increasing concerns from medical educators about students' over-emphasis on preparing for a high-stakes licensing examination during medical school, especially the US Medical Licensing Examination (USMLE) Step 1. Residency program directors' use of the numeric score (otherwise known as the three-digit score) on Step 1 to screen and select applicants drives these concerns. Since the USMLE was not designed as a residency selection tool, the use of numeric scores for this purpose is often referred to as a secondary and unintended use of the USMLE score. Educators and students are concerned about USMLE's potentially negative influence on curricular innovation and the role of high-stakes examinations in student and trainee well-being. Changing the score reporting of the examinations from a numeric score to pass/fail has been suggested by some. This commentary first reviews the primary and secondary uses of USMLE scores. We then focus on the advantages and disadvantages of the currently reported numeric score using Messick's conceptualization of construct validity as our framework. Finally, we propose a path forward to design a comprehensive, more holistic review of residency candidates.
Affiliation(s)
- Paul George
- Warren Alpert Medical School of Brown University, 222 Richmond Street, Providence, RI 02912 USA
- Sally Santen
- Virginia Commonwealth University School of Medicine, 1201 East Marshall Street, Box 980565, Richmond, VA 23298 USA
- Maya Hammoud
- University of Michigan Medical School, 1540 E Hospital Dr, SPC 4276, Ann Arbor, MI 48109-4276 USA
- Susan Skochelak
- American Medical Association, 330 N. Wabash-43rd Floor, Chicago, IL 60611-5885 USA
29
Jurich D, Santen SA, Paniagua M, Fleming A, Harnik V, Pock A, Swan-Sein A, Barone MA, Daniel M. Effects of Moving the United States Medical Licensing Examination Step 1 After Core Clerkships on Step 2 Clinical Knowledge Performance. Acad Med 2020;95:111-121. [PMID: 31365399 PMCID: PMC6924934 DOI: 10.1097/acm.0000000000002921]
Abstract
PURPOSE To investigate the effect of a change in the United States Medical Licensing Examination Step 1 timing on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming Medical College Admission Test (MCAT) score to Step 2 CK performance pre and post change. METHOD Four schools that moved Step 1 after core clerkships between academic years 2008-2009 and 2017-2018 were analyzed. Standard t tests were used to examine the change in Step 2 CK scores pre and post change. Tests of differences in proportions were used to evaluate whether Step 2 CK failure rates differed between curricular change groups. Linear regressions were used to examine the relationships between Step 2 CK performance, lag time and incoming MCAT score, and curricular change group. RESULTS Step 2 CK performance did not change significantly (P = .20). Failure rates remained highly consistent (pre change: 1.83%; post change: 1.79%). The regression indicated that lag time had a significant effect on Step 2 CK performance, with scores declining with increasing lag time, with small but significant interaction effects between MCAT and Step 2 CK scores. Students with lower incoming MCAT scores tended to perform better on Step 2 CK when Step 1 was after clerkships. CONCLUSIONS Moving Step 1 after core clerkships appears to have had no significant impact on Step 2 CK scores or failure rates, supporting the argument that such a change is noninferior to the traditional model. Students with lower MCAT scores benefit most from the change.
Affiliation(s)
- Daniel Jurich
- D. Jurich is senior psychometrician, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Sally A. Santen
- S.A. Santen is senior associate dean of evaluation, assessment and scholarship of learning and professor of emergency medicine, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Miguel Paniagua
- M. Paniagua is medical advisor, Test Development Services, National Board of Medical Examiners, and adjunct associate professor, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Amy Fleming
- A. Fleming is associate dean for medical student affairs and professor of pediatrics, Vanderbilt University School of Medicine, Nashville, Tennessee
- Victoria Harnik
- V. Harnik is associate dean for curriculum and associate professor, Department of Cell Biology, New York University School of Medicine, New York, New York
- Arnyce Pock
- A. Pock is associate dean for curriculum and associate professor of medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland
- Aubrie Swan-Sein
- A. Swan-Sein is director, Center for Education Research and Evaluation, and assistant professor of educational assessment, Columbia University Vagelos College of Physicians and Surgeons, New York, New York
- Michael A. Barone
- M.A. Barone is vice president of licensure, National Board of Medical Examiners, Philadelphia, Pennsylvania
- Michelle Daniel
- M. Daniel is assistant dean for curriculum and associate professor, Departments of Emergency Medicine and Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: http://orcid.org/0000-0001-8961-7119
30
Beck Dallaghan GL, Byerley JS, Howard N, Bennett WC, Gilliland KO. Medical School Resourcing of USMLE Step 1 Preparation: Questioning the Validity of Step 1. Med Sci Educ 2019;29:1141-1145. [PMID: 34457594 PMCID: PMC8368791 DOI: 10.1007/s40670-019-00822-1]
Affiliation(s)
- Gary L. Beck Dallaghan
- Office of Medical Education, University of North Carolina School of Medicine, 108 Taylor Hall, CB#7321, Chapel Hill, NC 27599-7321 USA
- Julie Story Byerley
- Office of Medical Education, University of North Carolina School of Medicine, 108 Taylor Hall, CB#7321, Chapel Hill, NC 27599-7321 USA
- Neva Howard
- Office of Medical Education, University of North Carolina School of Medicine, 108 Taylor Hall, CB#7321, Chapel Hill, NC 27599-7321 USA
- William C. Bennett
- Office of Medical Education, University of North Carolina School of Medicine, 108 Taylor Hall, CB#7321, Chapel Hill, NC 27599-7321 USA
- Kurt O. Gilliland
- Office of Medical Education, University of North Carolina School of Medicine, 108 Taylor Hall, CB#7321, Chapel Hill, NC 27599-7321 USA
31
Garber AM. Use of Filters for Residency Application Review: Results From the Internal Medicine In-Training Examination Program Director Survey. J Grad Med Educ 2019;11:704-707. [PMID: 31871573 PMCID: PMC6919169 DOI: 10.4300/jgme-d-19-00345.1]
Abstract
BACKGROUND The increase in applications to residency programs, known as "application inflation," creates challenges for program directors (PDs). Prior studies have shown that internal medicine (IM) PDs utilize criteria, such as United States Medical Licensing Examination (USMLE) Step examination performance, when reviewing applications. However, little is known about how early these filters are utilized in the application review cycle. OBJECTIVE This study sought to assess the frequency and types of filters utilized by IM PDs during initial residency application screening and prior to more in-depth application review. METHODS A web-based request for the 2016 Internal Medicine In-Training Examination (IM-ITE) PD Survey was sent to IM PDs. Responses from this survey were analyzed, excluding non-US programs. RESULTS With a 50% response rate (214 of 424), IM PDs responded that the most commonly used data points to filter applicants prior to in-depth application review were the USMLE Step 2 Clinical Knowledge score (32%, 67 of 208), USMLE Step 1 score (24%, 50 of 208), and medical school attended (12%, 25 of 208). Over half of US IM PD respondents (55%, 114 of 208) indicated that they list qualifying interview criteria on their program website, and 31% of respondents (50 of 160) indicated that more than half of their applicant pool does not meet the program's specified interview criteria. CONCLUSIONS Results from the 2016 IM-ITE PD Survey indicate many IM PDs use filters for initial application screening, and that these filters, when available to applicants, do not affect many applicants' decisions to apply.
32
Swails JL, Aibana O, Stoll BJ. The Conundrum of the United States Medical Licensing Examination Score Reporting Structure. JAMA 2019;322:605-606. [PMID: 31322646 DOI: 10.1001/jama.2019.9669]
Affiliation(s)
- Jennifer L Swails
- McGovern Medical School, Department of Internal Medicine, UT Health, Houston, Texas
- Omowunmi Aibana
- McGovern Medical School, Department of Internal Medicine, UT Health, Houston, Texas
- Barbara J Stoll
- McGovern Medical School, Medical Sciences, UT Health, Houston, Texas
33
Sharma A, Schauer DP, Kelleher M, Kinnear B, Sall D, Warm E. USMLE Step 2 CK: Best Predictor of Multimodal Performance in an Internal Medicine Residency. J Grad Med Educ 2019;11:412-419. [PMID: 31440335 PMCID: PMC6699543 DOI: 10.4300/jgme-d-19-00099.1]
Abstract
BACKGROUND Internal medicine (IM) residency programs receive information about applicants via academic transcripts, but studies demonstrate wide variability in satisfaction with and usefulness of this information. In addition, many studies compare application materials to only 1 or 2 assessment metrics, usually standardized test scores and work-based observational faculty assessments. OBJECTIVE We sought to determine which application materials best predict performance across a broad array of residency assessment outcomes generated by standardized testing and a yearlong IM residency ambulatory long block. METHODS In 2019, we analyzed available Electronic Residency Application Service data for 167 categorical IM residents, including advanced degree status, research experience, failures during medical school, undergraduate medical education award status, and United States Medical Licensing Examination (USMLE) scores. We compared these with post-match residency multimodal performance, including standardized test scores and faculty member, peer, allied health professional, and patient-level assessment measures. RESULTS In multivariate analyses, USMLE Step 2 Clinical Knowledge (CK) scores were most predictive of performance across all residency performance domains measured. Having an advanced degree was associated with higher patient-level assessments (eg, physician listens, physician explains, etc). USMLE Step 1 scores were associated with in-training examination scores only. None of the other measured application materials predicted performance. CONCLUSIONS USMLE Step 2 CK scores were the highest predictors of residency performance across a broad array of performance measurements generated by standardized testing and an IM residency ambulatory long block.
34
Pugh D, De Champlain A, Touchie C. Plus ça change, plus c'est pareil: Making a continued case for the use of MCQs in medical education. Med Teach 2019;41:569-577. [PMID: 30299196 DOI: 10.1080/0142159x.2018.1505035]
Abstract
Despite the increased emphasis on the use of workplace-based assessment in competency-based education models, there is still an important role for the use of multiple choice questions (MCQs) in the assessment of health professionals. The challenge, however, is to ensure that MCQs are developed in a way to allow educators to derive meaningful information about examinees' abilities. As educators' needs for high-quality test items have evolved so has our approach to developing MCQs. This evolution has been reflected in a number of ways including: the use of different stimulus formats; the creation of novel response formats; the development of new approaches to problem conceptualization; and the incorporation of technology. The purpose of this narrative review is to provide the reader with an overview of how our understanding of the use of MCQs in the assessment of health professionals has evolved to better measure clinical reasoning and to improve both efficiency and item quality.
Affiliation(s)
- Debra Pugh
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Claire Touchie
- Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Medical Council of Canada, Ottawa, ON, Canada
35
Gauer JL, Jackson JB. The association between United States Medical Licensing Examination scores and clinical performance in medical students. Adv Med Educ Pract 2019;10:209-216. [PMID: 31114422 PMCID: PMC6497117 DOI: 10.2147/amep.s192011]
Abstract
Purpose: United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores are frequently used to evaluate applicants to residency programs. Recent literature questions the value of USMLE scores for evaluation of residency applicants, in part due to a lack of evidence supporting a relationship with clinical performance. This study explored the relationship between USMLE scores and medical students' clinical performance, as measured by the count of honors grades received in core clinical clerkships. Methods: USMLE Step 1 and Step 2 CK scores and number of honors grades per student in seven core clinical clerkships were obtained from 1,511 medical students who graduated in 2013-2017 from two medical schools. The relationships between variables were analyzed using correlation coefficients, independent-samples t-tests, and hierarchical multiple regression. Results: Count of honors grades correlated with both Step 1 (R=0.480, P<0.001) and Step 2 CK (R=0.542, P<0.001). After correcting for gender, institution, and test-taking ability (using MCAT scores as a proxy for test-taking ability) in a hierarchical multiple regression model, Step 1 and Step 2 CK scores together explained 22.2% of the variance in count of honors grades. Conclusion: USMLE Step 1 and Step 2 CK scores moderately correlate with the number of honors grades per student in core clinical clerkships. This relationship is maintained even after correcting for gender, institution, and test-taking ability. These results indicate that USMLE scores have a positive linear association with clinical performance as a medical student.
Affiliation(s)
- J Brooks Jackson
- Roy J. and Lucille A. Carver College of Medicine, University of Iowa, Iowa City, IA, USA
36
DuBois JM, Anderson EE, Chibnall JT, Diakov L, Doukas DJ, Holmboe ES, Koenig HM, Krause JH, McMillan G, Mendelsohn M, Mozersky J, Norcross WA, Whelan AJ. Preventing Egregious Ethical Violations in Medical Practice: Evidence-Informed Recommendations from a Multidisciplinary Working Group. J Med Regul 2019;104:23-31. [PMID: 30984914 DOI: 10.30770/2572-1852-104.4.23]
Abstract
This article reports the consensus recommendations of a working group that was convened at the end of a 4-year research project funded by the National Institutes of Health that examined 280 cases of egregious ethical violations in medical practice. The group reviewed data from the parent project, as well as other research on sexual abuse of patients, criminal prescribing of controlled substances, and unnecessary invasive procedures that were prosecuted as fraud. The working group embraced the goals of making such violations significantly less frequent and, when they do occur, identifying them sooner and taking necessary steps to ensure they are not repeated. Following review of data and previously published recommendations, the working group developed 10 recommendations that provide a starting point to meet these goals. Recommendations address leadership, oversight, tracking, disciplinary actions, education of patients, partnerships with law enforcement, further research and related matters. The working group recognized the need for further refinement of the recommendations to ensure feasibility and appropriate balance between protection of patients and fairness to physicians. While full implementation of appropriate measures will require time and study, we believe it is urgent to take visible actions to acknowledge and address the problem at hand.
37
Rubright JD, Jodoin M, Barone MA. Examining Demographics, Prior Academic Performance, and United States Medical Licensing Examination Scores. Acad Med 2019;94:364-370. [PMID: 30024473 DOI: 10.1097/acm.0000000000002366]
Abstract
PURPOSE To examine whether demographic differences exist in United States Medical Licensing Examination (USMLE) scores and the extent to which any differences are explained by students' prior academic achievement. METHOD The authors completed hierarchical linear modeling of data for U.S. and Canadian allopathic and osteopathic medical graduates testing on USMLE Step 1 during or after 2010, and completing USMLE Step 3 by 2015. Main outcome measures were computer-based USMLE examinations: Step 1, Step 2 Clinical Knowledge, and Step 3. Test-taker characteristics included sex, self-identified race, U.S. citizenship status, English as a second language, and age at first Step 1 attempt. Covariates included composite Medical College Admission Test (MCAT) scores, undergraduate grade point average (GPA), and previous USMLE scores. RESULTS A total of 45,154 examinees from 172 medical schools met the inclusion criteria. The sample was 67% white and 48% female; 3.7% non-U.S. citizens; and 7.4% with English as a second language. Hierarchical linear models examined demographic variables with and without covariates including MCAT scores and GPA. All Step examinations showed significant differences by gender after adding covariates, varying by Step. Racial differences were observed for each Step, attenuated by the addition of covariates. CONCLUSIONS Demographic differences in USMLE performance were tempered by previous examination performance and undergraduate performance. Additional research is required to identify factors that contribute to demographic differences, can aid educators' identification of students who would benefit from assistance preparing for USMLE, and can assist residency program directors in assessing performance measures while meeting diversity goals.
Affiliation(s)
- Jonathan D Rubright
- J.D. Rubright is senior psychometrician, National Board of Medical Examiners, Philadelphia, Pennsylvania. M. Jodoin is vice president of psychometrics and data analysis, National Board of Medical Examiners, Philadelphia, Pennsylvania. M.A. Barone is vice president of licensure, National Board of Medical Examiners, Philadelphia, Pennsylvania
38
Katsufrakis PJ, Chaudhry HJ. Improving Residency Selection Requires Close Study and Better Understanding of Stakeholder Needs. Acad Med 2019;94:305-308. [PMID: 30570495 DOI: 10.1097/acm.0000000000002559]
Abstract
The United States Medical Licensing Examination has long been valued by state medical boards as an evidence-based, objective assessment of an individual's progressive readiness for the unsupervised practice of medicine. As a secondary use, it is also valued by residency program directors in resident selection. In response to Chen and colleagues' consideration of changing Step 1 scoring to pass/fail, contextual and germane information is offered in this Invited Commentary, including a discussion of potential consequences, risks, and benefits of such a change. A review of stakeholders involved in the residency application process and their possible reactions to a scoring change precedes a discussion of possible changes to the process-changes that may better address expressed concerns. In addition to pass/fail scoring, these include limiting score releases only to examinees, changing the timing of score releases, increasing the amount and improving the quality of information about residency programs available to applicants, developing additional quantitative measures of applicant characteristics important to residency programs, and developing a rating system for medical school student evaluations. Thoughtful and broad consideration of stakeholders and their concerns, informed by the best evidence available, will be necessary to maximize the potential for improvement and minimize the risk of unintended adverse consequences resulting from any changes to the status quo. An upcoming invitational conference in 2019 that is being organized by several stakeholder organizations is expected to further explore underlying issues and concerns related to these options.
Affiliation(s)
- Peter J Katsufrakis
- P.J. Katsufrakis is president and CEO, National Board of Medical Examiners, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-9077-9190. H.J. Chaudhry is president and CEO, Federation of State Medical Boards, Euless, Texas
39
DuBois JM, Anderson EE, Chibnall JT, Mozersky J, Walsh HA. Serious Ethical Violations in Medicine: A Statistical and Ethical Analysis of 280 Cases in the United States From 2008-2016. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2019; 19:16-34. [PMID: 30676904 PMCID: PMC6460481 DOI: 10.1080/15265161.2018.1544305] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Serious ethical violations in medicine, such as sexual abuse, criminal prescribing of opioids, and unnecessary surgeries, directly harm patients and undermine trust in the profession of medicine. We review the literature on violations in medicine and present an analysis of 280 cases. Nearly all cases involved repeated instances (97%) of intentional wrongdoing (99%), by males (95%) in nonacademic medical settings (95%), with oversight problems (89%) and a selfish motive such as financial gain or sex (90%). More than half of cases involved a wrongdoer with a suspected personality disorder or substance use disorder (51%). Despite clear patterns, no factors provide readily observable red flags, making prevention difficult. Early identification and intervention in cases requires significant policy shifts that prioritize the safety of patients over physician interests in privacy, fair processes, and proportionate disciplinary actions. We explore a series of 10 questions regarding policy, oversight, discipline, and education options. Satisfactory answers to these questions will require input from diverse stakeholders to help society negotiate effective and ethically balanced solutions.
Affiliation(s)
- James M. DuBois
- Division of General Medical Sciences, Washington University School of Medicine, 660 S. Euclid Avenue, Campus Box 8005, St. Louis, MO 63110, USA
- Emily E. Anderson
- Neiswanger Institute for Bioethics & Health Policy, Loyola University Chicago Stritch School of Medicine, 2160 S. First Avenue, Maywood, IL 60153
- John T. Chibnall
- Department of Neurology & Psychiatry, Saint Louis University School of Medicine, 1438 S. Grand Blvd., St. Louis, MO 63104
- Jessica Mozersky
- Division of General Medical Sciences, Washington University School of Medicine, 660 S. Euclid Avenue, Campus Box 8005, St. Louis, MO 63110, USA
- Heidi A. Walsh
- Division of General Medical Sciences, Washington University School of Medicine, 660 S. Euclid Avenue, Campus Box 8005, St. Louis, MO 63110, USA
40
Wakeford R, Ludka K, Woolf K, McManus IC. Fitness to practise sanctions in UK doctors are predicted by poor performance at MRCGP and MRCP(UK) assessments: data linkage study. BMC Med 2018; 16:230. [PMID: 30522486 PMCID: PMC6284295 DOI: 10.1186/s12916-018-1214-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 11/09/2018] [Indexed: 01/01/2023] Open
Abstract
BACKGROUND The predictive validity of postgraduate examinations, such as MRCGP and MRCP(UK) in the UK, is hard to assess, particularly for clinically relevant outcomes. The sanctions imposed on doctors by the UK's General Medical Council (GMC), including erasure from the Medical Register, are indicators of serious problems with fitness to practise (FtP) that threaten patient safety or wellbeing. This data linkage study combined data on GMC sanctions with data on postgraduate examination performance. METHODS Examination results were obtained for UK registered doctors taking the MRCGP Applied Knowledge Test (AKT; n = 27,561) or Clinical Skills Assessment (CSA; n = 17,365) at first attempt between 2010 and 2016, or taking MRCP(UK) Part 1 (MCQ; n = 37,358), Part 2 (MCQ; n = 28,285) or the Practical Assessment of Clinical Examination Skills (PACES; n = 27,040) at first attempt between 2001 and 2016. Exam data were linked with GMC actions on a doctor's registration from September 2008 to January 2017, with sanctions including Erasure, Suspension, Conditions on Practice, Undertakings or Warnings (ESCUW). Multiple logistic regression assessed the odds ratio for ESCUW in relation to examination results. Multiple imputation was used for structurally missing values. RESULTS Doctors sanctioned by the GMC performed substantially less well on MRCGP and MRCP(UK), with a mean Cohen's d across the five exams of -0.68. Doctors on the 2.5th percentile of exam performance were about 12 times more likely to have FtP problems than those on the 97.5th percentile. Knowledge assessments and clinical assessments were independent predictors of future sanctions, with clinical assessments predicting ESCUW significantly better. The log odds of an FtP sanction were linearly related to examination marks over the entire range of performance, with additional performance increments lowering the risk of FtP sanctions at all performance levels.
CONCLUSIONS MRCGP and MRCP(UK) performance are valid predictors of professionally important outcomes that transcend simple knowledge or skills and that the GMC places under the headings of conduct and trust. Postgraduate examinations may predict FtP sanctions because the psychological processes involved in successfully studying, understanding and practising medicine at a high level share similar mechanisms to those underlying conduct and trust.
Affiliation(s)
- Katherine Woolf
- Research Department of Medical Education, UCL Medical School, University College London, Gower Street, London, WC1E 6BT, UK
- I C McManus
- Research Department of Medical Education, UCL Medical School, University College London, Gower Street, London, WC1E 6BT, UK.
41
Lee M, Vermillion M. Comparative values of medical school assessments in the prediction of internship performance. MEDICAL TEACHER 2018; 40:1287-1292. [PMID: 29390938 DOI: 10.1080/0142159x.2018.1430353] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND Multiple undergraduate achievements have been used for graduate admission consideration. Their relative values in predicting residency performance are not clear. This study compared the contributions of major undergraduate assessments to the prediction of internship performance. METHODS Internship performance ratings of the graduates of a medical school were collected from 2012 to 2015. Hierarchical multiple regression analyses were used to examine the predictive values of undergraduate measures assessing basic and clinical science knowledge and clinical performance, after controlling for differences in the Medical College Admission Test (MCAT). RESULTS Four hundred eighty (75%) graduates' archived data were used in the study. Analyses revealed that clinical competencies, assessed by the USMLE Step 2 CK, the NBME medicine exam, and an eight-station objective structured clinical examination (OSCE), were strong predictors of internship performance. Neither the USMLE Step 1 nor the inpatient internal medicine clerkship evaluation predicted internship performance. The undergraduate assessments as a whole showed a significant collective relationship with internship performance (ΔR2 = 0.12, p < 0.001). CONCLUSIONS The study supports the use of clinical competency assessments, rather than pre-clinical measures, in graduate admission consideration. It also provides validity evidence for OSCE scores in predicting workplace performance.
Affiliation(s)
- Ming Lee
- David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Michelle Vermillion
- David Geffen School of Medicine, University of California, Los Angeles, CA, USA
42
Hoekzema GS, Stevermer JJ. Characterization of Licensees During the First Year of Missouri's Assistant Physician Licensure Program. JAMA 2018; 320:1706-1707. [PMID: 30357285 PMCID: PMC6583859 DOI: 10.1001/jama.2018.11191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
This study characterizes the number of licenses issued to assistant physicians in the first year of Missouri’s assistant physician program, which authorizes physicians who have not completed their residencies to provide primary care services in underserved areas.
Affiliation(s)
- Grant S. Hoekzema
- Department of Family Medicine, Mercy Hospital St Louis, St Louis, Missouri
- James J. Stevermer
- Department of Family and Community Medicine, University of Missouri Health, Columbia
43
The American Board of Internal Medicine Maintenance of Certification Examination and State Medical Board Disciplinary Actions: a Population Cohort Study. J Gen Intern Med 2018; 33. [PMID: 29516388 PMCID: PMC6082195 DOI: 10.1007/s11606-018-4376-z] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
BACKGROUND Some have questioned whether successful performance in the American Board of Internal Medicine (ABIM) Maintenance of Certification (MOC) program is meaningful. The association of the ABIM Internal Medicine (IM) MOC examination with state medical board disciplinary actions is unknown. OBJECTIVE To assess risk of disciplinary actions among general internists who did and did not pass the MOC examination within 10 years of initial certification. DESIGN Historical population cohort study. PARTICIPANTS The population of internists certified in internal medicine, but not a subspecialty, from 1990 through 2003 (n = 47,971). INTERVENTION ABIM IM MOC examination. SETTING General internal medicine in the USA. MAIN MEASURES The primary outcome measure was time to disciplinary action assessed in association with whether the physician passed the ABIM IM MOC examination within 10 years of initial certification, adjusted for training, certification, demographic, and regulatory variables including state medical board Continuing Medical Education (CME) requirements. KEY RESULTS The risk for discipline among physicians who did not pass the IM MOC examination within the 10-year requirement window was more than double that of those who did pass the examination (adjusted HR 2.09; 95% CI, 1.83 to 2.39). Disciplinary actions did not vary by state CME requirements (adjusted HR 1.02; 95% CI, 0.94 to 1.16), but declined with increasing MOC examination scores (Kendall's tau-b coefficient = -0.98 for trend, p < 0.001). Among disciplined physicians, actions were less severe among those passing the IM MOC examination within the 10-year requirement window than among those who did not pass the examination. CONCLUSIONS Passing a periodic assessment of medical knowledge is associated with decreased state medical board disciplinary actions, an important quality outcome of relevance to patients and the profession.
44
Gelinne A, Zuckerman S, Benzil D, Grady S, Callas P, Durham S. United States Medical Licensing Exam Step I Score as a Predictor of Neurosurgical Career Beyond Residency. Neurosurgery 2018; 84:1028-1034. [DOI: 10.1093/neuros/nyy313] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Accepted: 06/14/2018] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND United States Medical Licensing Exam (USMLE) Step I score is cited as one of the most important factors when applying to neurosurgery residencies. No studies have documented a correlation between USMLE Step I score and metrics of neurosurgical career trajectory beyond residency. OBJECTIVE To determine whether USMLE Step I exam scores are predictive of a neurosurgical career beyond residency, as defined by American Board of Neurological Surgery (ABNS) certification status, practice type, academic rank, and research productivity. METHODS A database of neurosurgery residency applicants who matched into neurosurgery from 1997 to 2007, including USMLE Step I score, was utilized. Online databases were used to determine the h-index, National Institutes of Health (NIH) grant funding, academic rank, practice type, and ABNS certification status of each applicant. Linear regression and nonparametric testing determined associations between USMLE Step I scores and these variables. RESULTS USMLE Step I scores were higher for neurosurgeons in academic positions (237) than for those in community practice (234) and non-neurosurgeons (233, P < .01). USMLE Step I score did not differ between neurosurgeons of different academic rank (P = .21) or ABNS certification status (P = .78). USMLE Step I score was not correlated with h-index for academic neurosurgeons (R2 = 0.002, P = .36). CONCLUSION USMLE Step I score has little utility in predicting the future careers of neurosurgery resident applicants. A career in academic neurosurgery is associated with a slightly higher USMLE Step I score. However, USMLE Step I score does not predict academic rank, productivity (h-index or NIH funding), or ABNS certification status.
Affiliation(s)
- Aaron Gelinne
- Department of Neurological Surgery, University of Vermont Medical Center, Burlington, Vermont
- Scott Zuckerman
- Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Deborah Benzil
- Department of Neurological Surgery, Mount Sinai Health System, Mount Kisco, New York
- Sean Grady
- Department of Neurological Surgery, University of Pennsylvania Medicine, Philadelphia, Pennsylvania
- Peter Callas
- Department of Mathematics & Statistics, University of Vermont, Burlington, Vermont
- Susan Durham
- Department of Neurological Surgery, University of Vermont Medical Center, Burlington, Vermont