1
Plewa MC, Ledrick DJ, Jenkins K, Orqvist A, McCrea M. Can USMLE and COMLEX-USA Scores Predict At-Risk Emergency Medicine Residents' Performance on In-Training Examinations? Cureus 2024;16:e58684. PMID: 38651085; PMCID: PMC11033967; DOI: 10.7759/cureus.58684.
Abstract
PURPOSE The United States Medical Licensing Examination (USMLE) and Comprehensive Osteopathic Medical Licensing Examination (COMLEX) scores are standard measures of residency candidates' medical knowledge. The authors sought to use USMLE and COMLEX part 2 scores in their emergency medicine (EM) residency program to identify at-risk residents who may have difficulty on the in-training exam (ITE) and to determine cutoff values below which an intern could be given an individualized study plan to ensure medical knowledge competency. METHODS The authors abstracted USMLE and COMLEX part 2 scores and American Board of Emergency Medicine (ABEM) ITE scores for a cohort of first-year EM residents (graduating years 2010-2022), converted raw scores to percentiles, and compared part 2 and ABEM ITE scores using Pearson's correlation, a Bland-Altman analysis of bias and 95% limits of agreement, and ROC analysis to determine the optimal cutoff values for predicting an ABEM ITE score below the 50th percentile and the estimated test characteristics. RESULTS Scores were available for 152 residents, including 93 USMLE and 88 COMLEX exams. Correlations between part 2 scores and the ABEM ITE were r = 0.36 (95% CI: 0.17, 0.52; p < 0.001) for USMLE and r = 0.50 (95% CI: 0.33, 0.64; p < 0.001) for COMLEX. Bias and limits of agreement in predicting ABEM ITE scores were -14 ± 63% for USMLE and 13 ± 50% for COMLEX. Cutoffs of USMLE < 37th percentile and COMLEX < 53rd percentile identified 42% (N = 39) and 27% (N = 24) of EM residents, respectively, as at risk, with sensitivities of 61% and 49% and specificities of 71% and 92%, respectively. CONCLUSION USMLE and COMLEX part 2 scores have a very limited role in identifying those at risk of low ITE performance, suggesting that other factors should be considered to identify interns in need of medical knowledge remediation.
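A minimal sketch of the analysis pipeline this abstract describes (Pearson correlation, Bland-Altman bias and limits of agreement, and an ROC-derived cutoff for flagging at-risk residents), run on synthetic data. The variable names, sample size, and the Youden-index cutoff rule are illustrative assumptions, not the authors' dataset or code.

```python
# Hypothetical re-creation of the abstract's statistics on synthetic data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
part2_pct = rng.uniform(1, 99, 93)                       # licensing-exam percentile (assumed)
ite_pct = np.clip(0.4 * part2_pct + rng.normal(30, 20, 93), 1, 99)  # ABEM ITE percentile

# Pearson correlation between part 2 and ITE percentiles
r, p = pearsonr(part2_pct, ite_pct)

# Bland-Altman: bias and 95% limits of agreement
diff = part2_pct - ite_pct
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)

# ROC-derived cutoff for predicting ITE < 50th percentile; scores are
# negated because lower percentiles indicate higher risk
at_risk = (ite_pct < 50).astype(int)
fpr, tpr, thresholds = roc_curve(at_risk, -part2_pct)
cutoff = -thresholds[np.argmax(tpr - fpr)]               # Youden's J (one common choice)
print(f"r = {r:.2f} (p = {p:.3g}); bias = {bias:.0f} ± {loa:.0f}; cutoff < {cutoff:.0f}th pct")
```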
Affiliation(s)
- Michael C Plewa
- Emergency Medicine, Mercy Health - St. Vincent Medical Center, Toledo, USA
- David J Ledrick
- Emergency Medicine, Mercy Health - St. Vincent Medical Center, Toledo, USA
- Kenneth Jenkins
- Emergency Medicine, Ohio University Heritage College of Osteopathic Medicine, Athens, USA
- Aaron Orqvist
- Emergency Medicine, Mercy Health - St. Vincent Medical Center, Toledo, USA
- Michael McCrea
- Emergency Medicine, Mercy Health - St. Vincent Medical Center, Toledo, USA
2
Theiler CA, Vakkalanka JP, Obr BJ, Hansen N, McCabe DJ. The impact of fellowship-trained medical toxicology faculty on emergency medicine resident in-training examination scores. AEM Educ Train 2023;7:e10840. PMID: 36711255; PMCID: PMC9873863; DOI: 10.1002/aet2.10840.
Abstract
Background The American Board of Emergency Medicine (ABEM) In-Training Exam (ITE) gauges residents' medical knowledge and has been shown to correlate with subsequent performance on the ABEM board qualifying examination. Emergency medicine (EM) residencies commonly employ subspecialty-trained faculty with the expectation of improved resident education and subspecialty knowledge. We hypothesized that the presence of subspecialty faculty in toxicology would increase residents' scores on the toxicology portion of the ITE. Methods We assessed ABEM ITE scores at our institution from 2013 to 2022 and compared them with national data. The exposure of interest was the absence or presence of fellowship-trained toxicology faculty. The primary outcome was performance on the toxicology portion of the ITE; the secondary outcome was overall performance on the exam. Results Residents who had ≥1 toxicology faculty member were 37% (95% CI: 1.01-1.87) more likely to surpass the national average toxicology score on the ABEM ITE, and those who had ≥2 toxicology faculty were 77% (95% CI: 1.28-2.44) more likely to do so. With ≥2 toxicology faculty, the toxicology score also increased by year in training: residents were 63% (95% CI: 1.01-2.64), 68% (95% CI: 1.08-2.61), and 92% (95% CI: 1.01-3.63) more likely to surpass the national average in the first, second, and third years of residency, respectively. There was no significant relationship between the presence of toxicology faculty and overall ABEM ITE scores. Conclusions The presence of fellowship-trained toxicology faculty positively affected residents' performance on the toxicology portion of the ABEM ITE but did not significantly affect the overall score. With ≥2 toxicology faculty, toxicology scores improved throughout the 3 years of training, suggesting that an individual rotation or educational block matters less than spaced repetition through a longitudinal curriculum.
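The "X% more likely" figures above read as risk ratios with 95% confidence intervals. A hedged sketch of that calculation from a 2x2 table; the counts are invented, since the abstract reports only the resulting ratios.

```python
# Hypothetical 2x2 counts; the abstract reports only the resulting ratios.
import numpy as np
from scipy.stats import norm

# exposed = residency years with >=1 toxicology faculty member
# outcome = toxicology score surpassing the national average
a, b = 60, 40   # exposed: surpassed / did not
c, d = 45, 55   # unexposed: surpassed / did not

rr = (a / (a + b)) / (c / (c + d))                     # risk ratio
se = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))        # SE of log(RR), Katz method
z = norm.ppf(0.975)
lo, hi = np.exp(np.log(rr) - z * se), np.exp(np.log(rr) + z * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")     # 'RR = 1.33 (95% CI 1.02-1.75)'
```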
Affiliation(s)
- Carly A. Theiler
- Department of Emergency Medicine, University of Iowa, Iowa City, Iowa, USA
- Brooks J. Obr
- Department of Emergency Medicine, University of Iowa, Iowa City, Iowa, USA
- Nicole Hansen
- Department of Emergency Medicine, University of Iowa, Iowa City, Iowa, USA
- Daniel J. McCabe
- Department of Emergency Medicine, University of Iowa, Iowa City, Iowa, USA
- Division of Medical Toxicology, Department of Emergency Medicine, University of Iowa, Iowa City, Iowa, USA
3
Jewell C, Kraut A, Miller D, Ray K, Werley E, Schnapp B. Metrics of Resident Achievement for Defining Program Aims. West J Emerg Med 2022;23:1-8. PMID: 35060852; PMCID: PMC8782131; DOI: 10.5811/westjem.2021.12.53554.
Abstract
Introduction Resident achievement data are a powerful but underutilized means of program evaluation, allowing programs to empirically measure whether they are meeting their program aims, refine curricula, and improve resident recruitment efforts. The goal of this review was to provide an overview of available metrics of resident achievement and how these metrics can be used to inform program aims. Methods A literature search was performed using PubMed and Google Scholar between May and November of 2020. Publications were eligible for inclusion if they discussed or assessed "excellence" or "success" during residency training. A narrative review structure was chosen to allow a broad examination of the literature on available resident achievement metrics. Results Fifty-seven publications met inclusion criteria and were included in the review. Metrics of excellence were grouped into larger categories: success defined by program factors, academics, national competencies, employer factors, and possible new metrics. Conclusions Programs can best evaluate whether they are meeting their program aims by creating a list of important resident-level metrics based on their stated goals and values, using one or more of the published definitions as a foundation. Each program must define which metrics best align with its individual aims and mission.
Affiliation(s)
- Corlin Jewell
- University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, Madison, Wisconsin
- Aaron Kraut
- University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, Madison, Wisconsin
- Danielle Miller
- University of Colorado School of Medicine, Department of Emergency Medicine, Aurora, Colorado
- Kaitlin Ray
- University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, Madison, Wisconsin
- Elizabeth Werley
- Penn State College of Medicine, Department of Emergency Medicine, Hershey, Pennsylvania
- Benjamin Schnapp
- University of Wisconsin School of Medicine and Public Health, BerbeeWalsh Department of Emergency Medicine, Madison, Wisconsin
4
Walter LA, Khoury CA, DeLaney MC, Thompson MA, Rushing C, Edwards AR. Does QBank participation impact in-training examination performance? AEM Educ Train 2021;5:e10636. PMID: 34368599; PMCID: PMC8320330; DOI: 10.1002/aet2.10636.
Abstract
BACKGROUND Performance on the annual in-training examination (ITE) for emergency medicine (EM) residents has been shown to correlate with performance on the American Board of Emergency Medicine (ABEM) qualifying examination. As such, significant planning is often committed to ITE preparation, from both an individual resident and a residency program perspective. Online specialty-specific question banks (QBanks) are a popular medium for ITE preparation; however, their impact on ITE performance is unclear. We sought to assess the impact of QBank participation on EM resident ITE performance. METHODS ITE and QBank performance results were collated over 2 academic years, 2019 and 2020, from a 3-year EM residency program. The QBank was provided as a self-study option in 2019 and incorporated as a mandatory component of the curriculum in 2020. ITE raw scores and percentile rank for training level were compared with QBank performance, including the QBank average performance score and the number of QBank questions completed. The Pearson correlation coefficient was used to measure the association between ITE performance and QBank correlates. Additional descriptive demographics, including gender, PGY level, and USMLE Step 1 and 2 scores, were also considered. RESULTS Sixty-two sets of ITE performance data and QBank correlates (30 residents in 2019, 32 in 2020) were included. Overall, raw ITE scores and the number of QBank questions completed had a significant, positive correlation (r(60) = 0.34, p < 0.05); the correlation was stronger for 2019 (r(28) = 0.39, p < 0.05) than for 2020 (r(30) = 0.25, p = 0.16). ITE percentile rank for training level also correlated significantly and positively with the number of QBank questions completed (r(60) = 0.35, p < 0.05); again stronger for 2019 (r(28) = 0.42, p < 0.05) than for 2020 (r(30) = 0.29, p = 0.12). Finally, ITE percentile rank for training level correlated positively but weakly with QBank average performance (as a percentage) and was not significant overall (r(60) = 0.20, p = 0.16); 2019 showed no correlation (r(28) = 0.12, p = 0.54), whereas 2020 did (r(30) = 0.55, p < 0.05). Academic year 2020 raw ITE scores also demonstrated a significant correlation with QBank average performance (r(30) = 0.66, p < 0.0001), while 2019 did not (r(28) = 0.08, p = 0.68). CONCLUSION Participation and engagement in a QBank are associated with improved EM resident performance on the ABEM ITE. A QBank may be an effective mode of ITE preparation for EM residents.
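A short sketch of the correlation analysis reported above: Pearson's r between questions completed and raw ITE score, overall and by academic year. The data and field names are synthetic assumptions for illustration, with a weak positive relationship built in.

```python
# Synthetic stand-in data; a weak positive relationship is built in on purpose.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "year": [2019] * 30 + [2020] * 32,                  # 62 resident-years, as in the study
    "qbank_done": rng.integers(100, 1500, 62),          # questions completed (assumed field)
})
df["ite_raw"] = 60 + 0.01 * df["qbank_done"] + rng.normal(0, 8, 62)

r, p = pearsonr(df["qbank_done"], df["ite_raw"])
print(f"overall: r({len(df) - 2}) = {r:.2f}, p = {p:.3f}")
for year, grp in df.groupby("year"):
    r, p = pearsonr(grp["qbank_done"], grp["ite_raw"])
    print(f"{year}: r({len(grp) - 2}) = {r:.2f}, p = {p:.3f}")
```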
Affiliation(s)
- Lauren A. Walter
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Charles A. Khoury
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Matthew C. DeLaney
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Maxwell A. Thompson
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Courtney Rushing
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Andrew R. Edwards
- Department of Emergency Medicine, University of Alabama at Birmingham, Birmingham, Alabama, USA
5
Han R, Keith J, Slodkowska E, Nofech-Mozes S, Djordjevic B, Parra-Herran C, Shachar S, Mirkovic J, Sherman C, Hsieh E, Ismiil N, Lu FI. Hot Seat Diagnosis: Competency-Based Tool Is Superior to Time-Based Tool for the Formative In-Service Assessment of Pathology Trainees. Arch Pathol Lab Med 2021;146:123-131. PMID: 34133708; DOI: 10.5858/arpa.2020-0702-ep.
Abstract
CONTEXT.— Competency-based medical education relies on frequent formative in-service assessments to ascertain trainee progression. Currently at our institution, trainees receive a summative end-of-rotation In-Training Evaluation Report based on feedback collected from staff pathologists; there is no method of simulating report sign-out. OBJECTIVE.— To develop a formative in-service assessment tool that can simulate report sign-out and provide case-by-case feedback to trainees, and to compare time-based versus competency-based assessment models. DESIGN.— Twenty-one pathology trainees were assessed over 20 months. Hot Seat Diagnoses by trainees and trainee assessments by pathologists were recorded in the Laboratory Information System. In the first iteration, trainees were assessed with a time-based assessment scale on their ability to diagnose, report, use ancillary testing, comment on clinical implications, provide intraoperative consultation, and/or gross cases. The second iteration used a competency-based assessment scale. Trainees and pathologists completed surveys on the effectiveness of the In-Training Evaluation Report versus the Hot Seat Diagnosis tool. RESULTS.— Scores from both iterations correlated significantly with other assessment tools, including the Resident In-Service Examination (r = 0.93, P = .04 and r = 0.87, P = .03). The competency-based model was better able than the time-based model to demonstrate improvement over time and to stratify junior versus senior trainees. Trainees and pathologists rated Hot Seat Diagnosis as significantly more objective, detailed, and timely than the In-Training Evaluation Report, and as effective at simulating report sign-out. CONCLUSIONS.— Hot Seat Diagnosis is an effective tool for the formative in-service assessment of pathology trainees and for simulating report sign-out, with the competency-based model outperforming the time-based model.
Affiliation(s)
- Rachel Han
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Julia Keith
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Elzbieta Slodkowska
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Sharon Nofech-Mozes
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Bojana Djordjevic
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Carlos Parra-Herran
- Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts
- Sade Shachar
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Jelena Mirkovic
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Christopher Sherman
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Eugene Hsieh
- Department of Pathology, Dynacare, Brampton, Ontario, Canada
- Nadia Ismiil
- Department of Pathology, Lakeridge Health Ajax Pickering Hospital, Ajax, Ontario, Canada
- Fang-I Lu
- Department of Laboratory Medicine and Pathobiology, University of Toronto, and Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
6
Dokmak A, Radwan A, Halpin M, Jaber BL, Nader C. Design and implementation of an academic enrichment program to improve performance on the internal medicine in-training exam. Med Educ Online 2020;25:1686950. PMID: 31707925; PMCID: PMC6853221; DOI: 10.1080/10872981.2019.1686950.
Abstract
The internal medicine In-Training Exam (ITE) is administered at residency training programs to assess medical knowledge. Our internal medicine residency program saw a decline in ITE performance between 2011 and 2014. The goal of this quality improvement project was to improve residents' medical knowledge, as measured by ITE performance, through the design and implementation of an Academic Enrichment Program (AEP). The AEP was designed in 2014-2015 and entailed a multipronged approach: strengthening and tailoring the didactic curriculum, establishing a minimum conference attendance rate, and adopting the New England Journal of Medicine Knowledge-Plus Internal Medicine Board Review platform. Residents scoring below a prespecified percentile-rank cutoff on the previous year's ITE in any of the 12 content areas were required to complete a prespecified percentage of the question bank in that topic. We examined a total of 164 residents enrolled in our program's categorical training track. The mean (± SEM) ITE percentile across the 12 content areas increased significantly from calendar years 2011-2014 to 2015-2018, reflecting implementation of the AEP (p < 0.001). In brief, the AEP-exposed graduating classes displayed a significant improvement in mean ITE percentile rank compared with the AEP-unexposed classes. This quality improvement project was carried out at a single institution. The implementation of a structured academic enrichment program significantly improved performance on the ITE.
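A minimal sketch of the pre/post comparison above, framed as a two-sample comparison of mean ITE percentile rank before (2011-2014) and after (2015-2018) the AEP. The synthetic scores and the choice of an independent t test are assumptions; the abstract reports only the significance level.

```python
# Synthetic percentile ranks standing in for the program's pre- and post-AEP data.
import numpy as np
from scipy.stats import sem, ttest_ind

rng = np.random.default_rng(7)
pre = np.clip(rng.normal(45, 15, 80), 1, 99)    # AEP-unexposed classes (2011-2014)
post = np.clip(rng.normal(58, 15, 84), 1, 99)   # AEP-exposed classes (2015-2018)

t, p = ttest_ind(post, pre)
print(f"pre:  {pre.mean():.1f} ± {sem(pre):.1f} (mean ± SEM)")
print(f"post: {post.mean():.1f} ± {sem(post):.1f} (mean ± SEM)")
print(f"t = {t:.2f}, p = {p:.2g}")
```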
Affiliation(s)
- Amr Dokmak
- Department of Medicine, St Elizabeth’s Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Amr Radwan
- Department of Medicine, St Elizabeth’s Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Meredith Halpin
- Division of Hematology & Medical Oncology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Bertrand L. Jaber
- Department of Medicine, St Elizabeth’s Medical Center, Tufts University School of Medicine, Boston, MA, USA
- Claudia Nader
- Department of Medicine, St Elizabeth’s Medical Center, Tufts University School of Medicine, Boston, MA, USA
7
Ju C, Bove J, Hochman S. Does the Removal of Textbook Reading from Emergency Medicine Resident Education Negatively Affect In-Service Scores? West J Emerg Med 2020;21:434-440. PMID: 32191201; PMCID: PMC7081878; DOI: 10.5811/westjem.2019.11.44639.
Abstract
Introduction In-service exam scores are used by residency programs as a marker of progress and of likely success on board exams. Conference curriculum helps residents prepare for these exams. At our institution, a curriculum change was initiated in response to resident feedback. Our objective was to determine whether assigned evidence-based medicine (EBM) articles and Rosh Review questions were non-inferior to Tintinalli textbook readings. We further hypothesized that the non-textbook curriculum would lead to higher resident satisfaction, greater utilization, and a preference over the old curriculum. Methods We collected both allopathic In-Training Examination (ITE) and osteopathic Emergency Medicine Residency In-Service Exam (RISE) scores from our program's residents for the 2015-2016 and 2016-2017 residency years, and compared scores pre-curriculum change (pre-CC) with scores post-curriculum change (post-CC). A five-question survey was sent to the residents regarding their satisfaction, preference, and utilization of the two curricula. Results Resident scores post-CC were non-inferior to scores pre-CC on both exams. There was also no significant difference when we compared each class's post-CC scores with those of the corresponding class year pre-CC on both exams. Our survey showed significantly greater satisfaction, utilization, and preference for the new curriculum among residents. Conclusion We found question-based learning and EBM articles non-inferior to textbook readings. This study supports a move away from textbook readings without sacrificing examination scores.
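A hedged sketch of one common way to run the non-inferiority check described above: post-change scores are non-inferior if the one-sided 95% lower confidence bound for (post - pre) stays above a pre-chosen margin. The 5-point margin, the synthetic scores, and the pooled-degrees-of-freedom shortcut are all assumptions, not the study's specification.

```python
# Synthetic scores; the 5-point margin is an assumption, not the study's.
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(3)
pre, post = rng.normal(75, 8, 40), rng.normal(75, 8, 38)
margin = 5.0                                    # largest acceptable decline, in points

diff = post.mean() - pre.mean()
se = np.sqrt(pre.var(ddof=1) / len(pre) + post.var(ddof=1) / len(post))
dof = len(pre) + len(post) - 2                  # simple pooled approximation
lower = diff - t_dist.ppf(0.95, dof) * se       # one-sided 95% lower confidence bound
print(f"diff = {diff:.1f}; lower bound = {lower:.1f}; non-inferior: {lower > -margin}")
```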
Affiliation(s)
- Christine Ju
- Saint Joseph's University Medical Center, Department of Emergency Medicine, Paterson, New Jersey
- Joseph Bove
- Saint Joseph's University Medical Center, Department of Emergency Medicine, Paterson, New Jersey
- Steven Hochman
- Saint Joseph's University Medical Center, Department of Emergency Medicine, Paterson, New Jersey
8
Forcucci JA, Hyer JM, Bruner ET, Lewin DN, Batalis NI. Success in Implementation of a Resident In-Service Examination Review Series. Am J Clin Pathol 2017;147:370-373. PMID: 28340222; DOI: 10.1093/ajcp/aqx013.
Abstract
OBJECTIVES Primary pathology board certification has been correlated with senior residents' performance on the Resident In-Service Examination (RISE). We describe our success with an annual, month-long review series. METHODS Aggregate program RISE performance data were gathered for the 3 years prior to and the 3 years following initiation of the review series. In addition, mean United States Medical Licensing Examination Step 1 and Step 2 Clinical Knowledge scores for residents sitting each RISE were obtained to control for incoming knowledge and test-taking ability. Linear models were used to evaluate differences in average RISE performance before and after initiation of the review series while controlling for relevant covariates. RESULTS Significant improvement was noted in the grand total, the anatomic pathology section average, the clinical pathology section average, and the transfusion medicine section. Improvement on the cytopathology and clinical chemistry sections was noted but was not statistically significant. There was no significant difference in scores in hematopathology, molecular pathology, or the special topics section average. Improvement in primary pathology board certification rates was also noted. CONCLUSIONS Institution of a month-long RISE review series improved overall performance within our training program. This success could easily be replicated in any training program without significant disruption to an annual didactic series.
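A minimal sketch of the covariate-adjusted linear model described above: regress aggregate RISE performance on a pre/post indicator while controlling for cohort mean Step 1 and Step 2 CK scores. The six program-year rows and all values are fabricated for illustration.

```python
# Six synthetic program-years: 3 before and 3 after the review series began.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "rise_avg": [480, 490, 485, 520, 530, 525],   # aggregate RISE performance (invented)
    "post_series": [0, 0, 0, 1, 1, 1],            # review series in place?
    "step1": rng.normal(225, 5, 6).round(),       # cohort mean Step 1 (invented)
    "step2": rng.normal(238, 5, 6).round(),       # cohort mean Step 2 CK (invented)
})

fit = smf.ols("rise_avg ~ post_series + step1 + step2", data=df).fit()
print(fit.params["post_series"], fit.pvalues["post_series"])
```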
Affiliation(s)
- J. Madison Hyer
- Public Health Sciences, Medical University of South Carolina, Charleston
- David N. Lewin
- Departments of Pathology and Laboratory Medicine, Medical University of South Carolina, Charleston
9
Moroz A, Bang H. Predicting Performance on the American Board of Physical Medicine and Rehabilitation Written Examination Using Resident Self-Assessment Examination Scores. J Grad Med Educ 2016;8:50-6. PMID: 26913103; PMCID: PMC4763400; DOI: 10.4300/jgme-d-15-00065.1.
Abstract
BACKGROUND Studies across medical specialties have shown that scores on residency self-assessment examinations (SAEs) can predict performance on certifying board examinations. OBJECTIVE This study explored the predictive abilities of different composite SAE scores in physical medicine and rehabilitation and determined an optimal cut-point to identify an "at-risk" performance group. METHODS Both the predictors (SAE scores) and the outcome (board examination scores) were expressed as national percentile scores. We analyzed data from graduates of a physical medicine and rehabilitation residency program between 2008 and 2014. We compared the mean, median, lowest, highest, and most recent of up to 3 SAE scores with respect to their associations with the outcome via linear and logistic regression. We computed regression/correlation coefficients, P values, R², area under the curve, sensitivity, specificity, and predictive values. Identification of the optimal cut-point was guided by accuracy, discrimination, and model-fit statistics. RESULTS Predictor and outcome data were available for 88 of 99 residents. In regression models, all SAE predictors showed significant associations (P ≤ .001), and the mean score performed best (r = 0.55). A 1-point increase in mean SAE percentile was associated with a 1.88-point increase in board score and a 16% decrease in the odds of failure. The rule of mean SAE score below 47 yielded the highest accuracy, highest discrimination, and best model fit. CONCLUSIONS Mean SAE score may be used to predict performance on the American Board of Physical Medicine and Rehabilitation written examination. The optimal statistical cut-point to identify the group at risk of failure appears to be around the 47th SAE national percentile.
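A short sketch of the cut-point search described above, using accuracy as the selection criterion (the study also weighed discrimination and model fit). The 88 synthetic residents and the fabricated failure outcome are assumptions for illustration.

```python
# Synthetic SAE percentiles and board outcomes for 88 residents.
import numpy as np

rng = np.random.default_rng(5)
mean_sae = rng.uniform(10, 90, 88)                        # mean SAE national percentile
failed = (mean_sae + rng.normal(0, 20, 88)) < 40          # fabricated failure outcome

# scan candidate cutoffs; keep the one whose rule 'mean SAE < c' is most accurate
best_c = max(range(20, 70), key=lambda c: ((mean_sae < c) == failed).mean())
acc = ((mean_sae < best_c) == failed).mean()
print(f"optimal cut-point: mean SAE < {best_c} (accuracy {acc:.2f})")
```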
Affiliation(s)
- Alex Moroz
- Corresponding author: Alex Moroz, MD, New York University Langone Medical Center, 333 E 38 Street, New York, NY 10016, 212.263.6110
10
Ferrell BT, Tankersley WE, Morris CD. Using an Accountability Program to Improve Psychiatry Resident Scores on In-Service Examinations. J Grad Med Educ 2015;7:555-9. PMID: 26692966; PMCID: PMC4675411; DOI: 10.4300/jgme-d-14-00722.1.
Abstract
BACKGROUND The Psychiatry Resident-In-Training Examination (PRITE) is a standardized examination that measures residents' educational progress during residency training. It also serves as a moderate-to-strong predictor of later performance on the board certification examination. OBJECTIVE This study evaluated the effectiveness of an accountability program used by a public psychiatric hospital to increase its residents' PRITE scores. METHODS A series of consequences and incentives was developed based on levels of PRITE performance. Poor performance resulted in consequences, including additional academic assignments; higher performance earned residents external moonlighting privileges. Standardized PRITE scores for all residents (N = 67) over a 10-year period were collected and analyzed. The PRITE consists of 2 subscales: psychiatry and neurology. The change in the overall level of PRITE scores following implementation of the accountability program was estimated using a discontinuous growth curve model for each subscale. RESULTS Standardized scores on the psychiatry subscale were 51.09 points higher (approximately a 0.50-SD change) after the accountability program was implemented. Standardized scores on the neurology subscale did not change. CONCLUSIONS An accountability program that assigns consequences based on examination performance may be moderately successful in improving psychiatry subscale scores on the PRITE. This likely has longer-term benefits for residents, given the relationship between PRITE and board certification examination performance.
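A hedged sketch of one way to fit a discontinuous growth curve model like the one above: a mixed model of scores over time with a random intercept per resident and a fixed intercept shift when the program begins. The fabricated data, the two-predictor specification, and the year-2 start are illustrative assumptions.

```python
# Fabricated longitudinal scores with a built-in jump when the program starts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_res, n_years = 67, 4
df = pd.DataFrame({
    "resident": np.repeat(np.arange(n_res), n_years),
    "year": np.tile(np.arange(n_years), n_res),
})
df["post"] = (df["year"] >= 2).astype(int)            # program begins in year 2
df["score"] = (500 + 10 * df["year"] + 50 * df["post"]
               + rng.normal(0, 60, len(df)))          # ~0.5 SD discontinuity built in

fit = smf.mixedlm("score ~ year + post", df, groups=df["resident"]).fit()
print(fit.params["post"])                             # estimated intercept shift
```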
Affiliation(s)
- William E. Tankersley
- Corresponding author: William E. Tankersley, MD, Children's Recovery Center, 320 12th Avenue NE, Norman, OK 73071, 405.684.7262
11
Atsawarungruangkit A. Relationship of residency program characteristics with pass rate of the American Board of Internal Medicine certifying exam. Med Educ Online 2015;20:28631. PMID: 26426400; PMCID: PMC4590350; DOI: 10.3402/meo.v20.28631.
Abstract
OBJECTIVES To evaluate the relationship between the pass rate on the American Board of Internal Medicine (ABIM) certifying exam and the characteristics of residency programs. METHODS The study used a retrospective, cross-sectional design with publicly available data from the ABIM and the Fellowship and Residency Electronic Interactive Database. All categorical residency programs with reported pass rates were included. Using univariate and multivariate linear regression analyses, I analyzed how 69 factors (e.g., location, general information, number of faculty and trainees, work schedule, educational environment) were related to the pass rate. RESULTS Of 371 programs, only one region had a pass rate significantly different from the other regions; however, because no other characteristics were reported for this region, I excluded program location from further analysis. In the multivariate analysis, pass rate was significantly associated with four program characteristics: the ratio of full-time-equivalent paid faculty to positions, the percentage of osteopathic doctors, a formal mentoring program, and on-site child care (OCC). Numerous factors were not associated at all, including minimum exam scores, salary, vacation days, and average hours per week. CONCLUSIONS As reflected in the ratio of full-time-equivalent paid faculty to positions and the presence of a formal mentoring program, a highly supervised training experience was strongly associated with the pass rate. In contrast, the percentage of osteopathic doctors was inversely related to the pass rate. Programs with OCC significantly outperformed programs without it. This study suggests that enhancing supervision in training programs and offering parental support may help attract and produce competitive residents.
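A sketch of one plausible reading of the univariate-then-multivariate workflow above: screen each characteristic in its own regression, then fit a multivariate model on the survivors. The five invented characteristics, effect sizes, and 0.05 screen are assumptions standing in for the study's 69 factors.

```python
# Five invented program characteristics stand in for the study's 69.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
cols = ["faculty_ratio", "pct_do", "mentoring", "childcare", "salary"]
df = pd.DataFrame(rng.normal(size=(371, 5)), columns=cols)
df["pass_rate"] = (90 + 5 * df["faculty_ratio"] - 3 * df["pct_do"]
                   + 2 * df["mentoring"] + 2 * df["childcare"]
                   + rng.normal(0, 4, 371))

# univariate screen: regress pass rate on each factor alone
univ_p = {c: sm.OLS(df["pass_rate"], sm.add_constant(df[[c]])).fit().pvalues[c]
          for c in cols}
keep = [c for c, p in univ_p.items() if p < 0.05]

# multivariate model on the factors that survived the screen
multi = sm.OLS(df["pass_rate"], sm.add_constant(df[keep])).fit()
print(multi.summary2().tables[1].round(3))
```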
12
Li F, Gimpel JR, Arenson E, Song H, Bates BP, Ludwin F. Relationship Between COMLEX-USA Scores and Performance on the American Osteopathic Board of Emergency Medicine Part I Certifying Examination. J Osteopath Med 2014;114:260-6. DOI: 10.7556/jaoa.2014.051.
Abstract
Context: Few studies have investigated how well scores from the Comprehensive Osteopathic Medical Licensing Examination-USA (COMLEX-USA) series predict resident outcomes, such as performance on board certification examinations.
Objectives: To determine how well COMLEX-USA predicts performance on the American Osteopathic Board of Emergency Medicine (AOBEM) Part I certification examination.
Methods: The target study population was first-time examinees who took AOBEM Part I in 2011 and 2012 and had matched performances on COMLEX-USA Level 1, Level 2-Cognitive Evaluation (CE), and Level 3. Pearson correlations were computed between AOBEM Part I first-attempt scores and COMLEX-USA performances to measure the association between these examinations. Stepwise linear regression analysis was conducted to predict AOBEM Part I scores from the 3 COMLEX-USA scores. An independent t test was conducted to compare mean COMLEX-USA performances between candidates who passed and who failed AOBEM Part I, and a stepwise logistic regression analysis was used to predict the log-odds of passing AOBEM Part I on the basis of COMLEX-USA scores.
Results: Scores from AOBEM Part I had the highest correlation with COMLEX-USA Level 3 scores (.57) and slightly lower correlation with COMLEX-USA Level 2-CE scores (.53). The lowest correlation was between AOBEM Part I and COMLEX-USA Level 1 scores (.47). According to the stepwise regression model, COMLEX-USA Level 1 and Level 2-CE scores, which residency programs often use as selection criteria, together explained 30% of variance in AOBEM Part I scores. Adding Level 3 scores explained 37% of variance. The independent t test indicated that the 397 examinees passing AOBEM Part I performed significantly better than the 54 examinees failing AOBEM Part I in all 3 COMLEX-USA levels (P<.001 for all 3 levels). The logistic regression model showed that COMLEX-USA Level 1 and Level 3 scores predicted the log-odds of passing AOBEM Part I (P=.03 and P<.001, respectively).
Conclusion: The present study empirically supported the predictive and discriminant validity of the COMLEX-USA series in relation to the AOBEM Part I certification examination. Although residency programs may use COMLEX-USA Level 1 and Level 2-CE scores as partial criteria in selecting residents, Level 3 scores, though typically not available at the time of application, are the most strongly related to performance on AOBEM Part I.
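A hedged sketch of the incremental-variance comparison reported above: R² from a model predicting AOBEM Part I scores with Levels 1 and 2-CE, versus R² after adding Level 3 (the abstract reports 30% rising to 37%). The synthetic scores and coefficients are illustrative assumptions, not the study's data.

```python
# Synthetic scores for 451 examinees (397 passing + 54 failing in the study).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 451
df = pd.DataFrame({
    "level1": rng.normal(500, 80, n),
    "level2": rng.normal(520, 80, n),
    "level3": rng.normal(540, 80, n),
})
df["aobem"] = (0.2 * df["level1"] + 0.3 * df["level2"] + 0.4 * df["level3"]
               + rng.normal(0, 90, n))

base = sm.OLS(df["aobem"], sm.add_constant(df[["level1", "level2"]])).fit()
full = sm.OLS(df["aobem"], sm.add_constant(df[["level1", "level2", "level3"]])).fit()
print(f"R^2 with Levels 1+2: {base.rsquared:.2f}; adding Level 3: {full.rsquared:.2f}")
```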