1. Story DJ, Gao H, Vallevand AL, Manthey D. Taking More Society for Academic Emergency Medicine Practice Tests Does Not Lead to Improved National EM-M4 Exam Scores. West J Emerg Med 2023;24:38-42. [PMID: 36735005] [PMCID: PMC9897245] [DOI: 10.5811/westjem.2022.12.57683]
Abstract
INTRODUCTION Emergency medicine (EM) is a required clerkship for third-year medical students, and an elective EM acting internship (AI) is available to fourth-year students at our institution. The Society for Academic Emergency Medicine's (SAEM) National Emergency Medicine M4 Examination (EM-M4) is administered to students at the end of the EM AI experience. To prepare for the exam, students gain access to 23 practice tests available from SAEM. In this study we investigated the correlation between the number of practice tests taken and EM-M4 performance.
METHODS We collected EM-M4 and US Medical Licensing Examination (USMLE) Step 2 Clinical Knowledge (CK) scores for students completing an MS4 EM clerkship in consecutive medical school classes from 2014-2017 at a private medical school. In addition, we collected data during the clerkship on the number of practice exams taken and whether a comprehensive practice exam was taken. We analyzed the study population three ways to determine whether the number of practice tests impacted final exam results: a binary distribution (1-11 or 12-23 tests taken); a quaternary distribution (1-6, 7-12, 13-18, or 19-23 tests taken); and individual test variability (1, 2, 3, … 22, 23 tests taken). Complete data for 147 students were used in the analysis.
RESULTS The EM-M4 showed moderate correlation with USMLE Step 2 CK (r = 0.49). There was no significant difference in EM-M4 performance in the binary analysis (P ≤ 0.09), the quaternary analysis (P ≤ 0.09), or the continuous variable analysis (P ≤ 0.52). Inclusion of a comprehensive practice test also did not correlate with EM-M4 performance (P ≤ 0.78).
CONCLUSION Degree of utilization of SAEM practice tests did not appear to correlate with performance on the EM-M4 examination at our institution. This could be due to many factors, including a question bank composed of items with poor item discrimination, possible inadequate coverage of the EM curriculum, and/or use of alternative study methods. While further investigation is needed, if our conclusions prove generalizable, the SAEM practice tests impose extraneous cognitive load from a modality without proven benefit.
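The abstract names three groupings but not the underlying statistical tests. Below is a minimal sketch of how such an analysis could look, assuming a Welch t-test for the binary split, one-way ANOVA for the quaternary split, and Pearson correlation for the continuous analysis; all data, column names, and seeds are hypothetical stand-ins, not study data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Simulated stand-in for the 147-student dataset (not the study's data).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tests_taken": rng.integers(1, 24, size=147),         # 1-23 practice tests
    "em_m4_score": rng.normal(80, 6, size=147).round(1),  # percent correct
})

# Binary split: 1-11 vs. 12-23 tests (Welch t-test; the paper does not
# specify which two-sample test was used).
lo = df.loc[df.tests_taken <= 11, "em_m4_score"]
hi = df.loc[df.tests_taken >= 12, "em_m4_score"]
_, p_binary = stats.ttest_ind(lo, hi, equal_var=False)

# Quaternary split: 1-6, 7-12, 13-18, 19-23 tests (one-way ANOVA).
bins = pd.cut(df.tests_taken, [0, 6, 12, 18, 23])
groups = [g["em_m4_score"].to_numpy() for _, g in df.groupby(bins, observed=True)]
_, p_quaternary = stats.f_oneway(*groups)

# Continuous analysis: correlation of tests taken with exam score.
r, p_continuous = stats.pearsonr(df.tests_taken, df.em_m4_score)
print(p_binary, p_quaternary, r, p_continuous)
```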
Affiliation(s)
- David J. Story: Wake Forest Baptist Medical Center, Department of Emergency Medicine, Winston-Salem, North Carolina
- Hong Gao: Wake Forest School of Medicine, Office of Undergraduate Medical Education, Winston-Salem, North Carolina
- Andrea L. Vallevand: Wake Forest School of Medicine, Office of Undergraduate Medical Education, Winston-Salem, North Carolina
- David Manthey: Wake Forest School of Medicine, Department of Emergency Medicine, Winston-Salem, North Carolina
2. Hiller K, Jung J, Lawson L, Riddell R, Franzen D. Multi-institutional Implementation of the National Clinical Assessment Tool in Emergency Medicine: Data From the First Year of Use. AEM Educ Train 2021;5:e10496. [PMID: 33842811] [PMCID: PMC8019216] [DOI: 10.1002/aet2.10496]
Abstract
OBJECTIVES Uniformly training physicians to provide safe, high-quality care requires reliable assessment tools to ensure learner competency. The consensus-derived National Clinical Assessment Tool in Emergency Medicine (NCAT-EM) has been adopted by clerkships across the country. Analysis of large-scale deidentified data from a consortium of users is reported.
METHODS Thirteen sites entered data into a Web-based platform, resulting in over 6,400 discrete NCAT-EM assessments from 748 students and 704 assessors. Reliability analysis, internal consistency analysis, and factorial analysis of variance for hypothesis generation were performed.
RESULTS All categories on the NCAT-EM rating scales and professionalism subdomains were used. Clinical rating scale and global assessment scores were positively skewed, similar to other assessments commonly used in emergency medicine (EM). Professionalism lapses were noted in <1% of assessments. Cronbach's alpha was >0.8 for each site; however, interinstitutional variability was significant. M4 students scored higher than M3 students, and EM-bound students scored higher than non-EM-bound students. There were site-specific differences based on number of prior EM rotations, but no overall association. There were differences in scores based on assessor faculty rank and resident training year, but not by years in practice. There were site-specific differences based on student sex, but no overall difference.
CONCLUSIONS To our knowledge, this is the first large-scale multi-institutional implementation of a single clinical assessment tool. This study demonstrates the feasibility of a unified approach to clinical assessment across multiple diverse sites. Challenges remain in determining appropriate score distributions and improving consistency in scoring between sites.
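For reference, the internal-consistency statistic reported here (Cronbach's alpha > 0.8 per site) is computed from an assessments-by-items score matrix. A minimal sketch on simulated ratings; the eight items are a hypothetical stand-in for NCAT-EM rating domains.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_assessments, n_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Simulated 1-5 ratings: a shared "ability" signal plus per-item noise.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(500, 1))
ratings = np.clip(np.round(3 + ability + rng.normal(0, 0.5, (500, 8))), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high, since items share signal
```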
Affiliation(s)
- Katherine Hiller: Department of Emergency Medicine, University of Arizona, Tucson, AZ, USA
- Julianna Jung: Department of Emergency Medicine, Johns Hopkins University, Baltimore, MD, USA
- Luan Lawson: Department of Emergency Medicine, East Carolina University, Greenville, NC, USA
- Rebecca Riddell: Office of Assessment and Evaluation, Johns Hopkins University, Baltimore, MD, USA
- Doug Franzen: Department of Emergency Medicine, University of Washington, Seattle, WA, USA
3. Dubosh NM, Fisher J, Lewis J, Ullman EA. Faculty Evaluations Correlate Poorly with Medical Student Examination Performance in a Fourth-Year Emergency Medicine Clerkship. J Emerg Med 2017;52:850-855. [PMID: 28341085] [DOI: 10.1016/j.jemermed.2016.09.018]
Abstract
BACKGROUND Clerkship directors routinely evaluate medical students using multiple modalities, including faculty assessment of clinical performance and written examinations. Both forms of evaluation often play a prominent role in the final clerkship grade. The degree to which these modalities correlate in an emergency medicine (EM) clerkship is unclear.
OBJECTIVE We sought to correlate faculty clinical evaluations with medical student performance on a written, standardized EM examination of medical knowledge.
METHODS This is a retrospective study of fourth-year medical students in a 4-week EM elective at one academic medical center. EM faculty performed end-of-shift evaluations of students via a blinded online system using a 5-point Likert scale for 8 domains: data acquisition, data interpretation, medical knowledge base, professionalism, patient care and communication, initiative/reliability/dependability, procedural skills, and overall evaluation. All students completed the National EM M4 Examination. Means, medians, and standard deviations for end-of-shift evaluation scores were calculated, and correlations with examination scores were assessed using Spearman's rank correlation coefficient.
RESULTS Thirty-nine medical students with 224 discrete faculty evaluations were included. The median number of evaluations completed per student was 6. The mean score (±SD) on the examination was 78.6% ± 6.1%. The examination score correlated poorly with faculty evaluations across all 8 domains (ρ = 0.074-0.316).
CONCLUSION Faculty evaluations of medical students across multiple domains of competency correlate poorly with written examination performance during an EM clerkship. Educators need to consider the limitations of examination scores in assessing students' ability to provide quality clinical care.
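The reported statistic is Spearman's rank correlation between per-student evaluation scores and exam scores. A minimal sketch, with simulated evaluations and hypothetical column names (mean of each student's "overall" ratings against their exam percent):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
# 224 simulated end-of-shift evaluations spread over 39 students.
evals = pd.DataFrame({
    "student_id": rng.integers(0, 39, size=224),
    "overall": rng.integers(1, 6, size=224),  # 5-point Likert domain score
})
exam_pct = pd.Series(rng.normal(78.6, 6.1, size=39))  # indexed by student id

per_student = evals.groupby("student_id")["overall"].mean()
rho, p = stats.spearmanr(per_student, exam_pct.loc[per_student.index])
print(f"rho = {rho:.3f}, p = {p:.3f}")
```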
Affiliation(s)
- Nicole M Dubosh: Department of Emergency Medicine, Harvard Medical School, Boston, Massachusetts; Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Jonathan Fisher: Department of Emergency Medicine, Maricopa Medical Center, Phoenix, Arizona
- Jason Lewis: Department of Emergency Medicine, Harvard Medical School, Boston, Massachusetts; Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Edward A Ullman: Department of Emergency Medicine, Harvard Medical School, Boston, Massachusetts; Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
4. Lawson LE, Musick D, Brewer K. Correlation of the National Emergency Medicine M4 Clerkship Examination with USMLE Examination Performance. West J Emerg Med 2015;16:1159-65. [PMID: 26759671] [PMCID: PMC4703161] [DOI: 10.5811/westjem.2015.10.25496]
Abstract
Introduction Assessment of medical students' knowledge in clinical settings is complex yet essential to the learning process. Clinical clerkships use various types of written examinations to objectively test medical knowledge within a given discipline. Within emergency medicine (EM), a new national standardized exam was developed to test medical knowledge in this specialty. Evaluation of the psychometric properties of a new examination is an important issue to address during test development and use. Studies have shown that student performance on selected standardized exams can reveal students' strengths and/or weaknesses so that effective remedial efforts can be implemented. Our study sought to address these issues by examining the association of scores on the new EM national exam with other standardized exam scores.
Methods From August 2011 to April 2013, National EM M4 examination scores of fourth-year medical students, taken at the end of a required EM clerkship, were compiled. We examined the correlation of the National EM M4 examination with scores on initial attempts of the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) examinations. Correlation coefficients and their 95% confidence intervals are reported. We also examined the association between the National EM M4 examination score, final grades for the EM rotation, and USMLE Step 1 and Step 2 CK scores.
Results A total of 133 students were included in the study and achieved a mean score of 79.5 (SD 8.0) on the National EM M4 exam, compared to a national mean of 79.7 (SD 3.89). The mean USMLE Step 1 score was 226.8 (SD 19.3), and the mean USMLE Step 2 CK score was 238.5 (SD 18.9). National EM M4 examination scores showed moderate correlation with both USMLE Step 1 (correlation coefficient=0.50; 95% CI [0.28–0.67]) and USMLE Step 2 CK (correlation coefficient=0.47; 95% CI [0.25–0.65]). Students scoring below the median on the National EM M4 exam also scored well below their colleagues on USMLE exams.
Conclusion The moderate correlation of the National EM M4 examination and USMLE Step 1 and Step 2 CK scores supports the use of the CDEM National EM M4 examination as an effective means of assessing medical knowledge in fourth-year medical students. Identification of students scoring lower on standardized exams allows effective remedial efforts to be undertaken throughout the medical education process.
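The abstract reports correlation coefficients with 95% confidence intervals but does not say how the intervals were obtained; a standard choice is the Fisher z-transformation, sketched below on simulated score pairs (all numbers are stand-ins).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
step1 = rng.normal(226.8, 19.3, size=133)                   # simulated Step 1
em_m4 = 0.2 * (step1 - 226.8) + rng.normal(79.5, 7.0, 133)  # correlated EM M4

r, _ = stats.pearsonr(step1, em_m4)
n = len(step1)
z = np.arctanh(r)                 # Fisher z-transform of r
se = 1.0 / np.sqrt(n - 3)         # standard error of z
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```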
Affiliation(s)
- Luan E Lawson: East Carolina University, Brody School of Medicine, Department of Emergency Medicine, Greenville, North Carolina; East Carolina University, Brody School of Medicine, Department of Medical Education, Greenville, North Carolina
- David Musick: Virginia Tech Carilion School of Medicine, Department of Internal Medicine, Roanoke, Virginia
- Kori Brewer: East Carolina University, Brody School of Medicine, Department of Emergency Medicine, Greenville, North Carolina
5. Hiller K, House J, Lawson L, Poznanski S, Morrissey TK. Medical Student Performance on the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination and the National Emergency Medicine M4 Exams. West J Emerg Med 2015;16:919-22. [PMID: 26594290] [PMCID: PMC4651594] [DOI: 10.5811/westjem.2015.9.27305]
Abstract
INTRODUCTION In April 2013, the National Board of Medical Examiners (NBME) released an Advanced Clinical Examination (ACE) in emergency medicine (EM). In addition to this new resource, CDEM (Clerkship Directors in EM) provides two online, high-quality, internally validated examinations. National usage statistics are available for all three examinations; however, it is currently unknown how students entering an EM residency perform compared to the entire national cohort. This information may help educators interpret examination scores of both EM-bound and non-EM-bound students.
OBJECTIVES The objective of this study was to compare EM clerkship examination performance between students who matched into an EM residency in 2014 and students who did not. Comparisons were made using the EM-ACE and both versions of the National fourth-year medical student (M4) EM examinations.
METHODS In this retrospective multi-institutional cohort study, the EM-ACE and either Version 1 (V1) or Version 2 (V2) of the National EM M4 examination were given to students taking a fourth-year EM rotation at five institutions between April 2013 and February 2014. We collected examination performance (the scaled EM-ACE score and percent correct on the EM M4 exams) and 2014 NRMP Match status. Student's t-tests were performed on the examination averages of students who matched in EM compared with those who did not.
RESULTS A total of 606 students from five institutions took both the EM-ACE and one of the EM M4 exams; 94 (15.5%) matched in EM in the 2014 Match. The mean scores for EM-bound students on the EM-ACE, V1, and V2 of the EM M4 exams were 70.9 (n=47, SD=9.0), 84.4 (n=36, SD=5.2), and 83.3 (n=11, SD=6.9), respectively. Mean scores for non-EM-bound students were 68.0 (n=256, SD=9.7), 82.9 (n=243, SD=6.5), and 74.5 (n=13, SD=5.9). There was a significant difference in mean scores between EM-bound and non-EM-bound students for the EM-ACE (p=0.05) and V2 (p<0.01), but not V1 (p=0.18), of the National EM M4 examination.
CONCLUSION Students who successfully matched in EM performed better on all three exams at the end of their EM clerkship.
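The group comparison here is a Student's t-test on mean exam scores for EM-matched vs. non-matched students. A minimal sketch on simulated EM-ACE scores drawn with the group sizes, means, and SDs quoted above (the draws themselves are synthetic, not study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
em_bound = rng.normal(70.9, 9.0, size=47)    # matched in EM (EM-ACE)
non_em = rng.normal(68.0, 9.7, size=256)     # did not match in EM (EM-ACE)

t, p = stats.ttest_ind(em_bound, non_em)     # classic two-sample t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```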
Affiliation(s)
- Katherine Hiller: University of Arizona School of Medicine, Department of Emergency Medicine, Tucson, Arizona
- Joseph House: University of Michigan School of Medicine, Department of Emergency Medicine, Ann Arbor, Michigan
- Luan Lawson: East Carolina University Brody School of Medicine, Department of Emergency Medicine, Greenville, North Carolina
- Stacey Poznanski: Wright State University School of Medicine, Department of Emergency Medicine, Dayton, Ohio
- Thomas K Morrissey: University of Florida School of Medicine-Jacksonville, Department of Emergency Medicine, Jacksonville, Florida
6. Heitz C, Prusakowski M, Willis G, Franck C. Does the Concept of the "Flipped Classroom" Extend to the Emergency Medicine Clinical Clerkship? West J Emerg Med 2015;16:851-5. [PMID: 26594277] [PMCID: PMC4651581] [DOI: 10.5811/westjem.2015.9.27256]
Abstract
Introduction Linking educational objectives and clinical learning during clerkships can be difficult. Clinical shifts during emergency medicine (EM) clerkships provide a wide variety of experiences, some of which may not be relevant to recommended educational objectives. Students can be directed to standardize their clinical experiences, and this improves performance on examinations. We hypothesized that applying a "flipped classroom" model to the clinical clerkship would improve performance on multiple-choice testing when compared to standard learning.
Methods Students at two institutions were randomized to complete two of four selected EM clerkship topics in a "flipped" fashion and the two others in a standard fashion. For flipped topics, students were directed to complete chief complaint-based asynchronous modules prior to a shift, during which they were directed to focus on that chief complaint. For the other two topics, modules were to be completed at the students' discretion, and shifts had no theme. At the end of the four-week clerkship, a 40-question multiple-choice examination was administered, with 10 questions per topic. We compared performance on flipped topics with performance on topics completed in standard fashion. Students were surveyed on perceived effectiveness, ability to follow the protocol, and willingness of preceptors to allow a chief complaint focus.
Results Sixty-nine students participated; examination scores for 56 were available for analysis. For the primary outcome, no difference was seen between the flipped and standard methods (p=0.494). A mixed model approach showed no effect of flipped status, protocol adherence, or site of rotation on the primary outcome of exam scores. Students rated the concept of the flipped clerkship highly (3.48/5). Almost one third (31.1%) of students stated that they were unable to adhere to the protocol.
Conclusion Preparation for a clinical shift with pre-assigned, web-based learning modules, followed by an attempt at chief complaint-focused learning during the shift, did not improve performance on a multiple-choice assessment of knowledge; however, one third of participants did not adhere strictly to the protocol. Future investigations should ensure completion of pre-assigned learning as well as clinical experiences, and consider alternate measures of knowledge.
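A sketch of what the "mixed model approach" could look like: exam score modeled on flipped status, protocol adherence, and rotation site, with a random intercept per student. The exact specification is an assumption, and all columns and data below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_students = 56                       # students with analyzable scores
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), 4),        # 4 topics each
    "flipped": np.tile([1, 1, 0, 0], n_students),          # 2 of 4 topics flipped
    "adhered": np.repeat(rng.integers(0, 2, n_students), 4),
    "site": np.repeat(rng.integers(0, 2, n_students), 4),  # two institutions
    "score": rng.normal(7, 1.5, n_students * 4),           # per-topic score /10
})

# Random intercept per student; fixed effects for the factors of interest.
model = smf.mixedlm("score ~ flipped + adhered + C(site)", df, groups=df["student"])
print(model.fit().summary())
```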
Affiliation(s)
- Corey Heitz: Virginia Tech Carilion School of Medicine, Department of Emergency Medicine, Roanoke, Virginia
- Melanie Prusakowski: Virginia Tech Carilion School of Medicine, Department of Emergency Medicine, Roanoke, Virginia
- George Willis: University of Maryland School of Medicine, Department of Emergency Medicine, Baltimore, Maryland
- Christopher Franck: Virginia Polytechnic Institute and State University, Department of Statistics, Blacksburg, Virginia
7. Heitz CR, Lawson L, Beeson M, Miller ES. The National Emergency Medicine Fourth-year Student (M4) Examinations: Updates and Performance. J Emerg Med 2015;50:128-34. [PMID: 26409677] [DOI: 10.1016/j.jemermed.2015.06.072]
Abstract
BACKGROUND Version 1 (V1) of the National Emergency Medicine Fourth-year Student (EM M4) Examination was released in 2011 and revised alongside the release of Version 2 (V2) in 2012. Each examination contains 50 multiple-choice questions designed to assess knowledge in the EM M4 clerkship curriculum. Development and initial performance data were described previously.
OBJECTIVE To provide updated V1 performance data, to describe the development and revision of V2, and to compare performance between academic years and examination forms, and within academic years.
METHODS Examinations are administered at www.saemtests.org, which provides ongoing performance data. After one year of use, nine questions on V2 were revised: five because of low discriminatory ability and four because of excessive difficulty. Revision or replacement was done in accordance with the National Board of Medical Examiners (NBME) Item Writing Guidelines. Mean scores were compared for V1 between academic years (i.e., July 2011-June 2012 vs. July 2012-June 2013), for V2 compared with V1, and within each examination version for early vs. late test takers.
RESULTS V1 has been administered >10,000 times since its release, and the current form mean is 81.5% (SD 3.7); the average discriminatory value (rpb) is 0.204. V2 has been administered >1,500 times, with a mean score of 78.4% (SD 4.4) and an average rpb of 0.253. The current V1 and V2 means differ statistically. Scores from examinees completing V1 or V2 early vs. late in the academic year also differ statistically.
CONCLUSIONS Performance data for V1 remain stable after two years. Revision of poorly performing questions improved question performance on V2. Questions with low rpb or low pdiff will continue to be revised annually. While the examination forms differ statistically, the practical utility of the differences is not defined.
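The discriminatory value rpb is the point-biserial correlation between success on a single item and performance on the rest of the exam. A sketch of such an item analysis on a simulated response matrix; the corrected-total approach and the 0.2 flagging threshold are illustrative assumptions, not the authors' published criteria.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Simulated examinee-by-item matrix (1 = correct); independent random responses,
# so rpb will sit near 0 and most items get flagged; real data are correlated.
responses = (rng.random((1000, 50)) < 0.8).astype(int)

def item_rpb(responses: np.ndarray, item: int) -> float:
    """Point-biserial r between one item and the corrected total score."""
    rest_total = np.delete(responses, item, axis=1).sum(axis=1)
    r, _ = stats.pointbiserialr(responses[:, item], rest_total)
    return r

flagged = [i for i in range(responses.shape[1]) if item_rpb(responses, i) < 0.2]
print(f"items flagged for revision: {len(flagged)} of {responses.shape[1]}")
```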
Affiliation(s)
- Corey R Heitz: Carilion Clinic, Virginia Tech Carilion School of Medicine, Roanoke, Virginia
- Luan Lawson: Brody School of Medicine at East Carolina University, Greenville, North Carolina
- Michael Beeson: Northeast Ohio Medical University, Rootstown, Ohio; Akron General Medical Center, Akron, Ohio
- Emily S Miller: Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts
8. Hiller K, Miller ES, Lawson L, Wald D, Beeson M, Heitz C, Morrissey T, House J, Poznanski S. Correlation of the NBME advanced clinical examination in EM and the national EM M4 exams. West J Emerg Med 2015;16:138-42. [PMID: 25671023] [PMCID: PMC4307698] [DOI: 10.5811/westjem.2014.11.24189]
Abstract
Introduction Since 2011, two online, validated exams for fourth-year emergency medicine (EM) students have been available (the National EM M4 Exams). In 2013, the National Board of Medical Examiners offered the Advanced Clinical Examination in Emergency Medicine (EM-ACE). All of these exams are now in widespread use; however, there are no data on how they correlate. This study evaluated the correlation between the EM-ACE exam and the National EM M4 Exams.
Methods From May 2013 to April 2014, the EM-ACE and one version of the EM M4 exam were administered sequentially to fourth-year EM students at five U.S. medical schools. Data collected included institution, gross and scaled scores, and version of the EM M4 exam. We performed Pearson's correlation and random effects linear regression.
Results A total of 303 students took the EM-ACE and Version 1 (V1) or Version 2 (V2) of the EM M4 exam (279 and 24 students, respectively). The mean percent correct for the exams was as follows: EM-ACE 74.8 (SD 8.83); V1 83.0 (SD 6.41); V2 78.5 (SD 7.70). Pearson's correlation coefficient for V1/EM-ACE was 0.51 (0.42 scaled) and for V2/EM-ACE was 0.59 (0.41 scaled). The coefficient of determination for V1/EM-ACE was 0.72 and for V2/EM-ACE 0.71 (0.86 and 0.49 for scaled scores). The R-squared values were 0.25 and 0.30 (0.18 and 0.13 scaled), respectively. There was a significant cluster effect by institution.
Conclusion There was moderate positive correlation between student scores on the EM-ACE exam and the National EM M4 Exams.
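One simple way to see the institutional cluster effect noted above is to compare the pooled Pearson correlation against within-site correlations; the authors' random effects regression is a more formal version of this check. All data below are simulated, with an artificial site-level shift.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n = 303
df = pd.DataFrame({"site": rng.integers(0, 5, size=n)})  # five institutions
shift = 1.5 * df["site"]                                 # site-level offset
df["em_ace"] = rng.normal(74.8, 8.8, n) + shift
df["em_m4"] = 0.4 * (df["em_ace"] - 74.8) + rng.normal(83.0, 5.5, n) + shift

r_pooled, _ = stats.pearsonr(df["em_ace"], df["em_m4"])
r_by_site = df.groupby("site")[["em_ace", "em_m4"]].apply(
    lambda g: stats.pearsonr(g["em_ace"], g["em_m4"])[0]
)
print(f"pooled r = {r_pooled:.2f}")
print(r_by_site.round(2))  # within-site r differs from the pooled r
```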
Affiliation(s)
- Katherine Hiller: University of Arizona, Department of Emergency Medicine, Tucson, Arizona
- Emily S Miller: Harvard University, Department of Emergency Medicine, Boston, Massachusetts
- Luan Lawson: Brody School of Medicine at East Carolina University, Department of Emergency Medicine, Greenville, North Carolina
- David Wald: Temple University School of Medicine, Department of Emergency Medicine, Philadelphia, Pennsylvania
- Michael Beeson: Northeast Ohio Medical University, Department of Emergency Medicine, Rootstown, Ohio
- Corey Heitz: Virginia Tech Carilion School of Medicine, Department of Emergency Medicine, Roanoke, Virginia
- Thomas Morrissey: University of Florida Health Sciences Center, Department of Emergency Medicine, Jacksonville, Florida
- Joseph House: University of Michigan School of Medicine, Department of Emergency Medicine, Ann Arbor, Michigan
- Stacey Poznanski: Wright State University Boonshoft School of Medicine, Department of Emergency Medicine, Dayton, Ohio