1. Assessing clinical competence: a multitrait-multimethod matrix construct validity study. Advances in Health Sciences Education: Theory and Practice 2024; 29:567-585. PMID: 37530967. DOI: 10.1007/s10459-023-10269-0.
Abstract
Education in Doctor of Medicine programs has moved towards an emphasis on clinical competency, with entrustable professional activities providing a framework of learning objectives and outcomes to be assessed within the clinical environment. While the identification and structured definition of objectives and outcomes have evolved, many methods employed to assess clerkship students' clinical skills remain relatively unchanged. There is a paucity of medical education research applying advanced statistical design and analytic techniques to investigate the validity of clinical skills assessment. One robust statistical method, multitrait-multimethod matrix analysis, can be applied to investigate construct validity across multiple assessment instruments and settings. Four traits were operationalized to represent the construct of critical clinical skills (professionalism, data gathering, data synthesis, and data delivery). The traits were assessed using three methods (direct observations by faculty coaches, clinical workplace-based evaluations, and objective structured clinical examination-type clinical practice examinations). The four traits and three methods were intercorrelated for the multitrait-multimethod matrix analysis. The results indicated reliability values in the adequate-to-good range across the three methods, with the majority of the validity coefficients demonstrating statistical significance. The clearest evidence for convergent and divergent validity was for the professionalism trait. The same-method/different-trait correlations indicated a substantial method effect, particularly for clinical workplace-based assessments. The multitrait-multimethod matrix approach, currently underutilized in medical education, could be employed to explore validity evidence for complex constructs such as clinical skills. These results can inform faculty development programs seeking to improve the reliability and validity of assessments within the clinical environment.
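The multitrait-multimethod logic described in this abstract can be sketched in a few lines of code. The illustration below is a minimal, hypothetical example: the trait and method names follow the abstract, but the student scores and the six-student sample are invented. Each trait is scored by each method, all trait-method pairings are intercorrelated, and convergent validity is read off the same-trait/different-method cells while method effects appear in the different-trait/same-method cells.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings for six students: (method, trait) -> scores.
scores = {
    ("coach", "professionalism"): [4, 5, 3, 4, 5, 2],
    ("coach", "data_gathering"):  [3, 4, 3, 5, 4, 2],
    ("wba",   "professionalism"): [4, 5, 3, 5, 5, 3],
    ("wba",   "data_gathering"):  [3, 3, 2, 5, 4, 3],
}

# Intercorrelate every pairing and label each cell of the MTMM matrix.
matrix = {}
for (m1, t1), (m2, t2) in combinations(scores, 2):
    r = pearson(scores[(m1, t1)], scores[(m2, t2)])
    kind = ("validity (monotrait-heteromethod)" if t1 == t2
            else "method effect (heterotrait-monomethod)" if m1 == m2
            else "heterotrait-heteromethod")
    matrix[((m1, t1), (m2, t2))] = (round(r, 2), kind)

for pair, (r, kind) in matrix.items():
    print(pair, r, kind)
```

In a full analysis each cell would also be tested for significance; here the point is only how the matrix cells map onto convergent and divergent validity evidence.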
2. Foundations of Emergency Medicine: Impact of a Standardized, Open-access, Core Content Curriculum on In-Training Exam Scores. West J Emerg Med 2024; 25:209-212. PMID: 38596920. PMCID: PMC11000563. DOI: 10.5811/westjem.18387.
Abstract
Introduction Learners frequently benefit from modalities such as small-group, case-based teaching and interactive didactic experiences rather than passive learning methods. These contemporary techniques are features of Foundations of Emergency Medicine (FoEM) curricula, and particularly the Foundations I (F1) course, which targets first-year resident (PGY-1) learners. The American Board of Emergency Medicine administers the in-training exam (ITE), which provides an annual assessment of EM-specific medical knowledge. We sought to assess the effect of F1 implementation on ITE scores.
Methods We retrospectively analyzed data from interns at four EM residency programs accredited by the Accreditation Council for Graduate Medical Education. We collected data in 2021. Participating sites were geographically diverse and included three- and four-year training formats. We collected data from interns two years before (control group) and two years after (intervention group) implementation of F1 at each site. Year of F1 implementation ranged from 2015-2018 at participating sites. We abstracted data using a standard form including program, ITE raw score, year of ITE administration, US Medical Licensing Exam Step 1 score, Step 2 Clinical Knowledge (CK) score, and gender. We performed univariable and multivariable linear regression to explore differences between intervention and control groups.
Results We collected data for 180 PGY-1s. Step 1 and Step 2 CK scores were significant predictors of ITE score in univariable analyses (both with P < 0.001). After accounting for Step 1 and Step 2 CK scores, we did not find F1 implementation to be a significant predictor of ITE score (P = 0.83).
Conclusion Implementation of the F1 curriculum was not associated with significant changes in ITE performance after controlling for important variables.
3. The more things change the more they stay the same: Factors influencing emergency medicine residency selection in the virtual era. AEM Education and Training 2023; 7:e10921. PMID: 37997588. PMCID: PMC10664396. DOI: 10.1002/aet2.10921.
Abstract
Background Interviews for emergency medicine (EM) residency positions largely transitioned to a virtual-only format in 2020-2021. The impact of virtual interview factors on applicants' ranking of programs is unknown.
Objective We sought to assess the impact of modifiable factors in virtual interviews on applicants' ranking of EM residency programs.
Methods We conducted a cross-sectional, mixed-methods survey of students applying to at least one of seven study authors' EM residency programs in the United States during the 2020-2021 application cycle. The survey was developed using an interactive Delphi process and piloted prior to implementation. The survey was administered from May to June 2021 with up to four email reminders. Quantitative analysis included descriptive statistics. Three authors performed a thematic qualitative analysis of free-text responses.
Results A total of 664 of 2281 (29.1%) students completed the survey, including 335 (50.5%) male, 316 (47.7%) female, and six (0.9%) nonbinary respondents. A total of 143 (21.6%) respondents identified as underrepresented in medicine, and 84 (12.7%) identified as LGBTQIA+. Respondents participated in a median of 14 interviews and ranked a median of 14 programs. Most respondents (335, 50.6%) preferred a choice of in-person or virtual interviews, while 183 (27.6%) preferred all in-person and 144 (21.8%) preferred all virtual. The program website and interview social were the most important factors influencing respondents' ranking. Qualitative analysis revealed several positive aspects of virtual interviews, including logistical ease and comfort. Negative aspects included technical issues, perceived interview hoarding, and barriers to applicant assessment and performance. Demonstrated effort by the program, effective information delivery, communication of resident culture, and a well-implemented interview day positively influenced respondents' ranking of programs.
Conclusions This study identified characteristics of the virtual interview format that impact applicants' ranking of programs. These results can inform future recruitment practices.
4. Reconceptualizing the emergency medicine resident scholarly requirement: Proposed framework and rubric. AEM Education and Training 2023; 7:S33-S40. PMID: 37383837. PMCID: PMC10294215. DOI: 10.1002/aet2.10878.
Abstract
Background Completion of a scholarly project is required by the Accreditation Council for Graduate Medical Education (ACGME) for all residency training programs; however, implementation can vary significantly between programs. The lack of generalizable standards for scholarly projects required of trainees within ACGME-accredited residencies has led to wide variation in the quality of, and effort devoted to, these projects. Our goal is to introduce a framework and propose a corresponding rubric for resident scholarship that quantifies and qualifies the components of scholarship, to better measure resident scholarly output across the graduate medical education (GME) continuum.
Methods Eight experienced educators and members of the Society for Academic Emergency Medicine Education Committee were selected to explore the current scholarly project guidelines and propose a definition that can be universally applied to diverse training programs. Following a review of the current literature, the authors engaged in iterative, divergent, and convergent discussions via meetings and asynchronous dialogue to develop a framework and associated rubric.
Results The group proposes that emergency medicine (EM) resident scholarship should (1) involve a structured process, (2) generate outcomes, (3) be disseminated, and (4) be peer reviewed. These components of resident scholarly activity can be achieved through a single project encompassing all four domains or through multiple smaller projects that together address them. To assist residency programs in assessing an individual resident's achievement of these standards, a rubric is proposed.
Conclusion Based on current literature and consensus, we propose a framework and rubric for tracking resident scholarly achievement in an effort to elevate and advance EM scholarship. Future work should explore the optimal application of this framework and define minimal scholarship goals for EM resident scholarship.
5. Test-enhanced learning: As easy as (A), (B), (C). AEM Education and Training 2023; 7:e10825. PMID: 37008652. PMCID: PMC10028412. DOI: 10.1002/aet2.10825.
6. Quality assessment of optic nerve sheath diameter ultrasonography: Scoping literature review and Delphi protocol. J Neuroimaging 2022; 32:808-824. PMID: 35711135. DOI: 10.1111/jon.13018.
Abstract
BACKGROUND AND PURPOSE The optic nerve is surrounded by an extension of the meningeal coverings of the brain. When cerebrospinal fluid pressure increases, it causes distention of the optic nerve sheath, so the optic nerve sheath diameter (ONSD) can be measured by ultrasonography (US) as a noninvasive surrogate of elevated intracranial pressure. However, ONSD measurements in the literature have exhibited significant heterogeneity, suggesting a need for consensus on ONSD image acquisition and measurement. We aim to establish a consensus for an ONSD US Quality Criteria Checklist (ONSD US QCC).
METHODS A scoping review of published ultrasound ONSD imaging and measurement criteria was performed to guide the development of a preliminary ONSD US QCC, which will undergo a modified Delphi study to reach expert consensus on ONSD quality criteria. The protocol of this modified Delphi study is presented in this manuscript.
RESULTS A total of 357 ultrasound studies were included in the review. Quality criteria were evaluated under five categories: probe selection, safety, positioning, image acquisition, and measurement.
CONCLUSIONS This review and Delphi protocol aim to establish the ONSD US QCC. A broad consensus from this process may reduce the variability of ONSD measurements in future studies, which would ultimately translate into improved clinical applications of ONSD. This protocol was reviewed and endorsed by the German Society of Ultrasound in Medicine.
7. Avocado toasted: Mythbusting "Millennials," "Generation Z," and generational theory. AEM Education and Training 2022; 6:e10757. PMID: 35664707. PMCID: PMC9134576. DOI: 10.1002/aet2.10757.
8. Effect of Perceived Level of Interaction on Faculty Evaluations of 3rd Year Medical Students. Medical Science Educator 2021; 31:1327-1332. PMID: 34457975. PMCID: PMC8368453. DOI: 10.1007/s40670-021-01307-w.
Abstract
INTRODUCTION Several factors are known to affect the way clinical performance evaluations (CPEs) of medical students are completed by supervising physicians. We sought to explore the effect of faculty-perceived "level of interaction" (LOI) on these evaluations.
METHODS Our third-year CPE requires evaluators to identify their perceived LOI with each student as low, moderate, or high. We examined CPEs completed during the 2018-2019 academic year for differences in (1) clinical and professionalism ratings, (2) quality of narrative comments, (3) quantity of narrative comments, and (4) percentage of evaluation questions left unrated.
RESULTS A total of 3682 CPEs were included in the analysis. ANOVA revealed statistically significant differences between LOI and clinical ratings (p ≤ .001), with mean ratings from faculty with a high LOI significantly higher than those from faculty with a moderate or low LOI (p ≤ .001). Chi-squared analysis demonstrated differences based on faculty LOI in whether questions were left unrated (p ≤ .001), quantity of narrative comments (p ≤ .001), and specificity of narrative comments (p ≤ .001).
CONCLUSIONS Faculty who perceived a higher LOI were more likely to assign higher ratings, completed more of the clinical evaluation, and provided more specific, higher-quality narrative feedback.
SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-021-01307-w.
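The ANOVA comparison across the three LOI groups can be sketched as follows. This is a minimal, dependency-free illustration with invented rating data; a real analysis would use a statistics package and report the p-value from the F distribution alongside the F statistic.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of score groups."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical clinical ratings grouped by faculty-perceived LOI.
low      = [3.1, 3.4, 3.0, 3.3]
moderate = [3.5, 3.6, 3.4, 3.7]
high     = [3.9, 4.0, 3.8, 4.1]

f_stat = one_way_anova_f([low, moderate, high])
print(round(f_stat, 2))
```

A large F indicates that mean ratings differ more between LOI groups than chance within-group scatter would explain, which is the pattern the abstract reports.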
9. Academic Emergency Medicine Faculty Experiences with Racial and Sexual Orientation Discrimination. West J Emerg Med 2020; 21:1160-1169. PMID: 32970570. PMCID: PMC7514380. DOI: 10.5811/westjem.2020.6.47123.
Abstract
INTRODUCTION Despite the increasing diversity of individuals entering medicine, physicians from racial and sexual minority groups continue to experience bias and discrimination in the workplace. The objective of this study was to determine the current experiences and perceptions of discrimination on the basis of race and sexual orientation among academic emergency medicine (EM) faculty.
METHODS We conducted a cross-sectional survey of a convenience sample of EM faculty across six programs. Survey items included the Overt Gender Discrimination at Work (OGDW) Scale adapted for race and sexual orientation, and the frequency and source of experienced and observed discrimination. Group comparisons were made using t-tests or chi-square analyses, and relationships between race or sexual orientation and physicians' experiences were evaluated using correlation analyses.
RESULTS A total of 141 out of 352 (40.1%) subjects completed at least a portion of the survey. Non-White physicians reported higher mean racial OGDW scores than their White counterparts (13.4 vs 8.6; 95% confidence interval [CI] for difference, -7.7 to -2.9). Non-White EM faculty were also more likely to report having experienced discriminatory treatment based on race than were White EM faculty (48.0% vs 12.6%; CI for difference, 16.6% to 54.2%), although both groups were equally likely to report having observed race-based discrimination of another physician. EM faculty who identified as sexual minorities reported higher mean sexual minority OGDW scores than their heterosexual counterparts (11.1 vs 7.1; 95% CI for difference, -7.3 to -0.6). There were no significant differences between sexual minority and heterosexual faculty in their reports of experiencing or observing discrimination based on sexual orientation.
CONCLUSION EM faculty from racial and sexual minority groups perceived more discrimination based on race or sexual orientation in their workplace than their majority counterparts. Regardless of race or sexual orientation, EM faculty were similar in their observations of discriminatory treatment of another physician based on race or sexual orientation.
10. #MeToo in EM: A Multicenter Survey of Academic Emergency Medicine Faculty on Their Experiences with Gender Discrimination and Sexual Harassment. West J Emerg Med 2020; 21:252-260. PMID: 32191183. PMCID: PMC7081862. DOI: 10.5811/westjem.2019.11.44592.
Abstract
Introduction Gender-based discrimination and sexual harassment of female physicians are well documented. The #MeToo movement has brought renewed attention to these problems. This study examined academic emergency physicians' experiences with workplace gender discrimination and sexual harassment.
Methods We conducted a cross-sectional survey of a convenience sample of emergency medicine (EM) faculty across six programs. Survey items included the following: the Overt Gender Discrimination at Work (OGDW) Scale; the frequency and source of experienced and observed discrimination; and whether subjects had encountered unwanted sexual behaviors by a work superior or colleague in their careers. For the latter question, we asked subjects to characterize the behaviors and whether those experiences had a negative effect on their self-confidence and career advancement. We made group comparisons using t-tests or chi-square analyses and evaluated relationships between gender and physicians' experiences using correlation analyses.
Results A total of 141 out of 352 (40.1%) subjects completed at least a portion of the survey. Women reported higher mean OGDW scores than men (15.4 vs 10.2; 95% confidence interval [CI], 3.6–6.8). Female faculty were also more likely to report having experienced gender-based discriminatory treatment than male faculty (62.7% vs 12.5%; 95% CI, 35.1%–65.4%), although male and female faculty were equally likely to report having observed gender-based discriminatory treatment of another physician (64.7% vs 56.3%; 95% CI, 8.6%–25.5%). The three most frequent sources of experienced or observed gender-based discriminatory treatment were patients, consulting or admitting physicians, and nursing staff. The majority of women reported having encountered unwanted sexual behaviors in their careers, with a significantly greater proportion of women reporting them compared to men (52.9% vs 26.2%; 95% CI, 9.9%–43.4%). The majority of unwanted behaviors were sexist remarks and sexual advances. Of those respondents who encountered these unwanted behaviors, 22.9% and 12.5% reported at least somewhat negative effects on their self-confidence and career advancement, respectively.
Conclusion Female EM faculty perceived more gender-based discrimination in their workplaces than their male counterparts. The majority of female and approximately a quarter of male EM faculty encountered unwanted sexual behaviors in their careers.
11. Critical Electrocardiogram Curriculum: Setting the Standard for Flipped-Classroom EKG Instruction. West J Emerg Med 2019; 21:52-57. PMID: 31913819. PMCID: PMC6948695. DOI: 10.5811/westjem.2019.11.44509.
Abstract
Introduction Electrocardiogram (EKG) interpretation is integral to emergency medicine (EM) [1]. In 2003, Ginde et al. found that 48% of EM residency directors supported creating a national EKG curriculum [2]. No formal national curriculum exists, and it is unknown whether residents gain sufficient skill from clinical exposure alone.
Methods The authors sought to assess the value of the Foundations of Emergency Medicine (FoEM) EKG curriculum, which provides exposure to critical EKG patterns, a framework for EKG interpretation when the diagnosis is not obvious, and implementation guidelines, with open access for any interested residency. The FoEM EKG I course launched in January 2016, followed by EKG II in July 2017; the courses are benchmarked to post-graduate year 1 (PGY-1) and PGY-2 level learners, respectively. Selected topics included 15 published critical EKG diagnoses [5] and 33 selected by the authors. Cases included presenting symptoms, EKGs, and Free Open Access Medical Education (FOAM) links. Full EKG interpretations and question answers were provided.
Results Enrollment during 2017-2018 included 37 EM residencies with 663 learners in EKG I and 22 EM residencies with 438 learners in EKG II. Program leaders and learners were surveyed annually. Leaders indicated that content was appropriate for the intended PGY levels. Leaders and learners indicated the curriculum improved learners' ability to interpret EKGs while working in the emergency department (ED).
Conclusion There is an unmet need for standardization and improvement of EM resident EKG training. Leaders and learners exposed to FoEM EKG courses report improved ability of learners to interpret EKGs in the ED.
12. A Multidisciplinary Self-Directed Learning Module Improves Knowledge of a Quality Improvement Instrument: The HEART Pathway. J Healthc Qual 2019; 40:e9-e14. PMID: 27442714. PMCID: PMC5250587. DOI: 10.1097/jhq.0000000000000044.
Abstract
We created and tested an educational intervention to support implementation of an institution-wide quality improvement (QI) project (the HEART Pathway) designed to improve care for patients with acute chest pain. Although online learning modules have been shown to be effective in imparting knowledge regarding QI projects, it is unknown whether these modules are effective across specialties and healthcare professions. Participants, including nurses, advanced practice clinicians, house staff, and attending physicians (N = 486), were enrolled in an online, self-directed learning course exploring the key concepts of the HEART Pathway. The module was completed by 97% of enrollees (469/486), and 90% passed on the first attempt (422/469). Of the 469 learners, 323 completed the pretest, learning module, and posttest in the correct order. Mean test scores across learners improved significantly from 74% on the pretest to 89% on the posttest. Following the intervention, the HEART Pathway was used for 88% of patients presenting to our institution with acute chest pain. Our data demonstrate that this online, self-directed learning module can improve knowledge of the HEART Pathway across specialties, paving the way for more efficient and informed care for patients with acute chest pain.
13. A Narrative Review of the Evidence Supporting Factors Used by Residency Program Directors to Select Applicants for Interviews. J Grad Med Educ 2019; 11:268-273. PMID: 31210855. PMCID: PMC6570461. DOI: 10.4300/jgme-d-18-00979.3.
Abstract
BACKGROUND Residency applicants feel increasing pressure to maximize their chances of successfully matching into the program of their choice and are applying to more programs than ever before.
OBJECTIVE In this narrative review, we examined the most common and highly rated factors used to select applicants for interviews. We also examined the literature surrounding those factors to illuminate the advantages and disadvantages of using them as differentiating elements in interviewee selection.
METHODS Using the 2018 National Resident Matching Program (NRMP) Program Director Survey as a framework, we examined the last 10 years of literature to ascertain how residency directors are using these common factors to grant residency interviews, and whether these factors are predictive of success in residency.
RESULTS Residency program directors identified 12 factors that contribute substantially to the decision to invite applicants for interviews. Although the United States Medical Licensing Examination (USMLE) Step 1 is often used as a comparative factor, most studies do not demonstrate its predictive value for resident performance, except in the case of test failure. We also found that structured letters of recommendation from within a specialty carry increased benefit when compared with generic letters. Failing USMLE Step 1 or 2 and unprofessional behavior predicted lower performance in residency.
CONCLUSIONS We found that the evidence basis for the factors most commonly used by residency directors is decidedly mixed in terms of predicting success in residency and beyond. Given these limitations, program directors should be skeptical of making summative decisions based on any one factor.
14. Why Residents Quit: National Rates of and Reasons for Attrition Among Emergency Medicine Physicians in Training. West J Emerg Med 2019; 20:351-356. PMID: 30881556. PMCID: PMC6404714. DOI: 10.5811/westjem.2018.11.40449.
Abstract
Introduction Recruiting and retaining residents who will complete their emergency medicine (EM) training is vital, not only because residency positions are a limited and costly resource, but also to prevent the significant disruptions, increased workload, and low morale that may arise when a resident prematurely leaves a program. We investigated national rates of EM resident attrition and examined the reasons and factors associated with their attrition.
Methods In this retrospective, observational study we used national data from the American Medical Association National Graduate Medical Education Census for all residents who entered Accreditation Council for Graduate Medical Education-accredited EM programs between academic years 2006–2007 and 2015–2016. Our main outcome was the annual national rate of EM resident attrition. Secondary outcomes included the main reason for attrition as well as resident factors associated with attrition.
Results Compared to the other 10 largest specialties, EM had the lowest rate of attrition (0.8%, 95% confidence interval [CI] [0.7–0.9]), or approximately 51.6 (95% CI [44.7–58.5]) residents per year. In the attrition population, 44.2% of the residents were women, a significantly higher proportion when compared to the proportion of female EM residents overall (38.8%, p=0.011). A greater proportion of Hispanic/Latino (1.8%) residents also left their programs when compared to their White (0.9%) counterparts (p<0.001). In examining reasons for attrition as reported by the program director, female residents were significantly more likely than male residents to leave due to “health/family reasons” (21.5% vs 9.6%, p=0.019).
Conclusion While the overall rate of attrition among EM residents is low, women and some under-represented minorities in medicine had a higher than expected rate of attrition. Future studies that qualitatively investigate the factors contributing to greater attrition among female and some ethnic minority residents are necessary to inform efforts promoting inclusion and diversity within the specialty.
15. 3 for the Price of 1: Teaching Chest Pain Risk Stratification in a Multidisciplinary, Problem-based Learning Workshop. West J Emerg Med 2018; 19:613-618. PMID: 29760864. PMCID: PMC5942033. DOI: 10.5811/westjem.2017.12.36444.
Abstract
Introduction Chest pain is a common chief complaint among patients presenting to health systems and often leads to complex and intensive evaluations. While these patients are often cared for by a multidisciplinary team (primary care, emergency medicine, and cardiology), medical students usually learn about the care of these patients in a fragmented, single-specialty paradigm. The present and future care of patients with chest pain is multidisciplinary, and the education of medical students on the subject should be as well. Our objective was to evaluate the effectiveness of a multidisciplinary, problem-based learning workshop to teach third-year medical students about risk assessment for patients presenting with chest pain, specifically focusing on acute coronary syndromes.
Methods To create an educational experience consistent with multidisciplinary team-based care, we designed a multidisciplinary, problem-based learning workshop to provide medical students with an understanding of how patients with chest pain are cared for in a systems-based manner to improve outcomes. Participants included third-year medical students (n=219) at a single, tertiary care, academic medical center. Knowledge acquisition was tested in a pre-/post-/retention-test study design.
Results Following the workshop, students achieved a 19.7% (95% confidence interval [CI] [17.3-22.2%]) absolute increase in scores on post-testing as compared to pre-testing. In addition, students maintained an 11.1% (95% CI [7.2-15.0%]) increase on a retention test vs. the pre-test.
Conclusion A multidisciplinary, problem-based learning workshop is an effective method of producing lasting gains in student knowledge about chest pain risk stratification.
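The score gains reported here (an absolute increase with a 95% CI) follow the standard paired-difference calculation, sketched below with invented pre/post scores. A published analysis would use appropriate statistical software, but the underlying arithmetic is the mean difference plus or minus 1.96 standard errors.

```python
def paired_gain_ci(pre, post, z=1.96):
    """Mean pre-to-post gain with a normal-approximation 95% CI."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = (var / n) ** 0.5                                # standard error of the mean
    return mean, (mean - z * se, mean + z * se)

# Hypothetical percentage scores for five students, pre- and post-workshop.
pre  = [52, 60, 47, 55, 58]
post = [70, 78, 69, 73, 80]

gain, (lo, hi) = paired_gain_ci(pre, post)
print(round(gain, 1), (round(lo, 1), round(hi, 1)))
```

With small samples a t critical value would be used instead of z = 1.96; the normal approximation is kept here only to keep the sketch short.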
16. This Article Corrects: “Trends in NRMP Data from 2007–2014 for U.S. Seniors Matching into Emergency Medicine”. West J Emerg Med 2017; 18:550. PMID: 28435510. PMCID: PMC5391909. DOI: 10.5811/westjem.2017.4.34410.
17. Trends in NRMP Data from 2007-2014 for U.S. Seniors Matching into Emergency Medicine. West J Emerg Med 2017; 18:105-109. PMID: 28116018. PMCID: PMC5226739. DOI: 10.5811/westjem.2016.10.31237.
Abstract
INTRODUCTION Since 1978, the National Residency Matching Program (NRMP) has published data describing the characteristics of applicants who matched into their preferred specialty in the NRMP main residency match. These data have been published approximately every two years. There is limited information about trends within these published data for students matching into emergency medicine (EM). Our objective was to investigate and describe trends in NRMP data, including the ratio of applicants to available EM positions; United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores (compared to the national means); the number of programs ranked; and Alpha Omega Alpha Honor Medical Society (AOA) membership among U.S. seniors matching into EM. METHODS This was a retrospective observational review of NRMP data published between 2007 and 2016. We analyzed the data using analysis of variance (ANOVA) or Kruskal-Wallis testing, and Fisher's exact or chi-squared testing, as appropriate, to determine statistical significance. RESULTS The ratio of applicants to available EM positions remained essentially stable from 2007 to 2014 but increased slightly in 2016. We observed a net upward trend in overall Step 1 and Step 2 scores for EM applicants; however, this did not outpace the national increase in Step 1 and Step 2 scores overall. The mean number of programs ranked by EM applicants increased over the years studied, from 7.8 (SD 4.2) to 9.2 (SD 5.0; p<0.001), driven predominantly by the cohort of U.S. students successful in the match. There was a difference across time intervals in the number of EM applicants with AOA membership (p=0.043), due to a drop in the number of AOA students in 2011, but no sustained statistical trend in AOA membership was identified over the seven-year period studied.
CONCLUSION NRMP data demonstrate trends among EM applicants that mirror national trends in other specialties for USMLE board scores, along with a modest increase in the number of programs ranked. AOA membership was largely stable. In these categories, EM does not appear to have become more competitive relative to other specialties or to previous years.
18
A Novel Tool for Assessment of Emergency Medicine Resident Skill in Determining Diagnosis and Management for Emergent Electrocardiograms: A Multicenter Study. J Emerg Med 2016; 51:697-704. [PMID: 27618476 DOI: 10.1016/j.jemermed.2016.06.054] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Received: 01/21/2016] [Revised: 06/17/2016] [Accepted: 06/29/2016] [Indexed: 11/16/2022]
Abstract
BACKGROUND Reading emergent electrocardiograms (ECGs) is one of the emergency physician's most crucial tasks, yet no well-validated tool exists to measure resident competence in this skill. OBJECTIVES To assess the validity of a novel tool measuring emergency medicine resident competency in interpreting, and responding to, critical ECGs, and to observe trends in this skill among resident physicians at different levels of training. METHODS This was a multicenter, prospective study of postgraduate year (PGY) 1-4 residents at five emergency medicine (EM) residency programs in the United States. An assessment tool was created that asks the physician to identify either the ECG diagnosis or the best immediate management. RESULTS One hundred thirteen EM residents from the five programs submitted completed assessment surveys: 43 PGY-1s, 33 PGY-2s, and 37 PGY-3/4s. PGY-3/4s averaged 74.6% correct (95% confidence interval [CI] 70.9-78.4) and performed significantly better than PGY-1s, who averaged 63.2% correct (95% CI 58.0-68.3). PGY-2s averaged 69.0% (95% CI 62.2-73.7). Year-to-year differences were more pronounced in management than in diagnosis. CONCLUSIONS Residency training in EM appears to be associated with improved ability to interpret "critical" ECGs as measured by our assessment tool. This provides validity evidence for the tool by reproducing a previously observed association between residency training and improved ECG interpretation. Resident skill in ECG interpretation nonetheless remains less than ideal. A tool of this sort may allow programs to assess resident performance as well as evaluate interventions designed to improve competency.
19
Faculty Prediction of In-Training Examination Scores of Emergency Medicine Residents: A Multicenter Study. J Emerg Med 2015; 49:64-9. [PMID: 25843930 DOI: 10.1016/j.jemermed.2015.01.015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Received: 09/06/2014] [Revised: 01/08/2015] [Accepted: 01/11/2015] [Indexed: 10/23/2022]
Abstract
BACKGROUND The Emergency Medicine In-Training Examination (EMITE) is one of the few validated instruments for assessing the medical knowledge of emergency medicine (EM) residents. The EMITE is administered only once annually, with results available just two months before the end of the academic year. An earlier predictor of EMITE scores would help educators institute timely remediation plans. A previous single-site study found that only 69% of faculty predictions of EMITE scores were accurate. OBJECTIVE The goal of this study was to measure the accuracy with which EM faculty at five residency programs could predict EMITE scores for their resident physicians. METHODS We asked EM faculty at five residency programs to predict the 2014 EMITE scores of all their respective resident physicians. The primary outcome was prediction accuracy, defined as the proportion of predictions within 6% of the actual scores. The secondary outcome was prediction precision, defined as the mean deviation of predictions from the actual scores. We assessed faculty background variables for correlation with the two outcomes. RESULTS One hundred eleven faculty participated in the study (response rate 68.9%). Mean prediction accuracy across all faculty was 60.0%, and mean prediction precision was 6.3%. Participants were slightly more accurate at predicting the scores of noninterns than of interns. No faculty background variable correlated with the primary or secondary outcomes. Eight participants predicted scores with high accuracy (>80%). CONCLUSIONS In this multicenter study, EM faculty were only moderately accurate at predicting resident EMITE scores. A very small subset of faculty members was highly accurate.