1. Seed JD, Gauthier S, Zevin B, Hall AK, Chaplin T. Simulation vs workplace-based assessment in resuscitation: a cross-specialty descriptive analysis and comparison. Can Med Educ J 2023;14:92-98. [PMID: 37465738; PMCID: PMC10351640; DOI: 10.36834/cmej.73692]
Abstract
Background Simulation-based assessment can complement workplace-based assessment of rare or difficult-to-assess Entrustable Professional Activities (EPAs). We aimed to compare the use of simulation-based assessment for resuscitation-focused EPAs in three postgraduate medical training programs and to describe faculty perceptions of simulation-based assessment. Methods EPA assessment scores and setting (simulation or workplace) were extracted from 2017-2020 for internal medicine, emergency medicine, and surgical foundations residents at the transition-to-discipline and foundations-of-discipline stages. A questionnaire was distributed to clinical competency committee members. Results Eleven percent of EPA assessments were simulation based. The proportion of simulation-based assessment did not differ between programs, but within surgical foundations it differed between the transition (38%) and foundations (4%) stages. Entrustment scores differed between settings in emergency medicine at the transition level only (simulation: 4.82 ± 0.60; workplace: 3.74 ± 0.93). Seventy percent of committee members (n = 20) completed the questionnaire. Of those who use simulation-based assessments, 45% interpret them differently than workplace-based assessments. Respondents trust simulation for high-stakes (73%) and low-stakes (100%) assessment. Conclusions The proportion of simulation-based assessment for resuscitation-focused EPAs did not differ between three postgraduate medical training programs. Interpretation of simulation-based assessment data was inconsistent between committee members. All respondents trust simulation-based assessment for low-stakes assessment, and the majority for high-stakes assessment. These findings have practical implications for the integration of simulation into programs of assessment.
Affiliation(s)
- Jeremy D Seed, Department of Emergency Medicine, Queen's University, Ontario, Canada
- Boris Zevin, Department of Surgery, Queen's University, Ontario, Canada
- Andrew K Hall, Department of Emergency Medicine, University of Ottawa, Ontario, Canada
- Timothy Chaplin, Department of Emergency Medicine, Queen's University, Ontario, Canada
2. Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM Educ Train 2021;5:e10519. [PMID: 34041428; PMCID: PMC8138102; DOI: 10.1002/aet2.10519]
Abstract
Objectives Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the extent of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. Methods We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual-extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. Results We identified 38 publications comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), the Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most studies were performed at a single institution, and limited validity or reliability assessments were reported. Conclusions The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb, Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan, Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney, Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan, Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
3. Relationship between ratings of performance in the simulated and workplace environments among emergency medicine residents. Can J Emerg Med 2020;22:811-818. [DOI: 10.1017/cem.2020.388]
Abstract
Objectives The Emergency Medicine (EM) Specialty Committee of the Royal College of Physicians and Surgeons of Canada (RCPSC) specifies that resuscitation entrustable professional activities (EPAs) can be assessed in the workplace and in simulated environments. However, limited validity evidence for these assessments exists in either setting. We sought to determine whether EPA ratings improve over time and whether an association exists between ratings in the workplace v. the simulation environment. Methods All Foundations EPA1 (F1) assessments were collected for first-year residents (n = 9) in our program during the 2018-2019 academic year. This EPA focuses on initiating and assisting in the resuscitation of critically ill patients. EPA ratings obtained in the workplace and simulation environments were compared using Lin's concordance correlation coefficient (CCC). To determine whether ratings in the two environments differed as residents progressed through training, a within-subjects analysis of variance was conducted with training environment and month as independent variables. Results We collected 104 workplace and 36 simulation assessments. No correlation was observed between mean EPA ratings in the two environments (CCC(8) = -0.01; p = 0.93). Ratings in both settings improved significantly over time (F(2,16) = 18.8; p < 0.001; η² = 0.70), from 2.9 ± 1.2 in months 1-4 to 3.5 ± 0.2 in months 9-12. Workplace ratings (3.4 ± 0.1) were consistently higher than simulation ratings (2.9 ± 0.2) (F(2,16) = 7.2; p = 0.028; η² = 0.47). Conclusions No correlation was observed between EPA F1 ratings in the workplace v. simulation environments. Further studies are needed to clarify the conflicting results of our study with others and to build an evidence base for the validity of EPA assessments in simulated and workplace environments.
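The concordance analysis in this entry relies on Lin's concordance correlation coefficient, which, unlike Pearson's r, penalizes systematic shifts between two sets of paired ratings as well as poor correlation. As a minimal illustrative sketch (not the study's code; the `lins_ccc` helper and the sample ratings below are invented for illustration), the population-moment form of Lin's CCC can be computed as:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired ratings.

    ccc = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2),
    using population (1/n) moments, per Lin (1989).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

print(lins_ccc([1, 2, 3], [1, 2, 3]))  # identical ratings -> 1.0
print(lins_ccc([1, 2, 3], [2, 3, 4]))  # same shape, shifted by 1 -> 4/7
```

The `(mean_x - mean_y)**2` term in the denominator pulls the coefficient toward zero when one setting's ratings are systematically higher, even if the two sets correlate perfectly, which is why CCC is a stricter agreement measure than Pearson's r for comparing workplace and simulation ratings.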
4. Weersink K, Hall AK, Rich J, Szulewski A, Dagnone JD. Simulation versus real-world performance: a direct comparison of emergency medicine resident resuscitation entrustment scoring. Adv Simul (Lond) 2019;4:9. [PMID: 31061721; PMCID: PMC6492388; DOI: 10.1186/s41077-019-0099-4]
Abstract
Background Simulation is increasingly being used in postgraduate medical education as an opportunity for competency assessment. However, there is limited direct evidence supporting performance in the simulation lab as a surrogate of workplace-based clinical performance for non-procedural tasks such as resuscitation in the emergency department (ED). We sought to directly compare entrustment scoring of resident performance in the simulation environment with clinical performance in the ED. Methods The resuscitation assessment tool (RAT) was derived from the previously implemented and studied Queen's simulation assessment tool (QSAT) via a modified expert review process. The RAT uses an anchored global assessment scale to generate an entrustment score and narrative comments. Emergency medicine (EM) residents were assessed with the RAT on cases in simulation-based examinations and during resuscitation cases in the ED from July 2016 to June 2017. Mean resident entrustment scores in the two settings were compared using Pearson's correlation coefficient. Inductive thematic analysis of written commentary was conducted to compare workplace-based with simulation-based feedback. Results There was a statistically significant, moderate positive correlation between mean entrustment scores in the simulated and workplace-based settings (r = 0.630, n = 17, p < 0.01). Qualitative analysis demonstrated that overall management and leadership themes were more common in workplace narratives, while more specific task-based feedback predominated in simulation-based assessments. Narratives in both settings frequently commented on communication skills.
Conclusions In this single-center study with a limited sample size, entrustment scoring of residents in simulation settings showed a moderate positive correlation with assessment of resuscitation competence in the workplace. This suggests that resuscitation performance in simulation settings may be an indicator of competence in the clinical setting, although multiple factors contribute to this complicated and imperfect relationship. It is imperative to consider narrative comments as support for the rationale behind numerical entrustment scores in both settings, and to include both simulation- and workplace-based assessment in high-stakes progression decisions.
Affiliation(s)
- Kristen Weersink, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L2V7, Canada
- Andrew K Hall, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L2V7, Canada
- Jessica Rich, Faculty of Education, Queen's University, Kingston, ON, Canada
- Adam Szulewski, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L2V7, Canada
- J Damon Dagnone, Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center, c/o 76 Stuart St, Kingston, ON K7L2V7, Canada
5. Jong M, Elliott N, Nguyen M, Goyke T, Johnson S, Cook M, Lindauer L, Best K, Gernerd D, Morolla L, Matuzsan Z, Kane B. Assessment of Emergency Medicine Resident Performance in an Adult Simulation Using a Multisource Feedback Approach. West J Emerg Med 2018;20:64-70. [PMID: 30643603; PMCID: PMC6324708; DOI: 10.5811/westjem.2018.12.39844]
Abstract
Introduction The Accreditation Council for Graduate Medical Education (ACGME) specifically notes multisource feedback (MSF) as a recommended means of resident assessment in the emergency medicine (EM) Milestones. High-fidelity simulation is an environment wherein residents can receive MSF from various types of healthcare professionals. The Queen's Simulation Assessment Tool (QSAT) has previously been validated for faculty to assess residents in five categories: assessment, diagnostic actions, therapeutic actions, interpersonal communication, and overall assessment. We sought to determine whether the QSAT could be used to provide MSF using a standardized simulation case. Methods After institutional review board approval, residents from a dual ACGME/osteopathic-approved postgraduate year (PGY) 1-4 EM residency were prospectively consented for participation. We developed a standardized resuscitation-after-overdose case with specific 1-5 Likert anchors matching those used by the QSAT. A PGY 2-4 resident participated in the role of team leader and completed a QSAT as a self-assessment. The team consisted of a PGY-1 peer, an emergency medical services (EMS) provider, and a nurse. Two core faculty were present to administer the simulation case and to assess. Demographics were gathered from all participants completing QSATs. We analyzed QSATs by category and by cumulative score. Hypothesis testing was performed using intraclass correlation coefficients (ICC) with 95% confidence intervals; interpretation of ICC results was based on previously published definitions. Results We enrolled 34 team leader residents along with 34 nurses. A single PGY-1, a single EMS provider, and two faculty were also enrolled. Faculty provided higher cumulative QSAT scores than the other sources of MSF. QSAT scores did not increase with team leader PGY level. The ICC for inter-rater reliability across all sources of MSF was 0.754 (0.572-0.867); removing the self-evaluation scores increased inter-rater reliability to 0.838 (0.733-0.910). There was less agreement between faculty and nurse evaluations than between faculty and the EMS or peer evaluations. Conclusion In this single-site cohort using an internally developed simulation case, the QSAT provided MSF with excellent reliability. Self-assessment decreased the reliability of the MSF, and our data suggest it should not be a component of MSF. Use of the QSAT for MSF may be considered as a source of data for clinical competency committees.
Affiliation(s)
- Michael Jong, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Nicole Elliott, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida
- Michael Nguyen, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida
- Terrence Goyke, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida
- Steven Johnson, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida
- Matthew Cook, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida
- Lisa Lindauer, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Katie Best, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Douglas Gernerd, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Louis Morolla, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Zachary Matuzsan, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania
- Bryan Kane, Lehigh Valley Health Network, Department of Emergency and Hospital Medicine, Bethlehem, Pennsylvania; University of South Florida Morsani College of Medicine, Tampa, Florida