1
Klein MR, Loke DE, Barsuk JH, Adler MD, McGaghie WC, Salzman DH. Twelve tips for developing simulation-based mastery learning clinical skills checklists. Med Teach 2024:1-6. [PMID: 38670308] [DOI: 10.1080/0142159x.2024.2345270]
Abstract
Simulation-based mastery learning is a powerful educational paradigm that leads to high levels of performance through a combination of strict standards, deliberate practice, formative feedback, and rigorous assessment. Successful mastery learning curricula often require well-designed checklists that produce reliable data that contribute to valid decisions. The following twelve tips are intended to help educators create defensible and effective clinical skills checklists for use in mastery learning curricula. These tips focus on defining the scope of a checklist using established principles of curriculum development, crafting the checklist based on a literature review and expert input, revising and testing the checklist, and recruiting judges to set a minimum passing standard. While this article has a particular focus on mastery learning, with the exception of the tips related to standard setting, the general principles discussed apply to the development of any clinical skills checklist.
Affiliation(s)
- Matthew R Klein
- Department of Emergency Medicine, Brown University Warren Alpert Medical School, Providence, Rhode Island, USA
- Dana E Loke
- Department of Emergency Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Jeffrey H Barsuk
- Department of Medicine (Hospital Medicine) and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Mark D Adler
- Department of Pediatrics (Emergency Medicine) and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- William C McGaghie
- Department of Medical Education and Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- David H Salzman
- Department of Emergency Medicine and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
2
Svendsen CN, Glargaard GL, Lundstrøm LH, Rosenstock CV, Haug AC, Afshari A, Hesselfeldt R, Strøm C. Flexible bronchoscopic intubation through a supraglottic airway device: An evaluation of consultant anaesthetist performance. Acta Anaesthesiol Scand 2024; 68:178-187. [PMID: 37877551] [DOI: 10.1111/aas.14348]
Abstract
BACKGROUND Few clinical studies investigate technical skill performance in experienced clinicians. METHODS We undertook a prospective observational study evaluating procedural skill competence in consultant anaesthetists who performed flexible bronchoscopic intubation (FBI) under continuous ventilation through a second-generation supraglottic airway device (SAD). Airway management was recorded on video and performance evaluated independently by three external assessors. We included 100 adult patients undergoing airway management by 25 consultant anaesthetists, each performing four intubations. We used an Objective Structured Assessment of Technical Skills-inspired global rating scale as the primary outcome. Further, we assessed the overall pass rate (the proportion of cases where the average of the assessors' evaluations scored ≥3 in every domain), the progression in global rating scale score, time to intubation, self-reported procedural confidence, and pass rate from the first to the fourth airway procedure. RESULTS The overall median global rating scale score was 29.7 (interquartile range 26.0-32.7; range 16.7-37.7). At least one global rating scale domain was deemed 'not competent' (scored <3) in 30% of cases of airway management; the pass rate was therefore 70% (95% CI 60%-78%). After adjusting for multiple testing, we found a statistically significant difference between the first and fourth case of airway management in time to intubation (p = .006), but no difference in global rating scale score (p = .018), self-reported confidence before the procedure (p = .014), or pass rate (p = .109). CONCLUSION Consultant anaesthetists had a median global rating scale score of 29.7 when using a SAD as conduit for FBI.
However, despite reporting high procedural confidence, at least one global rating scale domain was deemed 'not competent' in 30% of cases, which indicates a clear potential for improvement of skill competence among professionals.
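The pass rate and confidence interval reported above can be reproduced with a standard binomial interval. A minimal sketch, assuming the common Wilson score method (the paper does not state which interval it used), for 70 passes in 100 cases:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(70, 100)  # 70 of 100 airway cases passed
print(f"{lo:.2f}-{hi:.2f}")  # → 0.60-0.78, matching the reported 95% CI of 60%-78%
```

The Wilson interval is preferred over the naive normal approximation for proportions near 0 or 1 and for modest sample sizes.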
Affiliation(s)
- Gine L Glargaard
- Department of Anaesthesiology, New North Zealand Hospital, Hillerød, Denmark
- Lars H Lundstrøm
- Department of Anaesthesiology, New North Zealand Hospital, Hillerød, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Charlotte V Rosenstock
- Department of Anaesthesiology, New North Zealand Hospital, Hillerød, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Anne C Haug
- Department of Anaesthesiology, Aarhus University Hospital, Aarhus, Denmark
- Arash Afshari
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Department of Anaesthesiology, Juliane Marie Centre, Rigshospitalet, Copenhagen, Denmark
- Rasmus Hesselfeldt
- Department of Anaesthesiology, Centre of Head and Orthopaedics, Copenhagen, Denmark
- Camilla Strøm
- Department of Anaesthesiology, Centre of Head and Orthopaedics, Copenhagen, Denmark
3
Kim E, Song S, Kim S. Development of pediatric simulation-based education - a systematic review. BMC Nurs 2023; 22:291. [PMID: 37641090] [PMCID: PMC10463597] [DOI: 10.1186/s12912-023-01458-8]
Abstract
BACKGROUND This systematic literature review explored the general characteristics, validity, and reliability of pediatric simulation-based education (P-SBE). METHODS A literature search was conducted between May 23 and 28 following the PRISMA guidelines, covering databases including MEDLINE, EMBASE, CINAHL, and the Cochrane Library. In the third selection process, the full texts of 142 studies were selected, and 98 documents were included in the final content analysis. RESULTS A total of 109 papers had been published in the ten years since 2011. Most study designs were experimental, including randomized controlled trials (76 articles). Among the simulation typologies, advanced patient simulation was the most common (92), followed by high-fidelity simulation (75). There were 29 compatibility levels and professional levels, with 59 scenarios related to emergency interventions and 19 scenarios related to communication feasibility and decision making. Regarding effect variables, skills were the most common (65 studies). However, the validity of the scenarios and effect variables was not verified in 56.1% and 67.3% of studies, respectively. CONCLUSION Based on these findings, simulation-based education (SBE) is an effective educational method that can improve the proficiency and competence of medical professionals caring for children. Learning through simulation provides an immersive environment in which learners interact with the presented patient scenario and make decisions, actively acquiring the attitudes, knowledge, and skills necessary for medical providers. Future research on SBE is expected to follow up these findings and verify its validity and reliability.
Affiliation(s)
- EunJoo Kim
- Department of Nursing, Gangneung-Wonju National University, 150, Namwon-ro, Heungop-myeon, Wonju-si, 26403, Gangwon-do, Republic of Korea
- SungSook Song
- Department of Nursing, INHA University, 313, Docbae-ro, Michuhol-gu, Incheon, 22188, Republic of Korea
- SeongKwang Kim
- Department of Nursing, Gangneung-Wonju National University, 150, Namwon-ro, Heungop-myeon, Wonju-si, 26403, Gangwon-do, Republic of Korea
4
Frithioff A, Frendø M, Foghsgaard S, Sørensen MS, Andersen SAW. Are Video Recordings Reliable for Assessing Surgical Performance? A Prospective Reliability Study Using Generalizability Theory. Simul Healthc 2023; 18:219-225. [PMID: 36260767] [DOI: 10.1097/sih.0000000000000672]
Abstract
INTRODUCTION Reliability is pivotal in surgical skills assessment. Video-based assessment can be used for objective assessment without the physical presence of assessors. However, its reliability for surgical assessments remains largely unexplored. In this study, we evaluated the reliability of video-based versus physical assessments of novices' surgical performances on human cadavers and 3D-printed models, an emerging simulation modality. METHODS Eighteen otorhinolaryngology residents performed 2 to 3 mastoidectomies on a 3D-printed model and 1 procedure on a human cadaver. Performances were rated by 3 experts evaluating the final surgical result using a well-known assessment tool, both physically (hands-on) and from video recordings. Interrater and intrarater reliability were explored using κ statistics, and the optimal number of raters and performances required in either assessment modality was determined using generalizability theory. RESULTS Interrater reliability was moderate, with a mean κ score of 0.58 (range 0.53-0.62) for video-based assessment and 0.60 (range 0.55-0.69) for physical assessment. Video-based and physical assessments were equally reliable (G coefficient 0.85 vs 0.80 for 3D-printed models and 0.86 vs 0.87 for cadaver dissections). The interaction between rater and assessment modality contributed 8.1% to 9.1% of the estimated variance. For the 3D-printed models, 2 raters evaluating 2 video-recorded performances or 3 raters physically assessing 2 performances yielded sufficient reliability for high-stakes assessment (G coefficient >0.8). CONCLUSIONS Video-based and physical assessments were equally reliable. Some raters were affected by changing from physical to video-based assessment; consequently, assessment should be either physical or video based, not a combination.
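The interrater agreement summarised by the κ scores above can be illustrated with plain Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch; the rater scores below are invented for illustration and the study used its own multi-domain assessment tool:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical final-result scores from two raters on ten performances
rater1 = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]
rater2 = [3, 4, 3, 5, 3, 4, 2, 2, 3, 4]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.59
```

Values in the 0.41-0.60 band are conventionally read as "moderate" agreement, which is how the study characterises its mean κ of 0.58-0.60.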
Affiliation(s)
- Andreas Frithioff
- From the Copenhagen Hearing and Balance Center, Department of Otorhinolaryngology-Head & Neck Surgery and Audiology (A.F., M.F., S.F., M.S., S.A.W.A.), Rigshospitalet, Copenhagen; and Copenhagen Academy for Medical Education and Simulation (CAMES; A.F., M.F., S.A.W.A.), Center for HR & Education, Copenhagen, Denmark
5
van Maarseveen OEC, Ham WHW, van Cruchten S, Duhoky R, Leenen LPH. Evaluation of validity and reliability of video analysis and live observations to assess trauma team performance. Eur J Trauma Emerg Surg 2022; 48:4797-4803. [PMID: 35817942] [DOI: 10.1007/s00068-022-02004-y]
Abstract
INTRODUCTION A trauma resuscitation is a dynamic and complex process in which failures could lead to serious adverse events. In several trauma centers, evaluation of trauma resuscitation is part of the hospital's quality assessment program. While video analysis is commonly used, some hospitals use live observations, mainly due to ethical and medicolegal concerns. The aim of this study was to compare the validity and reliability of video analysis and live observations for evaluating trauma resuscitations. METHODS In this prospective observational study, validity was assessed by comparing the adherence to 28 advanced trauma life support (ATLS) guideline-related tasks observed by video analysis with that observed live. Interobserver reliability was assessed by calculating the intraclass correlation coefficient (ICC) of observed ATLS-related tasks for live observations and video analysis. RESULTS Eleven simulated and thirteen real-life resuscitations were assessed. Overall, the percentage of observed ATLS-related tasks performed during simulated resuscitations was 10.4% (p < 0.001) higher when the same resuscitations were analysed on video rather than live. During real-life resuscitations, 8.7% (p < 0.001) more ATLS-related tasks were observed using video review. In absolute terms, a mean of 2.9 (simulated) and 2.5 (real-life) ATLS-related tasks per resuscitation that were captured by video analysis were missed by live observers. Interobserver reliability for observed ATLS-related tasks was significantly higher using video analysis than live observations, for both simulated (video analysis: ICC 0.97; 95% CI 0.97-0.98 vs live observation: ICC 0.69; 95% CI 0.57-0.78) and real-life resuscitations (video analysis: ICC 0.99; 95% CI 0.99-1.00 vs live observation: ICC 0.86; 95% CI 0.83-0.89).
CONCLUSION Video analysis of trauma resuscitations may be more valid and reliable than evaluation by live observers. These outcomes may inform the debate on justifying video review instead of live observation.
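The ICCs above quantify absolute agreement among observers. A minimal sketch of one common formulation, ICC(2,1) (two-way random effects, absolute agreement, single rater; the paper does not state which ICC model it used), computed from a subjects-by-raters matrix of invented data:

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: n_subjects x n_raters array of ratings.
    """
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_cols = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical task counts recorded by two observers across six resuscitations
scores = [[9, 8], [6, 5], [8, 8], [7, 6], [10, 9], [6, 7]]
print(round(icc2_1(scores), 2))  # → 0.83
```

An ICC near 1 means observers ranked and scored the resuscitations almost identically, which is why the video-analysis ICCs of 0.97-0.99 indicate very high interobserver reliability.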
Affiliation(s)
- Oscar E C van Maarseveen
- Department of Trauma Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
- Wietske H W Ham
- Emergency Department, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Institute of Nursing Studies, University of Applied Science, Heidelberglaan 7, 3584 CS, Utrecht, The Netherlands
- Stijn van Cruchten
- Department of Trauma Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Rauand Duhoky
- Department of Trauma Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Emergency Department, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Luke P H Leenen
- Department of Trauma Surgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
6
Cooke AS, Mullan SM, Morten C, Hockenhull J, Lee MRF, Cardenas LM, Rivero MJ. V-QBA vs. QBA—How Do Video and Live Analysis Compare for Qualitative Behaviour Assessment? Front Vet Sci 2022; 9:832239. [PMID: 35372536] [PMCID: PMC8966882] [DOI: 10.3389/fvets.2022.832239]
Abstract
Animal welfare is an inextricable part of livestock production and sustainability. Assessing welfare, beyond physical indicators of health, is challenging and often relies on qualitative techniques. Behaviour is a key component of welfare, and Qualitative Behaviour Assessment (QBA) aims to capture it by systematically scoring behaviour across specific terms. In recent years, numerous studies have conducted QBA using video footage; however, the method was not originally developed with video, and video QBA (V-QBA) requires validation. To help fill this gap, forty live QBAs were conducted by two assessors on housed beef cattle. Video was recorded over the assessment period and a second, video-based assessment was conducted. Live and video scores for each term were compared for both correlation and significant difference. Principal component analysis (PCA) was then conducted, and correlations and differences between QBA and V-QBA for the first two components were calculated. Of the 20 terms, three were removed because an overwhelming majority of their scores were zero. Of the remaining 17 terms, 12 correlated significantly, and a significant pairwise difference was found for one ("Bored"). QBA and V-QBA results correlated across both PC1 (defined as "arousal") and PC2 (defined as "mood"). Whilst there was no significant difference between the techniques for PC1, there was for PC2, with V-QBA generally yielding lower scores than QBA. Furthermore, based on PC1 and PC2, corresponding QBA and V-QBA scores were significantly closer than would be expected at random. Results showed broad agreement between QBA and V-QBA at both univariate and multivariate levels. However, the lack of absolute agreement and the muted V-QBA results for PC2 mean that caution should be taken when implementing V-QBA, and that it should ideally be treated independently from live QBA until further evidence is published.
Future research should focus on a greater variety of animals, environments, and assessors to address further validation of the method.
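The dimensionality reduction used in the study above (projecting term scores onto PC1 "arousal" and PC2 "mood") can be sketched with a plain eigendecomposition PCA. The score matrix below is invented for illustration; the study used its own 17-term QBA data:

```python
import numpy as np

# Hypothetical QBA scores: rows = assessments, columns = behaviour terms
scores = np.array([
    [70.0, 20.0, 10.0],
    [60.0, 30.0, 15.0],
    [80.0, 10.0,  5.0],
    [65.0, 25.0, 12.0],
])

x = scores - scores.mean(axis=0)            # centre each term
cov = np.cov(x, rowvar=False)               # term covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]           # reorder components by variance
pc_scores = x @ eigvecs[:, order[:2]]       # project onto PC1 and PC2
explained = eigvals[order] / eigvals.sum()  # fraction of variance per component
```

Live and video assessments projected into the same component space could then be compared with correlation and paired tests, which is the multivariate comparison the authors report.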
Affiliation(s)
- A. S. Cooke
- Department of Sustainable Agriculture Sciences, Rothamsted Research, North Wyke, Okehampton, United Kingdom
- Correspondence: A. S. Cooke
- S. M. Mullan
- UCD School of Veterinary Medicine, University College Dublin, Dublin, Ireland
- C. Morten
- Department of Sustainable Agriculture Sciences, Rothamsted Research, North Wyke, Okehampton, United Kingdom
- J. Hockenhull
- Bristol Veterinary School, University of Bristol, Bristol, United Kingdom
- M. R. F. Lee
- Harper Adams University, Edgmond, United Kingdom
- L. M. Cardenas
- Department of Sustainable Agriculture Sciences, Rothamsted Research, North Wyke, Okehampton, United Kingdom
- M. J. Rivero
- Department of Sustainable Agriculture Sciences, Rothamsted Research, North Wyke, Okehampton, United Kingdom
7
Salzman GA, El H, Chang TP. Impact of Environmental Noise Levels on Endotracheal Intubation Performance Among Pediatric Emergency Providers: A Simulation Study. Pediatr Emerg Care 2021; 37:e944-e949. [PMID: 30964852] [DOI: 10.1097/pec.0000000000001831]
Abstract
BACKGROUND The emergency department is a stressful workplace with environmental stimuli and distractions, including noise. These have potential effects on providers' perceived stress and on critical procedure performance. OBJECTIVE This study aimed to characterize the impact of environmental noise levels on the time to intubate, the quality of intubation, and the physiologic stress response in pediatric emergency department providers. METHODS This was a randomized controlled simulation-based study in which experienced pediatric providers intubated an adult manikin 3 times while experiencing 3 different ambient noise levels (60, 75, and 80 dB) in random order. Participants' times to intubate were measured, as was the endotracheal tube depth. The quality of each intubation attempt was assessed via video review against a standardized checklist. Lastly, participants' heart rates were monitored in real time to assess the physiologic stress response. Differences in performance were analyzed using a repeated-measures analysis of variance. RESULTS No significant difference was found between noise levels and time to intubate (P = 0.19), although each subsequent attempt shortened the time to intubate (P = 0.01). Physiological heart rate changes did not differ by noise level (P = 0.35). Subjectively, "time and economy of motion" and "overall performance" did not differ by noise level but did improve with each subsequent attempt (P < 0.046). CONCLUSIONS Intubation performance improved with attempt number, but no differences were seen between noise levels. This suggests that rehearsal and practice affect performance more than environmental noise levels do.
Affiliation(s)
- Garrett A Salzman
- From the Keck School of Medicine of the University of Southern California
- Hanan El
- Division of Emergency and Transport Medicine, Children's Hospital Los Angeles, Los Angeles, CA
8
Alberto EC, Jagannath S, McCusker ME, Keller S, Marsic I, Sarcevic A, O’Connell KJ, Burd RS. Classification strategies for non-routine events occurring in high-risk patient care settings: A scoping review. J Eval Clin Pract 2021; 27:464-471. [PMID: 33249690] [PMCID: PMC7961264] [DOI: 10.1111/jep.13456]
Abstract
INTRODUCTION Non-routine events (NREs) are atypical or unusual occurrences in a pre-defined process. Although some NREs in high-risk clinical settings have no adverse effects on patient care, others can potentially cause serious patient harm. A unified strategy for identifying and describing NREs in these domains would facilitate the comparison of results between studies. METHODS We conducted a literature search in PubMed, CINAHL, and EMBASE to identify studies related to NREs in high-risk domains and evaluated the methods used for event observation and description. We applied the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) taxonomy (cause, impact, domain, type, prevention, and mitigation) to the descriptions of NREs from the literature. RESULTS We selected 25 articles that met inclusion criteria for review. Real-time documentation of NREs was more common than retrospective video review. Thirteen studies used domain experts as observers, and seven studies validated observations with interrater reliability. Using the JCAHO taxonomy, "cause" was the most frequently applied classification, followed by "impact," "type," "domain," and "prevention and mitigation." CONCLUSIONS NREs are frequent in high-risk medical settings. Strengths identified in several studies included the use of multiple observers with domain expertise and validation of the event ascertainment approach using interrater reliability. By applying the JCAHO taxonomy to the current literature, we provide an example of a structured approach that can be used for future analyses of NREs.
Affiliation(s)
- Emily C. Alberto
- Division of Trauma and Burns, Children’s National Hospital, Washington, DC, USA
- Swathi Jagannath
- College of Computing and Informatics, Drexel University, Philadelphia, PA, USA
- Maureen E. McCusker
- Office of Institutional Research and Decision Support, Virginia Commonwealth University, Richmond, VA, USA
- Susan Keller
- Department of Nursing Science Professional Practice and Quality, Children’s National Hospital, Washington, DC, USA
- Ivan Marsic
- Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA
- Aleksandra Sarcevic
- College of Computing and Informatics, Drexel University, Philadelphia, PA, USA
- Karen J. O’Connell
- Division of Emergency Medicine, Children’s National Hospital, Washington, DC, USA
- Randall S. Burd
- Division of Trauma and Burns, Children’s National Hospital, Washington, DC, USA
9
Felthun JZ, Taylor S, Shulruf B, Allen DW. Assessment methods and the validity and reliability of measurement tools in online objective structured clinical examinations: a systematic scoping review. J Educ Eval Health Prof 2021; 18:11. [PMID: 34058802] [PMCID: PMC8212027] [DOI: 10.3352/jeehp.2021.18.11]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has required educators to adapt the in-person objective structured clinical examination (OSCE) to online settings in order for it to remain a critical component of the multifaceted assessment of a student’s competency. This systematic scoping review aimed to summarize the assessment methods and the validity and reliability of the measurement tools used in current online OSCE (hereafter referred to as teleOSCE) approaches. A comprehensive literature review was undertaken following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines. Articles were eligible if they reported any form of performance assessment, in any field of healthcare, delivered in an online format. Two reviewers independently screened the results and analyzed relevant studies. Eleven articles were included in the analysis. Pre-recorded videos were used in 3 studies, while observations by remote examiners through an online platform were used in 7 studies. Acceptability as perceived by students was reported in 2 studies. This review identified several insights garnered from implementing teleOSCEs, the components transferable from telemedicine, and the need for systematic research to establish the ideal teleOSCE framework. TeleOSCEs may improve the accessibility and reproducibility of clinical assessments and equip students with the requisite skills to practice telemedicine effectively in the future.
Affiliation(s)
- Silas Taylor
- Office of Medical Education, University of New South Wales, Sydney, NSW, Australia
- Boaz Shulruf
- Office of Medical Education, University of New South Wales, Sydney, NSW, Australia
- Centre for Medical and Health Sciences Education, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Digby Wigram Allen
- School of Medicine, The University of New South Wales, Kensington, NSW, Australia
- Corresponding author
10
Pirie J, St. Amant L, Glover Takahashi S. Managing residents in difficulty within CBME residency educational systems: a scoping review. BMC Med Educ 2020; 20:235. [PMID: 32703231] [PMCID: PMC7376876] [DOI: 10.1186/s12909-020-02150-0]
Abstract
BACKGROUND Best practices in managing residents in difficulty (RID) in the era of competency-based medical education (CBME) are not well described. This scoping review aimed to inventory the current literature and identify major themes in articles that address or employ CBME as part of the identification and remediation of residents in difficulty. METHODS Articles published between 2011 and 2017 were included if they were about postgraduate medical education and RID, and offered information to inform the structure and/or processes of CBME. All three reviewers performed a primary screening, followed by a secondary screening of abstracts of the chosen articles, and then a final comprehensive sub-analysis of the 11 articles identified as using a CBME framework. RESULTS Of 165 articles initially identified, 92 qualified for secondary screening; the 63 remaining articles underwent full-text abstracting. Ten themes were identified from the content analysis, with "identification of RID" (41%) and "defining and classifying deficiencies" (30%) being the most frequent. In the CBME article sub-analysis, the most frequent themes were the need to identify RID (64%), improving assessment tools (45%), and the roles and responsibilities of players involved in remediation (27%). Almost half of the CBME articles were published in 2016-2017. CONCLUSIONS Although CBME programs have been implemented for many years, articles have only recently begun specifically addressing RID within a competency framework. Much work is needed to describe the sequenced progression, tailored learning experiences, and competency-focused instruction. Finally, future research should focus on the outcomes of remediation in CBME programs.
Affiliation(s)
- Jonathan Pirie
- Department of Pediatrics, Faculty of Medicine, University of Toronto, Toronto, Canada
- Paediatric Emergency Medicine, The Hospital for Sick Children, Toronto, Canada
- Lisa St. Amant
- Postgraduate Medical Education, Faculty of Medicine, University of Toronto, Toronto, Canada
- Susan Glover Takahashi
- Department of Family and Community Medicine, Faculty of Medicine, Integrated Senior Scholar – Centre for Faculty Development and Postgraduate Medical Education, University of Toronto, Toronto, Canada
11
Ambardekar AP, Black S, Singh D, Lockman JL, Simpao AF, Schwartz AJ, Hales RL, Rodgers DL, Gurnaney HG. The impact of simulation-based medical education on resident management of emergencies in pediatric anesthesiology. Paediatr Anaesth 2019; 29:753-759. [PMID: 31034728] [DOI: 10.1111/pan.13652]
Abstract
BACKGROUND Resident education in pediatric anesthesiology is challenging. Traditional curricula for anesthesiology residency programs have included a combination of didactic lectures and mentored clinical service, which can be variable. Limited pediatric medical knowledge, technical inexperience, and heightened resident anxiety further challenge patient care. We developed a pediatric anesthesia simulation-based curriculum to address crises related to hypoxemia and dysrhythmia management in the operating room as an adjunct to traditional didactic and clinical experiences. AIMS The primary objective of this trial was to evaluate the impact of a simulation curriculum designed for anesthesiology residents on their performance during the management of crises in the pediatric operating room. A secondary objective was to compare the retention of learned knowledge by assessment at the eight-week time point of the rotation. METHODS In this prospective observational trial, 30 residents were randomized to receive simulation-based education on four perioperative crises (laryngospasm, bronchospasm, supraventricular tachycardia (SVT), and bradycardia) during the first week (Group A) or fifth week (Group B) of an eight-week rotation. Assessment sessions that included two scenarios (laryngospasm, SVT) were performed in the first, fifth, and eighth weeks of the rotation for all residents. The residents were assessed in real time and by video review using a 7-point checklist generated by a modified Delphi technique among senior pediatric anesthesiology faculty. RESULTS Residents in Group A showed improvement between the first-week and fifth-week assessments as well as between the first-week and eighth-week assessments, without decrement between the fifth-week and eighth-week assessments, for both the laryngospasm and SVT scenarios.
Residents in Group B showed improvement between the first week and eighth week assessments for both scenarios and between the fifth week and eighth week assessment for the SVT scenario. CONCLUSION This adjunctive simulation-based curriculum enhanced the learner's management of laryngospasm and SVT management and is a reasonable addition to didactic and clinical curricula for anesthesiology residents.
Collapse
Affiliation(s)
- Aditee P Ambardekar
- Department of Anesthesiology and Pain Management, University of Texas Southwestern Medical School, Dallas, Texas
| | - Stephanie Black
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Devika Singh
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Justin L Lockman
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Allan F Simpao
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Alan J Schwartz
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Roberta L Hales
- Center for Simulation, Advanced Education, and Innovation, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
| | - David L Rodgers
- Clinical Simulation Center, Penn State Hershey Medical Center, Hershey, Pennsylvania
| | - Harshad G Gurnaney
- Department of Anesthesiology and Critical Care Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
12
|
Muacevic A, Adler JR. Evaluation of a Modified Objective Structured Assessment of Technical Skills Tool for the Assessment of Pediatric Laceration Repair Performance. Cureus 2019; 11:e4056. [PMID: 31016083 PMCID: PMC6464438 DOI: 10.7759/cureus.4056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Introduction The Accreditation Council for Graduate Medical Education (ACGME) has developed milestones including procedural skills under the core competency of patient care. Progress in training is expected to be monitored by residency programs. To our knowledge, there exists no tool to evaluate pediatric resident laceration repair performance. Methods The Objective Structured Assessment of Technical Skills was adapted to evaluate resident laceration repair performance using two components: a global rating scale (GRS) and a checklist. Pediatric and family medicine residents at a tertiary care children's hospital were filmed performing a simulated laceration repair. Videos were evaluated by at least five physicians trained in laceration repair. Concordance correlation coefficients (CCC) were calculated for the GRS and checklist scores. Scores for each resident were compared across levels of training and procedural experience. Spearman's rank order correlations were calculated to compare the checklist and GRS. Results Thirty residents were filmed performing laceration repair procedures. The CCC showed fair concordance across reviewers for the checklist (0.55, 95% CI: 0.38-0.69) and the GRS (0.53, 95% CI: 0.36-0.67). There was no significant difference in scores by self-reported experience or training level. There was correlation between the median GRS and checklist scores (Spearman ρ = 0.730, p < .001). Conclusions A novel tool to evaluate resident laceration repair performance in a pediatric emergency department showed fair agreement across reviewers. The study tool is not precise enough for summative evaluation; however, it can be used to distinguish between trainees who have and have not attained competence in laceration repair for formative feedback.
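The concordance correlation coefficient (CCC) reported above measures absolute agreement between raters rather than mere correlation. A minimal pure-Python sketch of Lin's CCC (the function name and example scores are illustrative, not taken from the study):

```python
from statistics import mean

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient for two raters' paired scores."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sx = sum((a - mx) ** 2 for a in x) / n   # variance of rater 1's scores
    sy = sum((b - my) ** 2 for b in y) / n   # variance of rater 2's scores
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    # CCC penalizes both poor correlation and systematic shifts between raters
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

A constant offset between two raters lowers the CCC even when the Pearson correlation is perfect, which is why the CCC suits inter-rater agreement analyses like this one.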
Collapse
|
13
|
Ober J, Haubruck P, Nickel F, Walker T, Friedrich M, Müller-Stich BP, Schmidmaier G, Tanner MC. Development and validation of an objective assessment scale for chest tube insertion under 'direct' and 'indirect' rating. BMC MEDICAL EDUCATION 2018; 18:320. [PMID: 30587187 PMCID: PMC6307220 DOI: 10.1186/s12909-018-1430-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Accepted: 12/14/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND There is an increasing need for objective and validated educational concepts. This holds especially true for surgical procedures like chest tube insertion (CTI). Thus, we developed an instrument for objectification of learning successes: the assessment scale based on the Objective Structured Assessment of Technical Skill (OSATS) for chest tube insertion, which is evaluated in this study. The primary endpoint was the evaluation of intermethod reliability (IM). Secondary endpoints were 'indirect' interrater reliability (IR) and construct validity of the scale (CV). METHODS Every participant (N = 59) performed a CTI on a porcine thorax. Participants received three ratings (one 'direct' on site, two 'indirect' via video rating). IM compares 'direct' with 'indirect' ratings. IR was assessed between 'indirect' ratings. CV was investigated by subgroup analysis based on prior experience in CTI for 'direct' and 'indirect' rating. RESULTS We included 59 medical students in our study. IM showed moderate conformity ('direct' vs. 'indirect 1' ICC = 0.735, 95% CI: 0.554-0.843; 'direct' vs. 'indirect 2' ICC = 0.722, 95% CI 0.533-0.835) and good conformity between 'direct' vs. 'average indirect' rating (ICC = 0.764, 95% CI: 0.6-0.86). IR showed good conformity (ICC = 0.84, 95% CI: 0.707-0.91). CV was proven between subgroups in 'direct' (p = 0.037) and 'indirect' rating (p = 0.013). CONCLUSION Results for IM suggest equivalence of 'direct' and 'indirect' ratings, and both IR and CV were demonstrated in both rating methods. Thus, the assessment scale appears to be a reliable method for rating trainees' performances 'directly' as well as 'indirectly'. It may help to objectify and facilitate the assessment of chest tube insertion training.
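The ICC values above quantify conformity between rating methods. As a rough illustration (not necessarily the specific ICC model used in the study), a one-way random-effects ICC(1,1) can be computed from a complete subjects × raters score table:

```python
from statistics import mean

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for a complete subjects x raters table."""
    n = len(scores)               # number of subjects
    k = len(scores[0])            # ratings per subject
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    # between-subject and within-subject mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(scores, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical ratings across raters give 1.0; values in the study's 0.72-0.84 range indicate moderate-to-good conformity under common rules of thumb.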
Collapse
Affiliation(s)
- Julian Ober
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
| | - Patrick Haubruck
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
| | - Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
| | - Tilman Walker
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
| | - Mirco Friedrich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
| | - Beat-Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, D-69120 Heidelberg, Germany
| | - Gerhard Schmidmaier
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
| | - Michael C. Tanner
- HTRG – Heidelberg Trauma Research Group, Center for Orthopedics, Trauma Surgery and Spinal Cord Injury, Trauma and Reconstructive Surgery, Heidelberg University Hospital, Schlierbacher Landstrasse 200a, D-69118 Heidelberg, Germany
| |
Collapse
|
14
|
Lie D, Richter-Lagha R, Ma SBS. A Pilot Comparison of In-Room and Video Ratings of Team Behaviors of Students in Interprofessional Teams. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2018; 82:6487. [PMID: 30013246 PMCID: PMC6041492 DOI: 10.5688/ajpe6487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2017] [Accepted: 12/20/2017] [Indexed: 06/08/2023]
Abstract
Objective. To examine concordance between in-room and video faculty ratings of interprofessional behaviors in a standardized team objective structured clinical encounter (TOSCE). Methods. In-room and video-rated student performance scores in an interprofessional 2-station TOSCE were compared using a validated 3-point scale assessing six team competencies. Scores for each student were derived from two in-room faculty members and one faculty member who viewed video recordings of the same team encounter from equivalent visual vantage points. All faculty members received the same rigorous rater training. Paired sample t-tests were used to compare individual student scores. McNemar's test was used to compare student pass/fail rates to determine the impact of rating modality on performance scores. Results. In-room and video student scores were captured for 12 novice teams (47 students) with each team consisting of students from four professions (medicine, pharmacy, physician assistant, nursing). Video ratings were consistently lower for all competencies and significantly lower for competencies of roles and responsibilities, and conflict management. Using a criterion of an average score of 2 out of 3 for at least one station for passing, 56% of students passed when rated in-room compared with 20% when rated by video. Conclusion. In-room and video ratings are not equal. Educators should consider scoring discrepancies based on modality when assessing team behaviors.
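McNemar's test, used above to compare pass/fail rates between rating modalities, depends only on the discordant pairs (students who pass under one modality but fail under the other). A sketch of the exact two-sided version; the function name is illustrative, and the study does not state whether the exact or chi-square form was used:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value from the two discordant-pair counts:
    b = pass in-room / fail on video, c = fail in-room / pass on video."""
    n = b + c
    k = min(b, c)
    # under H0, each discordant pair falls on either side with probability 0.5
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)
```

With balanced discordance (b == c) the p-value is 1.0, while a heavily one-sided split such as 10 vs. 0 yields p ≈ 0.002, the kind of asymmetry behind the paper's finding that rating modality shifted pass rates.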
Collapse
Affiliation(s)
- Désirée Lie
- Keck School of Medicine, University of Southern California, Los Angeles, California
| | - Regina Richter-Lagha
- Keck School of Medicine, University of Southern California, Los Angeles, California
| | - Sae Byul Sarah Ma
- Keck School of Medicine, University of Southern California, Los Angeles, California
| |
Collapse
|
15
|
Isaak R, Stiegler M, Hobbs G, Martinelli SM, Zvara D, Arora H, Chen F. Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments. Cureus 2018; 10:e2267. [PMID: 29736352 PMCID: PMC5935426 DOI: 10.7759/cureus.2267] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment.
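The paired t-tests above compare each resident's real-time score with the averaged delayed score. The statistic is simply the mean within-resident difference scaled by its standard error; a small sketch with illustrative names and data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic for matched score lists (e.g., RTA vs. DA)."""
    d = [a - b for a, b in zip(x, y)]        # within-subject differences
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

A t near zero (evaluated against n - 1 degrees of freedom) supports the paper's conclusion that the two assessment modes yield comparable scores.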
Collapse
Affiliation(s)
- Robert Isaak
- Department of Anesthesiology, University of North Carolina School of Medicine
| | - Marjorie Stiegler
- Department of Anesthesiology, University of North Carolina School of Medicine
| | - Gene Hobbs
- Department of Neurosurgery, University of North Carolina School of Medicine
| | - Susan M Martinelli
- Department of Anesthesiology, University of North Carolina School of Medicine
| | - David Zvara
- Department of Anesthesiology, University of North Carolina School of Medicine
| | - Harendra Arora
- Department of Anesthesiology, University of North Carolina School of Medicine
| | - Fei Chen
- Department of Anesthesiology, University of North Carolina School of Medicine
| |
Collapse
|
16
|
Huang RJ, Limsui D, Triadafilopoulos G. Video-based performance assessment in endoscopy: Moving beyond "see one, do one, teach one"? Gastrointest Endosc 2018; 87:776-777. [PMID: 29454450 DOI: 10.1016/j.gie.2017.09.014] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/11/2017] [Accepted: 09/18/2017] [Indexed: 12/11/2022]
Affiliation(s)
- Robert J Huang
- Division of Gastroenterology and Hepatology, Stanford University School of Medicine, Stanford, California, USA
| | - David Limsui
- Division of Gastroenterology and Hepatology, Stanford University School of Medicine, Stanford, California, USA
| | - George Triadafilopoulos
- Division of Gastroenterology and Hepatology, Stanford University School of Medicine, Stanford, California, USA
| |
Collapse
|
17
|
Sparks JL, Crouch DL, Sobba K, Evans D, Zhang J, Johnson JE, Saunders I, Thomas J, Bodin S, Tonidandel A, Carter J, Westcott C, Martin RS, Hildreth A. Association of a Surgical Task During Training With Team Skill Acquisition Among Surgical Residents: The Missing Piece in Multidisciplinary Team Training. JAMA Surg 2017; 152:818-825. [PMID: 28538983 DOI: 10.1001/jamasurg.2017.1085] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Importance The human patient simulators that are currently used in multidisciplinary operating room team training scenarios cannot simulate surgical tasks because they lack a realistic surgical anatomy. Thus, they eliminate the surgeon's primary task in the operating room. The surgical trainee is presented with a significant barrier when he or she attempts to suspend disbelief and engage in the scenario. Objective To develop and test a simulation-based operating room team training strategy that challenges the communication abilities and teamwork competencies of surgeons while they are engaged in realistic operative maneuvers. Design, Setting, and Participants This pre-post educational intervention pilot study compared the gains in teamwork skills for midlevel surgical residents at Wake Forest Baptist Medical Center after they participated in a standardized multidisciplinary team training scenario with 3 possible levels of surgical realism: (1) SimMan (Laerdal) (control group, no surgical anatomy); (2) "synthetic anatomy for surgical tasks" mannequin (medium-fidelity anatomy), and (3) a patient simulated by a deceased donor (high-fidelity anatomy). Interventions Participation in the simulation scenario and the subsequent debriefing. Main Outcomes and Measures Teamwork competency was assessed using several instruments with extensive validity evidence, including the Nontechnical Skills assessment, the Trauma Management Skills scoring system, the Crisis Resource Management checklist, and a self-efficacy survey instrument. Participant satisfaction was assessed with a Likert-scale questionnaire. Results Scenario participants included midlevel surgical residents, anesthesia providers, scrub nurses, and circulating nurses. 
Statistical models showed that surgical residents exposed to medium-fidelity simulation (synthetic anatomy for surgical tasks) team training scenarios demonstrated greater gains in teamwork skills compared with control groups (SimMan) (Nontechnical Skills video score: 95% CI, 1.06-16.41; Trauma Management Skills video score: 95% CI, 0.61-2.90) and equivalent gains in teamwork skills compared with high-fidelity simulations (deceased donor) (Nontechnical Skills video score: 95% CI, -8.51 to 6.71; Trauma Management Skills video score: 95% CI, -1.70 to 0.49). Conclusions and Relevance Including a surgical task in operating room team training significantly enhanced the acquisition of teamwork skills among midlevel surgical residents. Incorporating relatively inexpensive, medium-fidelity synthetic anatomy in human patient simulators was as effective as using high-fidelity anatomies from deceased donors for promoting teamwork skills in this learning group.
Collapse
Affiliation(s)
| | | | - Kathryn Sobba
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - Douglas Evans
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | | | | | - Ian Saunders
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - John Thomas
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - Sarah Bodin
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | | | - Jeff Carter
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - Carl Westcott
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - R Shayn Martin
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| | - Amy Hildreth
- Wake Forest Baptist Health, Winston-Salem, North Carolina
| |
Collapse
|
18
|
Faudeux C, Tran A, Dupont A, Desmontils J, Montaudié I, Bréaud J, Braun M, Fournier JP, Bérard E, Berlengi N, Schweitzer C, Haas H, Caci H, Gatin A, Giovannini-Chami L. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics. J Pediatr 2017; 188:252-257.e6. [PMID: 28456389 DOI: 10.1016/j.jpeds.2017.03.055] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2016] [Revised: 02/08/2017] [Accepted: 03/24/2017] [Indexed: 11/28/2022]
Abstract
OBJECTIVES To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. STUDY DESIGN Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. We applied a modified Delphi methodology to evaluate the binary rating items. Reliability was assessed comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. RESULTS Inter-rater reliability, evaluated with Cohen kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. CONCLUSIONS RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations.
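Inter-rater reliability above is reported as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A pure-Python sketch for two raters' binary (or categorical) item ratings; the function name and data are illustrative:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    cats = set(r1) | set(r2)
    # chance agreement from each rater's marginal category frequencies
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

Kappa reaches 1.0 only for perfect agreement and drops to 0 when agreement is no better than chance; values above 0.8, like the 92.5% of RESCAPE items, are conventionally read as near-perfect (Landis and Koch).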
Collapse
Affiliation(s)
- Camille Faudeux
- Pediatric Emergency Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France; Pediatric Nephrology Department, CHU de Nice, Nice, France
| | - Antoine Tran
- Pediatric Emergency Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France; Medical Simulation Center, Faculty of Medicine of Nice, Université de Nice Sophia-Antipolis, Nice, France
| | - Audrey Dupont
- Medical Simulation Center, Faculty of Medicine of Nice, Université de Nice Sophia-Antipolis, Nice, France; Pediatric Intensive Care Unit, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France
| | - Jonathan Desmontils
- Pediatric Emergency Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France
| | - Isabelle Montaudié
- Pediatric Emergency Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France
| | - Jean Bréaud
- Medical Simulation Center, Faculty of Medicine of Nice, Université de Nice Sophia-Antipolis, Nice, France; Université de Nice-Sophia Antipolis, Nice, France
| | - Marc Braun
- University Centre for Education by Medical Simulation (CUESIM)-The Virtual Hospital of Lorraine of the Faculty of Medicine of Nancy, France; Université de Nancy, Nancy, France
| | - Jean-Paul Fournier
- Medical Simulation Center, Faculty of Medicine of Nice, Université de Nice Sophia-Antipolis, Nice, France; Université de Nice-Sophia Antipolis, Nice, France
| | - Etienne Bérard
- Pediatric Nephrology Department, CHU de Nice, Nice, France; Université de Nice-Sophia Antipolis, Nice, France
| | - Noémie Berlengi
- Pediatric Emergency Department, Hôpital d'enfants de Nancy, Nancy, France
| | - Cyril Schweitzer
- Université de Nancy, Nancy, France; Pediatric Emergency Department, Hôpital d'enfants de Nancy, Nancy, France
| | - Hervé Haas
- Pediatric Emergency Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France
| | - Hervé Caci
- Pediatric Outpatient Unit, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France
| | - Amélie Gatin
- University Centre for Education by Medical Simulation (CUESIM)-The Virtual Hospital of Lorraine of the Faculty of Medicine of Nancy, France; Pediatric Emergency Department, Hôpital d'enfants de Nancy, Nancy, France
| | - Lisa Giovannini-Chami
- Medical Simulation Center, Faculty of Medicine of Nice, Université de Nice Sophia-Antipolis, Nice, France; Pediatric Intensive Care Unit, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France; Université de Nice-Sophia Antipolis, Nice, France; Pediatric Pulmonology and Allergology Department, Hôpitaux pédiatriques de Nice CHU-Lenval, Nice, France.
| |
Collapse
|
19
|
Papanagnou D. Telesimulation: A Paradigm Shift for Simulation Education. AEM EDUCATION AND TRAINING 2017; 1:137-139. [PMID: 30051024 PMCID: PMC6001830 DOI: 10.1002/aet2.10032] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2016] [Accepted: 02/23/2017] [Indexed: 05/16/2023]
|
20
|
|
21
|
Nickel F, Hendrie JD, Stock C, Salama M, Preukschas AA, Senft JD, Kowalewski KF, Wagner M, Kenngott HG, Linke GR, Fischer L, Müller-Stich BP. Direct Observation versus Endoscopic Video Recording-Based Rating with the Objective Structured Assessment of Technical Skills for Training of Laparoscopic Cholecystectomy. Eur Surg Res 2016; 57:1-9. [DOI: 10.1159/000444449] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2015] [Accepted: 02/04/2016] [Indexed: 11/19/2022]
Abstract
Purpose: The validated Objective Structured Assessment of Technical Skills (OSATS) score is used for evaluating laparoscopic surgical performance. It consists of two subscores, a Global Rating Scale (GRS) and a Specific Technical Skills (STS) scale. The OSATS has accepted construct validity for direct observation ratings by experts to discriminate between trainees' levels of experience. Expert time is scarce. Endoscopic video recordings would facilitate assessment with the OSATS. We aimed to compare video OSATS with direct OSATS. Methods: We included 79 participants with different levels of experience [58 medical students, 15 junior residents (novices), and 6 experts]. Performance of a cadaveric porcine laparoscopic cholecystectomy (LC) was evaluated with OSATS by blinded expert raters by direct observation and then as an endoscopic video recording. Operative time was recorded. Results: Direct OSATS rating and video OSATS rating correlated significantly (ρ = 0.33, p = 0.005). Significant construct validity was found for direct OSATS in distinguishing between students or novices and experts. Students and novices were not different in direct OSATS or video OSATS. Mean operative times varied for students (73.4 ± 9.0 min), novices (65.2 ± 22.3 min), and experts (46.8 ± 19.9 min). Internal consistency was high between the GRS and STS subscores for both direct and video OSATS with Cronbach's α of 0.76 and 0.86, respectively. Video OSATS and operative time in combination was a better predictor of direct OSATS than each single parameter. Conclusion: Direct OSATS rating was better than endoscopic video rating for differentiating between students or novices and experts for LC and should remain the standard approach for the discrimination of experience levels. However, in the absence of experts for direct rating, video OSATS supplemented with operative time should be used instead of single parameters for predicting direct OSATS scores.
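Internal consistency between the GRS and STS subscores is summarized above with Cronbach's alpha. A minimal sketch computing alpha from per-item score lists (one list per item, one entry per respondent); the helper is illustrative, not the study's code:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from an items x respondents table of scores."""
    k = len(items)                      # number of items (here: subscores)
    n = len(items[0])                   # number of respondents

    def var(xs):                        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Alphas of 0.76 and 0.86, as reported, indicate acceptable-to-good consistency; 1.0 arises only when items covary perfectly.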
Collapse
|
22
|
Jordan A, Antomarchi J, Bongain A, Tran A, Delotte J. Development and validation of an objective structured assessment of technical skill tool for the practice of breech presentation delivery. Arch Gynecol Obstet 2016; 294:327-32. [DOI: 10.1007/s00404-016-4063-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2015] [Accepted: 02/24/2016] [Indexed: 10/22/2022]
|
23
|
Abstract
Objective Each year over 1.5 million health care professionals attend emergency care courses. Despite high stakes for patients and extensive resources involved, little evidence exists on the quality of assessment. The aim of this study was to evaluate the validity and reliability of commonly used formats in assessing emergency care skills. Methods Residents were assessed at the end of a 2-week emergency course; a subgroup was videotaped. Psychometric analyses were conducted to assess the validity and inter-rater reliability of the assessment instrument, which included a checklist, a 9-item competency scale and a global performance scale. Results A group of 144 residents and 12 raters participated in the study; 22 residents were videotaped and re-assessed by 8 raters. The checklists showed limited validity and poor inter-rater reliability for the dimensions “correct” and “timely” (ICC = .30 and .39, respectively). The competency scale had good construct validity, consisting of a clinical and a communication subscale. The internal consistency of the (sub)scales was high (α = .93/.91/.86). The inter-rater reliability was moderate for the clinical competency subscale (.49) and the global performance scale (.50), but poor for the communication subscale (.27). A generalizability study showed that for a reliable assessment 5–13 raters are needed when using checklists, and four when using the clinical competency scale or the global performance scale. Conclusions This study shows poor validity and reliability for assessing emergency skills with checklists but good validity and moderate reliability with clinical competency or global performance scales. Involving more raters can improve the reliability substantially. Recommendations are made to improve this high-stakes skill assessment.
Collapse
|
24
|
Antomarchi J, Delotte J, Jordan A, Tran A, Bongain A. Development and validation of an objective structured assessment of technical skill tool for the practice of vertex presentation delivery. Arch Gynecol Obstet 2014; 290:243-7. [DOI: 10.1007/s00404-014-3204-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2013] [Accepted: 02/28/2014] [Indexed: 11/25/2022]
|
25
|
Iyer MS, Santen SA, Nypaver M, Warrier K, Bradin S, Chapman R, McAllister J, Vredeveld J, House JB, Accreditation Council for Graduate Medical Education Committee, Emergency Medicine and Pediatric Residency Review Committee. Assessing the validity evidence of an objective structured assessment tool of technical skills for neonatal lumbar punctures. Acad Emerg Med 2013; 20:321-4. [PMID: 23517267 DOI: 10.1111/acem.12093] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2012] [Revised: 09/09/2012] [Accepted: 10/02/2012] [Indexed: 12/12/2022]
Abstract
BACKGROUND The lumbar puncture (LP) is a procedural competency deemed necessary by the Accreditation Council for Graduate Medical Education and the Emergency Medicine and Pediatric Residency Review Committees. The emergency department (ED) is a primary site for residents to be evaluated performing neonatal LPs. Current evaluation methods lack validity evidence as assessment tools. OBJECTIVES This was a pilot study to develop an objective structured assessment of technical skills for neonatal LP (OSATS-LP) and to document validity evidence for the instrument in regard to five sources of test validity: content, response process, relation to other variables, inter-rater reliability, and consequences of testing. METHODS Pediatric residents were videotaped in the fall of 2011 for comparison of faculty evaluation of resident performance during a neonatal LP using a video-delayed format. Residents completed a demographic experience survey evaluating relations to other variables. Content and response process validity was obtained through expert panel meetings and resulted in the following seven domains of performance for the OSATS-LP: preparation, positioning, analgesia, needle insertion, cerebrospinal fluid (CSF) collection, management of laboratory studies, and sterility. t-tests assessed significance between level of training, previous intensive care unit experience, and residents' self-assessed confidence in comparison with their total performance score. The inter-rater agreement of the OSATS-LP was obtained using the Fleiss' kappa for each domain. RESULTS Sixteen pediatric residents completed the simulation with six raters evaluating each resident (96 ratings). The domains of sterility and CSF collection had moderate statistical reliability (κ = 0.41 and 0.51, respectively). The domains of preparation, analgesia, and management of laboratories had substantial reliability (κ = 0.60, 0.62, and 0.62, respectively). The domains of positioning and needle insertion were less reliable (κ = 0.16 and 0.16, respectively). Individuals who had completed one or more rotations in the neonatal intensive care unit (NICU) had a higher total score (12.5 vs. 16.9; p < 0.01). The residents' own perception of ability to perform an LP unsupervised did not result in a higher total score. CONCLUSIONS The OSATS-LP has reasonable evidence in four of the five sources for test validity. This study serves as a launching point for using this tool in clinical environments such as the ED and, therefore, has the potential to provide real-time formative and summative feedback to improve resident skills and ultimately lead to improvements in patient care.
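With six raters per resident, the study uses Fleiss' kappa, the multi-rater generalization of Cohen's kappa. A pure-Python sketch operating on an items × categories table of rating counts (illustrative names and data):

```python
def fleiss_kappa(table):
    """Fleiss' kappa from an items x categories table of rating counts,
    with the same number of raters for every item."""
    n_items = len(table)
    n_raters = sum(table[0])
    n_cats = len(table[0])
    # overall proportion of ratings falling in each category
    p_cat = [sum(row[j] for row in table) / (n_items * n_raters)
             for j in range(n_cats)]
    # per-item agreement: share of rater pairs that agree on that item
    p_item = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
              for row in table]
    p_bar = sum(p_item) / n_items
    p_exp = sum(p * p for p in p_cat)
    return (p_bar - p_exp) / (1 - p_exp)
```

Kappas of 0.41-0.62, as reported for most domains, fall in the moderate-to-substantial range; values near 0.16, as for positioning and needle insertion, signal only slight agreement beyond chance.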
Collapse
Affiliation(s)
- Maya S. Iyer
- Department of Pediatrics; Division of Emergency Medicine; Children's Hospital of Pittsburgh; Pittsburgh PA
- Department of Pediatrics and Communicable Diseases; University of Michigan; Ann Arbor MI
| | - Sally A. Santen
- Department of Medical Education; University of Michigan; Ann Arbor MI
- Department of Emergency Medicine; University of Michigan; Ann Arbor MI
| | - Michele Nypaver
- Department of Emergency Medicine; University of Michigan; Ann Arbor MI
| | - Kavita Warrier
- Department of Pediatrics and Communicable Diseases; University of Michigan; Ann Arbor MI
| | - Stuart Bradin
- Department of Emergency Medicine; University of Michigan; Ann Arbor MI
| | - Rachel Chapman
- Division of Neonatal-Perinatal Medicine; University of Michigan; Ann Arbor MI
| | - Jennifer McAllister
- Division of Neonatal-Perinatal Medicine; University of Michigan; Ann Arbor MI
| | - Jennifer Vredeveld
- Department of Pediatrics and Communicable Diseases; University of Michigan; Ann Arbor MI
- Department of Internal Medicine; University of Michigan; Ann Arbor MI
| | - Joseph B. House
- Department of Emergency Medicine; University of Michigan; Ann Arbor MI
| | | | | |
Collapse
|