1
Gershov S, Mahameed F, Raz A, Laufer S. Inherent bias in simulation-based assessment. Br J Anaesth 2025; 134:1531-1533. PMID: 39799054. DOI: 10.1016/j.bja.2024.10.044.
Affiliation(s)
- Sapir Gershov
- Technion Autonomous Systems Program, Technion - Israel Institute of Technology, Haifa, Israel
- Fadi Mahameed
- Rambam Health Care Campus, Haifa, Israel; Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Haifa, Israel
- Aeyal Raz
- Rambam Health Care Campus, Haifa, Israel; Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Shlomi Laufer
- Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Haifa, Israel
2
Shimizu I, Tanaka K, Mori JI, Yamauchi A, Kato S, Masuda Y, Nakazawa Y, Kanno H. Objective Structured Clinical Examination to Assess Patient Safety Competencies of Japanese Medical Students: Development and Validation Argument. Cureus 2024; 16:e73969. PMID: 39563687. PMCID: PMC11574574. DOI: 10.7759/cureus.73969.
Abstract
Background: Patient safety has received growing emphasis in undergraduate medical curricula over the past two decades, and appropriate assessment of patient safety competencies at graduation is crucial in competency-based medical education. However, no valid method for assessing patient safety competencies exists, because current assessment methods in medical education focus little on behavior. The objective structured clinical examination (OSCE) is a method of assessing clinical performance and has been implemented by medical schools in Japan for summative assessment at graduation. However, stations with sufficient validity to assess patient safety competencies have not yet been developed. This study therefore aimed to evaluate, under a contemporary validity framework, an OSCE station for assessing the patient safety competencies that students are expected to achieve at graduation from medical schools in Japan.

Methods: A validity argument was conducted using Messick's validity framework, which includes content, response process, relations to other variables, internal structure, and consequences. First, we applied a modified Delphi study to develop OSCE stations for assessing patient safety competencies based on the national model core curriculum at graduation. The panel survey recruited members with expertise in clinical education and patient safety. The draft stations simulated various situations associated with patient safety. Final-year medical students then took the OSCE. We analyzed the results of the OSCE, compared the scores with those of the clinical reasoning examination, and evaluated its reliability.

Results: Of 30 panelists, 22 (73.3%) fully participated in the Delphi rounds. After two Delphi rounds, we established four stations to assess patient safety competencies. They met the content dimension of the validity framework. The OSCE results showed low correlation with clinical reasoning, suggesting that patient safety competencies cannot be inferred from clinical reasoning. Each station had satisfactory reliability. The entire process minimized possible assessment bias.

Conclusions: The OSCE scenarios designed through the modified Delphi study met the five criteria of Messick's validity framework. The results show that this is a valid strategy for assessing patient safety competencies at graduation.
Affiliation(s)
- Ikuo Shimizu
- Medical Education, Chiba University Graduate School of Medicine, Chiba, JPN
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
- Quality and Patient Safety, Chiba University Hospital, Chiba, JPN
- Kazumi Tanaka
- Healthcare Quality and Safety, Gunma University Graduate School of Medicine, Maebashi, JPN
- Jun-Ichirou Mori
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
- Aiga Yamauchi
- Academic Affairs Office, Shinshu University School of Medicine, Matsumoto, JPN
- Sawako Kato
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
- Yuichi Masuda
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
- Yuichi Nakazawa
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
- Hiroyuki Kanno
- Center for Medical Education and Clinical Training, Shinshu University School of Medicine, Matsumoto, JPN
3
Buléon C, Mattatia L, Minehart RD, Rudolph JW, Lois FJ, Guillouet E, Philippon AL, Brissaud O, Lefevre-Scelles A, Benhamou D, Lecomte F, group TSAWS, Bellot A, Crublé I, Philippot G, Vanderlinden T, Batrancourt S, Boithias-Guerot C, Bréaud J, de Vries P, Sibert L, Sécheresse T, Boulant V, Delamarre L, Grillet L, Jund M, Mathurin C, Berthod J, Debien B, Gacia O, Der Sahakian G, Boet S, Oriot D, Chabot JM. Simulation-based summative assessment in healthcare: an overview of key principles for practice. Adv Simul (Lond) 2022; 7:42. PMID: 36578052. PMCID: PMC9795938. DOI: 10.1186/s41077-022-00238-9.
Abstract
BACKGROUND: Healthcare curricula need summative assessments relevant to and representative of clinical situations to best select and train learners. Simulation provides multiple benefits, with a growing literature base proving its utility for training in a formative context. Advancing to the next step, the use of simulation for summative assessment, requires rigorous and evidence-based development, because any summative assessment is high stakes for participants, trainers, and programs. The first step of this process is to identify the baseline from which we can start.

METHODS: First, using a modified nominal group technique, a task force of 34 panelists defined topics to clarify the why, how, what, when, and who of simulation-based summative assessment (SBSA). Second, each topic was explored by a group of panelists using a state-of-the-art literature review technique, with a snowball method to identify further references. Our goal was to identify current knowledge and potential recommendations for future directions. Results were cross-checked among groups and reviewed by an independent expert committee.

RESULTS: Seven topics were selected by the task force: "What can be assessed in simulation?", "Assessment tools for SBSA", "Consequences of undergoing the SBSA process", "Scenarios for SBSA", "Debriefing, video, and research for SBSA", "Trainers for SBSA", and "Implementation of SBSA in healthcare". Together, these seven explorations provide an overview of what is known and can be done with relative certainty, and what is unknown and probably needs further investigation. Based on this work, we highlighted the trustworthiness of different summative assessment-related conclusions, the remaining important problems and questions, and their consequences for participants and institutions in how SBSA is conducted.

CONCLUSION: Our results identified, among the seven topics, one area with robust evidence in the literature ("What can be assessed in simulation?"), three areas with evidence that requires guidance by expert opinion ("Assessment tools for SBSA", "Scenarios for SBSA", "Implementation of SBSA in healthcare"), and three areas with weak or emerging evidence ("Consequences of undergoing the SBSA process", "Debriefing for SBSA", "Trainers for SBSA"). Using SBSA holds much promise, with increasing demand for this application. Given the important stakes involved, it must be rigorously conducted and supervised. Guidelines for good practice should be formalized to help with conduct and implementation. We believe this baseline can direct future investigation and the development of guidelines.
Affiliation(s)
- Clément Buléon
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France; Center for Medical Simulation, Boston, MA, USA
- Laurent Mattatia
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Nîmes University Hospital, Nîmes, France
- Rebecca D. Minehart
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Jenny W. Rudolph
- Center for Medical Simulation, Boston, MA, USA; Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Fernande J. Lois
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Liège University Hospital, Liège, Belgium
- Erwan Guillouet
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Caen Normandy University Hospital, Caen, France; Medical School, University of Caen Normandy, Caen, France
- Anne-Laure Philippon
- Department of Emergency Medicine, Pitié Salpêtrière University Hospital, APHP, Paris, France
- Olivier Brissaud
- Department of Pediatric Intensive Care, Pellegrin University Hospital, Bordeaux, France
- Antoine Lefevre-Scelles
- Department of Emergency Medicine, Rouen University Hospital, Rouen, France
- Dan Benhamou
- Department of Anesthesiology, Intensive Care and Perioperative Medicine, Kremlin Bicêtre University Hospital, APHP, Paris, France
- François Lecomte
- Department of Emergency Medicine, Cochin University Hospital, APHP, Paris, France
4
Jackson P, Siddharthan T, Cordoba Torres IT, Green BA, Policard CJP, Degraff J, Padalkar R, Logothetis KB, Gold JA, Fort AC. Developing and Implementing Noninvasive Ventilator Training in Haiti during the COVID-19 Pandemic. ATS Sch 2022; 3:112-124. PMID: 35634008. PMCID: PMC9130714. DOI: 10.34197/ats-scholar.2021-0070oc.
Abstract
Background: Noninvasive ventilation (NIV) is an important component of respiratory therapy for a range of cardiopulmonary conditions. The World Health Organization recommends NIV use to decrease the use of intensive care unit resources and improve outcomes among patients with respiratory failure during periods of high patient capacity from coronavirus disease (COVID-19). However, healthcare providers in many low- and middle-income countries, including Haiti, do not have experience with NIV. We conducted NIV training and evaluation in Port-au-Prince, Haiti.

Objectives: To design and implement a multimodal NIV training program in Haiti that would improve confidence and knowledge of NIV use for respiratory failure.

Methods: In January 2021, we conducted a 3-day multimodal NIV training consisting of didactic sessions, team-based learning, and multistation simulation for 36 Haitian healthcare workers. The course included 5 didactic sessions and 10 problem-based and simulation sessions. All course material was independently created by the study team on the basis of Accreditation Council for Continuing Medical Education-approved content and review of available evidence. All participants completed pre- and post-training knowledge-based examinations and confidence surveys, which used a 5-point Likert scale.

Results: A total of 36 participants were included in the training and analysis; mean age was 39.94 years (standard deviation [SD] = 9.45), and participants had an average of 14.32 years (SD = 1.21) of clinical experience. Most trainees (75%, n = 27) were physicians. Other specialties included nursing (19%, n = 7), nurse anesthesia (3%, n = 1), and respiratory therapy (3%, n = 1). Fifty percent (n = 18) of participants stated they had previous experience with NIV. The majority of trainees (77%) had an increase in confidence survey score; the mean confidence survey score increased significantly after training, from 2.75 (SD = 0.77) to 3.70 (SD = 0.85) (P < 0.05). The mean knowledge examination score increased by 39.63% (SD = 15.99%) after training, which was also significant (P < 0.001).

Conclusion: This multimodal NIV training, which included didactic, simulation, and team-based learning, was feasible and resulted in significant increases in trainee confidence and knowledge of NIV. This curriculum has the potential to provide NIV training to numerous low- and middle-income countries as they manage the ongoing COVID-19 pandemic and the rising burden of noncommunicable disease. Further research is necessary to ensure the sustainability of these improvements and adaptability to other low- and middle-income settings.
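The significance of a pre/post change like the one reported above is typically assessed with a paired test on per-trainee differences. A minimal, illustrative paired t statistic in Python, using made-up scores rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean of per-participant differences divided
    by its standard error. Degrees of freedom = len(pre) - 1."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical pre/post confidence scores for three trainees
t = paired_t([1, 2, 3], [2, 3, 5])  # ≈ 4.0
```

With n participants the statistic has n - 1 degrees of freedom; a statistics package would supply the corresponding P value.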
Affiliation(s)
- Peter Jackson
- Division of Pulmonary and Critical Care, Virginia Commonwealth University, Richmond, Virginia
- Barth A. Green
- Department of Neurosurgery, University of Miami Miller School of Medicine, Miami, Florida
- Roma Padalkar
- Rowan University School of Osteopathic Medicine, Stratford, New Jersey
- Kathryn B. Logothetis
- Division of Pulmonary and Critical Care, Virginia Commonwealth University, Richmond, Virginia
- Jeffrey A. Gold
- Department of Pulmonary and Critical Care, Oregon Health & Science University, Portland, Oregon
- Alexander C. Fort
- Department of Anesthesiology, Perioperative Medicine and Pain Management
5
Sidi A, Gravenstein N, Vasilopoulos T, Lampotang S. Simulation-Based Assessment Identifies Longitudinal Changes in Cognitive Skills in an Anesthesiology Residency Training Program. J Patient Saf 2021; 17:e490-e496. PMID: 28582277. DOI: 10.1097/pts.0000000000000392.
Abstract
OBJECTIVES: We describe observed improvements in nontechnical, or "higher-order," deficiencies and cognitive performance skills in an anesthesia residency cohort over a 1-year interval. Our main objectives were to evaluate higher-order cognitive performance and to demonstrate that simulation can effectively serve as an assessment of cognitive skills and can help detect higher-order deficiencies, which are not as well identified by more traditional assessment tools. We hypothesized that simulation can identify longitudinal changes in cognitive skills and that cognitive performance deficiencies can then be remediated over time.

METHODS: We used 50 scenarios to evaluate 35 residents during 2 consecutive years; 18 of those 35 residents were evaluated in both years (postgraduate years [PGY] 3 and then 4) in the same or similar scenarios. Individual basic knowledge and cognitive performance during simulation-based scenarios were assessed using a 20- to 27-item scenario-specific checklist. Items were labeled as basic knowledge/technical (lower-order cognition) or advanced cognitive/nontechnical (higher-order cognition). Identical or similar scenarios were repeated annually by a subset of 18 residents during 2 successive academic years. For every scenario and item, we calculated the group error rate per scenario (frequency) and individual (resident) item success. Grouped individual success rates were calculated as mean (SD); item success grades and group error rates were calculated and presented as proportions. For all analyses, the α level was 0.05.

RESULTS: Overall, PGY4 residents' error rates were lower and success rates higher for cognitive items than for technical items in the operating room and resuscitation domains. In all 3 clinical domains, the cognitive error rate of PGY4 residents was fairly low (0.00-0.22) and their cognitive success rate was high (0.83-1.00) and significantly better than in previous annual assessments (P < 0.05). Overall, there was an annual decrease in error rates for 2 years, primarily driven by decreases in cognitive errors. The most commonly observed cognitive error types remained anchoring, availability bias, premature closure, and confirmation bias.

CONCLUSIONS: Simulation-based assessments can highlight cognitive performance areas of relative strength, weakness, and progress in a resident or resident cohort. We believe that they can therefore be used to inform curriculum development, including activities that require higher-level cognitive processing.
6
The evolution of a national, advanced airway management simulation-based course for anaesthesia trainees. Eur J Anaesthesiol 2021; 38:138-145. PMID: 32675701. DOI: 10.1097/eja.0000000000001268.
Abstract
BACKGROUND: Needs analyses involving patient complaints and anaesthesiologists' confidence levels in difficult airway management procedures in Denmark have shown a need for training in both technical and non-technical skills.

OBJECTIVE: To provide an example of how to design, implement and evaluate a national simulation-based course in advanced airway management for trainees within a compulsory, national specialist training programme.

DESIGN AND RESULTS: A national working group, established by the Danish Society for Anaesthesiology and Intensive Care Medicine, designed a standardised simulation course in advanced airway management for anaesthesiology trainees based on the six-step approach. Learning objectives were grounded in the curriculum and in the needs analyses, covering knowledge, skills and attitudes, including the non-technical (cognitive and social) skills necessary for safe and effective performance. A total of 28 courses for 800 trainees have been conducted. Evaluations have been positive, and pre- and post-tests have indicated a positive effect on learning.

CONCLUSION: The course was successfully designed and implemented within the national training programme for trainees. Important factors for success were the involvement of all stakeholders, thorough planning, selection of the most important learning objectives, the use of interactive educational methods and the training of facilitators.
7
Alsulimani LK, Al-Otaiby FM, Alnofaiey YH, Binobaid FA, Jafarah LM, Khalil DA. Attitudes Towards Introduction of Multiple Modalities of Simulation in Objective Structured Clinical Examination (OSCE) of Emergency Medicine (EM) Final Board Examination: A Cross-Sectional Study. Open Access Emerg Med 2020; 12:441-449. PMID: 33299360. PMCID: PMC7720994. DOI: 10.2147/oaem.s275764.
Abstract
Purpose: The Objective Structured Clinical Examination (OSCE) is the current modality of choice for evaluating the practical skills of graduating emergency medicine residents in the final Saudi board examination. This study aims to evaluate the attitudes of both residents and faculty towards utilizing multiple modalities of simulation in a high-stakes emergency medicine (EM) examination, with the goal of proposing a method to improve this examination process.

Participants and Methods: The data were obtained using a cross-sectional survey questionnaire distributed to 141 participants, including both EM residents and instructors in the Saudi Board of Emergency Medicine. An online survey tool was used. The data were collected and subsequently analyzed to gauge the general and specific attitudes of both residents and instructors.

Results: Of the 141 participants, 136 provided complete responses; almost half were residents from all years, and the other half were primarily instructors (registrars, senior registrars, or consultants). Most participants from both groups (70% of residents and 86% of instructors) would like to see simulation incorporated into the final EM board OSCEs. Most participants (78%), however, had no experience with using multiple modalities of simulation in OSCEs. Overall, the majority (74.82%) expressed the belief that simulation-based OSCEs would improve the assessment of EM residents' competencies. The modalities that received the most support were part-task trainers and hybrid simulation (70.71% and 70%, respectively).

Conclusion: Both residents and instructors are largely willing to see multimodality simulation incorporated into the final board examinations. Stakeholders should interpret this consensus as an impetus to proceed with such an implementation, and input from both groups should be considered when planning this change to a high-stakes exam.
Affiliation(s)
- Loui K Alsulimani
- Department of Emergency Medicine, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia; Department of Medical Education, King Abdulaziz University, Jeddah, Saudi Arabia; Clinical Skills and Simulation Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Fayhan M Al-Otaiby
- Department of Emergency Medicine, International Medical Center, Jeddah, Saudi Arabia
- Yasser H Alnofaiey
- Department of Emergency Medicine, Faculty of Medicine, Taif University, Taif, Saudi Arabia
- Fares A Binobaid
- Department of Emergency Medicine, King Abdulaziz Hospital, Makkah, Saudi Arabia
- Linda M Jafarah
- Department of Emergency Medicine, King Fahad Medical City, Riyadh, Saudi Arabia
- Daniyah A Khalil
- Primary Healthcare Center, King Fahad General Hospital, Jeddah, Saudi Arabia
8
Tanaka P, Park YS, Liu L, Varner C, Kumar AH, Sandhu C, Yumul R, McCartney KT, Spilka J, Macario A. Assessment Scores of a Mock Objective Structured Clinical Examination Administered to 99 Anesthesiology Residents at 8 Institutions. Anesth Analg 2020; 131:613-621. PMID: 32149757. DOI: 10.1213/ane.0000000000004705.
Abstract
BACKGROUND: Objective Structured Clinical Examinations (OSCEs) are used in a variety of high-stakes examinations. The primary goal of this study was to examine factors influencing the variability of assessment scores for mock OSCEs administered to senior anesthesiology residents.

METHODS: Using the American Board of Anesthesiology (ABA) OSCE Content Outline as a blueprint, scenarios were developed for 4 of the ABA skill types: (1) informed consent, (2) treatment options, (3) interpretation of echocardiograms, and (4) application of ultrasonography. Eight residency programs administered these 4 OSCEs to CA3 residents during a 1-day formative session. A global score and checklist items were used for scoring by faculty raters. We used a statistical framework called generalizability theory, or G-theory, to estimate the sources of variation (or facets) and the reliability (ie, reproducibility) of the OSCE performance scores. Reliability provides a metric of the consistency or reproducibility of learner performance as measured through the assessment.

RESULTS: Of the 115 total eligible senior residents, 99 participated in the OSCE; the remaining residents were unavailable. Overall, residents correctly performed 84% (standard deviation [SD] 16%, range 38%-100%) of the 36 total checklist items for the 4 OSCEs. On global scoring, the pass rate was 71% for the informed consent station, 97% for treatment options, 66% for interpretation of echocardiograms, and 72% for application of ultrasound. The estimate of reliability expressing the reproducibility of examinee rankings equaled 0.56 (95% confidence interval [CI], 0.49-0.63), which is reasonable for normative assessments that aim to compare a resident's performance relative to other residents, because over half of the observed variation in total scores was due to variation in examinee ability. A Phi coefficient reliability of 0.42 (95% CI, 0.35-0.50) indicates that criterion-based judgments (eg, pass-fail status) cannot be made. Phi expresses the absolute consistency of a score and reflects how closely the assessment is likely to reproduce an examinee's final score. Overall, the greatest variance (14.6%) was due to the person-by-item-by-station interaction (3-way interaction), indicating that specific residents did well on some items but poorly on others. The variance due to residency programs across case items was also high (11.2%), suggesting moderate variability in resident performance during the OSCEs among residency programs.

CONCLUSIONS: Since many residency programs aim to develop their own mock OSCEs, this study provides evidence that it is possible for programs to create a meaningful mock OSCE experience that is statistically reliable for separating resident performance.
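The two reliability figures reported above correspond to G-theory's relative (generalizability) and absolute (Phi) coefficients. A minimal sketch for a simple persons-crossed-with-items design, using made-up variance components rather than this study's estimates:

```python
def g_and_phi(var_person, var_item, var_interaction, n_items):
    """Relative (G) and absolute (Phi) reliability for a persons x items
    design. Relative error ignores the item main effect (it shifts all
    examinees equally, so rankings are unaffected); absolute error
    includes it, so Phi <= G, as in the 0.42 vs 0.56 pattern above."""
    rel_error = var_interaction / n_items
    abs_error = (var_item + var_interaction) / n_items
    g = var_person / (var_person + rel_error)
    phi = var_person / (var_person + abs_error)
    return g, phi

# Hypothetical variance components
g, phi = g_and_phi(var_person=0.5, var_item=0.2, var_interaction=1.0, n_items=10)
```

Because absolute (criterion-referenced) decisions must absorb the item main effect as error, Phi demands more of an assessment than relative ranking does, which is why a G of 0.56 can coexist with a Phi too low for pass-fail judgments.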
Affiliation(s)
- Pedro Tanaka
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
- Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- Linda Liu
- Department of Anesthesia and Perioperative Care, University of California San Francisco, San Francisco, California
- Chelsia Varner
- Department of Anesthesiology, University of Southern California, Los Angeles, California
- Amanda H Kumar
- Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina
- Charandip Sandhu
- Department of Anesthesiology, University of California Davis, Davis, California
- Roya Yumul
- Department of Anesthesiology, Cedars Sinai Medical Center, Los Angeles, California
- Kate Tobin McCartney
- Department of Anesthesiology, University of California Irvine, Irvine, California
- Jared Spilka
- Naval Medical Center San Diego, San Diego, California
- Alex Macario
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California
9
Warner DO, Isaak RS, Peterson-Layne C, Lien CA, Sun H, Menzies AO, Cole DJ, Dainer RJ, Fahy BG, Macario A, Suresh S, Harman AE. Development of an Objective Structured Clinical Examination as a Component of Assessment for Initial Board Certification in Anesthesiology. Anesth Analg 2020; 130:258-264. PMID: 31688077. DOI: 10.1213/ane.0000000000004496.
Abstract
With its first administration of an Objective Structured Clinical Examination (OSCE) in 2018, the American Board of Anesthesiology (ABA) became the first US medical specialty certifying board to incorporate this type of assessment into its high-stakes certification examination system. The fundamental rationale for the ABA's introduction of the OSCE is to include an assessment that allows candidates for board certification to demonstrate what they actually "do" in domains relevant to clinical practice. Inherent in this rationale is that the OSCE will capture competencies not well assessed in the current written and oral examinations, competencies that will allow the ABA to judge more soundly whether a candidate meets the standards expected for board certification. This special article describes the ABA's journey from initial conceptualization through the first administration of the OSCE, including the format of the OSCE, the process for scenario development, the standardized patient program that supports OSCE administration, examiner training, scoring, and future assessment of the reliability, validity, and impact of the OSCE. This information will be beneficial both to those involved in the initial certification process, such as residency graduate candidates and program directors, and to others contemplating the use of high-stakes summative OSCE assessments.
Affiliation(s)
- David O Warner
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, Minnesota
- Robert S Isaak
- Department of Anesthesiology, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Cynthia A Lien
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Huaping Sun
- The American Board of Anesthesiology, Raleigh, North Carolina
- Anna O Menzies
- The American Board of Anesthesiology, Raleigh, North Carolina
- Daniel J Cole
- Department of Anesthesiology and Perioperative Medicine, University of California, Los Angeles, Los Angeles, California
- Rupa J Dainer
- Department of Ambulatory Surgery, Pediatric Specialists of Virginia, Fairfax, Virginia
- Brenda G Fahy
- Department of Anesthesiology, University of Florida, Gainesville, Florida
- Alex Macario
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University, Stanford, California
- Santhanam Suresh
- Department of Pediatric Anesthesiology, Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University, Chicago, Illinois
- Ann E Harman
- The American Board of Anesthesiology, Raleigh, North Carolina
10
Seam N, Lee AJ, Vennero M, Emlet L. Simulation Training in the ICU. Chest 2019; 156:1223-1233. PMID: 31374210. PMCID: PMC6945651. DOI: 10.1016/j.chest.2019.07.011.
Abstract
Because of an emphasis on patient safety and recognition of the effectiveness of simulation as an educational modality across multiple medical specialties, the use of health-care simulation (HCS) for medical education has become more prevalent. In this article, the effectiveness of simulation for areas important to the practice of critical care is reviewed. We examine the evidence base related to the domains of procedural mastery, development of communication skills, and interprofessional team performance, with specific examples from the literature in which simulation has been used successfully in these domains in critical care training. We also review the data assessing the value of simulation in other areas highly relevant to critical care practice, including assessment of performance, integration of HCS in decision science, and critical care quality improvement, with attention to the areas of system support and high-risk, low-volume events in contemporary health-care systems. When possible, we report data evaluating the effectiveness of HCS in critical care training based on high-level learning outcomes resulting from the training, rather than lower-level outcomes such as learner confidence or post-test score immediately after training. Finally, obstacles to the implementation of HCS, such as cost and logistics, are examined, and current and future strategies to evaluate the best use of simulation in critical care training are discussed.
Affiliation(s)
- Nitin Seam: Critical Care Medicine Department, National Institutes of Health, Bethesda, MD
- Ai Jin Lee: Women's Guild Simulation Center for Advanced Clinical Skills, Cedars-Sinai Medical Center, Los Angeles, CA
- Lillian Emlet: VA Pittsburgh Healthcare System and University of Pittsburgh Medical Center, Pittsburgh, PA
11
Halwani Y, Sachdeva AK, Satterthwaite L, de Montbrun S. Development and evaluation of the General Surgery Objective Structured Assessment of Technical Skill (GOSATS). Br J Surg 2019; 106:1617-1622. [DOI: 10.1002/bjs.11359] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2019] [Revised: 07/16/2019] [Accepted: 08/13/2019] [Indexed: 11/05/2022]
Abstract
Background
Technical skill acquisition is important in surgery specialty training. Despite an emphasis on competency-based training, few tools are currently available for direct technical skills assessment at the completion of training. The aim of this study was to develop and validate a simulated technical skill examination for graduating (postgraduate year (PGY)5) general surgery trainees.
Methods
A simulated eight-station, procedure-based general surgery technical skills examination was developed. Board-certified general surgeons blinded to the level of training rated performance of PGY3 and PGY5 trainees by means of validated scoring. Cronbach's α was used to calculate reliability indices, and a conjunctive model to set a pass score with borderline regression methodology. Subkoviak methodology was employed to assess the reliability of the pass–fail decision. The relationship between passing the examination and PGY level was evaluated using χ2 analysis.
Results
Ten PGY3 and nine PGY5 trainees were included. Interstation reliability was 0.66, and inter-rater reliability for three stations was 0.92, 0.97 and 0.76. A pass score of 176.8 of 280 (63.1 per cent) was set. The pass rate for PGY5 trainees was 78 per cent (7 of 9), compared with 30 per cent (3 of 10) for PGY3 trainees. Reliability of the pass–fail decision had an agreement coefficient of 0.88. Graduating trainees were significantly more likely to pass the examination than PGY3 trainees (χ2 = 4.34, P = 0.037).
Conclusion
A summative general surgery technical skills examination was developed with reliability indices within the range needed for high-stakes assessments. Further evaluation is required before the examination can be used in decisions regarding certification.
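The reported chi-square of 4.34 (P = 0.037) can be reproduced from the pass counts alone. A minimal sketch using only the Python standard library and the 2×2 Pearson chi-square formula without continuity correction (the variant consistent with the reported statistic):

```python
import math

# 2x2 table from the reported pass rates: rows = PGY5 / PGY3, columns = pass / fail
a, b = 7, 2   # PGY5: 7 of 9 passed
c, d = 3, 7   # PGY3: 3 of 10 passed
n = a + b + c + d

# Pearson chi-square for a 2x2 table, no continuity correction
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# With df = 1 the chi-square survival function reduces to erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 2), round(p, 3))  # 4.34 0.037
```

With only 19 trainees an exact test would also be defensible; the uncorrected chi-square is shown because it matches the published statistic.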
Affiliation(s)
- Y Halwani: Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- A K Sachdeva: American College of Surgeons, Chicago, Illinois, USA
- L Satterthwaite: University of Toronto, Surgical Skills Centre, Mount Sinai Hospital, Toronto, Ontario, Canada
- S de Montbrun: Department of Surgery, University of Toronto; Division of General Surgery, St Michael's Hospital, Toronto, Ontario, Canada
12
Everett TC, McKinnon RJ, Ng E, Kulkarni P, Borges BCR, Letal M, Fleming M, Bould MD. Simulation-based assessment in anesthesia: an international multicentre validation study. Can J Anaesth 2019; 66:1440-1449. [PMID: 31559541 DOI: 10.1007/s12630-019-01488-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Revised: 06/14/2019] [Accepted: 06/14/2019] [Indexed: 11/27/2022] Open
Abstract
PURPOSE Simulated clinical events provide a means to evaluate a practitioner's performance in a standardized manner for all candidates tested. We sought to provide evidence for the validity of simulation-based assessment tools in simulated pediatric anesthesia emergencies. METHODS Nine centres in two countries recruited subjects to participate in simulated operating room events. Participants ranged in anesthesia experience from junior residents to staff anesthesiologists. Performances were video recorded for review and scored by specially trained, blinded, expert raters. The rating tools consisted of scenario-specific checklists and a global rating scale that allowed the rater to make a judgement about the subject's performance, and by extension, preparedness for independent practice. The reliability of the tools was classified as "substantial" (intraclass correlation coefficients ranged from 0.84 to 0.96 for the checklists and from 0.85 to 0.94 for the global rating scale). RESULTS Three hundred and ninety-one simulation encounters were analysed. Senior trainees and staff significantly outperformed junior trainees (P = 0.04 and P < 0.001, respectively). The effect size of grade (junior vs senior trainee vs staff) on performance was classified as "medium" (partial η2 = 0.06). Performance deficits were observed across all grades of anesthesiologist, particularly in two of the scenarios. CONCLUSIONS This study supports the validity of our simulation-based anesthesiologist assessment tools across several domains. We also describe residual challenges regarding the validity of our tools, note some cautions in terms of the intended consequences of their use, and identify opportunities for further research.
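Intraclass correlation coefficients like those above come from an analysis-of-variance decomposition of the rating table. The study does not specify its ICC model, so the following is only a minimal one-way ICC(1,1) sketch in NumPy, on hypothetical checklist scores from two blinded raters:

```python
import numpy as np

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1): subjects in rows, raters in columns."""
    n, k = ratings.shape
    grand = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares
    msb = k * ((subject_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical checklist scores from two raters for five performances
scores = np.array([[8, 9], [4, 5], [7, 7], [2, 3], [9, 9]], dtype=float)
print(round(icc_1_1(scores), 2))  # 0.96
```

Two-way models (raters as a second factor) add a rater mean square to the denominator but follow the same mean-squares logic.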
Affiliation(s)
- Tobias C Everett: Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, University of Toronto, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
- Ralph J McKinnon: Department of Anesthesia, Royal Manchester Children's Hospital, Manchester, United Kingdom
- Elaine Ng: Department of Anesthesia and Pain Medicine, The Hospital for Sick Children, University of Toronto, 555 University Avenue, Toronto, ON, M5G 1X8, Canada
- Pradeep Kulkarni: Department of Anesthesia, Stollery Children's Hospital, University of Alberta, Edmonton, AB, Canada
- Bruno C R Borges: Department of Anesthesia, McMaster Children's Hospital, McMaster University, Hamilton, ON, Canada
- Michael Letal: Department of Anesthesia, Alberta Children's Hospital, University of Calgary, Calgary, AB, Canada
- Melinda Fleming: Department of Anesthesia, Queen's University, Kingston, ON, Canada
- M Dylan Bould: Department of Anesthesia, Children's Hospital of Eastern Ontario, University of Ottawa, Ottawa, ON, Canada
13
Weersink K, Hall AK, Rich J, Szulewski A, Dagnone JD. Simulation versus real-world performance: a direct comparison of emergency medicine resident resuscitation entrustment scoring. Adv Simul (Lond) 2019; 4:9. [PMID: 31061721 PMCID: PMC6492388 DOI: 10.1186/s41077-019-0099-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Accepted: 04/15/2019] [Indexed: 11/10/2022] Open
Abstract
Background Simulation is increasingly being used in postgraduate medical education as an opportunity for competency assessment. However, there is limited direct evidence that supports performance in the simulation lab as a surrogate of workplace-based clinical performance for non-procedural tasks such as resuscitation in the emergency department (ED). We sought to directly compare entrustment scoring of resident performance in the simulation environment to clinical performance in the ED. Methods The resuscitation assessment tool (RAT) was derived from the previously implemented and studied Queen's simulation assessment tool (QSAT) via a modified expert review process. The RAT uses an anchored global assessment scale to generate an entrustment score and narrative comments. Emergency medicine (EM) residents were assessed using the RAT on cases in simulation-based examinations and in the ED during resuscitation cases from July 2016 to June 2017. Resident mean entrustment scores were compared using Pearson's correlation coefficient to determine the relationship between entrustment in simulation cases and in the ED. Inductive thematic analysis of written commentary was conducted to compare workplace-based with simulation-based feedback. Results There was a moderate, positive correlation found between mean entrustment scores in the simulated and workplace-based settings, which was statistically significant (r = 0.630, n = 17, p < 0.01). Further, qualitative analysis demonstrated overall management and leadership themes were more common narratives in the workplace, while more specific task-based feedback predominated in the simulation-based assessment. Both workplace-based and simulation-based narratives frequently commented on communication skills. 
Conclusions In this single-center study with a limited sample size, assessment of residents using entrustment scoring in simulation settings was demonstrated to have a moderate positive correlation with assessment of resuscitation competence in the workplace. This study suggests that resuscitation performance in simulation settings may be an indicator of competence in the clinical setting. However, multiple factors contribute to this complicated and imperfect relationship. It is imperative to consider narrative comments in supporting the rationale for numerical entrustment scores in both settings and to include both simulation and workplace-based assessment in high-stakes decisions of progression.
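The significance of the reported correlation can be checked with the standard t transform of Pearson's r. A quick standard-library sketch using the abstract's values (the critical value 2.947 is the two-tailed t cutoff at df = 15, α = 0.01):

```python
import math

r, n = 0.630, 17                            # values reported in the abstract
t = r * math.sqrt((n - 2) / (1 - r ** 2))   # t statistic with df = n - 2
t_crit = 2.947                              # two-tailed critical t, df = 15, alpha = 0.01

print(round(t, 2), t > t_crit)  # 3.14 True -> consistent with the reported p < 0.01
```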
Affiliation(s)
- Kristen Weersink: Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- Andrew K Hall: Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- Jessica Rich: Faculty of Education, Queen's University, Kingston, ON, Canada
- Adam Szulewski: Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
- J Damon Dagnone: Department of Emergency Medicine, Queen's University, Kingston Health Sciences Center c/o 76 Stuart St, Kingston, ON K7L 2V7, Canada
14
Aparicio-Martínez P, Martínez-Jiménez MDP, Perea-Moreno AJ, Vaquero-Álvarez E, Redel-Macías MD, Vaquero-Abellán M. Is possible to train health professionals in prevention of high-risk pathogens like the Ebola by using the mobile phone? TELEMATICS AND INFORMATICS 2019. [DOI: 10.1016/j.tele.2018.08.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
15
Use of Simulation in High-Stakes Summative Assessments in Surgery. COMPREHENSIVE HEALTHCARE SIMULATION: SURGERY AND SURGICAL SUBSPECIALTIES 2019. [DOI: 10.1007/978-3-319-98276-2_11] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
16
17
Hart D, Bond W, Siegelman JN, Miller D, Cassara M, Barker L, Anders S, Ahn J, Huang H, Strother C, Hui J. Simulation for Assessment of Milestones in Emergency Medicine Residents. Acad Emerg Med 2018; 25:205-220. [PMID: 28833892 DOI: 10.1111/acem.13296] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2017] [Revised: 08/01/2017] [Accepted: 08/16/2017] [Indexed: 11/29/2022]
Abstract
OBJECTIVES All residency programs in the United States are required to report their residents' progress on the milestones to the Accreditation Council for Graduate Medical Education (ACGME) biannually. Since the development and institution of this competency-based assessment framework, residency programs have been attempting to ascertain the best ways to assess resident performance on these metrics. Simulation was recommended by the ACGME as one method of assessment for many of the milestone subcompetencies. We developed three simulation scenarios with scenario-specific milestone-based assessment tools. We aimed to gather validity evidence for this tool. METHODS We conducted a prospective observational study to investigate the validity evidence for three mannequin-based simulation scenarios for assessing individual residents on emergency medicine (EM) milestones. The subcompetencies (i.e., patient care [PC]1, PC2, PC3) included were identified via a modified Delphi technique using a group of experienced EM simulationists. The scenario-specific checklist (CL) items were designed based on the individual milestone items within each EM subcompetency chosen for assessment and reviewed by experienced EM simulationists. Two independent live raters who were EM faculty at the respective study sites scored each scenario following brief rater training. The inter-rater reliability (IRR) of the assessment tool was determined by measuring intraclass correlation coefficient (ICC) for the sum of the CL items as well as the global rating scales (GRSs) for each scenario. Comparing GRS and CL scores between various postgraduate year (PGY) levels was performed with analysis of variance. RESULTS Eight subcompetencies were chosen to assess with three simulation cases, using 118 subjects. Evidence of test content, internal structure, response process, and relations with other variables were found. 
The ICCs for the sum of the CL items and the GRSs were >0.8 for all cases, with one exception (clinical management GRS = 0.74 in sepsis case). The sum of CL items and GRSs (p < 0.05) discriminated between PGY levels on all cases. However, when the specific CL items were mapped back to milestones in various proficiency levels, the milestones in the higher proficiency levels (level 3 [L3] and 4 [L4]) did not often discriminate between various PGY levels. L3 milestone items discriminated between PGY levels on five of 12 occasions they were assessed, and L4 items discriminated only two of 12 times they were assessed. CONCLUSION Three simulation cases with scenario-specific assessment tools allowed evaluation of EM residents on proficiency L1 to L4 within eight of the EM milestone subcompetencies. Evidence of test content, internal structure, response process, and relations with other variables were found. Good to excellent IRR and the ability to discriminate between various PGY levels was found for both the sum of CL items and the GRSs. However, there was a lack of a positive relationship between advancing PGY level and the completion of higher-level milestone items (L3 and L4).
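Comparing checklist sums across PGY levels with analysis of variance, as above, reduces to a ratio of between-group and within-group mean squares. A minimal one-way ANOVA sketch with NumPy; the scores are hypothetical, not the study's data, and eta-squared is included as the matching effect size:

```python
import numpy as np

def one_way_anova(*groups):
    """Return the F statistic and eta-squared (SS_between / SS_total)."""
    data = np.concatenate([np.asarray(g, float) for g in groups])
    grand = data.mean()
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)                # between groups
    ssw = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum() for g in groups)  # within groups
    df_between = len(groups) - 1
    df_within = data.size - len(groups)
    f = (ssb / df_between) / (ssw / df_within)
    return f, ssb / (ssb + ssw)

# Hypothetical checklist sums for three training levels
pgy1, pgy2, pgy3 = [10, 12, 11, 9], [14, 13, 15, 12], [17, 16, 18, 15]
f, eta_sq = one_way_anova(pgy1, pgy2, pgy3)
print(round(f, 1), round(eta_sq, 2))  # 21.6 0.83
```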
Affiliation(s)
- Danielle Hart: Emergency Medicine, Hennepin County Medical Center, University of Minnesota Medical School, Minneapolis, MN
- William Bond: Department of Emergency Medicine, Lehigh Valley Health Network, Allentown, PA
- Daniel Miller: Department of Emergency Medicine, University of Iowa, Iowa City, IA
- Michael Cassara: Department of Emergency Medicine, Hofstra University North Shore Long Island Jewish SOM, Northwell Health Center, Lake Success, NY
- Lisa Barker: Department of Emergency Medicine, University of Illinois College of Medicine at Peoria, Peoria, IL
- Shilo Anders: Department of Anesthesiology, Vanderbilt University, Nashville, TN
- James Ahn: Department of Emergency Medicine, University of Chicago, Chicago, IL
- Hubert Huang: Division of Education, Lehigh Valley Health Network, Allentown, PA
- Joshua Hui: Department of Emergency Medicine, Kaiser Permanente, Los Angeles Medical Center, Los Angeles, CA
18
19
Exposure to Simulated Mortality Affects Resident Performance During Assessment Scenarios. Simul Healthc 2017; 12:282-288. [DOI: 10.1097/sih.0000000000000257] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
20
21
de Montbrun S. Passing a Technical Skills Examination in the First Year of Surgical Residency Can Predict Future Performance. J Grad Med Educ 2017. [PMID: 28638511 PMCID: PMC5476382 DOI: 10.4300/jgme-d-16-00517.1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND The ability of an assessment to predict performance would be of major benefit to residency programs, allowing for early identification of residents at risk. OBJECTIVE We sought to establish whether passing the Objective Structured Assessment of Technical Skills (OSATS) examination in postgraduate year 1 (PGY-1) predicts future performance. METHODS Between 2002 and 2012, 133 PGY-1 surgery residents at the University of Toronto (Toronto, Ontario, Canada) completed an 8-station, simulated OSATS examination as a component of training. With recently set passing scores, residents were assigned a pass/fail status using 3 standard-setting methods (contrasting groups, borderline group, and borderline regression). Future in-training performance was compared between residents who had passed and those who failed the OSATS, using in-training evaluation reports from resident files. A Mann-Whitney U test compared performance among groups at PGY-2 and PGY-4 levels. RESULTS Residents who passed the OSATS examination outperformed those who failed when compared during PGY-2, across all 3 standard-setting methodologies (P < .05). During PGY-4, only the contrasting groups method showed a significant difference (P < .05). CONCLUSIONS We found that PGY-1 surgical resident pass/fail status on a technical skills examination was associated with future performance on in-training evaluation reports in later years. This provides validity evidence for the current PGY-1 pass/fail score and suggests that this technical skills examination may be used to predict performance and to identify residents who require remediation.
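The Mann-Whitney U comparison used above can be sketched directly from its pairwise definition. A NumPy illustration with hypothetical in-training evaluation scores for passers and failers (normal approximation for the p-value, no tie correction, so suitable only for moderate samples):

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """U for sample x by the pairwise definition, with a two-tailed
    normal-approximation p-value (no tie correction)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x[:, None] - y[None, :]
    u = (diff > 0).sum() + 0.5 * (diff == 0).sum()   # wins plus half-credit for ties
    m, n = len(x), len(y)
    mu = m * n / 2
    sigma = math.sqrt(m * n * (m + n + 1) / 12)
    z = (u - mu) / sigma
    return u, math.erfc(abs(z) / math.sqrt(2))        # two-tailed p

# Hypothetical in-training evaluation scores
passed = [78, 82, 75, 80, 85, 79]
failed = [70, 72, 68, 74, 71]
u, p = mann_whitney_u(passed, failed)
print(u, p < 0.05)  # 30.0 True
```

Here every "passed" score exceeds every "failed" score, so U hits its maximum of m x n = 30.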
22
Fekonja Z, Nerat J, Gönc V, Pišlar M, Denny M, Trifkovič KČ. Comparing Students’ Self-Assessment with Teachers’ Assessment of Clinical Skills Using an Objective Structured Clinical Examination (OSCE). TEACHING AND LEARNING IN NURSING 2017. [DOI: 10.5772/67956] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
23
24
Kothari LG, Shah K, Barach P. Simulation based medical education in graduate medical education training and assessment programs. PROGRESS IN PEDIATRIC CARDIOLOGY 2017. [DOI: 10.1016/j.ppedcard.2017.02.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
25
West AJ, Parchoma G. The practice of simulation-based assessment in respiratory therapy education. Can J Respir Ther 2017; 53:13-16. [PMID: 30996624 PMCID: PMC6422207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Clinical simulation has gained prominence as an educational approach in many Canadian respiratory therapy programs and is strongly associated with improved learning, clinical and nonclinical skills, future performance, and patient outcomes. Traditionally, the primary assessment approach employed in clinical simulation has been formative debriefing for learning. Contextual factors, such as limited opportunities for learning in clinical practice and technologically oriented perspectives on learning in clinical simulation, are converging to prompt a move from formative debriefing sessions that support learning in simulation to high-stakes testing intended to measure entry-to-practice competencies. We adopt the perspective that these factors are intricately linked to the profession's regulatory environment, which may strongly influence how simulation practices become embedded within respiratory therapy educational programs. Through this discussion we challenge the profession to consider how environmental factors, including externally derived requirements, may ultimately impact the effectiveness of simulation-based learning environments.
Affiliation(s)
- Andrew J West: Werklund School of Education, University of Calgary, Calgary, AB
- Gale Parchoma: College of Education, University of Saskatchewan, Saskatoon, SK
26
[Full-scale simulation in German medical schools and anesthesia residency programs : Status quo]. Anaesthesist 2016; 66:11-20. [PMID: 27942787 DOI: 10.1007/s00101-016-0251-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2016] [Revised: 10/31/2016] [Accepted: 11/18/2016] [Indexed: 10/20/2022]
Abstract
BACKGROUND Simulation has been increasingly used in medicine. In 2003, German university departments of anesthesiology were provided with a full-scale patient simulator designated for use with medical students. Meanwhile, simulation courses are also offered to physicians and nurses. Currently, the national model curriculum for residency programs in anesthesiology is being revised, possibly to include mandatory simulation training. OBJECTIVES To assess the status quo of full-scale simulation training for medical school, residency and continuing medical education in German anesthesiology. METHODS All 38 German university chairs for anesthesiology as well as five arbitrarily chosen non-university facilities were invited to complete an online questionnaire regarding their centers' infrastructure and courses held between 2010 and 2012. RESULTS The overall return rate was 86%. In university simulation centers, seven non-student staff members, mainly physicians, were involved, adding up to a full-time equivalent of 1.2. All hours of work were paid by 61% of the centers. The median center size was 100 m² (range 20-500 m²), equipped with three patient simulators (range 1-32). Simulators of high or very high fidelity are available at 80% of the centers. Scripted scenarios were used by 91%, video debriefing by 69%. Of the participating university centers, 97% offered courses for medical students, 81% for the department's employees, 43% for other departments of their hospital, and 61% for external participants. In 2012 the median center reached 46% of eligible students (range 0-100%), 39% of the department's physicians (8-96%) and 16% of its nurses (0-56%) at least once. For physicians and nurses from these departments, that equals one simulation-based training every 2.6 and 6 years, respectively. Simulation training was mandatory for residents at 31% of centers, for nurses at 29%, and for attending physicians at 24%.
The overall rates of staff ever exposed to simulation were 45% of residents (8-90%) and 30% each of nurses (10-80%) and attendings (0-100%). Including external courses, the average center trained 59 (range 4-271) professionals overall in 2012. No clear trend could be observed over the three years polled. The results for the non-university centers were comparable. CONCLUSIONS Important first steps have been taken to implement full-scale simulation in Germany. In addition to programs for medical students, courses for physicians and nurses are available today. To reach everyone clinically involved in German anesthesiology on a regular basis, the current capacities need to be increased dramatically. This will require new funding concepts, possibly supported by external requirements such as the national model curriculum for residency in anesthesiology.
27
Abstract
The goal of faculty development activities is to supply the public with knowledgeable, skilled, and competent physicians who are prepared for high performance in the dynamic and complex healthcare environment. Current faculty development programs lack evidence-based support and are not sufficient to meet the professional needs of practicing physicians. Simulation activities for faculty development offer an alternative to traditional, teacher-centric educational offerings. Grounded in adult learning theory, simulation is a learner-centric, interactive, efficient, and effective method to train busy professionals. Many of the faculty development needs of clinical neonatologists can be met by participating in simulation-based activities that focus on technical skills, teamwork, leadership, communication, and patient safety.
Affiliation(s)
- Heather M French: Division of Neonatology, Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Roberta L Hales: Center for Simulation, Advanced Education, and Innovation, The Children's Hospital of Philadelphia, Philadelphia, PA; Division of Simulation, Department of Emergency Medicine, Drexel University College of Medicine, Philadelphia, PA
28
Lighthall GK, Barr J. The Use of Clinical Simulation Systems to Train Critical Care Physicians. J Intensive Care Med 2007; 22:257-69. [PMID: 17895484 DOI: 10.1177/0885066607304273] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Intensive care units are complex and dynamic clinical environments in which the delivery of appropriate and timely care to critically ill patients depends on the integrated and efficient actions of providers with specialized training. The use of realistic clinical simulator systems can help to facilitate and standardize the training of critical-care physicians, nurses, respiratory therapists, and pharmacists without having the training process jeopardize the well-being of patients. In this article, we review the current state of the art of patient simulator systems and their applications to critical-care medicine, and we offer some examples and recommendations on how to integrate simulator systems into critical-care training.
Affiliation(s)
- Geoffrey K Lighthall: Department of Anesthesia, Stanford University School of Medicine, Stanford, CA, USA
29
Simulation With PARTS (Phase-Augmented Research and Training Scenarios): A Structure Facilitating Research and Assessment in Simulation. Simul Healthc 2015; 10:178-87. [PMID: 25932706 DOI: 10.1097/sih.0000000000000085] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
INTRODUCTION Assessment in simulation is gaining importance, as are scenario design methods that increase the opportunity for assessment. We present our approach to improving measurement in complex scenarios using PARTS (Phase-Augmented Research and Training Scenarios), essentially separating cases into clearly delineated phases. METHODS We created 7 PARTS with real-time rating instruments and tested these in 63 cases during 4 weeks of simulation. Reliability was tested by comparing real-time rating with postsimulation video-based rating using the same instrument. Validity was tested by comparing preintervention and postintervention total results, by examining the difference in improvement when focusing on the phase-specific results addressed by the intervention, and further explored by trying to demonstrate the discrete improvement expected from proficiency in the rare occurrence of leader inclusive behavior. RESULTS Intraclass correlations ICC(3,1) between real-time and postsimulation ratings were 0.951 (95% confidence interval [CI], 0.794-0.990), 1.00 (95% CI, --to--), 0.948 (95% CI, 0.783-0.989), and 0.995 (95% CI, 0.977-0.999) for the 3 phase-specific scores and the total scenario score, respectively. Paired t tests of prelecture-postlecture performance showed an improvement of 14.26% (bias-corrected and accelerated bootstrap [BCa] 95% CI, 4.71-23.82; P = 0.009) for total performance but of 28.57% (BCa 95% CI, 13.84-43.30; P = 0.002) for performance in the respective phase. The correlation of total scenario performance with leader inclusiveness was not significant (rs = 0.228; BCa 95% CI, -0.082 to 0.520; P = 0.119) but significant for specific phase performance (rs = 0.392; BCa 95% CI, 0.118-0.632; P = 0.006). CONCLUSIONS The PARTS allowed for improved reliability and validity of measurements in complex scenarios.
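The bias-corrected and accelerated (BCa) bootstrap intervals reported above refine the basic resampling idea. A plain percentile bootstrap, sketched here with NumPy on hypothetical paired phase scores, shows the core mechanics; BCa additionally adjusts the percentile cutoffs for bias and skew:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pre/post phase performance (percent) for 12 teams,
# constructed so that every team improves
pre = rng.uniform(40, 60, size=12)
post = pre + rng.uniform(5, 25, size=12)
diff = post - pre

# Percentile bootstrap of the mean improvement
boot_means = np.array([rng.choice(diff, size=diff.size, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo > 0)  # True: the 95% CI excludes zero, i.e. a reliable improvement
```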
30
Amiel I, Simon D, Merin O, Ziv A. Mobile in Situ Simulation as a Tool for Evaluation and Improvement of Trauma Treatment in the Emergency Department. JOURNAL OF SURGICAL EDUCATION 2016; 73:121-8. [PMID: 26443239 DOI: 10.1016/j.jsurg.2015.08.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2015] [Revised: 07/11/2015] [Accepted: 08/26/2015] [Indexed: 05/10/2023]
Abstract
BACKGROUND Medical simulation is an increasingly recognized tool for teaching, coaching, training, and examining practitioners in the medical field. For many years, simulation has been used to improve trauma care and teamwork. Despite technological advances in trauma simulators, including better means of mobilization and control, most reported simulation-based trauma training has been conducted inside simulation centers, and the practice of mobile simulation in hospitals' trauma rooms has not been investigated fully. METHODS The emergency department personnel of a second-level trauma center in Israel were evaluated. Divided into randomly formed trauma teams, they were assessed twice using in situ mobile simulation training in the hospital's trauma bay: 4 simulations were held before and 4 after a structured learning intervention. The intervention included a 1-day simulation-based training conducted at the Israel Center for Medical Simulation (MSR), with video-based debriefing facilitated by the hospital's 4 trauma team leaders, who had completed a 2-day simulation-based instructors' course before the start of the study. The instructors were also trained in performance rating and were responsible for assessing their respective teams both in real time and through review of the recorded videos, enabling a comparison of performance in the mobile simulation exercise before and after the educational intervention. RESULTS The internal reliability of the experts' evaluation, calculated with the Cronbach α model, was 0.786. Statistically significant improvement was observed in 4 of 10 parameters, among them teamwork (29.64%) and communication (24.48%) (p = 0.00005).
CONCLUSION The mobile in situ simulation-based training demonstrated efficacy both as an assessment tool for trauma team function and as an educational intervention when coupled with center-based ("in vitro") simulation training, resulting in significant improvement of the teams' function in various aspects of treatment.
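Cronbach's α, used above as the internal-reliability index, compares the summed item variances with the variance of the total score. A short NumPy sketch with hypothetical expert ratings (four teams rated on three parameters; the data are made up, so the resulting α will not match the study's 0.786):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: rows = observations (teams), columns = rated parameters.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical expert ratings of four teams on three parameters
ratings = np.array([[4, 5, 4],
                    [2, 3, 2],
                    [5, 5, 4],
                    [3, 3, 3]], dtype=float)
print(round(cronbach_alpha(ratings), 2))  # 0.96
```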
Affiliation(s)
- Imri Amiel: The Israel Center for Medical Simulation (MSR), Sheba Medical Center, Tel Hashomer, Israel; Department of Surgery, Sheba Medical Center, Tel Hashomer, Israel
- Daniel Simon: Trauma Unit, Sheba Medical Center, Tel Hashomer, Israel
- Ofer Merin: Trauma Unit, Department of Cardiothoracic Surgery, Shaarei-Zedek Medical Center, Jerusalem, Israel; Faculty of Medicine, Hebrew University, Jerusalem, Israel
- Amitai Ziv: The Israel Center for Medical Simulation (MSR), Sheba Medical Center, Tel Hashomer, Israel; Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
Collapse
|
31
|
Facchinato APA, Benedicto CC, Mora AG, Cabral DMC, Fagundes DJ. Clinical competency evaluation of Brazilian chiropractic interns. The Journal of Chiropractic Education 2015; 29:145-150. [PMID: 25588200] [PMCID: PMC4582613] [DOI: 10.7899/jce-14-13]
Abstract
OBJECTIVE This study compares the results of an objective structured clinical examination (OSCE) between 2 groups of students before an internship and after 6 months of clinical practice in an internship. METHODS Seventy-two students participated, with 36 students in each cohort. The OSCEs were performed in the simulation laboratory before the participants' clinical practice internship and after 6 months of the internship. Students were tested in 9 stations for clinical skills and knowledge. The same procedures were repeated for both cohorts. The t test was used for unpaired parametric samples, and Fisher's exact test was used for comparison of proportions. RESULTS There was no difference in the mean final score between the 2 groups (p = .34 for test 1; p = .08 for test 2). The performance of students in group 1 did not differ significantly between the tests taken before and after 6 months of clinical practice, but in group 2 there was a significant decrease in the average score after 6 months of clinical practice. CONCLUSIONS There was no difference in the cumulative average score for the 2 groups before and after 6 months of clinical practice in the internship. Within the cohorts, however, there were differences, with a significant decrease in the average score in group 2. Issues pertaining to test standardization and student motivation for test 2 may have influenced the scores.
32
Mansoorian MR, Hosseiny MS, Khosravan S, Alami A, Alaviani M. Comparing the Effects of Objective Structured Assessment of Technical Skills (OSATS) and Traditional Method on Learning of Students. Nurs Midwifery Stud 2015; 4:e27714. [PMID: 26339669] [PMCID: PMC4557410] [DOI: 10.17795/nmsjournal27714]
Abstract
Background: Despite the benefits of the objective structured assessment of technical skills (OSATS) and its appropriateness for evaluating the clinical abilities of nursing students, few studies are available on the application of this method in nursing education. Objectives: The purpose of this study was to compare the effects of OSATS and traditional methods on students' learning. We also aimed to describe students' views about these two methods and about the scores they received under each in a medical emergency course. Patients and Methods: A quasi-experimental study was performed on 45 first-semester students in nursing and medical emergencies taking a course on fundamentals of practice. The students were selected by a census method and evaluated by both the OSATS and traditional methods. Data were collected using checklists prepared from the 'textbook of nursing procedures checklists' published by the Iranian nursing organization and a questionnaire covering learning rate and students' estimates of their received scores. Descriptive statistics as well as paired t-tests and independent-samples t-tests were used in data analysis. Results: The mean of students' scores in OSATS was significantly higher than their mean score in the traditional method (P = 0.01). Moreover, the mean self-evaluation score after the traditional method was roughly the same as the score the students received in the exam, whereas the mean self-evaluation score after the OSATS was somewhat lower than the scores the students received in the OSATS exam. Most students believed that OSATS can evaluate a wider range of knowledge and skills than the traditional method. Conclusions: The results indicate a greater effect of OSATS on learning and its relative superiority in precise assessment of clinical skills compared with the traditional evaluation method. Therefore, we recommend using this method in the evaluation of students in practical courses.
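The paired t-test mentioned above compares each student's two scores directly. A minimal sketch of the test statistic, using made-up scores rather than the study's data:

```python
from math import sqrt

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples.

    x[i] and y[i] are the two scores of the same student; the statistic
    tests whether the mean within-student difference is zero.
    """
    d = [a - b for a, b in zip(x, y)]       # per-student differences
    n = len(d)
    mean = sum(d) / n
    sd = sqrt(sum((v - mean) ** 2 for v in d) / (n - 1))  # sample SD
    return mean / (sd / sqrt(n)), n - 1     # (t, degrees of freedom)

# Illustrative: OSATS vs. traditional scores for 4 hypothetical students.
t, df = paired_t([3, 4, 6, 7], [1, 3, 4, 4])
```

The resulting t and degrees of freedom are then compared against the t distribution to obtain a p value.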
Affiliation(s)
- Marzeih Sadat Hosseiny
- Department of Nursing and Midwifery, Gonabad University of Medical Sciences, Gonabad, IR Iran
- Shahla Khosravan
- Social Determinant of Health Research Center, Faculty of Nursing and Midwifery, Gonabad University of Medical Sciences, Gonabad, IR Iran
- Ali Alami
- Department of Health, School of Public Health, Social Determinant of Health Research Center, Gonabad University of Medical Sciences, Gonabad, IR Iran
- Mehri Alaviani
- Department of Community and Mental Health Nursing, School of Nursing and Midwifery, Maragheh Faculty of Medical Sciences, Maragheh, IR Iran

33
Udani AD, Kim TE, Howard SK, Mariano ER. Simulation in teaching regional anesthesia: current perspectives. Local Reg Anesth 2015; 8:33-43. [PMID: 26316812] [PMCID: PMC4540124] [DOI: 10.2147/lra.s68223]
Abstract
The emerging subspecialty of regional anesthesiology and acute pain medicine represents an opportunity to critically evaluate current methods of teaching regional anesthesia techniques and the practice of acute pain medicine. To date, simulation has seen a wide variety of applications in this field, and its efficacy has largely been assumed. However, a thorough review of the literature reveals that effective teaching strategies, including simulation, in regional anesthesiology and acute pain medicine are not yet completely established. Future research should be directed toward the comparative effectiveness of simulation versus other accepted teaching methods, the combination of procedural training with realistic clinical scenarios, and the application of simulation-based teaching curricula to a wider range of learners, from students to practicing physicians.
Affiliation(s)
- Ankeet D Udani
- Department of Anesthesiology, Duke University School of Medicine, Durham, NC, USA
- T Edward Kim
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Steven K Howard
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Edward R Mariano
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA

34
Kurrek MM, Morgan P, Howard S, Kranke P, Calhoun A, Hui J, Kiss A. Simulation as a new tool to establish benchmark outcome measures in obstetrics. PLoS One 2015; 10:e0131064. [PMID: 26107661] [PMCID: PMC4480859] [DOI: 10.1371/journal.pone.0131064]
Abstract
BACKGROUND Rare critical events yield too few clinical data to support statistical judgments about whether the management of an actual event fell below what could reasonably be expected (i.e., was an outlier). OBJECTIVES In this project we used simulation to describe the distribution of management times as an approach to deciding whether the management of a simulated obstetrical crisis scenario could be considered an outlier. DESIGN Twelve obstetrical teams managed 4 previously developed scenarios. Relevant outcome variables were defined by expert consensus. The distribution of response times from the teams that performed the respective intervention was displayed graphically, and medians and quartiles were calculated using rank-order statistics. RESULTS Only 7 of the 12 teams performed chest compressions during the arrest following the 'cannot intubate/cannot ventilate' scenario. All other outcome measures were performed by at least 11 of the 12 teams. Medians and quartiles with 95% CIs could be calculated for all outcomes; given the small sample size, the confidence intervals were large. CONCLUSION We demonstrated the use of simulation to calculate quantiles for the management times of critical events. This approach could assist in deciding whether a given performance can be considered normal and also point to aspects of care that pose particular challenges, as evidenced by the large number of teams not performing an expected maneuver. However, sufficiently large samples (e.g., from a national database) will be required to calculate acceptable confidence intervals and to establish actual tolerance limits.
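Medians with distribution-free confidence intervals of the kind described above can be obtained from order statistics: the interval between two ranks covers the true median with a probability given by the binomial(n, 0.5) distribution. An illustrative sketch (not the authors' code):

```python
from math import comb

def median_ci(samples, conf=0.95):
    """Sample median with a distribution-free confidence interval.

    The interval [xs[lo], xs[hi-1]] covers the true median with
    probability sum_{k=lo}^{hi-1} C(n, k) / 2^n; we widen symmetrically
    from the middle ranks until that coverage reaches `conf`.
    """
    xs = sorted(samples)
    n = len(xs)
    med = (xs[(n - 1) // 2] + xs[n // 2]) / 2

    def coverage(lo, hi):
        return sum(comb(n, k) for k in range(lo, hi)) / 2 ** n

    lo, hi = n // 2, n // 2 + 1
    while coverage(lo, hi) < conf and (lo > 0 or hi < n):
        lo, hi = max(lo - 1, 0), min(hi + 1, n)
    return med, xs[lo], xs[hi - 1]
```

With only 12 teams, the interval spans most of the observed range, which mirrors the paper's point that small samples produce very wide confidence intervals.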
Affiliation(s)
- Matt M. Kurrek
- Department of Anesthesia, University of Toronto, Toronto, Ontario, Canada
- Pamela Morgan
- Department of Anesthesia, University of Toronto, Toronto, Ontario, Canada
- Steven Howard
- Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Palo Alto, California, United States of America
- Peter Kranke
- Department of Anesthesia and Critical Care, University of Wuerzburg, Wuerzburg, Germany
- Aaron Calhoun
- Division of Critical Care, Department of Pediatrics, University of Louisville, Louisville, Kentucky, United States of America
- Joshua Hui
- Department of Emergency Medicine, University of California Los Angeles and Olive View-UCLA Medical Center, Los Angeles, California, United States of America
- Alex Kiss
- Department of Research Design and Biostatistics, University of Toronto, Toronto, Ontario, Canada

35
Hall AK, Pickett W, Dagnone JD. Development and evaluation of a simulation-based resuscitation scenario assessment tool for emergency medicine residents. Can J Emerg Med 2015; 14:139-46. [DOI: 10.2310/8000.2012.110385]
Abstract
Objective:
We sought to develop and validate a three-station simulation-based Objective Structured Clinical Examination (OSCE) tool to assess emergency medicine resident competency in resuscitation scenarios.
Methods:
An expert panel of emergency physicians developed three scenarios for use with high-fidelity mannequins. For each scenario, a corresponding assessment tool was developed with an essential actions (EA) checklist and a global assessment score (GAS). The scenarios were (1) unstable ventricular tachycardia, (2) respiratory failure, and (3) ST elevation myocardial infarction. Emergency medicine residents were videotaped completing the OSCE, and three clinician experts independently evaluated the videotapes using the assessment tool.
Results:
Twenty-one residents completed the OSCE (nine residents in the College of Family Physicians of Canada–Emergency Medicine [CCFP-EM] program, six junior residents in the Fellow of the Royal College of Physicians of Canada–Emergency Medicine [FRCP-EM] program, six senior residents in the FRCP-EM). Interrater reliability for the EA scores was good but varied between scenarios (Spearman rho = [1] 0.68, [2] 0.81, [3] 0.41). Interrater reliability for the GAS was also good, with less variability (rho = [1] 0.64, [2] 0.56, [3] 0.62). When comparing GAS scores, senior FRCP residents outperformed CCFP-EM residents in all scenarios and junior residents in two of three scenarios (p < 0.001 to 0.01). Based on EA scores, senior FRCP residents outperformed CCFP-EM residents, but junior residents outperformed senior FRCP residents in scenario 1 and CCFP-EM residents in all scenarios (p = 0.006 to 0.04).
Conclusions:
This study outlines the creation of a high-fidelity simulation assessment tool for trainees in emergency medicine. A single-point GAS demonstrated stronger relational validity and more consistent reliability in comparison with an EA checklist. This preliminary work will provide a foundation for ongoing development of simulation-based assessment tools.
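The interrater reliabilities above are Spearman rank correlations, i.e. Pearson correlations computed on ranks (with midranks for ties). A small self-contained sketch with illustrative scores, not the study's data:

```python
def spearman_rho(a, b):
    """Spearman rank correlation between two raters' score lists."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        i = 0
        while i < len(order):
            j = i
            # extend j over a run of tied values
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1        # midrank (1-based)
            for k in range(i, j + 1):
                r[order[k]] = mid
            i = j + 1
        return r

    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5
```

Identical rank orderings give rho = 1, reversed orderings give rho = −1; values around 0.4–0.8, as reported above, indicate moderate to strong agreement.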
36

37

38
Sidi A, Gravenstein N, Lampotang S. Construct Validity and Generalizability of Simulation-Based Objective Structured Clinical Examination Scenarios. J Grad Med Educ 2014; 6:489-94. [PMID: 26279774] [PMCID: PMC4535213] [DOI: 10.4300/jgme-d-13-00356.1]
Abstract
BACKGROUND It is not known if construct-related validity (progression of scores with different levels of training) and generalizability of Objective Structured Clinical Examination (OSCE) scenarios previously used with non-US graduating anesthesiology residents translate to a US training program. OBJECTIVE We assessed for progression of scores with training for a validated high-stakes simulation-based anesthesiology examination. METHODS Fifty US anesthesiology residents in postgraduate years (PGYs) 2 to 4 were evaluated in operating room, trauma, and resuscitation scenarios developed for and used in a high-stakes Israeli Anesthesiology Board examination, requiring a score of 70% on the checklist for passing (including all critical items). RESULTS The OSCE error rate was lower for PGY-4 than PGY-2 residents in each field, and for most scenarios within each field. The critical item error rate was significantly lower for PGY-4 than PGY-3 residents in operating room scenarios, and for PGY-4 than PGY-2 residents in resuscitation scenarios. The final pass rate was significantly higher for PGY-3 and PGY-4 than PGY-2 residents in operating room scenarios, and also was significantly higher for PGY-4 than PGY-2 residents overall. PGY-4 residents had a better error rate, total scenarios score, general evaluation score, critical items error rate, and final pass rate than PGY-2 residents. CONCLUSIONS The comparable error rates, performance grades, and pass rates for US PGY-4 and non-US (Israeli) graduating (PGY-4 equivalent) residents, and the progression of scores among US residents with training level, demonstrate the construct-related validity and generalizability of these high-stakes OSCE scenarios.
39
Sidi A, Baslanti TO, Gravenstein N, Lampotang S. Simulation-based assessment to evaluate cognitive performance in an anesthesiology residency program. J Grad Med Educ 2014; 6:85-92. [PMID: 24701316] [PMCID: PMC3963801] [DOI: 10.4300/jgme-d-13-00230.1]
Abstract
BACKGROUND Problem solving in a clinical context requires knowledge and experience, and most traditional examinations for learners do not capture skills that are required in some situations where there is uncertainty about the proper course of action. OBJECTIVE We sought to evaluate anesthesiology residents for deficiencies in cognitive performance within and across 3 clinical domains (operating room, trauma, and cardiac resuscitation) using simulation-based assessment. METHODS Individual basic knowledge and cognitive performance in each simulation-based scenario were assessed in 47 residents using a 15- to 29-item scenario-specific checklist. For every scenario and item we calculated group error scenario rate (frequency) and individual (resident) item success. For all analyses, alpha was designated as 0.05. RESULTS Postgraduate year (PGY)-3 and PGY-4 residents' cognitive items error rates were higher and success rates lower compared to basic and technical performance in each domain tested (P < .05). In the trauma and resuscitation scenarios, the cognitive error rate by PGY-4 residents was fairly high (0.29-0.5) and their cognitive success rate was low (0.5-0.68). The most common cognitive errors were anchoring, availability bias, premature closure, and confirmation bias. CONCLUSIONS Simulation-based assessment can differentiate between higher-order (cognitive) and lower-order (basic and technical) skills expected of relatively experienced (PGY-3 and PGY-4) anesthesiology residents. Simulation-based assessments can also highlight areas of relative strength and weakness in a resident group, and this information can be used to guide curricular modifications to address deficiencies in tasks requiring higher-order processing and cognition.
40
Abstract
Background:
Effective teamwork is important for patient safety, and verbal communication underpins many dimensions of teamwork. The validity of the simulated environment would be supported if it elicited similar verbal communications to the real setting. The authors hypothesized that anesthesiologists would exhibit similar verbal communication patterns in routine operating room (OR) cases and routine simulated cases. The authors further hypothesized that anesthesiologists would exhibit different communication patterns in routine cases (real or simulated) and simulated cases involving a crisis.
Methods:
Key communications relevant to teamwork were coded from video recordings of anesthesiologists in the OR, routine simulation and crisis simulation and percentages were compared.
Results:
The authors recorded comparable videos of 20 anesthesiologists in the two simulations, and 17 of these anesthesiologists in the OR, generating 400 coded events in the OR, 683 in the routine simulation, and 1,419 in the crisis simulation. The authors found no significant differences in communication patterns in the OR and the routine simulations. The authors did find significant differences in communication patterns between the crisis simulation and both the OR and the routine simulations. Participants rated team communication as realistic and considered their communications occurred with a similar frequency in the simulations as in comparable cases in the OR.
Conclusion:
The similarity of teamwork-related communications elicited from anesthesiologists in simulated cases and the real setting lends support for the ecological validity of the simulation environment and its value in teamwork training. Different communication patterns and frequencies under the challenge of a crisis support the use of simulation to assess crisis management skills.
41

42
Designing and Implementing the Objective Structured Clinical Examination in Anesthesiology. Anesthesiology 2014; 120:196-203. [DOI: 10.1097/aln.0000000000000068]
Abstract
Since its description in 1974, the Objective Structured Clinical Examination (OSCE) has gained popularity as an objective assessment tool of medical students, residents, and trainees. With the development of the anesthesiology residents’ milestones and the preparation for the Next Accreditation System, there is an increased interest in OSCE as an evaluation tool of the six core competencies and the corresponding milestones proposed by the Accreditation Council for Graduate Medical Education.
In this article the authors review the history of OSCE and its current application in medical education and in different medical and surgical specialties. They also review the use of OSCE by anesthesiology programs and certification boards in the United States and internationally. In addition, they discuss the psychometrics of test design and implementation with emphasis on reliability and validity measures as they relate to OSCE.
43
Everett TC, Ng E, Power D, Marsh C, Tolchard S, Shadrina A, Bould MD. The Managing Emergencies in Paediatric Anaesthesia global rating scale is a reliable tool for simulation-based assessment in pediatric anesthesia crisis management. Paediatr Anaesth 2013; 23:1117-23. [PMID: 23800112] [DOI: 10.1111/pan.12212]
Abstract
INTRODUCTION The use of simulation-based assessments for high-stakes physician examinations remains controversial. The Managing Emergencies in Paediatric Anaesthesia course uses simulation to teach evidence-based management of anesthesia crises to trainee anesthetists in the United Kingdom (UK) and Canada. In this study, we investigated the feasibility and reliability of custom-designed scenario-specific performance checklists and a global rating scale (GRS) assessing readiness for independent practice. METHODS After research ethics board approval, subjects were videoed managing simulated pediatric anesthesia crises in a single Canadian teaching hospital. Each subject was randomized to two of six different scenarios. All 60 scenarios were subsequently rated by four blinded raters (two in the UK, two in Canada) using the checklists and the GRS. The actual and predicted reliability of the tools was calculated for different numbers of raters using the intraclass correlation coefficient (ICC) and the Spearman-Brown prophecy formula. RESULTS Average-measures ICCs ranged from 'substantial' to 'near perfect' (P ≤ 0.001). The reliability of the checklists and the GRS was similar. Single-measures ICCs showed more variability than average-measures ICCs. At least two raters would be required to achieve acceptable reliability. CONCLUSIONS We have established the reliability of a GRS for assessing the management of simulated crisis scenarios in pediatric anesthesia, and this tool is feasible within the setting of a research study. The GRS allows raters to make a judgment about a participant's readiness for independent practice. These tools may be used in future research examining simulation-based assessment.
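The Spearman-Brown prophecy formula used above predicts the reliability of the mean of k raters from a single-rater reliability r: r_k = k·r / (1 + (k−1)·r). A minimal sketch with illustrative values, not the study's ICCs:

```python
def spearman_brown(single_rater_icc, k):
    """Predicted reliability of the mean of k raters (prophecy formula)."""
    r = single_rater_icc
    return k * r / (1 + (k - 1) * r)

def raters_needed(single_rater_icc, target=0.8):
    """Smallest k whose averaged rating reaches the target reliability."""
    if single_rater_icc <= 0:
        raise ValueError("single-rater reliability must be positive")
    k = 1
    while spearman_brown(single_rater_icc, k) < target:
        k += 1
    return k
```

For example, a hypothetical single-rater reliability of 0.6 would predict 0.75 with two raters and about 0.82 with three; this is the kind of calculation behind the paper's conclusion that at least two raters are required.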
Affiliation(s)
- Tobias C Everett
- Department of Anesthesiology and Pain Medicine, The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada

44

45
González AM. [The assessment of clinical skills as a problem]. Revista Espanola de Anestesiologia y Reanimacion 2013; 60:292-293. [PMID: 23582184] [DOI: 10.1016/j.redar.2012.11.019]
46
de Montbrun S, MacRae H. Simulation and Minimally Invasive Colorectal Surgery. Seminars in Colon and Rectal Surgery 2013. [DOI: 10.1053/j.scrs.2012.10.013]
47
Berkenstadt H, Ben-Menachem E, Dach R, Ezri T, Ziv A, Rubin O, Keidan I. Deficits in the Provision of Cardiopulmonary Resuscitation During Simulated Obstetric Crises. Anesth Analg 2012; 115:1122-6. [DOI: 10.1213/ane.0b013e3182691977]
48

49
Abstract
INTRODUCTION Remote-facilitated simulation-based learning was developed to deliver team training with low-cost, preexisting, easy-access resources and thereby disseminate training despite a limited number of faculty. This study was performed to examine the technical feasibility of the approach and to describe its characteristics compared with an on-site simulation system. METHODS We performed 2 pilot remote-facilitated sessions, followed by 3 additional sessions in which 16 participants and 2 facilitators assessed the system using posttraining surveys with items rated on a 5-point Likert scale. All sessions consisted of briefing, simulation scenarios, and debriefing. RESULTS Eighty-seven percent of the participants rated the remote system at least as effective as the on-site system. All participants rated the sound quality of the remote system at least as good as that of the on-site system and indicated that they could understand what the facilitator said at least as well as on-site. Fourteen of 16 participants would like to receive simulation training through remote facilitation. Facilitators reported that the operability of the remote system was the same as that of the on-site simulation system. CONCLUSIONS Remote-facilitated simulation-based learning is technically feasible with low-cost, preexisting, and easy-access resources. Learners rated the system as equally effective as the on-site system, and facilitators indicated that its operability was adequate.
50
The effect of simulation in improving students' performance in laparoscopic surgery: a meta-analysis. Surg Endosc 2012; 26:3215-24. [DOI: 10.1007/s00464-012-2327-z]