1. Ogut E, Yildirim FB, Senol Y, Senol AU. Comprehensive evaluation of the educational impact and effectiveness of specialized study modules in cross-sectional anatomy: a study on student engagement and learning outcomes. BMC Med Educ 2025; 25:514. PMID: 40211255; PMCID: PMC11987277; DOI: 10.1186/s12909-025-07050-9.
Abstract
BACKGROUND This study evaluated the effectiveness of Special Study Modules (SSMs) in Cross-Sectional Anatomy. These modules offer students the opportunity to develop their learning skills and to pursue specific academic interests. The study assessed the satisfaction levels and learning outcomes of students who participated in the Cross-Sectional Anatomy SSMs, as determined by their feedback. METHODS Data for this descriptive study were collected from student feedback at the beginning and end of the SSMs. A total of 100 undergraduate medical students provided feedback on the modules between 2018 and 2022. The student survey consisted of 11 questions, and feedback was obtained using an open-ended questionnaire. RESULTS Seventy-four percent of students emphasized the importance of these classes (p = 0.004). Teamwork was also significantly valued by 9% of students (p = 0.025). While 52% of students appreciated the module for presentation skills and clinical learning, the difference was not statistically significant. The module's impact on career choice and communication with faculty was noted by 13% of the students (p = 0.057). CONCLUSIONS Students found the cross-sectional anatomy SSMs valuable: the modules enhanced their ability to identify anatomical structures in cross-sectional images and to distinguish sections from different levels and regions, and promoted greater proficiency in imaging techniques. Overall, these modules were effective in key educational domains, particularly in facilitating the integration of knowledge and fostering teamwork among participants.
Affiliation(s)
- Eren Ogut
- Department of Anatomy, Faculty of Medicine, Istanbul Medeniyet University, Istanbul, Türkiye.
- Yesim Senol
- Department of Medical Education, Faculty of Medicine, Akdeniz University, Antalya, Türkiye.
- A Utku Senol
- Department of Radiology, Faculty of Medicine, Akdeniz University, Antalya, Türkiye.
2. Carrillo-Avalos BA, Leenen I, Trejo-Mejía JA, Sánchez-Mendiola M. Bridging Validity Frameworks in Assessment: Beyond Traditional Approaches in Health Professions Education. Teach Learn Med 2025; 37:229-238. PMID: 38108266; DOI: 10.1080/10401334.2023.2293871.
Abstract
Construct: High-stakes assessments measure several constructs, such as knowledge, competencies, and skills. In this case, validity evidence for test scores' uses and interpretations is of utmost importance because of the consequences for everyone involved in their development and implementation. Background: Educational assessment requires an appropriate understanding and use of validity frameworks; however, health professions educators still struggle with the conceptual challenges of validity, and validity analyses frequently have a narrow focus. Important obstacles are the plurality of validity frameworks and the difficulty of grounding these abstract concepts in practice. Approach: We reviewed the validity frameworks literature to identify the main elements of frequently used models (Messick's and Kane's) and proposed linking frameworks, including Russell's recent overarching proposal. Examples are provided with commonly used assessment instruments in health professions education. Findings: Several elements in these frameworks can be integrated into a common approach, matching and aligning Messick's sources of validity with Kane's four inference types. Conclusions: This proposal to contribute evidence for assessment inferences may provide guidance to understanding the use of validity evidence in applied settings. The evolving field of validity research provides opportunities for its integration and practical use in health professions education.
Affiliation(s)
- Iwin Leenen
- Faculty of Psychology, National Autonomous University of Mexico (UNAM), Mexico City, Mexico
- Melchor Sánchez-Mendiola
- Faculty of Medicine, UNAM, Mexico City, Mexico
- Educational Innovation and Distance Education, UNAM, Coordination of Open University, Mexico City, Mexico
3. Tappan RS, Roth HR, McGaghie WC. Using Simulation-Based Mastery Learning to Achieve Excellent Learning Outcomes in Physical Therapist Education. J Phys Ther Educ 2025; 39:40-48. PMID: 38954765; DOI: 10.1097/jte.0000000000000358.
Abstract
INTRODUCTION The 2 aims of this observational study are (a) to describe the implementation and feasibility of a bed mobility skills simulation-based mastery learning (SBML) curricular module for physical therapist students and (b) to measure learning outcomes and student perceptions of this module. REVIEW OF LITERATURE Simulation-based mastery learning is an outcome-based educational approach that has been successful in other health professions but has not been explored in physical therapy education. SUBJECTS Eighty-seven students in a single cohort of a Doctor of Physical Therapy program. METHODS The SBML module in this pretest-posttest study included a pretest, instruction, initial posttest, and additional rounds of instruction and assessment as needed for all learners to achieve the minimum passing standard (MPS) set using the Mastery Angoff and Patient Safety methods. Outcome measures were bed mobility assessment pass rates and scores, additional student and faculty time compared with a traditional approach, and student perceptions of their self-confidence and the module. RESULTS All students achieved the MPS after 3 rounds of training and assessment beyond the initial posttest. Mean Total Scores improved from 67.6% (12.9%) at pretest to 91.4% (4.8%) at mastery posttest (P < .001, Cohen's d = 1.8, 95% CI [1.4-2.1]); mean Safety Scores improved from 75.2% (16.0%) at pretest to 100.0% (0.0%) at mastery posttest (P < .001, Cohen's d = 1.5, 95% CI [1.2-1.9]). Students who did not achieve the MPS at the initial posttest (n = 30) required a mean of 1.2 hours for additional instruction and assessment. Survey results revealed an increase in student confidence (P < .001) and positive student perceptions of the module. DISCUSSION AND CONCLUSION Implementation of this SBML module was feasible and resulted in uniformly high levels of bed mobility skill acquisition. Based on rigorous learning outcomes, feasible requirements for implementation, and increased student confidence, SBML offers a promising approach for wider implementation in physical therapy education.
Affiliation(s)
- Rachel S Tappan
- Rachel S. Tappan is a board-certified clinical specialist in neurologic physical therapy and an associate professor in the Department of Physical Therapy & Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois. Please address all correspondence to Rachel S. Tappan.
- Heidi R Roth
- Heidi R. Roth is a board-certified clinical specialist in neurologic physical therapy and an assistant professor in the Department of Physical Therapy & Human Movement Sciences, Feinberg School of Medicine, Northwestern University.
- William C McGaghie
- William C. McGaghie is a professor in the Department of Medical Education and the Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University.
4. Klein MR, Loke DE, Barsuk JH, Adler MD, McGaghie WC, Salzman DH. Twelve tips for developing simulation-based mastery learning clinical skills checklists. Med Teach 2025; 47:212-217. PMID: 38670308; DOI: 10.1080/0142159X.2024.2345270.
Abstract
Simulation-based mastery learning is a powerful educational paradigm that leads to high levels of performance through a combination of strict standards, deliberate practice, formative feedback, and rigorous assessment. Successful mastery learning curricula often require well-designed checklists that produce reliable data that contribute to valid decisions. The following twelve tips are intended to help educators create defensible and effective clinical skills checklists for use in mastery learning curricula. These tips focus on defining the scope of a checklist using established principles of curriculum development, crafting the checklist based on a literature review and expert input, revising and testing the checklist, and recruiting judges to set a minimum passing standard. Although this article focuses on mastery learning, the general principles discussed apply, with the exception of the tips related to standard setting, to the development of any clinical skills checklist.
Affiliation(s)
- Matthew R Klein
- Department of Emergency Medicine, Brown University Warren Alpert Medical School, Providence, Rhode Island, USA
- Dana E Loke
- Department of Emergency Medicine, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Jeffrey H Barsuk
- Department of Medicine (Hospital Medicine) and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Mark D Adler
- Department of Pediatrics (Emergency Medicine) and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- William C McGaghie
- Department of Medical Education and Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- David H Salzman
- Department of Emergency Medicine and Department of Medical Education, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
5. von Buchwald JH, Frendø M, Frithioff A, Britze A, Frederiksen TW, Melchiors J, Andersen SAW. Gathering Validity Evidence for a Simulation-Based Test of Otoscopy Skills. Ann Otol Rhinol Laryngol 2025; 134:70-78. PMID: 39417404; DOI: 10.1177/00034894241288434.
Abstract
OBJECTIVE Otoscopy is a key clinical examination used by multiple healthcare providers, but training and testing of otoscopy skills remain largely uninvestigated. Simulator-based assessment of otoscopy skills exists, but evidence on its validity is scarce. In this study, we explored automated assessment and performance metrics of an otoscopy simulator through collection of validity evidence according to Messick's framework. METHODS Novices and experienced otoscopists completed a test program on the Earsi otoscopy simulator. Automated assessment of diagnostic ability and performance was compared with manual ratings of technical skills. Reliability of assessment was evaluated using generalizability theory. Linear mixed models and correlation analysis were used to compare automated and manual assessments. Finally, we used the contrasting groups method to define a pass/fail level for the automated score. RESULTS A total of 12 novices and 12 experienced otoscopists completed the study. We found an overall G-coefficient of .69 for automated assessment. The experienced otoscopists achieved a significantly higher mean automated score than the novices (59.9% (95% CI [57.3%-62.6%]) vs. 44.6% (95% CI [41.9%-47.2%]), P < .001). For the manual assessment of technical skills, there was no significant difference, nor did the automated score correlate with the manually rated score (Pearson's r = .20, P = .601). We established a pass/fail standard for the simulator's automated score of 49.3%. CONCLUSION We explored validity evidence supporting an otoscopy simulator's automated score, demonstrating that this score mainly reflects cognitive skills. Manual assessment therefore still seems necessary at this point, and external video-recording is needed for valid assessment. To improve reliability, the test course should include more cases to achieve a higher G-coefficient, and a higher pass/fail standard should be used.
Affiliation(s)
- Josefine Hastrup von Buchwald
- Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for HR & Education, The Capital Region of Denmark, Copenhagen, Denmark
- Martin Frendø
- Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for HR & Education, The Capital Region of Denmark, Copenhagen, Denmark
- Andreas Frithioff
- Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for HR & Education, The Capital Region of Denmark, Copenhagen, Denmark
- Anders Britze
- Department of Otorhinolaryngology-Head & Neck Surgery, Aarhus University Hospital, Aarhus, Denmark
- Jacob Melchiors
- Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for HR & Education, The Capital Region of Denmark, Copenhagen, Denmark
- Steven Arild Wuyts Andersen
- Department of Otorhinolaryngology, Head & Neck Surgery & Audiology, Rigshospitalet, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for HR & Education, The Capital Region of Denmark, Copenhagen, Denmark
6. Selim O, Dueck AD, Kulasegaram KM, Brydges R, Walsh CM, Okrainec A. Validity of the Diabetic Wound Assessment Learning Tool. Clin Teach 2025; 22:e70025. PMID: 39805632; DOI: 10.1111/tct.70025.
Abstract
PURPOSE The development of the Diabetic Wound Assessment Learning Tool (DiWALT) has previously been described. However, an examination of its application to a larger, more heterogeneous group of participants is lacking. To allow a more robust assessment of the psychometric properties of the DiWALT, we applied it to a broader group of participants. MATERIALS AND METHODS We built validity evidence for the tool by assessing 74 clinician participants during two simulated wound care scenarios. Two assessors independently rated each participant using our tool, with a total of five raters providing scores. We evaluated validity evidence using generalizability theory analyses and by comparing performance scores across the three experience levels using ANOVA. RESULTS The tool differentiated well between novices and the other two groups (p < 0.01) but not between intermediates and experts (p = 0.34). Our generalizability coefficient was 0.87, and our phi coefficient was 0.87. CONCLUSION The accumulated validity evidence suggests our tool can be used to assess novice clinicians' competence in initial diabetic wound management during simulated cases. Further work is required to clarify the DiWALT's performance in a broader universe of generalisation and to examine evidence for its extrapolation and implications inferences.
Affiliation(s)
- Omar Selim
- Division of Vascular and Endovascular Surgery, Department of Surgery, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Andrew D Dueck
- Division of Vascular Surgery, Schulich Heart Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Kulamakan M Kulasegaram
- Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada
- Ryan Brydges
- Allan Waters Family Simulation Centre, St. Michael's Hospital, Toronto, Ontario, Canada
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Catharine M Walsh
- Division of Gastroenterology, Hepatology and Nutrition, Hospital for Sick Children, Toronto, Ontario, Canada
- The Wilson Centre for Research in Education, University Health Network, Toronto, Ontario, Canada
- Allan Okrainec
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Temerty-Chang Telesimulation Centre, University of Toronto, Toronto, Ontario, Canada
- Division of General Surgery, University Health Network, Toronto, Ontario, Canada
7. Bell S. Simulation-based assessment in the context of paramedic education: A scoping review. Clin Teach 2025; 22:e13834. PMID: 39500738; DOI: 10.1111/tct.13834.
Abstract
OBJECTIVE Simulation is a widespread modality in the field of medical education. Within the paramedic sphere, simulation is valuable in providing exposure to high-acuity, low-occurrence incidents encountered rarely in practice, affording unique educational opportunities. Recognising this importance, this scoping review seeks to establish the contemporaneous evidence base for the use of simulation-based assessment in the context of paramedic education, systematically map the research done in this area, and consider the implications for future educational programmes. METHODS A scoping review of peer- and non-peer-reviewed literature was conducted across a broad range of medical literature databases for both published and grey material, utilising previously published search filters for the paramedic field. The review was conducted in alignment with the PRISMA Extension for Scoping Reviews checklist. Studies were selected based on relevance to the research question. RESULTS Twenty-four unique papers were identified and filtered via the application of inclusion and exclusion criteria to five included papers. The application of forward snowballing methodology revealed three additional papers included for appraisal. Thematic analysis of the eight papers revealed the domains of assessment acceptability and assessment validity as key considerations for the design and use of simulation-based assessment in the field. CONCLUSION Simulation-based assessment has a role in paramedic education; additional research is necessary to empirically establish the validity and reliability of the modality in the field.
Affiliation(s)
- Steve Bell
- North West Ambulance Service NHS Trust, Bolton, UK
8. Carstensen SMD, Just SA, Pfeiffer-Jensen M, Østergaard M, Konge L, Terslev L. Development and validation of a new tool for assessment of trainees' interventional musculoskeletal ultrasound skills. Rheumatology (Oxford) 2025; 64:484-492. PMID: 38273715; DOI: 10.1093/rheumatology/keae050.
Abstract
OBJECTIVES Interventional musculoskeletal ultrasound (MSUS) procedures are routinely performed in rheumatology practice. However, the efficacy and safety of the procedures rely on the competence of the physician, and assessment of skills is crucial. Thus, this study aimed to develop and establish validity evidence for a tool assessing trainees' interventional MSUS skills. METHODS An expert panel of rheumatologists modified an existing tool for assessing competences in invasive abdominal and thoracic ultrasound procedures. The new tool (the Assessment of Interventional Musculoskeletal Ultrasound Skills [AIMUS] tool) reflects the essential steps in interventional MSUS. To establish validity evidence, physicians with different levels of interventional MSUS experience were enrolled and performed two procedures on a rubber phantom, simulating real patient cases. All performances were video-recorded, anonymized, and assessed in random order by two blinded raters using the AIMUS tool. RESULTS A total of 65 physicians from 21 different countries were included and categorized into groups based on their experience, resulting in 130 videos for analysis. The internal consistency of the tool was excellent, with a Cronbach's α of 0.96. The inter-case reliability was good, with a Pearson's correlation coefficient (PCC) of 0.74, and the inter-rater reliability was moderate to good (PCC 0.58). The ability to discriminate between different levels of experience was highly significant (P < 0.001). CONCLUSION We have developed and established validity evidence for a new interventional MSUS assessment tool. The tool can be applied in future competency-based educational programmes, provide structured feedback to trainees in daily clinical practice, and ensure end-of-training competence. TRIAL REGISTRATION ClinicalTrials.gov, http://clinicaltrials.gov, NCT05303974.
Affiliation(s)
- Stine Maya Dreier Carstensen
- Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
- Søren Andreas Just
- Section of Rheumatology, Department of Medicine, Svendborg Hospital-Odense University Hospital, Svendborg, Denmark
- Mogens Pfeiffer-Jensen
- Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark
- Mikkel Østergaard
- Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
- Lars Konge
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
- Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Copenhagen, Denmark
- Lene Terslev
- Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
9. Cook DA, Durning SJ, Stephenson CR, Gruppen LD, Lineberry M. Assessment of management reasoning: Design considerations drawn from analysis of simulated outpatient encounters. Med Teach 2025; 47:218-232. PMID: 38627020; DOI: 10.1080/0142159X.2024.2337251.
Abstract
PURPOSE Management reasoning is a distinct subset of clinical reasoning. We sought to explore features to be considered when designing assessments of management reasoning. METHODS This is a hybrid empirical research study, narrative review, and expert perspective. In 2021, we reviewed and discussed 10 videos of simulated (staged) physician-patient encounters, actively seeking actions that offered insights into assessment of management reasoning. We analyzed our own observations in conjunction with literature on clinical reasoning assessment, using a constant comparative qualitative approach. RESULTS Distinguishing features of management reasoning that will influence its assessment include management scripts, shared decision-making, process knowledge, illness-specific knowledge, and tailoring of the encounter and management plan. Performance domains that merit special consideration include communication, integration of patient preferences, adherence to the management script, and prognostication. Additional facets of encounter variation include the clinical problem, clinical and nonclinical patient characteristics (including preferences, values, and resources), team/system characteristics, and encounter features. We cataloged several relevant assessment approaches including written/computer-based, simulation-based, and workplace-based modalities, and a variety of novel response formats. CONCLUSIONS Assessment of management reasoning could be improved with attention to the performance domains, facets of variation, and variety of approaches herein identified.
Affiliation(s)
- David A Cook
- Office of Applied Scholarship and Education Science, Mayo Clinic College of Medicine and Science, Rochester, MN, USA
- Division of General Internal Medicine, Mayo Clinic, Rochester, MN, USA
- Steven J Durning
- Center for Health Professions Education, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Larry D Gruppen
- Department of Learning Health Sciences, University of Michigan, Ann Arbor, MI, USA
- Matt Lineberry
- University of Kansas Medical Center and Health System, Kansas City, KS, USA
10. Cobos M. Dataset of video game-based assessments in digital culture courses at Indoamerica University. Data Brief 2025; 58:111217. PMID: 39802839; PMCID: PMC11719281; DOI: 10.1016/j.dib.2024.111217.
Abstract
This dataset contains evaluation results from video game-based assessments administered to first-level university students across six different academic programs at Universidad Indoamérica from October 2022 to August 2024. The data were collected using an adapted version of Pacman through the ClassTools.net platform, where traditional quiz questions were integrated into gameplay mechanics. The dataset comprises 1418 assessment attempts from students in Law, Medicine, Psychology, Clinical Psychology, Architecture, and Nursing programs, documenting their performance in digital culture and computing courses. Each record includes attempt number, timestamp, student identifier, gender, academic period, section, career program, and score achieved. The dataset enables analysis of student performance patterns, learning progression through multiple attempts, and comparative studies across different academic programs and periods. This information can support research in educational gamification, assessment design, and digital learning strategies in higher education.
Affiliation(s)
- Miguel Cobos
- Facultad de Educación, Universidad Indoamérica, Quito, Ecuador
11. Goldman MP, Slade MD, Gielissen K, Hirsch AW, Prabhu EA, Dunne DW, Auerbach MA. Procedural Entrustment Alignment Between Pediatric Residents and Their Preceptors in the Pediatric Emergency Department. Pediatr Emerg Care 2025:00006565-990000000-00583. PMID: 39841101; DOI: 10.1097/pec.0000000000003330.
Abstract
OBJECTIVE Entrustment describes the balance of supervision and autonomy between resident and preceptor to complete doctoring tasks like procedures. Entrustment alignment between resident and preceptor facilitates safe, successful outcomes and promotes learning. The study objectives were to describe procedural entrustment alignment between senior pediatric residents and their preceptors and to report the impact of a simulation-based formative assessment (SFA) on entrustment alignment. METHODS This prospective observational study enrolled a convenience sample of senior pediatric residents in 2023. The SFA was videoed and consisted of obtaining informed consent and performing simulated procedures (laceration [LAC] and lumbar puncture [LP]). Residents self-assessed their entrustability pre/post-SFA. A pediatric emergency medicine (PEM) preceptor panel individually rated videos of the residents. The PEM panel's scores were compared with residents' scores on both an 8-point scale and the dichotomized variable of needing "in versus out" of the room entrustment. RESULTS Twenty-four residents' SFAs were rated by 9 panelists. Before the SFA, entrustment alignments on the 8-point scale were as follows: resident LAC 4.08 vs PEM panel 4.97 (P < 0.001), and resident LP 4.75 vs PEM panel 5.31 (P = 0.15). After the SFA, entrustment alignments were as follows: resident LAC 5.21 vs PEM panel 4.97 (P = 0.32), and resident LP 5.54 vs PEM panel 5.31 (P = 0.52). The dichotomized analyses revealed improved alignment post-SFA: LAC kappa improved from 0.03 pre-SFA to 0.46 post-SFA, and LP kappa from -0.03 pre-SFA to 0.24 post-SFA. CONCLUSIONS Our findings indicate senior pediatric residents desire less entrustment (more supervision) for procedures but align better with preceptors after an SFA. This work offers insight into procedural entrustment decision making and the potential of SFAs to facilitate procedural learning.
Affiliation(s)
- Martin D Slade
- Department of Internal Medicine, Section of General Medicine, Yale University School of Medicine
- Elizabeth A Prabhu
- Department of Emergency Medicine, Columbia University Irving Medical Center, New York, NY
12. Cin MD, Nourmohammadi Z, Hamdan U, Zopf DA. Design, Development, and Evaluation of a 3D-Printed Buccal Myomucosal Flap Simulator. Cleft Palate Craniofac J 2025:10556656241311044. PMID: 39782928; DOI: 10.1177/10556656241311044.
Abstract
OBJECTIVE Buccal myomucosal flap procedures have become a critical tool in the armamentarium of the cleft surgeon. Mastering this technique is complex, and providing sufficient training opportunities presents significant challenges. Our study details the design, development, and evaluation of a low-cost, high-fidelity buccal myomucosal flap surgical simulator. Our goal is to establish a reliable teaching tool for early learners, validated through craniofacial surgeon assessment. DESIGN The simulator comprises an anatomical model and a stand created using computer-aided design software. Hard tissues were 3D-printed, while soft tissues were cast in silicone. The model underwent review by craniofacial surgeons utilizing a 1 to 5 Likert scale across six evaluation domains. SETTING In-person simulated dissection session. PATIENTS/PARTICIPANTS Sixteen craniofacial surgery providers from various subspecialties. INTERVENTIONS None. MAIN OUTCOME MEASURE Anonymized survey responses. RESULTS The simulator received fair to high scores across all evaluation domains, notably 4.31 as a training tool, 3.77 as a competency evaluation tool, 3.92 as a rehearsal tool, and 3.93 in relevance to practice. CONCLUSIONS The validated buccal myomucosal flap simulator theoretically enables the acquisition of surgical skills in a zero-risk simulated environment. Plans involve integration into a structured curriculum with diverse participants. Continued iteration and adoption hold the promise of significantly enhancing access to training for competency in cleft and craniofacial procedures.
Affiliation(s)
- Mitchell D Cin
- Medical School, Central Michigan University, Mount Pleasant, MI, USA
- Zahra Nourmohammadi
- Department of Otolaryngology-Head and Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- David A Zopf
- Department of Otolaryngology-Head and Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
13. Lide RSC, Moquin R, Green E. Constructing a Validity Argument and Exploring Implications for the American Board of Anesthesiology's Basic Examination. J Educ Perioper Med 2025; 27:E738. PMID: 40207075; PMCID: PMC11978226; DOI: 10.46374/volxxvii_issue1_lide.
Abstract
Background In 2014, The American Board of Anesthesiology introduced the Basic Examination as a graduation requirement for second-year anesthesiology trainees. The exam's validity has been supported by evidence demonstrating enhanced performance on other standardized exams; however, an assessment's validity is inseparable from decisions made on its behalf. This study aimed to understand the usage and implications of the Basic Exam within training programs to construct a comprehensive validity argument. Methods Semistructured interviews were conducted with a sample of 20 program directors from Accreditation Council for Graduate Medical Education-accredited anesthesiology training programs. Thematic analysis was performed by a 3-member team. Results A 56-item codebook was developed and applied to the 20 transcripts, yielding 1941 coded segments organized into 7 themes. Theme 1 highlights varied programmatic policies, including dismissal (1a). Theme 2 addresses the perceived purposes of the exam: as a tool to "weed out" residents unlikely to achieve board certification (2a), a data point supporting remediation (2b), and a distinguishing accomplishment of physician anesthesiologists (2c). Theme 3 captures programmatic implications for recruitment (3a), operations (3b), and curricula (3c). Theme 4 confirms that residents are studying for the exam, emphasizing targeted test preparation (4a). Theme 5 discusses resident implications, including stress (5a) and clinical distraction (5b). Themes 6 and 7 explore the implications of failure and equity concerns, respectively. Conclusions This study identifies a significantly underdeveloped validity argument supporting dismissal based on Basic Exam results and explores implications to guide future validation efforts.
Affiliation(s)
- Riley S. Carpenter Lide
- Riley S. Carpenter Lide is an Associate Professor and Residency Program Director in the Department of Anesthesiology, University of Arkansas for Medical Sciences, Little Rock, AR.
- Rachel Moquin
- Rachel Moquin is an Associate Professor and Associate Vice Chair for Faculty and Educator Development, Department of Anesthesiology, Washington University School of Medicine, St. Louis, MO.
- Erin Green
- Erin Green is an Assistant Professor of Emergency Medicine and Assistant Dean for Clinical Learning, Medical College of Wisconsin–Green Bay, Green Bay, WI.
14. Abelleyra Lastoria DA, Rehman S, Ahmed F, Jasionowska S, Salibi A, Cavale N, Dasgupta P, Aydin A. A Systematic Review of Simulation-Based Training Tools in Plastic Surgery. J Surg Educ 2025; 82:103320. PMID: 39615161; DOI: 10.1016/j.jsurg.2024.103320.
Abstract
OBJECTIVES The recent shift from traditional surgical teaching to the incorporation of simulation training in plastic surgery has resulted in the development of a variety of simulation models and tools. We aimed to assess the validity and establish the effectiveness of all currently available simulators and tools for plastic surgery. DESIGN Systematic review. METHODS The PRISMA 2020 checklist was followed. The review protocol was prospectively registered in PROSPERO (CRD42021231546). Published and unpublished literature databases were searched to the 29th of October 2023. Each model was appraised in accordance with the Messick validity framework, and a rating was given for each section. To determine the effectiveness of each model, the McGaghie model of translational outcomes was used. RESULTS Screening of 1794 articles identified 116 that discussed the validity and effectiveness of simulation models in plastic surgery. These covered hand surgery (6 studies), breast surgery (12 studies), facial surgery (25 studies), cleft lip and palate surgery (29 studies), rhinoplasty (4 studies), hair transplant surgery (1 study), surgery for burns (10 studies), and general skills in plastic surgery (29 studies). Only 1 model achieved an effectiveness level > 3, and no model had a rating > 2 in all aspects of the Messick validity framework. CONCLUSION There are limited models enabling the transfer of skills to clinical practice. No models achieved reductions in surgical complications or costs. More validity studies must be conducted using updated validity frameworks, with an increased emphasis on the applicability of these simulators to improving patient outcomes and surgical technique. More training tools evaluating both technical and non-technical surgical skills are recommended.
Affiliation(s)
- Sehrish Rehman
- GKT School of Medical Education, King's College London, London, United Kingdom
- Farah Ahmed
- St George's, University of London, London, United Kingdom
- Sara Jasionowska
- Imperial Vascular Unit, Imperial College Healthcare NHS Trust, London, United Kingdom
- Andrej Salibi
- Department of Plastic Surgery, Sandwell and West Birmingham NHS Trust, Birmingham, United Kingdom
- Naveen Cavale
- Departments of Plastic Surgery, King's College Hospital NHS Foundation Trust and Guy's & St Thomas' NHS Foundation Trust, London, United Kingdom
- Prokar Dasgupta
- MRC Centre for Transplantation, Guy's Hospital, King's College London, London, United Kingdom
- Abdullatif Aydin
- MRC Centre for Transplantation, Guy's Hospital, King's College London, London, United Kingdom.
15. De Mol L, Van Herzeele I, Van de Voorde P, Vanommeslaeghe H, Konge L, Desender L, Willaert W. Measuring Residents' Competence in Chest Tube Insertion on Thiel-Embalmed Bodies: A Validity Study. Simul Healthc 2024:01266021-990000000-00166. PMID: 39787542; DOI: 10.1097/sih.0000000000000842.
Abstract
INTRODUCTION Chest tube insertions (CTIs) have a high complication rate, prompting the training of technical skills in simulated settings. However, assessment tools require validity evidence prior to their implementation. This study aimed to collect validity evidence for assessment of technical skills in CTI on Thiel-embalmed human bodies. METHODS Invitations were sent to residents and staff from the departments of surgery, pulmonology, and emergency medicine. Participants were familiarized with the Thiel body and the supplied equipment. Standardized clinical context and instructions were provided. All participants performed 2 CTIs and were assessed with the Assessment for Competence in Chest Tube InsertiON (ACTION) tool, consisting of a 17-item rating scale and a 16-item error checklist. Live and post hoc video-based assessments by 2 raters were performed. Generalizability analysis was performed to evaluate reliability. Mean scores and errors were compared using a mixed-model repeated measures analysis of variance (ANOVA). A pass/fail score was determined using the contrasting groups method. RESULTS Ten novices and 8 experienced participants completed the study. The generalizability coefficients were moderate for the rating scale (0.75) and low for the error checklist (0.4). Novices scored lower on the rating scale (44.0 ± 6.7/68 vs 50.8 ± 5.7/68, P = 0.024) but did not commit significantly more errors (1.6 ± 1.1/16 vs 1.0 ± 0.6/16, P = 0.066). A pass/fail score of 47/68 was established. CONCLUSION The rating scale of the Assessment for Competence in Chest Tube InsertiON tool has a robust validity argument for use on Thiel-embalmed bodies, allowing it to be used in simulation-based mastery learning curricula. In contrast, its error checklist has insufficient reliability and validity to be used for summative assessment.
Affiliation(s)
- Leander De Mol
- From the Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium (L.D.M., I.V.H., L.D., W.W.); Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium (I.V.H., L.D.); Department of Basic and Applied Medical Sciences, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium (P.V.d.V.); Department of Emergency Medicine, Ghent University Hospital, Ghent, Belgium (P.V.d.V.); Department of Gastrointestinal Surgery, Ghent University Hospital, Ghent, Belgium (H.V., W.W.); Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark (L.K.); and Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen, Denmark (L.K.)
16. Hewett Brumberg EK, Douma MJ, Alibertis K, Charlton NP, Goldman MP, Harper-Kirksey K, Hawkins SC, Hoover AV, Kule A, Leichtle S, McClure SF, Wang GS, Whelchel M, White L, Lavonas EJ. 2024 American Heart Association and American Red Cross Guidelines for First Aid. Circulation 2024; 150:e519-e579. PMID: 39540278; DOI: 10.1161/cir.0000000000001281.
Abstract
Codeveloped by the American Heart Association and the American Red Cross, these guidelines represent the first comprehensive update of first aid treatment recommendations since 2010. Incorporating the results of structured evidence reviews from the International Liaison Committee on Resuscitation, these guidelines cover first aid treatment for critical and common medical, traumatic, environmental, and toxicological conditions. This update emphasizes the continuous evolution of evidence evaluation and the necessity of adapting educational strategies to local needs and diverse community demographics. Existing guidelines remain relevant unless specifically updated in this publication. Key topics that are new, are substantially revised, or have significant new literature include opioid overdose, bleeding control, open chest wounds, spinal motion restriction, hypothermia, frostbite, presyncope, anaphylaxis, snakebite, oxygen administration, and the use of pulse oximetry in first aid, with the inclusion of pediatric-specific guidance as warranted.
17. Chatelain LS, Ferrero E, Guigui P, Garreau de Loubresse C, Benhamou D, Blanié A. Development and validity evidence of an interactive 3D model for thoracic and lumbar spinal fractures pedagogy: a first step of validity study. Orthop Traumatol Surg Res 2024:104084. PMID: 39653143; DOI: 10.1016/j.otsr.2024.104084.
Abstract
BACKGROUND Thoracic and lumbar spinal fractures are common in trauma care, requiring accurate classification to guide appropriate treatment. While traditional teaching methods use static 2D images, there is a growing need for interactive tools to improve understanding. This study addresses the lack of interactive three-dimensional (3D) models for teaching the AO (Arbeitsgemeinschaft für Osteosynthesefragen) Spine classification for thoracic and lumbar fractures. HYPOTHESIS A free and open-access interactive 3D model of thoracic and lumbar spinal fractures was developed, and the study aimed to provide preliminary validity evidence. We hypothesized that this model would be a valid educational tool for teaching the AO Spine classification, receiving high scores from senior spine surgeons on a validation questionnaire regarding anatomical realism and pedagogical value. The primary endpoint was the percentage of surgeons rating the model ≥8/10 on the Likert scale for content validation; we hypothesized that the 3D model would be validated by at least 75% of participating senior spine surgeons (rating ≥8/10) for anatomical realism and pedagogical value. METHODS The 3D model was created using the Blender® software, incorporating CT-scan images of a lumbar spine. The AO Spine classification was used to recreate animations of spinal fractures. The model could be used on any computer or smartphone, directly online. A total of 24 senior spine surgeons (5 professors, 6 fellows, 8 hospital practitioners, and 5 private practitioners) evaluated the 3D model using a structured questionnaire with seven Likert-scale items, assessing anatomical realism, fracture representation, adherence to the AO Spine classification, pedagogical value, and ease of use. A score of ≥8/10 was considered a positive validation. Group comparisons were made based on hospital activity and age. RESULTS The 3D model was positively validated by 92% of surgeons for anatomical realism, 88% for fracture representation, and 92% for adherence to the AO Spine classification. The model's educational value for junior residents was rated positively by 100% of participants. Six of the 24 surgeons (25%) rated the ease of navigation <8/10. Group comparisons revealed that university-affiliated surgeons rated the model higher overall (mean score 9.25/10) than private practitioners, who gave the lowest ratings (mean score 8.6/10). No significant correlation was found between age and ease of navigation (p = 0.948). DISCUSSION The developed 3D model of thoracic and lumbar spine fractures is the first of its kind, providing an innovative, open-access tool, freely accessible online, for teaching the AO Spine classification. The findings demonstrate that it is a valid pedagogical tool, with strong support for its anatomical accuracy and pedagogical effectiveness. This study sets the stage for a future validation study with surgical residents. LEVEL OF EVIDENCE III.
Affiliation(s)
- Emmanuelle Ferrero
- Hôpital Européen Georges Pompidou (HEGP), Department of Orthopedic Surgery, Paris, France
- Pierre Guigui
- Hôpital Européen Georges Pompidou (HEGP), Department of Orthopedic Surgery, Paris, France
- Dan Benhamou
- Hôpital du Kremlin-Bicêtre, Department of Anesthesia, Intensive Care and Perioperative Medicine, Le Kremlin-Bicêtre, France; Simulation Center LabForsSIMS, Bicêtre Medical School, Paris-Saclay University, Le Kremlin-Bicêtre, France
- Antonia Blanié
- Hôpital du Kremlin-Bicêtre, Department of Anesthesia, Intensive Care and Perioperative Medicine, Le Kremlin-Bicêtre, France; Simulation Center LabForsSIMS, Bicêtre Medical School, Paris-Saclay University, Le Kremlin-Bicêtre, France
18. Bo X, Xue F, Xia Q, He K. Investigation and self-assessment of liver transplantation training physicians at Shanghai Renji Hospital: A preliminary study. Surg Open Sci 2024; 22:24-33. PMID: 39525882; PMCID: PMC11550166; DOI: 10.1016/j.sopen.2024.10.004.
Abstract
Background Continuing medical education in liver transplantation is pivotal in enhancing the proficiency of liver surgeons. The goal of this study was to obtain information on all aspects of the training, pinpoint its strengths, and address any shortcomings or challenges. Method We conducted an online questionnaire survey comprising 33 questions, with response options in the form of "yes/no", single choice, or multiple choice. Results A total of 59 liver surgeons actively participated in the questionnaire survey. The majority exhibited a comprehensive understanding of the liver transplant training program, encompassing its structure, content, and assessment format. Notably, all respondents expressed keen interest in novel course components such as medical humanities, interpersonal communication, full-process patient management, and scientific research and academic activities. Overall satisfaction with the diverse specialized training courses was notably high. Furthermore, there was a significant improvement in participants' self-confidence in performing relevant clinical practices post-training, signifying the effectiveness of the program. Key determinants influencing physicians' confidence levels before and after training included accumulated clinical practice time, basic operation cases, and educational background. Conclusion This survey reveals that trainees possess a commendable grasp of the program, maintain a positive outlook, and gain substantial benefits from the training. Importantly, it underscores the need to enhance the pedagogical skills of professional training instructors and to continually refine the curriculum, and it can serve as a foundation for informed decisions in the ongoing training of liver transplant physicians.
Affiliation(s)
- Xiaochen Bo
- Department of Liver Surgery, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Feng Xue
- Department of Liver Surgery, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Qiang Xia
- Department of Liver Surgery, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Kang He
- Department of Liver Surgery, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
19. Popov V, Mateju N, Jeske C, Lewis KO. Metaverse-based simulation: a scoping review of charting medical education over the last two decades in the lens of the 'marvelous medical education machine'. Ann Med 2024; 56:2424450. PMID: 39535116; PMCID: PMC11562026; DOI: 10.1080/07853890.2024.2424450.
Abstract
BACKGROUND Over the past two decades, the use of Metaverse-enhanced simulations in medical education has witnessed significant advancement. These simulations offer immersive environments and technologies, such as augmented reality, virtual reality, and artificial intelligence, that have the potential to revolutionize medical training by providing realistic, hands-on experiences in diagnosing and treating patients, practicing surgical procedures, and enhancing clinical decision-making skills. This scoping review aimed to examine the evolution of simulation technology and the emergence of metaverse applications in medical professionals' training, guided by Friedman's three dimensions in medical education: physical space, time, and content, along with an additional dimension of assessment. METHODS In this scoping review, we examined the related literature in six major databases: PubMed, EMBASE, CINAHL, Scopus, Web of Science, and ERIC. A total of 173 publications were selected for the final review and analysis. We thematically analyzed these studies by combining Friedman's three-dimensional framework with assessment. RESULTS Our scoping review showed that Metaverse technologies, such as virtual reality simulation and online learning modules, have enabled medical education to extend beyond physical classrooms and clinical sites by facilitating remote training. In terms of the Time dimension, simulation technologies have made partial but meaningful progress in supplementing traditional time-dependent curricula, helping to shorten learning curves and improve knowledge retention. As for the Content dimension, high-quality simulation and metaverse content require alignment with learning objectives, interactivity, and deliberate practice that should be developmentally integrated from basic to advanced skills. With respect to the Assessment dimension, learning analytics and automated metrics from metaverse-enabled simulation systems have enhanced competency evaluation and formative feedback mechanisms. However, their integration into high-stakes testing is limited, and qualitative feedback and human observation remain crucial. CONCLUSION Our study provides an updated perspective on the achievements and limitations of using simulation to transform medical education, offering insights that can inform development priorities and research directions for human-centered, ethical metaverse applications that enhance healthcare professional training.
Collapse
Affiliation(s)
- Vitaliy Popov
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Natalie Mateju
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Caris Jeske
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Kadriye O. Lewis
- Children’s Mercy Kansas City, Department of Pediatrics, UMKC School of Medicine, Kansas City, MO, USA
| |
Collapse
|
20
|
Mossenson AI, Livingston PL, Tuyishime E, Brown JA. Assessing Healthcare Simulation Facilitation: A Scoping Review of Available Tools, Validity Evidence, and Context Suitability for Faculty Development in Low-Resource Settings. Simul Healthc 2024; 19:e135-e146. [PMID: 38595205 DOI: 10.1097/sih.0000000000000796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/11/2024]
Abstract
SUMMARY STATEMENT Assessment tools support simulation facilitation skill development by guiding practice, structuring feedback, and promoting reflective learning among educators. This scoping review followed a systematic process to identify facilitation assessment tools used in postlicensure healthcare simulation. Secondary objectives included mapping of the validity evidence to support their use and a critical appraisal of their suitability for simulation faculty development in low-resource settings. Database searching, gray literature searching, and stakeholder engagement identified 11,568 sources for screening, of which 72 met criteria for full text review. Thirty sources met inclusion; 16 unique tools were identified. Tools exclusively originated from simulation practice in high-resource settings and predominantly focused on debriefing. Many tools have limited validity evidence supporting their use. In particular, the validity evidence supporting the extrapolation and implications of assessment is lacking. No current tool has high context suitability for use in low-resource settings.
Collapse
Affiliation(s)
- Adam I Mossenson
- From the SJOG Midland Public and Private Hospitals (A.I.M., J.A.B.), Perth, Australia; Dalhousie University (A.I.M., P.L.L.), Halifax, Canada; Curtin Medical School, Curtin University, Perth, Australia (A.I.M.); University of Rwanda College of Medicine and Health Sciences (E.T.), Kigali, Rwanda; Curtin School of Nursing (J.A.B.), Curtin University, Perth, Australia ; and Western Australian Group for Evidence Informed Healthcare Practice: A JBI Centre of Excellence (J.A.B.), Perth, Australia
| | | | | | | |
Collapse
|
21
|
Rabinowitz R, Drake CB, Talan JW, Nair SS, Hafiz A, Andriotis A, Kogan R, Du X, Li J, Hua W, Lin M, Kaufman BS. Just-in-Time Simulation Training to Augment Overnight ICU Resident Education. J Grad Med Educ 2024; 16:713-722. [PMID: 39677310 PMCID: PMC11641875 DOI: 10.4300/jgme-d-24-00268.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2024] [Revised: 05/26/2024] [Accepted: 08/16/2024] [Indexed: 12/17/2024] Open
Abstract
Background Patients who decompensate overnight experience worse outcomes than those who do so during the day. Just-in-time (JIT) simulation could improve on-call resident preparedness but has been minimally evaluated in critical care medicine (CCM) to date. Objective To determine whether JIT training can improve residents' performance in simulation and if those skills would transfer to better clinical management in adult CCM. Methods Second-year medicine residents participated in simulated decompensation events aligned to common medical intensive care unit (MICU) emergencies predicted to occur overnight by their attending intensivist. Simulation faculty scored their performance via critical action checklists. If the event occurred, MICU attendings rated residents' clinical management as well. At the rotation's conclusion, a variant of one previously trained scenario was simulated to assess for performance improvement. Resident perceptions were surveyed before, during, and after completion of the study. Results Twenty-eight residents participated; 22 of 28 (79%) completed the curriculum. Management of simulated decompensations improved following training (initial simulation checklist completion rate 60% vs 80% final simulation, P≤.001, Wilcoxon r=0.5). Predicted events occurred in 27 (45%) of the 60 shifts evaluated, with no observed difference in faculty ratings of overnight performance (median rating 4.5 if trained vs 3.0 if untrained; U=58.50; P=.12; Mann-Whitney r=0.30). Residents' self-reported preparedness to manage MICU emergencies improved significantly following training, from a median of 3.0 to 4.0 (P=.006, Wilcoxon r=0.42). Conclusions JIT simulation training improved residents' performance in simulation.
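The paired pre/post comparison reported above (Wilcoxon signed-rank test with effect size r) is straightforward to reproduce. Below is a minimal Python sketch using hypothetical checklist completion rates, not the study's data; the effect size is recovered from the normal approximation z from the two-sided p-value, with r = z / sqrt(number of pairs).

import numpy as np
from scipy.stats import norm, wilcoxon

# Hypothetical paired checklist completion rates (initial vs final simulation)
initial = np.array([0.55, 0.60, 0.50, 0.65, 0.70, 0.58, 0.62, 0.60])
final = np.array([0.80, 0.78, 0.75, 0.85, 0.82, 0.79, 0.81, 0.83])

stat, p = wilcoxon(initial, final)   # paired, two-sided by default
z = norm.isf(p / 2)                  # z recovered from the two-sided p-value
r = z / np.sqrt(len(initial))        # Wilcoxon effect size r = z / sqrt(N pairs)
print(f"W = {stat:.1f}, p = {p:.4f}, r = {r:.2f}")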
Collapse
Affiliation(s)
- Raphael Rabinowitz
- Raphael Rabinowitz, MD, is Clinical Assistant Professor, Department of Medicine, New York University (NYU) Grossman School of Medicine, New York, New York, USA
| | - Carolyn B. Drake
- Carolyn B. Drake, MD, MPH, is Clinical Assistant Professor, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
| | - Jordan W. Talan
- Jordan W. Talan, MD, MHPE, is Assistant Professor, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
| | - Sunil S. Nair
- Sunil S. Nair, MD, is Clinical Assistant Professor, Department of Medicine, Jefferson Health-Abington, Abington, Pennsylvania, USA
| | - Ali Hafiz
- Ali Hafiz, MD, is Adjunct Instructor, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
| | - Anthony Andriotis
- Anthony Andriotis, MD, is Assistant Professor, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
| | - Rebecca Kogan
- Rebecca Kogan, MD, is a Fellow, Pulmonary and Critical Care Medicine, Weill Cornell Medicine, New York, New York, USA
| | - Xinyue Du
- Xinyue Du, MSc, is Statistician, LLX Solutions, LLC, Waltham, MA, USA
| | - Jian Li
- Jian Li, MSc, is Research Data Analyst, Johns Hopkins University, Baltimore, Maryland, USA
| | - Wanyu Hua
- Wanyu Hua, MSc, is Research Assistant, University of Hong Kong, Pokfulam, Hong Kong
| | - Miao Lin
- Miao Lin, MSc, is Data Analyst, Massachusetts General Hospital, Boston, Massachusetts, USA; and
| | - Brian S. Kaufman
- Brian S. Kaufman, MD, is Professor, Departments of Anesthesiology, Medicine, and Neurosurgery, NYU Grossman School of Medicine, New York, New York, USA
| |
Collapse
|
22
|
Raymond J, Dai DW, McAllister S. The interpretation-use argument: the essential ingredient for high quality assessment design and validation. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2024:10.1007/s10459-024-10392-6. [PMID: 39589600 DOI: 10.1007/s10459-024-10392-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2024] [Accepted: 11/10/2024] [Indexed: 11/27/2024]
Abstract
There is increasing interest in health professions education (HPE) in applying argument-based validity approaches, such as Kane's, to assessment design. The critical first step in employing Kane's approach is to specify the interpretation-use argument (IUA). However, in the HPE literature, this step is often poorly articulated. This article provides guidance on developing the IUA using a worked example involving a workplace performance assessment tool. In developing the IUA, we have drawn inspiration from approaches used in the discipline of language assessment to situate the inferences, warrants and assumptions in the context of the assessment tool. The worked example makes use of Toulmin's model of informal logic/argumentation as a framework to structure the IUA and presents Toulmin diagrams for each inference such that the reader can connect the argument chain together. We also present several lessons learned so the reader can understand the issues we grappled with in developing the IUA. A well laid out IUA allows the argument to be critiqued by others and provides a framework to guide collection of validity evidence, and therefore is an essential ingredient in the work of assessment design and validation.
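For readers who want to operationalize an interpretation-use argument, the chain of inferences, warrants, and assumptions lends itself to a simple data structure. Below is a minimal Python sketch under illustrative assumptions: the inference names follow Kane, but the field names and example entries are hypothetical and are not taken from the article's worked example.

from dataclasses import dataclass, field

@dataclass
class Inference:
    """One link in a Kane-style interpretation-use argument."""
    name: str                  # e.g. "scoring", "generalization"
    claim: str                 # what the inference asserts
    warrant: str               # rule licensing the move from data to claim
    assumptions: list = field(default_factory=list)
    backing: list = field(default_factory=list)  # validity evidence collected

iua = [
    Inference("scoring", "Observed performance is scored accurately",
              "The rubric reflects the target construct",
              assumptions=["Raters apply the rubric consistently"],
              backing=["Rater training records", "Inter-rater reliability study"]),
    Inference("generalization", "Scores generalize across tasks and raters",
              "Sampled tasks represent the assessment universe"),
]
for inf in iua:
    print(f"{inf.name}: {inf.claim} (evidence: {len(inf.backing)} sources)")

Laying the argument out this way makes unbacked inferences visible, which is the practical point of specifying the IUA before collecting evidence.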
Collapse
Affiliation(s)
- Jacqueline Raymond
- School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia.
| | - David Wei Dai
- UCL Institute of Education, University College London, London, UK
| | - Sue McAllister
- School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Faculty of Health, University of Canberra, Canberra, Australia
| |
Collapse
|
23
|
Shahrezaei A, Sohani M, Taherkhani S, Zarghami SY. The impact of surgical simulation and training technologies on general surgery education. BMC MEDICAL EDUCATION 2024; 24:1297. [PMID: 39538209 PMCID: PMC11558898 DOI: 10.1186/s12909-024-06299-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2024] [Accepted: 11/04/2024] [Indexed: 11/16/2024]
Abstract
The landscape of general surgery education has undergone a significant transformation over the past few years, driven in large part by the advent of surgical simulation and training technologies. These innovative tools have revolutionized the way surgeons are trained, allowing for a more immersive, interactive, and effective learning experience. In this review, we explore the impact of surgical simulation and training technologies on general surgery education, highlighting their benefits, challenges, and future directions. Enhancing the technical proficiency of surgical residents is one of the main benefits of surgical simulation and training technologies. By providing a realistic and controlled environment, simulations allow residents to hone their surgical skills without compromising patient safety. Research has consistently demonstrated that simulation-based training enhances surgical skills, reduces errors, and improves overall performance. Furthermore, simulators can be programmed to mimic a wide range of surgical scenarios, enabling residents to cultivate the essential critical thinking and decision-making abilities required to manage intricate surgical cases. Another area of development is incorporating simulation-based training into the wider surgical curriculum. As simulation technologies become more widespread, they will need to be incorporated into the fabric of surgical education, rather than simply serving as an adjunct to traditional training methods. This will require a fundamental shift in the way surgical education is delivered, with a greater emphasis on simulation-based training and assessment.
Collapse
Affiliation(s)
- Aidin Shahrezaei
- School of Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Maryam Sohani
- School of Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Soroush Taherkhani
- Department of Physiology, Iran University of Medical Sciences, Tehran, Iran
| | - Seyed Yahya Zarghami
- Division of HPB Surgery & Abdominal Organ Transplantation, Department of Surgery, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran.
| |
Collapse
|
24
|
Brocke TK, Fox C, Clanahan JM, Klos CL, Chapman WC, Wise PE, Awad MM, Ohman KA. Extrapolative Validity Evidence of the Anastomosis Objective Structured Assessment of Technical Skill (A-OSATS) for Robotic Ileocolic Anastomosis. JOURNAL OF SURGICAL EDUCATION 2024; 81:1577-1584. [PMID: 39255546 DOI: 10.1016/j.jsurg.2024.07.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2024] [Revised: 07/24/2024] [Accepted: 07/28/2024] [Indexed: 09/12/2024]
Abstract
OBJECTIVE To collect validity evidence for the use of the Anastomosis Objective Structured Assessment of Technical Skills (A-OSATS) instrument, which has been developed to evaluate performance of a minimally invasive side-to-side bowel anastomosis with hand-sewn common enterotomy. DESIGN Residents performed a robotic ileocolic anastomosis simulation on an ex vivo porcine model. Faculty scored each resident with the A-OSATS and performed a provocative leak test on the completed anastomoses. Residents were reassessed on the sewing sub-score 1 month later. Data were compared with parametric and nonparametric analyses. SETTING Single academic general surgery residency. PARTICIPANTS PGY-4 and PGY-5 general surgery residents (n = 17). RESULTS PGY-5s performed better than PGY-4s in repeat A-OSATS sewing sub-score (mean 55/55 ± 0 vs 43 ± 4.9, p < 0.001) and time to complete (minutes, mean 14.5 ± 4.9 vs 21.2 ± 3.9, p = 0.01). There was a strong negative correlation between A-OSATS score and time (r = -0.67, p = 0.005). For the initial assessment, there was no significant difference in mean A-OSATS score between anastomoses that leaked and those that did not leak (137.3 ± 14.5 vs 150.1 ± 11.2, p = 0.098), but on repeat assessment, intact anastomoses had a higher mean A-OSATS sewing sub-score than those that leaked (52.2 ± 4.7 vs 39 ± 3.5, p = 0.007). There was no significant difference between initial A-OSATS score and repeat score (p = 0.14). CONCLUSIONS We provide extrapolative validity evidence for the A-OSATS instrument by comparing A-OSATS score to time to sew, provocative leak test, and discrimination between PGY-4s and PGY-5s. Generalizability validity evidence is provided by test-retest reliability. Further refinement is needed for the A-OSATS tool to be used for high-stakes entrustment decisions in resident-performed robotic ileocolic anastomoses.
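The parametric group comparison reported above (mean A-OSATS sub-scores for intact versus leaking anastomoses) reduces to a two-sample t-test. A minimal Python sketch with hypothetical stand-in scores, not the study's data; Welch's variant is used so the groups need not share a variance.

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical A-OSATS sewing sub-scores, grouped by provocative leak test result
intact = np.array([55, 50, 54, 48, 53, 52, 49, 51, 55, 47])
leaked = np.array([41, 36, 40, 38, 42, 37, 39])

t, p = ttest_ind(intact, leaked, equal_var=False)  # Welch's t-test
print(f"intact mean = {intact.mean():.1f}, leaked mean = {leaked.mean():.1f}, "
      f"t = {t:.2f}, p = {p:.4f}")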
Collapse
Affiliation(s)
- Tiffany K Brocke
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri.
| | - Cory Fox
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri
| | - Julie M Clanahan
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri
| | - Coen L Klos
- John Cochran VA Medical Center, St. Louis, Missouri
| | - William C Chapman
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri
| | - Paul E Wise
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri
| | - Michael M Awad
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri
| | - Kerri A Ohman
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri; John Cochran VA Medical Center, St. Louis, Missouri
| |
Collapse
|
25
|
Cold KM, Agbontaen K, Nielsen AO, Andersen CS, Singh S, Konge L. Artificial intelligence for automatic and objective assessment of competencies in flexible bronchoscopy. J Thorac Dis 2024; 16:5718-5726. [PMID: 39444895 PMCID: PMC11494585 DOI: 10.21037/jtd-24-841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2024] [Accepted: 07/12/2024] [Indexed: 10/25/2024]
Abstract
Background Bronchoscopy is a challenging technical procedure, and assessment of competence currently relies on expert raters. Human rating is time consuming and prone to rater bias. The aim of this study was to evaluate if a bronchial segment identification system based on artificial intelligence (AI) could automatically, instantly, and objectively assess competencies in flexible bronchoscopy in a valid way. Methods Participants were recruited at the Clinical Skills Zone of the European Respiratory Society Annual Conference in Milan, 9th-13th September 2023. The participants performed one full diagnostic bronchoscopy in a simulated setting and were rated immediately by the AI according to its four outcome measures: diagnostic completeness (DC), structured progress (SP), procedure time (PT), and mean intersegmental time (MIT). The procedures were video-recorded and rated after the conference by two blinded, expert raters using a previously validated assessment tool with nine items regarding anatomy and dexterity. Results Fifty-two participants from six different continents were included. All four outcome measures of the AI correlated significantly with the experts' anatomy-ratings (Pearson's correlation coefficient, P value): DC (r=0.47, P<0.001), SP (r=0.57, P<0.001), PT (r=-0.32, P=0.02), and MIT (r=-0.55, P<0.001) and also with the experts' dexterity-ratings: DC (r=0.38, P=0.006), SP (r=0.53, P<0.001), PT (r=-0.34, P=0.014), and MIT (r=-0.47, P<0.001). Conclusions The study provides initial validity evidence for AI-based immediate and automatic assessment of anatomical and navigational competencies in flexible bronchoscopy. SP provided stronger correlations with human experts' ratings than the traditional DC.
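Correlating automated simulator metrics against expert ratings, as this study does, reduces to a set of Pearson coefficients. A minimal sketch with synthetic data; the coupling strengths below are assumptions chosen only to mimic the signs reported above (navigation metrics positive, time metrics negative).

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 52                                  # participants, as in the study
anatomy = rng.normal(size=n)            # hypothetical expert anatomy ratings (standardized)
# Hypothetical simulator metrics loosely coupled to the expert ratings
metrics = {
    "DC": anatomy * 0.5 + rng.normal(scale=0.9, size=n),
    "SP": anatomy * 0.6 + rng.normal(scale=0.8, size=n),
    "PT": -anatomy * 0.3 + rng.normal(scale=1.0, size=n),   # times correlate negatively
    "MIT": -anatomy * 0.5 + rng.normal(scale=0.8, size=n),
}
for name, values in metrics.items():
    r, p = pearsonr(values, anatomy)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")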
Collapse
Affiliation(s)
- Kristoffer Mazanti Cold
- Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, University of Copenhagen, the Capital Region of Denmark, Copenhagen, Denmark
| | - Kaladerhan Agbontaen
- Intensive Care Unit, Chelsea and Westminster Hospital, Chelsea, London, UK
| | - Anne Orholm Nielsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, University of Copenhagen, the Capital Region of Denmark, Copenhagen, Denmark
- Bispebjerg Hospital, Department of Pulmonary Medicine, Capital Region of Denmark, Copenhagen, Denmark
| | | | - Suveer Singh
- Intensive Care Unit, Royal Brompton Hospital, Chelsea, London, UK
- Faculty of Medicine, Imperial College London, Chelsea, London, UK
| | - Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet, University of Copenhagen, the Capital Region of Denmark, Copenhagen, Denmark
| |
Collapse
|
26
|
Kinnear B, St-Onge C, Schumacher DJ, Marceau M, Naidu T. Validity in the Next Era of Assessment: Consequences, Social Impact, and Equity. PERSPECTIVES ON MEDICAL EDUCATION 2024; 13:452-459. [PMID: 39280703 PMCID: PMC11396166 DOI: 10.5334/pme.1150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 08/12/2024] [Indexed: 09/18/2024]
Abstract
Validity has long held a venerated place in education, leading some authors to refer to it as the "sine qua non" or "cardinal virtue" of assessment. And yet, validity has not held a fixed meaning; rather it has shifted in its definition and scope over time. In this Eye Opener, the authors explore whether and how current conceptualizations of validity fit a next era of assessment that prioritizes patient care and learner equity. They posit that health professions education's conceptualization of validity will change in three related but distinct ways. First, consequences of assessment decisions will play a central role in validity arguments. Second, validity evidence regarding impacts of assessment on patients and society will be prioritized. Third, equity will be seen as part of validity rather than an unrelated concept. The authors argue that health professions education has the agency to change its ideology around validity, and to align with values that will predominate in the next era of assessment, such as high-quality care and equity for learners and patients.
Collapse
Affiliation(s)
- Benjamin Kinnear
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
| | - Christina St-Onge
- Department of Medicine, Researcher at the Center for Health Sciences Pedagogy, Université de Sherbrooke, Sherbrooke, Québec, Canada
| | - Daniel J Schumacher
- Department of Pediatrics, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
| | - Mélanie Marceau
- School of Nursing, Université de Sherbrooke, Sherbrooke, Québec, Canada
| | - Thirusha Naidu
- Department of Innovation in Medical Education, Faculty of Medicine, University of Ottawa, Canada
- Department of Psychiatry, University of KwaZulu-Natal, South Africa
| |
Collapse
|
27
|
Toale C, Morris M, Roche A, Voborsky M, Traynor O, Kavanagh D. Development and validation of a simulation-based assessment of operative competence for higher specialist trainees in general surgery. Surg Endosc 2024; 38:5086-5095. [PMID: 39020120 PMCID: PMC11362445 DOI: 10.1007/s00464-024-11024-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Accepted: 06/30/2024] [Indexed: 07/19/2024]
Abstract
BACKGROUND Simulation is increasingly being explored as an assessment modality. This study sought to develop and collate validity evidence for a novel simulation-based assessment of operative competence. We describe the approach to assessment design, development, pilot testing, and validity investigation. METHODS Eight procedural stations were generated using both virtual reality and bio-hybrid models. Content was identified from a previously conducted Delphi consensus study of trainers. Trainee performance was scored using an equally weighted Objective Structured Assessment of Technical Skills (OSATS) tool and a modified Procedure-Based Assessment (PBA) tool. Validity evidence was analyzed in accordance with Messick's validity framework. Both 'junior' (ST2-ST4) and 'senior' (ST5-ST8) trainees were included to allow for comparative analysis. RESULTS Thirteen trainees were assessed by ten assessors across eight stations. Inter-station reliability was high (α = 0.81), and inter-rater reliability was acceptable (intraclass correlation coefficient 0.77). A significant difference in mean station score was observed between junior and senior trainees (44.82 vs 58.18, p = .004), while overall mean scores were moderately correlated with increasing training year (rs = .74, p = .004, Kendall's tau-b .57, p = 0.009). A pass-fail score generated using borderline regression methodology resulted in all 'senior' trainees passing and 4/6 of junior trainees failing the assessment. CONCLUSION This study reports validity evidence for a novel simulation-based assessment, designed to assess the operative competence of higher specialist trainees in general surgery.
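The borderline regression method used above to generate the pass-fail score regresses checklist scores on global ratings and reads off the predicted score at the borderline rating category. A minimal Python sketch with hypothetical station data; the 1-5 global scale and the choice of 3 as the borderline category are assumptions, not details taken from the study.

import numpy as np

# Hypothetical station data: global rating (1 = fail ... 5 = excellent) and checklist %
global_ratings = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist = np.array([30, 42, 45, 50, 55, 52, 64, 68, 78, 82])

slope, intercept = np.polyfit(global_ratings, checklist, 1)  # least-squares line
BORDERLINE = 3                      # rating category treated as "borderline" (assumed)
pass_mark = slope * BORDERLINE + intercept
print(f"pass mark = {pass_mark:.1f}%")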
Collapse
Affiliation(s)
- Conor Toale
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, 121 St. Stephen's Green, Dublin, Ireland.
| | - Marie Morris
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, 121 St. Stephen's Green, Dublin, Ireland
| | - Adam Roche
- SIM Centre for Simulation Education and Research, Royal College of Surgeons in Ireland, 123 St. Stephen's Green, Dublin, Ireland
| | - Miroslav Voborsky
- SIM Centre for Simulation Education and Research, Royal College of Surgeons in Ireland, 123 St. Stephen's Green, Dublin, Ireland
| | - Oscar Traynor
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, 121 St. Stephen's Green, Dublin, Ireland
| | - Dara Kavanagh
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, 121 St. Stephen's Green, Dublin, Ireland
| |
Collapse
|
28
|
Sønderup M, Gustafsson A, Konge L, Jacobsen ME. Intraoperative fluoroscopy skills in distal radius fracture surgery: valid and reliable assessment on a novel immersive virtual reality simulator. Acta Orthop 2024; 95:477-484. [PMID: 39192817 DOI: 10.2340/17453674.2024.41345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/08/2024] [Indexed: 08/29/2024] Open
Abstract
BACKGROUND AND PURPOSE Orthopedic trainees must be able to perform intraoperative fluoroscopy imaging to assess the surgical result after volar locking plate surgeries of distal radius fractures. Guided by Messick's contemporary validity framework, the aim of our study was to gather evidence of validity for a test of proficiency for intraoperative imaging of a distal radius fracture using a novel immersive virtual reality simulator. METHODS 11 novices and 9 experienced surgeons employed at orthopedic departments completed 2 individual simulator sessions. At each session the participants performed 3 repetitions of an intraoperative fluoroscopic control of a distal radius fracture, consisting of 5 different fluoroscopic views. Several performance metrics were automatically recorded by the simulator and compared between the 2 groups. RESULTS Simulator metrics for 3 of the 5 fluoroscopic views could discriminate between novices and experienced surgeons. An estimated composite score based on these 3 views showed good test-retest reliability, ICC = 0.82 (confidence interval 0.65-0.92; P < 0.001). A discriminatory standard was set at a composite score of 6.15 points resulting in 1 false positive (i.e., novice scoring better than the standard), and 1 false negative (i.e., experienced surgeon scoring worse than the standard). CONCLUSION This study provided validity evidence from all 5 sources of Messick's contemporary validity framework (content, response process, internal structure, relationship with other variables, and consequences) for a simulation-based test of proficiency in intraoperative fluoroscopic control of a distal radius fracture fixated by a volar locking plate.
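Setting a discriminatory standard from contrasting groups, and counting the resulting misclassifications, is a short computation. Below is a sketch with hypothetical composite scores constructed only so that the study's reported outcome (one false positive and one false negative at the 6.15-point cut) is reproduced.

import numpy as np

# Hypothetical composite scores; the study's cut score was 6.15 points
novices = np.array([3.1, 4.0, 4.8, 5.2, 5.5, 5.9, 6.0, 6.3, 4.4, 5.1, 5.7])
experts = np.array([6.4, 6.8, 7.1, 7.5, 6.9, 7.3, 6.0, 7.0, 6.6])
cut = 6.15

false_pos = int((novices >= cut).sum())  # novices scoring better than the standard
false_neg = int((experts < cut).sum())   # experienced surgeons scoring worse
print(f"false positives = {false_pos}, false negatives = {false_neg}")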
Collapse
Affiliation(s)
- Marie Sønderup
- Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen
| | - Amandus Gustafsson
- Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen; Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet
| | - Lars Konge
- Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen; Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet
| | - Mads Emil Jacobsen
- Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen; Copenhagen Academy for Medical Education and Simulation (CAMES), Rigshospitalet; Department of Orthopedic Surgery, Center for Orthopedic Research and Innovation (CORI), Næstved Slagelse Ringsted Hospitals, Denmark
| |
Collapse
|
29
|
Braid HR. Development and Evaluation of a Surgical Simulator and Assessment Rubric for Standing Castration of the Horse. JOURNAL OF VETERINARY MEDICAL EDUCATION 2024:e20230131. [PMID: 39504222 DOI: 10.3138/jvme-2023-0131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2024]
Abstract
In veterinary education, simulators are models or devices that can imitate a real patient or scenario and allow students to practice skills without the need for live patients. Castration is a common surgical procedure in all species, and the standing, open technique is frequently performed in horses. Although a simulator has been developed for equine closed castration, a simulator for standing castration in the horse has not yet been described. This two-part study focused on the design, creation, and evaluation of a simulator for teaching standing castration in the horse. A low-technology simulator was created using molded silicone testicles, cohesive bandage, stockings, and socks. A rubric was created for assessing performance using the simulator. Participants were recruited from three groups: university academic staff members (n = 12, majority equine veterinarians), equine veterinarians working in private practice (n = 9), and final-year veterinary students (n = 28). Each group tested the simulator while being graded using the developed rubric, and participants completed an anonymous online feedback questionnaire. Feedback was positive overall, with 98% of respondents (n = 48/49) stating that the model would be a useful addition to the veterinary curriculum. Furthermore, 100% of students reported that using the simulator increased their confidence in performing standing castration in horses. Evaluation of the model included assessment of responses from veterinarians and students regarding realism and usefulness of the simulator, comparison of rubric scores between veterinarians and students, and assessment of the reliability of the rubric. Median student rubric score was significantly lower than that of qualified veterinarians (p < .001), and Cronbach's alpha demonstrated adequate internal reliability in rubric scoring (α = .85). It was determined that the simulator is effective for teaching the steps of the surgical procedure and for increasing student confidence.
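Cronbach's alpha, used above to check the rubric's internal reliability, can be computed directly from a persons-by-items score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal Python sketch with simulated rubric data; the group size and item count are illustrative, not the study's.

import numpy as np

def cronbach_alpha(items):
    """items: persons-by-items matrix; returns Cronbach's alpha."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item over persons
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(49, 1))                      # 49 hypothetical participants
scores = ability + rng.normal(scale=0.6, size=(49, 8))  # 8 correlated rubric items
print(f"alpha = {cronbach_alpha(scores):.2f}")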
Collapse
Affiliation(s)
- Helen R Braid
- Equine Practice, University of Liverpool, School of Veterinary Science, Institute of Infection, Veterinary and Ecological Sciences, Leahurst Campus, Neston, Wirral, CH64 7TE, United Kingdom
| |
Collapse
|
30
|
Wing R, Goldman MP, Prieto MM, Miller KA, Baluyot M, Tay KY, Bharath A, Patel D, Greenwald E, Larsen EP, Polikoff LA, Kerrey BT, Nishisaki A, Nagler J. Usability Testing Via Simulation: Optimizing the NEAR4PEM Preintubation Checklist With a Human Factors Approach. Pediatr Emerg Care 2024; 40:575-581. [PMID: 39078284 DOI: 10.1097/pec.0000000000003223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 07/31/2024]
Abstract
OBJECTIVES To inform development of a preintubation checklist for pediatric emergency departments via multicenter usability testing of a prototype checklist. METHODS This was a prospective, mixed methods study across 7 sites in the National Emergency Airway Registry for Pediatric Emergency Medicine (NEAR4PEM) collaborative. Pediatric emergency medicine attending physicians and senior fellows at each site were first oriented to a checklist prototype, including content previously identified using a modified Delphi approach. Each site used the checklist in 2 simulated cases: an "easy airway" and a "difficult airway" scenario. Facilitators recorded verbalization, completion, and timing of checklist items. After each simulation, participants completed an anonymous usability survey. Structured debriefings were used to gather additional feedback on checklist usability. Comments from the surveys and debriefing were qualitatively analyzed using a framework approach. Responses informed human factors-based optimization of the checklist. RESULTS Fifty-five pediatric emergency medicine physicians/fellows (4-13 per site) participated. Participants found the prototype checklist to be helpful, easy to use, clear, and of appropriate length. During the simulations, 93% of checklist items were verbalized and more than 80% were completed. Median time to checklist completion was 6.2 minutes (interquartile range, 4.8-7.1) for the first scenario and 4.2 minutes (interquartile range, 2.7-5.8) for the second. Survey and debriefing data identified the following strengths: facilitating a shared mental model, cognitively offloading the team leader, and prompting contingency planning. Suggestions for checklist improvement included clarifying specific items, providing more detailed prompts, and allowing institution-specific customization. Integration of these data with human factors heuristic inspection resulted in a final checklist. CONCLUSIONS Simulation-based, human factors usability testing of the National Emergency Airway Registry for Pediatric Emergency Medicine Preintubation Checklist allowed optimization prior to clinical implementation. Next steps involve integration into real-world settings utilizing rigorous implementation science strategies, with concurrent evaluation of the impact on patient outcomes and safety.
Collapse
Affiliation(s)
- Robyn Wing
- From the Division of Pediatric Emergency Medicine, Departments of Emergency Medicine and Pediatrics, Alpert Medical School of Brown University and Rhode Island Hospital/Hasbro Children's Hospital; Lifespan Medical Simulation Center, Providence, RI
| | - Michael P Goldman
- Departments of Pediatrics (Section of Pediatric Emergency Medicine) and Emergency Medicine, Yale University School of Medicine, New Haven, CT
| | - Monica M Prieto
- Perelman School of Medicine at the University of Pennsylvania, Division of Emergency Medicine, Department of Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA
| | - Kelsey A Miller
- Departments of Pediatrics and Emergency Medicine, Harvard Medical School, Division of Pediatric Emergency Medicine, Boston Children's Hospital, Boston, MA
| | - Mariju Baluyot
- Departments of Pediatrics and Emergency Medicine, Indiana University School of Medicine, Divisions of Pediatric Emergency Medicine and Simulation, Riley Hospital for Children, Indianapolis, IN
| | - Khoon-Yen Tay
- Perelman School of Medicine at the University of Pennsylvania, Division of Emergency Medicine, Children's Hospital of Philadelphia, Philadelphia, PA
| | - Anita Bharath
- Department of Emergency Medicine, Phoenix Children's, Phoenix, AZ
| | - Deepa Patel
- Department of Pediatrics, Zucker School of Medicine at Hofstra/Northwell, Division of Pediatric Emergency Medicine, Cohen Children's Medical Center, New Hyde Park, NY
| | - Emily Greenwald
- Department of Pediatrics, Duke Children's Hospital, Duke University Hospital, Durham, NC
| | - Ethan P Larsen
- Center for Healthcare Quality and Analytics, Children's Hospital of Philadelphia, Philadelphia, PA
| | - Lee A Polikoff
- Division of Critical Care Medicine, Department of Pediatrics, The Warren Alpert Medical School of Brown University, Providence, RI
| | - Benjamin T Kerrey
- University of Cincinnati, College of Medicine and the Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH
| | - Akira Nishisaki
- Department of Anesthesiology, Critical Care, and Pediatrics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
| | - Joshua Nagler
- Departments of Emergency Medicine and Pediatrics, Harvard Medical School, Division of Pediatric Emergency Medicine, Boston Children's Hospital, Boston, MA
| |
Collapse
|
31
|
Gotzmann A, Boulet J, Zhang Y, McCormick J, Wojcik M, Bartman I, Pugh D. Conducting an objective structured clinical examination under COVID-restricted conditions. BMC MEDICAL EDUCATION 2024; 24:801. [PMID: 39061036 PMCID: PMC11282689 DOI: 10.1186/s12909-024-05774-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2024] [Accepted: 07/12/2024] [Indexed: 07/28/2024]
Abstract
BACKGROUND The administration of performance assessments during the coronavirus disease of 2019 (COVID-19) pandemic posed many challenges, especially for examinations employed as part of certification and licensure. The National Assessment Collaboration (NAC) Examination, an Objective Structured Clinical Examination (OSCE), was modified during the pandemic. The purpose of this study was to gather evidence to support the reliability and validity of the modified NAC Examination. METHODS The modified NAC Examination was delivered to 2,433 candidates in 2020 and 2021. Cronbach's alpha, decision consistency, and accuracy values were calculated. Validity evidence includes comparisons of scores and sub-scores for demographic groups: gender (male vs. female), type of International Medical Graduate (IMG) (Canadians Studying Abroad (CSA) vs. non-CSA), postgraduate training (PGT) (no PGT vs. PGT), and language of examination (English vs. French). Criterion relationships were summarized using correlations within and between the NAC Examination and the Medical Council of Canada Qualifying Examination (MCCQE) Part I scores. RESULTS Reliability estimates were consistent with other OSCEs similar in length and previous NAC Examination administrations. Both total score and sub-score differences for gender were statistically significant. Total score differences by type of IMG and PGT were not statistically significant, but sub-score differences were statistically significant. Administration language was not statistically significant for either the total scores or sub-scores. Correlations were all statistically significant with some relationships being small or moderate (0.20 to 0.40) or large (> 0.40). CONCLUSIONS The NAC Examination yields reliable total scores and pass/fail decisions. Expected differences in total scores and sub-scores for defined groups were consistent with previous literature, and internal relationships amongst NAC Examination sub-scores and their external relationships with the MCCQE Part I supported both discriminant and criterion-related validity arguments. Modifications to OSCEs to address health restrictions can be implemented without compromising the overall quality of the assessment. This study outlines some of the validity and reliability analyses for OSCEs that required modifications due to COVID.
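Decision consistency, one of the quantities reported for this OSCE, can be approximated by Monte Carlo simulation: generate two parallel administrations with a given score reliability and measure how often the pass/fail decision agrees. The Python sketch below works under simplified assumptions (standardized normal scores, an arbitrary reliability and cut) rather than the examination's actual psychometric model.

import numpy as np

rng = np.random.default_rng(42)
n, reliability, cut = 100_000, 0.85, -0.5  # cut in z-units; both values are arbitrary
noise_sd = np.sqrt(1 / reliability - 1)    # error SD yielding the target reliability
scale = np.sqrt(1 / reliability)           # rescale observed scores back to SD 1

true = rng.normal(size=n)
form_a = (true + rng.normal(scale=noise_sd, size=n)) / scale
form_b = (true + rng.normal(scale=noise_sd, size=n)) / scale

consistency = np.mean((form_a >= cut) == (form_b >= cut))  # same decision twice
accuracy = np.mean((form_a >= cut) == (true >= cut))       # decision matches true score
print(f"decision consistency = {consistency:.3f}, accuracy = {accuracy:.3f}")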
Collapse
Affiliation(s)
- Andrea Gotzmann
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada.
| | - John Boulet
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| | - Yichi Zhang
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| | - Judy McCormick
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| | - Mathieu Wojcik
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| | - Ilona Bartman
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| | - Debra Pugh
- Medical Council of Canada, 1021 Thomas Spratt Place, Ottawa, ON, K1G 5L5, Canada
| |
Collapse
|
32
|
Jalali A, Darvishi N, Kalhory P, Merati F, Vatandost S, Moradi K. Intensive care unit dignified care: Persian translation and psychometric evaluation. Nurs Open 2024; 11:e2238. [PMID: 38978289 PMCID: PMC11231042 DOI: 10.1002/nop2.2238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 04/30/2024] [Accepted: 06/21/2024] [Indexed: 07/10/2024] Open
Abstract
AIM The present study aimed to evaluate the psychometric properties of the Persian version of the 'Intensive Care Unit Dignified Care Questionnaire (IDCQ)' among Iranian nurses. DESIGN A methodological and psychometric study was conducted in 2022, involving nurses from six teaching hospitals in Kermanshah, Western Iran. METHODS The IDCQ was translated into Persian using a forward-backward translation method. Construct validity was assessed through exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), employing a stratified sampling method with 455 critical care nurses. Internal consistency was gauged using Cronbach's alpha coefficient, while reliability was determined through the test-retest method. Analyses were performed using SPSS version 26 and Lisrel version 8 software. RESULTS EFA and CFA validated the instrument's two-factor, 17-item structure. The CFA indicated a well-fitting model with fit indices: CFI = 0.93, NNFI = 0.92, GFI = 0.861, RMSEA = 0.051 and SRMR = 0.046. Pearson's correlation coefficient substantiated a significant relationship between the items, subscales and the overall scale. The instrument's reliability was confirmed by a Cronbach's α coefficient of 0.88 and a test-retest reliability of 0.86. CONCLUSION The Persian version of the IDCQ, comprising two factors and 17 items, has been validated as a reliable and applicable tool for use within the Iranian nursing community.
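Test-retest reliability of the kind reported here is often expressed as an intraclass correlation. Below is a minimal Python sketch of ICC(2,1) from the Shrout-Fleiss two-way ANOVA decomposition, run on simulated two-administration data rather than the study's; treat it as an illustration of the formula, not the authors' analysis code.

import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement (Shrout & Fleiss)."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # occasions/raters
    ss_total = ((Y - grand) ** 2).sum()
    mse = (ss_total - (n - 1) * msr - (k - 1) * msc) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true = rng.normal(60, 8, size=(10, 1))     # 10 hypothetical respondents
Y = true + rng.normal(0, 3, size=(10, 2))  # two administrations each
print(f"ICC(2,1) = {icc_2_1(Y):.2f}")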
Collapse
Affiliation(s)
- Amir Jalali
- Substance Abuse Prevention Research Center, Research Institute for Health, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Niloufar Darvishi
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Parnia Kalhory
- Department of Emergency and Critical Care Nursing, School of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Fateme Merati
- Department of Emergency and Critical Care Nursing, School of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
| | - Salam Vatandost
- Clinical Care Research Center, Institute for Health Development, Kurdistan University of Medical Sciences, Sanandaj, Iran
| | - Khalil Moradi
- Department of Emergency and Critical Care Nursing, School of Nursing and Midwifery, Kermanshah University of Medical Sciences, Kermanshah, Iran
| |
Collapse
|
33
|
Roche AF, Kavanagh D, McCawley N, O'Riordan JM, Cahir C, Toale C, O'Keeffe D, Lawler T, Condron CM. Collating evidence to support the validation of a simulated laparotomy incision and closure-training model. Am J Surg 2024; 233:84-89. [PMID: 38402084 DOI: 10.1016/j.amjsurg.2024.02.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 02/02/2024] [Accepted: 02/12/2024] [Indexed: 02/26/2024]
Abstract
BACKGROUND It is essential to evaluate the functionality of surgical simulation models, in order to determine whether they perform as intended. In this study, we assessed the use of a simulated laparotomy incision and closure-training model by collating validity evidence to determine its utility, as well as pre- and post-test interval data. METHOD This was a quantitative study design, informed by Messick's unified validity framework. In total, 93 participants (surgical trainees = 80, experts = 13) participated in this study. Evaluation of content validity and the models' relationships with other variables was conducted, along with a pre- and post-test confidence assessment. RESULTS The model was deemed realistic and useful as a teaching tool, providing strong content validity evidence. In assessment of relationships with other variables, the expert group outperformed the novice group conclusively. Pre- and post-test evaluation showed a statistically significant increase in confidence levels. CONCLUSION We present strong validity evidence for a novel laparotomy incision and closure simulation-training model.
Collapse
Affiliation(s)
- Adam F Roche
- RCSI SIM Centre for Simulation Education and Research, RCSI University of Medicine and Health Sciences, Dublin, Ireland.
| | - Dara Kavanagh
- Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| | - Niamh McCawley
- Department of Colorectal Surgery, Beaumont Hospital, Dublin, Ireland
| | - J M O'Riordan
- Department of Colorectal Surgery, Tallaght University Hospital, Dublin, Ireland
| | - Caitriona Cahir
- Data Science Centre, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| | - Conor Toale
- Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| | - Dara O'Keeffe
- Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| | - Tim Lawler
- RCSI SIM Centre for Simulation Education and Research, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| | - Claire M Condron
- RCSI SIM Centre for Simulation Education and Research, RCSI University of Medicine and Health Sciences, Dublin, Ireland
| |
Collapse
|
34
|
Lund S, Navarro S, D'Angelo JD, Park YS, Rivera M. Expanded Access to Video-Based Laparoscopic Skills Assessments: Ease, Reliability, and Accuracy. JOURNAL OF SURGICAL EDUCATION 2024; 81:850-857. [PMID: 38664172 DOI: 10.1016/j.jsurg.2024.03.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 01/29/2024] [Accepted: 03/13/2024] [Indexed: 05/13/2024]
Abstract
OBJECTIVE Video-based performance assessments provide essential feedback to surgical residents, but in-person and remote video-based assessment by trained proctors incurs significant cost. We aimed to determine the reliability, accuracy, and difficulty of untrained attending staff surgeon raters completing video-based assessments of a basic laparoscopic skill. Secondarily, we aimed to compare reliability and accuracy between 2 different types of assessment tools. DESIGN An anonymous survey was distributed electronically to surgical attendings via a national organizational listserv. Survey items included demographics, rating of video-based assessment experience (1 = have never completed video-based assessments, 5 = often complete video-based assessments), and rating of favorability toward video-based and in-person assessments (0 = not favorable, 100 = favorable). Participants watched 2 laparoscopic peg transfer performances, then rated each performance using an Objective Structured Assessment of Technical Skill (OSATS) form and the McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS). Participants then rated assessment completion ease (1 = Very Easy, 5 = Very Difficult). SETTING National survey of practicing surgeons. PARTICIPANTS Sixty-one surgery attendings with experience in laparoscopic surgery from 10 institutions participated as untrained raters. Six experienced laparoscopic skills proctors participated as expert raters. RESULTS Inter-rater reliability was substantial for both OSATS (k = 0.75) and MISTELS (k = 0.85). MISTELS accuracy was significantly higher than that of OSATS (κ: MISTELS = 0.18, 95%CI = [0.06,0.29]; OSATS = 0.02, 95%CI = [-0.01,0.04]). While participants were inexperienced with completing video-based assessments (median = 1/5), they perceived video-based assessments favorably (mean = 73.4) and felt assessment completion was "Easy" on average. CONCLUSIONS We demonstrate that faculty raters untrained in simulation-based assessments can successfully complete video-based assessments of basic laparoscopic skills with substantial inter-rater reliability without marked difficulty. These findings suggest an opportunity to increase access to feedback for trainees using video-based assessment of fundamental skills in laparoscopic surgery.
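Inter-rater reliability via Cohen's kappa, as reported above, follows from comparing observed agreement with chance agreement: kappa = (p_o - p_e) / (1 - p_e). A minimal Python sketch with hypothetical binary ratings, not the study's assessment data.

import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)                                             # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail ratings from an untrained and an expert rater
untrained = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
expert = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0]
print(f"kappa = {cohens_kappa(untrained, expert):.2f}")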
Collapse
Affiliation(s)
- Sarah Lund
- Mayo Clinic Department of Surgery, 200 1st Street SW, Rochester, Minnesota 55905.
| | - Sergio Navarro
- Mayo Clinic Department of Surgery, 200 1st Street SW, Rochester, Minnesota 55905
| | - Jonathan D D'Angelo
- Mayo Clinic Division of Colon and Rectal Surgery, 200 1st Street SW, Rochester, Minnesota 55905
| | - Yoon Soo Park
- Department of Medical Education, University of Illinois at Chicago College of Medicine, 808 S Wood Street, Chicago Illinois 60612
| | - Mariela Rivera
- Mayo Clinic Division of Trauma, Critical Care, and General Surgery, 200 1st Street SW, Rochester, Minnesota 55905
| |
Collapse
|
35
|
Deuchler S, Dail YA, Berger T, Sneyers A, Koch F, Buedel C, Ackermann H, Flockerzi E, Seitz B. Simulator-Based Versus Traditional Training of Fundus Biomicroscopy for Medical Students: A Prospective Randomized Trial. Ophthalmol Ther 2024; 13:1601-1617. [PMID: 38615132 PMCID: PMC11109054 DOI: 10.1007/s40123-024-00944-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 03/25/2024] [Indexed: 04/15/2024] Open
Abstract
INTRODUCTION Simulation training is an important component of medical education. In earlier studies, diagnostic simulation training for direct and indirect funduscopy was shown to be an effective training method. In this prospective controlled trial, we investigated the effect of simulator-based fundus biomicroscopy training. METHODS After completing a 1-week ophthalmology clerkship, medical students at Saarland University Medical Center (n = 30) were block-randomized into two groups: the traditional group received supervised training examining the fundus of classmates using a slit lamp; the simulator group was trained using the Slit Lamp Simulator. All participants had to pass an Objective Structured Clinical Examination (OSCE); two masked ophthalmological faculty trainers graded the students' skills when examining a patient's fundus using a slit lamp. A subjective assessment form and post-assessment surveys were obtained. Data were described using median (interquartile range [IQR]). RESULTS Twenty-five students (n = 14 in the simulator group, n = 11 in the traditional group) were eligible for statistical analysis. Interrater reliability was verified as significant for the overall score as well as for all subtasks (p ≤ 0.002) except subtask 1 (p = 0.12). The overall performance of medical students in the fundus biomicroscopy OSCE was ranked significantly higher in the simulator group (27.0 [5.25]/28.0 [3.0] vs 20.0 [7.5]/16.0 [10.0]) by both observers, with significant interrater reliability (p < 0.001) and significance levels of p = 0.003 for observer 1 and p < 0.001 for observer 2. For all subtasks, the scores given to students trained using the simulator were consistently higher than those given to students trained traditionally. The students' post-assessment forms confirmed these results. Students could learn the practical backgrounds of fundus biomicroscopy (p = 0.04), and the identification (p < 0.001) and localization (p < 0.001) of pathologies, significantly better with the simulator. CONCLUSIONS Traditional supervised methods are well complemented by simulation training. Our data indicate that the simulator helps with first patient contacts and enhances students' capacity to examine the fundus biomicroscopically.
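The nonparametric group comparison implied by the median (IQR) reporting above is a Mann-Whitney U test. A short Python sketch with hypothetical OSCE scores on the 28-point scale, matching the two group sizes but not the study's data.

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical OSCE scores (28-point scale), simulator- vs traditionally trained
simulator = np.array([27, 28, 25, 27, 26, 28, 24, 27, 26, 28, 25, 27, 26, 28])
traditional = np.array([20, 16, 22, 18, 21, 15, 19, 23, 17, 20, 18])

u, p = mannwhitneyu(simulator, traditional, alternative="two-sided")
for name, x in [("simulator", simulator), ("traditional", traditional)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median = {med:.1f} (IQR {q3 - q1:.2f})")
print(f"U = {u:.1f}, p = {p:.4g}")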
Collapse
Affiliation(s)
- Svenja Deuchler
- Augenzentrum Frankfurt, Georg-Baumgarten-Straße 3, 60549, Frankfurt am Main, Germany.
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany.
| | - Yaser Abu Dail
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany
| | - Tim Berger
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany
| | - Albéric Sneyers
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany
| | - Frank Koch
- Augenzentrum Frankfurt, Georg-Baumgarten-Straße 3, 60549, Frankfurt am Main, Germany
| | - Claudia Buedel
- Augenzentrum Frankfurt, Georg-Baumgarten-Straße 3, 60549, Frankfurt am Main, Germany
| | - Hanns Ackermann
- Institute of Biostatistics, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Elias Flockerzi
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany
| | - Berthold Seitz
- Department of Ophthalmology, Saarland University Medical Center, 66424, Homburg, Saar, Germany
| |
Collapse
|
36
|
Mao X, Boulet JR, Sandella JM, Oliverio MF, Smith L. A validity study of COMLEX-USA Level 3 with the new test design. J Osteopath Med 2024; 124:257-265. [PMID: 38498662 DOI: 10.1515/jom-2023-0011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Accepted: 02/14/2024] [Indexed: 03/20/2024]
Abstract
CONTEXT The National Board of Osteopathic Medical Examiners (NBOME) administers the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), a three-level examination designed for licensure for the practice of osteopathic medicine. The examination design for COMLEX-USA Level 3 (L3) was changed in September 2018 to a two-day computer-based examination with two components: a multiple-choice question (MCQ) component with single best answer and a clinical decision-making (CDM) case component with extended multiple-choice (EMC) and short answer (SA) questions. Continued validation of the L3 examination, especially with the new design, is essential for the appropriate interpretation and use of the test scores. OBJECTIVES The purpose of this study is to gather evidence to support the validity of the L3 examination scores under the new design utilizing sources of evidence based on Kane's validity framework. METHODS Kane's validity framework contains four components of evidence to support the validity argument: Scoring, Generalization, Extrapolation, and Implication/Decision. In this study, we gathered data from various sources and conducted analyses to provide evidence that the L3 examination is validly measuring what it is supposed to measure. These include reviewing content coverage of the L3 examination, documenting scoring and reporting processes, estimating the reliability and decision accuracy/consistency of the scores, quantifying associations between the scores from the MCQ and CDM components and between scores from different competency domains of the L3 examination, exploring the relationships between L3 scores and scores from a performance-based assessment that measures related constructs, performing subgroup comparisons, and describing and justifying the criterion-referenced standard setting process. The analysis data contains first-attempt test scores for 8,366 candidates who took the L3 examination between September 2018 and December 2019. The performance-based assessment utilized as a criterion measure in this study is COMLEX-USA Level 2 Performance Evaluation (L2-PE). RESULTS All assessment forms were built through the automated test assembly (ATA) procedure to maximize parallelism in terms of content coverage and statistical properties across the forms. Scoring and reporting follows industry-standard quality-control procedures. The inter-rater reliability of SA rating, decision accuracy, and decision consistency for pass/fail classifications are all very high. There is a statistically significant positive association between the MCQ and the CDM components of the L3 examination. The patterns of associations, both within the L3 subscores and with L2-PE domain scores, fit with what is being measured. The subgroup comparisons by gender, race, and first language showed expected small differences in mean scores between the subgroups within each category and yielded findings that are consistent with those described in the literature. The L3 pass/fail standard was established through implementation of a defensible criterion-referenced procedure. CONCLUSIONS This study provides some additional validity evidence for the L3 examination based on Kane's validity framework. The validity of any measurement must be established through ongoing evaluation of the related evidence. The NBOME will continue to collect evidence to support validity arguments for the COMLEX-USA examination series.
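Subgroup comparisons like those described above are commonly summarized with a standardized mean difference alongside the significance test. Below is a minimal Python sketch of Cohen's d with a pooled SD; the means, SD, and group sizes are arbitrary simulated values, not COMLEX-USA results.

import numpy as np

def cohens_d(a, b):
    """Standardized mean difference with a pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(7)
group_a = rng.normal(500, 85, size=400)  # hypothetical scaled scores, subgroup A
group_b = rng.normal(492, 85, size=350)  # subgroup B with a small mean shift
print(f"d = {cohens_d(group_a, group_b):.2f}")  # ~0.1, a small difference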
Collapse
Affiliation(s)
- Xia Mao
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
| | - John R Boulet
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
| | - Jeanne M Sandella
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
| | - Michael F Oliverio
- Adjunct Clinical Faculty, Departments of Family Practice and OMM, NYIT-COM, North Bellmore, NY, USA
| | - Larissa Smith
- National Board of Osteopathic Medical Examiners, Chicago, IL, USA
| |
Collapse
|
37
|
Gustafsson A, Rölfing JD, Palm H, Viberg B, Grimstrup S, Konge L. Setting proficiency standards for simulation-based mastery learning of short antegrade femoral nail osteosynthesis: a multicenter study. Acta Orthop 2024; 95:275-281. [PMID: 38819402 PMCID: PMC11141712 DOI: 10.2340/17453674.2024.40812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Accepted: 05/05/2024] [Indexed: 06/01/2024] Open
Abstract
BACKGROUND AND PURPOSE Orthopedic trainees frequently perform short antegrade femoral nail osteosynthesis of trochanteric fractures, but virtual reality simulation-based training (SBT) with haptic feedback has been unavailable. We explored a novel simulator, with the aim of gathering validity evidence for an embedded test and setting a credible pass/fail standard allowing trainees to practice to proficiency. PATIENTS AND METHODS The research, conducted from May to September 2020 across 3 Danish simulation centers, utilized the Swemac TraumaVision simulator for short antegrade femoral nail osteosynthesis. The validation process adhered to Messick's framework, covering all 5 sources of validity evidence. Participants included novice groups, categorized by training to plateau (n = 14) or to mastery (n = 10), and experts (n = 9), focusing on their performance metrics and training duration. RESULTS The novices in the plateau group and experts had hands-on training for 77 (95% confidence interval [CI] 59-95) and 52 (CI 36-69) minutes while the plateau test score, defined as the average of the last 4 scores, was 75% (CI 65-86) and 96% (CI 94-98) respectively. The pass/fail standard was established at the average expert plateau test score of 96%. All novices in the mastery group could meet this standard and interestingly without increased hands-on training time (65 [CI 46-84] minutes). CONCLUSION Our study provides supporting validity evidence from all sources of Messick's framework for a simulation-based test in short antegrade nail osteosynthesis of intertrochanteric hip fracture and establishes a defensible pass/fail standard for mastery learning of SBT. Novices who practiced using mastery learning were able to reach the pre-defined pass/fail standard and outperformed novices without a set goal for external motivation.
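The mastery standard described above (pass/fail set at the average expert plateau score, where a plateau score is defined as the mean of the last four attempts) is easy to compute. A short Python sketch with hypothetical score trajectories, not the study's recorded metrics.

import numpy as np

def plateau_score(scores, k=4):
    """Average of the last k attempts, as the study defines the plateau score."""
    return float(np.mean(scores[-k:]))

# Hypothetical per-attempt simulator scores (%) for three experts and one trainee
experts = [
    [70, 85, 92, 95, 96, 97],
    [75, 88, 94, 95, 96, 95],
    [80, 90, 95, 97, 98, 97],
]
standard = np.mean([plateau_score(s) for s in experts])  # expert-based pass/fail standard

trainee = [40, 55, 70, 82, 90, 94, 96, 97]
passed = plateau_score(trainee) >= standard
print(f"standard = {standard:.1f}%, trainee plateau = {plateau_score(trainee):.1f}%, "
      f"passed = {passed}")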
Affiliation(s)
- Amandus Gustafsson, Orthopaedic Department, Slagelse Hospital, Region Zealand, Slagelse; Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Copenhagen; Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen, Copenhagen
- Jan D Rölfing, Department of Orthopaedics, Aarhus University Hospital, Aarhus; MidtSim, Corporate HR, Central Denmark Region, Aarhus
- Henrik Palm, Orthopaedic Department, Bispebjerg Hospital, Region H, Copenhagen
- Bjarke Viberg, Orthopaedic Department, Odense Hospital, Region Syd, Odense, Denmark
- Søren Grimstrup, Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Copenhagen
- Lars Konge, Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Copenhagen; Department of Clinical Medicine, Faculty of Health Science, University of Copenhagen, Copenhagen
38
Shabahang MM, Schwartz TA, Feldman LS. Practical Guide to Assessment Tool Development for Surgical Education Research. JAMA Surg 2024; 159:580-581. [PMID: 38170509 DOI: 10.1001/jamasurg.2023.6696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2024]
Abstract
This Guide to Statistics and Methods describes the process of validation and gathering validity evidence for assessment tool development for surgical education research.
Affiliation(s)
- Todd A Schwartz, Department of Biostatistics, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill; Statistical Editor, JAMA Surgery
- Liane S Feldman, Department of Surgery, McGill University, Montreal, Quebec, Canada
39
Stapleton SN, Cassara M, Roth B, Matulis C, Desmond C, Wong AH, Cardell A, Moadel T, Lei C, Munzer BW, Moss H, Nadir NA. The MIDAS touch: Frameworks for procedural model innovation and validation. AEM EDUCATION AND TRAINING 2024; 8:S24-S35. [PMID: 38774824 PMCID: PMC11102942 DOI: 10.1002/aet2.10980] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Revised: 02/01/2024] [Accepted: 02/12/2024] [Indexed: 05/24/2024]
Abstract
Background Simulation-based procedural practice is crucial to emergency medicine skills training and maintenance. However, many commercial procedural models are either unavailable or lacking in key elements. Simulationists often create their own novel models with minimal framework for designing, building, and validation. We propose two interlinked frameworks with the goal of systematically building and validating models for the desired educational outcomes. Methods The Simulation Academy Research Committee and members with novel model development expertise assembled as the MIDAS (Model Innovation, Development and Assessment for Simulation) working group. This working group focused on improving novel model creation and validation, beginning with a preconference workshop at the 2023 Society for Academic Emergency Medicine Annual Meeting. The MIDAS group sought to (1) assess the current state of novel model validation and (2) develop frameworks for the broader simulation community to create, improve, and validate procedural models. Findings Workshop participants completed 17 surveys, for a response rate of 100%. Many simulationists have created models, but few have validated them. The most common barriers to validation were a lack of standardized guidelines and a lack of familiarity with the validation process. We have combined principles from the education and engineering fields into two interlinked frameworks. The first is centered on the steps involved in model creation and refinement. The second is a framework for novel model validation processes. Implications These frameworks emphasize development of models through a deliberate, form-follows-function methodology, aimed at ensuring training quality through novel models. Following a blueprint of how to create, test, and improve models can save innovators time and energy, which in turn can yield greater and more plentiful innovation at lower time and financial cost. This guideline allows for more standardized approaches to model creation, thus improving future scholarship on novel models.
Affiliation(s)
- Stephanie N. Stapleton, Department of Emergency Medicine, Boston University School of Medicine, Boston Medical Center, Boston, Massachusetts, USA
- Michael Cassara, Department of Emergency Medicine, North Shore University Hospital, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health Patient Safety Institute/Emergency Medical Institute, Hempstead, New York, USA
- Benjamin Roth, Department of Emergency Medicine, Prisma Health Upstate, University of South Carolina School of Medicine at Greenville, Greenville, South Carolina, USA
- Christina Matulis, Division of Emergency Medicine, NorthShore University Health System, Evanston, Illinois, USA
- Clare Desmond, Division of Emergency Medicine, NorthShore University Health System, Evanston, Illinois, USA
- Ambrose H. Wong, Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, USA
- Tiffany Moadel, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Charles Lei, Department of Emergency Medicine, Hennepin County Medical Center, Minneapolis, MN, USA
- Brendan W. Munzer, Department of Emergency Medicine, Trinity Health Ann Arbor, Ann Arbor, Michigan, USA
- Hillary Moss, Department of Emergency Medicine, Montefiore Medical Center Moses Campus, Einstein College of Medicine, Bronx, New York, USA
- Nur Ain Nadir, Department of Clinical Sciences, Kaiser Permanente Bernard Tyson School of Medicine, Pasadena, California, USA; Department of Emergency Medicine, Kaiser Permanente Central Valley, Modesto, California, USA
40
Toale C, Morris M, Gross S, O'Keeffe DA, Ryan DM, Boland F, Doherty EM, Traynor OJ, Kavanagh DO. Performance in Irish Selection and Future Performance in Surgical Training. JAMA Surg 2024; 159:538-545. [PMID: 38446454 PMCID: PMC10918576 DOI: 10.1001/jamasurg.2024.0034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 12/04/2023] [Indexed: 03/07/2024]
Abstract
Importance Selection processes for surgical training should aim to identify candidates who will become competent independent practitioners and should aspire to high standards of reliability and validity. Objective To determine the association between measured candidate factors at the time of Irish national selection and assessment outcomes in surgical training, examined via rate of progression to Higher Specialist Training (HST), attrition rates, and performance as assessed through a multimodal framework of workplace-based and simulation-based assessments. Design, Setting, and Participants This retrospective observational cohort study included data from all successful applicants to the Royal College of Surgeons in Ireland (RCSI) national Core Surgical Training (CST) program. Participants included all trainees recruited to dedicated postgraduate surgical training from 2016 to 2020. These data were analyzed from July 11, 2016, through July 10, 2022. Exposures Selection decisions were based on a composite score derived from technical aptitude assessments, undergraduate academic performance, and a 4-station multiple mini-interview. Main Outcomes and Measures Assessment data, attrition rates, and rates of progression to HST were recorded for each trainee. CST performance was assessed using workplace-based and simulation-based technical and nontechnical skill assessments. Potential associations between selection and assessment measures were explored using Pearson correlation, logistic regression, and multiple linear-regression analyses. Results Data were available for 303 trainees. Composite scores were positively associated with progression to HST (odds ratio [OR], 1.09; 95% CI, 1.05-1.13). There was a weak positive correlation, ranging from 0.23 to 0.34, between selection scores and performance across all CST assessments. Multivariable linear regression analysis showed that technical aptitude scores at application were associated with future operative performance assessment scores, both in the workplace (β = 0.31; 95% CI, 0.14-0.48) and in simulated environments (β = 0.57; 95% CI, 0.33-0.81). There was evidence that the interpersonal skills interview station was associated with future performance in simulated communication skill assessments (β = 0.55; 95% CI, 0.22-0.87). Conclusions and Relevance In this study, performance at the time of Irish national selection, measured across technical and nontechnical domains in a multimodal fashion, was associated with future performance in the workplace and in simulated environments. Future studies will be required to explore the consequential validity of selection, including potential unintended effects of selection and ranking on candidate performance.
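A minimal sketch of the kind of logistic-regression analysis reported here (progression to HST regressed on the composite selection score), built with statsmodels on synthetic data and an assumed effect size rather than the study's records:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 303                                  # trainee count from the abstract
composite = rng.normal(60, 10, n)        # hypothetical composite selection scores
logit_p = 0.09 * (composite - 60)        # assumed true effect, for simulation only
progressed = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(composite)           # intercept + composite score
fit = sm.Logit(progressed, X).fit(disp=False)
print(np.exp(fit.params[1]))             # odds ratio per composite-score point
```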
Affiliation(s)
- Conor Toale, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Marie Morris, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Sara Gross, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Dara A O'Keeffe, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Donncha M Ryan, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Fiona Boland, Data Science Centre, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Eva M Doherty, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
- Oscar J Traynor, Department of Surgical Affairs, RCSI University of Medicine and Health Sciences, Royal College of Surgeons in Ireland, Dublin, Ireland
41
Ohlenburg H, Arnemann PH, Hessler M, Görlich D, Zarbock A, Friederichs H. Flipped Classroom: Improved team performance during resuscitation training through interactive pre-course content - a cluster-randomised controlled study. BMC MEDICAL EDUCATION 2024; 24:459. [PMID: 38671434 PMCID: PMC11046966 DOI: 10.1186/s12909-024-05438-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 04/17/2024] [Indexed: 04/28/2024]
Abstract
BACKGROUND Resuscitation is a team effort, and it is increasingly acknowledged that team cooperation requires training. Staff shortages in many healthcare systems worldwide, as well as recent pandemic restrictions, limit opportunities for collaborative team training. To address this challenge, a learner-centred approach known as flipped learning has been successfully implemented. This model comprises self-directed, asynchronous pre-course learning, followed by knowledge application and skill training during in-class sessions. The existing evidence supports the effectiveness of this approach for the acquisition of cognitive skills, but it is uncertain whether the flipped classroom model is suitable for the acquisition of team skills. The objective of this study was to determine whether a flipped classroom approach, with an online workshop prior to an instructor-led course, could improve team performance and key resuscitation variables during classroom training. METHODS A single-centre, cluster-randomised, rater-blinded study was conducted on 114 final-year medical students at a University Hospital in Germany. The study randomly assigned students to either the intervention or the control group using a computer script. Each team, regardless of group, performed two advanced life support (ALS) scenarios on a simulator. The two groups differed in the order in which they completed the flipped e-learning curriculum: the intervention group started with the e-learning component, and the control group started with an ALS scenario. Simulators were used for recording and analysing resuscitation performance indicators, while professionals assessed team performance as the primary outcome. RESULTS The analysis was conducted on the data of 96 participants in 21 teams, comprising 11 intervention and 10 control teams. The intervention teams achieved higher team performance ratings during the first scenario compared to the control teams (estimated marginal mean of global rating: 7.5 vs 5.6, p < 0.01; performance score: 4.4 vs 3.8, p < 0.05; global score: 4.4 vs 3.7, p < 0.001). However, these differences were not observed in the second scenario, where both study groups had used the e-learning tool. CONCLUSION Flipped classroom approaches using learner-paced e-learning prior to hands-on training can improve team performance. TRIAL REGISTRATION German Clinical Trials Register ( https://drks.de/search/de/trial/DRKS00013096 ).
Affiliation(s)
- Hendrik Ohlenburg, Institute of Education and Student Affairs, Studienhospital Münster, University of Münster, 48149 Münster, Germany
- Philip-Helge Arnemann, Department of Anaesthesiology, Intensive Care and Pain Medicine, Münster University Hospital, Münster, Germany
- Michael Hessler, Department of Anaesthesiology, Intensive Care and Pain Medicine, Münster University Hospital, Münster, Germany
- Dennis Görlich, Institute of Biostatistics and Clinical Research, University of Münster, Münster, Germany
- Alexander Zarbock, Department of Anaesthesiology, Intensive Care and Pain Medicine, Münster University Hospital, Münster, Germany
- Hendrik Friederichs, Medical Education Research Group, Medical School OWL, Bielefeld University, Bielefeld, Germany
42
Wynn ST. Improving self-efficacy in behavioral health through interprofessional education. J Am Assoc Nurse Pract 2024; 36:202-209. [PMID: 37732894 DOI: 10.1097/jxx.0000000000000951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2023] [Accepted: 08/30/2023] [Indexed: 09/22/2023]
Abstract
ABSTRACT Interprofessional education (IPE) is important in preparing health profession students to practice in a workforce dependent on teamwork and collaboration. Many health profession students graduate without ever having active shared learning experiences in the academic setting. Opportunities for students to participate in activities that promote self-efficacy in competencies related to interprofessional collaborative practice are essential. The purpose of the project was to assess health profession students' perception of self-efficacy related to the core competencies of IPE. The project used a pre/post quantitative survey research design comprising a sample of students enrolled in clinical practicums in behavioral health care settings. Using standardized patients, students participated in timed simulated encounters. Participants (n = 36) completed the 16-item Interprofessional Education Collaborative Competency Self-Assessment Tool upon conclusion of the learning activity. Survey responses were scored on a 5-point Likert-type scale, with high scores indicating a stronger level of agreement of perceived self-efficacy. On the postsurvey, most items were rated as "agree" or "strongly agree." Item means ranged from 4.64 to 4.81. A positive association was found between students' self-efficacy and the utilization of standardized patients within an interprofessional experiential learning activity. The intervention contributed to improving self-efficacy in interprofessional competencies related to collaborative interaction and values.
Affiliation(s)
- Stephanie T Wynn, Moffett & Sanders School of Nursing, Samford University, Birmingham, Alabama
43
Schneyer RJ, Scheib SA, Green IC, Molina AL, Mara KC, Wright KN, Siedhoff MT, Truong MD. Validation of a Simulation Model for Robotic Myomectomy. J Minim Invasive Gynecol 2024; 31:330-340.e1. [PMID: 38307222 DOI: 10.1016/j.jmig.2024.01.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Revised: 01/15/2024] [Accepted: 01/17/2024] [Indexed: 02/04/2024]
Abstract
STUDY OBJECTIVE Several simulation models have been evaluated for gynecologic procedures such as hysterectomy, but there are limited published data for myomectomy. This study aimed to assess the validity of a low-cost robotic myomectomy model for surgical simulation training. DESIGN Prospective cohort simulation study. SETTING Surgical simulation laboratory. PARTICIPANTS Twelve obstetrics and gynecology residents and 4 fellowship-trained minimally invasive gynecologic surgeons were recruited for a 3:1 novice-to-expert ratio. INTERVENTIONS A robotic myomectomy simulation model was constructed using <$5 worth of materials: a foam cylinder, felt, a stress ball, bandage wrap, and multipurpose sealing wrap. Participants performed a simulation task involving 2 steps: fibroid enucleation and hysterotomy repair. Video-recorded performances were timed and scored by 2 blinded reviewers using the validated Global Evaluative Assessment of Robotic Skills (GEARS) scale (5-25 points) and a modified GEARS scale (5-40 points), which adds 3 novel domains specific to robotic myomectomy. Performance was also scored using predefined task errors. Participants completed a post-task questionnaire assessing the model's realism and utility. MEASUREMENTS AND MAIN RESULTS Median task completion time was shorter for experts than novices (9.7 vs 24.6 min, p = .001). Experts scored higher than novices on both the GEARS scale (median 23 vs 12, p = .004) and modified GEARS scale (36 vs 20, p = .004). Experts made fewer task errors than novices (median 15.5 vs 37.5, p = .034). For interrater reliability of scoring, the intraclass correlation coefficient was calculated to be 0.91 for the GEARS assessment, 0.93 for the modified GEARS assessment, and 0.60 for task errors. Using the contrasting groups method, the passing mark for the simulation task was set to a minimum modified GEARS score of 28 and a maximum of 28 errors. Most participants agreed that the model was realistic (62.5%) and useful for training (93.8%). CONCLUSION We have demonstrated evidence supporting the validity of a low-cost robotic myomectomy model. This simulation model and the performance assessments developed in this study provide further educational tools for robotic myomectomy training.
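The contrasting-groups method used here to set the passing mark can be illustrated with a short sketch: fit a normal curve to each group's scores and take the point between the group means where the two densities cross. The scores below are invented, and real implementations may instead use percentiles or error-balancing variants of the method.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

novice = np.array([10, 12, 14, 18, 20, 22, 19, 15, 13, 17, 21, 16])  # invented
expert = np.array([34, 36, 38, 33, 37])                               # invented

mu_n, sd_n = novice.mean(), novice.std(ddof=1)
mu_e, sd_e = expert.mean(), expert.std(ddof=1)

# Cut score: the point between the group means where the fitted normal
# densities are equal (a borderline performance is equally likely under both).
cut = brentq(lambda x: norm.pdf(x, mu_n, sd_n) - norm.pdf(x, mu_e, sd_e),
             mu_n, mu_e)
print(round(cut, 1))
```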
Affiliation(s)
- Rebecca J Schneyer, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, California
- Stacey A Scheib, Department of Obstetrics and Gynecology, Louisiana State University Health Sciences Center, New Orleans, Louisiana
- Isabel C Green, Department of Obstetrics and Gynecology, Mayo Clinic, Rochester, Minnesota
- Andrea L Molina, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, California
- Kristin C Mara, Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
- Kelly N Wright, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, California
- Matthew T Siedhoff, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, California
- Mireille D Truong, Department of Obstetrics and Gynecology, Cedars-Sinai Medical Center, Los Angeles, California
44
Lopina N. A Mathematical Model Based on Stratifying the Severity of Medical Errors for Building Scenarios for Clinical Cases With Branching. Cureus 2024; 16:e58089. [PMID: 38738126 PMCID: PMC11088723 DOI: 10.7759/cureus.58089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/11/2024] [Indexed: 05/14/2024] Open
Abstract
Background There are no mathematical models or score systems available for assessing and creating clinical case simulations based on branching scenario scripts. Objective This study aimed to develop a mathematical model, based on stratifying the severity of medical errors, for building clinical cases with branching scenarios for clinical simulation. Methods This study was undertaken from August 2020 to August 2023. To build a mathematical model for constructing branching clinical case scenarios, the classification of the seriousness of medication errors was used. The mathematical model was built for predicting and modeling the development of a clinical situation and for use as an assessment strategy. The study recruited a total of 34 participants, with 16 participants assigned to branching scenarios without the mathematical model and 18 participants assigned to branching scenarios with the mathematical model. Results A simple scoring scheme, based on stratifying the severity of medical errors and correct decisions in clinical practice, was proposed for building interactive training scenarios with branching. According to this scoring algorithm, each clinical decision-making step is awarded between -10 and +10 points, and the points for each block in the decision-making process are then summed. Each step in the overall clinical decision-making strategy is stratified by the proposed algorithm, and finally, the results of internal validation and implementation are presented. Conclusion A mathematical model and score system for building clinical case scenarios, based on branching and the classification of the seriousness of medication errors, was developed. This system could help in predicting and modeling the development of events in particular clinical situations and in assessing competency formation in medical simulation.
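A minimal sketch of the scoring algorithm as described (each decision step worth between -10 and +10 points, summed per block and then overall); the outcome categories and point values below are hypothetical stand-ins, not the paper's calibrated weights.

```python
# Hypothetical point values for each possible step outcome (-10 to +10).
STEP_POINTS = {
    "correct": 10,
    "suboptimal": 4,
    "minor_error": -2,
    "moderate_error": -5,
    "severe_error": -10,
}

def score_scenario(blocks):
    """blocks: one list of step outcomes per decision block in the scenario."""
    block_scores = [sum(STEP_POINTS[step] for step in block) for block in blocks]
    return block_scores, sum(block_scores)

path = [["correct", "minor_error"], ["suboptimal", "correct"], ["severe_error"]]
print(score_scenario(path))   # per-block totals and the overall scenario score
```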
Affiliation(s)
- Nataliia Lopina, Simulation Training Platform "ClinCaseQuest", Med Inform Group LLC, Kharkiv, Ukraine
45
Carstensen SMD, Just SA, Pfeiffer-Jensen M, Østergaard M, Konge L, Terslev L. Solid validity evidence for two tools assessing competences in musculoskeletal ultrasound: a validity study. Rheumatology (Oxford) 2024; 63:765-771. [PMID: 37307078 DOI: 10.1093/rheumatology/kead286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/21/2023] [Accepted: 05/19/2023] [Indexed: 06/13/2023] Open
Abstract
OBJECTIVES Musculoskeletal ultrasound (MSUS) is increasingly used by rheumatologists in daily clinical practice. However, MSUS is only valuable in trained hands, and assessment of trainee competences is therefore essential before independent practice. Thus, this study aimed to establish validity evidence for the EULAR and the Objective Structured Assessment of Ultrasound Skills (OSAUS) tools used for assessing MSUS competences. METHODS Thirty physicians with different levels of MSUS experience (novices, intermediates, and experienced) performed four MSUS examinations of different joint areas on the same rheumatoid arthritis patient. All examinations were video recorded (n = 120), anonymized, and subsequently assessed in random order by two blinded raters, first using the OSAUS assessment tool and then, 1 month later, the EULAR tool. RESULTS The inter-rater reliability between the two raters was high for both the OSAUS and EULAR tools, with Pearson correlation coefficients (PCC) of 0.807 and 0.848, respectively. Both tools demonstrated excellent inter-case reliability, with a Cronbach's alpha of 0.970 for OSAUS and 0.964 for EULAR. Furthermore, there was a strong linear correlation between the OSAUS and EULAR performance scores and the participants' experience levels (R2 = 0.897 and R2 = 0.868, respectively), with significant discrimination between different MSUS experience levels (P < 0.001 for both). CONCLUSIONS MSUS operator competences can be assessed reliably and validly using either the OSAUS or the EULAR assessment tool, thereby allowing uniform competency-based MSUS education in the future. Although both tools demonstrated high inter-rater reliability, the EULAR tool was superior to OSAUS. TRIAL REGISTRATION ClinicalTrials.gov, http://clinicaltrials.gov, NCT05256355.
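The two reliability statistics reported here, a Pearson correlation between raters and Cronbach's alpha across cases, can be computed as in the sketch below; the score matrices are invented for illustration.

```python
import numpy as np

def cronbach_alpha(cases):
    """cases: k x n matrix (k cases rated for each of n participants)."""
    cases = np.asarray(cases, dtype=float)
    k = cases.shape[0]
    case_vars = cases.var(axis=1, ddof=1).sum()
    total_var = cases.sum(axis=0).var(ddof=1)   # variance of participant totals
    return k / (k - 1) * (1 - case_vars / total_var)

rater1 = np.array([55, 62, 70, 81, 90, 47])     # invented total scores, rater 1
rater2 = np.array([58, 60, 73, 79, 92, 50])     # invented total scores, rater 2
print(np.corrcoef(rater1, rater2)[0, 1])        # inter-rater reliability (PCC)

cases = np.array([[10, 14, 18, 22],             # invented 4 cases x 4 participants
                  [11, 13, 19, 23],
                  [9, 15, 17, 24],
                  [12, 14, 18, 21]])
print(cronbach_alpha(cases))                    # inter-case reliability
```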
Affiliation(s)
- Stine Maya Dreier Carstensen, Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
- Søren Andreas Just, Section of Rheumatology, Department of Medicine, Svendborg Hospital-Odense University Hospital, Svendborg, Denmark
- Mogens Pfeiffer-Jensen, Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark
- Mikkel Østergaard, Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
- Lars Konge, Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark; Copenhagen Academy for Medical Education and Simulation, The Capital Region of Denmark, Copenhagen, Denmark
- Lene Terslev, Copenhagen Center for Arthritis Research, Center for Rheumatology and Spine Diseases, Centre for Head and Orthopaedics, Copenhagen University Hospital-Rigshospitalet Glostrup, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, The University of Copenhagen, Copenhagen, Denmark
46
Zackoff MW, Cruse B, Sahay RD, Zhang B, Sosa T, Schwartz J, Depinet H, Schumacher D, Geis GL. Multiuser immersive virtual reality simulation for interprofessional sepsis recognition and management. J Hosp Med 2024; 19:185-192. [PMID: 38238875 DOI: 10.1002/jhm.13274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Revised: 12/12/2023] [Accepted: 12/22/2023] [Indexed: 03/02/2024]
Abstract
INTRODUCTION Sepsis is a leading cause of pediatric mortality. While there has been significant effort toward improving adherence to evidence-based care, gaps remain. Immersive multiuser virtual reality (MUVR) simulation may be an approach to enhance provider clinical competency and situation awareness for sepsis. METHODS A prospective, observational pilot of an interprofessional MUVR simulation assessing a decompensating patient from sepsis was conducted from January to June 2021. The study objective was to establish validity and acceptability evidence for the platform by assessing differences in sepsis recognition between experienced and novice participants. Interprofessional teams assessed and managed a patient together in the same VR experience, with the primary outcome of time to recognition of sepsis utilizing the Situation Awareness Global Assessment Technique, analyzed using a logistic regression model. Secondary outcomes were perceived clinical accuracy, relevancy to practice, and side effects experienced. RESULTS Seventy-two simulations included 144 participants. Experienced providers had significantly higher odds than novices of recognizing sepsis at 2 minutes into the simulation as opposed to later time points (cumulative odds ratio 3.70; 95% confidence interval: 1.15-9.07; p = .004). Participants agreed that the simulation was clinically accurate (98.6%) and would impact their practice (81.1%), with a high degree of immersion (95.7%-99.3%), and the majority of side effects were perceived as mild (70.4%-81.4%). CONCLUSIONS Our novel MUVR simulation demonstrated significant differences in sepsis recognition between experienced and novice participants. This validity evidence, along with the data on the simulation's acceptability, supports expanded use in training and assessment.
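The reported effect is a cumulative odds ratio from an ordinal (logistic) model of recognition time. As a simplified stand-in, the sketch below computes a plain odds ratio with a Woolf 95% confidence interval from a 2 x 2 table (recognized by 2 minutes vs later, experienced vs novice); the counts are invented and this is not the study's actual model.

```python
import math

# Invented 2 x 2 counts: rows = experienced / novice teams,
# columns = sepsis recognized by 2 minutes / recognized later.
a, b = 28, 12    # experienced: early, late
c, d = 14, 18    # novice: early, late

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # Woolf method
lo, hi = (math.exp(math.log(odds_ratio) + z * se_log_or) for z in (-1.96, 1.96))
print(f"OR={odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```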
Affiliation(s)
- Matthew W Zackoff, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Critical Care Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Center for Simulation and Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Bradley Cruse, Center for Simulation and Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Rashmi D Sahay, Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Bin Zhang, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Tina Sosa, Department of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, New York, USA; Division of Pediatric Hospital Medicine, University of Rochester Medical Center, Rochester, New York, USA; UR Medicine Quality Institute, University of Rochester Medical Center, Rochester, New York, USA
- Jerome Schwartz, Patient Services, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Holly Depinet, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Daniel Schumacher, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Gary L Geis, Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio, USA; Center for Simulation and Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
47
Gilliam C, Ramos M, Hilgenberg S, Rassbach C, Blankenburg R. Laying the Foundation: How to Develop Rigorous Health Professions Education Scholarship. Hosp Pediatr 2024; 14:e132-e137. [PMID: 38178786 DOI: 10.1542/hpeds.2023-007162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2024]
Affiliation(s)
- Margarita Ramos, Pediatric Hospital Medicine, Children's National Medical Center, Washington, DC
- Sarah Hilgenberg, Pediatrics, Stanford University School of Medicine, Stanford, California
- Caroline Rassbach, Pediatrics, Stanford University School of Medicine, Stanford, California
48
Ismail FW, Afzal A, Durrani R, Qureshi R, Awan S, Brown MR. Exploring Endoscopic Competence in Gastroenterology Training: A Simulation-Based Comparative Analysis of GAGES, DOPS, and ACE Assessment Tools. ADVANCES IN MEDICAL EDUCATION AND PRACTICE 2024; 15:75-84. [PMID: 38312535 PMCID: PMC10838491 DOI: 10.2147/amep.s427076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 01/09/2024] [Indexed: 02/06/2024]
Abstract
Purpose Accurate and convenient evaluation tools are essential to document endoscopic competence in gastroenterology training programs. The Direct Observation of Procedural Skills (DOPS), Global Assessment of Gastrointestinal Endoscopic Skills (GAGES), and Assessment of Endoscopic Competency (ACE) are widely used, validated competency assessment tools for gastrointestinal endoscopy. However, studies comparing these 3 tools are lacking, leading to a lack of standardization in this assessment. Through simulation, this study seeks to determine the most reliable, comprehensive, and user-friendly tool for standardizing endoscopy competency assessment. Methods A mixed-methods quantitative-qualitative approach with a sequential deductive design was utilized. All nine trainees in a gastroenterology training program were assessed on endoscopic procedural competence using the Simbionix Gi-bronch-mentor high-fidelity simulator, with 2 faculty raters independently completing the 3 assessment forms of DOPS, GAGES, and ACE. Psychometric analysis was used to evaluate the tools' reliability. Additionally, faculty trainers participated in a focused group discussion (FGD) to investigate their experience in using the tools. Results For upper GI endoscopy, Cronbach's alpha values for internal consistency were 0.53, 0.8, and 0.87 for ACE, DOPS, and GAGES, respectively. Inter-rater reliability (IRR) scores were 0.79 (0.43-0.92) for ACE, 0.75 (-0.13-0.82) for DOPS, and 0.59 (-0.90-0.84) for GAGES. For colonoscopy, Cronbach's alpha values for internal consistency were 0.53, 0.82, and 0.85 for ACE, DOPS, and GAGES, respectively. IRR scores were 0.72 (0.39-0.96) for ACE, 0.78 (-0.12-0.86) for DOPS, and 0.53 (-0.91-0.78) for GAGES. The FGD yielded three key themes: the ideal tool should be scientifically sound, comprehensive, and user-friendly. Conclusion The DOPS tool performed favourably in both the psychometric evaluation and the qualitative assessment and was considered the most balanced among the three assessment tools. We propose that the DOPS tool be used for endoscopic skill assessment in gastroenterology training programs. However, gastroenterology training programs need to match their learning outcomes with the available assessment tools to determine the most appropriate one in their context.
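Inter-rater reliability figures such as the IRR values here are commonly intraclass correlations. Below is a sketch of a two-way random-effects ICC(2,1) computed directly from its ANOVA mean squares, on an invented trainee-by-rater matrix (the paper does not publish its raw ratings, so the numbers are illustrative only).

```python
import numpy as np

def icc2_1(x):
    """Two-way random-effects, single-rater ICC(2,1) for an n x k matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

ratings = np.array([[7, 8], [5, 6], [9, 9], [4, 5], [6, 8], [8, 7]])  # invented
print(round(icc2_1(ratings), 2))
```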
Affiliation(s)
- Azam Afzal, Aga Khan University, Karachi, Sindh, Pakistan
- Safia Awan, Aga Khan University, Karachi, Sindh, Pakistan
- Michelle R Brown, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, USA
49
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600 PMCID: PMC10771126 DOI: 10.1093/bjs/znad331] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 09/18/2023] [Accepted: 09/19/2023] [Indexed: 11/14/2023]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical) with higher accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, than in real surgery, where accuracies ranged from 67 to 100 per cent. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou, Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK
- Walaa Ghamrawi, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos, Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis, Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov, Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), UCL, London, UK; Computer Science, UCL, London, UK
- Nader K Francis, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
50
Teslak KE, Post JH, Tolsgaard MG, Rasmussen S, Purup MM, Friis ML. Simulation-based assessment of upper abdominal ultrasound skills. BMC MEDICAL EDUCATION 2024; 24:15. [PMID: 38172820 PMCID: PMC10765816 DOI: 10.1186/s12909-023-05018-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 12/28/2023] [Indexed: 01/05/2024]
Abstract
BACKGROUND Ultrasound is a safe and effective diagnostic tool used within several specialties. However, the quality of ultrasound scans relies on sufficiently skilled clinician operators. The aim of this study was to explore the validity of automated assessments of upper abdominal ultrasound skills using an ultrasound simulator. METHODS Twenty-five novices and five experts were recruited, all of whom completed an assessment program for the evaluation of upper abdominal ultrasound skills on a virtual reality simulator. The program included five modules that assessed different organ systems using automated simulator metrics. We used Messick's framework to explore the validity evidence of these simulator metrics to determine the contents of a final simulator test. We used the contrasting groups method to establish a pass/fail level for the final simulator test. RESULTS Thirty-seven out of 60 metrics were able to discriminate between novices and experts (p < 0.05). The median simulator score on the final simulator test, including only the metrics with validity evidence, was 26.68% (range: 8.1-40.5%) for novices and 85.1% (range: 56.8-91.9%) for experts. The internal structure was assessed by Cronbach's alpha (0.93) and the intraclass correlation coefficient (0.89). The pass/fail level was determined to be 50.9%. With this pass/fail criterion, no novices passed and no experts failed. CONCLUSIONS This study collected validity evidence for simulation-based assessment of upper abdominal ultrasound examinations, which is the first step toward competency-based training. Future studies may examine how competency-based training in the simulated setting translates into improvements in clinical performance.
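The metric-screening step, keeping only the simulator metrics that discriminate novices from experts at p < 0.05, might look like the following sketch; the metric names, scores, and the choice of the Mann-Whitney U test are illustrative assumptions, not the study's exact analysis (which retained 37 of 60 candidate metrics).

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Invented metric scores: (novice values, expert values) per candidate metric.
metrics = {
    "organ_coverage_pct": (np.array([40, 55, 35, 60, 50]), np.array([90, 85, 95])),
    "probe_path_cm":      (np.array([210, 190, 250, 230, 205]), np.array([120, 110, 140])),
    "exam_time_s":        (np.array([300, 280, 310, 290, 305]), np.array([295, 285, 300])),
}
kept = [name for name, (novice, expert) in metrics.items()
        if mannwhitneyu(novice, expert).pvalue < 0.05]
print(kept)   # metrics retained for the final simulator test
```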
Affiliation(s)
- Kristina E Teslak, NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
- Julie H Post, NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard, Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Copenhagen, Denmark
- Sten Rasmussen, Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Mathias M Purup, Department of Radiology, Aalborg University Hospital, Aalborg, Denmark
- Mikkel L Friis, NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark