1
Potter A, Munsch C, Watson E, Hopkins E, Kitromili S, O'Neill IC, Larbie J, Niittymaki E, Ramsay C, Burke J, Ralph N. Identifying Research Priorities in Digital Education for Health Care: Umbrella Review and Modified Delphi Method Study. J Med Internet Res 2025; 27:e66157. [PMID: 39969988] [DOI: 10.2196/66157]
Abstract
BACKGROUND In recent years, the use of digital technology in the education of health care professionals has surged, partly driven by the COVID-19 pandemic. However, there is still a need for focused research to establish evidence of its effectiveness. OBJECTIVE This study aimed to define the gaps in the evidence for the efficacy of digital education and to identify priority areas where future research has the potential to contribute to our understanding and use of digital education. METHODS We used a 2-stage approach to identify research priorities. First, an umbrella review of the recent literature (published between 2020 and 2023) was performed to identify and build on existing work. Second, expert consensus on the priority research questions was obtained using a modified Delphi method. RESULTS A total of 8857 potentially relevant papers were identified. Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology, we included 217 papers for full review. All papers were either systematic reviews or meta-analyses. A total of 151 research recommendations were extracted from the 217 papers. These were analyzed, recategorized, and consolidated to create a final list of 63 questions. From these, a modified Delphi process with 42 experts was used to produce the five top-rated research priorities: (1) How do we measure the learning transfer from digital education into the clinical setting? (2) How can we optimize the use of artificial intelligence, machine learning, and deep learning to facilitate education and training? (3) What are the methodological requirements for high-quality rigorous studies assessing the outcomes of digital health education? (4) How does the design of digital education interventions (eg, format and modality) in health professionals' education and training curriculum affect learning outcomes? and (5) How should learning outcomes in the field of health professions' digital education be defined and standardized?
CONCLUSIONS This review provides a prioritized list of research gaps in digital education in health care, which will be of use to researchers, educators, education providers, and funding agencies. Additional proposals are discussed regarding the next steps needed to advance this agenda, aiming to promote meaningful and practical research on the use of digital technologies and drive excellence in health care education.
Affiliation(s)
- Alison Potter
- Technology Enhanced Learning, NHS England, Southampton, United Kingdom
- Chris Munsch
- Technology Enhanced Learning, NHS England, Leeds, United Kingdom
- Elaine Watson
- Technology Enhanced Learning, NHS England, Oxford, United Kingdom
- Emily Hopkins
- Knowledge Management Service, NHS England, Manchester, United Kingdom
- Sofia Kitromili
- Technology Enhanced Learning, NHS England, Southampton, United Kingdom
- Judy Larbie
- Technology Enhanced Learning, NHS England, London, United Kingdom
- Essi Niittymaki
- Technology Enhanced Learning, NHS England, London, United Kingdom
- Catriona Ramsay
- Technology Enhanced Learning, NHS England, Newcastle upon Tyne, United Kingdom
- Joshua Burke
- Manchester Foundation Trust, Manchester, United Kingdom
- Neil Ralph
- Technology Enhanced Learning, NHS England, London, United Kingdom
2
Sałacińska I, Trojnar P, Gebriné KÉ, Törő V, Sárváry A, Więch P. A comparative study of traditional high-fidelity (manikin-based) simulation and virtual high-fidelity simulations concerning their effectiveness and perception. Front Med (Lausanne) 2025; 12:1523768. [PMID: 39995686] [PMCID: PMC11847899] [DOI: 10.3389/fmed.2025.1523768]
Abstract
Introduction Medical simulation has become an integral part of medical student education, yet there is a limited body of literature comparing virtual and high-fidelity simulation in terms of effectiveness and student perception. Methods A total of 130 medical students at the University of Rzeszów participated in this cross-sectional study. The respondents were divided into two groups: students who completed a selected scenario using a virtual patient (Body Interact) and students who completed a scenario using traditional high-fidelity (manikin-based) simulation (HFS). After completing the scenario, students filled in the following questionnaires: the Simulation Design Scale (SDS), the Educational Practices Questionnaire (EPQ), the Student Satisfaction and Self-Confidence in Learning Scale (SSCL), and a customized survey questionnaire. Results The study found no significant difference in effectiveness between students exposed to either type of simulation. Detailed analysis within specific categories - problem-solving, teamwork, and active learning - also showed no significant differences between virtual and traditional HFS. Likewise, there were no notable differences between virtual and traditional simulations regarding satisfaction with learning, self-confidence in learning, and expectations. However, within the virtual simulation group, females rated active learning significantly higher. Students aged 24-33 rated satisfaction with learning, self-confidence, overall effectiveness and perception of HFS, problem-solving, and active learning more favorably. Additionally, perceived effectiveness of, and satisfaction with, HFS increased with students' year of study. Conclusion Both virtual patient simulation and traditional HFS foster the development of practical skills, as well as soft skills, in medical students facing challenging situations.
Affiliation(s)
- Izabela Sałacińska
- Faculty of Health Sciences and Psychology, Collegium Medicum, University of Rzeszów, Rzeszów, Poland
- Patrycja Trojnar
- Institute of Health Care, Academy of Applied Sciences, Przemyśl, Poland
- Krisztina Éles Gebriné
- Department of Nursing and Midwifery, Faculty of Health Sciences, University of Debrecen, Nyíregyháza, Hungary
- Viktória Törő
- Department of Nursing and Midwifery, Faculty of Health Sciences, University of Debrecen, Nyíregyháza, Hungary
- Doctoral School of Health Sciences, University of Debrecen, Nyíregyháza, Hungary
- Attila Sárváry
- Department of Integrative Health Sciences, Faculty of Health Sciences, University of Debrecen, Nyíregyháza, Hungary
- Paweł Więch
- Faculty of Health Sciences and Psychology, Collegium Medicum, University of Rzeszów, Rzeszów, Poland
3
Happ MN, Howell TC, Pollak KI, Happ MF, Georgoff P, Mallory PP, Straube T, Greenberg JA, Tracy ET, Antiel RM. Building Surgical Character: A Dynamic Simulation Curriculum for Nontechnical Skills. J Surg Educ 2025; 82:103416. [PMID: 39842161] [DOI: 10.1016/j.jsurg.2024.103416]
Abstract
OBJECTIVE Previous simulation curricula for nontechnical skills have focused on communication skills or empathy in isolation from technical skills, using feedback from a single rater. We aimed to develop and pilot an expanded simulation curriculum focused on situational performance of select character attributes, with the goals of determining curricular feasibility, evaluating a novel psychometric rating tool, and gauging participant receptivity. DESIGN The simulation consisted of 2 contiguous parts requiring demonstration of both technical and nontechnical skills. Participants received immediate informal feedback on technical skills; nontechnical skills, namely empathy, courage, composure, humility, and clarity, were formally assessed by external raters using a novel global psychometric rating tool. These skills were also assessed by a standardized patient actor using the CARE Measure for empathy and via participant self-assessment. After the simulation, participants completed a self-reflection exercise and individually debriefed with personalized feedback from research team coaches. At completion, participants were invited to complete a post-curriculum survey. Intraclass correlation coefficients (ICC) were calculated to evaluate interrater reliability. Wilcoxon rank-sum tests were conducted to compare median attribute scores of student and resident participants. Post-curriculum feedback was reported with representative quotations and percentages. SETTING The simulation was piloted in a dedicated simulation center at a tertiary care academic medical center during Spring 2024. PARTICIPANTS Six general surgery residents and six senior medical students pursuing surgical specialties voluntarily participated. RESULTS Ten participants (6 students, 4 residents) completed all components of the curriculum. Interrater reliability ranged from fair to excellent (ICC 0.68-0.98) for all attributes except Part 1 humility. Residents had significantly higher median scores for courage in both parts, as well as for Part 1 composure and clarity. Students scored significantly higher on Part 1 humility and Part 2 empathy. The empathy scores from the CARE Measure and our global psychometric rating tool were strongly correlated (r = 0.75). Participants generally rated themselves higher than external raters did. Nearly all participants expressed that these skills are important (10, 100%) and not taught enough during training (9, 90%). Overall participant satisfaction was high. CONCLUSIONS This expanded simulation curriculum focused on expression of character attributes as nontechnical skills was feasible and well received by participants. Our global psychometric rating tool demonstrated partial validity, as determined by strong correlation with the validated CARE Measure. This curriculum represents the first of its kind to provide deliberate practice and structured assessment focused on expression of character attributes essential to becoming an effective surgeon.
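The interrater reliability figures above are intraclass correlation coefficients. As a rough illustration of how such a coefficient is computed (the paper does not state which ICC form it used; a two-way random-effects, absolute-agreement, single-rater ICC(2,1) and invented rater data are assumed here), a minimal Python sketch:

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: one row per rated subject, one column per rater.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Two-way ANOVA decomposition: subjects (rows), raters (columns), error.
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical scores from two raters for four simulated encounters.
ratings = [[8, 7], [5, 5], [9, 8], [4, 4]]
print(round(icc2_1(ratings), 3))  # → 0.945
```

With real data, each row would hold one participant's attribute score from each external rater; values near 1 indicate that the raters agree in absolute terms, not merely in rank order.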
Affiliation(s)
- Megan N Happ
- Duke University School of Medicine, Durham, North Carolina
- T Clark Howell
- Department of Surgery, Duke University, Durham, North Carolina
- Kathryn I Pollak
- Department of Population Health Sciences, Duke University, Durham, North Carolina; Cancer Prevention and Control, Duke Cancer Institute, Duke University Medical Center 2914, Durham, North Carolina
- Mallory F Happ
- University of North Carolina School of Medicine, Chapel Hill, North Carolina
- Palen P Mallory
- Department of Pediatrics, Duke University, Durham, North Carolina
- Tobias Straube
- Department of Pediatrics, Duke University, Durham, North Carolina
- Ryan M Antiel
- Department of Surgery, Duke University, Durham, North Carolina
4
Waters KM, Hwu R, Kulkarni M, Okonye J, Zamor R, Chaudhary S, Jergel A, Gillespie S, Lewis A, Krieger R, Menon V, Bell G, Levy J, Prynn T, Regan J, Mathai C, Goodwin N, Holmes S. Use of in situ simulation to improve team performance and utilization of a rapid sequence intubation checklist. AEM Educ Train 2024; 8:e11039. [PMID: 39534112] [PMCID: PMC11551624] [DOI: 10.1002/aet2.11039]
Abstract
Background Intubation checklists have emerged as tools to reduce adverse events and improve efficiency during rapid sequence intubation (RSI) in pediatric emergency departments (PEDs). This study aimed to use multidisciplinary simulation (SIM) training as an educational tool to improve PED team performance during RSI scenarios through utilization of an RSI checklist. Methods We created a checklist modeled after previously published PED checklists. PED multidisciplinary teams participated in video-recorded SIM training sessions in which they ran a scenario requiring intubation three times: first without interruption, then while receiving our intervention of rapid-cycle deliberate practice (RCDP) debriefing focused on checklist utilization and team dynamics. Learners then went through the scenario once more uninterrupted to apply the learned skills. Team performance was evaluated via video review using the Simulation Team Assessment Tool (STAT), focusing on the airway management and human factors sections. Scores were compared before and after the intervention, along with pre- and postintervention surveys. Results A total of 483 learners participated in 64 SIM training sessions, 44 of whom met inclusion criteria and were included for data analysis. Postintervention scores increased for airway management, for human factors, and in total. Least-squares mean differences for total, airway, and human factors scores were 9.55 (95% confidence interval [CI] 7.24-11.85), 4.22 (95% CI 2.91-5.52), and 5.33 (95% CI 3.86-6.80), respectively, all statistically significant (p < 0.001). Surveys demonstrated improved role understanding and comfort with checklist utilization postintervention. Conclusions This study supports the benefit of multidisciplinary SIM training with RCDP-style methodology as an educational method for improving airway management, teamwork skills, and RSI checklist utilization for PED staff. Incorporating additional maintenance SIM sessions for ongoing education would likely provide further benefit and would allow evaluation of skill degradation over time following initial training.
Affiliation(s)
- Kathleen M. Waters
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Ruth Hwu
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Mona Kulkarni
- Pediatric Emergency Medical Association, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Jeffrey Okonye
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Ronine Zamor
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Sofia Chaudhary
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Andrew Jergel
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Scott Gillespie
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Abby Lewis
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Vidya Menon
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Geovonni Bell
- Emory University School of Medicine, Atlanta, Georgia, USA
- Jacob Levy
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Tory Prynn
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Jacqueline Regan
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
- Sherita Holmes
- Emory University School of Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia, USA
5
Wespi R, Schwendimann L, Neher A, Birrenbach T, Schauber SK, Manser T, Sauter TC, Kämmer JE. TEAMs go VR-validating the TEAM in a virtual reality (VR) medical team training. Adv Simul (Lond) 2024; 9:38. [PMID: 39261889] [PMCID: PMC11389291] [DOI: 10.1186/s41077-024-00309-z]
Abstract
BACKGROUND Inadequate collaboration in healthcare can lead to medical errors, highlighting the importance of interdisciplinary teamwork training. Virtual reality (VR) simulation-based training presents a promising, cost-effective approach. This study evaluates the effectiveness of the Team Emergency Assessment Measure (TEAM) for assessing healthcare student teams in VR environments to improve training methodologies. METHODS Forty-two medical and nursing students participated in a VR-based neurological emergency scenario as part of an interprofessional team training program. Their performances were assessed using a modified TEAM tool by two trained coders. Reliability, internal consistency, and concurrent validity of the tool were evaluated using intraclass correlation coefficients (ICC) and Cronbach's alpha. RESULTS Rater agreement on TEAM's leadership, teamwork, and task management domains was high, with ICC values between 0.75 and 0.90. Leadership demonstrated strong internal consistency (Cronbach's alpha = 0.90), while teamwork and task management showed moderate to acceptable consistency (alpha = 0.78 and 0.72, respectively). Overall, the TEAM tool exhibited high internal consistency (alpha = 0.89) and strong concurrent validity with significant correlations to global performance ratings. CONCLUSION The TEAM tool proved to be a reliable and valid instrument for evaluating team dynamics in VR-based training scenarios. This study highlights VR's potential in enhancing medical education, especially in remote or distanced learning contexts. It demonstrates a dependable approach for team performance assessment, adding value to VR-based medical training. These findings pave the way for more effective, accessible interdisciplinary team assessments, contributing significantly to the advancement of medical education.
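The internal-consistency figures above are Cronbach's alpha, which compares the variance of individual scale items with the variance of the summed scale. A minimal sketch of the computation (the item counts and scores below are invented for illustration, not taken from the study):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for one scale.

    scores: one row per rated team, one column per item on the scale.
    """
    k = len(scores[0])  # number of items making up the scale
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item leadership scale scored for five teams (1-4 points).
ratings = [
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 2, 1],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.93
```

When the items move together across teams, item variances are small relative to the variance of the total, and alpha approaches 1; values around 0.7-0.9, as reported above, are conventionally read as acceptable to strong consistency.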
Affiliation(s)
- Rafael Wespi
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Lukas Schwendimann
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Andrea Neher
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Tanja Birrenbach
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Stefan K Schauber
- Centre for Educational Measurement (CEMO) & Unit for Health Sciences Education, University of Oslo, Oslo, Norway
- Tanja Manser
- FHNW School of Applied Psychology, University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland
- Division of Anesthesiology and Intensive Care, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet, Huddinge, Sweden
- Thomas C Sauter
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Juliane E Kämmer
- Department of Emergency Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department of Social and Communication Psychology, University of Göttingen, Göttingen, Germany
6
Soghikian S, Chipman M, Holmes J, Calhoun AW, Mallory LA. Assessing Team Performance in a Longitudinal Neonatal Resuscitation Simulation Training Program: Comparing Validity Evidence to Select the Best Tool. Cureus 2024; 16:e68810. [PMID: 39371693] [PMCID: PMC11456317] [DOI: 10.7759/cureus.68810]
Abstract
Introduction Neonatal resuscitation is a high-acuity, low-occurrence event that requires ongoing practice by interprofessional teams to maintain proficiency. Simulation provides an ideal platform for team training and evaluation of team performance. Our simulation center supports a longitudinal in situ simulation training program for delivery room teams. In addition to adherence to Neonatal Resuscitation Program (NRP) standards, team performance assessment is an essential component of program evaluation and participant feedback. Multiple published teamwork assessment tools exist; our objective was to select the tool with the best validity evidence for our program's needs. Methods We used Messick's framework to assess the validity evidence for potential teamwork assessment tools. Four candidate tools were identified from the literature: the Mayo High Performance Teamwork Scale (Mayo), Team Performance Observation Tool (TPOT), Clinical Teamwork Scale (CTS), and Team Emergency Assessment Measure (TEAM). Relevant context included team versus individual focus, external evaluator versus self-evaluation, and ease of use (which included efficiency, clarity of interpretation, and overall assessment). Three simulation experts identified consensus anchors for each tool and independently reviewed and scored 10 pre-recorded neonatal resuscitation simulations. Raters assigned each tool a rating according to efficiency, ease of interpretation, and completeness of teamwork assessment. Interrater reliability (IRR) was calculated using intraclass correlation for each tool across the three raters. Average team performance scores for each tool were correlated with NRP adherence scores for each video using Spearman's rank coefficient. Results IRR varied across the tools, with Mayo performing best (single-rater ICC 0.55, multi-rater 0.78). All three raters rated Mayo highest for efficiency (mean 4.66 ± 0.577) and ease of use (mean 4 ± 1). However, TPOT and CTS scored highest (mean 4.66 ± 0.577) for overall completeness of teamwork assessment. There was no significant correlation between any teamwork tool and NRP adherence scores. Conclusion Of the four tools assessed, Mayo demonstrated moderate IRR and scored highest for ease of use and efficiency, though not for completeness of assessment. The remaining three tools had poor IRR, which is not an uncommon problem with teamwork assessment tools. Our process underscores that assessment tool validity is contextual. Factors such as a relatively narrow (and high) performance distribution and the clinical context may have contributed to reliability challenges for tools that offered a more complete teamwork assessment.
Affiliation(s)
- Sierra Soghikian
- Maine Track Program, Tufts University School of Medicine, Boston, USA
- Micheline Chipman
- Medical Education and Simulation, Hannaford Center for Safety, Innovation and Simulation, MaineHealth Brighton Campus, Portland, USA
- Jeffrey Holmes
- Emergency Medicine, MaineHealth Maine Medical Center, Portland, USA
- Aaron W Calhoun
- Pediatrics and Critical Care Medicine, University of Louisville, Louisville, USA
- Leah A Mallory
- Medical Education and Simulation, Hannaford Center for Safety, Innovation and Simulation, MaineHealth Brighton Campus, Portland, USA
- Pediatric Hospital Medicine, MaineHealth Barbara Bush Children's Hospital, Portland, USA
7
Khoo DW, Roscoe AJ, Hwang NC. Beyond the self: a novel framework to enhance non-technical team skills for anesthesiologists. Minerva Anestesiol 2023; 89:1115-1126. [PMID: 38019175] [DOI: 10.23736/s0375-9393.23.16729-0]
Abstract
Human factors and non-technical skills (NTS) have been identified as essential contributors to both the propagation and prevention of medical errors in the operating room (OR). Despite extensive study and interventions to nurture and enhance NTS in anesthesiologists, gaps in effective team practice and patient safety remain. Furthermore, the link between added NTS training and clinically significant improvements in outcomes has not yet been demonstrated. We performed a narrative review to summarize the literature on existing systems and initiatives used to measure and nurture NTS in the clinical OR setting. Controlled interventions performed to nurture NTS (n = 13) were identified and compared. We comment on the body of current evidence and highlight the achievements and limitations of interventions published thus far. We then propose a novel education and training framework to further develop and enhance non-technical skills in both individual anesthesiologists and OR teams, using the cardiac anesthesiology environment as a starting point to illustrate its use, with clinical examples. NTS are a key component of enhancing patient safety, and effective framing of NTS concepts is central to applying individual characteristics and skills in OR team environments and to achieving tangible, beneficial patient outcomes.
Affiliation(s)
- Deborah W Khoo
- Department of Anesthesiology, Singapore General Hospital, Singapore, Singapore
- Andrew J Roscoe
- Department of Anesthesiology, Singapore General Hospital, Singapore, Singapore
- Department of Cardiothoracic Anesthesia, National Heart Center Singapore, Singapore, Singapore
- Nian C Hwang
- Department of Anesthesiology, Singapore General Hospital, Singapore, Singapore
- Department of Cardiothoracic Anesthesia, National Heart Center Singapore, Singapore, Singapore
8
Anand A, Jensen R, Korndorffer JR. More is not better: A scoping review of simulation in transition to residency programs. Surgery 2023; 174:1340-1348. [PMID: 37852830] [DOI: 10.1016/j.surg.2023.08.030]
Abstract
BACKGROUND Transition to residency programs frequently use simulation to promote clinical skills but place limited emphasis on non-clinical skills. We conducted a scoping review to determine how simulation is being used in transition to residency programs and the key non-clinical skills addressed by simulation activities and tools in these programs. METHODS We searched PubMed, Scopus, and Embase to identify articles addressing transition to residency, simulation, and non-clinical skills/attributes. Two authors independently screened all abstracts and full-text articles and identified the non-clinical attributes elicited in each study. Using descriptive statistics, we characterized the simulation activities and tools and the number and type of non-clinical attributes captured in the programs. Using analysis of variance, we compared the number of non-clinical attributes elicited based on the number of simulation activities used and, separately, based on the number of simulation tools used. RESULTS We identified 38 articles that met the study criteria. We characterized simulation activities as mock paging (37%), case-based scenarios (74%), and/or procedural skills training (39%). The most common simulation tool was the standardized patient (64.8%), and the most commonly elicited non-clinical attributes were communication skills, critical thinking, and teamwork. Using more simulation activity categories or simulation tools did not increase the number of non-clinical skills elicited. CONCLUSION Simulation is used broadly in transition to residency programs but provides training in only a few of the non-clinical skills required for a successful transition. Incorporating more simulation activities or tools does not increase the number of non-clinical attributes elicited, illustrating the importance of developing more targeted simulation activities to promote non-clinical skills more effectively.
Affiliation(s)
- Ananya Anand
- Department of Surgery, Stanford University, Stanford, CA
- Rachel Jensen
- Department of Surgery, Stanford University, Stanford, CA. https://twitter.com/GSEC_Surgery
- James R Korndorffer
- Department of Surgery, Stanford University, Stanford, CA. https://twitter.com/StanfordSurgery
9
Can Different Admissions to Medical School Predict Performance of Non-Technical Skill Performance in Simulated Clinical Settings? Healthcare (Basel) 2022; 11:healthcare11010046. [PMID: 36611506] [PMCID: PMC9818855] [DOI: 10.3390/healthcare11010046]
Abstract
Non-technical skills (NTS) in medical care are essential to ensure patient safety. Focusing on applicants' NTS during medical school admission could be a promising approach to ensure that future physicians master NTS at a high level. Next to pre-university educational attainment (PEA), many selection tests have been developed worldwide to facilitate and standardise the selection process of medical students. The predictive validity of these tests regarding NTS performance in clinical settings has not yet been investigated. We therefore explored the predictive validity of the Hamburg MMI (HAM-Int), the HAM-Nat, PEA, the waiting list quota, and other quotas (for example, places designated by the Federal Armed Forces) for NTS performance in clinical emergency medicine training of medical students. Between 2017 and 2020, N = 729 second-, third-, and fourth-year students were enrolled in the study. The mean age of participants was 26.68 years (SD 3.96), and 49% were female. The students' NTS were assessed during simulated emergency training scenarios with a validated rating tool. Students admitted via the waiting quota or designated by the Armed Forces performed significantly better than students admitted by excellent PEA (p = 0.026). Non-EU students performed significantly worse (p = 0.003). Our findings provide further insight into whether and how admission to medical school can predict the NTS performance of future physicians.