1. Narasimha S, Obuseh M, Anton NE, Chen H, Chakrabarty R, Stefanidis D, Yu D. Eye tracking and audio sensors to evaluate surgeon's non-technical skills: an empirical study. Applied Ergonomics 2024; 119:104320. PMID: 38797012. DOI: 10.1016/j.apergo.2024.104320.
Abstract
Non-Technical Skills (NTS) of medical teams are currently measured using subjective and resource-intensive ratings given by experts. This study explores whether objective NTS assessment approaches using eye-tracking and audio sensors can measure teamwork and communication skills in surgery. Eight surgeons participated in a simulated two-phase surgical scenario developed to assess their NTS. Sensor-based audio, eye-tracking and video data were collected and analyzed along with ratings from the NOTSS (Non-Technical Skills for Surgeons) scale. Different levels of communication were detected by the sensor data during the two phases of the simulated surgery. Sensor data detected leadership qualities among surgeons based on speech metrics, and eye tracking offered additional evidence about gaze patterns related to NTS. This objective approach to NTS measurement captured differences in communication in greater detail than a single collective rating obtained using current assessment tools.
Affiliation(s)
- Marian Obuseh
- School of Industrial Engineering, Purdue University, USA
- Nicholas Eric Anton
- School of Industrial Engineering, Purdue University, USA; School of Medicine, Indiana University, USA
- Haozhi Chen
- School of Industrial Engineering, Purdue University, USA
- Denny Yu
- School of Industrial Engineering, Purdue University, USA
2. Pulcinelli M, Pinnelli M, Massaroni C, Lo Presti D, Fortino G, Schena E. Wearable systems for unveiling collective intelligence in clinical settings. Sensors (Basel) 2023; 23:9777. PMID: 38139623. PMCID: PMC10747409. DOI: 10.3390/s23249777.
Abstract
Nowadays, there is an ever-growing interest in assessing the collective intelligence (CI) of a team in a wide range of scenarios, thanks to its potential in enhancing teamwork and group performance. Recently, special attention has been devoted to the clinical setting, where breakdowns in teamwork, leadership, and communication can lead to adverse events, compromising patient safety. So far, researchers have mostly relied on surveys to study human behavior and group dynamics; however, this method is ineffective. In contrast, a promising solution for monitoring behavioral and individual features that are reflective of CI is represented by wearable technologies. To date, the field of CI assessment still appears unstructured; therefore, the aim of this narrative review is to provide a detailed overview of the main group and individual parameters that can be monitored to evaluate CI in clinical settings, together with the wearables that have either already been used to assess them or have the potential to be applied in this scenario. The working principles, advantages, and disadvantages of each device are introduced in order to bring order to this field and provide a guide for future CI investigations in medical contexts.
Affiliation(s)
- Martina Pulcinelli
- Research Unit of Measurements and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Mariangela Pinnelli
- Research Unit of Measurements and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Carlo Massaroni
- Research Unit of Measurements and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo 200, 00128 Roma, Italy
- Daniela Lo Presti
- Research Unit of Measurements and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo 200, 00128 Roma, Italy
- Giancarlo Fortino
- DIMES, University of Calabria, Via P. Bucci 41C, 87036 Rende, Italy
- Emiliano Schena
- Research Unit of Measurements and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo 21, 00128 Roma, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo 200, 00128 Roma, Italy
3. Piumatti G, Cerutti B, Perron NJ. Assessing communication skills during OSCE: need for integrated psychometric approaches. BMC Medical Education 2021; 21:106. PMID: 33593345. PMCID: PMC7887794. DOI: 10.1186/s12909-021-02552-8.
Abstract
BACKGROUND Physicians' communication skills (CS) are known to significantly affect the quality of health care. Communication skills training programs are part of most undergraduate medical curricula and are usually assessed in Objective Structured Clinical Examinations (OSCE) throughout the curriculum. The adoption of reliable measurement instruments is thus essential to evaluate such skills. METHODS Using Exploratory Factor Analysis (EFA), Multi-Group Confirmatory Factor Analysis (MGCFA), and Item Response Theory (IRT) analysis, the current retrospective study tested the factorial validity and reliability of a four-item global rating scale developed by Hodges and McIlroy to measure CS among 296 third- and fourth-year medical students at the Faculty of Medicine in Geneva, Switzerland, during OSCEs. RESULTS EFA results at each station showed good reliability scores. However, measurement invariance assessments through MGCFA across different stations (i.e., the same students undergoing six or three stations) and across different groups of stations (i.e., different students undergoing groups of six or three stations) were not satisfactory, failing to meet the minimum requirements to establish measurement invariance and thus possibly affecting reliable comparisons between students' communication scores across stations. IRT revealed that the four communication items provided overlapping information focusing especially on high levels of the communication spectrum. CONCLUSIONS Using this four-item set in its current form, it may be difficult to adequately differentiate students who are poor in CS from those who perform better. Future directions in best practices for assessing CS among medical students in the context of OSCEs may thus focus on (1) training examiners so as to obtain scores that are more coherent across stations; and (2) evaluating items in terms of their ability to cover a wider spectrum of medical students' CS. In this respect, IRT can prove very useful for the continuous evaluation of CS measurement instruments in performance-based assessments.
Affiliation(s)
- Giovanni Piumatti
- Division of Primary Care, Population Epidemiology Unit, Geneva University Hospitals, Geneva, Switzerland
- Institute of Public Health, Faculty of BioMedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Faculty of Medicine, Unit of Development and Research in Medical Education (UDREM), University of Geneva, Geneva, Switzerland
- Bernard Cerutti
- Faculty of Medicine, Unit of Development and Research in Medical Education (UDREM), University of Geneva, Geneva, Switzerland
- Noëlle Junod Perron
- Faculty of Medicine, Unit of Development and Research in Medical Education (UDREM), University of Geneva, Geneva, Switzerland
- Institute of Primary Care, Geneva University Hospitals, Geneva, Switzerland
4. Gilligan C, Powell M, Lynagh MC, Ward BM, Lonsdale C, Harvey P, James EL, Rich D, Dewi SP, Nepal S, Croft HA, Silverman J. Interventions for improving medical students' interpersonal communication in medical consultations. Cochrane Database Syst Rev 2021; 2:CD012418. PMID: 33559127. PMCID: PMC8094582. DOI: 10.1002/14651858.cd012418.pub2.
Abstract
BACKGROUND Communication is a common element in all medical consultations, affecting a range of outcomes for doctors and patients. The increasing demand for medical students to be trained to communicate effectively has seen the emergence of interpersonal communication skills as core graduate competencies in medical training around the world. Medical schools have adopted a range of approaches to develop and evaluate these competencies. OBJECTIVES To assess the effects of interventions for medical students that aim to improve interpersonal communication in medical consultations. SEARCH METHODS We searched five electronic databases: Cochrane Central Register of Controlled Trials, MEDLINE, Embase, PsycINFO, and ERIC (Educational Resource Information Centre) in September 2020, with no language, date, or publication status restrictions. We also screened reference lists of relevant articles and contacted authors of included studies. SELECTION CRITERIA We included randomised controlled trials (RCTs), cluster-RCTs (C-RCTs), and non-randomised controlled trials (quasi-RCTs) evaluating the effectiveness of interventions delivered to students in undergraduate or graduate-entry medical programmes. We included studies of interventions aiming to improve medical students' interpersonal communication during medical consultations. Included interventions targeted communication skills associated with empathy, relationship building, gathering information, and explanation and planning, as well as specific communication tasks such as listening, appropriate structure, and question style. DATA COLLECTION AND ANALYSIS We used standard methodological procedures expected by Cochrane. Two review authors independently reviewed all search results, extracted data, assessed the risk of bias of included studies, and rated the quality of evidence using GRADE. 
MAIN RESULTS We found 91 publications relating to 76 separate studies (involving 10,124 students): 55 RCTs, 9 quasi-RCTs, 7 C-RCTs, and 5 quasi-C-RCTs. We performed meta-analysis according to comparison and outcome. Among both effectiveness and comparative effectiveness analyses, we separated outcomes reporting on overall communication skills, empathy, rapport or relationship building, patient perceptions/satisfaction, information gathering, and explanation and planning. Overall communication skills and empathy were further divided as examiner- or simulated patient-assessed. The overall quality of evidence ranged from moderate to very low, and there was high, unexplained heterogeneity. Overall, interventions had positive effects on most outcomes, but generally small effect sizes and evidence quality limit the conclusions that can be drawn. Communication skills interventions in comparison to usual curricula or control may improve both overall communication skills (standardised mean difference (SMD) 0.92, 95% confidence interval (CI) 0.53 to 1.31; 18 studies, 1356 participants; I² = 90%; low-quality evidence) and empathy (SMD 0.64, 95% CI 0.23 to 1.05; 6 studies, 831 participants; I² = 86%; low-quality evidence) when assessed by experts, but not by simulated patients. Students' skills in information gathering probably also improve with educational intervention (SMD 1.07, 95% CI 0.61 to 1.54; 5 studies, 405 participants; I² = 78%; moderate-quality evidence), but there may be little to no effect on students' rapport (SMD 0.18, 95% CI -0.15 to 0.51; 9 studies, 834 participants; I² = 81%; low-quality evidence), and effects on information giving skills are uncertain (very low-quality evidence). We are uncertain whether experiential interventions improve overall communication skills in comparison to didactic approaches (SMD 0.08, 95% CI -0.02 to 0.19; 4 studies, 1578 participants; I² = 4%; very low-quality evidence). 
Electronic learning approaches may have little to no effect on students' empathy scores (SMD -0.13, 95% CI -0.68 to 0.43; 3 studies, 421 participants; I² = 82%; low-quality evidence) or on rapport (SMD 0.02, 95% CI -0.33 to 0.38; 3 studies, 176 participants; I² = 19%; moderate-quality evidence) compared to face-to-face approaches. There may be small negative effects of electronic interventions on information giving skills (low-quality evidence), and effects on information gathering skills are uncertain (very low-quality evidence). Personalised/specific feedback probably improves overall communication skills to a small degree in comparison to generic or no feedback (SMD 0.58, 95% CI 0.29 to 0.87; 6 studies, 502 participants; I² = 56%; moderate-quality evidence). There may be small positive effects of personalised feedback on empathy and information gathering skills (low quality), but effects on rapport are uncertain (very low quality), and we found no evidence on information giving skills. We are uncertain whether role-play with simulated patients outperforms peer role-play in improving students' overall communication skills (SMD 0.17, 95% CI -0.33 to 0.67; 4 studies, 637 participants; I² = 87%; very low-quality evidence). There may be little to no difference between effects of simulated patient and peer role-play on students' empathy (low-quality evidence) with no evidence on other outcomes for this comparison. Descriptive syntheses of results that could not be included in meta-analyses across outcomes and comparisons were mixed, as were effects of different interventions and comparisons on specific communication skills assessed by the included trials. Quality of evidence was downgraded due to methodological limitations across several risk of bias domains, high unexplained heterogeneity, and imprecision of results. In general, results remain consistent in sensitivity analysis based on risk of bias and adjustment for clustering. No adverse effects were reported. 
AUTHORS' CONCLUSIONS This review represents a substantial body of evidence from which to draw, but further research is needed to strengthen the quality of the evidence base, to consider the long-term effects of interventions on students' behaviour as they progress through training and into practice, and to assess effects of interventions on patient outcomes. Efforts to standardise assessment and evaluation of interpersonal skills will strengthen future research efforts.
Affiliation(s)
- Conor Gilligan
- School of Medicine and Public Health, University of Newcastle, Hunter Medical Research Institute, Callaghan, Australia
- Martine Powell
- Centre for Investigative Interviewing, Griffith Criminology Institute, Griffith University, Brisbane, Australia
- Marita C Lynagh
- School of Medicine and Public Health, University of Newcastle, Hunter Medical Research Institute, Callaghan, Australia
- Chris Lonsdale
- Institute for Positive Psychology and Education, Australian Catholic University, Strathfield, Australia
- Pam Harvey
- School of Rural Health, Monash University, Bendigo, Australia
- Erica L James
- School of Medicine and Public Health, University of Newcastle, Hunter Medical Research Institute, Callaghan, Australia
- Dominique Rich
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Sari P Dewi
- School of Medicine and Public Health, University of Newcastle, Callaghan, Australia
- Smriti Nepal
- The Matilda Centre for Research in Mental Health and Substance Use, University of Sydney, Darlington, Australia
- Hayley A Croft
- School of Biomedical Sciences and Pharmacy, University of Newcastle, Callaghan, Australia
5. Piquette D, Goffi A, Lee C, Brydges R, Walsh CM, Mema B, Parshuram C. Resident competencies before and after short intensive care unit rotations: a multicentre pilot observational study. Can J Anaesth 2021; 68:235-244. PMID: 33174164. DOI: 10.1007/s12630-020-01850-x.
Abstract
PURPOSE Residency programs need to understand the competencies developed by residents during an intensive care unit (ICU) rotation, so that curricula and assessments maximize residents' learning. The primary study objective was to evaluate the feasibility for training programs, and the acceptability by residents, of conducting a multi-competency assessment during a four-week ICU rotation. METHODS We conducted a prospective, multicentre observational pilot study in three ICUs. During weeks 1 and 4 of an ICU rotation, we conducted repeated standardized assessments of non-critical care specialty residents' competencies in cognitive reasoning (script concordance test [SCT]), procedural skills (objective structured assessment of technical skills [OSATS] global rating scale), and communication skills, through a written test, two procedural simulations, and a simulated encounter with a "family member". The feasibility outcomes included program costs, the proportion of enrolled residents able to complete at least one three-station assessment during their four-week ICU rotation, and acceptability of the assessment to the trainees. RESULTS We enrolled 63 (69%) of 91 eligible residents, with 58 (92%) completing at least one assessment. The total cost to conduct 90 assessments was CAD 33,800. The majority of participants agreed that the assessment was fair and that it measured important clinical abilities. For the 32 residents who completed two assessments, the mean (standard deviation) cognitive reasoning and procedural skill scores increased between weeks 1 and 4 [SCT difference, 3.1 (6.5), P = 0.01; OSATS difference for bag-mask ventilation and central line insertion, 0.4 (0.5) and 0.6 (0.8), respectively; both P ≤ 0.001]. However, the communication scores did not change significantly. CONCLUSIONS A monthly multi-competency assessment for specialty residents rotating in the ICU is likely feasible for most programs with appropriate resources, and generally acceptable to residents. Specialty residents' cognitive reasoning and procedural skills may improve during a four-week ICU rotation, whereas communication skills may not.
Affiliation(s)
- Dominique Piquette
- Sunnybrook Health Sciences Centre, University of Toronto, 2075 Bayview Avenue, Room D108, Toronto, ON, M4N 3M5, Canada
- Alberto Goffi
- St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Christie Lee
- Mt. Sinai Hospital, University of Toronto, Toronto, ON, Canada
- Ryan Brydges
- The Wilson Centre, University of Toronto, Toronto, ON, Canada
- Catharine M Walsh
- The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
- Briseida Mema
- The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
- Chris Parshuram
- The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
6. Pezel T, Coisne A, Bonnet G, Martins RP, Adjedj J, Bière L, Lattuca B, Turpeau S, Popovic B, Ivanes F, Lafitte S, Deharo JC, Bernard A. Simulation-based training in cardiology: state-of-the-art review from the French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology. Arch Cardiovasc Dis 2021; 114:73-84. PMID: 33419690. DOI: 10.1016/j.acvd.2020.10.004.
Abstract
In a healthcare system mindful of patient safety and the reduction of medical errors, simulation-based training has emerged as a cornerstone of medical education, allowing high-quality training with no risk to patients. Initiated by anaesthesiologists, this teaching mode allows an effective, gradual transfer of learning, and has become an essential tool in cardiology teaching. Cardiologists are embracing simulation to master complex techniques in interventional cardiology, to manage crisis situations and unusual complications, and to develop medical teamwork. Simulation methods in cardiology include high-fidelity simulators, clinical scenarios, serious games, hybrid simulation and virtual reality. Simulation involves all fields of cardiology: transoesophageal echocardiography, cardiac catheterization, coronary angioplasty and electrophysiology. Beyond purely technical issues, simulation can also enhance communication skills, through the use of standardized patients, and can improve the management of situations related to the announcement of serious diseases. In this review of the recent literature, we present existing simulation modalities, their applications in different fields of cardiology, and their advantages and limitations. Finally, we detail the growing role of simulation in the teaching of medical students, following the recent legal obligation to use simulation to evaluate medical students in France.
Affiliation(s)
- Théo Pezel
- Department of Cardiology, Lariboisiere Hospital, AP-HP, Inserm UMRS 942, University of Paris, 75010 Paris, France; Division of Cardiology, Johns Hopkins University, Baltimore, MD 21287-0409, USA; Ilumens Simulation Department, Paris Diderot University, 75010 Paris, France; French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France
- Augustin Coisne
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Department of Cardiovascular Explorations and Echocardiography-Heart Valve Clinic, CHU de Lille, 59000 Lille, France; Université de Lille, Inserm, CHU Lille, Institut Pasteur de Lille, U1011-EGID, 59000 Lille, France
- Guillaume Bonnet
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Université de Paris, PARCC, INSERM, 75015 Paris, France; Hôpital Européen Georges Pompidou, Université de Paris, 75015 Paris, France
- Raphael P Martins
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Université de Rennes, CHU de Rennes, INSERM, LTSI-UMR 1099, 35000 Rennes, France
- Julien Adjedj
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Cardiology Department, Arnault Tzanck Institute, 06700 Saint-Laurent-du-Var, France
- Loïc Bière
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Service de Cardiologie, CHU d'Angers, Université Angers, 49100 Angers, France
- Benoit Lattuca
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Cardiology Department, Nîmes University Hospital, Montpellier University, 30029 Nîmes, France
- Stéphanie Turpeau
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Pôle Cardiologie, Angiologie, Néphrologie, Endocrinologie, Centre Hospitalier d'Avignon, 84000 Avignon, France
- Batric Popovic
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Department of Cardiology, CHRU de Nancy, Université de Lorraine, 54000 Nancy, France
- Fabrice Ivanes
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Tours University, 37000 Tours, France; Cardiology Department, Tours University Hospital, 37000 Tours, France
- Stéphane Lafitte
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Cardiology Department, Bordeaux University Hospital, 33000 Bordeaux, France
- Jean Claude Deharo
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Service de cardiologie, hôpital de la Timone, Marseille, France
- Anne Bernard
- French Commission of Simulation Teaching (Commission d'enseignement par simulation-COMSI) of the French Society of Cardiology, 75012 Paris, France; Tours University, 37000 Tours, France; Cardiology Department, Tours University Hospital, 37000 Tours, France
7. Walsh CM, Scaffidi MA, Khan R, Arora A, Gimpaya N, Lin P, Satchwell J, Al-Mazroui A, Zarghom O, Sharma S, Kamani A, Genis S, Kalaichandran R, Grover SC. Non-technical skills curriculum incorporating simulation-based training improves performance in colonoscopy among novice endoscopists: randomized controlled trial. Dig Endosc 2020; 32:940-948. PMID: 31912560. DOI: 10.1111/den.13623.
Abstract
BACKGROUND AND AIMS Non-technical skills (NTS), the cognitive, social and interpersonal skills that complement technical skills, are important for the completion of safe and efficient procedures. We investigated the impact of a simulation-based curriculum with dedicated NTS training on novice endoscopists' performance of clinical colonoscopies. METHODS A single-blinded randomized controlled trial was conducted at a single center. Novice endoscopists were randomized to a control curriculum or an NTS curriculum. The control curriculum involved a didactic session, virtual reality (VR) simulator colonoscopy training, and integrated scenario practice using a VR simulator, a standardized patient, and an endoscopy nurse. Feedback and training were provided by experienced endoscopists. The NTS curriculum group received similar training that also included a small-group session on NTS, feedback targeting NTS, and access to a self-reflective NTS checklist. The primary outcome was performance during two clinical colonoscopies, assessed using the Joint Advisory Group Direct Observation of Procedural Skills (JAG DOPS) tool. RESULTS Thirty-nine participants completed the study. The NTS group (n = 21) had superior clinical performance during their first (P < 0.001) and second (P < 0.001) clinical colonoscopies, compared to the control group (n = 18). The NTS group also performed significantly better on the VR simulator (P < 0.05) and in the integrated scenario (P < 0.05). CONCLUSION Our findings demonstrate that dedicated NTS training led to improved performance of clinical colonoscopies among novices.
Affiliation(s)
- Catharine M Walsh
- Division of Gastroenterology, Hepatology, and Nutrition, Learning Institute and Research Institute, Hospital for Sick Children, Toronto, Canada; Faculty of Medicine, The Wilson Centre, University of Toronto, Toronto, Canada
- Michael A Scaffidi
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Rishad Khan
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Anuj Arora
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Nikko Gimpaya
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Peter Lin
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Joshua Satchwell
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Ahmed Al-Mazroui
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Omid Zarghom
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Suraj Sharma
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Alya Kamani
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Shai Genis
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Ruben Kalaichandran
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada
- Samir C Grover
- Division of Gastroenterology, Department of Medicine, University of Toronto, Toronto, Canada; Li Ka Shing Knowledge Institute, Toronto, Canada
8. Croft H, Gilligan C, Rasiah R, Levett-Jones T, Schneider J. Developing a validity argument for a simulation-based model of entrustment in dispensing skills assessment framework. Currents in Pharmacy Teaching & Learning 2020; 12:1081-1092. PMID: 32624137. DOI: 10.1016/j.cptl.2020.04.028.
Abstract
INTRODUCTION Integrated assessment of multiple competencies at once, including entrustable professional activity (EPA)-based assessment, is emerging as an effective approach to competency-based evaluation of health professionals. However, there is an absence of validated assessment frameworks in entry-level pharmacy education. We aimed to develop an assessment framework and establish a validity argument, containing multiple sources of evidence, for use in the integrated assessment of pharmacy students' competency in all aspects of the supply of prescribed medicine(s). METHODS A two-phase prospective study was conducted. Phase 1 involved development and content validation of the Model of Entrustment in Dispensing Skills (MEDS) assessment framework using a literature review, a think-aloud study, and expert consultation. In phase 2, a pilot study was conducted with faculty and expert assessors to test the framework. Subsequent analysis involved psychometric evaluation of rating scales and usability testing. RESULTS Validity evidence was collected and organised across the two study phases. The MEDS framework had good evidence of content validity, supported by the rigorous development and consultation process as well as case sampling, with 88% of national practice-based competencies represented across the two simulations. Reliability coefficients were high and acceptable, supporting strong agreement across domains, students, and simulations, as well as a strong correlation between the EPA and total score (Spearman's rho = 0.725, P < .001). CONCLUSIONS This study describes a valid and rigorous approach to the implementation and interpretation of an integrated simulation-based assessment tool for determining pharmacy students' progress towards entrustment for independent medication supply practice.
Collapse
Affiliation(s)
- Hayley Croft
- School of Biomedical Sciences and Pharmacy, Faculty of Health and Medicine, University of Newcastle, NSW, Australia.
| | - Conor Gilligan
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, NSW, Australia.
| | - Rohan Rasiah
- Western Australian Centre for Rural Health, University of Western Australia, WA, Australia.
| | | | - Jennifer Schneider
- School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, NSW, Australia.
| |
Collapse
|
9
|
Park YS, Chun KH, Lee KS, Lee YH. A study on evaluator factors affecting physician-patient interaction scores in clinical performance examinations: a single medical school experience. Yeungnam Univ J Med 2020; 38:118-126. [PMID: 32759629 PMCID: PMC8016627 DOI: 10.12701/yujm.2020.00423] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 06/25/2020] [Indexed: 11/22/2022] Open
Abstract
Background This study analyzed evaluator factors affecting physician-patient interaction (PPI) scores in the clinical performance examination (CPX). The purpose of this study was to investigate possible ways to increase the reliability of the CPX evaluation. Methods The six-item Yeungnam University Scale (YUS), four-item analytic global rating scale (AGRS), and one-item holistic rating scale (HRS) were used to evaluate student performance in PPI. A total of 72 fourth-year students from Yeungnam University College of Medicine in Korea participated in the evaluation with 32 faculty and 16 standardized patient (SP) raters. The study then examined the differences in scores between types of scale, raters (SP vs. faculty), faculty specialty, evaluation experience, and level of fatigue as time passed. Results There were significant differences between faculty and SP scores on all three scales and a significant correlation among raters' scores. Scores given by raters on items related to their specialty were lower than those given by raters on items outside their specialty. On the YUS and AGRS, there were significant differences based on the faculty's evaluation experience; scores by raters who had three to ten previous evaluation experiences were lower than others' scores. There were also significant differences among SP raters on all scales. The correlation between the YUS and AGRS/HRS declined significantly with the length of evaluation time. Conclusion In CPX, PPI score reliability was significantly affected by evaluator factors as well as by the type of scale.
Collapse
Affiliation(s)
- Young Soon Park
- Department of Medical Education, Konyang University College of Medicine, Daejeon, Korea
| | - Kyung Hee Chun
- Department of Medical Education, Konyang University College of Medicine, Daejeon, Korea
| | - Kyeong Soo Lee
- Department of Preventive Medicine and Public Health, Yeungnam University College of Medicine, Daegu, Korea
| | - Young Hwan Lee
- Department of Medical Humanities, Yeungnam University College of Medicine, Daegu, Korea
| |
Collapse
|
10
|
Peltonen V, Peltonen LM, Salanterä S, Hoppu S, Elomaa J, Pappila T, Hevonoja E, Hurme S, Perkonoja K, Elomaa T, Tommila M. An observational study of technical and non-technical skills in advanced life support in the clinical setting. Resuscitation 2020; 153:162-168. [PMID: 32561474 DOI: 10.1016/j.resuscitation.2020.06.010] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Revised: 05/29/2020] [Accepted: 06/09/2020] [Indexed: 10/24/2022]
Abstract
OBJECTIVE Technical skills (TS) and non-technical skills (NTS) are the primary elements ensuring patient safety during advanced life support (ALS) and effective crisis resource management (CRM). Both skills are needed to perform high-quality ALS, though they are traditionally practiced separately. The evidence of the association between NTS and TS in high-quality ALS performance is insufficient. Hence, we aimed to evaluate the association between the skills in real-life in-hospital ALS situations. METHODS We video recorded real-life in-hospital ALS situations, analyzed the TS and NTS demonstrated in them using an instrument that measures both, and tested the linear association between NTS and TS using a linear mixed model. RESULTS Among 50 real-life in-hospital ALS situations that we recorded, 20 had adequate data for analysis. NTS and TS total scores were associated with one another (slope 0.48, P < 0.001). All NTS subcategories were associated with the TS total score (slopes ranging from 0.29 to 0.39, P < 0.001). The NTS total score and TS subcategories (chest compression quality, ventilation quality, rhythm control and defibrillation quality) were associated with one another (slopes ranging from 0.37 to 0.56, P < 0.01). CONCLUSIONS The resuscitation teams who demonstrated good NTS also performed the technical aspects of ALS better. The results suggest that NTS and TS are associated with one another in real-life in-hospital ALS situations. NTS performance had the most evident association with chest compression quality and with rhythm control and defibrillation quality; these are considered the most crucial elements affecting outcomes of ALS. The findings of the study provide novel information about what to emphasize in ALS training, and why. CLINICAL TRIAL REGISTRATION ClinicalTrials.gov, NCT03017144.
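The NTS-TS association above is reported as a slope from a linear mixed model. A full mixed model (with random effects per resuscitation team) is beyond a short sketch, but the fixed-effect idea can be illustrated with an ordinary least-squares slope; the `ols_slope` function and the scores below are hypothetical illustrations, not the study's data or method:

```python
# Ordinary least-squares slope of TS on NTS as a simplified stand-in for
# the fixed-effect slope of a linear mixed model. Hypothetical data only.

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

nts = [2.0, 2.5, 3.0, 3.5, 4.0]  # hypothetical NTS total scores
ts = [2.1, 2.2, 2.7, 2.8, 3.1]   # hypothetical TS total scores
print(round(ols_slope(nts, ts), 2))
```

A positive slope of this kind is what the study interprets as teams with better non-technical performance also delivering better technical performance; the mixed model additionally accounts for repeated observations within teams.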
Collapse
Affiliation(s)
- Ville Peltonen
- Anaesthesia and Intensive Care, Satakunta Hospital District, Sairaalantie 3, FI-28500 Pori, Finland; Department of Anaesthesiology and Intensive Care, University of Turku, P.O. Box 51, Kiinamyllynkatu 4-8, FI-20521, Turku, Finland; Division of Perioperative Services, Intensive Care Medicine and Pain Management, Turku University Hospital, P.O. Box 52, FI-20521 Turku, Finland.
| | | | - Sanna Salanterä
- Department of Nursing Science, University of Turku, FI-20014 Turku, Finland; Department of Development Unit, Turku University Hospital, Finland
| | - Sanna Hoppu
- Emergency Medical Services, Tampere University Hospital, P.O. Box 2000, FI-33521 Tampere, Finland
| | - Jaana Elomaa
- Division of Perioperative Services, Intensive Care Medicine and Pain Management, Turku University Hospital, P.O. Box 52, FI-20521 Turku, Finland
| | - Tomi Pappila
- Division of Emergency and Pre-Hospital Care, Satakunta Hospital District, Sairaalantie 3, FI-28500 Pori, Finland
| | - Eeva Hevonoja
- Emergency Medical Services, Turku University Hospital, P.O. Box 52, FI-20521 Turku, Finland
| | - Saija Hurme
- Department of Biostatistics, University of Turku, FI-20014 Turku, Finland
| | - Katariina Perkonoja
- Auria Clinical Informatics, Hospital District of Southwest Finland, Turku University Hospital 11B, P.O. Box 52, FI-20521 Turku, Finland
| | - Teemu Elomaa
- Emergency Medical Services, Turku University Hospital, P.O. Box 52, FI-20521 Turku, Finland
| | - Miretta Tommila
- Department of Anaesthesiology and Intensive Care, University of Turku, P.O. Box 51, Kiinamyllynkatu 4-8, FI-20521, Turku, Finland; Division of Perioperative Services, Intensive Care Medicine and Pain Management, Turku University Hospital, P.O. Box 52, FI-20521 Turku, Finland
| |
Collapse
|
11
|
Abstract
OBJECTIVE The importance of physician training in communication skills for motivating patients to adopt a healthy lifestyle and optimize clinical outcomes is increasingly recognized. This study inventoried and systematically reviewed the psychometric properties of, and the skills assessed by, existing assessment tools used to evaluate communication skills among physicians. METHODS This review was conducted in accordance with the PRISMA guidelines (PROSPERO: CRD42018091932). Four databases (PUBMED, EMBASE, PsycINFO, and SCOPUS) were searched up to December 2018, generating 3902 unique articles, which were screened by two authors. A total of 57 articles met the inclusion criteria and underwent full data extraction. RESULTS Forty-five different assessment tools were identified. Only 47% of the studies mentioned underlying theories or models for designing the tool. Fifteen communication skills were assessed across the tools; the five most prevalent were information giving (46%) or gathering (40%), eliciting patients' perspectives (44%), planning/goal setting (37%), and closing the session (32%). Most tools (93%) assessed communication skills using in-person role play exercises with standardized (61%) or real (32%) patients, but only 54% described the expertise of the raters who performed the evaluations. Overall, reporting of the psychometric properties of the assessment tools was poor to moderate (4.5 ± 1.3 out of 9). CONCLUSIONS Despite identifying several existing physician communication assessment tools, a high degree of heterogeneity between these tools, in terms of skills assessed and study quality, was observed, and most have been poorly validated. Research is needed to rigorously develop and validate accessible, convenient, "user-friendly" communication assessment tools that are easy to administer and score.
Collapse
|
12
|
Phillips EC, Smith SE, Clarke B, Hamilton AL, Kerins J, Hofer J, Tallentire VR. Validity of the Medi-StuNTS behavioural marker system: assessing the non-technical skills of medical students during immersive simulation. BMJ SIMULATION & TECHNOLOGY ENHANCED LEARNING 2020; 7:3-10. [PMID: 35521075 PMCID: PMC8936660 DOI: 10.1136/bmjstel-2019-000506] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/30/2020] [Indexed: 11/04/2022]
Abstract
Background The Medical Students' Non-Technical Skills (Medi-StuNTS) behavioural marker system (BMS) is the first BMS to be developed specifically for medical students to facilitate training in non-technical skills (NTS) within immersive simulated acute care scenarios. In order to begin implementing the tool in practice, validity evidence must be sought. We aimed to assess the validity of the Medi-StuNTS system with reference to Messick's contemporary validity framework. Methods Two raters marked video-recorded performances of acute care simulation scenarios using the Medi-StuNTS system. Three groups were marked: third-year and fourth-year medical students (novices), final-year medical students (intermediates) and core medical trainees (experts). The scores were used to make assessments of relationships to the variable of clinical experience through expert-novice comparisons, inter-rater reliability, observability, exploratory factor analysis, inter-rater disagreements and differential item functioning. Results A significant difference was found between the three groups (p<0.005), with experts scoring significantly better than intermediates (p<0.005) and intermediates scoring significantly better than novices (p=0.001). There was a strong positive correlation between the two raters' scores (r=0.79), and an inter-rater disagreement of more than one point in less than one-fifth of cases. Across all scenarios, 99.7% of skill categories and 84% of skill elements were observable. Factor analysis demonstrated appropriate grouping of skill elements. Inconsistencies in test performance across learner groups were shown specifically in the skill categories of situation awareness and decision making and prioritisation. Conclusion We have demonstrated evidence for several aspects of validity of the Medi-StuNTS system when assessing medical students' NTS during immersive simulation. We can now begin to introduce this system into simulation-based education to maximise NTS training in this group.
Collapse
Affiliation(s)
- Emma Claire Phillips
- Scottish Centre for Simulation and Clinical Human Factors, Larbert, UK
- NHS Lothian, Edinburgh, UK
- The University of Edinburgh College of Medicine and Veterinary Medicine, Edinburgh, UK
| | - Samantha Eve Smith
- The University of Edinburgh College of Medicine and Veterinary Medicine, Edinburgh, UK
| | | | | | | | | | - Victoria Ruth Tallentire
- Scottish Centre for Simulation and Clinical Human Factors, Larbert, UK
- NHS Lothian, Edinburgh, UK
- The University of Edinburgh College of Medicine and Veterinary Medicine, Edinburgh, UK
| |
Collapse
|
13
|
Pezel T, Coisne A, Picard F, Gueret P. How simulation teaching is revolutionizing our relationship with cardiology. Arch Cardiovasc Dis 2020; 113:297-302. [PMID: 32291188 DOI: 10.1016/j.acvd.2020.03.010] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Revised: 03/10/2020] [Accepted: 03/11/2020] [Indexed: 12/26/2022]
Affiliation(s)
- Théo Pezel
- Inserm UMRS 942, Department of Cardiology, University of Paris, Lariboisière Hospital, Centre Hospitalo-Universitaire Lariboisière, AP-HP, 2, rue Ambroise-Paré, 75010 Paris, France; Division of Cardiology, Johns Hopkins University, 21287-0409 Baltimore, MD, USA; French Commission of Simulation Teaching (Commission d'enseignement par simulation [COMSI]) of the French Society of Cardiology, 75012 Paris, France.
| | - Augustin Coisne
- French Commission of Simulation Teaching (Commission d'enseignement par simulation [COMSI]) of the French Society of Cardiology, 75012 Paris, France; Department of Cardiovascular Explorations and Echocardiography, Heart Valve Clinic, CHU de Lille, 59000 Lille, France; Inserm UMR 1011, 59019 Lille, France; Institut Pasteur de Lille, 59000 Lille, France
| | - Fabien Picard
- French Commission of Simulation Teaching (Commission d'enseignement par simulation [COMSI]) of the French Society of Cardiology, 75012 Paris, France; Department of Cardiology, Cochin Hospital, hôpitaux universitaires Paris Centre, AP-HP, 75014 Paris, France; Inserm U970, Paris Cardiovascular Research Center (PARCC), Georges Pompidou European Hospital, 75015 Paris, France
| | - Pascal Gueret
- French Commission of Simulation Teaching (Commission d'enseignement par simulation [COMSI]) of the French Society of Cardiology, 75012 Paris, France; Department of Cardiology, Foch Hospital, 92150 Suresnes, France
| |
Collapse
|
14
|
Brouwers M, Custers J, Bazelmans E, van Weel C, Laan R, van Weel-Baumgarten E. Assessment of medical students' integrated clinical communication skills: development of a tailor-made assessment tool. BMC MEDICAL EDUCATION 2019; 19:118. [PMID: 31035995 PMCID: PMC6489308 DOI: 10.1186/s12909-019-1557-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Accepted: 04/15/2019] [Indexed: 05/17/2023]
Abstract
BACKGROUND Since patient-centered communication is directly connected to clinical performance, it should be integrated with medical knowledge and clinical skills. Therefore, clinical communication skills should be trained and assessed as an integral part of the student's clinical performance. We were unable to identify a tool which helps when assessing patient-centered communication skills as an integrated component of medical history taking ('the integrated medical interview'). Therefore, we decided to design a new tailor-made assessment tool, the BOCC (BeOordeling Communicatie en Consultvoering (Dutch); Assessment of Communication and Consultation (English)), to help raters assess students' integrated clinical communication skills, with the emphasis on patient-centered communication combined with the correct medical content. This is a first initiative to develop such a tool, and this paper describes the first steps in this process. METHODS We investigated the tool in a group of third-year medical students (n = 672) interviewing simulated patients. Internal structure and internal consistency were assessed. Regression analysis was conducted to investigate the relationship between scores on the instrument and general grading. Applicability to another context was tested in a group of fourth-year medical students (n = 374). RESULTS Principal component analysis (PCA) showed five components (Communication Skills, Problem Clarification, Specific History, Problem Influence, and Integration Skills) with varying Cronbach's alpha scores. The component Problem Clarification made the strongest unique contribution to the grade prediction. Applicability was good when investigated in another context. CONCLUSIONS The BOCC is designed to help raters assess students' integrated communication skills. It was assessed on internal structure and internal consistency. This tool is a first step in the assessment of the integrated medical interview and a basis for further investigation to develop it into a true measurement instrument for clinical communication skills.
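The internal consistency of the BOCC components above was summarised with Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch of that standard formula, on made-up ratings rather than the study's data (the `cronbach_alpha` helper is hypothetical):

```python
# Cronbach's alpha for a matrix of ratings (rows = students, columns = items).
# Hypothetical illustration only, not the study's data or code.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # transpose: per-item score columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

ratings = [          # hypothetical 5 students x 4 items, 1-5 scale
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 3, 2],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(ratings), 2))
```

Alpha rises towards 1 as items within a component co-vary, which is why the study reports it per component rather than for the whole instrument.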
Collapse
Affiliation(s)
- M. Brouwers
- Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - J. Custers
- Department of Medical Psychology, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - E. Bazelmans
- Department of Medical Psychology, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - C. van Weel
- Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- Department of Health Services Research and Policy, Australian National University, Canberra, Australia
| | - R. Laan
- Health Academy, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| | - E. van Weel-Baumgarten
- Radboud Institute of Health Sciences, Dept. Primary and Community Care (161), Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
| |
Collapse
|
15
|
Khan R, Plahouras J, Johnston BC, Scaffidi MA, Grover SC, Walsh CM. Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy. Cochrane Database Syst Rev 2018; 8:CD008237. [PMID: 30117156 PMCID: PMC6513657 DOI: 10.1002/14651858.cd008237.pub3] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
BACKGROUND Endoscopy has traditionally been taught with novices practicing on real patients under the supervision of experienced endoscopists. Recently, the growing awareness of the need for patient safety has brought simulation training to the forefront. Simulation training can provide trainees with the chance to practice their skills in a learner-centred, risk-free environment. It is important to ensure that skills gained through simulation positively transfer to the clinical environment. This updated review was performed to evaluate the effectiveness of virtual reality (VR) simulation training in gastrointestinal endoscopy. OBJECTIVES To determine whether virtual reality simulation training can supplement and/or replace early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy, and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. SEARCH METHODS We searched the following health professions, educational, and computer databases until 12 July 2017: the Cochrane Central Register of Controlled Trials, Ovid MEDLINE, Ovid Embase, Scopus, Web of Science, BIOSIS Previews, CINAHL, AMED, ERIC, Education Full Text, CBCA Education, ACM Digital Library, IEEE Xplore, Abstracts in New Technology and Engineering, Computer and Information Systems Abstracts, and ProQuest Dissertations and Theses Global. We also searched the grey literature until November 2017. SELECTION CRITERIA We included randomised and quasi-randomised clinical trials comparing VR endoscopy simulation training versus any other method of endoscopy training with outcomes measured on humans in the clinical setting, including conventional patient-based training, training using another form of endoscopy simulation, or no training. We also included trials comparing two different methods of VR training. 
DATA COLLECTION AND ANALYSIS Two review authors independently assessed the eligibility and methodological quality of trials, and extracted data on the trial characteristics and outcomes. We pooled data for meta-analysis where participant groups were similar, studies assessed the same intervention and comparator, and had similar definitions of outcome measures. We calculated risk ratios for dichotomous outcomes with 95% confidence intervals (CI). We calculated mean difference (MD) and standardised mean difference (SMD) with 95% CI for continuous outcomes when studies reported the same or different outcome measures, respectively. We used GRADE to rate the quality of the evidence. MAIN RESULTS We included 18 trials (421 participants; 3817 endoscopic procedures). We judged three trials as at low risk of bias. Ten trials compared VR training with no training, five trials with conventional endoscopy training, one trial with another form of endoscopy simulation training, and two trials compared two different methods of VR training. Due to substantial clinical and methodological heterogeneity across our four comparisons, we did not perform a meta-analysis for several outcomes. We rated the quality of evidence as moderate, low, or very low due to risk of bias, imprecision, and heterogeneity. Virtual reality endoscopy simulation training versus no training: There was insufficient evidence to determine the effect on composite score of competency (MD 3.10, 95% CI -0.16 to 6.36; 1 trial, 24 procedures; low-quality evidence). Composite score of competency was based on 5-point Likert scales assessing seven domains: atraumatic technique, colonoscope advancement, use of instrument controls, flow of procedure, use of assistants, knowledge of specific procedure, and overall performance. Scoring range was from 7 to 35, a higher score representing a higher level of competence.
Virtual reality training compared to no training likely provides participants with some benefit, as measured by independent procedure completion (RR 1.62, 95% CI 1.15 to 2.26; 6 trials, 815 procedures; moderate-quality evidence). We evaluated overall rating of performance (MD 0.45, 95% CI 0.15 to 0.75; 1 trial, 18 procedures), visualisation of mucosa (MD 0.60, 95% CI 0.20 to 1.00; 1 trial, 55 procedures), performance time (MD -0.20 minutes, 95% CI -0.71 to 0.30; 2 trials, 29 procedures), and patient discomfort (SMD -0.16, 95% CI -0.68 to 0.35; 2 trials, 145 procedures), all with very low-quality evidence. No trials reported procedure-related complications or critical flaws (e.g. bleeding, luminal perforation) (3 trials, 550 procedures; moderate-quality evidence). Virtual reality endoscopy simulation training versus conventional patient-based training: One trial reported composite score of competency but did not provide sufficient data for quantitative analysis. Virtual reality training compared to conventional patient-based training resulted in fewer independent procedure completions (RR 0.45, 95% CI 0.27 to 0.74; 2 trials, 174 procedures; low-quality evidence). We evaluated performance time (SMD 0.12, 95% CI -0.55 to 0.80; 2 trials, 34 procedures), overall rating of performance (MD -0.90, 95% CI -4.40 to 2.60; 1 trial, 16 procedures), and visualisation of mucosa (MD 0.0, 95% CI -6.02 to 6.02; 1 trial, 18 procedures), all with very low-quality evidence. Virtual reality training in combination with conventional training appears to be advantageous over VR training alone. No trials reported any procedure-related complications or critical flaws (3 trials, 72 procedures; very low-quality evidence). Virtual reality endoscopy simulation training versus another form of endoscopy simulation: Based on one study, there were no differences between groups with respect to composite score of competency, performance time, and visualisation of mucosa.
Virtual reality training in combination with another form of endoscopy simulation training did not appear to confer any benefit compared to VR training alone. Two methods of virtual reality training: Based on one study, a structured VR simulation-based training curriculum compared to self-regulated learning on a VR simulator appears to provide benefit with respect to a composite score evaluating competency. Based on another study, a progressive-learning curriculum that sequentially increases task difficulty provides benefit with respect to a composite score of competency over the structured VR training curriculum. AUTHORS' CONCLUSIONS VR simulation-based training can be used to supplement early conventional endoscopy training for health professions trainees with limited or no prior endoscopic experience. However, we found insufficient evidence to advise for or against the use of VR simulation-based training as a replacement for early conventional endoscopy training. The quality of the current evidence was low due to inadequate randomisation, allocation concealment, and/or blinding of outcome assessment in several trials. Further trials are needed that are at low risk of bias, utilise outcome measures with strong evidence of validity and reliability, and examine the optimal nature and duration of training.
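The dichotomous-outcome effect sizes in this review (e.g. RR 1.62, 95% CI 1.15 to 2.26 for independent procedure completion) are risk ratios with confidence intervals constructed on the log scale. A minimal sketch of that standard calculation, using made-up counts rather than the review's data (the `risk_ratio` helper is hypothetical):

```python
# Risk ratio with a 95% CI computed on the log scale, the usual approach
# for dichotomous outcomes in meta-analysis. Hypothetical counts only.
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald CI on log(RR)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent groups
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical completion counts: 40/100 with VR training vs 25/100 without
rr, lo, hi = risk_ratio(40, 100, 25, 100)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A CI whose lower bound stays above 1 (as in the review's RR 1.62, CI 1.15 to 2.26) is what supports the conclusion that VR training "likely provides participants with some benefit."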
Collapse
Affiliation(s)
- Rishad Khan
- Schulich School of Medicine and Dentistry, Western University, Department of Medicine, London, Canada
| | - Joanne Plahouras
- University of Toronto, 27 King's College Circle, Toronto, Ontario, Canada, M5S 1A1
| | - Bradley C Johnston
- Dalhousie University, Department of Community Health and Epidemiology, 5790 University Avenue, Halifax, NS, Canada, B3H 1V7
| | - Michael A Scaffidi
- St. Michael's Hospital, University of Toronto, Department of Medicine, Division of Gastroenterology, Toronto, ON, Canada
| | - Samir C Grover
- St. Michael's Hospital, University of Toronto, Department of Medicine, Division of Gastroenterology, Toronto, ON, Canada
| | - Catharine M Walsh
- The Hospital for Sick Children, Division of Gastroenterology, Hepatology, and Nutrition, 555 University Ave, Toronto, ON, Canada, M5G 1X8
| | | |
Collapse
|
16
|
Grover SC, Scaffidi MA, Khan R, Garg A, Al-Mazroui A, Alomani T, Yu JJ, Plener IS, Al-Awamy M, Yong EL, Cino M, Ravindran NC, Zasowski M, Grantcharov TP, Walsh CM. Progressive learning in endoscopy simulation training improves clinical performance: a blinded randomized trial. Gastrointest Endosc 2017; 86:881-889. [PMID: 28366440 DOI: 10.1016/j.gie.2017.03.1529] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Accepted: 03/17/2017] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND AIMS A structured comprehensive curriculum (SCC) that uses simulation-based training (SBT) can improve clinical colonoscopy performance. This curriculum may be enhanced through the application of progressive learning, a training strategy centered on incrementally challenging learners. We aimed to determine whether a progressive learning-based curriculum (PLC) would lead to superior clinical performance compared with an SCC. METHODS This was a single-blinded randomized controlled trial conducted at a single academic center. Thirty-seven novice endoscopists were recruited and randomized to either a PLC (n = 18) or to an SCC (n = 19). The PLC comprised 6 hours of SBT, which progressed in complexity and difficulty. The SCC included 6 hours of SBT, with cases of random order of difficulty. Both groups received expert feedback and 4 hours of didactic teaching. Participants were assessed at baseline, immediately after training, and 4 to 6 weeks after training. The primary outcome was participants' performance during their first 2 clinical colonoscopies, as assessed by using the Joint Advisory Group Direct Observation of Procedural Skills assessment tool (JAG DOPS). Secondary outcomes were differences in endoscopic knowledge, technical and communication skills, and global performance in the simulated setting. RESULTS The PLC group outperformed the SCC group during first and second clinical colonoscopies, measured by JAG DOPS (P < .001). Additionally, the PLC group had superior technical and communication skills and global performance in the simulated setting (P < .05). There were no differences between groups in endoscopic knowledge (P > .05). CONCLUSIONS Our findings demonstrate the superiority of a PLC for endoscopic simulation, compared with an SCC. Challenging trainees progressively is a simple, theory-based approach to simulation whereby the performance of clinical colonoscopies can be improved. (Clinical trial registration number: NCT02000180.)
Affiliation(s)
- Samir C Grover, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Michael A Scaffidi, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Rishad Khan, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ankit Garg, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ahmed Al-Mazroui, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Tareq Alomani, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Jeffrey J Yu, Division of Gastroenterology, Hepatology, and Nutrition, Learning Institute, and Research Institute, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; The Wilson Centre, University of Toronto, Toronto, Ontario, Canada
- Ian S Plener, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Mohamed Al-Awamy, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Elaine L Yong, Division of Gastroenterology, Sunnybrook Hospital, University of Toronto, Toronto, Ontario, Canada
- Maria Cino, Division of Gastroenterology, Toronto Western Hospital, University of Toronto, Toronto, Ontario, Canada
- Nikila C Ravindran, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Mark Zasowski, Division of Gastroenterology, St. Michael's Hospital Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Teodor P Grantcharov, Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Catharine M Walsh, Division of Gastroenterology, Hepatology, and Nutrition, Learning Institute, and Research Institute, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada; The Wilson Centre, University of Toronto, Toronto, Ontario, Canada
|
17
|
Khan R, Scaffidi MA, Walsh CM, Lin P, Al-Mazroui A, Chana B, Kalaichandran R, Lee W, Grantcharov TP, Grover SC. Simulation-Based Training of Non-Technical Skills in Colonoscopy: Protocol for a Randomized Controlled Trial. JMIR Res Protoc 2017; 6:e153. [PMID: 28778849 PMCID: PMC5562936 DOI: 10.2196/resprot.7690] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2017] [Accepted: 05/30/2017] [Indexed: 12/12/2022] Open
Abstract
Background Non-technical skills (NTS), such as communication and professionalism, contribute to the safe and effective completion of procedures. NTS training has previously been shown to improve surgical performance. Moreover, increases in NTS have been associated with improved clinical endoscopic performance. Despite this evidence, NTS training has not been tested as an intervention in endoscopy. Objective The aim of this study is to evaluate the effectiveness of a simulation-based training (SBT) curriculum of NTS on novice endoscopists’ performance of clinical colonoscopy. Methods Novice endoscopists were randomized to 2 groups. The control group received 4 hours of interactive didactic sessions on colonoscopy theory and 6 hours of SBT. Hours 5 and 6 of the SBT were integrated scenarios, wherein participants interacted with a standardized patient and nurse while performing a colonoscopy on the virtual reality (VR) simulator. The NTS (intervention) group received the same teaching sessions, but the last hour was focused on NTS teaching. The NTS group also reviewed a checklist of tasks relevant to NTS concepts prior to each integrated scenario case and was provided with dedicated feedback on their NTS performance during the integrated scenario practice. All participants were assessed at baseline, immediately after training, and 4 to 6 weeks post-training. The primary outcome measure is colonoscopy-specific performance in the clinical setting. Results In total, 42 novice endoscopists completed the study. Data collection and analysis are ongoing. We anticipate completion of all assessments by August 2017. Data analysis, manuscript writing, and subsequent submission for publication are expected to be completed by December 2017. Conclusions Results from this study may inform the implementation of NTS training into postgraduate gastrointestinal curricula. NTS curricula may improve attitudes towards patient safety and self-reflection among trainees. Moreover, enhanced NTS may lead to superior clinical performance and outcomes in colonoscopy. Trial Registration ClinicalTrials.gov NCT02877420; https://www.clinicaltrials.gov/ct2/show/NCT02877420 (Archived by WebCite at http://www.webcitation.org/6rw94ubXX)
Affiliation(s)
- Rishad Khan, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Michael A Scaffidi, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Catharine M Walsh, Hospital for Sick Children, Division of Gastroenterology, Hepatology, and Nutrition, Learning Institute, and Research Institute, University of Toronto, Toronto, ON, Canada; The Wilson Centre, University of Toronto, Toronto, ON, Canada
- Peter Lin, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Ahmed Al-Mazroui, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Barinder Chana, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Ruben Kalaichandran, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Woojin Lee, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
- Teodor P Grantcharov, St. Michael's Hospital, Department of General Surgery, University of Toronto, Toronto, ON, Canada
- Samir C Grover, St. Michael's Hospital, Division of Gastroenterology, University of Toronto, Toronto, ON, Canada
|
18
|
Daniels VJ, Harley D. The effect on reliability and sensitivity to level of training of combining analytic and holistic rating scales for assessing communication skills in an internal medicine resident OSCE. PATIENT EDUCATION AND COUNSELING 2017; 100:1382-1386. [PMID: 28228339 DOI: 10.1016/j.pec.2017.02.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Revised: 02/06/2017] [Accepted: 02/09/2017] [Indexed: 05/17/2023]
Abstract
OBJECTIVE Although previous research has compared checklists to rating scales for assessing communication, the purpose of this study was to compare the effect on reliability and sensitivity to level of training of an analytic, a holistic, and a combined analytic-holistic rating scale in assessing communication skills. METHODS The University of Alberta Internal Medicine Residency runs OSCEs for postgraduate year (PGY) 1 and 2 residents and another for PGY-4 residents. Communication stations were scored with an analytic scale (empathy, non-verbal skills, verbal skills, and coherence subscales) and a holistic scale. Authors analyzed reliability of individual and combined scales using generalizability theory and evaluated each scale's sensitivity to level of training. RESULTS For analytic, holistic, and combined scales, 12, 12, and 11 stations respectively yielded a Phi of 0.8 for the PGY-1,2 cohort, and 16, 16, and 14 stations yielded a Phi of 0.8 for the PGY-4 cohort. PGY-4 residents scored higher on the combined scale, the analytic rating scale, and the non-verbal and coherence subscales. CONCLUSION A combined analytic-holistic rating scale increased score reliability and was sensitive to level of training. PRACTICE IMPLICATIONS Given increased validity evidence, OSCE developers should consider combining analytic and holistic scales when assessing communication skills.
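The station counts that yield a Phi of 0.8 above come from a generalizability-theory decision study. As a sketch of the underlying arithmetic (the notation here is assumed, since the abstract does not report its variance-component estimates), the dependability coefficient for a crossed person-by-station design with $n_s$ stations is:

```latex
% Dependability (absolute-decision) coefficient from generalizability theory,
% for a crossed person (p) x station (s) design with n_s stations.
% sigma^2_p: person (true-score) variance; sigma^2_s: station variance;
% sigma^2_{ps,e}: person-by-station interaction confounded with residual error.
\Phi \;=\; \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} \;+\; \dfrac{\sigma^{2}_{s} + \sigma^{2}_{ps,e}}{n_{s}}}
```

Because the absolute-error term shrinks as $n_s$ grows, solving this expression for $n_s$ at a target $\Phi$ of 0.8 gives the required number of stations, which is how the analytic, holistic, and combined scales can be compared at 12, 12, and 11 stations respectively.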
Affiliation(s)
- Dwight Harley, School of Dentistry, University of Alberta, Edmonton, Canada
|
19
|
Abstract
Surgical education continues to evolve from the master-apprentice model. Newer educational methods are needed to manage the dual challenge of educating trainees while providing safe surgical care. This requires integrating adult learning concepts into the delivery of practical training and education in busy clinical environments. A narrative review aimed at outlining and integrating adult learning and surgical education theory was undertaken. Additionally, this information was used to relate the practical delivery of surgical training and education to day-to-day surgical practice. Concepts were sourced from reference material. Additional material was found using a PubMed search of the words: 'surgical education theory' and 'adult learning theory medical'. This yielded 1351 abstracts, of which 43 articles with a focus on key concepts in adult education theory were used. Key papers were used to formulate structure, and additional cross-referenced papers were included where appropriate. Current concepts within adult learning have much to offer when considering how to better deliver surgical education and training. Better integration of adult learning theory can be fruitful. Individual surgical teaching units need to rethink their paradigms and consider how each individual can contribute to the education experience. Upskilling courses for trainers can do much to improve the delivery of surgical education. Understanding adult learning concepts and integrating these into day-to-day teaching can be valuable.
Affiliation(s)
- Prem Rashid, Department of Urology, Port Macquarie Base Hospital, Rural Clinical School, The University of New South Wales, Port Macquarie, NSW, 2444, Australia
|
20
|
Ponton-Carss A, Kortbeek JB, Ma IW. Assessment of technical and nontechnical skills in surgical residents. Am J Surg 2016; 212:1011-1019. [DOI: 10.1016/j.amjsurg.2016.03.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Revised: 02/24/2016] [Accepted: 03/22/2016] [Indexed: 01/03/2023]
|
21
|
Validity and Feasibility Evidence of Objective Structured Clinical Examination to Assess Competencies of Pediatric Critical Care Trainees. Crit Care Med 2016; 44:948-53. [PMID: 26862709 DOI: 10.1097/ccm.0000000000001604] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE The purpose of this study was to provide validity and feasibility evidence for the use of an objective structured clinical examination in the assessment of pediatric critical care medicine trainees. DESIGN This was a validation study. Validity evidence was based on Messick's framework. SETTING A tertiary, university-affiliated academic center. SUBJECTS Seventeen pediatric critical care medicine fellows were recruited in the 2012 and 2013 academic years. INTERVENTIONS None. All subjects completed an objective structured clinical examination assessment. MEASUREMENTS AND MAIN RESULTS Seventeen trainees were assessed. Simulation scenarios were developed for content validity by pediatric critical care medicine and education experts using CanMEDS competencies. Scenarios were piloted before the study. Each scenario was evaluated by two interprofessional raters. Inter-rater agreement, measured using intraclass correlations, was 0.91 (SE = 0.09) across stations. Generalizability theory was used to evaluate internal structure and reliability. Reliability was moderate (G-coefficient = 0.67, Φ-coefficient = 0.52). The greatest source of variability was from participant by station variance (40.6%). Pearson correlation coefficients were used to evaluate the relationship of the objective structured clinical examination with each of the traditional assessment instruments: multisource feedback, in-training evaluation report, short-answer questions, and Multidisciplinary Critical Care Knowledge Assessment Program. Performance on the objective structured clinical examination correlated with performance on the Multidisciplinary Critical Care Knowledge Assessment Program (r = 0.52; p = 0.032) and multisource feedback (r = 0.59; p = 0.017), but not with the overall performance on the in-training evaluation report (r = 0.37; p = 0.143) or short-answer questions (r = 0.08; p = 0.767). Consequences were not assessed.
CONCLUSION Validity and feasibility evidence in this study indicate that the use of the objective structured clinical examination scores can be a valid way to assess CanMEDS competencies required for independent practice in pediatric critical care medicine.
|
22
|
Cook DA, Lineberry M. Consequences Validity Evidence: Evaluating the Impact of Educational Assessments. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:785-95. [PMID: 26839945 DOI: 10.1097/acm.0000000000001114] [Citation(s) in RCA: 93] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Because tests that do not alter management (i.e., influence decisions and actions) should not be performed, data on the consequences of assessment constitute a critical source of validity evidence. Consequences validity evidence is challenging for many educators to understand, perhaps because it has no counterpart in the older framework of content, criterion, and construct validity. The authors' purpose is to explain consequences validity evidence and propose a framework for organizing its collection and interpretation. Both clinical and educational assessments can be viewed as interventions. The act of administering or taking a test, the interpretation of scores, and the ensuing decisions and actions influence those being assessed (e.g., patients or students) and other people and systems (e.g., physicians, teachers, hospitals, schools). Consequences validity evidence examines such impacts of assessments. Despite its importance, consequences evidence is reported infrequently in health professions education (range 5%-20% of studies in recent systematic reviews) and is typically limited in scope and rigor. Consequences validity evidence can derive from evaluations of the impact on examinees, educators, schools, or the end target of practice (e.g., patients or health care systems); and the downstream impact of classifications (e.g., different score cut points and labels). Impact can result from the uses of scores or from the assessment activity itself, and can be intended or unintended and beneficial or harmful. Both quantitative and qualitative research methods are useful. The type, quantity, and rigor of consequences evidence required will vary depending on the assessment and the claims for its use.
Affiliation(s)
- David A Cook
- D.A. Cook is professor of medicine and medical education, associate director, Mayo Clinic Online Learning, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota. M. Lineberry is assistant professor of medical education, Department of Medical Education, and assistant director for research, Graham Clinical Performance Center, University of Illinois at Chicago, Chicago, Illinois
|
23
|
Carney PA, Palmer RT, Fuqua Miller M, Thayer EK, Estroff SE, Litzelman DK, Biagioli FE, Teal CR, Lambros A, Hatt WJ, Satterfield JM. Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2016; 91:730-42. [PMID: 26796091 PMCID: PMC4846480 DOI: 10.1097/acm.0000000000001090] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE Behavioral and social science (BSS) competencies are needed to provide quality health care, but psychometrically validated measures to assess these competencies are difficult to find. Moreover, they have not been mapped to existing frameworks, like those from the Liaison Committee on Medical Education (LCME) and Accreditation Council for Graduate Medical Education (ACGME). This systematic review aimed to identify and evaluate the quality of assessment tools used to measure BSS competencies. METHOD The authors searched the literature published between January 2002 and March 2014 for articles reporting psychometric or other validity/reliability testing, using OVID, CINAHL, PubMed, ERIC, Research and Development Resource Base, SOCIOFILE, and PsycINFO. They reviewed 5,104 potentially relevant titles and abstracts. To guide their review, they mapped BSS competencies to existing LCME and ACGME frameworks. The final included articles fell into three categories: instrument development, which were of the highest quality; educational research, which were of the second highest quality; and curriculum evaluation, which were of lower quality. RESULTS Of the 114 included articles, 33 (29%) yielded strong evidence supporting tools to assess communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Sixty-two (54%) articles yielded moderate evidence and 19 (17%) weak evidence. Articles mapped to all LCME standards and ACGME core competencies; the most common was communication skills. CONCLUSIONS These findings serve as a valuable resource for medical educators and researchers. More rigorous measurement validation and testing and more robust study designs are needed to understand how educational strategies contribute to BSS competency development.
Affiliation(s)
- Patricia A Carney
- P.A. Carney is professor of family medicine and of public health and preventive medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. R.T. Palmer is assistant professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. M.F. Miller is senior research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. E.K. Thayer is research assistant, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. S.E. Estroff is professor, Department of Social Medicine, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina. D.K. Litzelman is D. Craig Brater Professor of Medicine and senior director for research in health professions education and practice, Indiana University School of Medicine, Indianapolis, Indiana. F.E. Biagioli is professor of family medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. C.R. Teal is assistant professor, Department of Medicine, and director, Educational Evaluation and Research, Office of Undergraduate Medical Education, Baylor College of Medicine, Houston, Texas. A. Lambros is active emeritus associate professor, Social Sciences & Health Policy, Wake Forest School of Medicine, Winston-Salem, North Carolina. W.J. Hatt is programmer analyst, Department of Family Medicine, Oregon Health & Science University School of Medicine, Portland, Oregon. J.M. Satterfield is professor of clinical medicine, University of California, San Francisco, School of Medicine, San Francisco, California
|
24
|
Lamba S, Tyrie LS, Bryczkowski S, Nagurka R. Teaching Surgery Residents the Skills to Communicate Difficult News to Patient and Family Members: A Literature Review. J Palliat Med 2016; 19:101-7. [DOI: 10.1089/jpm.2015.0292] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Affiliation(s)
- Sangeeta Lamba, Department of Emergency Medicine, Rutgers New Jersey Medical School, Newark, New Jersey
- Leslie S. Tyrie, Department of Surgery, Rutgers New Jersey Medical School, Newark, New Jersey
- Sarah Bryczkowski, Department of Surgery, Rutgers New Jersey Medical School, Newark, New Jersey
- Roxanne Nagurka, Department of Emergency Medicine, Rutgers New Jersey Medical School, Newark, New Jersey
|
25
|
Gjeraa K, Jepsen RMHG, Rewers M, Østergaard D, Dieckmann P. Exploring the relationship between anaesthesiologists' non-technical and technical skills. Acta Anaesthesiol Scand 2016; 60:36-47. [PMID: 26272742 DOI: 10.1111/aas.12598] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2015] [Revised: 06/24/2015] [Accepted: 07/08/2015] [Indexed: 11/29/2022]
Abstract
BACKGROUND A combination of non-technical skills (NTS) and technical skills (TS) is crucial for anaesthetic patient management. However, a deeper understanding of the relationship between these two skills remains to be explored. We investigated the characteristics of trainee anaesthesiologists' NTS and TS in a simulated unexpected difficult airway management scenario. METHODS A mixed-method approach was used to explore the relationship between NTS and TS in 25 videos of 2nd year trainee anaesthesiologists managing a simulated difficult airway scenario. The videos were assessed using the customised version of the Anaesthetists' Non-Technical Skills System, ANTSdk, and an adapted TS checklist for calculating the correlation between NTS and TS. Written descriptions of the observed NTS were analysed using directed content analysis. RESULTS The correlation between the NTS and the TS ratings was 0.106 (two-tailed significance of 0.613). Inter-rater reliability was substantial. Themes characterising good NTS included a systematic approach, planning and communicating decisions as well as responding to the evolving situation. A list of desirable, concrete NTS for the specific airway management situation was generated. CONCLUSION This study illustrates that trainee anaesthesiologists' NTS and TS were not correlated in this setting but were intertwined, and it shows how the interplay of NTS and TS can impact patient management. Themes describing the characteristics of NTS and a list of desirable, concrete NTS were developed to aid the understanding, training and use of NTS.
Affiliation(s)
- K. Gjeraa, Danish Institute for Medical Simulation, Herlev Hospital, Capital Region of Denmark and University of Copenhagen, Copenhagen, Denmark
- R. M. H. G. Jepsen, Danish Institute for Medical Simulation, Herlev Hospital, Capital Region of Denmark and University of Copenhagen, Copenhagen, Denmark
- M. Rewers, Danish Institute for Medical Simulation, Herlev Hospital, Capital Region of Denmark and University of Copenhagen, Copenhagen, Denmark
- D. Østergaard, Danish Institute for Medical Simulation, Herlev Hospital, Capital Region of Denmark and University of Copenhagen, Copenhagen, Denmark
- P. Dieckmann, Danish Institute for Medical Simulation, Herlev Hospital, Capital Region of Denmark and University of Copenhagen, Copenhagen, Denmark
|
26
|
Hatala R, Cook DA, Brydges R, Hawkins R. Constructing a validity argument for the Objective Structured Assessment of Technical Skills (OSATS): a systematic review of validity evidence. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2015; 20:1149-75. [PMID: 25702196 DOI: 10.1007/s10459-015-9593-1] [Citation(s) in RCA: 90] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2014] [Accepted: 02/15/2015] [Indexed: 05/28/2023]
Abstract
In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focussed on surgical technical skills assessment. We identified three intended uses for the OSATS, namely formative feedback, high-stakes assessment and program evaluation. Following Kane's framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.
Affiliation(s)
- Rose Hatala, Department of Medicine, University of British Columbia, Suite 5907, Burrard Bldg, St. Paul's Hospital, 1081 Burrard St, Vancouver, BC, V6Z 1Y6, Canada
- David A Cook, Mayo Clinic Online Learning and Mayo Clinic Multidisciplinary Simulation Center, Mayo Clinic College of Medicine, Rochester, MN, USA; Division of General Internal Medicine, Mayo Clinic, Rochester, MN, USA
- Ryan Brydges, Wilson Centre, University Health Network, Toronto, ON, Canada
- Richard Hawkins, Medical Education Programs, American Medical Association, Chicago, IL, USA
|
27
|
|
28
|
|
29
|
Pugh D, Hamstra SJ, Wood TJ, Humphrey-Murto S, Touchie C, Yudkowsky R, Bordage G. A procedural skills OSCE: assessing technical and non-technical skills of internal medicine residents. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2015; 20:85-100. [PMID: 24823793 DOI: 10.1007/s10459-014-9512-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2013] [Accepted: 05/05/2014] [Indexed: 06/03/2023]
Abstract
Internists are required to perform a number of procedures that require mastery of technical and non-technical skills; however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents to assess their technical and non-technical skills when performing procedures. Thirty-five first- to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared by levels of training, experience, and to evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using Generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5 vs. 62.2%) and non-technical (83.2 vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures and this study provides validity evidence to support its use as an in-training examination.
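The "disattenuated correlation" reported above presumably refers to the standard Spearman correction for attenuation, which rescales an observed correlation by the measurement reliability of each score (the symbols below are assumed notation, not taken from the paper):

```latex
% Spearman's correction for attenuation: the estimated correlation between
% true scores is the observed correlation r_XY divided by the geometric mean
% of the two score reliabilities r_XX' and r_YY'.
\rho_{T_X T_Y} \;=\; \frac{r_{XY}}{\sqrt{\,r_{XX'}\; r_{YY'}\,}}
```

Since both reliabilities are below 1 (0.68 for the technical component; the IM-OSCE reliability is not stated in the abstract), the observed correlation would necessarily be smaller than the corrected value of 0.77.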
Affiliation(s)
- Debra Pugh, Department of Medicine, University of Ottawa, Ottawa, ON, Canada
|
30
|
Brady S, Bogossian F, Gibbons K. The effectiveness of varied levels of simulation fidelity on integrated performance of technical skills in midwifery students--a randomised intervention trial. NURSE EDUCATION TODAY 2015; 35:524-529. [PMID: 25433985 DOI: 10.1016/j.nedt.2014.11.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2014] [Revised: 11/07/2014] [Accepted: 11/12/2014] [Indexed: 06/04/2023]
Abstract
BACKGROUND Simulation as a pedagogical approach is used in health professional education to address the need to safely develop effective clinical skills prior to undertaking clinical practice in complex healthcare environments. Evidence for the use of simulation in midwifery is largely anecdotal, and research evaluating the effectiveness of different levels of simulation fidelity is lacking. OBJECTIVES To evaluate the effectiveness of varying levels of fidelity on simulated learning experiences and identify which best contributes to integrated and global clinical skills development in midwifery students. DESIGN Randomised three-arm intervention trial. PARTICIPANTS Midwifery students who had yet to receive theoretical instruction in the performance of the clinical skill of vaginal examination. METHODS Midwifery students (n=69) received theoretical instruction in the performance of vaginal examination following random allocation into one of three intervention arms. Participants were recorded performing the procedure using low fidelity (part task trainer only), medium fidelity (part task trainer and life-sized poster of a pregnant woman) or progressive fidelity (part task trainer and a simulated standardised patient). Senior midwifery students were recruited to act in the role of standardised patients. RESULTS There was a statistically significant difference in the mean total Global Rating Scale score between at least two of the three groups (p=0.009). The progressive fidelity group differed from both the low fidelity group (p=0.010) and the medium fidelity group (p=0.048). There was a statistically significant difference in the mean total Integrated Procedural Performance Instrument score between at least two of the three groups (p=0.012). The progressive fidelity group differed from both the low fidelity group (p=0.026) and the medium fidelity group (p=0.026).
CONCLUSIONS Progressive and medium fidelity simulation yield better outcomes than low fidelity simulation, and where resources are constrained, medium fidelity equipment, such as a life-sized poster, can produce effective learning experiences for midwifery students.
Affiliation(s)
- Susannah Brady, The University of Queensland, School of Nursing and Midwifery, Herston Campus, Edith Cavell Building, Fourth Avenue, Herston, QLD 4029, Australia
- Fiona Bogossian, The University of Queensland, School of Nursing and Midwifery, Herston Campus, Edith Cavell Building, Fourth Avenue, Herston, QLD 4029, Australia
- Kristen Gibbons, The University of Queensland, School of Nursing and Midwifery, Herston Campus, Edith Cavell Building, Fourth Avenue, Herston, QLD 4029, Australia; Mater Research Office, Mater Medical Research Institute, The University of Queensland, South Brisbane, QLD 4101, Australia
|
31
|
Ilgen JS, Ma IWY, Hatala R, Cook DA. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. MEDICAL EDUCATION 2015; 49:161-73. [PMID: 25626747 DOI: 10.1111/medu.12621] [Citation(s) in RCA: 198] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2014] [Revised: 08/01/2014] [Accepted: 09/09/2014] [Indexed: 05/14/2023]
Abstract
CONTEXT The relative advantages and disadvantages of checklists and global rating scales (GRSs) have long been debated. To compare the merits of these scale types, we conducted a systematic review of the validity evidence for checklists and GRSs in the context of simulation-based assessment of health professionals. METHODS We conducted a systematic review of multiple databases including MEDLINE, EMBASE and Scopus to February 2013. We selected studies that used both a GRS and checklist in the simulation-based assessment of health professionals. Reviewers working in duplicate evaluated five domains of validity evidence, including correlation between scales and reliability. We collected information about raters, instrument characteristics, assessment context, and task. We pooled reliability and correlation coefficients using random-effects meta-analysis. RESULTS We found 45 studies that used a checklist and GRS in simulation-based assessment. All studies included physicians or physicians in training; one study also included nurse anaesthetists. Topics of assessment included open and laparoscopic surgery (n = 22), endoscopy (n = 8), resuscitation (n = 7) and anaesthesiology (n = 4). The pooled GRS-checklist correlation was 0.76 (95% confidence interval [CI] 0.69-0.81, n = 16 studies). Inter-rater reliability was similar between scales (GRS 0.78, 95% CI 0.71-0.83, n = 23; checklist 0.81, 95% CI 0.75-0.85, n = 21), whereas GRS inter-item reliabilities (0.92, 95% CI 0.84-0.95, n = 6) and inter-station reliabilities (0.80, 95% CI 0.73-0.85, n = 10) were higher than those for checklists (0.66, 95% CI 0-0.84, n = 4 and 0.69, 95% CI 0.56-0.77, n = 10, respectively). Content evidence for GRSs usually referenced previously reported instruments (n = 33), whereas content evidence for checklists usually described expert consensus (n = 26). Checklists and GRSs usually had similar evidence for relations to other variables. 
CONCLUSIONS Checklist inter-rater reliability and trainee discrimination were more favourable than suggested in earlier work, but each task requires a separate checklist. Compared with the checklist, the GRS has higher average inter-item and inter-station reliability, can be used across multiple tasks, and may better capture nuanced elements of expertise.
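The pooled GRS-checklist correlation reported above comes from a random-effects meta-analysis. The core mechanics can be sketched with a simplified fixed-effect version: each study's correlation is Fisher z-transformed, averaged with inverse-variance weights (n - 3), and back-transformed. The correlations and sample sizes below are invented for illustration, not taken from the review, and the sketch omits the between-study variance term a random-effects model would add.

```python
import math

def pool_correlations(rs, ns):
    """Pool Pearson correlations across studies via Fisher's z transform
    (simplified fixed-effect sketch; illustrative only)."""
    zs = [math.atanh(r) for r in rs]      # Fisher z transform of each study's r
    ws = [n - 3 for n in ns]              # inverse of Var(z) = 1 / (n - 3)
    z_pooled = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_pooled)            # back-transform to the r scale

# Hypothetical per-study correlations and sample sizes:
r_pooled = pool_correlations([0.70, 0.78, 0.80], [40, 60, 50])
```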
Affiliation(s)
- Jonathan S Ilgen
- Division of Emergency Medicine, Department of Medicine, University of Washington School of Medicine, Seattle, Washington, USA
|
32
|
Gosai J, Purva M, Gunn J. Simulation in cardiology: state of the art. Eur Heart J 2015; 36:777-83. [DOI: 10.1093/eurheartj/ehu527] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2014] [Accepted: 12/27/2014] [Indexed: 01/01/2023] Open
|
33
|
|
34
|
Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2014; 19:233-50. [PMID: 23636643 DOI: 10.1007/s10459-013-9458-4] [Citation(s) in RCA: 195] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2012] [Accepted: 04/09/2013] [Indexed: 05/26/2023]
Abstract
Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies, only six (3%) referenced the five-source framework, and 51 (24%) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94% of studies; reported most often as variation in simulator scores across training levels), internal structure (76%; supported by reliability data or item analysis), and content (63%; reported as expert panels or modification of existing instruments). Evidence of response process and consequences was each present in <10% of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process is infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.
Affiliation(s)
- David A Cook
- Office of Education Research, Mayo Medical School, Rochester, MN, USA,
|
35
|
Falcone JL, Claxton RN, Marshall GT. Communication skills training in surgical residency: a needs assessment and metacognition analysis of a difficult conversation objective structured clinical examination. JOURNAL OF SURGICAL EDUCATION 2014; 71:309-315. [PMID: 24797845 DOI: 10.1016/j.jsurg.2013.09.020] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2013] [Revised: 08/28/2013] [Accepted: 09/25/2013] [Indexed: 06/03/2023]
Abstract
BACKGROUND The objective structured clinical examination (OSCE) can be used to evaluate the Accreditation Council for Graduate Medical Education Core Competencies of Professionalism and Interpersonal and Communication Skills. The aim of this study was to describe general surgery resident performance on a "difficult conversation" OSCE. METHODS In this prospective study, junior and senior residents participated in a 2-station OSCE. Junior stations involved discussing operative risks and benefits and breaking bad news. Senior stations involved discussing goals of care and discussing transition to comfort measures only status. Residents completed a post-OSCE checklist and Likert-based self-evaluations of experience, comfort, and confidence. Trained standardized patients (SPs) evaluated residents using communication skill-based checklists and Likert-based assessments. Pearson correlation coefficients were determined between self-assessment and SP assessment. Mann-Whitney U tests were conducted between junior and senior resident variables, using α = 0.05. RESULTS There were 27 junior residents (age 28.1 ± 1.9 years [29.6% female]) and 27 senior residents (age 32.1 ± 2.5 years [26.9% female]). The correlation of self-assessment and SP assessment of overall communication skills by junior residents was -0.32 on the risks and benefits case and 0.07 on the breaking bad news case. The correlation of self-assessment and SP assessment of overall communication skills by senior residents was 0.30 on the goals of care case and 0.26 on the comfort measures only case. SP assessments showed that junior residents had higher overall communication skills than senior residents (p = 0.03). Senior residents perceived that having difficult conversations was more level appropriate (p < 0.001), and they were less nervous having difficult conversations (p < 0.01) than junior residents.
CONCLUSIONS We found that residents perform difficult conversations well, that subjective and objective skills are correlated, and that skills-based training is needed across all residency levels. This well-received method may be used to observe, document, and provide resident feedback for these important skills.
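The two statistics used in the study above, Pearson correlation for agreement between self- and SP assessment and the Mann-Whitney U statistic for junior versus senior comparisons, can be sketched in plain Python. The rating data below are invented for illustration (not the study's data), and the sketch computes only the statistics, not the p-values.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for group a vs. group b (ties count 0.5)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

# Hypothetical Likert ratings: self-assessment vs. standardized-patient assessment
self_scores = [3, 4, 2, 5, 4, 3]
sp_scores = [4, 3, 3, 4, 5, 2]
r = pearson_r(self_scores, sp_scores)   # agreement between the two raters

# Hypothetical overall communication scores for two training levels
junior = [4.1, 4.3, 3.9, 4.5, 4.2]
senior = [3.8, 3.6, 4.0, 3.7, 3.9]
u = mann_whitney_u(junior, senior)      # compare to critical values / normal approx.
```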
Affiliation(s)
- John L Falcone
- University of Pittsburgh School of Medicine, Department of Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.
- René N Claxton
- University of Pittsburgh School of Medicine, Department of Medicine, Division of General Internal Medicine, Section of Palliative Care and Ethics, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Gary T Marshall
- University of Pittsburgh School of Medicine, Department of Surgery, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
|
36
|
Dijksterhuis MGK, Schuwirth LWT, Braat DDM, Teunissen PW, Scheele F. A qualitative study on trainees' and supervisors' perceptions of assessment for learning in postgraduate medical education. MEDICAL TEACHER 2013; 35:e1396-402. [PMID: 23600668 DOI: 10.3109/0142159x.2012.756576] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
INTRODUCTION Recent changes in postgraduate medical training curricula usually encompass a shift towards more formative assessment, or assessment for learning. However, though theoretically well suited to postgraduate training, evidence is emerging that engaging in formative assessment in daily clinical practice is complex. AIM We aimed to explore trainees' and supervisors' perceptions of what factors determine active engagement in formative assessment. METHODS Focus group study with postgraduate trainees and supervisors in obstetrics and gynaecology. RESULTS Three higher order themes emerged: individual perspectives on feedback, supportiveness of the learning environment and the credibility of feedback and/or feedback giver. CONCLUSION Engaging in formative assessment with a genuine impact on learning is complex and quite a challenge to both trainees and supervisors. Individual perspectives on feedback, a supportive learning environment and credibility of feedback are all important in this process. Every one of these should be taken into account when the utility of formative assessment in postgraduate medical training is evaluated.
|
37
|
Brady S, Bogossian F, Gibbons K, Wells A, Lyon P, Bonney D, Barlow M, Jackson A. A protocol for evaluating progressive levels of simulation fidelity in the development of technical skills, integrated performance and woman centred clinical assessment skills in undergraduate midwifery students. BMC MEDICAL EDUCATION 2013; 13:72. [PMID: 23706037 PMCID: PMC3666945 DOI: 10.1186/1472-6920-13-72] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/02/2013] [Accepted: 05/21/2013] [Indexed: 06/02/2023]
Abstract
BACKGROUND Simulation as a pedagogical approach has been used in health professional education to address the need to safely develop effective clinical skills prior to undertaking clinical practice. However, evidence for the use of simulation in midwifery is largely anecdotal, and research evaluating the effectiveness of different levels of simulation fidelity is lacking. Woman centred care is a core premise of the midwifery profession and describes the behaviours of an individual midwife who demonstrates safe and effective care of the individual woman. Woman centred care occurs when the midwife modifies the care to ensure the needs of each individual woman are respected and addressed. However, a review of the literature demonstrates an absence of a valid and reliable tool to measure the development of woman centred care behaviours. This study aims to determine which level of fidelity in simulated learning experiences provides the most effective learning outcomes in the development of woman centred clinical assessment behaviours and skills in student midwives. METHODS/DESIGN Three-arm, randomised, intervention trial. In this research we plan to: a) trial three levels of simulation fidelity - low, medium and progressive - on student midwives performing the procedure of vaginal examination; b) measure clinical assessment skills using the Global Rating Scale (GRS) and Integrated Procedural Performance Instrument (IPPI); and c) pilot the newly developed Woman Centred Care Scale (WCCS) to measure clinical behaviours related to woman-centredness. DISCUSSION This project aims to enhance knowledge in relation to the appropriate levels of fidelity in simulation that yield the best educational outcomes for the development of woman centred clinical assessment in student midwives.
The outcomes of this project may contribute to improved woman centred clinical assessment for student midwives, and more broadly influence decision making regarding education resource allocation for maternity simulation.
Affiliation(s)
- Susannah Brady
- School of Nursing & Midwifery, The University of Queensland, Ipswich Campus, Salisbury Road, Ipswich, QLD 4035, Australia
- Fiona Bogossian
- School of Nursing and Midwifery, The University of Queensland, Herston Campus, Edith Cavell Building, Fourth Avenue, Brisbane, Herston, QLD 4029, Australia
- Kristen Gibbons
- School of Nursing and Midwifery, The University of Queensland, Herston Campus, Edith Cavell Building, Fourth Avenue, Brisbane, Herston, QLD 4029, Australia
- Mater Research Office, Mater Medical Research Institute, South Brisbane, QLD 4101, Australia
- Andrew Wells
- School of Nursing and Midwifery, The University of Queensland, Herston Campus, Edith Cavell Building, Fourth Avenue, Brisbane, Herston, QLD 4029, Australia
- Mater Education, Mater Health Services, Mater Education Practice Improvement Center (MEPIC), Corporate Services Building, Level 4, Raymond Terrace, South Brisbane, QLD 4101, Australia
- Pauline Lyon
- School of Nursing and Midwifery, The University of Queensland, Herston Campus, Edith Cavell Building, Fourth Avenue, Brisbane, Herston, QLD 4029, Australia
- Mater Education, Mater Health Services, Mater Education Practice Improvement Center (MEPIC), Corporate Services Building, Level 4, Raymond Terrace, South Brisbane, QLD 4101, Australia
- Donna Bonney
- School of Nursing and Midwifery, The University of Queensland, Herston Campus, Edith Cavell Building, Fourth Avenue, Brisbane, Herston, QLD 4029, Australia
- Mater Education, Mater Health Services, Mater Education Practice Improvement Center (MEPIC), Corporate Services Building, Level 4, Raymond Terrace, South Brisbane, QLD 4101, Australia
- Melanie Barlow
- Mater Education, Mater Health Services, Mater Education Practice Improvement Center (MEPIC), Corporate Services Building, Level 4, Raymond Terrace, South Brisbane, QLD 4101, Australia
- Anne Jackson
- Mater Education, Mater Health Services, Mater Education Practice Improvement Center (MEPIC), Corporate Services Building, Level 4, Raymond Terrace, South Brisbane, QLD 4101, Australia
|
38
|
Touchie C, Humphrey-Murto S, Varpio L. Teaching and assessing procedural skills: a qualitative study. BMC MEDICAL EDUCATION 2013; 13:69. [PMID: 23672617 PMCID: PMC3658931 DOI: 10.1186/1472-6920-13-69] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2012] [Accepted: 05/02/2013] [Indexed: 05/19/2023]
Abstract
BACKGROUND Graduating Internal Medicine residents must possess sufficient skills to perform a variety of medical procedures. Little is known about resident experiences of acquiring procedural skills proficiency, of practicing these techniques, or of being assessed on their proficiency. The purpose of this study was to qualitatively investigate resident 1) experiences of the acquisition of procedural skills and 2) perceptions of procedural skills assessment methods available to them. METHODS Focus groups were conducted in the weeks following an assessment of procedural skills incorporated into an objective structured clinical examination (OSCE). Using fundamental qualitative description, emergent themes were identified and analyzed. RESULTS Residents perceived procedural skills assessment on the OSCE as a useful formative tool for direct observation and immediate feedback. This positive reaction was regularly expressed in conjunction with a frustration with available assessment systems. Participants reported that proficiency was acquired through resident directed learning with no formal mechanism to ensure acquisition or maintenance of skills. CONCLUSIONS The acquisition and assessment of procedural skills in Internal Medicine programs should move toward a more structured system of teaching, deliberate practice and objective assessment. We propose that directed, self-guided learning might meet these needs.
Affiliation(s)
- Claire Touchie
- The Ottawa Hospital, General Campus, 501 Smyth Road, CPCR 2135 (Box 209), Ottawa, ON K1H 8L6, Canada
- Susan Humphrey-Murto
- The Ottawa Hospital, General Campus, 501 Smyth Road, CPCR 2135 (Box 209), Ottawa, ON K1H 8L6, Canada
- Lara Varpio
- Academy for Innovation in Medical Education, University of Ottawa, Faculty of Medicine, Roger Guindon Hall, Room 2034, 451 Smyth Road, Ottawa, ON K1H 8M5, Canada
|
39
|
Hybrid simulation for knee arthrocentesis: improving fidelity in procedures training. J Gen Intern Med 2013; 28:723-7. [PMID: 23319411 PMCID: PMC3631077 DOI: 10.1007/s11606-012-2314-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2012] [Revised: 08/21/2012] [Accepted: 12/06/2012] [Indexed: 10/27/2022]
Abstract
BACKGROUND Procedures form a core competency for internists, yet many do not master these skills during residency. Simulation can help fill this gap, but many curricula focus on technical skills, and overlook communication skills necessary to perform procedures proficiently. Hybrid simulation (HS) is a novel way to teach and assess procedural skills in an integrated, contextually-based way. AIM To create a HS model for teaching arthrocentesis to internal medicine residents. SETTING Internal medicine residency program at the University of Toronto. PARTICIPANTS Twenty four second-year internal medicine residents. PROGRAM DESCRIPTION Residents were introduced to HS, given practice time with feedback from standardized patients (SPs) and faculty, and assessed individually using a different scenario and SP. Physicians scored overall performance using a 6-point procedural skills measure, and both physicians and SPs scored communication using a 5-point communication skills measure. PROGRAM EVALUATION Realism was highly rated by residents (4.13/5.00), SPs (4.00) and physicians (4.33), and was perceived to enhance learning. Residents' procedural skills were rated as 4.21/6.00 (3.00 - 5.00; ICC = 0.77, [0.53 - 0.92]), comparable to an experienced post-graduate year (PGY) 2-3; and all but one resident was considered competent. DISCUSSION HS facilitates simultaneous acquisition of technical and communication skills. Future research should examine whether HS improves transfer of skills to the clinical setting.
|
40
|
Stefanovich A, Williams C, McKee P, Hagemann E, Carnahan H. Development and Validation of Tools for Evaluation of Orthosis Fabrication. Am J Occup Ther 2012; 66:739-46. [DOI: 10.5014/ajot.2012.005553] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Abstract
This study is the first phase of research aimed at developing new educational approaches to enhance occupational therapy students’ orthosis fabrication skills. Before the effectiveness of training can be determined, a method for evaluating performance must be established. Using the Delphi method, we developed a global rating scale and checklist for evaluating technical competence when fabricating metacarpophalangeal (MCP) joint–stabilizing orthoses. To determine the reliability and validity of these tools, three evaluators used them to assess and score orthotic fabrication performance by experienced and student occupational therapists. The results suggest that these measurement tools are valid and reliable indicators of the technical skills involved in fabricating an MCP joint–stabilizing orthosis. Future studies should focus on building on these tools to evaluate communication skills, technical skills for making other types of orthoses, and effectiveness of training programs.
Affiliation(s)
- Andonia Stefanovich
- Andonia Stefanovich, MScOT, is Graduate, Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, ON, and Occupational Therapist, N Zaraska and Associates, Toronto, ON
- Camille Williams
- Camille Williams, MHSc, is PhD candidate, Graduate Department of Rehabilitation Science, and Fellow, Wilson Centre for Research in Education, University of Toronto, Toronto, ON
- Pat McKee
- Pat McKee, MSc, OT Reg.(Ont.), OT(C), is Associate Professor, Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, ON
- Eric Hagemann
- Eric Hagemann, MSc, is Graduate, Graduate Department of Rehabilitation Science, University of Toronto, Toronto, ON
- Heather Carnahan
- Heather Carnahan, PhD, is Professor, Department of Occupational Science and Occupational Therapy; Scientist, Wilson Centre for Research in Education, University of Toronto; and Director, Centre for Ambulatory Care Education, Women’s College Hospital, 160-500 University Avenue, Toronto, ON M5G 1V7, Canada
|
41
|
Evaluating the Influence of Goal Setting on Intravenous Catheterization Skill Acquisition and Transfer in a Hybrid Simulation Training Context. Simul Healthc 2012; 7:236-42. [DOI: 10.1097/sih.0b013e31825993f2] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
42
|
Abstract
Evaluation of educational programs and assessment of learning are essential to maintain high-standard health science education, which includes pain education. Current models of program evaluations applied to the education of the health professions, such as the Kirkpatrick model, are mainly outcome based. More recently, efforts have been made to examine other process-based models such as the Context Input Process Product model. The present article proposes an approach that integrates both outcome- and process-based models with models of clinical performance assessment to provide a deeper understanding of a program function. Because assessment instruments are a critical part of program evaluation, it is suggested that standardization and rigour should be used in their selection, development and adaptation. The present article suggests an alternative to currently used models in pain education evaluation.
|
43
|
Hull L, Arora S, Aggarwal R, Darzi A, Vincent C, Sevdalis N. The impact of nontechnical skills on technical performance in surgery: a systematic review. J Am Coll Surg 2011; 214:214-30. [PMID: 22200377 DOI: 10.1016/j.jamcollsurg.2011.10.016] [Citation(s) in RCA: 237] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2011] [Revised: 10/27/2011] [Accepted: 10/31/2011] [Indexed: 01/08/2023]
Abstract
BACKGROUND Failures in nontechnical and teamwork skills frequently lie at the heart of harm and near-misses in the operating room (OR). The purpose of this systematic review was to assess the impact of nontechnical skills on technical performance in surgery. STUDY DESIGN MEDLINE, EMBASE, PsycINFO databases were searched, and 2,041 articles were identified. After limits were applied, 341 articles were retrieved for evaluation. Of these, 28 articles were accepted for this review. Data were extracted from the articles regarding sample population, study design and setting, measures of nontechnical skills and technical performance, study findings, and limitations. RESULTS Of the 28 articles that met inclusion criteria, 21 articles assessed the impact of surgeons' nontechnical skills on their technical performance. The evidence suggests that receiving feedback and effectively coping with stressful events in the OR has a beneficial impact on certain aspects of technical performance. Conversely, increased levels of fatigue are associated with detriments to surgical skill. One article assessed the impact of anesthesiologists' nontechnical skills on anesthetic technical performance, finding a strong positive correlation between the 2 skill sets. Finally, 6 articles assessed the impact of multiple nontechnical skills of the entire OR team on surgical performance. A strong relationship between teamwork failure and technical error was empirically demonstrated in these studies. CONCLUSIONS Evidence suggests that certain nontechnical aspects of performance can enhance or, if lacking, contribute to deterioration of surgeons' technical performance. The precise extent of this effect remains to be elucidated.
Affiliation(s)
- Louise Hull
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
|
44
|
Short simulation training improves objective skills in established advanced practitioners managing emergencies on the ward and surgical intensive care unit. ACTA ACUST UNITED AC 2011; 71:330-7; discussion 337-8. [PMID: 21825935 DOI: 10.1097/ta.0b013e31821f4721] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
BACKGROUND Several studies evaluating simulation training in intensive care unit (ICU) physicians have demonstrated improvement in leadership and management skills. No study to date has evaluated whether such training is useful in established ICU advanced practitioners (APs). We hypothesized that human patient simulator-based training would improve surgical ICU APs' skills at managing medical crises. METHODS After institutional review board approval, 12 APs completed ½ day of simulation training on the SimMan (Laerdal) system. Each subject participated in five scenarios, first as team leader (pretraining scenario), then as observer for three scenarios, and finally, again as team leader (posttraining). Faculty teaching accompanied each scenario and preceded a debriefing session with video replay. Three experts scored emergency care skills (Airway-Breathing-Circulation [ABCs] sequence, recognition of shock, pneumothorax, etc.) and teamwork leadership/interpersonal skills. A multiple choice question examination and training effectiveness questionnaire were completed before and after training. Fellows underwent the same curriculum and served to validate the study. Pre- and postscores were compared using the Wilcoxon signed rank test with two-tailed significance of 0.05. RESULTS Improvement was seen in participants' scores combining all parameters (73% ± 13% vs. 80% ± 11%, p = 0.018). AP leadership/interpersonal skills (+12%), multiple choice question examination (+4%), and training effectiveness questionnaire (+6%) scores improved significantly (p < 0.05). Fellows' teamwork leadership/interpersonal skills scores were higher than APs' (p < 0.001), but training brought AP scores to fellow levels. Interrater reliability was high (r = 0.77, 95% confidence interval 0.71-0.82; p < 0.001). CONCLUSIONS Human patient simulator training in established surgical ICU APs improves leadership, teamwork, and self-confidence skills in managing medical emergencies.
Such a validated curriculum may be useful as an AP continuing education resource.
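A Wilcoxon signed rank test like the one used above operates on paired pre/post scores: drop zero differences, rank the absolute differences (averaging ranks for ties), and sum the ranks of the positive differences. A minimal sketch of that statistic follows; the helper name and the pre/post scores are invented for illustration and are not the study's data, and the comparison against a critical value or normal approximation is omitted.

```python
def signed_rank_statistic(pre, post):
    """Wilcoxon signed-rank statistic W+ for paired scores (illustrative sketch)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
    # Rank |d| from smallest to largest, averaging 1-based ranks over ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    # Sum of ranks belonging to positive (improved) differences.
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

pre = [73, 70, 80, 65]    # hypothetical pre-training scores (%)
post = [80, 75, 78, 72]   # hypothetical post-training scores (%)
w_plus = signed_rank_statistic(pre, post)
```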
|
45
|
Lam G, Ayas NT, Griesdale DE, Peets AD. Medical simulation in respiratory and critical care medicine. Lung 2010; 188:445-57. [PMID: 20865270 DOI: 10.1007/s00408-010-9260-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2010] [Accepted: 09/08/2010] [Indexed: 01/09/2023]
Abstract
Simulation-based medical education has gained tremendous popularity over the past two decades. Driven by the patient safety movement, changes in the educational opportunities available to trainees and the rapidly evolving capabilities of computer technology, simulation-based medical education is now being used across the continuum of medical education. This review provides the reader with a perspective on simulation specific to respiratory and critical care medicine, including an overview of historical and modern simulation modalities and the current evidence supporting their use.
Affiliation(s)
- Godfrey Lam
- Faculty of Medicine, University of British Columbia, Vancouver, BC, V6T 1Z3, Canada
|
46
|
Brydges R, Carnahan H, Rose D, Dubrowski A. Comparing self-guided learning and educator-guided learning formats for simulation-based clinical training. J Adv Nurs 2010; 66:1832-44. [DOI: 10.1111/j.1365-2648.2010.05338.x] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
47
|
Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2010; 85:806-12. [PMID: 20520031 DOI: 10.1097/acm.0b013e3181d7aabd] [Citation(s) in RCA: 100] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
PURPOSE To evaluate the effectiveness of a novel, simulation-based educational model rooted in scaffolding theory that capitalizes on a systematic progressive sequence of simulators that increase in realism (i.e., fidelity) and information content. METHOD Forty-five medical students were randomly assigned to practice intravenous catheterization using high-fidelity training, low-fidelity training, or progressive training from low to mid to high fidelity. One week later, participants completed a transfer test on a standardized patient simulation. Blinded expert raters assessed participants' global clinical performance, communication, procedure documentation, and technical skills on the transfer test. Participants' management of the resources available during practice was also recorded. Data were analyzed using multivariate analysis of variance. The study was conducted in fall 2008 at the University of Toronto. RESULTS The high-fidelity group scored higher (P < .05) than the low-fidelity group on all measures except procedure documentation. The progressive group scored higher (P < .05) than other groups for documentation and global clinical performance and was equivalent to the high-fidelity group for communication and technical skills. Total practice time was greatest for the progressive group; however, this group required little practice time on the resource-intensive high-fidelity simulator. CONCLUSIONS Allowing students to progress in their practice on simulators of increasing fidelity led to superior transfer of a broad range of clinical skills. Further, this progressive group was resource-efficient, as participants concentrated on lower fidelity and lower resource-intensive simulators. It is suggested that clinical training curricula incorporate exposure to multiple simulators to maximize educational benefit and potentially save educator time.
Affiliation(s)
- Ryan Brydges
- Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
|
48
|
Nestel D, Kneebone R. Perspective: authentic patient perspectives in simulations for procedural and surgical skills. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2010; 85:889-893. [PMID: 20520046 DOI: 10.1097/acm.0b013e3181d749ac] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
In this article, the authors consider the role of the patient in simulation-based training and assessment of clinical procedural skills. In recent years, there has been a progressive shift of emphasis from teacher-centered to student-centered education, resulting in a redefinition of approaches to medical education. Traditional models of transmission of information from an expert to a novice have been supplanted by a more student-centered approach. However, medical education is not a matter for teacher and student alone. At the center is always the patient, around whom everything must ultimately rotate. A further shift is occurring. The patient is becoming the focal point of medical teaching and learning. It is argued that this shift is necessary and that simulation in its widest sense can be used to support this process. However, sensitivity to what we are simulating is essential, especially when simulations purport to address patient perspectives. The essay first reviews the history of medical education "centeredness," then outlines ways in which real and simulated patients are currently involved in medical education. Patient-focused simulation (PFS) is described as a means of offering patients' perspectives during the acquisition of clinical procedural and surgical skills. The authors draw on their experiences of developing PFS and preliminary work to "authenticate" simulations from patient perspectives. The essay concludes with speculation on the value of a "complementarity" model that acknowledges the authentic and equal perspectives of patients, students, clinicians, and teachers.
Affiliation(s)
- Debra Nestel
- Gippsland Medical School, Monash University, Melbourne, Australia.
|
49
|
Affiliation(s)
- Vanessa N Palter
- Department of Surgery, University of Toronto, and Division of General Surgery, St. Michael's Hospital, Toronto, Ontario.
|
50
|
Pelgrim EAM, Denessen EJPG, Hettinga AM, Postma CT. [The quality of assessments by simulated patients in an Objective Structured Clinical Examination (OSCE): an analysis of inter-rater agreement]. ACTA ACUST UNITED AC 2009. [DOI: 10.1007/bf03081811] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|