1. Miller DT, Michael S, Bell C, Brevik CH, Kaplan B, Svoboda E, Kendall J. Physical and biophysical markers of assessment in medical training: A scoping review of the literature. Medical Teacher 2024:1-9. PMID: 38688520; DOI: 10.1080/0142159x.2024.2345269.
Abstract
PURPOSE: Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training.
MATERIALS AND METHODS: The authors searched seven databases on 1 August 2022 for publications that used physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and, from February to June 2023, extracted additional data including study design, study environment, and novice vs. expert differentiation.
RESULTS: Of 6,069 unique articles, 443 met inclusion criteria. Studies most commonly assessed trainees using heart rate variability (n = 160, 36%), followed by visual attention (n = 143, 32%), hand motion analysis (n = 67, 15%), salivary cortisol (n = 67, 15%), fMRI (n = 29, 7%), skin conductivity (n = 26, 6%), fNIRS (n = 19, 4%), and pupillometry (n = 16, 4%). The largest share of studies (n = 167, 38%) analyzed non-technical skills, followed by technical skills (n = 155, 35%), knowledge (n = 114, 26%), and attitudinal skills (n = 61, 14%). In all, 169 studies (38%) attempted to use physical or biophysical markers to differentiate between novices and experts.
CONCLUSION: This review provides a comprehensive description of the current use of physical and biophysical markers in medical education training, including the technologies employed and the skills assessed. While physical and biophysical markers have the potential to augment current assessment in medical education, there remain significant gaps in research on the reliability, validity, cost, practicality, and educational impact of implementing these markers of assessment.
Affiliation(s)
- Danielle T Miller
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Sarah Michael
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Colin Bell
- Department of Emergency Medicine, University of Calgary, Calgary, Canada
- Cody H Brevik
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Bonnie Kaplan
- Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA
- Ellie Svoboda
- Education Informationist, Strauss Health Sciences Library, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- John Kendall
- Department of Emergency Medicine, Stanford School of Medicine, Palo Alto, CA, USA
2. Galuret S, Vallée N, Tronchot A, Thomazeau H, Jannin P, Huaulmé A. Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept. Int J Comput Assist Radiol Surg 2023; 18:1697-1705. PMID: 37286642; DOI: 10.1007/s11548-023-02961-8.
Abstract
PURPOSE: Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that gaze distribution in the environment is correlated with the simulator's technical skills assessment.
METHODS: We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-Net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, in order to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores.
RESULTS: The neural network was able to segment all AoI with a mean Intersection over Union above 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031).
CONCLUSION: Our findings suggest that visual behavior should be quantified when assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, in a way that complements existing metrics.
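As a rough illustration of the analysis this abstract describes, the Python sketch below computes the fraction of gaze samples landing in each segmented AoI and runs a Spearman test against the simulator's score. The mask layout, gaze stream, and the `sessions` and `procedural_scores` variables are hypothetical stand-ins, not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def aoi_gaze_percentages(masks, gaze_points, n_labels=4):
    """Fraction of gaze samples landing in each segmented label.

    masks: (T, H, W) int array of per-frame label maps
           (0 = background, 1..3 = simulator-specific AoI).
    gaze_points: (T, 2) array of (row, col) gaze coordinates per frame.
    """
    counts = np.zeros(n_labels)
    for mask, (r, c) in zip(masks, gaze_points):
        if np.isnan(r) or np.isnan(c):  # tracking loss for this frame
            continue
        counts[mask[int(r), int(c)]] += 1
    return counts / max(counts.sum(), 1)

# One percentage per session (label 1 standing in for the virtual
# assistance), correlated against the simulator's procedural score.
pct_on_assistance = [aoi_gaze_percentages(m, g)[1] for m, g in sessions]
rho, p = spearmanr(pct_on_assistance, procedural_scores)
print(f"Spearman r = {rho:.3f}, p = {p:.3f}")
```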
Affiliation(s)
- Soline Galuret
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Nicolas Vallée
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Alexandre Tronchot
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Hervé Thomazeau
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Pierre Jannin
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Arnaud Huaulmé
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
3. Felinska EA, Fuchs TE, Kogkas A, Chen ZW, Otto B, Kowalewski KF, Petersen J, Müller-Stich BP, Mylonas G, Nickel F. Telestration with augmented reality improves surgical performance through gaze guidance. Surg Endosc 2023; 37:3557-3566. PMID: 36609924; PMCID: PMC10156835; DOI: 10.1007/s00464-022-09859-7.
Abstract
BACKGROUND: In minimally invasive surgery (MIS), trainees need to learn how to interpret the operative field displayed on the laparoscopic screen. Experts currently guide trainees mainly verbally during laparoscopic procedures. A newly developed telestration system with augmented reality (iSurgeon) allows the instructor to display hand gestures on the laparoscopic screen in real time to provide visual expert guidance (telestration). This study analysed the effect of telestration-guided instructions on gaze behaviour during MIS training.
METHODS: In a randomized controlled crossover study, 40 MIS-naive medical students performed 8 laparoscopic tasks with telestration or with verbal instructions only. Pupil Core eye-tracking glasses were used to capture the instructor's and trainees' gazes. Gaze behaviour measures for tasks 1-7 were gaze latency, gaze convergence and collaborative gaze convergence. Performance measures included the number of errors in tasks 1-7 and trainees' ratings on structured and standardized performance scores in task 8 (ex vivo porcine laparoscopic cholecystectomy).
RESULTS: Across tasks 1-7, instruction with iSurgeon significantly improved gaze latency [F(1,39) = 762.5, p < 0.01, ηp² = 0.95], gaze convergence [F(1,39) = 482.8, p < 0.01, ηp² = 0.93] and collaborative gaze convergence [F(1,39) = 408.4, p < 0.01, ηp² = 0.91]. The number of errors was significantly lower in tasks 1-7 (0.18 ± 0.56 vs. 1.94 ± 1.80, p < 0.01) and the score ratings for laparoscopic cholecystectomy were significantly higher with telestration (global OSATS: 29 ± 2.5 vs. 25 ± 5.5, p < 0.01; task-specific OSATS: 60 ± 3 vs. 50 ± 6, p < 0.01).
CONCLUSIONS: Telestration with augmented reality successfully improved surgical performance. Trainees' gaze behaviour improved: the time from instruction to fixation on targets was reduced, and the convergence of the instructor's and trainees' gazes increased. The convergence of trainees' gazes and target areas also increased with telestration. This confirms that augmented reality-based telestration works by means of gaze guidance in MIS and could be used to improve training outcomes.
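The abstract defines gaze latency as the time from an instruction to the trainee's first fixation on the target. A minimal sketch of that measure, assuming fixations arrive as time-stamped, AoI-labelled events; the data layout and the `instruction_events` variable are invented for illustration:

```python
import numpy as np

def gaze_latency(fixations, instruction_time, target_aoi):
    """Time from an instruction to the first fixation on the target AoI.

    fixations: list of (start_time_s, aoi_label) tuples, time-ordered.
    Returns np.nan if the target is never fixated after the instruction.
    """
    for start, aoi in fixations:
        if start >= instruction_time and aoi == target_aoi:
            return start - instruction_time
    return np.nan

# One latency per instruction event; average per trainee.
latencies = [gaze_latency(fx, t, aoi) for fx, t, aoi in instruction_events]
print(f"mean gaze latency: {np.nanmean(latencies):.2f} s")
```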
Affiliation(s)
- Eleni Amelia Felinska
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Thomas Ewald Fuchs
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Alexandros Kogkas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Zi-Wei Chen
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Benjamin Otto
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Karl-Friedrich Kowalewski
- Department of Urology and Urological Surgery, University Medical Center Mannheim, Heidelberg University, 68167, Mannheim, Germany
- Jens Petersen
- Department of Medical Image Computing, German Cancer Research Center, 69120, Heidelberg, Germany
- Beat Peter Müller-Stich
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- George Mylonas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Felix Nickel
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
4. Singh R, Godiyal AK, Chavakula P, Suri A. Craniotomy Simulator with Force Myography and Machine Learning-Based Skills Assessment. Bioengineering (Basel) 2023; 10:465. PMID: 37106652; PMCID: PMC10136274; DOI: 10.3390/bioengineering10040465.
Abstract
Craniotomy is a fundamental component of neurosurgery that involves the removal of a skull bone flap. Simulation-based training for craniotomy is an efficient method of developing competent skills outside the operating room. Traditionally, an expert surgeon evaluates surgical skills using rating scales, but this method is subjective, time-consuming, and tedious. Accordingly, the objective of the present study was to develop an anatomically accurate craniotomy simulator with realistic haptic feedback and objective evaluation of surgical skills. A craniotomy simulator based on CT scan segmentation, with two bone flaps for the drilling task, was developed using 3D-printed bone matrix material. Force myography (FMG) and machine learning were used to evaluate surgical skills automatically. Twenty-two neurosurgeons participated in this study, including novices (n = 8), intermediates (n = 8), and experts (n = 6), and they performed the defined drilling experiments. They provided feedback on the effectiveness of the simulator using a 10-point Likert-scale questionnaire. The data acquired from the FMG band were used to classify surgical expertise into novice, intermediate, and expert categories. The study employed naïve Bayes, linear discriminant analysis (LDA), support vector machine (SVM), and decision tree (DT) classifiers with leave-one-out cross-validation. The neurosurgeons' feedback indicates that the developed simulator is an effective tool for honing drilling skills. In addition, the bone matrix material provided good haptic feedback (average score 7.1). For FMG-data-based skills evaluation, maximum accuracy was achieved with the naïve Bayes classifier (90.0 ± 14.8%); DT achieved 86.2 ± 20.8%, LDA 81.9 ± 23.6%, and SVM 76.7 ± 32.9%. The findings of this study indicate that materials with biomechanical properties comparable to those of real tissues are more effective for surgical simulation. In addition, force myography and machine learning provide objective and automated assessment of surgical drilling skills.
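The four-classifier comparison with leave-one-out cross-validation described here maps directly onto scikit-learn's utilities. A self-contained sketch, in which the feature matrix and expertise labels are random placeholders for the study's FMG data, not the actual dataset:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# X: one feature vector per trial, derived from the FMG band;
# y: expertise labels (0 = novice, 1 = intermediate, 2 = expert).
rng = np.random.default_rng(0)
X = rng.normal(size=(22, 8))          # placeholder FMG features
y = rng.integers(0, 3, size=22)       # placeholder expertise labels

classifiers = {
    "naive Bayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    # Leave-one-out: train on 21 trials, test on the held-out one.
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"{name}: {scores.mean():.1%} accuracy (leave-one-out)")
```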
Affiliation(s)
- Ramandeep Singh
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
- Anoop Kant Godiyal
- Department of Physical Medicine and Rehabilitation, All India Institute of Medical Sciences, New Delhi 110029, India
- Parikshith Chavakula
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
- Ashish Suri
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
5. Eye Tracking Use in Surgical Research: A Systematic Review. J Surg Res 2022; 279:774-787. PMID: 35944332; DOI: 10.1016/j.jss.2022.05.024.
Abstract
INTRODUCTION: Eye tracking (ET) is a popular tool for studying which factors affect the visual behaviour of surgical team members. To our knowledge, there have been no reviews to date that evaluate the broad use of ET in surgical research. This review aims to identify and assess the quality of this evidence, to synthesize how ET can be used to inform surgical practice, and to provide recommendations for improving future ET surgical studies.
METHODS: In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic literature review was conducted. An electronic search was performed in the MEDLINE, Cochrane Central, Embase, and Web of Science databases up to September 2020. Included studies used ET to measure the visual behaviour of members of the surgical team during surgery or surgical tasks. The included studies were assessed by two independent reviewers.
RESULTS: A total of 7614 studies were identified, and 111 were included for data extraction. Eleven applications were identified; the four most common were skill assessment (41%), visual attention assessment (22%), workload measurement (17%), and skills training (10%). A summary was provided of the various ways ET could be used to inform surgical practice, and three areas were identified for the improvement of future ET studies in surgery.
CONCLUSIONS: This review provided a comprehensive summary of the various applications of ET in surgery and how ET could be used to inform surgical practice, including how to use ET to improve surgical education. The information provided in this review can also aid in the design and conduct of future ET surgical studies.
6. Matsumoto S, Kawahira H, Oiwa K, Maeda Y, Nozawa A, Lefor AK, Hosoya Y, Sata N. Laparoscopic surgical skill evaluation with motion capture and eyeglass gaze cameras: A pilot study. Asian J Endosc Surg 2022; 15:619-628. PMID: 35598888; DOI: 10.1111/ases.13065.
Abstract
INTRODUCTION: An eyeglass gaze camera and a skeletal coordinate camera, with no sensors attached to the operator's body, were used to monitor gaze and movement during a simulated surgical procedure. These new devices have the potential to change skill assessment for laparoscopic surgery, and their suitability for this purpose was investigated.
MATERIALS AND METHODS: Six medical students, six intermediate surgeons, and four experts performed suturing tasks in a dry box. The tip positions of the instruments were identified from video recordings. Performance was evaluated based on instrument movement, gaze, and skeletal coordinates.
RESULTS: Task performance time and skeletal coordinates did not differ significantly among skill levels. The total movement distance of the right instrument differed significantly by skill level. The standard deviation (SD) of the gaze coordinates also differed significantly by skill level and was smaller for experts, whose gaze stayed within a small area with little scatter.
CONCLUSIONS: The SD of gaze point coordinates correlates with laparoscopic surgical skill level. These devices may facilitate objective intraoperative skill evaluation in future studies.
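The skill marker reported here is simply the spatial dispersion of gaze points. A minimal sketch of that statistic and a three-group comparison; the Kruskal-Wallis test is an assumed choice (the abstract does not name its test), and the gaze arrays are placeholders:

```python
import numpy as np
from scipy.stats import kruskal

def gaze_sd(gaze_xy):
    """Overall spatial SD of gaze points: dispersion of (x, y) samples."""
    return float(np.sqrt(np.var(gaze_xy[:, 0]) + np.var(gaze_xy[:, 1])))

# One dispersion value per subject; the per-group gaze recordings
# (novice_gaze, intermediate_gaze, expert_gaze) are hypothetical inputs.
novice_sd = [gaze_sd(g) for g in novice_gaze]
intermediate_sd = [gaze_sd(g) for g in intermediate_gaze]
expert_sd = [gaze_sd(g) for g in expert_gaze]

h, p = kruskal(novice_sd, intermediate_sd, expert_sd)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")
```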
Affiliation(s)
- Shiro Matsumoto
- Department of Surgery, Jichi Medical University, Tochigi, Japan
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Kosuke Oiwa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, Kanagawa, Japan
- Yoshitaka Maeda
- Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Akio Nozawa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, Kanagawa, Japan
- Naohiro Sata
- Department of Surgery, Jichi Medical University, Tochigi, Japan
7. Maeda Y, Oiwa K, Matsumoto S, Nozawa A, Kawahira H. Years of experience is more effective in defining experts in the gaze analysis of laparoscopic suturing task than task duration. Applied Ergonomics 2021; 96:103474. PMID: 34098406; DOI: 10.1016/j.apergo.2021.103474.
Abstract
In this study, the relationships between gaze patterns, task duration, and years of experience, which are commonly used to define and evaluate expert surgeons in laparoscopic surgery, were examined. Ten surgeons with 1-28 years of experience and six inexperienced students were included. Subjects repeatedly performed the task of suturing a suture pad with forceps. Each subject wore an eye-marking recorder, and gaze points were recorded and analyzed. No significant relationship between task duration and gaze pattern was observed. However, there was a significant relationship between a surgeon's years of experience and the percentage of time spent gazing at the forceps: subjects with more years of experience operated without looking at the forceps, fixing their gaze on the operative target instead. Therefore, when analyzing laparoscopic gazing patterns, it may be more appropriate to define an "expert" based on years of experience rather than task duration.
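The key quantity in this study is the percentage of time spent gazing at the forceps, related to years of experience. A small sketch of that computation, assuming fixations have already been labelled by AoI; the label lists and the `years_of_experience` variable are invented inputs:

```python
import numpy as np
from scipy.stats import spearmanr

def dwell_percentage(fixation_labels, target="forceps"):
    """Share of fixation samples whose AoI label matches the target."""
    labels = np.asarray(fixation_labels)
    return float(np.mean(labels == target))

# One dwell percentage per surgeon, correlated with experience.
forceps_pct = [dwell_percentage(labels) for labels in per_surgeon_labels]
rho, p = spearmanr(years_of_experience, forceps_pct)
print(f"experience vs. forceps dwell: r = {rho:.2f}, p = {p:.3f}")
```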
Affiliation(s)
- Yoshitaka Maeda
- Medical Simulation Center, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
- Kosuke Oiwa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara-shi, Kanagawa, 252-5258, Japan
- Shiro Matsumoto
- Department of Surgery, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
- Akio Nozawa
- Department of Electrical Engineering and Electronics, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara-shi, Kanagawa, 252-5258, Japan
- Hiroshi Kawahira
- Medical Simulation Center, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
8. Zheng B, Jiang X, Bednarik R, Atkins MS. Action-related eye measures to assess surgical expertise. BJS Open 2021; 5:zrab068. PMID: 34476467; PMCID: PMC8413367; DOI: 10.1093/bjsopen/zrab068.
Abstract
Background: Eye tracking offers a new set of performance measures for surgeons. Previous eye-tracking studies have reported that action-related fixation is a good measure for identifying elite task performers. Other measures, including early eye engagement with the target and early eye disengagement from the previous subtask, have also been reported to distinguish between expertise levels. These parameters were examined during laparoscopic surgery simulations in the present study, with the goal of identifying the measures most useful for distinguishing surgical expertise.
Methods: Surgical operators, including experienced surgeons (expert), residents (intermediate), and university students (novice), performed a laparoscopic task involving reaching, grasping, and loading while their eye movements and performance videos were recorded. Spatiotemporal features of eye–hand coordination and action-related fixation were calculated and compared among the groups.
Results: The study included five experienced surgeons, seven residents, and 14 novices. Overall, experts performed tasks faster than novices. Examination of eye–hand coordination on each subtask showed that experts disengaged their eyes from the previous subtask early, whereas novices did so with a significant delay. Early eye engagement with the current subtask was observed for all operators. There was no difference in action-related fixation between experienced surgeons and novices. Disengage time was more strongly associated with operators' surgical experience scores than either early-engage time or action-related fixation.
Conclusion: The spatiotemporal features of surgeons' eye–hand coordination can be used to assess the level of surgical experience.
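Disengage time, the interval between the eyes leaving the previous subtask's area and the hands completing that subtask, is the measure this abstract singles out. A toy sketch under that definition; all timestamps below are invented for illustration:

```python
import numpy as np

def disengage_time(gaze_leave_times, subtask_end_times):
    """Eye disengagement relative to subtask completion.

    Negative values mean the eyes left the previous subtask's area
    before the hands finished it (the expert pattern reported above).
    """
    return np.asarray(gaze_leave_times) - np.asarray(subtask_end_times)

# Hypothetical timestamps (seconds) for one trial's three subtasks.
expert = disengage_time([4.8, 9.1, 13.0], [5.2, 9.6, 13.5])
novice = disengage_time([5.9, 10.4, 14.6], [5.2, 9.6, 13.5])
print(f"expert mean: {expert.mean():+.2f} s, novice mean: {novice.mean():+.2f} s")
```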
Affiliation(s)
- B Zheng
- Department of Surgery, University of Alberta, Canada
- X Jiang
- Computing Science, Memorial University of Newfoundland, Newfoundland, Canada
- R Bednarik
- School of Computing, University of Eastern Finland, Joensuu, Finland
- M S Atkins
- Computing Science, Simon Fraser University, British Columbia, Canada