1
Lopes A, Ward AD, Cecchini M. Eye tracking in digital pathology: A comprehensive literature review. J Pathol Inform 2024;15:100383. PMID: 38868488; PMCID: PMC11168484; DOI: 10.1016/j.jpi.2024.100383.
Abstract
Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology from conventional microscopes to digital whole slide images allows new research to be conducted and information to be learned with regard to pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately creating stronger and more proficient generations of pathologists to come. The goal of this review on eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using 'pathology' AND 'eye tracking' synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy exists. The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices to search as experts do remains unclear. In this article, the main challenges and prospects of eye tracking in pathology are briefly discussed along with their implications for the field.
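Several of the themes this review surveys (fixations, saccades, interpretation time) rest on first segmenting raw gaze samples into discrete fixations. The sketch below shows the widely used dispersion-threshold (I-DT) approach; the (t, x, y) sample format and the threshold values are illustrative assumptions, not details taken from the review.

```python
def detect_fixations(samples, dispersion_px=35, min_duration_s=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (t, x, y) gaze points sorted by time t in seconds.
    Returns fixations as (start_t, end_t, centroid_x, centroid_y).
    Thresholds are illustrative; studies tune them to the display and
    sampling rate of the eye tracker.
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while its bounding-box dispersion stays small.
        while j + 1 < len(samples):
            xs = [p[1] for p in samples[i:j + 2]]
            ys = [p[2] for p in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
                break
            j += 1
        start_t, end_t = samples[i][0], samples[j][0]
        if j > i and end_t - start_t >= min_duration_s:
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((start_t, end_t, cx, cy))
            i = j + 1  # continue after the detected fixation
        else:
            i += 1     # too short or too dispersed; slide the window
    return fixations
```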
Affiliation(s)
- Alana Lopes
  - Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Aaron D. Ward
  - Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
  - Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Matthew Cecchini
  - Department of Pathology and Laboratory Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada
2
Upasani S, Srinivasan D, Zhu Q, Du J, Leonessa A. Eye-tracking in physical human-robot interaction: Mental workload and performance prediction. Hum Factors 2024;66:2104-2119. PMID: 37793896; DOI: 10.1177/00187208231204704.
Abstract
BACKGROUND In Physical Human-Robot Interaction (pHRI), the need to learn the robot's motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning. OBJECTIVE The aim of this study was to test eye-tracking measures' sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human-robot collaboration tasks involving an industrial robot for object comanipulation. METHODS Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated. RESULTS Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy. CONCLUSION The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation will likely be strong contributors to workload. APPLICATION Future collaborative robots can adapt to the human's cognitive state and skill level measured using eye-tracking measures of workload and visual attention.
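Stationary Gaze Entropy, the most reliable workload measure in this study, is commonly computed as the Shannon entropy of the distribution of fixations over gaze locations or areas of interest. A minimal sketch under that common definition, with hypothetical AOI labels (the paper's exact binning scheme may differ):

```python
import math
from collections import Counter

def stationary_gaze_entropy(aoi_sequence):
    """Shannon entropy (bits) of the fixation distribution over AOIs.

    aoi_sequence: iterable of AOI labels, one per fixation,
    e.g. ["robot", "target", "robot", "display"] (labels hypothetical).
    Low entropy indicates concentrated gaze, e.g. a robot-monitoring
    strategy; high entropy indicates gaze spread over many locations.
    """
    counts = Counter(aoi_sequence)
    n = sum(counts.values())
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```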
Affiliation(s)
- Qi Zhu
  - National Institute of Standards and Technology, Boulder, CO, USA
- Jing Du
  - University of Florida, Gainesville, FL, USA
3
Colcuc C, Miersbach M, Cienfuegos M, Grüneweller N, Vordemvenne T, Wähnert D. Comparison of virtual reality and computed tomography in the preoperative planning of complex tibial plateau fractures. Arch Orthop Trauma Surg 2024;144:2631-2639. PMID: 38703213; PMCID: PMC11211142; DOI: 10.1007/s00402-024-05348-9.
Abstract
INTRODUCTION Preoperative planning is a critical step in the success of any complex surgery. The purpose of this study is to evaluate the advantage of VR glasses in surgical planning of complex tibial plateau fractures compared to CT planning. MATERIALS AND METHODS Five orthopedic surgeons performed preoperative planning for 30 fractures using either conventional CT slices or VR visualization with a VR headset. Planning was performed in a randomized order with a 3-month interval between planning sessions. A standardized questionnaire assessed planned operative time, planning time, fracture classification and understanding, and surgeons' subjective confidence in surgical planning. RESULTS The mean planned operative time of 156 (SD 47) minutes was significantly lower (p < 0.001) in the VR group than in the CT group (172 min; SD 44). The mean planning time in the VR group was 3.48 min (SD 2.4), 17% longer than in the CT group (2.98 min, SD 1.9; p = 0.027). Relevant parameters influencing planning time were surgeon experience (-0.61 min) and estimated complexity of fracture treatment (+0.65 min). CONCLUSION The use of virtual reality for surgical planning of complex tibial plateau fractures resulted in significantly shorter planned operative time, while planning time was longer compared to CT planning. After VR planning, more surgeons felt (very) well prepared for surgery.
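Because every fracture was planned under both conditions, the VR-versus-CT comparison of planned operative time is naturally a paired analysis. The abstract does not name the statistical test used, so the sketch below, using SciPy's paired t-test on invented minute values, is purely illustrative:

```python
from scipy import stats

# Hypothetical planned operative times (minutes) for the same fractures,
# planned once with VR and once with CT; all values are invented.
vr_minutes = [150, 160, 145, 170, 155, 148, 162]
ct_minutes = [165, 175, 160, 185, 172, 158, 180]

# Paired test, since each fracture contributes one value per condition.
t_stat, p_value = stats.ttest_rel(vr_minutes, ct_minutes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```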
Affiliation(s)
- Christian Colcuc
  - Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617 Bielefeld, Germany
- Marco Miersbach
  - Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617 Bielefeld, Germany
- Miguel Cienfuegos
  - Bielefeld University, Center for Cognitive Interaction Technology CITEC, Universitätsstraße 25, 33615 Bielefeld, Germany
- Niklas Grüneweller
  - Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617 Bielefeld, Germany
- Thomas Vordemvenne
  - Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617 Bielefeld, Germany
- Dirk Wähnert
  - Bielefeld University, Medical School and University Medical Center OWL, Protestant Hospital of the Bethel Foundation, Department of Trauma and Orthopaedic Surgery, Burgsteig 13, 33617 Bielefeld, Germany
4
Wu Y, Zhang Z, Aghazadeh F, Zheng B. Early eye disengagement is regulated by task complexity and task repetition in visual tracking task. Sensors (Basel) 2024;24:2984. PMID: 38793839; PMCID: PMC11125091; DOI: 10.3390/s24102984.
Abstract
Understanding human actions often requires in-depth detection and interpretation of bio-signals. Early eye disengagement from the target (EEDT) is a significant eye behavior in which the gaze proactively disengages from the target to gather information on the anticipated pathway, thereby enabling rapid reactions to the environment. It remains unknown how task difficulty and task repetition affect EEDT. We aim to provide direct evidence of how these factors influence EEDT. We developed a visual tracking task in which participants viewed arrow movement videos while their eye movements were tracked. Task complexity was increased by increasing the number of movement steps. Every movement pattern was performed twice to assess the effect of repetition on eye movement. Participants were required to recall the movement patterns for recall accuracy evaluation and to complete a cognitive load assessment. EEDT was quantified by the fixation duration and frequency within the areas where the gaze landed ahead of the arrow. When task difficulty increased, the recall accuracy score decreased, cognitive load increased, and EEDT decreased significantly. EEDT was higher in the second trial, but the difference was significant only in tasks with lower complexity. EEDT was positively correlated with recall accuracy and negatively correlated with cognitive load. EEDT was reduced by task complexity and increased by task repetition. EEDT may be a promising sensory measure for assessing task performance and cognitive load and can be used for the future development of eye-tracking-based sensors.
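The EEDT measure is defined by fixations landing ahead of the moving arrow. A rough sketch of how the fixation count and total duration in that region could be computed; the left-to-right motion and the fixed "ahead" threshold are assumptions for illustration, and the study's exact area definition may differ:

```python
def eedt_metrics(fixations, arrow_pos, ahead_px=40):
    """Count and total duration of fixations ahead of the arrow.

    fixations: list of (start_t, end_t, x, y) fixation records.
    arrow_pos: callable t -> (x, y), the arrow's position at time t
               (an assumed interface, e.g. interpolated from the video).
    Assumes left-to-right arrow motion: a fixation counts as 'ahead'
    if it lies at least ahead_px to the right of the arrow at onset.
    """
    count, duration = 0, 0.0
    for start_t, end_t, fx, fy in fixations:
        ax, _ = arrow_pos(start_t)
        if fx - ax >= ahead_px:
            count += 1
            duration += end_t - start_t
    return count, duration
```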
Affiliation(s)
- Yun Wu
  - Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162A Heritage Medical Research Centre, 11207-87 Ave NW, Edmonton, AB T6G 2S2, Canada
- Zhongshi Zhang
  - Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162A Heritage Medical Research Centre, 11207-87 Ave NW, Edmonton, AB T6G 2S2, Canada
- Farzad Aghazadeh
  - Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 2E1, Canada
- Bin Zheng
  - Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162A Heritage Medical Research Centre, 11207-87 Ave NW, Edmonton, AB T6G 2S2, Canada
5
Caloca-Amber S, Mauriz E, Vázquez-Casares AM. Exploring eye-tracking data as an indicator of situational awareness in nursing students during a cardiorespiratory arrest simulation. Nurse Educ Pract 2024;76:103911. PMID: 38359685; DOI: 10.1016/j.nepr.2024.103911.
Abstract
AIM To examine the components of visual attention that maintain situational awareness during simulation training in undergraduate nursing students with different instruction levels. BACKGROUND Eye-tracking can provide deep insight into nurses' attention during simulated practice. Knowing which gaze patterns promote situational awareness can significantly improve nurse instruction. DESIGN A comparative observational study investigated the role of visual attention in the performance quality, psychophysiological parameters (vital signs, anxiety and stress) and socioemotional competencies (cognitive workload, motivation and self-efficacy) of nursing students with various experience levels. METHODS Thirty nursing students, divided into two groups according to academic level (first cycle, n=14; second cycle, n=16), faced a clinical simulation scenario to resolve a cardiorespiratory arrest event. Eye-tracking-based analysis required the selection of six areas of interest. Monitoring of vital signs included measuring blood pressure, heart rate, respiratory rate and oxygen saturation before and after the simulation practice. Participants completed the socioemotional questionnaire (NASA-TLX). They answered the state subscale of the State-Trait Anxiety Inventory (STAI), the Visual Analogue Scale (VAS) of stress, the Situational Motivation Scale (SIMS) and the Baessler and Schwarzer General Self-Efficacy Scale. RESULTS The first-cycle group displayed higher vital sign scores than the second cycle, apart from the post-simulation respiratory rate. All physiological parameters increased in mean value after the clinical simulation, except oxygen saturation. Anxiety was the only parameter in the socioemotional domain to present a statistically significant difference between the groups. First-year nursing students showed greater anxiety, stress, mental workload, identified regulation and intrinsic motivation, while second-year students showed higher levels of amotivation, external regulation and perceived self-efficacy. Eye-tracking data (revisits, gazes and duration of fixations) exhibited statistically significant differences depending on the area of interest in both groups (p = .05). Performance outcomes showed a moderate negative association with the total number of gazes in the second-cycle group (rho = -0.640, p = 0.010). CONCLUSION Eye-tracking-based analysis can help to predict performance quality while maintaining situational awareness during nursing instruction.
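The reported association (rho = -0.640) is a Spearman rank correlation between the total number of gazes and performance. A sketch with invented per-student values shows how such a coefficient is obtained:

```python
from scipy import stats

# Hypothetical per-student totals: gazes on the areas of interest
# versus a performance-quality score; all values are invented.
total_gazes = [220, 180, 260, 300, 150, 240, 210, 190]
performance = [78, 85, 70, 62, 90, 72, 80, 83]

rho, p_value = stats.spearmanr(total_gazes, performance)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")  # abstract reports rho = -0.640
```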
Affiliation(s)
- Sandra Caloca-Amber
  - Department of Nursing and Physiotherapy, Universidad de León, Campus de Vegazana, s/n, León 24071, Spain
- Elba Mauriz
  - Department of Nursing and Physiotherapy, Universidad de León, Campus de Vegazana, s/n, León 24071, Spain
- Ana M Vázquez-Casares
  - Department of Nursing and Physiotherapy, Universidad de León, Campus de Vegazana, s/n, León 24071, Spain
6
Shafiei SB, Shadpour S, Mohler JL, Attwood K, Liu Q, Gutierrez C, Toussi MS. Developing surgical skill level classification model using visual metrics and a gradient boosting algorithm. Ann Surg Open 2023;4:e292. PMID: 37305561; PMCID: PMC10249659; DOI: 10.1097/as9.0000000000000292.
Abstract
Objective Assessment of surgical skills is crucial for improving training standards and ensuring the quality of primary care. This study aimed to develop a gradient boosting classification model (GBM) to classify surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods Eye gaze data were recorded from 11 participants performing four subtasks: blunt dissection, retraction, cold dissection, and hot dissection, using live pigs and the da Vinci robot. Eye gaze data were used to extract the visual metrics. One expert RAS surgeon evaluated each participant's performance and expertise level using the modified Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The extracted visual metrics were used to classify surgical skill levels and to evaluate individual GEARS metrics. Analysis of variance (ANOVA) was used to test the differences for each feature across skill levels. Results Classification accuracies for blunt dissection, retraction, cold dissection, and hot dissection were 95%, 96%, 96%, and 96%, respectively. The time to complete only the retraction was significantly different among the 3 skill levels (p-value = 0.04). Performance was significantly different for the 3 categories of surgical skill level for all subtasks (p-values < 0.01). The extracted visual metrics were strongly associated with GEARS metrics (R² > 0.7 for GEARS metrics evaluation models). Conclusions Machine learning (ML) algorithms trained on the visual metrics of RAS surgeons can classify surgical skill levels and evaluate GEARS measures. The time to complete a surgical subtask may not be considered a stand-alone factor for skill level assessment.
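The pipeline described here maps per-trial visual metrics to three skill labels with a gradient boosting classifier. A minimal scikit-learn sketch follows; the synthetic features and labels are placeholders for the study's visual metrics, not its data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 60 trials x 6 visual metrics (e.g. fixation rate,
# mean pupil diameter, gaze entropy); the real feature set differs.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))
y = rng.integers(0, 3, size=60)  # 0=inexperienced, 1=competent, 2=experienced

clf = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=0
)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```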
Affiliation(s)
- Somayeh B. Shafiei
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Saeed Shadpour
  - Department of Animal Biosciences, University of Guelph, Guelph, Ontario, Canada
- James L. Mohler
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Kristopher Attwood
  - Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Qian Liu
  - Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Camille Gutierrez
  - Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY
7
Li Y, Reed A, Kavoussi N, Wu JY. Eye gaze metrics for skill assessment and feedback in kidney stone surgery. Int J Comput Assist Radiol Surg 2023. PMID: 37202714; DOI: 10.1007/s11548-023-02901-6.
Abstract
PURPOSE Surgical skill assessment is essential for safe operations. In endoscopic kidney stone surgery, surgeons must perform a highly skill-dependent mental mapping from the pre-operative scan to the intraoperative endoscope image. Poor mental mapping can lead to incomplete exploration of the kidney and high reoperation rates. Yet there are few objective ways to evaluate competency. We propose to use unobtrusive eye-gaze measurements in the task space to evaluate skill and provide feedback. METHODS We capture the surgeons' eye gaze on the surgical monitor with the Microsoft HoloLens 2. To enable stable and accurate gaze detection, we develop a calibration algorithm to refine the eye tracking of the HoloLens. In addition, we use a QR code to locate the eye gaze on the surgical monitor. We then run a user study with three expert and three novice surgeons. Each surgeon is tasked to locate three needles representing kidney stones in three different kidney phantoms. RESULTS We find that experts have more focused gaze patterns. They complete the task faster, have a smaller total gaze area, and gaze fewer times outside the area of interest. While the fixation to non-fixation ratio did not show a significant difference in our findings, tracking the ratio over time shows different patterns between novices and experts. CONCLUSION We show that a non-negligible difference holds between novice and expert surgeons' gaze metrics in kidney stone identification in phantoms. Expert surgeons demonstrate more targeted gaze throughout a trial, indicating their higher level of proficiency. To improve the skill acquisition process for novice surgeons, we suggest providing sub-task specific feedback. This approach presents an objective and non-invasive method to assess surgical competence.
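One plausible way to operationalize the "total gaze area" metric that separated experts from novices is the area of the convex hull of all gaze points in a trial; this definition is an assumption for illustration, as the paper may compute the area differently:

```python
import numpy as np
from scipy.spatial import ConvexHull

def total_gaze_area(points):
    """Convex-hull area (px^2) of gaze points on the surgical monitor.

    points: (N, 2) array-like of gaze coordinates; needs at least
    three non-collinear points. A smaller hull suggests more focused
    gaze, as reported for expert surgeons.
    """
    pts = np.asarray(points, dtype=float)
    return ConvexHull(pts).volume  # for 2-D input, .volume is the area
```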
Affiliation(s)
- Yizhou Li
  - Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37240, USA
- Amy Reed
  - Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Nicholas Kavoussi
  - Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN 37232, USA
- Jie Ying Wu
  - Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37240, USA
8
Felinska EA, Fuchs TE, Kogkas A, Chen ZW, Otto B, Kowalewski KF, Petersen J, Müller-Stich BP, Mylonas G, Nickel F. Telestration with augmented reality improves surgical performance through gaze guidance. Surg Endosc 2023;37:3557-3566. PMID: 36609924; PMCID: PMC10156835; DOI: 10.1007/s00464-022-09859-7.
Abstract
BACKGROUND In minimally invasive surgery (MIS), trainees need to learn how to interpret the operative field displayed on the laparoscopic screen. Experts currently guide trainees mainly verbally during laparoscopic procedures. A newly developed telestration system with augmented reality (iSurgeon) allows the instructor to display hand gestures in real time on the laparoscopic screen in augmented reality to provide visual expert guidance (telestration). This study analysed the effect of telestration-guided instructions on gaze behaviour during MIS training. METHODS In a randomized controlled crossover study, 40 MIS-naive medical students performed 8 laparoscopic tasks with telestration or with verbal instructions only. Pupil Core eye-tracking glasses were used to capture the instructor's and trainees' gazes. Gaze behaviour measures for tasks 1-7 were gaze latency, gaze convergence and collaborative gaze convergence. Performance measures included the number of errors in tasks 1-7 and trainees' ratings in structured and standardized performance scores in task 8 (ex vivo porcine laparoscopic cholecystectomy). RESULTS There was a significant improvement in tasks 1-7 in gaze latency [F(1,39) = 762.5, p < 0.01, ηp² = 0.95], gaze convergence [F(1,39) = 482.8, p < 0.01, ηp² = 0.93] and collaborative gaze convergence [F(1,39) = 408.4, p < 0.01, ηp² = 0.91] upon instruction with iSurgeon. The number of errors was significantly lower in tasks 1-7 (0.18 ± 0.56 vs. 1.94 ± 1.80, p < 0.01) and the score ratings for laparoscopic cholecystectomy were significantly higher with telestration (global OSATS: 29 ± 2.5 vs. 25 ± 5.5, p < 0.01; task-specific OSATS: 60 ± 3 vs. 50 ± 6, p < 0.01). CONCLUSIONS Telestration with augmented reality successfully improved surgical performance. Trainees' gaze behaviour was improved by reducing the time from instruction to fixation on targets and by leading to a higher convergence of the instructor's and the trainees' gazes. Also, the convergence of trainees' gaze and target areas increased with telestration. This confirms that augmented reality-based telestration works by means of gaze guidance in MIS and could be used to improve training outcomes.
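Gaze latency here is the interval from the instructor's cue to the trainee's first fixation on the indicated target. It could be computed roughly as below; the fixation record format and AOI labelling are assumptions for illustration:

```python
def gaze_latency(instruction_t, fixations, target_aoi):
    """Seconds from an instruction to the first fixation on the target.

    instruction_t: time of the instructor's cue (seconds).
    fixations: time-ordered list of (start_t, end_t, aoi_label) records.
    target_aoi: label of the instructed target area.
    Returns None if the target was never fixated after the cue.
    """
    for start_t, end_t, aoi in fixations:
        if start_t >= instruction_t and aoi == target_aoi:
            return start_t - instruction_t
    return None
```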
Affiliation(s)
- Eleni Amelia Felinska
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Thomas Ewald Fuchs
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Alexandros Kogkas
  - Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
  - Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London SW7 2AZ, UK
- Zi-Wei Chen
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Benjamin Otto
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Karl-Friedrich Kowalewski
  - Department of Urology and Urological Surgery, University Medical Center Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Jens Petersen
  - Department of Medical Image Computing, German Cancer Research Center, 69120 Heidelberg, Germany
- Beat Peter Müller-Stich
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
- George Mylonas
  - Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
  - Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London SW7 2AZ, UK
- Felix Nickel
  - Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany
9
Chainey J, O'Kelly CJ, Zhang Y, Kim MJ, Zheng B. Gaze Behaviors of Neurosurgeon in Performing Gross Movements under Microscope. World Neurosurg 2022;166:e469-e474. PMID: 35843576; DOI: 10.1016/j.wneu.2022.07.035.
Abstract
BACKGROUND Frequent intermittent bleeding control and suction are often necessary during microneurosurgical procedures. We compared the visual searching strategy that guides these types of gross hand movements between expert surgeons and neurosurgical residents. METHODS A total of 70 trials of a microsurgical task consisting of moving a cotton ball with bipolar forceps and a suction instrument under the microscope were recorded. Based on the surgical videos, we extracted total task time, boarding time, gaze-tool time gap, number of fixations during the boarding time, and target-locked fixation duration. A comparison of these measures between expert and resident surgeons was performed. RESULTS No significant difference was found in total task time, boarding time, or number of fixations during the boarding time between the 2 groups of surgeons. However, we found significant differences in the gaze-tool time gap (P < 0.001, partial η² = 0.54) and the target-locked fixation duration (P < 0.001, partial η² = 0.24). Specifically, expert microsurgeons' eyes moved to the target board earlier, before their hand-held tools moved (1.01 ± 0.30 seconds), compared with resident microsurgeons (0.44 ± 0.17 seconds). The target-locked fixation duration was also longer among experts compared with residents (experts: 1.16 ± 0.82 seconds; residents: 0.40 ± 0.30 seconds). CONCLUSIONS Task time analysis for this basic microsurgical task is not sufficient to evaluate participants' level of expertise. Gaze behavior analysis helps to reveal hidden differences between experts and residents. This research provides more evidence supporting the use of gaze analysis for assessing surgeons' skills in microsurgery.
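The gaze-tool time gap that distinguished experts from residents is the lead time between the eyes reaching the target and the instrument arriving there. A minimal sketch, assuming both event times have already been annotated from the surgical video:

```python
def gaze_tool_time_gap(gaze_on_target_t, tool_on_target_t):
    """Gaze-tool time gap: how far the eyes lead the hands (seconds).

    gaze_on_target_t: time of the first fixation on the target board.
    tool_on_target_t: time the instrument first reaches the target.
    Positive values mean the eyes arrived first; the abstract reports
    ~1.01 s for experts versus ~0.44 s for residents. The exact event
    definitions are assumptions for illustration.
    """
    return tool_on_target_t - gaze_on_target_t
```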
Affiliation(s)
- Jonathan Chainey
  - Division of Neurosurgery, University of Alberta, Edmonton, Canada
- Cian J O'Kelly
  - Division of Neurosurgery, University of Alberta, Edmonton, Canada
- Yao Zhang
  - Surgical Simulation Research Lab, University of Alberta, Edmonton, Canada
- Michael J Kim
  - Division of General Surgery, University of Alberta Hospital, Edmonton, Canada
- Bin Zheng
  - Surgical Simulation Research Lab, University of Alberta, Edmonton, Canada