1
Nyström M, Hooge ITC, Hessels RS, Andersson R, Hansen DW, Johansson R, Niehorster DC. The fundamentals of eye tracking part 3: How to choose an eye tracker. Behav Res Methods 2025; 57:67. [PMID: 39843609; PMCID: PMC11754381; DOI: 10.3758/s13428-024-02587-x]
Abstract
There is an abundance of commercial and open-source eye trackers available for researchers interested in gaze and eye movements. Which aspects should be considered when choosing an eye tracker? This paper describes what distinguishes different types of eye trackers, discusses their suitability for different types of research questions, and highlights the questions researchers should ask themselves to make an informed choice.
Affiliation(s)
- Marcus Nyström
- Lund University Humanities Lab, Box 201, SE, 221 00, Lund, Sweden
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Box 201, SE, 221 00, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
2
Deng S, Kulkarni C, Oh J, Parker SH, Lau N. Comparison Between Scene-Independent and Scene-Dependent Eye Metrics in Assessing Psychomotor Skills. Hum Factors 2024:187208241302475. [PMID: 39608016; DOI: 10.1177/00187208241302475]
Abstract
OBJECTIVE This study aims to compare the relative sensitivity between scene-independent and scene-dependent eye metrics in assessing trainees' performance in simulated psychomotor tasks. BACKGROUND Eye metrics have been extensively studied for skill assessment and training in psychomotor tasks, including aviation, driving, and surgery. These metrics can be categorized as scene-independent or scene-dependent, based on whether predefined areas of interest are considered. There is a paucity of direct comparisons between these metric types, particularly in their ability to assess performance during early training. METHOD Thirteen medical students practiced the peg transfer task in the Fundamentals of Laparoscopic Surgery. Scene-independent and scene-dependent eye metrics, completion time, and tool motion metrics were derived from eye-tracking data and task videos. K-means clustering of nine eye metrics identified three groups of practice trials with similar gaze behaviors, corresponding to three performance levels verified by completion time and tool motion metrics. A random forest model using eye metrics estimated classification accuracy and determined the feature importance of the eye metrics. RESULTS Scene-dependent eye metrics demonstrated a clearer linear trend with performance levels than scene-independent metrics. The random forest model achieved 88.59% accuracy, identifying the top four predictors of performance as scene-dependent metrics, whereas the two least effective predictors were scene-independent metrics. CONCLUSION Scene-dependent eye metrics are overall more sensitive than scene-independent ones for assessing trainee performance in simulated psychomotor tasks. APPLICATION The study's findings are significant for advancing eye metrics in psychomotor skill assessment and training, enhancing operator competency, and promoting safe operations.
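The analysis pipeline this abstract describes (deriving per-trial eye metrics, then k-means clustering of trials into proficiency levels) can be sketched roughly as follows. The metric names, synthetic values, and cluster-ordering heuristic are illustrative assumptions, not the study's data or code:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic per-trial metrics (hypothetical): column 0 = dwell fraction on the
# target AOI (scene-dependent), column 1 = mean fixation duration in seconds
# (scene-independent). Three simulated skill groups, 20 trials each.
novice = rng.normal([0.30, 0.25], 0.03, size=(20, 2))
middle = rng.normal([0.50, 0.27], 0.03, size=(20, 2))
expert = rng.normal([0.70, 0.30], 0.03, size=(20, 2))
trials = np.vstack([novice, middle, expert])

# Group trials with similar gaze behaviour into three candidate proficiency levels.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trials)

# Relabel clusters so that levels 0/1/2 are ordered by mean target-AOI dwell.
order = np.argsort([trials[labels == c, 0].mean() for c in range(3)])
level = np.empty_like(labels)
for rank, c in enumerate(order):
    level[labels == c] = rank
```

In the study the cluster assignment was then verified against completion time and tool motion; here the relabeling step only orders clusters by one metric so the level indices are comparable across runs.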
3
Kang H, Yang R, Song R, Yang C, Wang W. An Approach of Query Audience's Attention in Virtual Speech. Sensors (Basel) 2024; 24:5363. [PMID: 39205057; PMCID: PMC11359125; DOI: 10.3390/s24165363]
Abstract
Virtual speeches are a popular way to communicate remotely with many users, but they lack eye contact. This paper proposes an evaluation of online audience attention based on gaze tracking. Our approach uses only webcams to capture the audience's head posture, gaze time, and other features, providing a low-cost method for attention monitoring with reference value across multiple domains. We also propose a set of indexes for evaluating an audience's degree of attention, compensating for the fact that a speaker cannot gauge the audience's concentration through eye contact during online speeches. We selected 96 students for a 20 min group simulation session and used Spearman's correlation coefficient to analyze the correlation between our evaluation indicators and concentration. The results showed that each evaluation index correlated significantly with the degree of attention (p = 0.01), and all students in the focused group met the thresholds set by each of our evaluation indicators, while the students in the non-focused group failed to reach them. During the simulation, eye-movement data and EEG signals were measured synchronously for the second group of students. The students' EEG results were consistent with the systematic evaluation, confirming its accuracy.
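The correlation analysis described above can be reproduced in outline with SciPy; the indicator name and the synthetic data below are invented for illustration (only the sample size of 96 matches the abstract):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical indicator: fraction of session time the participant's gaze stays
# on the speech window, versus an independently measured concentration score,
# for 96 simulated participants.
concentration = rng.uniform(0.0, 1.0, 96)
on_screen_fraction = np.clip(concentration + rng.normal(0.0, 0.15, 96), 0.0, 1.0)

# Spearman's rank correlation between the evaluation indicator and concentration
rho, p = spearmanr(on_screen_fraction, concentration)
```

Spearman's coefficient is rank-based, so it tolerates the monotone but nonlinear relationships typical of attention indicators.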
Affiliation(s)
- Wenqing Wang
- School of Automation, Xi’an University of Posts and Telecommunications, Xi’an 710061, China; (H.K.); (R.Y.); (R.S.); (C.Y.)
4
Anton NE, Obuseh M, Lim C, Chen H, Yang J, Stefanidis D, Yu D. Nontechnical Skills Assessment in Acute Care Trauma Simulations: A Mixed Methods Approach Using Eye Tracking and Behavioral Marker Systems. Mil Med 2024; 189:719-727. [PMID: 39160814; PMCID: PMC11368209; DOI: 10.1093/milmed/usae240]
Abstract
INTRODUCTION The condition of trauma patients and the urgent need for timely resuscitation present unique challenges to trauma teams. These difficulties are exacerbated for military trauma teams in combat environments. Consequently, there is a need for continued improvement of nontechnical skills (NTS) training for trauma teams. However, current approaches to NTS assessment rely on subjective ratings, which can introduce bias. Accordingly, objective methods of NTS evaluation are needed. Eye-tracking (ET) methods have been applied to studying communication, situation awareness, and leadership in many health care settings, and could be applied to studying physicians' NTS during trauma situations. In this study, we aimed to assess the relationship between trauma team leaders' objective gaze patterns and subjective expert NTS ratings during patient care simulations. MATERIALS AND METHODS After Institutional Review Board approval, 9 trauma teams from first-year post-graduate general surgery and emergency medicine residents were recruited to participate in 1 of 2 trauma simulations (a difficult airway case and a multi-patient trauma). Each scenario lasted approximately 15 minutes. All team leaders wore a mobile ET system to evaluate gaze metrics: time to first fixation (TTFF), average fixation duration (AFD), and total percentage of the scenario (TPS) focused on areas of interest (AOIs), which included patient, care team, diagnostic equipment, and patient care equipment. Trained faculty raters completed the Non-Technical Skills for Surgeons (NOTSS) assessment tool and the Trauma Non-Technical Skills (T-NOTECHS) scale. One-way analysis of variance, Kruskal-Wallis, and appropriate post-hoc pairwise comparison tests were run to assess differences between ET metrics across AOI groups. Spearman's Rho tests were used to assess correlations between ET and subjective NTS ratings.
RESULTS Compared to other NTS domains, trauma teams scored relatively poorly on communication across both T-NOTECHS (3.29 ± 0.61, maximum = 5) and NOTSS (2.87 ± 0.66, maximum = 4). We found significant differences in trauma team leaders' TTFF between teammates and the patient (Team: 1.56 vs Patient: 29.82 seconds, P < .001). TTFF on the diagnostic equipment was negatively correlated (P < .05) to multiple measures of subjective NTS assessments. There were no significant differences in AFD between AOIs, and AFD on teammates was positively correlated (P < .05) to communication and teamwork. There were significant differences in TPS across most AOI pairs (P < .05), and the average TPS fixated was highest on the patient (32%). Finally, there were several significant correlations between additional ET and NTS metrics. CONCLUSIONS This study utilized a mixed methods approach to assess trauma team leaders' NTS in simulated acute care trauma simulations. Our results provide several objective insights into trauma team leaders' NTS behaviors during patient care simulations. Such objective insights provide a more comprehensive understanding of NTS behaviors and can be leveraged to guide NTS training of trauma physicians in the future. More studies are needed to apply these methods to capture NTS from a larger sample of teams in both simulated and real trauma environments.
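The three gaze metrics named in this abstract (TTFF, AFD, and TPS over AOIs) are straightforward to compute once fixations are assigned to AOIs; the fixation values below are made up for illustration, and only the AOI categories follow the abstract:

```python
# Each fixation: (start time in s, duration in s, AOI label). Synthetic data.
fixations = [
    (0.0, 0.4, "care team"), (0.6, 0.3, "patient"), (1.1, 0.8, "patient"),
    (2.2, 0.5, "diagnostic equipment"), (3.0, 0.6, "care team"), (4.0, 1.0, "patient"),
]
scenario_len_s = 6.0

def ttff(fixs, aoi):
    """Time to first fixation (TTFF) on an AOI, or None if never fixated."""
    times = [start for start, dur, label in fixs if label == aoi]
    return min(times) if times else None

def afd(fixs, aoi):
    """Average fixation duration (AFD) on an AOI."""
    durs = [dur for start, dur, label in fixs if label == aoi]
    return sum(durs) / len(durs) if durs else 0.0

def tps(fixs, aoi, total_s):
    """Total percentage of the scenario (TPS) spent fixating an AOI."""
    return 100.0 * sum(dur for start, dur, label in fixs if label == aoi) / total_s
```

For example, `ttff(fixations, "patient")` gives 0.6 s on this toy trace; in the study such values were then compared across AOIs with ANOVA/Kruskal-Wallis tests.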
Affiliation(s)
- Nicholas E. Anton
- Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Marian Obuseh
- School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Chiho Lim
- School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Haozhi Chen
- School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Jing Yang
- Department of Industrial and Systems Engineering, University of Buffalo, Buffalo, NY, 14260, USA
- Dimitrios Stefanidis
- Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, 46202, USA
- Denny Yu
- School of Industrial Engineering, Purdue University, West Lafayette, IN, 47907, USA
5
Nejad A, de Haan GA, Heutink J, Cornelissen FW. ACE-DNV: Automatic classification of gaze events in dynamic natural viewing. Behav Res Methods 2024; 56:3300-3314. [PMID: 38448726; PMCID: PMC11133063; DOI: 10.3758/s13428-024-02358-8]
Abstract
Eye movements offer valuable insights for clinical interventions, diagnostics, and understanding visual perception. The process usually involves recording a participant's eye movements and analyzing them in terms of various gaze events. Manual identification of these events is extremely time-consuming. Although the field has seen the development of automatic event detection and classification methods, these methods have primarily focused on distinguishing events when participants remain stationary. With increasing interest in studying gaze behavior in freely moving participants, such as during daily activities like walking, new methods are required to automatically classify events in data collected under unrestricted conditions. Existing methods often rely on additional information from depth cameras or inertial measurement units (IMUs), which are not typically integrated into mobile eye trackers. To address this challenge, we present a framework for classifying gaze events based solely on eye-movement signals and scene video footage. Our approach, the Automatic Classification of gaze Events in Dynamic and Natural Viewing (ACE-DNV), analyzes eye movements in terms of velocity and direction and leverages visual odometry to capture head and body motion. Additionally, ACE-DNV assesses changes in image content surrounding the point of gaze. We evaluate the performance of ACE-DNV using a publicly available dataset and showcase its ability to discriminate between gaze fixation, gaze pursuit, gaze following, and gaze shifting (saccade) events. ACE-DNV exhibited comparable performance to previous methods, while eliminating the necessity for additional devices such as IMUs and depth cameras. In summary, ACE-DNV simplifies the automatic classification of gaze events in natural and dynamic environments. The source code is accessible at https://github.com/arnejad/ACE-DNV.
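A heavily simplified sketch of the idea behind this kind of classification: combine eye-in-head velocity with head motion (e.g., from visual odometry) to decide whether gaze is stable in the world. The thresholds and the three-way rule below are illustrative assumptions, not the published ACE-DNV algorithm, which also uses gaze direction and image content:

```python
def classify_sample(eye_vel, head_vel, saccade_thresh=100.0, stable_thresh=5.0):
    """Label one gaze sample from eye-in-head velocity plus head velocity
    (deg/s, signed, 1-D for simplicity). Thresholds are illustrative."""
    gaze_world_vel = eye_vel + head_vel   # gaze motion in world coordinates
    if abs(eye_vel) > saccade_thresh:
        return "saccade"                  # fast eye-in-head shift
    if abs(gaze_world_vel) < stable_thresh:
        return "fixation"                 # gaze stable in the world,
                                          # even while the head is moving
    return "pursuit"                      # gaze smoothly tracking in the world
```

The second branch captures why head motion matters: a compensatory eye movement during a head turn (e.g., `classify_sample(20.0, -20.0)`) is a fixation in world coordinates, which a head-agnostic detector would mislabel.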
Affiliation(s)
- Ashkan Nejad
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Gera A de Haan
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands
- Department of Clinical and Developmental Neuropsychology, University of Groningen, Groningen, The Netherlands
- Joost Heutink
- Department of Research and Improvement of Care, Royal Dutch Visio, Huizen, The Netherlands
- Department of Clinical and Developmental Neuropsychology, University of Groningen, Groningen, The Netherlands
- Frans W Cornelissen
- Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
6
Bapna T, Valles J, Leng S, Pacilli M, Nataraja RM. Eye-tracking in surgery: a systematic review. ANZ J Surg 2023; 93:2600-2608. [PMID: 37668263; DOI: 10.1111/ans.18686]
Abstract
BACKGROUND Surgery is constantly evolving with the assistance of rapidly developing novel technology. Eye-tracking devices provide opportunities to monitor the acquisition of surgical skills, gain insight into performance, and enhance surgical practice. The aim of this review was to consolidate the available evidence for the use of eye-tracking in the surgical disciplines. METHODS A systematic literature review was conducted in accordance with PRISMA guidelines. OVID Medline, EMBASE, the Cochrane Library, Scopus, and Science Direct were searched for the period January 2000 to December 2022. Studies involving eye-tracking in surgical training, assessment, and technical innovation were included in the review. Non-surgical procedures, animal studies, and studies not involving surgical participants were excluded. RESULTS The search returned a total of 12 054 articles, 80 of which were included in the final analysis and review. Seventeen studies involved eye-tracking in surgical training, 48 involved surgical assessment, and 20 focussed on technical aspects of this technology. Twenty-six different eye-tracking devices were used in the included studies. Metrics such as the number of fixations, duration of fixations, dwell time, and cognitive workload were able to differentiate between novice and expert performance. Eight studies demonstrated the effectiveness of gaze-training for improving surgical skill. CONCLUSION The current literature shows a broad range of utility for a variety of eye-tracking devices in surgery. There remains a lack of standardization for metric parameters and gaze analysis techniques. Further research is required to validate its use, establish reliability, and create uniform practices.
Affiliation(s)
- Tanay Bapna
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- John Valles
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Samantha Leng
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Maurizio Pacilli
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Ramesh Mark Nataraja
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
7
Tzamaras HM, Wu HL, Moore JZ, Miller SR. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. Proc Hum Factors Ergon Soc Annu Meet 2023; 67:953-958. [PMID: 38450120; PMCID: PMC10914345; DOI: 10.1177/21695067231192929]
Abstract
Eye-tracking is a valuable research method for understanding human cognition and is readily employed in human factors research, including human factors in healthcare. While wearable mobile eye trackers have become more readily available, there are no existing analysis methods for accurately and efficiently mapping dynamic gaze data on dynamic areas of interest (AOIs), which limits their utility in human factors research. The purpose of this paper was to outline a proposed framework for automating the analysis of dynamic areas of interest by integrating computer vision and machine learning (CVML). The framework is then tested using a use-case of a Central Venous Catheterization trainer with six dynamic AOIs. While the results of the validity trial indicate there is room for improvement in the CVML method proposed, the framework provides direction and guidance for human factors researchers using dynamic AOIs.
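Once a computer-vision tracker supplies per-frame bounding boxes for the dynamic AOIs, the core of mapping gaze onto them reduces to a hit test. The AOI names and coordinates below are hypothetical, not taken from the paper's Central Venous Catheterization use-case:

```python
def gaze_to_aoi(gaze_xy, frame_boxes):
    """Assign one gaze point to the first dynamic AOI whose bounding box for
    the current video frame contains it; in practice the boxes would be
    produced per frame by an object detector/tracker."""
    x, y = gaze_xy
    for name, (x0, y0, x1, y1) in frame_boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical tracker output for one frame of scene video (pixel coordinates).
frame_boxes = {"needle_tip": (10, 10, 50, 50), "vessel": (60, 10, 90, 40)}
```

Running this per frame turns a raw gaze stream into an AOI sequence, from which dwell times and transition counts follow directly.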
Affiliation(s)
- Hang-Ling Wu
- Pennsylvania State University Mechanical Engineering
- Jason Z Moore
- Pennsylvania State University Mechanical Engineering
8
Nespolo RG, Cole E, Wang D, Yi D, Leiderman YI. A Platform for Tracking Surgeon and Observer Gaze as a Surrogate for Attention in Ophthalmic Surgery. Ophthalmol Sci 2023; 3:100246. [PMID: 36748062; PMCID: PMC9898791; DOI: 10.1016/j.xops.2022.100246]
Abstract
Purpose To develop and validate a platform that can extract eye gaze metrics from surgeons observing cataract and vitreoretinal procedures and to enable post hoc data analysis to assess potential discrepancies in eye movement behavior according to surgeon experience. Design Experimental, prospective, single-center study. Participants Eleven ophthalmic surgeons observing deidentified vitreoretinal and cataract surgical procedures performed at a single university-based medical center. Methods An open-source platform was developed to extract gaze coordinates and metrics from ophthalmic surgeons via a computer vision algorithm in conjunction with a neural network to track and segment instruments and tissues, identifying areas of attention in the visual field of study subjects. Eleven surgeons provided validation data by watching videos of 6 heterogeneous vitreoretinal and cataract surgical phases. Main Outcome Measures Accuracy and distance traveled by the eye gaze of participants and overlap of the participants' eye gaze with instruments and tissues while observing surgical procedures. Results The platform demonstrated repeatability of > 94% when acquiring the eye gaze behavior of subjects. Attending ophthalmic surgeons and clinical fellows exhibited a lower overall cartesian distance traveled in comparison to resident physicians in ophthalmology (P < 0.02). Ophthalmology residents and clinical fellows exhibited more fixations to the display area where surgical device parameters were superimposed than attending surgeons (P < 0.05). There was a trend toward gaze overlap with the instrument tooltip location among resident physicians in comparison to attending surgeons and fellows (41.42% vs. 34.8%, P > 0.2). The number and duration of fixations did not vary substantially among groups (P > 0.3). Conclusions The platform proved effective in extracting gaze metrics of ophthalmic surgeons. 
These preliminary data suggest that surgeon gaze behavior differs according to experience.
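Two of the outcome measures above, cartesian distance traveled by gaze and overlap of gaze with segmented instruments/tissues, can be sketched as follows, assuming gaze in pixel coordinates and a binary segmentation mask; the example data are synthetic and not from the platform:

```python
import numpy as np

def path_length(gaze_xy):
    """Total cartesian distance traveled by the gaze point (pixels)."""
    g = np.asarray(gaze_xy, dtype=float)
    return float(np.linalg.norm(np.diff(g, axis=0), axis=1).sum())

def overlap_fraction(gaze_xy, mask):
    """Fraction of gaze samples landing on a binary mask (indexed mask[y, x]),
    such as an instrument-tip segmentation from a neural network."""
    hits = sum(bool(mask[int(y), int(x)]) for x, y in gaze_xy)
    return hits / len(gaze_xy)

# Toy mask and gaze trace on a 640x480 display (synthetic).
mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 300:400] = True
gaze = [(350, 150), (10, 10), (320, 120)]
```

A lower path length for experienced surgeons, as reported above, would show up here as a smaller `path_length` for the same viewing period.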
Affiliation(s)
- Rogerio G. Nespolo
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois
- Emily Cole
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois
- Daniel Wang
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois
- Darvin Yi
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois
- Yannek I. Leiderman
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois Chicago, Chicago, Illinois
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois
9
Li Y, Reed A, Kavoussi N, Wu JY. Eye gaze metrics for skill assessment and feedback in kidney stone surgery. Int J Comput Assist Radiol Surg 2023. [PMID: 37202714; DOI: 10.1007/s11548-023-02901-6]
Abstract
PURPOSE Surgical skill assessment is essential for safe operations. In endoscopic kidney stone surgery, surgeons must perform a highly skill-dependent mental mapping from the pre-operative scan to the intraoperative endoscope image. Poor mental mapping can lead to incomplete exploration of the kidney and high reoperation rates. Yet there are few objective ways to evaluate competency. We propose to use unobtrusive eye-gaze measurements in the task space to evaluate skill and provide feedback. METHODS We capture the surgeons' eye gaze on the surgical monitor with the Microsoft HoloLens 2. To enable stable and accurate gaze detection, we develop a calibration algorithm to refine the eye tracking of the HoloLens. In addition, we use a QR code to locate the eye gaze on the surgical monitor. We then run a user study with three expert and three novice surgeons. Each surgeon is tasked to locate three needles representing kidney stones in three different kidney phantoms. RESULTS We find that experts have more focused gaze patterns. They complete the task faster, have a smaller total gaze area, and gaze fewer times outside the area of interest. While the fixation to non-fixation ratio did not show a significant difference in our findings, tracking the ratio over time shows different patterns between novices and experts. CONCLUSION We show that a non-negligible difference holds between novice and expert surgeons' gaze metrics in kidney stone identification in phantoms. Expert surgeons demonstrate more targeted gaze throughout a trial, indicating their higher level of proficiency. To improve the skill acquisition process for novice surgeons, we suggest providing sub-task specific feedback. This approach presents an objective and non-invasive method to assess surgical competence.
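The gaze summary metrics this abstract compares (total gaze area and excursions outside the area of interest) might be computed along these lines; the bounding-box approximation of gaze area and the example values are simplifying assumptions, not the authors' method:

```python
def gaze_metrics(gaze_xy, aoi):
    """Two illustrative summary metrics from a gaze trace on the monitor:
    - 'total gaze area', approximated here by the axis-aligned bounding box of
      all gaze samples (a simplification; a convex hull is another option);
    - number of excursions, i.e., transitions from inside the AOI to outside.
    aoi = (x0, y0, x1, y1) in the same pixel coordinates as the gaze samples."""
    xs = [x for x, y in gaze_xy]
    ys = [y for x, y in gaze_xy]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    x0, y0, x1, y1 = aoi
    inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y in gaze_xy]
    exits = sum(1 for a, b in zip(inside, inside[1:]) if a and not b)
    return area, exits
```

Under this sketch, the expert pattern reported above corresponds to a smaller `area` and a lower `exits` count for the same task.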
Affiliation(s)
- Yizhou Li
- Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN, 37240, USA
- Amy Reed
- Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN, 37232, USA
- Nicholas Kavoussi
- Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN, 37232, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN, 37240, USA
10
Berges AJ, Vedula SS, Chara A, Hager GD, Ishii M, Malpani A. Eye Tracking and Motion Data Predict Endoscopic Sinus Surgery Skill. Laryngoscope 2023; 133:500-505. [PMID: 35357011; PMCID: PMC9825109; DOI: 10.1002/lary.30121]
Abstract
OBJECTIVE Endoscopic surgery has a considerable learning curve due to dissociation of the visual-motor axes, coupled with decreased tactile feedback and mobility. In particular, endoscopic sinus surgery (ESS) lacks objective skill assessment metrics to provide specific feedback to trainees. This study aims to identify summary metrics from eye tracking, endoscope motion, and tool motion to objectively assess surgeons' ESS skill. METHODS In this cross-sectional study, expert and novice surgeons performed ESS tasks of inserting an endoscope and tool into a cadaveric nose, touching an anatomical landmark, and withdrawing the endoscope and tool out of the nose. Tool and endoscope motion were collected using an electromagnetic tracker, and eye gaze was tracked using an infrared camera. Three expert surgeons provided binary assessments of low/high skill. Twenty summary statistics were calculated for eye, tool, and endoscope motion and used in logistic regression models to predict surgical skill. RESULTS Fourteen metrics (10 eye gaze, 2 tool motion, and 2 endoscope motion) were significantly different between surgeons with low and high skill. Models to predict skill for six of the nine ESS tasks had an AUC > 0.95. A combined model of all tasks (AUC 0.95, PPV 0.93, NPV 0.89) included metrics from eye tracking data and endoscope motion, indicating that these metrics are transferable across tasks. CONCLUSIONS Eye gaze, endoscope, and tool motion data can provide an objective and accurate measurement of ESS surgical performance. Incorporation of these algorithmic techniques intraoperatively could allow for automated skill assessment for trainees learning endoscopic surgery.
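The modeling step described (summary statistics fed to a logistic regression that predicts binary skill) can be sketched with scikit-learn on synthetic data; the two feature names and their effect sizes are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic stand-in for per-trial summary statistics: low-skill (0) and
# high-skill (1) surgeons differ in a gaze metric and a motion metric.
skill = np.repeat([0, 1], 20)
X = np.column_stack([
    rng.normal(0.4 + 0.3 * skill, 0.08),   # e.g. fixation stability on target
    rng.normal(2.0 - 0.8 * skill, 0.20),   # e.g. endoscope path length (m)
])

# Logistic regression maps the summary metrics to a low/high skill label.
clf = LogisticRegression().fit(X, skill)
train_acc = clf.score(X, skill)
```

On real data one would report cross-validated AUC rather than training accuracy, as the study does per task and for the combined model.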
Affiliation(s)
- Masaru Ishii
- Johns Hopkins Department of Otolaryngology–Head and Neck Surgery
11
Lin Z, Yang Z, Ye X. Immersive Experience and Climate Change Monitoring in Digital Landscapes: Evidence from Somatosensory Sense and Comfort. Int J Environ Res Public Health 2023; 20:3332. [PMID: 36834034; PMCID: PMC9966150; DOI: 10.3390/ijerph20043332]
Abstract
In this study, virtual engine software (Unity 2019; Unity Software Inc., San Francisco, CA, USA) was used to generate a digital landscape model, forming a virtual immersive environment. Through field investigation and emotional preference experiments, the ancient tree ecological area and the sunlight-exposed area were monitored, and a somatosensory comfort evaluation model was established. The subjects showed the highest degree of interest in the ancient tree ecological area after the landscape roaming experience, and the mean variance in SC fluctuation was 13.23% in the experiments. The subjects were in a low arousal state and showed a significant degree of interest in the digital landscape roaming scene; there was a significant correlation between positive emotion, somatosensory comfort, and the Rating of Perceived Exertion index, and the somatosensory comfort of the ancient tree ecological area was higher than that of the sunlight-exposed area. It was also found that somatosensory comfort level can effectively distinguish the comfort level between the ancient tree ecological area and the sunlight-exposed area, which provides an important basis for monitoring extreme heat. This study concludes that, in terms of the goal of harmonious coexistence between humans and nature, the evaluation model of somatosensory comfort can contribute to reducing people's adverse views on extreme weather conditions.
Affiliation(s)
- Zhengsong Lin
- Virtual Landscape Design Lab, School of Art and Design, Wuhan Institute of Technology, Wuhan 430205, China; (Z.L.); (Z.Y.)
- Ziqian Yang
- Virtual Landscape Design Lab, School of Art and Design, Wuhan Institute of Technology, Wuhan 430205, China; (Z.L.); (Z.Y.)
- Xinyue Ye
- Department of Landscape Architecture and Urban Planning, Center for Geospatial Sciences, Applications and Technology, TAMIDS Design and Analytics Lab for Urban Artificial Intelligence, Texas A&M University, College Station, TX 77840, USA
12
Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. [PMID: 36123548; PMCID: PMC11062149; DOI: 10.1007/s00464-022-09582-3]
Abstract
INTRODUCTION In laparoscopic surgery, looking in the target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills. METHODS Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while their gaze on the monitor and tool activities inside the trainer box were recorded. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics was used to group the trials into different clusters/proficiency levels, and ANOVAs were conducted to test differences between proficiency levels. A Random Forest model was trained to study metric importance at predicting proficiency levels. RESULTS Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences between completion times (F2,488 = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors. CONCLUSION Scene-dependent gaze metrics distinguished trainee skill levels at a finer grain than the expert-novice contrast emphasized in the literature. Further, feedforward gaze metrics appeared to be more important than feedback metrics for predicting proficiency.
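A minimal sketch of the Random Forest step with out-of-bag accuracy and feature importances, on synthetic stand-in data (the metric names, effect sizes, and class structure are assumptions, not the study's 499 trials):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic trials: two informative metrics that grow with proficiency level
# (stand-ins for feedforward gaze metrics) and one uninformative noise metric.
level = rng.integers(0, 3, 300)            # three proficiency levels
X = np.column_stack([
    rng.normal(1.0 * level, 0.3),          # e.g. lookahead dwell fraction
    rng.normal(0.8 * level, 0.3),          # e.g. lead time into the target area
    rng.normal(0.0, 1.0, 300),             # noise metric
])

rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                            random_state=0).fit(X, level)
oob_accuracy = rf.oob_score_               # accuracy on out-of-bag samples
importances = rf.feature_importances_      # relative predictive importance
```

Out-of-bag accuracy, the figure the abstract reports at 99%, comes for free from bagging: each tree is scored on the samples it never saw during its bootstrap draw.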
Affiliation(s)
- Chaitanya S Kulkarni: Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Shiyu Deng: Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
- Tianzi Wang: Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
- Laura E Barnes: Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
- Shawn D Safford: Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
- Nathan Lau: Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA
13
Madariaga S, Babul C, Egaña JI, Rubio-Venegas I, Güney G, Concha-Miranda M, Maldonado PE, Devia C. SaFiDe: Detection of saccade and fixation periods based on eye-movement attributes from video-oculography, scleral coil or electrooculography data. MethodsX 2023; 10:102041. PMID: 36814691; PMCID: PMC9939704; DOI: 10.1016/j.mex.2023.102041.
Abstract
In this work we present SaFiDe, a deterministic method to detect eye movements (saccades and fixations) from eye-trace data. We developed this method for human and nonhuman primate data from video- and coil-recorded eye traces and further applied the algorithm to eye traces computed from electrooculograms. All the data analyzed were from free-exploration paradigms, where the main challenge was to detect periods of saccades and fixations that were uncued by the task. The method uses velocity and acceleration thresholds, calculated from the eye trace, to detect saccade and fixation periods. We show that our fully deterministic method detects saccades and fixations from eye traces during free visual exploration. The algorithm was implemented in MATLAB, and the code is publicly available on a GitHub repository.
- The algorithm presented is entirely deterministic, simplifying the comparison between subjects and tasks.
- The algorithm can operate over video-based eye tracker data, human electrooculogram records, or monkey scleral eye coil data.
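The velocity-threshold idea underlying this kind of event detection can be illustrated with a minimal sketch. This is not SaFiDe itself (which derives its thresholds from the eye trace and additionally uses acceleration to refine event boundaries); the function name, fixed threshold, and synthetic trace are assumptions for illustration:

```python
def detect_saccades(positions, dt, velocity_threshold):
    """Label each inter-sample interval as saccade or fixation by thresholding
    point-to-point velocity, then merge consecutive same-labelled intervals
    into (start, end, label) periods. Simplified sketch of the
    velocity-threshold idea only."""
    velocities = [abs(positions[i + 1] - positions[i]) / dt
                  for i in range(len(positions) - 1)]
    is_saccade = [v > velocity_threshold for v in velocities]

    # Merge runs of identical labels into contiguous periods.
    periods, start = [], 0
    for i in range(1, len(is_saccade) + 1):
        if i == len(is_saccade) or is_saccade[i] != is_saccade[start]:
            periods.append((start, i, "saccade" if is_saccade[start] else "fixation"))
            start = i
    return periods

# Synthetic 1-D trace sampled at 250 Hz: fixation, a fast 10-sample sweep,
# then fixation again.
trace = [0.0] * 20 + [i * 2.0 for i in range(1, 11)] + [20.0] * 20
events = detect_saccades(trace, dt=0.004, velocity_threshold=100.0)
print(events)
```

A real detector would work on 2-D gaze coordinates, derive the threshold from the data, and smooth the velocity signal before thresholding; the merge step, however, is the same in spirit.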
Affiliation(s)
- Samuel Madariaga: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile; Centro Nacional de Inteligencia Artificial, CENIA, Chile
- Cecilia Babul: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile; Biomedical Neuroscience Institute, BNI, Universidad de Chile, Chile
- José Ignacio Egaña: Departamento de Anestesiología y Medicina Perioperatoria, Facultad de Medicina, Universidad de Chile, Chile
- Iván Rubio-Venegas: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile
- Gamze Güney: Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Humboldt-Universität zu Berlin, Institut für Biologie, 10099 Berlin, Germany
- Miguel Concha-Miranda: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile; Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
- Pedro E Maldonado: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile; Centro Nacional de Inteligencia Artificial, CENIA, Chile; Biomedical Neuroscience Institute, BNI, Universidad de Chile, Chile
- Christ Devia: Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Chile; Centro Nacional de Inteligencia Artificial, CENIA, Chile; Biomedical Neuroscience Institute, BNI, Universidad de Chile, Chile (corresponding author)
14
Liu CH, Hung J, Chang CW, Lin JJH, Huang ES, Wang SL, Lee LA, Hsiao CT, Sung PS, Chao YP, Chang YJ. Oral presentation assessment and image reading behaviour on brain computed tomography reading in novice clinical learners: an eye-tracking study. BMC Med Educ 2022; 22:738. PMID: 36284299; PMCID: PMC9597969; DOI: 10.1186/s12909-022-03795-9.
Abstract
BACKGROUND To study whether oral presentation (OP) assessment can reflect novice learners' interpretation skills and reading behaviour in brain computed tomography (CT) reading. METHODS Eighty fifth-year medical students were recruited, received a 2-hour interactive workshop on how to read brain CT, and were assigned to read two brain CT images before and after instruction. We evaluated their image reading behaviour in terms of the overall OP post-test rating, lesion identification, and competency in systematic image reading after instruction. Students' reading behaviour while searching for the target lesions was recorded with an eye tracker and used to validate the accuracy of the lesion reports. Statistical analyses, including lag sequential analysis (LSA), linear mixed models, and transition entropy (TE), were conducted to reveal the temporal relations and spatial complexity of systematic image reading from the eye-movement perspective. RESULTS The overall OP ratings improved after instruction (pre-test vs. post-test: 0 vs. 1 in case 1 and 0 vs. 1 in case 2, p < 0.001). Both the systematic OP ratings (0 vs. 1 in both cases, p < 0.001) and the eye-tracking measures (case 1: TE 3.42 ± 0.62 vs. 3.67 ± 0.37, p = 0.001; case 2: TE 3.42 ± 0.76 vs. 3.75 ± 0.37, p = 0.002) showed that image reading behaviour changed after instruction. The linear mixed models suggested a significant interaction between instruction and area of interest for case 1 (p < 0.001) and case 2 (p = 0.004). Visual attention to the target lesions in case 1, assessed by dwell time, was 506.50 ± 509.06 milliseconds before and 374.38 ± 464.68 milliseconds after instruction (p = 0.02). However, the dwell times in case 2, the fixation counts, and the frequencies of accurate lesion diagnoses in both cases did not change after instruction.
CONCLUSION Our results show that OP performance may change concurrently with medical students' reading behaviour on brain CT after a structured instruction.
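Of the statistics named in this abstract, transition entropy lends itself to a compact illustration: it summarizes the spatial complexity of a scan path via the Shannon entropy of transitions between areas of interest (AOIs). The sketch below is a generic formulation; the study's exact TE definition (e.g., normalization or conditioning) may differ:

```python
import math
from collections import Counter

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of the distribution of transitions between
    distinct AOIs in a fixation sequence. Generic sketch: self-transitions
    (repeated fixations within one AOI) are ignored."""
    transitions = [(a, b) for a, b in zip(aoi_sequence, aoi_sequence[1:]) if a != b]
    if not transitions:
        return 0.0
    counts = Counter(transitions)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A scan that shuttles between two AOIs is less complex (lower entropy)
# than one that visits many AOIs in varied order.
simple = list("ABABABAB")
varied = list("ABCDACBDBACD")
print(transition_entropy(simple), transition_entropy(varied))
```

Higher values indicate a more varied (less stereotyped) distribution of gaze transitions, which is how a change toward more systematic or more exploratory reading can register as a change in TE.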
Affiliation(s)
- Chi-Hung Liu: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan; School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Division of Medical Education, Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Institute of Health Policy and Management, College of Public Health, National Taiwan University, Taipei, Taiwan
- June Hung: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan; School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chun-Wei Chang: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan; School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- John J H Lin: Graduate Institute of Science Education, National Taiwan Normal University, No. 88, Ting-Jou Rd., Sec. 4, Taipei City, Taiwan
- Elaine Shinwei Huang: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Shu-Ling Wang: Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei, Taiwan
- Li-Ang Lee: School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Otorhinolaryngology-Head and Neck Surgery, Linkou Main Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Cheng-Ting Hsiao: School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Emergency Medicine, Chang Gung Memorial Hospital, Chiayi, Taiwan; Chang Gung Medical Education Research Centre, Taoyuan, Taiwan
- Pi-Shan Sung: Department of Neurology, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Yi-Ping Chao: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan; Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan; Department of Biomedical Engineering, Chang Gung University, Taoyuan, Taiwan
- Yeu-Jhy Chang: Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan; School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Division of Medical Education, Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Chang Gung Medical Education Research Centre, Taoyuan, Taiwan
15
Eye Tracking Use in Surgical Research: A Systematic Review. J Surg Res 2022; 279:774-787. PMID: 35944332; DOI: 10.1016/j.jss.2022.05.024.
Abstract
INTRODUCTION Eye tracking (ET) is a popular tool to study what factors affect the visual behaviour of surgical team members. To our knowledge, there have been no reviews to date that evaluate the broad use of ET in surgical research. This review aims to identify and assess the quality of this evidence, to synthesize how ET can be used to inform surgical practice, and to provide recommendations to improve future ET surgical studies. METHODS In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic literature review was conducted. An electronic search was performed in MEDLINE, Cochrane Central, Embase, and Web of Science databases up to September 2020. Included studies used ET to measure the visual behaviour of members of the surgical team during surgery or surgical tasks. The included studies were assessed by two independent reviewers. RESULTS A total of 7614 studies were identified, and 111 were included for data extraction. Eleven applications were identified; the four most common were skill assessment (41%), visual attention assessment (22%), workload measurement (17%), and skills training (10%). A summary was provided of the various ways ET could be used to inform surgical practice, and three areas were identified for the improvement of future ET studies in surgery. CONCLUSIONS This review provided a comprehensive summary of the various applications of ET in surgery and how ET could be used to inform surgical practice, including how to use ET to improve surgical education. The information provided in this review can also aid in the design and conduct of future ET surgical studies.
16
Marín-Conesa E, Sánchez-Ferrer F, Grima-Murcia MD, Sánchez-Ferrer ML. The Application of a System of Eye Tracking in Laparoscopic Surgery: A New Didactic Tool to Visual Instructions. Front Surg 2021; 8:643611. PMID: 34179065; PMCID: PMC8219847; DOI: 10.3389/fsurg.2021.643611.
Abstract
Introduction: Laparoscopic surgery is an increasingly used technique, but it requires extensive learning, and communication within the operating room crew is considerably difficult. The use of eye tracking has been proposed as a didactic and evaluation tool in several settings, including laparoscopy simulators. Objectives: This study aimed to evaluate the usefulness of an eye-tracking system (Tobii Glasses 2) in laparoscopic surgery as a didactic and assessment tool to improve communication in the operating room and improve patient safety. Methodology: An anonymous survey was sent to the students and medical teachers of a faculty of medicine and to practicing doctors and residents. The message contained an explanation of the use of the Tobii glasses, a link to a video showing their use in a laparoscopic surgery, and the survey to complete after watching the video. Results: The survey was answered by 113 participants (51.3% medical students, 27.4% medical teachers, 18.6% practicing doctors, and 2.7% medical residents). Eighty-three percent agreed on the usefulness of the Tobii glasses in the operating room for improving communication between the main surgeon and the assistant, learning complex surgical techniques, obtaining didactic videos, and indicating anatomical structures. The worst-scored item was the price of the glasses. Conclusions: It is possible to record and project expert gaze patterns in the operating room in real time using the Tobii glasses. This device can improve communication among the surgical crew, the learning of residents, and the safety of surgical patients.
Affiliation(s)
- Ester Marín-Conesa: Department of Neurology, 'Reina Sofía' University Hospital of Murcia, Murcia, Spain
- Francisco Sánchez-Ferrer: Department of Pediatrics, "San Juan" University Hospital, Miguel Hernandez University, Alicante, Spain
- María Luisa Sánchez-Ferrer: Department of Obstetrics and Gynecology of the University Hospital "Virgen de la Arrixaca", Institute for Biomedical Research of Murcia, IMIB-Arrixaca, University of Murcia, Murcia, Spain
17
Lau N, Hartman-Kenzler J, Fichtel E, Park J, Ponnala S, Parker SH, Fitzgibbons S, Safford SD. Attending Surgeons Demonstrate Greater Correlations of Skill Assessment and Anticipation of Adverse Events Than Residents During Laparoscopic Cholecystectomy. J Surg Res 2021; 262:140-148. PMID: 33567387; DOI: 10.1016/j.jss.2020.12.064.
Abstract
BACKGROUND Surgical training includes the development of technical and nontechnical skills. While technical skills are more easily quantified, nontechnical skills such as situation awareness (SA) are more difficult to measure and quantify. This study investigated the relationships between different SA elements and expertise. METHODS Twenty attending and resident surgeons rated their anticipation of an impending adverse event while watching 20 videos of laparoscopic cholecystectomies with and without adverse events. After watching each video, they assessed surgeon skills and self-assessed their anticipation ratings. All participants answered a general confidence questionnaire before and after the study. RESULTS Videos with adverse events led to significantly higher anticipation of adverse events (P < 0.001), lower surgeon skill rating (P < 0.001), and higher self-assessment in their anticipation ratings (P < 0.001) across both participant groups. General confidence was significantly lower for residents than that for attending surgeons (P < 0.001). Compared with the residents, attendings exhibited stronger and more stable correlations between measurements of SA. When viewing videos with adverse events, attendings showed significantly higher correlation between anticipation of an impending adverse event and skill assessment of the surgeon (P = 0.005). CONCLUSIONS This study investigated how different elements of SA and their relationships were influenced by experience. The results indicated that attendings had stronger and more stable correlations between SA elements than residents, demonstrating how measurement correlations could be meaningful and sensitive indicators of expertise and autonomy readiness.
Affiliation(s)
- Nathan Lau: Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, Virginia
- Eric Fichtel: Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, Virginia
- Juyeon Park: Stony Brook University Medical Center, Stony Brook, New York
- Siddarth Ponnala: Department of Industrial and Systems Engineering, University of Wisconsin, Madison, Wisconsin
- Sarah Henrickson Parker: Virginia Tech Carilion School of Medicine, Virginia Tech, Roanoke, Virginia; Fralin Biomedical Research Institute, Virginia Tech, Roanoke, Virginia; Center for Simulation, Research and Patient Safety, Carilion Clinic, Roanoke, Virginia
- Shimae Fitzgibbons: Department of Surgery, MedStar Georgetown University Hospital, Washington, District of Columbia
- Shawn D Safford: Virginia Tech Carilion School of Medicine, Virginia Tech, Roanoke, Virginia; Center for Simulation, Research and Patient Safety, Carilion Clinic, Roanoke, Virginia; Department of Surgery, Penn State Health Children's Hospital, Hershey, Pennsylvania
18
Menekse Dalveren GG, Cagiltay NE. Distinguishing Intermediate and Novice Surgeons by Eye Movements. Front Psychol 2020; 11:542752. PMID: 33013592; PMCID: PMC7511664; DOI: 10.3389/fpsyg.2020.542752.
Abstract
Surgical skill-level assessment is key to collecting the required feedback and adapting educational programs accordingly. Currently, these assessments in minimally invasive surgery programs are primarily based on subjective methods, and there is no consensus on skill-level classifications. One of the most detailed of these classifications categorizes skill levels as beginner, novice, intermediate, sub-expert, and expert. To properly integrate skill assessment into minimally invasive surgical education programs and provide skill-based training alternatives, it is necessary to classify skill levels in as much detail as possible and to identify the differences between all skill levels objectively. Yet, despite very encouraging results in the literature, most studies have focused on the differences between novice and expert skill levels, leaving out the crucial skill levels between them. In addition, very few studies have considered the eye-movement behaviors of surgical residents. To this end, the present study attempted to distinguish novice- and intermediate-level surgical residents based on their eye movements. Eye-movement data were recorded from 23 volunteer surgical residents while they performed four computer-based simulated surgical tasks under different hand conditions. The data were analyzed using logistic regression to estimate the skill levels of both groups. The best estimation results, a 91.3% recognition rate in predicting novice and intermediate surgical residents, were obtained for one of the four scenarios under the dominant-hand condition. These results show that eye movements can potentially be used to identify surgeons with intermediate and novice skills. However, the results also indicate that the order in which the scenarios are presented, the design of the scenarios and tasks, and their appropriateness for the skill levels of the participants are all critical factors in improving the estimation ratio, and hence require thorough assessment in future research.
Affiliation(s)
- Gonca Gokce Menekse Dalveren: Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway; Department of Information Systems Engineering, Atılım University, Ankara, Turkey
19
Koskinen J, Bednarik R, Vrzakova H, Elomaa AP. Combined Gaze Metrics as Stress-Sensitive Indicators of Microsurgical Proficiency. Surg Innov 2020; 27:614-622. PMID: 32687734; PMCID: PMC7890692; DOI: 10.1177/1553350620942980.
Abstract
Background. Evaluation of microsurgical proficiency is conventionally subjective, time consuming, and unreliable. Eye movement-based metrics have been promising not only for detecting surgical expertise but also for identifying actual cognitive stress and workload. We investigated whether pupil dilations and blinks could be utilized in parallel to accurately classify microsurgical proficiency and its moderating features, especially task-related stress. Methods. Participants (n = 11) were divided into groups based on prior experience in microsurgery: novices (n = 6) with no experience and trained microsurgeons (n = 5). All participants conducted standardized suturing tasks with authentic instruments and a surgical microscope. A support vector machine classifier was used to classify features of microsurgical expertise based on percentage changes in pupil size. Results. A total of 109 successful sutures with 1090 segments were recorded. Classification of expertise from sutures achieved accuracies between 74.3% and 76.0%. Classification from individual segments based on these same features was not feasible. Conclusions. Combined gaze metrics are applicable for classifying surgical proficiency during a defined task. Pupil dilation is also sensitive to external stress factors; however, the usefulness of blinks is impaired by low blink rates. The results can be translated to surgical education to improve feedback and should be investigated individually in the context of actual performance and in real patient operations. Combined gaze metrics may ultimately help microsurgeons monitor their performance and workload in real time, which may lead to prevention of errors.
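The classification step described in this abstract (a support vector machine over pupil-based features) can be sketched as follows. The three summary features, their values, and the pipeline settings are illustrative assumptions, not the study's data or implementation:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Illustrative per-suture features summarizing percentage pupil-size change
# (mean, peak, variability), with novices assumed here to show larger
# dilations than trained microsurgeons. Synthetic values only.
novice  = rng.normal([8.0, 15.0, 4.0], 1.5, size=(60, 3))
trained = rng.normal([4.0,  8.0, 2.0], 1.5, size=(60, 3))
X = np.vstack([novice, trained])
y = np.array([0] * 60 + [1] * 60)

# Standardize features, then fit an RBF-kernel SVM; cross-validation gives
# an estimate of how well expertise is recoverable from the features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

With per-suture feature vectors this cleanly separated, accuracy is high; the study's reported 74-76% reflects the much noisier separation in real pupil data.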
Affiliation(s)
- Jani Koskinen: School of Computing, University of Eastern Finland, Finland
- Roman Bednarik: School of Computing, University of Eastern Finland, Finland
- Hana Vrzakova: Microsurgery Center, Kuopio University Hospital, Finland
- Antti-Pekka Elomaa: Microsurgery Center, Kuopio University Hospital, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Finland