1
Quéré B, Méneur L, Foulquier N, Pensec H, Devauchelle-Pensec V, Garrigues F, Saraux A. Can eye-tracking help to create a new method for X-ray analysis of rheumatoid arthritis patients, including joint segmentation and scoring methods? PLOS Digital Health 2024; 3:e0000616. PMID: 39374482; PMCID: PMC11458192; DOI: 10.1371/journal.pdig.0000616
Abstract
Reading hand and foot X-rays of rheumatoid arthritis patients is difficult and time-consuming. In research, physicians score hand and foot radiographs using the modified Sharp/van der Heijde (mvdH) method. The aim of this study was to create a new way of determining the mvdH score via eye tracking and to study its concordance with the classical mvdH score. We quantified the mvdH score from the reading time of a reader monitored via eye tracking (Tobii Pro Lab software) after training with the aid of a metronome. Radiographs were read twice by the trained eye-tracking reader and once by an experienced reference radiologist. A total of 440 joints were selected; 416 could be interpreted for erosion and 396 for joint space narrowing (JSN) when read by eye tracking (eye tracking could not measure the time spent when two pathological joints were too close together). Agreement between the eye-tracking mvdH score and the classical mvdH score, rated as yes (at least one erosion or JSN) versus no (none), was excellent for both erosions (kappa 0.97; 95% CI: 0.96-0.99) and JSN (kappa 0.95; 95% CI: 0.93-0.97). Agreement by class (0 to 10) remained excellent for both erosions (kappa 0.82; 95% CI: 0.79-0.85) and JSN (kappa 0.68; 95% CI: 0.65-0.71). To conclude, eye-tracking reading correlates strongly with the classical mvdH score and is useful for assessing severity, segmenting joints, and establishing a rapid lesion score.
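The agreement statistic reported above is Cohen's kappa. As a minimal, self-contained sketch (the per-joint ratings below are hypothetical, not the study's data), binary yes/no agreement between two readers could be quantified as:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each rater's marginal label frequencies
    pe = sum(ca[k] / n * cb[k] / n for k in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical per-joint reads: 1 = at least one erosion, 0 = none
eye_tracking_reader = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
radiologist         = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
print(cohen_kappa(eye_tracking_reader, radiologist))  # → 0.8
```

Nine of ten joints agree (observed agreement 0.9) while chance agreement from the marginals is 0.5, giving kappa 0.8.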
Affiliation(s)
- Baptiste Quéré
- Department of Rheumatology, CHU Brest, France
- Université de Bretagne Occidentale (Univ Brest), France
- INSERM (U1227), LabEx IGO, France
- Nathan Foulquier
- Université de Bretagne Occidentale (Univ Brest), France
- INSERM (U1227), LabEx IGO, France
- Medical Information Department, Health Datawarehouse, CHU Brest, France
- Hugo Pensec
- Department of Rheumatology, CHU Brest, France
- Valérie Devauchelle-Pensec
- Department of Rheumatology, CHU Brest, France
- Université de Bretagne Occidentale (Univ Brest), France
- INSERM (U1227), LabEx IGO, France
- Alain Saraux
- Department of Rheumatology, CHU Brest, France
- Université de Bretagne Occidentale (Univ Brest), France
- INSERM (U1227), LabEx IGO, France
2
Kannan Loganathan P, Garg A, McNicol R, Wall C, Pointon M, McMeekin P, Godfrey A, Wagner M, Roehr CC. Assessment of Visual Attention in Teams with or without Dedicated Team Leaders: A Neonatal Simulation-Based Pilot Randomised Cross-Over Trial Utilising Low-Cost Eye-Tracking Technology. Children (Basel) 2024; 11:1023. PMID: 39201956; PMCID: PMC11352304; DOI: 10.3390/children11081023
Abstract
BACKGROUND Eye-tracking technology can be used to study human factors during teamwork. OBJECTIVES This work aimed to compare the visual attention (VA) of a team member acting as both team leader and airway operator with that of a team member performing only the focused airway task in the presence of a dedicated team leader. It also aimed to report differences in team performance, behavioural skills, and workload between the two groups using validated tools. METHODS We conducted a simulation-based pilot randomised controlled study. Participants were volunteer paediatric trainees, nurse practitioners, and neonatal nurses. Three teams of four members each were formed. Each team participated in two identical neonatal resuscitation simulation scenarios in random order, once with and once without a dedicated team leader. Using a commercially available eye-tracking device, we analysed VA to (1) the manikin, (2) a colleague, and (3) the monitor. Only the trainee acting as airway operator wore eye-tracking glasses in both simulations. RESULTS In total, 6 simulation scenarios and 24 individual role allocations were analysed. Participants without a dedicated team leader had a greater number of total fixations on the manikin and monitor, though the difference was not significant. There were no significant differences in team performance, behavioural skills, or individual workload. Physical demand was reported as significantly higher by participants in the group without a team leader. During debriefing, all teams expressed a preference for having a dedicated team leader. CONCLUSION In our pilot study using low-cost technology, we could not demonstrate a difference in VA attributable to the presence of a dedicated team leader.
Affiliation(s)
- Prakash Kannan Loganathan
- Neonatal Intensive Care Unit, The James Cook University Hospital, Middlesbrough TS4 3BW, UK
- Clinical Academic Office, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Department of Physics, University of Durham, Durham DH1 3LE, UK
- Anip Garg
- Neonatal Intensive Care Unit, The James Cook University Hospital, Middlesbrough TS4 3BW, UK
- Robert McNicol
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Conor Wall
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Matthew Pointon
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Peter McMeekin
- Department of Nursing, Midwifery, and Health, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Alan Godfrey
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Michael Wagner
- Division of Neonatology, Pediatric Intensive Care and Neuropediatrics, Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, 1090 Vienna, Austria
- Charles Christoph Roehr
- National Perinatal Epidemiology Unit, Medical Sciences Division, Nuffield Department of Population Health, University of Oxford, Oxford OX1 2JD, UK
- Newborn Services, Southmead Hospital, North Bristol Trust, Bristol BS10 5NB, UK
- Faculty of Health Sciences, University of Bristol, Bristol BS8 1QU, UK
3
Specian Junior FC, Litchfield D, Sandars J, Cecilio-Fernandes D. Use of eye tracking in medical education. Medical Teacher 2024:1-8. PMID: 38382474; DOI: 10.1080/0142159X.2024.2316863
Abstract
Eye tracking has become increasingly applied in medical education research for studying the cognitive processes that occur during the performance of a task, such as image interpretation and surgical skills development. However, analysis and interpretation of the large amount of data obtained by eye tracking can be confusing. In this article, our intention is to clarify the analysis and interpretation of data obtained from eye tracking. Understanding the relationship between eye-tracking metrics (such as gaze, pupil size, and blink rate) and cognitive processes (such as visual attention, perception, memory, and cognitive workload) is essential. The importance of calibration and how the limitations of eye tracking can be overcome are also highlighted.
Affiliation(s)
- John Sandars
- Health Research Institute, Edge Hill University, Ormskirk, UK
- Dario Cecilio-Fernandes
- Department of Medical Psychology and Psychiatry, School of Medical Sciences, University of Campinas, Campinas, São Paulo, Brazil
4
Hu H, Li H, Wang B, Zhang M, Wu B, Wu X. Application of eye-tracking in nursing research: A scoping review. Nurs Open 2024; 11:e2108. PMID: 38391099; PMCID: PMC10847623; DOI: 10.1002/nop2.2108
Abstract
AIMS To map the themes and methods of nursing research using eye-tracking as a measurement, and to offer suggestions for future nursing research using eye-tracking. DESIGN We conducted a scoping review following the methodology outlined in the JBI Manual for Evidence Synthesis. METHODS Eligibility criteria were established based on Population (nurses or nursing students), Concept (utilizing eye-tracking as a research method), and Context (any setting). Articles were retrieved from the PubMed, Web of Science, Embase, CINAHL, APA PsycInfo, and Scopus databases, from database inception to November 17, 2023. The included studies were analysed using descriptive statistics and content analysis. RESULTS After duplicates were removed, 815 citations were identified from searches of electronic databases and other resources, and 66 ultimately met the inclusion criteria. Thirty-eight studies were conducted in a simulated environment. Five application domains were identified, and most of the studies (N = 50) were observational. The domains found in our review did not cover all topics of nursing research in the same depth. Additionally, 39 studies did not rely solely on eye-tracking data but integrated behavioural measures, scales/questionnaires, or other physiological data. CONCLUSIONS Eye-tracking is emerging as a significant research tool for uncovering visual behaviour, particularly in nursing research focused on education. This study summarizes the application and interpretation of eye-tracking data and recognizes its potential for advancing clinical nursing research and practice. To effectively harness eye-tracking for elucidating cognitive processes, future research should aim for a clearer grasp of the theoretical underpinnings of the research problems addressed and of the methodological choices. It is crucial to emphasize standardized reporting of eye-tracking methods and to ensure data quality. No Patient or Public Contribution.
Affiliation(s)
- Huiling Hu
- School of Nursing, Peking University, Beijing, China
- Huijun Li
- School of Nursing, Peking University, Beijing, China
- Department of Nursing, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, China
- Binlin Wang
- School of Nursing, Peking University, Beijing, China
- Bilin Wu
- School of Nursing, Peking University, Beijing, China
- Xue Wu
- School of Nursing, Peking University, Beijing, China
- Peking University Health Science Centre for Evidence-Based Nursing: A JBI Centre of Excellence, Beijing, China
5
Leonardo J, Dickerson A, Wu Q. A Comparison of Night Hazard Detection between Younger and Older Drivers under Driving Simulation and Real-World Conditions. Occup Ther Health Care 2024; 38:59-77. PMID: 38241185; DOI: 10.1080/07380577.2023.2232034
Abstract
Using eye-tracking technology, this study examined hazard detection at night. In a 2 (younger versus older) x 2 (simulator versus on-road) repeated-measures mixed design, 16 older adults and 17 younger adults drove their own vehicle and a driving simulator under nighttime conditions while wearing eye-tracking technology. Both driving conditions included three roadway hazards: pedestrians looking at their cell phones while poised to cross the roadway. Eye movements were recorded using the outcome measures of total fixation duration, number of fixations, and time to first fixation. Results showed that older adults detected hazards similarly to younger adults, especially during on-road performance. Night hazard detection was similar across driving conditions except for time to first fixation, which was faster on-road for both age groups. The results support the potential use of driving simulators as a proxy for on-road night driving in research and practice.
Affiliation(s)
- Anne Dickerson
- Department of Occupational Therapy, College of Allied Health Sciences, East Carolina University, Greenville, NC, USA
- Qiang Wu
- Department of Public Health, East Carolina University, Greenville, NC, USA
6
Armstrong MF, Orbelo DM, Wallerius KP, Lebechi CA, Lohse CM, Dey JK, Bayan SL. Visual Gaze Patterns in the Analysis of Glottic Lesions: Does Experience Increase Diagnostic Accuracy? Ann Otol Rhinol Laryngol 2024; 133:22-29. PMID: 37365768; DOI: 10.1177/00034894231179519
Abstract
OBJECTIVES The purpose of this study was to evaluate visual gaze patterns and the ability to correctly identify cancer among participants of different experience levels when viewing benign and malignant vocal cord lesions. METHODS Thirty-one participants were divided into groups based on level of experience. These included novice (medical students, PGY1-2 otolaryngology residents), intermediate (PGY3-5 otolaryngology residents, gastroenterology fellow), advanced practice providers (physician assistants, nurse practitioners, and speech language pathologists), and experts (board-certified otolaryngologists). Each participant was shown 7 images of vocal cord pathology including glottic cancer, infectious laryngitis, and granuloma and asked to determine the likelihood of cancer on a scale of certain, probable, possible, and unlikely. Eye tracking data were collected and used to identify the area of interest (AOI) that each participant fixated on first, fixated on the longest, and had the greatest number of fixations. RESULTS No significant differences were seen among groups when comparing AOI with first fixation, AOI with longest fixation, or AOI with most fixations. Novices were significantly more likely to rate a low likelihood of cancer when viewing infectious laryngitis compared to more experienced groups (P < .001). There was no difference in likelihood of cancer rating among groups for the remaining images. CONCLUSIONS There was no significant difference in gaze targets among participants of different experience levels evaluating vocal cord pathology. Symmetric appearance of vocal cord lesions may explain differences seen in likelihood of cancer rating among groups. Future studies with larger sample sizes will better elucidate gaze targets that lead to accurate diagnosis of vocal cord pathology.
Affiliation(s)
- Michael F Armstrong
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
- Diana M Orbelo
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
- Chiamaka A Lebechi
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
- Christine M Lohse
- Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA
- Jacob K Dey
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
- Semirra L Bayan
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, MN, USA
7
Newport RA, Liu S, Di Ieva A. Analyzing Eye Paths Using Fractals. Adv Neurobiol 2024; 36:827-848. PMID: 38468066; DOI: 10.1007/978-3-031-47606-8_42
Abstract
Visual patterns reflect the anatomical and cognitive processes underlying how we perceive information, influenced by stimulus characteristics and our own visual perception. These patterns are both spatially complex and display the self-similarity seen in fractal geometry at different scales, making them challenging to measure using the traditional topological dimensions of Euclidean geometry. However, methods for measuring eye gaze patterns using fractals have shown success in quantifying geometric complexity, matchability, and implementation in machine learning methods. This success is due to the inherent capabilities of fractals in reducing dimensionality using Hilbert curves, measuring temporal complexity using the Higuchi fractal dimension (HFD), and determining geometric complexity using the Minkowski-Bouligand dimension. Understanding the many applications of fractals in measuring and analyzing eye gaze patterns can extend the current growing body of knowledge by identifying markers tied to neurological pathology. Additionally, in future work, fractals can help define imaging modalities in eye-tracking diagnostics by exploiting their capability to acquire multiscale information, including complementary functions, structures, and dynamics.
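Of the measures named above, the Higuchi fractal dimension is straightforward to sketch. The following is an illustrative implementation only (the signals are synthetic, standing in for a 1-D gaze-coordinate trace; the choice of kmax and the test signals are assumptions, not taken from the chapter):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (e.g. a gaze-coordinate trace).

    For each lag k, average the normalised curve length L(k) over the k
    decimated sub-series; the HFD is the slope of log L(k) vs log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    mean_lengths = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            sub = x[m::k]                # sub-series x[m], x[m+k], x[m+2k], ...
            n_int = len(sub) - 1         # number of increments in this sub-series
            if n_int < 1:
                continue
            # curve length, normalised for the number of points actually used
            L = np.abs(np.diff(sub)).sum() * (N - 1) / (n_int * k) / k
            lengths.append(L)
        mean_lengths.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(mean_lengths), 1)
    return slope

rng = np.random.default_rng(0)
brownian = np.cumsum(rng.standard_normal(2000))  # Brownian path: HFD near 1.5
noise = rng.standard_normal(2000)                # white noise: HFD near 2
print(round(higuchi_fd(brownian), 1), round(higuchi_fd(noise), 1))
```

A smoother trace yields a dimension nearer 1, an erratic one nearer 2, which is what makes the measure useful for separating scanning styles.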
Affiliation(s)
- Robert Ahadizad Newport
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Human and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Human and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Human and Health Sciences, Macquarie University, Sydney, NSW, Australia
8
Cioffi GM, Pinilla-Echeverri N, Sheth T, Sibbald MG. Does artificial intelligence enhance physician interpretation of optical coherence tomography: insights from eye tracking. Front Cardiovasc Med 2023; 10:1283338. PMID: 38144364; PMCID: PMC10739524; DOI: 10.3389/fcvm.2023.1283338
Abstract
Background and objectives The adoption of optical coherence tomography (OCT) in percutaneous coronary intervention (PCI) is limited by the need for real-time image interpretation expertise. Artificial intelligence (AI)-assisted Ultreon™ 2.0 software could address this barrier. We used eye tracking to understand how these software changes impact viewing efficiency and accuracy. Methods Eighteen interventional cardiologists and fellows at McMaster University, Canada, were included in the study and categorized as experienced or inexperienced based on lifetime OCT use. They were tasked with reviewing OCT images from both the Ultreon™ 2.0 and AptiVue™ software platforms while their eye movements were recorded. Key metrics, such as time to first fixation on the area of interest, total task time, dwell time (time spent on the area of interest as a proportion of total task time), and interpretation accuracy, were evaluated using a mixed multivariate model. Results Physicians exhibited improved viewing efficiency with Ultreon™ 2.0, characterized by reduced time to first fixation (Ultreon™ 0.9 s vs. AptiVue™ 1.6 s, p = 0.007), reduced total task time (Ultreon™ 10.2 s vs. AptiVue™ 12.6 s, p = 0.006), and increased dwell time in the area of interest (Ultreon™ 58% vs. AptiVue™ 41%, p < 0.001). These effects were similar for experienced and inexperienced physicians. Accuracy of OCT image interpretation was preserved in both groups, with experienced physicians outperforming inexperienced physicians. Discussion Our study demonstrated that AI-enabled Ultreon™ 2.0 software can streamline the image interpretation process and improve viewing efficiency for both inexperienced and experienced physicians. Enhanced viewing efficiency implies reduced cognitive load, potentially lowering the barriers to OCT adoption in PCI decision-making.
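Dwell time as defined above (time on the area of interest as a proportion of total task time) and time to first fixation are simple to compute from a fixation log. A minimal sketch with invented timestamps (not the study's data):

```python
# Hypothetical fixation log: (start_s, end_s, fixation_inside_area_of_interest)
fixations = [(0.0, 0.9, False), (0.9, 3.1, True), (3.1, 4.0, False),
             (4.0, 7.5, True), (7.5, 10.2, False)]

task_time = fixations[-1][1] - fixations[0][0]  # total task time in seconds
aoi_time = sum(end - start for start, end, in_aoi in fixations if in_aoi)
time_to_first_aoi_fixation = next(start for start, _, in_aoi in fixations if in_aoi)

dwell = aoi_time / task_time  # proportion of task time on the area of interest
print(f"dwell {dwell:.0%}, first AOI fixation at {time_to_first_aoi_fixation} s")
```

Here 5.7 of 10.2 seconds fall inside the area of interest, so the dwell time is about 56%, with the first AOI fixation at 0.9 s.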
Affiliation(s)
- Matthew Gary Sibbald
- Division of Cardiology, Hamilton General Hospital, Hamilton Health Sciences, McMaster University, Hamilton, ON, Canada
9
Brevik C, Miller D, Kendall J, Michael S. Nontechnically speaking: A review of tools and methods in the teaching and assessment of nontechnical skills in emergency medicine training. AEM Educ Train 2023; 7:e10911. PMID: 37974662; PMCID: PMC10641174; DOI: 10.1002/aet2.10911
Affiliation(s)
- Cody Brevik
- Department of Emergency Medicine, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
- Danielle Miller
- Department of Emergency Medicine, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
- John Kendall
- Department of Emergency Medicine, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
- Sarah Michael
- Department of Emergency Medicine, University of Colorado School of Medicine, Anschutz Medical Campus, Aurora, Colorado, USA
10
Bapna T, Valles J, Leng S, Pacilli M, Nataraja RM. Eye-tracking in surgery: a systematic review. ANZ J Surg 2023; 93:2600-2608. PMID: 37668263; DOI: 10.1111/ans.18686
Abstract
BACKGROUND Surgery is constantly evolving with the assistance of rapidly developing novel technology. Eye-tracking devices provide opportunities to monitor the acquisition of surgical skills, gain insight into performance, and enhance surgical practice. The aim of this review was to consolidate the available evidence for the use of eye-tracking in the surgical disciplines. METHODS A systematic literature review was conducted in accordance with PRISMA guidelines. A search of Ovid MEDLINE, EMBASE, the Cochrane Library, Scopus, and ScienceDirect was conducted from January 2000 to December 2022. Studies involving eye-tracking in surgical training, assessment, and technical innovation were included in the review. Non-surgical procedures, animal studies, and studies not involving surgical participants were excluded. RESULTS The search returned a total of 12 054 articles, 80 of which were included in the final analysis and review. Seventeen studies involved eye-tracking in surgical training, 48 involved surgical assessment, and 20 focussed on technical aspects of the technology. Twenty-six different eye-tracking devices were used in the included studies. Metrics such as the number of fixations, duration of fixations, dwell time, and cognitive workload were able to differentiate between novice and expert performance. Eight studies demonstrated the effectiveness of gaze-training for improving surgical skill. CONCLUSION The current literature shows a broad range of utility for a variety of eye-tracking devices in surgery. There remains a lack of standardization for metric parameters and gaze-analysis techniques. Further research is required to validate its use, establish reliability, and create uniform practices.
Affiliation(s)
- Tanay Bapna
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- John Valles
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Samantha Leng
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Maurizio Pacilli
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
- Ramesh Mark Nataraja
- Department of Paediatric Surgery & Surgical Simulation, Monash Children's Hospital, Melbourne, Victoria, Australia
- Departments of Paediatrics & Surgery, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Victoria, Australia
11
Lee M, Desy J, Tonelli AC, Walsh MH, Ma IWY. The association of attentional foci and image interpretation accuracy in novices interpreting lung ultrasound images: an eye-tracking study. Ultrasound J 2023; 15:36. PMID: 37697149; PMCID: PMC10495286; DOI: 10.1186/s13089-023-00333-6
Abstract
It is unclear where learners focus their attention when interpreting point-of-care ultrasound (POCUS) images. This study sought to determine the relationship between attentional-focus metrics and lung ultrasound (LUS) interpretation accuracy in novice medical learners. A convenience sample of 14 medical residents with minimal LUS training viewed 8 LUS cineloops while their eye-tracking patterns were recorded. Areas of interest (AOI) for each cineloop were mapped independently by two experts and externally validated by a third expert. The primary outcome of interest was image interpretation accuracy, presented as a percentage. Eye tracking captured 10 of the 14 participants (71%) who completed the study. Participants spent a mean total of 8 min 44 s ± standard deviation (SD) 3 min 8 s on the cineloops, with 1 min 14 s ± SD 34 s spent fixated within the AOI. The mean accuracy score was 54.0% ± SD 16.8%. In regression analyses, fixation duration within the AOI was positively associated with accuracy (beta-coefficient 28.9, standard error (SE) 6.42, P = 0.002). Total time spent viewing the videos was also significantly associated with accuracy (beta-coefficient 5.08, SE 0.59, P < 0.0001). For each additional minute spent fixating within the AOI, accuracy scores increased by 28.9%; for each additional minute spent viewing the video, accuracy scores increased by only 5.1%. Interpretation accuracy is strongly associated with time spent fixating within the AOI. Image interpretation training should consider targeting AOIs.
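The beta-coefficients above come from a simple linear regression of accuracy on viewing time. A sketch of that kind of fit on invented data (the minutes and accuracy values below are illustrative, not the study's):

```python
import numpy as np

# Hypothetical learners: minutes fixating inside the AOI vs. accuracy (%)
aoi_minutes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.3])
accuracy = np.array([38.0, 45.0, 50.0, 57.0, 62.0, 71.0, 75.0, 83.0])

# Ordinary least squares: accuracy ~ slope * minutes + intercept
slope, intercept = np.polyfit(aoi_minutes, accuracy, 1)
print(f"each extra minute in the AOI adds ~{slope:.1f} accuracy points")
```

The slope plays the role of the beta-coefficient: the expected change in accuracy per additional minute of fixation.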
Affiliation(s)
- Matthew Lee
- Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Janeve Desy
- Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Ana Claudia Tonelli
- UNISINOS University, Hospital de Clinicas de Porto Alegre, Porto Alegre, Brazil
- Michael H Walsh
- Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Irene W Y Ma
- Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- W21C, University of Calgary, Calgary, AB, Canada
12
Zafar A, Martin Calderon C, Yeboah AM, Dalton K, Irving E, Niechwiej-Szwedo E. Investigation of Camera-Free Eye-Tracking Glasses Compared to a Video-Based System. Sensors (Basel) 2023; 23:7753. PMID: 37765810; PMCID: PMC10535734; DOI: 10.3390/s23187753
Abstract
Technological advances in eye-tracking have resulted in lightweight, portable solutions capable of capturing eye movements beyond laboratory settings. Eye-tracking devices have typically relied on heavier, video-based systems to detect pupil and corneal reflections. Advances in mobile eye-tracking technology could facilitate research and its application in ecological settings, allowing more traditional laboratory research methods to be modified and transferred to real-world scenarios. One recent technology, the AdHawk MindLink, introduced a novel camera-free system embedded in typical eyeglass frames. This paper evaluates the AdHawk MindLink by comparing its eye-tracking recordings with a research "gold standard", the EyeLink II. By concurrently capturing data from both eyes, we compare the capability of each eye tracker to quantify metrics from fixation, saccade, and smooth pursuit tasks (typical elements in eye movement research) across a sample of 13 adults. The MindLink system was capable of capturing fixation stability within a radius of less than 0.5°, estimating horizontal saccade amplitudes with an accuracy of 0.04° ± 2.3°, vertical saccade amplitudes with an accuracy of 0.32° ± 2.3°, and smooth pursuit speeds with an accuracy of 0.5 to 3°/s, depending on the pursuit speed. While the MindLink's performance in measuring fixation stability, saccade amplitude, and smooth pursuit eye movements was slightly inferior to that of the video-based system, it provides sufficient gaze-tracking capability for dynamic settings and experiments.
Affiliation(s)
- Abdullah Zafar
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Claudia Martin Calderon
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Anne Marie Yeboah
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Kristine Dalton
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Elizabeth Irving
- School of Optometry & Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ewa Niechwiej-Szwedo
- Department of Kinesiology & Health Sciences, University of Waterloo, Waterloo, ON N2L 3G1, Canada
13
Galuret S, Vallée N, Tronchot A, Thomazeau H, Jannin P, Huaulmé A. Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept. Int J Comput Assist Radiol Surg 2023; 18:1697-1705. PMID: 37286642; DOI: 10.1007/s11548-023-02961-8
Abstract
PURPOSE Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that gaze distribution in the environment is correlated with the simulator's technical skills assessment. METHODS We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores. RESULTS The neural network segmented all AoI with a mean Intersection over Union above 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031). CONCLUSION Our findings suggest that visual behavior should be quantified when assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, in a way that complements existing metrics.
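The mean Intersection over Union used to validate the AoI segmentation is a simple overlap ratio between predicted and reference masks; a minimal sketch on flat binary masks (the masks below are illustrative, not U-net output):

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 1, 0, 0, 1]   # hypothetical predicted AoI mask
truth = [1, 1, 0, 0, 1, 1]   # hypothetical reference mask
score = iou(pred, truth)     # 3 shared pixels / 5 covered pixels
```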
Affiliation(s)
- Soline Galuret
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Nicolas Vallée
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Alexandre Tronchot
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Hervé Thomazeau
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Orthopedics and Trauma Department, Rennes University Hospital, 35000, Rennes, France
- Pierre Jannin
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
- Arnaud Huaulmé
- LTSI - UMR 1099, Univ. Rennes, Inserm, 35000, Rennes, France
14
Tzamaras HM, Wu HL, Moore JZ, Miller SR. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. PROCEEDINGS OF THE HUMAN FACTORS AND ERGONOMICS SOCIETY ... ANNUAL MEETING. HUMAN FACTORS AND ERGONOMICS SOCIETY. ANNUAL MEETING 2023; 67:953-958. [PMID: 38450120 PMCID: PMC10914345 DOI: 10.1177/21695067231192929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Abstract
Eye-tracking is a valuable research method for understanding human cognition and is readily employed in human factors research, including human factors in healthcare. While wearable mobile eye trackers have become more readily available, there are no existing analysis methods for accurately and efficiently mapping dynamic gaze data onto dynamic areas of interest (AOIs), which limits their utility in human factors research. The purpose of this paper was to outline a proposed framework for automating the analysis of dynamic areas of interest by integrating computer vision and machine learning (CVML). The framework is then tested on a use case of a central venous catheterization trainer with six dynamic AOIs. While the results of the validity trial indicate there is room for improvement in the proposed CVML method, the framework provides direction and guidance for human factors researchers using dynamic AOIs.
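Once a CVML detector supplies each frame's AOI bounding boxes, mapping gaze samples onto dynamic AOIs reduces to a per-frame point-in-box test; a minimal sketch with a hypothetical AOI name and made-up coordinates:

```python
def gaze_in_aoi(gaze, boxes):
    """For each frame, return the name of the AOI containing the gaze point, or None.

    gaze:  list of (x, y) gaze coordinates, one per frame
    boxes: list of {name: (x_min, y_min, x_max, y_max)} per frame, e.g. from a
           computer-vision tracker (the coordinates here are invented)
    """
    hits = []
    for (x, y), frame_boxes in zip(gaze, boxes):
        hit = None
        for name, (x0, y0, x1, y1) in frame_boxes.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        hits.append(hit)
    return hits

gaze = [(50, 50), (210, 80), (400, 400)]
boxes = [{"needle": (0, 0, 100, 100)},     # AOI moves between frames
         {"needle": (150, 30, 260, 120)},
         {"needle": (0, 0, 100, 100)}]
labels = gaze_in_aoi(gaze, boxes)
```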
Affiliation(s)
- Hang-Ling Wu
- Pennsylvania State University Mechanical Engineering
- Jason Z Moore
- Pennsylvania State University Mechanical Engineering
15
Hafner C, Scharner V, Hermann M, Metelka P, Hurch B, Klaus DA, Schaubmayr W, Wagner M, Gleiss A, Willschke H, Hamp T. Eye-tracking during simulation-based echocardiography: a feasibility study. BMC MEDICAL EDUCATION 2023; 23:490. [PMID: 37393288 DOI: 10.1186/s12909-023-04458-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 06/15/2023] [Indexed: 07/03/2023]
Abstract
INTRODUCTION Due to technical progress, point-of-care ultrasound (POCUS) is increasingly used in critical care medicine. However, optimal training strategies and support for novices have not been thoroughly researched so far. Eye-tracking, which offers insights into the gaze behavior of experts, may be a useful tool for better understanding. The aim of this study was to investigate the technical feasibility and usability of eye-tracking during echocardiography and to analyze differences in gaze patterns between experts and non-experts. METHODS Nine experts in echocardiography and six non-experts were equipped with eye-tracking glasses (Tobii, Stockholm, Sweden) while performing six medical cases on a simulator. For each view, case-specific areas of interest (AOI) were defined by the first three experts depending on the underlying pathology. Technical feasibility, participants' subjective experience of the usability of the eye-tracking glasses, and differences in relative dwell time (focus) inside the AOI between six experts and six non-experts were evaluated. RESULTS Technical feasibility of eye-tracking during echocardiography was achieved, with 96% accordance between the visual area orally described by participants and the area marked by the glasses. Experts had a longer relative dwell time in the case-specific AOI (50.6% versus 38.4%, p = 0.072) and performed ultrasound examinations faster (138 s versus 227 s, p = 0.068). Furthermore, experts fixated on the AOI earlier (5 s versus 10 s, p = 0.033). CONCLUSION This feasibility study demonstrates that eye-tracking can be used to analyze the gaze patterns of experts and non-experts during POCUS. Although the experts in this study had longer fixation times in the defined AOIs than non-experts, further studies are needed to investigate whether eye-tracking could improve the teaching of POCUS.
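Relative dwell time, the study's main comparison metric, is the share of total fixation time falling inside an AOI; a minimal sketch with invented fixation labels and durations:

```python
def relative_dwell(fixations, aoi):
    """Percentage of total fixation time spent inside one AOI.

    fixations: list of (aoi_label, duration_ms) pairs; the labels and
    durations below are illustrative, not study data.
    """
    total = sum(d for _, d in fixations)
    inside = sum(d for label, d in fixations if label == aoi)
    return 100.0 * inside / total

fix = [("mitral_valve", 300), ("background", 200),
       ("mitral_valve", 250), ("probe", 250)]
pct = relative_dwell(fix, "mitral_valve")  # 550 of 1000 ms
```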
Affiliation(s)
- Christina Hafner
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Vincenz Scharner
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Martina Hermann
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Philipp Metelka
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Benedikt Hurch
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Daniel Alexander Klaus
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Wolfgang Schaubmayr
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Michael Wagner
- Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Andreas Gleiss
- Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Harald Willschke
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Thomas Hamp
- Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Emergency Medical Service Vienna, Radetzkystraße 1, 1030, Vienna, Austria
16
Zheng Y, Liu C, Lai NYG, Wang Q, Xia Q, Sun X, Zhang S. Current development of biosensing technologies towards diagnosis of mental diseases. Front Bioeng Biotechnol 2023; 11:1190211. [PMID: 37456720 PMCID: PMC10342212 DOI: 10.3389/fbioe.2023.1190211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 06/16/2023] [Indexed: 07/18/2023] Open
Abstract
A biosensor is an instrument that converts the concentration of biomarkers into electrical signals for detection. Biosensing technology is non-invasive, lightweight, automated, and biocompatible in nature. These features have significantly advanced medical diagnosis in recent years, particularly the diagnosis of mental disorders. The traditional method of diagnosing mental disorders is time-intensive, expensive, and subject to individual interpretation: it combines the clinical experience of the psychiatrist with the physical symptoms and self-reported scales provided by the patient. Biosensors, on the other hand, can objectively and continually detect disease states by monitoring abnormal biomarker data. Hence, this paper reviews the application of biosensors in the detection of mental diseases, dividing the diagnostic methods into sub-themes of biosensors based on vision, EEG signals, EOG signals, and multi-signal approaches. Prospective applications in clinical diagnosis are also discussed.
Affiliation(s)
- Yuhan Zheng
- Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
- Ningbo Research Center, Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Robotics Institute, Ningbo University of Technology, Ningbo, China
- Chen Liu
- Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
- Ningbo Research Center, Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Nai Yeen Gavin Lai
- Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
- Qingfeng Wang
- Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo, China
- Qinghua Xia
- Ningbo Research Center, Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Xu Sun
- Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
- Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo, China
- Sheng Zhang
- Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
- Ningbo Research Center, Ningbo Innovation Center, Zhejiang University, Ningbo, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou, China
17
Sandseter EBH, Sando OJ, Lorås H, Kleppe R, Storli L, Brussoni M, Bundy A, Schwebel DC, Ball DJ, Haga M, Little H. Virtual Risk Management-Exploring Effects of Childhood Risk Experiences through Innovative Methods (ViRMa) for Primary School Children in Norway: Study Protocol for the ViRMa Project. JMIR Res Protoc 2023; 12:e45857. [PMID: 37285210 DOI: 10.2196/45857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 06/08/2023] Open
Abstract
BACKGROUND Research indicates that risky play benefits children's risk assessment and risk management skills and offers several positive health effects such as resilience, social skills, physical activity, well-being, and involvement. There are also indications that the lack of risky play and autonomy increases the likelihood of anxiety. Despite its well-documented importance, and the willingness of children to engage in risky play, this type of play is increasingly restricted. Assessing long-term effects of risky play has been problematic because of ethical issues with conducting studies designed to allow or encourage children to take physical risks with the potential of injury. OBJECTIVE The Virtual Risk Management project aims to examine children's development of risk management skills through risky play. To accomplish this, the project aims to use and validate newly developed and ethically appropriate data collection tools such as virtual reality, eye tracking, and motion capturing, and to provide insight into how children assess and handle risk situations and how children's past risky play experiences are associated with their risk management. METHODS We will recruit 500 children aged 7-10 years and their parents from primary schools in Norway. Children's risk management will be measured through data concerning their risk assessment, risk willingness, and risk handling when completing a number of tasks in 3 categories of virtual reality scenarios: street crossing, river crossing, and playing on playground equipment. The children will move around physically in a large space while conducting the tasks and wear 17 motion-capturing sensors that will measure their movements to analyze motor skills. We will also collect data on children's perceived motor competence and their sensation-seeking personality. To obtain data on children's risk experiences, parents will complete questionnaires on their parental style and risk tolerance, as well as information about the child's practical risk experience. RESULTS Four schools have been recruited to participate in data collection. The recruitment of children and parents for this study started in December 2022, and as of April 2023, a total of 433 parents have consented for their children to participate. CONCLUSIONS The Virtual Risk Management project will increase our understanding of how children's characteristics, upbringing, and previous experiences influence their learning and ability to handle challenges. Through development and use of cutting-edge technology and previously developed measures to describe aspects of the children's past experiences, this project addresses crucial topics related to children's health and development. Such knowledge may guide pedagogical questions and the development of educational, injury prevention, and other health-related interventions, and reveal essential areas for focus in future studies. It may also impact how risk is addressed in crucial societal institutions such as the family, early childhood education, and schools. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/45857.
Affiliation(s)
- Ellen Beate Hansen Sandseter
- Department of Physical Education and Health, Queen Maud University College of Early Childhood Education, Trondheim, Norway
- Ole Johan Sando
- Department of Physical Education and Health, Queen Maud University College of Early Childhood Education, Trondheim, Norway
- Håvard Lorås
- Department of Physical Education and Health, Queen Maud University College of Early Childhood Education, Trondheim, Norway
- Department of Teacher Education, Norwegian University of Science and Technology, Trondheim, Norway
- Rasmus Kleppe
- Department of Physical Education and Health, Queen Maud University College of Early Childhood Education, Trondheim, Norway
- Lise Storli
- Department of Physical Education and Health, Queen Maud University College of Early Childhood Education, Trondheim, Norway
- Mariana Brussoni
- Department of Pediatrics, Human Early Learning Partnership, School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada
- British Columbia Children's Hospital Research Institute, Vancouver, BC, Canada
- Anita Bundy
- Department of Occupational Therapy, Colorado State University, Fort Collins, CO, United States
- David C Schwebel
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, United States
- David J Ball
- Department of Science and Technology, Centre for Decision Analysis and Risk Management, Middlesex University, London, United Kingdom
- Monika Haga
- Department of Teacher Education, Norwegian University of Science and Technology, Trondheim, Norway
- Helen Little
- School of Education, Macquarie University, Sydney, Australia
18
Brunyé TT, Drew T, Kerr KF, Shucard H, Powell K, Weaver DL, Elmore JG. Zoom behavior during visual search modulates pupil diameter and reflects adaptive control states. PLoS One 2023; 18:e0282616. [PMID: 36893083 PMCID: PMC9997932 DOI: 10.1371/journal.pone.0282616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 02/19/2023] [Indexed: 03/10/2023] Open
Abstract
Adaptive gain theory proposes that the dynamic shifts between exploration and exploitation control states are modulated by the locus coeruleus-norepinephrine system and reflected in tonic and phasic pupil diameter. This study tested predictions of this theory in the context of a societally important visual search task: the review and interpretation of digital whole slide images of breast biopsies by physicians (pathologists). As these medical images are searched, pathologists encounter difficult visual features and intermittently zoom in to examine features of interest. We propose that tonic and phasic pupil diameter changes during image review may correspond to perceived difficulty and dynamic shifts between exploration and exploitation control states. To examine this possibility, we monitored visual search behavior and tonic and phasic pupil diameter while pathologists (N = 89) interpreted 14 digital images of breast biopsy tissue (1,246 total images reviewed). After viewing the images, pathologists provided a diagnosis and rated the level of difficulty of the image. Analyses of tonic pupil diameter examined whether pupil dilation was associated with pathologists' difficulty ratings, diagnostic accuracy, and experience level. To examine phasic pupil diameter, we parsed continuous visual search data into discrete zoom-in and zoom-out events, including shifts from low to high magnification (e.g., 1× to 10×) and the reverse. Analyses examined whether zoom-in and zoom-out events were associated with phasic pupil diameter change. Results demonstrated that tonic pupil diameter was associated with image difficulty ratings and zoom level, and phasic pupil diameter showed constriction upon zoom-in events, and dilation immediately preceding a zoom-out event. Results are interpreted in the context of adaptive gain theory, information gain theory, and the monitoring and assessment of physicians' diagnostic interpretive processes.
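A common way to quantify a phasic change around a discrete event such as a zoom-in is baseline correction: the mean pupil diameter in a short window after the event minus the mean in a window just before it. A sketch with a made-up pupil trace; the window sizes are arbitrary illustrative choices, not the paper's:

```python
def phasic_change(pupil, event_idx, baseline=3, window=3):
    """Baseline-corrected pupil change around an event index.

    pupil: per-sample pupil diameters (arbitrary units; the trace below is
    invented). Returns mean(post-event window) - mean(pre-event baseline):
    negative values indicate constriction, positive values dilation.
    """
    pre = pupil[event_idx - baseline:event_idx]
    post = pupil[event_idx:event_idx + window]
    return sum(post) / len(post) - sum(pre) / len(pre)

trace = [4.0, 4.0, 4.0, 3.8, 3.7, 3.6]  # constriction after a zoom-in at index 3
delta = phasic_change(trace, 3)
```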
Affiliation(s)
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, United States of America
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, United States of America
- Kathleen F. Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, United States of America
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, United States of America
- Kate Powell
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, United States of America
- Donald L. Weaver
- Department of Pathology, University of Vermont and Vermont Cancer Center, Burlington, VT, United States of America
- Joann G. Elmore
- David Geffen School of Medicine, Department of Medicine, University of California, Los Angeles, CA, United States of America
19
Berges AJ, Vedula SS, Chara A, Hager GD, Ishii M, Malpani A. Eye Tracking and Motion Data Predict Endoscopic Sinus Surgery Skill. Laryngoscope 2023; 133:500-505. [PMID: 35357011 PMCID: PMC9825109 DOI: 10.1002/lary.30121] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 03/10/2022] [Accepted: 03/14/2022] [Indexed: 01/11/2023]
Abstract
OBJECTIVE Endoscopic surgery has a considerable learning curve due to dissociation of the visual-motor axes, coupled with decreased tactile feedback and mobility. In particular, endoscopic sinus surgery (ESS) lacks objective skill assessment metrics to provide specific feedback to trainees. This study aims to identify summary metrics from eye tracking, endoscope motion, and tool motion to objectively assess surgeons' ESS skill. METHODS In this cross-sectional study, expert and novice surgeons performed ESS tasks of inserting an endoscope and tool into a cadaveric nose, touching an anatomical landmark, and withdrawing the endoscope and tool out of the nose. Tool and endoscope motion were collected using an electromagnetic tracker, and eye gaze was tracked using an infrared camera. Three expert surgeons provided binary assessments of low/high skill. 20 summary statistics were calculated for eye, tool, and endoscope motion and used in logistic regression models to predict surgical skill. RESULTS 14 metrics (10 eye gaze, 2 tool motion, and 2 endoscope motion) were significantly different between surgeons with low and high skill. Models to predict skill for 6/9 ESS tasks had an AUC >0.95. A combined model of all tasks (AUC 0.95, PPV 0.93, NPV 0.89) included metrics from eye tracking data and endoscope motion, indicating that these metrics are transferable across tasks. CONCLUSIONS Eye gaze, endoscope, and tool motion data can provide an objective and accurate measurement of ESS surgical performance. Incorporation of these algorithmic techniques intraoperatively could allow for automated skill assessment for trainees learning endoscopic surgery. LEVEL OF EVIDENCE N/A Laryngoscope, 133:500-505, 2023.
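The AUC figures reported here measure how often a model ranks a high-skill surgeon above a low-skill one; the empirical version can be computed directly from predicted scores. A sketch with invented model outputs, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: the probability that a randomly chosen high-skill score
    exceeds a randomly chosen low-skill score, counting ties as half."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

high = [0.9, 0.8, 0.75, 0.6]  # hypothetical scores for high-skill surgeons
low  = [0.7, 0.4, 0.3]        # hypothetical scores for low-skill surgeons
value = auc(high, low)        # 11 winning pairs out of 12
```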
Affiliation(s)
- Masaru Ishii
- Johns Hopkins Department of Otolaryngology–Head and Neck Surgery
20
Wallerius KP, Bayan SL, Armstrong MF, Lebechi CA, Dey JK, Orbelo DM. Visual Interpretation of Vocal Fold Paralysis in Flexible Laryngoscopy Using Eye Tracking Technology. J Voice 2023:S0892-1997(23)00091-7. [PMID: 37005128 DOI: 10.1016/j.jvoice.2023.02.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 02/24/2023] [Accepted: 02/24/2023] [Indexed: 04/03/2023]
Abstract
OBJECTIVES Interpretation of laryngoscopy is an important diagnostic skill in otolaryngology. There is, however, limited understanding of the specific visual strategies used while assessing flexible laryngoscopy video. Eye-tracking technology allows for objective study of eye movements during dynamic tasks. The purpose of the present study was to explore visual gaze strategies during laryngoscopy interpretation of unilateral vocal fold paralysis (UVFP) across clinician experience from novice to expert. METHODS Thirty individuals were shown five flexible laryngoscopy videos, each 10 seconds long. After viewing each video, participants reported their impressions of "left vocal fold paralysis," "right vocal fold paralysis," or "no vocal fold paralysis." Eye tracking data were collected and analyzed for duration of fixation and number of fixations on select areas of interest (AOI). Diagnostic accuracy and visual gaze patterns were compared between novice, experienced, and expert groups. RESULTS Diagnostic accuracy among learners in the novice group was significantly lower than those in the more experienced groups (P = 0.04). All groups demonstrated similar visual gaze patterns when viewing the video with normal bilateral vocal fold mobility, spending the greatest percentage of time viewing the trachea. There were differences among groups when viewing the videos of left or right VFP, but the trachea was always in the top three structures for greatest fixation duration and highest number of fixations. CONCLUSIONS Eye-tracking is a novel tool in the setting of laryngoscopy interpretation. With further study it has the potential to be useful for the training of otolaryngology learners to improve diagnostic skills.
Affiliation(s)
- Katherine P Wallerius
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Semirra L Bayan
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Michael F Armstrong
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Chiamaka A Lebechi
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Jacob K Dey
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
- Diana M Orbelo
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Rochester, Minnesota
21
Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. [PMID: 36123548 PMCID: PMC11062149 DOI: 10.1007/s00464-022-09582-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 08/25/2022] [Indexed: 10/14/2022]
Abstract
INTRODUCTION In laparoscopic surgery, looking at the target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills. METHODS Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while recording their gaze on the monitor and tool activities inside the trainer box. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics was used to group the trials into different clusters/proficiency levels, and ANOVAs were conducted to test differences between proficiency levels. A Random Forest model was trained to study metric importance at predicting proficiency levels. RESULTS Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences between completion times (F(2, 488) = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors. CONCLUSION Scene-dependent gaze metrics distinguished trainee skill levels more finely than the expert-versus-novice contrast suggested in the literature. Further, feedforward gaze metrics appeared to be more important than feedback ones at predicting proficiency.
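The clustering step that groups trials into proficiency levels can be illustrated with a tiny one-dimensional k-means on a single metric; the completion times and initial centers below are invented for illustration, not taken from the study:

```python
def kmeans_1d(values, centers, iters=20):
    """Tiny 1-D k-means: assign each value to its nearest center, then move
    each center to the mean of its members. Returns (centers, labels)."""
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
                  for v in values]
        for k in range(len(centers)):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

# Hypothetical completion times (s): fast, middling, and slow trials
times = [40, 42, 45, 90, 95, 100, 180, 190, 200]
centers, labels = kmeans_1d(times, [50.0, 100.0, 190.0])
```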
Affiliation(s)
- Chaitanya S Kulkarni
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Shiyu Deng
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Tianzi Wang
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Laura E Barnes
- Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
- Shawn D Safford
- Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
- Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
22
Laubrock J, Krutz A, Nübel J, Spethmann S. Gaze patterns reflect and predict expertise in dynamic echocardiographic imaging. J Med Imaging (Bellingham) 2023; 10:S11906. [PMID: 36968293 PMCID: PMC10031643 DOI: 10.1117/1.jmi.10.s1.s11906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 03/01/2023] [Indexed: 03/24/2023] Open
Abstract
Purpose Echocardiography is the most important modality in cardiac imaging. Rapid valid visual assessment is a critical skill for image interpretation. However, it is unclear how skilled viewers assess echocardiographic images. Therefore, guidance and implicit advice are needed for learners to achieve valid image interpretation. Approach Using a signal detection approach, we compared 15 certified experts with 15 medical students in their diagnostic decision-making and viewing behavior. To quantify attention allocation, we recorded eye movements while viewing dynamic echocardiographic imaging loops of patients with reduced ejection fraction and healthy controls. Participants evaluated left ventricular ejection fraction and image quality (as diagnostic and visual control tasks, respectively). Results Experts were much better at discriminating between patients and healthy controls (d′ of 2.58, versus 0.98 for novices). Eye tracking revealed that experts fixated diagnostically relevant areas earlier and more often, whereas novices were distracted by visually salient task-irrelevant stimuli. We show that expertise status can be almost perfectly classified either based on judgments or purely on eye movements and that an expertise score derived from viewing behavior predicts diagnostic quality. Conclusions Judgments and eye tracking revealed significant differences between echocardiography experts and novices that can be used to derive numerical expertise scores. Experts have implicitly learned to ignore the salient motion cue presented by the mitral valve and to focus on the diagnostically more relevant left ventricle. These findings have implications for echocardiography training, objective characterization of echocardiographic expertise, and the design of user-friendly interfaces for echocardiography.
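The discrimination measure d′ quoted above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with invented response counts (the +0.5 log-linear correction guarding against rates of exactly 0 or 1 is one standard choice, not necessarily the paper's):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical confusion counts for an expert and a novice reader
expert_d = d_prime(hits=18, misses=2, false_alarms=2, correct_rejections=18)
novice_d = d_prime(hits=12, misses=8, false_alarms=8, correct_rejections=12)
```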
Affiliation(s)
- Jochen Laubrock
- University of Potsdam, Cognitive Science, Department of Psychology, Potsdam, Germany
- Alexander Krutz
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Jonathan Nübel
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Sebastian Spethmann
- Deutsches Herzzentrum der Charité, Department of Cardiology, Angiology, and Intensive Care Medicine, Berlin, Germany
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| |
23
Clear Aligners and Smart Eye Tracking Technology as a New Communication Strategy between Ethical and Legal Issues. Life (Basel) 2023; 13:life13020297. [PMID: 36836654 PMCID: PMC9967915 DOI: 10.3390/life13020297]
Abstract
Smart eye-tracking technology (SEET), which determines visual attention using smartphones, can be used to assess the aesthetic perception of different types of clear aligners, along with its value as a communication and comprehension tool and the ethical and legal concerns it entails. One hundred subjects (50 F, 50 M; age range 15-70) were equally distributed into non-orthodontic (A) and orthodontic (B) groups. A smartphone-based SEET app assessed their knowledge of and opinions on aligners. Subjects evaluated images of smiles not wearing aligners, with/without attachments and with straight/scalloped gingival margins, as a guided calibration step; these formed the control image group. Subsequently, the subjects rated the same smiles, this time wearing aligners (the experimental image group). Questionnaire data and average values for each group of patients, together with image data on fixation times and overall star scores, were analyzed using chi-square, t-test, Mann-Whitney U, Spearman's rho, and Wilcoxon tests (p < 0.05). One-way ANOVA and related post-hoc tests were also applied. Orthodontic patients were found to be better informed than non-orthodontic patients. Aesthetic perception could be swayed by several factors. Attachments scored lower in aesthetic evaluation. Lips distracted attention from attachments and improved evaluations. Attachment-free aligners were rated better overall. A more thorough understanding of the opinions, expectations and aesthetic perception of aligners can improve communication with patients. Mobile SEET is remarkably promising, although it requires careful medicolegal risk-benefit assessment for responsible and professional use.
24
Wilkie R, Roze des Ordons AL, Cheng A, Lin Y. Exploring facilitator gaze patterns during difficult debriefing through eye-tracking analysis: a pilot study. Simul Healthc 2022. [DOI: 10.54531/pvrt9874]
Abstract
Managing difficult debriefing can be challenging for simulation facilitators. Debriefers may use eye contact as a strategy to build and maintain psychological safety during debriefing. Visual dominance ratio (VDR), a measure of social power, is defined as the percentage of time making eye contact while speaking divided by the percentage of time making eye contact while listening. Little is known about eye gaze patterns during difficult debriefings.
To demonstrate the feasibility of examining eye gaze patterns (i.e. VDR) among junior and senior facilitators during difficult debriefing.
We recruited 10 trained simulation facilitators (four senior and six junior) and observed them debriefing two actors. The actors were scripted to play learners who were engaged in the first scenario, upset (emotional) in the second, and confrontational in the third. The participant facilitators wore an eye-tracking device that recorded their eye movements and fixation durations. Fixation durations and VDRs were calculated and summarized with medians and interquartile ranges. We explored the effects of scenario and training level on VDRs using Friedman tests and Wilcoxon rank sum tests.
All 10 participants completed all three scenarios. There were no statistically significant differences in VDRs between the junior and senior facilitators in any of the three scenarios.
The use of an eye-tracking device to measure VDR during debriefings is feasible. We did not demonstrate a difference in eye gaze patterns between junior and senior facilitators during difficult debriefings.
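The visual dominance ratio defined in this abstract is a simple ratio of two percentages. As an illustration (all numbers hypothetical, not from the study), a minimal Python sketch:

```python
def visual_dominance_ratio(eye_contact_speaking_s, speaking_s,
                           eye_contact_listening_s, listening_s):
    """VDR = % of speaking time with eye contact / % of listening time with eye contact."""
    pct_speaking = eye_contact_speaking_s / speaking_s
    pct_listening = eye_contact_listening_s / listening_s
    return pct_speaking / pct_listening

# Hypothetical debriefing: 120 s speaking (72 s with eye contact),
# 180 s listening (144 s with eye contact).
vdr = visual_dominance_ratio(72, 120, 144, 180)
print(round(vdr, 2))  # 0.75 -> below 1, i.e. relatively more eye contact while listening
```

Values above 1 are conventionally read as more socially dominant gaze behavior; values below 1 as more attentive listening.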
Affiliation(s)
- Ryan Wilkie
- Department of Emergency Medicine, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Amanda L Roze des Ordons
- Department of Critical Care Medicine; Department of Anesthesiology; Division of Palliative Medicine; Department of Oncology; Cumming School of Medicine, University of Calgary, Calgary, Canada
- Adam Cheng
- Department of Pediatrics; Department of Emergency Medicine, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Yiqun Lin
- Department of Pediatrics, Cumming School of Medicine, University of Calgary, Calgary, Canada
25
Gao H, Hasenbein L, Bozkir E, Göllner R, Kasneci E. Exploring Gender Differences in Computational Thinking Learning in a VR Classroom: Developing Machine Learning Models Using Eye-Tracking Data and Explaining the Models. INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION 2022. [DOI: 10.1007/s40593-022-00316-z]
Abstract
Understanding existing gender differences in the development of computational thinking skills is increasingly important for gaining valuable insights into bridging the gender gap. However, there are few studies to date that have examined gender differences based on the learning process in a realistic classroom context. In this work, we aim to investigate gender classification using students’ eye movements that reflect temporal human behavior during a computational thinking lesson in an immersive VR classroom. We trained several machine learning classifiers and showed that students’ eye movements provide discriminative information for gender classification. In addition, we employed a Shapley additive explanation (SHAP) approach for feature selection and further model interpretation. The classification model trained with the selected (best) eye movement feature set using SHAP achieved improved performance, with an average accuracy of over 70%. The SHAP values further explained the classification model by identifying important features and their impacts on the model output, namely gender. Our findings provide insights into the use of eye movements for in-depth investigations of gender differences in learning activities in VR classroom setups that are ecologically valid and may provide clues for providing personalized learning support and tutoring in such educational systems or optimizing system design.
26
Sugimoto M, Tomita A, Oyamada M, Sato M. Eye-Tracking-Based Analysis of Situational Awareness of Nurses. Healthcare (Basel) 2022; 10:2131. [PMID: 36360472 PMCID: PMC9690882 DOI: 10.3390/healthcare10112131]
Abstract
BACKGROUND Nurses are responsible for comprehensively identifying patient conditions and the associated environments. We hypothesized that the gaze trajectories of nurses differ with their experience, even in the same situation. METHODS An eye-tracking device monitored the gaze trajectories of nurses with various levels of experience, and of nursing students, during an intravenous injection task on a human patient simulator. RESULTS Areas of interest (AOIs) were identified in the recorded videos, and gaze durations on the AOIs showed different patterns between experienced nurses and nursing students. A state transition diagram visualized the recognition errors of the students and their repeated confirmation of the patient simulator's vital signs. Clustering analysis of gaze durations also indicated similarity among participants with similar experience. CONCLUSIONS As expected, gaze trajectories differed among the participants. The developed gaze transition diagram visualized these differences and helped in interpreting situational awareness based on visual perception. The demonstrated method can help in establishing effective nursing education, particularly for learning skills that are difficult to verbalize.
Affiliation(s)
- Masahiro Sugimoto
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Institute for Advanced Biosciences, Keio University, Tsuruoka 997-0052, Japan
- Atsumi Tomita
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Michiko Oyamada
- Department of Nursing, Nihon Institute of Medical Science, Moroyama 350-0435, Japan
- Mitsue Sato
- Department of Nursing, Kiryu University, Midori 379-2392, Japan
27
Liu CH, Hung J, Chang CW, Lin JJH, Huang ES, Wang SL, Lee LA, Hsiao CT, Sung PS, Chao YP, Chang YJ. Oral presentation assessment and image reading behaviour on brain computed tomography reading in novice clinical learners: an eye-tracking study. BMC MEDICAL EDUCATION 2022; 22:738. [PMID: 36284299 PMCID: PMC9597969 DOI: 10.1186/s12909-022-03795-9]
Abstract
BACKGROUND To study whether oral presentation (OP) assessment could reflect novice learners' interpretation skills and reading behaviour in brain computed tomography (CT) reading. METHODS Eighty fifth-year medical students were recruited, received a 2-hour interactive workshop on how to read brain CT, and were assigned to read two brain CT images before and after instruction. We evaluated their image reading behaviour in terms of overall OP post-test rating, lesion identification, and competency in systematic image reading after instruction. Students' reading behaviour in searching for the target lesions was recorded by eye tracking and was used to validate the accuracy of lesion reports. Statistical analyses, including lag sequential analysis (LSA), linear mixed models, and transition entropy (TE), were conducted to reveal the temporal relations and spatial complexity of systematic image reading from the eye movement perspective. RESULTS The overall OP ratings [pre-test vs. post-test: 0 vs. 1 in case 1, 0 vs. 1 in case 2, p < 0.001] improved after instruction. Both the systematic OP ratings [0 vs. 1 in both cases, p < 0.001] and the eye-tracking results (case 1: TE 3.42 ± 0.62 vs. 3.67 ± 0.37, p = 0.001; case 2: TE 3.42 ± 0.76 vs. 3.75 ± 0.37, p = 0.002) showed that image reading behaviour changed after instruction. The linear mixed models suggested a significant interaction between instruction and area of interest for case 1 (p < 0.001) and case 2 (p = 0.004). Visual attention to the target lesions in case 1, assessed by dwell time, was 506.50 ± 509.06 ms before and 374.38 ± 464.68 ms after instruction (p = 0.02). However, the dwell times in case 2, the fixation counts, and the frequencies of accurate lesion diagnoses in both cases did not change after instruction.
CONCLUSION Our results showed that OP performance may change concurrently with medical students' reading behaviour on brain CT after structured instruction.
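Transition entropy (TE), used above to quantify the spatial complexity of systematic image reading, can be computed from a chronological sequence of AOI fixations. A hedged stdlib sketch of one common formulation (Shannon entropy of first-order AOI transitions, weighted by source-AOI frequency; the paper's exact variant may differ):

```python
import math
from collections import Counter, defaultdict

def transition_entropy(aoi_sequence):
    """Entropy (bits) of first-order AOI transitions, weighted by how
    often each source AOI occurs. Higher values mean less predictable,
    more wide-ranging scanning; 0 means fully deterministic transitions."""
    transitions = list(zip(aoi_sequence, aoi_sequence[1:]))
    counts = defaultdict(Counter)
    for src, dst in transitions:
        counts[src][dst] += 1
    total = len(transitions)
    h = 0.0
    for src, dsts in counts.items():
        n_src = sum(dsts.values())
        pi = n_src / total          # weight of this source AOI
        for n in dsts.values():
            p = n / n_src           # conditional transition probability
            h -= pi * p * math.log2(p)
    return h

# Hypothetical fixation sequence over four AOIs A-D (not the study's data):
seq = list("ABACABDCABAD")
print(round(transition_entropy(seq), 2))
```

A strictly alternating sequence such as "ABABAB" gives TE = 0, because each AOI always transitions to the same successor.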
Affiliation(s)
- Chi-Hung Liu
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Division of Medical Education, Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Institute of Health Policy and Management, College of Public Health, National Taiwan University, Taipei, Taiwan
- June Hung
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chun-Wei Chang
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- John J H Lin
- Graduate Institute of Science Education, National Taiwan Normal University, No. 88, Ting-Jou Rd., Sec. 4, Taipei City, Taiwan
- Elaine Shinwei Huang
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Shu-Ling Wang
- Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei, Taiwan
- Li-Ang Lee
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Department of Otorhinolaryngology-Head and Neck Surgery, Linkou Main Branch, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Cheng-Ting Hsiao
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Department of Emergency Medicine, Chang Gung Memorial Hospital, Chiayi, Taiwan
- Chang Gung Medical Education Research Centre, Taoyuan, Taiwan
- Pi-Shan Sung
- Department of Neurology, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Yi-Ping Chao
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
- Department of Biomedical Engineering, Chang Gung University, Taoyuan, Taiwan
- Yeu-Jhy Chang
- Department of Neurology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Division of Medical Education, Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chang Gung Medical Education Research Centre, Taoyuan, Taiwan
28
Eye gaze and visual attention as a window into leadership and followership: A review of empirical insights and future directions. THE LEADERSHIP QUARTERLY 2022. [DOI: 10.1016/j.leaqua.2022.101654]
29
Ferrier-Barbut E, Gauthier P, Luengo V, Canlorbe G, Vitrani MA. Measuring the Quality of Learning in a Human–Robot Collaboration: A Study of Laparoscopic Surgery. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3476414]
Abstract
Robot-Assisted Laparoscopic Surgery (RALS) is now prevalent in operating rooms. This situation requires future surgeons to learn Classic Laparoscopic Surgery (CLS) and RALS simultaneously. Therefore, along with investigating the differences in performance between the two techniques, it is essential to study the impact of training in RALS on the skills mastered in CLS. In this article, we study comanipulated RALS (Co-RALS), one of the two designs for RALS, in which the human and the robot share the execution of the task. We use a measuring tool rarely employed in Human–Robot Interaction, gaze tracking, to assess the acquisition of skills in CLS when training in Co-RALS or in CLS, and time recording to compare the learning curves of Co-RALS and CLS. These metrics allow us to observe differences between Co-RALS and CLS. Training in Co-RALS develops slightly (but not significantly) better hand–eye coordination skills and significantly better timewise performance than training in CLS alone. Compared with CLS, Co-RALS enhances timewise performance in laparoscopic surgery on tasks that require precision rather than depth perception. The results obtained enable us to further characterize the quality of Human–Robot Interaction in Co-RALS.
30
Bodet-Contentin L, Messet-Charrière H, Gissot V, Renault A, Muller G, Aubrey A, Gadrez P, Tavernier E, Ehrmann S. Assessing oral comprehension with an eye tracking based innovative device in critically ill patients and healthy volunteers: a cohort study. Crit Care 2022; 26:288. [PMID: 36151567 PMCID: PMC9508751 DOI: 10.1186/s13054-022-04137-3]
Abstract
PURPOSE Communication between caregivers or relatives and patients is a major difficulty in intensive care units (ICU). Patients' comprehension capabilities vary over time, and traditional comprehension tests cannot be implemented. Our purpose was to evaluate an oral comprehension test adapted for automatic administration using eye-tracking technology among ICU patients. METHODS A prospective bi-centric cohort study was conducted with 60 healthy volunteers and 53 ICU patients. Subjects underwent an oral comprehension test using an eye-tracking device; their results and characteristics were collected. The total duration of the test was two and a half minutes. RESULTS While performing the test, 48 patients (92%) were receiving invasive ventilation. Among healthy volunteers, the median rate of right answers was very high (93% [interquartile range 87, 100]), whereas it was lower (33% [20, 67]) for patients. For both groups, a significantly lower rate of right answers was observed with advancing age (67% [27, 80] vs. 27% [20, 38] among patients and 93% [93, 100] vs. 87% [73, 93] among healthy volunteers, below and above 60 years of age, respectively) and in the absence of a bachelor's degree (60% [38, 87] vs. 27% [20, 57] among patients and 93% [93, 100] vs. 87% [73, 93] among healthy volunteers). For patients, the more severe the disease, the lower the rate of correct answers. CONCLUSION The eye-tracking-adapted comprehension test is easy and fast to use among ICU patients, and the results seem coherent with the various potential levels of comprehension hypothesized in this study.
Affiliation(s)
- Laetitia Bodet-Contentin
- CHRU de Tours, Médecine Intensive Réanimation, 2 boulevard Tonnellé, Tours, France; INSERM, SPHERE, UMR1246, Université de Tours et Nantes, Tours, France
- Hélène Messet-Charrière
- CHRU de Tours, Médecine Intensive Réanimation, 2 boulevard Tonnellé, Tours, France
- Aurélie Renault
- CHR Orléans, Médecine Intensive Réanimation, Orléans, France
- Grégoire Muller
- CHR Orléans, Médecine Intensive Réanimation, Orléans, France
- Aurélie Aubrey
- CHRU de Tours, Médecine Intensive Réanimation, 2 boulevard Tonnellé, Tours, France
- Pierrick Gadrez
- CHRU de Tours, Médecine Intensive Réanimation, 2 boulevard Tonnellé, Tours, France
- Elsa Tavernier
- INSERM, SPHERE, UMR1246, Université de Tours et Nantes, Tours, France; CIC INSERM 1415, Tours, France
- Stephan Ehrmann
- CHRU de Tours, Médecine Intensive Réanimation, 2 boulevard Tonnellé, Tours, France; CRICS-TriggerSep FCRIN Research Network, CHRU Tours, CIC INSERM 1415, Médecine Intensive Réanimation, Tours, France; Centre d’étude des pathologies respiratoires, U1100, INSERM, Université de Tours, Tours, France
31
Chan AHY, Lee WF, Van Gerven PWM, Chenkin J. Assessment of changes in gaze patterns during training in point-of-care ultrasound. BMC MEDICAL EDUCATION 2022; 22:658. [PMID: 36056331 PMCID: PMC9440555 DOI: 10.1186/s12909-022-03680-5]
Abstract
BACKGROUND Point-of-care ultrasound (POCUS) is a core skill in emergency medicine (EM); however, there is a lack of objective competency measures. Eye-tracking technology is a potentially useful assessment tool, as gaze patterns can reliably discriminate between experts and novices across medical specialties. We aimed to determine whether gaze metrics change in an independent and predictable manner during ultrasound training. METHODS A convenience sample of first-year residents from a single academic emergency department was recruited. Participants interpreted 16 ultrasound videos of the focused assessment with sonography for trauma (FAST) scan while their gaze patterns were recorded using a commercially available eye-tracking device. The intervention group then completed an introductory ultrasound course, whereas the control group received no additional education. The gaze assessment was subsequently repeated. The primary outcome was total gaze duration on the area of interest (AOI). Secondary outcomes included time to fixation, mean duration of first fixation, and mean number of fixations on the AOI. RESULTS Ten EM residents in the intervention group and 10 non-EM residents in the control group completed the study. After training, there was an 8.8 s increase in total gaze time on the AOI in the intervention group, compared to a 4.0 s decrease in the control group (p = .03). EM residents were also 3.8 s quicker to fixate on the AOI, whereas the control group became 2.5 s slower (p = .04). There were no significant interactions for the number of fixations (0.43 vs. 0.18, p = .65) or the duration of first fixation on the AOI (0.02 s vs. 0.06 s, p = .63). CONCLUSIONS There are significant and quantifiable changes in gaze metrics that occur with incremental learning after an ultrasound course. Further research is needed to validate the serial use of eye-tracking technology for following a learner's progress toward competency in point-of-care ultrasound image interpretation.
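The four gaze outcomes in this study (total gaze duration, time to fixation, first-fixation duration, and fixation count on the AOI) can all be derived from a chronological fixation list. A minimal sketch; the fixation records and AOI labels below are invented for illustration, not taken from the study:

```python
def aoi_gaze_metrics(fixations, aoi):
    """Summarize gaze on one area of interest (AOI).

    `fixations`: chronological list of (aoi_label, onset_s, duration_s).
    Returns total dwell time, time to first fixation (None if the AOI is
    never fixated), first-fixation duration, and fixation count.
    """
    on_aoi = [f for f in fixations if f[0] == aoi]
    return {
        "total_gaze_s": sum(d for _, _, d in on_aoi),
        "time_to_first_fixation_s": on_aoi[0][1] if on_aoi else None,
        "first_fixation_duration_s": on_aoi[0][2] if on_aoi else None,
        "fixation_count": len(on_aoi),
    }

# Hypothetical FAST-scan viewing (labels and times invented):
fix = [("probe_marker", 0.0, 0.3), ("morison_pouch", 0.4, 0.6),
       ("liver_edge", 1.1, 0.2), ("morison_pouch", 1.4, 0.9)]
print(aoi_gaze_metrics(fix, "morison_pouch"))
```

Per-participant values of these metrics, averaged over the 16 videos, are the kind of quantities compared before and after training above.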
Affiliation(s)
- Alice H Y Chan
- Department of Emergency Medicine, Sunnybrook Health Sciences Center, University of Toronto, 2075 Bayview Avenue, AG245, Toronto, ON, M4N 3M5, Canada
- Wei Feng Lee
- Department of Emergency Medicine, Sunnybrook Health Sciences Center, University of Toronto, 2075 Bayview Avenue, AG245, Toronto, ON, M4N 3M5, Canada
- Department of Emergency Medicine, Ng Teng Fong General Hospital, 1 Jurong East Street 21, Singapore 609606
- Pascal W M Van Gerven
- Department of Educational Development and Research, School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, Netherlands
- Jordan Chenkin
- Department of Emergency Medicine, Sunnybrook Health Sciences Center, University of Toronto, 2075 Bayview Avenue, AG245, Toronto, ON, M4N 3M5, Canada
32
Wolfe JM, Kosovicheva A, Wolfe B. Normal blindness: when we Look But Fail To See. Trends Cogn Sci 2022; 26:809-819. [PMID: 35872002 PMCID: PMC9378609 DOI: 10.1016/j.tics.2022.06.006]
Abstract
Humans routinely miss important information that is 'right in front of our eyes', from overlooking typos in a paper to failing to see a cyclist in an intersection. Recent studies on these 'Looked But Failed To See' (LBFTS) errors point to a common mechanism underlying these failures, whether the missed item was an unexpected gorilla, the clearly defined target of a visual search, or that simple typo. We argue that normal blindness is the by-product of the limited-capacity prediction engine that is our visual system. The processes that evolved to allow us to move through the world with ease are virtually guaranteed to cause us to miss some significant stimuli, especially in important tasks like driving and medical image perception.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02215, USA; Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Anna Kosovicheva
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
- Benjamin Wolfe
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
33
Eye Tracking Use in Surgical Research: A Systematic Review. J Surg Res 2022; 279:774-787. [DOI: 10.1016/j.jss.2022.05.024]
Abstract
INTRODUCTION Eye tracking (ET) is a popular tool to study what factors affect the visual behaviour of surgical team members. To our knowledge, there have been no reviews to date that evaluate the broad use of ET in surgical research. This review aims to identify and assess the quality of this evidence, to synthesize how ET can be used to inform surgical practice, and to provide recommendations to improve future ET surgical studies. METHODS In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a systematic literature review was conducted. An electronic search was performed in MEDLINE, Cochrane Central, Embase, and Web of Science databases up to September 2020. Included studies used ET to measure the visual behaviour of members of the surgical team during surgery or surgical tasks. The included studies were assessed by two independent reviewers. RESULTS A total of 7614 studies were identified, and 111 were included for data extraction. Eleven applications were identified; the four most common were skill assessment (41%), visual attention assessment (22%), workload measurement (17%), and skills training (10%). A summary was provided of the various ways ET could be used to inform surgical practice, and three areas were identified for the improvement of future ET studies in surgery. CONCLUSIONS This review provided a comprehensive summary of the various applications of ET in surgery and how ET could be used to inform surgical practice, including how to use ET to improve surgical education. The information provided in this review can also aid in the design and conduct of future ET surgical studies.
34
Chen L, Tang XJ, Chen XK, Ke N, Liu Q. Effect of the BOPPPS model combined with case-based learning versus lecture-based learning on ophthalmology education for five-year paediatric undergraduates in Southwest China. BMC MEDICAL EDUCATION 2022; 22:437. [PMID: 35668389 PMCID: PMC9170341 DOI: 10.1186/s12909-022-03514-4]
Abstract
BACKGROUND To investigate the effect of the bridge-in, objective, pre-assessment, participatory learning, post-assessment, and summary (BOPPPS) model combined with case-based learning (CBL) on ophthalmology teaching for five-year paediatric undergraduates. METHODS The effects of the BOPPPS model combined with CBL (BOPPPS-CBL) and traditional lecture-based learning (LBL) on ophthalmology teaching were compared among students in a five-year programme. Questionnaire surveys of the students were collected and statistically analysed after class. The final examination scores, covering elementary knowledge and case analysis, in the two groups were analysed. RESULTS There were no statistically significant differences between the two groups in baseline teacher and student data. More students agreed that the BOPPPS-CBL model developed their problem-solving skills, analytical skills and motivation for learning better than the LBL model did. There was no significant difference in learning pressure between the two groups. The final examination scores of the BOPPPS-CBL group were significantly higher than those of the LBL group, and overall course satisfaction was markedly higher in the BOPPPS-CBL group. CONCLUSIONS The BOPPPS-CBL model is an effective ophthalmology teaching method for five-year paediatric undergraduates.
Affiliation(s)
- Lin Chen
- Department of Ophthalmology, Children's Hospital of Chongqing Medical University, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, 136, Zhongshan 2nd RD, Yuzhong District, 400014, Chongqing, China
- Xiao-Jiao Tang
- Department of Ophthalmology, Children's Hospital of Chongqing Medical University, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, 136, Zhongshan 2nd RD, Yuzhong District, 400014, Chongqing, China
- Xin-Ke Chen
- Department of Ophthalmology, Children's Hospital of Chongqing Medical University, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, 136, Zhongshan 2nd RD, Yuzhong District, 400014, Chongqing, China
- Ning Ke
- Department of Ophthalmology, Children's Hospital of Chongqing Medical University, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, 136, Zhongshan 2nd RD, Yuzhong District, 400014, Chongqing, China
- Qin Liu
- Department of Ophthalmology, Children's Hospital of Chongqing Medical University, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, 136, Zhongshan 2nd RD, Yuzhong District, 400014, Chongqing, China
35
Interplay of Message Frame and Reference Point on Recycled Water Acceptance in Green Community: Evidence from an Eye-Tracking Experiment. BUILDINGS 2022. [DOI: 10.3390/buildings12060741]
Abstract
Public rejection of recycled water hinders the application of recycled water use projects in green communities. An effective information outreach strategy could help to overcome this obstacle. This study used message frames and reference points as control variables to design experimental materials and conducted eye-movement experiments to determine the effect of different information promotion strategies. The results show that: (1) gain-framed messages are more effective than loss-framed messages; (2) self-referencing messages are more suitable for promoting recycled water use than other-referencing messages; (3) message frame (gain vs. loss) and reference point (self vs. others) have an interactive effect on the public's information-processing behavior; (4) the average fixation duration on the advertising message plays a mediating role in the pathway through which message frame and reference point jointly influence public acceptance. This study provides managerial implications for determining information dissemination strategies when applying recycled water use projects in green communities.
36
Liu X, Sanchez Perdomo YP, Zheng B, Duan X, Zhang Z, Zhang D. When medical trainees encountering a performance difficulty: evidence from pupillary responses. BMC MEDICAL EDUCATION 2022; 22:191. [PMID: 35305623 PMCID: PMC8934497 DOI: 10.1186/s12909-022-03256-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 03/13/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND Medical trainees are required to learn many procedures by following instructions to improve their skills. This study aims to investigate trainees' pupillary responses when they encounter a moment of performance difficulty (MPD) during skill learning. Detecting moments of performance difficulty is essential for educators to assist trainees when they need it. METHODS Eye motions were recorded while trainees practiced a thoracostomy procedure on a simulation model. To make pupillary data comparable among trainees, we proposed the adjusted pupil size (APS), which normalizes pupil dilation for each trainee over the entire procedure. APS variables, including maxAPS, minAPS, meanAPS, medianAPS, and max interval indices, were compared between easy and difficult subtasks; the APSs were compared among three performance situations: the moment of normal performance (MNP), MPD, and the moment of seeking help (MSH). RESULTS The mixed ANOVA revealed that the adjusted pupil size variables, such as maxAPS, minAPS, meanAPS, and medianAPS, differed significantly between performance situations. Compared to MPD and MNP, pupil size was reduced during MSH. Trainees displayed a smaller cumulative frequency of APS during difficult subtasks than during easy ones. CONCLUSIONS Results from this project suggest that pupil responses can be a good behavioral indicator. This study is part of our research aiming to create an artificially intelligent system for medical trainees that automatically detects performance difficulty and delivers instructional messages using augmented reality technology.
Affiliation(s)
- Xin Liu: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada; Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
- Yerly Paola Sanchez Perdomo: Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Bin Zheng: Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada; Department of Surgery, Faculty of Medicine and Dentistry, 162 Heritage Medical Research Centre, University of Alberta, 8440 112 St. NW, Edmonton, Alberta, T6G 2E1, Canada
- Xiaoqin Duan: Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada; Department of Rehabilitation Medicine, Second Hospital of Jilin University, Changchun, Jilin, 130041, China
- Zhongshi Zhang: Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada; Department of Biological Sciences, University of Alberta, Edmonton, AB, T6G 2E9, Canada
- Dezheng Zhang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China

37
Implementing New Technologies to Improve Visual-Spatial Functions in Patients with Impaired Consciousness. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19053081. [PMID: 35270773 PMCID: PMC8910167 DOI: 10.3390/ijerph19053081] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 03/02/2022] [Accepted: 03/03/2022] [Indexed: 01/13/2023]
Abstract
The quality of life of patients with severe brain damage is compromised by, among other factors, impaired cognitive functions and ocular dysfunction. This paper reports findings on participants in an oculomotor training course aimed at rehabilitating visual-spatial functions. Five male patients with brain damage who could not communicate verbally or motorically participated in the study. Over a six-week period, using an eye tracker, the subjects solved tasks involving recognising objects, size perception, colour perception, perception of object structures (letters), perception of object structures (objects), detecting differences between images, and assembling image components into a complete image. The findings provide evidence of the effectiveness of oculomotor training, based on a longer duration of work with the eye tracker and improved visual-spatial functions.
38
Wagner M, den Boer MC, Jansen S, Groepel P, Visser R, Witlox RSGM, Bekker V, Lopriore E, Berger A, te Pas AB. Video-based reflection on neonatal interventions during COVID-19 using eye-tracking glasses: an observational study. Arch Dis Child Fetal Neonatal Ed 2022; 107:156-160. [PMID: 34413092 PMCID: PMC8384497 DOI: 10.1136/archdischild-2021-321806] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Accepted: 06/16/2021] [Indexed: 11/17/2022]
Abstract
OBJECTIVE The aim of this study was to determine the experience with, and the feasibility of, point-of-view video recordings using eye-tracking glasses for training and reviewing neonatal interventions during the COVID-19 pandemic. DESIGN Observational prospective single-centre study. SETTING Neonatal intensive care unit at the Leiden University Medical Center. PARTICIPANTS All local neonatal healthcare providers. INTERVENTION There were two groups of participants: proceduralists, who wore eye-tracking glasses during procedures, and observers, who later watched the procedures as part of a video-based reflection. MAIN OUTCOME MEASURES The primary outcome was the feasibility of, and the proceduralists' and observers' experience with, the point-of-view eye-tracking videos as an additional tool for bedside teaching and video-based reflection. RESULTS We conducted 12 point-of-view recordings on 10 different patients (median gestational age of 30.9±3.5 weeks and weight of 1764 g) undergoing neonatal intubation (n=5), minimally invasive surfactant therapy (n=5) and umbilical line insertion (n=2). We conducted nine video-based observations with a total of 88 observers. The use of point-of-view recordings was perceived as feasible. Observers further reported that the point-of-view recordings were of educational benefit to them and a potentially instructional tool during COVID-19. CONCLUSION We demonstrated the practicability of eye-tracking glasses for point-of-view recordings of neonatal procedures and of the resulting videos for observation, educational sessions and logistical considerations, especially as COVID-19 distancing measures reduced bedside teaching opportunities.
Affiliation(s)
- Michael Wagner, Angelika Berger: Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Maria C den Boer, Sophie Jansen, Remco Visser, Ruben S G M Witlox, Vincent Bekker, Enrico Lopriore, Arjan B te Pas: Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Peter Groepel: Department of Applied Psychology, University of Vienna, Vienna, Austria

39
Melnyk R, Chen Y, Holler T, Schuler N, Saba P, Quarrier S, Bloom J, Tabayoyong W, Frye T, Rashid H, Joseph J, Ghazi A. Utilizing head-mounted eye trackers to analyze patterns and decision-making strategies of 3D virtual modelling platform (IRIS ™) during preoperative planning for renal cancer surgeries. World J Urol 2022; 40:651-658. [PMID: 35066636 DOI: 10.1007/s00345-021-03906-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 12/06/2021] [Indexed: 01/02/2023] Open
Abstract
PURPOSE IRIS™ provides interactive, 3D anatomical visualizations of renal anatomy for preoperative planning that can be manipulated by altering transparency, rotating, zooming, panning, and overlaying the CT scan. Our objective was to analyze how eye-tracking metrics and utilization patterns differ between preoperative surgical planning of renal masses using IRIS and using CT scans. METHODS Seven surgeons reviewed, in random order, IRIS and CT images of 9 patients with renal masses [5 high complexity (RENAL score ≥ 8), 4 low complexity (≤ 7)]. Surgeons answered a series of questions regarding patient anatomy, perceived difficulty (/100), confidence (/100), and surgical plan. Eye-tracking metrics (mean pupil diameter, number of fixations, and gaze duration) were collected. RESULTS Surgeons spent significantly less time interpreting data from IRIS than from CT scans (-67.1 s, p < 0.01) and had higher inter-rater agreement on surgical approach after viewing IRIS (α = 0.16-0.34). After viewing IRIS, surgical plans showed a (statistically non-significant) tendency toward more selective ischemia approaches, which correlated positively with improved identification of vascular anatomy. The planned surgical approach changed in 22 of 59 cases. Compared to viewing the CT scan, left and right mean pupil diameter and the number and duration of fixations were lower when using IRIS (p < 0.01, p < 0.01, p = 0.42, p < 0.01, respectively), indicating that interpreting information from IRIS required less mental effort even though its interactive features were under-utilized. CONCLUSIONS Surgeons extracted more detailed information in less time and with less mental effort using IRIS than CT scans, and proposed surgical approaches with the potential to enhance surgical outcomes.
Affiliation(s)
- Rachel Melnyk, Yuxin Chen, Tyler Holler, Nathan Schuler, Patrick Saba: Simulation Innovation Lab, University of Rochester Medical Center (URMC), 601 Elmwood Ave, Rochester, NY, USA
- Thomas Frye, Hani Rashid, Jean Joseph: Department of Urology, URMC, Rochester, NY, USA
- Ahmed Ghazi: Simulation Innovation Lab, University of Rochester Medical Center (URMC), 601 Elmwood Ave, Rochester, NY, USA; Department of Urology, URMC, Rochester, NY, USA

40
Suzuki R, Kurita Y. Characteristics of gaze tracking during movement analysis by therapists. J Phys Ther Sci 2022; 34:36-39. [PMID: 35035077 PMCID: PMC8752271 DOI: 10.1589/jpts.34.36] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/13/2021] [Indexed: 11/24/2022] Open
Abstract
[Purpose] Visual assessment of the quality of movement is a common and important component of physiotherapy. The purpose of this study was to quantify therapists' level of proficiency and to obtain a new proficiency index by measuring the coordinates of the gaze-tracking trajectories of therapists with varying years of experience. [Participants and Methods] Eighteen volunteer physiotherapists (1st year (n=4), 7th year (n=1), 9th year (n=4), 10th year (n=3), 11th year (n=4), 13th year (n=1), and 21st year (n=1)) were recruited for this study. [Results] Discriminant analysis based on the deviation between the X-axis and Y-axis ranges of each therapist's gaze tracking during movement analysis classified therapists with 10 or fewer years of experience with 72.2% accuracy. Cluster analysis yielded two clusters; the thirteen therapists in Cluster 2 were in their 9th year or beyond. Eye-tracking trajectories could thus be classified by the 10th year of experience as a therapist. [Conclusion] Experienced therapists with 10 years of practice showed an expanded range of eye tracking, and the trajectory in the Y-axis direction tended to lengthen from the 9th year of experience onward. On this point, quantitative judgments of eye-tracking results can serve as indicators of proficiency, and eye movements are a useful tool for objectively measuring skills gained from experience.
Affiliation(s)
- Risa Suzuki: Faculty of Health Science Technology, Bunkyo University, 1196 Kamekubo, Fujimino, Saitama 356-8533, Japan; Department of Clinical Research, National Hospital Organization Murayama Medical Center, Japan; Advanced Research Center for Human Sciences, Waseda University, Japan

41
Tolvanen O, Elomaa AP, Itkonen M, Vrzakova H, Bednarik R, Huotarinen A. Eye-Tracking Indicators of Workload in Surgery: A Systematic Review. J INVEST SURG 2022; 35:1340-1349. [PMID: 35038963 DOI: 10.1080/08941939.2021.2025282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Background: Eye tracking is a powerful tool for unobtrusive, real-time assessment of workload in clinical settings. Before complex eye-tracking-derived surrogates can be proactively utilized to improve surgical safety, their indications, validity and reliability require careful evaluation. Methods: We conducted a systematic review of the literature from 2010 to 2020 according to PRISMA guidelines. PubMed, Cochrane, Scopus, Web of Science, PsycInfo and Google Scholar databases were searched in July 2020 using the following query: ("eye tracking" OR "gaze tracking") AND (surgery OR surgical OR operative OR intraoperative) AND (workload OR stress). Short papers, non-peer-reviewed papers, and papers in which eye-tracking methodology was not used to investigate workload or stress factors in surgery were omitted. Results: A total of 17 studies were identified as eligible for this review. Most of the studies (n = 15) measured workload in simulated settings. Task difficulty and expertise were the most studied factors. Studies consistently showed that surgeons' eye movements, such as pupil responses, gaze patterns and blinks, were associated with the level of perceived workload. However, differences between measurements in the operating room and in simulated environments have been found. Conclusion: Pupil responses, blink rate and gaze indices are valid indicators of workload. However, the effect of distractions and non-technical factors on workload remains an underrepresented aspect of the literature, even though these are recognized as underlying factors in successful surgery.
Affiliation(s)
- Otto Tolvanen: School of Medicine, University of Eastern Finland, Kuopio, Finland
- Antti-Pekka Elomaa: Microsurgery Training Center, Kuopio University Hospital, Kuopio, Finland; Neurosurgery of KUH NeuroCenter, Kuopio University Hospital, Kuopio, Finland
- Matti Itkonen: Center of Brain Science (CBS), CBS-TOYOTA Collaboration Center, RIKEN, Nagoya, Japan
- Hana Vrzakova: Microsurgery Training Center, Kuopio University Hospital, Kuopio, Finland
- Roman Bednarik: School of Computing, University of Eastern Finland, Kuopio, Finland
- Antti Huotarinen: School of Computing, University of Eastern Finland, Kuopio, Finland

42
Gong H, Hsieh SS, Holmes D, Cook D, Inoue A, Bartlett D, Baffour F, Takahashi H, Leng S, Yu L, McCollough CH, Fletcher JG. An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation. Med Phys 2021; 48:6710-6723. [PMID: 34534365 PMCID: PMC8595866 DOI: 10.1002/mp.15219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 08/28/2021] [Accepted: 08/30/2021] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data using this platform for different eye-tracking data acquisition modes. METHODS An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at a 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chinrest), free movement with general biofeedback, and free movement with strict biofeedback. Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target prior to the integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to the dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation, in units of image pixels and degrees of visual angle. RESULTS The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3, viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while general biofeedback demonstrated slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. These corresponded to visual angles ranging from 0.7° to 1.3°. CONCLUSIONS An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to head-stabilized mode.
Affiliation(s)
- Hao Gong, Scott S. Hsieh, Akitoshi Inoue, David Bartlett, Shuai Leng, Lifeng Yu: Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Holmes: Department of Physiology & Biomedical Engineering, Mayo Clinic, Rochester, MN 55901
- David Cook: Department of Internal Medicine, Mayo Clinic, Rochester, MN 55901

43
Age norms for grating acuity and contrast sensitivity in children using eye tracking technology. Int Ophthalmol 2021; 42:747-756. [PMID: 34622374 DOI: 10.1007/s10792-021-02040-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 09/22/2021] [Indexed: 10/20/2022]
Abstract
KEY MESSAGES Visual acuity is the most widely used method for assessing visual function in children. Contrast sensitivity complements the information provided by visual acuity but is not commonly used in clinical practice. Digital devices are increasingly used to evaluate visual function owing to their multiple advantages; testing with these devices can improve the evaluation of visual development in children from a few months of age. Visual acuity and contrast sensitivity tests using eye tracking technology can measure visual function in children across a wide range of ages objectively, quickly, and without the need for an experienced examiner. PURPOSE To report age-normative values for grating visual acuity and contrast sensitivity in healthy children using a digital device with eye tracking technology, and to validate the grating acuity test. METHODS In the first part of the study, we examined healthy children aged between 6 months and 7 years with normal ophthalmological assessments. Grating visual acuity (VA) and contrast sensitivity (CS) were assessed using a preferential gaze paradigm with a DIVE (Device for an Integral Visual Examination) assisted by eye tracking technology to provide age norms. For the validation part, we compared the LEA grating test (LGT) with DIVE VA in a group of children aged between 6 months and 4 years with normal and abnormal visual development. RESULTS Fifty-seven children (2.86 ± 1.55 years) were examined with the DIVE VA test, and 44 successfully completed the DIVE CS test (3.06 ± 1.41 years). Both VA and CS values increased with age, mainly during the first two years of life. Sixty-nine patients (1.34 ± 0.61 years) were included in the DIVE VA test validation. The mean difference between LGT and DIVE VA was -1.05 ± 4.54 cpd, with 95% limits of agreement (LoA) of -9.95 to 7.84 cpd. Agreement between the two tests was higher in children younger than 1 year, with a mean difference of -0.19 ± 4.02 cpd. CONCLUSIONS DIVE is an automatic, objective and reliable tool for assessing several visual function parameters in children, and it shows good agreement with classical VA tests, especially in the first stage of life.
44
Banse HE, McMillan CJ, Warren AL, Hecker KG, Wilson B, Skorobohach BJ, Carter RT, Lewin AC, Kondro DA, Ungrin MD, Dorosz SG, Baker RE, Dehghanpir SD, Grandt BB, Hale Mitchell LK, Anderson SJ. Development of and Validity Evidence for a Canine Ocular Model for Training Novice Veterinary Students to Perform a Fundic Examination. JOURNAL OF VETERINARY MEDICAL EDUCATION 2021; 48:620-628. [PMID: 33493101 DOI: 10.3138/jvme-2020-0035] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Indirect fundoscopy is challenging for novice learners, as patients are often intolerant of the procedure, impeding the development of proficiency. To address this, we developed a canine ocular simulator that we hypothesized would improve student learning compared to live dogs. Six board-certified veterinary ophthalmologists and 19 second-year veterinary students (novices) performed an indirect fundic examination on the model and on a live dog. Prior to assessment, novices were introduced to the skill with a standardized teaching protocol and practiced (without feedback) on either the model (n = 10) or a live dog (n = 9) for 30 minutes. All participants evaluated the realism and usefulness of the model using a Likert-type scale. Performance on the live dog and the model was evaluated in all participants using time to completion of the task, performance of the fundic examination assessed with a checklist and global score, identification of objects in the fundus of the model, and time spent looking at the fundus of the model measured with eye tracking. Novices trained on the simulator were compared with those trained on live dogs in terms of fundic examination performance on the live dog and identification of shapes in the model. In general, experts performed the fundic examination faster (p ≤ .0003) and more proficiently than novices, although there were no differences in eye-tracking behavior between groups (p ≥ .06). No differences were detected between training on the simulator and on a live dog in the development of fundoscopy skills in novices (p ≥ .20). These findings suggest that this canine model may be an effective tool for training students to perform fundoscopy.
45
Aust J, Mitrovic A, Pons D. Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study. SENSORS (BASEL, SWITZERLAND) 2021; 21:6135. [PMID: 34577343 PMCID: PMC8473167 DOI: 10.3390/s21186135] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 09/03/2021] [Accepted: 09/07/2021] [Indexed: 01/20/2023]
Abstract
Background-The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor 'cleanliness' was analysed among other factors. Method-Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of the different expertise levels for various part conditions. Results-The results show an inspection accuracy of 86.8% for clean blades and 66.8% for dirty blades. The statistical analysis showed that cleanliness and defect type influenced inspection accuracy, while expertise was, surprisingly, not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search, with fewer fixations and revisits, than the other groups. Conclusions-Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
Affiliation(s)
- Jonas Aust, Dirk Pons: Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Antonija Mitrovic: Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8041, New Zealand

46
Davrieux CF, Palermo M, Giménez ME. Online Education, Was It Here but We Did Not Know? J Laparoendosc Adv Surg Tech A 2021. [PMID: 34494898 DOI: 10.1089/lap.2021.0527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Background: Online education is not new; its main formats are talks, videos, and virtual courses. The quality and quantity of talks, together with the level of the speakers, are variable and heterogeneous. The arrival of the COVID-19 pandemic accelerated this process. The objective of this study was to analyze the results of a questionnaire on the current state of online education. Methods: Retrospective descriptive observational study based on a questionnaire. The participants consulted were Latin American physicians with different specialties. Results: A total of n = 361 participants were recruited. Of these, 26.9% had between 6 and 15 years of work experience; 63.1% carried out teleconsultations with their patients; and 96.1% attended between 1 and 10 talks/courses/webinars during the pandemic, whereas 1.6% did not attend any. The talks given were rated "Very Good" by 51.2%, and 59.5% considered that a "hybrid" option would be the best modality for future medical congresses in the post-pandemic era; 84.7% considered that other possibilities for online teaching and online surgical training should be explored. Conclusion: Online education has shaped the way knowledge has been transmitted in recent years and has been well accepted by those attending academic meetings.
Affiliation(s)
- Carlos Federico Davrieux
- Department of Percutaneous Surgery, DAICIM Foundation, Buenos Aires, Argentina
- Department of General Surgery, Sanatorio de la Mujer, Rosario, Argentina
- Mariano Palermo
- Department of Percutaneous Surgery, DAICIM Foundation, Buenos Aires, Argentina
- School of Medicine, University of Buenos Aires, Buenos Aires, Argentina
- Department of Bariatric Surgery, DIAGNOMED, Buenos Aires, Argentina
- Mariano E Giménez
- Department of Percutaneous Surgery, DAICIM Foundation, Buenos Aires, Argentina
- School of Medicine, University of Buenos Aires, Buenos Aires, Argentina
- IHU-Strasbourg (Hospital-University Institute), Strasbourg, France
- IRCAD (Institute for Research on Cancer of the Digestive System), Strasbourg, France
47
Yu Y, Luo Y, Huang L, Quan Y. The impact of contextual information on decision-making in footwear examination: An eye-tracking study. J Forensic Sci 2021; 66:2218-2231. [PMID: 34414574 DOI: 10.1111/1556-4029.14861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 07/22/2021] [Accepted: 08/03/2021] [Indexed: 11/26/2022]
Abstract
To investigate whether contextual factors and expectations might influence the decision-making of forensic footwear examiners, we recorded the gaze of experts examining shoeprint image sets with an eye-tracking recorder. In addition to eye movement data as an objective measure, questionnaires were completed and a survey was conducted afterwards. Twenty-three qualified examiners assessed the similarity among shoe images for 22 different cases, three of which were laterally reversed. We divided the experiment into two sessions and then compared the examiners' performance with and without contextual information. The results showed effects of the contextual bias manipulation on both behavioral data and eye-tracking data. The consensus and accuracy of examiners with contextual information were higher than those without contextual information. In the eye-tracking data, fixation counts and saccade counts differed significantly under contextual information. In addition, we found that the contextual information produced significant changes in inter-examiner consistency as measured by the Earth Mover's Distance metric. However, there were no statistically significant differences in the examiners' saccadic amplitude or total fixation duration after exposure to contextual information. Our results are instructive for understanding the cognitive process of shoeprint examination in real cases, in which stimuli related to contextual factors may affect decision-making and behavior. Implications for the causes of contextual effects are discussed.
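The abstract reports inter-examiner consistency measured with the Earth Mover's Distance. As a minimal illustration of the idea (not the study's actual code), the 1-D case between two equal-size samples reduces to the mean absolute difference of the sorted values; the fixation coordinates below are hypothetical:

```python
def emd_1d(a, b):
    """Earth Mover's (Wasserstein-1) distance between two equal-size
    1-D samples: mean absolute difference of the sorted values."""
    assert len(a) == len(b), "equal sample sizes assumed in this sketch"
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Hypothetical fixation x-coordinates (pixels) for two examiners
examiner_a = [120, 135, 150, 300, 310]
examiner_b = [118, 140, 155, 305, 320]
print(emd_1d(examiner_a, examiner_b))  # → 5.4
```

A smaller distance means the two examiners distributed their attention over the image more similarly; in practice the study would compute this over full 2-D fixation maps rather than a single coordinate.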
Affiliation(s)
- Yang Yu
- School of Forensic Science, People's Public Security University of China, Beijing, China
- Yaping Luo
- Graduate School, People's Public Security University of China, Beijing, China
- Lu Huang
- School of Medicine, Boston University, Boston, Massachusetts, USA
- Yongzhi Quan
- Department of Investigation, Shanghai Police College, Shanghai, China
48
|
Kołodziej P, Tuszyńska-Bogucka W, Dzieńkowski M, Bogucki J, Kocki J, Milosz M, Kocki M, Reszka P, Kocki W, Bogucka-Kocka A. Eye Tracking-An Innovative Tool in Medical Parasitology. J Clin Med 2021; 10:jcm10132989. [PMID: 34279473 PMCID: PMC8268455 DOI: 10.3390/jcm10132989] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 06/29/2021] [Accepted: 06/29/2021] [Indexed: 11/16/2022] Open
Abstract
The innovative Eye Movement Modelling Examples (EMMEs) method can be used in medicine as an educational training tool for the assessment and verification of students and professionals. Our work analysed the possibility of using eye-tracking tools to verify the skills and training of people engaged in laboratory medicine, using parasitological diagnostics as an example. Professionally active laboratory diagnosticians working in a multi-profile (non-parasitological) laboratory (n = 16), laboratory diagnosticians no longer working in the profession (n = 10), and medical analyst students (n = 56) participated in the study. The study group analysed microscopic images of parasitological preparations made with the cellSens Dimension Software (Olympus) system. Eye activity parameters were obtained using a stationary, video-based Tobii TX300 eye tracker with a 3-ms temporal resolution. Eye movement activity parameters were analysed along with time parameters. The results of our studies showed that the eye-tracking method is a valuable tool for the analysis of parasitological preparations. Detailed quantitative and qualitative analysis confirmed that the EMMEs method may facilitate learning of the correct microscopic image scanning path. The analysis of our results allows us to conclude that the EMMEs method may be a valuable tool in the preparation of teaching materials in virtual microscopy. These teaching materials, generated with the use of eye tracking and prepared by experienced professionals in laboratory medicine, can be used during various trainings, simulations, and courses in medical parasitology and contribute to the verification of education results and professional skills and to the elimination of errors in parasitological diagnostics.
Affiliation(s)
- Przemysław Kołodziej
- Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland
- Correspondence: ; Tel.: +48-814-487-234
- Mariusz Dzieńkowski
- Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Jacek Bogucki
- Department of Organic Chemistry, Medical University of Lublin, 20-093 Lublin, Poland
- Janusz Kocki
- Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Marek Milosz
- Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Marcin Kocki
- Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Patrycja Reszka
- Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Wojciech Kocki
- Department of Architecture and Urban Planning, Lublin University of Technology, 20-618 Lublin, Poland
- Anna Bogucka-Kocka
- Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland
49
Neurophysiological Measurements in Higher Education: A Systematic Literature Review. INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION 2021. [DOI: 10.1007/s40593-021-00256-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
50
Alyaman M, Sobuh M, Zaid AA, Kenney L, Galpin AJ, Al-Taee MA. Towards automation of dynamic-gaze video analysis taking functional upper-limb tasks as a case study. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106041. [PMID: 33756186 DOI: 10.1016/j.cmpb.2021.106041] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 03/03/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Previous studies in motor control have yielded clear evidence that gaze behavior (where someone looks) quantifies the attention paid to performing actions. However, eliciting clinically meaningful results from gaze data has been done manually, rendering the process tedious, time-consuming, and highly subjective. This paper studies the feasibility of automating the coding of gaze data, taking functional upper-limb tasks as a case study. METHODS This is achieved by developing a new algorithm that codes the collected gaze data in three main stages: data preparation, data processing, and output generation. The input data, in the form of a crosshair and a gaze video, are converted into a 25 Hz frame-rate sequence. Keyframes and non-keyframes are then obtained and processed using a combination of image processing techniques and a fuzzy logic controller. In each trial, the location and duration of gaze fixation at the areas of interest (AOIs) are obtained. Once the gaze data are coded, they can be presented in different forms and formats, including a stacked color bar. RESULTS The results showed that the developed coding algorithm agrees closely with the manual coding method but is significantly faster and less prone to unsystematic errors. Statistical analysis showed that Cohen's kappa ranges from 0.705 to 1.0. Moreover, based on the intra-class correlation coefficient (ICC), the agreement index between the computerized and manual coding methods was found to be (i) 0.908 with 95% confidence interval (0.867, 0.937) for the anatomical hand and (ii) 0.923 with 95% confidence interval (0.888, 0.948) for the prosthetic hand. A Bland-Altman plot also showed that all data points are closely scattered around the mean. These findings confirm the validity and effectiveness of the developed coding algorithm.
CONCLUSION The developed algorithm demonstrates that it is feasible to automate the coding of gaze data, reduce the coding time, and improve the reliability of the coding process.
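The validation above hinges on Cohen's kappa between the automated and manual coders. As a minimal sketch of how such an agreement score is computed (the per-frame AOI labels below are hypothetical, not the study's data):

```python
def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    labels = set(coder1) | set(coder2)
    # Observed agreement: fraction of items both coders labeled identically
    po = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: product of each coder's marginal label frequencies
    pe = sum((coder1.count(l) / n) * (coder2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical per-frame AOI labels from manual vs. automated coding
manual    = ["hand", "hand", "object", "object", "hand", "background"]
automated = ["hand", "hand", "object", "hand",   "hand", "background"]
print(round(cohens_kappa(manual, automated), 3))  # → 0.714
```

Kappa near 1 indicates agreement well beyond chance, which is why the paper's 0.705-1.0 range supports replacing the manual coding step.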
Affiliation(s)
- Musa Alyaman
- Mechatronics Engineering Department, School of Engineering, The University of Jordan, Amman, 11942, Jordan
- Mohammad Sobuh
- Department of Orthotics & Prosthetics, School of Rehabilitation Sciences, The University of Jordan, Amman, 11942, Jordan
- Alaa Abu Zaid
- Mechatronics Engineering Department, School of Engineering, The University of Jordan, Amman, 11942, Jordan
- Laurence Kenney
- School of Health and Society, University of Salford, Manchester M5 4WT, UK
- Adam J Galpin
- School of Health and Society, University of Salford, Manchester M5 4WT, UK
- Majid A Al-Taee
- School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3BX, UK