1
Dehn KI, Maiello G, Hartmann FT, Morgenstern Y, Hawkins SJ, Offner T, Walter J, Hassenklöver T, Manzini I, Fleming RW. Human shape perception spontaneously discovers the biological origin of novel, but natural, stimuli. J R Soc Interface 2025; 22:20240931. PMID: 40393522; DOI: 10.1098/rsif.2024.0931.
Abstract
Humans excel at categorizing objects by shape. This facility involves identifying shape features that objects have in common with other members of their class and relies, at least in part, on semantic/cognitive constructs. For example, plants sprout branches, fish grow fins, shoes are moulded to our feet. Can humans parse shapes according to the processes that give shapes their key characteristics, even when such processes are hidden? To answer this, we investigated how humans perceive the shape of cells from the olfactory system of Xenopus laevis tadpoles. These objects are novel to most humans yet occur in nature and cluster into classes following their underlying biological function. We reconstructed three-dimensional (3D) cell models through 3D microscopy and photogrammetry, then conducted psychophysical experiments. Human participants performed two tasks: they arranged 3D-printed cell models by similarity and rated them along eight visual dimensions. Participants were highly consistent in their arrangements and ratings and spontaneously grouped stimuli to reflect the cell classes, unwittingly revealing the underlying processes shaping these forms. Our findings thus demonstrate that human perceptual organization mechanisms spontaneously parse the biological systematicities of never-before-seen, natural shapes. Integrating such human perceptual strategies into automated systems may enhance morphology-based analysis in biology and medicine.
Affiliation(s)
- Kira Isabel Dehn: Department of Psychology, Justus Liebig University Giessen, Giessen, Hessen, Germany
- Guido Maiello: School of Psychology, University of Southampton, Southampton, England, UK
- Frieder Tom Hartmann: Department of Psychology, Justus Liebig University Giessen, Giessen, Hessen, Germany
- Yaniv Morgenstern: Erasmus School of Social and Behavioural Sciences, Erasmus University Rotterdam, Rotterdam, Zuid-Holland, The Netherlands
- Sara Joy Hawkins: School of Biological Sciences, University of Southampton, Southampton, England, UK
- Thomas Offner: Georg August University of Göttingen, Göttingen, Lower Saxony, Germany
- Joshua Walter: Department of Animal Physiology and Molecular Biomedicine, Justus Liebig University Giessen, Giessen, Hessen, Germany
- Thomas Hassenklöver: Department of Animal Physiology and Molecular Biomedicine, Justus Liebig University Giessen, Giessen, Hessen, Germany
- Ivan Manzini: Department of Animal Physiology and Molecular Biomedicine, Justus Liebig University Giessen, Giessen, Hessen, Germany; Center for Mind, Brain and Behavior (CMBB), Marburg, Hessen, Germany
- Roland W Fleming: Department of Psychology, Justus Liebig University Giessen, Giessen, Hessen, Germany; Center for Mind, Brain and Behavior (CMBB), Marburg, Hessen, Germany
2
Alemanno M, Di Pompeo I, Marcaccio M, Canini D, Curcio G, Migliore S. From Gaze to Game: A Systematic Review of Eye-Tracking Applications in Basketball. Brain Sci 2025; 15:421. PMID: 40309899; PMCID: PMC12025553; DOI: 10.3390/brainsci15040421.
Abstract
Background/Objectives: Eye-tracking technology has gained increasing attention in sports science, as it provides valuable insights into visual attention, decision-making, and motor planning. This systematic review examines the application of eye-tracking technology in basketball, highlighting its role in analyzing cognitive and perceptual strategies in players, referees, and coaches. Methods: A systematic search was conducted following PRISMA guidelines. Studies published up until December 2024 were retrieved from PubMed and Web of Science using keywords related to basketball, eye tracking, and visual search. The inclusion criteria focused on studies using eye-tracking technology to assess athletes, referees, and coaches. A total of 1706 articles were screened, of which 19 met the eligibility criteria. Results: Eye-tracking studies have shown that expert basketball players exhibit longer quiet eye (QE) durations and more efficient gaze behaviors compared to novices. In high-pressure situations, skilled players maintain more stable QE characteristics, leading to better shot accuracy. Referees rely on efficient gaze strategies to make split-second decisions, although less experienced referees tend to neglect key visual cues. In coaching, eye-tracking studies suggest that guided gaze techniques improve tactical understanding in novice players but have limited effects on experienced athletes. Conclusions: Eye tracking is a powerful tool for studying cognitive and behavioral functioning in basketball, offering valuable insights for performance enhancement and training strategies. Future research should explore real-game settings using mobile eye trackers and integrate artificial intelligence to further refine gaze-based training methods.
Affiliation(s)
- Michela Alemanno: Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
- Ilaria Di Pompeo: Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
- Martina Marcaccio: Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
- Daniele Canini: Department of Movement, Human and Health Sciences, University of Rome “Foro Italico”, 00135 Rome, Italy
- Giuseppe Curcio: Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
- Simone Migliore: Department of Biotechnological and Applied Clinical Sciences, University of L’Aquila, 67100 L’Aquila, Italy
3
Tonishi T, Ishibashi F, Okusa K, Mochida K, Suzuki S. Effects of a training system that tracks the operator's gaze pattern during endoscopic submucosal dissection on hemostasis. World J Gastrointest Endosc 2025; 17:104315. PMID: 40125505; PMCID: PMC11923982; DOI: 10.4253/wjge.v17.i3.104315.
Abstract
BACKGROUND The early acquisition of skills required to perform hemostasis during endoscopy may be hindered by the lack of tools that allow assessments of the operator's viewpoint. Understanding the operator's viewpoint may facilitate skill acquisition. AIM To evaluate the effects of a training system using operator gaze patterns during gastric endoscopic submucosal dissection (ESD) on hemostasis. METHODS An eye-tracking system was developed to record the operator's viewpoints during gastric ESD, displaying the viewpoint as a circle. In phase 1, videos of three trainees' viewpoints were recorded. After reviewing these, trainees were recorded again in phase 2. The videos from both phases were retrospectively reviewed, and short clips were created to evaluate the hemostasis skills. Outcome measures included the time to recognize the bleeding point, the time to complete hemostasis, and the number of coagulation attempts. RESULTS Eight cases treated with ESD were reviewed, and 10 video clips of hemostasis were created. The time required to recognize the bleeding point during phase 2 was significantly shorter than that during phase 1 (8.3 ± 4.1 seconds vs 23.1 ± 19.2 seconds; P = 0.049). The time required to complete hemostasis during phase 1 and that during phase 2 were not significantly different (15.4 ± 6.8 seconds vs 31.9 ± 21.7 seconds; P = 0.056). Significantly fewer coagulation attempts were performed during phase 2 (1.8 ± 0.7 vs 3.2 ± 1.0; P = 0.004). CONCLUSION Short-term training did not reduce hemostasis completion time but significantly improved bleeding point recognition and reduced coagulation attempts. Learning from the operator's viewpoint can facilitate acquiring hemostasis skills during ESD.
Affiliation(s)
- Takao Tonishi: Department of Gastroenterology, International University of Health and Welfare Ichikawa Hospital, Chiba 272-0827, Japan; International University of Health and Welfare Graduate School of Medicine, Chiba 286-8686, Japan
- Fumiaki Ishibashi: Department of Gastroenterology, International University of Health and Welfare Ichikawa Hospital, Chiba 272-0827, Japan; International University of Health and Welfare Graduate School of Medicine, Chiba 286-8686, Japan
- Kosuke Okusa: Department of Data Science for Business Innovation, Chuo University, Tokyo 112-0003, Japan
- Kentaro Mochida: Department of Gastroenterology, International University of Health and Welfare Ichikawa Hospital, Chiba 272-0827, Japan; International University of Health and Welfare Graduate School of Medicine, Chiba 286-8686, Japan
- Sho Suzuki: Department of Gastroenterology, International University of Health and Welfare Ichikawa Hospital, Chiba 272-0827, Japan; International University of Health and Welfare Graduate School of Medicine, Chiba 286-8686, Japan
4
Byrne CA, Voute LC, Marshall JF. Interobserver agreement during clinical magnetic resonance imaging of the equine foot. Equine Vet J 2025; 57:406-418. PMID: 38946165; DOI: 10.1111/evj.14126.
Abstract
BACKGROUND Agreement between experienced observers for assessment of pathology and assessment confidence are poorly documented for magnetic resonance imaging (MRI) of the equine foot. OBJECTIVES To report interobserver agreement for pathology assessment and observer confidence for key anatomical structures of the equine foot during MRI. STUDY DESIGN Exploratory clinical study. METHODS Ten experienced observers (diploma or associate level) assessed 15 equine foot MRI studies acquired from clinical databases of 3 MRI systems. Observers graded pathology in seven key anatomical structures (Grade 1: no pathology, Grade 2: mild pathology, Grade 3: moderate pathology, Grade 4: severe pathology) and provided a grade for their confidence for each pathology assessment (Grade 1: high confidence, Grade 2: moderate confidence, Grade 3: limited confidence, Grade 4: no confidence). Interobserver agreement for the presence/absence of pathology and agreement for individual grades of pathology were assessed with Fleiss' kappa (k). Overall interobserver agreement for pathology was determined using Fleiss' kappa and Kendall's coefficient of concordance (KCC). The distribution of grading was also visualised with bubble charts. RESULTS Interobserver agreement for the presence/absence of pathology of individual anatomical structures was poor-to-fair, except for the navicular bone, which had moderate agreement (k = 0.52). Relative agreement for pathology grading (accounting for the ranking of grades) ranged from KCC = 0.19 for the distal interphalangeal joint to KCC = 0.70 for the navicular bone. Agreement was generally greatest at the extremes of pathology. Observer confidence in pathology assessment was generally moderate to high. MAIN LIMITATIONS Distribution of pathology varied between anatomical structures due to random selection of clinical MRI studies. Observers had most experience with low-field MRI.
CONCLUSIONS Even with experienced observers, there can be notable variation in the perceived severity of foot pathology on MRI for individual cases, which could be important in a clinical context.
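The agreement statistic this study relies on, Fleiss' kappa, compares observed pairwise agreement among raters against the agreement expected by chance. A minimal sketch of that computation, using a hypothetical ratings matrix rather than the study's data:

```python
# Illustrative sketch of Fleiss' kappa for multi-rater agreement.
# The ratings matrix below is hypothetical, not from the study.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j."""
    n_items = len(counts)
    n_raters = sum(counts[0])            # raters per item (assumed constant)
    n_cats = len(counts[0])
    # Proportion of all assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(n_cats)]
    # Per-item agreement: fraction of rater pairs that agree
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items           # mean observed agreement
    p_e = sum(p * p for p in p_j)        # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Four items, three raters, four pathology grades (columns = grade counts)
ratings = [
    [3, 0, 0, 0],   # all raters agree on grade 1
    [0, 2, 1, 0],
    [1, 1, 1, 0],   # complete disagreement
    [0, 0, 0, 3],   # all raters agree on grade 4
]
print(round(fleiss_kappa(ratings), 3))  # → 0.434
```

As in the study, agreement is highest when raters cluster at the extremes of the grading scale; the middle rows drag kappa down.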
Affiliation(s)
- Christian A Byrne: School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- Lance C Voute: School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- John F Marshall: School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
5
Hsieh SS, Holmes III DR, Carter RE, Tan N, Inoue A, Yalon M, Gong H, Sudhir Pillai P, Leng S, Yu L, Fidler JL, Cook DA, McCollough CH, Fletcher JG. Peripheral liver metastases are more frequently missed than central metastases in contrast-enhanced CT: insights from a 25-reader performance study. Abdom Radiol (NY) 2025; 50:668-676. PMID: 39162799; PMCID: PMC11794030; DOI: 10.1007/s00261-024-04520-4.
Abstract
PURPOSE Subtle liver metastases may be missed in contrast enhanced CT imaging. We determined the impact of lesion location and conspicuity on metastasis detection using data from a prior reader study. METHODS In the prior reader study, 25 radiologists examined 40 CT exams each and circumscribed all suspected hepatic metastases. CT exams were chosen to include a total of 91 visually challenging metastases. The detectability of a metastasis was defined as the fraction of radiologists that circumscribed it. A conspicuity index was calculated for each metastasis by multiplying metastasis diameter by its contrast, defined as the difference between the average of a circular region within the metastasis and the average of the surrounding circular region of liver parenchyma. The effects of distance from liver edge and of conspicuity index on metastasis detectability were measured using multivariable linear regression. RESULTS The median metastasis was 1.4 cm from the edge (interquartile range [IQR], 0.9-2.1 cm). Its diameter was 1.2 cm (IQR, 0.9-1.8 cm), and its contrast was 38 HU (IQR, 23-68 HU). An increase of one standard deviation in conspicuity index was associated with a 6.9% increase in detectability (p = 0.008), whereas an increase of one standard deviation in distance from the liver edge was associated with a 5.5% increase in detectability (p = 0.03). CONCLUSION Peripheral liver metastases were missed more frequently than central liver metastases, with this effect depending on metastasis size and contrast.
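The conspicuity index described here is simply lesion diameter multiplied by lesion contrast, and the regression coefficients are reported per standard deviation of each predictor. A minimal sketch of that computation (lesion values invented for illustration, loosely echoing the reported medians and IQRs):

```python
# Conspicuity index = diameter (cm) x contrast (HU), as defined in the abstract.
# Lesion values below are hypothetical, not the study data.
from statistics import mean, stdev

lesions = [
    {"diameter_cm": 1.2, "contrast_hu": 38},
    {"diameter_cm": 0.9, "contrast_hu": 23},
    {"diameter_cm": 1.8, "contrast_hu": 68},
]

conspicuity = [l["diameter_cm"] * l["contrast_hu"] for l in lesions]
mu, sd = mean(conspicuity), stdev(conspicuity)
# Standardized values: the regression reports effects per one-SD change
z_scores = [(c - mu) / sd for c in conspicuity]

print([round(c, 1) for c in conspicuity])  # → [45.6, 20.7, 122.4]
```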
Affiliation(s)
- Akitoshi Inoue: Mayo Clinic, Rochester, USA; Shiga University of Medical Science, Ōtsu, Japan
- Parvathy Sudhir Pillai: Mayo Clinic, Rochester, USA; The University of Texas MD Anderson Cancer Center, Houston, USA
6
Anikina A, Ibragimova D, Mustafaev T, Mello-Thoms C, Ibragimov B. Prediction of radiological decision errors from longitudinal analysis of gaze and image features. Artif Intell Med 2025; 160:103051. PMID: 39708677; DOI: 10.1016/j.artmed.2024.103051.
Abstract
Medical imaging, particularly radiography, is an indispensable part of diagnosing many chest diseases. Final diagnoses are made by radiologists based on images, but the decision-making process is always associated with a risk of incorrect interpretation. Incorrectly interpreted data can lead to delays in treatment, a prescription of inappropriate therapy, or even a completely missed diagnosis. In this context, our study aims to determine whether it is possible to predict diagnostic errors made by radiologists using eye-tracking technology. For this purpose, we asked 4 radiologists with different levels of experience to analyze 1000 images covering a wide range of chest diseases. Using eye-tracking data, we calculated the radiologists' gaze fixation points and generated feature vectors based on this data to describe the radiologists' gaze behavior during image analysis. Additionally, we emulated the process of revealing the read images following radiologists' gaze data to create a more comprehensive picture of their analysis. Then we applied a recurrent neural network to predict diagnostic errors. Our results showed a 0.7755 ROC AUC score, demonstrating a significant potential for this approach in enhancing the accuracy of diagnostic error recognition.
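The reported 0.7755 ROC AUC measures how well the model's predicted error probabilities rank erroneous reads above correct ones: it equals the probability that a randomly chosen error case receives a higher score than a randomly chosen correct case. A self-contained sketch of that rank-based computation, on toy scores rather than the study's model output:

```python
# ROC AUC via the Mann-Whitney pairwise-ranking formulation; toy data only.
def roc_auc(labels, scores):
    """labels: 1 = diagnostic error, 0 = correct read; scores: predicted error probability."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count (positive, negative) pairs ranked correctly; ties count half
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # ≈ 0.917 (11 of 12 pairs ranked correctly)
```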
Affiliation(s)
- Anna Anikina: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tamerlan Mustafaev: Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Bulat Ibragimov: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
7
Baetzner AS, Hill Y, Roszipal B, Gerwann S, Beutel M, Birrenbach T, Karlseder M, Mohr S, Salg GA, Schrom-Feiertag H, Frenkel MO, Wrzus C. Mass Casualty Incident Training in Immersive Virtual Reality: Quasi-Experimental Evaluation of Multimethod Performance Indicators. J Med Internet Res 2025; 27:e63241. PMID: 39869892; PMCID: PMC11811659; DOI: 10.2196/63241.
Abstract
BACKGROUND Immersive virtual reality (iVR) has emerged as a training method to prepare medical first responders (MFRs) for mass casualty incidents (MCIs) and disasters in a resource-efficient, flexible, and safe manner. However, systematic evaluations and validations of potential performance indicators for virtual MCI training are still lacking. OBJECTIVE This study aimed to investigate whether different performance indicators based on visual attention, triage performance, and information transmission can be effectively extended to MCI training in iVR by testing if they can discriminate between different levels of expertise. Furthermore, the study examined the extent to which such objective indicators correlate with subjective performance assessments. METHODS A total of 76 participants (mean age 25.54, SD 6.01 y; 45/76, 59% male) with different medical expertise (MFRs: paramedics and emergency physicians; non-MFRs: medical students, in-hospital nurses, and other physicians) participated in 5 virtual MCI scenarios of varying complexity in a randomized order. Tasks involved assessing the situation, triaging virtual patients, and transmitting relevant information to a control center. Performance indicators included eye-tracking-based visual attention, triage accuracy, triage speed, information transmission efficiency, and self-assessment of performance. Expertise was determined based on the occupational group (39/76, 51% MFRs vs 37/76, 49% non-MFRs) and a knowledge test with patient vignettes. RESULTS Triage accuracy (d=0.48), triage speed (d=0.42), and information transmission efficiency (d=1.13) differentiated significantly between MFRs and non-MFRs. In addition, higher triage accuracy was significantly associated with higher triage knowledge test scores (Spearman ρ=0.40). Visual attention was not significantly associated with expertise. Furthermore, subjective performance was not correlated with any other performance indicator. 
CONCLUSIONS iVR-based MCI scenarios proved to be a valuable tool for assessing the performance of MFRs. The results suggest that iVR could be integrated into current MCI training curricula to provide frequent, objective, and potentially (partly) automated performance assessments in a controlled environment. In particular, performance indicators, such as triage accuracy, triage speed, and information transmission efficiency, capture multiple aspects of performance and are recommended for integration. While the examined visual attention indicators did not function as valid performance indicators in this study, future research could further explore visual attention in MCI training and examine other indicators, such as holistic gaze patterns. Overall, the results underscore the importance of integrating objective indicators to enhance trainers' feedback and provide trainees with guidance on evaluating and reflecting on their own performance.
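The effect sizes above (e.g. d=1.13 for information transmission efficiency) are Cohen's d values: the difference between group means divided by the pooled standard deviation. A minimal sketch with invented group scores, not the study's measurements:

```python
# Cohen's d with pooled (sample) standard deviation; values are invented.
from statistics import mean, variance

def cohens_d(a, b):
    na, nb = len(a), len(b)
    # Pooled variance weights each group's sample variance by its df
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

mfr     = [0.90, 0.85, 0.95, 0.80, 0.88]   # e.g. triage accuracy, MFRs
non_mfr = [0.75, 0.70, 0.85, 0.65, 0.78]   # non-MFRs
print(round(cohens_d(mfr, non_mfr), 2))  # → 1.94
```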
Affiliation(s)
- Anke Sabine Baetzner: Institute of Sports and Sports Sciences, Heidelberg University, Heidelberg, Germany
- Yannick Hill: Department of Human Movement Sciences, Faculty of Behavioral and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Institute of Brain and Behaviour Amsterdam, Amsterdam, Netherlands; Lyda Hill Institute for Human Resilience, University of Colorado Colorado Springs, Colorado Springs, CO, United States
- Solène Gerwann: Institute of Sports and Sports Sciences, Heidelberg University, Heidelberg, Germany
- Matthias Beutel: Institute of Sports and Sports Sciences, Heidelberg University, Heidelberg, Germany
- Tanja Birrenbach: Department of Emergency Medicine, Inselspital University Hospital, University of Bern, Bern, Switzerland
- Stefan Mohr: Medical Faculty, Heidelberg University, Heidelberg, Germany; Department of Anesthesiology, University Hospital Heidelberg, Heidelberg, Germany
- Gabriel Alexander Salg: Medical Faculty, Heidelberg University, Heidelberg, Germany; General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Heidelberg, Germany
- Marie Ottilie Frenkel: Institute of Sports and Sports Sciences, Heidelberg University, Heidelberg, Germany; Psychology in Health Care, Faculty Health, Safety, Society, Furtwangen University, Freiburg, Germany
- Cornelia Wrzus: Psychological Institute and Network Aging Research, Heidelberg University, Heidelberg, Germany
8
Koninckx PR, Ussia A, Stepanian A, Saridogan E, Malzoni M, Miller CE, Keckstein J, Wattiez A, Page G, Bosteels J, Lesaffre E, Adamyan L. The Evidence-Based Medicine Management of Endometriosis Should Be Updated for the Limitations of Trial Evidence, the Multivariability of Decisions, Collective Experience, Heuristics, and Bayesian Thinking. J Clin Med 2025; 14:248. PMID: 39797330; PMCID: PMC11720984; DOI: 10.3390/jcm14010248.
Abstract
Background/Objectives: The diagnosis and treatment of endometriosis should be based on the best available evidence. Emphasising the risk of bias, the pyramid of evidence has the double-blind, randomised controlled trial and its meta-analyses on top. After the grading of all evidence by a group of experts, clinical guidelines are formulated using well-defined rules. Unfortunately, the impact of evidence-based medicine (EBM) on the management of endometriosis has been limited and, possibly, occasionally harmful. Methods: For this research, the inherent problems of diagnosis and treatment were discussed by a working group of endometriosis and EBM specialists, and the relevant literature was reviewed. Results: Most clinical decisions are multivariable, but randomized controlled trials (RCTs) cannot handle multivariability because adopting a factorial design would require prohibitively large cohorts and create randomization problems. Single-factor RCTs represent a simplification of the clinical reality. Heuristics and intuition are both important for training and decision-making in surgery; experience, Bayesian thinking, and learning from the past are seldom considered. Black swan events or severe complications and accidents are marginally discussed in EBM since trial evidence is limited for rare medical events. Conclusions: The limitations of EBM for managing endometriosis and the complementarity of multivariability, heuristics, Bayesian thinking, and experience should be recognized. Especially in surgery, the value of training and heuristics, as well as the importance of documenting the collective experience and of the prevention of complications, are fundamental. These additions to EBM and guidelines will be useful in changing the Wild West mentality of surgery resulting from the limited scope of EBM data because of the inherent multivariability, combined with the low number of similar interventions.
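The claim that factorial RCTs would require "prohibitively large cohorts" follows from simple arithmetic: a full factorial design with k binary treatment factors needs 2^k arms. A sketch of that growth, with an assumed (illustrative) per-arm enrolment:

```python
# Arm count and total enrolment for a full factorial RCT with k binary
# factors; the 100 patients per arm is an illustrative assumption.
patients_per_arm = 100

for k in range(1, 6):
    arms = 2 ** k
    print(f"{k} factors -> {arms} arms -> {arms * patients_per_arm} patients")
```

With only five binary decision factors the design already needs 32 arms and, at 100 patients per arm, 3200 participants, which is the multivariability problem the authors describe.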
Affiliation(s)
- Philippe R. Koninckx: Departments of Obstetrics and Gynecology, Katholieke University Leuven, 3000 Leuven, Belgium; University of Oxford, Oxford OX1 2JD, UK; University Cattolica del Sacro Cuore, 00168 Rome, Italy; Moscow State University, 119991 Moscow, Russia
- Assia Stepanian: Academia of Women’s Health and Endoscopic Surgery, Atlanta, GA 30328, USA
- Ertan Saridogan: Elizabeth Garrett Anderson Institute for Women’s Health, University College London, London WC1E 6AU, UK
- Charles E. Miller: Department of Clinical Sciences, Rosalind Franklin University of Medicine and Science, Chicago, IL 60064, USA; Department of Minimally Invasive Gynecologic Surgery, Advocate Lutheran General Hospital, Park Ridge, IL 60068, USA
- Jörg Keckstein: Endometriosis Centre, Dres. Keckstein, 9500 Villach, Austria; Faculty of Medicine, University Ulm, 89081 Ulm, Germany
- Arnaud Wattiez: Departments of Obstetrics and Gynecology, Faculty of Medicine, Latifa Hospital, Dubai 9115, United Arab Emirates; Departments of Obstetrics and Gynecology, University of Strasbourg, 67081 Strasbourg, France
- Geert Page: Coordinator Clinical Guidance Project VVOG, 9100 Sint-Niklaas, Belgium
- Jan Bosteels: Departments of Obstetrics and Gynecology, AZ Imelda, 2820 Bonheiden, Belgium; Department of Human Structure and Repair, University of Ghent, 9000 Ghent, Belgium
- Leila Adamyan: Department of Operative Gynecology, Federal State Budget Institution V. I. Kulakov Research Centre for Obstetrics, Gynecology, and Perinatology, Ministry of Health of the Russian Federation, 117997 Moscow, Russia; Department of Reproductive Medicine and Surgery, Moscow State University of Medicine and Dentistry, 127473 Moscow, Russia
9
Almashaikhi J, Elkhodary HM, Bhadila GY, Felemban OM, Al Tuwirqi AA. Diagnostic accuracy and radiographic interpretation of pre-eruptive intra-coronal resorption among dental practitioners using eye-tracking technology. Digit Health 2025; 11:20552076251315620. PMID: 39949845; PMCID: PMC11822827; DOI: 10.1177/20552076251315620.
Abstract
Background Pre-eruptive intra-coronal resorption (PEIR) is a condition in which unerupted teeth exhibit coronal radiolucency consistent with resorptive loss of coronal tooth structure. These lesions are discovered incidentally on routine radiographs. Aim To measure the radiographic interpretation and diagnostic accuracy of PEIR among dental practitioners at King Abdulaziz University Dental Hospital using eye-tracking technology. Methods In this cross-sectional study, 125 interns, general dentists, and postgraduate residents examined five panoramic radiographs: one showing an impaction and the rest showing PEIR lesions of different severities. PEIR recognition was assessed using a validated questionnaire uploaded to an eye-tracking device (Sensomotoric Instruments SMI). Results The findings revealed an association between the severity of the PEIR lesion and the detection of the affected teeth. As the severity increased, the participants were more able to identify the affected teeth, and the percentage of overlooked lesions decreased. The dentists' level of education and years of clinical experience influenced the diagnostic accuracy and radiographic interpretation of the PEIR lesions. Conclusions The diagnostic accuracy and radiographic interpretation of PEIR lesions were affected by participants' level of education and years of clinical experience. Based on this study, PEIR lesions may remain undetected until they reach advanced stages.
Affiliation(s)
- Jamila Almashaikhi: Pediatric Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Heba M. Elkhodary: Pediatric Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia; Department of Pedodontics and Oral Health, Faculty of Dental Medicine for Girls, Al Azhar University, Cairo, Egypt
- Ghalia Y. Bhadila: Pediatric Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Osama M. Felemban: Pediatric Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Amani A. Al Tuwirqi: Pediatric Dentistry Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
10
Lorenz J, Zevano J, Otto N, Schneider B, Papan C, Missler M, Darici D. Looking at Social Interactions in Medical Education with Dual Eye-Tracking Technology: A Scoping Review. MedEdPublish 2024; 14:215. PMID: 39931308; PMCID: PMC11809160; DOI: 10.12688/mep.20577.2.
Abstract
Purpose Social interactions are fundamental to effective medical practice, yet assessing these complex dynamics in educational settings remains challenging. This review critically examines the emerging use of dual eye-tracking technology as a novel method to quantify, analyze, and enhance social interactions within medical education contexts. Materials and Methods We performed a scoping review of the literature, focusing on studies that utilized dual eye-tracking within medical education contexts. Our search included multiple databases and journals. We extracted information on technical setups, areas of application, participant characteristics, dual eye-tracking metrics, and main findings. Results Ten studies published between 2012 and 2021 met the inclusion criteria, with 90% utilizing dual screen-based and 10% dual mobile eye-tracking. All studies were conducted in the context of surgical training, primarily focusing on laparoscopic surgery. We identified two main applications of dual eye-tracking: (1) as an educational intervention to improve collaboration, and (2) as a diagnostic tool to identify interaction patterns that were associated with learning. Key metrics included joint visual attention, gaze delay, and joint mental effort. Conclusion Dual eye-tracking offers a promising technology for enhancing medical education by providing high-resolution, real-time data on social interactions. However, current research is limited by small sample sizes, outdated technology, and a narrow focus on surgical contexts. We discuss the broader implications and potential for medical education research and practice.
Affiliation(s)
- Nils Otto
- Institute for Anatomy and Neurobiology, University of Münster, Münster, Germany
- Cihan Papan
- Institute for Hygiene and Public Health, University Hospital Bonn, Bonn, Germany
- Markus Missler
- Institute for Anatomy and Neurobiology, University of Münster, Münster, Germany
- Dogus Darici
- Institute for Anatomy and Neurobiology, University of Münster, Münster, Germany

11
Lopes A, Ward AD, Cecchini M. Eye tracking in digital pathology: A comprehensive literature review. J Pathol Inform 2024; 15:100383. [PMID: 38868488 PMCID: PMC11168484 DOI: 10.1016/j.jpi.2024.100383] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 04/28/2024] [Accepted: 05/14/2024] [Indexed: 06/14/2024] Open
Abstract
Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology from conventional microscopes to digital whole slide images allows new research to be conducted and information to be learned with regard to pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately creating stronger and more proficient generations of pathologists to come. The goal of this review on eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using 'pathology' AND 'eye tracking' synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy exists. The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices how to search as an expert remains unclear. In this article, the main challenges and prospects of eye tracking in pathology are briefly discussed along with their implications for the field.
Affiliation(s)
- Alana Lopes
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Aaron D. Ward
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Matthew Cecchini
- Department of Pathology and Laboratory Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada

12
Vrzáková H, Tapiala J, Iso-Mustajärvi M, Timonen T, Dietz A. Estimating Cognitive Workload Using Task-Related Pupillary Responses in Simulated Drilling in Cochlear Implantation. Laryngoscope 2024; 134:5087-5095. [PMID: 38989899 DOI: 10.1002/lary.31612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2024] [Revised: 05/31/2024] [Accepted: 06/17/2024] [Indexed: 07/12/2024]
Abstract
OBJECTIVES Training in temporal bone drilling requires more than mastering technical skills with the drill. Skills such as visual imagery, bimanual dexterity, and stress management need to be mastered along with precise knowledge of anatomy. In otorhinolaryngology, these psychomotor skills underlie performance in the drilling of the temporal bone for access to the inner ear in cochlear implant surgery. However, little is known about how psychomotor skills and workload management impact practitioners' continuous and overall performance. METHODS To understand how a practitioner's workload and performance unfold over time, we examine task-evoked pupillary responses (TEPR) of 22 medical students who performed transmastoid-posterior tympanotomy (TMPT) and removal of the bony overhang of the round window niche in a 3D-printed model of the temporal bone. We investigate how students' TEPR metrics (Average Pupil Size [APS], Index of Pupil Activity [IPA], and Low/High Index of Pupillary Activity [LHIPA]) and time spent in drilling phases correspond to performance in key drilling phases. RESULTS All TEPR measures revealed significant differences between key drilling phases that corresponded to the anticipated workload. Enlarging the facial recess lasted significantly longer than other phases. IPA captured a significant increase in workload during thinning of the posterior canal wall, while APS revealed increased workload during drilling of the bony overhang. CONCLUSION Our findings contribute to contemporary competency-based medical residency programs, in which objective and continuous monitoring allows participants' progress in expertise acquisition to be tracked. Laryngoscope, 134:5087-5095, 2024.
Affiliation(s)
- Hana Vrzáková
- School of Computing, University of Eastern Finland, Joensuu, Finland
- Jesse Tapiala
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Tomi Timonen
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland

13
Kok EM, Niehorster DC, van der Gijp A, Rutgers DR, Auffermann WF, van der Schaaf M, Kester L, van Gog T. The effects of gaze-display feedback on medical students' self-monitoring and learning in radiology. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2024; 29:1689-1710. [PMID: 38555550 PMCID: PMC11549167 DOI: 10.1007/s10459-024-10322-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Accepted: 03/03/2024] [Indexed: 04/02/2024]
Abstract
Self-monitoring is essential for effectively regulating learning, but difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves posttest performance and self-monitoring of medical students who learn to detect nodules in radiographs. In 78 medical students, we investigated the effects of (1) search displays, showing which part of the image was searched by the participant, and (2) decision displays, showing which parts of the image received prolonged attention. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no-feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for posttest performance, monitoring accuracy (absolute difference between participants' estimated and actual test performance), completeness of viewing behavior, self-efficacy, or perceived competence. Participants receiving search displays reported greater feedback utilization than participants receiving decision displays, and also found the feedback more useful when the gaze data displayed were precise and accurate. As the completeness of search was not related to posttest performance, search displays might not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address whether and when gaze displays can support learning.
Affiliation(s)
- Ellen M Kok
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Anouk van der Gijp
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Dirk R Rutgers
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marieke van der Schaaf
- Utrecht Center for Research and Development in Health Professions Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Liesbeth Kester
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Tamara van Gog
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands

14
Specian Junior FC, Litchfield D, Sandars J, Cecilio-Fernandes D. Use of eye tracking in medical education. MEDICAL TEACHER 2024; 46:1502-1509. [PMID: 38382474 DOI: 10.1080/0142159x.2024.2316863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 02/06/2024] [Indexed: 02/23/2024]
Abstract
Eye tracking has become increasingly applied in medical education research for studying the cognitive processes that occur during the performance of a task, such as image interpretation and surgical skills development. However, analysis and interpretation of the large amount of data obtained by eye tracking can be confusing. In this article, our intention is to clarify the analysis and interpretation of the data obtained from eye tracking. Understanding the relationship between eye-tracking metrics (such as gaze, pupil and blink rate) and cognitive processes (such as visual attention, perception, memory and cognitive workload) is essential. The importance of calibration and how the limitations of eye tracking can be overcome are also highlighted.
Affiliation(s)
- John Sandars
- Health Research Institute, Edge Hill University, Ormskirk, UK
- Dario Cecilio-Fernandes
- Department of Medical Psychology and Psychiatry, School of Medical Sciences, University of Campinas, Campinas, São Paulo, Brazil

15
Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2024; 30:7299-7309. [PMID: 39255163 DOI: 10.1109/tvcg.2024.3456197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/12/2024]
Abstract
Neuron tracing, alternatively referred to as neuron reconstruction, is the procedure for extracting the digital representation of the three-dimensional neuronal morphology from stacks of microscopic images. Achieving accurate neuron tracing is critical for profiling the neuroanatomical structure at single-cell level and analyzing the neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions for neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages the power of extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we have defined a set of interactors for controllable and efficient interactions for neuron tracing in an immersive environment. We have also developed a GPU-accelerated automatic tracing algorithm that can generate updated neuron reconstructions in real time. In addition, we have built a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented on one virtual reality (VR) headset and one augmented reality (AR) headset, with satisfactory results. We also conducted two user studies, which demonstrated the effectiveness of the interactors and the efficiency of our method in comparison with other approaches for neuron tracing.
16
Šoková B, Baránková M, Halamová J. Fixation patterns in pairs of facial expressions-preferences of self-critical individuals. PeerJ Comput Sci 2024; 10:e2413. [PMID: 39650388 PMCID: PMC11623007 DOI: 10.7717/peerj-cs.2413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2024] [Accepted: 09/23/2024] [Indexed: 12/11/2024]
Abstract
So far, studies have revealed some differences in how long self-critical individuals fixate on specific facial expressions and difficulties in recognising these expressions. However, the research has also indicated a need to distinguish between the different forms of self-criticism (inadequate or hated self), the key underlying factor in psychopathology. Therefore, the aim of the current research was to explore fixation patterns for all seven primary emotions (happiness, sadness, fear, disgust, contempt, anger, and surprise) and the neutral facial expression in relation to level of self-criticism by presenting random facial stimuli in the right or left visual field. Based on the previous studies, two groups were defined, and the patterns of fixations and eye movements were compared (high and low inadequate and hated self). The research sample consisted of 120 adult participants, 60 women and 60 men. We used the Forms of Self-Criticizing and Self-Reassuring Scale to measure self-criticism. As stimuli for the eye-tracking task, we used facial expressions from the Umeå University Database of Facial Expressions. Eye movements were recorded using the Tobii X2 eye tracker. Results showed that in highly self-critical participants with inadequate self, time to first fixation and duration of first fixation were shorter. Respondents with higher inadequate self also exhibited a sustained pattern in fixations (total fixation duration, total fixation duration ratio, and average fixation duration): fixation time increased as self-criticism increased, indicating heightened attention to facial expressions. On the other hand, individuals with high hated self showed increased total fixation duration and fixation count for emotions presented in the right visual field but did not differ in initial fixation metrics in comparison with the high inadequate self group. These results suggest that the two forms of self-criticism, inadequate self and hated self, may function as distinct mechanisms in relation to emotional processing, with implications for their role as potential transdiagnostic markers of psychopathology based on fixation eye-tracking metrics.
Affiliation(s)
- Bronislava Šoková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Martina Baránková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Júlia Halamová
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia

17
Worley L, Colley MA, Rodriguez CC, Redden D, Logullo D, Pearson W. Enhancing Imaging Anatomy Competency: Integrating Digital Imaging and Communications in Medicine (DICOM) Viewers Into the Anatomy Lab Experience. Cureus 2024; 16:e68878. [PMID: 39376869 PMCID: PMC11457894 DOI: 10.7759/cureus.68878] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2024] [Accepted: 09/06/2024] [Indexed: 10/09/2024] Open
Abstract
INTRODUCTION Radiologic interpretation is a skill necessary for all physicians to provide quality care for their patients. However, some medical students are not exposed to Digital Imaging and Communications in Medicine (DICOM) imaging manipulation until their third year during clinical rotations. The objective of this study is to evaluate how medical students exposed to DICOM manipulation perform on identifying anatomical structures compared to students who were not exposed. METHODS This was a cross-sectional cohort study with 19 medical student participants organized into a test and control group. The test group consisted of first-year students who had been exposed to a new imaging anatomy curriculum (n = 9). The control group consisted of second-year students who had not had this experience (n = 10). The outcomes measured included quiz performance, self-reported confidence levels, and eye-tracking data. RESULTS Students in the test group performed better on the quiz compared to students in the control group (p = 0.03). Confidence between the test and control groups was not significantly different (p = 0.16), though a moderate to large effect size difference was noted (Hedges' g = 0.75). Saccade peak velocity and fixation duration between the groups were not significantly different (p = 0.29, p = 0.77), though a moderate effect size improvement was noted in saccade peak velocity for the test group (Hedges' g = 0.49). CONCLUSION The results from this study suggest that the early introduction of DICOM imaging into a medical school curriculum does impact students' performance when asked to identify anatomical structures on a standardized quiz.
Affiliation(s)
- Luke Worley
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA
- Maria A Colley
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA
- David Redden
- Research and Biostatistics, Edward Via College of Osteopathic Medicine, Auburn, USA
- Drew Logullo
- Biomedical Affairs and Research, Edward Via College of Osteopathic Medicine, Auburn, USA
- William Pearson
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA

18
Šola HM, Qureshi FH, Khawaja S. Predicting Behaviour Patterns in Online and PDF Magazines with AI Eye-Tracking. Behav Sci (Basel) 2024; 14:677. [PMID: 39199073 PMCID: PMC11351346 DOI: 10.3390/bs14080677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2024] [Revised: 07/25/2024] [Accepted: 07/31/2024] [Indexed: 09/01/2024] Open
Abstract
This study aims to improve college magazines, making them more engaging and user-friendly. We combined eye-tracking technology with artificial intelligence to accurately predict consumer behaviours and preferences. Our analysis included three college magazines, in both online and PDF format. We evaluated user experience using neuromarketing eye-tracking AI prediction software trained on a large consumer neuroscience dataset of eye-tracking recordings from 180,000 participants, collected with Tobii X2 30 equipment and encompassing over 100 billion data points across 15 consumer contexts. Analyses were conducted with R programming v. 2023.06.0+421 and advanced SPSS statistics v. 27, IBM (ANOVA, Welch's Two-Sample t-test, and Pearson's correlation). Our research demonstrated the potential of modern eye-tracking AI technologies to provide insights into various types of attention, including focus, engagement, cognitive demand, and clarity. The reported accuracy of our findings, at 97-99%, underscores the reliability and robustness of the approach. This study also highlights the potential for future research to explore automated datasets, enhancing reliability and applicability across various fields.
Affiliation(s)
- Hedda Martina Šola
- Oxford Centre For Applied Research and Entrepreneurship (OxCARE), Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK
- Institute for Neuromarketing & Intellectual Property, Jurja Ves III spur no 4, 10000 Zagreb, Croatia
- Sarwar Khawaja
- Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK

19
Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024; 28:3597-3612. [PMID: 38421842 PMCID: PMC11262011 DOI: 10.1109/jbhi.2024.3371893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/02/2024]
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that ML can potentially impact significantly: eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates (1) the type of eye-tracking equipment used and how the equipment aligns with study aims; (2) the software required to record and process eye-tracking data, which often requires user interface development, and controller command and voice recording; and (3) the ML methodology utilized depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens the applicability of ML in radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
20
Eminaga O, Abbas M, Kunder C, Tolkach Y, Han R, Brooks JD, Nolley R, Semjonow A, Boegemann M, West R, Long J, Fan RE, Bettendorf O. Critical evaluation of artificial intelligence as a digital twin of pathologists for prostate cancer pathology. Sci Rep 2024; 14:5284. [PMID: 38438436 PMCID: PMC10912767 DOI: 10.1038/s41598-024-55228-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 02/21/2024] [Indexed: 03/06/2024] Open
Abstract
Prostate cancer pathology plays a crucial role in clinical management but is time-consuming. Artificial intelligence (AI) shows promise in detecting prostate cancer and grading patterns. We tested an AI-based digital twin of a pathologist, vPatho, on 2603 histological images of prostate tissue stained with hematoxylin and eosin. We analyzed various factors influencing tumor grade discordance between the vPatho system and six human pathologists. Our results demonstrated that vPatho achieved comparable performance in prostate cancer detection and tumor volume estimation, as reported in the literature. The concordance levels between vPatho and human pathologists were examined. Notably, moderate to substantial agreement was observed in identifying complementary histological features such as ductal, cribriform, nerve, blood vessel, and lymphocyte infiltration. However, concordance in tumor grading decreased when applied to prostatectomy specimens (κ = 0.44) compared to biopsy cores (κ = 0.70). Adjusting the decision threshold for the secondary Gleason pattern from 5 to 10% improved the concordance level between pathologists and vPatho for tumor grading on prostatectomy specimens (κ from 0.44 to 0.64). Potential causes of grade discordance included the vertical extent of tumors toward the prostate boundary and the proportions of slides with prostate cancer. Gleason pattern 4 was particularly associated with this population. Notably, the grade according to vPatho was not specific to any of the six pathologists involved in routine clinical grading. In conclusion, our study highlights the potential utility of AI in developing a digital twin for a pathologist. This approach can help uncover limitations in AI adoption and the practical application of the current grading system for prostate cancer pathology.
Affiliation(s)
- Mahmoud Abbas
- Department of Pathology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Christian Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, USA
- Yuri Tolkach
- Department of Pathology, Cologne University Hospital, Cologne, Germany
- Ryan Han
- Department of Computer Science, Stanford University, Stanford, USA
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Rosalie Nolley
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Axel Semjonow
- Department of Urology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Martin Boegemann
- Department of Urology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Robert West
- Department of Pathology, Cologne University Hospital, Cologne, Germany
- Jin Long
- Department of Pediatrics, Stanford University School of Medicine, Stanford, USA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA

21
Ahmadi N, Sasangohar F, Yang J, Yu D, Danesh V, Klahn S, Masud F. Quantifying Workload and Stress in Intensive Care Unit Nurses: Preliminary Evaluation Using Continuous Eye-Tracking. HUMAN FACTORS 2024; 66:714-728. [PMID: 35511206 DOI: 10.1177/00187208221085335] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
OBJECTIVE (1) To assess the mental workload of intensive care unit (ICU) nurses in 12-hour working shifts (days and nights) using eye movement data; (2) to explore the impact of stress on the ocular metrics of nurses performing patient care in the ICU. BACKGROUND Prior studies have employed workload scoring systems or accelerometer data to assess ICU nurses' workload. This is the first naturalistic attempt to explore nurses' mental workload using eye movement data. METHODS Tobii Pro Glasses 2 eye-tracking and Empatica E4 devices were used to collect eye movement and physiological data from 15 nurses during 12-hour shifts (252 observation hours). We used mixed-effect models and an ordinal regression model with a random effect to analyze the changes in eye movement metrics during high-stress episodes. RESULTS While the cadence and characteristics of nurse workload can vary between day and night shifts, no significant difference in eye movement values was detected. However, eye movement metrics showed that the initial handoff period of nursing shifts carries a higher mental workload than other times. Analysis of ocular metrics showed that stress is positively associated with an increase in the number of eye fixations and gaze entropy, but negatively correlated with the duration of saccades and pupil diameter. CONCLUSION Eye-tracking technology can be used to assess the temporal variation of stress and associated changes in mental workload in the ICU environment. A real-time system could be developed to monitor stress and workload and to support intervention development.
Affiliation(s)
- Nima Ahmadi
- Center for Outcomes Research, Houston Methodist, Houston, TX, USA
- Farzan Sasangohar
- Center for Outcomes Research, Houston Methodist, Houston, TX, USA and Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA
- Jing Yang
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Denny Yu
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Valerie Danesh
- Baylor Scott & White Health, Center for Applied Health Research, Dallas, TX, USA and University of Texas at Austin, School of Nursing, Austin, TX, USA
- Steven Klahn
- Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA
- Faisal Masud
- Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA

22
Kavuri A, Das M. Examining the Influence of Digital Phantom Models in Virtual Imaging Trials for Tomographic Breast Imaging. ARXIV 2024:arXiv:2402.00812v1. [PMID: 38351932 PMCID: PMC10862940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/19/2024]
Abstract
Purpose Digital phantoms are one of the key components of virtual imaging trials (VITs), which aim to assess and optimize new medical imaging systems and algorithms. However, these phantoms vary in their voxel resolution, appearance, and structural details. This study aims to examine whether and how variations between digital phantoms influence system optimization, with digital breast tomosynthesis (DBT) as the chosen modality. Methods We selected widely used, open-access digital breast phantoms generated with different methods. For each phantom type, we created an ensemble of DBT images to test acquisition strategies. Human observer localization ROC (LROC) studies were used to assess observer performance for each case. The noise power spectrum (NPS) was estimated to compare the phantoms' structural components. Further, we computed several gaze metrics to quantify gaze patterns when viewing images generated from different phantom types. Results Our LROC results show that the arc samplings for peak performance were approximately 2.5° and 6° in the Bakic and XCAT breast phantoms, respectively, for a 3-mm lesion detection task, and indicate that system optimization outcomes from VITs can vary with phantom type and structural frequency components. Additionally, a significant correlation (p < 0.01) between gaze metrics and diagnostic performance suggests that gaze analysis can be used to understand and evaluate task difficulty in VITs. Conclusion Our results point to the critical need to evaluate realism in digital phantoms and to ensure sufficient structural variation at spatial frequencies relevant to the signal size for an intended task. In addition, standardizing phantom generation and validation tools might reduce discrepancies among independently conducted VITs for system or algorithmic optimizations.
Affiliation(s)
- Amar Kavuri
- Department of Biomedical Engineering, University of Houston, Houston, TX-77204, USA
- Mini Das
- Department of Biomedical Engineering, University of Houston, Houston, TX-77204, USA
- Department of Physics, University of Houston, Houston, TX-77204, USA

23
Hsieh SS, Inoue A, Yalon M, Cook DA, Gong H, Sudhir Pillai P, Johnson MP, Fidler JL, Leng S, Yu L, Carter RE, Holmes DR, McCollough CH, Fletcher JG. Targeted Training Reduces Search Errors but Not Classification Errors for Hepatic Metastasis Detection at Contrast-Enhanced CT. Acad Radiol 2024; 31:448-456. [PMID: 37567818] [PMCID: PMC10853479] [DOI: 10.1016/j.acra.2023.06.017]
Abstract
RATIONALE AND OBJECTIVES Methods are needed to improve the detection of hepatic metastases. Errors occur both in lesion detection (search) and in deciding benign versus malignant (classification). Our purpose was to evaluate a training program to reduce search errors and classification errors in the detection of hepatic metastases on contrast-enhanced abdominal computed tomography (CT). MATERIALS AND METHODS After Institutional Review Board approval, we conducted a single-group prospective pretest-posttest study. Pretest and posttest were identical and consisted of interpreting 40 contrast-enhanced abdominal CT exams containing 91 liver metastases under eye tracking. Between pretest and posttest, readers completed search training with eye-tracker feedback and coaching to increase interpretation time, use liver windows, and use coronal reformations. They also completed classification training with part-task practice, rating lesions as benign or malignant. The primary outcome was metastases missed due to search errors (<2 seconds of gaze, per the eye tracker) and classification errors (>2 seconds). Jackknife free-response receiver operating characteristic (JAFROC) analysis was also conducted. RESULTS A total of 31 radiologist readers (8 abdominal subspecialists, 8 nonabdominal subspecialists, 15 senior residents/fellows) participated. Search errors were reduced (pretest 11%, posttest 8%, difference 3% [95% confidence interval, 0.3%-5.1%], P = .01), but there was no difference in classification errors (difference 0%, P = .97) or in the JAFROC figure of merit (difference -0.01, P = .36). In subgroup analysis, abdominal subspecialists demonstrated no evidence of change. CONCLUSION Targeted training reduced search errors but not classification errors in the detection of hepatic metastases at contrast-enhanced abdominal CT. Improvements were not seen in all subgroups.
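The 2-second dwell criterion used above to separate search errors from classification errors reduces to a simple threshold on cumulative gaze time over the missed lesion. The helper and dwell times below are illustrative, not the study's data:

```python
def classify_miss(gaze_seconds_on_lesion, threshold_s=2.0):
    """Label a missed lesion per the abstract's criterion: under 2 s of
    cumulative gaze it was never adequately inspected (search error);
    at or above 2 s it was inspected but dismissed (classification error)."""
    return "search" if gaze_seconds_on_lesion < threshold_s else "classification"

# Hypothetical cumulative dwell times (seconds) on four missed lesions.
misses = [0.4, 3.1, 1.9, 5.0]
labels = [classify_miss(t) for t in misses]
# → ['search', 'classification', 'search', 'classification']
```

The same threshold then yields the per-reader error-type proportions that the pretest/posttest comparison is built on.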
Affiliation(s)
- Scott S Hsieh: Department of Radiology and Department of General Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, MN 55905
- Akitoshi Inoue, Mariana Yalon, Hao Gong, Parvathy Sudhir Pillai, Jeff L Fidler, Shuai Leng, Lifeng Yu, Cynthia H McCollough, Joel G Fletcher: Department of Radiology, Mayo Clinic, 200 First St. SW, Rochester, MN 55905
- David A Cook: Quantitative Health Services - Clinical Trials and Biostatistics, Mayo Clinic, 200 First St. SW, Rochester, MN 55905
- Matthew P Johnson, Rickey E Carter: Department of Physiology and Biomedical Engineering, Mayo Clinic, 200 First St. SW, Rochester, MN 55905
- David R Holmes III: Quantitative Health Services - Clinical Trials and Biostatistics, Mayo Clinic, 4500 San Pablo Road, Jacksonville, FL 32224

24
Sugimoto M, Oyamada M, Tomita A, Inada C, Sato M. Assessing the Link between Nurses' Proficiency and Situational Awareness in Neonatal Care Practice Using an Eye Tracker: An Observational Study Using a Simulator. Healthcare (Basel) 2024; 12:157. [PMID: 38255046] [PMCID: PMC10815009] [DOI: 10.3390/healthcare12020157]
Abstract
Nurses are expected to rely on a wide variety of visually available patient information to understand clinical situations; we therefore assumed a relationship between nurses' skills and their gaze trajectories. An observational study using a simulator was conducted to analyze gaze during neonatal care practice using eye tracking. We defined the face, thorax, and abdomen of the neonate, the timer, and the pulse oximeter as areas of interest (AOIs). We compared the eye trajectories of 7 experienced and 13 novice nurses during respiration and heart rate assessment. There were no statistically significant differences in the time spent on each AOI for breathing or heart rate confirmation. However, novice nurses gazed at the thorax and abdomen significantly more often, and the deviation in the number of gazes at the face was also significantly higher among novice nurses. These results indicate that experienced and novice nurses differ in their gaze movements during situational awareness. These objective and quantitative differences in gaze trajectories may help establish new educational tools for less experienced nurses.
Affiliation(s)
- Masahiro Sugimoto: Institute for Advanced Biosciences, Keio University, Tsuruoka 997-0052, Japan; Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Michiko Oyamada: Faculty of Human Care Department, Tohto University, 1-1 Hinode-cho, Numazu 410-0032, Japan; Department of Nursing, Nihon Institute of Medical Science, Iruma 350-0435, Japan
- Atsumi Tomita: Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Chiharu Inada: Faculty of Nursing, Japanese Red Cross College of Nursing, 4-1-3 Hiroo, Shibuya, Tokyo 150-0012, Japan
- Mitsue Sato: Department of Nursing, Kiryu University, Midori 379-2392, Japan

25
Wang H, Yu Z, Wang X. Expertise differences in cognitive interpreting: A meta-analysis of eye tracking studies across four decades. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1667. [PMID: 37858956] [DOI: 10.1002/wcs.1667]
Abstract
This meta-analysis examines the influence of expertise on cognitive interpreting, emphasizing time efficiency, accuracy, and cognitive effort, in alignment with prevailing expertise theories that link professional development and cognitive efficiency. The study assimilates empirical data from 18 eye-tracking studies conducted over the past four decades, encompassing a sample of 1581 interpreters. The objective is to elucidate the role of expertise in interpretative performance while tracing the evolution of these dynamics over time. Findings suggest that expert interpreters outperform novices in time efficiency and accuracy and exhibit lower cognitive effort, especially in sight and consecutive interpreting. This effect is particularly pronounced in the English-Chinese language pair and with the use of E-Prime and Tobii eye-tracking systems. Further, fixation count and pupil size are essential metrics reflecting cognitive effort. These findings have vital implications for interpreter training programs, suggesting a focus on expertise development to enhance efficiency and accuracy, reduce cognitive load, and emphasize the importance of sight interpreting as a foundational skill. The selection of technology and an understanding of specific ocular metrics also emerged as essential for future research and practical applications in the interpreting industry. This article is categorized under: Psychology > Theory and Methods; Linguistics > Cognitive.
Affiliation(s)
- Huan Wang, Xiaohui Wang: Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China
- Zhonggen Yu: Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China; Academy of International Language Services, Center for Intelligent Language Education Research, National Base for Language Service Export, Beijing Language and Culture University, Beijing, China

26
Hofmeijer EIS, Wu SC, Vliegenthart R, Slump CH, van der Heijden F, Tan CO. Artificial CT images can enhance variation of case images in diagnostic radiology skills training. Insights Imaging 2023; 14:186. [PMID: 37934344] [PMCID: PMC10630276] [DOI: 10.1186/s13244-023-01508-4]
Abstract
OBJECTIVES We sought to investigate whether artificial medical images can blend in with original ones and whether they adhere to the variable anatomical constraints provided. METHODS Artificial images were generated with a generative model trained on publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images), of which 17% contained evidence of pathological formations (lung nodules). The test set (90 scans; 5121 2D images) was used to assess whether artificial images (512 × 512 primary and control image sets) blended in with original images, using both quantitative metrics and expert opinion. We further assessed whether pathology characteristics in the artificial images can be manipulated. RESULTS Primary and control artificial images attained an average objective similarity of 0.78 ± 0.04 (on a scale from 0 [entirely dissimilar] to 1 [identical]) and 0.76 ± 0.06, respectively. Five radiologists with experience in chest and thoracic imaging provided a subjective measure of image quality; they rated artificial images as 3.13 ± 0.46 (on a scale from 1 [unrealistic] to 4 [almost indistinguishable from the original image]), close to their rating of the original images (3.73 ± 0.31). Radiologists clearly distinguished images in the control sets (2.32 ± 0.48 and 1.07 ± 0.19). In almost a quarter of the scenarios, they were not able to distinguish primary artificial images from the original ones. CONCLUSION Artificial images can be generated such that they blend in with original images and adhere to anatomical constraints, which can be manipulated to augment the variability of cases. CRITICAL RELEVANCE STATEMENT Artificial medical images can be used to enhance the availability and variety of medical training images by creating new but comparable images that blend in with original images. KEY POINTS • Artificial images, similar to original ones, can be created using generative networks. • Pathological features of artificial images can be adjusted by guiding the network. • Artificial images proved viable for broadening and deepening diagnostic training.
Affiliation(s)
- Elfi Inez Saïda Hofmeijer, Sheng-Chih Wu, Cornelis Herman Slump, Ferdi van der Heijden, Can Ozan Tan: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Rozemarijn Vliegenthart: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

27
Akerman M, Choudhary S, Liebmann JM, Cioffi GA, Chen RWS, Thakoor KA. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. [PMID: 37841006] [PMCID: PMC10571140] [DOI: 10.3389/fmed.2023.1251183]
Abstract
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye-tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, while senior residents had 2-3 years of experience. The expert group consisted of fellows and faculty with 3 to 30+ years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and to rate their diagnostic confidence. The eye movements of each participant were recorded with a Pupil Labs Core eye tracker as they diagnosed the reports. Expert ophthalmologists exhibited more refined and focused fixations, particularly on specific regions of the OCT reports such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL b-scan. Binary classification models developed using the derived features achieved accuracy of up to 94.0% in differentiating expert from novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for the development of effective diagnostic tools.
Affiliation(s)
- Michelle Akerman: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Sanmati Choudhary: Department of Computer Science, Columbia University, New York, NY, United States
- Jeffrey M. Liebmann, George A. Cioffi, Royce W. S. Chen: Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Kaveri A. Thakoor: Department of Biomedical Engineering, Department of Computer Science, and Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University / Columbia University Irving Medical Center, New York, NY, United States

28
Darici D, Reissner C, Missler M. Webcam-based eye-tracking to measure visual expertise of medical students during online histology training. GMS J Med Educ 2023; 40:Doc60. [PMID: 37881524] [PMCID: PMC10594038] [DOI: 10.3205/zma001642]
Abstract
Objectives Visual expertise is essential for image-based tasks that rely on visual cues, such as in radiology or histology. Studies suggest that eye movements are related to visual expertise and can be measured by near-infrared eye-tracking. With the spread of device-embedded webcam eye-tracking technology, cost-effective use in educational contexts has recently become feasible. This study investigated the feasibility of such methodology in a curricular online-only histology course during the 2021 summer term. Methods At two timepoints (t1 and t2), third-semester medical students were asked to diagnose a series of histological slides while their eye movements were recorded. Students' eye metrics, performance, and behavioral measures were analyzed using variance analyses and multiple regression models. Results First, webcam eye-tracking provided eye movement data of satisfactory quality (mean accuracy = 115.7 px ± 31.1). Second, the eye movement metrics reflected the students' proficiency in finding relevant image sections (fixation count on relevant areas = 6.96 ± 1.56 vs. irrelevant areas = 4.50 ± 1.25). Third, students' eye movement metrics successfully predicted their performance (R2adj = 0.39, p < 0.001). Conclusion This study supports the use of webcam eye-tracking, expanding the range of educational tools available in the (digital) classroom. As the students' interest in using webcam eye-tracking was high, possible areas of implementation are discussed.
Affiliation(s)
- Dogus Darici, Carsten Reissner, Markus Missler: Westfälische Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany

29
Lee M, Desy J, Tonelli AC, Walsh MH, Ma IWY. The association of attentional foci and image interpretation accuracy in novices interpreting lung ultrasound images: an eye-tracking study. Ultrasound J 2023; 15:36. [PMID: 37697149] [PMCID: PMC10495286] [DOI: 10.1186/s13089-023-00333-6]
Abstract
It is unclear where learners focus their attention when interpreting point-of-care ultrasound (POCUS) images. This study sought to determine the relationship between attentional-focus metrics and lung ultrasound (LUS) interpretation accuracy in novice medical learners. A convenience sample of 14 medical residents with minimal LUS training viewed 8 LUS cineloops while their eye-tracking patterns were recorded. Areas of interest (AOIs) for each cineloop were mapped independently by two experts and externally validated by a third expert. The primary outcome of interest was image interpretation accuracy, presented as a percentage. Usable eye-tracking data were obtained for 10 of the 14 participants (71%) who completed the study. Participants spent a mean total of 8 min 44 s ± standard deviation (SD) 3 min 8 s on the cineloops, of which 1 min 14 s ± SD 34 s was spent fixated in the AOI. The mean accuracy score was 54.0% ± SD 16.8%. In regression analyses, fixation duration within the AOI was positively associated with accuracy [beta-coefficient 28.9, standard error (SE) 6.42, P = 0.002]. Total time spent viewing the videos was also significantly associated with accuracy (beta-coefficient 5.08, SE 0.59, P < 0.0001). For each additional minute spent fixating within the AOI, accuracy scores increased by 28.9%; for each additional minute spent viewing the video, accuracy scores increased by only 5.1%. Interpretation accuracy is strongly associated with time spent fixating within the AOI. Image interpretation training should consider targeting AOIs.
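The reported association (roughly a 28.9-point accuracy gain per additional minute fixated within the AOI) is an ordinary least-squares slope. The sketch below re-creates that kind of fit on synthetic data (our own, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic learners: minutes fixated inside the AOI, and an accuracy (%)
# generated with a true slope of 28.9 points per minute plus noise.
aoi_minutes = rng.uniform(0.5, 2.5, size=40)
accuracy = 20.0 + 28.9 * aoi_minutes + rng.normal(0.0, 5.0, size=40)

# Ordinary least squares: accuracy ≈ b0 + b1 * aoi_minutes.
X = np.column_stack([np.ones_like(aoi_minutes), aoi_minutes])
b0, b1 = np.linalg.lstsq(X, accuracy, rcond=None)[0]
# b1 estimates the accuracy gain per extra minute fixated in the AOI (~28.9).
```

The study's actual analysis also adjusted standard errors; this sketch only recovers the point estimate of the slope.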
Affiliation(s)
- Matthew Lee, Janeve Desy, Michael H Walsh: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Ana Claudia Tonelli: UNISINOS University, Hospital de Clinicas de Porto Alegre, Porto Alegre, Brazil
- Irene W Y Ma: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada; W21C, University of Calgary, Calgary, AB, Canada

30
Tzamaras HM, Wu HL, Moore JZ, Miller SR. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. Proc Hum Factors Ergon Soc Annu Meet 2023; 67:953-958. [PMID: 38450120] [PMCID: PMC10914345] [DOI: 10.1177/21695067231192929]
Abstract
Eye-tracking is a valuable research method for understanding human cognition and is readily employed in human factors research, including human factors in healthcare. While wearable mobile eye trackers have become more readily available, there are no existing analysis methods for accurately and efficiently mapping dynamic gaze data onto dynamic areas of interest (AOIs), which limits their utility in human factors research. The purpose of this paper was to outline a proposed framework for automating the analysis of dynamic areas of interest by integrating computer vision and machine learning (CVML). The framework is then tested in a use case involving a central venous catheterization trainer with six dynamic AOIs. While the results of the validity trial indicate there is room for improvement in the proposed CVML method, the framework provides direction and guidance for human factors researchers using dynamic AOIs.
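At its core, mapping gaze onto dynamic AOIs is a per-frame hit test against bounding boxes that move with the tracked objects. The sketch below is a deliberately minimal illustration of that step; the AOI names, coordinates, and `hit_test` helper are hypothetical and far simpler than the CVML framework the paper proposes:

```python
def hit_test(gaze_xy, boxes):
    """Return the name of the AOI whose box contains the gaze point, else None.
    boxes: dict mapping AOI name -> (x_min, y_min, x_max, y_max) for one frame,
    e.g. boxes produced per frame by an object tracker."""
    gx, gy = gaze_xy
    for name, (x0, y0, x1, y1) in boxes.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return name
    return None

# Frame-by-frame: the AOIs move, so the boxes differ in each frame.
frames = [
    {"gaze": (105, 210), "boxes": {"needle": (100, 200, 140, 240), "vein": (300, 300, 360, 360)}},
    {"gaze": (310, 305), "boxes": {"needle": (120, 220, 160, 260), "vein": (295, 295, 355, 355)}},
    {"gaze": (10, 10),   "boxes": {"needle": (120, 220, 160, 260), "vein": (295, 295, 355, 355)}},
]
hits = [hit_test(f["gaze"], f["boxes"]) for f in frames]
# → ['needle', 'vein', None]
```

The hard part the paper addresses is producing those per-frame boxes automatically from head-mounted scene video; once they exist, dwell times per AOI follow by accumulating hits over frames.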
Affiliation(s)
- Hang-Ling Wu, Jason Z Moore: Pennsylvania State University Mechanical Engineering

31
Bradley H, Smith BA, Wilson RB. Qualitative and Quantitative Measures of Joint Attention Development in the First Year of Life: A Scoping Review. Infant Child Dev 2023; 32:e2422. [PMID: 37872965] [PMCID: PMC10588805] [DOI: 10.1002/icd.2422]
Abstract
Joint attention (JA) is the purposeful coordination of an individual's focus of attention with that of another and begins to develop within the first year of life. Delayed, or atypically developing, JA is an early behavioral sign of many developmental disabilities, so assessing JA in infancy can improve our understanding of trajectories of typical and atypical development. This scoping review identified the most common methods for assessing JA in the first year of life. Methods were divided into qualitative and quantitative categories. Of 13,898 articles identified, 106 were selected after a robust search of four databases. Frequently used methods were eye tracking, electroencephalography (EEG), behavioral coding, and the Early Social Communication Scales (ESCS). These methods were used to assess JA in typically and atypically developing infants in the first year of life. This study provides a comprehensive review of the past and current state of JA measurement in the literature, the strengths and limitations of the measures used, and next steps for researchers interested in investigating JA to strengthen this field going forward.
Affiliation(s)
- Holly Bradley: Division of Behavioral Pediatrics, Children's Hospital Los Angeles, Los Angeles, California
- Beth A Smith: Division of Behavioral Pediatrics, Children's Hospital Los Angeles, Los Angeles, California; Developmental Neuroscience and Neurogenetics Program, The Saban Research Institute; Department of Pediatrics, Keck School of Medicine, University of Southern California
- Rujuta B Wilson: David Geffen School of Medicine at UCLA, UCLA Semel Institute for Neuroscience and Human Behavior, Divisions of Pediatric Neurology and Child Psychiatry, Los Angeles, California, USA

32
Kaushal S, Sun Y, Zukerman R, Chen RWS, Thakoor KA. Detecting Eye Disease Using Vision Transformers Informed by Ophthalmology Resident Gaze Data. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083657] [DOI: 10.1109/embc40787.2023.10340746]
Abstract
We showcase two proof-of-concept approaches for enhancing the Vision Transformer (ViT) model by integrating ophthalmology resident gaze data into its training. The resulting Fixation-Order-Informed ViT and Ophthalmologist-Gaze-Augmented ViT show greater accuracy and computational efficiency than the standard ViT for detection of the eye disease glaucoma. Clinical relevance: By enhancing glaucoma detection via our gaze-informed ViTs, we introduce a new paradigm for medical experts to directly interface with medical AI, leading the way for more accurate and interpretable AI 'teammates' in the ophthalmic clinic.
33
Jiang H, Hou Y, Miao H, Ye H, Gao M, Li X, Jin R, Liu J. Eye tracking based deep learning analysis for the early detection of diabetic retinopathy: A pilot study. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104830]
34
Murray NP, Lewinski W, Sandri Heidner G, Lawton J, Horn R. Gaze Control and Tactical Decision-Making Under Stress in Active-Duty Police Officers During a Live Use-of-Force Response. J Mot Behav 2023; 56:30-41. [PMID: 37385608] [DOI: 10.1080/00222895.2023.2229946]
Abstract
Police officers in dynamic and stressful encounters must make rapid decisions that rely on effective decision-making, experience, and intuition. Tactical decision-making is influenced by the officer's ability to recognize critical visual information and estimate threat. The purpose of the current study was to investigate how visual search patterns (examined via cluster analysis) and factors that differentiate expertise (e.g., years of service, tactical training, related experience) influence tactical decision-making in 44 active-duty police officers during a high-stress, high-threat, realistic use-of-force scenario following a car accident, and to examine the relationships between visual search patterns and physiological response (heart rate). A cluster analysis of visual search variables (fixation duration, fixation location difference score, and number of fixations) produced an Efficient Scan group and an Inefficient Scan group. Specifically, the Efficient Scan group demonstrated longer total fixation duration and differences in area-of-interest (AOI) fixation duration compared with the Inefficient Scan group. Although both groups exhibited a rise in physiological stress response (heart rate) throughout the high-stress scenario, the Efficient Scan group had a history of tactical training, better return-fire performance, and more total sleep time, and demonstrated greater processing efficiency and more effective attentional control.
Affiliation(s)
- Nicholas P Murray, Joshua Lawton: Department of Kinesiology, East Carolina University, Greenville, NC, USA
- Gustavo Sandri Heidner, Robert Horn: Department of Exercise Science & Physical Education, Montclair State University, Montclair, NJ, USA

35
Darici D, Masthoff M, Rischen R, Schmitz M, Ohlenburg H, Missler M. Medical imaging training with eye movement modeling examples: A randomized controlled study. Med Teach 2023:1-7. [PMID: 36943681] [DOI: 10.1080/0142159x.2023.2189538]
Abstract
PURPOSE To determine whether ultrasound training in which an expert's eye movements are superimposed on the underlying ultrasound video (eye movement modeling examples; EMMEs) leads to better learner outcomes than traditional eye movement-free instruction. MATERIALS AND METHODS 106 undergraduate medical students were randomized into two groups; 51 students in the EMME group watched 5-min ultrasound examination videos combined with the eye movements of an expert performing the task. The identical videos without the eye movements were shown to 55 students in the control group. Performance and behavioral parameters were compared pre- and post-intervention using ANOVAs. Additionally, cognitive load and prior knowledge in anatomy were surveyed. RESULTS After training, the EMME group identified more sonoanatomical structures correctly and completed the tasks faster than the control group. This effect was partly mediated by a reduction of extraneous cognitive load. Participants with greater prior anatomical knowledge benefited the most from the EMME training. CONCLUSION Displaying experts' eye movements in medical imaging training appears to be an effective way to foster the medical image interpretation skills of undergraduate medical students. One underlying mechanism might be that practicing with eye movements reduces cognitive load and helps learners activate their prior knowledge.
Affiliation(s)
- Dogus Darici
- Institute of Anatomy and Neurobiology, Westfälische Wilhelms-University, Münster, Germany
- Max Masthoff
- Clinic for Radiology, University Hospital Münster, Münster, Germany
- Robert Rischen
- Clinic for Radiology, University Hospital Münster, Münster, Germany
- Martina Schmitz
- Institute of Anatomy and Vascular Biology, Westfälische Wilhelms-University, Münster, Germany
- Hendrik Ohlenburg
- Institute of Education and Student Affairs, Studienhospital Münster, University of Münster, Germany
- Markus Missler
- Institute of Anatomy and Neurobiology, Westfälische Wilhelms-University, Münster, Germany
36
Drew T, Konold CE, Lavelle M, Brunyé TT, Kerr KF, Shucard H, Weaver DL, Elmore JG. Pathologist pupil dilation reflects experience level and difficulty in diagnosing medical images. J Med Imaging (Bellingham) 2023; 10:025503. [PMID: 37096053 PMCID: PMC10122150 DOI: 10.1117/1.jmi.10.2.025503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Revised: 03/26/2023] [Accepted: 04/10/2023] [Indexed: 04/26/2023] Open
Abstract
Purpose: Digital whole slide imaging allows pathologists to view slides on a computer screen instead of under a microscope. Digital viewing allows for real-time monitoring of pathologists' search behavior and neurophysiological responses during the diagnostic process. One particular neurophysiological measure, pupil diameter, could provide a basis for evaluating clinical competence during training or developing tools that support the diagnostic process. Prior research shows that pupil diameter is sensitive to cognitive load and arousal, and to switches between exploration and exploitation of a visual image. Different categories of lesions in pathology pose different levels of challenge, as indicated by diagnostic disagreement among pathologists. If pupil diameter is sensitive to the perceived difficulty in diagnosing biopsies, eye-tracking could potentially be used to identify biopsies that may benefit from a second opinion. Approach: We measured case-onset baseline-corrected (phasic) and uncorrected (tonic) pupil diameter in 90 pathologists who each viewed and diagnosed 14 digital breast biopsy cases that cover the diagnostic spectrum from benign to invasive breast cancer. Pupil data were extracted from the beginning of the viewing and interpretation of each individual case. After removing 122 trials (<10%) with poor eye-tracking quality, 1138 trials remained. We used multiple linear regression with robust standard error estimates to account for dependent observations within pathologists. Results: We found a positive association between the magnitude of phasic dilation and subject-centered difficulty ratings and between the magnitude of tonic dilation and untransformed difficulty ratings. When controlling for case diagnostic category, only the tonic-difficulty relationship persisted.
Conclusions: Results suggest that tonic pupil dilation may indicate overall arousal differences between pathologists as they interpret biopsy cases and could signal a need for additional training, experience, or automated decision aids. Phasic dilation is sensitive to characteristics of biopsies that tend to elicit higher difficulty ratings and could indicate a need for a second opinion.
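The tonic/phasic distinction above comes down to baseline correction at case onset. A minimal sketch of that computation, using made-up pupil values rather than the study's data:

```python
def tonic_and_phasic(samples, baseline_window=3):
    """Tonic = raw mean pupil diameter over the trial; phasic = the same
    mean after subtracting a case-onset baseline (mean of the first
    `baseline_window` samples)."""
    baseline = sum(samples[:baseline_window]) / baseline_window
    tonic = sum(samples) / len(samples)
    phasic = tonic - baseline
    return tonic, phasic

# Hypothetical pupil-diameter trace (mm) over one case interpretation:
trace = [3.0, 3.1, 2.9, 3.4, 3.6, 3.5, 3.3]
tonic, phasic = tonic_and_phasic(trace)
```

Tonic values carry between-reader arousal differences, while phasic values isolate the dilation evoked by the case itself.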
Affiliation(s)
- Trafton Drew
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Catherine E. Konold
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Mark Lavelle
- University of New Mexico, Department of Psychology, Albuquerque, New Mexico, United States
- Tad T. Brunyé
- Tufts University, Center for Applied Brain and Cognitive Sciences, Medford, Massachusetts, United States
- Kathleen F. Kerr
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Hannah Shucard
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Donald L. Weaver
- University of Vermont, Department of Pathology & Laboratory Medicine, Burlington, Vermont, United States
- Joann G. Elmore
- David Geffen School of Medicine UCLA, Department of Medicine, Los Angeles, California, United States
37
Analysis of gaze patterns during facade inspection to understand inspector sense-making processes. Sci Rep 2023; 13:2929. [PMID: 36804607 PMCID: PMC9941087 DOI: 10.1038/s41598-023-29950-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 02/10/2023] [Indexed: 02/22/2023] Open
Abstract
This work seeks to capture how an expert interacts with a structure during a facade inspection so that more detailed and situationally aware inspections can be done with autonomous robots in the future. Eye tracking maps where an inspector is looking during a structural inspection, and it reveals implicit human attention. Experiments were performed on a facade during a damage assessment to analyze the key, visually based features that are important for understanding human-infrastructure interaction, and to capture an inspector's behavioral changes while assessing a real structure. These eye-tracking features provided the basis for predicting the inspector's intent and were used to understand how humans interact with the structure during the inspection process. This method will facilitate information-sharing and decision-making during inspections by collaborative human-robot teams; thus, it will enable unmanned aerial vehicles (UAVs), supported by artificial intelligence, to perform future building inspections.
38
Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. [PMID: 36123548 PMCID: PMC11062149 DOI: 10.1007/s00464-022-09582-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 08/25/2022] [Indexed: 10/14/2022]
Abstract
INTRODUCTION In laparoscopic surgery, looking in the target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills. METHODS Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while their gaze on the monitor and tool activities inside the trainer box were recorded. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics was used to group the trials into different clusters/proficiency levels, and ANOVAs were conducted to test differences between proficiency levels. A Random Forest model was trained to study metric importance at predicting proficiency levels. RESULTS Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences between completion times (F(2,488) = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors. CONCLUSION Scene-dependent gaze metrics differentiated trainee skill levels more finely than the expert-novice contrasts suggested in the literature. Further, feedforward gaze metrics appeared to be more important than feedback ones at predicting proficiency.
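The trial-grouping step described here is a standard unsupervised clustering of per-trial metric vectors. A minimal sketch of that idea (a plain Lloyd's-algorithm k-means on synthetic two-metric data; the feature names and values are invented, and the authors' actual pipeline used six metrics and a Random Forest on top):

```python
import random

def kmeans(points, k, iters=20):
    """Minimal Lloyd's-algorithm k-means over lists of feature vectors."""
    # Deterministic init: pick k points spread evenly through the data.
    centroids = [list(points[i * len(points) // k]) for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            labels[i] = dists.index(min(dists))
        # Update step: move each centroid to the mean of its members.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Synthetic trials around three made-up proficiency profiles:
# (completion time in s, fraction of fixations on the target area).
random.seed(0)
trials = [(random.gauss(mu_t, 3.0), random.gauss(mu_g, 0.02))
          for mu_t, mu_g in [(120, 0.55), (80, 0.70), (50, 0.85)]
          for _ in range(30)]
labels = kmeans(trials, k=3)
```

With well-separated profiles the three recovered clusters line up with the three generating groups, mirroring how the study's clusters corresponded to proficiency levels.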
Affiliation(s)
- Chaitanya S Kulkarni
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Shiyu Deng
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Tianzi Wang
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Laura E Barnes
- Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
- Shawn D Safford
- Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
- Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
39
Hsieh SS, Cook DA, Inoue A, Gong H, Sudhir Pillai P, Johnson MP, Leng S, Yu L, Fidler JL, Holmes DR, Carter RE, McCollough CH, Fletcher JG. Understanding Reader Variability: A 25-Radiologist Study on Liver Metastasis Detection at CT. Radiology 2023; 306:e220266. [PMID: 36194112 PMCID: PMC9870852 DOI: 10.1148/radiol.220266] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 07/07/2022] [Accepted: 08/17/2022] [Indexed: 01/26/2023]
Abstract
Background Substantial interreader variability exists for common tasks in CT imaging, such as detection of hepatic metastases. This variability can undermine patient care by leading to misdiagnosis. Purpose To determine the impact of interreader variability associated with (a) reader experience, (b) image navigation patterns (eg, eye movements, workstation interactions), and (c) eye gaze time at missed liver metastases on contrast-enhanced abdominal CT images. Materials and Methods In a single-center prospective observational trial at an academic institution between December 2020 and February 2021, readers were recruited to examine 40 contrast-enhanced abdominal CT studies (eight normal, 32 containing 91 liver metastases). Readers circumscribed hepatic metastases and reported confidence. The workstation tracked image navigation and eye movements. Performance was quantified by using the area under the jackknife alternative free-response receiver operating characteristic (JAFROC-1) curve and per-metastasis sensitivity and was associated with reader experience and image navigation variables. Differences in area under JAFROC curve were assessed with the Kruskal-Wallis test followed by the Dunn test, and effects of image navigation were assessed by using the Wilcoxon signed-rank test. Results Twenty-five readers (median age, 38 years; IQR, 31-45 years; 19 men) were recruited and included nine subspecialized abdominal radiologists, five nonabdominal staff radiologists, and 11 senior residents or fellows. Reader experience explained differences in area under the JAFROC curve, with abdominal radiologists demonstrating greater area under the JAFROC curve (mean, 0.77; 95% CI: 0.75, 0.79) than trainees (mean, 0.71; 95% CI: 0.69, 0.73) (P = .02) or nonabdominal subspecialists (mean, 0.69; 95% CI: 0.60, 0.78) (P = .03). Sensitivity was similar within the reader experience groups (P = .96).
Image navigation variables that were associated with higher sensitivity included longer interpretation time (P = .003) and greater use of coronal images (P < .001). The eye gaze time was at least 0.5 and 2.0 seconds for 71% (266 of 377) and 40% (149 of 377) of missed metastases, respectively. Conclusion Abdominal radiologists demonstrated better discrimination for the detection of liver metastases on abdominal contrast-enhanced CT images. Missed metastases frequently received at least a brief eye gaze. Higher sensitivity was associated with longer interpretation time and greater use of liver display windows and coronal images. © RSNA, 2022 Online supplemental material is available for this article.
Affiliation(s)
- Scott S. Hsieh, Akitoshi Inoue, Hao Gong, Parvathy Sudhir Pillai, Shuai Leng, Lifeng Yu, Jeff L. Fidler, Cynthia H. McCollough, Joel G. Fletcher
- Department of Radiology, Mayo Clinic Rochester, 200 First St SW, Rochester, MN 55905
- David A. Cook
- Department of General Internal Medicine, Mayo Clinic Rochester, 200 First St SW, Rochester, MN 55905
- Matthew P. Johnson
- Department of Quantitative Health Services–Clinical Trials and Biostatistics, Mayo Clinic Rochester, 200 First St SW, Rochester, MN 55905
- David R. Holmes
- Department of Physiology and Biomedical Engineering, Mayo Clinic Rochester, 200 First St SW, Rochester, MN 55905
- Rickey E. Carter
- Department of Quantitative Health Services–Clinical Trials and Biostatistics, Mayo Clinic, Jacksonville, Fla
40
Laubrock J, Krutz A, Nübel J, Spethmann S. Gaze patterns reflect and predict expertise in dynamic echocardiographic imaging. J Med Imaging (Bellingham) 2023; 10:S11906. [PMID: 36968293 PMCID: PMC10031643 DOI: 10.1117/1.jmi.10.s1.s11906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 03/01/2023] [Indexed: 03/24/2023] Open
Abstract
Purpose Echocardiography is the most important modality in cardiac imaging. Rapid valid visual assessment is a critical skill for image interpretation. However, it is unclear how skilled viewers assess echocardiographic images. Therefore, guidance and implicit advice are needed for learners to achieve valid image interpretation. Approach Within a signal detection framework, we compared 15 certified experts with 15 medical students in their diagnostic decision-making and viewing behavior. To quantify attention allocation, we recorded eye movements while viewing dynamic echocardiographic imaging loops of patients with reduced ejection fraction and healthy controls. Participants evaluated left ventricular ejection fraction and image quality (as diagnostic and visual control tasks, respectively). Results Experts were much better at discriminating between patients and healthy controls (d′ of 2.58 versus 0.98 for novices). Eye tracking revealed that experts fixated diagnostically relevant areas earlier and more often, whereas novices were distracted by visually salient task-irrelevant stimuli. We show that expertise status can be almost perfectly classified either based on judgments or purely on eye movements and that an expertise score derived from viewing behavior predicts diagnostic quality. Conclusions Judgments and eye tracking revealed significant differences between echocardiography experts and novices that can be used to derive numerical expertise scores. Experts have implicitly learned to ignore the salient motion cue presented by the mitral valve and to focus on the diagnostically more relevant left ventricle. These findings have implications for echocardiography training, objective characterization of echocardiographic expertise, and the design of user-friendly interfaces for echocardiography.
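The d′ statistic reported above is the standard signal-detection sensitivity index, the difference between the normal quantiles of the hit rate and the false-alarm rate. A minimal sketch with hypothetical rates (not the study's raw counts):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(FA) from signal detection theory."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical hit/false-alarm rates for illustration only:
expert = d_prime(0.93, 0.10)   # strong discrimination
novice = d_prime(0.62, 0.30)   # weaker discrimination
```

Higher hit rates paired with lower false-alarm rates yield larger d′, which is why the experts' d′ of 2.58 reflects much better patient/control discrimination than the novices' 0.98.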
Affiliation(s)
- Jochen Laubrock
- University of Potsdam, Cognitive Science, Department of Psychology, Potsdam, Germany
- Alexander Krutz
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Jonathan Nübel
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Sebastian Spethmann
- Deutsches Herzzentrum der Charité, Department of Cardiology, Angiology, and Intensive Care Medicine, Berlin, Germany
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
41
Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. [PMID: 36635491 PMCID: PMC9837148 DOI: 10.1038/s41598-023-27896-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 01/10/2023] [Indexed: 01/13/2023] Open
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array that is presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size, the number of items in the visual display. Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
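Search efficiency, as used above, is conventionally the slope of response time against set size (ms per item): a shallower slope means more efficient search. A minimal least-squares sketch with hypothetical mean response times (not the study's data):

```python
def efficiency_slope(set_sizes, rts_ms):
    """Least-squares slope of response time vs. set size (ms per item),
    the standard 'search efficiency' measure."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs at three set sizes, for illustration only:
slope = efficiency_slope([4, 8, 16], [620.0, 780.0, 1100.0])
```

Here each added display item costs roughly 40 ms; the study's finding is that a participant's slope in the classic task predicts their slope in the naturalistic task.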
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA.
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
42
Sugimoto M, Tomita A, Oyamada M, Sato M. Eye-Tracking-Based Analysis of Situational Awareness of Nurses. Healthcare (Basel) 2022; 10:2131. [PMID: 36360472 PMCID: PMC9690882 DOI: 10.3390/healthcare10112131] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 10/17/2022] [Accepted: 10/24/2022] [Indexed: 04/11/2024] Open
Abstract
BACKGROUND Nurses are responsible for comprehensively identifying patient conditions and associated environments. We hypothesize that gaze trajectories of nurses differ based on their experience, even in the same situation. METHODS An eye-tracking device monitored the gaze trajectories of nurses with various levels of experience and of nursing students during the intravenous injection task on a human patient simulator. RESULTS The areas of interest (AOIs) were identified in the recorded movies, and the gaze durations on AOIs showed different patterns between experienced nurses and nursing students. A state transition diagram visualized the recognition errors of the students and the repeated confirmation of the vital signs of the patient simulator. Clustering analysis of gaze durations also indicated similarity among participants with similar experience. CONCLUSIONS As expected, gaze trajectories differed among the participants. The developed gaze transition diagram visualized their differences and helped in interpreting their situational awareness based on visual perception. The demonstrated method can help in establishing effective nursing education, particularly for learning skills that are difficult to verbalize.
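A state transition diagram like the one described is built from counts of consecutive AOI-to-AOI gaze transitions. A minimal sketch, with made-up AOI labels and a hypothetical fixation sequence:

```python
from collections import Counter

def transition_counts(aoi_sequence):
    """Count AOI-to-AOI gaze transitions; these counts are the edge
    weights of a gaze state transition diagram."""
    return Counter(zip(aoi_sequence, aoi_sequence[1:]))

# Hypothetical fixation sequence for one participant (invented labels):
seq = ["monitor", "iv_site", "monitor", "patient_face", "iv_site", "iv_site"]
counts = transition_counts(seq)
```

Comparing such transition tables between experienced nurses and students makes behaviors like repeated vital-sign confirmation visible as heavily weighted edges.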
Affiliation(s)
- Masahiro Sugimoto
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Institute for Advanced Biosciences, Keio University, Tsuruoka 997-0052, Japan
- Atsumi Tomita
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Michiko Oyamada
- Department of Nursing, Nihon Institute of Medical Science, Moroyama 350-0435, Japan
- Mitsue Sato
- Department of Nursing, Kiryu University, Midori 379-2392, Japan
43
Wang S, Ouyang X, Liu T, Wang Q, Shen D. Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1688-1698. [PMID: 35085074 DOI: 10.1109/tmi.2022.3146973] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
When deep neural networks (DNNs) were first introduced to the medical image analysis community, researchers were impressed by their performance. However, it is evident now that a large amount of manually labeled data is often a must to train a properly functioning DNN. This demand for supervision data and labels is a major bottleneck in current medical image analysis, since collecting a large number of annotations from experienced experts can be time-consuming and expensive. In this paper, we demonstrate that the eye movements of radiologists reading medical images can be a new form of supervision to train a DNN-based computer-aided diagnosis (CAD) system. Particularly, we record the tracks of the radiologists' gaze when they are reading images. The gaze information is processed and then used to supervise the DNN's attention via an Attention Consistency module. To the best of our knowledge, the above pipeline is among the earliest efforts to leverage expert eye movement for deep-learning-based CAD. We have conducted extensive experiments on knee X-ray images for osteoarthritis assessment. The results show that our method can achieve considerable improvement in diagnosis performance, with the help of gaze supervision.
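The core idea of gaze supervision is a loss term penalizing disagreement between the model's attention map and a gaze-derived heatmap. The sketch below is a simplified NumPy stand-in for that idea (a plain MSE between normalized maps), not the paper's actual Attention Consistency module; all values are invented:

```python
import numpy as np

def attention_consistency_loss(attn_map, gaze_heatmap):
    """MSE between a model attention map and a gaze-derived heatmap,
    each normalized to sum to 1. Illustrative stand-in only."""
    a = attn_map / attn_map.sum()
    g = gaze_heatmap / gaze_heatmap.sum()
    return float(np.mean((a - g) ** 2))

attn = np.array([[0.1, 0.3], [0.2, 0.4]])
gaze = np.array([[0.1, 0.3], [0.2, 0.4]])  # identical maps -> zero loss
loss = attention_consistency_loss(attn, gaze)
```

In training, such a term would be added to the diagnosis loss so the network learns to look where the radiologist looked.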
44
Wolfe JM, Lyu W, Dong J, Wu CC. What eye tracking can tell us about how radiologists use automated breast ultrasound. J Med Imaging (Bellingham) 2022; 9:045502. [PMID: 35911209 PMCID: PMC9315059 DOI: 10.1117/1.jmi.9.4.045502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 07/08/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Automated breast ultrasound (ABUS) presents three-dimensional (3D) representations of the breast in the form of stacks of coronal and transverse plane images. ABUS is especially useful for the assessment of dense breasts. Here, we present the first eye tracking data showing how radiologists search and evaluate ABUS cases. Approach: Twelve readers evaluated single-breast cases in 20-min sessions. Positive findings were present in 56% of the evaluated cases. Eye position and the currently visible coronal and transverse slice were tracked, allowing for reconstruction of 3D "scanpaths." Results: Individual readers had consistent search strategies. Most readers had strategies that involved examination of all available images. Overall accuracy was 0.74 (sensitivity = 0.66 and specificity = 0.84). The 20 false negative errors across all readers can be classified using Kundel's (1978) taxonomy: 17 are "decision" errors (readers found the target but misclassified it as normal or benign). There was one recognition error and two "search" errors. This is an unusually high proportion of decision errors. Readers spent essentially the same proportion of time viewing coronal and transverse images, regardless of whether the case was positive or negative, correct or incorrect. Readers tended to use a "scanner" strategy when viewing coronal images and a "driller" strategy when viewing transverse images. Conclusions: These results suggest that ABUS errors are more likely to be errors of interpretation than of search. Further research could determine if readers' exploration of all images is useful or if, in some negative cases, search of transverse images is redundant following a search of coronal images.
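The reported overall accuracy follows from the reported sensitivity, specificity, and prevalence of positive cases; a quick worked check:

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    """Prevalence-weighted accuracy: P(correct) = prev*sens + (1-prev)*spec."""
    return prevalence * sensitivity + (1 - prevalence) * specificity

# Values reported above: 56% positive cases, sensitivity 0.66, specificity 0.84.
acc = overall_accuracy(0.66, 0.84, 0.56)
```

The weighted combination comes out at about 0.74, consistent with the overall accuracy stated in the abstract.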
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Wanyi Lyu
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Jeffrey Dong
- Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States
- Chia-Chien Wu
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
45
Ahmadi N, Romoser M, Salmon C. Improving the tactical scanning of student pilots: A gaze-based training intervention for transition from visual flight into instrument meteorological conditions. APPLIED ERGONOMICS 2022; 100:103642. [PMID: 34871832 DOI: 10.1016/j.apergo.2021.103642] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 10/15/2021] [Accepted: 11/08/2021] [Indexed: 06/13/2023]
Abstract
Eye tracking has been applied to train novice drivers and clinicians; however, such applications in aviation are limited. This study develops a gaze-based intervention using video-based expert commentary and 3M (Mistake, Mitigation, Mastery) training to instruct visual-flight-rules student pilots in an instrument cross-check that mitigates the risk of losing aircraft control when they inadvertently enter instrument meteorological conditions (IMC). Twenty general aviation student pilots were randomized into control and experimental groups. Dwell time, return time, entropy, Kullback-Leibler divergence, and deviations from flight paths were compared before and after training in straight-and-level flight (LF) and standard left level turn (LT) scenarios. After the training, the experimental pilots significantly increased dwell time on the primary instruments (PIs), reduced randomness in their visual search, and fixated on the PIs more quickly (in the LT scenario). In terms of piloting, all experimental pilots successfully maintained aircraft control while five control pilots lost control in IMC; significant between-group differences in altitude and rate-of-climb deviations were observed (in the LF scenario).
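Two of the gaze metrics named above, entropy and Kullback-Leibler divergence, are simple functions of the dwell-time distribution over instruments. A minimal sketch with invented dwell proportions (not the study's data): lower entropy means a less random scan, and KL divergence measures distance from a reference (e.g., expert-like) distribution.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a dwell-time distribution over AOIs."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits; q must be nonzero wherever p is."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0)

# Hypothetical dwell-time proportions over four instruments:
before = [0.25, 0.25, 0.25, 0.25]   # uniform scan -> maximal entropy
after  = [0.55, 0.25, 0.15, 0.05]   # concentrated on primary instruments
expert = [0.60, 0.20, 0.15, 0.05]
h_before, h_after = entropy(before), entropy(after)
d_to_expert = kl_divergence(after, expert)
```

A drop from h_before to h_after captures the "reduced randomness in visual search" reported for the trained pilots.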
Collapse
Affiliation(s)
- Nima Ahmadi
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.
- Matthew Romoser
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.
- Christian Salmon
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.

46
Assessment of Aircraft Engine Blade Inspection Performance Using Attribute Agreement Analysis. SAFETY 2022. [DOI: 10.3390/safety8020023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Background—Visual inspection is an important element of aircraft engine maintenance to assure flight safety. Predominantly performed by human operators, these maintenance activities are prone to human error. While false negatives imply a risk to aviation safety, false positives can lead to increased maintenance cost. The aim of the present study was to evaluate human performance in the visual inspection of aero engine blades, specifically the operators’ consistency, accuracy, and reproducibility, as well as the system reliability. Methods—Photographs of 26 blades were presented to 50 industry practitioners of three skill levels to assess their performance. Each image was shown to each operator twice in random order, leading to N = 2600 observations. The data were statistically analysed using Attribute Agreement Analysis (AAA) and Kappa analysis. Results—The results show that operators were on average 82.5% consistent in their serviceability decisions, while achieving an inspection accuracy of 67.7%. The operators’ reproducibility was 15.4%, as was the accuracy of all operators against the ground truth. Subsequently, the false-positive and false-negative rates were analysed separately from the overall inspection accuracy, showing that 20 operators (40%) achieved acceptable performance, thus meeting the required standard. Conclusions—In aviation maintenance, the false-negative rate of <5% as per Aerospace Standard AS13100 is arguably the single most important metric, since it determines the safety outcomes. The results of this study show acceptable false-negative performance in 60% of appraisers. Ways to improve performance are therefore desirable, and some suggestions are given in this regard.
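The kappa statistic used in such agreement analyses can be illustrated with a minimal sketch; the two appraisers' serviceability calls below are made up, not data from the cited study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical serviceability calls on ten blades (1 = serviceable).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.524
```

A kappa of 1.0 means perfect agreement, 0 means agreement no better than chance; AAA extends the same idea to many appraisers and to agreement against a known standard.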

47
Wagner M, den Boer MC, Jansen S, Groepel P, Visser R, Witlox RSGM, Bekker V, Lopriore E, Berger A, te Pas AB. Video-based reflection on neonatal interventions during COVID-19 using eye-tracking glasses: an observational study. Arch Dis Child Fetal Neonatal Ed 2022; 107:156-160. [PMID: 34413092 PMCID: PMC8384497 DOI: 10.1136/archdischild-2021-321806] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Accepted: 06/16/2021] [Indexed: 11/17/2022]
Abstract
OBJECTIVE The aim of this study was to determine the experience with, and the feasibility of, point-of-view video recordings using eye-tracking glasses for training and reviewing neonatal interventions during the COVID-19 pandemic. DESIGN Observational prospective single-centre study. SETTING Neonatal intensive care unit at the Leiden University Medical Center. PARTICIPANTS All local neonatal healthcare providers. INTERVENTION There were two groups of participants: proceduralists, who wore eye-tracking glasses during procedures, and observers, who later watched the procedures as part of a video-based reflection. MAIN OUTCOME MEASURES The primary outcome was the feasibility of, and the proceduralists' and observers' experience with, the point-of-view eye-tracking videos as an additional tool for bedside teaching and video-based reflection. RESULTS We conducted 12 point-of-view recordings on 10 different patients (median gestational age of 30.9±3.5 weeks and weight of 1764 g) undergoing neonatal intubation (n=5), minimally invasive surfactant therapy (n=5) and umbilical line insertion (n=2). We conducted nine video-based observations with a total of 88 observers. The use of point-of-view recordings was perceived as feasible. Observers further reported that the point-of-view recordings were an educational benefit for them and a potentially instructional tool during COVID-19. CONCLUSION We demonstrated the practicability of eye-tracking glasses for point-of-view recordings of neonatal procedures, and of the resulting videos for observation, educational sessions and logistical considerations, especially while COVID-19 distancing measures reduced bedside teaching opportunities.
Affiliation(s)
- Michael Wagner
- Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Maria C den Boer
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Sophie Jansen
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Peter Groepel
- Department of Applied Psychology, University of Vienna, Vienna, Austria
- Remco Visser
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Ruben S G M Witlox
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Vincent Bekker
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Enrico Lopriore
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Angelika Berger
- Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Arjan B te Pas
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands

48
Chattopadhyay AK, Chattopadhyay S. VIRDOCD: A VIRtual DOCtor to predict dengue fatality. EXPERT SYSTEMS 2022; 39. [DOI: 10.1111/exsy.12796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 08/06/2021] [Indexed: 02/05/2023]
Abstract
Clinicians make routine diagnoses by scrutinizing patients' medical signs and symptoms, a skill popularly referred to as the 'Clinical Eye'. This skill evolves through trial and error and improves with time. The success of the therapeutic regime relies largely on the accuracy of interpretation of such sign-symptoms, from which a clinician assesses the severity of the illness. The present study proposes a complementary medical front by mathematically modelling the 'Clinical Eye' of a VIRtual DOCtor, using statistical and machine intelligence (SMI) tools, to analyse dengue epidemic infected patients (100 case studies with 11 weighted sign-symptoms). The SMI in VIRDOCD reads medical data and translates these into a vector of multiple linear regression (MLR) coefficients to predict the infection severity grades of dengue patients, cloning the clinician's experience-based assessment. With risk managed through ANOVA, the dengue severity grade prediction accuracy of VIRDOCD is found to be higher (ca 75%) than conventional clinical practice (ca 71.4%, mean accuracy profile assessed by a team of 10 senior consultants). Free of human errors and capable of deciphering even minute differences between almost identical symptoms (to the Clinical Eye), VIRDOCD is uniquely individualized in its decision-making ability. The algorithm has been validated against Random Forest classification (RF, ca 63%), another regression-based classifier similar to MLR that can be trained through supervised learning. We find that MLR-based VIRDOCD is superior to RF in predicting the grade of dengue morbidity. VIRDOCD can be further extended to analyse other epidemic infections, such as COVID-19.
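The general idea of an MLR-style severity score (a weighted sum of coded sign-symptoms mapped onto discrete grades) can be sketched minimally; the symptom names, weights and thresholds below are invented for illustration, not VIRDOCD's actual coefficients.

```python
# Hypothetical regression weights for coded sign-symptoms (not from VIRDOCD).
WEIGHTS = {"fever": 0.9, "rash": 0.4, "bleeding": 1.6, "platelet_drop": 1.2}

def severity_grade(symptoms, thresholds=(1.0, 2.0, 3.0)):
    """Map a dict of symptom intensities (0..1) to a discrete grade 0..3.

    The score is a weighted sum (the MLR part); the grade is the number of
    thresholds the score meets or exceeds.
    """
    score = sum(WEIGHTS[s] * v for s, v in symptoms.items())
    return sum(score >= t for t in thresholds)

patient = {"fever": 1.0, "bleeding": 0.5, "platelet_drop": 0.5}
print(severity_grade(patient))  # score = 0.9 + 0.8 + 0.6 = 2.3 → grade 2
```

In a real system the weights would be fitted to labelled case data and the grade boundaries chosen to match clinicians' severity scales.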

49
Evaluation of Influence Factors on the Visual Inspection Performance of Aircraft Engine Blades. AEROSPACE 2021. [DOI: 10.3390/aerospace9010018] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Background—There are various influence factors that affect visual inspection of aircraft engine blades including type of inspection, defect type, severity level, blade perspective and background colour. The effect of those factors on the inspection performance was assessed. Method—The inspection accuracy of fifty industry practitioners was measured for 137 blade images, leading to N = 6850 observations. The data were statistically analysed to identify the significant factors. Subsequent evaluation of the eye tracking data provided additional insights into the inspection process. Results—Inspection accuracies in borescope inspections were significantly lower compared to piece-part inspection at 63.8% and 82.6%, respectively. Airfoil dents (19.0%), cracks (11.0%), and blockage (8.0%) were the most difficult defects to detect, while nicks (100.0%), tears (95.5%), and tip curls (89.0%) had the highest detection rates. The classification accuracy was lowest for airfoil dents (5.3%), burns (38.4%), and tears (44.9%), while coating loss (98.1%), nicks (90.0%), and blockage (87.5%) were most accurately classified. Defects of severity level S1 (72.0%) were more difficult to detect than increased severity levels S2 (92.8%) and S3 (99.0%). Moreover, visual perspectives perpendicular to the airfoil led to better inspection rates (up to 87.5%) than edge perspectives (51.0% to 66.5%). Background colour was not a significant factor. The eye tracking results of novices showed an unstructured search path, characterised by numerous fixations, leading to longer inspection times. Experts in contrast applied a systematic search strategy with focus on the edges, and showed a better defect discrimination ability. This observation was consistent across all stimuli, thus independent of the influence factors. Conclusions—Eye tracking identified the challenges of the inspection process and errors made. 
A revised inspection framework was proposed based on the insights gained, supporting the idea of an underlying mental model.

50
Gong H, Hsieh SS, Holmes D, Cook D, Inoue A, Bartlett D, Baffour F, Takahashi H, Leng S, Yu L, McCollough CH, Fletcher JG. An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation. Med Phys 2021; 48:6710-6723. [PMID: 34534365 PMCID: PMC8595866 DOI: 10.1002/mp.15219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 08/28/2021] [Accepted: 08/30/2021] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data using this platform for different eye-tracking data acquisition modes. METHODS An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at a 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chinrest), free movement with general biofeedback, and free movement with strict biofeedback.
Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target prior to the integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation using units of image pixels and the degree of visual angle. RESULTS The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± the standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3 viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while the use of general biofeedback demonstrated a slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. 
These corresponded to visual angles ranging from 0.7° to 1.3°. CONCLUSIONS An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to head-stabilized mode.
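The pixel-to-visual-angle conversion behind figures such as these can be sketched from basic geometry; the pixel pitch and viewing distance below are assumed values, not the study's actual display parameters.

```python
import math

def offset_to_visual_angle(offset_px, px_size_mm, viewing_distance_mm):
    """Convert an on-screen fixation offset (pixels) to degrees of visual angle."""
    offset_mm = offset_px * px_size_mm
    return math.degrees(2 * math.atan(offset_mm / (2 * viewing_distance_mm)))

# Hypothetical display: 0.25 mm pixel pitch, viewed from 650 mm.
print(round(offset_to_visual_angle(18, 0.25, 650), 2))  # ≈ 0.40 degrees
```

For small angles the result scales nearly linearly with offset, so an accuracy figure in pixels translates directly to degrees once the display geometry is fixed.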
Affiliation(s)
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Scott S. Hsieh
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Holmes
- Department of Physiology & Biomedical Engineering, Mayo Clinic, Rochester, MN 55901
- David Cook
- Department of Internal Medicine, Mayo Clinic, Rochester, MN 55901
- Akitoshi Inoue
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Bartlett
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN 55901