1. Anikina A, Ibragimova D, Mustafaev T, Mello-Thoms C, Ibragimov B. Prediction of radiological decision errors from longitudinal analysis of gaze and image features. Artif Intell Med 2024; 160:103051. PMID: 39708677. DOI: 10.1016/j.artmed.2024.103051.
Abstract
Medical imaging, particularly radiography, is an indispensable part of diagnosing many chest diseases. Final diagnoses are made by radiologists based on images, but the decision-making process always carries a risk of incorrect interpretation. Misinterpreted findings can lead to delayed treatment, prescription of inappropriate therapy, or even a completely missed diagnosis. In this context, our study aims to determine whether diagnostic errors made by radiologists can be predicted using eye-tracking technology. For this purpose, we asked 4 radiologists with different levels of experience to analyze 1000 images covering a wide range of chest diseases. Using eye-tracking data, we calculated the radiologists' gaze fixation points and generated feature vectors from these data to describe the radiologists' gaze behavior during image analysis. Additionally, we emulated the process of revealing the read images following the radiologists' gaze data to create a more comprehensive picture of their analysis. We then applied a recurrent neural network to predict diagnostic errors. Our results showed a ROC AUC score of 0.7755, demonstrating significant potential for this approach in enhancing the accuracy of diagnostic error recognition.
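The reported 0.7755 is an ROC AUC: the probability that a randomly chosen error case receives a higher predicted-error score than a randomly chosen correct case. As an illustrative sketch (not the authors' pipeline), a minimal rank-based AUC can be computed directly from labels and scores:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC, equivalent to the normalized Mann-Whitney U
    statistic: the fraction of (positive, negative) pairs in which the
    positive case outscores the negative one (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 corresponds to chance-level discrimination, so the study's 0.7755 sits well above chance.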
Affiliation(s)
- Anna Anikina
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tamerlan Mustafaev
- Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Bulat Ibragimov
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
2. Lopes A, Ward AD, Cecchini M. Eye tracking in digital pathology: A comprehensive literature review. J Pathol Inform 2024; 15:100383. PMID: 38868488. PMCID: PMC11168484. DOI: 10.1016/j.jpi.2024.100383.
Abstract
Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology, from conventional microscopes to digital whole slide images, allows new research to be conducted on pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately producing stronger and more proficient generations of pathologists. The goal of this review on eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using 'pathology' AND 'eye tracking' synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy exists. The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices to search like an expert remains unclear. In this article, the main challenges and prospects of eye tracking in pathology are briefly discussed along with their implications for the field.
Affiliation(s)
- Alana Lopes
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Aaron D. Ward
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Matthew Cecchini
- Department of Pathology and Laboratory Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada
3. Vrzáková H, Tapiala J, Iso-Mustajärvi M, Timonen T, Dietz A. Estimating Cognitive Workload Using Task-Related Pupillary Responses in Simulated Drilling in Cochlear Implantation. Laryngoscope 2024; 134:5087-5095. PMID: 38989899. DOI: 10.1002/lary.31612.
Abstract
OBJECTIVES Training in temporal bone drilling requires more than mastering technical skills with the drill. Skills such as visual imagery, bimanual dexterity, and stress management need to be mastered along with precise knowledge of anatomy. In otorhinolaryngology, these psychomotor skills underlie performance in drilling of the temporal bone for access to the inner ear in cochlear implant surgery. However, little is known about how psychomotor skills and workload management impact practitioners' continuous and overall performance. METHODS To understand how a practitioner's workload and performance unfold over time, we examined task-evoked pupillary responses (TEPR) of 22 medical students who performed transmastoid posterior tympanotomy (TMPT) and removal of the bony overhang of the round window niche in a 3D-printed model of the temporal bone. We investigated how students' TEPR metrics (Average Pupil Size [APS], Index of Pupil Activity [IPA], and Low/High Index of Pupillary Activity [LHIPA]) and time spent in drilling phases corresponded to performance in key drilling phases. RESULTS All TEPR measures revealed significant differences between key drilling phases that corresponded to the anticipated workload. Enlarging the facial recess lasted significantly longer than the other phases. IPA captured a significant increase in workload during thinning of the posterior canal wall, while APS revealed increased workload during drilling of the bony overhang. CONCLUSION Our findings contribute to contemporary competency-based medical residency programs, in which objective and continuous monitoring allows participants' progress in expertise acquisition to be tracked.
Affiliation(s)
- Hana Vrzáková
- School of Computing, University of Eastern Finland, Joensuu, Finland
- Jesse Tapiala
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Tomi Timonen
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
4. Kok EM, Niehorster DC, van der Gijp A, Rutgers DR, Auffermann WF, van der Schaaf M, Kester L, van Gog T. The effects of gaze-display feedback on medical students' self-monitoring and learning in radiology. Adv Health Sci Educ Theory Pract 2024; 29:1689-1710. PMID: 38555550. PMCID: PMC11549167. DOI: 10.1007/s10459-024-10322-6.
Abstract
Self-monitoring is essential for effectively regulating learning, but difficult in visual diagnostic tasks such as radiograph interpretation. Eye-tracking technology can visualize viewing behavior in gaze displays, thereby providing information about visual search and decision-making. We hypothesized that individually adaptive gaze-display feedback improves the posttest performance and self-monitoring of medical students who learn to detect nodules in radiographs. In 78 medical students, we investigated the effects of (1) search displays, showing which part of the image the participant had searched, and (2) decision displays, showing which parts of the image received prolonged attention. After a pretest and instruction, participants practiced identifying nodules in 16 cases under search-display, decision-display, or no-feedback conditions (n = 26 per condition). A 10-case posttest, without feedback, was administered to assess learning outcomes. After each case, participants provided self-monitoring and confidence judgments. Afterward, participants reported on self-efficacy, perceived competence, feedback use, and perceived usefulness of the feedback. Bayesian analyses showed no benefits of gaze displays for posttest performance, monitoring accuracy (the absolute difference between participants' estimated and actual test performance), completeness of viewing behavior, self-efficacy, or perceived competence. Participants receiving search displays reported greater feedback utilization than participants receiving decision displays, and found the feedback more useful when the gaze data displayed were precise and accurate. As completeness of search was not related to posttest performance, search displays might not have been sufficiently informative to improve self-monitoring. Information from decision displays was rarely used to inform self-monitoring. Further research should address whether and when gaze displays can support learning.
Affiliation(s)
- Ellen M Kok
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Anouk van der Gijp
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Dirk R Rutgers
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marieke van der Schaaf
- Utrecht Center for Research and Development in Health Professions Education, University Medical Center Utrecht, Utrecht, The Netherlands
- Liesbeth Kester
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
- Tamara van Gog
- Department of Education, Utrecht University, P.O. Box 80140, 3508 CS, Utrecht, The Netherlands
5. Specian Junior FC, Litchfield D, Sandars J, Cecilio-Fernandes D. Use of eye tracking in medical education. Med Teach 2024; 46:1502-1509. PMID: 38382474. DOI: 10.1080/0142159x.2024.2316863.
Abstract
Eye tracking has become increasingly applied in medical education research for studying the cognitive processes that occur during the performance of a task, such as image interpretation and surgical skills development. However, analyzing and interpreting the large amount of data obtained by eye tracking can be confusing, and in this article we aim to clarify both. Understanding the relationship between eye-tracking metrics (such as gaze, pupil, and blink rate) and cognitive processes (such as visual attention, perception, memory, and cognitive workload) is essential. The importance of calibration and how the limitations of eye tracking can be overcome are also highlighted.
Affiliation(s)
- John Sandars
- Health Research Institute, Edge Hill University, Ormskirk, UK
- Dario Cecilio-Fernandes
- Department of Medical Psychology and Psychiatry, School of Medical Sciences, University of Campinas, Campinas, São Paulo, Brazil
6. Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Trans Vis Comput Graph 2024; 30:7299-7309. PMID: 39255163. DOI: 10.1109/tvcg.2024.3456197.
Abstract
Neuron tracing, alternatively referred to as neuron reconstruction, is the procedure for extracting a digital representation of three-dimensional neuronal morphology from stacks of microscopic images. Accurate neuron tracing is critical for profiling neuroanatomical structure at the single-cell level and analyzing neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions to neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data-generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we define a set of interactors for controllable and efficient neuron-tracing interactions in an immersive environment. We also develop a GPU-accelerated automatic tracing algorithm that generates updated neuron reconstructions in real time, as well as a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented on one virtual reality (VR) headset and one augmented reality (AR) headset with satisfactory results. Two user studies demonstrated the effectiveness of the interactors and the efficiency of our method in comparison with other approaches to neuron tracing.
7. Šoková B, Baránková M, Halamová J. Fixation patterns in pairs of facial expressions-preferences of self-critical individuals. PeerJ Comput Sci 2024; 10:e2413. PMID: 39650388. PMCID: PMC11623007. DOI: 10.7717/peerj-cs.2413.
Abstract
So far, studies have revealed some differences in how long self-critical individuals fixate on specific facial expressions, as well as difficulties in recognising these expressions. However, the research has also indicated a need to distinguish between the different forms of self-criticism (inadequate self or hated self), a key underlying factor in psychopathology. Therefore, the aim of the current research was to explore fixation patterns for all seven primary emotions (happiness, sadness, fear, disgust, contempt, anger, and surprise) and the neutral facial expression in relation to the level of self-criticism by presenting random facial stimuli in the right or left visual field. Based on previous studies, two groups were defined (high and low inadequate and hated self), and their patterns of fixations and eye movements were compared. The research sample consisted of 120 adult participants, 60 women and 60 men. We used the Forms of Self-Criticizing and Self-Reassuring Scale to measure self-criticism. As stimuli for the eye-tracking task, we used facial expressions from the Umeå University Database of Facial Expressions. Eye movements were recorded using the Tobii X2 eye tracker. Results showed that in highly self-critical participants with inadequate self, time to first fixation and duration of first fixation were shorter. Respondents with higher inadequate self also exhibited a sustained pattern of fixations (total fixation duration, total fixation duration ratio, and average fixation duration): fixation time increased as self-criticism increased, indicating heightened attention to facial expressions. On the other hand, individuals with high hated self showed increased total fixation duration and fixation count for emotions presented in the right visual field but did not differ in initial fixation metrics from the high inadequate self group. These results suggest that the two forms of self-criticism, inadequate self and hated self, may function as distinct mechanisms in relation to emotional processing, with implications for their role as potential transdiagnostic markers of psychopathology based on fixation eye-tracking metrics.
Affiliation(s)
- Bronislava Šoková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Martina Baránková
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
- Júlia Halamová
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University, Bratislava, Slovakia
8. Worley L, Colley MA, Rodriguez CC, Redden D, Logullo D, Pearson W. Enhancing Imaging Anatomy Competency: Integrating Digital Imaging and Communications in Medicine (DICOM) Viewers Into the Anatomy Lab Experience. Cureus 2024; 16:e68878. PMID: 39376869. PMCID: PMC11457894. DOI: 10.7759/cureus.68878.
Abstract
INTRODUCTION Radiologic interpretation is a skill necessary for all physicians to provide quality care for their patients. However, some medical students are not exposed to Digital Imaging and Communications in Medicine (DICOM) image manipulation until their third-year clinical rotations. The objective of this study was to evaluate how medical students exposed to DICOM manipulation perform in identifying anatomical structures compared with students who were not exposed. METHODS This was a cross-sectional cohort study with 19 medical student participants organized into test and control groups. The test group consisted of first-year students who had been exposed to a new imaging anatomy curriculum (n = 9). The control group consisted of second-year students who had not had this experience (n = 10). The outcomes measured included quiz performance, self-reported confidence levels, and eye-tracking data. RESULTS Students in the test group performed better on the quiz than students in the control group (p = 0.03). Confidence did not differ significantly between the test and control groups (p = 0.16), though a moderate-to-large effect size was noted (Hedges' g = 0.75). Saccade peak velocity and fixation duration did not differ significantly between the groups (p = 0.29, p = 0.77), though a moderate effect size improvement in saccade peak velocity was noted for the test group (Hedges' g = 0.49). CONCLUSION The results from this study suggest that early introduction of DICOM imaging into a medical school curriculum does impact students' performance when asked to identify anatomical structures on a standardized quiz.
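The non-significant comparisons above are summarized with Hedges' g, a pooled-SD standardized mean difference with a small-sample correction. As a hedged illustration of the standard textbook formula (the data below are invented, not the study's):

```python
import math

def hedges_g(x, y):
    """Hedges' g: (mean(x) - mean(y)) / pooled SD, scaled by the
    small-sample correction J = 1 - 3 / (4*df - 1), df = nx + ny - 2."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    df = nx + ny - 2
    s_pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / df)
    j = 1 - 3 / (4 * df - 1)  # correction for small-sample bias
    return j * (mx - my) / s_pooled
```

With groups of 9 and 10 students, as in this study, the correction J shrinks the raw Cohen's d by roughly 5%, which is why Hedges' g is preferred at these sample sizes.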
Affiliation(s)
- Luke Worley
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA
- Maria A Colley
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA
- David Redden
- Research and Biostatistics, Edward Via College of Osteopathic Medicine, Auburn, USA
- Drew Logullo
- Biomedical Affairs and Research, Edward Via College of Osteopathic Medicine, Auburn, USA
- William Pearson
- Anatomical Sciences, Edward Via College of Osteopathic Medicine, Auburn, USA
9. Hsieh SS, Holmes DR III, Carter RE, Tan N, Inoue A, Yalon M, Gong H, Sudhir Pillai P, Leng S, Yu L, Fidler JL, Cook DA, McCollough CH, Fletcher JG. Peripheral liver metastases are more frequently missed than central metastases in contrast-enhanced CT: insights from a 25-reader performance study. Abdom Radiol (NY) 2024. PMID: 39162799. DOI: 10.1007/s00261-024-04520-4.
Abstract
PURPOSE Subtle liver metastases may be missed in contrast enhanced CT imaging. We determined the impact of lesion location and conspicuity on metastasis detection using data from a prior reader study. METHODS In the prior reader study, 25 radiologists examined 40 CT exams each and circumscribed all suspected hepatic metastases. CT exams were chosen to include a total of 91 visually challenging metastases. The detectability of a metastasis was defined as the fraction of radiologists that circumscribed it. A conspicuity index was calculated for each metastasis by multiplying metastasis diameter with its contrast, defined as the difference between the average of a circular region within the metastasis and the average of the surrounding circular region of liver parenchyma. The effects of distance from liver edge and of conspicuity index on metastasis detectability were measured using multivariable linear regression. RESULTS The median metastasis was 1.4 cm from the edge (interquartile range [IQR], 0.9-2.1 cm). Its diameter was 1.2 cm (IQR, 0.9-1.8 cm), and its contrast was 38 HU (IQR, 23-68 HU). An increase of one standard deviation in conspicuity index was associated with a 6.9% increase in detectability (p = 0.008), whereas an increase of one standard deviation in distance from the liver edge was associated with a 5.5% increase in detectability (p = 0.03). CONCLUSION Peripheral liver metastases were missed more frequently than central liver metastases, with this effect depending on metastasis size and contrast.
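To make the conspicuity index described above concrete (metastasis diameter multiplied by contrast, with contrast taken as the mean attenuation inside the lesion minus the mean of the surrounding parenchyma), here is a sketch on a 2D slice. The circular lesion mask and the equal-area annulus used for the surrounding region are illustrative assumptions, not the authors' exact region definitions:

```python
import numpy as np

def conspicuity_index(image, cx, cy, r):
    """Illustrative conspicuity index: diameter (2r, in pixels) times
    contrast, where contrast is the mean value inside a circle of
    radius r at (cx, cy) minus the mean of an equal-area surrounding
    annulus (radius r to r*sqrt(2))."""
    yy, xx = np.indices(image.shape)
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inside = image[d2 <= r ** 2]                     # lesion region
    ring = image[(d2 > r ** 2) & (d2 <= 2 * r ** 2)] # surrounding parenchyma
    contrast = inside.mean() - ring.mean()
    return 2 * r * contrast
```

In the study's units (cm for diameter, HU for contrast), the median metastasis would score roughly 1.2 cm x 38 HU ≈ 46 cm·HU; the regression then standardizes this index before relating it to detectability.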
Affiliation(s)
- Akitoshi Inoue
- Mayo Clinic, Rochester, USA
- Shiga University of Medical Science, Ōtsu, Japan
- Parvathy Sudhir Pillai
- Mayo Clinic, Rochester, USA
- The University of Texas MD Anderson Cancer Center, Houston, USA
10. Šola HM, Qureshi FH, Khawaja S. Predicting Behaviour Patterns in Online and PDF Magazines with AI Eye-Tracking. Behav Sci (Basel) 2024; 14:677. PMID: 39199073. PMCID: PMC11351346. DOI: 10.3390/bs14080677.
Abstract
This study aims to improve college magazines, making them more engaging and user-friendly. We combined eye-tracking technology with artificial intelligence to predict consumer behaviours and preferences. Our analysis included three college magazines, in both online and PDF formats. We evaluated user experience using neuromarketing eye-tracking AI prediction software trained on a large consumer neuroscience dataset of eye-tracking recordings from 180,000 participants, collected with Tobii X2 30 equipment and encompassing over 100 billion data points across 15 consumer contexts. Analyses (ANOVA, Welch's two-sample t-test, and Pearson's correlation) were conducted with R v. 2023.06.0+421 and IBM SPSS Statistics v. 27. Our research demonstrated the potential of modern eye-tracking AI technologies to provide insights into various types of attention, including focus, engagement, cognitive demand, and clarity. The reported accuracy of 97-99% underscores the reliability and robustness of the approach. This study also emphasizes the potential for future research to explore automated datasets, enhancing reliability and applicability across various fields.
Affiliation(s)
- Hedda Martina Šola
- Oxford Centre For Applied Research and Entrepreneurship (OxCARE), Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK
- Institute for Neuromarketing & Intellectual Property, Jurja Ves III spur no 4, 10000 Zagreb, Croatia
- Sarwar Khawaja
- Oxford Business College, 65 George Street, Oxford OX1 2BQ, UK
11. Byrne CA, Voute LC, Marshall JF. Interobserver agreement during clinical magnetic resonance imaging of the equine foot. Equine Vet J 2024. PMID: 38946165. DOI: 10.1111/evj.14126.
Abstract
BACKGROUND Agreement between experienced observers for assessment of pathology and assessment confidence are poorly documented for magnetic resonance imaging (MRI) of the equine foot. OBJECTIVES To report interobserver agreement for pathology assessment and observer confidence for key anatomical structures of the equine foot during MRI. STUDY DESIGN Exploratory clinical study. METHODS Ten experienced observers (diploma or associate level) assessed 15 equine foot MRI studies acquired from clinical databases of 3 MRI systems. Observers graded pathology in seven key anatomical structures (Grade 1: no pathology, Grade 2: mild pathology, Grade 3: moderate pathology, Grade 4: severe pathology) and provided a grade for their confidence in each pathology assessment (Grade 1: high confidence, Grade 2: moderate confidence, Grade 3: limited confidence, Grade 4: no confidence). Interobserver agreement for the presence/absence of pathology and agreement for individual grades of pathology were assessed with Fleiss' kappa (k). Overall interobserver agreement for pathology was determined using Fleiss' kappa and Kendall's coefficient of concordance (KCC). The distribution of grading was also visualised with bubble charts. RESULTS Interobserver agreement for the presence/absence of pathology of individual anatomical structures was poor-to-fair, except for the navicular bone, which had moderate agreement (k = 0.52). Relative agreement for pathology grading (accounting for the ranking of grades) ranged from KCC = 0.19 for the distal interphalangeal joint to KCC = 0.70 for the navicular bone. Agreement was generally greatest at the extremes of pathology. Observer confidence in pathology assessment was generally moderate to high. MAIN LIMITATIONS Distribution of pathology varied between anatomical structures due to random selection of clinical MRI studies. Observers had most experience with low-field MRI. CONCLUSIONS Even with experienced observers, there can be notable variation in the perceived severity of foot pathology on MRI for individual cases, which could be important in a clinical context.
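The agreement statistics above use Fleiss' kappa for multiple raters. As a hedged sketch of the standard textbook formulation (not the study's software), with `ratings[i][j]` counting the raters who assigned subject i to category j:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for N subjects rated by n raters into k categories.
    ratings[i][j] = number of raters assigning subject i to category j;
    every row must sum to the same rater count n."""
    N = len(ratings)
    n = sum(ratings[0])
    k = len(ratings[0])
    # Marginal proportion of assignments per category.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-subject observed agreement.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N          # mean observed agreement
    P_e = sum(p * p for p in p_j) # chance agreement
    return (P_bar - P_e) / (1 - P_e)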
Affiliation(s)
- Christian A Byrne
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- Lance C Voute
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- John F Marshall
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
12. Ibragimov B, Mello-Thoms C. The Use of Machine Learning in Eye Tracking Studies in Medical Imaging: A Review. IEEE J Biomed Health Inform 2024; 28:3597-3612. PMID: 38421842. PMCID: PMC11262011. DOI: 10.1109/jbhi.2024.3371893.
Abstract
Machine learning (ML) has revolutionized medical image-based diagnostics. In this review, we cover a rapidly emerging field that can be potentially significantly impacted by ML - eye tracking in medical imaging. The review investigates the clinical, algorithmic, and hardware properties of the existing studies. In particular, it evaluates 1) the type of eye-tracking equipment used and how the equipment aligns with study aims; 2) the software required to record and process eye-tracking data, which often requires user interface development, and controller command and voice recording; 3) the ML methodology utilized depending on the anatomy of interest, gaze data representation, and target clinical application. The review concludes with a summary of recommendations for future studies, and confirms that the inclusion of gaze data broadens the ML applicability in Radiology from computer-aided diagnosis (CAD) to gaze-based image annotation, physicians' error detection, fatigue recognition, and other areas of potentially high research and clinical impact.
13. Eminaga O, Abbas M, Kunder C, Tolkach Y, Han R, Brooks JD, Nolley R, Semjonow A, Boegemann M, West R, Long J, Fan RE, Bettendorf O. Critical evaluation of artificial intelligence as a digital twin of pathologists for prostate cancer pathology. Sci Rep 2024; 14:5284. PMID: 38438436. PMCID: PMC10912767. DOI: 10.1038/s41598-024-55228-w.
Abstract
Prostate cancer pathology plays a crucial role in clinical management but is time-consuming. Artificial intelligence (AI) shows promise in detecting prostate cancer and grading patterns. We tested an AI-based digital twin of a pathologist, vPatho, on 2603 histological images of prostate tissue stained with hematoxylin and eosin, and analyzed various factors influencing tumor grade discordance between the vPatho system and six human pathologists. vPatho achieved performance in prostate cancer detection and tumor volume estimation comparable to that reported in the literature. We examined the concordance levels between vPatho and the human pathologists. Notably, moderate to substantial agreement was observed in identifying complementary histological features such as ductal and cribriform patterns, nerves, blood vessels, and lymphocyte infiltration. However, concordance in tumor grading decreased on prostatectomy specimens (κ = 0.44) compared with biopsy cores (κ = 0.70). Adjusting the decision threshold for the secondary Gleason pattern from 5% to 10% improved the concordance between pathologists and vPatho for tumor grading on prostatectomy specimens (κ from 0.44 to 0.64). Potential causes of grade discordance included the vertical extent of tumors toward the prostate boundary and the proportion of slides with prostate cancer; Gleason pattern 4 was particularly associated with this population. Notably, the grade according to vPatho was not specific to any of the six pathologists involved in routine clinical grading. In conclusion, our study highlights the potential utility of AI in developing a digital twin of a pathologist. This approach can help uncover limitations in AI adoption and in the practical application of the current grading system for prostate cancer pathology.
Affiliation(s)
- Mahmoud Abbas: Department of Pathology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Christian Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, USA
- Yuri Tolkach: Department of Pathology, Cologne University Hospital, Cologne, Germany
- Ryan Han: Department of Computer Science, Stanford University, Stanford, USA
- James D Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Rosalie Nolley: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Axel Semjonow: Department of Urology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Martin Boegemann: Department of Urology, Prostate Center, University Hospital Muenster, Muenster, Germany
- Robert West: Department of Pathology, Cologne University Hospital, Cologne, Germany
- Jin Long: Department of Pediatrics, Stanford University School of Medicine, Stanford, USA
- Richard E Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
14
Ahmadi N, Sasangohar F, Yang J, Yu D, Danesh V, Klahn S, Masud F. Quantifying Workload and Stress in Intensive Care Unit Nurses: Preliminary Evaluation Using Continuous Eye-Tracking. Hum Factors 2024; 66:714-728. [PMID: 35511206] [DOI: 10.1177/00187208221085335]
Abstract
OBJECTIVE: (1) To assess the mental workload of intensive care unit (ICU) nurses during 12-hour day and night shifts using eye movement data; (2) to explore the impact of stress on the ocular metrics of nurses performing patient care in the ICU.
BACKGROUND: Prior studies have employed workload scoring systems or accelerometer data to assess ICU nurses' workload. This is the first naturalistic attempt to explore nurses' mental workload using eye movement data.
METHODS: Tobii Pro Glasses 2 eye-tracking and Empatica E4 devices were used to collect eye movement and physiological data from 15 nurses during 12-hour shifts (252 observation hours). We used mixed-effect models and an ordinal regression model with a random effect to analyze changes in eye movement metrics during high-stress episodes.
RESULTS: Although the cadence and characteristics of nurse workload can vary between day and night shifts, no significant difference in eye movement values was detected. However, eye movement metrics showed that the initial handoff period of a nursing shift carries a higher mental workload than other times. Analysis of ocular metrics showed that stress is positively associated with the number of eye fixations and gaze entropy, but negatively correlated with saccade duration and pupil diameter.
CONCLUSION: Eye-tracking technology can be used to assess the temporal variation of stress and the associated changes in mental workload in the ICU environment. A real-time system could be developed to monitor stress and workload for intervention development.
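Gaze entropy, one of the ocular metrics above, is commonly computed as the Shannon entropy of the distribution of fixations over screen regions or areas of interest. A minimal stdlib sketch of that idea, with hypothetical fixation counts rather than the study's data:

```python
import math

def gaze_entropy(fixation_counts):
    """Shannon entropy (bits) of the fixation distribution over regions.

    Higher values mean gaze is spread evenly across regions (scanning);
    lower values mean gaze is concentrated in a few regions.
    """
    total = sum(fixation_counts)
    probs = [c / total for c in fixation_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical fixation counts over 4 regions of a patient room.
focused = [20, 2, 1, 1]    # gaze concentrated on one region
scattered = [6, 6, 6, 6]   # gaze spread evenly across regions
print(gaze_entropy(focused) < gaze_entropy(scattered))  # → True
```

Under this measure the evenly spread pattern attains the maximum entropy of log2(4) = 2 bits, which is why increases in gaze entropy are read as a broader, more stress-driven visual search.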
Affiliation(s)
- Nima Ahmadi: Center for Outcomes Research, Houston Methodist, Houston, TX, USA
- Farzan Sasangohar: Center for Outcomes Research, Houston Methodist, Houston, TX, USA; Industrial and Systems Engineering, Texas A&M University, College Station, TX, USA
- Jing Yang: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Denny Yu: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Valerie Danesh: Baylor Scott & White Health, Center for Applied Health Research, Dallas, TX, USA; University of Texas at Austin, School of Nursing, Austin, TX, USA
- Steven Klahn: Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA
- Faisal Masud: Center for Critical Care, Houston Methodist Hospital, Houston, TX, USA
15
Kavuri A, Das M. Examining the Influence of Digital Phantom Models in Virtual Imaging Trials for Tomographic Breast Imaging. arXiv 2024:arXiv:2402.00812v1. [PMID: 38351932] [PMCID: PMC10862940]
Abstract
Purpose: Digital phantoms are a key component of virtual imaging trials (VITs), which aim to assess and optimize new medical imaging systems and algorithms. However, these phantoms vary in voxel resolution, appearance, and structural detail. This study examines whether and how variations between digital phantoms influence system optimization, with digital breast tomosynthesis (DBT) as the chosen modality.
Methods: We selected widely used, open-access digital breast phantoms generated with different methods. For each phantom type, we created an ensemble of DBT images to test acquisition strategies. Human observer localization ROC (LROC) studies were used to assess observer performance for each case. The noise power spectrum (NPS) was estimated to compare the phantoms' structural components. We further computed several gaze metrics to quantify gaze patterns when viewing images generated from different phantom types.
Results: Our LROC results show that the arc samplings for peak performance were approximately 2.5° and 6° in the Bakic and XCAT breast phantoms, respectively, for a 3-mm lesion detection task, indicating that system optimization outcomes from VITs can vary with phantom type and structural frequency components. Additionally, a significant correlation (p < 0.01) between gaze metrics and diagnostic performance suggests that gaze analysis can be used to understand and evaluate task difficulty in VITs.
Conclusion: Our results point to the critical need to evaluate realism in digital phantoms and to ensure sufficient structural variation at spatial frequencies relevant to the signal size for an intended task. In addition, standardizing phantom generation and validation tools may reduce discrepancies among independently conducted VITs for system or algorithmic optimization.
Affiliation(s)
- Amar Kavuri: Department of Biomedical Engineering, University of Houston, Houston, TX 77204, USA
- Mini Das: Department of Biomedical Engineering and Department of Physics, University of Houston, Houston, TX 77204, USA
16
Hsieh SS, Inoue A, Yalon M, Cook DA, Gong H, Sudhir Pillai P, Johnson MP, Fidler JL, Leng S, Yu L, Carter RE, Holmes DR, McCollough CH, Fletcher JG. Targeted Training Reduces Search Errors but Not Classification Errors for Hepatic Metastasis Detection at Contrast-Enhanced CT. Acad Radiol 2024; 31:448-456. [PMID: 37567818] [PMCID: PMC10853479] [DOI: 10.1016/j.acra.2023.06.017]
Abstract
RATIONALE AND OBJECTIVES: Methods are needed to improve the detection of hepatic metastases. Errors occur both in lesion detection (search) and in deciding benign versus malignant (classification). Our purpose was to evaluate a training program designed to reduce search errors and classification errors in the detection of hepatic metastases on contrast-enhanced abdominal computed tomography (CT).
MATERIALS AND METHODS: After Institutional Review Board approval, we conducted a single-group prospective pretest-posttest study. The pretest and posttest were identical and consisted of interpreting 40 contrast-enhanced abdominal CT exams containing 91 liver metastases under eye tracking. Between pretest and posttest, readers completed search training with eye-tracker feedback and coaching to increase interpretation time, use liver windows, and use coronal reformations. They also completed classification training with part-task practice, rating lesions as benign or malignant. The primary outcome was metastases missed due to search errors (<2 seconds of gaze under the eye tracker) and classification errors (>2 seconds). Jackknife free-response receiver operating characteristic (JAFROC) analysis was also conducted.
RESULTS: A total of 31 radiologist readers (8 abdominal subspecialists, 8 nonabdominal subspecialists, 15 senior residents/fellows) participated. Search errors were reduced (pretest 11%, posttest 8%, difference 3% [95% confidence interval, 0.3%-5.1%], P = .01), but there was no difference in classification errors (difference 0%, P = .97) or in the JAFROC figure of merit (difference -0.01, P = .36). In subgroup analysis, abdominal subspecialists demonstrated no evidence of change.
CONCLUSION: Targeted training reduced search errors but not classification errors for the detection of hepatic metastases on contrast-enhanced abdominal CT. Improvements were not seen in all subgroups.
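The 2-second gaze criterion above is a common convention from the visual-search literature for splitting misses by whether the lesion was ever adequately inspected. A minimal sketch of that decision rule, with hypothetical dwell times rather than the study's code:

```python
def classify_miss(dwell_seconds, threshold=2.0):
    """Label a missed lesion by cumulative eye-tracker dwell time on it.

    Below 'threshold' seconds: the lesion was never adequately inspected
    (search error). At or above it: the lesion was inspected but
    dismissed as benign (classification error).
    """
    return "search error" if dwell_seconds < threshold else "classification error"

# Hypothetical cumulative dwell times (seconds) on three missed metastases.
for dwell in (0.4, 1.9, 6.2):
    print(dwell, classify_miss(dwell))
```

The split matters because the two error types call for different remediation: search errors motivate coverage-oriented training (the coaching above), while classification errors motivate decision-oriented practice (the part-task lesion rating).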
Affiliation(s)
- Scott S Hsieh: Department of Radiology and Department of General Internal Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Akitoshi Inoue: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Mariana Yalon: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- David A Cook: Quantitative Health Services - Clinical Trials and Biostatistics, Mayo Clinic, Rochester, MN 55905, USA
- Hao Gong: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Parvathy Sudhir Pillai: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Matthew P Johnson: Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55905, USA
- Jeff L Fidler: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Shuai Leng: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Lifeng Yu: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Rickey E Carter: Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN 55905, USA
- David R Holmes III: Quantitative Health Services - Clinical Trials and Biostatistics, Mayo Clinic, Jacksonville, FL 32224, USA
- Cynthia H McCollough: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Joel G Fletcher: Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
17
Sugimoto M, Oyamada M, Tomita A, Inada C, Sato M. Assessing the Link between Nurses' Proficiency and Situational Awareness in Neonatal Care Practice Using an Eye Tracker: An Observational Study Using a Simulator. Healthcare (Basel) 2024; 12:157. [PMID: 38255046] [PMCID: PMC10815009] [DOI: 10.3390/healthcare12020157]
Abstract
Nurses are expected to rely on a wide variety of visually available patient information to understand clinical situations. We therefore assumed a relationship between nurses' skills and their gaze trajectories. An observational study using a simulator was conducted to analyze gaze during neonatal care practice with eye tracking. We defined the face, thorax, and abdomen of the neonate, the timer, and the pulse oximeter as areas of interest (AOIs), and compared eye trajectories during respiration and heart rate assessment between 7 experienced and 13 novice nurses. There were no statistically significant differences in the time spent on each AOI for breathing or heart rate confirmation. However, novice nurses gazed at the thorax and abdomen significantly more often, and the variation in the number of gazes at the face was also significantly higher among novice nurses. These results indicate that experienced and novice nurses differ in their gaze movements during situational awareness. These objective and quantitative differences in gaze trajectories may help establish new educational tools for less experienced nurses.
Affiliation(s)
- Masahiro Sugimoto: Institute for Advanced Biosciences, Keio University, Tsuruoka 997-0052, Japan; Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Michiko Oyamada: Faculty of Human Care Department, Tohto University, 1-1 Hinode-cho, Numazu 410-0032, Japan; Department of Nursing, Nihon Institute of Medical Science, Iruma 350-0435, Japan
- Atsumi Tomita: Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Chiharu Inada: Faculty of Nursing, Japanese Red Cross College of Nursing, 4-1-3 Hiroo, Shibuya, Tokyo 150-0012, Japan
- Mitsue Sato: Department of Nursing, Kiryu University, Midori 379-2392, Japan
18
Wang H, Yu Z, Wang X. Expertise differences in cognitive interpreting: A meta-analysis of eye tracking studies across four decades. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1667. [PMID: 37858956] [DOI: 10.1002/wcs.1667]
Abstract
This meta-analytic research examines the influence of expertise on cognitive interpreting, emphasizing time efficiency, accuracy, and cognitive effort, in alignment with prevailing expertise theories that link professional development and cognitive efficiency. The study assimilates empirical data from 18 eye-tracking studies conducted over the past four decades, encompassing a sample of 1581 interpreters. The objective is to elucidate the role of expertise in interpretative performance while tracing the evolution of these dynamics over time. Findings suggest that expert interpreters outperform novices in time efficiency and accuracy and exhibit lower cognitive effort, especially in sight and consecutive interpreting. This effect is particularly pronounced in the English-Chinese language pair and with the use of E-Prime and Tobii eye-tracking systems. Further, fixation count and pupil size are essential metrics reflecting cognitive effort. These findings have vital implications for interpreter training programs, suggesting a focus on expertise development to enhance efficiency and accuracy, reduce cognitive load, and emphasize the importance of sight interpreting as a foundational skill. The selection of technology and the understanding of specific ocular metrics also emerged as essential for future research and practical applications in the interpreting industry. This article is categorized under: Psychology > Theory and Methods; Linguistics > Cognitive.
Affiliation(s)
- Huan Wang: Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China
- Zhonggen Yu: Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China; Academy of International Language Services, Center for Intelligent Language Education Research, National Base for Language Service Export, Beijing Language and Culture University, Beijing, China
- Xiaohui Wang: Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China
19
Hofmeijer EIS, Wu SC, Vliegenthart R, Slump CH, van der Heijden F, Tan CO. Artificial CT images can enhance variation of case images in diagnostic radiology skills training. Insights Imaging 2023; 14:186. [PMID: 37934344] [PMCID: PMC10630276] [DOI: 10.1186/s13244-023-01508-4]
Abstract
OBJECTIVES: We sought to investigate whether artificial medical images can blend in with original ones and whether they adhere to the variable anatomical constraints provided.
METHODS: Artificial images were generated with a generative model trained on publicly available standard and low-dose chest CT images (805 scans; 39,803 2D images), of which 17% contained evidence of pathological formations (lung nodules). The test set (90 scans; 5121 2D images) was used to assess whether artificial images (512 × 512 primary and control image sets) blended in with original images, using both quantitative metrics and expert opinion. We further assessed whether pathology characteristics in the artificial images can be manipulated.
RESULTS: Primary and control artificial images attained an average objective similarity of 0.78 ± 0.04 (on a scale from 0 [entirely dissimilar] to 1 [identical]) and 0.76 ± 0.06, respectively. Five radiologists with experience in chest and thoracic imaging provided a subjective measure of image quality; they rated artificial images at 3.13 ± 0.46 (on a scale from 1 [unrealistic] to 4 [almost indistinguishable from the original image]), close to their rating of the original images (3.73 ± 0.31). Radiologists clearly distinguished images in the control sets (2.32 ± 0.48 and 1.07 ± 0.19). In almost a quarter of the scenarios, they were not able to distinguish primary artificial images from the original ones.
CONCLUSION: Artificial images can be generated such that they blend in with original images and adhere to anatomical constraints, which can be manipulated to augment the variability of cases.
CRITICAL RELEVANCE STATEMENT: Artificial medical images can be used to enhance the availability and variety of medical training images by creating new but comparable images that can blend in with original images.
KEY POINTS:
• Artificial images, similar to original ones, can be created using generative networks.
• Pathological features of artificial images can be adjusted by guiding the network.
• Artificial images proved viable for augmenting the depth and breadth of diagnostic training.
Affiliation(s)
- Elfi Inez Saïda Hofmeijer: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Sheng-Chih Wu: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Rozemarijn Vliegenthart: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Cornelis Herman Slump: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Ferdi van der Heijden: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Can Ozan Tan: Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
20
Akerman M, Choudhary S, Liebmann JM, Cioffi GA, Chen RWS, Thakoor KA. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. [PMID: 37841006] [PMCID: PMC10571140] [DOI: 10.3389/fmed.2023.1251183]
Abstract
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, senior residents 2-3 years, and the expert group (fellows and faculty) over 3 to 30+ years. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and rate their diagnostic confidence. Each participant's eye movements were recorded with a Pupil Labs Core eye tracker as they diagnosed the reports. Expert ophthalmologists exhibited more refined and focused eye fixations, particularly on specific regions of the OCT reports such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL b-scan. The binary classification models developed using the derived features demonstrated accuracy of up to 94.0% in differentiating expert from novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and for distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for developing effective diagnostic tools.
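To make the expertise-classification idea concrete: once gaze recordings are reduced to numeric features, any standard classifier can separate the groups. A deliberately minimal stdlib sketch using a nearest-centroid rule and invented feature values (far simpler than, and not representative of, the models in the study):

```python
import math

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Hypothetical features: [mean fixation duration (s), fixations on RNFL map].
experts = [[0.45, 9.0], [0.50, 11.0], [0.40, 10.0]]
novices = [[0.25, 3.0], [0.30, 4.0], [0.20, 2.0]]
cents = {"expert": centroid(experts), "novice": centroid(novices)}
print(classify([0.42, 8.0], cents))  # → expert
```

In practice the features would be normalized and the model cross-validated; the point here is only the pipeline shape: gaze recording → feature vector → binary classifier.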
Affiliation(s)
- Michelle Akerman: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Sanmati Choudhary: Department of Computer Science, Columbia University, New York, NY, United States
- Jeffrey M. Liebmann: Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- George A. Cioffi: Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Royce W. S. Chen: Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Kaveri A. Thakoor: Department of Biomedical Engineering, Columbia University; Department of Computer Science, Columbia University; Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
21
Darici D, Reissner C, Missler M. Webcam-based eye-tracking to measure visual expertise of medical students during online histology training. GMS J Med Educ 2023; 40:Doc60. [PMID: 37881524] [PMCID: PMC10594038] [DOI: 10.3205/zma001642]
Abstract
Objectives: Visual expertise is essential for image-based tasks that rely on visual cues, such as in radiology or histology. Studies suggest that eye movements are related to visual expertise and can be measured by near-infrared eye tracking. With the spread of device-embedded webcam eye-tracking technology, cost-effective use in educational contexts has recently become feasible. This study investigated the feasibility of this methodology in a curricular online-only histology course during the 2021 summer term.
Methods: At two timepoints (t1 and t2), third-semester medical students were asked to diagnose a series of histological slides while their eye movements were recorded. Students' eye metrics, performance, and behavioral measures were analyzed using analyses of variance and multiple regression models.
Results: First, webcam eye-tracking provided eye movement data of satisfactory quality (mean accuracy = 115.7 ± 31.1 px). Second, the eye movement metrics reflected students' proficiency in finding relevant image sections (fixation count on relevant areas = 6.96 ± 1.56 vs. irrelevant areas = 4.50 ± 1.25). Third, students' eye movement metrics successfully predicted their performance (adjusted R² = 0.39, p < 0.001).
Conclusion: This study supports the use of webcam eye-tracking, expanding the range of educational tools available in the (digital) classroom. As students' interest in using webcam eye-tracking was high, possible areas of implementation are discussed.
Affiliation(s)
- Dogus Darici: Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Carsten Reissner: Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Markus Missler: Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
22
Lee M, Desy J, Tonelli AC, Walsh MH, Ma IWY. The association of attentional foci and image interpretation accuracy in novices interpreting lung ultrasound images: an eye-tracking study. Ultrasound J 2023; 15:36. [PMID: 37697149] [PMCID: PMC10495286] [DOI: 10.1186/s13089-023-00333-6]
Abstract
It is unclear where learners focus their attention when interpreting point-of-care ultrasound (POCUS) images. This study seeks to determine the relationship between attentional-focus metrics and lung ultrasound (LUS) interpretation accuracy in novice medical learners. A convenience sample of 14 medical residents with minimal LUS training viewed 8 LUS cineloops while their eye-tracking patterns were recorded. Areas of interest (AOIs) for each cineloop were mapped independently by two experts and externally validated by a third expert. The primary outcome of interest was image interpretation accuracy, presented as a percentage. Eye tracking was successfully captured for 10 of the 14 participants (71%) who completed the study. Participants spent a mean total of 8 min 44 s ± standard deviation (SD) 3 min 8 s on the cineloops, of which 1 min 14 s ± SD 34 s was spent fixated within the AOI. The mean accuracy score was 54.0% ± SD 16.8%. In regression analyses, fixation duration within the AOI was positively associated with accuracy (beta-coefficient 28.9, standard error (SE) 6.42, P = 0.002), as was total time spent viewing the videos (beta-coefficient 5.08, SE 0.59, P < 0.0001). For each additional minute spent fixating within the AOI, accuracy scores increased by 28.9%; for each additional minute spent viewing the video, accuracy scores increased by only 5.1%. Interpretation accuracy is strongly associated with time spent fixating within the AOI. Image interpretation training should consider targeting AOIs.
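The beta-coefficient reported above is a regression slope: accuracy regressed on time. For a single predictor it reduces to ordinary least squares, sketched here with made-up numbers (not the study's data, which used a multivariable model):

```python
def ols_slope_intercept(x, y):
    """Ordinary least-squares fit y ≈ a + b*x for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical: minutes fixated within the AOI vs. interpretation accuracy (%).
minutes = [0.5, 0.8, 1.0, 1.3, 1.7, 2.0]
accuracy = [38, 46, 52, 61, 72, 81]
a, b = ols_slope_intercept(minutes, accuracy)
print(round(b, 1))  # slope: accuracy gained per extra minute fixated in the AOI
```

The slope b is exactly how the abstract's "28.9% per additional minute" reads: the expected change in the outcome per unit change in the predictor.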
Affiliation(s)
- Matthew Lee: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Janeve Desy: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Ana Claudia Tonelli: UNISINOS University, Hospital de Clinicas de Porto Alegre, Porto Alegre, Brazil
- Michael H Walsh: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Irene W Y Ma: Division of General Internal Medicine, Department of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada; W21C, University of Calgary, Calgary, AB, Canada
23
Tzamaras HM, Wu HL, Moore JZ, Miller SR. Shifting Perspectives: A proposed framework for analyzing head-mounted eye-tracking data with dynamic areas of interest and dynamic scenes. Proc Hum Factors Ergon Soc Annu Meet 2023; 67:953-958. [PMID: 38450120] [PMCID: PMC10914345] [DOI: 10.1177/21695067231192929]
Abstract
Eye tracking is a valuable research method for understanding human cognition and is readily employed in human factors research, including human factors in healthcare. While wearable mobile eye trackers have become more readily available, there are no existing analysis methods for accurately and efficiently mapping dynamic gaze data onto dynamic areas of interest (AOIs), which limits their utility in human factors research. The purpose of this paper was to outline a proposed framework for automating the analysis of dynamic AOIs by integrating computer vision and machine learning (CVML). The framework is then tested in a use case of a central venous catheterization trainer with six dynamic AOIs. While the results of the validity trial indicate there is room for improvement in the proposed CVML method, the framework provides direction and guidance for human factors researchers using dynamic AOIs.
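At its core, mapping gaze onto a moving AOI reduces, per video frame, to a point-in-box test against that frame's AOI position (which in practice would come from an object tracker or detector). A minimal stdlib sketch of the idea with hypothetical per-frame values, not the paper's CVML pipeline:

```python
def hits_aoi(gaze, box):
    """True if a gaze point (x, y) falls inside an AOI box (x0, y0, x1, y1)."""
    x, y = gaze
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def dwell_frames(gaze_by_frame, aoi_by_frame):
    """Count frames in which gaze lands inside the (moving) AOI.

    Both streams are aligned per frame; the AOI box shifts as the
    tracked object moves through the scene camera's view.
    """
    return sum(hits_aoi(g, box) for g, box in zip(gaze_by_frame, aoi_by_frame))

# Hypothetical gaze samples and a drifting AOI over four frames.
gaze = [(100, 100), (110, 108), (300, 240), (130, 122)]
aoi = [(90, 90, 150, 150), (95, 95, 155, 155),
       (100, 100, 160, 160), (105, 105, 165, 165)]
print(dwell_frames(gaze, aoi))  # → 3
```

The hard part the paper addresses is producing `aoi_by_frame` automatically; once those per-frame boxes exist, dwell metrics follow from simple aggregation like this.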
Affiliation(s)
- Hang-Ling Wu: Pennsylvania State University, Mechanical Engineering
- Jason Z Moore: Pennsylvania State University, Mechanical Engineering
24
Bradley H, Smith BA, Wilson RB. Qualitative and Quantitative Measures of Joint Attention Development in the First Year of Life: A Scoping Review. Infant Child Dev 2023; 32:e2422. [PMID: 37872965] [PMCID: PMC10588805] [DOI: 10.1002/icd.2422]
Abstract
Joint attention (JA) is the purposeful coordination of an individual's focus of attention with that of another and begins to develop within the first year of life. Delayed or atypically developing JA is an early behavioral sign of many developmental disabilities, so assessing JA in infancy can improve our understanding of trajectories of typical and atypical development. This scoping review identified the most common methods for assessing JA in the first year of life. Methods were divided into qualitative and quantitative categories. Of 13,898 identified articles, 106 were selected after a robust search of four databases. The most frequently used methods were eye tracking, electroencephalography (EEG), behavioral coding, and the Early Social Communication Scales (ESCS). These methods were used to assess JA in typically and atypically developing infants in the first year of life. This study provides a comprehensive review of the past and current state of JA measurement in the literature, the strengths and limitations of the measures used, and next steps for researchers interested in investigating JA to strengthen this field going forward.
Affiliation(s)
- Holly Bradley
- Division of Behavioral Pediatrics, Children's Hospital Los Angeles, Los Angeles, California
- Beth A Smith
- Division of Behavioral Pediatrics, Children's Hospital Los Angeles, Los Angeles, California
- Developmental Neuroscience and Neurogenetics Program, The Saban Research Institute
- Department of Pediatrics, Keck School of Medicine, University of Southern California
- Rujuta B Wilson
- David Geffen School of Medicine at UCLA, UCLA Semel Institute for Neuroscience and Human Behavior, Divisions of Pediatric Neurology and Child Psychiatry, Los Angeles, California, USA

25
Kaushal S, Sun Y, Zukerman R, Chen RWS, Thakoor KA. Detecting Eye Disease Using Vision Transformers Informed by Ophthalmology Resident Gaze Data. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083657] [DOI: 10.1109/embc40787.2023.10340746]
Abstract
We showcase two proof-of-concept approaches for enhancing the Vision Transformer (ViT) model by integrating ophthalmology resident gaze data into its training. The resulting Fixation-Order-Informed ViT and Ophthalmologist-Gaze-Augmented ViT show greater accuracy and computational efficiency than the baseline ViT for detection of the eye disease glaucoma. Clinical relevance: By enhancing glaucoma detection via our gaze-informed ViTs, we introduce a new paradigm for medical experts to directly interface with medical AI, leading the way for more accurate and interpretable AI 'teammates' in the ophthalmic clinic.
26
Jiang H, Hou Y, Miao H, Ye H, Gao M, Li X, Jin R, Liu J. Eye tracking based deep learning analysis for the early detection of diabetic retinopathy: A pilot study. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104830]
27
Murray NP, Lewinski W, Sandri Heidner G, Lawton J, Horn R. Gaze Control and Tactical Decision-Making Under Stress in Active-Duty Police Officers During a Live Use-of-Force Response. J Mot Behav 2023; 56:30-41. [PMID: 37385608] [DOI: 10.1080/00222895.2023.2229946]
Abstract
During dynamic and stressful encounters, police officers are required to make rapid decisions that rely on effective decision-making, experience, and intuition. Tactical decision-making is influenced by the officer's ability to recognize critical visual information and estimate threat. The purpose of the current study was to investigate how visual search patterns (examined using cluster analysis) and factors that differentiate expertise (e.g., years of service, tactical training, related experiences) influence tactical decision-making in 44 active-duty police officers during a high-stress, high-threat, realistic use-of-force scenario following a car accident, and to examine the relationships between visual search patterns and physiological response (heart rate). A cluster analysis of visual search variables (fixation duration, fixation location difference score, and number of fixations) produced an Efficient Scan group and an Inefficient Scan group. Specifically, the Efficient Scan group demonstrated longer total fixation duration and differences in area of interest (AOI) fixation duration compared with the Inefficient Scan group. Although both groups exhibited a rise in physiological stress response (heart rate) throughout the high-stress scenario, the Efficient Scan group had a history of tactical training, better return-fire performance, and greater total sleep time, and demonstrated increased processing efficiency and more effective attentional control.
Affiliation(s)
- Nicholas P Murray
- Department of Kinesiology, East Carolina University, Greenville, NC, USA
- Gustavo Sandri Heidner
- Department of Exercise Science & Physical Education, Montclair State University, Montclair, NJ, USA
- Joshua Lawton
- Department of Kinesiology, East Carolina University, Greenville, NC, USA
- Robert Horn
- Department of Exercise Science & Physical Education, Montclair State University, Montclair, NJ, USA

28
Darici D, Masthoff M, Rischen R, Schmitz M, Ohlenburg H, Missler M. Medical imaging training with eye movement modeling examples: A randomized controlled study. Med Teach 2023:1-7. [PMID: 36943681] [DOI: 10.1080/0142159x.2023.2189538]
Abstract
PURPOSE To determine whether ultrasound training in which an expert's eye movements are superimposed on the underlying ultrasound video (eye movement modeling examples; EMMEs) leads to better learner outcomes than traditional eye-movement-free instruction. MATERIALS AND METHODS 106 undergraduate medical students were randomized into two groups: 51 students in the EMME group watched 5-min ultrasound examination videos combined with the eye movements of an expert performing the task, while the identical videos without the eye movements were shown to 55 students in the control group. Performance and behavioral parameters were compared pre- and post-intervention using ANOVAs. Additionally, cognitive load and prior knowledge in anatomy were surveyed. RESULTS After training, the EMME group identified more sonoanatomical structures correctly and completed the tasks faster than the control group. This effect was partly mediated by a reduction of extraneous cognitive load. Participants with greater prior anatomical knowledge benefited the most from the EMME training. CONCLUSION Displaying experts' eye movements in medical imaging training appears to be an effective way to foster the medical image interpretation skills of undergraduate medical students. One underlying mechanism might be that practicing with eye movements reduces cognitive load and helps learners activate their prior knowledge.
Affiliation(s)
- Dogus Darici
- Institute of Anatomy and Neurobiology, Westfälische Wilhelms-University, Münster, Germany
- Max Masthoff
- Clinic for Radiology, University Hospital Münster, Münster, Germany
- Robert Rischen
- Clinic for Radiology, University Hospital Münster, Münster, Germany
- Martina Schmitz
- Institute of Anatomy and Vascular Biology, Westfälische Wilhelms-University, Münster, Germany
- Hendrik Ohlenburg
- Institute of Education and Student Affairs, Studienhospital Münster, University of Münster, Germany
- Markus Missler
- Institute of Anatomy and Neurobiology, Westfälische Wilhelms-University, Münster, Germany

29
Drew T, Konold CE, Lavelle M, Brunyé TT, Kerr KF, Shucard H, Weaver DL, Elmore JG. Pathologist pupil dilation reflects experience level and difficulty in diagnosing medical images. J Med Imaging (Bellingham) 2023; 10:025503. [PMID: 37096053] [PMCID: PMC10122150] [DOI: 10.1117/1.jmi.10.2.025503]
Abstract
Purpose: Digital whole slide imaging allows pathologists to view slides on a computer screen instead of under a microscope. Digital viewing allows for real-time monitoring of pathologists' search behavior and neurophysiological responses during the diagnostic process. One particular neurophysiological measure, pupil diameter, could provide a basis for evaluating clinical competence during training or for developing tools that support the diagnostic process. Prior research shows that pupil diameter is sensitive to cognitive load, arousal, and switches between exploration and exploitation of a visual image. Different categories of lesions in pathology pose different levels of challenge, as indicated by diagnostic disagreement among pathologists. If pupil diameter is sensitive to the perceived difficulty in diagnosing biopsies, eye tracking could potentially be used to identify biopsies that may benefit from a second opinion. Approach: We measured case-onset baseline-corrected (phasic) and uncorrected (tonic) pupil diameter in 90 pathologists who each viewed and diagnosed 14 digital breast biopsy cases covering the diagnostic spectrum from benign to invasive breast cancer. Pupil data were extracted from the beginning of viewing and interpretation of each individual case. After removing 122 trials (<10%) with poor eye-tracking quality, 1138 trials remained. We used multiple linear regression with robust standard error estimates to account for dependent observations within pathologists. Results: We found a positive association between the magnitude of phasic dilation and subject-centered difficulty ratings, and between the magnitude of tonic dilation and untransformed difficulty ratings. When controlling for case diagnostic category, only the tonic-difficulty relationship persisted.
Conclusions: Results suggest that tonic pupil dilation may indicate overall arousal differences between pathologists as they interpret biopsy cases and could signal a need for additional training, experience, or automated decision aids. Phasic dilation is sensitive to characteristics of biopsies that tend to elicit higher difficulty ratings and could indicate a need for a second opinion.
Affiliation(s)
- Trafton Drew
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Catherine E. Konold
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Mark Lavelle
- University of New Mexico, Department of Psychology, Albuquerque, New Mexico, United States
- Tad T. Brunyé
- Tufts University, Center for Applied Brain and Cognitive Sciences, Medford, Massachusetts, United States
- Kathleen F. Kerr
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Hannah Shucard
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Donald L. Weaver
- University of Vermont, Department of Pathology & Laboratory Medicine, Burlington, Vermont, United States
- Joann G. Elmore
- David Geffen School of Medicine UCLA, Department of Medicine, Los Angeles, California, United States

30
Analysis of gaze patterns during facade inspection to understand inspector sense-making processes. Sci Rep 2023; 13:2929. [PMID: 36804607] [PMCID: PMC9941087] [DOI: 10.1038/s41598-023-29950-w]
Abstract
This work seeks to capture how an expert interacts with a structure during a facade inspection so that more detailed and situationally aware inspections can be performed by autonomous robots in the future. Eye tracking maps where an inspector is looking during a structural inspection and thereby captures implicit human attention. Experiments were conducted on a facade during a damage assessment of a real structure to analyze the key visually based features that are important for understanding human-infrastructure interaction and to assess changes in the inspector's behavior during the process. The resulting eye-tracking features provided the basis for predicting the inspector's intent and were used to understand how humans interact with the structure during the inspection process. This method will facilitate information sharing and decision-making during inspections by collaborative human-robot teams, enabling unmanned aerial vehicles (UAVs) to support future building inspection with artificial intelligence.
31
Hsieh SS, Cook DA, Inoue A, Gong H, Sudhir Pillai P, Johnson MP, Leng S, Yu L, Fidler JL, Holmes DR, Carter RE, McCollough CH, Fletcher JG. Understanding Reader Variability: A 25-Radiologist Study on Liver Metastasis Detection at CT. Radiology 2023; 306:e220266. [PMID: 36194112] [PMCID: PMC9870852] [DOI: 10.1148/radiol.220266]
Abstract
Background Substantial interreader variability exists for common tasks in CT imaging, such as detection of hepatic metastases. This variability can undermine patient care by leading to misdiagnosis. Purpose To determine the impact of interreader variability associated with (a) reader experience, (b) image navigation patterns (eg, eye movements, workstation interactions), and (c) eye gaze time at missed liver metastases on contrast-enhanced abdominal CT images. Materials and Methods In a single-center prospective observational trial at an academic institution between December 2020 and February 2021, readers were recruited to examine 40 contrast-enhanced abdominal CT studies (eight normal, 32 containing 91 liver metastases). Readers circumscribed hepatic metastases and reported confidence. The workstation tracked image navigation and eye movements. Performance was quantified by using the area under the jackknife alternative free-response receiver operating characteristic (JAFROC-1) curve and per-metastasis sensitivity and was associated with reader experience and image navigation variables. Differences in area under the JAFROC curve were assessed with the Kruskal-Wallis test followed by the Dunn test, and effects of image navigation were assessed by using the Wilcoxon signed-rank test. Results Twenty-five readers (median age, 38 years; IQR, 31-45 years; 19 men) were recruited and included nine subspecialized abdominal radiologists, five nonabdominal staff radiologists, and 11 senior residents or fellows. Reader experience explained differences in area under the JAFROC curve, with abdominal radiologists demonstrating greater area under the JAFROC curve (mean, 0.77; 95% CI: 0.75, 0.79) than trainees (mean, 0.71; 95% CI: 0.69, 0.73) (P = .02) or nonabdominal subspecialists (mean, 0.69; 95% CI: 0.60, 0.78) (P = .03). Sensitivity was similar within the reader experience groups (P = .96).
Image navigation variables that were associated with higher sensitivity included longer interpretation time (P = .003) and greater use of coronal images (P < .001). The eye gaze time was at least 0.5 and 2.0 seconds for 71% (266 of 377) and 40% (149 of 377) of missed metastases, respectively. Conclusion Abdominal radiologists demonstrated better discrimination for the detection of liver metastases on abdominal contrast-enhanced CT images. Missed metastases frequently received at least a brief eye gaze. Higher sensitivity was associated with longer interpretation time and greater use of liver display windows and coronal images. © RSNA, 2022. Online supplemental material is available for this article.
Affiliation(s)
- Scott S. Hsieh, David A. Cook, Akitoshi Inoue, Hao Gong, Parvathy Sudhir Pillai, Matthew P. Johnson, Shuai Leng, Lifeng Yu, Jeff L. Fidler, David R. Holmes, Rickey E. Carter, Cynthia H. McCollough, Joel G. Fletcher
- From the Departments of Radiology (S.S.H., A.I., H.G., P.S.P., S.L., L.Y., J.L.F., C.H.M., J.G.F.), General Internal Medicine (D.A.C.), Quantitative Health Services–Clinical Trials and Biostatistics (M.P.J.), and Physiology and Biomedical Engineering (D.R.H.), Mayo Clinic Rochester, 200 First St SW, Rochester, MN 55905; and Department of Quantitative Health Services–Clinical Trials and Biostatistics, Mayo Clinic, Jacksonville, Fla (R.E.C.)

32
Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. [PMID: 36123548] [PMCID: PMC11062149] [DOI: 10.1007/s00464-022-09582-3]
Abstract
INTRODUCTION In laparoscopic surgery, looking at target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills. METHODS Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while their gaze on the monitor and tool activities inside the trainer box were recorded. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics was used to group the trials into clusters corresponding to proficiency levels, and ANOVAs were conducted to test differences between proficiency levels. A Random Forest model was trained to study metric importance in predicting proficiency levels. RESULTS Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences between completion times (F(2,488) = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors. CONCLUSION Scene-dependent gaze metrics distinguished trainee skill levels at a finer grain than the expert-novice contrast suggested in the literature. Further, feedforward gaze metrics appeared to be more important than feedback metrics in predicting proficiency.
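The "fixation algorithms" mentioned in the abstract are not specified by the authors; a common choice for turning raw gaze samples into fixations is a dispersion-threshold (I-DT) detector. A minimal illustrative sketch (not the study's implementation; the threshold values are hypothetical):

```python
def detect_fixations(samples, max_disp=25.0, min_dur=0.1):
    """Dispersion-threshold fixation detection.

    samples: list of (t, x, y) gaze samples, sorted by time t (seconds),
             with x, y in screen pixels.
    Returns a list of (start_time, duration, centroid_x, centroid_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while dispersion (x-range + y-range) stays small.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_dur and j > i:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], duration,
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # continue after the fixation window
        else:
            i += 1  # no fixation here; slide forward one sample
    return fixations
```

Scene-dependent metrics such as fixation counts or durations inside an AOI can then be derived by testing each fixation centroid against AOI bounds.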
Affiliation(s)
- Chaitanya S Kulkarni
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Shiyu Deng
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Tianzi Wang
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
- Laura E Barnes
- Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
- Shawn D Safford
- Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
- Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA

33
Laubrock J, Krutz A, Nübel J, Spethmann S. Gaze patterns reflect and predict expertise in dynamic echocardiographic imaging. J Med Imaging (Bellingham) 2023; 10:S11906. [PMID: 36968293] [PMCID: PMC10031643] [DOI: 10.1117/1.jmi.10.s1.s11906]
Abstract
Purpose Echocardiography is the most important modality in cardiac imaging. Rapid, valid visual assessment is a critical skill for image interpretation. However, it is unclear how skilled viewers assess echocardiographic images; therefore, guidance and implicit advice are needed for learners to achieve valid image interpretation. Approach Using a signal detection approach, we compared 15 certified experts with 15 medical students in their diagnostic decision-making and viewing behavior. To quantify attention allocation, we recorded eye movements while participants viewed dynamic echocardiographic imaging loops of patients with reduced ejection fraction and healthy controls. Participants evaluated left ventricular ejection fraction and image quality (as diagnostic and visual control tasks, respectively). Results Experts were much better at discriminating between patients and healthy controls (d′ of 2.58 versus 0.98 for novices). Eye tracking revealed that experts fixated diagnostically relevant areas earlier and more often, whereas novices were distracted by visually salient, task-irrelevant stimuli. We show that expertise status can be almost perfectly classified either from judgments or purely from eye movements, and that an expertise score derived from viewing behavior predicts diagnostic quality. Conclusions Judgments and eye tracking revealed significant differences between echocardiography experts and novices that can be used to derive numerical expertise scores. Experts have implicitly learned to ignore the salient motion cue presented by the mitral valve and to focus on the diagnostically more relevant left ventricle. These findings have implications for echocardiography training, objective characterization of echocardiographic expertise, and the design of user-friendly interfaces for echocardiography.
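The sensitivity index d′ reported above comes from signal detection theory: d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch with hypothetical rates (the study's underlying hit and false-alarm rates are not given in the abstract):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer with 90% hits and 10% false alarms:
print(round(d_prime(0.9, 0.1), 2))  # -> 2.56
```

An observer at chance (equal hit and false-alarm rates) gets d′ = 0, which is why the experts' 2.58 versus the novices' 0.98 marks a large difference in discrimination.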
Affiliation(s)
- Jochen Laubrock
- University of Potsdam, Cognitive Science, Department of Psychology, Potsdam, Germany
- Alexander Krutz
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Jonathan Nübel
- Heart Centre Brandenburg, Department of Cardiology, Bernau, Germany
- Brandenburg Medical School Theodor Fontane, Faculty of Health Sciences Brandenburg, Neuruppin, Germany
- Sebastian Spethmann
- Deutsches Herzzentrum der Charité, Department of Cardiology, Angiology, and Intensive Care Medicine, Berlin, Germany
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany

34
Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. [PMID: 36635491] [PMCID: PMC9837148] [DOI: 10.1038/s41598-023-27896-7]
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size (the number of items in the visual display). Critically, we observed that participants' efficiency in the classic search task (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
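Search efficiency in the classic paradigm is conventionally summarized as the slope of response time over set size (ms per added item): a shallower slope means more efficient search. A minimal sketch of computing that slope by ordinary least squares, with hypothetical data:

```python
def search_slope(set_sizes, rts):
    """Ordinary least-squares slope of response time (ms) on set size (items)."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

# Hypothetical participant whose RT grows 25 ms per added item:
print(search_slope([4, 8, 16, 32], [500, 600, 800, 1200]))  # -> 25.0
```

Computing this per participant in both tasks is one way the classic-to-naturalistic efficiency relationship described above could be quantified.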
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA

35
Sugimoto M, Tomita A, Oyamada M, Sato M. Eye-Tracking-Based Analysis of Situational Awareness of Nurses. Healthcare (Basel) 2022; 10:2131. [PMID: 36360472] [PMCID: PMC9690882] [DOI: 10.3390/healthcare10112131]
Abstract
BACKGROUND Nurses are responsible for comprehensively identifying patient conditions and the associated environment. We hypothesized that the gaze trajectories of nurses differ with experience, even in the same situation. METHODS An eye-tracking device monitored the gaze trajectories of nurses with various levels of experience, as well as nursing students, during an intravenous injection task on a human patient simulator. RESULTS Areas of interest (AOIs) were identified in the recorded videos, and gaze durations on the AOIs showed different patterns between experienced nurses and nursing students. A state transition diagram visualized the recognition errors of the students and the repeated confirmation of the patient simulator's vital signs. Clustering analysis of gaze durations also indicated similarity among participants with similar experience. CONCLUSIONS As expected, gaze trajectories differed among participants. The developed gaze transition diagram visualized these differences and helped in interpreting situational awareness based on visual perception. The demonstrated method can help in establishing effective nursing education, particularly for learning skills that are difficult to verbalize.
Affiliation(s)
- Masahiro Sugimoto
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Institute for Advanced Biosciences, Keio University, Tsuruoka 997-0052, Japan
- Atsumi Tomita
- Institute of Medical Sciences, Tokyo Medical University, Shinjuku, Tokyo 160-0022, Japan
- Michiko Oyamada
- Department of Nursing, Nihon Institute of Medical Science, Moroyama 350-0435, Japan
- Mitsue Sato
- Department of Nursing, Kiryu University, Midori 379-2392, Japan
36
Wang S, Ouyang X, Liu T, Wang Q, Shen D. Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis. IEEE Transactions on Medical Imaging 2022; 41:1688-1698. [PMID: 35085074] [DOI: 10.1109/tmi.2022.3146973]
Abstract
When deep neural networks (DNNs) were first introduced to the medical image analysis community, researchers were impressed by their performance. However, it is now evident that a large amount of manually labeled data is often required to train a properly functioning DNN. This demand for supervision data and labels is a major bottleneck in current medical image analysis, since collecting a large number of annotations from experienced experts is time-consuming and expensive. In this paper, we demonstrate that the eye movements of radiologists reading medical images can serve as a new form of supervision for training DNN-based computer-aided diagnosis (CAD) systems. Specifically, we record the tracks of the radiologists' gaze while they read images. The gaze information is processed and then used to supervise the DNN's attention via an Attention Consistency module. To the best of our knowledge, this pipeline is among the earliest efforts to leverage expert eye movement for deep-learning-based CAD. We conducted extensive experiments on knee X-ray images for osteoarthritis assessment. The results show that our method achieves considerable improvement in diagnostic performance with the help of gaze supervision.
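The core idea of gaze supervision can be sketched as a consistency penalty between a network's spatial attention map and a gaze-derived heatmap. This is a minimal illustration, not the authors' Attention Consistency module: the array shapes and the MSE penalty are assumptions.

```python
import numpy as np

def attention_consistency_loss(attention_map, gaze_heatmap, eps=1e-8):
    """MSE between an attention map and a gaze heatmap, each normalized
    to sum to 1 so only their spatial distributions are compared."""
    a = attention_map / (attention_map.sum() + eps)
    g = gaze_heatmap / (gaze_heatmap.sum() + eps)
    return float(np.mean((a - g) ** 2))

# Toy 4x4 maps: attention on the upper-left, gaze on the lower-right
att = np.zeros((4, 4)); att[0, 0] = 1.0
gaze = np.zeros((4, 4)); gaze[3, 3] = 1.0
mismatch = attention_consistency_loss(att, gaze)  # large: maps disagree
aligned = attention_consistency_loss(att, att)    # near zero: maps agree
```

In training, such a term would be added to the usual classification loss so that the network is rewarded for attending where the expert looked.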
37
Wolfe JM, Lyu W, Dong J, Wu CC. What eye tracking can tell us about how radiologists use automated breast ultrasound. J Med Imaging (Bellingham) 2022; 9:045502. [PMID: 35911209] [PMCID: PMC9315059] [DOI: 10.1117/1.jmi.9.4.045502]
Abstract
Purpose: Automated breast ultrasound (ABUS) presents three-dimensional (3D) representations of the breast in the form of stacks of coronal and transverse plane images. ABUS is especially useful for the assessment of dense breasts. Here, we present the first eye tracking data showing how radiologists search and evaluate ABUS cases. Approach: Twelve readers evaluated single-breast cases in 20-min sessions. Positive findings were present in 56% of the evaluated cases. Eye position and the currently visible coronal and transverse slice were tracked, allowing for reconstruction of 3D "scanpaths." Results: Individual readers had consistent search strategies. Most readers had strategies that involved examination of all available images. Overall accuracy was 0.74 (sensitivity = 0.66 and specificity = 0.84). The 20 false negative errors across all readers can be classified using Kundel's (1978) taxonomy: 17 are "decision" errors (readers found the target but misclassified it as normal or benign). There was one recognition error and two "search" errors. This is an unusually high proportion of decision errors. Readers spent essentially the same proportion of time viewing coronal and transverse images, regardless of whether the case was positive or negative, correct or incorrect. Readers tended to use a "scanner" strategy when viewing coronal images and a "driller" strategy when viewing transverse images. Conclusions: These results suggest that ABUS errors are more likely to be errors of interpretation than of search. Further research could determine if readers' exploration of all images is useful or if, in some negative cases, search of transverse images is redundant following a search of coronal images.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Boston, Massachusetts, United States
- Wanyi Lyu
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Jeffrey Dong
- Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States
- Chia-Chien Wu
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Boston, Massachusetts, United States
38
Ahmadi N, Romoser M, Salmon C. Improving the tactical scanning of student pilots: A gaze-based training intervention for transition from visual flight into instrument meteorological conditions. Applied Ergonomics 2022; 100:103642. [PMID: 34871832] [DOI: 10.1016/j.apergo.2021.103642]
Abstract
Eye tracking has been applied to train novice drivers and clinicians; however, such applications in aviation are limited. This study develops a gaze-based intervention using video-based expert commentary and 3M (Mistake, Mitigation, Mastery) training to teach visual-flight-rules student pilots an instrument cross-check that mitigates the risk of losing aircraft control when they inadvertently enter instrument meteorological conditions (IMC). Twenty general aviation student pilots were randomized into control and experimental groups. Dwell time, return time, entropy, Kullback-Leibler divergence, and deviations from flight paths were compared before and after training in straight-and-level flight (LF) and standard left level turn (LT) scenarios. After the training, the experimental pilots significantly increased dwell time on primary instruments (PIs), reduced randomness in their visual search, and fixated on the PIs sooner (in the LT scenario). In terms of piloting, all experimental pilots successfully maintained aircraft control, while five control pilots lost control in IMC; significant between-group differences in altitude and rate-of-climb deviations were observed (in the LF scenario).
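The entropy and Kullback-Leibler metrics mentioned above can be computed from dwell-time proportions over instruments. The instrument distributions below are hypothetical, chosen only to show the direction of the effect (a focused expert scan has lower entropy than a uniform, unstructured one):

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a dwell-time distribution; lower values
    indicate a less random, more structured scan."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def kl_divergence(p, q):
    """KL divergence (bits) of scan distribution p from reference q."""
    return sum(x * math.log2(x / y) for x, y in zip(p, q) if x > 0 and y > 0)

# Hypothetical dwell proportions over four instruments
focused = [0.55, 0.25, 0.15, 0.05]   # weighted toward primary instruments
uniform = [0.25, 0.25, 0.25, 0.25]   # unstructured scan
print(entropy(uniform))  # 2.0 bits, the maximum for four instruments
```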
Affiliation(s)
- Nima Ahmadi
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.
- Matthew Romoser
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.
- Christian Salmon
- Western New England University, Department of Industrial Engineering and Engineering Management, Springfield, MA, 01119-2684, USA.
39
Assessment of Aircraft Engine Blade Inspection Performance Using Attribute Agreement Analysis. Safety 2022. [DOI: 10.3390/safety8020023]
Abstract
Background—Visual inspection is an important element of aircraft engine maintenance to assure flight safety. Predominantly performed by human operators, these maintenance activities are prone to human error. While false negatives imply a risk to aviation safety, false positives can lead to increased maintenance cost. The aim of the present study was to evaluate human performance in the visual inspection of aero engine blades, specifically the operators’ consistency, accuracy, and reproducibility, as well as the system reliability. Methods—Photographs of 26 blades were presented to 50 industry practitioners of three skill levels to assess their performance. Each image was shown to each operator twice in random order, leading to N = 2600 observations. The data were statistically analysed using Attribute Agreement Analysis (AAA) and Kappa analysis. Results—The results show that operators were on average 82.5% consistent in their serviceability decisions, while achieving an inspection accuracy of 67.7%. The operators’ reproducibility was 15.4%, as was the agreement of all operators with the ground truth. Subsequently, the false-positive and false-negative rates were analysed separately from the overall inspection accuracy, showing that 20 operators (40%) achieved acceptable performance, thus meeting the required standard. Conclusions—In aviation maintenance the false-negative rate of <5% as per Aerospace Standard AS13100 is arguably the single most important metric, since it determines the safety outcomes. The results of this study show acceptable false-negative performance in 60% of appraisers. It is therefore desirable to seek ways to improve performance; some suggestions are given in this regard.
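The Kappa analysis used above corrects raw agreement for agreement expected by chance. A minimal Cohen's kappa for two binary decision sequences (e.g., an operator's first and second pass over the same blades); the decision data here are hypothetical:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length binary decision sequences."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each sequence's marginal "serviceable" rate
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical repeat decisions (1 = serviceable) on ten blades
first  = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
second = [1, 0, 0, 0, 1, 0, 1, 1, 1, 1]
kappa = cohens_kappa(first, second)
```

Here raw agreement is 0.8, but kappa is lower because some of that agreement is expected by chance alone.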
40
Wagner M, den Boer MC, Jansen S, Groepel P, Visser R, Witlox RSGM, Bekker V, Lopriore E, Berger A, te Pas AB. Video-based reflection on neonatal interventions during COVID-19 using eye-tracking glasses: an observational study. Arch Dis Child Fetal Neonatal Ed 2022; 107:156-160. [PMID: 34413092] [PMCID: PMC8384497] [DOI: 10.1136/archdischild-2021-321806]
Abstract
OBJECTIVE The aim of this study was to determine the experience with, and the feasibility of, point-of-view video recordings using eye-tracking glasses for training and reviewing neonatal interventions during the COVID-19 pandemic. DESIGN Observational prospective single-centre study. SETTING Neonatal intensive care unit at the Leiden University Medical Center. PARTICIPANTS All local neonatal healthcare providers. INTERVENTION There were two groups of participants: proceduralists, who wore eye-tracking glasses during procedures, and observers who later watched the procedures as part of a video-based reflection. MAIN OUTCOME MEASURES The primary outcome was the feasibility of, and the proceduralists and observers' experience with, the point-of-view eye-tracking videos as an additional tool for bedside teaching and video-based reflection. RESULTS We conducted 12 point-of-view recordings on 10 different patients (median gestational age of 30.9±3.5 weeks and weight of 1764 g) undergoing neonatal intubation (n=5), minimally invasive surfactant therapy (n=5) and umbilical line insertion (n=2). We conducted nine video-based observations with a total of 88 observers. The use of point-of-view recordings was perceived as feasible. Observers further reported the point-of-view recordings to be an educational benefit for them and a potentially instructional tool during COVID-19. CONCLUSION We proved the practicability of eye-tracking glasses for point-of-view recordings of neonatal procedures and videos for observation, educational sessions and logistics considerations, especially with the COVID-19 pandemic distancing measures reducing bedside teaching opportunities.
Affiliation(s)
- Michael Wagner
- Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Maria C den Boer
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Sophie Jansen
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Peter Groepel
- Department of Applied Psychology, University of Vienna, Vienna, Austria
- Remco Visser
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Ruben S G M Witlox
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Vincent Bekker
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Enrico Lopriore
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
- Angelika Berger
- Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Arjan B te Pas
- Department of Pediatrics, Leiden University Medical Center, Leiden, The Netherlands
41
Evaluation of Influence Factors on the Visual Inspection Performance of Aircraft Engine Blades. Aerospace 2021. [DOI: 10.3390/aerospace9010018]
Abstract
Background—There are various influence factors that affect the visual inspection of aircraft engine blades, including the type of inspection, defect type, severity level, blade perspective and background colour. The effect of these factors on inspection performance was assessed. Method—The inspection accuracy of fifty industry practitioners was measured for 137 blade images, leading to N = 6850 observations. The data were statistically analysed to identify the significant factors. Subsequent evaluation of the eye tracking data provided additional insights into the inspection process. Results—Inspection accuracies in borescope inspections were significantly lower than in piece-part inspection, at 63.8% and 82.6%, respectively. Airfoil dents (19.0%), cracks (11.0%), and blockage (8.0%) were the most difficult defects to detect, while nicks (100.0%), tears (95.5%), and tip curls (89.0%) had the highest detection rates. The classification accuracy was lowest for airfoil dents (5.3%), burns (38.4%), and tears (44.9%), while coating loss (98.1%), nicks (90.0%), and blockage (87.5%) were most accurately classified. Defects of severity level S1 (72.0%) were more difficult to detect than those of increased severity levels S2 (92.8%) and S3 (99.0%). Moreover, visual perspectives perpendicular to the airfoil led to better inspection rates (up to 87.5%) than edge perspectives (51.0% to 66.5%). Background colour was not a significant factor. The eye tracking results of novices showed an unstructured search path characterised by numerous fixations, leading to longer inspection times. Experts, in contrast, applied a systematic search strategy with a focus on the edges, and showed better defect discrimination ability. This observation was consistent across all stimuli and thus independent of the influence factors. Conclusions—Eye tracking identified the challenges of the inspection process and the errors made. A revised inspection framework was proposed based on the insights gained, supporting the idea of an underlying mental model.
42
Gong H, Hsieh SS, Holmes D, Cook D, Inoue A, Bartlett D, Baffour F, Takahashi H, Leng S, Yu L, McCollough CH, Fletcher JG. An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation. Med Phys 2021; 48:6710-6723. [PMID: 34534365] [PMCID: PMC8595866] [DOI: 10.1002/mp.15219]
Abstract
PURPOSE Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computer tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data using this platform for different eye-tracking data acquisition modes. METHODS An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chin rest), free movement with general biofeedback, and free movement with strict biofeedback. 
Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target prior to the integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation using units of image pixels and the degree of visual angle. RESULTS The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± the standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3 viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while the use of general biofeedback demonstrated a slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. 
These corresponded to visual angles ranging from 0.7° to 1.3°. CONCLUSIONS An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to head-stabilized mode.
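The conversion between a fixation offset in image pixels and degrees of visual angle, as reported above, follows from display geometry. The pixel pitch and viewing distance below are hypothetical, chosen only to illustrate the calculation, not the study's setup:

```python
import math

def offset_to_visual_angle_deg(offset_px, pixel_pitch_mm, distance_mm):
    """Visual angle (degrees) subtended by an on-screen offset of
    offset_px pixels, viewed from distance_mm millimetres."""
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(2 * math.atan(offset_mm / (2 * distance_mm)))

# Hypothetical setup: 0.25 mm pixel pitch, 650 mm viewing distance
angle = offset_to_visual_angle_deg(18, 0.25, 650)  # for an 18-pixel offset
```

The same pixel offset corresponds to a larger visual angle at shorter viewing distances, which is why accuracy is reported in both units.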
Affiliation(s)
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Scott S. Hsieh
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Holmes
- Department of Physiology & Biomedical Engineering, Mayo Clinic, Rochester, MN 55901
- David Cook
- Department of Internal Medicine, Mayo Clinic, Rochester, MN 55901
- Akitoshi Inoue
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Bartlett
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
43
Drew T, Lavelle M, Kerr KF, Shucard H, Brunyé TT, Weaver DL, Elmore JG. More scanning, but not zooming, is associated with diagnostic accuracy in evaluating digital breast pathology slides. J Vis 2021; 21:7. [PMID: 34636845] [PMCID: PMC8525842] [DOI: 10.1167/jov.21.11.7]
Abstract
Diagnoses of medical images can invite strikingly diverse strategies for image navigation and visual search. In computed tomography screening for lung nodules, distinct strategies, termed scanning and drilling, relate to both radiologists' clinical experience and accuracy in lesion detection. Here, we examined associations between search patterns and accuracy for pathologists (N = 92) interpreting a diverse set of breast biopsy images. While changes in depth in volumetric images reveal new structures through movement in the z-plane, in digital pathology changes in depth are associated with increased magnification. Thus, "drilling" in radiology may be more appropriately termed "zooming" in pathology. We monitored eye movements and navigation through digital pathology slides to derive metrics of how quickly the pathologists moved through XY (scanning) and Z (zooming) space. Prior research on eye movements in depth has categorized clinicians as either "scanners" or "drillers." In contrast, we found that clinicians did not fall reliably into scanning or zooming categories while examining digital pathology slides. Thus, in the current work we treated scanning and zooming as continuous predictors rather than categorizing each clinician as either a "scanner" or a "zoomer." In contrast to prior work in volumetric chest images, we found significant associations between accuracy and scanning rate but not zooming rate. These findings suggest fundamental differences in the relative value of information types and review behaviors across the two image formats. Our data suggest that pathologists gather critical information by scanning within a given plane of depth, whereas radiologists drill through depth to interrogate critical features.
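Scanning and zooming rates of the kind used as continuous predictors above can be derived from a slide viewer's navigation log as distance travelled in XY and in magnification, each per unit time. The log format here is a hypothetical illustration, not the study's recording format:

```python
import math

def scan_and_zoom_rates(events):
    """events: list of (t_seconds, x, y, magnification) viewport samples.
    Returns (XY distance per second, magnification change per second)."""
    xy_dist = mag_dist = 0.0
    for (t0, x0, y0, m0), (t1, x1, y1, m1) in zip(events, events[1:]):
        xy_dist += math.hypot(x1 - x0, y1 - y0)   # scanning in the plane
        mag_dist += abs(m1 - m0)                   # zooming through "depth"
    duration = events[-1][0] - events[0][0]
    return xy_dist / duration, mag_dist / duration

# Hypothetical viewport log: panning, one zoom step, then more panning
log = [(0, 0, 0, 1), (1, 30, 40, 1), (2, 30, 40, 4), (4, 90, 120, 4)]
scan_rate, zoom_rate = scan_and_zoom_rates(log)
```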
Affiliation(s)
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T Brunyé
- Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver
- Department of Pathology & Laboratory Medicine, University of Vermont, Burlington, VT, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
44
Kowalski B, Huang X, Steven S, Dubra A. Hybrid FPGA-CPU pupil tracker. Biomedical Optics Express 2021; 12:6496-6513. [PMID: 34745752] [PMCID: PMC8548015] [DOI: 10.1364/boe.433766]
Abstract
An off-axis monocular pupil tracker designed for eventual integration in ophthalmoscopes for eye movement stabilization is described and demonstrated. The instrument consists of light-emitting diodes, a camera, a field-programmable gate array (FPGA) and a central processing unit (CPU). The raw camera image undergoes background subtraction, field-flattening, 1-dimensional low-pass filtering, thresholding and robust pupil edge detection on an FPGA pixel stream, followed by least-squares fitting of the pupil edge pixel coordinates to an ellipse in the CPU. Experimental data suggest that the proposed algorithms require raw images with a minimum of ∼32 gray levels to achieve sub-pixel pupil center accuracy. Tests with two different cameras operating at 575, 1250 and 5400 frames per second trained on a model pupil achieved 0.5-1.5 μm pupil center estimation precision with 0.6-2.1 ms combined image download, FPGA and CPU processing latency. Pupil tracking data from a fixating human subject show that the tracker operation only requires the adjustment of a single parameter, namely an image intensity threshold. The latency of the proposed pupil tracker is limited by camera download time (latency) and sensitivity (precision).
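The final CPU step described above, least-squares fitting of pupil-edge pixel coordinates to an ellipse, can be sketched with an algebraic conic fit on synthetic edge points. This is an illustration of the general technique, not the authors' implementation:

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Least-squares algebraic conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1,
    returning the fitted ellipse center."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    a, b, c, d, e = np.linalg.lstsq(D, np.ones(len(xs)), rcond=None)[0]
    # The center is where the conic's gradient vanishes
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy

# Synthetic "pupil edge" points on an ellipse centered at (3, -2)
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
xs = 3 + 5 * np.cos(t)
ys = -2 + 2 * np.sin(t)
cx, cy = fit_ellipse_center(xs, ys)
```

In a real tracker the input points would come from the edge-detection stage, and robustness to outlier edge pixels (e.g., eyelashes) would matter as much as the fit itself.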
Affiliation(s)
- Xiaojing Huang
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Institute of Optics, University of Rochester, Rochester, NY 14620, USA
- Samuel Steven
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Institute of Optics, University of Rochester, Rochester, NY 14620, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
45
Chattopadhyay AK, Chattopadhyay S. VIRDOCD: A VIRtual DOCtor to predict dengue fatality. Expert Systems 2021. [DOI: 10.1111/exsy.12796]
46
Aust J, Mitrovic A, Pons D. Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study. Sensors (Basel) 2021; 21:6135. [PMID: 34577343] [PMCID: PMC8473167] [DOI: 10.3390/s21186135]
Abstract
Background-The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor 'cleanliness' was analysed among other factors. Method-Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of different levels of expertise for various part conditions. Results-The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced the inspection accuracy, while expertise was surprisingly not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits than other groups. Conclusions-Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
Affiliation(s)
- Jonas Aust
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Antonija Mitrovic
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Dirk Pons
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
47
The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021; 11:1039. [PMID: 34439658] [PMCID: PMC8393330] [DOI: 10.3390/brainsci11081039]
Abstract
Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants that were instructed to simulate cognitive impairment performed less accurately than honest controls in the MPMT (n = 67). Importantly, the MPMT has shown an adequate discrimination capacity, though somewhat lower than an established PVT (i.e., Test of Memory Malingering-TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may, therefore, increase the range of available PVTs at the disposal of clinicians, though further validation in clinical settings is mandated. The fact that it is an open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
48
Kołodziej P, Tuszyńska-Bogucka W, Dzieńkowski M, Bogucki J, Kocki J, Milosz M, Kocki M, Reszka P, Kocki W, Bogucka-Kocka A. Eye Tracking-An Innovative Tool in Medical Parasitology. J Clin Med 2021; 10:2989. [PMID: 34279473] [PMCID: PMC8268455] [DOI: 10.3390/jcm10132989]
Abstract
The innovative Eye Movement Modelling Examples (EMMEs) method can be used in medicine as an educational training tool for the assessment and verification of students and professionals. Our work analysed the possibility of using eye tracking tools to verify the skills and training of people engaged in laboratory medicine, using parasitological diagnostics as an example. Professionally active laboratory diagnosticians working in a multi-profile (non-parasitological) laboratory (n = 16), laboratory diagnosticians no longer working in the profession (n = 10), and medical analyst students (n = 56) participated in the study. Participants analysed microscopic images of parasitological preparations made with the cellSens Dimension Software (Olympus) system. Eye activity parameters were obtained using a stationary, video-based Tobii TX300 eye tracker with a 3-ms temporal resolution, and eye movement activity parameters were analysed along with time parameters. Our results show that the eye tracking method is a valuable tool for the analysis of parasitological preparations. Detailed quantitative and qualitative analysis confirmed that the EMMEs method may facilitate learning of the correct microscopic image scanning path. These results allow us to conclude that the EMMEs method may be a valuable tool for preparing teaching materials in virtual microscopy. Such teaching materials, generated with the use of eye tracking by experienced professionals in the field of laboratory medicine, can be used in training, simulations and courses in medical parasitology, and can contribute to the verification of education results and professional skills and to the elimination of errors in parasitological diagnostics.
Affiliation(s)
- Przemysław Kołodziej - Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland. Correspondence: ; Tel.: +48-814-487-234
- Mariusz Dzieńkowski - Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Jacek Bogucki - Department of Organic Chemistry, Medical University of Lublin, 20-093 Lublin, Poland
- Janusz Kocki - Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Marek Milosz - Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Marcin Kocki - Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Patrycja Reszka - Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Wojciech Kocki - Department of Architecture and Urban Planning, Lublin University of Technology, 20-618 Lublin, Poland
- Anna Bogucka-Kocka - Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland
49
Brunyé TT, Drew T, Saikia MJ, Kerr KF, Eguchi MM, Lee AC, May C, Elder DE, Elmore JG. Melanoma in the Blink of an Eye: Pathologists' Rapid Detection, Classification, and Localization of Skin Abnormalities. Visual Cognition 2021; 29:386-400. [PMID: 35197796] [PMCID: PMC8863358] [DOI: 10.1080/13506285.2021.1943093] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 02/20/2021] [Accepted: 06/09/2021] [Indexed: 10/21/2022]
Abstract
Expert radiologists can quickly extract a basic "gist" understanding of a medical image from an exposure of less than one second, supporting above-chance diagnostic classification. Most of this work has focused on radiology tasks (such as screening mammography), and it is currently unclear whether this pattern of results, and the nature of the visual expertise underlying it, extends to pathology, another medical imaging domain demanding visual diagnostic interpretation. To further characterize the detection, localization, and diagnosis of medical images, this study examined eye movements and diagnostic decision-making when pathologists were briefly exposed to digital whole-slide images of melanocytic skin biopsies. Twelve pathologists with experience interpreting dermatopathology - residents (N = 5), fellows (N = 5), and attendings (N = 2) - viewed 48 cases for 500 ms each, and we tracked their eye movements towards histological abnormalities, their ability to classify images as containing or not containing invasive melanoma, and their ability to localize critical image regions. Results demonstrated rapid shifts of the eyes towards critical abnormalities during image viewing, high diagnostic sensitivity and specificity, and a surprisingly accurate ability to localize critical diagnostic image regions. Furthermore, when pathologists fixated critical regions, they were subsequently much more likely to localize those regions successfully on an outline of the image. Results are discussed relative to models of medical image interpretation and to innovative methods for monitoring and assessing expertise development during medical education and training.
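The diagnostic sensitivity and specificity reported above are derived from a 2×2 table of classification decisions against ground truth. A minimal sketch of the calculation (the counts below are hypothetical illustrations, not the study's data):

```python
# Sensitivity and specificity from binary classification counts.
# tp: melanoma cases correctly flagged; fn: melanoma cases missed;
# tn: non-melanoma cases correctly cleared; fp: non-melanoma cases flagged.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Hypothetical counts for a 48-case set (not the study's data):
tp, fn, tn, fp = 20, 4, 21, 3
print(round(sensitivity(tp, fn), 3))  # 0.833
print(round(specificity(tn, fp), 3))  # 0.875
```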
Affiliation(s)
- Tad T. Brunyé - Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew - Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Manob Jyoti Saikia - Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Kathleen F. Kerr - Department of Biostatistics, University of Washington, Seattle, WA, USA
- Megan M. Eguchi - Department of Biostatistics, University of Washington, Seattle, WA, USA
- Annie C. Lee - Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
- Caitlin May - Dermatopathology Northwest, Bellevue, WA, USA
- David E. Elder - Division of Anatomic Pathology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Joann G. Elmore - Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
50
Yeh PH, Liu CH, Sun MH, Chi SC, Hwang YS. To measure the amount of ocular deviation in strabismus patients with an eye-tracking virtual reality headset. BMC Ophthalmol 2021; 21:246. [PMID: 34088299] [PMCID: PMC8178882] [DOI: 10.1186/s12886-021-02016-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 11/20/2020] [Accepted: 05/26/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE To investigate the accuracy of a newly developed, eye-tracking virtual reality (VR)-based ocular deviation measurement system in strabismus patients. METHODS A VR-based ocular deviation measurement system was designed to simulate the alternate prism cover test (APCT). A fixation target alternated between two screens, one in front of each eye, to simulate the steps of a standard prism cover test, and patients' eye movements were recorded by the headset's built-in eye tracking. The angle of ocular deviation was compared between the APCT and the VR-based system. RESULTS This study included 38 patients with strabismus. The angles of ocular deviation measured by the VR-based system and the APCT showed good to excellent correlation (intraclass correlation coefficient, ICC = 0.897; range: 0.810-0.945). The 95% limits of agreement were 11.32 PD. Subgroup analysis revealed a significant difference between esotropia and exotropia (p < 0.001): in the esotropia group, the VR-based system measured greater deviations than the APCT (mean = 4.65 PD), while in the exotropia group it measured smaller deviations (mean = -3.01 PD). The ICC was 0.962 (range: 0.902-0.986) in the esotropia group and 0.862 (range: 0.651-0.950) in the exotropia group, with 95% limits of agreement of 6.62 PD and 11.25 PD, respectively. CONCLUSIONS This study reports the first application of a commercially available, consumer-grade VR device for assessing the angle of ocular deviation in strabismus patients. The device provided measurements with near-excellent correlation with the APCT. The system is also a first step towards digitizing the strabismus examination, with potential applications in telemedicine.
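The 95% limits of agreement cited above follow the standard Bland-Altman calculation: mean difference between the two methods ± 1.96 × SD of the differences. A minimal sketch with hypothetical paired measurements in prism dioptres (not the study's data):

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)   # mean difference (systematic bias)
    sd = statistics.stdev(diffs)    # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical deviations (PD): VR system vs. alternate prism cover test.
vr   = [14.0, 20.0, 8.0, 25.0, 16.0, 30.0]
apct = [12.0, 21.0, 7.0, 22.0, 18.0, 28.0]
lo, hi = limits_of_agreement(vr, apct)
```

The interval (lo, hi) is centred on the bias, so a narrow interval around zero indicates close agreement between the two methods.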
Affiliation(s)
- Po-Han Yeh, Chun-Hsiu Liu, Ming-Hui Sun, Sheng-Chu Chi, Yih-Shiou Hwang - Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan