51. Drew T, Lavelle M, Kerr KF, Shucard H, Brunyé TT, Weaver DL, Elmore JG. More scanning, but not zooming, is associated with diagnostic accuracy in evaluating digital breast pathology slides. J Vis 2021;21:7. PMID: 34636845; PMCID: PMC8525842; DOI: 10.1167/jov.21.11.7.
Abstract
Diagnoses of medical images can invite strikingly diverse strategies for image navigation and visual search. In computed tomography screening for lung nodules, distinct strategies, termed scanning and drilling, relate to both radiologists' clinical experience and accuracy in lesion detection. Here, we examined associations between search patterns and accuracy for pathologists (N = 92) interpreting a diverse set of breast biopsy images. While changes in depth in volumetric images reveal new structures through movement in the z-plane, in digital pathology changes in depth are associated with increased magnification. Thus, "drilling" in radiology may be more appropriately termed "zooming" in pathology. We monitored eye movements and navigation through digital pathology slides to derive metrics of how quickly the pathologists moved through XY (scanning) and Z (zooming) space. Prior research on eye movements in depth has categorized clinicians as either "scanners" or "drillers." In contrast, we found no reliable association between a clinician's tendency to scan and their tendency to zoom while examining digital pathology slides. Thus, in the current work we treated scanning and zooming as continuous predictors rather than categorizing each clinician as either a "scanner" or a "zoomer." In contrast to prior work in volumetric chest images, we found significant associations between accuracy and scanning rate but not zooming rate. These findings suggest fundamental differences in the relative value of information types and review behaviors across the two image formats. Our data suggest that pathologists gather critical information by scanning on a given plane of depth, whereas radiologists drill through depth to interrogate critical features.
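The scanning and zooming metrics described above reduce to rates of movement through XY and Z space per unit time. A minimal sketch of such continuous predictors (function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def scan_and_zoom_rates(x, y, zoom, duration_s):
    """Continuous 'scanning' and 'zooming' predictors: total XY viewport
    movement and total magnification change, each divided by viewing time.
    Illustrative only; the paper derives its metrics from slide-navigation
    logs, and the exact formulation may differ."""
    x, y, zoom = (np.asarray(v, dtype=float) for v in (x, y, zoom))
    scan_rate = np.hypot(np.diff(x), np.diff(y)).sum() / duration_s  # XY distance per second
    zoom_rate = np.abs(np.diff(zoom)).sum() / duration_s             # |magnification change| per second
    return scan_rate, zoom_rate
```

Treating both outputs as continuous predictors, rather than thresholding into "scanner" vs. "zoomer" categories, mirrors the analysis choice reported in the abstract.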
Affiliation(s)
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T Brunyé
- Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver
- Department of Pathology & Laboratory Medicine, University of Vermont, Burlington, VT, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
52. Kowalski B, Huang X, Steven S, Dubra A. Hybrid FPGA-CPU pupil tracker. Biomed Opt Express 2021;12:6496-6513. PMID: 34745752; PMCID: PMC8548015; DOI: 10.1364/boe.433766.
Abstract
An off-axis monocular pupil tracker designed for eventual integration in ophthalmoscopes for eye movement stabilization is described and demonstrated. The instrument consists of light-emitting diodes, a camera, a field-programmable gate array (FPGA) and a central processing unit (CPU). The raw camera image undergoes background subtraction, field-flattening, 1-dimensional low-pass filtering, thresholding and robust pupil edge detection on an FPGA pixel stream, followed by least-squares fitting of the pupil edge pixel coordinates to an ellipse in the CPU. Experimental data suggest that the proposed algorithms require raw images with a minimum of ∼32 gray levels to achieve sub-pixel pupil center accuracy. Tests with two different cameras operating at 575, 1250 and 5400 frames per second trained on a model pupil achieved 0.5-1.5 μm pupil center estimation precision with 0.6-2.1 ms combined image download, FPGA and CPU processing latency. Pupil tracking data from a fixating human subject show that the tracker operation only requires the adjustment of a single parameter, namely an image intensity threshold. The latency of the proposed pupil tracker is limited by camera download time (latency) and sensitivity (precision).
Affiliation(s)
- Xiaojing Huang
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Institute of Optics, University of Rochester, Rochester, NY 14620, USA
- Samuel Steven
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Institute of Optics, University of Rochester, Rochester, NY 14620, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
53. Aust J, Mitrovic A, Pons D. Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study. Sensors (Basel) 2021;21:6135. PMID: 34577343; PMCID: PMC8473167; DOI: 10.3390/s21186135.
Abstract
Background-The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor 'cleanliness' was analysed among other factors. Method-Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of different levels of expertise for various part conditions. Results-The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced the inspection accuracy, while expertise was surprisingly not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits compared to other groups. Conclusions-Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
Affiliation(s)
- Jonas Aust
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Antonija Mitrovic
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Dirk Pons
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
54. The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021;11:1039. PMID: 34439658; PMCID: PMC8393330; DOI: 10.3390/brainsci11081039.
Abstract
Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants who were instructed to simulate cognitive impairment performed less accurately than honest controls in the MPMT (n = 67). Importantly, the MPMT has shown an adequate discrimination capacity, though somewhat lower than an established PVT (i.e., Test of Memory Malingering-TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may, therefore, increase the range of available PVTs at the disposal of clinicians, though further validation in clinical settings is warranted. The fact that it is open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
55. Kołodziej P, Tuszyńska-Bogucka W, Dzieńkowski M, Bogucki J, Kocki J, Milosz M, Kocki M, Reszka P, Kocki W, Bogucka-Kocka A. Eye Tracking - An Innovative Tool in Medical Parasitology. J Clin Med 2021;10:2989. PMID: 34279473; PMCID: PMC8268455; DOI: 10.3390/jcm10132989.
Abstract
The innovative Eye Movement Modelling Examples (EMMEs) method can be used in medicine as an educational training tool for the assessment and verification of students and professionals. Our work analysed the possibility of using eye tracking tools to verify the skills and training of people engaged in laboratory medicine, using parasitological diagnostics as an example. Professionally active laboratory diagnosticians working in a multi-profile (non-parasitological) laboratory (n = 16), laboratory diagnosticians no longer working in this profession (n = 10), and medical analyst students (n = 56) participated in the study. Participants analysed microscopic images of parasitological preparations made with the cellSens Dimension Software (Olympus) system. Eye activity parameters were obtained using a stationary, video-based eye tracker (Tobii TX300) with 3-ms temporal resolution. Eye movement activity parameters were analysed along with time parameters. The results of our studies have shown that the eye tracking method is a valuable tool for the analysis of parasitological preparations. Detailed quantitative and qualitative analysis confirmed that the EMMEs method may facilitate learning of the correct microscopic image scanning path. Our results allow us to conclude that the EMMEs method may be a valuable tool in the preparation of teaching materials in virtual microscopy. These teaching materials generated with the use of eye tracking, prepared by experienced professionals in the field of laboratory medicine, can be used during various training sessions, simulations and courses in medical parasitology and contribute to the verification of education results, professional skills, and elimination of errors in parasitological diagnostics.
Affiliation(s)
- Przemysław Kołodziej
- Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland
- Correspondence: Tel.: +48-814-487-234
- Mariusz Dzieńkowski
- Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Jacek Bogucki
- Department of Organic Chemistry, Medical University of Lublin, 20-093 Lublin, Poland
- Janusz Kocki
- Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Marek Milosz
- Department of Computer Science, Lublin University of Technology, 20-618 Lublin, Poland
- Marcin Kocki
- Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Patrycja Reszka
- Scientific Circle at Department of Clinical Genetics, Medical University of Lublin, 20-080 Lublin, Poland
- Wojciech Kocki
- Department of Architecture and Urban Planning, Lublin University of Technology, 20-618 Lublin, Poland
- Anna Bogucka-Kocka
- Chair and Department of Biology and Genetics, Medical University of Lublin, 20-093 Lublin, Poland
56. Brunyé TT, Drew T, Saikia MJ, Kerr KF, Eguchi MM, Lee AC, May C, Elder DE, Elmore JG. Melanoma in the Blink of an Eye: Pathologists' Rapid Detection, Classification, and Localization of Skin Abnormalities. Visual Cognition 2021;29:386-400. PMID: 35197796; PMCID: PMC8863358; DOI: 10.1080/13506285.2021.1943093.
Abstract
Expert radiologists can quickly extract a basic "gist" understanding of a medical image following less than a second exposure, leading to above-chance diagnostic classification of images. Most of this work has focused on radiology tasks (such as screening mammography), and it is currently unclear whether this pattern of results and the nature of visual expertise underlying this ability are applicable to pathology, another medical imaging domain demanding visual diagnostic interpretation. To further characterize the detection, localization, and diagnosis of medical images, this study examined eye movements and diagnostic decision-making when pathologists were briefly exposed to digital whole slide images of melanocytic skin biopsies. Twelve resident (N = 5), fellow (N = 5), and attending pathologists (N = 2) with experience interpreting dermatopathology briefly viewed 48 cases presented for 500 ms each, and we tracked their eye movements towards histological abnormalities, their ability to classify images as containing or not containing invasive melanoma, and their ability to localize critical image regions. Results demonstrated rapid shifts of the eyes towards critical abnormalities during image viewing, high diagnostic sensitivity and specificity, and a surprisingly accurate ability to localize critical diagnostic image regions. Furthermore, when pathologists fixated critical regions with their eyes, they were subsequently much more likely to successfully localize that region on an outline of the image. Results are discussed relative to models of medical image interpretation and innovative methods for monitoring and assessing expertise development during medical education and training.
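The diagnostic sensitivity and specificity reported for the 500-ms "invasive melanoma present/absent" classifications follow the standard definitions; a generic sketch (not the study's analysis code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN) over cases that truly contain invasive
    melanoma; specificity = TN / (TN + FP) over cases that do not.
    Inputs are parallel sequences of truthy/falsy labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t and p)
    fn = sum(1 for t, p in pairs if t and not p)
    tn = sum(1 for t, p in pairs if not t and not p)
    fp = sum(1 for t, p in pairs if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

Computed per pathologist over the 48 cases, these two numbers summarize the "above-chance" rapid classification performance the abstract describes.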
Affiliation(s)
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Manob Jyoti Saikia
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Kathleen F. Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Megan M. Eguchi
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Annie C. Lee
- Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
- Caitlin May
- Dermatopathology Northwest, Bellevue, WA, USA
- David E. Elder
- Division of Anatomic Pathology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Joann G. Elmore
- Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
57. Yeh PH, Liu CH, Sun MH, Chi SC, Hwang YS. To measure the amount of ocular deviation in strabismus patients with an eye-tracking virtual reality headset. BMC Ophthalmol 2021;21:246. PMID: 34088299; PMCID: PMC8178882; DOI: 10.1186/s12886-021-02016-z.
Abstract
PURPOSE To investigate the accuracy of a newly developed, eye-tracking virtual reality (VR)-based ocular deviation measurement system in strabismus patients. METHODS A VR-based ocular deviation measurement system was designed to simulate the alternative prism cover test (APCT). A fixation target was made to alternate between two screens, one in front of each eye, to simulate the steps of a normal prism cover test. Patients' eye movements were recorded by built-in eye tracking. The angle of ocular deviation was compared between the APCT and the VR-based system. RESULTS This study included 38 patients with strabismus. The angle of ocular deviation measured by the VR-based system and the APCT showed good to excellent correlation (intraclass correlation coefficient, ICC = 0.897; range: 0.810-0.945). The 95% limits of agreement were 11.32 PD. Subgroup analysis revealed a significant difference between esotropia and exotropia (p < 0.001). In the esotropia group, the amount of ocular deviation measured by the VR-based system was greater than that measured by the APCT (mean = 4.65 PD), while in the exotropia group, the amount of ocular deviation measured by the VR-based system was less than that of the APCT (mean = - 3.01 PD). The ICC was 0.962 (range: 0.902-0.986) in the esotropia group and 0.862 (range: 0.651-0.950) in the exotropia group. The 95% limits of agreement were 6.62 PD and 11.25 PD in the esotropia and exotropia groups, respectively. CONCLUSIONS This study reports the first application of a consumer-grade and commercial-grade VR-based device for assessing the angle of ocular deviation in strabismus patients. This device could provide measurements with near-excellent correlation with the APCT. The system also provides the first step to digitize the strabismus examination, as well as the possibility for its application in telemedicine.
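The agreement statistics quoted above combine correlation (ICC) with Bland-Altman 95% limits of agreement. The latter can be sketched with the conventional bias plus/minus 1.96 x SD formula (an illustration of the standard method; the paper's exact statistical procedure is not reproduced here):

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman analysis of paired measurements in prism diopters (PD):
    returns the mean difference (bias) between methods and the conventional
    95% limits of agreement, bias +/- 1.96 * sample SD of the differences."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # sample SD of paired differences
    return bias, (bias - half_width, bias + half_width)
```

Here `method_a` would hold the VR-based measurements and `method_b` the APCT measurements for the same patients; the subgroup results suggest running it separately for esotropia and exotropia.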
Affiliation(s)
- Po-Han Yeh
- Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan
- Chun-Hsiu Liu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan
- Ming-Hui Sun
- Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan
- Sheng-Chu Chi
- Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan
- Yih-Shiou Hwang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Chang Gung University College of Medicine, No 5, Fu-Shin Street, Kwei-Shan District, Tau-Yuan City, Taiwan
58. Wu W, Hall AK, Braund H, Bell CR, Szulewski A. The Development of Visual Expertise in ECG Interpretation: An Eye-Tracking Augmented Re Situ Interview Approach. Teach Learn Med 2021;33:258-269. PMID: 33302734; DOI: 10.1080/10401334.2020.1844009.
Abstract
Phenomenon: Visual expertise in medicine involves a complex interplay between expert visual behavior patterns and higher-level cognitive processes. Previous studies of visual expertise in medicine have centered around traditionally visually intensive disciplines such as radiology and pathology. However, there is limited study of visual expertise in electrocardiogram (ECG) interpretation, a common clinical task that is associated with high error rates. This qualitatively driven multi-methods study aimed to describe differences in cognitive approaches to ECG interpretation between medical students, emergency medicine (EM) residents, and EM attending physicians. Approach: Ten medical students, 10 EM residents, and 10 EM attending physicians were recruited from one tertiary academic center to participate in this study. Participants interpreted 10 ECGs with a screen-based eye-tracking device, then underwent a subjective re situ interview augmented by playback of the participants' own gaze scan-paths via eye-tracking. Interviews were transcribed verbatim and an emergent thematic analysis was performed across participant groups. Diagnostic speed, accuracy, and heat maps of fixation distribution were collected to supplement the qualitative findings. Findings: Qualitative analysis demonstrated differences among the cohorts in three major themes: dual-process reasoning, ability to prioritize, and clinical implications. These qualitative findings were aligned with differences in visual behavior demonstrated by heat maps of fixation distribution across each ECG. More experienced participants completed ECG interpretation significantly faster and more accurately than less experienced participants. Insights: The cognitive processes related to ECG interpretation differed between novices and more experienced providers in EM. Understanding the differences in cognitive approaches to ECG interpretation between these groups may help inform best practices in teaching this ubiquitous diagnostic skill.
Affiliation(s)
- William Wu
- School of Medicine, Queen's University, Kingston, Ontario, Canada
- Andrew K Hall
- Department of Emergency Medicine, Kingston Health Science Center, Queen's University, Kingston, Ontario, Canada
- Heather Braund
- Office of Professional Development and Educational Scholarship, Faculty of Health Sciences, Faculty of Education, Queen's University, Kingston, Ontario, Canada
- Colin R Bell
- Department of Emergency Medicine, Kingston Health Science Center, Queen's University, Kingston, Ontario, Canada
- Adam Szulewski
- Department of Emergency Medicine, Kingston Health Science Center, Queen's University, Kingston, Ontario, Canada
59. Quen MTZ, Mountstephens J, Teh YG, Teo J. Medical image interpretation training with a low-cost eye tracking and feedback system: A preliminary study. Healthc Technol Lett 2021. DOI: 10.1049/htl2.12014.
Affiliation(s)
- Mathieson Tan Zui Quen
- Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
- J. Mountstephens
- Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
- Yong Guang Teh
- Faculty of Medicine and Health Sciences, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
- J. Teo
- Faculty of Computing and Informatics, Universiti Malaysia Sabah, Kota Kinabalu, Sabah, Malaysia
60. Lee A, Chung H, Cho Y, Kim JL, Choi J, Lee E, Kim B, Cho SJ, Kim SG. Identification of gaze pattern and blind spots by upper gastrointestinal endoscopy using an eye-tracking technique. Surg Endosc 2021;36:2574-2581. PMID: 34013392; DOI: 10.1007/s00464-021-08546-3.
Abstract
BACKGROUND The lesion detection rate of esophagogastroduodenoscopy (EGD) varies depending on the degree of experience of the endoscopist and anatomical blind spots. This study aimed to identify gaze patterns and blind spots by analyzing the endoscopist's gaze during real-time EGD. METHODS Five endoscopists were enrolled in this study. Each endoscopist's eye gaze, tracked by an eye tracker, was analyzed from the esophagogastric junction to the second portion of the duodenum (excluding the esophagus) during insertion and withdrawal, and then matched with photos. Gaze patterns were visualized as gaze plots and blind spot detection as heatmaps, and quantified as observation time (OT), fixation duration (FD), and the FD-to-OT ratio. RESULTS The mean OT and FD were 11.10 ± 11.14 min and 8.37 ± 9.95 min, respectively, and the FD-to-OT ratio was 72.5%. A total of 34.3% of the time was spent observing the antrum. When observing the body of the stomach, it took longer to observe the high body in the retroflexion view and the low-to-mid body in the forward view. CONCLUSIONS It is necessary to minimize gaze distraction and observe the posterior wall in the retroflexion view. Our results suggest that eye-tracking techniques may be useful for future endoscopic training and education.
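The FD-to-OT ratio reported above is total fixation time as a fraction of total observation time. A sketch of the metric, aggregated per endoscopist (the function name and the averaging across endoscopists are assumptions, not the authors' code):

```python
import numpy as np

def fd_to_ot_ratio(fd_minutes, ot_minutes):
    """Fixation-duration-to-observation-time ratio computed per endoscopist,
    then averaged across endoscopists; inputs are per-endoscopist totals in
    minutes (fixation duration and observation time, respectively)."""
    fd = np.asarray(fd_minutes, float)
    ot = np.asarray(ot_minutes, float)
    return float(np.mean(fd / ot))
```

A higher ratio indicates that more of the examination time was spent in stable fixations rather than saccadic gaze movement.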
Affiliation(s)
- Ayoung Lee
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Internal Medicine, Ewha Womans University School of Medicine, Seoul, Republic of Korea
- Hyunsoo Chung
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yejin Cho
- Seoul National University College of Medicine, Seoul, Republic of Korea
- Jue Lie Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jinju Choi
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Eunwoo Lee
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Bokyung Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Soo-Jeong Cho
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sang Gyun Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
61. Ralekar C, Gandhi TK, Chaudhury S. Collaborative Human Machine Attention Module for Character Recognition. 2020 25th International Conference on Pattern Recognition (ICPR), 2021. DOI: 10.1109/icpr48806.2021.9413229.
62. Thakoor KA, Koorathota SC, Hood DC, Sajda P. Robust and Interpretable Convolutional Neural Networks to Detect Glaucoma in Optical Coherence Tomography Images. IEEE Trans Biomed Eng 2020;68:2456-2466. PMID: 33290209; DOI: 10.1109/tbme.2020.3043215.
Abstract
Recent studies suggest that deep learning systems can now achieve performance on par with medical experts in diagnosis of disease. A prime example is in the field of ophthalmology, where convolutional neural networks (CNNs) have been used to detect retinal and ocular diseases. However, this type of artificial intelligence (AI) has yet to be adopted clinically due to questions regarding robustness of the algorithms to datasets collected at new clinical sites and a lack of explainability of AI-based predictions, especially relative to those of human expert counterparts. In this work, we develop CNN architectures that demonstrate robust detection of glaucoma in optical coherence tomography (OCT) images and use testing with concept activation vectors (TCAV) to infer which image concepts the CNNs use to generate predictions. Furthermore, we compare TCAV results to eye fixations of clinicians, to identify common decision-making features used by both AI and human experts. We find that employing fine-tuned transfer learning and CNN ensemble learning creates end-to-end deep learning models with superior robustness compared to previously reported hybrid deep-learning/machine-learning models, and the TCAV/eye-fixation comparison suggests the importance of three OCT report sub-images that are consistent with areas of interest fixated upon by OCT experts to detect glaucoma. The pipeline described here for evaluating CNN robustness and validating interpretable image concepts used by CNNs with eye movements of experts has the potential to help standardize the acceptance of new AI tools for use in the clinic.
63. Castner N, Appel T, Eder T, Richter J, Scheiter K, Keutel C, Hüttig F, Duchowski A, Kasneci E. Pupil diameter differentiates expertise in dental radiography visual search. PLoS One 2020;15:e0223941. PMID: 32469952; PMCID: PMC7259659; DOI: 10.1371/journal.pone.0223941.
Abstract
Expert behavior is characterized by rapid information processing abilities, dependent on more structured schemata in long-term memory designated for their domain-specific tasks. From this understanding, expertise can effectively reduce cognitive load on a domain-specific task. However, certain tasks could still evoke different gradations of load even for an expert, e.g., when having to detect subtle anomalies in dental radiographs. Our aim was to measure pupil diameter response to anomalies of varying levels of difficulty in expert and student dentists’ visual examination of panoramic radiographs. We found that students’ pupil diameter dilated significantly from baseline compared to experts, but anomaly difficulty had no effect on pupillary response. In contrast, experts’ pupil diameter responded to varying levels of anomaly difficulty, where more difficult anomalies evoked greater pupil dilation from baseline. Experts thus showed proportional pupillary response indicative of increasing cognitive load with increasingly difficult anomalies, whereas students showed pupillary response indicative of higher cognitive load for all anomalies when compared to experts.
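Pupil dilation "from baseline," as measured above, is conventionally computed by subtracting a pre-stimulus baseline diameter from the trial trace. A generic baseline-subtraction sketch (the window length and names are assumptions, not the authors' pipeline):

```python
import numpy as np

def dilation_from_baseline(trace_mm, baseline_samples=10):
    """Mean pupil dilation relative to a pre-stimulus baseline, where the
    baseline is the mean diameter over the first `baseline_samples` samples
    of the trial trace (in mm)."""
    trace = np.asarray(trace_mm, float)
    baseline = trace[:baseline_samples].mean()
    return float((trace[baseline_samples:] - baseline).mean())
```

Comparing this per-trial quantity across anomaly-difficulty levels is what distinguishes the experts' graded pupillary response from the students' uniformly elevated response.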
Affiliation(s)
- Nora Castner
- Human-Computer Interaction, Institute of Computer Science, University Tübingen, Tübingen, Germany
- Tobias Appel
- Human-Computer Interaction, Institute of Computer Science, University Tübingen, Tübingen, Germany
- Thérése Eder
- Multiple Representations Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
- Juliane Richter
- Multiple Representations Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
- Katharina Scheiter
- Multiple Representations Lab, Leibniz-Institut für Wissensmedien, Tübingen, Germany
- University Tübingen, Tübingen, Germany
- Constanze Keutel
- Department of Oral- and Maxillofacial Radiology, University Clinic for Dentistry, Oral Medicine, and Maxillofacial Surgery, University of Tübingen, Tübingen, Germany
- Fabian Hüttig
- Department of Prosthodontics, University Clinic for Dentistry, Oral Medicine, and Maxillofacial Surgery, University of Tübingen, Tübingen, Germany
- Andrew Duchowski
- Visual Computing, Clemson University, Clemson, South Carolina, United States of America
- Enkelejda Kasneci
- Human-Computer Interaction, Institute of Computer Science, University Tübingen, Tübingen, Germany
|
64
|
Botelho MG, Ekambaram M, Bhuyan SY, Yeung AWK, Tanaka R, Bornstein MM, Li KY. A comparison of visual identification of dental radiographic and nonradiographic images using eye tracking technology. Clin Exp Dent Res 2020; 6:59-68. [PMID: 32067393 PMCID: PMC7025973 DOI: 10.1002/cre2.249] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2019] [Revised: 08/16/2019] [Accepted: 08/21/2019] [Indexed: 12/03/2022] Open
Abstract
OBJECTIVES Eye tracking has been used in medical radiology to understand observers' gaze patterns during radiological diagnosis. This study examines the visual identification ability of junior hospital dental officers (JHDOs) and dental surgery assistants (DSAs) in radiographic and nonradiographic images using eye tracking technology and examines whether the two are correlated. MATERIAL AND METHODS Nine JHDOs and nine DSAs examined six radiographic images and 16 nonradiographic images using eye tracking. The areas of interest (AOIs) of the radiographic images were rated as easy, medium, and hard, and the nonradiographic images were categorized as pattern recognition, face recognition, and image comparison. The participants were required to identify and locate the AOIs. Data analysis of the two domains, entire slide and AOI, was conducted by evaluating the eye tracking metrics (ETMs) and the performance outcomes. ETMs consisted of six parameters, and performance outcomes consisted of four parameters. RESULTS No significant differences in ETMs were observed between JHDOs and DSAs for either radiographic or nonradiographic images. The JHDOs showed a significantly higher percentage of identified AOIs than DSAs for all the radiographic images (72.7% vs. 36.4%, p = .004) and for the easy categorization of radiographic AOIs (85.7% vs. 42.9%, p = .012). JHDOs with a higher correct identification percentage in face recognition had shorter dwell times in AOIs. CONCLUSIONS Although no significant relationship was observed between performance on radiographic and nonradiographic images, there was some evidence that visual recognition skills may impact certain attributes of the visual search pattern in radiographic images.
Affiliation(s)
- Michael G. Botelho
- Prosthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Sangeeta Y. Bhuyan
- Prosthodontics, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Ray Tanaka
- Oral and Maxillofacial Radiology, Applied Oral Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Michael M. Bornstein
- Oral and Maxillofacial Radiology, Applied Oral Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Kar Yan Li
- Centralized Research Laboratories, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
|
65
|
Focus is in the gaze of the beholder. Pediatr Res 2020; 87:434-435. [PMID: 31706256 PMCID: PMC7035968 DOI: 10.1038/s41390-019-0671-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Accepted: 10/21/2019] [Indexed: 11/08/2022]
|
66
|
Wu CC, Wolfe JM. Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future. Vision (Basel) 2019; 3:E32. [PMID: 31735833 PMCID: PMC6802791 DOI: 10.3390/vision3020032] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Revised: 06/09/2019] [Accepted: 06/18/2019] [Indexed: 12/21/2022] Open
Abstract
The eye movements of experts, reading medical images, have been studied for many years. Unlike topics such as face perception, medical image perception research needs to cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because they simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical 'scenes' and we discuss how tracking experts' eyes may provide useful insights for medical education and screening efficiency.
Affiliation(s)
- Chia-Chien Wu
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Jeremy M. Wolfe
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
|