1. Lopes A, Rasmussen S, Au R, Chakravarthy V, Chinnery T, Christie J, Djordjevic B, Gomez JA, Grindrod N, Policelli R, Sharma A, Tran C, Walsh JC, Wehrli B, Ward AD, Cecchini MJ. Identification of Distinct Visual Scan Paths for Pathologists in Rare-Element Search Tasks. Int J Surg Pathol 2025; 33:861-870. [PMID: 39563530; PMCID: PMC12069827; DOI: 10.1177/10668969241294239]
Abstract
Background: The search for rare elements, such as mitotic figures, is crucial in pathology. Combining digital pathology with eye-tracking technology allows detailed study of how pathologists complete these important tasks.
Objectives: To determine whether pathologists have distinct search characteristics in domain- and nondomain-specific tasks.
Design: Six pathologists and six graduate students were recruited as observers. Each observer was given five digital "Where's Waldo?" puzzles and asked to search for the Waldo character as a nondomain-specific task. Each pathologist was then given five images of a breast digital pathology slide and asked to search for a single mitotic figure as a domain-specific task. The observers' eye gaze data were collected.
Results: Pathologists' median fixation duration was 244 ms, compared to 300 ms for nonpathologists searching for Waldo (P < .001) and 233 ms for pathologists searching for mitotic figures (P = .003). Pathologists' median fixation and saccade rates were 3.17/second and 2.77/second, respectively, compared to 2.61/second and 2.47/second for nonpathologists searching for Waldo (P < .001), and 3.34/second and 3.09/second for pathologists searching for mitotic figures (P = .222 and P = .187, respectively). There was no significant difference between the two cohorts in their accuracy in identifying the search target.
Conclusions: When searching for rare elements during a nondomain-specific search task, pathologists' search characteristics were fundamentally different from nonpathologists', indicating that pathologists can rapidly classify the objects of their fixations without compromising accuracy. Further, pathologists' search characteristics differed fundamentally between domain-specific and nondomain-specific rare-element search tasks.
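The summary statistics in this abstract (median fixation duration, fixation rate, saccade rate) can be computed from a fixation-event stream. A minimal sketch, assuming a hypothetical `(start_ms, end_ms)` event format; real trackers (Tobii, EyeLink, etc.) export fixations in their own schemas:

```python
# Sketch: deriving per-trial gaze metrics from fixation events.
# The (start_ms, end_ms) tuple format is an illustrative assumption,
# not any vendor's actual export format.
from statistics import median

def gaze_metrics(fixations, trial_duration_s):
    """fixations: list of (start_ms, end_ms) fixation events, in order."""
    durations = [end - start for start, end in fixations]
    n_fix = len(fixations)
    # Treat each gap between consecutive fixations as one saccade.
    n_sacc = max(n_fix - 1, 0)
    return {
        "median_fixation_ms": median(durations),
        "fixations_per_s": n_fix / trial_duration_s,
        "saccades_per_s": n_sacc / trial_duration_s,
    }

# Toy example: four fixations over a 2-second search trial.
m = gaze_metrics([(0, 250), (300, 540), (600, 830), (900, 1150)], 2.0)
print(m["median_fixation_ms"])  # 245.0
print(m["fixations_per_s"])     # 2.0
```

Rate-style metrics like these are what allow the between-cohort comparisons reported above, since they normalize for differing trial lengths.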
Affiliation(s)
- Alana Lopes
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Sean Rasmussen
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Ryan Au
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Vignesh Chakravarthy
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Tricia Chinnery
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Jaryd Christie
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Bojana Djordjevic
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Jose A. Gomez
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Natalie Grindrod
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Robert Policelli
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
- Anurag Sharma
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Christopher Tran
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Joanna C. Walsh
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Bret Wehrli
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
- Aaron D. Ward
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, Ontario, Canada
  - Department of Oncology, Western University, London, Ontario, Canada
- Matthew J. Cecchini
  - Department of Pathology and Laboratory Medicine, Western University and London Health Sciences Centre, London, Ontario, Canada
2. Gupta P, Sheth N, AlAhmadi R, Yao X, Heiferman MJ. The Effect of Experience on Visual Search Patterns in Retinal Imaging Analysis. Ophthalmic Surg Lasers Imaging Retina 2025:1-9. [PMID: 40163634; DOI: 10.3928/23258160-20250228-03]
Abstract
Background and Objective: The increasing use of retinal diagnostic imaging necessitates a standardized viewing technique. This study investigates visual search patterns among ophthalmologists at various experience levels using eye-tracking technology.
Patients and Methods: Participants included postgraduate year 2, 3, and 4 residents, retina fellows, and attending ophthalmologists, who analyzed fundus images while their eye movements were tracked.
Results: Attendings had shorter fixation durations (0.15 ± 0.04 seconds) and saccade lengths (0.06° ± 0.01°), indicating faster processing of image information than novice physicians. Experts also analyzed a higher proportion of the image area (49.43% ± 7.34%) and employed a global-focal search pattern, suggesting increased thoroughness.
Conclusion: Experts in ophthalmology demonstrate gaze characteristics that reflect faster image processing and a more thorough analysis of diagnostic imaging. We recommend that residents be taught a standardized method for image interpretation that emulates expert analysis through a disc-macula-vessel-periphery sequence with radial sweeps. [Ophthalmic Surg Lasers Imaging Retina 2025;56:XX-XX.]
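Saccade lengths here are reported in degrees of visual angle rather than pixels. A minimal conversion sketch, under assumed (illustrative) screen geometry; the specific display width and viewing distance below are not from the study:

```python
import math

# Sketch (not from the paper): converting an on-screen distance in pixels
# to degrees of visual angle, the unit used for saccade lengths above.
def pixels_to_degrees(px, screen_w_px, screen_w_cm, viewing_dist_cm):
    """Full visual angle subtended by `px` pixels at the viewing distance."""
    size_cm = px * (screen_w_cm / screen_w_px)
    return math.degrees(2 * math.atan(size_cm / (2 * viewing_dist_cm)))

# e.g., a 100 px saccade on a 1920 px / 53 cm wide display viewed at 60 cm
angle = pixels_to_degrees(100, 1920, 53.0, 60.0)
print(f"{angle:.2f} deg")  # ≈ 2.64 deg
```

Reporting in degrees makes gaze metrics comparable across participants and setups with different monitors and seating distances.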
3. Yamada R, Xu K, Kondo S, Fujimoto M. Why the gaze behavior of expert physicians and novice medical students differ during a simulated medical interview: A mixed methods study. PLoS One 2025; 20:e0315405. [PMID: 39746055; PMCID: PMC11694983; DOI: 10.1371/journal.pone.0315405]
Abstract
Human cognition is reflected in gaze behavior, which involves eye movements that fixate on or shift focus between areas. In natural interactions, gaze behavior serves two functions: signal transmission and information gathering. While expert gaze as a tool for gathering information has been studied, its underlying cognitive processes remain insufficiently explored. This study investigated differences in gaze behavior and cognition between expert physicians and novice medical students during a simulated medical interview with a simulated patient, drawing implications for medical education. The study employed an exploratory sequential mixed methods design. During the simulated medical interview, participants' gaze behavior was measured across five areas: the patient's eyes, face, body trunk, medical chart, and medical questionnaire. A hierarchical Bayesian model was used to analyze differences in gaze behavior between expert physicians and novice medical students. A semi-structured interview was then conducted to discern participants' perceptions during their gaze behavior; their recorded gaze behavior was presented to them and analyzed using a qualitative descriptive approach. Model analyses indicated that experts looked at the simulated patient's eyes less frequently than novices during the interview. Expert physicians stated that they looked at the patient's eyes less often because of the potential for causing discomfort, despite the importance of doing so for obtaining diagnostic findings. Conversely, novice medical students did not mention obtaining such findings, but reported looking at the patient's eyes more often to improve patient satisfaction. This contrast in perceptions of gaze behavior may lead to new approaches in medical education. This study highlights the importance of understanding gaze behavior in the context of medical education and suggests that different motivations underlie the gaze behavior of expert physicians and novice medical students. Incorporating training in effective gaze behavior may improve the quality of patient care and medical students' learning outcomes.
Affiliation(s)
- Rie Yamada
  - Department of Adult Nursing, Faculty of Medicine, Academic Assembly, University of Toyama, Toyama, Japan
- Kuangzhe Xu
  - Institute for Promotion of Higher Education, Hirosaki University, Aomori, Japan
- Satoshi Kondo
  - Department of Medical Education, Graduate School of Medicine, University of Toyama, Toyama, Japan
  - Center for Medical Education and Career Development, Graduate School of Medicine, University of Toyama, Toyama, Japan
- Makoto Fujimoto
  - Department of Japanese Oriental Medicine, Faculty of Medicine, Academic Assembly, University of Toyama, Toyama, Japan
4. Lopes A, Ward AD, Cecchini M. Eye tracking in digital pathology: A comprehensive literature review. J Pathol Inform 2024; 15:100383. [PMID: 38868488; PMCID: PMC11168484; DOI: 10.1016/j.jpi.2024.100383]
Abstract
Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology, from conventional microscopes to digital whole slide images, allows new research to be conducted on pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately producing stronger and more proficient generations of pathologists. The goal of this review of eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using 'pathology' AND 'eye tracking' synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy. The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices to search like an expert remains unclear. The main challenges and prospects of eye tracking in pathology are briefly discussed along with their implications for the field.
Affiliation(s)
- Alana Lopes
  - Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Aaron D. Ward
  - Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
  - Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
  - Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Matthew Cecchini
  - Department of Pathology and Laboratory Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada
5. Bellstedt M, Holtrup A, Otto N, Berndt M, Scherff AD, Papan C, Robitzsch A, Missler M, Darici D. Gaze cueing improves pattern recognition of histology learners. Anat Sci Educ 2024; 17:1461-1472. [PMID: 39135334; DOI: 10.1002/ase.2498]
Abstract
Experts perceive and evaluate domain-specific visual information with high accuracy. In doing so, they exhibit eye movements referred to as "expert gaze" to rapidly focus on task-relevant areas. Using eye tracking, it is possible to record these implicit gaze patterns and present them to novice histology learners during training. This article presents a comprehensive evaluation of such expert gaze cueing on the pattern recognition of medical students in histology. For this purpose, 53 students were randomized into two groups over eight histology sessions. The control group was presented with an instructional histology video featuring voice commentary. The gaze cueing group was presented with the same video, but with an additional overlay of a live recording of the expert's eye movements. Afterward, students' pattern recognition was assessed through 20 image-based tasks (5 retention, 15 transfer) and their cognitive load with the Paas scale. Results showed that the gaze cueing group significantly outperformed the control group (p = 0.007; d = 0.40). This effect was evident for both retention (p = 0.003) and transfer tasks (p = 0.046), and generalized across different histological contexts. Cognitive load was similar in both groups. In conclusion, gaze cueing helps novice histology learners develop their pattern recognition skills, offering a promising method for histology education. Histology educators could benefit from this instructional strategy to provide new forms of attentional guidance to learners in visually complex learning environments.
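The effect size reported here (d = 0.40) is Cohen's d. A minimal sketch of computing it for two independent groups using a pooled standard deviation; the score data below are invented toy values, not the study's:

```python
# Sketch: Cohen's d for two independent groups (pooled SD).
# The toy scores are illustrative, not the study's data.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference between groups a and b."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Toy scores: gaze-cueing group vs. control on a 20-item test.
cue = [14, 15, 13, 16, 15]
ctrl = [12, 13, 12, 14, 13]
print(round(cohens_d(cue, ctrl), 2))  # 1.8
```

By convention, d ≈ 0.2 is considered a small effect, 0.5 medium, and 0.8 large, so the study's d = 0.40 sits between small and medium.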
Affiliation(s)
- Michelle Bellstedt
  - Institute of Anatomy and Molecular Neurobiology, University of Münster, Münster, Germany
- Adrian Holtrup
  - Institute of Anatomy and Molecular Neurobiology, University of Münster, Münster, Germany
- Nils Otto
  - Institute of Anatomy and Molecular Neurobiology, University of Münster, Münster, Germany
- Markus Berndt
  - Institute of Medical Education, LMU University Hospital, LMU Munich, Munich, Germany
- Aline Doreen Scherff
  - Institute of Medical Education, LMU University Hospital, LMU Munich, Munich, Germany
- Cihan Papan
  - Institute for Hygiene and Public Health, University Hospital Bonn, Bonn, Germany
- Anita Robitzsch
  - Clinic for Psychosomatic Medicine and Psychotherapy, LVR-University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Markus Missler
  - Institute of Anatomy and Molecular Neurobiology, University of Münster, Münster, Germany
- Dogus Darici
  - Institute of Anatomy and Molecular Neurobiology, University of Münster, Münster, Germany
6. Brunyé TT, Konold CE, Wang J, Kerr KF, Drew T, Shucard H, Soroka K, Weaver DL, Elmore JG. Do Physicians Remember Cases? Implications for Longitudinal Designs in Medical Research and Competency Assessment. Res Methods Med Health Sci 2024; 5:76-82. [PMID: 39896337; PMCID: PMC11785405; DOI: 10.1177/26320843231199453]
Abstract
Background: In pathology and other specialties of diagnostic medicine, longitudinal studies and competency assessments often involve physicians interpreting the same images multiple times. In these designs, a washout period is used to reduce the chance that later interpretations are influenced by prior exposure.
Objectives: The present study examines whether a washout period of 9-39 months is sufficient to prevent three effects of prior exposure when pathologists review digital breast tissue biopsies and render diagnostic decisions: faster case review durations, higher confidence, and lower perceived difficulty.
Methods: In a longitudinal breast pathology study, 48 resident pathologists reviewed a mix of five novel and five repeated digital whole slide images during Phase 2, occurring 9-39 months after an initial Phase 1 review. Importantly, cases that were repeated for some participants in Phase 2 were novel for other participants in Phase 2. We statistically tested for differences in participants' case review duration, self-reported confidence, and self-reported difficulty in Phase 2 based on whether the case was novel or repeated.
Results: There was no statistically significant difference in review time, confidence, or difficulty as a function of whether the case was repeated or novel in a Phase 2 review occurring 9-39 months after initial viewing; the same result was found in a subset of participants with a shorter (9-14-month) washout.
Conclusion: These results support the efficacy of at least a 9-month washout period in the design of longitudinal medical imaging and informatics studies to ensure no detectable effect of initial exposure on participants' subsequent case reviews.
Affiliation(s)
- Tad T. Brunyé
  - Center for Applied Brain and Cognitive Sciences, Tufts University, 177 College Ave., Suite 090, Medford, MA 02155 USA
  - Department of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155 USA
- Catherine E. Konold
  - Department of Psychology, University of Utah, 380 S 1530 E Beh S 502, Salt Lake City, UT 84112 USA
- Jason Wang
  - David Geffen School of Medicine, Department of Medicine, University of California, Los Angeles, 885 Tiverton Drive, Los Angeles, CA 90095 USA
- Kathleen F. Kerr
  - Department of Biostatistics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195 USA
- Trafton Drew
  - Department of Psychology, University of Utah, 380 S 1530 E Beh S 502, Salt Lake City, UT 84112 USA
- Hannah Shucard
  - Department of Biostatistics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195 USA
- Kim Soroka
  - David Geffen School of Medicine, Department of Medicine, University of California, Los Angeles, 885 Tiverton Drive, Los Angeles, CA 90095 USA
- Donald L. Weaver
  - Department of Pathology and Laboratory Medicine, Larner College of Medicine, University of Vermont and Vermont Cancer Center, 89 Beaumont Ave., Burlington, VT 05405 USA
- Joann G. Elmore
  - David Geffen School of Medicine, Department of Medicine, University of California, Los Angeles, 885 Tiverton Drive, Los Angeles, CA 90095 USA
7. Brunyé TT, Booth K, Hendel D, Kerr KF, Shucard H, Weaver DL, Elmore JG. Machine learning classification of diagnostic accuracy in pathologists interpreting breast biopsies. J Am Med Inform Assoc 2024; 31:552-562. [PMID: 38031453; PMCID: PMC10873842; DOI: 10.1093/jamia/ocad232]
Abstract
Objective: This study explores the feasibility of using machine learning to predict accurate versus inaccurate diagnoses made by pathologists based on their spatiotemporal viewing behavior when evaluating digital breast biopsy images.
Materials and Methods: The study gathered data from 140 pathologists of varying experience levels, each of whom reviewed a set of 14 digital whole slide images of breast biopsy tissue. Pathologists' viewing behavior, including zooming and panning actions, was recorded during image evaluation. A total of 30 features were extracted from the viewing behavior data, and 4 machine learning algorithms were used to build classifiers for predicting diagnostic accuracy.
Results: The Random Forest classifier demonstrated the best overall performance, achieving a test accuracy of 0.81 and an area under the receiver operating characteristic curve of 0.86. Features related to attention distribution and focus on critical regions of interest were important predictors of diagnostic accuracy. Further including case-level and pathologist-level information incrementally improved classifier performance.
Discussion: The results suggest that pathologists' viewing behavior during digital image evaluation can be leveraged to predict diagnostic accuracy, affording automated feedback and decision support systems based on viewing behavior to aid in training and, ultimately, clinical practice. They also carry implications for basic research examining the interplay between perception, thought, and action in diagnostic decision-making.
Conclusion: The classifiers developed herein have potential applications in training and clinical settings to provide timely feedback and support to pathologists during diagnostic decision-making. Further research could explore the generalizability of these findings to other medical domains and varied levels of expertise.
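The general approach described here, training a Random Forest on viewing-behavior features and scoring it by ROC AUC, can be sketched generically. This is not the paper's pipeline: the feature names and data below are synthetic stand-ins, with the "accurate diagnosis" label loosely tied to one feature so the classifier has something to learn:

```python
# Sketch: predicting a binary outcome from viewing-behavior-style features
# with a Random Forest. All data and feature semantics are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-case features: zoom events, mean magnification,
# fraction of time on a region of interest, total panning distance.
X = rng.normal(size=(n, 4))
# Synthetic label: "accurate" loosely driven by time-on-ROI (column 2).
y = (X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")  # well above the 0.5 chance level here
```

As in the study, feature importances (`clf.feature_importances_`) can then indicate which aspects of viewing behavior drive the prediction.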
Affiliation(s)
- Tad T Brunyé
  - Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States
  - Department of Psychology, Tufts University, Medford, MA 02155, United States
- Kelsey Booth
  - Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States
- Dalit Hendel
  - Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States
- Kathleen F Kerr
  - Department of Biostatistics, University of Washington, Seattle, WA 98105, United States
- Hannah Shucard
  - Department of Biostatistics, University of Washington, Seattle, WA 98105, United States
- Donald L Weaver
  - Department of Pathology and Laboratory Medicine, Larner College of Medicine, University of Vermont and Vermont Cancer Center, Burlington, VT 05405, United States
- Joann G Elmore
  - Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, United States
8. Homfray B, Attwood A, Channon SB. Anatomy in Practice: How Do Equine and Production Animal Veterinarians Apply Anatomy in Primary Care Settings? J Vet Med Educ 2023; 50:643-653. [PMID: 36198110; DOI: 10.3138/jvme-2022-0074]
Abstract
To successfully prepare veterinary undergraduates for the workplace, it is critical that anatomy educators consider the context in which developing knowledge and skills will be applied. This study aimed to establish how farm animal and equine general practitioners use anatomy and related skills in their daily work. Qualitative ethnographic data, in the form of observations and semi-structured interviews, were collected from 12 veterinarians working in equine or farm animal first-opinion practice. Data underwent thematic analysis using a grounded theory approach. The five themes identified were relevant to both equine and farm animal veterinarians and represented the breadth and complexity of anatomy, its importance for professional and practical competence, and the requirement for continuous learning. The central yet broad and multifaceted nature of anatomy was found to challenge equine and farm animal veterinarians, highlighting that essential anatomy knowledge and related skills are vital to their professional and practical competence. This aligns with the previously described experiences of companion animal clinicians. In equine practice, the complexity of anatomical knowledge required was particularly high, especially in relation to diagnostic imaging and assessing normal variation, resulting in greater importance being placed on formal and informal professional development opportunities. For farm animal clinicians, the application of anatomy in the context of necropsy and euthanasia was particularly noted. Our findings allow anatomy educators to design appropriate and effective learning opportunities to ensure that veterinary graduates are equipped with the skills, knowledge, and resources required to succeed in first-opinion veterinary practice.
Affiliation(s)
- Ben Homfray
  - Mifeddygon Dolgellau Veterinary Surgery, Bala Rd., Dolgellau LL40 2YF Wales
- Ali Attwood
  - Department of Comparative Biomedical Sciences, Royal Veterinary College, London NW1 0TU UK
- Sarah B Channon
  - Veterinary Anatomy, Department of Comparative Biomedical Sciences, Royal Veterinary College, London NW1 0TU UK
9. Akerman M, Choudhary S, Liebmann JM, Cioffi GA, Chen RWS, Thakoor KA. Extracting decision-making features from the unstructured eye movements of clinicians on glaucoma OCT reports and developing AI models to classify expertise. Front Med (Lausanne) 2023; 10:1251183. [PMID: 37841006; PMCID: PMC10571140; DOI: 10.3389/fmed.2023.1251183]
Abstract
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, while senior residents had 2-3 years of experience. The expert group consisted of fellows and faculty with 3 to 30+ years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and to rate their diagnostic confidence. Each participant's eye movements were recorded with a Pupil Labs Core eye tracker as they diagnosed the reports. Expert ophthalmologists exhibited more refined and focused eye fixations, particularly on specific regions of the OCT reports, such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL B-scan. The binary classification models developed from the derived features achieved accuracy of up to 94.0% in differentiating between expert and novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and for distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for the development of effective diagnostic tools.
Affiliation(s)
- Michelle Akerman
  - Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Sanmati Choudhary
  - Department of Computer Science, Columbia University, New York, NY, United States
- Jeffrey M. Liebmann
  - Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- George A. Cioffi
  - Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Royce W. S. Chen
  - Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
- Kaveri A. Thakoor
  - Department of Biomedical Engineering, Columbia University, New York, NY, United States
  - Department of Computer Science, Columbia University, New York, NY, United States
  - Edward S. Harkness Eye Institute, Department of Ophthalmology, Columbia University Irving Medical Center, New York, NY, United States
10. Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. [PMID: 37315382; DOI: 10.1016/j.compbiomed.2023.107083]
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a significant increase in publications on this topic. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology, focused on melanoma. In comparison to well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application poses a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved. DL methodology was adopted later in this field, which still lacks the wider adoption of DL methods already shown to be effective in other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks, such as wet-lab testing, is still inferior. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa: Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
11
Hafner C, Scharner V, Hermann M, Metelka P, Hurch B, Klaus DA, Schaubmayr W, Wagner M, Gleiss A, Willschke H, Hamp T. Eye-tracking during simulation-based echocardiography: a feasibility study. BMC Med Educ 2023; 23:490. [PMID: 37393288] [DOI: 10.1186/s12909-023-04458-z]
Abstract
INTRODUCTION Due to technical progress, point-of-care ultrasound (POCUS) is increasingly used in critical care medicine. However, optimal training strategies and support for novices have not been thoroughly researched so far. Eye-tracking, which offers insights into the gaze behavior of experts, may be a useful tool for better understanding. The aim of this study was to investigate the technical feasibility and usability of eye-tracking during echocardiography and to analyze differences in gaze patterns between experts and non-experts. METHODS Nine experts in echocardiography and six non-experts were equipped with eye-tracking glasses (Tobii, Stockholm, Sweden) while performing six medical cases on a simulator. For each view, case-specific areas of interest (AOIs) were defined by the first three experts depending on the underlying pathology. Technical feasibility, participants' subjective experience of the usability of the eye-tracking glasses, and differences in relative dwell time (focus) inside the AOIs between six experts and six non-experts were evaluated. RESULTS Technical feasibility of eye-tracking during echocardiography was achieved, with 96% agreement between the visual area orally described by participants and the area marked by the glasses. Experts had a longer relative dwell time in the case-specific AOIs (50.6% versus 38.4%, p = 0.072) and performed ultrasound examinations faster (138 s versus 227 s, p = 0.068). Furthermore, experts fixated earlier on the AOIs (5 s versus 10 s, p = 0.033). CONCLUSION This feasibility study demonstrates that eye-tracking can be used to analyze experts' and non-experts' gaze patterns during POCUS. Although in this study the experts had a longer fixation time in the defined AOIs compared to non-experts, further studies are needed to investigate whether eye-tracking could improve the teaching of POCUS.
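The study's core gaze metrics, relative dwell time inside an AOI and time to first fixation on it, can be sketched as follows; the data layout and function names are illustrative, not the study's code:

```python
def relative_dwell_time(gaze_samples, aoi):
    """Fraction of gaze samples falling inside a rectangular AOI.

    gaze_samples: list of (t_seconds, x, y) tuples at a fixed sampling rate,
    so the sample fraction approximates the time fraction.
    aoi: (x_min, y_min, x_max, y_max) rectangle in the same coordinates.
    """
    x0, y0, x1, y1 = aoi
    inside = sum(1 for _, x, y in gaze_samples if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(gaze_samples) if gaze_samples else 0.0

def time_to_first_fixation(gaze_samples, aoi):
    """Seconds from the first sample to the first sample inside the AOI (None if never)."""
    x0, y0, x1, y1 = aoi
    t_start = gaze_samples[0][0]
    for t, x, y in gaze_samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t - t_start
    return None
```

Both metrics appear in the abstract: experts showed higher relative dwell time (50.6% versus 38.4%) and earlier first fixations (5 s versus 10 s) in the case-specific AOIs.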
Affiliation(s)
- Christina Hafner: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Vincenz Scharner: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Martina Hermann: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Philipp Metelka: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Benedikt Hurch: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Daniel Alexander Klaus: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Wolfgang Schaubmayr: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Michael Wagner: Department of Pediatrics, Comprehensive Center for Pediatrics, Medical University of Vienna, Vienna, Austria
- Andreas Gleiss: Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Harald Willschke: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Ludwig Boltzmann Institute Digital Health and Patient Safety, Vienna, Austria
- Thomas Hamp: Department of Anaesthesia, General Intensive Care and Pain Medicine, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Emergency Medical Service Vienna, Radetzkystraße 1, 1030 Vienna, Austria
12
Brunyé TT, Balla A, Drew T, Elmore JG, Kerr KF, Shucard H, Weaver DL. From Image to Diagnosis: Characterizing Sources of Error in Histopathologic Interpretation. Mod Pathol 2023; 36:100162. [PMID: 36948400] [PMCID: PMC11386950] [DOI: 10.1016/j.modpat.2023.100162]
Abstract
An accurate histopathologic diagnosis on surgical biopsy material is necessary for the clinical management of patients and has important implications for research, clinical trial design/enrollment, and public health education. This study used a mixed-methods approach to isolate sources of diagnostic error while residents and attending pathologists interpreted digitized breast biopsy slides. Ninety participants, including pathology residents and attending physicians at major United States medical centers, reviewed a set of 14 digitized whole-slide images of breast biopsies. Each case had a consensus-defined diagnosis and a critical region of interest (cROI) representing the most significant pathology on the slide. Participants were asked to view unmarked digitized slides, draw their participant region of interest (pROI), describe its features, and render a diagnosis. Participants' review behavior was tracked using case-viewer software and an eye-tracking device. Diagnostic accuracy was calculated in comparison to the consensus diagnosis. We measured the frequency of errors emerging during 4 interpretive phases: (1) detecting the cROI, (2) recognizing its relevance, (3) using the correct terminology to describe findings in the pROI, and (4) making a diagnostic decision. According to eye-tracking data, trainees and attending pathologists were very likely (∼94% of the time) to find the cROI when inspecting a slide. However, trainees were less likely to consider the cROI relevant to their diagnosis. Pathology trainees were more likely to use incorrect terminology to describe pROI features (41% of cases) than attending pathologists (21% of cases). Failure to accurately describe features was the only factor strongly associated with an incorrect diagnosis. Identifying where errors emerge in the interpretive and/or descriptive process, and building organ-specific feature recognition and verbal fluency in describing those features, are critical steps toward competency in diagnostic decision making.
Affiliation(s)
- Tad T Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, Massachusetts; Department of Psychology, Tufts University, Medford, Massachusetts
- Agnes Balla: Department of Pathology, University of Vermont and Vermont Cancer Center, Burlington, Vermont
- Trafton Drew: Department of Psychology, University of Utah, Salt Lake City, Utah
- Joann G Elmore: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California
- Kathleen F Kerr: Department of Biostatistics, University of Washington, Seattle, Washington
- Hannah Shucard: Department of Biostatistics, University of Washington, Seattle, Washington
- Donald L Weaver: Department of Pathology, University of Vermont and Vermont Cancer Center, Burlington, Vermont
13
Drew T, Konold CE, Lavelle M, Brunyé TT, Kerr KF, Shucard H, Weaver DL, Elmore JG. Pathologist pupil dilation reflects experience level and difficulty in diagnosing medical images. J Med Imaging (Bellingham) 2023; 10:025503. [PMID: 37096053] [PMCID: PMC10122150] [DOI: 10.1117/1.jmi.10.2.025503]
Abstract
Purpose: Digital whole slide imaging allows pathologists to view slides on a computer screen instead of under a microscope. Digital viewing allows for real-time monitoring of pathologists' search behavior and neurophysiological responses during the diagnostic process. One particular neurophysiological measure, pupil diameter, could provide a basis for evaluating clinical competence during training or developing tools that support the diagnostic process. Prior research shows that pupil diameter is sensitive to cognitive load and arousal, and to switches between exploration and exploitation of a visual image. Different categories of lesions in pathology pose different levels of challenge, as indicated by diagnostic disagreement among pathologists. If pupil diameter is sensitive to the perceived difficulty in diagnosing biopsies, eye-tracking could potentially be used to identify biopsies that may benefit from a second opinion. Approach: We measured case-onset baseline-corrected (phasic) and uncorrected (tonic) pupil diameter in 90 pathologists who each viewed and diagnosed 14 digital breast biopsy cases covering the diagnostic spectrum from benign to invasive breast cancer. Pupil data were extracted from the beginning of viewing and interpretation of each individual case. After removing 122 trials (<10%) with poor eye-tracking quality, 1138 trials remained. We used multiple linear regression with robust standard error estimates to account for dependent observations within pathologists. Results: We found a positive association between the magnitude of phasic dilation and subject-centered difficulty ratings, and between the magnitude of tonic dilation and untransformed difficulty ratings. When controlling for case diagnostic category, only the tonic-difficulty relationship persisted. Conclusions: Results suggest that tonic pupil dilation may indicate overall arousal differences between pathologists as they interpret biopsy cases and could signal a need for additional training, experience, or automated decision aids. Phasic dilation is sensitive to characteristics of biopsies that tend to elicit higher difficulty ratings and could indicate a need for a second opinion.
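The tonic/phasic split described in the Approach can be sketched as follows, assuming a simple per-trial diameter trace and a case-onset baseline window (names and window size are illustrative, not the study's parameters):

```python
from statistics import mean

def tonic_and_phasic(pupil_trace, baseline_window=10):
    """Split one trial's pupil-diameter trace into tonic and phasic components.

    tonic: mean diameter over the whole trial (uncorrected).
    phasic: the trace minus the mean of the first `baseline_window` samples,
            i.e. case-onset baseline correction.
    """
    baseline = mean(pupil_trace[:baseline_window])
    return mean(pupil_trace), [d - baseline for d in pupil_trace]
```

Baseline correction removes slow, between-trial differences so that phasic values reflect within-trial dilation, which is why the two components can relate differently to difficulty ratings.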
Affiliation(s)
- Trafton Drew: University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Catherine E. Konold: University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Mark Lavelle: University of New Mexico, Department of Psychology, Albuquerque, New Mexico, United States
- Tad T. Brunyé: Tufts University, Center for Applied Brain and Cognitive Sciences, Medford, Massachusetts, United States
- Kathleen F. Kerr: University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Hannah Shucard: University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Donald L. Weaver: University of Vermont, Department of Pathology & Laboratory Medicine, Burlington, Vermont, United States
- Joann G. Elmore: David Geffen School of Medicine UCLA, Department of Medicine, Los Angeles, California, United States
14
Zhang H, Hung SW, Chen YP, Ku JW, Tseng P, Lu YH, Yang CT. Hip fracture or not? The reversed prevalence effect among non-experts' diagnosis. Cogn Res Princ Implic 2023; 8:1. [PMID: 36600082] [DOI: 10.1186/s41235-022-00455-w]
Abstract
Despite numerous investigations of the prevalence effect on medical image perception, little research has examined the effect of expertise and its possible interaction with prevalence. In this study, medical practitioners were instructed to detect the presence of hip fracture in 50 X-ray images with either high prevalence (Nsignal = 40) or low prevalence (Nsignal = 10). Results showed that the manipulation of prevalence shifted the criteria of experts, who perform hip fracture diagnosis on a daily basis, in a different direction from those of novices (e.g., pediatricians, dentists, neurologists). That is, when the prevalence rate was low (pfracture-present = 0.2), experts held more conservative criteria in answering "fracture-present," whereas novices were more likely to believe there was a fracture. Importantly, participants' detection discriminability did not vary by prevalence condition. In addition, all participants were more conservative with "fracture-present" responses when task difficulty increased. We suspect the apparently opposite criterion shifts between experts and novices may stem from medical training that led novices to believe a miss carries a larger cost than a false positive, or from novices failing to update their prior belief about signal prevalence in the task; both possibilities suggest that novices and experts may hold different beliefs about the optimal strategy in hip fracture diagnosis. Our work can contribute to medical education and training, as well as other applied clinical diagnosis settings that aim to mitigate the prevalence effect.
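The criterion and discriminability measures this abstract refers to come from standard signal detection theory. A generic computation of d' (sensitivity) and criterion c from trial counts, not code from the study; the log-linear correction used here is one common choice for keeping rates of exactly 0 or 1 finite:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity (d') and response criterion (c) from trial counts.

    c > 0 indicates a conservative bias (fewer "signal-present" responses),
    c < 0 a liberal one; d' measures discriminability independent of bias.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)          # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

In the study's terms, the low-prevalence condition shifted experts toward a larger (more conservative) c and novices the other way, while d' stayed stable across prevalence conditions.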
Affiliation(s)
- Hanshu Zhang: School of Psychology, Central China Normal University, Wuhan, Hubei, China
- Shen-Wu Hung: Department of Orthopedics, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Yu-Pin Chen: Department of Orthopedics, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan; Department of Orthopedics, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Jan-Wen Ku: Department of Radiology, Shuang-Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Rd., Zhonghe Dist., New Taipei City, Taiwan
- Philip Tseng: Graduate Institute of Mind, Brain, and Consciousness, Taipei Medical University, Taipei, Taiwan
- Yueh-Hsun Lu: Department of Radiology, Shuang-Ho Hospital, Taipei Medical University, No. 291, Zhongzheng Rd., Zhonghe Dist., New Taipei City, Taiwan; Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Cheng-Ta Yang: Graduate Institute of Mind, Brain, and Consciousness, Taipei Medical University, Taipei, Taiwan; Department of Psychology, National Cheng Kung University, Tainan, Taiwan
15
Evaluation of expert skills in refinery patrol inspection: visual attention and head positioning behavior. Heliyon 2022; 8:e12117. [PMID: 36544846] [PMCID: PMC9761707] [DOI: 10.1016/j.heliyon.2022.e12117]
Abstract
We aimed to clarify expert skills in refinery patrol inspection using data collected through a virtual reality experimental system. As body positioning and postural changes are relevant factors during refinery patrol inspection tasks, we measured and analyzed both visual attention and head positioning behavior among experts and "knowledgeable novices" who were engaged in the engineering of the refinery but had less inspection experience. The participants performed a simulated inspection task, and the results showed that 1) expert inspectors could find more defects compared to knowledgeable novices, 2) visual attention behavior was similar between knowledgeable novices and experts, and 3) experts tended to position their heads at various heights and further from the inspection target to obtain visual information more effectively from the target compared to knowledgeable novices. This study presented the differences in head positioning behavior between expert and novice inspectors for the first time. These results suggest that to evaluate the skills used in inspecting relatively larger targets, both visual attention and head positioning behavior of the inspectors must be measured.
16
Yang M, Xie Z, Wang Z, Yuan Y, Zhang J. Su-MICL: Severity-Guided Multiple Instance Curriculum Learning for Histopathology Image Interpretable Classification. IEEE Trans Med Imaging 2022; 41:3533-3543. [PMID: 35786552] [DOI: 10.1109/tmi.2022.3188326]
Abstract
Histopathology image classification plays a critical role in clinical diagnosis. However, due to the absence of clinical interpretability, most existing image-level classifiers remain impractical. To acquire the essential interpretability, lesion-level diagnosis is also provided, relying on detailed lesion-level annotations. Although the multiple-instance learning (MIL)-based approach can identify lesions using only image-level annotations, it requires overly strict prior information and has limited accuracy in lesion-level tasks. Here, we present a novel severity-guided multiple instance curriculum learning (Su-MICL) strategy to avoid tedious labeling. The proposed Su-MICL operates within a MIL framework using a commonly neglected prior, disease severity, to define the learning difficulty of training images. Based on the difficulty degree, a curriculum is developed to train a model using images from easy to hard. Experimental results on two histopathology image datasets demonstrate that Su-MICL achieves performance comparable to state-of-the-art weakly supervised methods for image-level classification, and its performance in identifying lesions is closest to that of the supervised learning method. Without tedious lesion labeling, the Su-MICL approach can provide an interpretable diagnosis, as well as effective insight to aid histopathology image diagnosis.
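The easy-to-hard ordering at the heart of curriculum learning can be illustrated generically. This is not the Su-MICL implementation, just a sketch in which a severity-derived difficulty score orders MIL training bags into curriculum stages:

```python
def curriculum_stages(bags, difficulty, n_stages=3):
    """Order training bags easy-to-hard by a difficulty score, then split
    them into curriculum stages trained in sequence.

    bags: any list of training items (e.g. MIL bags of image patches).
    difficulty: callable mapping a bag to a score (here, severity-derived).
    """
    ordered = sorted(bags, key=difficulty)
    k = -(-len(ordered) // n_stages)  # ceiling division: stage size
    return [ordered[i:i + k] for i in range(0, len(ordered), k)]
```

A trainer would then fit on stage 1, continue on stages 1-2, and so on, so the model sees unambiguous (low-severity-mismatch) examples before hard ones.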
17
Carrigan AJ, Charlton A, Foucar E, Wiggins MW, Georgiou A, Palmeri TJ, Curby KM. The Role of Cue-Based Strategies in Skilled Diagnosis Among Pathologists. Hum Factors 2022; 64:1154-1167. [PMID: 33586457] [DOI: 10.1177/0018720821990160]
Abstract
OBJECTIVE This research was designed to test whether behavioral indicators of pathology-related cue utilization were associated with performance on a diagnostic task. BACKGROUND Across many domains, including pathology, successful diagnosis depends on pattern recognition that is supported by associations in memory in the form of cues. Previous studies have focused on the specific information or knowledge on which medical image expertise relies. The target in this study is the more general ability to identify and interpret relevant information. METHOD Data were collected from 54 histopathologists in both conference and online settings. The participants completed a pathology edition of the Expert Intensive Skills Evaluation 2.0 (EXPERTise 2.0) to establish behavioral indicators of context-related cue utilization. They also completed a separate diagnostic task designed to examine related diagnostic skills. RESULTS Behavioral indicators of higher or lower cue utilization were based on the participants' performance across five tasks. Accounting for the number of cases reported per year, higher cue utilization was associated with greater accuracy on the diagnostic task. A post hoc analysis suggested that higher cue utilization may be associated with a greater capacity to recognize low prevalence cases. CONCLUSION This study provides support for the role of cue utilization in the development and maintenance of skilled diagnosis amongst pathologists. APPLICATION Pathologist training needs to be structured to ensure that learners have the opportunity to form cue-based strategies and associations in memory, especially for less commonly seen diseases.
Affiliation(s)
- Kim M Curby: Macquarie University, Sydney, Australia
18
Siegmund SE, Manning DK, Davineni PK, Dong F. Deriving tumor purity from cancer next generation sequencing data: applications for quantitative ERBB2 (HER2) copy number analysis and germline inference of BRCA1 and BRCA2 mutations. Mod Pathol 2022; 35:1458-1467. [PMID: 35902772] [DOI: 10.1038/s41379-022-01083-x]
Abstract
Tumor purity, or the relative contribution of tumor cells out of all cells in a pathological specimen, influences mutation identification and clinical interpretation of cancer panel next generation sequencing results. Here, we describe a method of calculating tumor purity using pathologist-guided copy number analysis from sequencing data. Molecular calculation of tumor purity showed strong linear correlation with purity derived from driver KRAS or BRAF variant allele fractions in colorectal cancers (R2 = 0.79) compared to histological estimation in the same set of colorectal cancers (R2 = 0.01) and in a broader dataset of cancers with various diagnoses (R2 = 0.35). We used calculated tumor purity to quantitate ERBB2 copy number in breast carcinomas with equivocal immunohistochemical staining and demonstrated strong correlation with fluorescence in situ hybridization (R2 = 0.88). Finally, we used calculated tumor purity to infer the germline status of variants in breast and ovarian carcinomas with concurrent germline testing. Tumor-only next generation sequencing correctly predicted the somatic versus germline nature of 26 of 26 (100%) pathogenic TP53, BRCA1 and BRCA2 variants. In this article, we describe a framework for calculating tumor purity from cancer next generation sequencing data. Accurate tumor purity assessment can be assimilated into interpretation pipelines to derive clinically useful information from cancer genomic panels.
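The VAF-based purity benchmark mentioned in this abstract follows from a simple identity: for a clonal heterozygous driver variant at a diploid locus, the variant allele fraction (VAF) is approximately purity / 2. A hedged sketch of that benchmark calculation (not the paper's copy-number-guided method, and the no-copy-number-alteration assumption is the key caveat):

```python
def purity_from_vaf(alt_reads, total_reads):
    """Estimate tumor purity from a clonal heterozygous somatic driver variant.

    Assumes a diploid locus with no copy-number alteration, so only tumor
    cells carry the variant and each carries one mutant of two alleles:
    VAF = purity / 2, hence purity = 2 * VAF (capped at 1.0).
    """
    vaf = alt_reads / total_reads
    return min(2.0 * vaf, 1.0)
```

Copy-number changes at the driver locus break this assumption, which is one motivation for the paper's copy-number-based purity calculation.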
Affiliation(s)
- Phani K Davineni: Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Fei Dong: Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
19
Rasoolijaberi M, Babaei M, Riasatian A, Hemati S, Ashrafi P, Gonzalez R, Tizhoosh HR. Multi-Magnification Image Search in Digital Pathology. IEEE J Biomed Health Inform 2022; 26:4611-4622. [PMID: 35687644] [DOI: 10.1109/jbhi.2022.3181531]
Abstract
This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes to use multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed searching framework does not rely on any regional annotation and potentially applies to millions of unlabelled (raw) whole slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report the search results of 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes. In contrast, low-magnification information may improve this assessment depending on the tumor type. Our multi-magnification approach achieved up to 11% F1-score improvement in searching among the urinary tract and brain tumor subtypes compared to the single-magnification image search.
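The two fusion approaches this paper compares can be illustrated schematically. The function names, the choice of concatenation for the single-vector case, and mean cosine distance for the multi-vector case are assumptions for illustration, not the paper's exact formulation:

```python
from math import sqrt

def single_vector(features_by_mag):
    """Approach 1: one slide-level descriptor, formed here by concatenating
    per-magnification deep feature vectors in a fixed magnification order."""
    return [v for mag in sorted(features_by_mag) for v in features_by_mag[mag]]

def multi_vector_distance(query, candidate):
    """Approach 2: keep one vector per magnification (e.g. 20x, 10x, 5x) and
    rank search results by the mean per-magnification cosine distance."""
    dists = []
    for mag, q in query.items():
        c = candidate[mag]
        dot = sum(a * b for a, b in zip(q, c))
        cos = dot / (sqrt(sum(a * a for a in q)) * sqrt(sum(b * b for b in c)))
        dists.append(1.0 - cos)
    return sum(dists) / len(dists)
```

The multi-vector form keeps magnifications separable, matching the paper's finding that high-magnification (cell-level) features dominate diagnostic search while low-magnification features help only for some tumor types.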
20
Ghezloo F, Wang PC, Kerr KF, Brunyé TT, Drew T, Chang OH, Reisch LM, Shapiro LG, Elmore JG. An analysis of pathologists' viewing processes as they diagnose whole slide digital images. J Pathol Inform 2022; 13:100104. [PMID: 36268085] [PMCID: PMC9576972] [DOI: 10.1016/j.jpi.2022.100104]
Abstract
Although pathologists have their own viewing habits while diagnosing, the viewing behaviors leading to the most accurate diagnoses are under-investigated. Digital whole slide imaging has enabled investigators to analyze pathologists' visual interpretation of histopathological features using mouse and viewport tracking techniques. In this study, we provide definitions for basic viewing behavior variables and investigate the association of pathologists' characteristics and viewing behaviors, and how they relate to diagnostic accuracy when interpreting whole slide images. We use recordings of 32 pathologists' actions while interpreting a set of 36 digital whole slide skin biopsy images (5 sets of 36 cases; 180 cases total). These viewport tracking data include the coordinates of a viewport scene on pathologists' screens, the magnification level at which that viewport was viewed, and a timestamp. We define a set of variables to quantify pathologists' viewing behaviors such as zooming, panning, and interacting with a consensus reference panel's selected region of interest (ROI). We examine the association of these viewing behaviors with pathologists' demographics, clinical characteristics, and diagnostic accuracy using cross-classified multilevel models. Viewing behaviors differ based on the clinical experience of the pathologists. Pathologists with a higher caseload of melanocytic skin biopsy cases and pathologists with board certification and/or fellowship training in dermatopathology have a lower average zoom and lower variance of zoom levels. Viewing behaviors associated with higher diagnostic accuracy include a higher average and variance of zoom levels, a lower magnification percentage (a measure of consecutive zooming behavior), higher total interpretation time, and more time spent viewing ROIs. Scanning behavior, which refers to panning with a fixed zoom level, has a marginally significant positive association with accuracy. Pathologists' training, clinical experience, and their exposure to a range of cases are associated with their viewing behaviors, which may contribute to their diagnostic accuracy. Research in computational pathology integrating digital imaging and clinical informatics opens up new avenues for leveraging viewing behaviors in medical education and training, potentially improving patient care and the effectiveness of clinical workflow.
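Several of the basic viewing-behavior variables defined in studies like this one (total interpretation time, average zoom, zoom variance) can be computed directly from a viewport log. The record layout below is illustrative, not the study's schema:

```python
from statistics import mean, pvariance

def viewing_summary(viewport_log):
    """Basic viewing-behavior variables from a viewport-tracking log.

    viewport_log: list of (timestamp_s, zoom_level) records, one per
    viewport change, in chronological order.
    """
    times = [t for t, _ in viewport_log]
    zooms = [z for _, z in viewport_log]
    return {
        "total_time_s": times[-1] - times[0],  # total interpretation time
        "mean_zoom": mean(zooms),              # average magnification used
        "zoom_variance": pvariance(zooms),     # spread of magnifications used
    }
```

A fuller implementation would weight each zoom level by how long it was held on screen and add the panning and ROI-overlap variables the study describes.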
Affiliation(s)
- Fatemeh Ghezloo: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Pin-Chieh Wang: Department of Medicine, University of California, Los Angeles, David Geffen School of Medicine, Los Angeles, CA, USA
- Kathleen F. Kerr: Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T. Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew: Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Oliver H. Chang: Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Lisa M. Reisch: Department of Biostatistics, University of Washington, Seattle, WA, USA
- Linda G. Shapiro: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G. Elmore: Department of Medicine, University of California, Los Angeles, David Geffen School of Medicine, Los Angeles, CA, USA
21
Mikhailov IA, Khvostikov AV, Krylov AS. [Methodical approaches to annotation and labeling of histological images in order to automatically detect the layers of the stomach wall and the depth of invasion of gastric cancer]. Arkh Patol 2022; 84:67-73. [PMID: 36469721] [DOI: 10.17116/patol20228406167]
Abstract
OBJECTIVE Development of original methodological approaches to the annotation and labeling of histological images for the problem of automatic segmentation of the layers of the stomach wall. MATERIAL AND METHODS Three image collections were used in the study: NCT-CRC-HE-100K, CRC-VAL-HE-7K, and part of the PATH-DT-MSU collection. The part of the original PATH-DT-MSU collection used here contains 20 histological images obtained with a high-performance digital scanning microscope. Each image is a fragment of the stomach wall, cut from surgical gastric cancer material and stained with hematoxylin and eosin. Images were obtained using a Leica Aperio AT2 scanning microscope (Leica Microsystems Inc., Germany); annotations were made using Aperio ImageScope 12.3.3 (Leica Microsystems Inc., Germany). RESULTS A labeling system is proposed that includes 5 classes (tissue types): areas of gastric adenocarcinoma (TUM), unchanged areas of the lamina propria (LP), unchanged areas of the muscularis mucosae (MM), a class of underlying tissues (AT) comprising areas of the submucosa, the muscular layer proper of the stomach, and subserosal sections, and the image background (BG). The advantage of this labeling technique is highly reliable recognition of the muscularis mucosae (MM), the natural "line" separating the lamina propria of the mucous membrane from all other underlying layers of the stomach wall. The disadvantage of the technique is its small number of classes, which leads to insufficiently detailed automatic segmentation. CONCLUSION In the course of the study, an original technique for labeling and annotating images was developed, comprising 5 classes (tissue types). This technique is effective at the initial stages of training classification and segmentation algorithms for histological images. Further development of a real diagnostic algorithm to automatically determine the depth of invasion of gastric cancer will require refinement of the presented labeling and annotation method.
Affiliation(s)
- A S Krylov
- Lomonosov Moscow State University, Moscow, Russia
22
Carrigan AJ, Charlton A, Wiggins MW, Georgiou A, Palmeri T, Curby KM. Cue utilisation reduces the impact of response bias in histopathology. Appl Ergon 2022; 98:103590. [PMID: 34598079] [DOI: 10.1016/j.apergo.2021.103590]
Abstract
Histopathologists make diagnostic decisions that are thought to be based on pattern recognition, likely informed by cue-based associations formed in memory, a process known as cue utilisation. Typically, the cases presented to the histopathologist have already been classified as 'abnormal' by clinical examination and/or other diagnostic tests. This results in a high disease prevalence, the potential for 'abnormality priming', and a response bias leading to false positives on normal cases. This study investigated whether higher cue utilisation is associated with a reduction in positive response bias in the diagnostic decisions of histopathologists. Data were collected from eighty-two histopathologists who completed a series of demographic and experience-related questions and the histopathology edition of the Expert Intensive Skills Evaluation 2.0 (EXPERTise 2.0) to establish behavioural indicators of context-related cue utilisation. They also completed a separate diagnostic task comprising breast histopathology images in which the frequency of abnormality was manipulated to create a high-disease-prevalence context for diagnostic decisions relating to normal tissue. Participants were assigned to higher or lower cue utilisation groups based on their performance on EXPERTise 2.0. When the effects of experience were controlled, higher cue utilisation was specifically associated with greater accuracy in classifying normal images and a lower positive response bias. This study suggests that cue utilisation may play a protective role against response biases in histopathology settings.
Affiliation(s)
- A J Carrigan
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
- A Charlton
- Department of Histopathology, Auckland City Hospital, and Department of Molecular Medicine and Pathology, University of Auckland, New Zealand
- M W Wiggins
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
- A Georgiou
- Centre for Health Systems and Safety Research, Macquarie University, Sydney, Australia
- T Palmeri
- Department of Psychology, Vanderbilt University, Nashville, United States
- K M Curby
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
23
Evaluation of Influence Factors on the Visual Inspection Performance of Aircraft Engine Blades. Aerospace 2021. [DOI: 10.3390/aerospace9010018]
Abstract
Background—There are various influence factors that affect the visual inspection of aircraft engine blades, including the type of inspection, defect type, severity level, blade perspective, and background colour. The effect of these factors on inspection performance was assessed. Method—The inspection accuracy of fifty industry practitioners was measured for 137 blade images, leading to N = 6850 observations. The data were statistically analysed to identify the significant factors. Subsequent evaluation of the eye tracking data provided additional insights into the inspection process. Results—Inspection accuracy was significantly lower in borescope inspections than in piece-part inspections, at 63.8% and 82.6%, respectively. Airfoil dents (19.0%), cracks (11.0%), and blockage (8.0%) were the most difficult defects to detect, while nicks (100.0%), tears (95.5%), and tip curls (89.0%) had the highest detection rates. Classification accuracy was lowest for airfoil dents (5.3%), burns (38.4%), and tears (44.9%), while coating loss (98.1%), nicks (90.0%), and blockage (87.5%) were most accurately classified. Defects of severity level S1 (72.0%) were more difficult to detect than those of increased severity levels S2 (92.8%) and S3 (99.0%). Moreover, visual perspectives perpendicular to the airfoil led to better inspection rates (up to 87.5%) than edge perspectives (51.0% to 66.5%). Background colour was not a significant factor. The eye tracking results of novices showed an unstructured search path, characterised by numerous fixations, leading to longer inspection times. Experts, in contrast, applied a systematic search strategy with a focus on the edges and showed a better defect discrimination ability. This observation was consistent across all stimuli and thus independent of the influence factors. Conclusions—Eye tracking identified the challenges of the inspection process and the errors made. A revised inspection framework based on the insights gained was proposed, and the findings support the idea of an underlying mental model.
24
Li J, Mi W, Guo Y, Ren X, Fu H, Zhang T, Zou H, Liang Z. Artificial intelligence for histological subtype classification of breast cancer: combining multi-scale feature maps and recurrent attention model. Histopathology 2021; 80:836-846. [DOI: 10.1111/his.14613]
Affiliation(s)
- Junjie Li
- Department of Pathology, State Key Laboratory of Complex Severe and Rare Diseases, Molecular Pathology Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Weiming Mi
- Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing 100084, China
- Yucheng Guo
- Tsimage Medical Technology, Yihai Center, No. 2039 Shenyan Road, Yantian District, Shenzhen 518081, China
- Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Xinyu Ren
- Department of Pathology, State Key Laboratory of Complex Severe and Rare Diseases, Molecular Pathology Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Hao Fu
- Tsimage Medical Technology, Yihai Center, No. 2039 Shenyan Road, Yantian District, Shenzhen 518081, China
- Tao Zhang
- Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing 100084, China
- Hao Zou
- Tsimage Medical Technology, Yihai Center, No. 2039 Shenyan Road, Yantian District, Shenzhen 518081, China
- Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Zhiyong Liang
- Department of Pathology, State Key Laboratory of Complex Severe and Rare Diseases, Molecular Pathology Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
25
Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097] [PMCID: PMC8609289] [DOI: 10.4103/jpi.jpi_36_21]
Abstract
Whole slide imaging enables the use of a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, the progress of machine learning, in particular deep learning (DL), has been slower in nonclinical toxicology studies. Histopathology data from toxicology studies, which regulatory bodies require to assess drug-related toxicity in laboratory animals and its impact on human safety in clinical trials, are critical during the drug development process. Due to the high volume of slides routinely evaluated, low-throughput or narrowly performing DL methods that may work well in small-scale diagnostic studies or for the identification of a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Affiliation(s)
- Shima Mehrvar
- Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi
- Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
26
Feng YZ, Liu S, Cheng ZY, Quiroz JC, Rezazadegan D, Chen PK, Lin QT, Qian L, Liu XF, Berkovsky S, Coiera E, Song L, Qiu XM, Cai XR. Severity Assessment and Progression Prediction of COVID-19 Patients Based on the LesionEncoder Framework and Chest CT. Information 2021; 12:471. [DOI: 10.3390/info12110471]
Abstract
Automatic severity assessment and progression prediction can facilitate admission, triage, and referral of COVID-19 patients. This study aims to explore the potential use of lung lesion features in the management of COVID-19, based on the assumption that lesion features may carry important diagnostic and prognostic information for quantifying infection severity and forecasting disease progression. A novel LesionEncoder framework is proposed to detect lesions in chest CT scans and to encode lesion features for automatic severity assessment and progression prediction. The LesionEncoder framework consists of a U-Net module for detecting lesions and extracting features from individual CT slices, and a recurrent neural network (RNN) module for learning the relationship between feature vectors and collectively classifying the sequence of feature vectors. Chest CT scans of two cohorts of COVID-19 patients from two hospitals in China were used for training and testing the proposed framework. When applied to assessing severity, this framework outperformed baseline methods achieving a sensitivity of 0.818, specificity of 0.952, accuracy of 0.940, and AUC of 0.903. It also outperformed the other tested methods in disease progression prediction with a sensitivity of 0.667, specificity of 0.838, accuracy of 0.829, and AUC of 0.736. The LesionEncoder framework demonstrates a strong potential for clinical application in current COVID-19 management, particularly in automatic severity assessment of COVID-19 patients. This framework also has a potential for other lesion-focused medical image analyses.
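The pipeline this abstract describes, per-slice feature extraction followed by a recurrent model that classifies the slice sequence, can be sketched in miniature. The snippet below is a toy illustration only: `encode_slice` stands in for the paper's U-Net module, the untrained Elman-style recurrence stands in for its RNN, and all weights and dimensions are made-up assumptions, not the LesionEncoder implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_slice(ct_slice, dim=8):
    """Stand-in for the U-Net encoder: map one CT slice to a feature vector.
    (Here just pooled intensity statistics; the real module outputs learned
    lesion features per slice.)"""
    flat = np.asarray(ct_slice, dtype=float).ravel()
    stats = np.array([flat.mean(), flat.std(), flat.max(), flat.min()])
    return np.resize(stats, dim)  # tile/truncate to a fixed feature dimension

# Hypothetical recurrent aggregation over the slice sequence (untrained weights).
W_h = rng.normal(scale=0.1, size=(8, 8))   # hidden-to-hidden weights
W_x = rng.normal(scale=0.1, size=(8, 8))   # input-to-hidden weights
w_out = rng.normal(scale=0.1, size=8)      # readout weights

def classify_scan(slices):
    """Run the feature sequence through the recurrence and emit a severity score."""
    h = np.zeros(8)
    for s in slices:                       # iterate slices in acquisition order
        h = np.tanh(W_h @ h + W_x @ encode_slice(s))
    return 1.0 / (1.0 + np.exp(-w_out @ h))  # sigmoid -> probability in (0, 1)

scan = [rng.normal(size=(16, 16)) for _ in range(5)]  # toy 5-slice "CT volume"
prob = classify_scan(scan)
```

In the paper the two modules are trained jointly on labelled scans; here the point is only the data flow from slice features to a sequence-level decision.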
Affiliation(s)
- You-Zhen Feng
- Medical Imaging Centre, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Sidong Liu
- Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney 2113, Australia
- Zhong-Yuan Cheng
- Medical Imaging Centre, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Juan C. Quiroz
- Centre for Big Data Research in Health, University of New South Wales, Sydney 1466, Australia
- Dana Rezazadegan
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3000, Australia
- Ping-Kang Chen
- Medical Imaging Centre, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Qi-Ting Lin
- Medical Imaging Centre, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Long Qian
- Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Xiao-Fang Liu
- Tianjin Key Laboratory of Intelligent Robotics, Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin 300350, China
- Shlomo Berkovsky
- Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney 2113, Australia
- Enrico Coiera
- Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney 2113, Australia
- Lei Song
- Department of Radiology, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang 441003, China
- Xiao-Ming Qiu
- Department of Radiology, Huangshi Central Hospital, Affiliated Hospital of Hubei Polytechnic University, Edong Healthcare Group, Huangshi 435002, China
- Xiang-Ran Cai
- Medical Imaging Centre, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
27
Shinoda H, Yamamoto T, Imai-Matsumura K. Teachers' visual processing of children's off-task behaviors in class: A comparison between teachers and student teachers. PLoS One 2021; 16:e0259410. [PMID: 34731202] [PMCID: PMC8565755] [DOI: 10.1371/journal.pone.0259410]
Abstract
As teachers are responsible for responding instantaneously to students' statements and actions, the progress of the class, and their teaching purpose, they need to be able to engage in responsive teaching. Teachers obtain information about students' learning by observing them in the classroom, and subsequently make instructional decisions based on this information. Teachers need to be sensitive to student behaviors and respond accordingly, because every classroom contains both students who follow the teacher's instructions and students who do not. Skilled teachers may distribute their gaze over the entire class and discover off-task behaviors. How, then, does a teacher's visual processing and noticing ability develop? It is important to clarify this process for both experienced teachers and student teachers. Therefore, the purpose of this study was to investigate, through gaze analysis, whether teachers and student teachers differ in their visual processing and in their ability to notice off-task behaviors in class. Gaze measurements were collected with an eye-tracking device while 76 teachers and 147 student teachers watched a video. In the video, students exhibiting off-task behaviors in class were prompted by their classroom teacher to participate in the lesson. After the video, the participants were asked whether they could identify the students who had displayed off-task behaviors and whom the teacher had warned. The results showed that teachers gazed at students engaging in off-task behaviors in class more often and noticed them at a higher rate than student teachers did. These results may be attributed to differences in the experience of visually processing relevant information in the classroom between teachers and student teachers. Thus, findings on teachers' visual processing obtained by direct measurement of gaze can contribute to teachers' development.
Affiliation(s)
- Hirofumi Shinoda
- Graduate School of Education, Bukkyo University, Kita-ku, Kyoto, Japan
28
Gong H, Hsieh SS, Holmes D, Cook D, Inoue A, Bartlett D, Baffour F, Takahashi H, Leng S, Yu L, McCollough CH, Fletcher JG. An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation. Med Phys 2021; 48:6710-6723. [PMID: 34534365] [PMCID: PMC8595866] [DOI: 10.1002/mp.15219]
Abstract
PURPOSE Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data on this platform for different eye-tracking data acquisition modes. METHODS An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at a 1000 Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident participated in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chinrest), free movement with general biofeedback, and free movement with strict biofeedback. Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target prior to the integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to the dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation, in units of image pixels and degrees of visual angle. RESULTS The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1, involving the digital crosshairs, the median ± standard deviation of offset values among readers was 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2, using the random dot phantom, the median ± standard deviation of offset values was 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3, viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while general biofeedback demonstrated slightly worse accuracy. The median ± standard deviation of offset values was 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback, corresponding to visual angles ranging from 0.7° to 1.3°. CONCLUSIONS An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to head-stabilized mode.
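The abstract reports accuracy both in image pixels and in degrees of visual angle. The conversion between the two depends on the display pixel pitch and the viewing distance, neither of which is given here; a minimal sketch of the standard small-angle conversion, with an assumed 0.25 mm pixel pitch and 65 cm viewing distance (illustrative values, not taken from the paper), is:

```python
import math

def offset_to_visual_angle(offset_px, px_size_mm, viewing_distance_mm):
    """Convert a fixation offset in screen pixels to degrees of visual angle.
    Pixel pitch and viewing distance are assumed inputs, not paper values."""
    offset_mm = offset_px * px_size_mm
    # Angle subtended at the eye by the offset (full small-angle formula).
    return math.degrees(2 * math.atan(offset_mm / (2 * viewing_distance_mm)))

# e.g. a ~17 px offset on a 0.25 mm-pitch display viewed at 65 cm
angle = offset_to_visual_angle(17, 0.25, 650)
```

With different (plausible) pitch and distance values the same pixel offset maps to different angles, which is why the paper reports both units.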
Affiliation(s)
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Scott S. Hsieh
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Holmes
- Department of Physiology & Biomedical Engineering, Mayo Clinic, Rochester, MN 55901
- David Cook
- Department of Internal Medicine, Mayo Clinic, Rochester, MN 55901
- Akitoshi Inoue
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Bartlett
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Shuai Leng
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, MN 55901
29
Aust J, Mitrovic A, Pons D. Assessment of the Effect of Cleanliness on the Visual Inspection of Aircraft Engine Blades: An Eye Tracking Study. Sensors (Basel) 2021; 21:6135. [PMID: 34577343] [PMCID: PMC8473167] [DOI: 10.3390/s21186135]
Abstract
Background-The visual inspection of aircraft parts such as engine blades is crucial to ensure safe aircraft operation. There is a need to understand the reliability of such inspections and the factors that affect the results. In this study, the factor 'cleanliness' was analysed among other factors. Method-Fifty industry practitioners of three expertise levels inspected 24 images of parts with a variety of defects in clean and dirty conditions, resulting in a total of N = 1200 observations. The data were analysed statistically to evaluate the relationships between cleanliness and inspection performance. Eye tracking was applied to understand the search strategies of the different expertise levels for various part conditions. Results-The results show an inspection accuracy of 86.8% and 66.8% for clean and dirty blades, respectively. The statistical analysis showed that cleanliness and defect type influenced inspection accuracy, while expertise was, surprisingly, not a significant factor. In contrast, inspection time was affected by expertise along with other factors, including cleanliness, defect type, and visual acuity. Eye tracking revealed that inspectors (experts) apply a more structured and systematic search with fewer fixations and revisits compared to the other groups. Conclusions-Cleaning prior to inspection leads to better results. Eye tracking revealed that inspectors used an underlying search strategy characterised by edge detection and differentiation between surface deposits and other types of damage, which contributed to better performance.
Affiliation(s)
- Jonas Aust
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Antonija Mitrovic
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Dirk Pons
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
30
An algorithmic approach to determine expertise development using object-related gaze pattern sequences. Behav Res Methods 2021; 54:493-507. [PMID: 34258709] [PMCID: PMC8863757] [DOI: 10.3758/s13428-021-01652-z]
Abstract
Eye tracking (ET) technology is increasingly utilized to quantify visual behavior in the study of the development of domain-specific expertise. However, the identification and measurement of distinct gaze patterns using traditional ET metrics has been challenging, and the insights gained have proven inconclusive about the nature of expert gaze behavior. In this article, we introduce an algorithmic approach for the extraction of object-related gaze sequences and determine task-related expertise by investigating the development of gaze sequence patterns during a multi-trial study of a simplified airplane assembly task. We demonstrate the algorithm in a study where novice (n = 28) and expert (n = 2) eye movements were recorded in successive trials (n = 8), allowing us to verify whether similar patterns develop with increasing expertise. In the proposed approach, AOI sequences were transformed to string representation and processed using the k-mer method, a well-known method from the field of computational biology. Our results for expertise development suggest that basic tendencies are visible in traditional ET metrics, such as fixation duration, but are much more evident for k-mers of k > 2. With increased on-task experience, the appearance of expert k-mer patterns in novice gaze sequences was shown to increase significantly (p < 0.001). The results illustrate that the multi-trial k-mer approach is suitable for revealing specific cognitive processes and can quantify learning progress using gaze patterns that include both spatial and temporal information, which could provide a valuable tool for novice training and expert assessment.
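The k-mer step described here, counting all length-k substrings of an AOI gaze string and comparing the pattern inventories of different observers, is straightforward to sketch. The example below is a toy reconstruction under assumptions, not the authors' code; the gaze strings (one letter per fixated AOI) and the overlap measure are invented for illustration.

```python
from collections import Counter

def kmer_counts(aoi_sequence, k=3):
    """Count all length-k substrings (k-mers) in an AOI gaze-sequence string."""
    return Counter(aoi_sequence[i:i + k]
                   for i in range(len(aoi_sequence) - k + 1))

# Toy gaze sequences: each letter is one fixated area of interest (AOI).
expert = "ABCABCABD"   # hypothetical expert scan
novice = "ACBBACABD"   # hypothetical novice scan

expert_kmers = kmer_counts(expert, k=3)
novice_kmers = kmer_counts(novice, k=3)

# One possible comparison: fraction of the expert's distinct k-mers
# that also appear in the novice's scan.
shared = set(expert_kmers) & set(novice_kmers)
overlap = len(shared) / len(set(expert_kmers))
```

Tracking such an overlap measure across successive trials is one way to quantify the convergence of novice gaze patterns toward expert ones that the abstract reports.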
31
Lu X, Mehta S, Brunyé TT, Weaver DL, Elmore JG, Shapiro LG. Analysis of Regions of Interest and Distractor Regions in Breast Biopsy Images. IEEE-EMBS International Conference on Biomedical and Health Informatics 2021. [PMID: 36589620] [PMCID: PMC9801511] [DOI: 10.1109/bhi50953.2021.9508513]
Abstract
This paper studies why pathologists can misdiagnose diagnostically challenging breast biopsy cases, using a data set of 240 whole slide images (WSIs). Three experienced pathologists agreed on a consensus reference ground-truth diagnosis for each slide and also a consensus region of interest (ROI) from which the diagnosis could best be made. A study group of 87 other pathologists then diagnosed test sets (60 slides each) and marked their own regions of interest. Diagnoses and ROIs were categorized such that if on a given slide, their ROI differed from the consensus ROI and their diagnosis was incorrect, that ROI was called a distractor. We used the HATNet transformer-based deep learning classifier to evaluate the visual similarities and differences between the true (consensus) ROIs and the distractors. Results showed high accuracy for both the similarity and difference networks, showcasing the challenging nature of feature classification with breast biopsy images. This study is important in the potential use of its results for teaching pathologists how to diagnose breast biopsy slides.
Affiliation(s)
- Ximing Lu
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- Sachin Mehta
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, School of Engineering, Tufts University, Medford
- Joann G. Elmore
- David Geffen School of Medicine, University of California, Los Angeles
- Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
32
Brunyé TT, Drew T, Saikia MJ, Kerr KF, Eguchi MM, Lee AC, May C, Elder DE, Elmore JG. Melanoma in the Blink of an Eye: Pathologists' Rapid Detection, Classification, and Localization of Skin Abnormalities. Visual Cognition 2021; 29:386-400. [PMID: 35197796] [PMCID: PMC8863358] [DOI: 10.1080/13506285.2021.1943093]
Abstract
Expert radiologists can quickly extract a basic "gist" understanding of a medical image following less than a second exposure, leading to above-chance diagnostic classification of images. Most of this work has focused on radiology tasks (such as screening mammography), and it is currently unclear whether this pattern of results and the nature of visual expertise underlying this ability are applicable to pathology, another medical imaging domain demanding visual diagnostic interpretation. To further characterize the detection, localization, and diagnosis of medical images, this study examined eye movements and diagnostic decision-making when pathologists were briefly exposed to digital whole slide images of melanocytic skin biopsies. Twelve resident (N = 5), fellow (N = 5), and attending pathologists (N = 2) with experience interpreting dermatopathology briefly viewed 48 cases presented for 500 ms each, and we tracked their eye movements towards histological abnormalities, their ability to classify images as containing or not containing invasive melanoma, and their ability to localize critical image regions. Results demonstrated rapid shifts of the eyes towards critical abnormalities during image viewing, high diagnostic sensitivity and specificity, and a surprisingly accurate ability to localize critical diagnostic image regions. Furthermore, when pathologists fixated critical regions with their eyes, they were subsequently much more likely to successfully localize that region on an outline of the image. Results are discussed relative to models of medical image interpretation and innovative methods for monitoring and assessing expertise development during medical education and training.
Affiliation(s)
- Tad T. Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew: Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Manob Jyoti Saikia: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Kathleen F. Kerr: Department of Biostatistics, University of Washington, Seattle, WA, USA
- Megan M. Eguchi: Department of Biostatistics, University of Washington, Seattle, WA, USA
- Annie C. Lee: Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
- Caitlin May: Dermatopathology Northwest, Bellevue, WA, USA
- David E. Elder: Division of Anatomic Pathology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Joann G. Elmore: Department of Medicine, David Geffen School of Medicine, University of California Los Angeles, CA, USA
33
Mercan C, Aygunes B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Deep Feature Representations for Variable-Sized Regions of Interest in Breast Histopathology. IEEE J Biomed Health Inform 2021; 25:2041-2049. [PMID: 33166257] [PMCID: PMC8274968] [DOI: 10.1109/jbhi.2020.3036734]
Abstract
OBJECTIVE Modeling variable-sized regions of interest (ROIs) in whole slide images using deep convolutional networks is a challenging task, as these networks typically require fixed-sized inputs that should contain sufficient structural and contextual information for classification. We propose a deep feature extraction framework that builds an ROI-level feature representation via weighted aggregation of the representations of variable numbers of fixed-sized patches sampled from nuclei-dense regions in breast histopathology images. METHODS First, the initial patch-level feature representations are extracted from both fully-connected layer activations and pixel-level convolutional layer activations of a deep network, and the weights are obtained from the class predictions of the same network trained on patch samples. Then, the final patch-level feature representations are computed by concatenation of weighted instances of the extracted feature activations. Finally, the ROI-level representation is obtained by fusion of the patch-level representations by average pooling. RESULTS Experiments using a well-characterized data set of 240 slides containing 437 ROIs marked by experienced pathologists with variable sizes and shapes result in an accuracy score of 72.65% in classifying ROIs into four diagnostic categories that cover the whole histologic spectrum. CONCLUSION The results show that the proposed feature representations are superior to existing approaches and provide accuracies that are higher than the average accuracy of another set of pathologists. SIGNIFICANCE The proposed generic representation that can be extracted from any type of deep convolutional architecture combines the patch appearance information captured by the network activations and the diagnostic relevance predicted by the class-specific scoring of patches for effective modeling of variable-sized ROIs.
34
The Role of Symmetry in the Aesthetics of Residential Building Façades Using Cognitive Science Methods. Symmetry (Basel) 2020. [DOI: 10.3390/sym12091438]
Abstract
Symmetry is an important visual feature for humans and its application in architecture is completely evident. This paper aims to investigate the role of symmetry in the aesthetics judgment of residential building façades and study the pattern of eye movement based on the expertise of subjects in architecture. In order to implement this in the present paper, we have created images in two categories: symmetrical and asymmetrical façade images. The experiment design allows us to investigate the preference of subjects and their reaction time to decide about presented images as well as record their eye movements. It was inferred that the aesthetic experience of a building façade is influenced by the expertise of the subjects. There is a significant difference between experts and non-experts in all conditions, and symmetrical façades are in line with the taste of non-expert subjects. Moreover, the patterns of fixational eye movements indicate that the horizontal or vertical symmetry (mirror symmetry) has a profound influence on the observer’s attention, but there is a difference in the points watched and their fixation duration. Thus, although symmetry may attract the same attention during eye movements on façade images, it does not necessarily lead to the same preference between the expert and non-expert groups.
35
Digit eyes: Learning-related changes in information access in a computer game parallel those of oculomotor attention in laboratory studies. Atten Percept Psychophys 2020; 82:2434-2447. [PMID: 32333371] [DOI: 10.3758/s13414-020-02019-w]
Abstract
Active sensing theory is founded upon the dynamic relationship between information sampling and an observer's evolving goals. Oculomotor activity is a well-studied method of sampling; a mouse or a keyboard can also be used to access information beyond the current screen. We examine information access patterns of StarCraft 2 players at multiple skill levels. The first measures are analogous to existing eye-movement studies: fixation frequency, fixation targets, and fixation duration all change as a function of skill, and are commensurate with known properties of eye movements in learning. Actions that require visual attention at moderate skill levels are eventually performed with little visual attention at all. This (a) confirms the generalizability of laboratory studies of attention and learning using eye movements to digital interface use, and (b) suggests that a wide variety of information access behaviors may be considered as a unified set of phenomena.
36
Hayashi K, Aono S, Fujiwara M, Shiro Y, Ushida T. Difference in eye movements during gait analysis between professionals and trainees. PLoS One 2020; 15:e0232246. [PMID: 32353030] [PMCID: PMC7192381] [DOI: 10.1371/journal.pone.0232246]
Abstract
INTRODUCTION Observational gait analysis is a widely used skill in physical therapy, yet it has not previously been investigated using objective assessments. The present study investigated differences in eye movements between professionals and trainees during observational gait analysis. METHODS The participants were 26 professional physical therapists and 26 physical therapist trainees. Wearing eye-tracking systems, participants were asked to describe as many gait abnormalities of a patient as possible. The eye movement parameters of interest were fixation count, average fixation duration, and total fixation duration. RESULTS The number of gait abnormalities described was significantly higher in professionals than in trainees, both overall and for the limbs of the patient. The fixation count was significantly higher in professionals than in trainees, while the average fixation duration and total fixation duration were significantly shorter. Conversely, for the trunk, neither the number of gait abnormalities described nor the eye movement measures differed significantly between groups. CONCLUSIONS Professionals require shorter fixation durations on areas of interest than trainees while describing a greater number of gait abnormalities.
Affiliation(s)
- Kazuhiro Hayashi: Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan; Department of Rehabilitation, Aichi Medical University Hospital, Nagakute, Japan
- Shuichi Aono: Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan; Department of Pain Data Management, Aichi Medical University, Nagakute, Japan
- Mitsuhiro Fujiwara: Department of Rehabilitation, Kamiiida Rehabilitation Hospital, Nagoya, Japan
- Yukiko Shiro: Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan; Department of Physical Therapy, Faculty of Rehabilitation Sciences, Nagoya Gakuin University, Nagoya, Japan
- Takahiro Ushida: Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Japan
37
Chatelain P, Sharma H, Drukker L, Papageorghiou AT, Noble JA. Evaluation of Gaze Tracking Calibration for Longitudinal Biomedical Imaging Studies. IEEE Transactions on Cybernetics 2020; 50:153-163. [PMID: 30188843] [DOI: 10.1109/tcyb.2018.2866274]
Abstract
Gaze tracking is a promising technology for studying the visual perception of clinicians during image-based medical exams. It could be used in longitudinal studies to analyze their perceptive process, explore human-machine interactions, and develop innovative computer-aided imaging systems. However, using a remote eye tracker in an unconstrained environment and over time periods of weeks requires a certain guarantee of performance to ensure that collected gaze data are fit for purpose. We report the results of evaluating eye tracking calibration for longitudinal studies. First, we tested the performance of an eye tracker on a cohort of 13 users over a period of one month. For each participant, the eye tracker was calibrated during the first session. The participants were asked to sit in front of a monitor equipped with the eye tracker, but their position was not constrained. Second, we tested the performance of the eye tracker on sonographers positioned in front of a cart-based ultrasound scanner. Experimental results show a decrease of accuracy between calibration and later testing of 0.30° and a further degradation over time at a rate of 0.13° per month. The overall median accuracy was 1.00° (50.9 pixels) and the overall median precision was 0.16° (8.3 pixels). The results from the ultrasonography setting show a decrease of accuracy of 0.16° between calibration and later testing. This slow degradation of gaze tracking accuracy could impact the data quality in long-term studies. Therefore, the results we present here can help in planning such long-term gaze tracking studies.
38
Ashworth J, Thompson J, Mercer C. Learning to look: Evaluating the student experience of an interactive image appraisal activity. Radiography (Lond) 2019; 25:314-319. [DOI: 10.1016/j.radi.2019.02.011]
39
Koury HF, Leonard CJ, Carry PM, Lee LMJ. An Expert Derived Feedforward Histology Module Improves Pattern Recognition Efficiency in Novice Students. Anatomical Sciences Education 2019; 12:645-654. [PMID: 30586223] [DOI: 10.1002/ase.1854]
Abstract
Histology is a visually oriented, foundational anatomical sciences subject in professional health curricula that has seen a dramatic reduction in educational contact hours and an increase in content migration to a digital platform. While the digital migration of histology laboratories has transformed histology education, few studies have shown the impact of this change on visual literacy development, a critical competency in histology. The objective of this study was to assess whether providing a video clip of an expert's gaze while completing leukocyte identification tasks would increase the efficiency and performance of novices completing similar identification tasks. In a randomized study, one group of novices (n = 9) was provided with training materials that included expert eye gaze, while the other group (n = 12) was provided training materials with identical content, but without the expert eye gaze. Eye movement parameters including fixation rate and total scan path distance, and performance measures including time-to-task-completion and accuracy, were collected during an identification task assessment. Compared to the control group, the average fixation duration was 13.2% higher (P < 0.02) and scan path distance was 35.0% shorter in the experimental group (P = 0.14). Analysis of task performance measures revealed no significant difference between the groups. These preliminary results suggest a more efficient search performed by the experimental group, indicating the potential efficacy of training using an expert's gaze to enhance visual literacy development. With further investigation, such feedforward enhanced training methods could be utilized for histology and other visually oriented subjects.
Affiliation(s)
- Hannah F Koury: Master of Science in Modern Human Anatomy Program, University of Colorado, Graduate School, Aurora, Colorado
- Carly J Leonard: Department of Psychology, University of Colorado Denver, Denver, Colorado
- Patrick M Carry: Musculoskeletal Research Center, Department of Orthopedics, Colorado Children's Hospital, Aurora, Colorado
- Lisa M J Lee: Master of Science in Modern Human Anatomy Program, University of Colorado, Graduate School, Aurora, Colorado; Department of Cell and Developmental Biology, University of Colorado School of Medicine, Aurora, Colorado
|
40
|
Mukherjee M, Donnelly A, Rose B, Warren DE, Lyden E, Chantziantoniou N, Dimmitt B, Varley K, Pantanowitz L. Eye tracking in cytotechnology education: "visualizing" students becoming experts. J Am Soc Cytopathol 2019; 9:76-83. [PMID: 31401035] [DOI: 10.1016/j.jasc.2019.07.002]
Abstract
INTRODUCTION This study reports the potential of eye-tracking technology in determining screening skills of cytotechnology (CT) students while examining digital images (DI). MATERIALS AND METHODS Twenty-five static DI of gynecologic cytology specimens were serially displayed on a computer monitor for evaluation by 16 CT students and 3 cytotechnologists at 3 locations. During evaluation, participant's eye movements were monitored with a Mirametrix S2 eye tracker (iMotions, Boston, MA) and EyeWorks software (Eyetracking, Solana Beach, CA). Students completed the protocol at: Period1 (P1)-4 months, Period2 (P2)-7 months, Period3 (P3)-11 months during their 1-year training; and the cytotechnologists only once. A general linear mixed model was used to analyze the results. RESULTS The proportion of agreement on interpretations for cytotechnologists, students during P1, and students during P3 were 0.83, 0.62, and 0.70 respectively. The mean task duration in seconds for cytotechnologists, students during P1, and students during P3 were 21.1, 34.6, and 24.9 respectively. The mean number of fixation points for cytotechnologists, students during P1, and students during P3 were 14.5, 52.2, and 35.3, respectively. The mean number of gaze observations of cytotechnologists, students during P1, and students during P3 on region of interest (ROI) 1 were 77.93, 181.12, and 123.83, respectively; and, ROI 2 were 38.90, 142.79, and 92.46, respectively. CONCLUSIONS This study demonstrated that students had decreased time, number of fixation points, gaze observations on ROI, and increased agreement with the reference interpretations at the end of the training program, indicating that their screening skills were progressing towards the level of practicing cytotechnologists.
Affiliation(s)
- Maheswari Mukherjee: Cytotechnology Education, College of Allied Health Professions, University of Nebraska Medical Center, Omaha, Nebraska
- Amber Donnelly: Cytotechnology Education, College of Allied Health Professions, University of Nebraska Medical Center, Omaha, Nebraska
- Blake Rose: Department of Pathology and Microbiology, Nebraska Medicine, Omaha, Nebraska
- David E Warren: Department of Neurological Sciences, University of Nebraska Medical Center, Omaha, Nebraska
- Elizabeth Lyden: Department of Biostatistics, College of Public Health, Omaha, Nebraska
- Brian Dimmitt: Department of Anatomic Pathology, Carle Foundation Hospital, Urbana, Illinois
- Karyn Varley: Department of Pathology, University of Pittsburgh Medical Center Magee-Womens Hospital, Pittsburgh, Pennsylvania
- Liron Pantanowitz: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
|
41
|
Mercan E, Shapiro LG, Brunyé TT, Weaver DL, Elmore JG. Characterizing Diagnostic Search Patterns in Digital Breast Pathology: Scanners and Drillers. J Digit Imaging 2019; 31:32-41. [PMID: 28681097] [DOI: 10.1007/s10278-017-9990-5]
Abstract
Following a baseline demographic survey, 87 pathologists interpreted 240 digital whole slide images of breast biopsy specimens representing a range of diagnostic categories from benign to atypia, ductal carcinoma in situ, and invasive cancer. A web-based viewer recorded pathologists' behaviors while interpreting a subset of 60 randomly selected and randomly ordered slides. To characterize diagnostic search patterns, we used the viewport location, time stamp, and zoom level data to calculate four variables: average zoom level, maximum zoom level, zoom level variance, and scanning percentage. Two distinct search strategies were confirmed: scanning is characterized by panning at a constant zoom level, while drilling involves zooming in and out at various locations. Statistical analysis was applied to examine the associations of different visual interpretive strategies with pathologist characteristics, diagnostic accuracy, and efficiency. We found that females scanned more than males, and age was positively correlated with scanning percentage, while the facility size was negatively correlated. Throughout 60 cases, the scanning percentage and total interpretation time per slide decreased, and these two variables were positively correlated. The scanning percentage was not predictive of diagnostic accuracy. Increasing average zoom level, maximum zoom level, and zoom variance were correlated with over-interpretation.
Affiliation(s)
- Ezgi Mercan: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Linda G Shapiro: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Tad T Brunyé: Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver: Department of Pathology and UVM Cancer Center, University of Vermont, Burlington, VT, USA
- Joann G Elmore: Department of Medicine, University of Washington School of Medicine, Seattle, WA, USA
|
42
|
Brunyé TT, Nallamothu BK, Elmore JG. Eye-tracking for assessing medical image interpretation: A pilot feasibility study comparing novice vs expert cardiologists. Perspectives on Medical Education 2019; 8:65-73. [PMID: 30977060] [PMCID: PMC6468026] [DOI: 10.1007/s40037-019-0505-6]
Abstract
INTRODUCTION As specialized medical professionals such as radiologists, pathologists, and cardiologists gain education and experience, their diagnostic efficiency and accuracy change, and they show altered eye movement patterns during medical image interpretation. Existing research in this area is limited to interpretation of static medical images, such as digitized whole slide biopsies, making it difficult to understand how expertise development might manifest during dynamic image interpretation, such as with angiograms or volumetric scans. METHODS A two-group (novice, expert) comparative pilot study examined the feasibility and utility of tracking and interpreting eye movement patterns while cardiologists viewed video-based coronary angiograms. A non-invasive eye tracking system recorded cardiologists' (n = 8) visual behaviour while they viewed and diagnosed a series of eight angiogram videos. Analyses assessed frame-by-frame video navigation behaviour, eye fixation behaviour, and resulting diagnostic decision making. RESULTS Relative to novices, expert cardiologists demonstrated shorter and less variable video review times, fewer eye fixations and saccadic eye movements, and less time spent paused on individual video frames. Novices showed repeated eye fixations on critical image frames and regions, though these were not predictive of accurate diagnostic decisions. DISCUSSION These preliminary results demonstrate interpretive decision errors among novices, suggesting they identify and process critical diagnostic features, but sometimes fail to accurately interpret those features. Results also showcase the feasibility of tracking and understanding eye movements during video-based coronary angiogram interpretation and suggest that eye tracking may be valuable for informing assessments of competency progression during medical education and training.
Affiliation(s)
- Tad T. Brunyé: Center for Applied Brain & Cognitive Sciences, Tufts University, Medford, MA, USA
- Joann G. Elmore: Department of Medicine, University of Washington, Seattle, WA, USA
|
43
|
Brunyé TT, Drew T, Weaver DL, Elmore JG. A review of eye tracking for understanding and improving diagnostic interpretation. Cognitive Research: Principles and Implications 2019; 4:7. [PMID: 30796618] [PMCID: PMC6515770] [DOI: 10.1186/s41235-019-0159-2]
Abstract
Inspecting digital imaging for primary diagnosis introduces perceptual and cognitive demands for physicians tasked with interpreting visual medical information and arriving at appropriate diagnoses and treatment decisions. The process of medical interpretation and diagnosis involves a complex interplay between visual perception and multiple cognitive processes, including memory retrieval, problem-solving, and decision-making. Eye-tracking technologies are becoming increasingly available in the consumer and research markets and provide novel opportunities to learn more about the interpretive process, including differences between novices and experts, how heuristics and biases shape visual perception and decision-making, and the mechanisms underlying misinterpretation and misdiagnosis. The present review provides an overview of eye-tracking technology, the perceptual and cognitive processes involved in medical interpretation, how eye tracking has been employed to understand medical interpretation and promote medical education and training, and some of the promises and challenges for future applications of this technology.
Affiliation(s)
- Tad T Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, 200 Boston Ave., Suite 3000, Medford, MA, 02155, USA
- Trafton Drew: Department of Psychology, University of Utah, 380 1530 E, Salt Lake City, UT, 84112, USA
- Donald L Weaver: Department of Pathology and University of Vermont Cancer Center, University of Vermont, 111 Colchester Ave., Burlington, VT, 05401, USA
- Joann G Elmore: Department of Medicine, David Geffen School of Medicine at UCLA, University of California at Los Angeles, 10833 Le Conte Ave., Los Angeles, CA, 90095, USA
|
44
|
Hanhan J, King R, Harrison TK, Kou A, Howard SK, Borg LK, Shum C, Udani AD, Mariano ER. A Pilot Project Using Eye-Tracking Technology to Design a Standardised Anaesthesia Workspace. Turk J Anaesthesiol Reanim 2018; 46:411-415. [PMID: 30505602] [DOI: 10.5152/tjar.2018.67934]
Abstract
Objective Maximising safe handoff procedures ensures patient safety. Anaesthesiology practices have primarily focused on developing better communication tools. However, these tools tend to ignore the physical layout of the anaesthesia workspace itself. Standardising the anaesthesia workspace has the potential to improve patient safety. The design process should incorporate end user feedback and objective data. Methods This pilot project aims to design a standardised anaesthesia workspace using eye-tracking technology at a single university-affiliated Veterans Affairs hospital. Twelve practising anaesthesiologists observed a series of images representing five clinical scenarios. Each of these had a question prompting them to look for certain items commonly found in the anaesthesia workspace. Using eye-tracking technology, the gaze data of participants were recorded. These data were used to generate heat maps of the specific areas of interest in the workspace that received the most fixation counts. Results The laryngoscope and propofol had the highest percentages of gaze fixations on the left-hand side of the workstation, in closest proximity to the anaesthesiologist. Atropine, although the highest percentage of gaze fixations (33%) placed it on the right-hand side of the workstation, also had 25% of gaze fixations centred over the anaesthesia cart. Conclusion Gaze fixation analyses showed that anaesthesiologists identified locations for the laryngoscope and propofol within easy reach and emergency medications further away. Because eye tracking can provide objective data to influence the design process, it may be useful when developing standardised anaesthesia workspace templates for individual practices.
Affiliation(s)
- Jaber Hanhan: University of Utah School of Medicine, Salt Lake City, Utah, USA
- Roderick King: Stanford University School of Medicine, Stanford, CA, USA
- T Kyle Harrison: Department of Anaesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Alex Kou: Department of Anaesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Steven K Howard: Department of Anaesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Lindsay K Borg: Department of Anaesthesiology, Kaiser Permanente Northwest, Portland, OR, USA
- Cynthia Shum: Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
- Ankeet D Udani: Department of Anaesthesiology, Duke University School of Medicine, Durham, NC, USA
- Edward R Mariano: Department of Anaesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA; Anesthesiology and Perioperative Care Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, USA
|
45
|
Gecer B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. Pattern Recognition 2018; 84:345-356. [PMID: 30679879] [PMCID: PMC6342566] [DOI: 10.1016/j.patcog.2018.07.022]
Abstract
Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.
Affiliation(s)
- Baris Gecer: Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
- Selim Aksoy: Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
- Ezgi Mercan: Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Linda G. Shapiro: Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Donald L. Weaver: Department of Pathology, University of Vermont, Burlington, VT 05405, USA
- Joann G. Elmore: Department of Medicine, University of Washington, Seattle, WA 98195, USA
|
46
|
Kyroudi A, Petersson K, Ozsahin M, Bourhis J, Bochud F, Moeckli R. Analysis of the treatment plan evaluation process in radiotherapy through eye tracking. Z Med Phys 2018; 28:318-324. [DOI: 10.1016/j.zemedi.2017.11.002]
47
Mercan E, Aksoy S, Shapiro LG, Weaver DL, Brunyé TT, Elmore JG. Localization of Diagnostically Relevant Regions of Interest in Whole Slide Images: a Comparative Study. J Digit Imaging 2018; 29:496-506. [PMID: 26961982] [DOI: 10.1007/s10278-016-9873-1]
Abstract
Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors.
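A hypothetical, numpy-only sketch of the pipeline this abstract outlines: build a small visual dictionary, describe candidate regions by bag-of-words histograms, and fit a logistic regression to score diagnostic relevance. The feature dimensions, dictionary size, region grouping, and random stand-in labels are all assumptions; the paper's color/texture features and evaluation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 16-D color/texture features for 500 patches, with a
# random binary "diagnostically relevant" label per patch.
feats = rng.normal(size=(500, 16))
patch_rel = rng.integers(0, 2, size=500)

# 1. Visual dictionary: cluster centers over patch features (random
#    centers with nearest-center assignment stand in for full k-means).
k = 32
centers = feats[rng.choice(500, k, replace=False)]
words = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)

# 2. Bag-of-words: describe each candidate region (10 patches here) by its
#    normalized visual-word histogram; region label = majority patch label.
hists = np.stack([np.bincount(w, minlength=k) for w in words.reshape(50, 10)]) / 10
y = (patch_rel.reshape(50, 10).mean(1) > 0.5).astype(float)

# 3. Logistic regression by plain gradient descent on the cross-entropy loss.
w_lr, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(hists @ w_lr + b)))
    grad = p - y
    w_lr -= 0.5 * hists.T @ grad / 50
    b -= 0.5 * grad.mean()

relevance = 1 / (1 + np.exp(-(hists @ w_lr + b)))   # probability per region
print(relevance.shape)
```

In the study itself the classifier is trained on pathologists' viewport logs and evaluated by overlap with their actual viewing regions; the random labels above exist only to make the sketch runnable.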
Affiliation(s)
- Ezgi Mercan: Department of Computer Science & Engineering, Paul G. Allen Center for Computing, University of Washington, 185 Stevens Way, Seattle, WA, 98195, USA
- Selim Aksoy: Department of Computer Engineering, Bilkent University, Bilkent, 06800, Ankara, Turkey
- Linda G Shapiro: Department of Computer Science & Engineering, Paul G. Allen Center for Computing, University of Washington, 185 Stevens Way, Seattle, WA, 98195, USA
- Donald L Weaver: Department of Pathology, University of Vermont, Burlington, VT, 05405, USA
- Tad T Brunyé: Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Joann G Elmore: Department of Medicine, University of Washington, Seattle, WA, 98195, USA
48
Komura D, Ishikawa S. Machine Learning Methods for Histopathological Image Analysis. Comput Struct Biotechnol J 2018; 16:34-42. [PMID: 30275936 PMCID: PMC6158771 DOI: 10.1016/j.csbj.2018.01.001] [Citation(s) in RCA: 382] [Impact Index Per Article: 54.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Revised: 12/03/2017] [Accepted: 01/14/2018] [Indexed: 12/12/2022] Open
Abstract
Abundant accumulation of digital histopathological images has led to the increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.
Affiliation(s)
- Daisuke Komura: Department of Genomic Pathology, Medical Research Institute, Tokyo Medical and Dental University, Tokyo, Japan
49
Borg LK, Harrison TK, Kou A, Mariano ER, Udani AD, Kim TE, Shum C, Howard SK. Preliminary Experience Using Eye-Tracking Technology to Differentiate Novice and Expert Image Interpretation for Ultrasound-Guided Regional Anesthesia. J Ultrasound Med 2018; 37:329-336. [PMID: 28777464 DOI: 10.1002/jum.14334] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Accepted: 04/24/2017] [Indexed: 06/07/2023]
Abstract
OBJECTIVES Objective measures are needed to guide the novice's pathway to expertise. Within and outside medicine, eye tracking has been used for both training and assessment. We designed this study to test the hypothesis that eye tracking may differentiate novices from experts in static image interpretation for ultrasound (US)-guided regional anesthesia. METHODS We recruited novice anesthesiology residents and regional anesthesiology experts. Participants wore eye-tracking glasses, were shown 5 sonograms of US-guided regional anesthesia, and were asked a series of anatomy-based questions related to each image while their eye movements were recorded. The answer to each question was a location on the sonogram, defined as the area of interest (AOI). The primary outcome was the total gaze time in the AOI (seconds). Secondary outcomes were the total gaze time outside the AOI (seconds), total time to answer (seconds), and time to first fixation on the AOI (seconds). RESULTS Five novices and 5 experts completed the study. Although the gaze time (mean ± SD) in the AOI was not different between groups (7 ± 4 seconds for novices and 7 ± 3 seconds for experts; P = .150), the gaze time outside the AOI was greater for novices (75 ± 18 versus 44 ± 4 seconds for experts; P = .005). The total time to answer and total time to first fixation in the AOI were both shorter for experts. CONCLUSIONS Experts in US-guided regional anesthesia take less time to identify sonoanatomy and spend less unfocused time away from a target compared to novices. Eye tracking is a potentially useful tool to differentiate novices from experts in the domain of US image interpretation.
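The gaze metrics this study reports (gaze time inside vs. outside the AOI, and time to first fixation on the AOI) are straightforward to compute from fixation records. The sketch below uses hypothetical fixation tuples and a rectangular AOI; real eye-tracking exports and AOI shapes will differ.

```python
# Hypothetical fixation records: (x, y, duration_seconds) tuples from an
# eye tracker, plus a rectangular area of interest (AOI) on the sonogram.
fixations = [(120, 80, 0.30), (410, 220, 0.25), (430, 240, 0.40), (90, 300, 0.20)]
aoi = (400, 200, 500, 300)   # (x_min, y_min, x_max, y_max)

def in_aoi(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

# Total gaze time inside and outside the AOI.
gaze_in = sum(d for x, y, d in fixations if in_aoi(x, y, aoi))
gaze_out = sum(d for x, y, d in fixations if not in_aoi(x, y, aoi))

# Time to first fixation on the AOI: cumulative duration before the first hit.
t, first_hit = 0.0, None
for x, y, d in fixations:
    if in_aoi(x, y, aoi):
        first_hit = t
        break
    t += d

print(round(gaze_in, 2), round(gaze_out, 2), round(first_hit, 2))  # → 0.65 0.5 0.3
```

Using cumulative fixation duration as the latency clock is a simplification; tracker timestamps, saccade time, and blinks would shift these numbers in practice.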
Affiliation(s)
- Lindsay K Borg: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA
- T Kyle Harrison: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
- Alex Kou: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
- Edward R Mariano: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
- Ankeet D Udani: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Department of Anesthesiology, Duke University School of Medicine, Durham, North Carolina, USA
- T Edward Kim: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
- Cynthia Shum: Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
- Steven K Howard: Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Stanford, California, USA; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California, USA
50
Brunyé TT, Mahoney CR. Exercise-Induced Physiological Arousal Biases Attention Toward Threatening Scene Details. Psychol Rep 2018; 122:79-95. [PMID: 29300141 DOI: 10.1177/0033294117750629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The present experiment examined whether physiological arousal induced by acute bouts of aerobic exercise would influence attention and memory for scenes depicting or not depicting weapons. In a repeated-measures design, participants exercised at either low or high exertion levels. During exercise, they were presented with images, some of which depicted weapons; immediately following exercise, they completed a recognition test for portions of central and peripheral scene regions. Two primary results emerged. First, in the low exertion condition, we replicated extant research showing inferior peripheral scene memory when images contained, versus did not contain, weapons. Second, the high exertion condition increased central scene memory relative to low exertion, and this effect was specific to images containing weapons. Thus, we provide evidence for accentuated weapon focus effects during states of exercise-induced physiological arousal. These results contribute new applied and theoretical understandings regarding the interactions between physiological state, breadth of attention, and memory.
Affiliation(s)
- Tad T Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA; Department of Psychology, Tufts University, Medford, MA, USA; Cognitive Science Team, U.S. Army NSRDEC, Natick, MA, USA
- Caroline R Mahoney: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA; Cognitive Science Team, U.S. Army NSRDEC, Natick, MA, USA