1
Lopes A, Ward AD, Cecchini M. Eye tracking in digital pathology: A comprehensive literature review. J Pathol Inform 2024;15:100383. PMID: 38868488; PMCID: PMC11168484; DOI: 10.1016/j.jpi.2024.100383.
Abstract
Eye tracking has been used for decades in an attempt to understand the cognitive processes of individuals. From memory access to problem-solving to decision-making, such insight has the potential to improve workflows and the education of students to become experts in relevant fields. Until recently, the traditional use of microscopes in pathology made eye tracking exceptionally difficult. However, the digital revolution of pathology from conventional microscopes to digital whole slide images allows new research to be conducted and information to be learned with regard to pathologists' visual search patterns and learning experiences. This promises to make pathology education more efficient and engaging, ultimately creating stronger and more proficient generations of pathologists to come. The goal of this review on eye tracking in pathology is to characterize and compare the visual search patterns of pathologists. The PubMed and Web of Science databases were searched using 'pathology' AND 'eye tracking' synonyms. A total of 22 relevant full-text articles published up to and including 2023 were identified and included in this review. Thematic analysis was conducted to organize each study into one or more of the 10 themes identified to characterize the visual search patterns of pathologists: (1) effect of experience, (2) fixations, (3) zooming, (4) panning, (5) saccades, (6) pupil diameter, (7) interpretation time, (8) strategies, (9) machine learning, and (10) education. Expert pathologists were found to have higher diagnostic accuracy, fewer fixations, and shorter interpretation times than pathologists with less experience. Further, the literature on eye tracking in pathology indicates that there are several visual strategies for diagnostic interpretation of digital pathology images, but no evidence of a superior strategy exists.
The educational implications of eye tracking in pathology have also been explored, but the effect of teaching novices how to search as an expert remains unclear. In this article, the main challenges and prospects of eye tracking in pathology are briefly discussed along with their implications for the field.
Affiliation(s)
- Alana Lopes
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Aaron D. Ward
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Gerald C. Baines Centre, London Health Sciences Centre, London, ON N6A 5W9, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Matthew Cecchini
- Department of Pathology and Laboratory Medicine, Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada
2
Ghezloo F, Chang OH, Knezevich SR, Shaw KC, Thigpen KG, Reisch LM, Shapiro LG, Elmore JG. Robust ROI Detection in Whole Slide Images Guided by Pathologists' Viewing Patterns. J Imaging Inform Med 2024. PMID: 39122892; DOI: 10.1007/s10278-024-01202-x.
Abstract
Deep learning techniques offer improvements in computer-aided diagnosis systems. However, acquiring image domain annotations is challenging due to the knowledge and commitment required of expert pathologists. Pathologists often identify regions in whole slide images with diagnostic relevance rather than examining the entire slide, with a positive correlation between the time spent on these critical image regions and diagnostic accuracy. In this paper, a heatmap is generated to represent pathologists' viewing patterns during diagnosis and used to guide a deep learning architecture during training. The proposed system outperforms traditional approaches based on color and texture image characteristics, integrating pathologists' domain expertise to enhance region of interest detection without needing individual case annotations. Evaluating our best model, a U-Net model with a pre-trained ResNet-18 encoder, on a skin biopsy whole slide image dataset for melanoma diagnosis, shows its potential in detecting regions of interest, surpassing conventional methods with an increase of 20%, 11%, 22%, and 12% in precision, recall, F1-score, and Intersection over Union, respectively. In a clinical evaluation, three dermatopathologists agreed on the model's effectiveness in replicating pathologists' diagnostic viewing behavior and accurately identifying critical regions. Finally, our study demonstrates that incorporating heatmaps as supplementary signals can enhance the performance of computer-aided diagnosis systems. Without the availability of eye tracking data, identifying precise focus areas is challenging, but our approach shows promise in assisting pathologists in improving diagnostic accuracy and efficiency, streamlining annotation processes, and aiding the training of new pathologists.
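The core idea of guiding training with a viewing-pattern heatmap can be illustrated with a minimal sketch: weight a per-pixel loss so that heavily viewed regions dominate. The function name, the additive weighting scheme, and the toy arrays below are illustrative assumptions, not the authors' implementation.

```python
from math import log

def heatmap_weighted_bce(pred, target, heatmap, eps=1e-7):
    """Per-pixel binary cross-entropy weighted by a pathologist viewing
    heatmap, so heavily viewed regions dominate the loss. Inputs are flat
    lists of equal length; heatmap values lie in [0, 1]."""
    num, den = 0.0, 0.0
    for p, t, h in zip(pred, target, heatmap):
        p = min(max(p, eps), 1 - eps)      # clamp for log stability
        bce = -(t * log(p) + (1 - t) * log(1 - p))
        w = 1.0 + h                        # baseline weight 1, boosted where viewed
        num += w * bce
        den += w
    return num / den

# Toy example: the model is wrong exactly where pathologists looked,
# so the heatmap-weighted loss penalizes it more than a uniform loss.
pred   = [0.9, 0.1, 0.1, 0.1]
target = [0.0, 0.0, 0.0, 0.0]
hot    = [1.0, 0.0, 0.0, 0.0]   # viewing heat concentrated on the wrong pixel
cold   = [0.0, 0.0, 0.0, 0.0]
```

Under this weighting, errors inside viewed regions raise the loss more than the same errors elsewhere, which is the sense in which the heatmap acts as a supplementary supervisory signal.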
Affiliation(s)
- Fatemeh Ghezloo
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Oliver H Chang
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Lisa M Reisch
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Linda G Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
3
Arsiwala-Scheppach LT, Castner NJ, Rohrer C, Mertens S, Kasneci E, Cejudo Grano de Oro JE, Schwendicke F. Impact of artificial intelligence on dentists' gaze during caries detection: A randomized controlled trial. J Dent 2024;140:104793. PMID: 38016620; DOI: 10.1016/j.jdent.2023.104793.
Abstract
OBJECTIVES We aimed to understand how artificial intelligence (AI) influences dentists by comparing their gaze behavior when using versus not using AI software to detect primary proximal carious lesions on bitewing radiographs. METHODS 22 dentists assessed a median of 18 bitewing images, resulting in 170 datasets from dentists without AI and 179 datasets from dentists with AI, after excluding data with poor gaze recording quality. We compared time to first fixation, fixation count, average fixation duration, and fixation frequency between both trial groups. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer to inner third of dentin). We also compared the transitional pattern of the dentists' gaze between the trial groups. RESULTS Median time to first fixation was shorter in all groups of teeth for dentists with AI versus without AI, although p>0.05. Dentists with AI had more fixations (median=68, IQR=31, 116) on teeth with restorations compared to dentists without AI (median=47, IQR=19, 100), p=0.01. In turn, average fixation duration was longer on teeth with caries for the dentists with AI than those without AI, although p>0.05. The visual search strategy employed by dentists with AI was less systematic, with a lower proportion of lateral tooth-wise transitions compared to dentists without AI. CONCLUSIONS Dentists with AI exhibited more efficient viewing behavior compared to dentists without AI, e.g., less time taken to notice caries and/or restorations, more fixations on teeth with restorations, and shorter fixation durations on teeth without carious lesions and/or restorations. CLINICAL SIGNIFICANCE Analysis of dentists' gaze patterns while using AI-generated annotations of carious lesions demonstrates how AI influences the way they extract information from dental images.
Such insights can be exploited to improve, and even customize, AI-based diagnostic tools, thus reducing dentists' extraneous attentional processing and allowing for more thorough examination of other image areas.
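The four gaze measures compared in this trial (time to first fixation, fixation count, average fixation duration, fixation frequency) can all be derived from a list of fixation events. The event format below is an assumed simplification for illustration, not the study's recording format.

```python
from statistics import mean

def fixation_metrics(fixations, trial_ms):
    """Summarize fixation events for one area of interest.
    Each fixation is (onset_ms, duration_ms); trial_ms is the total
    viewing time. The event format is an assumption for illustration."""
    if not fixations:
        return {"time_to_first_ms": None, "count": 0,
                "avg_duration_ms": 0.0, "frequency_hz": 0.0}
    return {
        "time_to_first_ms": fixations[0][0],           # onset of the first fixation
        "count": len(fixations),
        "avg_duration_ms": mean(d for _, d in fixations),
        "frequency_hz": len(fixations) / (trial_ms / 1000.0),
    }

# Three fixations within a 4-second trial
m = fixation_metrics([(250, 180), (600, 220), (1200, 400)], trial_ms=4000)
```

Stratifying such summaries by tooth group (lesion, restoration, lesion depth) then yields the comparisons reported in the abstract.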
Affiliation(s)
- Lubaina T Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Germany
- Csaba Rohrer
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Sarah Mertens
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Enkelejda Kasneci
- Human-Centered Technologies for Learning, Technical University of Munich, Germany
- Jose Eduardo Cejudo Grano de Oro
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Aßmannshauser Straße 4-6, 14197 Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Germany
4
Darici D, Reissner C, Missler M. Webcam-based eye-tracking to measure visual expertise of medical students during online histology training. GMS J Med Educ 2023;40:Doc60. PMID: 37881524; PMCID: PMC10594038; DOI: 10.3205/zma001642.
Abstract
Objectives Visual expertise is essential for image-based tasks that rely on visual cues, such as in radiology or histology. Studies suggest that eye movements are related to visual expertise and can be measured by near-infrared eye-tracking. With the popularity of device-embedded webcam eye-tracking technology, cost-effective use in educational contexts has recently become feasible. This study investigated the feasibility of such methodology in a curricular online-only histology course during the 2021 summer term. Methods At two timepoints (t1 and t2), third-semester medical students were asked to diagnose a series of histological slides while their eye movements were recorded. Students' eye metrics, performance, and behavioral measures were analyzed using variance analyses and multiple regression models. Results First, webcam eye-tracking provided eye movement data of satisfactory quality (mean accuracy=115.7 px±31.1). Second, the eye movement metrics reflected the students' proficiency in finding relevant image sections (fixation count on relevant areas=6.96±1.56 vs. irrelevant areas=4.50±1.25). Third, students' eye movement metrics successfully predicted their performance (R2adj=0.39, p<0.001). Conclusion This study supports the use of webcam eye-tracking, expanding the range of educational tools available in the (digital) classroom. As the students' interest in using webcam eye-tracking was high, possible areas of implementation are discussed.
Affiliation(s)
- Dogus Darici
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Carsten Reissner
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
- Markus Missler
- Westfälische-Wilhelms-University, Institute of Anatomy and Neurobiology, Münster, Germany
5
Arsiwala-Scheppach LT, Castner N, Rohrer C, Mertens S, Kasneci E, Cejudo Grano de Oro JE, Krois J, Schwendicke F. Gaze patterns of dentists while evaluating bitewing radiographs. J Dent 2023;135:104585. PMID: 37301462; DOI: 10.1016/j.jdent.2023.104585.
Abstract
OBJECTIVES Understanding dentists' gaze patterns on radiographs may help unravel sources of their limited accuracy and develop strategies to mitigate them. We conducted an eye tracking experiment to characterize dentists' scanpaths, and thus their gaze patterns, when assessing bitewing radiographs to detect primary proximal carious lesions. METHODS 22 dentists assessed a median of nine bitewing images each, resulting in 170 datasets after excluding data with poor quality of gaze recording. Fixation was defined as an area of attentional focus related to visual stimuli. We calculated time to first fixation, fixation count, average fixation duration, and fixation frequency. Analyses were performed for the entire image and stratified by (1) presence of carious lesions and/or restorations and (2) lesion depth (E1/2: outer/inner enamel; D1-3: outer to inner third of dentin). We also examined the transitional nature of the dentists' gaze. RESULTS Dentists had more fixations on teeth with lesions and/or restorations (median=138 [interquartile range=87, 204]) than teeth without them (32 [15, 66]), p<0.001. Notably, teeth with lesions had longer fixation durations (407 milliseconds [242, 591]) than those with restorations (289 milliseconds [216, 337]), p<0.001. Time to first fixation was longer for teeth with E1 lesions (17,128 milliseconds [8813, 21,540]) than lesions of other depths (p=0.049). The highest number of fixations was on teeth with D2 lesions (43 [20, 51]) and the lowest on teeth with E1 lesions (5 [1, 37]), p<0.001. Generally, a systematic tooth-by-tooth gaze pattern was observed. CONCLUSIONS As hypothesized, while visually inspecting bitewing radiographic images, dentists employed a heightened focus on certain image features/areas relevant to the assigned task. Also, they generally examined the entire image in a systematic tooth-by-tooth pattern.
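The "transitional nature" of gaze can be quantified by counting transitions between areas of interest along the scanpath. The sketch below assumes a simple tooth-ID encoding of the scanpath; this encoding is an illustrative assumption, not the study's representation.

```python
from collections import Counter

def transition_counts(scanpath):
    """Count gaze transitions between areas of interest (here, tooth IDs)
    along a scanpath, collapsing consecutive fixations on the same tooth.
    The scanpath encoding is an illustrative assumption."""
    collapsed = [scanpath[0]]
    for aoi in scanpath[1:]:
        if aoi != collapsed[-1]:      # drop refixations on the same tooth
            collapsed.append(aoi)
    return Counter(zip(collapsed, collapsed[1:]))

# t1 -> t2 -> t3 -> t2, after collapsing repeated fixations
c = transition_counts(["t1", "t1", "t2", "t3", "t3", "t2"])
```

In a systematic tooth-by-tooth pattern like the one reported, most counted transitions would fall between laterally adjacent teeth.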
Affiliation(s)
- Lubaina T Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland
- Nora Castner
- Department of Computer Science, University of Tuebingen, Tuebingen, Germany
- Csaba Rohrer
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Sarah Mertens
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Enkelejda Kasneci
- Department of Computer Science, Technical University of Munich, Germany
- Jose Eduardo Cejudo Grano de Oro
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany; ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, Switzerland
6
Brunyé TT, Drew T, Kerr KF, Shucard H, Powell K, Weaver DL, Elmore JG. Zoom behavior during visual search modulates pupil diameter and reflects adaptive control states. PLoS One 2023;18:e0282616. PMID: 36893083; PMCID: PMC9997932; DOI: 10.1371/journal.pone.0282616.
Abstract
Adaptive gain theory proposes that dynamic shifts between exploration and exploitation control states are modulated by the locus coeruleus-norepinephrine system and reflected in tonic and phasic pupil diameter. This study tested predictions of this theory in the context of a societally important visual search task: the review and interpretation of digital whole slide images of breast biopsies by physicians (pathologists). As these medical images are searched, pathologists encounter difficult visual features and intermittently zoom in to examine features of interest. We propose that tonic and phasic pupil diameter changes during image review may correspond to perceived difficulty and dynamic shifts between exploration and exploitation control states. To examine this possibility, we monitored visual search behavior and tonic and phasic pupil diameter while pathologists (N=89) interpreted 14 digital images of breast biopsy tissue (1,246 total images reviewed). After viewing the images, pathologists provided a diagnosis and rated the difficulty of the image. Analyses of tonic pupil diameter examined whether pupil dilation was associated with pathologists' difficulty ratings, diagnostic accuracy, and experience level. To examine phasic pupil diameter, we parsed continuous visual search data into discrete zoom-in and zoom-out events, including shifts from low to high magnification (e.g., 1× to 10×) and the reverse. Analyses examined whether zoom-in and zoom-out events were associated with phasic pupil diameter change. Results demonstrated that tonic pupil diameter was associated with image difficulty ratings and zoom level, and that phasic pupil diameter showed constriction upon zoom-in events and dilation immediately preceding zoom-out events. Results are interpreted in the context of adaptive gain theory, information gain theory, and the monitoring and assessment of physicians' diagnostic interpretive processes.
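Parsing a continuous magnification record into discrete zoom-in and zoom-out events, the preprocessing step this study describes before aligning phasic pupil responses, can be sketched as follows. The log format (timestamp, magnification pairs) is an assumption for illustration.

```python
def zoom_events(mag_log):
    """Parse a magnification time series into discrete zoom-in / zoom-out
    events. mag_log is a list of (timestamp_ms, magnification) samples;
    the log format is an assumption for illustration."""
    events = []
    for (t0, m0), (t1, m1) in zip(mag_log, mag_log[1:]):
        if m1 > m0:
            events.append((t1, "zoom_in"))
        elif m1 < m0:
            events.append((t1, "zoom_out"))
    return events

# 1x overview, zoom to 10x, dwell, then back down to 4x
log = [(0, 1), (500, 1), (900, 10), (2000, 10), (3100, 4)]
ev = zoom_events(log)
```

Pupil diameter samples in a window around each event timestamp could then be baseline-corrected and averaged per event type, which is the general shape of a phasic analysis.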
Affiliation(s)
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F. Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Kate Powell
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Donald L. Weaver
- Department of Pathology, University of Vermont and Vermont Cancer Center, Burlington, VT, USA
- Joann G. Elmore
- David Geffen School of Medicine, Department of Medicine, University of California, Los Angeles, CA, USA
7
Wang Z, Manassi M, Ren Z, Ghirardo C, Canas-Bajo T, Murai Y, Zhou M, Whitney D. Idiosyncratic biases in the perception of medical images. Front Psychol 2022;13:1049831. PMID: 36600706; PMCID: PMC9806180; DOI: 10.3389/fpsyg.2022.1049831.
Abstract
Introduction Radiologists routinely make life-altering decisions. Optimizing these decisions has been an important goal for many years and has prompted a great deal of research on the basic perceptual mechanisms that underlie radiologists' decisions. Previous studies have found that there are substantial individual differences in radiologists' diagnostic performance (e.g., sensitivity) due to experience, training, or search strategies. In addition to variations in sensitivity, however, another possibility is that radiologists might have perceptual biases: systematic misperceptions of visual stimuli. Although a great deal of research has investigated radiologist sensitivity, very little has explored the presence of perceptual biases or the individual differences in these. Methods Here, we test whether radiologists have perceptual biases using controlled artificial stimuli and realistic medical images generated by Generative Adversarial Networks (GANs). In Experiment 1, observers adjusted the appearance of simulated tumors to match previously shown targets. In Experiment 2, observers were shown a mix of real and GAN-generated CT lesion images and rated the realness of each image. Results We show that every tested individual radiologist was characterized by unique and systematic perceptual biases; these perceptual biases cannot be simply explained by attentional differences, and they can be observed in different imaging modalities and task settings, suggesting that idiosyncratic biases in medical image perception may be widespread. Discussion Characterizing and understanding these biases could be important for many practical settings such as training, pairing readers, and career selection for radiologists. These results may have consequential implications for many other fields as well, where individual observers are the linchpins for life-altering perceptual decisions.
Affiliation(s)
- Zixuan Wang
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Mauro Manassi
- School of Psychology, University of Aberdeen, King's College, Aberdeen, United Kingdom
- Zhihang Ren
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Vision Science Group, University of California, Berkeley, Berkeley, CA, USA
- Cristina Ghirardo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Teresa Canas-Bajo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Vision Science Group, University of California, Berkeley, Berkeley, CA, USA
- Yuki Murai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Koganei, Japan
- Min Zhou
- Department of Pediatrics, The First People's Hospital of Shuangliu District, Chengdu, Sichuan, China
- David Whitney
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Vision Science Group, University of California, Berkeley, Berkeley, CA, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
8
Nofallah S, Wu W, Liu K, Ghezloo F, Elmore JG, Shapiro LG. Automated analysis of whole slide digital skin biopsy images. Front Artif Intell 2022;5:1005086. PMID: 36204597; PMCID: PMC9531680; DOI: 10.3389/frai.2022.1005086.
Abstract
A rapidly increasing rate of melanoma diagnosis has been noted over the past three decades, and nearly 1 in 4 skin biopsies are diagnosed as melanocytic lesions. The gold standard for diagnosis of melanoma is histopathological examination by a pathologist, who analyzes biopsy material at both the cellular and structural levels. A pathologist's diagnosis is often subjective and prone to variability, while deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. Mitoses are important entities when reviewing skin biopsy cases, as their presence carries prognostic information; thus, their precise detection is an important factor for clinical care. In addition, semantic segmentation of clinically important structures in skin biopsies might help the diagnostic pipeline achieve an accurate classification. We aim to provide prognostic and diagnostic information on skin biopsy images, including the detection of cellular-level entities, segmentation of clinically important tissue structures, and other important factors toward the accurate diagnosis of skin biopsy images. This paper is an overview of our work on the analysis of digital whole slide skin biopsy images, including mitotic figure (mitosis) detection, semantic segmentation, diagnosis, and analysis of pathologists' viewing patterns, with new work on melanocyte detection. Deep learning has been applied in all of our detection, segmentation, and diagnosis work. In our studies, deep learning has proven superior to prior approaches to skin biopsy analysis. Our work on the analysis of pathologists' viewing patterns is the only such work in the skin biopsy literature. Our work covers the whole spectrum from low-level entities through diagnosis to understanding what pathologists do in performing their diagnoses.
Affiliation(s)
- Shima Nofallah
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Wenjun Wu
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA
- Kechun Liu
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Fatemeh Ghezloo
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G. Elmore
- David Geffen School of Medicine, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Linda G. Shapiro
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
9
Ashman K, Zhuge H, Shanley E, Fox S, Halat S, Sholl A, Summa B, Brown JQ. Whole slide image data utilization informed by digital diagnosis patterns. J Pathol Inform 2022;13:100113. PMID: 36268057; PMCID: PMC9577055; DOI: 10.1016/j.jpi.2022.100113.
Abstract
Context Despite the benefits of digital pathology, the storage and management of digital whole slide images introduce new logistical and infrastructure challenges to traditionally analog pathology labs. Aims Our goal was to analyze pathologists' slide diagnosis patterns to determine the minimum number of pixels required during diagnosis. Methods We developed a method of using pathologist viewing patterns to vary digital image resolution across virtual slides, which we call variable resolution images. An additional pathologist reviewed the variable resolution images to determine if diagnoses could still be rendered. Results Across all slides, the pathologists rarely zoomed in to the full resolution level. As a result, the variable resolution images are significantly smaller than the original whole slide images. Despite the reduction in image sizes, the final pathologist reviewer could still provide diagnoses on the variable resolution slide images. Conclusions Future studies will be conducted to understand variability in resolution requirements between and within pathologists. These findings have the potential to dramatically reduce the data storage requirements of high-resolution whole slide images.
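The variable resolution idea can be sketched as keeping each slide tile only at the highest pyramid level at which it was actually viewed. The tile indexing, level encoding, and quarter-per-level size model below are illustrative assumptions, not the authors' implementation.

```python
def tile_resolution_map(view_log, n_tiles):
    """Record, per tile, the highest pyramid level at which the tile was
    viewed; unviewed tiles stay at the coarsest level 0. view_log entries
    are (tile_index, level); both encodings are illustrative assumptions."""
    levels = [0] * n_tiles
    for tile, level in view_log:
        levels[tile] = max(levels[tile], level)
    return levels

def storage_fraction(levels, full_level):
    """Approximate size of a variable-resolution slide relative to storing
    every tile at full resolution, assuming each pyramid level quarters
    the pixel count (a common, but here assumed, pyramid layout)."""
    full = 4 ** full_level
    return sum(4 ** lv for lv in levels) / (len(levels) * full)

# Tile 0 viewed up to level 3, tile 1 up to level 1, tiles 2-3 never viewed
levels = tile_resolution_map([(0, 2), (0, 3), (1, 1)], n_tiles=4)
frac = storage_fraction(levels, full_level=3)
```

Because pathologists rarely reach full magnification, most tiles stay at low levels and the stored fraction stays well below 1, which is the mechanism behind the size savings the study reports.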
Affiliation(s)
- Kimberly Ashman
- Tulane University, Department of Biomedical Engineering, New Orleans, LA 70118, USA
- Huimin Zhuge
- Tulane University, Department of Biomedical Engineering, New Orleans, LA 70118, USA
- Erin Shanley
- Tulane University, Department of Biomedical Engineering, New Orleans, LA 70118, USA
- Sharon Fox
- LSU Health Sciences Center, Department of Pathology, New Orleans, LA 70112, USA
- Shams Halat
- Tulane School of Medicine, Tulane University Department of Pathology and Lab Medicine, New Orleans, LA 70112, USA
- Andrew Sholl
- Delta Pathology Group, Touro Infirmary, New Orleans, LA 70115, USA
- Brian Summa
- Tulane University, Department of Computer Science, New Orleans, LA 70118, USA
- J. Quincy Brown
- Tulane University, Department of Biomedical Engineering, New Orleans, LA 70118, USA
10
Ghezloo F, Wang PC, Kerr KF, Brunyé TT, Drew T, Chang OH, Reisch LM, Shapiro LG, Elmore JG. An analysis of pathologists' viewing processes as they diagnose whole slide digital images. J Pathol Inform 2022;13:100104. PMID: 36268085; PMCID: PMC9576972; DOI: 10.1016/j.jpi.2022.100104.
Abstract
Although pathologists have their own viewing habits while diagnosing, the viewing behaviors leading to the most accurate diagnoses are under-investigated. Digital whole slide imaging has enabled investigators to analyze pathologists' visual interpretation of histopathological features using mouse and viewport tracking techniques. In this study, we provide definitions for basic viewing behavior variables and investigate the association between pathologists' characteristics and viewing behaviors, and how these behaviors relate to diagnostic accuracy when interpreting whole slide images. We use recordings of 32 pathologists' actions while interpreting a set of 36 digital whole slide skin biopsy images (5 sets of 36 cases; 180 cases total). These viewport tracking data include the coordinates of a viewport scene on pathologists' screens, the magnification level at which that viewport was viewed, and a timestamp. We define a set of variables to quantify pathologists' viewing behaviors such as zooming, panning, and interacting with a consensus reference panel's selected region of interest (ROI). We examine the association of these viewing behaviors with pathologists' demographics, clinical characteristics, and diagnostic accuracy using cross-classified multilevel models. Viewing behaviors differ based on the clinical experience of the pathologists. Pathologists with a higher caseload of melanocytic skin biopsy cases and pathologists with board certification and/or fellowship training in dermatopathology have lower average zoom and lower variance of zoom levels. Viewing behaviors associated with higher diagnostic accuracy include higher average and variance of zoom levels, a lower magnification percentage (a measure of consecutive zooming behavior), higher total interpretation time, and more time spent viewing ROIs. Scanning behavior, which refers to panning with a fixed zoom level, has a marginally significant positive association with accuracy.
Pathologists' training, clinical experience, and their exposure to a range of cases are associated with their viewing behaviors, which may contribute to their diagnostic accuracy. Research in computational pathology integrating digital imaging and clinical informatics opens up new avenues for leveraging viewing behaviors in medical education and training, potentially improving patient care and the effectiveness of clinical workflow.
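The viewing-behavior variables described above (average zoom, variance of zoom, total interpretation time) can be computed directly from viewport logs. The sketch below is illustrative only: the record fields (`x`, `y`, `zoom`, `t`) and the variable definitions are assumptions, not the study's actual schema.

```python
import statistics

# Hypothetical viewport log: each record holds the viewport centre (x, y),
# the magnification ("zoom") level, and a timestamp in seconds.
# Field names and values are illustrative, not the study's actual schema.
viewport_log = [
    {"x": 100, "y": 200, "zoom": 2.0, "t": 0.0},
    {"x": 150, "y": 220, "zoom": 4.0, "t": 1.5},
    {"x": 400, "y": 300, "zoom": 4.0, "t": 3.0},
    {"x": 420, "y": 310, "zoom": 10.0, "t": 5.0},
]

def zoom_summary(log):
    """Average zoom level and (population) variance of zoom across records."""
    zooms = [rec["zoom"] for rec in log]
    return statistics.mean(zooms), statistics.pvariance(zooms)

def total_interpretation_time(log):
    """Elapsed time from the first to the last viewport record."""
    return log[-1]["t"] - log[0]["t"]

mean_zoom, var_zoom = zoom_summary(viewport_log)
```

On this toy log, the pathologist's mean zoom is 5.0 with variance 9.0 over a 5-second interpretation; the study relates such summaries to accuracy via multilevel models.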
Affiliation(s)
- Fatemeh Ghezloo
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Pin-Chieh Wang
- Department of Medicine, University of California, Los Angeles, David Geffen School of Medicine, Los Angeles, CA, USA
- Kathleen F. Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Oliver H. Chang
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Lisa M. Reisch
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G. Elmore
- Department of Medicine, University of California, Los Angeles, David Geffen School of Medicine, Los Angeles, CA, USA
11
Drew T, Lavelle M, Kerr KF, Shucard H, Brunyé TT, Weaver DL, Elmore JG. More scanning, but not zooming, is associated with diagnostic accuracy in evaluating digital breast pathology slides. J Vis 2021;21:7. [PMID: 34636845] [PMCID: PMC8525842] [DOI: 10.1167/jov.21.11.7]
Abstract
Diagnoses of medical images can invite strikingly diverse strategies for image navigation and visual search. In computed tomography screening for lung nodules, distinct strategies, termed scanning and drilling, relate to both radiologists' clinical experience and accuracy in lesion detection. Here, we examined associations between search patterns and accuracy for pathologists (N = 92) interpreting a diverse set of breast biopsy images. While changes in depth in volumetric images reveal new structures through movement in the z-plane, in digital pathology changes in depth are associated with increased magnification. Thus, "drilling" in radiology may be more appropriately termed "zooming" in pathology. We monitored eye-movements and navigation through digital pathology slides to derive metrics of how quickly the pathologists moved through XY (scanning) and Z (zooming) space. Prior research on eye-movements in depth has categorized clinicians as either "scanners" or "drillers." In contrast, we found no reliable association between a clinician's tendency to scan and their tendency to zoom while examining digital pathology slides. Thus, in the current work we treated scanning and zooming as continuous predictors rather than categorizing each clinician as either a "scanner" or a "zoomer." In contrast to prior work in volumetric chest images, we found significant associations between accuracy and scanning rate but not zooming rate. These findings suggest fundamental differences in the relative value of information types and review behaviors across the two image formats. Our data suggest that pathologists gather critical information by scanning within a given plane of depth, whereas radiologists drill through depth to interrogate critical features.
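Treating scanning and zooming as continuous predictors amounts to computing, per clinician, a rate of movement in the XY plane and a rate of magnification change. The trace layout and these simple rate definitions are assumptions for illustration; the study derived its metrics from eye-movement and slide-navigation recordings.

```python
import math

# Illustrative navigation trace of (x, y, zoom, t) samples.
# Layout and rate definitions are assumptions, not the paper's exact metrics.
trace = [
    (0.0, 0.0, 1.0, 0.0),
    (3.0, 4.0, 1.0, 1.0),   # panning: 5 units moved in XY, zoom fixed
    (3.0, 4.0, 4.0, 2.0),   # zooming: magnification changes, position fixed
    (6.0, 8.0, 4.0, 4.0),   # panning again
]

def scanning_rate(trace):
    """XY distance travelled per second (movement within a depth plane)."""
    dist = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1, _, _), (x2, y2, _, _) in zip(trace, trace[1:])
    )
    return dist / (trace[-1][3] - trace[0][3])

def zooming_rate(trace):
    """Absolute magnification change per second (movement 'in depth')."""
    dz = sum(
        abs(z2 - z1)
        for (_, _, z1, _), (_, _, z2, _) in zip(trace, trace[1:])
    )
    return dz / (trace[-1][3] - trace[0][3])
```

Each clinician then contributes two continuous numbers (here 2.5 XY-units/s and 0.75 zoom-units/s) rather than a binary "scanner"/"zoomer" label.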
Affiliation(s)
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T Brunyé
- Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver
- Department of Pathology & Laboratory Medicine, University of Vermont, Burlington, VT, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
12
Zheng Y, Jiang Z, Xie F, Shi J, Zhang H, Huai J, Cao M, Yang X. Diagnostic Regions Attention Network (DRA-Net) for Histopathology WSI Recommendation and Retrieval. IEEE Trans Med Imaging 2021;40:1090-1103. [PMID: 33351756] [DOI: 10.1109/tmi.2020.3046636]
Abstract
The development of whole slide imaging techniques and online digital pathology platforms has accelerated the popularization of telepathology for remote tumor diagnosis. During a diagnosis, the behavior of the pathologist can be recorded by the platform and archived with the digital case. The browsing path of the pathologist on the WSI is among the most valuable information in the digital database, because the image content within the path is expected to be highly correlated with the pathologist's diagnosis report. In this article, we propose a novel approach for computer-assisted cancer diagnosis named session-based histopathology image recommendation (SHIR), based on the browsing paths on WSIs. To achieve SHIR, we develop a novel diagnostic regions attention network (DRA-Net) to learn pathology knowledge from the image content associated with the browsing paths. The DRA-Net does not rely on pixel-level or region-level annotations by pathologists; all the training data can be collected automatically by the digital pathology platform without interrupting the pathologists' diagnoses. The proposed approach was evaluated on a gastric dataset containing 983 cases across 5 categories of gastric lesions. Quantitative and qualitative assessments on the dataset demonstrate that the proposed SHIR framework with the novel DRA-Net is effective in recommending diagnostically relevant cases for auxiliary diagnosis. The MRR and MAP for the recommendation are 0.816 and 0.836, respectively, on the gastric dataset. The source code of the DRA-Net is available at https://github.com/zhengyushan/dpathnet.
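The MRR and MAP figures quoted above are standard ranking metrics. A minimal sketch of how they are computed, using toy data (not the paper's gastric dataset):

```python
# Toy ranked-retrieval data: each query pairs the system's ranked case ids
# with the set of truly relevant ids. Values are made up for illustration;
# the paper reports MRR 0.816 and MAP 0.836 on its gastric dataset.
runs = [
    (["a", "b", "c"], {"b"}),       # first relevant hit at rank 2
    (["d", "e", "f"], {"d", "f"}),  # relevant hits at ranks 1 and 3
]

def mean_reciprocal_rank(runs):
    """MRR: average of 1/rank of the first relevant item per query."""
    total = 0.0
    for ranked, relevant in runs:
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(runs)

def mean_average_precision(runs):
    """MAP: mean over queries of average precision at relevant ranks."""
    total = 0.0
    for ranked, relevant in runs:
        hits = 0
        precisions = []
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                hits += 1
                precisions.append(hits / rank)
        total += sum(precisions) / len(relevant)
    return total / len(runs)

mrr = mean_reciprocal_rank(runs)        # (1/2 + 1/1) / 2 = 0.75
map_score = mean_average_precision(runs)
```

MRR rewards placing the first relevant case near the top of the list, while MAP also credits how well the remaining relevant cases are ranked.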
13
Wu W, Mehta S, Nofallah S, Knezevich S, May CJ, Chang OH, Elmore JG, Shapiro LG. Scale-Aware Transformers for Diagnosing Melanocytic Lesions. IEEE Access 2021;9:163526-163541. [PMID: 35211363] [PMCID: PMC8865389] [DOI: 10.1109/access.2021.3132958]
Abstract
Diagnosing melanocytic lesions is one of the most challenging areas of pathology with extensive intra- and inter-observer variability. The gold standard for a diagnosis of invasive melanoma is the examination of histopathological whole slide skin biopsy images by an experienced dermatopathologist. Digitized whole slide images offer novel opportunities for computer programs to improve the diagnostic performance of pathologists. In order to automatically classify such images, representations that reflect the content and context of the input images are needed. In this paper, we introduce a novel self-attention-based network to learn representations from digital whole slide images of melanocytic skin lesions at multiple scales. Our model softly weighs representations from multiple scales, allowing it to discriminate between diagnosis-relevant and -irrelevant information automatically. Our experiments show that our method outperforms five other state-of-the-art whole slide image classification methods by a significant margin. Our method also achieves comparable performance to 187 practicing U.S. pathologists who interpreted the same cases in an independent study. To facilitate relevant research, full training and inference code is made publicly available at https://github.com/meredith-wenjunwu/ScATNet.
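The "soft weighting" of multi-scale representations can be sketched as a softmax-weighted combination of per-scale feature vectors. This shows only the weighting idea with made-up numbers; the actual model (code at the linked repository) learns such weights via self-attention over image tokens, and all names and values here are illustrative assumptions.

```python
import math

# Sketch of softly weighting multi-scale representations: per-scale feature
# vectors (made-up numbers) are combined with softmax-normalised scores.
def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scale_features = {          # hypothetical embeddings at three magnifications
    "10x": [0.2, 0.8],
    "20x": [0.5, 0.5],
    "40x": [0.9, 0.1],
}
scale_scores = [0.0, 2.0, 0.5]   # hypothetical learned relevance per scale

weights = softmax(scale_scores)  # sums to 1; largest weight on "20x"
combined = [
    sum(w * feats[d] for w, feats in zip(weights, scale_features.values()))
    for d in range(2)
]
```

Because the weights are normalised, the model can emphasise the scale carrying diagnosis-relevant information while down-weighting the others, without hard-selecting a single magnification.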
Affiliation(s)
- Wenjun Wu
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Sachin Mehta
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Shima Nofallah
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Oliver H Chang
- Department of Pathology, University of Washington, Seattle, WA 98195, USA
- Joann G Elmore
- David Geffen School of Medicine, UCLA, Los Angeles, CA 90024, USA
- Linda G Shapiro
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
14
Alexander RG, Waite S, Macknik SL, Martinez-Conde S. What do radiologists look for? Advances and limitations of perceptual learning in radiologic search. J Vis 2020;20:17. [PMID: 33057623] [PMCID: PMC7571277] [DOI: 10.1167/jov.20.10.17]
Abstract
Supported by guidance from training during residency programs, radiologists learn clinically relevant visual features by viewing thousands of medical images. Yet the precise visual features that expert radiologists use in their clinical practice remain unknown. Identifying such features would allow the development of perceptual learning training methods targeted to the optimization of radiology training and the reduction of medical error. Here we review attempts to bridge current gaps in understanding with a focus on computational saliency models that characterize and predict gaze behavior in radiologists. There have been great strides toward the accurate prediction of relevant medical information within images, thereby facilitating the development of novel computer-aided detection and diagnostic tools. In some cases, computational models have achieved equivalent sensitivity to that of radiologists, suggesting that we may be close to identifying the underlying visual representations that radiologists use. However, because the relevant bottom-up features vary across task context and imaging modalities, it will also be necessary to identify relevant top-down factors before perceptual expertise in radiology can be fully understood. Progress along these dimensions will improve the tools available for educating new generations of radiologists, and aid in the detection of medically relevant information, ultimately improving patient health.
Affiliation(s)
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Stephen Waite
- Department of Radiology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Stephen L Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
15
Williams LH, Drew T. What do we know about volumetric medical image interpretation? A review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019;4:21. [PMID: 31286283] [PMCID: PMC6614227] [DOI: 10.1186/s41235-019-0171-6]
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
16
Wu CC, Wolfe JM. Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future. Vision (Basel) 2019;3:E32. [PMID: 31735833] [PMCID: PMC6802791] [DOI: 10.3390/vision3020032]
Abstract
The eye movements of experts reading medical images have been studied for many years. Unlike topics such as face perception, medical image perception research must cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because such volumes simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical 'scenes', and we discuss how tracking experts' eyes may provide useful insights for medical education and screening efficiency.
Affiliation(s)
- Chia-Chien Wu
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Jeremy M. Wolfe
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
17
Brunyé TT, Drew T, Weaver DL, Elmore JG. A review of eye tracking for understanding and improving diagnostic interpretation. Cogn Res Princ Implic 2019;4:7. [PMID: 30796618] [PMCID: PMC6515770] [DOI: 10.1186/s41235-019-0159-2]
Abstract
Inspecting digital imaging for primary diagnosis introduces perceptual and cognitive demands for physicians tasked with interpreting visual medical information and arriving at appropriate diagnoses and treatment decisions. The process of medical interpretation and diagnosis involves a complex interplay between visual perception and multiple cognitive processes, including memory retrieval, problem-solving, and decision-making. Eye-tracking technologies are becoming increasingly available in the consumer and research markets and provide novel opportunities to learn more about the interpretive process, including differences between novices and experts, how heuristics and biases shape visual perception and decision-making, and the mechanisms underlying misinterpretation and misdiagnosis. The present review provides an overview of eye-tracking technology, the perceptual and cognitive processes involved in medical interpretation, how eye tracking has been employed to understand medical interpretation and promote medical education and training, and some of the promises and challenges for future applications of this technology.
Affiliation(s)
- Tad T Brunyé
- Center for Applied Brain and Cognitive Sciences, Tufts University, 200 Boston Ave., Suite 3000, Medford, MA, 02155, USA.
- Trafton Drew
- Department of Psychology, University of Utah, 380 1530 E, Salt Lake City, UT, 84112, USA
- Donald L Weaver
- Department of Pathology and University of Vermont Cancer Center, University of Vermont, 111 Colchester Ave., Burlington, VT, 05401, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, University of California at Los Angeles, 10833 Le Conte Ave., Los Angeles, CA, 90095, USA
18
Lago MA, Abbey CK, Barufaldi B, Bakic PR, Weinstein SP, Maidment AD, Eckstein MP. Interactions of lesion detectability and size across single-slice DBT and 3D DBT. Proc SPIE Int Soc Opt Eng 2018;10577:105770X. [PMID: 32435080] [PMCID: PMC7237825] [DOI: 10.1117/12.2293873]
Abstract
Three-dimensional image modalities introduce a new paradigm for visual search, requiring exploration of a larger search space than 2D imaging modalities. The large number of slices in 3D volumes and the limited reading times make it difficult for radiologists to explore thoroughly by fixating with their high-resolution fovea on all regions of each slice. Thus, for 3D images, observers must rely much more on their visual periphery (points away from fixation) to process image information. We previously found a dissociation in signal detectability between 2D and 3D search tasks for small signals in synthetic textures, evaluated with trained non-radiologist observers. Here, we extend our evaluation to more clinically realistic backgrounds and radiologist observers. We studied the detectability of simulated microcalcifications (MCALC) and masses (MASS) in Digital Breast Tomosynthesis (DBT) utilizing virtual breast phantoms. We compared the lesion detectability of 8 radiologists during free search in 3D DBT and a 2D single-slice DBT (the center slice of the 3D DBT). Our results show that the detectability of microcalcifications degrades significantly in 3D DBT with respect to 2D single-slice DBT, whereas the detectability of masses is not significantly different between the two formats. The large deterioration in 3D detectability of microcalcifications relative to masses may be related to peripheral processing, given the high number of cases in which the microcalcification was missed and the high number of search errors. Together, the results extend previous findings with synthetic textures and highlight how search in 3D images is distinct from 2D search, as a consequence of the interaction between search strategies and the visibility of signals in the visual periphery.
Affiliation(s)
- Miguel A Lago
- Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, USA
- Craig K Abbey
- Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, USA
- Bruno Barufaldi
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Predrag R Bakic
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Susan P Weinstein
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Andrew D Maidment
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, CA, USA