1. Hsieh SS, Holmes III DR, Carter RE, Tan N, Inoue A, Yalon M, Gong H, Sudhir Pillai P, Leng S, Yu L, Fidler JL, Cook DA, McCollough CH, Fletcher JG. Peripheral liver metastases are more frequently missed than central metastases in contrast-enhanced CT: insights from a 25-reader performance study. Abdom Radiol (NY) 2024. PMID: 39162799. DOI: 10.1007/s00261-024-04520-4.
Abstract
PURPOSE Subtle liver metastases may be missed on contrast-enhanced CT imaging. We determined the impact of lesion location and conspicuity on metastasis detection using data from a prior reader study. METHODS In the prior reader study, 25 radiologists each examined 40 CT exams and circumscribed all suspected hepatic metastases. CT exams were chosen to include a total of 91 visually challenging metastases. The detectability of a metastasis was defined as the fraction of radiologists who circumscribed it. A conspicuity index was calculated for each metastasis by multiplying metastasis diameter by its contrast, defined as the difference between the mean attenuation of a circular region within the metastasis and that of a surrounding circular region of liver parenchyma. The effects of distance from the liver edge and of conspicuity index on metastasis detectability were measured using multivariable linear regression. RESULTS The median metastasis was 1.4 cm from the liver edge (interquartile range [IQR], 0.9-2.1 cm), with a diameter of 1.2 cm (IQR, 0.9-1.8 cm) and a contrast of 38 HU (IQR, 23-68 HU). An increase of one standard deviation in conspicuity index was associated with a 6.9% increase in detectability (p = 0.008), whereas an increase of one standard deviation in distance from the liver edge was associated with a 5.5% increase in detectability (p = 0.03). CONCLUSION Peripheral liver metastases were missed more frequently than central liver metastases, with the size of this effect depending on metastasis size and contrast.
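The conspicuity index described in the methods (lesion diameter multiplied by the HU difference between lesion and surrounding parenchyma) can be sketched in a few lines of Python. The function name, variable names, and sample values below are illustrative, not taken from the study:

```python
import numpy as np

def conspicuity_index(lesion_hu, parenchyma_hu, diameter_cm):
    """Diameter times contrast, where contrast is the absolute difference
    between the mean attenuation (HU) of a circular region inside the
    metastasis and that of the surrounding liver parenchyma.
    Names and sample values are illustrative, not the authors' code."""
    contrast_hu = abs(np.mean(lesion_hu) - np.mean(parenchyma_hu))
    return diameter_cm * contrast_hu

# Hypothetical samples: a 1.2 cm hypodense lesion in enhancing parenchyma.
lesion = [58, 62, 61, 59]         # HU samples inside the lesion
parenchyma = [99, 101, 100, 100]  # HU samples in the surrounding ring
index = conspicuity_index(lesion, parenchyma, 1.2)  # 1.2 cm * 40 HU = 48.0
```

A one-standard-deviation change in an index of this form is what the regression above relates to detectability.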
Affiliation(s)
- Akitoshi Inoue
- Mayo Clinic, Rochester, USA
- Shiga University of Medical Science, Ōtsu, Japan
- Parvathy Sudhir Pillai
- Mayo Clinic, Rochester, USA
- The University of Texas MD Anderson Cancer Center, Houston, USA
2. Bella-Fernández M, Suero Suñé M, Gil-Gómez de Liaño B. The time course of visual foraging in the lifespan: Spatial scanning, organization search, and target processing. Psychon Bull Rev 2024; 31:325-339. PMID: 37620634. PMCID: PMC10867067. DOI: 10.3758/s13423-023-02345-8.
Abstract
Visual foraging is a variant of visual search that consists of searching for an undetermined number of targets among distractors (e.g., looking for various LEGO pieces in a box). Under non-exhaustive tasks, the observer scans the display, picking the targets needed, not necessarily all of them, before leaving the search. To understand how such natural foraging tasks are organized, several measures of spatial scanning and organization have been proposed in the exhaustive foraging literature: best-r, intertarget distances, PAO, and target intersections. In the present study, we apply these measures and new Bayesian indexes to determine how the time course of visual foraging is organized in a dynamic non-exhaustive paradigm. In a large sample of observers (279 participants, 4-25 years old), we compare feature and conjunction foraging and explore how factors such as set size and time course, not previously tested in exhaustive foraging, might affect search organization in non-exhaustive dynamic tasks. The results replicate previous findings showing that younger observers' searches are less organized, feature conditions are more organized than conjunction conditions, and organization leads to a more effective search. Interestingly, observers tend to be less organized as set size increases, and search is less organized within a patch as it advances in time: search organization decreases as search termination approaches, suggesting organization measures as potential clues to understanding quitting rules in search. Our results highlight the importance of studying search organization in foraging as a critical source for understanding complex cognitive processes in visual search.
Affiliation(s)
- Marcos Bella-Fernández
- Facultad de Psicología, Universidad Autónoma de Madrid, C/ Ivan Pavlov 6, 28029, Madrid, Spain.
- Universidad Pontificia de Comillas, Madrid, Spain.
- Manuel Suero Suñé
- Facultad de Psicología, Universidad Autónoma de Madrid, C/ Ivan Pavlov 6, 28029, Madrid, Spain.
3. Wu CC, Wolfe JM. The Functional Visual Field(s) in simple visual search. Vision Res 2022; 190:107965. PMID: 34775158. PMCID: PMC8976560. DOI: 10.1016/j.visres.2021.107965.
Abstract
During a visual search for a target among distractors, observers do not fixate every location in the search array. Rather, processing is thought to occur within a Functional Visual Field (FVF) surrounding each fixation. We argue that there are three questions that can be asked at each fixation and that these imply three different senses of the FVF. (1) Can I identify what is at location XY? This defines a resolution FVF. (2) To what shall I attend during this fixation? This defines an attentional FVF. (3) Where should I fixate next? This defines an exploratory FVF. We examine the second and third of these using eye movements in visual search. In three experiments, we collected eye movements during visual search for the target letter T among distractor letter Ls (Experiments 1 and 3) or for a color × orientation conjunction (Experiment 2). Saccades that do not go to the target can be used to define the exploratory FVF. The saccade that goes to the target can be used to define the attentional FVF, since the target was probably covertly detected during the prior fixation. The exploratory FVF was larger than the attentional FVF in all three experiments. Interestingly, the probability that the next saccade would go to the target was always well below 1.0, even when the current fixation was close to the target and well within any reasonable estimate of the FVF. Measuring search-based exploratory and attentional FVFs sheds light on how we can miss clearly visible targets.
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA
4.
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVF) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. The setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
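The core of the guidance mechanism described in this abstract, five preattentive sources combined into a spatial priority map whose maximum is selected, can be sketched as follows. The grid size, the random maps, and the equal weights are placeholder assumptions, not fitted GS6 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (8, 8)  # coarse grid standing in for locations in the visual field

# The five preattentive guidance sources named in GS6; random placeholder
# maps here, whereas a real model would compute each from the stimulus.
sources = {
    "top_down": rng.random(shape),
    "bottom_up": rng.random(shape),
    "history": rng.random(shape),
    "reward": rng.random(shape),
    "scene": rng.random(shape),
}
weights = dict.fromkeys(sources, 1.0)  # equal weights, purely illustrative

# The priority map is a weighted combination of the guidance sources.
priority = sum(weights[name] * src for name, src in sources.items())

# Attention is deployed to the most active location (~20x per second in GS6).
next_location = np.unravel_index(np.argmax(priority), shape)
```

In the full model this selection repeats as the map evolves, with selected items handed to an asynchronous diffusion stage for recognition.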
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
5. Wolfe JM, Wu CC, Li J, Suresh SB. What do experts look at and what do experts find when reading mammograms? J Med Imaging (Bellingham) 2021; 8:045501. PMID: 34277890. DOI: 10.1117/1.jmi.8.4.045501.
Abstract
Purpose: Radiologists sometimes fail to report clearly visible, clinically significant findings. Eye tracking can provide insight into the causes of such errors. Approach: We tracked the eye movements of 17 radiologists searching for masses in 80 mammograms (60 with masses). Results: Errors were classified using the Kundel et al. (1978) taxonomy: search errors (target never fixated), recognition errors (fixated <500 ms), or decision errors (fixated >500 ms). Error proportions replicated Krupinski (1996): search 25%, recognition 25%, and decision 50%. Interestingly, we found few differences between experts and residents in accuracy or eye movement metrics. Error categorization depends on the definition of the useful field of view (UFOV) around fixation. We explored different UFOV definitions based on targeting saccades and search saccades. Targeting saccades averaged slightly longer than search saccades. Of most interest, we found that the probability that the eyes would move to the target on the next saccade, or even on one of the next three saccades, was strikingly low (~33%, even when the eyes were <2 deg from the target). This makes it clear that observers do not fully process everything within a UFOV. Using a probabilistic UFOV, we find, unsurprisingly, that observers cover more of the image when no target is present than when it is found. Interestingly, we do not find evidence that observers cover too little of the image on trials when they miss the target. Conclusions: These results indicate that many errors in mammography reflect failed deployment of attention, not failure to fixate clinically significant locations.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Cambridge, Massachusetts, United States
- Chia-Chien Wu
- Brigham and Women's Hospital, Boston, Massachusetts, United States; Harvard Medical School, Cambridge, Massachusetts, United States
- Jonathan Li
- Melbourne Medical School, Melbourne, Victoria, Australia
- Sneha B Suresh
- Brigham and Women's Hospital, Boston, Massachusetts, United States
6. Van De Luecht M, Reed WM. The cognitive and perceptual processes that affect observer performance in lung cancer detection: a scoping review. J Med Radiat Sci 2021; 68:175-185. PMID: 33556995. PMCID: PMC8168065. DOI: 10.1002/jmrs.456.
Abstract
INTRODUCTION Early detection of malignant pulmonary nodules through screening has been shown to reduce lung cancer-related mortality by 20%. However, the perceptual and cognitive factors that affect nodule detection are poorly understood. This review examines the cognitive and visual processes of various observers, with a particular focus on radiologists, during lung nodule detection. METHODS Four databases (Medline, Embase, Scopus and PubMed) were searched to extract studies on eye-tracking in pulmonary nodule detection. Studies were included if they used eye-tracking to assess the search and detection of lung nodules in computed tomography or 2D radiographic imaging. Data were charted according to identified themes and synthesised using a thematic narrative approach. RESULTS The literature search yielded 25 articles, and five themes were identified: (1) functional visual field and satisfaction of search, (2) expert search patterns, (3) error classification through dwell time, (4) the impact of the viewing environment and (5) the effect of prevalence expectation on search. The functional visual field was reduced to 2.7° in 3D imaging compared with 5° in 2D radiographs. Although greater visual coverage improved nodule detection, incomplete search was not responsible for missed nodules. Most radiological errors during lung nodule detection were decision-making errors (30%-45%). Dwell times associated with false-positive (FP) decisions informed feedback systems to improve diagnosis. Interruptions did not influence diagnostic performance; however, they increased viewing time by 8% and yielded a search continuation accuracy of 23.1%. Comparative scanning was found to increase the detection of low-contrast nodules. Prevalence expectation did not directly affect diagnostic accuracy; however, decision-making time increased by 2.32 seconds under high prevalence expectations. CONCLUSION Visual and cognitive factors influence pulmonary nodule detection. Insights gained from eye-tracking can inform advancements in lung screening. Further exploration of eye-tracking in lung screening, particularly with low-dose computed tomography (LDCT), will benefit the future of lung cancer screening.
Affiliation(s)
- Monica-Rose Van De Luecht
- Discipline of Medical Imaging Science, Faculty of Medicine and Health, Sydney School of Health Sciences, The University of Sydney, Sydney, NSW, Australia
- Warren Michael Reed
- Medical Imaging Optimisation and Perception Group (MIOPeG), Discipline of Medical Imaging Science, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
7. Image Annotation by Eye Tracking: Accuracy and Precision of Centerlines of Obstructed Small-Bowel Segments Placed Using Eye Trackers. J Digit Imaging 2020; 32:855-864. PMID: 31144146. DOI: 10.1007/s10278-018-0169-5.
Abstract
Small-bowel obstruction (SBO) is a common and important disease for which machine learning tools have yet to be developed. Image annotation is a critical first step in the development of such tools. This study assesses whether image annotation by eye tracking is sufficiently accurate and precise to serve as a first step in the development of machine learning tools for detection of SBO on CT. Seven subjects diagnosed with SBO by CT were included in the study. For each subject, an obstructed segment of bowel was chosen. Three observers annotated the centerline of the segment by manual fiducial placement and by visual fiducial placement using a Tobii 4C eye tracker. Each annotation was repeated three times. The distance between centerlines was calculated after alignment using dynamic time warping (DTW) and statistically compared to clinical thresholds for diagnosis of SBO. Intra-observer DTW distance between manual and visual centerlines was calculated as a measure of accuracy. These distances were 1.1 ± 0.2, 1.3 ± 0.4, and 1.8 ± 0.2 cm for the three observers and were less than 1.5 cm for two of the three observers (P < 0.01). Intra- and inter-observer DTW distances between centerlines placed with each method were calculated as measures of precision. These distances were 0.6 ± 0.1 and 0.8 ± 0.2 cm for manual centerlines and 1.1 ± 0.4 and 1.9 ± 0.6 cm for visual centerlines, and were less than 3.0 cm in all cases (P < 0.01). The results suggest that eye tracking-based annotation is sufficiently accurate and precise for small-bowel centerline annotation in machine learning-based applications.
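The DTW alignment step this abstract describes can be illustrated with a minimal dynamic time warping implementation. This is generic textbook DTW with an ad hoc path-length normalization, not the study's actual code, and the two centerlines are synthetic:

```python
import numpy as np

def dtw_distance(a, b):
    """Average point-to-point gap between two centerlines after DTW
    alignment. `a` and `b` are (n, 3) arrays of points in cm. A generic
    DTW sketch; the study's exact normalization is an open assumption."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)  # normalize by a nominal path length

# Synthetic centerlines: a "visual" annotation offset 0.5 cm from a
# "manual" one along a gently curving segment.
t = np.linspace(0.0, 1.0, 20)
manual = np.stack([10 * t, np.sin(np.pi * t), np.zeros_like(t)], axis=1)
visual = manual + np.array([0.0, 0.5, 0.0])
gap = dtw_distance(manual, visual)
```

Because the alignment lets points warp along the segment, a comparison of this form tolerates the uneven spacing of gaze-derived fiducials that simple pointwise distances would not.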
8. Lago MA, Sechopoulos I, Bochud FO, Eckstein MP. Measurement of the useful field of view for single slices of different imaging modalities and targets. J Med Imaging (Bellingham) 2020; 7:022411. PMID: 32064303. PMCID: PMC7007584. DOI: 10.1117/1.jmi.7.2.022411.
Abstract
Purpose: With three-dimensional (3-D) images displayed as stacks of 2-D images, radiologists rely more heavily on vision away from their fixation point to visually process information, guide eye movements, and detect abnormalities. Thus, the ability to detect targets away from the fixation point, commonly characterized as the useful field of view (UFOV), becomes critical for these 3-D imaging modalities. We investigate how the UFOV, defined as the eccentricity at which detection performance degrades to a given probability, varies across imaging modalities and targets. Approach: We measure the detectability of different targets at various distances from gaze locations for single slices of liver computed tomography (CT), 2-D digital mammograms (DM), and single slices of digital breast tomosynthesis (DBT) cases. Observers with varying expertise were instructed to maintain their gaze at a point while a short display of the image was flashed, and an eye tracker verified observers' steady fixation. Display times were 200 and 1000 ms for CT images and 500 ms for DM and DBT images. Results: We find variations in the UFOV from 9 to 12 deg for liver CT to as small as 2.5 to 5 deg for calcification clusters in breast images (DM and DBT). We compare our results to those reported in the literature for lung nodules and discuss the differences across methods used to measure the UFOV, their dependence on case selection/task difficulty, viewing conditions, and observer expertise. We propose a complementary measure, defined in terms of performance degradation relative to peak foveal performance (relative UFOV), to circumvent the UFOV's variations with case selection/task difficulty. Conclusion: Our results highlight the variations in the UFOV across imaging modalities, target types, observer expertise, and measurement methods, and suggest an additional relative-UFOV measure to more thoroughly characterize detection performance away from the point of fixation.
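The definition used above, the UFOV as the eccentricity at which detection performance falls to a given probability, amounts to finding a threshold crossing on a measured performance curve. A minimal sketch follows; the 0.8 threshold and the curve values are hypothetical assumptions, not data from the study:

```python
import numpy as np

def ufov_radius(ecc_deg, detect_prob, threshold=0.8):
    """Eccentricity (deg) at which a monotonically decreasing detection
    curve crosses `threshold`, found by linear interpolation. The default
    threshold and the example curve below are illustrative only."""
    p = np.asarray(detect_prob, dtype=float)
    e = np.asarray(ecc_deg, dtype=float)
    # np.interp requires increasing x, so feed the curve reversed
    # (probability rises as eccentricity falls toward fixation).
    return float(np.interp(threshold, p[::-1], e[::-1]))

# Hypothetical detectability measured at fixed gaze, as in the flashed
# displays described above: performance degrades away from fixation.
ecc = [0.0, 2.0, 4.0, 8.0, 12.0]       # degrees from fixation
prob = [0.98, 0.95, 0.85, 0.60, 0.40]  # proportion detected
radius = ufov_radius(ecc, prob, threshold=0.8)
```

The relative-UFOV the authors propose would instead set the threshold as a fraction of the peak foveal value (here `prob[0]`), making the measure less sensitive to case difficulty.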
Affiliation(s)
- Miguel A. Lago
- University of California, Institute for Collaborative Biotechnologies, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Ioannis Sechopoulos
- Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
- Dutch Expert Centre for Screening, Nijmegen, The Netherlands
- François O. Bochud
- University Hospital and University of Lausanne, Institute of Radiation Physics, Lausanne, Switzerland
- Miguel P. Eckstein
- University of California, Institute for Collaborative Biotechnologies, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
9. Williams LH, Drew T. What do we know about volumetric medical image interpretation? A review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. PMID: 31286283. PMCID: PMC6614227. DOI: 10.1186/s41235-019-0171-6.
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
10. Wu CC, Wolfe JM. Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future. Vision (Basel) 2019; 3:E32. PMID: 31735833. PMCID: PMC6802791. DOI: 10.3390/vision3020032.
Abstract
The eye movements of experts reading medical images have been studied for many years. Unlike topics such as face perception, medical image perception research must cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because such volumes simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception, with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical 'scenes', and we discuss how tracking experts' eyes may provide useful insights for medical education and screening efficiency.
Affiliation(s)
- Chia-Chien Wu
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Jeremy M. Wolfe
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
11. Robins M, Solomon J, Hoye J, Smith T, Zheng Y, Ebner L, Choudhury KR, Samei E. Interchangeability between real and three-dimensional simulated lung tumors in computed tomography: an interalgorithm volumetry study. J Med Imaging (Bellingham) 2019; 5:035504. PMID: 30840716. DOI: 10.1117/1.jmi.5.3.035504.
Abstract
Using hybrid datasets consisting of patient-derived computed tomography (CT) images with digitally inserted computational tumors, we establish volumetric interchangeability between real and computational lung tumors in CT. Pathologically confirmed malignancies from 30 thoracic patient cases from the RIDER database were modeled. Tumors were either isolated or attached to lung structures. Patient images were acquired on one of two CT scanner models (LightSpeed 16 or VCT; GE Healthcare) using a standard chest protocol. Real tumors were segmented and used to inform the size and shape of simulated tumors. Simulated tumors, developed in the Duke Lesion Tool (Duke University), were inserted using a validated image-domain insertion program. Four readers performed volume measurements using three commercial segmentation tools. We compared the volume estimation performance of the segmentation tools between real tumors in actual patient CT images and corresponding simulated tumors virtually inserted into the same patient images (i.e., hybrid datasets). Comparisons involved (1) direct assessment of measured volumes and of the standard deviation between simulated and real tumors across readers and tools, respectively, (2) multivariate analysis involving segmentation tools, readers, tumor shape, and attachment, and (3) the effect of the local tumor environment on volume measurement. Volume comparison showed consistent trends (9% volumetric difference) between real and simulated tumors across all segmentation tools, readers, shapes, and attachments. Across all cases, readers, and segmentation tools, an intraclass correlation coefficient of 0.99 indicates that simulated tumors correlated strongly with real tumors (p = 0.95). In addition, the impact of the local tumor environment on tumor volume measurement was found to have a segmentation tool-related influence. The strong agreement between the simulated tumors modeled in this study and their real counterparts suggests a high degree of similarity and indicates that, volumetrically, simulated tumors embedded into patient CT data can serve as reasonable surrogates for real patient data.
Affiliation(s)
- Marthony Robins
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University, Medical Physics Graduate Program, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Justin Solomon
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University, Medical Physics Graduate Program, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Jocelyn Hoye
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University, Medical Physics Graduate Program, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Taylor Smith
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University, Medical Physics Graduate Program, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Yuese Zheng
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Lukas Ebner
- Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States; University of Bern, Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern, Switzerland
- Kingshuk Roy Choudhury
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States
- Ehsan Samei
- Carl E. Ravin Advanced Imaging Laboratories, Durham, North Carolina, United States; Duke University, Medical Physics Graduate Program, Durham, North Carolina, United States; Duke University Medical Center, Department of Radiology, Durham, North Carolina, United States