1. Hegde S, Nanayakkara S, Cox S, Vasa R, Gao J. Australian Dentists' Knowledge of the Consequences of Interpretive Errors in Dental Radiographs and Potential Mitigation Measures. Clin Exp Dent Res 2024; 10:e70027. [PMID: 39420698] [DOI: 10.1002/cre2.70027]
Abstract
OBJECTIVES Dental radiographs, typically taken and interpreted by dentists, are essential for diagnosis and effective treatment planning. Interpretive errors in dental radiographs, stemming from failures of visual and cognitive processes, can affect both patients and clinicians. This survey aimed to assess dental practitioners' perceptions of the consequences of these errors and of potential measures to minimize them. MATERIALS AND METHODS This online anonymized survey assessed Australian dental practitioners' perceptions using ranking, Likert scale, and open-ended questions. The data were analyzed using descriptive statistics and bivariate analysis. RESULTS Participants identified undertreatment (72%) and legal implications (82%) as the most significant consequences of interpretive errors, whereas severe harm to patients was deemed the least likely. Dental practitioners placed greater emphasis on maintaining a high level of competence and on the well-being of their patients. Utilizing high-quality images (63.9%) and appropriate radiographs (59.7%) were identified as the most effective measures to minimize interpretive errors. Participants showed hesitancy about relying on machine learning as a clinical decision-making tool. CONCLUSIONS The survey provides valuable practical insights into the consequences of interpretive errors and targeted measures to minimize their occurrence. Efforts to minimize interpretive errors should address patient safety as well as practitioners' concerns about professional reputation and business viability. The study also suggests further research into the role of machine learning algorithms in reducing interpretive errors in dentistry.
Affiliation(s)
- Shwetha Hegde
- Dentomaxillofacial Radiology, Sydney Dental School, University of Sydney, Sydney, Australia
- Shanika Nanayakkara
- Sydney Dental School, Institute of Dental Research, Westmead Centre for Oral Health, University of Sydney, Sydney, Australia
- Stephen Cox
- Discipline of Oral Surgery, Sydney Dental School, University of Sydney, Sydney, Australia
- Rajesh Vasa
- Translational Research and Development, Applied Artificial Intelligence, Deakin University, Melbourne, Australia
- Jinlong Gao
- Sydney Dental School, Institute of Dental Research, Westmead Centre for Oral Health, University of Sydney, Sydney, Australia
2. Introzzi L, Zonca J, Cabitza F, Cherubini P, Reverberi C. Enhancing human-AI collaboration: The case of colonoscopy. Dig Liver Dis 2024; 56:1131-1139. [PMID: 37940501] [DOI: 10.1016/j.dld.2023.10.018]
Abstract
Diagnostic errors impact patient health and healthcare costs. Artificial Intelligence (AI) shows promise in mitigating this burden by supporting medical doctors in decision-making. However, the mere display of excellent or even superhuman performance by AI in specific tasks does not guarantee a positive impact on medical practice. Effective AI assistance should target the primary causes of human errors and foster effective collaborative decision-making with human experts, who remain the ultimate decision-makers. In this narrative review, we apply these principles to the specific scenario of AI assistance during colonoscopy. By unraveling the neurocognitive foundations of the colonoscopy procedure, we identify multiple bottlenecks in perception, attention, and decision-making that contribute to diagnostic errors, shedding light on potential interventions to mitigate them. Furthermore, we explore how existing AI devices fare in clinical practice and whether they achieve an optimal integration with the human decision-maker. We argue that to foster optimal human-AI collaboration, future research should expand our knowledge of factors influencing AI's impact, establish evidence-based cognitive models, and develop training programs based on them. These efforts will enhance human-AI collaboration, ultimately improving diagnostic accuracy and patient outcomes. The principles illuminated in this review hold more general value, extending their relevance to a wide array of medical procedures and beyond.
Affiliation(s)
- Luca Introzzi
- Department of Psychology, Università Milano - Bicocca, Milano, Italy
- Joshua Zonca
- Department of Psychology, Università Milano - Bicocca, Milano, Italy; Milan Center for Neuroscience, Università Milano - Bicocca, Milano, Italy
- Federico Cabitza
- Department of Informatics, Systems and Communication, Università Milano - Bicocca, Milano, Italy; IRCCS Istituto Ortopedico Galeazzi, Milano, Italy
- Paolo Cherubini
- Department of Brain and Behavioral Sciences, Università Statale di Pavia, Pavia, Italy
- Carlo Reverberi
- Department of Psychology, Università Milano - Bicocca, Milano, Italy; Milan Center for Neuroscience, Università Milano - Bicocca, Milano, Italy
3. Klein DS, Karmakar S, Jonnalagadda A, Abbey CK, Eckstein MP. Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images. J Med Imaging (Bellingham) 2024; 11:045501. [PMID: 38988989] [PMCID: PMC11232702] [DOI: 10.1117/1.jmi.11.4.045501]
Abstract
Purpose Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133). Conclusion The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
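For orientation on the AUC figures in this abstract: the area under the ROC curve can be estimated nonparametrically as the probability that a randomly drawn signal-present confidence rating exceeds a randomly drawn signal-absent rating (the Mann-Whitney statistic). A minimal Python sketch; the ratings below are invented for illustration, not the study's data:

```python
def auc_mann_whitney(present, absent):
    """Nonparametric AUC estimate: P(rating_present > rating_absent),
    counting ties as 1/2. Equivalent to the Mann-Whitney U statistic
    divided by the number of present/absent pairs."""
    wins = 0.0
    for p in present:
        for a in absent:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(present) * len(absent))

# Hypothetical confidence ratings for trials without and with CADe support:
unaided = auc_mann_whitney([3, 4, 4, 5], [2, 3, 3, 4])  # 0.8125
aided = auc_mann_whitney([4, 5, 5, 5], [2, 3, 3, 4])    # 0.96875
delta_auc = aided - unaided  # the ΔAUC quantity reported in the abstract
```

ΔAUC as reported in the abstract is then simply the difference between the aided and unaided estimates.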
Affiliation(s)
- Devi S. Klein
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Srijita Karmakar
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Aditya Jonnalagadda
- University of California, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
- Craig K. Abbey
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Miguel P. Eckstein
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- University of California, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
- University of California, Department of Computer Science, Santa Barbara, California, United States
4. Byrne CA, Voute LC, Marshall JF. Interobserver agreement during clinical magnetic resonance imaging of the equine foot. Equine Vet J 2024. [PMID: 38946165] [DOI: 10.1111/evj.14126]
Abstract
BACKGROUND Agreement between experienced observers for assessment of pathology and assessment confidence are poorly documented for magnetic resonance imaging (MRI) of the equine foot. OBJECTIVES To report interobserver agreement for pathology assessment and observer confidence for key anatomical structures of the equine foot during MRI. STUDY DESIGN Exploratory clinical study. METHODS Ten experienced observers (diploma or associate level) assessed 15 equine foot MRI studies acquired from clinical databases of 3 MRI systems. Observers graded pathology in seven key anatomical structures (Grade 1: no pathology, Grade 2: mild pathology, Grade 3: moderate pathology, Grade 4: severe pathology) and provided a grade for their confidence for each pathology assessment (Grade 1: high confidence, Grade 2: moderate confidence, Grade 3: limited confidence, Grade 4: no confidence). Interobserver agreement for the presence/absence of pathology and agreement for individual grades of pathology were assessed with Fleiss' kappa (k). Overall interobserver agreement for pathology was determined using Fleiss' kappa and Kendall's coefficient of concordance (KCC). The distribution of grading was also visualised with bubble charts. RESULTS Interobserver agreement for the presence/absence of pathology of individual anatomical structures was poor-to-fair, except for the navicular bone which had moderate agreement (k = 0.52). Relative agreement for pathology grading (accounting for the ranking of grades) ranged from KCC = 0.19 for the distal interphalangeal joint to KCC = 0.70 for the navicular bone. Agreement was generally greatest at the extremes of pathology. Observer confidence in pathology assessment was generally moderate to high. MAIN LIMITATIONS Distribution of pathology varied between anatomical structures due to random selection of clinical MRI studies. Observers had most experience with low-field MRI. 
CONCLUSIONS Even with experienced observers, there can be notable variation in the perceived severity of foot pathology on MRI for individual cases, which could be important in a clinical context.
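Fleiss' kappa, the agreement statistic used in this study, compares the observed per-subject agreement among a fixed number of raters with the agreement expected by chance from the marginal category proportions. A minimal pure-Python sketch on a toy count table (not the study's ratings):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[subject][category] of rating
    counts, where every subject is rated by the same number of raters n.
    Returns 1 for perfect agreement and 0 for chance-level agreement."""
    N = len(ratings)          # number of subjects
    n = sum(ratings[0])       # raters per subject
    k = len(ratings[0])       # number of categories
    # Observed agreement: for each subject, the proportion of agreeing
    # rater pairs, averaged over subjects.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 2 MRI studies, 5 raters, 2 grades (no pathology / pathology).
kappa = fleiss_kappa([[5, 0], [0, 5]])  # perfect agreement -> 1.0
```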
Affiliation(s)
- Christian A Byrne
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- Lance C Voute
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
- John F Marshall
- School of Veterinary Medicine, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
5. Cioffi GM, Pinilla-Echeverri N, Sheth T, Sibbald MG. Does artificial intelligence enhance physician interpretation of optical coherence tomography: insights from eye tracking. Front Cardiovasc Med 2023; 10:1283338. [PMID: 38144364] [PMCID: PMC10739524] [DOI: 10.3389/fcvm.2023.1283338]
Abstract
Background and objectives The adoption of optical coherence tomography (OCT) in percutaneous coronary intervention (PCI) is limited by the need for real-time image interpretation expertise. Artificial intelligence (AI)-assisted Ultreon™ 2.0 software could address this barrier. We used eye tracking to understand how these software changes impact viewing efficiency and accuracy. Methods Eighteen interventional cardiologists and fellows at McMaster University, Canada, were included in the study and categorized as experienced or inexperienced based on lifetime OCT use. They were tasked with reviewing OCT images from both Ultreon™ 2.0 and AptiVue™ software platforms while their eye movements were recorded. Key metrics, such as time to first fixation on the area of interest, total task time, dwell time (time spent on the area of interest as a proportion of total task time), and interpretation accuracy, were evaluated using a mixed multivariate model. Results Physicians exhibited improved viewing efficiency with Ultreon™ 2.0, characterized by reduced time to first fixation (Ultreon™ 0.9 s vs. AptiVue™ 1.6 s, p = 0.007), reduced total task time (Ultreon™ 10.2 s vs. AptiVue™ 12.6 s, p = 0.006), and increased dwell time in the area of interest (Ultreon™ 58% vs. AptiVue™ 41%, p < 0.001). These effects were similar for experienced and inexperienced physicians. Accuracy of OCT image interpretation was preserved in both groups, with experienced physicians outperforming inexperienced physicians. Discussion Our study demonstrated that AI-enabled Ultreon™ 2.0 software can streamline the image interpretation process and improve viewing efficiency for both inexperienced and experienced physicians. Enhanced viewing efficiency implies reduced cognitive load, potentially reducing barriers to OCT adoption in PCI decision-making.
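The three viewing-efficiency metrics named in this abstract (time to first fixation, total task time, dwell proportion) can all be derived from a fixation log. The sketch below assumes a hypothetical record layout of (x, y, onset_s, duration_s) fixations and a rectangular area of interest; it is an illustration of the metrics, not the study's analysis code:

```python
def gaze_metrics(fixations, aoi):
    """Summarize eye-tracking fixations against an area of interest (AOI).
    fixations: list of (x, y, onset_s, duration_s) tuples in viewing order.
    aoi: (xmin, ymin, xmax, ymax) rectangle.
    Returns (time_to_first_fixation_s or None, total_task_time_s,
    dwell_proportion)."""
    xmin, ymin, xmax, ymax = aoi
    in_aoi = [f for f in fixations
              if xmin <= f[0] <= xmax and ymin <= f[1] <= ymax]
    total = sum(d for _, _, _, d in fixations)
    first = in_aoi[0][2] if in_aoi else None   # onset of first AOI fixation
    dwell = sum(d for _, _, _, d in in_aoi) / total if total else 0.0
    return first, total, dwell

# Toy trace: observer reaches the AOI 0.5 s in, spends 1.5 s of 2.0 s there.
first, total, dwell = gaze_metrics(
    [(0, 0, 0.0, 0.5), (5, 5, 0.5, 1.0), (6, 5, 1.5, 0.5)],
    (4, 4, 8, 8))
```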
Affiliation(s)
- Matthew Gary Sibbald
- Division of Cardiology, Hamilton General Hospital, Hamilton Health Sciences, McMaster University, Hamilton, ON, Canada
6. Klein DS, Lago MA, Abbey CK, Eckstein MP. A 2D Synthesized Image Improves the 3D Search for Foveated Visual Systems. IEEE Trans Med Imaging 2023; 42:2176-2188. [PMID: 37027767] [PMCID: PMC10476603] [DOI: 10.1109/tmi.2023.3246005]
Abstract
Current medical imaging increasingly relies on 3D volumetric data, making it difficult for radiologists to thoroughly search all regions of the volume. In some applications (e.g., Digital Breast Tomosynthesis), the volumetric data is typically paired with a synthesized 2D image (2D-S) generated from the corresponding 3D volume. We investigate how this image pairing affects the search for spatially large and small signals. Observers searched for these signals in 3D volumes, in 2D-S images, and while viewing both. We hypothesize that lower spatial acuity in the observers' visual periphery hinders the search for the small signals in the 3D images. However, the inclusion of the 2D-S guides eye movements to suspicious locations, improving the observer's ability to find the signals in 3D. Behavioral results show that the 2D-S, used as an adjunct to the volumetric data, improves the localization and detection of the small (but not large) signal compared to 3D alone, with a concomitant reduction in search errors. To understand this process at a computational level, we implement a Foveated Search Model (FSM) that executes human eye movements and then processes points in the image with varying spatial detail based on their eccentricity from fixations. The FSM predicts human performance for both signals and captures the reduction in search errors when the 2D-S supplements the 3D search. Our experimental and modeling results delineate the utility of the 2D-S in 3D search: it reduces the detrimental impact of low-resolution peripheral processing by guiding attention to regions of interest, effectively reducing errors.
7. Drew T, Konold CE, Lavelle M, Brunyé TT, Kerr KF, Shucard H, Weaver DL, Elmore JG. Pathologist pupil dilation reflects experience level and difficulty in diagnosing medical images. J Med Imaging (Bellingham) 2023; 10:025503. [PMID: 37096053] [PMCID: PMC10122150] [DOI: 10.1117/1.jmi.10.2.025503]
Abstract
Purpose: Digital whole slide imaging allows pathologists to view slides on a computer screen instead of under a microscope. Digital viewing allows for real-time monitoring of pathologists' search behavior and neurophysiological responses during the diagnostic process. One particular neurophysiological measure, pupil diameter, could provide a basis for evaluating clinical competence during training or developing tools that support the diagnostic process. Prior research shows that pupil diameter is sensitive to cognitive load and arousal, and it switches between exploration and exploitation of a visual image. Different categories of lesions in pathology pose different levels of challenge, as indicated by diagnostic disagreement among pathologists. If pupil diameter is sensitive to the perceived difficulty in diagnosing biopsies, eye-tracking could potentially be used to identify biopsies that may benefit from a second opinion. Approach: We measured case onset baseline-corrected (phasic) and uncorrected (tonic) pupil diameter in 90 pathologists who each viewed and diagnosed 14 digital breast biopsy cases that cover the diagnostic spectrum from benign to invasive breast cancer. Pupil data were extracted from the beginning of viewing and interpreting of each individual case. After removing 122 trials (<10%) with poor eye-tracking quality, 1138 trials remained. We used multiple linear regression with robust standard error estimates to account for dependent observations within pathologists. Results: We found a positive association between the magnitude of phasic dilation and subject-centered difficulty ratings and between the magnitude of tonic dilation and untransformed difficulty ratings. When controlling for case diagnostic category, only the tonic-difficulty relationship persisted.
Conclusions: Results suggest that tonic pupil dilation may indicate overall arousal differences between pathologists as they interpret biopsy cases and could signal a need for additional training, experience, or automated decision aids. Phasic dilation is sensitive to characteristics of biopsies that tend to elicit higher difficulty ratings and could indicate a need for a second opinion.
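The phasic/tonic distinction in this abstract amounts to whether a case-onset baseline is subtracted from the pupil trace before summarizing it. A toy illustration of that bookkeeping (the sample values and the baseline-window length are invented, not the study's parameters):

```python
def pupil_measures(samples, baseline_n=10):
    """Illustrative tonic vs. phasic pupil summaries for one trial.
    tonic  = raw mean pupil diameter over the whole trial (uncorrected);
    phasic = mean diameter after subtracting a case-onset baseline,
             here taken as the mean of the first baseline_n samples."""
    baseline = sum(samples[:baseline_n]) / baseline_n
    tonic = sum(samples) / len(samples)
    phasic = tonic - baseline
    return tonic, phasic

# Toy trial: diameter sits at 3.0 mm at case onset, then dilates to 4.0 mm.
tonic, phasic = pupil_measures([3.0] * 10 + [4.0] * 10)
```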
Affiliation(s)
- Trafton Drew
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Catherine E. Konold
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States
- Mark Lavelle
- University of New Mexico, Department of Psychology, Albuquerque, New Mexico, United States
- Tad T. Brunyé
- Tufts University, Center for Applied Brain and Cognitive Sciences, Medford, Massachusetts, United States
- Kathleen F. Kerr
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Hannah Shucard
- University of Washington, Department of Biostatistics, Seattle, Washington, United States
- Donald L. Weaver
- University of Vermont, Department of Pathology & Laboratory Medicine, Burlington, Vermont, United States
- Joann G. Elmore
- David Geffen School of Medicine UCLA, Department of Medicine, Los Angeles, California, United States
8. Balta C, Reiser I, Broeders MJM, Veldkamp WJH, van Engen RE, Sechopoulos I. Lesion detection in digital breast tomosynthesis: human reader experiments indicate no benefit from the integration of information from multiple planes. J Med Imaging (Bellingham) 2023; 10:S11915. [PMID: 37378263] [PMCID: PMC10292860] [DOI: 10.1117/1.jmi.10.s1.s11915]
Abstract
Purpose In digital breast tomosynthesis (DBT), radiologists need to review a stack of 20 to 80 tomosynthesis images, depending upon breast size. This causes a significant increase in reading time. However, it is currently unknown whether there is a perceptual benefit to viewing a mass in the 3D tomosynthesis volume. To answer this question, this study investigated whether adjacent lesion-containing planes provide additional information that aids lesion detection for DBT-like and breast CT-like (bCT) images. Method Human reader detection performance was determined for low-contrast targets shown in a single tomosynthesis image at the center of the target (2D) or shown in the entire tomosynthesis image stack (3D). Using simulations, targets were embedded in simulated breast backgrounds, and images were generated using a DBT-like (50 deg angular range) and a bCT-like (180 deg angular range) imaging geometry. Experiments were conducted with spherical and capsule-shaped targets. Eleven readers reviewed 1600 images in two-alternative forced-choice experiments. The area under the receiver operating characteristic curve (AUC) and reading time were computed for the 2D and 3D reading modes for the DBT and bCT imaging geometries and for both target shapes. Results Spherical lesion detection was higher in 2D mode than in 3D, for both DBT- and bCT-like images (DBT: 2D AUC = 0.790, 3D AUC = 0.735, P = 0.03; bCT: 2D AUC = 0.869, 3D AUC = 0.716, P < 0.05), but equivalent for capsule-shaped signals (DBT: 2D AUC = 0.891, 3D AUC = 0.915, P = 0.19; bCT: 2D AUC = 0.854, 3D AUC = 0.847, P = 0.88). Average reading time was up to 134% higher for 3D viewing (P < 0.05). Conclusions For the detection of low-contrast lesions, there is no inherent visual perception benefit to reviewing the entire DBT or bCT stack.
The findings of this study could have implications for the development of 2D synthetic mammograms: a single synthesized 2D image designed to include all lesions present in the volume might allow readers to maintain detection performance at a significantly reduced reading time.
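A convenient property of the two-alternative forced-choice design used in this study is that the proportion of correct responses is itself an unbiased estimator of the area under the ROC curve, so AUC can be scored directly from trial outcomes. A minimal sketch with invented responses (not the study's data):

```python
def auc_from_2afc(correct):
    """AUC estimate for a two-alternative forced-choice (2AFC) task:
    the fraction of trials on which the observer chose the
    signal-containing alternative. Proportion correct in 2AFC is an
    unbiased estimate of the area under the ROC curve."""
    correct = list(correct)
    return sum(correct) / len(correct)

# Hypothetical session: 12 of 16 trials correct -> AUC estimate of 0.75.
auc = auc_from_2afc([True] * 12 + [False] * 4)
```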
Affiliation(s)
- Christiana Balta
- Dutch Expert Centre for Screening (LRCB), Nijmegen, The Netherlands
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Ingrid Reiser
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Mireille J. M. Broeders
- Dutch Expert Centre for Screening (LRCB), Nijmegen, The Netherlands
- Radboud University Medical Center, Department for Health Evidence, Nijmegen, The Netherlands
- Ioannis Sechopoulos
- Dutch Expert Centre for Screening (LRCB), Nijmegen, The Netherlands
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- University of Twente, Technical Medicine Center, Enschede, The Netherlands
9. Hout MC, Papesh MH, Masadeh S, Sandin H, Walenchok SC, Post P, Madrid J, White B, Pinto JDG, Welsh J, Goode D, Skulsky R, Rodriguez MC. The Oddity Detection in Diverse Scenes (ODDS) database: Validated real-world scenes for studying anomaly detection. Behav Res Methods 2023; 55:583-599. [PMID: 35353316] [PMCID: PMC8966608] [DOI: 10.3758/s13428-022-01816-5]
Abstract
Many applied screening tasks (e.g., medical image or baggage screening) involve challenging searches for which standard laboratory search is rarely equivalent. For example, whereas laboratory search frequently requires observers to look for precisely defined targets among isolated, non-overlapping images randomly arrayed on clean backgrounds, medical images present unspecified targets in noisy, yet spatially regular scenes. Those unspecified targets are typically oddities, elements that do not belong. To develop a closer laboratory analogue to this, we created a database of scenes containing subtle, ill-specified "oddity" targets. These scenes have similar perceptual densities and spatial regularities to those found in expert search tasks, and each includes 16 variants of the unedited scene wherein an oddity (a subtle deformation of the scene) is hidden. In Experiment 1, eight volunteers searched thousands of scene variants for an oddity. Regardless of their search accuracy, they were then shown the highlighted anomaly and rated its subtlety. Subtlety ratings reliably predicted search performance (accuracy and response times) and did so better than image statistics. In Experiment 2, we conducted a conceptual replication in which a larger group of naïve searchers scanned subsets of the scene variants. Prior subtlety ratings reliably predicted search outcomes. Whereas medical image targets are difficult for naïve searchers to detect, our database contains thousands of interior and exterior scenes that vary in difficulty, but are nevertheless searchable by novices. In this way, the stimuli will be useful for studying visual search as it typically occurs in expert domains: Ill-specified search for anomalies in noisy displays.
Affiliation(s)
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- National Science Foundation, Alexandria, VA, USA
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Saleem Masadeh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Hailey Sandin
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Phillip Post
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Jessica Madrid
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Bryan White
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Julian Welsh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Dre Goode
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Rebecca Skulsky
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Mariana Cazares Rodriguez
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
10. Lan L, Mao RQ, Qiu RY, Kay J, de Sa D. Immersive Virtual Reality for Patient-Specific Preoperative Planning: A Systematic Review. Surg Innov 2023; 30:109-122. [PMID: 36448920] [PMCID: PMC9925905] [DOI: 10.1177/15533506221143235]
Abstract
Background. Immersive virtual reality (iVR) facilitates surgical decision-making by enabling surgeons to interact with complex anatomic structures in realistic 3-dimensional environments. With emerging interest in its applications, its effects on patients and providers should be clarified. This systematic review examines the current literature on iVR for patient-specific preoperative planning. Materials and Methods. A literature search was performed on five databases for publications from January 1, 2000 through March 21, 2021. Primary studies on the use of iVR simulators by surgeons at any level of training for patient-specific preoperative planning were eligible. Two reviewers independently screened titles, abstracts, and full texts, extracted data, and assessed quality using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD). Results were qualitatively synthesized, and descriptive statistics were calculated. Results. The systematic search yielded 2,555 studies in total, with 24 full-texts subsequently included for qualitative synthesis, representing 264 medical personnel and 460 patients. Neurosurgery was the most frequently represented discipline (10/24; 42%). Preoperative iVR did not significantly improve patient-specific outcomes of operative time, blood loss, complications, and length of stay, but may decrease fluoroscopy time. In contrast, iVR improved surgeon-specific outcomes of surgical strategy, anatomy visualization, and confidence. Validity, reliability, and feasibility of patient-specific iVR models were assessed. The mean QATSDD score of included studies was 32.9%. Conclusions. Immersive VR improves surgeon experiences of preoperative planning, with minimal evidence for impact on short-term patient outcomes. Future work should focus on high-quality studies investigating long-term patient outcomes, and utility of preoperative iVR for trainees.
Affiliation(s)
- Lucy Lan
- Michael G. DeGroote School of Medicine, McMaster University, Hamilton, ON, Canada. Correspondence: Lucy Lan, Michael G. DeGroote School of Medicine, McMaster University, 1280 Main Street West, Hamilton, ON L8N 3Z5, Canada
- Randi Q. Mao
- Michael G. DeGroote School of Medicine, McMaster University, Hamilton, ON, Canada
- Reva Y. Qiu
- Michael G. DeGroote School of Medicine, McMaster University, Hamilton, ON, Canada
- Jeffrey Kay
- Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Darren de Sa
- Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
11. Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. [PMID: 36635491] [PMCID: PMC9837148] [DOI: 10.1038/s41598-023-27896-7]
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head-turns and eye-movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye-movements. In each task, we found that participants' search performance was impacted by increases in set size, the number of items in the visual display. Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
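Search efficiency in the sense used in this abstract is conventionally summarized as the slope of response time against set size (ms per item), with flatter slopes indicating more efficient search. A minimal least-squares sketch; the response times below are hypothetical, not the study's data:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of response time (ms) against set size:
    the standard visual-search efficiency measure in ms per item.
    A flatter (smaller) slope indicates more efficient search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical observer: 500, 700, 900 ms at set sizes 4, 8, 12
# gives a slope of 50 ms/item.
slope = search_slope([4, 8, 12], [500.0, 700.0, 900.0])
```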
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA

12
DETECT-LC: A 3D Deep Learning and Textural Radiomics Computational Model for Lung Cancer Staging and Tumor Phenotyping Based on Computed Tomography Volumes. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136318]
Abstract
Lung cancer is one of the primary causes of cancer-related deaths worldwide. Timely diagnosis and precise staging are pivotal for treatment planning and can thus lead to increased survival rates. The application of advanced machine learning techniques helps in effective diagnosis and staging. In this study, a multistage neural computational model, DETECT-LC, is proposed. DETECT-LC handles the challenge of choosing discriminative CT slices for constructing 3D volumes, using Haralick and histogram-based radiomics and unsupervised clustering. The ALT-CNN-DENSE Net architecture is introduced as part of DETECT-LC for voxel-based classification. DETECT-LC offers an automatic threshold-based segmentation approach instead of a manual procedure, helping to reduce this burden for radiologists and clinicians. DETECT-LC also contributes a slice selection approach and a relatively lightweight 3D CNN architecture that improve on the performance of existing studies. The proposed pipeline is employed for tumor phenotyping and staging. DETECT-LC's performance is assessed through a range of experiments, in which it attains outstanding performance, surpassing its counterparts in accuracy, sensitivity, F1-score, and area under the curve (AUC). For histopathology classification, DETECT-LC achieved an average improvement of 20% in overall accuracy, 0.19 in sensitivity, 0.16 in F1-score, and 0.16 in AUC over the state of the art. A similar enhancement is reached for staging, where higher overall accuracy, sensitivity, and F1-score are attained, with improvements of 8%, 0.08, and 0.14, respectively.
13
Grasso V, Willumeit-Römer R, Jose J. Superpixel spectral unmixing framework for the volumetric assessment of tissue chromophores: A photoacoustic data-driven approach. Photoacoustics 2022; 26:100367. [PMID: 35601933 PMCID: PMC9120071 DOI: 10.1016/j.pacs.2022.100367]
Abstract
The assessment of tissue chromophores at a volumetric scale is vital for improved diagnosis and treatment of a large number of diseases. Spectral photoacoustic imaging (sPAI) co-registered with high-resolution ultrasound (US) is an innovative technology with great potential for clinical translation, as it can assess the volumetric distribution of tissue components. Conventionally, to detect and separate chromophores from sPAI, an input of the expected tissue absorption spectra is required. In pathological conditions, however, the absorption spectra are difficult to predict, as they can change with the physiological state. This conventional approach can also be hampered by spectral coloring, a prominent distortion effect that induces spectral changes at depth. Here, we propose a novel data-driven framework that can overcome these limitations and provide an improved assessment of tissue chromophores. We have developed a superpixel spectral unmixing (SPAX) approach that can detect the most and least prominent absorber spectra and their volumetric distribution without any user interaction. Within the SPAX framework, we have also implemented an advanced spectral coloring compensation approach that utilizes US image segmentation and Monte Carlo simulations based on a predefined library of optical properties. The framework has been tested on tissue-mimicking phantoms and on healthy animals. The obtained results show enhanced specificity and sensitivity for the detection of tissue chromophores. To our knowledge, this is a unique framework that accounts for spectral coloring and provides automated detection of tissue spectral signatures at a volumetric scale, which can open many possibilities for translational research.
Affiliation(s)
- Valeria Grasso
- FUJIFILM VisualSonics, Amsterdam, the Netherlands
- Faculty of Engineering, Institute for Materials Science, Christian-Albrecht University of Kiel, Kiel, Germany
- Regine Willumeit-Römer
- Faculty of Engineering, Institute for Materials Science, Christian-Albrecht University of Kiel, Kiel, Germany
- Division Metallic Biomaterials, Institute of Materials Research, Helmholtz-Zentrum Hereon GmbH, Geesthacht, Germany
- Jithin Jose
- FUJIFILM VisualSonics, Amsterdam, the Netherlands

14
Wu CC, Wolfe JM. The Functional Visual Field(s) in simple visual search. Vision Res 2022; 190:107965. [PMID: 34775158 PMCID: PMC8976560 DOI: 10.1016/j.visres.2021.107965]
Abstract
During a visual search for a target among distractors, observers do not fixate every location in the search array. Rather, processing is thought to occur within a Functional Visual Field (FVF) surrounding each fixation. We argue that there are three questions that can be asked at each fixation and that these imply three different senses of the FVF. 1) Can I identify what is at location XY? This defines a resolution FVF. 2) To what shall I attend during this fixation? This defines an Attentional FVF. 3) Where should I fixate next? This defines an Exploratory FVF. We examine FVFs 2 and 3 using eye movements in visual search. In three experiments, we collected eye movements during visual search for the target letter T among distractor letter Ls (Experiments 1 and 3) or for a color × orientation conjunction (Experiment 2). Saccades that do not go to the target can be used to define the Exploratory FVF. The saccade that goes to the target can be used to define the Attentional FVF, since the target was probably covertly detected during the prior fixation. The Exploratory FVF was larger than the Attentional FVF in all three experiments. Interestingly, the probability that the next saccade would go to the target was always well below 1.0, even when the current fixation was close to the target and well within any reasonable estimate of the FVF. Measuring search-based Exploratory and Attentional FVFs sheds light on how we can miss clearly visible targets.
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA

15
Ren Z, Yu SX, Whitney D. Controllable Medical Image Generation via GAN. J Percept Imaging 2022; 5:000502. [PMID: 37621378 PMCID: PMC10448967 DOI: 10.2352/j.percept.imaging.2022.5.000502]
Abstract
Medical image data is critically important for a range of disciplines, including medical image perception research, clinician training programs, and computer vision algorithms, among many other applications. Authentic medical image data, unfortunately, is relatively scarce for many of these uses. Because of this, researchers often collect their own data in nearby hospitals, which limits the generalizability of the data and findings. Moreover, even when larger datasets become available, they are of limited use because of the necessary data processing procedures, such as de-identification, labeling, and categorizing, which require significant time and effort. Thus, in some applications, including behavioral experiments on medical image perception, researchers have used naive artificial medical images (e.g., shapes or textures that are not realistic). These artificial medical images are easy to generate and manipulate, but the lack of authenticity inevitably raises questions about the applicability of the research to clinical practice. Recently, with the great progress in Generative Adversarial Networks (GANs), authentic images can be generated with high quality. In this paper, we propose to use GANs to generate authentic medical images for medical imaging studies. We also adopt a controllable method to manipulate the generated image attributes such that these images can satisfy arbitrary experimenter goals, tasks, or stimulus settings. We have tested the proposed method on various medical image modalities, including mammogram, MRI, CT, and skin cancer images. The generated medical images verify the success of the proposed method. The model and generated images could be employed in any medical image perception research.
Affiliation(s)
- Zhihang Ren
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
- Stella X Yu
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
- David Whitney
- Vision Science Graduate Group, University of California, Berkeley, CA 94720, United States of America
- International Computer Science Institute, Berkeley, CA 94720, United States of America
- Department of Psychology, University of California, Berkeley, CA 94720, United States of America
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States of America

16
Drew T, Lavelle M, Kerr KF, Shucard H, Brunyé TT, Weaver DL, Elmore JG. More scanning, but not zooming, is associated with diagnostic accuracy in evaluating digital breast pathology slides. J Vis 2021; 21:7. [PMID: 34636845 PMCID: PMC8525842 DOI: 10.1167/jov.21.11.7]
Abstract
Diagnoses of medical images can invite strikingly diverse strategies for image navigation and visual search. In computed tomography screening for lung nodules, distinct strategies, termed scanning and drilling, relate to both radiologists' clinical experience and accuracy in lesion detection. Here, we examined associations between search patterns and accuracy for pathologists (N = 92) interpreting a diverse set of breast biopsy images. While changes in depth in volumetric images reveal new structures through movement in the z-plane, in digital pathology changes in depth are associated with increased magnification. Thus, "drilling" in radiology may be more appropriately termed "zooming" in pathology. We monitored eye movements and navigation through digital pathology slides to derive metrics of how quickly the pathologists moved through XY (scanning) and Z (zooming) space. Prior research on eye movements in depth has categorized clinicians as either "scanners" or "drillers." In contrast, we found no reliable dichotomy in clinicians' tendencies to scan or zoom while examining digital pathology slides. Thus, in the current work we treated scanning and zooming as continuous predictors rather than categorizing each clinician as either a "scanner" or a "zoomer." In contrast to prior work in volumetric chest images, we found significant associations between accuracy and scanning rate but not zooming rate. These findings suggest fundamental differences in the relative value of information types and review behaviors across the two image formats. Our data suggest that pathologists gather critical information by scanning on a given plane of depth, whereas radiologists drill through depth to interrogate critical features.
Affiliation(s)
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Kathleen F Kerr
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Hannah Shucard
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Tad T Brunyé
- Department of Psychology, Tufts University, Medford, MA, USA
- Donald L Weaver
- Department of Pathology & Laboratory Medicine, University of Vermont, Burlington, VT, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA

17
Carrigan AJ, Stoodley P, Ng K, Moerel D, Wiggins MW. Static versus dynamic medical images: The role of cue utilization in diagnostic performance. Appl Cogn Psychol 2021. [DOI: 10.1002/acp.3861]
Affiliation(s)
- Ann J. Carrigan
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
- Paul Stoodley
- School of Medicine, Western Sydney University, Sydney, New South Wales, Australia
- Westmead Private Cardiology, Westmead, New South Wales, Australia
- Kenny Ng
- Cardiology Department, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Denise Moerel
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Mark W. Wiggins
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia

18
Williams LH, Carrigan AJ, Mills M, Auffermann WF, Rich AN, Drew T. Characteristics of expert search behavior in volumetric medical image interpretation. J Med Imaging (Bellingham) 2021; 8:041208. [PMID: 34277889 DOI: 10.1117/1.jmi.8.4.041208]
Abstract
Purpose: Experienced radiologists have enhanced global processing ability relative to novices, allowing experts to rapidly detect medical abnormalities without performing an exhaustive search. However, evidence for global processing models is primarily limited to two-dimensional image interpretation, and it is unclear whether these findings generalize to volumetric images, which are widely used in clinical practice. We examined whether radiologists searching volumetric images use methods consistent with global processing models of expertise. In addition, we investigated whether search strategy (scanning/drilling) differs with experience level. Approach: Fifty radiologists with a wide range of experience evaluated chest computed-tomography scans for lung nodules while their eye movements and scrolling behaviors were tracked. Multiple linear regressions were used to determine: (1) how search behaviors differed with years of experience and the number of chest CTs evaluated per week and (2) which search behaviors predicted better performance. Results: Contrary to global processing models based on 2D images, experience was unrelated to measures of global processing (saccadic amplitude, coverage, time to first fixation, search time, and depth passes) in this task. Drilling behavior was associated with better accuracy than scanning behavior when controlling for observer experience. Greater image coverage was a strong predictor of task accuracy. Conclusions: Global processing ability may play a relatively small role in volumetric image interpretation, where global scene statistics are not available to radiologists in a single glance. Rather, in volumetric images, it may be more important to engage in search strategies that support a more thorough search of the image.
Affiliation(s)
- Lauren H Williams
- University of California, San Diego, Department of Psychology, San Diego, California, United States
- Ann J Carrigan
- Macquarie University, Department of Psychology, Sydney, New South Wales, Australia
- Macquarie University, Perception in Action Research Centre, Sydney, New South Wales, Australia
- Macquarie University, Centre for Elite Performance, Expertise, and Training, Sydney, New South Wales, Australia
- Megan Mills
- University of Utah, School of Medicine, Department of Radiology and Imaging Sciences, Salt Lake City, Utah, United States
- William F Auffermann
- University of Utah, School of Medicine, Department of Radiology and Imaging Sciences, Salt Lake City, Utah, United States
- Anina N Rich
- Macquarie University, Perception in Action Research Centre, Sydney, New South Wales, Australia
- Macquarie University, Centre for Elite Performance, Expertise, and Training, Sydney, New South Wales, Australia
- Macquarie University, Department of Cognitive Science, Sydney, New South Wales, Australia
- Trafton Drew
- University of Utah, Department of Psychology, Salt Lake City, Utah, United States

19
Carrigan AJ, Magnussen J, Georgiou A, Curby KM, Palmeri TJ, Wiggins MW. Differentiating Experience From Cue Utilization in Radiological Assessments. Hum Factors 2021; 63:635-646. [PMID: 32150500 DOI: 10.1177/0018720820902576]
Abstract
OBJECTIVE This research was designed to examine the contribution of self-reported experience and cue utilization to diagnostic accuracy in the context of radiology. BACKGROUND Within radiology, it is unclear how task-related experience contributes to the acquisition of associations between features and events in memory, or cues, and how these contribute to diagnostic performance. METHOD Data were collected from 18 trainees and 41 radiologists. The participants completed a radiology edition of the established cue utilization assessment tool EXPERTise 2.0, which provides a measure of cue utilization based on performance on a number of domain-specific tasks. The participants also completed a separate image interpretation task as an independent measure of diagnostic performance. RESULTS Consistent with previous research, a k-means cluster analysis using the data from EXPERTise 2.0 delineated two groups whose centroid patterns reflected higher and lower cue utilization. Controlling for years of experience, participants with higher cue utilization were more accurate on the image interpretation task than participants who demonstrated relatively lower cue utilization (p = .01). CONCLUSION This study provides support for the role of cue utilization in assessments of radiology images among qualified radiologists. Importantly, it also demonstrates that cue utilization and self-reported years of experience as a radiologist make independent contributions to performance on the radiological diagnostic task. APPLICATION Task-related experience, including training, needs to be structured to ensure that learners have the opportunity to acquire feature-event relationships and internalize these associations in the form of cues in memory.
Affiliation(s)
- Kim M Curby
- Macquarie University, Sydney, Australia

20
Coronary Centerline Extraction from CCTA Using 3D-UNet. Future Internet 2021. [DOI: 10.3390/fi13040101]
Abstract
The mesh-type coronary model, obtained from three-dimensional reconstruction using a sequence of images produced by computed tomography (CT), can be used to obtain useful diagnostic information, such as extracting the projection of the lumen (planar development along an artery). In this paper, we have focused on automated coronary centerline extraction from cardiac computed tomography angiography (CCTA), proposing a 3D version of the U-Net architecture trained with a novel loss function and with augmented patches. We have obtained promising results for accuracy (between 90% and 95%) and overlap (between 90% and 94%) with various network training configurations on data from the Rotterdam Coronary Artery Centerline Extraction benchmark. We have also demonstrated the ability of the proposed network to learn despite the huge class imbalance and sparse annotation present in the training data.
21
Lago MA, Jonnalagadda A, Abbey CK, Barufaldi BB, Bakic PR, Maidment ADA, Leung WK, Weinstein SP, Englander BS, Eckstein MP. Under-exploration of Three-Dimensional Images Leads to Search Errors for Small Salient Targets. Curr Biol 2021; 31:1099-1106.e5. [PMID: 33472051 PMCID: PMC8048135 DOI: 10.1016/j.cub.2020.12.029]
Abstract
Advances in 3D imaging technology are transforming how radiologists search for cancer1,2 and how security officers scrutinize baggage for dangerous objects.3 These new 3D technologies often improve search over 2D images4,5 but vastly increase the image data. Here, we investigate 3D search for targets of various sizes in filtered noise and digital breast phantoms. For a Bayesian ideal observer optimally processing the filtered noise and a convolutional neural network processing the digital breast phantoms, search with 3D image stacks increases target information and improves accuracy over search with 2D images. In contrast, 3D search by humans leads to high miss rates for small targets easily detected in 2D search, but not for larger targets more visible in the visual periphery. Analyses of human eye movements, perceptual judgments, and a computational model with a foveated visual system suggest that human errors can be explained by interaction among a target's peripheral visibility, eye-movement under-exploration of the 3D images, and a perceived overestimation of the explored area. Instructing observers to extend the search reduces small target misses by 75% without increasing false positives. Results with twelve radiologists confirm that even medical professionals reading realistic breast phantoms have high miss rates for small targets in 3D search. Thus, under-exploration represents a fundamental limitation to the efficacy with which humans search in 3D image stacks and miss targets with these prevalent image technologies.
Affiliation(s)
- Miguel A Lago
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Aditya Jonnalagadda
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Craig K Abbey
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
- Bruno B Barufaldi
- Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Predrag R Bakic
- Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Andrew D A Maidment
- Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Winifred K Leung
- Ridley-Tree Cancer Center, Sansum Clinic, 540 W. Pueblo Street, Santa Barbara, CA 93105, USA
- Susan P Weinstein
- Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Brian S Englander
- Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA 93106, USA; Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA 93106, USA

22
Kliewer MA, Hartung M, Green CS. The Search Patterns of Abdominal Imaging Subspecialists for Abdominal Computed Tomography: Toward a Foundational Pattern for New Radiology Residents. J Clin Imaging Sci 2021; 11:1. [PMID: 33500836 PMCID: PMC7827582 DOI: 10.25259/jcis_195_2020]
Abstract
Objectives: The routine search patterns used by subspecialty abdominal imaging experts to inspect the image volumes of abdominal/pelvic computed tomography (CT) have not been well characterized or rendered in practical or teachable terms. The goal of this study is to describe the search patterns used by experienced subspecialty imagers when reading a normal abdominal CT at a modern picture archiving and communication system workstation, and to utilize this information to propose guidelines for residents as they learn to interpret CT during training. Material and Methods: Twenty-two academic subspecialists enacted their routine search pattern on a normal contrast-enhanced abdominal/pelvic CT study under standardized display parameters. Readers were told that the scan was normal and then asked to verbalize where their gaze centered and moved through the axial, coronal, and sagittal image stacks, demonstrating eye position with a cursor as needed. A peer coded the reported eye gaze movements and scrilling behavior. Spearman correlation coefficients were calculated between years of professional experience and the numbers of passes through the lung bases, liver, kidneys, and bowel. Results: All readers followed an initial organ-by-organ approach. Larger organs were examined by drilling, and smaller organs by oscillation or scanning. Search elements were classified as drilling, scanning, oscillation, and scrilling (scan drilling); these categories were parsed as necessary. The greatest variability was found in the examination of the body wall and bowel/mesentery. Two modes of scrilling were described, classified as roaming and zigzagging. The years of experience of the readers did not correlate with the number of passes made through the lung bases, liver, kidneys, or bowel. Conclusion: Subspecialty abdominal radiologists negotiate through the image stacks of an abdominal CT study in broadly similar ways. Collation of the approaches suggests a foundational search pattern for new trainees.
Affiliation(s)
- Mark A Kliewer
- Department of Radiology and Ultrasound Imaging, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, United States
- Michael Hartung
- Department of Radiology and Ultrasound Imaging, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, United States
- C Shawn Green
- Department of Psychology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, United States

23
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA

24
What can an echocardiographer see in briefly presented stimuli? Perceptual expertise in dynamic search. Cogn Res Princ Implic 2020; 5:30. [PMID: 32696181 PMCID: PMC7374494 DOI: 10.1186/s41235-020-00232-7]
Abstract
Background: Experts in medical image perception are able to detect abnormalities rapidly in medical images. This ability is likely due to enhanced pattern recognition on a global scale. However, the bulk of research in this domain has focused on static rather than dynamic images, so it remains unclear what level of information can be extracted from dynamic displays. This study was designed to examine the visual capabilities of echocardiographers—practitioners who provide information regarding cardiac integrity and functionality. In three experiments, echocardiographers and naïve participants completed an abnormality detection task comprising movies presented for a range of durations, where half were abnormal. This was followed by an abnormality categorization task. Results: Across all durations, performance was high for detection, but less so for categorization, indicating that categorization was the more challenging task. Not surprisingly, echocardiographers outperformed naïve participants. Conclusions: Together, this suggests that echocardiographers have a finely tuned capability for detecting cardiac dysfunction, and that a great deal of visual information can be extracted during a global assessment, within a brief glance. No relationship was evident between experience and performance, which suggests that other factors, such as individual differences, need to be considered in future studies.
25
Lago MA, Sechopoulos I, Bochud FO, Eckstein MP. Measurement of the useful field of view for single slices of different imaging modalities and targets. J Med Imaging (Bellingham) 2020; 7:022411. [PMID: 32064303 PMCID: PMC7007584 DOI: 10.1117/1.jmi.7.2.022411] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Accepted: 01/17/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: With three-dimensional (3-D) images displayed as stacks of 2-D images, radiologists rely more heavily on vision away from their fixation point to visually process information, guide eye movements, and detect abnormalities. Thus, the ability to detect targets away from the fixation point, commonly characterized as the useful field of view (UFOV), becomes critical for these 3-D imaging modalities. We investigate how the UFOV, defined as the eccentricity at which detection performance degrades to a given probability, varies across imaging modalities and targets. Approach: We measure the detectability of different targets at various distances from gaze locations for single slices of liver computed tomography (CT), 2-D digital mammograms (DM), and single slices of digital breast tomosynthesis (DBT) cases. Observers with varying expertise were instructed to maintain their gaze at a point while the image was briefly flashed, and an eye tracker verified the observer's steady fixation. Display times were 200 and 1000 ms for CT images and 500 ms for DM and DBT images. Results: We find that the UFOV varies from 9 to 12 deg for liver CT to as small as 2.5 to 5 deg for calcification clusters in breast images (DM and DBT). We compare our results to those reported in the literature for lung nodules and discuss the differences across methods used to measure the UFOV and their dependence on case selection/task difficulty, viewing conditions, and observer expertise. We propose a complementary measure, defined in terms of performance degradation relative to peak foveal performance (relative-UFOV), to circumvent the UFOV's variation with case selection/task difficulty. Conclusion: Our results highlight the variation in the UFOV across imaging modalities, target types, observer expertise, and measurement methods, and suggest an additional relative-UFOV measure to more thoroughly characterize detection performance away from the point of fixation.
Affiliation(s)
- Miguel A. Lago
- University of California, Institute for Collaborative Biotechnologies, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- Ioannis Sechopoulos
- Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
- Dutch Expert Centre for Screening, Nijmegen, The Netherlands
- François O. Bochud
- University Hospital and University of Lausanne, Institute of Radiation Physics, Lausanne, Switzerland
- Miguel P. Eckstein
- University of California, Institute for Collaborative Biotechnologies, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
26
Sha LZ, Toh YN, Remington RW, Jiang YV. Perceptual learning in the identification of lung cancer in chest radiographs. Cogn Res Princ Implic 2020; 5:4. [PMID: 32016647 PMCID: PMC6997313 DOI: 10.1186/s41235-020-0208-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Accepted: 01/12/2020] [Indexed: 11/18/2022]
Abstract
Extensive research has shown that practice yields highly specific perceptual learning of simple visual properties such as orientation and contrast. Does this same learning characterize more complex perceptual skills? Here we investigated perceptual learning of complex medical images. Novices underwent training over four sessions to discriminate which of two chest radiographs contained a tumor and to indicate the location of the tumor. In training, one group received six repetitions of 30 normal/abnormal images, and the other received three repetitions of 60 normal/abnormal images. Groups were then tested on trained and novel images. To assess the nature of perceptual learning, test items were presented in three formats: the full image, the cutout of the tumor, or the background only. Performance improved across training sessions, and notably, the improvement transferred to the classification of novel images. Training with more repetitions on fewer images yielded transfer comparable to training with fewer repetitions on more images. Little transfer to novel images occurred when participants were tested with just the cutout of the cancer region or just the background, but a larger cutout that included both the cancer region and some surrounding regions yielded good transfer. Perceptual learning contributes to the acquisition of expertise in cancer image perception.
Affiliation(s)
- Li Z Sha
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA
- Yi Ni Toh
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA
- Roger W Remington
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA
- School of Psychology, University of Queensland, Brisbane, Australia
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA