51
Cox PH, Kravitz DJ, Mitroff SR. Great expectations: minor differences in initial instructions have a major impact on visual search in the absence of feedback. Cogn Res Princ Implic 2021; 6:19. [PMID: 33740159] [PMCID: PMC7975232] [DOI: 10.1186/s41235-021-00286-1]
Abstract
Professions such as radiology and aviation security screening that rely on visual search (the act of looking for targets among distractors) often cannot provide operators with immediate feedback, which can create situations where performance is largely driven by the searchers' own expectations. For example, if searchers do not expect relatively hard-to-spot targets to be present in a given search, they may find easy-to-spot targets but systematically quit searching before finding more difficult ones. Without feedback, searchers can create self-fulfilling prophecies in which they incorrectly reinforce initial biases (e.g., first assuming and then, perhaps wrongly, concluding that hard-to-spot targets are rare). In the current study, two groups of searchers completed an identical visual search task with just a single difference in their initial task instructions: those in the "high-expectation" condition were told that each trial could have one or two targets present (correctly implying no target-absent trials), and those in the "low-expectation" condition were told that each trial would have up to two targets (incorrectly implying there could be target-absent trials). Compared to the high-expectation group, the low-expectation group had a lower hit rate, a lower false alarm rate, and quit trials more quickly, consistent with a lower quitting threshold (i.e., less exhaustive searches) and a potentially higher target-present decision criterion. The expectation effect was present from the start and persisted across the experiment: despite exposure to the same true distribution of targets, the groups' performances remained divergent, driven primarily by the different subjective experiences arising from each group's self-fulfilling prophecy. The effects were limited to single-target trials, which provides insight into the mechanisms affected by the initial expectations set by the instructions. In sum, initial expectations can have dramatic influences: searchers who do not expect to find a target are less likely to find one because they are more likely to quit searching earlier.
Affiliation(s)
- Patrick H Cox
- Department of Psychological and Brain Sciences, The George Washington University, Washington, DC, USA
- Dwight J Kravitz
- Department of Psychological and Brain Sciences, The George Washington University, Washington, DC, USA
- Stephen R Mitroff
- Department of Psychological and Brain Sciences, The George Washington University, Washington, DC, USA
52
Burns CL, Taubert ST, Ward EC, McCarthy KA, Graham N. Speech-language therapists' perceptions of an eLearning program to support training in videofluoroscopic swallow studies. Int J Lang Commun Disord 2021; 56:257-270. [PMID: 33459451] [DOI: 10.1111/1460-6984.12594]
Abstract
BACKGROUND Speech-language therapists (SLTs) seek a range of educational opportunities for training in adult videofluoroscopic swallow studies (VFSS). However, variable training methods and/or unequal access to training can influence VFSS practice. AIMS To document current SLT needs and barriers to VFSS training, and to determine whether a new beginner-level VFSS eLearning program would help meet those training needs. The program incorporated multimedia modules on preparing, conducting, interpreting and reporting VFSS. METHODS & PROCEDURES SLTs with limited experience in adult VFSS completed surveys relating to VFSS training experience and barriers, and to perceived changes in knowledge, skills and confidence on core VFSS module topics pre- (n = 36) and post- (n = 32) eLearning training. OUTCOMES & RESULTS Inconsistent access to VFSS training opportunities and time-related work pressures were reported as the greatest training barriers. SLTs viewed the eLearning program as a suitable option for VFSS training. Post-training, participants reported gains in confidence, as well as improved knowledge and skills in all VFSS aspects, along with generalised benefits for dysphagia management. SLTs indicated that the key benefits of the eLearning program were its comprehensive content and self-directed learning with multimedia tools, which afforded theoretical and practical learning opportunities. CONCLUSIONS & IMPLICATIONS The eLearning program offered SLTs free access to beginner-level adult VFSS training, meeting many identified training needs and providing a foundation from which to develop further practical knowledge and skills within a VFSS clinic setting. What this paper adds What is already known on the subject SLTs demonstrate variable knowledge and skill in conducting and interpreting VFSS, which can impact dysphagia diagnosis and management. While access to VFSS training can be challenging, the barriers to training for SLTs have not been clearly documented. Research has confirmed that eLearning can be used effectively in healthcare education, and in some aspects of VFSS training; however, it has yet to be applied to address the broad range of VFSS training needs. What this paper adds to existing knowledge This study describes the SLT-reported barriers to VFSS training, which include limited access to formal and practical training, workload-related time pressures and the complexity of learning the VFSS skill set. The findings highlight that the eLearning program was an accepted mode of learning for VFSS training. SLTs reported that the online program met their learning needs by improving access to training, that the multimedia program features supported their understanding of complex anatomical and physiological concepts, and that training frameworks assisted their clinical reasoning and VFSS interpretation. What are the potential or actual clinical implications of this work? eLearning can assist in overcoming many VFSS training barriers identified by SLTs, and the multimedia aspects of eLearning can effectively support VFSS beginner-level education to complement and expedite in-clinic practical training. Given that VFSS results inform decisions regarding commencement and progression of oral intake and swallow rehabilitation, enhanced VFSS training has the potential to positively influence dysphagia outcomes and quality of life.
Affiliation(s)
- Clare L Burns
- Royal Brisbane & Women's Hospital, Metro North Hospital and Health Service, Brisbane, QLD, Australia
- School of Health & Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia
- Centre for Research in Telerehabilitation, The University of Queensland, Brisbane, QLD, Australia
- Shana T Taubert
- Royal Brisbane & Women's Hospital, Metro North Hospital and Health Service, Brisbane, QLD, Australia
- School of Health & Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia
- Elizabeth C Ward
- School of Health & Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia
- Centre for Research in Telerehabilitation, The University of Queensland, Brisbane, QLD, Australia
- Centre for Functioning and Health Research, Metro South Hospital and Health Service, Brisbane, QLD, Australia
- Kellie A McCarthy
- Princess Alexandra Hospital, Metro South Hospital and Health Service, Brisbane, QLD, Australia
- Nicola Graham
- Children's Health Queensland Hospital and Health Service, Brisbane, QLD, Australia
53
Papesh MH, Hout MC, Guevara Pinto JD, Robbins A, Lopez A. Eye movements reflect expertise development in hybrid search. Cogn Res Princ Implic 2021; 6:7. [PMID: 33587219] [PMCID: PMC7884546] [DOI: 10.1186/s41235-020-00269-8]
Abstract
Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled searches, such as athletics referees, or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experiences change behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0-3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.
Affiliation(s)
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Arryn Robbins
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Carthage College, Kenosha, WI, USA
- Alexis Lopez
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
54
Mondal SB, Achilefu S. Virtual and Augmented Reality Technologies in Molecular and Anatomical Imaging. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00066-1]
55
Chung R, Rosenkrantz AB, Shanbhogue KP. Expert radiologist review at a hepatobiliary multidisciplinary tumor board: impact on patient management. Abdom Radiol (NY) 2020; 45:3800-3808. [PMID: 32444889] [DOI: 10.1007/s00261-020-02587-3]
Abstract
PURPOSE To identify the frequency, source, and management impact of discrepancies between the initial radiology report and expert reinterpretation occurring in the context of a hepatobiliary multidisciplinary tumor board (MTB). METHODS This retrospective study included 974 consecutive patients discussed at a weekly MTB at a large tertiary care academic medical center over a 2-year period. A single radiologist with dedicated hepatobiliary imaging expertise attended all conferences to review and discuss the relevant liver imaging, and rated the concordance between the original reads and re-reads using RADPEER scoring criteria. Impact on management was based on the conference discussion and reflected changes in follow-up imaging, recommendations for biopsy/surgery, or liver transplant eligibility. RESULTS Image reinterpretation was discordant with the initial report in 19.9% (194/974) of cases (59.8%, 34.5%, and 5.7% RADPEER 2/3/4 discrepancies, respectively). A change in LI-RADS category occurred in 59.8% of discrepancies. The most common causes of discordance were re-classification of a lesion as benign rather than malignant (16.0%) and missed tumor recurrence (13.9%). Impact on management occurred in 99.0% of discordant cases and included loco-regional therapy instead of follow-up imaging (19.1%), follow-up imaging instead of treatment (17.5%), and avoidance of biopsy (12.4%). Of the discordant cases, 11.3% received OPTN exception scores due to the revised interpretation, and 8.8% were excluded from listing for orthotopic liver transplant. CONCLUSION Even in a sub-specialized abdominal imaging academic practice, expert radiologist review in the MTB setting identified discordant interpretations and impacted management in a substantial fraction of patients, potentially affecting transplant allocation. The findings may influence how abdominal imaging sections best staff advanced MTBs.
Affiliation(s)
- Ryan Chung
- Department of Radiology, NYU Langone Health, 660 First Ave, New York, NY, 10016, USA
- Andrew B Rosenkrantz
- Department of Radiology, NYU Langone Health, 660 First Ave, New York, NY, 10016, USA
- Krishna P Shanbhogue
- Department of Radiology, NYU Langone Health, 660 First Ave, New York, NY, 10016, USA
56
Alexander RG, Waite S, Macknik SL, Martinez-Conde S. What do radiologists look for? Advances and limitations of perceptual learning in radiologic search. J Vis 2020; 20:17. [PMID: 33057623] [PMCID: PMC7571277] [DOI: 10.1167/jov.20.10.17]
Abstract
Guided by training during residency programs, radiologists learn clinically relevant visual features by viewing thousands of medical images. Yet the precise visual features that expert radiologists use in their clinical practice remain unknown. Identifying such features would allow the development of perceptual learning training methods targeted at optimizing radiology training and reducing medical error. Here we review attempts to bridge current gaps in understanding, with a focus on computational saliency models that characterize and predict gaze behavior in radiologists. There have been great strides toward the accurate prediction of relevant medical information within images, thereby facilitating the development of novel computer-aided detection and diagnostic tools. In some cases, computational models have achieved sensitivity equivalent to that of radiologists, suggesting that we may be close to identifying the underlying visual representations that radiologists use. However, because the relevant bottom-up features vary across task contexts and imaging modalities, it will also be necessary to identify relevant top-down factors before perceptual expertise in radiology can be fully understood. Progress along these dimensions will improve the tools available for educating new generations of radiologists and aid in the detection of medically relevant information, ultimately improving patient health.
Affiliation(s)
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Stephen Waite
- Department of Radiology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Stephen L Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
- Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
57
Obuchowicz R, Oszust M, Piorkowski A. Interobserver variability in quality assessment of magnetic resonance images. BMC Med Imaging 2020; 20:109. [PMID: 32962651] [PMCID: PMC7509933] [DOI: 10.1186/s12880-020-00505-z]
Abstract
Background The perceptual quality of magnetic resonance (MR) images influences diagnosis and may compromise treatment. The purpose of this study was to evaluate how changes in image quality influence the interobserver variability of quality assessment. Methods For the variability evaluation, a dataset containing distorted MR images was prepared and then assessed by 31 experienced medical professionals (radiologists). Differences between observers were analyzed using Fleiss' kappa. However, since the kappa evaluates agreement among radiologists based on aggregated decisions, a typically employed criterion of image quality assessment (IQA) performance was used to provide a more thorough analysis. The IQA performance of radiologists was evaluated by comparing the Spearman correlation coefficients, ρ, between individual scores and the mean opinion scores (MOS) composed of the subjective opinions of the remaining professionals. Results The experiments show that there is significant agreement among radiologists (κ=0.12; 95% confidence interval [CI]: 0.118, 0.121; P<0.001) on the quality of the assessed images. The resulting κ is strongly affected by the subjectivity of the assigned scores, even when those scores are close. Therefore, ρ was used to identify poor performance cases and to confirm the consistency of the majority of collected scores (ρmean = 0.5706). The results for interns (ρmean = 0.6868) support the finding that the quality assessment of MR images can be successfully taught. Conclusions The agreement observed among radiologists from different imaging centers confirms the subjectivity of the perception of MR images. It was shown that the image content and severity of distortions affect the IQA. Furthermore, the study highlights the importance of the psychosomatic condition of the observers and their attitude.
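Fleiss' kappa, used in this study to quantify agreement among the 31 radiologists, can be computed directly from a table of per-image category counts. The sketch below is a generic illustration of the statistic, not the study's own code, and the small rating table is invented:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a ratings table: rows = rated images, cols = quality categories.

    counts[i][j] = number of raters who put image i into category j.
    Assumes every image was rated by the same number of raters.
    """
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item observed agreement P_i: fraction of rater pairs that agree on item i.
    P = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
         for row in counts]
    P_bar = sum(P) / n_items
    # Chance agreement P_e from the marginal category proportions.
    total = n_items * n_raters
    n_cats = len(counts[0])
    p_j = [sum(row[j] for row in counts) / total for j in range(n_cats)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 3 raters on 3 images -> kappa = 1.0
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))
```

A value near 0, as reported above (κ=0.12), indicates agreement only modestly above chance.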
Affiliation(s)
- Rafal Obuchowicz
- Department of Diagnostic Imaging, Jagiellonian University Medical College, Kopernika Street 19, Cracow, 31-501, Poland
- Mariusz Oszust
- Department of Computer and Control Engineering, Rzeszow University of Technology, Wincentego Pola 2, Rzeszow, 35-959, Poland
- Adam Piorkowski
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, Mickiewicza 30, Cracow, 30-059, Poland
58
Lee S, Choe EK, Kim SY, Kim HS, Park KJ, Kim D. Liver imaging features by convolutional neural network to predict the metachronous liver metastasis in stage I-III colorectal cancer patients based on preoperative abdominal CT scan. BMC Bioinformatics 2020; 21:382. [PMID: 32938394] [PMCID: PMC7495853] [DOI: 10.1186/s12859-020-03686-0]
Abstract
Background Introducing deep learning approaches to medical images has brought a large amount of previously un-decoded information into use in clinical research. Most work, however, has focused on the performance of prediction models for disease-related entities rather than on the clinical implications of the imaging features themselves. Here we analyzed liver imaging features of abdominal CT images collected from 2019 patients with stage I-III colorectal cancer (CRC) using a convolutional neural network (CNN) to elucidate their clinical implications from an oncological perspective. Results The CNN generated imaging features from the liver parenchyma. Dimensionality of the features was reduced by principal component analysis. We designed multiple prediction models for 5-year metachronous liver metastasis (5YLM) using combinations of clinical variables (age, sex, T stage, N stage) and top principal components (PCs), with logistic regression classification. The model using the first PC (PC1) plus clinical information had the highest performance (mean AUC = 0.747) for predicting 5YLM, compared to the model with clinical features alone (mean AUC = 0.709). PC1 was independently associated with 5YLM in multivariate analysis (beta = -3.831, P < 0.001). For the 5-year mortality rate, PC1 did not improve on the model with clinical features alone. Kaplan-Meier plots showed a significant difference between the low- and high-PC1 groups: 5YLM-free survival was 89.6% in the low-PC1 group and 95.9% in the high-PC1 group. In addition, PC1 had significant correlations with sex, body mass index, alcohol consumption, and fatty liver status. Conclusion The imaging features combined with clinical information improved prediction performance compared to the standard model using clinical information alone. The liver imaging features generated by the CNN may have the potential to predict liver metastasis: even when no liver metastases were present at the primary colectomy, liver imaging features captured characteristics predictive of metachronous liver metastasis.
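The AUC values compared above (0.747 vs. 0.709) measure how well a model ranks positive cases over negative ones. As a minimal generic illustration (not the authors' pipeline), AUC can be computed directly from scores and binary labels as the probability that a random positive case outscores a random negative one:

```python
def auc_from_scores(scores, labels):
    """Rank-based AUC: P(random positive case scores higher than random negative).

    labels are 0/1; a tie between a positive and a negative counts as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# All positives outrank all negatives -> AUC = 1.0
print(auc_from_scores([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))
```

An AUC of 0.5 corresponds to chance-level ranking, so the gain from 0.709 to 0.747 reflects additional ranking information contributed by PC1.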
Affiliation(s)
- Sangwoo Lee
- Division of Future Convergent, The Cyber University of Korea, Seoul, 03051, South Korea
- Eun Kyung Choe
- Department of Surgery, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, 06236, South Korea
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, B304 Richards Building, 3700 Hamilton Walk, Philadelphia, PA, 19104-6116, USA
- So Yeon Kim
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, B304 Richards Building, 3700 Hamilton Walk, Philadelphia, PA, 19104-6116, USA
- Department of Software and Computer Engineering, Ajou University, Suwon, 16499, South Korea
- Hua Sun Kim
- Department of Radiology, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Kyu Joo Park
- Department of Surgery, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Dokyoon Kim
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, B304 Richards Building, 3700 Hamilton Walk, Philadelphia, PA, 19104-6116, USA
- Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, 19104, USA
59
Jensen K, Hagemo G, Tingberg A, Steinfeldt-Reisse C, Mynarek GK, Rivero RJ, Fosse E, Martinsen AC. Evaluation of Image Quality for 7 Iterative Reconstruction Algorithms in Chest Computed Tomography Imaging: A Phantom Study. J Comput Assist Tomogr 2020; 44:673-680. [PMID: 32936576] [DOI: 10.1097/rct.0000000000001037]
Abstract
OBJECTIVES This study aimed to evaluate the image quality of 7 iterative reconstruction (IR) algorithms in comparison to filtered back-projection (FBP) algorithm. METHODS An anthropomorphic chest phantom was scanned on 4 computed tomography scanners and reconstructed with FBP and IR algorithms. Image quality of anatomical details-large/medium-sized pulmonary vessels, small pulmonary vessels, thoracic wall, and small and large lesions-was scored. Furthermore, general impression of noise, image contrast, and artifacts were evaluated. Visual grading regression was used to analyze the data. Standard deviations were measured, and the noise power spectrum was calculated. RESULTS Iterative reconstruction algorithms showed significantly better results when compared with FBP for these criteria (regression coefficients/P values in parentheses): vessels (FIRST: -1.8/0.05, AIDR Enhanced: <-2.3/0.01, Veo: <-0.1/0.03, ADMIRE: <-2.1/0.04), lesions (FIRST: <-2.6/0.01, AIDR Enhanced: <-1.9/0.03, IMR1: <-2.7/0.01, Veo: <-2.4/0.02, ADMIRE: -2.3/0.02), image noise (FIRST: <-3.2/0.004, AIDR Enhanced: <-3.5/0.002, IMR1: <-6.1/0.001, iDose: <-2.3/0.02, Veo: <-3.4/0.002, ADMIRE: <-3.5/0.02), image contrast (FIRST: -2.3/0.01, AIDR Enhanced: -2.5/0.01, IMR1: -3.7/0.001, iDose: -2.1/0.02), and artifacts (FIRST: <-3.8/0.004, AIDR Enhanced: <-2.7/0.02, IMR1: <-2.6/0.02, iDose: -2.1/0.04, Veo: -2.6/0.02). The iDose algorithm was the only IR algorithm that maintained the noise frequencies. CONCLUSIONS Iterative reconstruction algorithms performed differently on all evaluated criteria, showing the importance of careful implementation of algorithms for diagnostic purposes.
Affiliation(s)
- Guro Hagemo
- Department of Radiology and Nuclear Medicine, Radiumhospitalet, Oslo University Hospital, Oslo, Norway
- Anders Tingberg
- Department of Medical Radiation Physics, Lund University, Skåne University Hospital, Malmö, Sweden
- Georg Karl Mynarek
- Department of Radiology and Nuclear Medicine, Rikshospitalet, Oslo University Hospital
60
Sran PK, Gupta S, Singh S. Segmentation based image compression of brain magnetic resonance images using visual saliency. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102089]
61
Ba A, Shams M, Schmidt S, Eckstein MP, Verdun FR, Bochud FO. Search of low-contrast liver lesions in abdominal CT: the importance of scrolling behavior. J Med Imaging (Bellingham) 2020; 7:045501. [PMID: 32743016] [PMCID: PMC7380560] [DOI: 10.1117/1.jmi.7.4.045501]
Abstract
Purpose: Visual search using volumetric images is becoming the standard in medical imaging. However, we do not fully understand how eye movement strategies mediate diagnostic performance. A recent study on computed tomography (CT) images showed that the search strategies of radiologists could be classified, based on saccade amplitudes and cross-quadrant eye movements [eye movement index (EMI)], into two categories: drillers and scanners. Approach: We investigate how the number of times a radiologist scrolls in a given direction during analysis of the images (the number of courses) could serve as a supplementary variable for characterizing search strategies. We used a set of 15 normal liver CT images in which we inserted 1 to 5 hypodense metastases of two different signal contrast amplitudes. Twenty radiologists were asked to search for the metastases while their eye gaze was recorded by an eye-tracking device (EyeLink1000, SR Research Ltd., Mississauga, Ontario, Canada). Results: We found that categorizing radiologists based on the number of courses (rather than EMI) better predicted differences in decision times, percentage of image covered, and search error rates. Radiologists with a larger number of courses covered more volume in more time, found more metastases, and made fewer search errors than those with a lower number of courses. Our results suggest that the traditional definition of drillers and scanners could be expanded to include scrolling behavior. Drillers could be defined as scrolling back and forth through the image stack, each time exploring a different area on each image (low EMI and high number of courses). Scanners could be defined as scrolling progressively through the stack of images while focusing on different areas within each image slice (high EMI and low number of courses). Conclusions: Together, our results further the understanding of how radiologists investigate three-dimensional volumes and may improve the teaching of effective reading strategies to radiology residents.
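The "number of courses" variable counts how many monotone scrolling runs a reader makes through the slice stack. A hypothetical sketch of that count from a recorded sequence of slice indices (the function name and trace format are assumptions for illustration, not taken from the study):

```python
def count_courses(slice_trace):
    """Count monotone scrolling runs ('courses') in a sequence of slice indices.

    A new course starts whenever the scrolling direction changes; repeated
    slices (no movement) do not change the direction.
    """
    courses = 0
    direction = 0  # +1 = scrolling forward through the stack, -1 = backward
    for prev, cur in zip(slice_trace, slice_trace[1:]):
        step = (cur > prev) - (cur < prev)
        if step != 0 and step != direction:
            courses += 1
            direction = step
    return courses

# Forward, backward, then forward again -> 3 courses
print(count_courses([1, 2, 3, 4, 3, 2, 1, 2, 3]))
```

Under this definition, a driller-like reader who sweeps the stack repeatedly produces a high course count, while a scanner-like reader who makes one slow pass produces a low one.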
Affiliation(s)
- Alexandre Ba
- Lausanne University Hospital and University of Lausanne, Institute of Radiation Physics, Lausanne, Switzerland
- Marwa Shams
- University of Lausanne, Lausanne, Switzerland
- Sabine Schmidt
- Lausanne University Hospital and University of Lausanne, Department of Radiology, Lausanne, Switzerland
- Miguel P Eckstein
- University of California Santa Barbara, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- University of California Santa Barbara, Department of Electrical and Computing Engineering, Santa Barbara, California, United States
- Francis R Verdun
- Lausanne University Hospital and University of Lausanne, Institute of Radiation Physics, Lausanne, Switzerland
- François O Bochud
- Lausanne University Hospital and University of Lausanne, Institute of Radiation Physics, Lausanne, Switzerland
62
Ploran EJ, Soni S, Snellings JT, Rausch A, Rochelson B. The effect of perceptual decision-making on the interpretation of twin fetal heart rate tracings. J Matern Fetal Neonatal Med 2020; 35:2116-2121. [PMID: 32594812] [DOI: 10.1080/14767058.2020.1779695]
Abstract
Objective: Decision-making is an integrative process during which multiple sources of available evidence are combined into a singular response. Importantly, subconscious processes occur in perceptual decisions that may influence interpretations of visually displayed data such as fetal heart rate tracings (FHRT), which are typically presented together for twins. To examine the potential impact of subconscious perceptual influences on fetal well-being, differences in assessments of FHRTs for twin gestations presented singly or paired were evaluated for baseline fetal heart rate, variability, accelerations, decelerations, and overall concern. Study design: Obstetrical nurses (N = 27) assessed FHRTs from 20 twin gestations (each of which had at least one live birth with a 5-min Apgar <7) presented either on the same tracing or as singletons on separate tracings. Nurses were naïve to the fact that the tracings presented in the unpaired condition were the same as those presented in the paired condition. Assessments were then compared between the two conditions. Results: Each nurse completed ratings on five metrics for each of 20 twin gestations across two conditions (80 FHRT assessments, 400 metrics total per participant). The intraobserver impact of visual context was calculated as the frequency of changed opinions regarding an individual metric (e.g., variability) between the paired and unpaired contexts for each individual fetal heart rate. Assessments of variability (average Kappa = 0.59), decelerations (average Kappa = 0.34), and overall level of concern (average Kappa = 0.33) were moderately to heavily affected by viewing condition (unpaired vs. paired FHRT). Analysis of interobserver agreement using intraclass correlations (two-way random effects, absolute agreement) indicated poor agreement on unpaired assessments for both accelerations (ICC = 0.01, 95% CI −0.01 to 0.04) and decelerations (ICC = 0.22, 95% CI 0.15 to 0.33). These results were mirrored by poor agreement on paired assessments for both accelerations (ICC = 0.00, 95% CI −0.01 to 0.03) and decelerations (ICC = 0.27, 95% CI 0.19 to 0.39). There was moderate agreement on overall level of concern for unpaired assessments (ICC = 0.55, 95% CI 0.44 to 0.67) and near-moderate agreement for the paired condition (ICC = 0.45, 95% CI 0.34 to 0.58). Conclusions: The simultaneous presentation of fetal heart rate tracings in twin gestations introduces both intraobserver and interobserver variance in the interpretation of variability, accelerations, and decelerations, likely due to the influence of subconscious perceptual decision-making. This may theoretically affect outcomes in cases in which visual information is nuanced. More research is necessary to determine whether the standard protocol of simultaneous assessment of FHRT in twins is subliminally affected by perceptual decision-making.
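As an aside for readers unfamiliar with the agreement statistic used above, Cohen's kappa corrects raw percent agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch with made-up ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Expected agreement: product of the raters' marginal probabilities per category.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Two raters scoring four tracings as normal (0) or abnormal (1):
# raw agreement is 75%, chance agreement is 50%, so kappa = 0.5.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```

Kappa near 0 means agreement no better than chance, which is why the "average Kappa" values around 0.3-0.6 above indicate moderate context effects.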
Affiliation(s)
- Shelly Soni
- Maternal-Fetal Medicine, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Jackson T Snellings
- Department of Information Technology, Hofstra University, Hempstead, NY, USA
- Andrew Rausch
- Maternal-Fetal Medicine, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Burton Rochelson
- Maternal-Fetal Medicine, Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
63
Piccini D, Demesmaeker R, Heerfordt J, Yerly J, Di Sopra L, Masci PG, Schwitter J, Van De Ville D, Richiardi J, Kober T, Stuber M. Deep Learning to Automate Reference-Free Image Quality Assessment of Whole-Heart MR Images. Radiol Artif Intell 2020; 2:e190123. [PMID: 33937825 DOI: 10.1148/ryai.2020190123] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Revised: 03/03/2020] [Accepted: 03/11/2020] [Indexed: 11/11/2022]
Abstract
Purpose To develop and characterize an algorithm that mimics human expert visual assessment to quantitatively determine the quality of three-dimensional (3D) whole-heart MR images. Materials and Methods In this study, 3D whole-heart cardiac MRI scans from 424 participants (average age, 57 years ± 18 [standard deviation]; 66.5% men) were used to generate an image quality assessment algorithm. A deep convolutional neural network for image quality assessment (IQ-DCNN) was designed, trained, optimized, and cross-validated on a clinical database of 324 scans (training set). On a separate test set (100 scans), two hypotheses were tested: (a) that the algorithm can assess image quality in concordance with human expert assessment, as measured by human-machine correlation and intra- and interobserver agreement, and (b) that the IQ-DCNN algorithm may be used to monitor a compressed sensing reconstruction process in which image quality progressively improves. Weighted κ values, agreement and disagreement counts, and Krippendorff α reliability coefficients are reported. Results Regression performance of the IQ-DCNN was within the range of human intra- and interobserver agreement and in very good agreement with the human expert (R² = 0.78, κ = 0.67). The image quality assessment during compressed sensing reconstruction correlated with the cost function at each iteration and was successfully applied to rank the results in very good agreement with the human expert. Conclusion The proposed IQ-DCNN was trained to mimic expert visual image quality assessment of 3D whole-heart MR images. The results from the IQ-DCNN were in good agreement with human expert reading, and the network was capable of automatically comparing different reconstructed volumes. Supplemental material is available for this article. © RSNA, 2020.
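For ordinal scales such as image-quality grades, the weighted κ reported above discounts disagreements by their distance on the scale; quadratic weighting is a common choice. A rough illustrative sketch with invented grades, not the authors' code:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Weighted kappa with quadratic penalties: far-apart grades count more."""
    n = len(rater_a)
    w = lambda i, j: ((i - j) / (n_categories - 1)) ** 2
    # Observed weighted disagreement across paired ratings.
    observed = sum(w(a, b) for a, b in zip(rater_a, rater_b)) / n
    # Expected weighted disagreement from the two marginal distributions.
    pa = [rater_a.count(c) / n for c in range(n_categories)]
    pb = [rater_b.count(c) / n for c in range(n_categories)]
    expected = sum(w(i, j) * pa[i] * pb[j]
                   for i in range(n_categories) for j in range(n_categories))
    return 1 - observed / expected

# Quality grades 0-2 from a human reader and a hypothetical network:
# one one-step disagreement out of four items gives kappa of about 0.857.
print(quadratic_weighted_kappa([0, 2, 1, 0], [0, 2, 2, 0], 3))
```

With quadratic weights, a grade-0 vs. grade-2 disagreement is penalized four times as heavily as a grade-1 vs. grade-2 one, which suits graded quality judgments better than treating all disagreements equally.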
Affiliation(s)
- Davide Piccini, Robin Demesmaeker, John Heerfordt, Jérôme Yerly, Lorenzo Di Sopra, Pier Giorgio Masci, Juerg Schwitter, Dimitri Van De Ville, Jonas Richiardi, Tobias Kober, Matthias Stuber
- Advanced Clinical Imaging Technology, Siemens Healthcare, Lausanne, Switzerland (D.P., R.D., J.H., J.R., T.K.); Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Rue de Bugnon 46, BH 8.80, 1011 Lausanne, Switzerland (D.P., J.H., J.Y., L.D.S., J.R., T.K., M.S.); LTS5, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (D.P., J.R., T.K.); Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (R.D.); Institute of Bioengineering/Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland (R.D., D.V.D.V.); Center for Biomedical Imaging (CIBM), Lausanne, Switzerland (J.Y., M.S.); Division of Cardiology and Cardiac MR Center, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.G.M., J.S.); and Department of Radiology and Medical Informatics, University Hospital of Geneva (HUG), Geneva, Switzerland (D.V.D.V.)
64
Papanikolaou N, Matos C, Koh DM. How to develop a meaningful radiomic signature for clinical use in oncologic patients. Cancer Imaging 2020; 20:33. [PMID: 32357923 PMCID: PMC7195800 DOI: 10.1186/s40644-020-00311-4] [Citation(s) in RCA: 102] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Accepted: 04/15/2020] [Indexed: 01/08/2023] Open
Abstract
Over the last decade, there has been increasing use of quantitative methods in radiology in an effort to reduce the diagnostic variability associated with subjective radiological interpretation. Combined approaches, in which visual assessment by the radiologist is augmented by quantitative imaging biomarkers, are gaining attention. Advances in machine learning have given rise to radiomics, a new methodology referring to the extraction of quantitative information from medical images. Radiomics is based on the development of computational models, referred to as “Radiomic Signatures”, that try either to address unmet clinical needs, mostly in the field of oncologic imaging, or to compare radiomics performance with that of radiologists. However, in exploring this new technology, initial publications did not consider best practices in the field of machine learning, resulting in work of questionable clinical value. In this paper, we concentrate on how to avoid methodological mistakes and on the critical issues to consider in the workflow of developing clinically meaningful radiomic signatures.
Affiliation(s)
- Nikolaos Papanikolaou
- Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Av. Brasília, Doca de Pedrouços, 1400-038, Lisbon, Portugal.
- Celso Matos
- Department of Radiology, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Dow Mu Koh
- Department of Radiology, Royal Marsden Hospital, Sutton, UK
65
Gabrani-Juma H, Al Bimani Z, Zuckier LS, Klein R. Development and validation of the Lesion Synthesis Toolbox and the Perception Study Tool for quantifying observer limits of detection of lesions in positron emission tomography. J Med Imaging (Bellingham) 2020; 7:022412. [PMID: 32341935 DOI: 10.1117/1.jmi.7.2.022412] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 03/23/2020] [Indexed: 12/22/2022] Open
Abstract
Purpose: Accurate detection of cancer lesions in positron emission tomography (PET) is fundamental to achieving favorable clinical outcomes. Therefore, image reconstruction, processing, visualization, and interpretation techniques must be optimized for this task. The objective of this work was to (1) develop and validate an efficient method to generate well-characterized synthetic lesions in real patient data and (2) apply these lesions in a human perception experiment to establish baseline measurements of the limits of lesion detection as a function of lesion size and contrast using current imaging technologies. Approach: A fully integrated software package for synthesizing well-characterized lesions in real patient PET was developed using a vendor-provided PET image reconstruction toolbox (REGRECON5, General Electric Healthcare, Waukesha, Wisconsin). Lesion characteristics were validated experimentally for geometric accuracy, activity accuracy, and absence of artifacts. The Lesion Synthesis Toolbox was used to generate a library of 133 synthetic lesions of varying sizes (n = 7) and contrast levels (n = 19) in manually defined locations in the livers of 37 patient studies. A lesion-localization perception study was performed with seven observers to determine the limits of detection with regard to lesion size and contrast using our web-based perception study tool. Results: The Lesion Synthesis Toolbox was validated for accurate lesion placement and size. Lesion intensities were deemed accurate, with slightly elevated activities (5% at 2:1 lesion-to-background contrast) in small lesions (Ø = 15 mm spheres) and no bias in large lesions (Ø = 22.5 mm). Bed-stitching artifacts were not observed, and lesion attenuation correction bias was small (−1.6% ± 1.2%). The 133 liver lesions were synthesized in ~50 h, and readers were able to complete the perception study of these lesions in 12 ± 3 min with consistent limits of detection among all readers.
Conclusions: Our open-source utilities can be employed by nonexperts to generate well-characterized synthetic lesions in real patient PET images and for administering perception studies on clinical workstations without the need to install proprietary software.
Affiliation(s)
- Hanif Gabrani-Juma
- University of Ottawa, Division of Nuclear Medicine, Department of Medicine, Ottawa, Ontario, Canada; Carleton University, Department of Systems and Computer Engineering, Ottawa, Ontario, Canada
- Zamzam Al Bimani
- University of Ottawa, Division of Nuclear Medicine, Department of Medicine, Ottawa, Ontario, Canada
- Lionel S Zuckier
- University of Ottawa, Division of Nuclear Medicine, Department of Medicine, Ottawa, Ontario, Canada
- Ran Klein
- University of Ottawa, Division of Nuclear Medicine, Department of Medicine, Ottawa, Ontario, Canada; Carleton University, Department of Systems and Computer Engineering, Ottawa, Ontario, Canada
66
A deep learning tool for fully automated measurements of sagittal spinopelvic balance from X-ray images: performance evaluation. Eur Spine J 2020; 29:2295-2305. [DOI: 10.1007/s00586-020-06406-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 03/10/2020] [Accepted: 03/30/2020] [Indexed: 12/20/2022]
67
Abstract
In two experiments, we trained pigeons (Columba livia) to sort visual images (obtained by clinical myocardial perfusion imaging techniques) depicting different degrees of human cardiac dysfunction (myocardial hypoperfusion of the left ventricle) into normal and abnormal categories by providing food reward only after correct choice responses. Pigeons proved to be highly proficient at categorizing pseudo-colorized images as well as highly sensitive to the degree of the perfusion deficit depicted in the abnormal images. In later testing, the pigeons completely transferred discriminative responding to novel stimuli, demonstrating that they had fully learned the normal and abnormal categories. Yet, these pigeons failed to transfer discriminative responding to grayscale images containing no color information. We therefore trained a second cohort of pigeons to categorize grayscale image sets from the outset. These birds required substantially more training to achieve similar levels of performance. Yet, they too completely transferred discriminative responding to novel stimuli by relying on both global and local disparities in brightness between the normal and abnormal images. These results confirm that pseudo-colorization can enhance pigeons' categorization of human cardiac images, a result also found with human observers. Overall, our findings further document the potential of the pigeon as a useful aide in studies of medical image perception.
Affiliation(s)
- Victor M Navarro
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, 52242, USA
- Edward A Wasserman
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, 52242, USA
- Piotr Slomka
- Cedars-Sinai Medical Center, University of California, Los Angeles, CA, USA
68
Aresta G, Ferreira C, Pedrosa J, Araujo T, Rebelo J, Negrao E, Morgado M, Alves F, Cunha A, Ramos I, Campilho A. Automatic Lung Nodule Detection Combined With Gaze Information Improves Radiologists' Screening Performance. IEEE J Biomed Health Inform 2020; 24:2894-2901. [PMID: 32092022 DOI: 10.1109/jbhi.2020.2976150] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching for lung nodules is a highly complex task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information to integrate automatic detection systems into clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye-tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have shorter fixation periods in regions where detection errors occur. The overall detection sensitivity of the specialists was 0.67 ± 0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves detection performance to levels similar to those of two annotators. Filtering automatic detection candidates to low-fixation regions only still significantly improves detection sensitivity without increasing the number of false positives.
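The pooling of human and automatic findings described above amounts to taking the union of two detectors' hits; sensitivity is then the fraction of true nodules found by either. A toy sketch with invented nodule IDs (not the study's data):

```python
def sensitivity(found, true_nodules):
    """Fraction of true nodules detected (hits / total targets)."""
    return len(set(found) & set(true_nodules)) / len(true_nodules)

true_nodules = {1, 2, 3, 4, 5, 6}   # hypothetical ground-truth nodules
reader_hits = {1, 2, 3, 4}          # radiologist alone finds 4 of 6
cad_hits = {2, 3, 5, 6}             # automatic system alone finds 4 of 6
combined = reader_hits | cad_hits   # second-observer pooling

print(sensitivity(reader_hits, true_nodules))  # ~0.67
print(sensitivity(cad_hits, true_nodules))     # ~0.67
print(sensitivity(combined, true_nodules))     # 1.0
```

Because the two detectors miss different nodules, their union can approach the sensitivity of two human annotators, which is the effect the study reports.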
69
Sebelski CA, Hoogenboom BJ, Hayes AM, Held Bradford E, Wainwright SF, Huhn K. The Intersection of Movement and Clinical Reasoning: Embodying "Body as a Teacher" to Advance the Profession and Practice. Phys Ther 2020; 100:201-204. [PMID: 31595947 DOI: 10.1093/ptj/pzz137] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/09/2019] [Revised: 04/05/2019] [Accepted: 06/09/2019] [Indexed: 02/09/2023]
Affiliation(s)
- Chris A Sebelski
- Department of Physical Therapy and Athletic Training, Doisy College of Health Sciences, Saint Louis University, 3437 Caroline Mall Ste 1026, St Louis, MO 63104 (USA)
- Barbara J Hoogenboom
- Department of Physical Therapy, Grand Valley State University, Grand Rapids, Michigan
- Ann M Hayes
- Department of Physical Therapy and Athletic Training, Saint Louis University, St Louis, Missouri
- Susan F Wainwright
- Department of Physical Therapy, Thomas Jefferson University, Philadelphia, Pennsylvania
- Karen Huhn
- School of Physical Therapy, Husson University, Bangor, Maine
70
Sha LZ, Toh YN, Remington RW, Jiang YV. Perceptual learning in the identification of lung cancer in chest radiographs. Cogn Res Princ Implic 2020; 5:4. [PMID: 32016647 PMCID: PMC6997313 DOI: 10.1186/s41235-020-0208-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Accepted: 01/12/2020] [Indexed: 11/18/2022]
Abstract
Extensive research has shown that practice yields highly specific perceptual learning of simple visual properties such as orientation and contrast. Does this same learning characterize more complex perceptual skills? Here we investigated perceptual learning of complex medical images. Novices underwent training over four sessions to discriminate which of two chest radiographs contained a tumor and to indicate the location of the tumor. In training, one group received six repetitions of 30 normal/abnormal images, the other three repetitions of 60 normal/abnormal images. Groups were then tested on trained and novel images. To assess the nature of perceptual learning, test items were presented in three formats – the full image, the cutout of the tumor, or the background only. Performance improved across training sessions, and notably, the improvement transferred to the classification of novel images. Training with more repetitions on fewer images yielded comparable transfer to training with fewer repetitions on more images. Little transfer to novel images occurred when tested with just the cutout of the cancer region or just the background, but a larger cutout that included both the cancer region and some surrounding regions yielded good transfer. Perceptual learning contributes to the acquisition of expertise in cancer image perception.
Collapse
Affiliation(s)
- Li Z Sha
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA.
- Yi Ni Toh
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA
- Roger W Remington
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA; School of Psychology, University of Queensland, Brisbane, Australia
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, N240 Elliott Hall, 75 East River Road, Minneapolis, MN, 55455, USA
71
Eye movements during music reading: Toward a unified understanding of visual expertise. Psychol Learn Motiv 2020. [DOI: 10.1016/bs.plm.2020.07.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
72
Rasouli P, Dooghaie Moghadam A, Eslami P, Aghajanpoor Pasha M, Asadzadeh Aghdaei H, Mehrvar A, Nezami-Asl A, Iravani S, Sadeghi A, Zali MR. The role of artificial intelligence in colon polyps detection. Gastroenterol Hepatol Bed Bench 2020; 13:191-199. [PMID: 32821348 PMCID: PMC7417492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Over the past few decades, artificial intelligence (AI) has evolved dramatically and is believed to have a significant impact on all aspects of technology and daily life. The use of AI in healthcare systems has been growing rapidly, owing to the large amounts of data involved. Various AI methods, including machine learning, deep learning, and convolutional neural networks (CNNs), have been used in diagnostic imaging, helping physicians diagnose diseases accurately and determine appropriate treatments. The use and collection of huge numbers of digital images and medical records has, over time, led to the creation of big data. Currently, the diagnosis of various presentations in endoscopic procedures and imaging findings is handled solely by endoscopists. Moreover, AI has been shown to be highly effective in the field of gastroenterology in terms of diagnosis, prognosis, and image processing. This review discusses different aspects of AI use for the early detection and treatment of gastroenterological diseases.
Affiliation(s)
- Pezhman Rasouli
- Department of Computer, West Tehran Branch, Islamic Azad University, Tehran, Iran
- Arash Dooghaie Moghadam
- Gastroenterology and Liver Diseases Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Pegah Eslami
- Gastroenterology and Liver Diseases Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Morteza Aghajanpoor Pasha
- Gastroenterology and Hepatobiliary Research Center, AJA University of Medical Sciences, Tehran, Iran
- Hamid Asadzadeh Aghdaei
- Basic and Molecular Epidemiology of Gastrointestinal Disorders Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azim Mehrvar
- Basic and Molecular Epidemiology of Gastrointestinal Disorders Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Amir Nezami-Asl
- Research Center for Cancer Screening and Epidemiology, AJA University of Medical Sciences, Tehran, Iran
- Shahrokh Iravani
- Research Center for Cancer Screening and Epidemiology, AJA University of Medical Sciences, Tehran, Iran
- Amir Sadeghi
- Gastroenterology and Liver Diseases Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran. Correspondence: Amir Sadeghi, MD & Shahrokh Iravani, MD, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, and Gastroenterology and Hepatobiliary Research Center, Imam Reza (501) Hospital, AJA University of Medical Sciences, Tehran, Iran
- Mohammad Reza Zali
- Gastroenterology and Liver Diseases Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
Collapse
73
Waite S, Farooq Z, Grigorian A, Sistrom C, Kolla S, Mancuso A, Martinez-Conde S, Alexander RG, Kantor A, Macknik SL. A Review of Perceptual Expertise in Radiology-How it develops, How we can test it, and Why humans still matter in the era of Artificial Intelligence. Acad Radiol 2020; 27:26-38. [PMID: 31818384] [DOI: 10.1016/j.acra.2019.08.018]
Abstract
As the first step in image interpretation is detection, an error in perception can prematurely end the diagnostic process, leading to missed diagnoses. Because perceptual errors of this sort-"failure to detect"-are the most common interpretive error (and cause of litigation) in radiology, understanding the nature of perceptual expertise is essential in decreasing radiology's long-standing error rates. In this article, we review what constitutes a perceptual error, the existing models of radiologic image perception, the development of perceptual expertise and how it can be tested, perceptual learning methods in training radiologists, and why understanding perceptual expertise is still relevant in the era of artificial intelligence. Adding targeted interventions, such as perceptual learning, to existing teaching practices has the potential to enhance expertise and reduce medical error.
74
Chatelain P, Sharma H, Drukker L, Papageorghiou AT, Noble JA. Evaluation of Gaze Tracking Calibration for Longitudinal Biomedical Imaging Studies. IEEE Trans Cybern 2020; 50:153-163. [PMID: 30188843] [DOI: 10.1109/tcyb.2018.2866274]
Abstract
Gaze tracking is a promising technology for studying the visual perception of clinicians during image-based medical exams. It could be used in longitudinal studies to analyze their perceptive process, explore human-machine interactions, and develop innovative computer-aided imaging systems. However, using a remote eye tracker in an unconstrained environment and over time periods of weeks requires a certain guarantee of performance to ensure that collected gaze data are fit for purpose. We report the results of evaluating eye tracking calibration for longitudinal studies. First, we tested the performance of an eye tracker on a cohort of 13 users over a period of one month. For each participant, the eye tracker was calibrated during the first session. The participants were asked to sit in front of a monitor equipped with the eye tracker, but their position was not constrained. Second, we tested the performance of the eye tracker on sonographers positioned in front of a cart-based ultrasound scanner. Experimental results show a decrease of accuracy between calibration and later testing of 0.30° and a further degradation over time at a rate of 0.13° per month. The overall median accuracy was 1.00° (50.9 pixels) and the overall median precision was 0.16° (8.3 pixels). The results from the ultrasonography setting show a decrease of accuracy of 0.16° between calibration and later testing. This slow degradation of gaze tracking accuracy could impact the data quality in long-term studies. Therefore, the results we present here can help in planning such long-term gaze tracking studies.
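The reported drift figures lend themselves to a quick planning calculation. The sketch below combines the study's numbers (0.30° drop after calibration, 0.13° per month drift, and the roughly 50.9 px/° scale implied by the reported medians); the linear-drift model and all function names are assumptions for illustration, not from the paper:

```python
# Reported figures from the study (degrees of visual angle):
INITIAL_DROP_DEG = 0.30       # accuracy loss between calibration and later testing
DRIFT_DEG_PER_MONTH = 0.13    # further degradation per month
PX_PER_DEG = 50.9 / 1.00      # pixel scale implied by the reported 1.00 deg ~= 50.9 px

def added_gaze_error_deg(months: float) -> float:
    """Extra gaze error (degrees) expected `months` after a single calibration,
    assuming the degradation stays linear over the study horizon."""
    return INITIAL_DROP_DEG + DRIFT_DEG_PER_MONTH * months

def added_gaze_error_px(months: float) -> float:
    """The same estimate expressed in screen pixels."""
    return added_gaze_error_deg(months) * PX_PER_DEG
```

Under these assumptions, a protocol running six months past a single calibration would accumulate roughly 0.30 + 6 × 0.13 ≈ 1.08° of extra error, which is an argument for periodic recalibration in long-term studies.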
75
Mohamad Shahimin M, Razali A. An Eye Tracking Analysis on Diagnostic Performance of Digital Fundus Photography Images between Ophthalmologists and Optometrists. Int J Environ Res Public Health 2019; 17:ijerph17010030. [PMID: 31861457] [PMCID: PMC6982190] [DOI: 10.3390/ijerph17010030]
Abstract
To investigate the parameters of eye movement between ophthalmologists and optometrists while diagnosing digital fundus photographs, sixteen participants (eight ophthalmologists and eight optometrists) were recruited in this study. Every participant's eye movement during diagnosis of a randomized set of fundus photographs displayed on an eye tracker was recorded. Fixation metrics (duration, count and rate) and scan path patterns were extracted from the eye tracker. These parameters of eye movement and correct diagnosis score were compared between both groups. Correlation analyses between fixation metrics and correct diagnosis score were also performed. Although fixation metrics between ophthalmologists and optometrists were not statistically different (p > 0.05), these parameters were statistically different when compared between different areas of interest. Both participant groups had a similar correct diagnosis score. No correlation was found between fixation metrics and correct diagnosis score in either group, except for total fixation duration and ophthalmologists' diagnosis score of diabetic retinopathy photographs. The ophthalmologists' scan paths were simpler, with larger saccades, and were distributed at the middle region of the photographs. Conversely, optometrists' scan paths were extensive, with shorter saccades covering wider fundus areas, and were accumulated in some unrelated fundus areas. These findings indicated comparable efficiency and systematic visual search patterns between both groups. Understanding visual search strategy could expedite the creation of a novel training routine for interpretation of ophthalmic diagnostic imaging.
Affiliation(s)
- Mizhanim Mohamad Shahimin: Optometry & Vision Sciences Program, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur Campus, Jalan Raja Muda Abdul Aziz, Kuala Lumpur 50300, Malaysia (correspondence)
- Azalia Razali: Ophthalmology Department, Hospital Selayang, Lebuhraya Selayang-Kepong, Batu Caves, Selangor 68100, Malaysia
76
Majkowska A, Mittal S, Steiner DF, Reicher JJ, McKinney SM, Duggan GE, Eswaran K, Cameron Chen PH, Liu Y, Kalidindi SR, Ding A, Corrado GS, Tse D, Shetty S. Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation. Radiology 2019; 294:421-431. [PMID: 31793848] [DOI: 10.1148/radiol.2019191293]
Abstract
Background: Deep learning has the potential to augment the use of chest radiography in clinical radiology, but challenges include poor generalizability, spectrum bias, and difficulty comparing across studies. Purpose: To develop and evaluate deep learning models for chest radiograph interpretation by using radiologist-adjudicated reference standards. Materials and Methods: Deep learning models were developed to detect four findings (pneumothorax, opacity, nodule or mass, and fracture) on frontal chest radiographs. This retrospective study used two data sets: data set 1 (DS1) consisted of 759 611 images from a multicity hospital network, and ChestX-ray14 is a publicly available data set with 112 120 images. Natural language processing and expert review of a subset of images provided labels for 657 954 training images. Test sets consisted of 1818 and 1962 images from DS1 and ChestX-ray14, respectively. Reference standards were defined by radiologist-adjudicated image review. Performance was evaluated by area under the receiver operating characteristic curve analysis, sensitivity, specificity, and positive predictive value. Four radiologists reviewed test set images for performance comparison. Inverse probability weighting was applied to DS1 to account for positive radiograph enrichment and estimate population-level performance. Results: In DS1, population-adjusted areas under the receiver operating characteristic curve for pneumothorax, nodule or mass, airspace opacity, and fracture were, respectively, 0.95 (95% confidence interval [CI]: 0.91, 0.99), 0.72 (95% CI: 0.66, 0.77), 0.91 (95% CI: 0.88, 0.93), and 0.86 (95% CI: 0.79, 0.92). With ChestX-ray14, areas under the receiver operating characteristic curve were 0.94 (95% CI: 0.93, 0.96), 0.91 (95% CI: 0.89, 0.93), 0.94 (95% CI: 0.93, 0.95), and 0.81 (95% CI: 0.75, 0.86), respectively. Conclusion: Expert-level models for detecting clinically relevant chest radiograph findings were developed for this study by using adjudicated reference standards and with population-level performance estimation. Radiologist-adjudicated labels for 2412 ChestX-ray14 validation set images and 1962 test set images are provided. © RSNA, 2019. Online supplemental material is available for this article. See also the editorial by Chang in this issue.
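The population adjustment described in this abstract rests on inverse probability weighting: each case in the enriched test set is weighted by the reciprocal of its probability of being sampled, so that performance rates reflect the source population rather than the positive-enriched test set. A minimal sketch of that general idea (function and variable names are illustrative, not the authors' code):

```python
def ipw_sensitivity_specificity(labels, preds, sampling_probs):
    """Population-adjusted sensitivity and specificity via inverse
    probability weighting.

    labels         -- true finding present (1) or absent (0) per case
    preds          -- model output per case (1 = finding flagged)
    sampling_probs -- probability each case had of being drawn into the
                      enriched test set; its reciprocal is the IPW weight
    """
    tp = fp = tn = fn = 0.0
    for y, yhat, p in zip(labels, preds, sampling_probs):
        w = 1.0 / p                  # inverse probability weight
        if y == 1 and yhat == 1:
            tp += w
        elif y == 1:
            fn += w
        elif yhat == 1:
            fp += w
        else:
            tn += w
    return tp / (tp + fn), tn / (tn + fp)
```

If, say, positive radiographs were oversampled at probability 0.5 while negatives were drawn at 0.1, the weighted counts undo that 5x enrichment before the rates are computed.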
Affiliation(s)
- Anna Majkowska, Sid Mittal, David F Steiner, Scott Mayer McKinney, Gavin E Duggan, Krish Eswaran, Po-Hsuan Cameron Chen, Yun Liu, Greg S Corrado, Daniel Tse, Shravya Shetty: Google Health, 1600 Amphitheatre Pkwy, Mountain View, CA 94043
- Joshua J Reicher: Stanford Healthcare and Palo Alto Veterans Affairs, Palo Alto, Calif
- Sreenivasa Raju Kalidindi: Apollo Radiology International, Hyderabad, India
- Alexander Ding: California Advanced Imaging, Novato, Calif
77
Alshabibi AS, Suleiman ME, Tapia KA, Brennan PC. Effects of time of day on radiological interpretation. Clin Radiol 2019; 75:148-155. [PMID: 31699432] [DOI: 10.1016/j.crad.2019.10.006]
Abstract
Accurate interpretation of radiological images involves a complex visual search that relies on several cognitive processes, including selective attention, working memory, and decision-making. Patient outcomes often depend on the accuracy of image interpretations, yet research has revealed that conclusions vary significantly from one radiologist to another. A myriad of factors, including the radiologist's level of experience and fatigue, has been shown to contribute to the likelihood of interpretative errors and discrepancies, and these factors are well reported elsewhere. A potentially important factor that has received little previous consideration, however, is how radiologists' interpretations might be affected by the time of day at which the reading takes place, a factor that other disciplines have shown to be a determinant of competency. The available literature shows that while the time of day significantly affects some cognitive functions that likely relate to reading competence, including selective visual attention and visual working memory, little is known about its impact on radiology interpretation performance. This review explores the evidence regarding the relationship between time of day and performance, with a particular emphasis on radiological activities.
Affiliation(s)
- A S Alshabibi, M E Suleiman, K A Tapia, P C Brennan: Faculty of Health Sciences, Medical Radiation Sciences, University of Sydney, New South Wales, Australia
78
Ashworth J, Thompson J, Mercer C. Learning to look: Evaluating the student experience of an interactive image appraisal activity. Radiography (Lond) 2019; 25:314-319. [DOI: 10.1016/j.radi.2019.02.011]
79
Hani S, Chalouhi G, Lakissian Z, Sharara-Chami R. Introduction of Ultrasound Simulation in Medical Education: Exploratory Study. JMIR Med Educ 2019; 5:e13568. [PMID: 31573944] [PMCID: PMC6787531] [DOI: 10.2196/13568]
Abstract
BACKGROUND Ultrasound is ubiquitous across all disciplines of medicine; it is one of the most commonly used noninvasive, painless diagnostic tools. However, not many clinicians are sufficiently educated and trained in its use. Ultrasound requires not only theoretical knowledge but also extensive practical experience. The simulated setting offers the safest environment for health care professionals to learn and practice using ultrasound. OBJECTIVE This study aimed to (1) assess health care professionals' need for and enthusiasm toward practicing using ultrasound via simulation and (2) gauge their perception and acceptance of simulation as an integral element of ultrasound education in medical curricula. METHODS A day-long intervention was organized at the American University of Beirut Medical Center (AUBMC) to provide a free-of-charge interactive ultrasound simulation workshop-using the CAE Vimedix high-fidelity simulator-for health care providers, including physicians, nurses, ultrasound technicians, residents, and medical students. Following the intervention, attendees completed an evaluation comprising 4 demographic questions and 16 closed-ended questions on a three-point Likert scale (agree, neutral, disagree). The results presented are based on this evaluation form. RESULTS A total of 41 participants attended the workshop (46% [19/41] physicians, 30% [12/41] residents, 19% [8/41] sonographers, and 5% [2/41] medical students), mostly from AUBMC (88%, 36/41), with an average experience of 2.27 (SD 3.45) years and 30 (SD 46) scans per attendee. Moreover, 15 out of 41 (36%) participants were from obstetrics and gynecology, 11 (27%) from internal medicine, 4 (10%) from pediatrics, 4 (10%) from emergency medicine, 2 (5%) from surgery and family medicine, and 5 (12%) were technicians. The majority of participants agreed that the simulation provided a realistic setting (98%, 40/41) and that it allowed for training and identification of pathologies (88%, 36/41).
Furthermore, 100% (41/41) of the participants agreed that it should be part of the curriculum either in medical school or residency, and most of the participants approved it for training (98%, 40/41) and teaching (98%, 40/41). CONCLUSIONS All attendees were satisfied with the intervention. There was a positive perception toward the use of simulation for training and teaching medical students and residents in using ultrasound, and there was a definite need and enthusiasm for its integration into curricula. Simulation offers an avenue not only for teaching but also for practicing the ultrasound technology by both medical students and health care providers.
Affiliation(s)
- Selim Hani: Department of Industrial Engineering, American University of Beirut, Beirut, Lebanon
- Gihad Chalouhi: International Society of Ultrasound in Obstetrics and Gynecology, London, United Kingdom; SimECHOLE, Paris, France; Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, American University of Beirut Medical Center, Beirut, Lebanon
- Zavi Lakissian: Simulation Program, Faculty of Medicine, American University of Beirut Medical Center, Beirut, Lebanon
- Rana Sharara-Chami: Department of Pediatrics and Adolescent Medicine, American University of Beirut Medical Center, Beirut, Lebanon
80
Shaw CB, Foster BH, Borgese M, Boutin RD, Bateni C, Boonsri P, Bayne CO, Szabo RM, Nayak KS, Chaudhari AJ. Real-time three-dimensional MRI for the assessment of dynamic carpal instability. PLoS One 2019; 14:e0222704. [PMID: 31536561] [PMCID: PMC6752861] [DOI: 10.1371/journal.pone.0222704]
Abstract
Background Carpal instability is defined as a condition where wrist motion and/or loading creates mechanical dysfunction, resulting in weakness, pain and decreased function. When conventional methods do not identify the instability patterns, yet clinical signs of instability exist, the diagnosis of dynamic instability is often suggested to describe carpal derangement manifested only during the wrist’s active motion or stress. We addressed the question: can advanced MRI techniques provide quantitative means to evaluate dynamic carpal instability and supplement standard static MRI acquisition? Our objectives were to (i) develop a real-time, three-dimensional MRI method to image the carpal joints during their active, uninterrupted motion; and (ii) demonstrate feasibility of the method for assessing metrics relevant to dynamic carpal instability, thus overcoming limitations of standard MRI. Methods Twenty wrists (bilateral wrists of ten healthy participants) were scanned during radial-ulnar deviation and clenched-fist maneuvers. Images resulting from two real-time MRI pulse sequences, four sparse data-acquisition schemes, and three constrained image reconstruction techniques were compared. Image quality was assessed via blinded scoring by three radiologists and quantitative imaging metrics. Results Real-time MRI data-acquisition employing sparse radial sampling with a gradient-recalled-echo acquisition and constrained iterative reconstruction appeared to provide a practical tradeoff between imaging speed (temporal resolution up to 135 ms per slice) and image quality. The method effectively reduced streaking artifacts arising from data undersampling and enabled the derivation of quantitative measures pertinent to evaluating dynamic carpal instability. Conclusion This study demonstrates that real-time, three-dimensional MRI of the moving wrist is feasible and may be useful for the evaluation of dynamic carpal instability.
Affiliation(s)
- Calvin B. Shaw, Marissa Borgese, Robert D. Boutin, Cyrus Bateni, Pattira Boonsri, Abhijit J. Chaudhari: Department of Radiology, University of California Davis, Sacramento, California, United States of America
- Brent H. Foster: Department of Biomedical Engineering, University of California Davis, Davis, California, United States of America
- Christopher O. Bayne, Robert M. Szabo: Department of Orthopaedic Surgery, University of California Davis, Sacramento, California, United States of America
- Krishna S. Nayak: Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, California, United States of America
81
Affiliation(s)
- Stephen R. Mitroff: Department of Psychology, The George Washington University, Washington, DC, USA
82
Abstract
Breast cancer is the most common cancer among females worldwide and large volumes of breast images are produced and interpreted annually. As long as radiologists interpret these images, the diagnostic accuracy will be limited by human factors and both false-positive and false-negative errors might occur. By understanding visual search in breast images, we may be able to identify causes of diagnostic errors, find ways to reduce them, and also provide a better education to radiology residents. Many visual search studies in breast radiology have been devoted to mammography. These studies showed that 70% of missed lesions on mammograms attract radiologists' visual attention and that a plethora of different reasons, such as satisfaction of search, incorrect background sampling, and incorrect first impression can cause diagnostic errors in the interpretation of mammograms. Recently, highly accurate tools, which rely on both eye-tracking data and the content of the mammogram, have been proposed to provide feedback to the radiologists. Improving these tools and determining the optimal pathway to integrate them in the radiology workflow could be a possible line of future research. Moreover, in the past few years deep learning has led to improving diagnostic accuracy of computerized diagnostic tools and visual search studies will be required to understand how radiologists interact with the prompts from these tools, and to identify the best way to utilize them. Visual search in other breast imaging modalities, such as breast ultrasound and digital breast tomosynthesis, have so far received less attention, probably due to associated complexities of eye-tracking monitoring and analysing the data. For example, in digital breast tomosynthesis, scrolling through the image results in longer trials, adds a new factor to the study's complexity and makes calculation of gaze parameters more difficult. 
However, considering the wide utilization of three-dimensional imaging modalities, more visual search studies involving reading stack-view examinations are required in the future. To conclude, over the past few decades visual search studies have provided an extensive understanding of the underlying reasons for diagnostic errors in breast radiology and have characterized differences between experts' and novices' visual search patterns. Further visual search studies are required to investigate radiologists' interaction with relatively newer imaging modalities and artificial intelligence tools.
Affiliation(s)
- Ziba Gandomkar: BreastScreen Reader Assessment Strategy (BREAST), Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Claudia Mello-Thoms: Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, IA, US
83
Patel SH, Stanton CL, Miller SG, Patrie JT, Itri JN, Shepherd TM. Risk Factors for Perceptual-versus-Interpretative Errors in Diagnostic Neuroradiology. AJNR Am J Neuroradiol 2019; 40:1252-1256. [PMID: 31296527] [DOI: 10.3174/ajnr.a6125]
Abstract
BACKGROUND AND PURPOSE Diagnostic errors in radiology are classified as perception or interpretation errors. This study determined whether specific conditions differed when perception or interpretation errors occurred during neuroradiology image interpretation. MATERIALS AND METHODS In a sample of 254 clinical error cases in diagnostic neuroradiology, we classified errors as perception or interpretation errors, then characterized imaging technique, interpreting radiologist's experience, anatomic location of the abnormality, disease etiology, time of day, and day of the week. Interpretation and perception errors were compared with hours worked per shift, cases read per shift, average cases read per shift hour, and the order of case during the shift when the error occurred. RESULTS Perception and interpretation errors were 74.8% (n = 190) and 25.2% (n = 64) of errors, respectively. Logistic regression analyses showed that the odds of an interpretation error were 2 times greater (OR, 2.09; 95% CI, 1.05-4.15; P = .04) for neuroradiology attending physicians with ≤5 years of experience. Interpretation errors were more likely with MR imaging compared with CT (OR, 2.10; 95% CI, 1.09-4.01; P = .03). Infectious/inflammatory/autoimmune diseases were more frequently associated with interpretation errors (P = .04). Perception errors were associated with faster reading rates (6.01 versus 5.03 cases read per hour; P = .004) and occurred later during the shift (24th-versus-18th case; P = .04). CONCLUSIONS Among diagnostic neuroradiology error cases, interpretation-versus-perception errors are affected by the neuroradiologist's experience, technique, and the volume and rate of cases read. Recognition of these risk factors may help guide programs for error reduction in clinical neuroradiology services.
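The odds ratios and confidence intervals reported in this abstract follow directly from fitted logistic-regression coefficients. A small sketch of that standard conversion (generic statistics, not the authors' analysis code; the function name is illustrative):

```python
import math

def odds_ratio_with_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logistic-regression coefficient `beta` and its standard
    error `se` into an odds ratio with a Wald 95% confidence interval
    (z = 1.96 for a two-sided 95% interval)."""
    return (math.exp(beta),          # point estimate of the odds ratio
            math.exp(beta - z * se), # lower CI bound
            math.exp(beta + z * se)) # upper CI bound
```

For example, the reported OR of 2.09 (95% CI 1.05-4.15) corresponds to a coefficient of ln 2.09 ≈ 0.74 with a standard error of about 0.35.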
Affiliation(s)
- S H Patel: From the Departments of Radiology and Medical Imaging (S.H.P.)
- C L Stanton, S G Miller: Department of Radiology (C.L.S., S.G.M., T.M.S.), New York University Langone Medical Center, New York, New York
- J T Patrie: Public Health Sciences (J.T.P.), University of Virginia Health System, Charlottesville, Virginia
- J N Itri: Department of Radiology (J.N.I.), Wake Forest Baptist Health, Winston-Salem, North Carolina
- T M Shepherd: Department of Radiology, New York University Langone Medical Center, New York, New York; Center for Advanced Imaging Innovation and Research (T.M.S.), New York, New York
84
Rundo L, Tangherloni A, Cazzaniga P, Nobile MS, Russo G, Gilardi MC, Vitabile S, Mauri G, Besozzi D, Militello C. A novel framework for MR image segmentation and quantification by using MedGA. Comput Methods Programs Biomed 2019; 176:159-172. [PMID: 31200903] [DOI: 10.1016/j.cmpb.2019.04.016]
Abstract
BACKGROUND AND OBJECTIVES Image segmentation represents one of the most challenging issues in medical image analysis to distinguish among different adjacent tissues in a body part. In this context, appropriate image pre-processing tools can improve the result accuracy achieved by computer-assisted segmentation methods. Taking into consideration images with a bimodal intensity distribution, image binarization can be used to classify the input pictorial data into two classes, given a threshold intensity value. Unfortunately, adaptive thresholding techniques for two-class segmentation work properly only for images characterized by bimodal histograms. We aim at overcoming these limitations and automatically determining a suitable optimal threshold for bimodal Magnetic Resonance (MR) images, by designing an intelligent image analysis framework tailored to effectively assist the physicians during their decision-making tasks. METHODS In this work, we present a novel evolutionary framework for image enhancement, automatic global thresholding, and segmentation, which is here applied to different clinical scenarios involving bimodal MR image analysis: (i) uterine fibroid segmentation in MR guided Focused Ultrasound Surgery, and (ii) brain metastatic cancer segmentation in neuro-radiosurgery therapy. Our framework exploits MedGA as a pre-processing stage. MedGA is an image enhancement method based on Genetic Algorithms that improves the threshold selection, obtained by the efficient Iterative Optimal Threshold Selection algorithm, between the underlying sub-distributions in a nearly bimodal histogram. 
RESULTS The results achieved by the proposed evolutionary framework were quantitatively evaluated, showing that the use of MedGA as a pre-processing stage outperforms the conventional image enhancement methods (i.e., histogram equalization, bi-histogram equalization, Gamma transformation, and sigmoid transformation), in terms of both MR image enhancement and segmentation evaluation metrics. CONCLUSIONS Thanks to this framework, MR image segmentation accuracy is considerably increased, allowing for measurement repeatability in clinical workflows. The proposed computational solution could be well-suited for other clinical contexts requiring MR image analysis and segmentation, aiming at providing useful insights for differential diagnosis and prognosis.
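The "Iterative Optimal Threshold Selection" step mentioned above is, in essence, the classic Ridler–Calvard iteration for nearly bimodal histograms. A minimal sketch of that general idea (not the authors' implementation):

```python
import numpy as np

def iterative_optimal_threshold(image, eps=0.5, max_iter=100):
    """Ridler-Calvard iteration: repeatedly set the threshold to the
    midpoint of the two class means until it stabilises. Assumes the
    intensity histogram is roughly bimodal."""
    pixels = np.asarray(image, dtype=float).ravel()
    t = pixels.mean()  # initial guess: the global mean intensity
    for _ in range(max_iter):
        low, high = pixels[pixels <= t], pixels[pixels > t]
        if low.size == 0 or high.size == 0:
            break  # degenerate split; keep the current threshold
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
    return t

# Toy bimodal image: two well-separated intensity clusters at 40 and 200
img = np.concatenate([np.full(500, 40.0), np.full(500, 200.0)])
t = iterative_optimal_threshold(img)
```

On this toy input the iteration settles at the midpoint between the two class means; MedGA's contribution, per the abstract, is enhancing the image first so that this kind of global threshold separates the sub-distributions more cleanly.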
Affiliation(s)
- Leonardo Rundo
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy; Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, Cambridge, UK.
- Andrea Tangherloni
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; Department of Haematology, University of Cambridge, Cambridge, UK; Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Hinxton, UK.
- Paolo Cazzaniga
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
- Marco S Nobile
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
- Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
- Maria Carla Gilardi
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics, University of Palermo, Palermo, Italy.
- Giancarlo Mauri
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
- Daniela Besozzi
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy.
- Carmelo Militello
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
85. Waite S, Grigorian A, Alexander RG, Macknik SL, Carrasco M, Heeger DJ, Martinez-Conde S. Analysis of Perceptual Expertise in Radiology - Current Knowledge and a New Perspective. Front Hum Neurosci 2019; 13:213. [PMID: 31293407] [PMCID: PMC6603246] [DOI: 10.3389/fnhum.2019.00213]
Abstract
Radiologists rely principally on visual inspection to detect, describe, and classify findings in medical images. As most interpretive errors in radiology are perceptual in nature, understanding the path to radiologic expertise during image analysis is essential to educate future generations of radiologists. We review the perceptual tasks and challenges in radiologic diagnosis, discuss models of radiologic image perception, consider the application of perceptual learning methods in medical training, and suggest a new approach to understanding perceptual expertise. Specific principled enhancements to educational practices in radiology promise to deepen perceptual expertise among radiologists with the goal of improving training and reducing medical error.
Affiliation(s)
- Stephen Waite
- Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Arkadij Grigorian
- Department of Radiology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Robert G. Alexander
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Stephen L. Macknik
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
- Marisa Carrasco
- Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- David J. Heeger
- Department of Psychology and Center for Neural Science, New York University, New York, NY, United States
- Susana Martinez-Conde
- Departments of Ophthalmology, Neurology, and Physiology/Pharmacology, SUNY Downstate Medical Center, Brooklyn, NY, United States
86. Wu CC, Wolfe JM. Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future. Vision (Basel) 2019; 3:E32. [PMID: 31735833] [PMCID: PMC6802791] [DOI: 10.3390/vision3020032]
Abstract
The eye movements of experts, reading medical images, have been studied for many years. Unlike topics such as face perception, medical image perception research needs to cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because they simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical 'scenes' and we discuss how tracking experts' eyes may provide useful insights for medical education and screening efficiency.
Affiliation(s)
- Chia-Chien Wu
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Jeremy M. Wolfe
- Visual Attention Lab, Department of Surgery, Brigham & Women’s Hospital, 65 Landsdowne St, Cambridge, MA 02139, USA
- Department of Radiology, Harvard Medical School, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
87. McWade MA, Thomas G, Nguyen JQ, Sanders ME, Solórzano CC, Mahadevan-Jansen A. Enhancing Parathyroid Gland Visualization Using a Near Infrared Fluorescence-Based Overlay Imaging System. J Am Coll Surg 2019; 228:730-743. [PMID: 30769112] [PMCID: PMC6487208] [DOI: 10.1016/j.jamcollsurg.2019.01.017]
Abstract
BACKGROUND Misidentifying parathyroid glands (PGs) during thyroidectomies or parathyroidectomies could significantly increase postoperative morbidity. Imaging systems based on near infrared autofluorescence (NIRAF) detection can localize PGs with high accuracy. These devices, however, depict NIRAF images on remote display monitors, where images lack spatial context and comparability with actual surgical field of view. In this study, we designed an overlay tissue imaging system (OTIS) that detects tissue NIRAF and back-projects the collected signal as a visible image directly onto the surgical field of view instead of a display monitor, and tested its ability for enhancing parathyroid visualization. STUDY DESIGN The OTIS was first calibrated with a fluorescent ink grid and initially tested with parathyroid, thyroid, and lymph node tissues ex vivo. For in vivo measurements, the surgeon's opinion on tissue of interest was first ascertained. After the surgeon looked away, the OTIS back-projected visible green light directly onto the tissue of interest, only if the device detected relatively high NIRAF as observed in PGs. System accuracy was determined by correlating NIRAF projection with surgeon's visual confirmation for in situ PGs or histopathology report for excised PGs. RESULTS The OTIS yielded 100% accuracy when tested ex vivo with parathyroid, thyroid, and lymph node specimens. Subsequently, the device was evaluated in 30 patients who underwent thyroidectomy and/or parathyroidectomy. Ninety-seven percent of exposed tissue of interest was visualized correctly as PGs by the OTIS, without requiring display monitors or contrast agents. CONCLUSIONS Although OTIS holds novel potential for enhancing label-free parathyroid visualization directly within the surgical field of view, additional device optimization is required for eventual clinical use.
Affiliation(s)
- Melanie A McWade
- Vanderbilt Biophotonics Center, Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
- Giju Thomas
- Vanderbilt Biophotonics Center, Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
- John Q Nguyen
- Vanderbilt Biophotonics Center, Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
- Melinda E Sanders
- Department of Pathology, Vanderbilt University Medical Center, Nashville, TN
- Carmen C Solórzano
- Division of Surgical Oncology and Endocrine Surgery, Vanderbilt University Medical Center, Nashville, TN
- Anita Mahadevan-Jansen
- Vanderbilt Biophotonics Center, Department of Biomedical Engineering, Vanderbilt University, Nashville, TN.
88. Clarke EL, Brettle D, Sykes A, Wright A, Boden A, Treanor D. Development and Evaluation of a Novel Point-of-Use Quality Assurance Tool for Digital Pathology. Arch Pathol Lab Med 2019; 143:1246-1255. [DOI: 10.5858/arpa.2018-0210-oa]
Abstract
Context.—
Flexible working at diverse or remote sites is a major advantage when reporting using digital pathology, but currently there is no method to validate the clinical diagnostic setting within digital microscopy.
Objective.—
To develop a preliminary Point-of-Use Quality Assurance (POUQA) tool designed specifically to validate the diagnostic setting for digital microscopy.
Design.—
We based the POUQA tool on the red, green, and blue (RGB) values of hematoxylin-eosin. The tool used 144 hematoxylin-eosin–colored, 5×5-cm patches, each with a superimposed random letter whose RGB values were subtly lighter than the background color, at differing levels of difficulty. We performed an initial evaluation across 3 phases within 2 pathology departments: 1 in the United Kingdom and 1 in Sweden.
Results.—
In total, 53 experiments were conducted across all phases resulting in 7632 test images viewed in all. Results indicated that the display, the user's visual system, and the environment each independently impacted performance. Performance was improved with reduction in natural light and through use of medical-grade displays.
Conclusions.—
The use of a POUQA tool for digital microscopy is essential to afford flexible working while ensuring patient safety. The color-contrast test provides a standardized method of comparing diagnostic settings for digital microscopy. With further planned development, the color-contrast test may be used to create a “Verified Login” for diagnostic setting validation.
Affiliation(s)
- Emily L. Clarke, David Brettle, Alexander Sykes, Alexander Wright, Anna Boden
- From the Section of Pathology and Tumour Biology, University of Leeds, Leeds, United Kingdom (Dr Clarke, Mr Sykes, Mr Wright, and Dr Treanor); Histopathology Department (Drs Clarke and Treanor) and Medical Physics Department (Dr Brettle), Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom; and the Division of Neurobiology, Department of Clinical and Experimental Medicine, Faculty of Health
89. Ventura-Alfaro CE. [Measurement errors in screening mammogram interpretation by radiologists]. Rev Salud Publica (Bogota) 2019; 20:518-522. [PMID: 30843990] [DOI: 10.15446/rsap.v20n4.52035]
Abstract
The timely detection of breast cancer is achieved through mammography; however, the quality of the procedure must be addressed to ensure proper performance and interpretation. Despite recent improvements in quality assurance in mammography, interpretation still depends on each reader; therefore, errors can be made when interpreting screening mammograms, leading to unnecessary biopsies and/or overdiagnosis, with sustained physical, economic and psychological consequences. Since interpretation is related to the perceptive and cognitive ability of the radiologist, it is necessary to have extensive knowledge of the possible errors that may occur during interpretation, as well as of how they can be reduced, prevented and/or corrected, to provide the patient with the highest possible level of safety.
Affiliation(s)
- Carmelita E Ventura-Alfaro
- MD; MSc in Sciences with a concentration in Health Economics; PhD in Sciences with a concentration in Epidemiology. Instituto Mexicano del Seguro Social, Delegación Jalisco, Jalisco, México.
90. Geel KV, Kok EM, Aldekhayel AD, Robben SGF, van Merriënboer JJG. Chest X-ray evaluation training: impact of normal and abnormal image ratio and instructional sequence. Med Educ 2019; 53:153-164. [PMID: 30474292] [PMCID: PMC6587445] [DOI: 10.1111/medu.13756]
Abstract
CONTEXT Medical image perception training generally focuses on abnormalities, whereas normal images are more prevalent in medical practice. Furthermore, instructional sequences that let students practice prior to expert instruction (inductive) may lead to improved performance compared with methods that give students expert instruction before practice (deductive). This study investigates the effects of the proportion of normal images and practice-instruction order on learning to interpret medical images. It is hypothesised that manipulation of the proportion of normal images will lead to a sensitivity-specificity trade-off and that students in practice-first (inductive) conditions need more time per practice case but will correctly identify more test cases. METHODS Third-year medical students (n = 103) learned radiograph interpretation by practising cases with, respectively, 30% or 70% normal radiographs prior to expert instruction (practice-first order) or after expert instruction (instruction-first order). After training, students performed a test (60% normal), and sensitivity (% of correctly identified abnormal radiographs), specificity (% of correctly identified normal radiographs), diagnostic performance (% of correct diagnoses) and case duration were measured. RESULTS The conditions with 30% of normal images scored higher on sensitivity but the conditions with 70% of normal images scored higher on specificity, indicating a sensitivity-specificity trade-off. Those who participated in inductive conditions took less time per practice case but more per test case. They had similar test sensitivity, but scored lower on test specificity. CONCLUSIONS The proportion of normal images impacted the sensitivity-specificity trade-off. This trade-off should be an important consideration for the alignment of training with future practice. Furthermore, the deductive conditions unexpectedly scored higher on specificity when participants took less time per case.
An inductive approach did not lead to higher diagnostic performance, possibly because participants might already have relevant prior knowledge. Deductive approaches are therefore advised for the training of advanced learners.
Affiliation(s)
- Koos van Geel
- Department of Radiology, Maastricht University Medical Center, Maastricht, the Netherlands
- Ellen M Kok
- Department of Education, Utrecht University, Utrecht, the Netherlands
- Abdullah D Aldekhayel
- Department of Radiology, Maastricht University Medical Center, Maastricht, the Netherlands
- Simon G F Robben
- Department of Radiology, Maastricht University Medical Center, Maastricht, the Netherlands
- Jeroen J G van Merriënboer
- School of Health Professions Education, Department of Educational Research and Development, Maastricht University, Maastricht, the Netherlands
91. Pee LG, Pan SL, Cui L. Artificial intelligence in healthcare robots: A social informatics study of knowledge embodiment. J Assoc Inf Sci Technol 2018. [DOI: 10.1002/asi.24145]
Affiliation(s)
- L. G. Pee
- Nanyang Technological University; 31 Nanyang Link, #05-06, Singapore, 637718 Singapore
- Shan L. Pan
- The University of New South Wales; Level 2, West Wing, Quadrangle Building, Sydney NSW, 2052 Australia
- Lili Cui
- Shanghai University of Finance and Economics; Room 213, 100# Wudong Road, Yangpu District, Shanghai, 200433 China
92. Dournes G, Bricault I, Chateil JF. Analysis of the French national evaluation of radiology residents. Diagn Interv Imaging 2018; 100:185-193. [PMID: 30527527] [DOI: 10.1016/j.diii.2018.11.006]
Abstract
PURPOSE In France, a national evaluation is given annually to radiology residents. The aim of this study was to perform both a docimological analysis of the quality of the questionnaire and a statistical analysis of the results. MATERIALS AND METHODS This retrospective study, which included French radiology residents from Year 1 to Year 5 of residency, was performed from 2015 to 2017 across 25 medical universities in France. Both qualitative and quantitative docimological analyses were performed, as assessed by the Cronbach alpha coefficient, the difficulty of questions (PDI), and the coefficient of discrimination (Rir). Results of the questionnaire were compared between years of residency. RESULTS The results of the analysis confirmed the quality of the questionnaire (Cronbach alpha coefficient = 0.71, mean PDI = 0.40), though the majority of questions could be answered from memory rather than by cognitive ability. The mean Rir was 0.02, indicating that students could not be certified using only the questionnaire. The results measuring resident level of knowledge were moderate, with mean results ranging from 9.2/20 in the first year to 11.3/20 in the fifth year of residency (P<0.001). There were no significant differences in results between the third, fourth, and fifth years of residency, but results differed significantly among university hospitals. CONCLUSION Even if close interactions exist between learning and the pedagogic environment, our results suggest that it may be useful to further develop the evaluation process in relation to pedagogic instruction in order to provide more optimal training.
Affiliation(s)
- G Dournes
- Centre de recherche cardio-thoracique de Bordeaux, U1045, Bordeaux University, 33000 Bordeaux, France; Inserm, centre de recherche cardio-thoracique de Bordeaux, U1045, 33000 Bordeaux, France; Department of cardiovascular and thoracic imaging, CHU de Bordeaux, 33600 Pessac, France.
- I Bricault
- Department of medical imaging, hôpital Nord, CHU de Grenoble, 38043 Grenoble, France; Université Grenoble-Alpes, TIMC-IMAG, 38000 Grenoble, France
- J-F Chateil
- Department of pediatric imaging, CHU de Bordeaux, 33000 Bordeaux, France; Centre de résonance magnétique des systèmes biologiques, UMR 5536, Bordeaux University, 33076 Bordeaux, France
93. den Boer L, van der Schaaf MF, Vincken KL, Mol CP, Stuijfzand BG, van der Gijp A. Volumetric image interpretation in radiology: scroll behavior and cognitive processes. Adv Health Sci Educ Theory Pract 2018; 23:783-802. [PMID: 29767400] [PMCID: PMC6132416] [DOI: 10.1007/s10459-018-9828-z]
Abstract
The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behaviors and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning, with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describing how radiologists interact with and manipulate volumetric images.
Affiliation(s)
- Larissa den Boer
- Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands.
- Koen L Vincken
- University Medical Center Utrecht, Utrecht, The Netherlands
- Chris P Mol
- University Medical Center Utrecht, Utrecht, The Netherlands
94.
Abstract
Radiologists practice in an environment of extraordinarily high uncertainty, which results partly from the high variability of the physical and technical aspects of imaging, partly from the inherent limitations in the diagnostic power of the various imaging modalities, and partly from the complex visual-perceptual and cognitive processes involved in image interpretation. This paper reviews the high level of uncertainty inherent to the process of radiological imaging and image interpretation vis-à-vis the issue of radiological interpretive error, in order to highlight the considerable degree of overlap that exists between these. The scope of radiological error, its many potential causes and various error-reduction strategies in radiology are also reviewed.
Affiliation(s)
- Michael A Bruno
- Penn State Health/Milton S. Hershey Medical Center and The Penn State College of Medicine, 500 University Drive, Mail Code H-066, Hershey, PA 17033, USA
95. Kellman PJ, Krasne S. Accelerating expertise: Perceptual and adaptive learning technology in medical learning. Med Teach 2018; 40:797-802. [PMID: 30091650] [PMCID: PMC6584026] [DOI: 10.1080/0142159x.2018.1484897]
Abstract
RATIONALE Recent advances in the learning sciences offer remarkable potential for improving medical learning and performance. Difficult-to-teach pattern recognition skills can be systematically accelerated using techniques of perceptual learning (PL). The effectiveness of PL interventions is amplified when they are combined with adaptive learning (AL) technology in perceptual-adaptive learning modules (PALMs). INNOVATION Specifically, PALMs incorporate the Adaptive Response Time-based Sequencing (ARTS) system, which leverages learner performance (accuracy and speed) in interactive learning episodes to guide the course of factual, perceptual, or procedural learning, optimize spacing, and lead learners to comprehensive mastery. Here we describe the elements and scientific foundations of PL and its embodiment in learning technology. We also consider evidence that AL systems utilizing both accuracy and speed enhance learning efficiency and provide a unified account and potential optimization of spacing effects in learning, as well as supporting accuracy, transfer, and fluency as goals of learning. RESULTS To illustrate this process, we review some results of earlier PALMs and present new data from a PALM designed to accelerate and improve diagnosis in electrocardiography. CONCLUSIONS Through relatively short training interventions, PALMs produce large and durable improvements in trainees' abilities to accurately and fluently interpret clinical signs and tests, helping to bridge the gap between novice and expert clinicians.
Affiliation(s)
- Philip J. Kellman
- Department of Psychology, University of California, Los Angeles, CA, USA
- Sally Krasne
- Department of Physiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
96. The impact of speed and bias on the cognitive processes of experts and novices in medical image decision-making. Cogn Res Princ Implic 2018. [PMCID: PMC6091404] [DOI: 10.1186/s41235-018-0119-2]
Abstract
Training individuals to make accurate decisions from medical images is a critical component of education in diagnostic pathology. We describe a joint experimental and computational modeling approach to examine the similarities and differences in the cognitive processes of novice participants and experienced participants (pathology residents and pathology faculty) in cancer cell image identification. For this study we collected a bank of hundreds of digital images that were identified by cell type and classified by difficulty by a panel of expert hematopathologists. The key manipulations in our study included examining the speed-accuracy tradeoff as well as the impact of prior expectations on decisions. In addition, our study examined individual differences in decision-making by comparing task performance to domain-general visual ability, as measured using the Novel Object Memory Test (NOMT; Richler et al., Cognition 166:42–55, 2017). Using signal detection theory and the diffusion decision model (DDM), we found many similarities between experts and novices in our task. While experts tended to have better discriminability, the two groups responded similarly to time pressure (i.e., reduced caution under speed instructions in the DDM) and to the introduction of a probabilistic cue (i.e., increased response bias in the DDM). These results have important implications for training in this area as well as for using novice participants in research on medical image perception and decision-making.
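For readers unfamiliar with the signal detection quantities referenced here, discriminability (d′) and response bias (criterion c) are computed from an observer's hit and false-alarm rates. A minimal illustration with hypothetical rates (not the study's data):

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection theory:
    d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2,
    where z is the inverse CDF of the standard normal."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical observer: 84% hits, 16% false alarms
d, c = dprime_criterion(0.84, 0.16)
# d' is close to 2 and c is near 0, i.e. good discrimination with no bias
```

The "increased response bias" reported for the probabilistic-cue manipulation corresponds to c shifting away from zero while d′ stays roughly constant.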
97
O'Connor SD, Silverman SG, Cochon LR, Khorasani RK. Renal cancer at unenhanced CT: imaging features, detection rates, and outcomes. Abdom Radiol (NY) 2018; 43:1756-1763. [PMID: 29128991] [DOI: 10.1007/s00261-017-1376-0]
Abstract
PURPOSE To describe and quantify the rate of detection of renal cancer on unenhanced CT. METHODS This retrospective, HIPAA-compliant study was approved by the Institutional Review Board. Electronic health records for all patients who underwent unenhanced abdominal CT at our institution between 2000 and 2005 were reviewed to identify patients subsequently diagnosed with renal cancer during a follow-up period of up to 12 years. Images were reviewed to determine whether the cancer was visible on the index (first) unenhanced CT, and the imaging findings were recorded. Original radiology reports were reviewed to determine whether the renal cancer was reported; Fisher's exact test was used to compare imaging features of detected and missed cancers. Clinical outcomes, including time until diagnosis and stage at diagnosis, were used to assess the potential impact of missed cancers. RESULTS Of 15,695 patients, 82 (0.52%) were diagnosed with renal cancer. Of these, 43/82 (52%) cancers were retrospectively detectable on the index unenhanced CT. Among retrospectively detectable cancers, 63% (27/43) were originally detected and reported on the index CT and 37% (16/43) were missed. Size was the only feature associated with detection: 83% (20/24) of cancers > 3.0 cm were detected versus 37% (7/19) of cancers ≤ 3.0 cm (p = 0.0036). Although none of the 16 missed renal cancers developed metastases between the index CT and the time of diagnosis (median 33.5 months), 4 (25%) progressed in stage. CONCLUSIONS Renal cancer was rare in patients undergoing unenhanced abdominal CT. Over one-third of potentially detectable cancers were missed prospectively. However, missed cancers did not metastasize and infrequently progressed in stage before being diagnosed.
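The size association reported in this abstract can be checked directly: a two-sided Fisher's exact test on the 2×2 table of detected versus missed cancers by size. A standard-library sketch that sums hypergeometric probabilities no larger than the observed one:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables at least as
    extreme (no more probable) than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    total = comb(n, col1)
    def p(k):  # P(k of the col1 "detected" cases fall in row 1)
        return comb(row1, k) * comb(row2, col1 - k) / total
    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs * (1 + 1e-9))

# Detected vs. missed by size: > 3.0 cm (20 detected, 4 missed),
# <= 3.0 cm (7 detected, 12 missed)
p_value = fisher_exact_two_sided(20, 4, 7, 12)
print(round(p_value, 4))  # 0.0036, matching the reported p-value
```

The small multiplicative tolerance guards against floating-point ties when comparing table probabilities.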
Affiliation(s)
- Stacy D O'Connor
- Center for Evidence-Based Imaging and Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
- Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Milwaukee, WI, 53226, USA
- Stuart G Silverman
- Center for Evidence-Based Imaging and Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
- Laila R Cochon
- Center for Evidence-Based Imaging and Department of Radiology, Brigham and Women's Hospital, 20 Kent Street, 2nd Floor, Boston, MA, 02445, USA
- Ramin K Khorasani
- Center for Evidence-Based Imaging and Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
98
Developing competent videofluoroscopic swallowing study analysts. Curr Opin Otolaryngol Head Neck Surg 2018; 26:162-166. [DOI: 10.1097/moo.0000000000000449]
99
Lückehe D, von Voigt G. Evolutionary image simplification for lung nodule classification with convolutional neural networks. Int J Comput Assist Radiol Surg 2018; 13:1499-1513. [PMID: 29845453] [DOI: 10.1007/s11548-018-1794-7]
Abstract
PURPOSE Understanding the decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the classification results themselves. In this article, we propose a new approach to compute the relevant parts of a medical image. Knowing the relevant parts makes decisions easier to understand. METHODS In our approach, a convolutional neural network is employed to learn the structures of lung nodule images. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. RESULTS We show simplified images that allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images with respect to the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides examples of simplified images, we analyze the development of the run time. CONCLUSIONS Simplified images make it easier to focus on the relevant parts and to find reasons for a decision. An evolutionary algorithm combined with a learned convolutional neural network is well suited to the simplification task. From a research perspective, it is interesting to see which areas of the images are simplified and which parts are treated as relevant.
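The idea of evolutionary simplification against a fixed classifier can be illustrated with a toy sketch. A stand-in scoring function plays the role of the trained network (the function and the "relevant" pixel indices are invented for illustration, not the authors' model), and a simple (1+1) evolutionary loop zeroes pixels whenever the score is unchanged:

```python
import random

# Toy stand-in for a trained CNN: the "decision" depends only on a
# few relevant pixels (indices chosen arbitrarily for this sketch).
RELEVANT = {2, 5, 11}
def model_score(image):
    return sum(image[i] for i in RELEVANT)

def simplify(image, generations=500, seed=0):
    """(1+1) evolutionary search for a simplified image: zero out as many
    pixels as possible without changing the stand-in model's score."""
    rng = random.Random(seed)
    target = model_score(image)
    best = list(image)
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(len(child))] = 0  # mutate: try removing one pixel
        if model_score(child) == target:      # decision unchanged -> accept
            best = child
    return best

image = [1] * 16
simple = simplify(image)
print(simple)  # only the pixels the toy model relies on survive
```

Because mutations only remove pixels and are accepted only when the score is preserved, the surviving pixels are exactly those the scoring function depends on; a real application would replace `model_score` with the network's output and a tolerance.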
Affiliation(s)
- Daniel Lückehe
- Computational Health Informatics, Leibniz University Hanover, Schloßwender Str. 5, 30159, Hanover, Germany.
- Gabriele von Voigt
- Computational Health Informatics, Leibniz University Hanover, Schloßwender Str. 5, 30159, Hanover, Germany
100
Crowe EM, Gilchrist ID, Kent C. New approaches to the analysis of eye movement behaviour across expertise while viewing brain MRIs. Cogn Res Princ Implic 2018; 3:12. [PMID: 29721518] [PMCID: PMC5915515] [DOI: 10.1186/s41235-018-0097-4]
Abstract
Brain tumour detection and diagnosis require clinicians to inspect and analyse brain magnetic resonance images. Eye-tracking is commonly used to examine observers' gaze behaviour during such medical image interpretation tasks, but analysis of eye movement sequences has been limited. We therefore used ScanMatch, a novel technique that compares saccadic eye movement sequences, to examine the effect of expertise and diagnosis on the similarity of scanning patterns. Diagnostic accuracy was also recorded. Thirty-five participants were classified as Novices, Medics and Experts based on their level of expertise. Participants completed two brain tumour detection tasks. The first was a whole-brain task, which consisted of 60 consecutively presented slices from one patient; the second was an independent-slice detection task, which consisted of 32 independent slices from five different patients. In the independent-slice task, Experts displayed the highest accuracy and sensitivity, followed by Medics and then Novices. Experts showed the highest level of scanning pattern similarity, and Medics the least similar scanning patterns, for both the whole-brain and independent-slice tasks. In the independent-slice task, scanning patterns were the least similar for false negatives across all expertise levels and most similar for Experts when they responded correctly. These results demonstrate the value of using ScanMatch in the medical image perception literature. Future research adopting this tool could, for example, identify cases that yield low scanning similarity and so provide insight into why diagnostic errors occur, ultimately helping in the training of radiologists.
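ScanMatch builds on global sequence alignment: fixations are coded as region labels and the resulting strings are compared with the Needleman–Wunsch algorithm. A minimal sketch of that core step (the region coding and scoring values here are illustrative, not the ScanMatch defaults):

```python
def needleman_wunsch(seq_a, seq_b, match=2, mismatch=-1, gap=-1):
    """Global alignment score of two region-coded fixation sequences."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best alignment score of seq_a[:i] vs seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if seq_a[i-1] == seq_b[j-1] else mismatch
            score[i][j] = max(score[i-1][j-1] + sub,  # (mis)match
                              score[i-1][j] + gap,    # gap in seq_b
                              score[i][j-1] + gap)    # gap in seq_a
    return score[n][m]

def similarity(seq_a, seq_b, match=2):
    """Normalise the alignment score by the best achievable score."""
    return needleman_wunsch(seq_a, seq_b) / (match * max(len(seq_a), len(seq_b)))

# Fixation sequences coded by screen region (letters are illustrative)
print(similarity("ABBCD", "ABCCD"))  # 0.7
print(similarity("ABBCD", "ABBCD"))  # identical scans -> 1.0
```

The full ScanMatch method additionally derives the substitution scores from the spatial distance between regions and bins fixations by duration; this sketch shows only the alignment core.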
Affiliation(s)
- Emily M. Crowe
- School of Experimental Psychology, University of Bristol, 12a Priory Road, Bristol, BS8 1TU UK
- Iain D. Gilchrist
- School of Experimental Psychology, University of Bristol, 12a Priory Road, Bristol, BS8 1TU UK
- Christopher Kent
- School of Experimental Psychology, University of Bristol, 12a Priory Road, Bristol, BS8 1TU UK