101
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Dilated densely connected U-Net with uncertainty focus loss for 3D ABUS mass segmentation. Comput Methods Programs Biomed 2021; 209:106313. PMID: 34364182; DOI: 10.1016/j.cmpb.2021.106313.
Abstract
BACKGROUND AND OBJECTIVE Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) images plays an important role in qualitative and quantitative ABUS image analysis. Yet this task is challenging due to the low signal-to-noise ratio and serious artifacts in ABUS images, the large shape and size variation of breast masses, and the small size of training datasets compared with natural-image datasets. The purpose of this study is to address these difficulties by designing a dilated densely connected U-Net (D2U-Net) together with an uncertainty focus loss. METHODS A lightweight yet effective densely connected segmentation network is constructed to extensively explore feature representations in the small ABUS dataset. To deal with the high variation in shape and size of breast masses, a set of hybrid dilated convolutions is integrated into the dense blocks of the D2U-Net. We further propose an uncertainty focus loss that puts more attention on unreliable network predictions, especially the ambiguous mass boundaries caused by the low signal-to-noise ratio and artifacts. The segmentation algorithm is evaluated on an ABUS dataset of 170 volumes from 107 patients. Ablation analysis and comparison with existing methods are conducted to verify the effectiveness of the proposed method. RESULTS Experimental results demonstrate that the proposed algorithm outperforms existing methods on 3D ABUS mass segmentation, with a Dice similarity coefficient, Jaccard index, and 95% Hausdorff distance of 69.02%, 56.61%, and 4.92 mm, respectively. CONCLUSIONS The proposed method is effective in segmenting breast masses on our small ABUS dataset, especially breast masses with large shape and size variations.
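The hybrid dilated convolutions mentioned in the METHODS section work because stacked dilation rates such as (1, 2, 5) cover the receptive field without the gridding gaps that uniform rates leave. A minimal 1D sketch of that idea (function and variable names are mine, not from the paper) enumerating which input offsets a stack of 3-tap dilated convolutions can see:

```python
def coverage(dilations, k=3):
    """Return the set of input offsets reachable by stacking
    k-tap convolutions with the given dilation rates."""
    reach = {0}
    for d in dilations:
        taps = [(i - k // 2) * d for i in range(k)]  # tap offsets at dilation d
        reach = {r + t for r in reach for t in taps}
    return reach

# Hybrid rates (1, 2, 5) touch every offset in their receptive field;
# uniform rates (2, 2, 2) skip all odd offsets (the "gridding" artifact).
hybrid = coverage([1, 2, 5])
gridded = coverage([2, 2, 2])
```

Here `coverage([1, 2, 5])` yields all 17 offsets in [-8, 8], while `coverage([2, 2, 2])` yields only even offsets, which is the motivation for mixing dilation rates inside a dense block.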
Affiliation(s)
- Xuyang Cao: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Houjin Chen: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yanfeng Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yahui Peng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Shu Wang: Peking University People's Hospital, Beijing 100044, China
- Lin Cheng: Peking University People's Hospital, Beijing 100044, China
102
Shoeibi A, Khodatars M, Jafari M, Moridian P, Rezaei M, Alizadehsani R, Khozeimeh F, Gorriz JM, Heras J, Panahiazar M, Nahavandi S, Acharya UR. Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review. Comput Biol Med 2021; 136:104697. PMID: 34358994; DOI: 10.1016/j.compbiomed.2021.104697.
Abstract
Multiple sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. Multiple screening methods have been proposed to diagnose MS; among them, magnetic resonance imaging (MRI) has received considerable attention from physicians. MRI modalities provide fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. However, diagnosing MS from MRI is time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for MS diagnosis involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps are performed by deep layers whose values are learned automatically. In this paper, a complete review of automated MS diagnosis methods using DL techniques with MRI neuroimaging modalities is provided. Initially, the steps involved in various CADS proposed using MRI modalities and DL techniques for MS diagnosis are investigated. The important preprocessing techniques employed in various works are analyzed, and most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges facing automated MS diagnosis, along with future directions, are also discussed.
Affiliation(s)
- Afshin Shoeibi: Faculty of Electrical Engineering, Biomedical Data Acquisition Lab (BDAL), K. N. Toosi University of Technology, Tehran, Iran
- Marjane Khodatars: Faculty of Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Mahboobeh Jafari: Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran
- Parisa Moridian: Faculty of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mitra Rezaei: Electrical and Computer Engineering Dept., Tarbiat Modares University, Tehran, Iran
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- Fahime Khozeimeh: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- Juan Manuel Gorriz: Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- Jónathan Heras: Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Saeid Nahavandi: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- U Rajendra Acharya: Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; Dept. of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan
103
Sudarshan VP, Upadhyay U, Egan GF, Chen Z, Awate SP. Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Med Image Anal 2021; 73:102187. PMID: 34348196; DOI: 10.1016/j.media.2021.102187.
Abstract
Radiation exposure in positron emission tomography (PET) imaging limits its usage in studies of radiation-sensitive populations, e.g., pregnant women, children, and adults who require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using a substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) images, to high-quality PET images. However, such DNN methods focus on applications involving test data that match the statistical characteristics of the training data very closely and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models (i) the underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, namely suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI and various forms of OOD data in PET-MRI show the benefits of suDNN over the current state of the art, both quantitatively and qualitatively.
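The per-voxel heteroscedasticity described above is commonly trained with a Gaussian negative log-likelihood in which the network predicts both a mean and a log-variance per voxel. The following is an illustrative NumPy sketch of that generic formulation (the function name is mine, and this is not the suDNN code):

```python
import numpy as np

def hetero_gaussian_nll(mu, log_var, y):
    """Heteroscedastic Gaussian NLL per voxel (constant term dropped):
    0.5 * exp(-log_var) * (y - mu)^2 + 0.5 * log_var.
    Predicting log-variance keeps the variance strictly positive, and
    high-variance voxels are automatically down-weighted in the residual term."""
    return float(np.mean(0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var))

# With log_var = 0 everywhere, the loss reduces to half the mean squared error.
loss = hetero_gaussian_nll(np.zeros(4), np.zeros(4), np.ones(4))
```

The second term penalizes the network for inflating its predicted variance everywhere, so it can only "explain away" residuals where they are genuinely large.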
Affiliation(s)
- Viswanath P Sudarshan: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India; IITB-Monash Research Academy, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Uddeshya Upadhyay: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Gary F Egan: Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Zhaolin Chen: Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Suyash P Awate: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
104
Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philos Trans A Math Phys Eng Sci 2021; 379:20200203. PMID: 33966462; DOI: 10.1098/rsta.2020.0203.
Abstract
Fast magnetic resonance imaging (MRI) is crucial for clinical applications because it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerating MR acquisition. However, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed deep learning-based data-driven models for MRI reconstruction and have obtained promising results. However, comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations, respectively, with a random undersampling mask. Both quantitative evaluation and qualitative visualization show that RefineGAN achieved superior reconstruction performance, with better accuracy and perceptual quality than the other GAN-based methods. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Affiliation(s)
- Jun Lv: School of Computer and Control Engineering, Yantai University, Yantai, People's Republic of China
- Jin Zhu: Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Guang Yang: Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP London, UK; National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
105
Born J, Beymer D, Rajan D, Coy A, Mukherjee VV, Manica M, Prasanna P, Ballah D, Guindy M, Shaham D, Shah PL, Karteris E, Robertus JL, Gabrani M, Rosen-Zvi M. On the role of artificial intelligence in medical imaging of COVID-19. Patterns (N Y) 2021; 2:100269. PMID: 33969323; PMCID: PMC8086827; DOI: 10.1016/j.patter.2021.100269.
Abstract
Although a plethora of research articles on AI methods for COVID-19 medical imaging have been published, their clinical value remains unclear. We conducted the largest systematic review of the literature addressing the utility of AI in imaging for COVID-19 patient care. Through keyword searches on PubMed and preprint servers throughout 2020, we identified 463 manuscripts and performed a systematic meta-analysis to assess their technical merit and clinical relevance. Our analysis reveals a significant disparity between the clinical and AI communities in the focus on both imaging modalities (AI experts neglected CT and ultrasound, favoring X-ray) and the tasks performed (71.9% of AI papers centered on diagnosis). The vast majority of manuscripts were found to be deficient with regard to potential use in clinical practice, but 2.7% (n = 12) of publications were assigned a high maturity level and are summarized in greater detail. We provide an itemized discussion of the challenges in developing clinically relevant AI solutions, with recommendations and remedies.
Affiliation(s)
- Jannis Born: IBM Research Europe, Zurich, Switzerland; Department for Biosystems Science & Engineering, ETH Zurich, Zurich, Switzerland
- Adam Coy: IBM Almaden Research Center, San Jose, CA, USA; Vision Radiology, Dallas, TX, USA
- Prasanth Prasanna: IBM Almaden Research Center, San Jose, CA, USA; Department of Radiology and Imaging Sciences, University of Utah Health Sciences Center, Salt Lake City, UT, USA
- Deddeh Ballah: IBM Almaden Research Center, San Jose, CA, USA; Department of Radiology, Seton Medical Center, Daly City, CA, USA
- Michal Guindy: Assuta Medical Centres Radiology, Tel-Aviv, Israel; Ben-Gurion University Medical School, Be'er Sheva, Israel
- Dorith Shaham: Department of Radiology, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Pallav L. Shah: Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK; Chelsea & Westminster Hospital, London, UK; National Heart & Lung Institute, Imperial College London, London, UK
- Emmanouil Karteris: College of Health, Medicine and Life Sciences, Brunel University London, London, UK
- Jan L. Robertus: Royal Brompton and Harefield Hospitals, Guy's and St Thomas' NHS Foundation Trust, London, UK; National Heart & Lung Institute, Imperial College London, London, UK
- Michal Rosen-Zvi: IBM Research Haifa, Haifa, Israel; Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
106
Lei W, Mei H, Sun Z, Ye S, Gu R, Wang H, Huang R, Zhang S, Zhang S, Wang G. Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.01.135.
107
Leveraging voxel-wise segmentation uncertainty to improve reliability in assessment of paediatric dysplasia of the hip. Int J Comput Assist Radiol Surg 2021; 16:1121-1129. PMID: 33966168; DOI: 10.1007/s11548-021-02389-y.
Abstract
PURPOSE Estimating uncertainty in predictions made by neural networks is critically important for increasing the trust medical experts place in automatic data analysis results. In segmentation tasks, quantifying levels of confidence can provide meaningful additional information to aid clinical decision making. In recent work, we proposed an interpretable uncertainty measure to aid clinicians in assessing the reliability of developmental dysplasia of the hip metrics measured from 3D ultrasound (US) screening scans, as well as that of the US scan itself. In this work, we propose a technique to quantify confidence in the associated segmentation process by incorporating voxel-wise uncertainty into the binary loss function used during training, which encourages the network to concentrate its training effort on its least certain predictions. METHODS We propose a Bayesian-based technique to quantify 3D segmentation uncertainty by modifying the loss function within an encoder-decoder voxel-labeling deep network. By appending a voxel-wise uncertainty measure, our modified loss helps the network improve prediction uncertainty for voxels that are harder to train. We validate our approach by training a Bayesian 3D U-Net with the proposed modified loss function on a dataset comprising 92 clinical 3D US neonate scans and testing on a separate hold-out dataset of 24 patients. RESULTS Quantitatively, we show that the Dice score of ilium and acetabulum segmentation improves by 5% when trained with our proposed voxel-wise uncertainty loss compared to training with standard cross-entropy loss. Qualitatively, we further demonstrate how our modified loss function results in a meaningful reduction of voxel-wise segmentation uncertainty estimates, with the network making more confident, accurate predictions.
CONCLUSION We proposed a Bayesian technique to encode voxel-wise segmentation uncertainty information into deep neural network optimization, and demonstrated how it can be leveraged into meaningful confidence measures to improve the model's predictive performance.
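One generic way to realize the "voxel-wise uncertainty appended to the loss" idea above is to scale the per-voxel cross-entropy by an uncertainty term, so that uncertain voxels contribute more gradient. An illustrative NumPy sketch under that assumption (not the authors' exact formulation; the function name is mine):

```python
import numpy as np

def uncertainty_weighted_bce(p, y, u, eps=1e-7):
    """Binary cross-entropy with a per-voxel (1 + u) weight.
    p: predicted foreground probabilities, y: binary ground truth,
    u: per-voxel uncertainty in [0, 1], e.g. derived from MC-dropout variance.
    With u = 0 everywhere this recovers the standard BCE."""
    p = np.clip(p, eps, 1.0 - eps)                       # numerical safety
    bce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))  # per-voxel BCE
    return float(np.mean((1.0 + u) * bce))
```

Voxels the network is already sure about keep their usual weight, while the hardest (most uncertain) voxels can contribute up to twice as much to the gradient.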
108
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE Inst Electr Electron Eng 2021; 109:820-838. PMID: 37786449; PMCID: PMC10544772; DOI: 10.1109/jproc.2021.3054390.
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Affiliation(s)
- S Kevin Zhou: School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan: Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos: Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan: Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince: Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert: Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
109
Saad G, Jaber B, Al-hajri M, Househ M, Ahmed A, Abd-alrazaq A. Artificial Intelligence in Diagnosis and Prediction of the Multiple Sclerosis Progression: A Scoping Review (Preprint). DOI: 10.2196/preprints.29720.
Abstract
BACKGROUND
Multiple sclerosis (MS) is an autoimmune disease that results from demyelination of the nerves in the central nervous system. Diagnosis depends on clinical history, neurological examination, and radiological images. Artificial intelligence (AI) has proved to be an effective tool for enhancing the diagnosis of MS.
OBJECTIVE
To explore how AI assists in diagnosing MS and predicting its progression.
METHODS
We searched three bibliographic databases: PubMed, IEEE Xplore, and Cochrane. The study selection process included removal of duplicated articles, screening of titles and abstracts, and reading of the full text, and was performed by two reviewers. Data extracted from the included studies were entered into an Excel sheet by each reviewer for their assigned articles, and the extracted data sheet was checked by both reviewers to ensure accuracy. A narrative approach was applied in data synthesis.
RESULTS
The search yielded 320 articles. Removing duplicates and excluding articles ineligible due to irrelevance to the population, intervention, or outcomes resulted in the exclusion of 299 articles. Thus, 21 articles were included for data extraction and data synthesis.
CONCLUSIONS
Artificial intelligence is becoming a trend in the medical field. Its contribution to enhancing the diagnostic tools for many diseases, including MS, is prominent and can be built on in further development plans. However, the implementation of AI in MS is not yet widespread enough to confirm the benefits gained, and the datasets involved in current practice are relatively small. More studies are recommended that focus on the relationship between the use of AI in diagnosis and progression monitoring and the accuracy gained from it.
110
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 2021; 8:53. PMID: 33816053; PMCID: PMC8010506; DOI: 10.1186/s40537-021-00444-8.
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown quickly in recent years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackled only one aspect of the field, leading to an overall lack of a holistic picture. Therefore, in this contribution, we take a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia; AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- Jinglan Zhang: School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi: Control and Systems Engineering Department, University of Technology, Baghdad, 10001, Iraq
- Ayad Al-Dujaili: Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001, Iraq
- Ye Duan: Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma: AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- J. Santamaría: Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel: College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005, Iraq
- Muthana Al-Amidie: Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan: School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
111
Gros C, Lemay A, Cohen-Adad J. SoftSeg: Advantages of soft versus binary training for image segmentation. Med Image Anal 2021; 71:102038. PMID: 33784599; DOI: 10.1016/j.media.2021.102038.
Abstract
Most image segmentation algorithms are trained on binary masks formulated as a classification task per pixel. However, in applications such as medical imaging, this "black-and-white" approach is too constraining because the contrast between two tissues is often ill-defined, i.e., the voxels located on objects' edges contain a mixture of tissues (a partial volume effect). Consequently, assigning a single "hard" label can result in a detrimental approximation. Instead, a soft prediction containing non-binary values would overcome that limitation. In this study, we introduce SoftSeg, a deep learning training approach that takes advantage of soft ground truth labels, and is not bound to binary predictions. SoftSeg aims at solving a regression instead of a classification problem. This is achieved by using (i) no binarization after preprocessing and data augmentation, (ii) a normalized ReLU final activation layer (instead of sigmoid), and (iii) a regression loss function (instead of the traditional Dice loss). We assess the impact of these three features on three open-source MRI segmentation datasets from the spinal cord gray matter, the multiple sclerosis brain lesion, and the multimodal brain tumor segmentation challenges. Across multiple random dataset splittings, SoftSeg outperformed the conventional approach, leading to an increase in Dice score of 2.0% on the gray matter dataset (p=0.001), 3.3% for the brain lesions, and 6.5% for the brain tumors. SoftSeg produces consistent soft predictions at tissues' interfaces and shows an increased sensitivity for small objects (e.g., multiple sclerosis lesions). The richness of soft labels could represent the inter-expert variability, the partial volume effect, and complement the model uncertainty estimation, which is typically unclear with binary predictions. The developed training pipeline can easily be incorporated into most of the existing deep learning architectures. SoftSeg is implemented in the freely-available deep learning toolbox ivadomed (https://ivadomed.org).
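The two ingredients SoftSeg swaps in at the output, a normalized-ReLU final activation and a regression loss, can be sketched in a few lines of NumPy. Plain mean squared error is used here as a simple stand-in for the regression loss, and the function names are illustrative rather than taken from the ivadomed implementation:

```python
import numpy as np

def normalized_relu(z, eps=1e-8):
    """SoftSeg-style final activation: ReLU, then rescale so the
    output lies in [0, 1] without the saturation of a sigmoid."""
    a = np.maximum(z, 0.0)
    return a / (a.max() + eps)

def soft_regression_loss(pred, soft_target):
    """Regression objective against soft (non-binarized) labels;
    mean squared error as a simple stand-in."""
    return float(np.mean((pred - soft_target) ** 2))
```

Because the targets are never binarized, partial-volume voxels at tissue interfaces can carry fractional labels, and the loss rewards matching those fractions rather than forcing a hard 0/1 decision.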
Affiliation(s)
- Charley Gros
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
| | - Andreanne Lemay
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
| | - Julien Cohen-Adad
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada; Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada.
| |
112

113
Van Molle P, Verbelen T, Vankeirsbilck B, De Vylder J, Diricx B, Kimpe T, Simoens P, Dhoedt B. Leveraging the Bhattacharyya coefficient for uncertainty quantification in deep neural networks. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-05789-y.
Abstract
Modern deep learning models achieve state-of-the-art results for many tasks in computer vision, such as image classification and segmentation. However, their adoption into high-risk applications, e.g. automated medical diagnosis systems, happens at a slow pace. One of the main reasons for this is that regular neural networks do not capture uncertainty. To assess uncertainty in classification, several techniques have been proposed that cast neural network approaches in a Bayesian setting. Amongst these techniques, Monte Carlo dropout is by far the most popular. This particular technique estimates the moments of the output distribution through sampling with different dropout masks, and the output uncertainty of a neural network is then approximated as the sample variance. In this paper, we highlight the limitations of such a variance-based uncertainty metric and propose a novel approach based on the overlap between the output distributions of different classes. We show that our technique leads to a better approximation of the inter-class output confusion. We illustrate the advantages of our method on benchmark datasets. In addition, we apply our metric to skin lesion classification, a real-world use case, and show that it yields promising results.
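The two quantities this abstract contrasts, the sample variance from Monte Carlo dropout and the overlap between class-wise output distributions, can be illustrated with a toy single-layer model in NumPy (all names, shapes, and the histogram-based overlap estimate are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_samples(x, w, n=200, p_drop=0.5):
    """Draw n softmax outputs under independent dropout masks (MC dropout)."""
    outs = []
    for _ in range(n):
        mask = rng.random(x.shape) >= p_drop          # Bernoulli dropout mask
        logits = ((x * mask) / (1.0 - p_drop)) @ w    # inverted-dropout scaling
        e = np.exp(logits - logits.max())
        outs.append(e / e.sum())                      # softmax
    return np.asarray(outs)                           # shape (n, n_classes)

def bhattacharyya(p_hist, q_hist):
    """Bhattacharyya coefficient of two discrete distributions:
    1 means complete overlap, 0 means disjoint support."""
    return float(np.sum(np.sqrt(p_hist * q_hist)))

samples = mc_dropout_samples(np.ones(8), rng.normal(size=(8, 3)))
variance = samples.var(axis=0)  # the classic variance-based uncertainty metric
# Overlap between the sampled score histograms of class 0 and class 1:
h0, edges = np.histogram(samples[:, 0], bins=10, range=(0.0, 1.0))
h1, _ = np.histogram(samples[:, 1], bins=edges)
bc = bhattacharyya(h0 / h0.sum(), h1 / h1.sum())
```

A large `bc` signals heavy confusion between the two classes even when each per-class variance alone looks small, which is the kind of inter-class overlap a pure variance metric misses.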
114
van Rooij W, Verbakel WF, Slotman BJ, Dahele M. Using Spatial Probability Maps to Highlight Potential Inaccuracies in Deep Learning-Based Contours: Facilitating Online Adaptive Radiation Therapy. Adv Radiat Oncol 2021; 6:100658. [PMID: 33778184 PMCID: PMC7985281 DOI: 10.1016/j.adro.2021.100658] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 12/14/2020] [Accepted: 12/30/2020] [Indexed: 10/27/2022] Open
Abstract
PURPOSE Contouring organs at risk remains a largely manual task, which is time-consuming and prone to variation. Deep learning-based delineation (DLD) shows promise in terms of both quality and speed, but it does not yet perform perfectly. Because of that, manual checking of DLD is still recommended. There are currently no commercial tools to focus attention on the areas of greatest uncertainty within a DLD contour. Therefore, we explore the use of spatial probability maps (SPMs) to improve the efficiency and reproducibility of DLD checking and correction, using the salivary glands as the paradigm. METHODS AND MATERIALS A 3-dimensional fully convolutional network was trained with 315/264 parotid/submandibular glands. Subsequently, SPMs were created using Monte Carlo dropout (MCD). The method was boosted by placing a Gaussian distribution (GD) over the model's parameters during sampling (MCD + GD). MCD and MCD + GD were quantitatively compared and the SPMs were visually inspected. RESULTS The addition of the GD appears to increase the method's ability to detect uncertainty. In general, this technique demonstrated uncertainty in areas that (1) have lower contrast, (2) are less consistently contoured by clinicians, and (3) deviate from the anatomic norm. CONCLUSIONS We believe the integration of uncertainty information into contours made using DLD is an important step in highlighting where a contour may be less reliable. We have shown how SPMs are one way to achieve this and how they may be integrated into the online adaptive radiation therapy workflow.
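A spatial probability map of the kind described can be approximated by averaging many stochastic forward passes; the per-pixel variance then flags where a contour check should focus. In this sketch a seeded Bernoulli sampler stands in for the dropout-perturbed network, an assumption made purely for illustration.

```python
import random

def mc_dropout_spm(pixel_probs, n_passes=200, seed=42):
    """Average stochastic binary predictions into a spatial probability map.

    pixel_probs: per-pixel foreground probabilities of a toy stochastic model
    (a stand-in for a dropout-perturbed segmentation network).
    Returns (spm, variance map); high variance marks pixels worth checking.
    """
    rng = random.Random(seed)
    n = len(pixel_probs)
    samples = [[1 if rng.random() < p else 0 for p in pixel_probs]
               for _ in range(n_passes)]
    spm = [sum(s[j] for s in samples) / n_passes for j in range(n)]
    # For binary samples with mean m, the population variance is m * (1 - m).
    var = [m * (1.0 - m) for m in spm]
    return spm, var
```

Pixels where the passes always agree get zero variance; ambiguous boundary pixels (probability near 0.5) get the largest variance and would be highlighted for manual review.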
Affiliation(s)
- Ward van Rooij
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Wilko F. Verbakel
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Berend J. Slotman
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Max Dahele
- Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Cancer Center Amsterdam, Amsterdam, The Netherlands
115
Kläser K, Borges P, Shaw R, Ranzini M, Modat M, Atkinson D, Thielemans K, Hutton B, Goh V, Cook G, Cardoso MJ, Ourselin S. A multi-channel uncertainty-aware multi-resolution network for MR to CT synthesis. APPLIED SCIENCES (BASEL, SWITZERLAND) 2021; 11:1667. [PMID: 33763236 PMCID: PMC7610395 DOI: 10.3390/app11041667] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in the field of medical image analysis, both for quantification and diagnostic purposes. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory involving many challenges, including large image size and limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose the use of an uncertainty-aware multi-channel multi-resolution 3D cascade network specifically aiming for whole-body MR to CT synthesis. The mean absolute error of the synthetic CT generated with the MultiRes unc network (73.90 HU) is lower than that of multiple baseline CNNs, such as 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU) and deep boosted regression (77.58 HU), demonstrating superior synthesis performance. We ultimately exploit the extrapolation properties of the MultiRes networks on sub-regions of the body.
Affiliation(s)
- Kerstin Kläser
- Dept. Medical Physics & Biomedical Engineering, University College London, UK
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Pedro Borges
- Dept. Medical Physics & Biomedical Engineering, University College London, UK
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Richard Shaw
- Dept. Medical Physics & Biomedical Engineering, University College London, UK
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Marta Ranzini
- Dept. Medical Physics & Biomedical Engineering, University College London, UK
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- David Atkinson
- Centre for Medical Imaging, University College London, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, UK
- Brian Hutton
- Institute of Nuclear Medicine, University College London, UK
- Vicky Goh
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Gary Cook
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
116
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Uncertainty Aware Temporal-Ensembling Model for Semi-Supervised ABUS Mass Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:431-443. [PMID: 33021936 DOI: 10.1109/tmi.2020.3029161] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Accurate breast mass segmentation of automated breast ultrasound (ABUS) images plays a crucial role in 3D breast reconstruction, which can assist radiologists in surgery planning. Although convolutional neural networks have great potential for breast mass segmentation due to the remarkable progress of deep learning, the lack of annotated data limits the performance of deep CNNs. In this article, we present an uncertainty aware temporal ensembling (UATE) model for semi-supervised ABUS mass segmentation. Specifically, a temporal ensembling segmentation (TEs) model is designed to segment breast mass using a few labeled images and a large number of unlabeled images. Considering that the network output contains both correct and unreliable predictions, treating each prediction equally in pseudo label updates and loss calculation may degrade the network performance. To alleviate this problem, an uncertainty map is estimated for each image. Then an adaptive ensembling momentum map and an uncertainty aware unsupervised loss are designed and integrated with the TEs model. The effectiveness of the proposed UATE model is mainly verified on an ABUS dataset of 107 patients with 170 volumes, including 13382 labeled 2D slices. The Jaccard index (JI), Dice similarity coefficient (DSC), pixel-wise accuracy (AC) and Hausdorff distance (HD) of the proposed method on the testing set are 63.65%, 74.25%, 99.21% and 3.81 mm, respectively. Experimental results demonstrate that our semi-supervised method outperforms the fully supervised method and achieves promising results compared with existing semi-supervised methods.
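One plausible reading of an uncertainty-aware unsupervised loss like the one described is a consistency (mean-squared-error) term whose per-pixel weight shrinks as the teacher's predictive entropy grows. The sketch below is a hedged illustration, not the paper's exact loss; the names and the normalization by log 2 are choices made for the example.

```python
import math

def entropy(p, eps=1e-12):
    """Binary predictive entropy of a foreground probability p."""
    p = min(max(p, eps), 1.0 - eps)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def uncertainty_weighted_mse(student, teacher):
    """Consistency loss that down-weights pixels where the teacher is uncertain.

    Weight is 1 for a confident teacher (p near 0 or 1) and 0 at p = 0.5,
    so unreliable pseudo labels contribute little to the update.
    """
    total, weight_sum = 0.0, 0.0
    for s, t in zip(student, teacher):
        w = 1.0 - entropy(t) / math.log(2.0)
        total += w * (s - t) ** 2
        weight_sum += w
    return total / max(weight_sum, 1e-12)
```

With a maximally uncertain teacher (all probabilities 0.5) the loss vanishes, which is the intended behavior: no gradient flows from predictions the model itself does not trust.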
117
Tanno R, Worrall DE, Kaden E, Ghosh A, Grussu F, Bizzi A, Sotiropoulos SN, Criminisi A, Alexander DC. Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI. Neuroimage 2021; 225:117366. [DOI: 10.1016/j.neuroimage.2020.117366] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Revised: 08/28/2020] [Accepted: 09/05/2020] [Indexed: 12/14/2022] Open
118
Liu Y, Cui W, Ha Q, Xiong X, Zeng X, Ye C. Knowledge transfer between brain lesion segmentation tasks with increased model capacity. Comput Med Imaging Graph 2020; 88:101842. [PMID: 33387812 DOI: 10.1016/j.compmedimag.2020.101842] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Revised: 12/02/2020] [Accepted: 12/11/2020] [Indexed: 10/22/2022]
Abstract
Convolutional neural networks (CNNs) have become an increasingly popular tool for brain lesion segmentation in recent years due to their accuracy and efficiency. However, CNN-based brain lesion segmentation generally requires a large amount of annotated training data, which can be costly to obtain for medical imaging. In many scenarios, only a few annotations of brain lesions are available. One common strategy to address the issue of limited annotated data is to transfer knowledge from a different yet relevant source task, where training data is abundant, to the target task of interest. Typically, a model can be pretrained for the source task and then fine-tuned with the scarce training data associated with the target task. However, classic fine-tuning tends to make small modifications to the pretrained model, which could hinder its adaptation to the target task. Fine-tuning with increased model capacity has been shown to alleviate this negative impact in image classification problems. In this work, we extend the strategy of fine-tuning with increased model capacity to the problem of brain lesion segmentation, and then develop an advanced version that is better suited to segmentation problems. First, we propose a vanilla strategy of increasing the capacity, where, as in the classification problem, the width of the network is augmented during fine-tuning. Second, because, unlike image classification, segmentation associates each voxel with a labeling result, we further develop a spatially adaptive augmentation strategy during fine-tuning. Specifically, in addition to the vanilla width augmentation, we incorporate a module that computes a spatial map of the contribution of the information given by width augmentation to the final segmentation. For demonstration, the proposed method was applied to ischemic stroke lesion segmentation, where a model pretrained for brain tumor segmentation was fine-tuned, and the experimental results indicate the benefit of our method.
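The spatially adaptive augmentation can be pictured as a per-voxel gate on the contribution of the width-augmented branch. The sketch below is an assumption about the mechanism, with invented names; in the paper the gating module is learned jointly with the rest of the network.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse_with_spatial_gate(base_logits, extra_logits, gate_logits):
    """Per-voxel fusion of a pretrained branch with a width-augmented branch.

    gate_logits is a spatial map (flattened here): where the gate saturates
    low, the output falls back to the pretrained base; where it saturates
    high, the augmented capacity contributes fully.
    """
    return [b + sigmoid(g) * e
            for b, e, g in zip(base_logits, extra_logits, gate_logits)]
```

This keeps classic fine-tuning as a special case (gate everywhere near zero) while letting the network spend its extra width only where the target task needs it.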
Affiliation(s)
- Yanlin Liu
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Wenhui Cui
- School of Computer Science and Technology, Xidian University, Xi'an, China
- Qing Ha
- Deepwise AI Lab, Beijing, China
- Xiangzhu Zeng
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Chuyang Ye
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China.
119
Image Anomaly Detection Using Normal Data Only by Latent Space Resampling. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10238660] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Detecting image anomalies automatically in industrial scenarios can improve economic efficiency, but the scarcity of anomalous samples increases the challenge of the task. Recently, autoencoders have been widely used in image anomaly detection without using anomalous images during training. However, it is hard to determine the proper dimensionality of the latent space, which often leads to unwanted reconstructions of the anomalous parts. To solve this problem, we propose a novel method based on the autoencoder. In this method, the latent space of the autoencoder is estimated using a discrete probability model. With the estimated probability model, the anomalous components in the latent space can be well excluded and undesirable reconstruction of the anomalous parts can be avoided. Specifically, we first adopt VQ-VAE as the reconstruction model to obtain a discrete latent space of normal samples. Then, PixelSNAIL, a deep autoregressive model, is used to estimate the probability model of the discrete latent space. In the detection stage, the autoregressive model determines the parts that deviate from the normal distribution in the input latent space. The deviating codes are then resampled from the normal distribution and decoded to yield a restored image, which is the closest to the anomalous input. The anomaly is then detected by comparing the difference between the restored image and the anomalous image. Our proposed method is evaluated on the high-resolution industrial inspection image dataset MVTec AD, which consists of 15 categories. The results show that the AUROC of the model improves by 15% over the plain autoencoder and also yields competitive performance compared with state-of-the-art methods.
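The detection stage described here, replacing latent codes the autoregressive prior deems unlikely and then decoding, can be sketched as follows. The one-dimensional codebook, the function names, and the greedy replacement rule are illustrative assumptions; the paper resamples from the learned prior rather than taking its most likely code.

```python
def resample_latents(codes, prior_probs, threshold=0.05):
    """Replace codes the autoregressive prior finds unlikely.

    codes: discrete VQ latent indices (flattened).
    prior_probs: per-position probability vector over the codebook, as the
    prior would produce. Unlikely codes are swapped for the prior's mode
    (a simplification of sampling from the prior).
    """
    restored = []
    for code, probs in zip(codes, prior_probs):
        if probs[code] < threshold:
            code = max(range(len(probs)), key=probs.__getitem__)
        restored.append(code)
    return restored

def anomaly_score(codes, restored, codebook):
    """L1 gap between decodings of original and restored latents.

    The toy 'decoder' is a scalar codebook lookup; a real VQ-VAE decoder
    would map codes back to image patches before differencing.
    """
    return sum(abs(codebook[a] - codebook[b]) for a, b in zip(codes, restored))
```

Positions where nothing was replaced contribute zero, so the score concentrates exactly on the regions the prior flagged as deviating from the normal-data distribution.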
120
Xing X, Yuan Y, Meng MQH. Zoom in Lesions for Better Diagnosis: Attention Guided Deformation Network for WCE Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4047-4059. [PMID: 32746146 DOI: 10.1109/tmi.2020.3010102] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Wireless capsule endoscopy (WCE) is a novel imaging tool that allows noninvasive visualization of the entire gastrointestinal (GI) tract without causing discomfort to patients. Convolutional neural networks (CNNs), though they perform favorably against traditional machine learning methods, show limited capacity in WCE image classification due to small lesions and background interference. To overcome these limits, we propose a two-branch Attention Guided Deformation Network (AGDN) for WCE image classification. Specifically, the attention maps of branch1 are utilized to guide the amplification of lesion regions on the input images of branch2, thus leading to better representation and inspection of the small lesions. Moreover, we devise and insert Third-order Long-range Feature Aggregation (TLFA) modules into the network. By capturing long-range dependencies and aggregating contextual features, TLFAs endow the network with a global contextual view and stronger feature representation and discrimination capability. Furthermore, we propose a novel Deformation based Attention Consistency (DAC) loss to refine the attention maps and achieve the mutual promotion of the two branches. Finally, the global feature embeddings from the two branches are fused to make image label predictions. Extensive experiments show that the proposed AGDN outperforms state-of-the-art methods with an overall classification accuracy of 91.29% on two public WCE datasets. The source code is available at https://github.com/hathawayxxh/WCE-AGDN.
121
Zhou Y, Chen H, Li Y, Liu Q, Xu X, Wang S, Yap PT, Shen D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med Image Anal 2020; 70:101918. [PMID: 33676100 DOI: 10.1016/j.media.2020.101918] [Citation(s) in RCA: 98] [Impact Index Per Article: 19.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 11/22/2020] [Accepted: 11/23/2020] [Indexed: 12/12/2022]
Abstract
Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly is able to improve the outcomes of both tasks. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results based on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
Affiliation(s)
- Yue Zhou
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China.
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
- Shu Wang
- Peking University People's Hospital, Beijing 100044, China
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
122
Qin Y, Liu Z, Liu C, Li Y, Zeng X, Ye C. Super-Resolved q-Space deep learning with uncertainty quantification. Med Image Anal 2020; 67:101885. [PMID: 33227600 DOI: 10.1016/j.media.2020.101885] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 10/12/2020] [Accepted: 10/16/2020] [Indexed: 11/17/2022]
Abstract
Diffusion magnetic resonance imaging (dMRI) provides a noninvasive method for measuring brain tissue microstructure. q-Space deep learning (q-DL) methods have been developed to accurately estimate tissue microstructure from dMRI scans acquired with a reduced number of diffusion gradients. In these methods, deep networks are trained to learn the mapping directly from diffusion signals to tissue microstructure. However, the quality of tissue microstructure estimation can be limited not only by the reduced number of diffusion gradients but also by the low spatial resolution of typical dMRI acquisitions. Therefore, in this work we extend q-DL to super-resolved tissue microstructure estimation and propose super-resolved q-DL (SR-q-DL), where deep networks are designed to map low-resolution diffusion signals undersampled in q-space to high-resolution tissue microstructure. Specifically, we use a patch-based strategy, where a deep network takes low-resolution patches of diffusion signals as input and outputs high-resolution tissue microstructure patches. The high-resolution patches are then combined to obtain the final high-resolution tissue microstructure map. Motivated by existing q-DL methods, we integrate the sparsity of diffusion signals in the network design, which comprises two functional components. The first component computes a sparse representation of the diffusion signals for the low-resolution input patch, and the second component maps the low-resolution sparse representation to high-resolution tissue microstructure. The weights in the two components are learned jointly and the trained network performs end-to-end tissue microstructure estimation. In addition to SR-q-DL, we further propose probabilistic SR-q-DL, which can quantify the uncertainty of the network output as well as achieve improved estimation accuracy. In probabilistic SR-q-DL, a deep ensemble strategy is used. Specifically, the deep network for SR-q-DL is revised to produce not only tissue microstructure estimates but also the uncertainty of the estimates. Then, multiple deep networks are trained and their results are fused for the final prediction of high-resolution tissue microstructure and uncertainty quantification. The proposed method was evaluated on two independent datasets of brain dMRI scans. Results indicate that our approach outperforms competing methods in terms of estimation accuracy. In addition, uncertainty measures provided by our method correlate with estimation errors, which indicates potential application of the proposed uncertainty quantification method in brain studies.
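The deep-ensemble fusion step described above, training several networks that each output an estimate and its variance and then combining them, has a standard closed form for the mixture mean and total variance. A minimal sketch (the function name and toy numbers are illustrative, not from the paper):

```python
def fuse_ensemble(means, variances):
    """Fuse per-model (mean, variance) predictions into one estimate.

    Treats the ensemble as an equal-weight Gaussian mixture:
      mu    = E[mu_m]
      var   = E[var_m + mu_m^2] - mu^2
    so the total variance captures both each model's predicted uncertainty
    and the disagreement between models.
    """
    m = len(means)
    mu = sum(means) / m
    var = sum(v + mu_i ** 2 for mu_i, v in zip(means, variances)) / m - mu ** 2
    return mu, var
```

Note that two models that agree exactly still contribute their predicted variances, while models that disagree inflate the fused uncertainty even if each is individually confident; this disagreement term is what lets the uncertainty correlate with estimation error.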
Affiliation(s)
- Yu Qin
- School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China
- Zhiwen Liu
- School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China
- Chenghao Liu
- School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China
- Yuxing Li
- School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China
- Xiangzhu Zeng
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Chuyang Ye
- School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China.
123
Ghesu FC, Georgescu B, Mansoor A, Yoo Y, Gibson E, Vishwanath RS, Balachandran A, Balter JM, Cao Y, Singh R, Digumarthy SR, Kalra MK, Grbic S, Comaniciu D. Quantifying and leveraging predictive uncertainty for medical image assessment. Med Image Anal 2020; 68:101855. [PMID: 33260116 DOI: 10.1016/j.media.2020.101855] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Revised: 08/21/2020] [Accepted: 09/14/2020] [Indexed: 11/19/2022]
Abstract
The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
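Sample rejection by predicted uncertainty, as evaluated in this abstract, amounts to scoring only the most-confident fraction of cases. A minimal sketch with accuracy standing in for ROC-AUC; the function name and toy data are invented for illustration:

```python
def accuracy_after_rejection(predictions, labels, uncertainties, reject_frac):
    """Score only the most-confident samples.

    Sorts samples by predicted uncertainty, drops the reject_frac most
    uncertain, and returns accuracy on the kept subset. If uncertainty
    tracks error, accuracy on the remainder should rise with reject_frac.
    """
    order = sorted(range(len(predictions)), key=lambda i: uncertainties[i])
    keep = order[: max(1, int(round(len(order) * (1.0 - reject_frac))))]
    correct = sum(predictions[i] == labels[i] for i in keep)
    return correct / len(keep)
```

Sweeping `reject_frac` produces the accuracy-vs-rejection curve implied by the abstract's "8% improvement at under 25% rejection" result; a flat curve would indicate that the uncertainty estimate carries no information about errors.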
Affiliation(s)
- Florin C Ghesu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA.
- Bogdan Georgescu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- Awais Mansoor
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- Youngjin Yoo
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- Eli Gibson
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- R S Vishwanath
- Siemens Healthineers, Digital Technology and Innovation, Bangalore, India
- James M Balter
- University of Michigan, Department of Radiation Oncology, Ann Arbor, MI, USA
- Yue Cao
- University of Michigan, Department of Radiation Oncology, Ann Arbor, MI, USA
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Sasa Grbic
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- Dorin Comaniciu
- Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
124
Integrating uncertainty in deep neural networks for MRI based stroke analysis. Med Image Anal 2020; 65:101790. [DOI: 10.1016/j.media.2020.101790] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Revised: 07/09/2020] [Accepted: 07/16/2020] [Indexed: 12/13/2022]
125
Kaka H, Zhang E, Khan N. Artificial Intelligence and Deep Learning in Neuroradiology: Exploring the New Frontier. Can Assoc Radiol J 2020; 72:35-44. [PMID: 32946272 DOI: 10.1177/0846537120954293] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
There have been many recently published studies exploring machine learning (ML) and deep learning applications within neuroradiology. The improvement in performance of these techniques has resulted in an ever-increasing number of commercially available tools for the neuroradiologist. In this narrative review, recent publications exploring ML in neuroradiology are assessed with a focus on several key clinical domains. In particular, major advances are reviewed in the context of: (1) intracranial hemorrhage detection, (2) stroke imaging, (3) intracranial aneurysm screening, (4) multiple sclerosis imaging, (5) neuro-oncology, (6) head and neck tumor imaging, and (7) spine imaging.
Affiliation(s)
- Hussam Kaka
- Department of Radiology, McMaster University, Hamilton, Ontario, Canada
- Euan Zhang
- Department of Radiology, McMaster University, Hamilton General Hospital, Hamilton, Ontario, Canada
- Nazir Khan
- Department of Radiology, McMaster University, Hamilton General Hospital, Hamilton, Ontario, Canada
126
Xue W, Guo T, Ni D. Left ventricle quantification with sample-level confidence estimation via Bayesian neural network. Comput Med Imaging Graph 2020; 84:101753. [PMID: 32755759 DOI: 10.1016/j.compmedimag.2020.101753] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 06/24/2020] [Accepted: 07/03/2020] [Indexed: 11/28/2022]
Abstract
Quantification of the cardiac left ventricle (LV) has become a hot topic due to its great significance in clinical practice. Many efforts have been devoted to LV quantification, obtaining promising performance with the help of various deep neural networks when validated on groups of samples. However, none of them can provide sample-level confidence in the results, i.e., how reliable the prediction is for a single sample, which would help clinicians decide whether or not to accept the automatic results. In this paper, we achieve this by introducing uncertainty analysis theory into our LV quantification network. Two types of uncertainty, model uncertainty and data uncertainty, are analyzed for the quantification performance and contribute to the sample-level confidence. Experiments with data from 145 subjects validate that our method not only improves quantification performance with an uncertainty-weighted regression loss but is also capable of providing, for each sample, the confidence level of the estimation results for clinicians' further consideration.
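An uncertainty-weighted regression loss of the kind mentioned is commonly realized as a heteroscedastic loss in which the network also predicts a per-sample log-variance: residuals are down-weighted where predicted data uncertainty is high, at the cost of a variance penalty. A sketch under that assumption, not necessarily the authors' exact formulation:

```python
import math

def uncertainty_weighted_loss(y_true, y_pred, log_var):
    """Heteroscedastic regression loss.

    Per sample: (y - y_hat)^2 / (2 * sigma^2) + (1/2) * log(sigma^2),
    parameterized by log_var = log(sigma^2) for numerical stability.
    The log-variance term stops the network from declaring everything
    uncertain just to zero out its residuals.
    """
    n = len(y_true)
    return sum((t - p) ** 2 * math.exp(-lv) + lv
               for t, p, lv in zip(y_true, y_pred, log_var)) / (2.0 * n)
```

The predicted `log_var` itself is what yields a sample-level confidence: a clinician-facing system can surface `exp(log_var)` alongside each automatic measurement.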
Affiliation(s)
- Wufeng Xue
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen, China
- Tingting Guo
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen, China.
127
128
Abrol A, Bhattarai M, Fedorov A, Du Y, Plis S, Calhoun V. Deep residual learning for neuroimaging: An application to predict progression to Alzheimer's disease. J Neurosci Methods 2020; 339:108701. [PMID: 32275915 PMCID: PMC7297044 DOI: 10.1016/j.jneumeth.2020.108701] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 01/03/2020] [Accepted: 03/25/2020] [Indexed: 01/22/2023]
Abstract
BACKGROUND The unparalleled performance of deep learning approaches in generic image processing has motivated its extension to neuroimaging data. These approaches learn abstract neuroanatomical and functional brain alterations that could enable exceptional performance in classification of brain disorders, predicting disease progression, and localizing brain abnormalities. NEW METHOD This work investigates the suitability of a modified form of deep residual neural networks (ResNet) for studying neuroimaging data in the specific application of predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Prediction was conducted first by training the deep models using MCI individuals only, followed by a domain transfer learning version that additionally trained on AD and controls. We also demonstrate a network occlusion based method to localize abnormalities. RESULTS The implemented framework captured non-linear features that successfully predicted AD progression and also conformed to the spectrum of various clinical scores. In a repeated cross-validated setup, the learnt predictive models showed highly similar peak activations that corresponded to previous AD reports. COMPARISON WITH EXISTING METHODS The implemented architecture achieved a significant performance improvement over the classical support vector machine and the stacked autoencoder frameworks (p < 0.005), numerically better than state-of-the-art performance using sMRI data alone (>7% higher than the second-best performing method) and within 1% of the state-of-the-art performance considering learning using multiple neuroimaging modalities as well. CONCLUSIONS The explored frameworks reflected the high potential of deep learning architectures in learning subtle predictive features and utility in critical applications such as predicting and understanding disease progression.
Affiliation(s)
- Anees Abrol
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA.
| | - Manish Bhattarai
- Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA; Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
| | - Alex Fedorov
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
| | - Yuhui Du
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; School of Computer and Information Technology, Shanxi University, Taiyuan, China
| | - Sergey Plis
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA
| | - Vince Calhoun
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
| |
|
129
|
Abstract
The use of intraoral ultrasound imaging has received great attention recently due to the benefits of being a portable and low-cost imaging solution for initial and continuing care that is noninvasive and free of ionizing radiation. Alveolar bone is an important structure in the periodontal apparatus to support the tooth. Accurate assessment of alveolar bone level is essential for periodontal diagnosis. However, interpretation of alveolar bone structure in ultrasound images is a challenge for clinicians. This work is aimed at automatically segmenting alveolar bone and locating the alveolar crest via a machine learning (ML) approach for intraoral ultrasound images. Three convolutional neural network–based ML methods were trained, validated, and tested with 700, 200, and 200 images, respectively. To improve the robustness of the ML algorithms, a data augmentation approach was introduced, where 2100 additional images were synthesized through vertical and horizontal shifting as well as horizontal flipping during the training process. Quantitative evaluations of 200 images, as compared with an expert clinician, showed that the best ML approach yielded an average Dice score of 85.3%, sensitivity of 88.5%, and specificity of 99.8%, and identified the alveolar crest with a mean difference of 0.20 mm and excellent reliability (intraclass correlation coefficient ≥0.98) in less than a second. This work demonstrated the potential use of ML to assist general dentists and specialists in the visualization of alveolar bone in ultrasound images.
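The evaluation metrics quoted above (Dice score, sensitivity, specificity) all reduce to overlap counts between the predicted and reference masks. A minimal sketch of their computation for binary masks, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Dice, sensitivity, and specificity for binary masks (values 0/1)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.sum(pred & ref)    # true positives: predicted and reference agree on bone
    fp = np.sum(pred & ~ref)   # false positives: predicted bone, reference background
    fn = np.sum(~pred & ref)   # false negatives: missed reference bone
    tn = np.sum(~pred & ~ref)  # true negatives: both background
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

pred = np.array([[1, 1, 0, 0]])
ref = np.array([[1, 0, 0, 0]])
dice, sens, spec = segmentation_metrics(pred, ref)  # 2/3, 1.0, 2/3
```

The very high specificity reported (99.8%) is typical when the structure of interest occupies a small fraction of the image, since the true-negative count dominates.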
|
130
|
Maglogiannis I, Iliadis L, Pimenidis E. Bridging the Gap Between AI and Healthcare Sides: Towards Developing Clinically Relevant AI-Powered Diagnosis Systems. ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS 2020; 584. [PMCID: PMC7256589 DOI: 10.1007/978-3-030-49186-4_27] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Despite the success of Convolutional Neural Network-based Computer-Aided Diagnosis research, its clinical applications remain challenging. Accordingly, developing medical Artificial Intelligence (AI) fitting into a clinical environment requires identifying and bridging the gap between the AI and Healthcare sides. Since the biggest problem in Medical Imaging lies in data paucity, confirming the clinical relevance for diagnosis of research-proven image augmentation techniques is essential. Therefore, we held a clinically valuable AI-envisioning workshop among Japanese Medical Imaging experts, physicians, and generalists in Healthcare/Informatics. Then, a questionnaire survey of physicians evaluated our pathology-aware Generative Adversarial Network (GAN)-based image augmentation projects in terms of Data Augmentation and physician training. The workshop revealed the intrinsic gap between the AI/Healthcare sides, along with solutions on Why (i.e., clinical significance/interpretation) and How (i.e., data acquisition, commercial deployment, and safety/feeling safe). This analysis confirms our pathology-aware GANs' clinical relevance as a clinical decision support system and non-expert physician training tool. Our findings could play a key role in connecting inter-disciplinary research and clinical applications, not limited to the Japanese medical context and pathology-aware GANs.
Affiliation(s)
| | - Lazaros Iliadis
- Department of Civil Engineering, Lab of Mathematics and Informatics (ISCE), Democritus University of Thrace, Xanthi, Greece
| | - Elias Pimenidis
- Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
| |
|
131
|
Carneiro G, Zorron Cheng Tao Pu L, Singh R, Burt A. Deep learning uncertainty and confidence calibration for the five-class polyp classification from colonoscopy. Med Image Anal 2020; 62:101653. [PMID: 32172037 DOI: 10.1016/j.media.2020.101653] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 01/12/2020] [Accepted: 01/16/2020] [Indexed: 12/27/2022]
|
132
|
Antico M, Sasazawa F, Dunnhofer M, Camps SM, Jaiprakash AT, Pandey AK, Crawford R, Carneiro G, Fontanarosa D. Deep Learning-Based Femoral Cartilage Automatic Segmentation in Ultrasound Imaging for Guidance in Robotic Knee Arthroscopy. ULTRASOUND IN MEDICINE & BIOLOGY 2020; 46:422-435. [PMID: 31767454 DOI: 10.1016/j.ultrasmedbio.2019.10.015] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 10/06/2019] [Accepted: 10/18/2019] [Indexed: 06/10/2023]
Abstract
Knee arthroscopy is a minimally invasive surgery used in the treatment of intra-articular knee pathology which may cause unintended damage to femoral cartilage. An ultrasound (US)-guided autonomous robotic platform for knee arthroscopy can be envisioned to minimise these risks and possibly to improve surgical outcomes. The first tool necessary for reliable guidance during robotic surgeries is an automatic segmentation algorithm to outline the regions at risk. In this work, we studied the feasibility of using a state-of-the-art deep neural network (UNet) to automatically segment femoral cartilage imaged with dynamic volumetric US (at a refresh rate of 1 Hz), under simulated surgical conditions. Six volunteers were scanned, resulting in the extraction of 18278 2-D US images from 35 dynamic 3-D US scans, which were manually labelled. The UNet was evaluated using five-fold cross-validation, with an average of 15531 training and 3124 testing labelled images per fold. An intra-observer study was performed to assess intra-observer variability due to inherent US physical properties. To account for this variability, a novel metric named the Dice coefficient with boundary uncertainty (DSCUB) was proposed and used to test the algorithm. The algorithm performed comparably to an experienced orthopaedic surgeon, with a DSCUB of 0.87. The proposed UNet has the potential to localise femoral cartilage in robotic knee arthroscopy with clinical accuracy.
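The abstract does not define DSCUB, but one plausible reading of a "Dice with boundary uncertainty" is an overlap score that forgives disagreement within a tolerance band around the reference boundary, so that intra-observer ambiguity at the cartilage edge is not penalised. The band construction below (one-pixel dilation minus erosion, pure NumPy) is an illustrative assumption, not the paper's definition:

```python
import numpy as np

def _dilate(mask):
    # One-step binary dilation via axis shifts (edges wrap; fine for a sketch).
    out = mask.copy()
    for ax in range(mask.ndim):
        out |= np.roll(mask, 1, ax) | np.roll(mask, -1, ax)
    return out

def dice_with_boundary_band(pred, ref):
    """Dice that ignores pixels in a one-pixel band around the reference
    boundary -- an illustrative stand-in for a boundary-tolerant Dice."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    erode = lambda m: ~_dilate(~m)
    band = _dilate(ref) & ~erode(ref)   # 'uncertain' band around the boundary
    keep = ~band                        # only score pixels outside the band
    tp = np.sum(pred & ref & keep)
    denom = np.sum(pred & keep) + np.sum(ref & keep)
    return 1.0 if denom == 0 else 2.0 * tp / denom

ref = np.zeros((6, 6), dtype=bool); ref[2:4, 2:4] = True
shifted = np.zeros((6, 6), dtype=bool); shifted[2:4, 3:5] = True  # off by one pixel
band_dice = dice_with_boundary_band(shifted, ref)  # 1.0: the shift falls inside the band
```

A plain Dice would score the one-pixel shift at 0.5 here; a boundary-tolerant variant treats it as agreement, which is the behaviour the intra-observer study motivates.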
Affiliation(s)
- M Antico
- School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia
| | - F Sasazawa
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, 060-8638, Japan
| | - M Dunnhofer
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, 33100, Italy
| | - S M Camps
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, the Netherlands; Oncology Solutions Department, Philips Research, 5656 AE Eindhoven, the Netherlands
| | - A T Jaiprakash
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Electrical Engineering, Computer Science, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
| | - A K Pandey
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Electrical Engineering, Computer Science, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
| | - R Crawford
- School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia
| | - G Carneiro
- Australian Institute for Machine Learning, School of Computer Science, the University of Adelaide, Adelaide, SA 5005, Australia
| | - D Fontanarosa
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD 4000, Australia.
| |
|
133
|
Liu Y, Yang G, Hosseiny M, Azadikhah A, Mirak SA, Miao Q, Raman SS, Sung K. Exploring Uncertainty Measures in Bayesian Deep Attentive Neural Networks for Prostate Zonal Segmentation. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:151817-151828. [PMID: 33564563 PMCID: PMC7869831 DOI: 10.1109/access.2020.3017168] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Automatic segmentation of prostatic zones on multiparametric MRI (mpMRI) can improve the diagnostic workflow of prostate cancer. We designed a spatial attentive Bayesian deep learning network for the automatic segmentation of the peripheral zone (PZ) and transition zone (TZ) of the prostate with uncertainty estimation. The proposed method was evaluated using internal and external independent testing datasets, and overall uncertainties of the proposed model were calculated at different prostate locations (apex, middle, and base). The study cohort included 351 MRI scans, of which 304 were retrieved from a de-identified publicly available dataset (PROSTATEX) and 47 were extracted from a large U.S. tertiary referral center (external testing dataset; ETD). All the PZ and TZ contours were drawn by research fellows under the supervision of expert genitourinary radiologists. Within the PROSTATEX dataset, 259 and 45 patients (internal testing dataset; ITD) were used to develop and validate the model. The model was then tested independently using the ETD only. Segmentation performance was evaluated using the Dice Similarity Coefficient (DSC). For PZ and TZ segmentation, the proposed method achieved mean DSCs of 0.80±0.05 and 0.89±0.04 on the ITD, as well as 0.79±0.06 and 0.87±0.07 on the ETD. For both PZ and TZ, there was no significant difference between the ITD and ETD for the proposed method. This DL-based method enabled accurate PZ and TZ segmentation, outperforming state-of-the-art methods (Deeplab V3+, Attention U-Net, R2U-Net, USE-Net and U-Net). We observed that segmentation uncertainty peaked at the junction between the PZ, TZ, and anterior fibromuscular stroma (AFS). The overall uncertainties were also highly consistent with the actual model performance between PZ and TZ at three clinically relevant locations of the prostate.
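Uncertainty in Bayesian deep networks of this kind is commonly estimated by Monte Carlo dropout: run several stochastic forward passes and treat the spread of the averaged class probabilities as uncertainty. A generic sketch with a stand-in stochastic predictor (the dropout-style noise model below is hypothetical, not the authors' network):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_uncertainty(stochastic_predict, x, n_samples=20):
    """Mean prediction and predictive entropy from repeated stochastic
    forward passes (Monte Carlo dropout style)."""
    probs = np.stack([stochastic_predict(x) for _ in range(n_samples)])
    mean = probs.mean(axis=0)  # per-class mean probability
    # Entropy of the mean prediction: high where the passes disagree
    # or the prediction is diffuse.
    entropy = -np.sum(mean * np.log(mean + 1e-12), axis=-1)
    return mean, entropy

# Stand-in predictor: logits randomly zeroed (dropout-like), then softmaxed.
def predict(x):
    logits = x * (rng.random(x.shape) > 0.2)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

mean, ent = mc_uncertainty(predict, np.array([2.0, 0.0, 0.0]))
```

Evaluating such an entropy map per voxel is one way to surface the kind of high-uncertainty regions the abstract reports at the PZ/TZ/AFS junction.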
Affiliation(s)
- Yongkai Liu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Guang Yang
- National Heart and Lung Institute, Imperial College London, South Kensington, London, UK, SW7 2AZ
| | - Melina Hosseiny
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Afshin Azadikhah
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Sohrab Afshari Mirak
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Qi Miao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Steven S. Raman
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| | - Kyunghyun Sung
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
| |
|