101
van Rooij W, Verbakel WF, Slotman BJ, Dahele M. Using Spatial Probability Maps to Highlight Potential Inaccuracies in Deep Learning-Based Contours: Facilitating Online Adaptive Radiation Therapy. Adv Radiat Oncol 2021; 6:100658. PMID: 33778184; PMCID: PMC7985281; DOI: 10.1016/j.adro.2021.100658.
Abstract
PURPOSE: Contouring organs at risk remains a largely manual task, which is time-consuming and prone to variation. Deep learning-based delineation (DLD) shows promise in terms of both quality and speed, but it does not yet perform perfectly, so manual checking of DLD output is still recommended. There are currently no commercial tools to focus attention on the areas of greatest uncertainty within a DLD contour. We therefore explore the use of spatial probability maps (SPMs) to improve the efficiency and reproducibility of DLD checking and correction, using the salivary glands as the paradigm. METHODS AND MATERIALS: A 3-dimensional fully convolutional network was trained with 315/264 parotid/submandibular glands. Subsequently, SPMs were created using Monte Carlo dropout (MCD). The method was boosted by placing a Gaussian distribution (GD) over the model's parameters during sampling (MCD + GD). MCD and MCD + GD were quantitatively compared and the SPMs were visually inspected. RESULTS: The addition of the GD appears to increase the method's ability to detect uncertainty. In general, this technique demonstrated uncertainty in areas that (1) have lower contrast, (2) are less consistently contoured by clinicians, and (3) deviate from the anatomic norm. CONCLUSIONS: We believe the integration of uncertainty information into contours made using DLD is an important step in highlighting where a contour may be less reliable. We have shown how SPMs are one way to achieve this and how they may be integrated into the online adaptive radiation therapy workflow.
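The core mechanism described above, repeated stochastic forward passes with dropout left active to build a voxel-wise probability map, can be sketched as follows; the toy network, sample count, and shapes are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: building a spatial probability map (SPM) with Monte Carlo dropout.
# The tiny network, sample count, and input shape are illustrative only.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p=0.5),  # kept stochastic at inference for MC dropout
            nn.Conv3d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))

def mc_dropout_spm(model, volume, n_samples=20):
    # model.train() keeps dropout stochastic; in practice one would enable
    # only the dropout layers, not e.g. batch-norm updates.
    model.train()
    with torch.no_grad():
        samples = torch.stack([model(volume) for _ in range(n_samples)])
    spm = samples.mean(0)         # voxel-wise foreground probability
    uncertainty = samples.var(0)  # high variance flags voxels for manual review
    return spm, uncertainty

spm, unc = mc_dropout_spm(TinySegNet(), torch.randn(1, 1, 16, 16, 16))
```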
Affiliation(s)
- Ward van Rooij, Wilko F. Verbakel, Berend J. Slotman, Max Dahele: Department of Radiation Oncology, Amsterdam UMC, Vrije Universiteit Amsterdam, Cancer Center Amsterdam, Amsterdam, The Netherlands
102
Kläser K, Borges P, Shaw R, Ranzini M, Modat M, Atkinson D, Thielemans K, Hutton B, Goh V, Cook G, Cardoso MJ, Ourselin S. A multi-channel uncertainty-aware multi-resolution network for MR to CT synthesis. Appl Sci (Basel) 2021; 11:1667. PMID: 33763236; PMCID: PMC7610395; DOI: 10.3390/app11041667.
Abstract
Synthesising computed tomography (CT) images from magnetic resonance (MR) images plays an important role in medical image analysis, for both quantification and diagnostic purposes. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory, with many challenges including large image size and limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose an uncertainty-aware multi-channel multi-resolution 3D cascade network aimed specifically at whole-body MR to CT synthesis. The mean absolute error of the synthetic CT generated with the MultiRes unc network (73.90 HU) is lower than that of multiple baseline CNNs, including 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU) and deep boosted regression (77.58 HU), demonstrating superior synthesis performance. We further exploit the extrapolation properties of the MultiRes networks on sub-regions of the body.
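One common way to make a synthesis network uncertainty-aware is to predict a per-voxel scale alongside the intensity and train with a heteroscedastic likelihood; the sketch below shows that generic loss only, not the MultiRes architecture or its exact formulation.

```python
# Sketch: heteroscedastic regression loss often used for uncertainty-aware
# image synthesis (per-voxel mean and log-variance heads). Generic recipe,
# assumed here rather than taken from the paper.
import torch

def heteroscedastic_l1(pred_mean, pred_log_var, target):
    # Laplace negative log-likelihood: the absolute error is scaled by the
    # predicted uncertainty, and the log-variance term penalises overestimation.
    precision = torch.exp(-pred_log_var)
    return (precision * (pred_mean - target).abs() + pred_log_var).mean()

mean = torch.randn(2, 1, 8, 8, 8, requires_grad=True)
log_var = torch.zeros(2, 1, 8, 8, 8, requires_grad=True)
loss = heteroscedastic_l1(mean, log_var, torch.randn(2, 1, 8, 8, 8))
loss.backward()
```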
Affiliation(s)
- Kerstin Kläser, Pedro Borges, Richard Shaw, Marta Ranzini: Dept. Medical Physics & Biomedical Engineering, University College London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Marc Modat, Vicky Goh, Gary Cook, M Jorge Cardoso, Sébastien Ourselin: School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- David Atkinson: Centre for Medical Imaging, University College London, UK
- Kris Thielemans, Brian Hutton: Institute of Nuclear Medicine, University College London, UK
103
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Uncertainty Aware Temporal-Ensembling Model for Semi-Supervised ABUS Mass Segmentation. IEEE Trans Med Imaging 2021; 40:431-443. PMID: 33021936; DOI: 10.1109/tmi.2020.3029161.
Abstract
Accurate breast mass segmentation of automated breast ultrasound (ABUS) images plays a crucial role in 3D breast reconstruction, which can assist radiologists in surgery planning. Although convolutional neural networks have great potential for breast mass segmentation owing to the remarkable progress of deep learning, the lack of annotated data limits the performance of deep CNNs. In this article, we present an uncertainty-aware temporal ensembling (UATE) model for semi-supervised ABUS mass segmentation. Specifically, a temporal ensembling segmentation (TEs) model is designed to segment breast masses using a few labeled images and a large number of unlabeled images. Because the network output contains both correct and unreliable predictions, treating every prediction equally in pseudo-label updates and loss calculation may degrade network performance. To alleviate this problem, an uncertainty map is estimated for each image; an adaptive ensembling momentum map and an uncertainty-aware unsupervised loss are then designed and integrated with the TEs model. The effectiveness of the proposed UATE model is verified mainly on an ABUS dataset of 107 patients with 170 volumes, including 13,382 labeled 2D slices. The Jaccard index (JI), Dice similarity coefficient (DSC), pixel-wise accuracy (AC) and Hausdorff distance (HD) of the proposed method on the testing set are 63.65%, 74.25%, 99.21% and 3.81 mm, respectively. Experimental results demonstrate that our semi-supervised method outperforms the fully supervised method and achieves promising results compared with existing semi-supervised methods.
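The central idea, down-weighting unreliable pixels in the unsupervised consistency term, can be illustrated generically; the exponential gating below is an assumed form, not the paper's exact loss.

```python
# Sketch: uncertainty-aware unsupervised consistency loss for temporal
# ensembling. The exponential down-weighting is an illustrative choice.
import torch

def uncertainty_aware_consistency(student_prob, teacher_prob, uncertainty):
    # uncertainty: per-pixel estimate (e.g., variance or entropy of sampled
    # predictions); high uncertainty shrinks the weight so unreliable
    # pseudo-labels contribute less to the loss.
    weight = torch.exp(-uncertainty)
    return (weight * (student_prob - teacher_prob) ** 2).mean()

s = torch.rand(4, 1, 32, 32, requires_grad=True)  # student prediction
t = torch.rand(4, 1, 32, 32)                      # temporal-ensemble pseudo label
u = torch.rand(4, 1, 32, 32)                      # estimated uncertainty map
uncertainty_aware_consistency(s, t, u).backward()
```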
104
Tanno R, Worrall DE, Kaden E, Ghosh A, Grussu F, Bizzi A, Sotiropoulos SN, Criminisi A, Alexander DC. Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI. Neuroimage 2021; 225:117366. DOI: 10.1016/j.neuroimage.2020.117366.
105
Liu Y, Cui W, Ha Q, Xiong X, Zeng X, Ye C. Knowledge transfer between brain lesion segmentation tasks with increased model capacity. Comput Med Imaging Graph 2020; 88:101842. PMID: 33387812; DOI: 10.1016/j.compmedimag.2020.101842.
Abstract
Convolutional neural networks (CNNs) have become an increasingly popular tool for brain lesion segmentation in recent years due to their accuracy and efficiency. However, CNN-based brain lesion segmentation generally requires a large amount of annotated training data, which can be costly for medical imaging; in many scenarios, only a few annotations of brain lesions are available. One common strategy for addressing limited annotated data is to transfer knowledge from a different yet relevant source task, where training data are abundant, to the target task of interest. Typically, a model is pretrained for the source task and then fine-tuned with the scarce training data associated with the target task. However, classic fine-tuning tends to make only small modifications to the pretrained model, which can hinder its adaptation to the target task. Fine-tuning with increased model capacity has been shown to alleviate this negative impact in image classification problems. In this work, we extend the strategy of fine-tuning with increased model capacity to brain lesion segmentation, and then develop an advanced version better suited to segmentation problems. First, we propose a vanilla strategy of increasing capacity, where, as in the classification problem, the width of the network is augmented during fine-tuning. Second, because in segmentation problems each voxel is associated with a labeling result, unlike in image classification, we further develop a spatially adaptive augmentation strategy during fine-tuning: in addition to the vanilla width augmentation, we incorporate a module that computes a spatial map of the contribution of the width-augmented features to the final segmentation. For demonstration, the proposed method was applied to ischemic stroke lesion segmentation, where a model pretrained for brain tumor segmentation was fine-tuned, and the experimental results indicate the benefit of our method.
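A minimal sketch of widening a pretrained layer and gating the new features with a spatial contribution map follows; the module shapes, the sigmoid gate, and the 1x1 mixing convolution are illustrative assumptions, not the paper's exact design.

```python
# Sketch: fine-tuning with increased capacity. A pretrained conv layer is
# widened by a parallel branch, and a 1x1 "spatial map" gates the widened
# features per voxel. Shapes and gating are assumptions for illustration.
import torch
import torch.nn as nn

class WidenedConv(nn.Module):
    def __init__(self, pretrained_conv, extra_channels=8):
        super().__init__()
        self.base = pretrained_conv  # pretrained on the source task (optionally frozen)
        out_c = pretrained_conv.out_channels
        self.extra = nn.Conv3d(pretrained_conv.in_channels, extra_channels, 3, padding=1)
        self.gate = nn.Conv3d(extra_channels, 1, 1)  # spatial contribution map
        self.mix = nn.Conv3d(out_c + extra_channels, out_c, 1)

    def forward(self, x):
        e = self.extra(x)
        g = torch.sigmoid(self.gate(e))  # where the widened features should contribute
        return self.mix(torch.cat([self.base(x), g * e], dim=1))

layer = WidenedConv(nn.Conv3d(1, 16, 3, padding=1))
out = layer(torch.randn(1, 1, 16, 16, 16))
```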
Affiliation(s)
- Yanlin Liu, Chuyang Ye: School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Wenhui Cui: School of Computer Science and Technology, Xidian University, Xi'an, China
- Qing Ha: Deepwise AI Lab, Beijing, China
- Xiangzhu Zeng: Department of Radiology, Peking University Third Hospital, Beijing, China
106
Image Anomaly Detection Using Normal Data Only by Latent Space Resampling. Appl Sci (Basel) 2020. DOI: 10.3390/app10238660.
Abstract
Detecting image anomalies automatically in industrial scenarios can improve economic efficiency, but the scarcity of anomalous samples increases the difficulty of the task. Recently, autoencoders have been widely used for image anomaly detection without using anomalous images during training. However, it is hard to determine the proper dimensionality of the latent space, and this often leads to unwanted reconstruction of the anomalous parts. To solve this problem, we propose a novel autoencoder-based method in which the latent space is estimated using a discrete probability model. With the estimated probability model, anomalous components in the latent space can be excluded and undesirable reconstruction of the anomalous parts can be avoided. Specifically, we first adopt VQ-VAE as the reconstruction model to obtain a discrete latent space of normal samples. Then, PixelSNAIL, a deep autoregressive model, is used to estimate the probability model of the discrete latent space. In the detection stage, the autoregressive model identifies the parts of the input latent space that deviate from the normal distribution. The deviating codes are then resampled from the normal distribution and decoded to yield a restored image that is closest to the anomalous input. The anomaly is detected by comparing the difference between the restored image and the anomalous image. Our proposed method is evaluated on MVTec AD, a high-resolution industrial inspection image dataset consisting of 15 categories. The results show that the AUROC of the model improves by 15% over the plain autoencoder and yields competitive performance compared with state-of-the-art methods.
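The detection stage can be sketched as follows, with stand-in stubs for the trained prior and decoder; the codebook size, likelihood threshold, and uniform stub prior are illustrative assumptions.

```python
# Sketch of the detection stage: latent codes whose prior likelihood is low
# are treated as anomalous, resampled from the prior, and decoded to form a
# restored image. The prior here is a stub, not a trained VQ-VAE/PixelSNAIL.
import numpy as np

rng = np.random.default_rng(0)
K = 16                                    # codebook size (assumed)
codes = rng.integers(0, K, size=(8, 8))   # discrete latent grid from the encoder

def prior_probs(codes):
    # Stub for the autoregressive prior: per-position distribution over codes.
    return np.full(codes.shape + (K,), 1.0 / K)

probs = prior_probs(codes)
likelihood = np.take_along_axis(probs, codes[..., None], axis=-1)[..., 0]
anomalous = likelihood < 0.05             # threshold is an illustrative choice

# Resample anomalous positions from the prior distribution.
resampled = codes.copy()
for i, j in zip(*np.nonzero(anomalous)):
    resampled[i, j] = rng.choice(K, p=probs[i, j])
# decode(resampled) would yield the restored image; the anomaly score is the
# difference between the restored image and the input.
```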
107
Xing X, Yuan Y, Meng MQH. Zoom in Lesions for Better Diagnosis: Attention Guided Deformation Network for WCE Image Classification. IEEE Trans Med Imaging 2020; 39:4047-4059. PMID: 32746146; DOI: 10.1109/tmi.2020.3010102.
Abstract
Wireless capsule endoscopy (WCE) is a novel imaging tool that allows noninvasive visualization of the entire gastrointestinal (GI) tract without causing discomfort to patients. Convolutional neural networks (CNNs), though they perform favorably against traditional machine learning methods, show limited capacity in WCE image classification due to small lesions and background interference. To overcome these limitations, we propose a two-branch Attention Guided Deformation Network (AGDN) for WCE image classification. Specifically, the attention maps of branch 1 are used to guide the amplification of lesion regions on the input images of branch 2, leading to better representation and inspection of small lesions. Moreover, we devise and insert Third-order Long-range Feature Aggregation (TLFA) modules into the network. By capturing long-range dependencies and aggregating contextual features, TLFAs endow the network with a global contextual view and stronger feature representation and discrimination capability. Furthermore, we propose a novel Deformation-based Attention Consistency (DAC) loss to refine the attention maps and achieve mutual promotion of the two branches. Finally, the global feature embeddings from the two branches are fused to make image label predictions. Extensive experiments show that the proposed AGDN outperforms state-of-the-art methods with an overall classification accuracy of 91.29% on two public WCE datasets. The source code is available at https://github.com/hathawayxxh/WCE-AGDN.
108
Zhou Y, Chen H, Li Y, Liu Q, Xu X, Wang S, Yap PT, Shen D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med Image Anal 2020; 70:101918. PMID: 33676100; DOI: 10.1016/j.media.2020.101918.
Abstract
Tumor classification and segmentation are two important tasks in computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results based on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
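A joint objective of this kind is typically a weighted sum of a segmentation term and a classification term; the Dice-plus-cross-entropy combination below is a generic sketch, not the paper's exact loss.

```python
# Sketch: joint loss for multi-task segmentation + classification. The
# Dice/cross-entropy combination and the weighting are generic assumptions.
import torch
import torch.nn.functional as F

def dice_loss(prob, target, eps=1e-6):
    # prob: predicted foreground probabilities; target: binary mask.
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def multitask_loss(seg_prob, seg_target, cls_logits, cls_target, alpha=0.5):
    return dice_loss(seg_prob, seg_target) + alpha * F.cross_entropy(cls_logits, cls_target)

loss = multitask_loss(
    torch.rand(2, 1, 16, 16, 16),
    torch.randint(0, 2, (2, 1, 16, 16, 16)).float(),
    torch.randn(2, 2),            # benign/malignant logits
    torch.randint(0, 2, (2,)),
)
```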
Affiliation(s)
- Yue Zhou: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Houjin Chen, Yanfeng Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Qin Liu, Xuanang Xu, Pew-Thian Yap: Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA
- Shu Wang: Peking University People's Hospital, Beijing 100044, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
109
Qin Y, Liu Z, Liu C, Li Y, Zeng X, Ye C. Super-Resolved q-Space deep learning with uncertainty quantification. Med Image Anal 2020; 67:101885. PMID: 33227600; DOI: 10.1016/j.media.2020.101885.
Abstract
Diffusion magnetic resonance imaging (dMRI) provides a noninvasive method for measuring brain tissue microstructure. q-Space deep learning (q-DL) methods have been developed to accurately estimate tissue microstructure from dMRI scans acquired with a reduced number of diffusion gradients. In these methods, deep networks are trained to learn the mapping directly from diffusion signals to tissue microstructure. However, the quality of tissue microstructure estimation can be limited not only by the reduced number of diffusion gradients but also by the low spatial resolution of typical dMRI acquisitions. Therefore, in this work we extend q-DL to super-resolved tissue microstructure estimation and propose super-resolved q-DL (SR-q-DL), where deep networks are designed to map low-resolution diffusion signals undersampled in q-space to high-resolution tissue microstructure. Specifically, we use a patch-based strategy, where a deep network takes low-resolution patches of diffusion signals as input and outputs high-resolution tissue microstructure patches. The high-resolution patches are then combined to obtain the final high-resolution tissue microstructure map. Motivated by existing q-DL methods, we integrate the sparsity of diffusion signals in the network design, which comprises two functional components: the first computes a sparse representation of the diffusion signals in the low-resolution input patch, and the second maps the low-resolution sparse representation to high-resolution tissue microstructure. The weights in the two components are learned jointly, and the trained network performs end-to-end tissue microstructure estimation. In addition to SR-q-DL, we further propose probabilistic SR-q-DL, which can quantify the uncertainty of the network output as well as achieve improved estimation accuracy. Probabilistic SR-q-DL uses a deep ensemble strategy: the SR-q-DL network is revised to produce not only tissue microstructure estimates but also their uncertainty, and multiple deep networks are trained and their results fused for the final prediction of high-resolution tissue microstructure and uncertainty quantification. The proposed method was evaluated on two independent datasets of brain dMRI scans. Results indicate that our approach outperforms competing methods in terms of estimation accuracy. In addition, the uncertainty measures provided by our method correlate with estimation errors, indicating potential application of the proposed uncertainty quantification method in brain studies.
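The deep-ensemble fusion step can be sketched with the standard mixture-moment formulas (mean of member means; total variance from within-member and between-member terms); treating this as the paper's exact fusion rule is an assumption.

```python
# Sketch: deep-ensemble fusion when each member predicts a mean and a
# variance, following the standard mixture-moment formulas
# (Lakshminarayanan et al.), assumed here rather than taken from the paper.
import torch

def fuse_ensemble(means, variances):
    # means/variances: (M, ...) stacked over M ensemble members.
    mu = means.mean(0)
    # Total predictive variance: average member variance plus spread of means.
    var = (variances + means ** 2).mean(0) - mu ** 2
    return mu, var

means = torch.randn(5, 1, 32, 32)      # e.g., 5 members, one microstructure map
variances = torch.rand(5, 1, 32, 32)
mu, var = fuse_ensemble(means, variances)
```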
Affiliation(s)
- Yu Qin, Zhiwen Liu, Chenghao Liu, Yuxing Li, Chuyang Ye: School of Information and Electronics, Beijing Institute of Technology, 5 Zhongguancun South Street, Beijing, China
- Xiangzhu Zeng: Department of Radiology, Peking University Third Hospital, Beijing, China
110
Ghesu FC, Georgescu B, Mansoor A, Yoo Y, Gibson E, Vishwanath RS, Balachandran A, Balter JM, Cao Y, Singh R, Digumarthy SR, Kalra MK, Grbic S, Comaniciu D. Quantifying and leveraging predictive uncertainty for medical image assessment. Med Image Anal 2020; 68:101855. PMID: 33260116; DOI: 10.1016/j.media.2020.101855.
Abstract
The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions and limited contrast. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities, largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D ultrasound images, where the anatomical context captured in a frame is often not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of the underlying models to adapt to limited information and a high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure that captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity of medical images from different radiologic exams, including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that uncertainty-driven bootstrapping to filter the training data achieves a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
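Uncertainty-based sample rejection of this kind can be sketched as follows, using a rank-based ROC-AUC estimate and synthetic placeholder data; the rejection quantile is an illustrative choice, and the random placeholder uncertainties will not reproduce the paper's gains.

```python
# Sketch: rejecting the most uncertain samples before scoring. AUC is the
# rank-based (Mann-Whitney) estimate; all data are synthetic placeholders.
import numpy as np

def roc_auc(labels, scores):
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
scores = labels + rng.normal(0, 1.0, 1000)            # imperfect classifier scores
uncertainty = rng.random(1000)                        # stand-in uncertainty estimates
keep = uncertainty < np.quantile(uncertainty, 0.75)   # reject the top 25%
print(roc_auc(labels, scores), roc_auc(labels[keep], scores[keep]))
```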
Affiliation(s)
- Florin C Ghesu, Bogdan Georgescu, Awais Mansoor, Youngjin Yoo, Eli Gibson, Sasa Grbic, Dorin Comaniciu: Siemens Healthineers, Digital Technology and Innovation, Princeton, NJ, USA
- R S Vishwanath: Siemens Healthineers, Digital Technology and Innovation, Bangalore, India
- James M Balter, Yue Cao: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Ramandeep Singh, Subba R Digumarthy, Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
111
Integrating uncertainty in deep neural networks for MRI based stroke analysis. Med Image Anal 2020; 65:101790. DOI: 10.1016/j.media.2020.101790.
112
Kaka H, Zhang E, Khan N. Artificial Intelligence and Deep Learning in Neuroradiology: Exploring the New Frontier. Can Assoc Radiol J 2020; 72:35-44. PMID: 32946272; DOI: 10.1177/0846537120954293.
Abstract
There have been many recently published studies exploring machine learning (ML) and deep learning applications within neuroradiology. The improvement in performance of these techniques has resulted in an ever-increasing number of commercially available tools for the neuroradiologist. In this narrative review, recent publications exploring ML in neuroradiology are assessed with a focus on several key clinical domains. In particular, major advances are reviewed in the context of: (1) intracranial hemorrhage detection, (2) stroke imaging, (3) intracranial aneurysm screening, (4) multiple sclerosis imaging, (5) neuro-oncology, (6) head and neck tumor imaging, and (7) spine imaging.
Affiliation(s)
- Hussam Kaka: Department of Radiology, McMaster University, Hamilton, Ontario, Canada
- Euan Zhang, Nazir Khan: Department of Radiology, McMaster University, Hamilton General Hospital, Hamilton, Ontario, Canada
113
Xue W, Guo T, Ni D. Left ventricle quantification with sample-level confidence estimation via Bayesian neural network. Comput Med Imaging Graph 2020; 84:101753. PMID: 32755759; DOI: 10.1016/j.compmedimag.2020.101753.
Abstract
Quantification of the cardiac left ventricle (LV) has become a hot topic due to its great significance in clinical practice. Many efforts have been devoted to LV quantification, obtaining promising performance with the help of various deep neural networks when validated on groups of samples. However, none of these methods can provide sample-level confidence in the results, i.e., how reliable the prediction is for one single sample, which would help clinicians decide whether or not to accept the automatic result. In this paper, we achieve this by introducing uncertainty analysis theory into our LV quantification network. Two types of uncertainty, model uncertainty and data uncertainty, are analyzed for the quantification performance and contribute to the sample-level confidence. Experiments with data from 145 subjects validate that our method not only improves quantification performance with an uncertainty-weighted regression loss, but also provides, for each sample, a confidence level in the estimation results for clinicians' further consideration.
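The model/data uncertainty split mentioned above is commonly computed from Monte Carlo dropout passes of a network that outputs both a mean and a variance; the decomposition below follows that standard recipe and is assumed, not taken from the paper's exact formulation.

```python
# Sketch: separating model (epistemic) and data (aleatoric) uncertainty for
# regression outputs via stochastic forward passes. The sampling network is
# assumed, not the paper's LV quantification model.
import torch

def mc_uncertainty(sample_means, sample_vars):
    # sample_means/sample_vars: (T, ...) from T stochastic forward passes
    # that each output a predicted mean and a predicted (data) variance.
    model_unc = sample_means.var(0)  # spread across passes -> model uncertainty
    data_unc = sample_vars.mean(0)   # average predicted noise -> data uncertainty
    return model_unc, data_unc

T = 30
means = torch.randn(T, 8)   # e.g., 8 LV indices per sample
vars_ = torch.rand(T, 8)
model_unc, data_unc = mc_uncertainty(means, vars_)
```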
Affiliation(s)
- Wufeng Xue, Tingting Guo, Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen, China
114
115
Abrol A, Bhattarai M, Fedorov A, Du Y, Plis S, Calhoun V. Deep residual learning for neuroimaging: An application to predict progression to Alzheimer's disease. J Neurosci Methods 2020; 339:108701. PMID: 32275915; PMCID: PMC7297044; DOI: 10.1016/j.jneumeth.2020.108701.
Abstract
BACKGROUND: The unparalleled performance of deep learning approaches in generic image processing has motivated their extension to neuroimaging data. These approaches learn abstract neuroanatomical and functional brain alterations that could enable exceptional performance in the classification of brain disorders, prediction of disease progression, and localization of brain abnormalities. NEW METHOD: This work investigates the suitability of a modified form of deep residual neural networks (ResNet) for studying neuroimaging data in the specific application of predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Prediction was conducted first by training the deep models using MCI individuals only, followed by a domain-transfer-learning version that additionally trained on AD patients and controls. We also demonstrate a network-occlusion-based method to localize abnormalities. RESULTS: The implemented framework captured non-linear features that successfully predicted AD progression and also conformed to the spectrum of various clinical scores. In a repeated cross-validated setup, the learned predictive models showed highly similar peak activations that corresponded to previous AD reports. COMPARISON WITH EXISTING METHODS: The implemented architecture achieved a significant performance improvement over the classical support vector machine and stacked autoencoder frameworks (p < 0.005); it was numerically better than the state-of-the-art performance using sMRI data alone (more than 7% above the second-best performing method) and within 1% of the state-of-the-art performance when learning from multiple neuroimaging modalities is considered. CONCLUSIONS: The explored frameworks reflect the high potential of deep learning architectures for learning subtle predictive features, and their utility in critical applications such as predicting and understanding disease progression.
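Network-occlusion localization slides a masked patch over the input and records the drop in the model's predicted probability; the sketch below uses a toy model, and the patch size and stride are assumed values.

```python
# Sketch: network-occlusion localization. Regions whose masking causes a
# large drop in the prediction are the ones the model relies on.
import torch

def occlusion_map(model, volume, patch=8, stride=8):
    base = model(volume).item()
    _, _, D, H, W = volume.shape
    heat = torch.zeros(D // stride, H // stride, W // stride)
    for i, z in enumerate(range(0, D - patch + 1, stride)):
        for j, y in enumerate(range(0, H - patch + 1, stride)):
            for k, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = volume.clone()
                occluded[..., z:z + patch, y:y + patch, x:x + patch] = 0
                heat[i, j, k] = base - model(occluded).item()
    return heat  # large values mark regions important to the prediction

# Toy stand-in for a trained classifier returning a probability.
model = lambda v: torch.sigmoid(v.mean()).reshape(1)
heat = occlusion_map(model, torch.randn(1, 1, 32, 32, 32))
```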
Affiliation(s)
- Anees Abrol, Alex Fedorov, Vince Calhoun: Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM 87131, USA
- Manish Bhattarai: Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM 87131, USA; Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Yuhui Du: Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM 87106, USA; School of Computer and Information Technology, Shanxi University, Taiyuan, China
- Sergey Plis: Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM 87106, USA
116
Abstract
The use of intraoral ultrasound imaging has received great attention recently due to the benefits of being a portable and low-cost imaging solution for initial and continuing care that is noninvasive and free of ionizing radiation. Alveolar bone is an important structure in the periodontal apparatus to support the tooth. Accurate assessment of alveolar bone level is essential for periodontal diagnosis. However, interpretation of alveolar bone structure in ultrasound images is a challenge for clinicians. This work is aimed at automatically segmenting alveolar bone and locating the alveolar crest via a machine learning (ML) approach for intraoral ultrasound images. Three convolutional neural network–based ML methods were trained, validated, and tested with 700, 200, and 200 images, respectively. To improve the robustness of the ML algorithms, a data augmentation approach was introduced, where 2100 additional images were synthesized through vertical and horizontal shifting as well as horizontal flipping during the training process. Quantitative evaluations of 200 images, as compared with an expert clinician, showed that the best ML approach yielded an average Dice score of 85.3%, sensitivity of 88.5%, and specificity of 99.8%, and identified the alveolar crest with a mean difference of 0.20 mm and excellent reliability (intraclass correlation coefficient ≥0.98) in less than a second. This work demonstrated the potential use of ML to assist general dentists and specialists in the visualization of alveolar bone in ultrasound images.
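The shift-and-flip augmentation described above can be sketched as follows; note that torch.roll wraps pixels around rather than zero-padding, and the shift range and flip probability are illustrative assumptions.

```python
# Sketch: shift/flip augmentation applied jointly to an image and its mask.
# Parameters are illustrative; zero-padded shifting is a common alternative
# to the wrap-around behaviour of torch.roll.
import torch

def augment(image, mask, max_shift=10):
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,))
    image = torch.roll(image, shifts=(int(dy), int(dx)), dims=(-2, -1))
    mask = torch.roll(mask, shifts=(int(dy), int(dx)), dims=(-2, -1))
    if torch.rand(1) < 0.5:  # horizontal flip
        image = torch.flip(image, dims=[-1])
        mask = torch.flip(mask, dims=[-1])
    return image, mask

img, msk = augment(torch.randn(1, 1, 128, 128), torch.zeros(1, 1, 128, 128))
```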
117
Maglogiannis I, Iliadis L, Pimenidis E. Bridging the Gap Between AI and Healthcare Sides: Towards Developing Clinically Relevant AI-Powered Diagnosis Systems. Artificial Intelligence Applications and Innovations 2020; 584. PMCID: PMC7256589; DOI: 10.1007/978-3-030-49186-4_27.
Abstract
Despite the success of convolutional neural network-based computer-aided diagnosis research, its clinical applications remain challenging. Accordingly, developing medical Artificial Intelligence (AI) that fits into a clinical environment requires identifying and bridging the gap between the AI and healthcare sides. Since the biggest problem in medical imaging lies in data paucity, it is essential to confirm the clinical relevance for diagnosis of research-proven image augmentation techniques. Therefore, we held a workshop on clinically valuable AI among Japanese medical imaging experts, physicians, and generalists in healthcare/informatics. A questionnaire survey of physicians then evaluated our pathology-aware Generative Adversarial Network (GAN)-based image augmentation projects in terms of data augmentation and physician training. The workshop revealed the intrinsic gap between the AI and healthcare sides, along with solutions concerning why (i.e., clinical significance/interpretation) and how (i.e., data acquisition, commercial deployment, and safety/feeling safe). This analysis confirms the clinical relevance of our pathology-aware GANs as a clinical decision support system and a non-expert physician training tool. Our findings should play a key role in connecting interdisciplinary research and clinical applications, not limited to the Japanese medical context and pathology-aware GANs.
Affiliation(s)
- Lazaros Iliadis: Department of Civil Engineering, Lab of Mathematics and Informatics (ISCE), Democritus University of Thrace, Xanthi, Greece
- Elias Pimenidis: Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK
118
Carneiro G, Zorron Cheng Tao Pu L, Singh R, Burt A. Deep learning uncertainty and confidence calibration for the five-class polyp classification from colonoscopy. Med Image Anal 2020; 62:101653. DOI: 10.1016/j.media.2020.101653.
119
Antico M, Sasazawa F, Dunnhofer M, Camps SM, Jaiprakash AT, Pandey AK, Crawford R, Carneiro G, Fontanarosa D. Deep Learning-Based Femoral Cartilage Automatic Segmentation in Ultrasound Imaging for Guidance in Robotic Knee Arthroscopy. Ultrasound Med Biol 2020; 46:422-435. PMID: 31767454; DOI: 10.1016/j.ultrasmedbio.2019.10.015.
Abstract
Knee arthroscopy is a minimally invasive surgery used to treat intra-articular knee pathology; it may cause unintended damage to femoral cartilage. An ultrasound (US)-guided autonomous robotic platform for knee arthroscopy can be envisioned to minimise these risks and possibly improve surgical outcomes. The first tool necessary for reliable guidance during robotic surgery is an automatic segmentation algorithm that outlines the regions at risk. In this work, we studied the feasibility of using a state-of-the-art deep neural network (UNet) to automatically segment femoral cartilage imaged with dynamic volumetric US (at a refresh rate of 1 Hz) under simulated surgical conditions. Six volunteers were scanned, yielding 18,278 2-D US images extracted from 35 dynamic 3-D US scans, which were manually labelled. The UNet was evaluated using five-fold cross-validation with an average of 15,531 training and 3,124 testing labelled images per fold. An intra-observer study was performed to assess intra-observer variability arising from inherent US physical properties. To account for this variability, a novel metric, the Dice coefficient with boundary uncertainty (DSCUB), was proposed and used to test the algorithm. The algorithm performed comparably to an experienced orthopaedic surgeon, with a DSCUB of 0.87. The proposed UNet has the potential to localise femoral cartilage in robotic knee arthroscopy with clinical accuracy.
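One plausible reading of a Dice coefficient with boundary uncertainty is to treat disagreements inside a tolerance band around the reference boundary as correct; the band construction and width below are assumptions for illustration, not the paper's definition of DSCUB.

```python
# Sketch of a Dice variant that tolerates disagreement inside a boundary
# uncertainty band. The morphological band and its width are assumptions.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def dice_with_boundary_uncertainty(pred, gt, band=2):
    band_mask = binary_dilation(gt, iterations=band) & ~binary_erosion(gt, iterations=band)
    # Inside the band, adopt the ground truth so boundary-level disagreement
    # does not penalise the score; outside it, the prediction must match.
    adj = np.where(band_mask, gt, pred).astype(bool)
    inter = (adj & gt).sum()
    return 2 * inter / (adj.sum() + gt.sum() + 1e-8)

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True
pred = np.zeros((64, 64), bool); pred[17:49, 16:48] = True  # off by one pixel
print(dice_with_boundary_uncertainty(pred, gt))
```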
Affiliation(s)
- M Antico, R Crawford: School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia
- F Sasazawa: Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- M Dunnhofer: Department of Mathematics, Computer Science and Physics, University of Udine, Udine 33100, Italy
- S M Camps: Faculty of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, the Netherlands; Oncology Solutions Department, Philips Research, 5656 AE Eindhoven, the Netherlands
- A T Jaiprakash, A K Pandey: Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Electrical Engineering, Computer Science, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
- G Carneiro: Australian Institute for Machine Learning, School of Computer Science, The University of Adelaide, Adelaide, SA 5005, Australia
- D Fontanarosa: Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD 4000, Australia
120
Liu Y, Yang G, Hosseiny M, Azadikhah A, Mirak SA, Miao Q, Raman SS, Sung K. Exploring Uncertainty Measures in Bayesian Deep Attentive Neural Networks for Prostate Zonal Segmentation. IEEE Access 2020; 8:151817-151828. PMID: 33564563; PMCID: PMC7869831; DOI: 10.1109/access.2020.3017168.
Abstract
Automatic segmentation of prostatic zones on multiparametric MRI (mpMRI) can improve the diagnostic workflow of prostate cancer. We designed a spatially attentive Bayesian deep learning network for automatic segmentation of the peripheral zone (PZ) and transition zone (TZ) of the prostate with uncertainty estimation. The proposed method was evaluated using internal and external independent testing datasets, and overall uncertainties of the proposed model were calculated at different prostate locations (apex, middle, and base). The study cohort included 351 MRI scans, of which 304 were retrieved from a de-identified, publicly available dataset (PROSTATEx) and 47 were extracted from a large U.S. tertiary referral center (the external testing dataset; ETD). All PZ and TZ contours were drawn by research fellows under the supervision of expert genitourinary radiologists. Within the PROSTATEx dataset, 259 patients were used to develop the model and 45 to validate it (the internal testing dataset; ITD). The model was then tested independently using the ETD only. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). For PZ and TZ segmentation, the proposed method achieved mean DSCs of 0.80±0.05 and 0.89±0.04 on the ITD, and 0.79±0.06 and 0.87±0.07 on the ETD. For both PZ and TZ, there was no significant difference between the ITD and ETD for the proposed method. This DL-based method achieved accurate PZ and TZ segmentation, outperforming state-of-the-art methods (DeepLab V3+, Attention U-Net, R2U-Net, USE-Net and U-Net). We observed that segmentation uncertainty peaked at the junction between the PZ, TZ and anterior fibromuscular stroma (AFS). The overall uncertainties were also highly consistent with actual model performance between PZ and TZ at the three clinically relevant locations of the prostate.
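A generic voxel-wise uncertainty measure for multi-class zonal segmentation is the predictive entropy of Monte Carlo samples; the sketch below shows that standard measure, which may differ from the paper's exact definition.

```python
# Sketch: voxel-wise predictive entropy over MC samples of zonal class
# probabilities (e.g., background/PZ/TZ). A generic uncertainty measure,
# assumed here rather than taken from the paper.
import torch

def predictive_entropy(mc_probs, eps=1e-8):
    # mc_probs: (T, C, H, W) softmax outputs from T stochastic passes.
    p = mc_probs.mean(0)                    # mean class probabilities
    return -(p * (p + eps).log()).sum(0)    # (H, W) entropy map

mc = torch.softmax(torch.randn(20, 3, 64, 64), dim=1)
ent = predictive_entropy(mc)                # high entropy -> uncertain voxels
```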
Affiliation(s)
- Yongkai Liu, Kyunghyun Sung: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA; Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Guang Yang: National Heart and Lung Institute, Imperial College London, South Kensington, London SW7 2AZ, UK
- Melina Hosseiny, Afshin Azadikhah, Sohrab Afshari Mirak, Qi Miao, Steven S. Raman: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA