201. Zhao X, Zhao XM. Deep learning of brain magnetic resonance images: A brief review. Methods 2020; 192:131-140. [PMID: 32931932] [DOI: 10.1016/j.ymeth.2020.09.007]
Abstract
Magnetic resonance imaging (MRI) is one of the most popular techniques in brain science and is important for understanding brain function and neuropsychiatric disorders. However, the processing and analysis of MRI data are non-trivial and present many challenges. Recently, deep learning has shown superior performance over traditional machine learning approaches in image analysis. In this survey, we give a brief review of recent popular deep learning approaches and their applications in brain MRI analysis. Furthermore, popular brain MRI databases and deep learning tools are introduced. The strengths and weaknesses of the different approaches are addressed, and challenges as well as future directions are discussed.
Affiliation(s)
- Xingzhong Zhao: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, China
- Xing-Ming Zhao: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China; Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, China; Research Institute of Intelligent Complex Systems, Fudan University, Shanghai 200433, China
202. Gessert N, Krüger J, Opfer R, Ostwaldt AC, Manogaran P, Kitzler HH, Schippling S, Schlaefer A. Multiple sclerosis lesion activity segmentation with attention-guided two-path CNNs. Comput Med Imaging Graph 2020; 84:101772. [DOI: 10.1016/j.compmedimag.2020.101772]
203. Liu L, Hu X, Zhu L, Fu CW, Qin J, Heng PA. Ψ-Net: Stacking Densely Convolutional LSTMs for Sub-Cortical Brain Structure Segmentation. IEEE Trans Med Imaging 2020; 39:2806-2817. [PMID: 32091996] [DOI: 10.1109/tmi.2020.2975642]
Abstract
Sub-cortical brain structure segmentation is of great importance for diagnosing neuropsychiatric disorders. However, developing an automatic approach to segmenting sub-cortical brain structures remains very challenging due to ambiguous boundaries, complex anatomical structures, and large variance of shapes. This paper presents a novel deep network architecture, namely Ψ-Net, for sub-cortical brain structure segmentation, aiming at selectively aggregating features and boosting information propagation in a deep convolutional neural network (CNN). To achieve this, we first formulate a densely convolutional LSTM module (DC-LSTM) to selectively aggregate the convolutional features with the same spatial resolution at the same stage of a CNN. This helps to promote the discriminativeness of features at each CNN stage. Second, we stack multiple DC-LSTMs from the deepest stage to the shallowest stage to progressively enrich low-level feature maps with high-level context. We employ two benchmark datasets for sub-cortical brain structure segmentation and perform various experiments to evaluate the proposed Ψ-Net. The experimental results show that our network performs favorably against state-of-the-art methods on both benchmark datasets.
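The DC-LSTM idea of aggregating same-resolution convolutional features with a convolutional LSTM can be sketched with a minimal ConvLSTM cell in PyTorch; the layer sizes, names, and the plain cell below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: the gates are computed by one
    convolution over the concatenated input and hidden state, so feature
    aggregation preserves the spatial layout (the kind of building block a
    DC-LSTM-style module applies to same-resolution CNN features)."""

    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        # A single convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Aggregate a sequence of same-resolution feature maps from one CNN stage.
feats = [torch.randn(1, 32, 64, 64) for _ in range(3)]  # e.g. block outputs
cell = ConvLSTMCell(in_ch=32, hidden_ch=32)
h = torch.zeros(1, 32, 64, 64)
c = torch.zeros(1, 32, 64, 64)
for f in feats:
    h, c = cell(f, (h, c))  # h is the selectively aggregated feature map
```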
204. Turečková A, Tureček T, Komínková Oplatková Z, Rodríguez-Sánchez A. Improving CT Image Tumor Segmentation Through Deep Supervision and Attentional Gates. Front Robot AI 2020; 7:106. [PMID: 33501273] [PMCID: PMC7805665] [DOI: 10.3389/frobt.2020.00106]
Abstract
Computer Tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. The segmentation of areas in the CT images provides a valuable aid to physicians and radiologists in order to better provide a patient diagnose. The CT scans of a body torso usually include different neighboring internal body organs. Deep learning has become the state-of-the-art in medical image segmentation. For such techniques, in order to perform a successful segmentation, it is of great importance that the network learns to focus on the organ of interest and surrounding structures and also that the network can detect target regions of different sizes. In this paper, we propose the extension of a popular deep learning methodology, Convolutional Neural Networks (CNN), by including deep supervision and attention gates. Our experimental evaluation shows that the inclusion of attention and deep supervision results in consistent improvement of the tumor prediction accuracy across the different datasets and training sizes while adding minimal computational overhead.
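The attentional gates the paper adds can be illustrated with a minimal additive attention gate of the kind popularized for U-Nets; this is a hedged sketch with made-up channel sizes, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Additive attention gate: a coarse gating signal g highlights which
    regions of the skip-connection features x to pass to the decoder."""

    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(x_ch, inter_ch, 1)
        self.phi = nn.Conv2d(g_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        # Bring g to the spatial size of x, combine, and squash to [0, 1].
        g_up = F.interpolate(self.phi(g), size=x.shape[2:], mode="bilinear",
                             align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(x) + g_up)))
        return x * attn  # gated skip features

x = torch.randn(1, 64, 128, 128)   # encoder skip features
g = torch.randn(1, 128, 64, 64)    # coarser decoder features
gated = AttentionGate(64, 128, 32)(x, g)
```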
Affiliation(s)
- Alžběta Turečková: Artificial Intelligence Laboratory, Faculty of Applied Informatics, Tomas Bata University in Zlin, Zlin, Czechia
- Tomáš Tureček: Artificial Intelligence Laboratory, Faculty of Applied Informatics, Tomas Bata University in Zlin, Zlin, Czechia
- Zuzana Komínková Oplatková: Artificial Intelligence Laboratory, Faculty of Applied Informatics, Tomas Bata University in Zlin, Zlin, Czechia
- Antonio Rodríguez-Sánchez: Intelligent and Interactive Systems, Department of Computer Science, University of Innsbruck, Innsbruck, Austria
205. Martins SB, Telea AC, Falcão AX. Investigating the impact of supervoxel segmentation for unsupervised abnormal brain asymmetry detection. Comput Med Imaging Graph 2020; 85:101770. [PMID: 32854021] [DOI: 10.1016/j.compmedimag.2020.101770]
Abstract
Several brain disorders are associated with abnormal brain asymmetries (asymmetric anomalies), and several computer-based methods aim to detect such anomalies automatically. Recent advances in this area use automatic unsupervised techniques that extract pairs of symmetric supervoxels in the hemispheres, model normal brain asymmetries for each pair from healthy subjects, and treat outliers as anomalies. Yet, there is no deep understanding of the impact of the supervoxel segmentation quality on abnormal asymmetry detection, especially for small anomalies, nor of the added value of using a specialized model for each supervoxel pair instead of a single global appearance model. We aim to answer these questions through a detailed evaluation of different scenarios for supervoxel segmentation and classification for detecting abnormal brain asymmetries. Experimental results on 3D MR-T1 brain images of stroke patients confirm the importance of high-quality supervoxels that fit anomalies, and of using a specific classifier for each supervoxel pair. Next, we present a refinement of the detection method that reduces the number of false-positive supervoxels, thereby making the detection method easier to use for visual inspection and analysis of the found anomalies.
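The contrast between a single global appearance model and one specialized model per supervoxel pair can be sketched with per-pair one-class classifiers trained only on healthy subjects; the feature vectors, array shapes, and the OneClassSVM choice below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical setup: for each symmetric supervoxel pair p we have one
# asymmetry feature vector per healthy training subject (e.g., left/right
# intensity-histogram differences). Shapes are made up for illustration.
n_pairs, n_subjects, n_feats = 50, 100, 16
train = np.random.randn(n_pairs, n_subjects, n_feats)  # healthy subjects only
test = np.random.randn(n_pairs, n_feats)               # one new subject

# One specialized one-class model per supervoxel pair, as opposed to a
# single global appearance model fit over all pairs at once.
models = [OneClassSVM(nu=0.05, gamma="scale").fit(train[p])
          for p in range(n_pairs)]

# A pair is flagged as an abnormal asymmetry when its own model treats the
# subject's feature vector as an outlier (predict returns -1 for outliers).
flags = np.array([models[p].predict(test[p:p + 1])[0] == -1
                  for p in range(n_pairs)])
print("supervoxel pairs flagged as abnormal:", np.flatnonzero(flags))
```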
Affiliation(s)
- Samuel B Martins: Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil; Bernoulli Institute, University of Groningen, The Netherlands; Federal Institute of São Paulo, Campinas, Brazil
- Alexandru C Telea: Department of Information and Computing Sciences, Utrecht University, The Netherlands
- Alexandre X Falcão: Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil
206.
Abstract
In this paper, we map the state of the art of recent medical simulators that provide evaluation and guidance for surgical procedures. The systems are reviewed and compared from the viewpoint of the technology used, force feedback, learning evaluation, didactic and visual aid, guidance, data collection and storage, and type of solution (commercial or non-commercial). The works were assessed to identify whether (1) current applications can provide assistance and track performance in training, and (2) virtual environments are more suitable for practicing than physical applications. Automatic analysis of the papers was performed to minimize subjective bias. It was found that some works limit themselves to recording session data for internal evaluation, while others assess it and provide immediate user feedback. However, few works currently implement guidance, aid during sessions, and assessment. Current trends suggest that automating the evaluation process could reduce the workload of experts and let them focus on improving the curriculum covered in medical education. Lastly, this paper also draws several conclusions, observations per area, and suggestions for future work.
207. Zhao X, Huang M, Li L, Qi XS, Tan S. Multi-to-binary network (MTBNet) for automated multi-organ segmentation on multi-sequence abdominal MRI images. Phys Med Biol 2020; 65:165013. [DOI: 10.1088/1361-6560/ab9453]
208. CAB U-Net: An end-to-end category attention boosting algorithm for segmentation. Comput Med Imaging Graph 2020; 84:101764. [PMID: 32721853] [DOI: 10.1016/j.compmedimag.2020.101764]
Abstract
With the development of machine learning and artificial intelligence, many convolutional neural network (CNN)-based segmentation methods have been proposed for 3D cardiac segmentation. In this paper, we propose the category attention boosting (CAB) module, which combines the deep network computation graph with the boosting method. On the one hand, we add the attention mechanism into the gradient boosting process, which enhances the information of the coarse segmentation without high computational cost. On the other hand, we introduce the CAB module into the 3D U-Net segmentation network and construct a new multi-scale boosting model, CAB U-Net, which strengthens the gradient flow in the network and makes full use of low-resolution feature information. Because end-to-end networks can adaptively adjust their internal parameters, CAB U-Net can make full use of the complementary effects among different base learners. Extensive experiments on public datasets show that our approach achieves superior performance over state-of-the-art methods.
209. Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603] [DOI: 10.1002/mp.14320]
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
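The review's recommendation that the Dice coefficient be paired with at least one distance metric is straightforward to operationalize; below is a minimal NumPy/SciPy sketch of Dice plus the average symmetric surface distance (the masks and voxel spacing are made-up examples).

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Volumetric overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def avg_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance in physical units."""
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return (da[sa].sum() + db[sb].sum()) / (sa.sum() + sb.sum())

gt = np.zeros((32, 32, 32), bool); gt[8:20, 8:20, 8:20] = True
pred = np.zeros_like(gt); pred[10:22, 8:20, 8:20] = True
print(f"DSC = {dice(pred, gt):.3f}, ASSD = {avg_surface_distance(pred, gt):.2f} mm")
```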
Affiliation(s)
- Tomaž Vrtovec: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan: Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
210. Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. [PMID: 32848636] [PMCID: PMC7399060] [DOI: 10.3389/fnana.2020.00038]
Abstract
Digital reconstruction, or tracing, of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Conducting deep convolutional neural network (CNN)-based segmentation prior to neuron tracing facilitates an approach to solving this problem via separation of weak neurites from a noisy background. However, deep learning-based methods need large amounts of manual annotation, which is labor-intensive and limits the algorithms' generalization to different datasets. In this study, we present a weakly supervised learning method for a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction. We construct the initial pseudo-labels (without manual segmentation) of the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed via iterative training of the CNN model for improved prediction and refining of the pseudo-labels to update training samples. The pseudo-labels are iteratively modified via mining and addition of weak neurites from the CNN-predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets, as well as fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method demonstrates effective detection of weak neurites from noisy images and achieves results similar to those of the CNN model with manual annotations. The tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved to be superior to several novel tracing methods on original images. The results obtained on various large-scale datasets demonstrate the generalization and high precision achieved by the proposed method for neuron reconstruction.
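The iterative pseudo-label loop described above can be summarized in a short skeleton; every helper below is a simplified stand-in for the actual tracer, 3D residual CNN, and tubularity/continuity checks, not the authors' code.

```python
import numpy as np

def initial_tracing(image):
    """Stand-in for an existing automatic tracer producing pseudo-labels."""
    return image > np.percentile(image, 99)

def train_and_predict(image, labels):
    """Stand-in for training the CNN on the current pseudo-labels and
    predicting a foreground probability map; here just a crude blend."""
    return 0.5 * labels + 0.5 * (image > np.percentile(image, 98))

def refine(prob, labels, thresh=0.6):
    """Mine weak neurites from the probability map and add them to the
    pseudo-labels (tubularity and continuity checks omitted)."""
    return labels | (prob > thresh)

image = np.random.rand(64, 64, 64)          # a noisy 3D image volume
labels = initial_tracing(image)             # initial pseudo-labels
for it in range(3):                         # iterative training rounds
    prob = train_and_predict(image, labels)
    labels = refine(prob, labels)           # updated training samples
```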
Affiliation(s)
- Qing Huang: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shijie Liu: School of Mathematics and Physics, China University of Geosciences, Wuhan, China
- Cheng Xu: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingting Cao: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yongchao Xu: School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Xiaojun Wang: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Gong Rao: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Anan Li: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan: Wuhan National Laboratory for Optoelectronics - Huazhong University of Science and Technology, Britton Chance Center for Biomedical Photonics, Wuhan, China; Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
211. Improved Classification of White Blood Cells with the Generative Adversarial Network and Deep Convolutional Neural Network. Comput Intell Neurosci 2020; 2020:6490479. [PMID: 32695152] [PMCID: PMC7368188] [DOI: 10.1155/2020/6490479]
Abstract
White blood cells (leukocytes) are a very important component of the blood that forms the immune system, which is responsible for fighting foreign elements. The five types of white blood cells are neutrophils, eosinophils, lymphocytes, monocytes, and basophils, where each type constitutes a different proportion and performs specific functions. Being able to classify, and therefore count, these different constituents is critical for assessing the health of patients and infection risks. Generally, laboratory experiments are used for determining the type of a white blood cell. The staining process and manual evaluation of acquired images under the microscope are tedious and subject to human error. Moreover, a major challenge is the unavailability of training data that cover the morphological variations of white blood cells, so that trained classifiers can generalize well. As such, this paper investigates image transformation operations and generative adversarial networks (GANs) for data augmentation, and state-of-the-art deep neural networks (i.e., VGG-16, ResNet, and DenseNet) for the classification of white blood cells into the five types. Furthermore, we explore initializing the DNNs' weights randomly or using weights pretrained on the CIFAR-100 dataset. In contrast to other works that require advanced image preprocessing and manual feature extraction before classification, our method works directly with the acquired images. The results of extensive experiments show that the proposed method can successfully classify white blood cells. The best DNN model, DenseNet-169, yields a validation accuracy of 98.8%. In particular, we find that the proposed approach outperforms other methods that rely on sophisticated image processing and manual feature engineering.
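A minimal sketch of fine-tuning DenseNet-169 for the five leukocyte classes follows; note that torchvision ships ImageNet rather than CIFAR-100 weights, so the pretraining source here is an assumption that differs from the paper's setup, and the batch below is random stand-in data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained DenseNet-169 and replace its classifier head with a
# 5-way layer (neutrophil, eosinophil, lymphocyte, monocyte, basophil).
model = models.densenet169(weights=models.DenseNet169_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)   # a batch of (augmented) cell images
targets = torch.randint(0, 5, (8,))    # class indices
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```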
212. Liu L, Kurgan L, Wu FX, Wang J. Attention convolutional neural network for accurate segmentation and quantification of lesions in ischemic stroke disease. Med Image Anal 2020; 65:101791. [PMID: 32712525] [DOI: 10.1016/j.media.2020.101791]
Abstract
Ischemic stroke lesions and white matter hyperintensity (WMH) lesions appear as regions of abnormal signal intensity on magnetic resonance imaging (MRI) sequences. Ischemic stroke is a frequent cause of death and disability, while WMH is a risk factor for stroke. Accurate segmentation and quantification of ischemic stroke and WMH lesions are important for diagnosis and prognosis. However, radiologists have a difficult time distinguishing these two types of similar lesions. A novel deep residual attention convolutional neural network (DRANet) is proposed to accurately and simultaneously segment and quantify ischemic stroke and WMH lesions in MRI images. DRANet inherits the advantages of the U-Net design and applies a novel attention module that extracts high-quality features from the input images. Moreover, the Dice loss function is used to train DRANet to address data imbalance in the training set. DRANet is trained and evaluated on 742 2D MRI images produced from the sub-acute ischemic stroke lesion segmentation (SISS) challenge. Empirical tests demonstrate that DRANet outperforms several other state-of-the-art segmentation methods. It accurately segments and quantifies both ischemic stroke and WMH lesions. Ablation experiments reveal that the attention modules improve the predictive performance of DRANet.
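The Dice loss used to counter the lesion/background imbalance has a standard soft formulation; the PyTorch sketch below is the generic version, not necessarily the exact variant used to train DRANet.

```python
import torch

def dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for a binary lesion mask; far less sensitive to
    foreground/background imbalance than plain cross-entropy."""
    prob = torch.sigmoid(logits).flatten(1)
    target = target.flatten(1).float()
    inter = (prob * target).sum(dim=1)
    denom = prob.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

logits = torch.randn(4, 1, 128, 128)           # raw network outputs
target = (torch.rand(4, 1, 128, 128) > 0.95)   # sparse lesion pixels
print(dice_loss(logits, target))
```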
Affiliation(s)
- Liangliang Liu: School of Computer Science and Engineering, Central South University, Changsha, 410083, P.R. China; Hunan Provincial Key Lab on Bioinformatics, Central South University, Changsha, 410083, P.R. China; Department of Network Center, Pingdingshan University, Pingdingshan, 467000, P.R. China
- Lukasz Kurgan: Department of Computer Science, Virginia Commonwealth University, Richmond, VA, 23284, USA
- Fang-Xiang Wu: Department of Mechanical Engineering and Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada
- Jianxin Wang: School of Computer Science and Engineering, Central South University, Changsha, 410083, P.R. China; Hunan Provincial Key Lab on Bioinformatics, Central South University, Changsha, 410083, P.R. China
213. Wang W, Feng H, Bu Q, Cui L, Xie Y, Zhang A, Feng J, Zhu Z, Chen Z. MDU-Net: A Convolutional Network for Clavicle and Rib Segmentation from a Chest Radiograph. J Healthc Eng 2020; 2020:2785464. [PMID: 32724504] [PMCID: PMC7382745] [DOI: 10.1155/2020/2785464]
Abstract
Automatic bone segmentation from a chest radiograph is an important and challenging task in medical image analysis. However, a chest radiograph contains numerous artifacts and tissue shadows, such as the trachea, blood vessels, and lung veins, which limit the accuracy of traditional segmentation methods such as thresholding and contour-related techniques. Deep learning has recently achieved excellent segmentation of some organs, such as the pancreas and the hippocampus. However, the insufficiency of annotated datasets impedes clavicle and rib segmentation from chest X-rays. We have constructed a dataset of chest X-rays in which each sample comprises a raw chest radiograph and four annotated images showing the clavicles, anterior ribs, posterior ribs, and all bones (the complete set of ribs and clavicles). On the basis of this dataset, a multitask dense connection U-Net (MDU-Net) is proposed to address the challenge of bone segmentation from a chest radiograph. We first combine the U-Net multiscale feature fusion method, DenseNet dense connections, and a multitasking mechanism to construct the proposed network, referred to as MDU-Net. We then present a mask encoding mechanism that can force the network to learn the background features. Transfer learning is ultimately introduced to help the network extract sufficient features. We evaluate the proposed network by fourfold cross-validation on 88 chest radiography images. The proposed method achieves average Dice similarity coefficient (DSC) values of 93.78%, 80.95%, 89.06%, and 88.38% for clavicle segmentation, anterior rib segmentation, posterior rib segmentation, and segmentation of all bones, respectively.
Affiliation(s)
- Wenjing Wang: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Hongwei Feng: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Qirong Bu: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Lei Cui: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Yilin Xie: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Aoqi Zhang: Department of Information Science and Technology, Northwest University, Xi'an 710127, China
- Jun Feng: Department of Information Science and Technology, Northwest University, Xi'an 710127, China; State-Province Joint Engineering and Research Center of Advanced Networking and Intelligent Information Services, School of Information Science and Technology, Northwest University, Xi'an 710127, Shaanxi, China
- Zhaohui Zhu: Chest Hospital of Xinjiang Uyghur Autonomous Region of the PRC, Xinjiang Uygur Autonomous Region, Urumqi 830049, China
- Zhongyuanlong Chen: Chest Hospital of Xinjiang Uyghur Autonomous Region of the PRC, Xinjiang Uygur Autonomous Region, Urumqi 830049, China
214. Fatima A, Shahid AR, Raza B, Madni TM, Janjua UI. State-of-the-Art Traditional to the Machine- and Deep-Learning-Based Skull Stripping Techniques, Models, and Algorithms. J Digit Imaging 2020; 33:1443-1464. [PMID: 32666364] [DOI: 10.1007/s10278-020-00367-5]
Abstract
Several neuroimaging processing applications consider skull stripping a crucial pre-processing step. Due to the complex anatomical brain structure and intensity variations in brain magnetic resonance imaging (MRI), appropriate skull stripping is an important part of the pipeline. Skull stripping deals with the removal of the skull region for clinical analysis in brain segmentation tasks, and its accuracy and efficiency are crucial for diagnostic purposes. It requires accurate and detailed methods for differentiating brain regions from skull regions and is considered a challenging task. This paper focuses on the transition from conventional to machine- and deep-learning-based automated skull stripping methods for brain MRI images. It is observed in this study that deep learning approaches have outperformed conventional and machine learning techniques in many ways, but they have their limitations. The paper also includes a comparative analysis of the current state-of-the-art skull stripping methods, a critical discussion of some challenges, a model of quantifying parameters, and future work directions.
Affiliation(s)
- Anam Fatima: Medical Imaging and Diagnostics (MID) Lab, National Centre of Artificial Intelligence (NCAI), Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan
- Ahmad Raza Shahid: Medical Imaging and Diagnostics (MID) Lab, National Centre of Artificial Intelligence (NCAI), Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan
- Basit Raza: Medical Imaging and Diagnostics (MID) Lab, National Centre of Artificial Intelligence (NCAI), Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan
- Tahir Mustafa Madni: Medical Imaging and Diagnostics (MID) Lab, National Centre of Artificial Intelligence (NCAI), Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan
- Uzair Iqbal Janjua: Medical Imaging and Diagnostics (MID) Lab, National Centre of Artificial Intelligence (NCAI), Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, 45550, Pakistan
215. Dou Q, Liu Q, Heng PA, Glocker B. Unpaired Multi-Modal Segmentation via Knowledge Distillation. IEEE Trans Med Imaging 2020; 39:2415-2425. [PMID: 32012001] [DOI: 10.1109/tmi.2019.2963882]
Abstract
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., 2D dilated network and 3D U-net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
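The core design, sharing all convolutional kernels across CT and MRI while keeping modality-specific normalization statistics, can be sketched as below. The KL term shown is the generic softened-distillation loss applied directly to two prediction maps for illustration; the paper's exact pairing and aggregation of prediction distributions, as well as layer sizes, normalization choice, and temperature, are assumptions here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedConvBlock(nn.Module):
    """Convolution kernels are shared across modalities; only the
    normalization layers are modality-specific, so each modality keeps
    its own feature statistics."""

    def __init__(self, in_ch, out_ch, n_modalities=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # shared weights
        self.norms = nn.ModuleList(
            [nn.BatchNorm2d(out_ch) for _ in range(n_modalities)])

    def forward(self, x, modality):
        return F.relu(self.norms[modality](self.conv(x)))

def kd_loss(logits_a, logits_b, T=2.0):
    """KL divergence between softened per-pixel class distributions,
    the knowledge-distillation-inspired constraint."""
    return F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_b / T, dim=1),
                    reduction="batchmean") * T * T

block = SharedConvBlock(1, 16)
head = nn.Conv2d(16, 5, 1)                    # shared segmentation head
ct, mr = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
loss_kd = kd_loss(head(block(ct, modality=0)), head(block(mr, modality=1)))
```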
216. Zhuge Y, Ning H, Mathen P, Cheng JY, Krauze AV, Camphausen K, Miller RW. Automated glioma grading on conventional MRI images using deep convolutional neural networks. Med Phys 2020; 47:3044-3053. [PMID: 32277478] [PMCID: PMC8494136] [DOI: 10.1002/mp.14168]
Abstract
PURPOSE: Gliomas are the most common primary tumors of the brain and are classified into grades I-IV of the World Health Organization (WHO) based on their histological appearance, assessed invasively. Glioma grading plays an important role in determining the treatment plan and predicting prognosis. In this study we propose two novel methods for automatically and non-invasively distinguishing low-grade (grades II and III) glioma (LGG) and high-grade (grade IV) glioma (HGG) on conventional MRI images by using deep convolutional neural networks (CNNs).
METHODS: All MRI images were first preprocessed by rigid image registration and intensity inhomogeneity correction. Both proposed methods consist of two steps: (a) three-dimensional (3D) brain tumor segmentation based on a modification of the popular U-Net model; (b) tumor classification on the segmented brain tumor. In the first method, the slice with the largest tumor area is determined and the state-of-the-art Mask R-CNN model is employed for tumor grading. To improve the performance of the grading model, two-dimensional (2D) data augmentation was implemented to increase both the amount and the diversity of the training images. In the second method, denoted 3DConvNet, a 3D volumetric CNN is applied directly to bounding image regions of the segmented tumor for classification, which can fully leverage the 3D spatial contextual information of volumetric image data.
RESULTS: The proposed schemes were evaluated on The Cancer Imaging Archive (TCIA) low-grade glioma (LGG) data and the Multimodal Brain Tumor Image Segmentation (BraTS) Benchmark 2018 training datasets with fivefold cross-validation. All data were divided into training, validation, and test sets. Based on biopsy-proven ground truth, sensitivity, specificity, and accuracy were measured on the test sets. The results are 0.935 (sensitivity), 0.972 (specificity), and 0.963 (accuracy) for the 2D Mask R-CNN based method, and 0.947 (sensitivity), 0.968 (specificity), and 0.971 (accuracy) for the 3DConvNet method. In regard to efficiency, for 3D brain tumor segmentation the program takes around ten and a half hours to train for 300 epochs on the BraTS 2018 dataset and only around 50 s to test a typical image with a size of 160 × 216 × 176. For 2D Mask R-CNN based tumor grading, the program takes around 4 h to train for around 60 000 iterations and around 1 s to test a 2D slice image with a size of 128 × 128. For 3DConvNet based tumor grading, the program takes around 2 h to train for 10 000 iterations and 0.25 s to test a 3D cropped image with a size of 64 × 64 × 64, using a DELL PRECISION Tower T7910 with two NVIDIA Titan Xp GPUs.
CONCLUSIONS: Two effective glioma grading methods on conventional MRI images using deep convolutional neural networks have been developed. Our methods are fully automated, without manual specification of regions of interest or selection of slices for model training, which are common in traditional machine learning based brain tumor grading methods. This methodology may play a crucial role in selecting effective treatment options and predicting survival without the need for surgical biopsy.
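A minimal 3D volumetric classifier in the spirit of 3DConvNet follows: it takes the 64 × 64 × 64 crop around the segmented tumor and emits LGG-vs-HGG logits. The depths and widths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Small 3D CNN over the bounding box of the segmented tumor.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
    nn.MaxPool3d(2),                                   # 64 -> 32
    nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
    nn.MaxPool3d(2),                                   # 32 -> 16
    nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(64, 2),                                  # LGG vs. HGG logits
)

crop = torch.randn(4, 1, 64, 64, 64)  # crops around segmented tumors
logits = model(crop)                   # shape (4, 2)
```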
Affiliation(s)
- Ying Zhuge: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Holly Ning: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Peter Mathen: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Jason Y. Cheng: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Andra V. Krauze: Division of Radiation Oncology and Developmental Radiotherapeutics, BC Cancer, Vancouver, BC, Canada
- Kevin Camphausen: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Robert W. Miller: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
217. Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning. Med Image Anal 2020; 63:101695. [DOI: 10.1016/j.media.2020.101695]
218. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
219. Attention deep residual networks for MR image analysis. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05083-3]
220. Yamanakkanavar N, Choi JY, Lee B. MRI Segmentation and Classification of Human Brain Using Deep Learning for Diagnosis of Alzheimer's Disease: A Survey. Sensors (Basel) 2020; 20:E3243. [PMID: 32517304] [PMCID: PMC7313699] [DOI: 10.3390/s20113243]
Abstract
Magnetic resonance imaging (MRI) has been used to analyze many neurological diseases, delineate pathological regions, and research the anatomical structure of the brain. It is important to identify patients with Alzheimer's disease (AD) early so that preventative measures can be taken. A detailed analysis of the tissue structures from segmented MRI leads to a more accurate classification of specific brain disorders. Several segmentation methods of varying complexity have been proposed for diagnosing AD. Segmentation of the brain structure and classification of AD using deep learning approaches has gained attention as it can provide effective results over a large set of data. Hence, deep learning methods are now preferred over state-of-the-art machine learning methods. We aim to provide an outline of current deep learning-based segmentation approaches for the quantitative analysis of brain MRI for the diagnosis of AD. Here, we report how convolutional neural network architectures are used to analyze the anatomical brain structure and diagnose AD, discuss how brain MRI segmentation improves AD classification, describe the state-of-the-art approaches, and summarize their results using publicly available datasets. Finally, we provide insight into current issues and discuss possible future research directions for building a computer-aided diagnostic system for AD.
Affiliation(s)
- Nagaraj Yamanakkanavar: Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
- Jae Young Choi: Division of Computer & Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin 17035, Korea
- Bumshik Lee: Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
221. Porter E, Fuentes P, Siddiqui Z, Thompson A, Levitin R, Solis D, Myziuk N, Guerrero T. Hippocampus segmentation on noncontrast CT using deep learning. Med Phys 2020; 47:2950-2961. [DOI: 10.1002/mp.14098]
Affiliation(s)
- Evan Porter: Department of Medical Physics, Wayne State University, Detroit, MI, USA; Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- Patricia Fuentes: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Oakland University William Beaumont School of Medicine, Oakland University, Rochester, MI, USA
- Zaid Siddiqui: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- Andrew Thompson: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- Ronald Levitin: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- David Solis: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- Nick Myziuk: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA
- Thomas Guerrero: Beaumont Artificial Intelligence Research Laboratory, Beaumont Health Systems, Royal Oak, MI, USA; Department of Radiation Oncology, Beaumont Health Systems, Royal Oak, MI, USA; Oakland University William Beaumont School of Medicine, Oakland University, Rochester, MI, USA
222. NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw 2020; 126:153-162. [DOI: 10.1016/j.neunet.2020.02.018]
223. AdaEn-Net: An ensemble of adaptive 2D–3D Fully Convolutional Networks for medical image segmentation. Neural Netw 2020; 126:76-94. [DOI: 10.1016/j.neunet.2020.03.007]
224. Yang Z, Zhuang X, Mishra V, Sreenivasan K, Cordes D. CAST: A multi-scale convolutional neural network based automated hippocampal subfield segmentation toolbox. Neuroimage 2020; 218:116947. [PMID: 32474081] [DOI: 10.1016/j.neuroimage.2020.116947]
Abstract
In this study, we developed a multi-scale Convolutional neural network based Automated hippocampal subfield Segmentation Toolbox (CAST) for automated segmentation of hippocampal subfields. Although training CAST required approximately three days on a single workstation with a high-quality GPU card, CAST can segment a new subject in less than 1 min even with GPU acceleration disabled; this method is thus more time-efficient than current automated methods and manual segmentation. The toolbox is highly flexible with either a single modality or multiple modalities and can easily be set up to be trained with a researcher's own data. A 3D multi-scale deep convolutional neural network is the key algorithm used in the toolbox. The main merit of multi-scale images is the capability to capture more global structural information from down-sampled images without dramatically increasing memory and computational burden, while the original images capture more local information to refine the boundary between subfields. Residual learning is applied to alleviate the vanishing gradient problem and improve performance with a deeper network. We applied CAST with the same settings on two datasets, one 7T dataset (the UMC dataset) with only the T2 image and one 3T dataset (the MNI dataset) with both T1 and T2 images available. The segmentation accuracy of both CAST and the state-of-the-art automated method ASHS, in terms of the Dice similarity coefficient (DSC), was comparable. CAST significantly improved the reliability of segmenting small subfields, such as CA2, CA3, and the entorhinal cortex (ERC), in terms of the intraclass correlation coefficient (ICC). Both ASHS and manual segmentation process some subfields (e.g., CA2 and ERC) with high DSC values but low ICC values, which increases the difficulty of judging segmentation quality. CAST produces very consistent DSC and ICC values, with a maximal discrepancy of 0.01 (DSC-ICC) across all subfields. The pre-trained model, source code, and settings for the CAST toolbox are publicly available.
Affiliation(s)
- Zhengshi Yang: Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, 89106, USA
- Xiaowei Zhuang: Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, 89106, USA
- Virendra Mishra: Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, 89106, USA
- Karthik Sreenivasan: Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, 89106, USA
- Dietmar Cordes: Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, 89106, USA; Department of Psychology and Neuroscience, University of Colorado, Boulder, CO, 80309, USA
225.
Abstract
Automatic segmentation of brain tumors from magnetic resonance imaging (MRI) is a challenging task due to the uneven, irregular and unstructured size and shape of tumors. Recently, brain tumor segmentation methods based on the symmetric U-Net architecture have achieved favorable performance. Meanwhile, the effectiveness of enhancing local responses for feature extraction and restoration has also been shown in recent works, which may encourage better performance on the brain tumor segmentation problem. Inspired by this, we introduce the attention mechanism into the existing U-Net architecture to explore the effects of locally important responses on this task. More specifically, we propose an end-to-end 2D brain tumor segmentation network, i.e., attention residual U-Net (AResU-Net), which simultaneously embeds the attention mechanism and residual units into U-Net to further improve brain tumor segmentation. AResU-Net adds a series of attention units between corresponding down-sampling and up-sampling processes, and it adaptively rescales features to effectively enhance local responses of the down-sampling residual features utilized for the feature recovery of the following up-sampling process. We extensively evaluate AResU-Net on two MRI brain tumor segmentation benchmarks, the BraTS 2017 and BraTS 2018 datasets. Experimental results illustrate that the proposed AResU-Net outperforms its baselines and achieves performance comparable to typical brain tumor segmentation methods.
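One plausible reading of "adaptively rescales features" is a squeeze-and-excitation-style channel attention inside a residual block; the sketch below is an assumption-laden illustration in PyTorch, not the authors' AResU-Net definition.

```python
import torch
import torch.nn as nn

class AttentionResidualBlock(nn.Module):
    """Residual block whose output channels are adaptively rescaled by a
    squeeze-and-excitation-style attention unit before the skip addition.
    Widths, reduction ratio, and placement are illustrative."""

    def __init__(self, ch, reduction=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.body(x)
        y = y * self.attn(y)          # per-channel rescaling
        return self.relu(x + y)       # residual connection

x = torch.randn(1, 32, 64, 64)        # down-sampling features in the U-Net
out = AttentionResidualBlock(32)(x)
```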
226. Rashed EA, Gomez-Tames J, Hirata A. End-to-end semantic segmentation of personalized deep brain structures for non-invasive brain stimulation. Neural Netw 2020; 125:233-244. [DOI: 10.1016/j.neunet.2020.02.006]
227. CEREBRUM: a fast and fully-volumetric Convolutional Encoder-decodeR for weakly-supervised sEgmentation of BRain strUctures from out-of-the-scanner MRI. Med Image Anal 2020; 62:101688. [DOI: 10.1016/j.media.2020.101688]
228. Detecting Alzheimer's disease Based on 4D fMRI: An exploration under deep learning framework. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.01.053]
229. Liu J, Liu H, Tang Z, Gui W, Ma T, Gong S, Gao Q, Xie Y, Niyoyita JP. IOUC-3DSFCNN: Segmentation of Brain Tumors via IOU Constraint 3D Symmetric Full Convolution Network with Multimodal Auto-context. Sci Rep 2020; 10:6256. [PMID: 32277141] [PMCID: PMC7148375] [DOI: 10.1038/s41598-020-63242-x]
Abstract
Accurate segmentation of brain tumors from magnetic resonance (MR) images plays a pivotal role in assisting diagnoses, treatments and postoperative evaluations. However, due to structural complexities, e.g., fuzzy tumor boundaries with irregular shapes, accurate 3D brain tumor delineation is challenging. In this paper, an intersection over union (IOU) constraint 3D symmetric full convolutional neural network (IOUC-3DSFCNN) model fused with multimodal auto-context is proposed for 3D brain tumor segmentation. IOUC-3DSFCNN incorporates 3D residual groups into the classic 3D U-Net to further deepen the network structure and obtain more abstract voxel features under a five-layer cohesion architecture that ensures model stability. The IOU constraint is used to address the issue of extremely unbalanced tumor foreground and background regions in MR images. In addition, to obtain more comprehensive and stable 3D brain tumor profiles, multimodal auto-context information is fused into the IOUC-3DSFCNN model to achieve end-to-end 3D brain tumor profiles. Extensive confirmatory and comparative experiments conducted on the benchmark BraTS 2017 dataset demonstrate that the proposed segmentation model is superior to classic 3D U-Net-based and other state-of-the-art segmentation models, and can achieve accurate 3D tumor profiles on multimodal MRI volumes even with blurred tumor boundaries and strong noise.
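The IOU constraint can be illustrated with the standard differentiable soft-IoU (Jaccard) loss; this generic sketch is not necessarily the authors' exact formulation.

```python
import torch

def soft_iou_loss(logits, target, eps=1e-6):
    """Differentiable IoU (Jaccard) loss; directly penalizing the overlap
    ratio counteracts the extreme foreground/background imbalance of
    brain-tumor volumes."""
    prob = torch.sigmoid(logits).flatten(1)
    target = target.flatten(1).float()
    inter = (prob * target).sum(dim=1)
    union = prob.sum(dim=1) + target.sum(dim=1) - inter
    return 1.0 - ((inter + eps) / (union + eps)).mean()

logits = torch.randn(2, 1, 32, 32, 32)           # 3D network outputs
target = (torch.rand(2, 1, 32, 32, 32) > 0.99)   # sparse tumor voxels
print(soft_iou_loss(logits, target))
```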
Affiliation(s)
- Jinping Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan, 410081, China
- Hui Liu: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan, 410081, China
- Zhaohui Tang: School of Automation, Central South University, Changsha, Hunan, 410083, China
- Weihua Gui: School of Automation, Central South University, Changsha, Hunan, 410083, China
- Tianyu Ma: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan, 410081, China
- Subo Gong: Department of Geriatrics, The Second Xiangya Hospital of Central South University, Changsha, 410011, China
- Quanquan Gao: Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha, Hunan, 410081, China
- Yongfang Xie: School of Automation, Central South University, Changsha, Hunan, 410083, China
- Jean Paul Niyoyita: College of Science and Technology, University of Rwanda, Kigali, 3286, Rwanda
230. Sun L, Ma W, Ding X, Huang Y, Liang D, Paisley J. A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI. IEEE Trans Med Imaging 2020; 39:898-909. [PMID: 31449009] [DOI: 10.1109/tmi.2019.2937271]
Abstract
The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment and tracking the progression of different neurologic diseases. Medical image data are volumetric and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially-weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI, and extend it using multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. This unpublished model ranked first on the leaderboard of the MRBrainS13 Challenge.
231. Sharma A, Hamarneh G. Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network. IEEE Trans Med Imaging 2020; 39:1170-1183. [PMID: 31603773] [DOI: 10.1109/tmi.2019.2945521]
Abstract
Magnetic resonance imaging (MRI) is being increasingly utilized to assess, diagnose, and plan treatment for a variety of diseases. The ability to visualize tissue in varied contrasts in the form of MR pulse sequences in a single scan provides valuable insights to physicians, as well as enabling automated systems performing downstream analysis. However, many issues like prohibitive scan time, image corruption, different acquisition protocols, or allergies to certain contrast materials may hinder the process of acquiring multiple sequences for a patient. This poses challenges to both physicians and automated systems since complementary information provided by the missing sequences is lost. In this paper, we propose a variant of generative adversarial network (GAN) capable of leveraging redundant information contained within multiple available sequences in order to generate one or more missing sequences for a patient scan. The proposed network is designed as a multi-input, multi-output network which combines information from all the available pulse sequences and synthesizes the missing ones in a single forward pass. We demonstrate and validate our method on two brain MRI datasets each with four sequences, and show the applicability of the proposed method in simultaneously synthesizing all missing sequences in any possible scenario where either one, two, or three of the four sequences may be missing. We compare our approach with competing unimodal and multi-modal methods, and show that we outperform both quantitatively and qualitatively.
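A multi-input, multi-output synthesis network of the kind described can be organized as one encoder per pulse sequence, a fusion step that is invariant to which inputs are present, and one decoder per sequence. The toy sketch below (all names and layer choices are ours; the real model is a GAN with far deeper encoders and decoders) shows only that structural pattern:

    import torch
    import torch.nn as nn

    class MultiSeqSynthesizer(nn.Module):
        # Toy multi-input, multi-output generator: one encoder per pulse
        # sequence, fusion by averaging the codes of whichever inputs are
        # available, one decoder per sequence to be synthesized.
        def __init__(self, n_seq=4, feat=16):
            super().__init__()
            self.encoders = nn.ModuleList(nn.Conv2d(1, feat, 3, padding=1) for _ in range(n_seq))
            self.decoders = nn.ModuleList(nn.Conv2d(feat, 1, 3, padding=1) for _ in range(n_seq))

        def forward(self, images):  # images: {sequence index -> (B, 1, H, W) tensor}
            codes = [self.encoders[i](img) for i, img in images.items()]
            fused = torch.stack(codes).mean(dim=0)  # invariant to which subset is present
            missing = [i for i in range(len(self.decoders)) if i not in images]
            return {i: self.decoders[i](fused) for i in missing}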
Collapse
|
232
|
Thyreau B, Taki Y. Learning a cortical parcellation of the brain robust to the MRI segmentation with convolutional neural networks. Med Image Anal 2020; 61:101639. [DOI: 10.1016/j.media.2020.101639] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 12/27/2019] [Accepted: 01/09/2020] [Indexed: 10/25/2022]
|
233
|
Wu C, Qiao Z, Zhang N, Li X, Fan J, Song H, Ai D, Yang J, Huang Y. Phase unwrapping based on a residual en-decoder network for phase images in Fourier domain Doppler optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2020; 11:1760-1771. [PMID: 32341846 PMCID: PMC7173896 DOI: 10.1364/boe.386101] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Revised: 02/19/2020] [Accepted: 02/27/2020] [Indexed: 06/01/2023]
Abstract
To solve the phase unwrapping problem for phase images in Fourier domain Doppler optical coherence tomography (DOCT), we propose a deep learning-based residual en-decoder network (REDN) method. In our approach, we reformulate the recovery of the true phase as the estimation, by semantic segmentation, of the integer multiple of 2π at each pixel. The proposed REDN architecture provides recognition performance with pixel-level accuracy. To address the lack of noise- and wrapping-free phase images from DOCT systems for training, we used simulated images synthesized with the background-noise characteristics of DOCT phase images. An evaluation study was performed on simulated images and on DOCT phase images of milk flowing in a plastic tube phantom and of a mouse artery. A comparison study was also performed against the recently proposed deep learning-based DeepLabV3+ and PhaseNet signal phase unwrapping methods and the traditional modified network programming (MNP) method. Both visual inspection and quantitative evaluation based on accuracy, specificity, sensitivity, root-mean-square error, total variation, and processing time demonstrate the robustness, effectiveness, and superiority of our method. The proposed REDN method will benefit accurate and fast diagnosis and evaluation based on DOCT phase images when the detected phase is wrapped, and will enrich the deep learning-based image processing platform for DOCT images.
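The reformulation in this abstract, recovering at every pixel the integer number of 2π wraps via semantic segmentation, reduces the final reconstruction to one line. A minimal NumPy sketch (function name ours), assuming the network's argmax already yields the per-pixel wrap count:

    import numpy as np

    def true_phase(wrapped, wrap_count):
        # wrapped: wrapped phase in (-pi, pi], shape (H, W)
        # wrap_count: integer k per pixel, taken from the segmentation argmax
        return wrapped + 2.0 * np.pi * wrap_count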
Collapse
Affiliation(s)
- Chuanchao Wu
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Zhengyu Qiao
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Nan Zhang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Xiaochen Li
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| | - Yong Huang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
| |
Collapse
|
234
|
Chung M, Lee M, Hong J, Park S, Lee J, Lee J, Yang IH, Lee J, Shin YG. Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation. Comput Biol Med 2020; 120:103720. [PMID: 32250852 DOI: 10.1016/j.compbiomed.2020.103720] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2020] [Revised: 03/21/2020] [Accepted: 03/21/2020] [Indexed: 11/30/2022]
Abstract
Individual tooth segmentation from cone beam computed tomography (CBCT) images is an essential prerequisite for an anatomical understanding of orthodontic structures in several applications, such as tooth reformation planning and implant guide simulations. However, the presence of severe metal artifacts in CBCT images hinders the accurate segmentation of each individual tooth. In this study, we propose a neural network for pixel-wise labeling to exploit an instance segmentation framework that is robust to metal artifacts. Our method comprises three steps: 1) image cropping and realignment by pose regression, 2) metal-robust individual tooth detection, and 3) segmentation. We first extract the alignment information of the patient by pose regression neural networks to attain a volume-of-interest (VOI) region and realign the input image, which reduces the inter-overlapping area between tooth bounding boxes. Then, individual tooth regions are localized within the VOI-realigned image using a convolutional detector. We improved the accuracy of the detector by employing non-maximum suppression and multiclass classification metrics in the region proposal network. Finally, we apply a convolutional neural network (CNN) to perform individual tooth segmentation by converting the pixel-wise labeling task to a distance regression task. Metal-intensive image augmentation is also employed for robust segmentation in the presence of metal artifacts. The results show that our proposed method outperforms other state-of-the-art methods, especially for teeth with metal artifacts. Our method demonstrated 5.68% and 30.30% better accuracy in the F1 score and aggregated Jaccard index, respectively, when compared to the best-performing state-of-the-art algorithms. The major implication of the proposed method is twofold: 1) an introduction of pose-aware VOI realignment followed by robust tooth detection and 2) a metal-robust CNN framework for accurate tooth segmentation.
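Converting pixel-wise labeling into distance regression, as step 3 describes, typically means training against a distance transform of the ground-truth mask rather than hard 0/1 labels. A sketch of how such a regression target might be built (our construction, not the authors' code; the clipping value is arbitrary):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def distance_regression_target(mask, clip=20.0):
        # mask: binary tooth mask; returns a normalized distance map in [0, 1]
        dist = distance_transform_edt(mask)  # distance to background, measured inside the tooth
        return np.clip(dist, 0.0, clip) / clip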
Collapse
Affiliation(s)
- Minyoung Chung
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Minkyung Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Jioh Hong
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Sanguk Park
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Jusang Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Jingyu Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| | - Il-Hyung Yang
- Department of Orthodontics, Seoul National University School of Dentistry, 101 Daehak-Ro Jongro-Gu, Seoul, 03080, South Korea.
| | - Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, South Korea.
| | - Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
| |
Collapse
|
235
|
Ding Y, Acosta R, Enguix V, Suffren S, Ortmann J, Luck D, Dolz J, Lodygensky GA. Using Deep Convolutional Neural Networks for Neonatal Brain Image Segmentation. Front Neurosci 2020; 14:207. [PMID: 32273836 PMCID: PMC7114297 DOI: 10.3389/fnins.2020.00207] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 02/25/2020] [Indexed: 12/13/2022] Open
Abstract
INTRODUCTION Deep learning neural networks are especially potent at dealing with structured data, such as images and volumes. Both modified LiviaNET and HyperDense-Net performed well at a prior competition segmenting 6-month-old infant magnetic resonance images, but neonatal cerebral tissue type identification is challenging given its uniquely inverted tissue contrasts. The current study aims to evaluate the two architectures for segmenting neonatal brain tissue types at term equivalent age. METHODS Both networks were retrained over 24 pairs of neonatal T1 and T2 data from the Developing Human Connectome Project public data set and validated on another eight pairs against ground truth. We then reported the best-performing model from training and its performance on eight test subjects, computing the Dice similarity coefficient (DSC) for each tissue type. RESULTS During the testing phase, among the segmentation approaches tested, the dual-modality HyperDense-Net achieved the best, statistically significant, mean test DSC values of 0.94/0.95/0.92 across the three tissue types; it took 80 h to train and 10 min to segment each brain, including preprocessing. The single-modality LiviaNET was better at processing T2-weighted images than T1-weighted images across all tissue types, achieving mean DSC values of 0.90/0.90/0.88 for gray matter, white matter, and cerebrospinal fluid, respectively, while requiring 30 h to train and 8 min to segment each brain, including preprocessing. DISCUSSION Our evaluation demonstrates that both neural networks can segment neonatal brains, achieving previously reported performance. Both networks will be continuously retrained over an increasingly large repertoire of neonatal brain data and made available through the Canadian Neonatal Brain Platform to better serve the neonatal brain imaging research community.
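The Dice similarity coefficient reported throughout this study is a standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal NumPy sketch for one tissue type (function name ours):

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-7):
        # DSC = 2 * |A intersect B| / (|A| + |B|) for two binary masks
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)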
Collapse
Affiliation(s)
- Yang Ding
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
| | - Rolando Acosta
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
| | - Vicente Enguix
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
| | - Sabrina Suffren
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
| | - Janosch Ortmann
- Department of Management and Technology, Université du Québec à Montréal, Montreal, QC, Canada
| | - David Luck
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
| | - Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
| | - Gregory A. Lodygensky
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
| |
Collapse
|
236
|
Tan C, Guan Y, Feng Z, Ni H, Zhang Z, Wang Z, Li X, Yuan J, Gong H, Luo Q, Li A. DeepBrainSeg: Automated Brain Region Segmentation for Micro-Optical Images With a Convolutional Neural Network. Front Neurosci 2020; 14:179. [PMID: 32265621 PMCID: PMC7099146 DOI: 10.3389/fnins.2020.00179] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Accepted: 02/18/2020] [Indexed: 12/27/2022] Open
Abstract
The segmentation of brain region contours in three dimensions is critical for the analysis of different brain structures, and advanced approaches are emerging continuously within the field of neuroscience. With the development of high-resolution micro-optical imaging, whole-brain images can be acquired at the cellular level. However, brain regions in microscopic images are aggregates of discrete neurons with blurry boundaries, and the complex, variable features of brain regions make accurate segmentation challenging. Manual segmentation is reliable but unrealistic to apply on a large scale. Here, we propose an automated brain region segmentation framework, DeepBrainSeg, inspired by the principles of manual segmentation. DeepBrainSeg incorporates three feature levels to learn local and contextual features in different receptive fields through a dual-pathway convolutional neural network (CNN), and provides global localization features through image registration and domain-condition constraints. Validated on biological datasets, DeepBrainSeg not only segments brain-wide regions effectively and with high accuracy (Dice ratio > 0.9), but can also be applied to various types of datasets, including noisy ones. It has the potential to automatically locate information in brain space at large scale.
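A dual-pathway CNN of the kind mentioned here usually pairs a normal-resolution path (local detail) with a wider-receptive-field path (context). A rough PyTorch sketch of that pattern, using dilation for the context path (our simplification; the paper's exact pathways differ):

    import torch
    import torch.nn as nn

    class DualPathwayBlock(nn.Module):
        # Parallel 3D conv paths: an ordinary path for local detail and a
        # dilated path for wider context, fused by channel concatenation.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.local = nn.Conv3d(in_ch, out_ch, 3, padding=1)
            self.context = nn.Conv3d(in_ch, out_ch, 3, padding=3, dilation=3)

        def forward(self, x):
            return torch.relu(torch.cat([self.local(x), self.context(x)], dim=1))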
Collapse
Affiliation(s)
- Chaozhen Tan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yue Guan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhao Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Hong Ni
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zoutao Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhiguang Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| | - Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| |
Collapse
|
237
|
Du X, Song Y, Liu Y, Zhang Y, Liu H, Chen B, Li S. An integrated deep learning framework for joint segmentation of blood pool and myocardium. Med Image Anal 2020; 62:101685. [PMID: 32272344 DOI: 10.1016/j.media.2020.101685] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 02/26/2020] [Accepted: 03/02/2020] [Indexed: 11/26/2022]
Abstract
Simultaneous and automatic segmentation of the blood pool and myocardium is an important precondition for early diagnosis and pre-operative planning in patients with complex congenital heart disease. However, due to the high diversity of cardiovascular structures and changes in mechanical properties caused by cardiac defects, the segmentation task still faces great challenges. To overcome these challenges, in this study we propose an integrated multi-task deep learning framework based on the dilated residual and hybrid pyramid pooling network (DRHPPN) for joint segmentation of the blood pool and myocardium. The framework consists of three closely connected progressive sub-networks. An inception module realizes the initial multi-level feature representation of cardiovascular images. A dilated residual network (DRN), as the main body of feature extraction and pixel classification, produces a preliminary prediction of the segmentation regions. A hybrid pyramid pooling network (HPPN) is designed to facilitate the aggregation of local information into global context, complementing the DRN. Extensive experiments on three-dimensional cardiovascular magnetic resonance (CMR) images (the available dataset of the MICCAI 2016 HVSMR challenge) demonstrate that our approach can accurately segment the blood pool and myocardium and achieve competitive performance compared with state-of-the-art segmentation methods.
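Pyramid pooling aggregates local features into global context by pooling at several grid sizes, projecting, and upsampling back before concatenation. A compact 3D sketch of that general mechanism (our simplification of the idea, not the paper's HPPN; names and bin sizes are ours):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidPooling3D(nn.Module):
        # Pool features at several grid sizes, project, upsample back, and
        # concatenate with the input to mix local and global context.
        def __init__(self, channels, bins=(1, 2, 4)):
            super().__init__()
            self.projs = nn.ModuleList(nn.Conv3d(channels, channels // len(bins), 1) for _ in bins)
            self.bins = bins

        def forward(self, x):
            d, h, w = x.shape[2:]
            feats = [x]
            for bin_size, proj in zip(self.bins, self.projs):
                pooled = F.adaptive_avg_pool3d(x, bin_size)
                feats.append(F.interpolate(proj(pooled), size=(d, h, w),
                                           mode='trilinear', align_corners=False))
            return torch.cat(feats, dim=1)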
Collapse
Affiliation(s)
- Xiuquan Du
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, China; School of Computer Science and Technology, Anhui University, China
| | - Yuhui Song
- School of Computer Science and Technology, Anhui University, China
| | - Yueguo Liu
- School of Computer Science and Technology, Anhui University, China
| | - Yanping Zhang
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, China; School of Computer Science and Technology, Anhui University, China
| | - Heng Liu
- Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Anhui, China
| | - Bo Chen
- School of Health Science, Western University, London, ON N6A 3K7, Canada
| | - Shuo Li
- Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada.
| |
Collapse
|
238
|
Conventional and Deep Learning Methods for Skull Stripping in Brain MRI. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10051773] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
Skull stripping in brain magnetic resonance volumes has recently been attracting attention due to increased demand for an efficient, accurate, and general algorithm that works across diverse brain datasets. Accurate skull stripping is a critical step for neuroimaging diagnostic systems because neither the inclusion of non-brain tissue nor the removal of brain parts can be corrected in subsequent steps, so any error propagates through the rest of the analysis. The objective of this review article is to give a comprehensive overview of skull stripping approaches, including recent deep learning-based ones. In this paper, current skull stripping methods are divided into two distinct groups: conventional (classical) approaches and convolutional neural network (deep learning) approaches. The potential of several methods is emphasized because they can be applied to standard clinical imaging protocols. Finally, current trends and future developments are addressed, with special attention to recent deep learning algorithms.
Collapse
|
239
|
Holbrook MD, Blocker SJ, Mowery YM, Badea A, Qi Y, Xu ES, Kirsch DG, Johnson GA, Badea CT. MRI-Based Deep Learning Segmentation and Radiomics of Sarcoma in Mice. Tomography 2020; 6:23-33. [PMID: 32280747 PMCID: PMC7138523 DOI: 10.18383/j.tom.2019.00021] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Small-animal imaging is an essential tool that provides noninvasive, longitudinal insight into novel cancer therapies. However, considerable variability in image analysis techniques can lead to inconsistent results. We have developed quantitative imaging for application in the preclinical arm of a coclinical trial using a genetically engineered mouse model of soft tissue sarcoma. Magnetic resonance (MR) images were acquired 1 day before and 1 week after radiation therapy (RT). After the second MRI, the primary tumor was surgically removed by amputating the tumor-bearing hind limb, and mice were followed for up to 6 months. An automatic analysis pipeline was applied to the multicontrast MRI data, using a convolutional neural network for tumor segmentation followed by radiomics analysis. We calculated radiomics features for the tumor, the peritumoral area, and the two combined. The first radiomics analysis focused on features most indicative of radiation therapy effects; the second looked for features that might predict primary tumor recurrence. The segmentation results indicated that Dice scores were similar when using multicontrast versus single T2-weighted data (0.863 vs 0.861). One week post-RT, larger tumor volumes were measured, and radiomics analysis showed greater heterogeneity. In the tumor and peritumoral area, radiomics features were predictive of primary tumor recurrence (AUC: 0.79). We have created an image processing pipeline for high-throughput, reduced-bias segmentation of multiparametric tumor MRI data and radiomics analysis, to better our understanding of preclinical imaging and the insights it provides when studying new cancer therapies.
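Radiomics analyses like this one typically start from first-order statistics of the voxel intensities inside a mask; heterogeneity, for instance, is often captured by histogram entropy. An illustrative subset (our selection, not the study's full feature set):

    import numpy as np

    def first_order_radiomics(vals):
        # vals: voxel intensities inside the tumor (or peritumoral) mask
        vals = vals.astype(float).ravel()
        hist, _ = np.histogram(vals, bins=64)
        p = hist / hist.sum()
        return {
            "mean": vals.mean(),
            "variance": vals.var(),
            "skewness": ((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12),
            "entropy": -np.sum(p * np.log2(p + 1e-12)),  # higher = more heterogeneous
        }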
Collapse
Affiliation(s)
- M. D. Holbrook
- Departments of Radiology, Center for In Vivo Microscopy; and
| | - S. J. Blocker
- Departments of Radiology, Center for In Vivo Microscopy; and
| | - Y. M. Mowery
- Radiation Oncology, Duke University Medical Center, Durham, NC
| | - A. Badea
- Departments of Radiology, Center for In Vivo Microscopy; and
| | - Y. Qi
- Departments of Radiology, Center for In Vivo Microscopy; and
| | - E. S. Xu
- Radiation Oncology, Duke University Medical Center, Durham, NC
| | - D. G. Kirsch
- Radiation Oncology, Duke University Medical Center, Durham, NC
| | - G. A. Johnson
- Departments of Radiology, Center for In Vivo Microscopy; and
| | - C. T. Badea
- Departments of Radiology, Center for In Vivo Microscopy; and
| |
Collapse
|
240
|
Liu L, Dou Q, Chen H, Qin J, Heng PA. Multi-Task Deep Model With Margin Ranking Loss for Lung Nodule Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:718-728. [PMID: 31403410 DOI: 10.1109/tmi.2019.2934577] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and early diagnosis of lung nodules is of great importance for therapeutic treatment and saving lives. Automated lung nodule analysis requires both accurate benign-malignant classification and attribute score regression. However, this is quite challenging due to the considerable difficulty of modeling lung nodule heterogeneity and the limited discrimination capability on ambiguous cases. To address these challenges, we propose a Multi-Task deep model with Margin Ranking loss (referred to as MTMR-Net) for automated lung nodule analysis. Compared to existing methods that consider these two tasks separately, the relatedness between lung nodule classification and attribute score regression is explicitly explored in a cause-and-effect manner within our multi-task deep model, which can contribute to performance gains for both tasks. The results of the different tasks are yielded simultaneously to assist radiologists in diagnosis interpretation. Furthermore, a Siamese network with a margin ranking loss is elaborately designed to enhance the discrimination capability on ambiguous nodule cases. To further explore the internal relationship between the two tasks and validate the effectiveness of the proposed model, we use recursive feature elimination to iteratively rank the most malignancy-related features. We validate the efficacy of MTMR-Net on the public benchmark LIDC-IDRI dataset. Extensive experiments show that the diagnostic results, with the internal relationship explicitly explored in our model, match patterns seen in clinical usage, and demonstrate that our approach achieves competitive classification performance and more accurate attribute scoring than the state of the art. Code is publicly available at: https://github.com/CaptainWilliam/MTMR-NET.
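A margin ranking loss of the kind used by the Siamese branch is available off the shelf in PyTorch. A minimal usage sketch (the scores, margin, and target are made-up illustrations, not values from the paper):

    import torch
    import torch.nn as nn

    rank_loss = nn.MarginRankingLoss(margin=0.5)

    # Malignancy scores for two nodules from the twin branches of a Siamese net
    score_a = torch.tensor([0.8], requires_grad=True)
    score_b = torch.tensor([0.3], requires_grad=True)

    # target = +1 means "nodule A should rank above nodule B"
    target = torch.tensor([1.0])
    loss = rank_loss(score_a, score_b, target)  # max(0, -y * (a - b) + margin)
    loss.backward()

The margin forces ambiguous pairs apart: the loss stays positive until the correctly ordered scores differ by at least the margin.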
Collapse
|
241
|
Zhou C, Ding C, Wang X, Lu Z, Tao D. One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:4516-4529. [PMID: 32086210 DOI: 10.1109/tip.2020.2973510] [Citation(s) in RCA: 62] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and ignores the correlation among the models. To handle these flaws of the MC approach, we propose in this paper a lightweight deep model, the One-pass Multi-task Network (OM-Net), which solves class imbalance better than MC does while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features and task-specific parameters to learn discriminative features. Second, to optimize OM-Net more effectively, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and the BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
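Cross-task guided attention, as summarized above, recalibrates channels using statistics computed inside the region predicted by the previous task. A rough analogue in PyTorch (our interpretation of the described mechanism, not the released OM-Net code; all names are ours):

    import torch
    import torch.nn as nn

    class GuidedChannelAttention(nn.Module):
        # Recalibrate feature channels using statistics pooled inside the
        # soft region predicted by a previous, coarser task.
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())

        def forward(self, feat, prev_mask):
            # feat: (B, C, D, H, W); prev_mask: previous task's soft prediction, (B, 1, D, H, W)
            masked = feat * prev_mask
            stats = masked.sum(dim=(2, 3, 4)) / prev_mask.sum(dim=(2, 3, 4)).clamp(min=1e-6)
            scale = self.fc(stats).view(*stats.shape, 1, 1, 1)  # per-channel gates in (0, 1)
            return feat * scale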
Collapse
|
242
|
Hu X, Luo W, Hu J, Guo S, Huang W, Scott MR, Wiest R, Dahlweid M, Reyes M. Brain SegNet: 3D local refinement network for brain lesion segmentation. BMC Med Imaging 2020; 20:17. [PMID: 32046685 PMCID: PMC7014943 DOI: 10.1186/s12880-020-0409-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Accepted: 01/03/2020] [Indexed: 11/20/2022] Open
Abstract
Accurate segmentation of brain lesions from MR images (MRIs) is important for improving cancer diagnosis, surgical planning, and outcome prediction. However, manual, accurate segmentation of brain lesions from 3D MRIs is highly expensive, time-consuming, and prone to user bias. We present an efficient yet conceptually simple brain segmentation network (referred to as Brain SegNet), a 3D residual framework for automatic voxel-wise segmentation of brain lesions. Our model directly predicts dense voxel-wise segmentation of brain tumor or ischemic stroke regions in 3D brain MRIs. The proposed 3D segmentation network runs at about 0.5 s per MRI - about 50 times faster than the previous approaches of Med Image Anal 43:98-111, 2018 and Med Image Anal 36:61-78, 2017. Our model is evaluated on the BRATS 2015 benchmark for brain tumor segmentation, where it obtains state-of-the-art results, surpassing the recently published results of those works. We further applied Brain SegNet to ischemic stroke lesion outcome prediction, with impressive results achieved on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 database.
Collapse
Affiliation(s)
- Xiaojun Hu
- Malong Technologies, Shenzhen, China.,Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
| | - Weijian Luo
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China
| | - Jiliang Hu
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China.
| | - Sheng Guo
- Malong Technologies, Shenzhen, China.,Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
| | - Weilin Huang
- Malong Technologies, Shenzhen, China. .,Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China.
| | - Matthew R Scott
- Malong Technologies, Shenzhen, China.,Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
| | - Roland Wiest
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
| | - Michael Dahlweid
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
| | - Mauricio Reyes
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
| |
Collapse
|
243
|
Rickmann AM, Roy AG, Sarasua I, Wachinger C. Recalibrating 3D ConvNets with Project & Excite. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2461-2471. [PMID: 32031934 DOI: 10.1109/tmi.2020.2972059] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Fully Convolutional Neural Networks (F-CNNs) achieve state-of-the-art performance for segmentation tasks in computer vision and medical imaging. Recently, computational blocks termed squeeze and excitation (SE) have been introduced to recalibrate F-CNN feature maps both channel- and spatial-wise, boosting segmentation performance while only minimally increasing the model complexity. So far, the development of SE blocks has focused on 2D architectures. For volumetric medical images, however, 3D F-CNNs are a natural choice. In this article, we extend existing 2D recalibration methods to 3D and propose a generic compress-process-recalibrate pipeline for easy comparison of such blocks. We further introduce Project & Excite (PE) modules, customized for 3D networks. In contrast to existing modules, Project & Excite does not perform global average pooling but compresses feature maps along different spatial dimensions of the tensor separately to retain more spatial information that is subsequently used in the excitation step. We evaluate the modules on two challenging tasks, whole-brain segmentation of MRI scans and whole-body segmentation of CT scans. We demonstrate that PE modules can be easily integrated into 3D F-CNNs, boosting performance up to 0.3 in Dice Score and outperforming 3D extensions of other recalibration blocks, while only marginally increasing the model complexity. Our code is publicly available on https://github.com/ai-med/squeezeandexcitation.
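The key difference from global-average-pooling SE blocks is that Project & Excite pools along each spatial axis separately, preserving some spatial layout in the excitation. A condensed PyTorch sketch of that idea (our simplification; layer sizes and names are ours, and the authors' released code should be preferred):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ProjectExcite3D(nn.Module):
        # Average-project the feature map along each spatial axis separately
        # (instead of one global pool), combine the projections, and excite.
        def __init__(self, channels, reduction=2):
            super().__init__()
            self.excite = nn.Sequential(
                nn.Conv3d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv3d(channels // reduction, channels, 1))

        def forward(self, x):
            b, c, d, h, w = x.shape
            p_d = F.adaptive_avg_pool3d(x, (d, 1, 1))  # keep depth axis
            p_h = F.adaptive_avg_pool3d(x, (1, h, 1))  # keep height axis
            p_w = F.adaptive_avg_pool3d(x, (1, 1, w))  # keep width axis
            summed = p_d + p_h + p_w                   # broadcasts to (b, c, d, h, w)
            return x * torch.sigmoid(self.excite(summed))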
Collapse
|
244
|
Brusini I, Lindberg O, Muehlboeck JS, Smedby Ö, Westman E, Wang C. Shape Information Improves the Cross-Cohort Performance of Deep Learning-Based Segmentation of the Hippocampus. Front Neurosci 2020; 14:15. [PMID: 32226359 PMCID: PMC7081773 DOI: 10.3389/fnins.2020.00015] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Accepted: 01/08/2020] [Indexed: 12/20/2022] Open
Abstract
Performing an accurate segmentation of the hippocampus from brain magnetic resonance images is a crucial task in neuroimaging research, since its structural integrity is strongly related to several neurodegenerative disorders, including Alzheimer's disease (AD). Some automatic segmentation tools are already being used, but, in recent years, new deep learning (DL)-based methods have proven to be much more accurate in various medical image segmentation tasks. In this work, we propose a DL-based hippocampus segmentation framework that embeds the statistical shape of the hippocampus as context information into the deep neural network (DNN). The inclusion of shape information is achieved in three main steps: (1) a U-Net-based segmentation, (2) a shape model estimation, and (3) a second U-Net-based segmentation that uses both the original input data and the fitted shape model. The trained DL architectures were tested on image data of three diagnostic groups [AD patients, subjects with mild cognitive impairment (MCI), and controls] from two cohorts (ADNI and AddNeuroMed). Both intra-cohort validation and cross-cohort validation were performed and compared with the conventional U-Net architecture and variations with other types of context information (i.e., autocontext and tissue-class context). Our results suggest that adding shape information can improve segmentation accuracy in cross-cohort validation, i.e., when DNNs are trained on one cohort and applied to another. However, no significant benefit is observed in intra-cohort validation, i.e., training and testing DNNs on images from the same cohort. Moreover, compared to other types of context information, the use of shape context was the most successful at increasing accuracy while keeping the computational time in the order of a few minutes.
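The three-step pipeline described above composes cleanly. A hedged Python sketch of the data flow only (unet1, fit_shape_model, and unet2 are placeholders we introduce, not the authors' published interfaces):

    import torch

    def segment_with_shape_context(image, unet1, fit_shape_model, unet2):
        initial_mask = unet1(image)                        # step 1: coarse U-Net segmentation
        shape_prior = fit_shape_model(initial_mask)        # step 2: fit the statistical shape model
        combined = torch.cat([image, shape_prior], dim=1)  # step 3: image + shape prior as channels
        return unet2(combined)                             # second U-Net refines with shape context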
Collapse
Affiliation(s)
- Irene Brusini
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden.,Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Solna, Sweden
| | - Olof Lindberg
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Solna, Sweden
| | - J-Sebastian Muehlboeck
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Solna, Sweden
| | - Örjan Smedby
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Solna, Sweden
| | - Chunliang Wang
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
| |
Collapse
|
245
|
Wu J, Zhang Y, Tang X. Simultaneous Tissue Classification and Lateral Ventricle Segmentation via a 2D U-net Driven by a 3D Fully Convolutional Neural Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:5928-5931. [PMID: 31947198 DOI: 10.1109/embc.2019.8856668] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this paper, we proposed and validated a novel and fully automatic pipeline for simultaneous tissue classification and lateral ventricle segmentation via a 2D U-net. The 2D U-net was driven by a 3D fully convolutional neural network (FCN). Multiple T1-weighted atlases which had been pre-segmented into six whole-brain regions including the gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), lateral ventricles (LVs), skull, and the background of the entire image were used. In the proposed pipeline, probability maps of the six whole-brain regions of interest (ROIs) were obtained after a pre-segmentation through a trained 3D patch-based FCN. To further capture the global context of the entire image, the to-be-segmented image and the corresponding six probability maps were input to a trained 2D U-net in a 2D slice fashion to obtain the final segmentation map. Experiments were performed on a dataset consisting of 18 T1-weighted images. Compared to the 3D patch-based FCN on segmenting five ROIs (GM, WM, CSF, LVs, skull) and another two classical methods (SPM and FSL) on segmenting GM and WM, the proposed pipeline showed a superior segmentation performance. The proposed segmentation architecture can also be extended to other medical image segmentation tasks.
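The coupling between the 3D FCN and the 2D U-Net described here amounts to channel stacking: each 2D slice is fed to the U-Net together with the corresponding slices of the six probability maps. A trivial sketch of that input construction (function name ours):

    import torch

    def build_unet_input(slice_img, prob_maps):
        # slice_img: (B, 1, H, W) T1 slice; prob_maps: (B, 6, H, W) slices of the
        # 3D FCN's six ROI probability maps -> 7-channel input for the 2D U-Net
        return torch.cat([slice_img, prob_maps], dim=1)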
Collapse
|
246
|
Liu Z, Li S, Chen YK, Liu T, Liu Q, Xu X, Shi Y, Wen W. Orchestrating Medical Image Compression and Remote Segmentation Networks. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2020 2020. [DOI: 10.1007/978-3-030-59719-1_40] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
247
|
Vasan R, Rowan MP, Lee CT, Johnson GR, Rangamani P, Holst M. Applications and Challenges of Machine Learning to Enable Realistic Cellular Simulations. FRONTIERS IN PHYSICS 2020; 7:247. [PMID: 36188416 PMCID: PMC9521042 DOI: 10.3389/fphy.2019.00247] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
In this perspective, we examine three key aspects of an end-to-end pipeline for realistic cellular simulations: reconstruction and segmentation of cellular structures; generation of cellular structures; and mesh generation, simulation, and data analysis. We highlight some of the relevant prior work in these distinct but overlapping areas, with a particular emphasis on current use of machine learning technologies, as well as on future opportunities.
Collapse
Affiliation(s)
- Ritvik Vasan
- Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, United States
| | - Meagan P. Rowan
- Department of Bioengineering, University of California San Diego, La Jolla, CA, United States
| | - Christopher T. Lee
- Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, United States
| | | | - Padmini Rangamani
- Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, United States
| | - Michael Holst
- Department of Mathematics, University of California San Diego, La Jolla, CA, United States
- Department of Physics, University of California San Diego, La Jolla, CA, United States
| |
Collapse
|
248
|
Rizwan I Haque I, Neubert J. Deep learning approaches to biomedical image segmentation. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100297] [Citation(s) in RCA: 100] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
|
249
|
Vijh S, Sharma S, Gaurav P. Brain Tumor Segmentation Using OTSU Embedded Adaptive Particle Swarm Optimization Method and Convolutional Neural Network. DATA VISUALIZATION AND KNOWLEDGE ENGINEERING 2020. [DOI: 10.1007/978-3-030-25797-2_8] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
250
|
Gruetzemacher R, Gupta A, Paradice D. 3D deep learning for detecting pulmonary nodules in CT scans. J Am Med Inform Assoc 2019; 25:1301-1310. [PMID: 30137371 DOI: 10.1093/jamia/ocy098] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 07/03/2018] [Indexed: 01/09/2023] Open
Abstract
Objective To demonstrate and test the validity of a novel deep-learning-based system for the automated detection of pulmonary nodules. Materials and Methods The proposed system uses two 3D deep learning models, one for each of the essential tasks of computer-aided nodule detection: candidate generation and false positive reduction. A total of 888 scans from the LIDC-IDRI dataset were used for training and evaluation. Results For candidate generation on the test data, the detection rate was 94.77% with 30.39 false positives per scan; for false positive reduction, the sensitivity was 94.21% with 1.789 false positives per scan. The overall system detection rate on the test data was 89.29% with 1.789 false positives per scan. Discussion An extensive and rigorous validation was conducted to assess the performance of the proposed system. The system demonstrates a novel combination of 3D deep neural network architectures, uses deep learning for both candidate generation and false positive reduction, and was evaluated on a substantial test dataset. The results strongly support the ability of deep learning pulmonary nodule detection systems to generalize to unseen data. The source code and trained model weights have been made available. Conclusion A novel deep-neural-network-based pulmonary nodule detection system is demonstrated and validated. The results enable a performance comparison of the proposed deep-learning-based system against other similar systems.
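The two-stage pattern in this abstract, a high-sensitivity candidate generator followed by a false-positive-reduction classifier, can be summarized in a few lines. A generic sketch of the control flow only (both models and the candidate format are placeholders we introduce, not the released code):

    def detect_nodules(volume, candidate_model, fp_reduction_model, threshold=0.5):
        # Stage 1: propose many candidates, tuned for high sensitivity
        candidates = candidate_model(volume)          # iterable of (center, patch)
        detections = []
        # Stage 2: re-score each candidate patch to prune false positives
        for center, patch in candidates:
            score = fp_reduction_model(patch)
            if score >= threshold:
                detections.append((center, score))
        return detections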
Collapse
Affiliation(s)
- Ross Gruetzemacher
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
| | - Ashish Gupta
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
| | - David Paradice
- Department of Systems & Technology, Raymond J. Harbert College of Business, Auburn University, Auburn, AL, USA 36849
| |
Collapse
|