1
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900] [PMCID: PMC8943308] [DOI: 10.1177/20552076221074122] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021]
Abstract
Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging (MRI). In the literature, segmentation methods are empowered by open-access MRI datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital for method development.
Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, as compared with manual segmentation.
Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery (FLAIR), diffusion-weighted and perfusion-weighted MRI sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to MRI. In particular, we synthesised each method according to the MRI sequences used, the study population, the technical approach (such as deep learning) and the reported performance measures (such as the Dice score).
Statistical tests: We compared median Dice scores for segmentation of the whole tumour, the tumour core and the enhanced tumour.
Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and FLAIR MRI are used the most across segmentation algorithms, whereas perfusion-weighted and diffusion-weighted MRI see limited use. Moreover, we found that the U-Net deep learning architecture is cited the most and achieves high accuracy (Dice score 0.9) for MRI-based brain tumour segmentation.
Conclusion: U-Net is a promising deep learning architecture for MRI-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted MRI, where limited datasets are available.
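The review's headline metric, the Dice score, is straightforward to compute. As a minimal illustration (not code from the review), the overlap between a predicted and a manual tumour mask can be scored like this:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D "tumour" masks; real use would pass 3D segmentation volumes.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True           # 16 voxels, 9 of them overlapping
print(round(dice_score(pred, truth), 4))  # 2*9 / (16+16) = 0.5625
```

A Dice score of 1.0 means exact agreement with the manual segmentation; the 0.9 reported for U-Net above corresponds to very substantial overlap.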
Affiliation(s)
- Jayendra M Bhalodiya, Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung, Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis, Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
2
Guo K, Li X, Hu X, Liu J, Fan T. Hahn-PCNN-CNN: an end-to-end multi-modal brain medical image fusion framework useful for clinical diagnosis. BMC Med Imaging 2021; 21:111. [PMID: 34261452] [PMCID: PMC8278599] [DOI: 10.1186/s12880-021-00642-z] [Received: 03/01/2021] [Accepted: 05/28/2021]
Abstract
Background: In the medical diagnosis of brain disease, multi-modal medical image fusion plays an increasingly prominent role. Existing approaches include filtering-based layered fusion and newly emerging deep learning algorithms. The former is fast but produces fusion images with blurred texture; the latter fuses better but demands greater machine computing capability. How to balance image quality, speed and computing requirements therefore remains an open question.
Methods: We built an end-to-end Hahn-PCNN-CNN network composed of a feature extraction module, a feature fusion module and an image reconstruction module. We selected 8000 multi-modal brain medical images downloaded from the Harvard Medical School website to train the feature extraction and image reconstruction layers, enhancing the network's ability to reconstruct brain medical images. In the feature fusion module, we combine moments of the feature maps with a pulse-coupled neural network (PCNN) to reduce the information loss caused by convolution in the preceding module and to save time.
Results: We chose eight sets of registered multi-modal brain medical images across four diseases to verify our model. The anatomical structure images are from MRI and the functional metabolism images from SPECT and 18F-FDG PET. We also selected eight representative fusion models as comparison methods and, for objective quality evaluation, six evaluation metrics in five categories.
Conclusions: The fused image obtained by our model retains the effective information of the source images to the greatest extent. Our model outperforms the comparison algorithms on the fusion evaluation metrics, performs well in computational efficiency, and is stable enough to generalise to multi-modal image fusion of other organs.
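As context for the filtering-versus-deep-learning trade-off described above, here is a deliberately simple baseline fusion rule, far simpler than the paper's Hahn-moment/PCNN module and purely illustrative: each output pixel is taken from whichever co-registered source image has the higher local energy.

```python
import numpy as np

def fuse_max_energy(a: np.ndarray, b: np.ndarray, win: int = 3) -> np.ndarray:
    """Fuse two co-registered grayscale images by picking, per pixel,
    the source whose local energy (sum of squared intensities in a
    win x win neighbourhood) is larger -- a classic baseline rule."""
    pad = win // 2

    def local_energy(img: np.ndarray) -> np.ndarray:
        p = np.pad(img.astype(float) ** 2, pad, mode="edge")
        e = np.zeros(img.shape, dtype=float)
        for dy in range(win):            # sliding-window sum
            for dx in range(win):
                e += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return e

    mask = local_energy(a) >= local_energy(b)
    return np.where(mask, a, b)

# Toy example: each source carries the detail in a different half.
a = np.zeros((6, 6)); a[:, :3] = 10.0   # left half bright in source A
b = np.zeros((6, 6)); b[:, 3:] = 10.0   # right half bright in source B
fused = fuse_max_energy(a, b)
print(fused.min(), fused.max())         # both halves' detail survive fusion
```

Rules like this are fast but blur or mis-select texture at transition regions, which is exactly the weakness the learned fusion module in the paper aims to overcome.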
Affiliation(s)
- Kai Guo, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
- Xiongfei Li, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
- Xiaohan Hu, Department of Radiology, The First Hospital of Jilin University, Changchun, China
- Jichen Liu, College of Software, Jilin University, Changchun, China
- Tiehu Fan, College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
3
Gu Y, Li K. A Transfer Model Based on Supervised Multi-Layer Dictionary Learning for Brain Tumor MRI Image Recognition. Front Neurosci 2021; 15:687496. [PMID: 34122003] [PMCID: PMC8193061] [DOI: 10.3389/fnins.2021.687496] [Received: 03/29/2021] [Accepted: 04/19/2021]
Abstract
Artificial intelligence (AI) is an effective technology for automatic brain tumor MRI image recognition. Training an AI model requires a large amount of labeled data, but medical data must be labeled by professional clinicians, which makes data collection complex and expensive. Moreover, a traditional AI model requires that the training data and test data be independent and identically distributed. To address these problems, we propose a transfer model based on supervised multi-layer dictionary learning (TSMDL) for brain tumor MRI image recognition. With the help of knowledge learned from related domains, the model targets the transfer-learning setting in which the target domain has only a small number of labeled samples. Within a multi-layer dictionary learning framework, the proposed model learns a dictionary shared by the source and target domains in each layer, to capture the intrinsic connections and shared information between the domains. At the same time, to make full use of the samples' label information, a Laplacian regularization term is introduced so that the dictionary codes of similar samples are as close as possible, while the codes of samples from different classes are as different as possible. Recognition experiments on the brain MRI image datasets REMBRANDT and Figshare show that the model outperforms competitive state-of-the-art methods.
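The Laplacian-regularised coding idea can be sketched compactly. The following is a hypothetical single-layer illustration, not the authors' TSMDL implementation: it solves a ridge-regularised coding problem with an added graph-Laplacian penalty that pulls together the codes of connected (e.g. same-class) samples.

```python
import numpy as np

def laplacian_regularized_codes(X, D, L, lam=0.1, mu=0.1):
    """Solve A = argmin ||X - D A||_F^2 + lam ||A||_F^2 + mu tr(A L A^T).

    Setting the gradient to zero gives the Sylvester-type system
    (D^T D + lam I) A + mu A L = D^T X, solved here by diagonalising
    the symmetric graph Laplacian L = U diag(evals) U^T."""
    M = D.T @ D + lam * np.eye(D.shape[1])
    C = D.T @ X
    evals, U = np.linalg.eigh(L)
    Ct = C @ U
    At = np.empty_like(Ct)
    for j, e in enumerate(evals):       # one small solve per eigenvalue
        At[:, j] = np.linalg.solve(M + mu * e * np.eye(M.shape[0]), Ct[:, j])
    return At @ U.T

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 6))        # 6 samples, 10-dim features
D = rng.standard_normal((10, 4))        # dictionary with 4 atoms
# Toy graph: samples 0-2 connected, samples 3-5 connected (same class).
W = np.zeros((6, 6))
for grp in ((0, 1, 2), (3, 4, 5)):
    for i in grp:
        for j in grp:
            if i != j:
                W[i, j] = 1.0
L = np.diag(W.sum(1)) - W               # unnormalised graph Laplacian
A = laplacian_regularized_codes(X, D, L)
print(A.shape)  # (4, 6): one 4-atom code per sample
```

The `mu tr(A L A^T)` term is what encourages graph-connected samples to receive similar codes; with `mu = 0` this reduces to plain ridge-regularised coding.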
Affiliation(s)
- Yi Gu, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Kang Li, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
4
Gu X, Shen Z, Xue J, Fan Y, Ni T. Brain Tumor MR Image Classification Using Convolutional Dictionary Learning With Local Constraint. Front Neurosci 2021; 15:679847. [PMID: 34122001] [PMCID: PMC8193950] [DOI: 10.3389/fnins.2021.679847] [Received: 03/12/2021] [Accepted: 04/09/2021]
Abstract
Brain tumor image classification is an important part of medical image processing; it helps doctors make accurate diagnoses and treatment plans. Magnetic resonance (MR) imaging is one of the main imaging tools for studying brain tissue. In this article, we propose a brain tumor MR image classification method using convolutional dictionary learning with local constraint (CDLLC). Our method integrates multi-layer dictionary learning into a convolutional neural network (CNN) structure to explore discriminative information. Encoding a vector on a dictionary can be viewed as multiple projections into new spaces, and the resulting coding vector is sparse. Meanwhile, to preserve the geometric structure of the data and exploit the supervised information, we construct a local constraint on the atoms through a supervised k-nearest-neighbor graph, so that the learned dictionary is highly discriminative. An efficient iterative optimization scheme is designed to solve the resulting problem. In the experiments, two clinically relevant multi-class classification tasks are designed on the Cheng and REMBRANDT datasets. The evaluation results demonstrate that our method is effective for brain tumor MR image classification and outperforms the compared methods.
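The supervised k-nearest-neighbor graph behind the local constraint can be illustrated directly. This is a minimal sketch under the assumption that "supervised k-NN" means linking each sample only to its nearest neighbors of the same class; it is not the authors' code.

```python
import numpy as np

def supervised_knn_graph(X, y, k=2):
    """Adjacency of a supervised k-NN graph: sample i is linked to its
    k nearest neighbours *with the same label*, then symmetrised.
    The resulting Laplacian can serve as a locality regulariser."""
    n = X.shape[0]
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        nearest = same[np.argsort(d[i, same])][:k]
        W[i, nearest] = 1.0
    return np.maximum(W, W.T)   # symmetrise

# Two tight clusters, one per class.
X = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 0], [5.1, 0], [5.2, 0]])
y = np.array([0, 0, 0, 1, 1, 1])
W = supervised_knn_graph(X, y, k=2)
L = np.diag(W.sum(1)) - W       # graph Laplacian used as a regulariser
print(W[0, 3])  # no cross-class edges by construction
```

Because edges never cross class boundaries, a penalty built on this Laplacian pushes same-class codes together without mixing classes.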
Affiliation(s)
- Xiaoqing Gu, School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Zongxuan Shen, School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Jing Xue, Department of Nephrology, Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
- Yiqing Fan, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Tongguang Ni, School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
5
Müller D, Kramer F. MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning. BMC Med Imaging 2021; 21:12. [PMID: 33461500] [PMCID: PMC7814713] [DOI: 10.1186/s12880-020-00543-7] [Received: 08/04/2020] [Accepted: 12/25/2020]
Abstract
Background: The increased availability and usage of modern medical imaging has created a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for the straightforward setup of medical image segmentation pipelines, and already implemented pipelines are commonly standalone software optimized for a specific public data set. This paper therefore introduces the open-source Python library MIScnn.
Implementation: The aim of MIScnn is to provide an intuitive API that allows fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilities such as training, prediction and fully automatic evaluation (e.g. cross-validation). High configurability and multiple open interfaces additionally allow full pipeline customization.
Results: Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model.
Conclusions: This experiment shows that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline with just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
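MIScnn itself should be used through its own API (see the linked repository). Purely to illustrate two of the pipeline ingredients the abstract lists, patch-wise analysis and cross-validation over scans, here is a framework-free sketch with hypothetical helper names:

```python
import numpy as np

def extract_patches(volume, patch=(4, 4, 4)):
    """Split a 3D volume into non-overlapping patches, the basic idea
    behind patch-wise analysis of large medical scans."""
    px, py, pz = patch
    sx, sy, sz = volume.shape
    out = []
    for x in range(0, sx - px + 1, px):
        for y in range(0, sy - py + 1, py):
            for z in range(0, sz - pz + 1, pz):
                out.append(volume[x:x + px, y:y + py, z:z + pz])
    return np.stack(out)

def kfold_indices(n_samples, k=3, seed=0):
    """(train, test) index splits for k-fold cross-validation over scans."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    splits = []
    for i in range(k):
        train = np.concatenate(folds[:i] + folds[i + 1:])
        splits.append((train, folds[i]))
    return splits

patches = extract_patches(np.zeros((8, 8, 8)))
print(patches.shape)            # 8 patches of shape (4, 4, 4)
for train, test in kfold_indices(6, k=3):
    print(sorted(int(i) for i in test))
```

In a real MIScnn pipeline these steps are handled by the library's preprocessing and evaluation components rather than hand-rolled loops.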
Affiliation(s)
- Dominik Müller, IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Augsburg, Germany
- Frank Kramer, IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Augsburg, Germany
6
Computer-aided quantification of non-contrast 3D black blood MRI as an efficient alternative to reference standard manual CT angiography measurements of abdominal aortic aneurysms. Eur J Radiol 2020; 134:109396. [PMID: 33217686] [DOI: 10.1016/j.ejrad.2020.109396] [Received: 07/29/2020] [Revised: 10/12/2020] [Accepted: 11/02/2020]
Abstract
BACKGROUND: Non-contrast 3D black blood MRI is a promising tool for abdominal aortic aneurysm (AAA) surveillance, permitting the accurate aneurysm diameter measurements needed for patient management.
PURPOSE: To evaluate whether automated AAA volume and diameter measurements obtained from computer-aided segmentation of non-contrast 3D black blood MRI are accurate, and whether they can supplant reference standard manual measurements from contrast-enhanced CT angiography (CTA).
MATERIALS AND METHODS: Thirty AAA patients (mean age, 71.9 ± 7.9 years) were recruited between 2014 and 2017. Participants underwent both non-contrast black blood MRI and CTA within 3 months of each other. Semi-automatic (computer-aided) MRI and CTA segmentations based on deformable registration were compared against manual segmentations of the same modality using the Dice similarity coefficient (DSC). AAA lumen and total aneurysm volumes and AAA maximum diameter, quantified automatically from these segmentations, were compared against manual measurements using Pearson correlation and Bland-Altman analyses. Finally, automated measurements from non-contrast 3D black blood MRI were evaluated against manual CTA measurements using the Wilcoxon test, Pearson correlation and Bland-Altman analyses.
RESULTS: Semi-automatic segmentations agreed closely with manual segmentations (lumen DSC: 0.91 ± 0.03 and 0.94 ± 0.03; total aneurysm DSC: 0.92 ± 0.02 and 0.94 ± 0.03, for black blood MRI and CTA, respectively). Automated volume and maximum diameter measurements also correlated strongly with their manual counterparts for both black blood MRI (volume: r = 0.99, P < 0.001; diameter: r = 0.97, P < 0.001) and CTA (volume: r = 0.99, P < 0.001; diameter: r = 0.97, P < 0.001). Compared to manual CTA measurements, the bias and limits of agreement (LOA) of automated MRI measurements (lumen volume: 1.49 [-4.19, 7.17] cm3; outer wall volume: -2.46 [-14.05, 9.13] cm3; maximal diameter: 0.08 [-6.51, 6.67] mm) were largely equivalent to those of manual MRI measurements (lumen volume: 0.73 [-6.47, 7.93] cm3; outer wall volume: 0.98 [-10.54, 12.5] cm3; maximal diameter: 0.08 [-3.67, 3.83] mm), particularly for maximum AAA diameter.
CONCLUSION: Semi-automatic segmentation of non-contrast 3D black blood MRI efficiently provides reproducible morphologic AAA assessment, yielding accurate AAA diameters and volumes with no clinically relevant differences from either automatic or manual CTA-based measurements.
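The bias and limits-of-agreement figures above come from Bland-Altman analysis, which is simple to reproduce: the bias is the mean paired difference and the 95% LOA are the bias ± 1.96 standard deviations of the differences. The sketch below uses made-up diameter values, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods
    (e.g. automated MRI vs. manual CTA diameters)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired AAA diameter measurements in mm (illustrative only).
auto_mri = np.array([52.1, 48.3, 61.0, 55.4, 50.1])
manual_cta = np.array([51.8, 48.9, 60.2, 55.9, 49.5])
bias, lo, hi = bland_altman(auto_mri, manual_cta)
print(round(bias, 2))  # 0.12 mm mean bias on this toy data
```

A bias near zero with narrow LOA, as reported above for maximum diameter, indicates the two methods can be used interchangeably in practice.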
7
Ge C, Gu IYH, Jakola AS, Yang J. Deep semi-supervised learning for brain tumor classification. BMC Med Imaging 2020; 20:87. [PMID: 32727476] [PMCID: PMC7391541] [DOI: 10.1186/s12880-020-00485-0] [Received: 05/03/2020] [Accepted: 07/13/2020]
Abstract
Background: This paper addresses the classification of brain tumors (gliomas) from four modalities of magnetic resonance imaging (MRI) scans: T1-weighted, contrast-enhanced T1-weighted, T2-weighted and FLAIR. Many available glioma datasets contain some unlabeled brain scans, and many datasets are moderate in size.
Methods: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features are incorporated into a new graph-based semi-supervised learning framework that estimates labels for the unlabeled data, where a new 3D-2D consistency constraint enforces consistent classifications for the 2D slices of the same 3D brain scan. A deep-learning classifier is then trained to classify glioma types using both the labeled data and the unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added when training the CNNs.
Results: The proposed scheme was tested on two glioma datasets: the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. It achieved good performance, with test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset.
Conclusions: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state-of-the-art.
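The graph-based semi-supervised step can be illustrated with a classic label-propagation loop. This is a simplified stand-in: it omits the paper's deep CNN features, GAN augmentation and 3D-2D consistency constraint, and all names are illustrative.

```python
import numpy as np

def propagate_labels(W, y, alpha=0.9, iters=200):
    """Graph-based semi-supervised label propagation.

    W: similarity matrix over all samples.
    y: class id for labeled samples, -1 for unlabeled.
    Scores diffuse over the graph while labeled seeds are re-injected
    each iteration; unlabeled samples inherit nearby labels."""
    n, c = len(y), int(y.max()) + 1
    S = W / W.sum(1, keepdims=True)            # row-normalised transitions
    seeds = np.zeros((n, c))
    labeled = y >= 0
    seeds[labeled, y[labeled]] = 1.0           # one-hot seed scores
    F = seeds.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * seeds  # diffuse + clamp
    return F.argmax(1)

# Two clusters on a line; only one sample per cluster is labeled.
X = np.array([0.0, 0.2, 0.4, 5.0, 5.2, 5.4])[:, None]
W = np.exp(-(X - X.T) ** 2)                    # Gaussian similarity graph
y = np.array([0, -1, -1, 1, -1, -1])
print(propagate_labels(W, y))                  # unlabeled points follow seeds
```

Each unlabeled sample ends up with the label of its cluster's seed, which is the mechanism the paper uses (with far richer features) to turn unlabeled scans into extra training data.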
Affiliation(s)
- Chenjie Ge, Dept. of Electrical Engineering, Chalmers University of Technology, Gothenburg, 41296, Sweden
- Irene Yu-Hua Gu, Dept. of Electrical Engineering, Chalmers University of Technology, Gothenburg, 41296, Sweden
- Asgeir Store Jakola, Sahlgrenska University Hospital and Inst. of Neuroscience and Physiology, Sahlgrenska Academy, Gothenburg, 41345, Sweden
- Jie Yang, Inst. of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China
8
Martín-Noguerol T, Paulano-Godino F, Riascos RF, Calabia-del-Campo J, Márquez-Rivas J, Luna A. Hybrid computed tomography and magnetic resonance imaging 3D printed models for neurosurgery planning. Ann Transl Med 2019; 7:684. [PMID: 31930085] [PMCID: PMC6944557] [DOI: 10.21037/atm.2019.10.109] [Received: 09/21/2019] [Accepted: 10/29/2019]
Abstract
In the last decade, clinical applications of three-dimensional (3D) printed models, in neurosurgery among other fields, have expanded widely, driven by technical improvements in 3D printers and a greater variety of materials, but above all by advances in postprocessing software. Most commonly, physical models are obtained from a single imaging technique, with potential uses in presurgical planning and in the creation of patient-specific surgical material and personalized prostheses or implants. Using specific software solutions, it is possible to obtain more accurate segmentation of different anatomical and pathological structures and more precise registration between different medical image sources, allowing the generation of hybrid computed tomography (CT) and magnetic resonance imaging (MRI) 3D printed models. Neurosurgeons' need for a better understanding of the complex anatomy of the central nervous system (CNS) and spine is driving the use of these hybrid models, which combine morphological information from CT and MRI and can also incorporate physiological data from advanced MRI sequences, such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI), perfusion-weighted imaging (PWI) and functional MRI (fMRI). The inclusion of physiopathological data from advanced MRI sequences enables neurosurgeons to identify, before surgery or biopsy, the areas of a lesion with increased biological aggressiveness. Preliminary data support using this more accurate presurgical perspective to select the best surgical approach, reduce the overall length of surgery, and minimize intraoperative complications, morbidity and patient recovery times. 3D printed models in neurosurgery have also proved to be a valid tool for surgeon training and for improving communication between specialists and patients. Further studies are needed to test the feasibility of this approach in routine clinical practice and to determine the degree of improvement it offers neurosurgeons and its potential impact on patient outcomes.
Affiliation(s)
- Roy F. Riascos, Department of Neuroradiology, The University of Texas Health Science Center at Houston, McGovern Medical School, Texas, USA
- Antonio Luna, MRI Unit, Radiology Department, HT Médica, Jaén, Spain