1
Neijzen D, Lunter G. Unsupervised learning for medical data: A review of probabilistic factorization methods. Stat Med 2023; 42:5541-5554. [PMID: 37850249] [DOI: 10.1002/sim.9924]
Abstract
We review popular unsupervised learning methods for the analysis of high-dimensional data encountered in, for example, genomics, medical imaging, cohort studies, and biobanks. We show that four commonly used methods, principal component analysis, K-means clustering, nonnegative matrix factorization, and latent Dirichlet allocation, can be written as probabilistic models underpinned by a low-rank matrix factorization. In addition to highlighting their similarities, this formulation clarifies the various assumptions and restrictions of each approach, which helps applied medical researchers identify the appropriate method for specific applications. We also touch upon the most important aspects of inference and model selection for the application of these methods to health data.
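As an illustration of the shared low-rank structure the review describes, the sketch below (not from the paper; synthetic data and scikit-learn stand-ins) fits both PCA and NMF at the data's true rank and reconstructs X ≈ WH:

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
# Synthetic nonnegative data: 100 samples, 20 features, true rank 3
X = rng.random((100, 3)) @ rng.random((3, 20))

# PCA: X ≈ scores @ components + mean (low-rank factorization plus offset)
pca = PCA(n_components=3).fit(X)
X_pca = pca.transform(X) @ pca.components_ + pca.mean_

# NMF: X ≈ W @ H with W, H >= 0 (same low-rank form, extra constraints)
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_
X_nmf = W @ H

print(np.abs(X - X_pca).max())   # near zero: rank-3 data recovered exactly
print(np.linalg.norm(X - X_nmf) / np.linalg.norm(X))  # small relative error
```

The point of the sketch is that both methods return the same kind of object, a rank-k factorization of the data matrix; they differ only in the constraints and noise model, which is exactly the framing the review formalizes.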
Affiliation(s)
- Dorien Neijzen
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Gerton Lunter
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Weatherall Institute of Molecular Medicine, Oxford University, Oxford, UK
2
Tensaouti F, Desmoulin F, Gilhodes J, Roques M, Ken S, Lotterie JA, Noël G, Truc G, Sunyach MP, Charissoux M, Magné N, Lubrano V, Péran P, Cohen-Jonathan Moyal E, Laprie A. Is pre-radiotherapy metabolic heterogeneity of glioblastoma predictive of progression-free survival? Radiother Oncol 2023; 183:109665. [PMID: 37024057] [DOI: 10.1016/j.radonc.2023.109665]
Abstract
BACKGROUND AND PURPOSE: All glioblastoma subtypes share the hallmark of aggressive invasion, meaning that it is crucial to identify their different components if we are to ensure effective treatment and improve survival. Proton MR spectroscopic imaging (MRSI) is a noninvasive technique that yields metabolic information and is able to identify pathological tissue with high accuracy. The aim of the present study was to identify clusters of metabolic heterogeneity, using a large MRSI dataset, and determine which of these clusters are predictive of progression-free survival (PFS).
MATERIALS AND METHODS: MRSI data of 180 patients acquired in a pre-radiotherapy examination were included in the prospective SPECTRO-GLIO trial. Eight features were extracted for each spectrum: Cho/NAA, NAA/Cr, Cho/Cr, Lac/NAA, and the ratio of each metabolite to the sum of all the metabolites. Clustering of the data was performed using a mini-batch k-means algorithm. The Cox model and log-rank test were used for PFS analysis.
RESULTS: Five clusters were identified as sharing similar metabolic information and being predictive of PFS. Two clusters revealed metabolic abnormalities. PFS was lower when Cluster 2 was the dominant cluster in patients' MRSI data. Among the metabolites, lactate (present in this cluster and in Cluster 5) was the most statistically significant predictor of poor outcome.
CONCLUSION: Results showed that pre-radiotherapy MRSI can be used to reveal tumor heterogeneity. Groups of spectra that share the same metabolic information reflect the different tissue components representative of tumor burden, proliferation, and hypoxia. Clusters with metabolic abnormalities and high lactate are predictive of PFS.
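A minimal sketch of the clustering step described above, using scikit-learn's MiniBatchKMeans on synthetic two-feature "metabolic ratio" data; the feature values and group means here are invented stand-ins, not the trial's:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
# Hypothetical stand-in for per-voxel metabolic features (e.g. Cho/NAA, Lac/NAA):
# two synthetic tissue types with well-separated mean ratios
normal = rng.normal(loc=[0.5, 0.2], scale=0.05, size=(300, 2))
abnormal = rng.normal(loc=[2.0, 1.5], scale=0.05, size=(300, 2))
X = np.vstack([normal, abnormal])

# Mini-batch k-means trades a little accuracy for speed on large spectra sets
km = MiniBatchKMeans(n_clusters=2, batch_size=64, n_init=3, random_state=0).fit(X)
print(km.cluster_centers_)  # one centroid per metabolic "tissue type"
```

On real MRSI data the cluster labels, not the centroids, are what feed the downstream survival analysis (e.g. which cluster dominates a patient's voxels).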
Affiliation(s)
- Fatima Tensaouti
- Institut Claudius Regaud/Institut Universitaire du Cancer de Toulouse - Oncopôle, Radiation oncology, Toulouse, France; ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
- Franck Desmoulin
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
- Julia Gilhodes
- Institut Claudius Regaud/Institut Universitaire du Cancer de Toulouse - Oncopôle, Biostatistics, Toulouse, France
- Margaux Roques
- CHU Toulouse, Neuroradiology, Toulouse, France; ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
- Soleakhena Ken
- Institut Claudius Regaud/Institut Universitaire du Cancer de Toulouse - Oncopôle, Engineering and Medical Physics, Toulouse, France; Inserm U1037 - Centre de Recherches contre le Cancer de Toulouse, Radiation oncology, Toulouse, France
- Jean-Albert Lotterie
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France; CHU Toulouse, Nuclear Medicine, Toulouse, France
- Gilles Truc
- Centre Georges-François Leclerc, Radiation Oncology, Dijon, France
- Marie Charissoux
- Institut du Cancer de Montpellier, Radiation Oncology, Montpellier, France
- Nicolas Magné
- Institut de Cancérologie de la Loire Lucien Neuwirth, Radiation Oncology, Saint-Priest-en-Jarez, France
- Vincent Lubrano
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
- Patrice Péran
- ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
- Elizabeth Cohen-Jonathan Moyal
- Institut Claudius Regaud/Institut Universitaire du Cancer de Toulouse - Oncopôle, Radiation oncology, Toulouse, France; Inserm U1037 - Centre de Recherches contre le Cancer de Toulouse, Radiation oncology, Toulouse, France
- Anne Laprie
- Institut Claudius Regaud/Institut Universitaire du Cancer de Toulouse - Oncopôle, Radiation oncology, Toulouse, France; ToNIC, Toulouse NeuroImaging Center, Université de Toulouse, Inserm, UPS, France
3
Factorizer: A scalable interpretable approach to context modeling for medical image segmentation. Med Image Anal 2023; 84:102706. [PMID: 36516557] [DOI: 10.1016/j.media.2022.102706]
Abstract
Convolutional Neural Networks (CNNs) with U-shaped architectures have dominated medical image segmentation, which is crucial for various clinical purposes. However, the inherent locality of convolution makes CNNs fail to fully exploit global context, essential for better recognition of some structures, e.g., brain lesions. Transformers have recently shown promising performance on vision tasks, including semantic segmentation, mainly due to their capability of modeling long-range dependencies. Nevertheless, the quadratic complexity of attention forces existing Transformer-based models to apply self-attention layers only after reducing the image resolution in some way, which limits the ability to capture global contexts present at higher resolutions. Therefore, this work introduces a family of models, dubbed Factorizer, which leverages the power of low-rank matrix factorization for constructing an end-to-end segmentation model. Specifically, we propose a linearly scalable approach to context modeling, formulating Nonnegative Matrix Factorization (NMF) as a differentiable layer integrated into a U-shaped architecture. The shifted window technique is also utilized in combination with NMF to effectively aggregate local information. Factorizers compete favorably with CNNs and Transformers in terms of accuracy, scalability, and interpretability, achieving state-of-the-art results on the BraTS dataset for brain tumor segmentation and the ISLES'22 dataset for stroke lesion segmentation. Highly meaningful NMF components give Factorizers an additional interpretability advantage over CNNs and Transformers. Moreover, our ablation studies reveal a distinctive feature of Factorizers that enables a significant speed-up in inference for a trained Factorizer without any extra steps and without sacrificing much accuracy. The code and models are publicly available at https://github.com/pashtari/factorizer.
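As a rough illustration of why NMF can act as an iterative, layer-like computation, the sketch below implements the classic Lee-Seung multiplicative updates; this is not the Factorizer code, and all names are illustrative. Each update is built purely from matrix products and elementwise operations, which is what makes an unrolled NMF amenable to use inside a differentiable architecture:

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ≈ W @ H (Frobenius loss).
    Every step is a composition of matmuls and elementwise ops, i.e. the
    kind of computation that can be unrolled as a network layer."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H, keeping H >= 0
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W, keeping W >= 0
    return W, H

rng = np.random.default_rng(0)
X = rng.random((50, 4)) @ rng.random((4, 30))  # nonnegative, exactly rank 4
W, H = nmf_multiplicative(X, rank=4)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err)  # small relative reconstruction error
```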
4
Popat M, Patel S. Research perspective and review towards brain tumour segmentation and classification using different image modalities. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2124546]
Affiliation(s)
- Mayuri Popat
- U & P.U. Patel Department of Computer Engineering, Chandubhai S Patel Institute of Technology (CSPIT), Charotar University of Science and Technology (CHARUSAT), Gujarat, India
- Sanskruti Patel
- Smt. Chandaben Mohanbhai Patel Institute of Computer Applications (CMPICA), Charotar University of Science and Technology (CHARUSAT), Gujarat, India
5
Hamamoto R, Takasawa K, Machino H, Kobayashi K, Takahashi S, Bolatkan A, Shinkai N, Sakai A, Aoyama R, Yamada M, Asada K, Komatsu M, Okamoto K, Kameoka H, Kaneko S. Application of non-negative matrix factorization in oncology: one approach for establishing precision medicine. Brief Bioinform 2022; 23:6628783. [PMID: 35788277] [PMCID: PMC9294421] [DOI: 10.1093/bib/bbac246]
Abstract
The increase in expectations of artificial intelligence (AI) technology has led to machine learning being actively used in the medical field. Non-negative matrix factorization (NMF) is a machine learning technique used for image analysis, speech recognition, and language processing; recently, it has also been applied to medical research. Precision medicine, wherein important information is extracted from large-scale medical data to provide optimal medical care for every individual, is considered important in medical policies globally, and machine learning techniques are being applied to this end in several ways. Owing to the characteristics of its algorithms, NMF, too, has been introduced in a variety of forms. In this review, we describe the importance of NMF in the field of medicine, with a focus on oncology, by explaining the mathematics of NMF and the characteristics of its algorithms, providing examples of how NMF can be used to establish precision medicine, and presenting the challenges of NMF. Finally, we discuss directions for the effective use of NMF in the field of oncology.
Affiliation(s)
- Rina Aoyama
- Showa University Graduate School of Medicine School of Medicine
- Ken Asada
- RIKEN Center for Advanced Intelligence Project
6
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900] [PMCID: PMC8943308] [DOI: 10.1177/20552076221074122]
Abstract
Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development.
Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation.
Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score).
Statistical tests: We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour.
Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion: U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
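For reference, the Dice score used for comparison in the review above can be computed as follows; this is a generic sketch on toy binary masks, not tied to any reviewed method:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy example: predicted mask lies entirely inside the ground-truth mask
truth = np.zeros((10, 10), dtype=int); truth[2:8, 2:8] = 1  # 36 voxels
pred = np.zeros((10, 10), dtype=int); pred[3:8, 2:8] = 1    # 30 voxels
print(dice_score(pred, truth))  # 2*30/(30+36) ≈ 0.909
```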
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
7
Abstract
Brain tumors occur owing to uncontrolled and rapid growth of cells. If not treated at an initial phase, they may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning and quantum machine learning for brain tumor analysis. Finally, this survey provides all important literature for the detection of brain tumors, with their advantages, limitations, developments, and future trends.
8
Image Features of Magnetic Resonance Imaging under the Deep Learning Algorithm in the Diagnosis and Nursing of Malignant Tumors. Contrast Media Mol Imaging 2021; 2021:1104611. [PMID: 34548850] [PMCID: PMC8423572] [DOI: 10.1155/2021/1104611]
Abstract
This study explored the effect of a deep learning-based convolutional neural network (CNN) algorithm on magnetic resonance imaging (MRI) of brain tumor patients and evaluated the practical value of deep learning-based MRI image features in the clinical diagnosis and nursing of malignant tumors. A brain tumor MRI image model based on the CNN algorithm was constructed, and 80 patients with brain tumors were selected as the research objects. They were divided into an experimental group (CNN algorithm) and a control group (traditional algorithm). The patients received nursing care throughout the process. The macroscopic characteristics and imaging indexes of the MRI images, as well as patient anxiety, were compared and analyzed between the two groups. In addition, image quality after nursing was checked. The results revealed that the MRI characteristics of brain tumors based on the CNN algorithm were clearer and more accurate in the fluid-attenuated inversion recovery (FLAIR), MRI T1, T1c, and T2 sequences; the mean accuracy, sensitivity, and specificity were 0.83, 0.84, and 0.83, respectively, showing clear advantages over the traditional algorithm (P < 0.05). Patients in the nursing group showed lower depression scores and better MRI images than the control group (P < 0.05). Therefore, the deep learning algorithm can further accurately analyze the MRI image characteristics of brain tumor patients on the basis of conventional algorithms, showing high sensitivity and specificity, which improves the application value of MRI image characteristics in the diagnosis of malignant tumors. In addition, effective nursing for patients undergoing analysis and diagnosis of brain tumor MRI image characteristics can alleviate patient anxiety and ensure that high-quality MRI images are obtained after the examination.
9
Guo K, Li X, Hu X, Liu J, Fan T. Hahn-PCNN-CNN: an end-to-end multi-modal brain medical image fusion framework useful for clinical diagnosis. BMC Med Imaging 2021; 21:111. [PMID: 34261452] [PMCID: PMC8278599] [DOI: 10.1186/s12880-021-00642-z]
Abstract
Background: In medical diagnosis of the brain, the role of multi-modal medical image fusion is becoming more prominent. Approaches include filtering-based layered fusion and newly emerging deep learning algorithms. The former has a fast fusion speed but produces blurred fusion image texture; the latter has a better fusion effect but requires higher machine computing capabilities. Therefore, how to find an algorithm balanced in terms of image quality, speed and computing power is still the focus of scholars.
Methods: We built an end-to-end Hahn-PCNN-CNN. The network is composed of a feature extraction module, a feature fusion module and an image reconstruction module. We selected 8000 multi-modal brain medical images downloaded from the Harvard Medical School website to train the feature extraction layer and image reconstruction layer, to enhance the network's ability to reconstruct brain medical images. In the feature fusion module, we use the moments of the feature map combined with a pulse-coupled neural network to reduce the information loss caused by convolution in the preceding fusion module and to save time.
Results: We chose eight sets of registered multi-modal brain medical images across four diseases to verify our model. The anatomical structure images are from MRI and the functional metabolism images are SPECT and 18F-FDG. At the same time, we also selected eight representative fusion models as comparative experiments. In terms of objective quality evaluation, we selected six evaluation metrics in five categories to evaluate our model.
Conclusions: The fusion image obtained by our model retains the effective information in the source images to the greatest extent. In terms of image fusion evaluation metrics, our model is superior to the other comparison algorithms. In terms of computational efficiency, our model also performs well. In terms of robustness, our model is very stable and can be generalized to multi-modal image fusion of other organs.
Affiliation(s)
- Kai Guo
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
- Xiongfei Li
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China; College of Computer Science and Technology, Jilin University, Changchun, China
- Xiaohan Hu
- Department of Radiology, The First Hospital of Jilin University, Changchun, China
- Jichen Liu
- College of Software, Jilin University, Changchun, China
- Tiehu Fan
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
10
Ge C, Gu IYH, Jakola AS, Yang J. Deep semi-supervised learning for brain tumor classification. BMC Med Imaging 2020; 20:87. [PMID: 32727476] [PMCID: PMC7391541] [DOI: 10.1186/s12880-020-00485-0]
Abstract
Background: This paper addresses the classification of brain tumors (gliomas) from four modalities of Magnetic Resonance Image (MRI) scans (i.e., T1-weighted MRI, contrast-enhanced T1-weighted MRI, T2-weighted MRI and FLAIR). Currently, many available glioma datasets contain some unlabeled brain scans, and many datasets are moderate in size.
Methods: We propose to exploit deep semi-supervised learning to make full use of the unlabeled data. Deep CNN features were incorporated into a new graph-based semi-supervised learning framework for learning the labels of the unlabeled data, where a new 3D-2D consistency constraint is added to enforce consistent classifications for the 2D slices from the same 3D brain scan. A deep learning classifier is then trained to classify different glioma types using both labeled and unlabeled data with estimated labels. To alleviate the overfitting caused by moderate-size datasets, synthetic MRIs generated by Generative Adversarial Networks (GANs) are added in the training of CNNs.
Results: The proposed scheme has been tested on two glioma datasets: the TCGA dataset for IDH-mutation prediction (molecular-based glioma subtype classification) and the MICCAI dataset for glioma grading. Our results show good performance (test accuracies of 86.53% on the TCGA dataset and 90.70% on the MICCAI dataset).
Conclusions: The proposed scheme is effective for glioma IDH-mutation prediction and glioma grading, and its performance is comparable to the state-of-the-art.
Affiliation(s)
- Chenjie Ge
- Dept. of Electrical Engineering, Chalmers Univ. of Technology, Gothenburg, 41296, Sweden
- Irene Yu-Hua Gu
- Dept. of Electrical Engineering, Chalmers Univ. of Technology, Gothenburg, 41296, Sweden
- Asgeir Store Jakola
- Sahlgrenska University Hospital and Inst. of Neuroscience and Physiology, Sahlgrenska Academy, Gothenburg, 41345, Sweden
- Jie Yang
- Inst. of Image Processing and Pattern Recognition, Shanghai Jiao Tong Univ., Shanghai, 200240, China
11
Kumar S, Mankame DP. Optimization driven Deep Convolution Neural Network for brain tumor classification. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.05.009]
12
Sharif M, Amin J, Raza M, Yasmin M, Satapathy SC. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.11.017]
13
Nalepa J, Ribalta Lorenzo P, Marcinkiewicz M, Bobek-Billewicz B, Wawrzyniak P, Walczak M, Kawulok M, Dudzik W, Kotowski K, Burda I, Machura B, Mrukwa G, Ulrych P, Hayball MP. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors. Artif Intell Med 2020; 102:101769. [DOI: 10.1016/j.artmed.2019.101769]
14
Nalepa J, Marcinkiewicz M, Kawulok M. Data Augmentation for Brain-Tumor Segmentation: A Review. Front Comput Neurosci 2019; 13:83. [PMID: 31920608] [PMCID: PMC6917660] [DOI: 10.3389/fncom.2019.00083]
Abstract
Data augmentation is a popular technique which helps improve generalization capabilities of deep neural networks, and can be perceived as implicit regularization. It plays a pivotal role in scenarios in which the amount of high-quality ground-truth data is limited, and acquiring new examples is costly and time-consuming. This is a very common problem in medical image analysis, especially tumor delineation. In this paper, we review the current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors. To better understand the practical aspects of such algorithms, we investigate the papers submitted to the Multimodal Brain Tumor Segmentation Challenge (BraTS 2018 edition), as the BraTS dataset became a standard benchmark for validating existent and emerging brain-tumor detection and segmentation techniques. We verify which data augmentation approaches were exploited and what was their impact on the abilities of underlying supervised learners. Finally, we highlight the most promising research directions to follow in order to synthesize high-quality artificial brain-tumor examples which can boost the generalization abilities of deep models.
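A minimal example of the simplest kind of label-preserving geometric augmentation this review surveys: random flips and 90-degree rotations on a toy 2D slice. Real BraTS pipelines go further (elastic deformations, intensity shifts, synthesis), and the function name here is illustrative:

```python
import numpy as np

def augment_slice(img, rng):
    """Basic label-preserving augmentations for a 2D slice:
    random left-right / up-down flips and a random 90-degree rotation.
    Geometry changes; the multiset of intensities is preserved."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    img = np.rot90(img, k=int(rng.integers(4)))
    return img

rng = np.random.default_rng(0)
slice_ = np.arange(16.0).reshape(4, 4)  # stand-in for an MRI slice
aug = augment_slice(slice_, rng)
print(aug.shape)  # (4, 4): shape unchanged, orientation randomized
```

For segmentation, the identical transform must be applied to the image and its ground-truth mask so that labels stay aligned.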
Affiliation(s)
- Jakub Nalepa
- Future Processing, Gliwice, Poland
- Silesian University of Technology, Gliwice, Poland
15
Hale AT, Stonko DP, Wang L, Strother MK, Chambless LB. Machine learning analyses can differentiate meningioma grade by features on magnetic resonance imaging. Neurosurg Focus 2019; 45:E4. [PMID: 30453458] [DOI: 10.3171/2018.8.focus18191]
Abstract
OBJECTIVE: Prognostication and surgical planning for WHO grade I versus grade II meningioma requires thoughtful decision-making based on radiographic evidence, among other factors. Although conventional statistical models such as logistic regression are useful, machine learning (ML) algorithms are often more predictive, have higher discriminative ability, and can learn from new data. The authors used conventional statistical models and an array of ML algorithms to predict atypical meningioma based on radiologist-interpreted preoperative MRI findings. The goal of this study was to compare the performance of ML algorithms to standard statistical methods when predicting meningioma grade.
METHODS: The cohort included patients aged 18-65 years with WHO grade I (n = 94) and II (n = 34) meningioma in whom preoperative MRI was obtained between 1998 and 2010. A board-certified neuroradiologist, blinded to histological grade, interpreted all MR images for tumor volume, degree of peritumoral edema, presence of necrosis, tumor location, presence of a draining vein, and patient sex. The authors trained and validated several binary classifiers: k-nearest neighbors models, support vector machines, naïve Bayes classifiers, and artificial neural networks, as well as logistic regression models, to predict tumor grade. The area under the receiver operating characteristic (ROC) curve was used for comparison across and within model classes. All analyses were performed in MATLAB using a MacBook Pro.
RESULTS: The authors included 6 preoperative imaging and demographic variables to construct the models: tumor volume, degree of peritumoral edema, presence of necrosis, tumor location, patient sex, and presence of a draining vein. The artificial neural networks outperformed all other ML models across the true-positive versus false-positive (ROC) space (area under curve = 0.8895).
CONCLUSIONS: ML algorithms are powerful computational tools that can predict meningioma grade with great accuracy.
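A hedged sketch of the model-comparison workflow described above, on synthetic data with scikit-learn stand-ins. The study itself used MATLAB; the feature generator, classifier settings and train/test split below are illustrative only, chosen to mirror the shape of the problem (6 predictors, imbalanced binary outcome, AUC comparison):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in for 6 radiographic/demographic predictors,
# ~73% majority class (cf. 94 grade I vs 34 grade II)
X, y = make_classification(n_samples=128, n_features=6, n_informative=4,
                           weights=[0.73], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),       # conventional baseline
    "knn": KNeighborsClassifier(5),                      # ML comparator
    "ann": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0),                # small neural network
}
# Compare classifiers by area under the ROC curve on the held-out set
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
print(aucs)
```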
Affiliation(s)
- Andrew T Hale
- Department of Neurosurgery, Vanderbilt University Medical Center; Vanderbilt University School of Medicine
- Li Wang
- Department of Biostatistics, Vanderbilt University
- Megan K Strother
- Department of Radiology, Vanderbilt University Medical Center, Nashville, Tennessee
- Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center; Vanderbilt University School of Medicine
16
Ortega-Martorell S, Candiota AP, Thomson R, Riley P, Julia-Sape M, Olier I. Embedding MRI information into MRSI data source extraction improves brain tumour delineation in animal models. PLoS One 2019; 14:e0220809. [PMID: 31415601] [PMCID: PMC6695141] [DOI: 10.1371/journal.pone.0220809]
Abstract
Glioblastoma is the most frequent malignant intra-cranial tumour. Magnetic resonance imaging is the modality of choice in diagnosis, aggressiveness assessment, and follow-up. However, there are examples where it lacks diagnostic accuracy. Magnetic resonance spectroscopy enables the identification of molecules present in the tissue, providing a precise metabolomic signature. Previous research shows that combining imaging and spectroscopy information results in more accurate outcomes and superior diagnostic value. This study proposes a method to combine them, which builds upon a previous methodology whose main objective is to guide the extraction of sources. To this aim, prior knowledge about class-specific information is integrated into the methodology by setting the metric of a latent variable space where Non-negative Matrix Factorisation is performed. The former methodology, which only used spectroscopy and involved combining spectra from different subjects, was adapted to use selected areas of interest that arise from segmenting the T2-weighted image. Results showed that embedding imaging information into the source extraction (the proposed semi-supervised analysis) improved the quality of the tumour delineation, as compared to those obtained without this information (unsupervised analysis). Both approaches were applied to pre-clinical data, involving thirteen brain tumour-bearing mice, and tested against histopathological data. On results of twenty-eight images, the proposed Semi-Supervised Source Extraction (SSSE) method greatly outperformed the unsupervised one, as well as an alternative semi-supervised approach from the literature, with differences being statistically significant. 
SSSE has proven successful in delineating the tumour, while bringing benefits such as (1) not constraining the metabolomics-based prediction to the image-segmented area, (2) the ability to deal with signal-to-noise issues, (3) the opportunity to answer specific questions by allowing researchers/radiologists to define areas of interest that guide the source extraction, (4) the creation of an intra-subject model that avoids contamination from inter-subject overlaps, and (5) the extraction of meaningful, good-quality sources that add interpretability, conferring validation and a better understanding of each case.
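The unsupervised baseline described above can be sketched in a few lines: factor a voxel-by-frequency matrix into nonnegative abundances and spectral sources. All names, shapes, and the random "spectra" below are hypothetical stand-ins, not the authors' data or code, and the ROI mask only loosely gestures at the T2-guided, semi-supervised variant.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy stand-in for an MRSI grid: three latent spectral "sources" mixed
# voxel-wise with convex weights, plus a little noise.
n_voxels, n_points, n_sources = 200, 120, 3
true_sources = rng.gamma(2.0, 1.0, size=(n_sources, n_points))
mixing = rng.dirichlet(np.ones(n_sources), size=n_voxels)
spectra = mixing @ true_sources + 0.01 * rng.random((n_voxels, n_points))

# Unsupervised source extraction: factor the voxel-by-frequency matrix
# into nonnegative abundances (coeffs) and spectral sources.
model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
coeffs = model.fit_transform(spectra)     # (voxels, sources)
sources = model.components_               # (sources, points)

# A T2-derived mask could restrict which voxels drive the factorisation,
# loosely mimicking the image-guided (semi-supervised) variant.
roi_mask = rng.random(n_voxels) < 0.5
roi_coeffs = NMF(n_components=n_sources, init="nndsvda", max_iter=500,
                 random_state=0).fit_transform(spectra[roi_mask])
print(coeffs.shape, sources.shape, roi_coeffs.shape)
```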
Affiliation(s)
- Sandra Ortega-Martorell
- Department of Applied Mathematics, Liverpool John Moores University, Liverpool, England, United Kingdom
- Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Cerdanyola del Vallès, Spain
| | - Ana Paula Candiota
- Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Cerdanyola del Vallès, Spain
- Departament de Bioquímica i Biologia Molecular, Unitat de Biociències, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Institut de Biotecnologia i de Biomedicina, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
| | - Ryan Thomson
- Department of Applied Mathematics, Liverpool John Moores University, Liverpool, England, United Kingdom
| | - Patrick Riley
- Department of Applied Mathematics, Liverpool John Moores University, Liverpool, England, United Kingdom
| | - Margarida Julia-Sape
- Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Cerdanyola del Vallès, Spain
- Departament de Bioquímica i Biologia Molecular, Unitat de Biociències, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Institut de Biotecnologia i de Biomedicina, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
| | - Ivan Olier
- Department of Applied Mathematics, Liverpool John Moores University, Liverpool, England, United Kingdom
| |
|
17
|
Pedrosa de Barros N, Meier R, Pletscher M, Stettler S, Knecht U, Reyes M, Gralla J, Wiest R, Slotboom J. Analysis of metabolic abnormalities in high-grade glioma using MRSI and convex NMF. NMR Biomed 2019; 32:e4109. [PMID: 31131943 DOI: 10.1002/nbm.4109] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2017] [Revised: 03/30/2019] [Accepted: 04/01/2019] [Indexed: 06/09/2023]
Abstract
Clinical use of MRSI is limited by the level of experience required to properly translate MRSI examinations into relevant clinical information. To solve this, several methods have been proposed to automatically recognize a predefined set of reference metabolic patterns. Given the variety of metabolic patterns seen in glioma patients, the decision on the optimal number of patterns that need to be used to describe the data is not trivial. In this paper, we propose a novel framework to (1) separate healthy from abnormal metabolic patterns and (2) retrieve an optimal number of reference patterns describing the most important types of abnormality. Using 41 MRSI examinations (1.5 T, PRESS, TE 135 ms) from 22 glioma patients, four different patterns describing different types of abnormality were detected: edema, healthy without Glx, active tumor and necrosis. The identified patterns were then evaluated on 17 MRSI examinations from nine different glioma patients. The results were compared against BraTumIA, an automatic segmentation method trained to identify different tumor compartments on structural MRI data. Finally, the ability to predict future contrast enhancement using the proposed approach was also evaluated.
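The model-order question raised above (how many reference patterns describe the data?) can be explored with a simple reconstruction-error scan. This is a generic sketch using scikit-learn's standard NMF rather than the convex NMF of the paper, and the data, names, and rank range are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic "abnormal spectra": convex mixtures of four latent patterns.
patterns = rng.gamma(2.0, 1.0, size=(4, 100))
data = rng.dirichlet(np.ones(4), size=300) @ patterns
data += 0.01 * rng.random(data.shape)

# Scan candidate ranks and record the reconstruction error; the elbow of
# this curve suggests how many reference patterns the data supports.
errors = {}
for k in range(1, 8):
    model = NMF(n_components=k, init="nndsvda", max_iter=400, random_state=0)
    model.fit(data)
    errors[k] = model.reconstruction_err_

# Relative gain from adding one more pattern; gains should shrink sharply
# once the true number of patterns (here, four) is reached.
gains = {k: (errors[k - 1] - errors[k]) / errors[k - 1] for k in range(2, 8)}
print({k: round(v, 3) for k, v in gains.items()})
```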
Affiliation(s)
- Nuno Pedrosa de Barros
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Raphael Meier
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Martin Pletscher
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Samuel Stettler
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Urspeter Knecht
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Mauricio Reyes
- Institute for Surgical Technology and Biomechanics (ISTB), University of Bern, Bern, Switzerland
| | - Jan Gralla
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Roland Wiest
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| | - Johannes Slotboom
- Support Center for Advanced Neuroimaging (SCAN), Neuroradiology, University Hospital Inselspital, Bern, Switzerland
| |
|
18
|
Zhai J, Li H. An Improved Full Convolutional Network Combined with Conditional Random Fields for Brain MR Image Segmentation Algorithm and its 3D Visualization Analysis. J Med Syst 2019; 43:292. [PMID: 31338693 DOI: 10.1007/s10916-019-1424-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Accepted: 07/14/2019] [Indexed: 01/27/2023]
Abstract
Existing brain region segmentation algorithms based on deep convolutional neural networks (CNNs) are inefficient at object boundary segmentation. To enhance the segmentation accuracy of brain tissue, this paper proposes an object region segmentation algorithm that combines pixel-level information and semantic information. Firstly, we extract semantic information with a CNN equipped with an attention module and obtain coarse segmentation results through a pixel-level classifier. Then, we exploit conditional random fields to model the relationships between the underlying pixels and thereby capture local features. Finally, the semantic information and the local pixel-level information are used as the unary and pairwise (binary) potentials of a Gibbs distribution, respectively; combining both yields a fine region segmentation algorithm based on the fusion of pixel-level and semantic information. Extensive qualitative and quantitative test results show that the proposed algorithm achieves higher precision than existing state-of-the-art deep feature models, better addresses the problem of rough edge segmentation, and produces a good 3D visualization effect.
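A minimal sketch of the unary-plus-pairwise Gibbs formulation described above: random class scores stand in for CNN outputs, a Potts penalty stands in for the pixel-level pairwise term, and iterated conditional modes (ICM) stands in for whatever inference the authors used. All names, sizes, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: random class scores play the role of CNN-derived unary
# potentials; a Potts penalty on the 4-neighbourhood plays the role of
# the pixel-level pairwise (binary) potential.
H, W, n_classes = 20, 20, 3
unary = rng.random((H, W, n_classes))   # treat as -log P(class | pixel)
labels = unary.argmin(axis=-1)          # coarse "semantic" segmentation
beta = 0.7                              # smoothness weight

def local_energy(lab, y, x, c):
    """Unary term plus Potts penalty against the 4-neighbours."""
    e = unary[y, x, c]
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W and lab[ny, nx] != c:
            e += beta
    return e

# Iterated conditional modes: greedily relabel each pixel with the class
# of lowest combined energy; each change never increases the Gibbs
# energy, so the sweeps terminate.
for _ in range(50):
    changed = 0
    for y in range(H):
        for x in range(W):
            best = min(range(n_classes), key=lambda c: local_energy(labels, y, x, c))
            if best != labels[y, x]:
                labels[y, x] = best
                changed += 1
    if changed == 0:
        break
print(changed, np.bincount(labels.ravel(), minlength=n_classes))
```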
Affiliation(s)
- Jiemin Zhai
- Department of Neurology, Xi'an XD Group Hospital, Xi'an, 710077, Shaanxi, China.
| | - Huiqi Li
- Department of Neurology, Xi'an XD Group Hospital, Xi'an, 710077, Shaanxi, China
| |
|
19
|
Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, Mak RH, Tamimi RM, Tempany CM, Swanton C, Hoffmann U, Schwartz LH, Gillies RJ, Huang RY, Aerts HJWL. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin 2019; 69:127-157. [PMID: 30720861 PMCID: PMC6403009 DOI: 10.3322/caac.21552] [Citation(s) in RCA: 635] [Impact Index Per Article: 127.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.
Affiliation(s)
- Wenya Linda Bi
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Ahmed Hosny
- Research Scientist, Department of Radiation Oncology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Matthew B. Schabath
- Associate Member, Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
| | - Maryellen L. Giger
- Professor of Radiology, Department of Radiology, University of Chicago, Chicago, IL
| | - Nicolai J. Birkbak
- Research Associate, The Francis Crick Institute, London, United Kingdom
- Research Associate, University College London Cancer Institute, London, United Kingdom
| | - Alireza Mehrtash
- Research Assistant, Department of Radiology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
- Research Assistant, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Tavis Allison
- Research Assistant, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Research Assistant, Department of Radiology, New York Presbyterian Hospital, New York, NY
| | - Omar Arnaout
- Assistant Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Christopher Abbosh
- Research Fellow, The Francis Crick Institute, London, United Kingdom
- Research Fellow, University College London Cancer Institute, London, United Kingdom
| | - Ian F. Dunn
- Associate Professor of Neurosurgery, Department of Neurosurgery, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Raymond H. Mak
- Associate Professor, Department of Radiation Oncology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Rulla M. Tamimi
- Associate Professor, Department of Medicine, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Clare M. Tempany
- Professor of Radiology, Department of Radiology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Charles Swanton
- Professor, The Francis Crick Institute, London, United Kingdom
- Professor, University College London Cancer Institute, London, United Kingdom
| | - Udo Hoffmann
- Professor of Radiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
| | - Lawrence H. Schwartz
- Professor of Radiology, Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY
- Chair, Department of Radiology, New York Presbyterian Hospital, New York, NY
| | - Robert J. Gillies
- Professor of Radiology, Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL
| | - Raymond Y. Huang
- Assistant Professor, Department of Radiology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
| | - Hugo J. W. L. Aerts
- Associate Professor, Departments of Radiation Oncology and Radiology, Brigham and Women’s Hospital, Dana‐Farber Cancer Institute, Harvard Medical School, Boston, MA
- Professor in AI in Medicine, Radiology and Nuclear Medicine, GROW, Maastricht University Medical Centre (MUMC+), Maastricht, The Netherlands
| |
|
20
|
Sakai K, Yamada K. Machine learning studies on major brain diseases: 5-year trends of 2014–2018. Jpn J Radiol 2018; 37:34-72. [DOI: 10.1007/s11604-018-0794-4] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2018] [Accepted: 11/14/2018] [Indexed: 12/17/2022]
|
21
|
Rank-Two NMF Clustering for Glioblastoma Characterization. J Healthc Eng 2018; 2018:1048164. [PMID: 30425818 PMCID: PMC6218733 DOI: 10.1155/2018/1048164] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 09/26/2018] [Indexed: 11/17/2022]
Abstract
This study investigates a novel classification method for glioblastoma tumor characterization in 3D multimodal MRI. We formulate the segmentation problem as a linear mixture model (LMM), building a nonnegative matrix M from every MRI slice at each step of the segmentation process. This matrix is first used to extract the edema region from the T2 and FLAIR modalities. In the remaining steps, we restrict attention to the edema region in the T1c modality, generate the matrix M, and segment the necrosis, enhanced tumor, and nonenhanced tumor regions. At each segmentation step, we apply rank-two NMF clustering. We evaluated our tumor characterization method on the BraTS 2015 challenge dataset. Quantitative and qualitative evaluations over the publicly available training and testing datasets from the MICCAI 2015 multimodal brain segmentation challenge (BraTS 2015) showed that the proposed algorithm yields competitive performance for brain glioblastoma characterization (necrosis, tumor core, and edema) among several competing methods.
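The rank-two NMF clustering step can be sketched as follows: factor a nonnegative matrix M with two components and assign each voxel to its dominant mixing coefficient. The toy linear mixture below is a hypothetical stand-in, not BraTS data or the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical linear mixture model: 300 voxels, 3 modality intensities,
# each voxel a nonnegative mixture of two endmember profiles.
endmembers = np.array([[5.0, 1.0, 0.5],
                       [0.5, 4.0, 3.0]])
weights = np.abs(rng.normal(size=(300, 2)))
M = weights @ endmembers + 0.05 * rng.random((300, 3))

# Rank-two NMF: M ~ W H with two basis profiles; each voxel joins the
# cluster whose mixing coefficient dominates, splitting the region in two.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
Wmix = model.fit_transform(M)        # (voxels, 2) mixing coefficients
clusters = Wmix.argmax(axis=1)

print(np.bincount(clusters, minlength=2))
```

Applied recursively (cluster, then re-factor each part), this binary split yields the hierarchy of edema, necrosis, and tumor compartments described above.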
|
22
|
Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. Eur Radiol 2018; 29:124-132. [PMID: 29943184 PMCID: PMC6291436 DOI: 10.1007/s00330-018-5595-8] [Citation(s) in RCA: 108] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 05/19/2018] [Accepted: 06/05/2018] [Indexed: 12/18/2022]
Abstract
Objectives Magnetic resonance imaging (MRI) is the method of choice for imaging meningiomas. Volumetric assessment of meningiomas is highly relevant for therapy planning and monitoring. We used a multiparametric deep-learning model (DLM) on routine MRI data including images from diverse referring institutions to investigate DLM performance in automated detection and segmentation of meningiomas in comparison to manual segmentations. Methods We included 56 of 136 consecutive preoperative MRI datasets [T1/T2-weighted, T1-weighted contrast-enhanced (T1CE), FLAIR] of meningiomas that were treated surgically at the University Hospital Cologne and graded histologically as tumour grade I (n = 38) or grade II (n = 18). The DLM was trained on an independent dataset of 249 glioma cases and segmented different tumour classes as defined in the brain tumour image segmentation benchmark (BRATS benchmark). The DLM was based on the DeepMedic architecture. Results were compared to manual segmentations by two radiologists in a consensus reading in FLAIR and T1CE. Results The DLM detected meningiomas in 55 of 56 cases. Further, automated segmentations correlated strongly with manual segmentations: average Dice coefficients were 0.81 ± 0.10 (range, 0.46-0.93) for the total tumour volume (union of tumour volume in FLAIR and T1CE) and 0.78 ± 0.19 (range, 0.27-0.95) for contrast-enhancing tumour volume in T1CE. Conclusions The DLM yielded accurate automated detection and segmentation of meningioma tissue despite diverse scanner data and thereby may improve and facilitate therapy planning as well as monitoring of this highly frequent tumour entity. 
Key Points • Deep learning allows for accurate meningioma detection and segmentation • Deep learning helps clinicians to assess patients with meningiomas • Meningioma monitoring and treatment planning can be improved
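The Dice coefficients reported above (2·|A∩B| / (|A|+|B|)) take only a few lines to compute; the masks below are hypothetical examples, not the study's data.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical 8x8 masks: a manual reference and an automated result
# shifted down by one row.
manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True               # 16 voxels
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True                 # 16 voxels, 12 overlapping

print(round(dice(manual, auto), 3))   # 2*12 / (16+16) = 0.75
```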
|
23
|
Kong Y, Chen X, Wu J, Zhang P, Chen Y, Shu H. Automatic brain tissue segmentation based on graph filter. BMC Med Imaging 2018; 18:9. [PMID: 29739350 PMCID: PMC5941431 DOI: 10.1186/s12880-018-0252-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Accepted: 04/30/2018] [Indexed: 01/24/2023] Open
Abstract
Background Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of significant importance in clinical applications and neuroscience research. Accurate segmentation is challenging due to tissue heterogeneity caused by noise, bias field and partial volume effects. Methods To overcome these challenges, this paper presents a novel algorithm for brain tissue segmentation based on supervoxels and graph filtering. Firstly, a supervoxel method is employed to generate effective supervoxels for the 3D MRI image. Secondly, the supervoxels are classified into different tissue types by filtering of graph signals. Results The performance is evaluated on the BrainWeb 18 dataset and the Internet Brain Segmentation Repository (IBSR) 18 dataset. The proposed method achieves a mean Dice similarity coefficient (DSC) of 0.94, 0.92 and 0.90 for the segmentation of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) on the BrainWeb 18 dataset, and a mean DSC of 0.85, 0.87 and 0.57 for WM, GM and CSF on the IBSR 18 dataset. Conclusions The proposed approach discriminates well between different brain tissue types in brain MRI and has high potential for clinical applications.
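One way to realise "classification by filtering of graph signals" is to low-pass filter seed labels over a supervoxel similarity graph. The construction below (threshold graph on feature distances, (I + αL)⁻¹ filter, two seeds) is an assumption for illustration, not the authors' exact pipeline; names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated groups of "supervoxels" with 2-D feature vectors.
feats = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
                   rng.normal(3.0, 0.3, (10, 2))])
n = len(feats)

# Threshold graph on squared feature distances; combinatorial Laplacian.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
A = (d2 < 2.0).astype(float) - np.eye(n)
L = np.diag(A.sum(axis=1)) - A

# One-hot label signal from two seeds; unlabelled nodes start at zero.
signal = np.zeros((n, 2))
signal[0, 0] = 1.0    # seed for tissue class 0
signal[-1, 1] = 1.0   # seed for tissue class 1

# Low-pass graph filter (I + alpha*L)^-1 diffuses the seed labels along
# edges; each supervoxel takes the class with the strongest response.
alpha = 0.9
smoothed = np.linalg.solve(np.eye(n) + alpha * L, signal)
pred = smoothed.argmax(axis=1)
print(pred)
```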
Affiliation(s)
- Youyong Kong
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| | - Xiaopeng Chen
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| | - Jiasong Wu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| | - Pinzheng Zhang
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| | - Yang Chen
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| | - Huazhong Shu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China
- International Joint Laboratory of Information Display and Visualization, Nanjing, People's Republic of China
| |
|