1. De A, Wang X, Zhang Q, Wu J, Cong F. An efficient memory reserving-and-fading strategy for vector quantization based 3D brain segmentation and tumor extraction using an unsupervised deep learning network. Cogn Neurodyn 2023; 18:1-22. PMID: 37362765; PMCID: PMC10132803; DOI: 10.1007/s11571-023-09965-9.
Abstract
Deep learning networks are state-of-the-art approaches for 3D brain image segmentation, and the radiological characteristics extracted from tumors are of great significance for clinical diagnosis, treatment planning, and treatment outcome evaluation. However, two problems have hindered brain image segmentation techniques: deep learning networks require large amounts of manually annotated data, and 3D deep learning networks are computationally expensive. In this study, we propose a vector quantization (VQ)-based 3D segmentation method that employs a novel unsupervised 3D deep embedding clustering (3D-DEC) network and an efficient memory reserving-and-fading strategy. The VQ-based 3D-DEC network is trained on volume data in an unsupervised manner to avoid manual data annotation, and the memory reserving-and-fading strategy greatly improves model efficiency. Together, these design choices make the deep learning-based model feasible for biomedical image segmentation. The experiment is divided into two parts. First, we extensively evaluate the effectiveness and robustness of the proposed model on two authoritative MRI brain databases (i.e., IBSR and BrainWeb). Second, we validate the model on real 3D brain tumor data collected at our institute to demonstrate its significance for clinical practice. Results show that our method (without manual data annotation) achieves superior accuracy (0.74 ± 0.04 Tanimoto coefficient on IBSR; 97.5% TP and 97.7% TN on BrainWeb; 91% Dice, 88% sensitivity, and 87% specificity on real brain data) and remarkable efficiency (speedup ratios of 18.72 on IBSR, 31.16 on BrainWeb, and 31.00 on real brain data) compared to state-of-the-art methods. The proposed model thus addresses the lack of manual annotations and greatly increases computational speed while delivering competitive segmentation accuracy relative to other state-of-the-art 3D CNN models.
Moreover, the proposed model can be used for tumor treatment follow-ups every 6 months, providing critical details for surgical and postoperative treatment by correctly extracting numerical radiomic features of tumors.
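The evaluation metrics quoted above (Tanimoto coefficient, Dice, sensitivity/TP rate, specificity/TN rate) can be sketched for binary masks as follows. This is an illustrative NumPy sketch; the function names are ours, not from the paper:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def tanimoto(pred, truth):
    """Tanimoto (Jaccard) coefficient: |A ∩ B| / |A ∪ B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def sensitivity_specificity(pred, truth):
    """True-positive and true-negative rates of a binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)
```

Note that Dice and Tanimoto are monotonically related (D = 2T / (1 + T)), which is why papers report either interchangeably.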
Affiliation(s)
- Ailing De
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116000, Liaoning, China
- Xiulin Wang
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116000, Liaoning, China
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116000, Liaoning, China
- Qing Zhang
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116000, Liaoning, China
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116000, Liaoning, China
- Fengyu Cong
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, 116000, Liaoning, China
- Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
- School of Artificial Intelligence, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116000, Liaoning, China
- Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian, 116000, Liaoning, China
2. Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10337-8.
3. Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation. Comput Intell Neurosci 2022; 2022:3514988. PMID: 35785083; PMCID: PMC9249491; DOI: 10.1155/2022/3514988.
Abstract
Given the need for quantitative measurement and 3D visualisation of brain tumours, increasing attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey-level distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour images in the spectral kernel space. In addition, an optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved so that the brain tumour segmentation results conform to the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators, and that the sparsity constraint on the coefficient matrix in the kernel space, integrated into the kernel low-rank model, helps preserve the local structure and details of brain tumours.
4. Deep pattern-based tumor segmentation in brain MRIs. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07422-y.
5. Cheng J, Liu J, Kuang H, Wang J. A Fully Automated Multimodal MRI-Based Multi-Task Learning for Glioma Segmentation and IDH Genotyping. IEEE Trans Med Imaging 2022; 41:1520-1532. PMID: 35020590; DOI: 10.1109/tmi.2022.3142321.
Abstract
The accurate prediction of isocitrate dehydrogenase (IDH) mutation and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). Both tasks remain challenging due to significant inter-tumor and intra-tumor heterogeneity. Existing methods are mostly single-task approaches that do not consider the correlation between the two tasks. In addition, the acquisition of IDH genetic labels is costly, so only a limited amount of IDH mutation data is available for modeling. To address these problems comprehensively, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder, consisting of a convolutional neural network and a transformer, which extracts shared spatial and global information that is passed to a decoder for glioma segmentation and a multi-scale classifier for IDH genotyping. A multi-task learning loss is then designed to balance the two tasks by combining the segmentation and classification loss functions with uncertain weights. Finally, an uncertainty-aware pseudo-label selection is proposed to generate IDH pseudo-labels from larger unlabeled data, improving the accuracy of IDH genotyping through semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that the proposed multi-task network achieves promising performance and outperforms its single-task counterparts as well as other state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source code of our framework is publicly available at https://github.com/miacsu/MTTU-Net.git.
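Balancing two task losses with "uncertain weights" is commonly implemented with learnable log-variance parameters in the style of Kendall et al.; the abstract does not give the exact formula, so the sketch below is an assumption about that weighting, not the paper's code:

```python
import numpy as np

def uncertainty_weighted_loss(seg_loss, cls_loss, log_var_seg, log_var_cls):
    """Combine two task losses with homoscedastic-uncertainty weights:
    each loss is scaled by exp(-log_var), and the additive log_var terms
    regularize the learned weights so they cannot collapse to zero."""
    return (np.exp(-log_var_seg) * seg_loss + log_var_seg
            + np.exp(-log_var_cls) * cls_loss + log_var_cls)
```

In training, `log_var_seg` and `log_var_cls` would be trainable parameters; a task the model finds noisier receives a larger log-variance and thus a smaller effective weight.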
6. Lapuyade-Lahorgue J, Ruan S. Segmentation of multicorrelated images with copula models and conditionally random fields. J Med Imaging (Bellingham) 2022; 9:014001. PMID: 35024379; PMCID: PMC8741411; DOI: 10.1117/1.jmi.9.1.014001.
Abstract
Purpose: Multisource images are of great interest in medical imaging because they enable the use of complementary information from different sources, such as the T1 and T2 modalities in MRI. However, multisource data can also be redundant and correlated. The question is how to fuse the multisource information efficiently without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated. Approach: The proposed method is the continuation of prior work in which we introduced the copula model in hidden Markov fields (HMF). To achieve the multisource segmentations, we use a functional measure of dependency called a "copula," which is incorporated into conditionally random fields (CRF). Contrary to HMF, where prior knowledge on the hidden states is modeled by an HMF, in CRF there is no prior information and only the distribution of the hidden states conditional on the observations can be known. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms: the first groups voxels with similar intensities into the same class, and the second encourages a pair of voxels to be in the same class if the difference between their intensities is small. Results: HMF and CRF are compared theoretically and experimentally using both simulated and real data from BRATS 2013. Moreover, our method is compared with different state-of-the-art methods, including supervised (convolutional neural networks) and unsupervised (hierarchical MRF) approaches. Our unsupervised method gives results similar to decision trees on synthetic images and to convolutional neural networks on real images; both of these competing methods are supervised. Conclusions: We compare two statistical methods using the copula, HMF and CRF, to deal with multicorrelated images, and demonstrate the benefit of using the copula. In both models, the copula considerably improves the results compared with individual segmentations.
Affiliation(s)
- Jérôme Lapuyade-Lahorgue
- University of Rouen, LITIS, Eq. Quantif, Rouen, France
- Su Ruan
- University of Rouen, LITIS, Eq. Quantif, Rouen, France
7. Sasank V, Venkateswarlu S. An automatic tumour growth prediction based segmentation using full resolution convolutional network for brain tumour. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103090.
8. Huang D, Wang M, Zhang L, Li H, Ye M, Li A. Learning rich features with hybrid loss for brain tumor segmentation. BMC Med Inform Decis Mak 2021; 21:63. PMID: 34330265; PMCID: PMC8323198; DOI: 10.1186/s12911-021-01431-y.
Abstract
Background: Accurate segmentation of the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic, objective system to alleviate the workload of radiologists. Methods: We propose a parallel multi-scale feature-fusing architecture that generates rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) that extracts brain tumor features at different levels, and (2) a Multi-scale Feature Fusing Network (MSFFN) that merges features at all scales in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. Results: We validate our method on BRATS 2015, with Dice scores of 0.86, 0.73, and 0.61 for the three tumor regions (complete, core, and enhancing), and a model parameter size of only 6.3 MB. Without any post-processing, our method still outperforms published state-of-the-art methods on complete tumor regions and obtains competitive performance on the other two regions. Conclusions: The proposed parallel structure effectively fuses multi-level features into rich representations for high-resolution results. Moreover, the hybrid loss functions alleviate the class imbalance issue and guide the training process. The proposed method can also be applied to other medical segmentation tasks.
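A "hybrid loss" of this kind typically combines a region-overlap (Dice) term, which counters class imbalance, with a pixel-wise cross-entropy term. The following NumPy sketch illustrates the general idea only; the exact loss terms and weighting in the paper may differ:

```python
import numpy as np

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss on predicted foreground probabilities."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def bce_loss(prob, target, eps=1e-7):
    """Pixel-wise binary cross-entropy."""
    p = np.clip(prob, eps, 1.0 - eps)
    return -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean()

def hybrid_loss(prob, target, alpha=0.5):
    """Weighted sum of the region term and the pixel-wise term."""
    return alpha * dice_loss(prob, target) + (1.0 - alpha) * bce_loss(prob, target)
```

The Dice term is insensitive to the large background class, while the cross-entropy term provides smooth per-pixel gradients, which is why the two are commonly paired.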
Affiliation(s)
- Daobin Huang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- School of Medical Information, Wannan Medical College, Wuhu, 241002, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
- Minghui Wang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- Ling Zhang
- Department of Biochemistry, Wannan Medical College, Wuhu, 241002, China
- Haichun Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- Minquan Ye
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
- Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
- Ao Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
9. Morphological active contour model for automatic brain tumor extraction from multimodal magnetic resonance images. J Neurosci Methods 2021; 362:109296. PMID: 34302860; DOI: 10.1016/j.jneumeth.2021.109296.
Abstract
BACKGROUND: Brain tumor extraction from magnetic resonance (MR) images is challenging due to variations in the location, shape, size, and intensity of tumors. Manual delineation of brain tumors from MR images is time-consuming and prone to human error. METHOD: In this paper, we present a method for automatic tumor extraction from multimodal MR images. Brain tumors are first detected using k-means clustering. A morphological region-based active contour model is then used for tumor extraction, with an initial contour defined from the boundary of the detected tumor regions. Contour evolution is performed through successive application of morphological operators. In our model, a Gaussian distribution is used to model local image intensities, and the spatial correlation between neighboring voxels is modeled using a Markov random field. RESULTS: The proposed method was evaluated on the BraTS 2013 dataset, which includes patients with high-grade and low-grade tumors. In comparison with other active contour-based methods, the proposed method yielded better tumor segmentation, with mean Dice similarity coefficients of 0.9179 (±0.025) and 0.8910 (±0.042) on high-grade and low-grade tumors, respectively. CONCLUSION: The proposed method achieved higher accuracy for brain tumor extraction than other contour-based methods.
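The k-means detection step can be sketched in NumPy. The 1-D intensity clustering, deterministic initialization, and the choice of the brightest cluster as the tumor hypothesis are illustrative simplifications, not details taken from the paper:

```python
import numpy as np

def kmeans_1d(x, k=3, iters=20):
    """Minimal 1-D k-means on voxel intensities (deterministic init)."""
    x = np.asarray(x, dtype=float)
    centroids = np.linspace(x.min(), x.max(), k)   # evenly spaced start
    for _ in range(iters):
        # assign each intensity to its nearest centroid
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean()
    return labels, centroids

def brightest_cluster_mask(x, k=3):
    """Rough tumor hypothesis: the highest-intensity cluster. Its boundary
    would seed the active-contour refinement described in the abstract."""
    labels, centroids = kmeans_1d(x, k)
    return labels == np.argmax(centroids)
```

In practice the clustering would run on a whole (bias-corrected, skull-stripped) volume, and the resulting mask boundary initializes the morphological active contour.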
10. An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images. Med Biol Eng Comput 2021; 59:1495-1527. PMID: 34184181; DOI: 10.1007/s11517-021-02370-6.
Abstract
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have achieved promising performance for brain tumor segmentation; however, handcrafted features remain important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures, along with pre-defined handcrafted features, for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhanced tumor regions. CNN architectures do not normally use pre-defined handcrafted features, because they extract features automatically. In this work, several pre-defined handcrafted features are computed from the four MRI modalities (T2, FLAIR, T1c, and T1) using additional handcrafted masks and are fed to the convolutional (automatic) features to improve the overall performance of the proposed CNN model for tumor segmentation. A multi-pathway CNN is explored alongside a single-pathway CNN; it extracts local and global features simultaneously to identify the accurate sub-regions of the tumor with the help of the handcrafted features. The work uses a cascaded CNN architecture, where the outcome of one CNN is fed as additional input to the subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied to the four MRI modalities with several pre-defined masks to produce a pre-defined set of handcrafted features. The work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle difficulties related to the imbalance of tumor labels.
The proposed method was evaluated on the BraTS 2018 datasets and achieved better results than existing published methods with respect to metrics such as specificity, sensitivity, and Dice similarity coefficient (DSC) for the complete, core, and enhanced tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN along with the handcrafted features.
11. Rathore S, Mohan S, Bakas S, Sako C, Badve C, Pati S, Singh A, Bounias D, Ngo P, Akbari H, Gastounioti A, Bergman M, Bilello M, Shinohara RT, Yushkevich P, O'Rourke DM, Sloan AE, Kontos D, Nasrallah MP, Barnholtz-Sloan JS, Davatzikos C. Multi-institutional noninvasive in vivo characterization of IDH, 1p/19q, and EGFRvIII in glioma using neuro-Cancer Imaging Phenomics Toolkit (neuro-CaPTk). Neurooncol Adv 2021; 2:iv22-iv34. PMID: 33521638; PMCID: PMC7829474; DOI: 10.1093/noajnl/vdaa128.
Abstract
Background: Gliomas represent a biologically heterogeneous group of primary brain tumors with uncontrolled cellular proliferation and diffuse infiltration that render them almost incurable, leading to a grim prognosis. Recent comprehensive genomic profiling has greatly elucidated the molecular hallmarks of gliomas, including mutations in isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2), loss of chromosomes 1p and 19q (1p/19q), and epidermal growth factor receptor variant III (EGFRvIII). Detection of these molecular alterations is based on ex vivo analysis of surgically resected tissue specimens, which are sometimes inadequate for testing and/or do not capture the spatial heterogeneity of the neoplasm. Methods: We developed a method for noninvasive detection of radiogenomic markers of IDH in both lower-grade gliomas (WHO grade II and III tumors) and glioblastoma (WHO grade IV), of 1p/19q in IDH-mutant lower-grade gliomas, and of EGFRvIII in glioblastoma. Preoperative MRIs of 473 glioma patients from 3 of the studies participating in the ReSPOND consortium were collected (collection I: Hospital of the University of Pennsylvania [HUP, n = 248]; collection II: The Cancer Imaging Archive [TCIA, n = 192]; collection III: Ohio Brain Tumor Study [OBTS, n = 33]). The Neuro-Cancer Imaging Phenomics Toolkit (neuro-CaPTk), a modular platform for cancer imaging analytics and machine learning, was leveraged to extract histogram, shape, anatomical, and texture features from delineated tumor subregions and to integrate these features using a support vector machine to generate models predictive of IDH, 1p/19q, and EGFRvIII. The models were validated using 3 configurations: (1) 70-30% training-testing splits or 10-fold cross-validation within individual collections, (2) 70-30% training-testing splits within merged collections, and (3) training on one collection and testing on another.
Results: These models achieved classification accuracies of 86.74% (HUP), 85.45% (TCIA), and 75.15% (TCIA) in identifying EGFRvIII, IDH, and 1p/19q, respectively, in configuration 1. When applied to the combined data in configuration 2, the model yielded a classification success rate of 82.50% in predicting IDH mutation (HUP + TCIA + OBTS). When trained on the TCIA dataset, the model yielded a classification accuracy of 84.88% in predicting IDH in the HUP dataset. Conclusions: Using machine learning algorithms, high accuracy was achieved in the prediction of IDH, 1p/19q, and EGFRvIII mutation status. Neuro-CaPTk encompasses all the pipelines required to replicate these analyses in multi-institutional settings and could also be used for other radio(geno)mic analyses.
Affiliation(s)
- Saima Rathore
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Suyash Mohan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Chaitra Badve
- Department of Radiology, University Hospitals Cleveland, Cleveland, Ohio, USA
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Ashish Singh
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitrios Bounias
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Phuc Ngo
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Hamed Akbari
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Aimilia Gastounioti
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Mark Bergman
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Michel Bilello
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Russell T Shinohara
- Penn Statistics in Imaging and Visualization Center (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Paul Yushkevich
- Penn Image Computing and Science Lab (PICSL), University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Donald M O'Rourke
- Glioblastoma Translational Center of Excellence, Abramson Cancer Center, Philadelphia, Pennsylvania, USA
- Andrew E Sloan
- Case Comprehensive Cancer Center, Cleveland, Ohio, USA
- Department of Neurological Surgery, University Hospitals Seidman Cancer Center, Cleveland, Ohio, USA
- Department of Neurosurgery, Case Western Reserve University School of Medicine, Cleveland, Ohio, USA
- Despina Kontos
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- MacLean P Nasrallah
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jill S Barnholtz-Sloan
- Case Comprehensive Cancer Center, Cleveland, Ohio, USA
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, Ohio, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
12. Ridwan AR, Niaz MR, Wu Y, Qi X, Zhang S, Kontzialis M, Javierre-Petit C, Tazwar M, Bennett DA, Yang Y, Arfanakis K. Development and evaluation of a high performance T1-weighted brain template for use in studies on older adults. Hum Brain Mapp 2021; 42:1758-1776. PMID: 33449398; PMCID: PMC7978143; DOI: 10.1002/hbm.25327.
Abstract
The accuracy of template-based neuroimaging investigations depends on the template's image quality and representativeness of the individuals under study. Yet a thorough, quantitative investigation of how available standardized and study-specific T1-weighted templates perform in studies on older adults has not been conducted. The purpose of this work was to construct a high-quality standardized T1-weighted template specifically designed for the older adult brain, and systematically compare the new template to several other standardized and study-specific templates in terms of image quality, performance in spatial normalization of older adult data and detection of small inter-group morphometric differences, and representativeness of the older adult brain. The new template was constructed with state-of-the-art spatial normalization of high-quality data from 222 older adults. It was shown that the new template (a) exhibited high image sharpness, (b) provided higher inter-subject spatial normalization accuracy and (c) allowed detection of smaller inter-group morphometric differences compared to other standardized templates, (d) had similar performance to that of study-specific templates constructed with the same methodology, and (e) was highly representative of the older adult brain.
Affiliation(s)
- Abdur Raquib Ridwan
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Mohammad Rakeen Niaz
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Yingjuan Wu
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Xiaoxiao Qi
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Shengwei Zhang
- Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Marinos Kontzialis
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, Illinois, USA
- Carles Javierre-Petit
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Mahir Tazwar
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Yongyi Yang
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Konstantinos Arfanakis
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, Illinois, USA
- Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, Illinois, USA
13. MRI brain tumor medical images analysis using deep learning techniques: a systematic review. Health Technol 2021. DOI: 10.1007/s12553-020-00514-6.
14. Isselmou AEK, Xu G, Shuai Z, Saminu S, Javaid I, Ahmad IS. Brain Tumor identification by Convolution Neural Network with Fuzzy C-mean Model Using MR Brain Images. Int J Circuits Syst Signal Process 2021; 14:1096-1102. DOI: 10.46300/9106.2020.14.137.
Abstract
Medical image computing techniques are essential in helping doctors support their decisions when diagnosing patients. Because of the complexity of brain structure, we use MR brain images for their quality and high resolution. The objective of this article is to detect brain tumors using a convolution neural network combined with a fuzzy c-means model; the advantage of the proposed model is its ability to achieve excellent performance, with accuracy, sensitivity, specificity, overall Dice, and recall values better than previously published models. In addition, the model can identify brain tumors in different types of MR images. The proposed model obtained an accuracy of 98%.
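As a rough illustration of the fuzzy c-means half of such a hybrid pipeline, the standard FCM update equations can be sketched in a few lines of NumPy. This is a generic FCM, not the authors' implementation; the function name, parameters, and fuzzifier m = 2 are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns (centers, U), where U[i, j] is the
    membership of sample i in cluster j (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # normalize memberships
    for _ in range(n_iter):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # update rule: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = d[:, :, None] / d[:, None, :]    # ratio[i, j, k] = d_ij / d_ik
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, U
```

On brain MR data, X would hold per-voxel intensities or feature vectors; the CNN stage described in the abstract would then operate on the FCM-clustered regions.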
Affiliation(s)
- Abd El Kader Isselmou
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Zhang Shuai
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Sani Saminu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Imran Javaid
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Isah Salim Ahmad
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
15. Zhang L, Zhang J, Shen P, Zhu G, Li P, Lu X, Zhang H, Shah SA, Bennamoun M. Block Level Skip Connections Across Cascaded V-Net for Multi-Organ Segmentation. IEEE Transactions on Medical Imaging 2020; 39:2782-2793. PMID: 32091995; DOI: 10.1109/tmi.2020.2975347.
Abstract
Multi-organ segmentation is a challenging task due to label imbalance and structural differences between organs. In this work, we propose an efficient cascaded V-Net model that improves multi-organ segmentation by establishing dense Block Level Skip Connections (BLSC) across the cascaded V-Nets. Our model can take full advantage of features from the first-stage network and makes the cascaded structure more efficient. We also combine stacked small and large kernels with an inception-like structure to help our model learn more patterns, which produces superior results for multi-organ segmentation. In addition, some small organs are commonly occluded by large organs and have unclear boundaries with surrounding tissues, which makes them hard to segment. We therefore first locate the small organs through a multi-class network, crop them randomly together with the surrounding region, and then segment them with a single-class network. We evaluated our model on the SegTHOR 2019 challenge unseen testing set and the Multi-Atlas Labeling Beyond the Cranial Vault challenge validation set. Our model achieved average Dice score gains of 1.62 and 3.90 percentage points over traditional cascaded networks on these two datasets, respectively. For hard-to-segment small organs, such as the esophagus in the SegTHOR 2019 challenge, our technique achieved a gain of 5.63 percentage points in Dice score, and four organs in the Multi-Atlas Labeling Beyond the Cranial Vault challenge achieved an average Dice score gain of 5.27 percentage points.
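The core idea of block-level skip connections across cascaded stages can be shown with a deliberately tiny NumPy toy: every block in the second stage receives, additively, the output of the matching block in the first stage. The `conv_block` here is a scalar stand-in for a real V-Net convolution block; all names are illustrative, not from the paper.

```python
import numpy as np

def conv_block(x, w):
    """Stand-in for a V-Net convolution block (here just a scaled ReLU)."""
    return np.maximum(0.0, w * x)

def cascaded_with_blsc(x, stage1_weights, stage2_weights):
    """Two cascaded stages; each stage-2 block also receives the output of the
    corresponding stage-1 block via an additive block-level skip connection."""
    feats, h = [], x
    for w in stage1_weights:                     # stage 1: record block outputs
        h = conv_block(h, w)
        feats.append(h)
    h = x
    for w, f in zip(stage2_weights, feats):      # stage 2: reuse stage-1 features
        h = conv_block(h, w) + f
    return h
```

In the actual model these connections carry 3D feature maps between encoder/decoder blocks, letting the second stage refine rather than recompute first-stage representations.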
16. An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm. Computational and Mathematical Methods in Medicine 2020; 2020:6789306. PMID: 32733596; PMCID: PMC7376410; DOI: 10.1155/2020/6789306.
Abstract
Among currently proposed brain segmentation methods, brain tumor segmentation based on traditional image processing and machine learning falls short, so deep learning-based methods are widely used. Among these, convolutional network models segment the brain well, but deep convolutional networks suffer from a large number of parameters and substantial loss of information during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine algorithm (DCNN-F-SVM). The proposed brain tumor segmentation model has three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor marker space. In the second stage, the predicted labels obtained from training the deep convolutional neural network are input into an integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than the deep convolutional neural network and the integrated SVM classifier alone.
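The two-stage coupling described above (network predictions fed into an SVM alongside the original inputs) can be sketched with scikit-learn. An `MLPClassifier` stands in for the DCNN, and the function name and all hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier   # stand-in for the DCNN stage
from sklearn.svm import SVC

def dcnn_f_svm_sketch(X_train, y_train, X_test):
    """Stage 1: a network predicts labels; stage 2: those predictions are fed,
    together with the raw features, into an SVM (a loose DCNN-F-SVM analogue)."""
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    # append the stage-1 predicted label as an extra input feature for the SVM
    aug_train = np.column_stack([X_train, net.predict(X_train)])
    aug_test = np.column_stack([X_test, net.predict(X_test)])
    svm = SVC(kernel="rbf")
    svm.fit(aug_train, y_train)
    return svm.predict(aug_test)
```

In the paper the inputs are image voxels/patches rather than tabular features, but the series coupling of the two classifiers follows the same pattern.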
17. Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. PMID: 32501772; PMCID: PMC7520881; DOI: 10.1146/annurev-bioeng-062117-121105.
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) these models' integration with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
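The biophysical growth models surveyed in this line of work are typically reaction-diffusion (Fisher-KPP) equations, dc/dt = D * c_xx + rho * c * (1 - c), with c the normalized tumor cell density, D the diffusivity, and rho the proliferation rate. A minimal 1D explicit-Euler step (grid, parameters, and boundary handling all illustrative, not any specific published solver) might look like:

```python
import numpy as np

def fisher_kpp_step(c, D, rho, dx, dt):
    """One explicit-Euler step of dc/dt = D*c_xx + rho*c*(1-c) on a 1D grid
    with zero-flux (Neumann) boundaries; c is the normalized cell density."""
    c_pad = np.pad(c, 1, mode="edge")                    # duplicate edge values
    lap = (c_pad[2:] - 2.0 * c + c_pad[:-2]) / dx**2     # discrete Laplacian
    return c + dt * (D * lap + rho * c * (1.0 - c))
```

Explicit stepping is only stable for dt <= dx**2 / (2*D); production codes use implicit or operator-splitting schemes and solve the 3D problem over brain anatomy.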
Affiliation(s)
- Andreas Mang
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Spyridon Bakas
- Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Shashank Subramanian
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA); Department of Radiology; and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- George Biros
- Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
18. Brain Tumor Detection by Using Stacked Autoencoders in Deep Learning. J Med Syst 2019; 44:32. PMID: 31848728; DOI: 10.1007/s10916-019-1483-2.
Abstract
Brain tumor detection is a difficult task because of variations in tumor shape, size, and appearance. In this manuscript, a deep learning model is deployed to predict input slices as tumor (unhealthy) or non-tumor (healthy). A high-pass-filtered image is used to highlight the inhomogeneity field effect of the MR slices and is fused with the input slices; the median filter is then applied to the fused slices. The resulting slices have improved quality, with smoothed and highlighted edges. After that, based on slice intensity, a 4-connected seed-growing algorithm is applied, in which an optimal threshold clusters similar pixels from the input slices. The segmented slices are then supplied to the proposed fine-tuned two-layer stacked sparse autoencoder (SSAE) model, whose hyperparameters were selected after extensive experiments: 200 hidden units in the first layer and 400 in the second. Testing is performed on the softmax layer to predict whether an image contains a tumor. The suggested model is trained and checked on the BRATS datasets, i.e., 2012 (challenge and synthetic), 2013, 2013 Leaderboard, 2014, and 2015, and is evaluated with a number of performance metrics that demonstrate its improved performance.
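The 4-connected seed-growing step described above is, in essence, a breadth-first flood from a seed pixel that admits neighbours whose intensity is close to the seed's. A minimal sketch (the fixed threshold here is illustrative, not the paper's optimally selected one):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, thresh):
    """Grow a 4-connected region from `seed` (row, col), accepting pixels whose
    absolute intensity difference from the seed value is <= thresh."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):  # 4-neighbours
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(img[nr, nc] - seed_val) <= thresh):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask
```

The returned boolean mask marks the segmented region; in the pipeline above, such masks feed the SSAE classifier.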
19. Rathore S, Akbari H, Bakas S, Pisapia JM, Shukla G, Rudie JD, Da X, Davuluri RV, Dahmane N, O'Rourke DM, Davatzikos C. Multivariate Analysis of Preoperative Magnetic Resonance Imaging Reveals Transcriptomic Classification of de novo Glioblastoma Patients. Front Comput Neurosci 2019; 13:81. PMID: 31920606; PMCID: PMC6923885; DOI: 10.3389/fncom.2019.00081.
Abstract
Glioblastoma, the most frequent primary malignant brain neoplasm, is genetically diverse and classified into four transcriptomic subtypes, i.e., classical, mesenchymal, proneural, and neural. Currently, detection of transcriptomic subtype is based on ex vivo analysis of tissue that does not capture spatial tumor heterogeneity. In view of accumulating evidence of in vivo imaging signatures summarizing molecular features of cancer, this study seeks robust non-invasive radiographic markers of transcriptomic classification of glioblastoma, based solely on routine clinically-acquired imaging sequences. A pre-operative retrospective cohort of 112 pathology-proven de novo glioblastoma patients with multi-parametric MRI (T1, T1-Gd, T2, T2-FLAIR), collected from the Hospital of the University of Pennsylvania, was included. Following tumor segmentation into distinct radiographic sub-regions, diverse imaging features were extracted and support vector machines were employed to multivariately integrate these features and derive an imaging signature of transcriptomic subtype. Extracted features included intensity distributions, volume, morphology, statistics, tumors' anatomical location, and texture descriptors for each tumor sub-region. The derived signature was evaluated against the transcriptomic subtype of surgically-resected tissue specimens, using 5-fold cross-validation and receiver-operating-characteristic analysis. The proposed model was 71% accurate in distinguishing among the four transcriptomic subtypes. The accuracy (sensitivity/specificity) for distinguishing each subtype (classical, mesenchymal, proneural, neural) from the rest was 88.4% (71.4/92.3), 75.9% (83.9/72.8), 82.1% (73.1/84.9), and 75.9% (79.4/74.4), respectively. The findings were also replicated in The Cancer Genome Atlas glioblastoma dataset.
The imaging signature obtained for the classical subtype was dominated by associations with features related to edge sharpness, whereas the mesenchymal subtype showed a more pronounced presence of higher T2 and T2-FLAIR signal in edema and a higher volume of enhancing tumor and edema. The proneural and neural subtypes were characterized by lower T1-Gd signal in enhancing tumor and higher T2-FLAIR signal in edema, respectively. Our results indicate that quantitative multivariate analysis of features extracted from clinically-acquired MRI may provide a radiographic biomarker of the transcriptomic profile of glioblastoma. Importantly, our findings can be influential in surgical decision-making, treatment planning, and assessment of inoperable tumors.
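The evaluation protocol used here (an SVM integrating radiomic features, scored by 5-fold cross-validation) is, in outline, a standard scikit-learn pipeline. The sketch below uses synthetic placeholder features and a linear kernel; all names and settings are illustrative assumptions rather than the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def signature_cv_accuracy(features, labels, seed=0):
    """5-fold cross-validated accuracy of an SVM 'imaging signature';
    features is (n_patients, n_radiomic_features), labels the class per patient."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    return cross_val_score(clf, features, labels, cv=cv).mean()
```

Per-subtype sensitivity/specificity, as reported above, would come from one-vs-rest confusion matrices per fold rather than plain accuracy.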
Affiliation(s)
- Saima Rathore
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Hamed Akbari
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Jared M Pisapia
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Division of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Gaurav Shukla
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Christiana Care Health System, Philadelphia, PA, United States
- Jeffrey D Rudie
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Xiao Da
- Brigham and Women's Hospital, Boston, MA, United States
- Ramana V Davuluri
- Department of Biomedical Informatics, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- Nadia Dahmane
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States
- Donald M O'Rourke
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
20. Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. PMID: 30952038; PMCID: PMC6554451; DOI: 10.1016/j.media.2019.03.005.
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
Affiliation(s)
- Mikael Agn
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Per Munck Af Rosenschöld
- Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
- Michael J Lundemann
- Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
- Laura Mancini
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Anastasia Papadaki
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Steffi Thust
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
- Ian Law
- Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
- Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
21. Scheufele K, Mang A, Gholami A, Davatzikos C, Biros G. Coupling brain-tumor biophysical models and diffeomorphic image registration. Computer Methods in Applied Mechanics and Engineering 2019; 347:533-567. PMID: 31857736; PMCID: PMC6922029; DOI: 10.1016/j.cma.2018.12.008.
Abstract
We present SIBIA (Scalable Integrated Biophysics-based Image Analysis), a framework for joint image registration and biophysical inversion and we apply it to analyze MR images of glioblastomas (primary brain tumors). We have two applications in mind. The first one is normal-to-abnormal image registration in the presence of tumor-induced topology differences. The second one is biophysical inversion based on single-time patient data. The underlying optimization problem is highly non-linear and non-convex and has not been solved before with a gradient-based approach. Given the segmentation of a normal brain MRI and the segmentation of a cancer patient MRI, we determine tumor growth parameters and a registration map so that if we "grow a tumor" (using our tumor model) in the normal brain and then register it to the patient image, then the registration mismatch is as small as possible. This "coupled problem" two-way couples the biophysical inversion and the registration problem. In the image registration step we solve a large-deformation diffeomorphic registration problem parameterized by an Eulerian velocity field. In the biophysical inversion step we estimate parameters in a reaction-diffusion tumor growth model that is formulated as a partial differential equation (PDE). In SIBIA, we couple these two sub-components in an iterative manner. We first presented the components of SIBIA in "Gholami et al., Framework for Scalable Biophysics-based Image Analysis, IEEE/ACM Proceedings of the SC2017", in which we derived parallel distributed memory algorithms and software modules for the decoupled registration and biophysical inverse problems. In this paper, our contributions are the introduction of a PDE-constrained optimization formulation of the coupled problem, and the derivation of a Picard iterative solution scheme. We perform extensive tests to experimentally assess the performance of our method on synthetic and clinical datasets. 
We demonstrate the convergence of the SIBIA optimization solver in different usage scenarios. We demonstrate that using SIBIA, we can accurately solve the coupled problem in three dimensions (256³ resolution) in a few minutes using 11 dual-x86 nodes.
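Stripped of the PDE machinery, the Picard scheme mentioned above is fixed-point iteration: alternate the two sub-solvers, treating the combined update as an operator G, until successive iterates agree. A generic sketch (G below is a toy operator, not SIBIA's coupled registration/inversion solve):

```python
def picard_iterate(G, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration: repeatedly apply G until successive
    iterates agree to within tol. Returns (x, n_iterations_used)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = G(x)
        if abs(x_new - x) <= tol:
            return x_new, k
        x = x_new
    return x, max_iter
```

Convergence is guaranteed when G is a contraction; in SIBIA, each "application of G" is one registration solve followed by one biophysical-inversion solve.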
Affiliation(s)
- Klaudius Scheufele
- University of Stuttgart, IPVS, Universitätstraße 38, 70569 Stuttgart, Germany
- Andreas Mang
- University of Houston, Department of Mathematics, 3551 Cullen Blvd., Houston, TX 77204-3008, USA
- Amir Gholami
- University of California Berkeley, EECS, Berkeley, CA 94720-1776, USA
- Christos Davatzikos
- Department of Radiology, University of Pennsylvania School of Medicine, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
- George Biros
- University of Texas, ICES, 201 East 24th St, Austin, TX 78712-1229, USA
- Miriam Mehl
- University of Stuttgart, IPVS, Universitätstraße 38, 70569 Stuttgart, Germany
22. Akbari H, Bakas S, Pisapia JM, Nasrallah MP, Rozycki M, Martinez-Lage M, Morrissette JJD, Dahmane N, O'Rourke DM, Davatzikos C. In vivo evaluation of EGFRvIII mutation in primary glioblastoma patients via complex multiparametric MRI signature. Neuro Oncol 2018; 20:1068-1079. PMID: 29617843; PMCID: PMC6280148; DOI: 10.1093/neuonc/noy033.
Abstract
Background: Epidermal growth factor receptor variant III (EGFRvIII) is a driver mutation and potential therapeutic target in glioblastoma. Non-invasive in vivo EGFRvIII determination, using clinically acquired multiparametric MRI sequences, could assist in assessing spatial heterogeneity related to EGFRvIII, currently not captured via single-specimen analyses. We hypothesize that integration of subtle, yet distinctive, quantitative imaging/radiomic patterns using machine learning may lead to non-invasively determining molecular characteristics, and particularly the EGFRvIII mutation.
Methods: We integrated diverse imaging features, including the tumor's spatial distribution pattern, via support vector machines, to construct an imaging signature of EGFRvIII. This signature was evaluated in independent discovery (n = 75) and replication (n = 54) cohorts of de novo glioblastoma, and compared with the EGFRvIII status obtained through an assay based on next-generation sequencing.
Results: The cross-validated accuracy of the EGFRvIII signature in classifying the mutation status in individual patients of the independent discovery and replication cohorts was 85.3% (specificity = 86.3%, sensitivity = 83.3%, area under the curve [AUC] = 0.85) and 87% (specificity = 90%, sensitivity = 78.6%, AUC = 0.86), respectively. The signature was consistent with EGFRvIII+ tumors having increased neovascularization and cell density, as well as a distinctive spatial pattern involving relatively more frontal and parietal regions compared with EGFRvIII- tumors.
Conclusions: An imaging signature of EGFRvIII was found, revealing a complex, yet distinct macroscopic glioblastoma phenotype. By non-invasively capturing the tumor in its entirety, the proposed methodology can assist in evaluating the tumor's spatial heterogeneity, hence overcoming common spatial sampling limitations of tissue-based analyses. This signature can preoperatively stratify patients for EGFRvIII-targeted therapies, and potentially monitor dynamic mutational changes during treatment.
Affiliation(s)
- Hamed Akbari
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Jared M Pisapia
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- MacLean P Nasrallah
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Martin Rozycki
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Maria Martinez-Lage
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Jennifer J D Morrissette
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Nadia Dahmane
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Donald M O'Rourke
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
23. Radiomic MRI signature reveals three distinct subtypes of glioblastoma with different clinical and molecular characteristics, offering prognostic value beyond IDH1. Sci Rep 2018; 8:5087. PMID: 29572492; PMCID: PMC5865162; DOI: 10.1038/s41598-018-22739-2.
Abstract
The remarkable heterogeneity of glioblastoma, across patients and over time, is one of the main challenges in precision diagnostics and treatment planning. Non-invasive in vivo characterization of this heterogeneity using imaging could assist in understanding disease subtypes, as well as in risk-stratification and treatment planning of glioblastoma. The current study leveraged advanced imaging analytics and radiomic approaches applied to multi-parametric MRI of de novo glioblastoma patients (n = 208 discovery, n = 53 replication), and discovered three distinct and reproducible imaging subtypes of glioblastoma, with differential clinical outcome and underlying molecular characteristics, including isocitrate dehydrogenase-1 (IDH1), O6-methylguanine–DNA methyltransferase, epidermal growth factor receptor variant III (EGFRvIII), and transcriptomic subtype composition. The subtypes provided risk-stratification substantially beyond that provided by WHO classifications. Within IDH1-wildtype tumors, our subtypes revealed different survival (p < 0.001), thereby highlighting the synergistic consideration of molecular and imaging measures for prognostication. Moreover, the imaging characteristics suggest that subtype-specific treatment of peritumoral infiltrated brain tissue might be more effective than current uniform standard-of-care. Finally, our analysis found subtype-specific radiogenomic signatures of EGFRvIII-mutated tumors. The identified subtypes and their clinical and molecular correlates provide an in vivo portrait of phenotypic heterogeneity in glioblastoma, which points to the need for precision diagnostics and personalized treatment.
24. Rathore S, Akbari H, Doshi J, Shukla G, Rozycki M, Bilello M, Lustig R, Davatzikos C. Radiomic signature of infiltration in peritumoral edema predicts subsequent recurrence in glioblastoma: implications for personalized radiotherapy planning. J Med Imaging (Bellingham) 2018. PMID: 29531967; DOI: 10.1117/1.jmi.5.2.021219.
Abstract
Standard surgical resection of glioblastoma, mainly guided by the enhancement on postcontrast T1-weighted magnetic resonance imaging (MRI), disregards infiltrating tumor within the peritumoral edema region (ED). Subsequent radiotherapy typically delivers uniform radiation to peritumoral FLAIR-hyperintense regions, without attempting to target areas likely to be infiltrated more heavily. Noninvasive in vivo delineation of the areas of tumor infiltration and prediction of early recurrence in peritumoral ED could assist in targeted intensification of local therapies, thereby potentially delaying recurrence and prolonging survival. This paper presents a method for estimating peritumoral edema infiltration using radiomic signatures determined via machine learning methods, and tests it on 90 patients with de novo glioblastoma. The generalizability of the proposed predictive model was evaluated via cross-validation in a discovery cohort ([Formula: see text]) and was subsequently evaluated in a replication cohort ([Formula: see text]). Spatial maps representing the likelihood of tumor infiltration and future early recurrence were compared with regions of recurrence on postresection follow-up studies with pathology confirmation. The cross-validated accuracy of our predictive infiltration model on the discovery and replication cohorts was 87.51% (odds ratio = 10.22, sensitivity = 80.65, and specificity = 87.63) and 89.54% (odds ratio = 13.66, sensitivity = 97.06, and specificity = 76.73), respectively. The radiomic signature of the recurrent tumor region revealed higher vascularity and cellularity when compared with the nonrecurrent region. The proposed model shows evidence that multiparametric pattern analysis from clinical MRI sequences can assist in in vivo estimation of the spatial extent and pattern of tumor recurrence in peritumoral edema, which may guide supratotal resection and/or intensification of postoperative radiation therapy.
Affiliation(s)
- Saima Rathore
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Hamed Akbari
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Jimit Doshi
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Gaurav Shukla
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; Thomas Jefferson University, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Martin Rozycki
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Michel Bilello
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Robert Lustig
- University of Pennsylvania, Department of Radiation Oncology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
- Christos Davatzikos
- University of Pennsylvania, Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, Philadelphia, Pennsylvania, United States; University of Pennsylvania, Department of Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States
|
25
|
Vidyaratne L, Alam M, Shboul Z, Iftekharuddin KM. Deep Learning and Texture-Based Semantic Label Fusion for Brain Tumor Segmentation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 2018. [PMID: 29551853 DOI: 10.1117/12.2292930] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation approaches, a texture-based hand-crafted method and a deep learning method, to obtain robust tumor segmentation. We evaluate the proposed method on the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed fusion improves segmentation by alleviating the inherent weaknesses of each approach: extensive false positives in the texture-based method and false tumor tissue classification in the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. The substantial improvement in brain tumor segmentation performance reported in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
Affiliation(s)
- L Vidyaratne
- Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529
- M Alam
- Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529
- Z Shboul
- Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529
- K M Iftekharuddin
- Vision Lab, Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA 23529

26
Zhuge Y, Krauze AV, Ning H, Cheng JY, Arora BC, Camphausen K, Miller RW. Brain tumor segmentation using holistically nested neural networks in MRI images. Med Phys 2017; 44:5234-5243. [PMID: 28736864 PMCID: PMC5646222 DOI: 10.1002/mp.12481] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2017] [Revised: 06/23/2017] [Accepted: 07/10/2017] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Gliomas are rapidly progressive, neurologically devastating, largely fatal brain tumors. Magnetic resonance imaging (MRI) is widely used in the diagnosis and management of gliomas in clinical practice, and is the standard imaging modality for delineating the brain tumor target in treatment planning for radiation therapy. Despite more than 20 years of research and development, computational brain tumor segmentation in MRI images remains a challenging task. We present a novel automatic segmentation method based on holistically nested neural networks that can be employed for brain tumor segmentation of MRI images. METHODS Two preprocessing techniques were applied to the MRI images. The N4ITK method was employed to correct bias field distortion, and a novel landmark-based intensity normalization method was developed so that tissue types have a similar intensity scale across subjects imaged with the same MRI protocol. Holistically nested neural networks (HNN), which extend convolutional neural networks (CNN) with deep supervision through an additional weighted-fusion output layer, were trained to learn the multiscale and multilevel hierarchical appearance representation of the brain tumor in MRI images, and were subsequently applied to produce a prediction map of the brain tumor on test images. Finally, the brain tumor was obtained through optimum thresholding of the prediction map. RESULTS The proposed method was evaluated on the Multimodal Brain Tumor Image Segmentation (BRATS) Benchmark 2013 training datasets and on clinical data from our institute. A Dice similarity coefficient (DSC) and sensitivity of 0.78 and 0.81 were achieved on 20 BRATS 2013 training datasets with high-grade gliomas (HGG), based on two-fold cross-validation. The HNN model built on the BRATS 2013 training data was applied to ten clinical datasets with HGG from a locally developed database, achieving a DSC and sensitivity of 0.83 and 0.85. A quantitative comparison indicated that the proposed method outperforms the popular fully convolutional network (FCN) method. In terms of efficiency, the proposed method took around 10 h to train with 50,000 iterations, and approximately 30 s to test a typical MRI image in the BRATS 2013 dataset with a size of 160 × 216 × 176, using a DELL PRECISION T7400 workstation with an NVIDIA Tesla K20c GPU. CONCLUSIONS An effective brain tumor segmentation method for MRI images based on an HNN has been developed. Its high accuracy and efficiency make it practical for brain tumor segmentation, and it may play a crucial role in both brain tumor diagnostic analysis and in the treatment planning of radiation therapy.
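Several entries in this list report Dice similarity coefficients and sensitivity. For reference, a minimal NumPy sketch of these two overlap metrics on binary masks (function names are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """True-positive rate: fraction of ground-truth voxels recovered."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / truth.sum()

truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True  # 4 voxels
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:4] = True   # 6 voxels, 4 overlap
print(dice_coefficient(pred, truth))  # 2*4/(6+4) = 0.8
print(sensitivity(pred, truth))       # 4/4 = 1.0
```

The same formulas apply unchanged to 3D volumes, since the masks are flattened implicitly by the reductions.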
Affiliation(s)
- Ying Zhuge
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Andra V. Krauze
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Holly Ning
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Jason Y. Cheng
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Barbara C. Arora
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Kevin Camphausen
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Robert W. Miller
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA

27
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci Data 2017; 4:170117. [PMID: 28872634 PMCID: PMC5685212 DOI: 10.1038/sdata.2017.117] [Citation(s) in RCA: 716] [Impact Index Per Article: 102.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Accepted: 07/14/2017] [Indexed: 01/04/2023] Open
Abstract
Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method.
28
Sauwen N, Acou M, Sima DM, Veraart J, Maes F, Himmelreich U, Achten E, Huffel SV. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization. BMC Med Imaging 2017; 17:29. [PMID: 28472943 PMCID: PMC5418702 DOI: 10.1186/s12880-017-0198-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2016] [Accepted: 04/11/2017] [Indexed: 12/19/2022] Open
Abstract
Background Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task, and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. Methods We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge of the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when each patient's dataset is repeatedly analyzed with a different set of random seeding points. Results Using L1-regularized semi-automated NMF segmentation, mean Dice scores of 65%, 74% and 80% are found for active tumor, the tumor core and the whole tumor region, with mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm, respectively. Lower Dice scores and higher Hausdorff distances are found without L1-regularization and when only conventional MRI data are considered. Conclusions Based on the mean Dice scores and Hausdorff distances, the segmentation results are competitive with the state of the art in the literature. Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
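The entry above adds an L1 penalty to the NMF objective to encourage sparse tissue abundance maps. A minimal NumPy sketch of multiplicative-update NMF with an L1 term on the abundance matrix H (a generic illustration of the technique, not the specific regularized objective or voxel-based initialization of Sauwen et al.):

```python
import numpy as np

def nmf_l1(X, rank, l1=0.01, iters=300, seed=0):
    """Factor X ≈ W @ H with W, H >= 0, minimizing
    ||X - WH||_F^2 + l1 * sum(H) via multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-9
    for _ in range(iters):
        # The L1 term on H adds a constant l1 to the denominator of H's update
        H *= (W.T @ X) / (W.T @ W @ H + l1 + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: two non-negative "tissue sources" mixed linearly
rng = np.random.default_rng(1)
X = rng.random((30, 2)) @ rng.random((2, 50))
W, H = nmf_l1(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

In the segmentation setting, the columns of X would hold per-voxel MP-MRI feature vectors, and each row of H would be read out as a tissue abundance map.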
Affiliation(s)
- Nicolas Sauwen
- Department of Electrical Engineering (ESAT), STADIUS Centre for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven, Kasteelpark Arenberg, Leuven, Belgium; imec, Kapeldreef 75, Leuven, 3001, Belgium
- Marjan Acou
- Department of Radiology, Ghent University Hospital, De Pintelaan 185, Ghent, 9000, Belgium
- Diana M Sima
- Department of Electrical Engineering (ESAT), STADIUS Centre for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven, Kasteelpark Arenberg, Leuven, Belgium; imec, Kapeldreef 75, Leuven, 3001, Belgium
- Jelle Veraart
- Department of Physics, iMinds Vision Lab, University of Antwerp, Edegemsesteenweg 200-240, Antwerp, 2610, Belgium
- Frederik Maes
- Department of Electrical Engineering (ESAT), PSI Centre for Processing Speech and Images, KU Leuven, Kasteelpark Arenberg 10, Leuven, 3001, Belgium
- Uwe Himmelreich
- Department of Imaging and Pathology, Biomedical MRI/MoSAIC, KU Leuven, Herestraat 49, Leuven, 3000, Belgium
- Eric Achten
- Department of Radiology, Ghent University Hospital, De Pintelaan 185, Ghent, 9000, Belgium
- Sabine Van Huffel
- Department of Electrical Engineering (ESAT), STADIUS Centre for Dynamical Systems, Signal Processing and Data Analytics, KU Leuven, Kasteelpark Arenberg, Leuven, Belgium; imec, Kapeldreef 75, Leuven, 3001, Belgium

29
Pei L, Reza SMS, Li W, Davatzikos C, Iftekharuddin KM. Improved brain tumor segmentation by utilizing tumor growth model in longitudinal brain MRI. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2017; 10134:101342L. [PMID: 28642629 PMCID: PMC5476314 DOI: 10.1117/12.2254034] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns generated from tumor growth modeling. To model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling yields the cell density distribution, which indicates the predicted tumor tissue locations in the brain over time. The density patterns are then used as novel features, along with texture (such as fractal and multifractal Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method on about one hundred longitudinal MRI scans from five patients in the public BRATS 2015 data set, validated against the ground truth. The results show significant improvement in complete tumor segmentation, confirmed by ANOVA analysis for the five patients in longitudinal MR images.
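Reaction-diffusion tumor growth models are commonly written in the Fisher-Kolmogorov form dc/dt = D ∇²c + ρ c(1 − c), where c is normalized cell density, D the diffusion coefficient, and ρ the proliferation rate. A 1-D explicit finite-difference sketch of that equation (the paper above uses a Lattice-Boltzmann solver; this simpler scheme, with arbitrary parameter values, is only illustrative):

```python
import numpy as np

def grow_tumor_1d(n=100, steps=500, D=0.1, rho=0.05, dt=0.1, dx=1.0):
    """Explicit solve of dc/dt = D d2c/dx2 + rho*c*(1-c), zero-flux ends.
    Stable here since dt*D/dx**2 = 0.01 <= 0.5."""
    c = np.zeros(n)
    c[n // 2] = 0.5  # initial seed of tumor cell density at the center
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = (c[1] - c[0]) / dx**2       # zero-flux boundary
        lap[-1] = (c[-2] - c[-1]) / dx**2
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c

c = grow_tumor_1d()
print(float(c.max()), int((c > 0.01).sum()))  # density grows and spreads outward
```

Snapshots of c at successive times are the kind of density pattern that could then be fed to a segmenter as an extra feature channel.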
Affiliation(s)
- Linmin Pei
- Vision Lab, Electrical & Computer Engineering, Old Dominion University
- Syed M S Reza
- Vision Lab, Electrical & Computer Engineering, Old Dominion University
- Wei Li
- Department of Mathematics & Statistics, Old Dominion University

30
Shen H, Wang R, Zhang J, McKenna SJ. Boundary-Aware Fully Convolutional Network for Brain Tumor Segmentation. LECTURE NOTES IN COMPUTER SCIENCE 2017. [DOI: 10.1007/978-3-319-66185-8_49] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
31
Korfiatis P, Kline TL, Erickson BJ. Automated Segmentation of Hyperintense Regions in FLAIR MRI Using Deep Learning. Tomography 2016; 2:334-340. [PMID: 28066806 PMCID: PMC5215737 DOI: 10.18383/j.tom.2016.00166] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
We present a deep convolutional neural network application based on autoencoders aimed at segmentation of increased signal regions in fluid-attenuated inversion recovery magnetic resonance imaging images. The convolutional autoencoders were trained on the publicly available Brain Tumor Image Segmentation Benchmark (BRATS) data set, and the accuracy was evaluated on a data set where 3 expert segmentations were available. The simultaneous truth and performance level estimation (STAPLE) algorithm was used to provide the ground truth for comparison, and Dice coefficient, Jaccard coefficient, true positive fraction, and false negative fraction were calculated. The proposed technique was within the interobserver variability with respect to Dice, Jaccard, and true positive fraction. The developed method can be used to produce automatic segmentations of tumor regions corresponding to signal-increased fluid-attenuated inversion recovery regions.
32
Zeng K, Bakas S, Sotiras A, Akbari H, Rozycki M, Rathore S, Pati S, Davatzikos C. Segmentation of Gliomas in Pre-operative and Post-operative Multimodal Magnetic Resonance Imaging Volumes Based on a Hybrid Generative-Discriminative Framework. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2016; 10154:184-194. [PMID: 28725878 PMCID: PMC5512606 DOI: 10.1007/978-3-319-55524-9_18] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
We present an approach for segmenting both low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed framework is an extension of our previous work [6,7], with an additional component for segmenting post-operative scans. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative model based on a joint segmentation-registration framework is used to segment the brain scans into cancerous and healthy tissues. Secondly, a gradient boosting classification scheme is used to refine tumor segmentation based on information from multiple patients. We evaluated our approach in 218 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2016 challenge and report promising results. During the testing phase, the proposed approach was ranked among the top performing methods, after being additionally evaluated in 191 unseen cases.
Affiliation(s)
- Ke Zeng
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Spyridon Bakas
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Aristeidis Sotiras
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Hamed Akbari
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Martin Rozycki
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Saima Rathore
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sarthak Pati
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Section of Biomedical Image Analysis, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA

33
Pereira S, Pinto A, Alves V, Silva CA. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1240-1251. [PMID: 26960222 DOI: 10.1109/tmi.2016.2538465] [Citation(s) in RCA: 845] [Impact Index Per Article: 105.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage in improving the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. Automatic and reliable segmentation methods are therefore required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the fewer weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), simultaneously obtaining first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) on the Challenge data set, as well as overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
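The small-kernel argument in the abstract above can be checked with simple arithmetic: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights. A quick sketch (the channel width is an arbitrary example, not a value from the paper):

```python
def conv_weights(k, c_in, c_out):
    """Weight count of one k x k convolution layer (bias terms ignored)."""
    return k * k * c_in * c_out

C = 64  # example channel width, kept constant across layers
stacked_3x3 = 2 * conv_weights(3, C, C)  # two 3x3 layers: 5x5 receptive field
single_5x5 = conv_weights(5, C, C)       # one 5x5 layer:  5x5 receptive field
print(stacked_3x3, single_5x5)  # 73728 102400
```

The stacked variant uses 18/25 of the weights while also adding a nonlinearity between the two layers, which is the "deeper architecture" benefit the abstract refers to.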
34
Cordier N, Delingette H, Ayache N. A Patch-Based Approach for the Segmentation of Pathologies: Application to Glioma Labelling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1066-1076. [PMID: 26685225 DOI: 10.1109/tmi.2015.2508150] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this paper, we describe a novel and generic approach to fully automatic segmentation of brain tumors using multi-atlas patch-based voting techniques. In addition to avoiding the local search window assumption, the conventional patch-based framework is enhanced through several simple procedures: improving the training dataset in terms of both label purity and intensity statistics, augmenting features to implicitly guide the nearest-neighbor search, using multi-scale patches, enforcing invariance to cube isometries, and stratifying the votes with respect to cases and labels. A probabilistic model automatically delineates regions of interest enclosing high-probability tumor volumes, which allows the algorithm to achieve a highly competitive running time with minimal processing power and resources. The method was evaluated on the Multimodal Brain Tumor Image Segmentation challenge datasets. State-of-the-art results are achieved with a limited learning stage, thus restricting the risk of overfitting. Moreover, segmentation smoothness does not require any post-processing.
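At the core of patch-based voting is labeling each target patch by comparing it against labeled atlas patches. A minimal NumPy sketch of nearest-neighbor patch label voting (a bare-bones illustration only; the paper above layers multi-scale patches, isometry invariance, and vote stratification on top of this idea):

```python
import numpy as np

def patch_vote_label(target_patch, atlas_patches, atlas_labels, k=3):
    """Label a patch by majority vote over its k nearest atlas patches (L2)."""
    d = np.linalg.norm(atlas_patches - target_patch.ravel(), axis=1)
    nearest = np.argsort(d)[:k]
    return np.bincount(atlas_labels[nearest]).argmax()

rng = np.random.default_rng(0)
# Toy atlas of flattened 3x3 patches: label 0 = dark tissue, label 1 = bright
dark = rng.normal(0.2, 0.05, (20, 9))
bright = rng.normal(0.8, 0.05, (20, 9))
atlas_patches = np.vstack([dark, bright])
atlas_labels = np.array([0] * 20 + [1] * 20)

target = np.full((3, 3), 0.75)  # a bright 3x3 target patch
print(patch_vote_label(target, atlas_patches, atlas_labels))  # → 1
```

A full segmenter would slide this over every voxel and aggregate overlapping votes, which is where the training-set curation and vote stratification described above pay off.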
35
GLISTRboost: Combining Multimodal MRI Segmentation, Registration, and Biophysical Tumor Growth Modeling with Gradient Boosting Machines for Glioma Segmentation. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2016; 9556:144-155. [PMID: 28725877 DOI: 10.1007/978-3-319-30858-6_1] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
36
Deep Convolutional Neural Networks for the Segmentation of Gliomas in Multi-sequence MRI. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-30858-6_12] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
37
Agn M, Puonti O, Rosenschöld PMA, Law I, Van Leemput K. Brain Tumor Segmentation Using a Generative Model with an RBM Prior on Tumor Shape. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-30858-6_15] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]