1
Seshimo H, Rashed EA. Segmentation of Low-Grade Brain Tumors Using Mutual Attention Multimodal MRI. Sensors (Basel) 2024; 24:7576. [PMID: 39686112] [DOI: 10.3390/s24237576]
Abstract
Early detection and precise characterization of brain tumors play a crucial role in improving patient outcomes and extending survival rates. Among neuroimaging modalities, magnetic resonance imaging (MRI) is the gold standard for brain tumor diagnostics due to its ability to produce high-contrast images across a variety of sequences, each highlighting distinct tissue characteristics. This study focuses on enabling multimodal MRI sequences to advance the automatic segmentation of low-grade astrocytomas, a challenging task due to their diffuse and irregular growth patterns. A novel mutual-attention deep learning framework is proposed, which integrates complementary information from multiple MRI sequences, including T2-weighted and fluid-attenuated inversion recovery (FLAIR) sequences, to enhance the segmentation accuracy. Unlike conventional segmentation models, which treat each modality independently or simply concatenate them, our model introduces mutual attention mechanisms. This allows the network to dynamically focus on salient features across modalities by jointly learning interdependencies between imaging sequences, leading to more precise boundary delineations even in regions with subtle tumor signals. The proposed method is validated using the UCSF-PDGM dataset, which consists of 35 astrocytoma cases, presenting a realistic and clinically challenging dataset. The results demonstrate that T2w/FLAIR modalities contribute most significantly to the segmentation performance. The mutual-attention model achieves an average Dice coefficient of 0.87. This study provides an innovative pathway toward improving segmentation of low-grade tumors by enabling context-aware fusion across imaging sequences. Furthermore, the study showcases the clinical relevance of integrating AI with multimodal MRI, potentially improving non-invasive tumor characterization and guiding future research in radiological diagnostics.
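The mutual-attention idea described above can be illustrated with a minimal sketch of bidirectional cross-attention between two MRI sequences in PyTorch; the module name, layer sizes, and token layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of mutual (bidirectional cross-) attention between features
# from two MRI modalities, e.g. T2w and FLAIR. Hypothetical sizes; not the
# implementation used in the cited paper.
import torch
import torch.nn as nn


class MutualAttentionFusion(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # One cross-attention block per direction: T2w -> FLAIR and FLAIR -> T2w.
        self.t2_to_flair = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.flair_to_t2 = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * channels, channels)

    def forward(self, feat_t2, feat_flair):
        # feat_*: (batch, height*width, channels) token sequences from each encoder.
        t2_attended, _ = self.t2_to_flair(query=feat_t2, key=feat_flair, value=feat_flair)
        flair_attended, _ = self.flair_to_t2(query=feat_flair, key=feat_t2, value=feat_t2)
        # Concatenate the mutually attended features and project back.
        return self.fuse(torch.cat([t2_attended, flair_attended], dim=-1))


if __name__ == "__main__":
    fusion = MutualAttentionFusion(channels=64)
    t2 = torch.randn(2, 32 * 32, 64)       # flattened 32x32 feature map
    flair = torch.randn(2, 32 * 32, 64)
    print(fusion(t2, flair).shape)          # torch.Size([2, 1024, 64])
```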
Affiliation(s)
- Hiroyuki Seshimo
- Graduate School of Information Science, University of Hyogo, Kobe 650-0047, Japan
- Essam A Rashed
- Graduate School of Information Science, University of Hyogo, Kobe 650-0047, Japan
- Advanced Medical Engineering Research Institute, University of Hyogo, Himeji 670-0836, Japan
2
Zheng B, Zhao Z, Zheng P, Liu Q, Li S, Jiang X, Huang X, Ye Y, Wang H. The current state of MRI-based radiomics in pituitary adenoma: promising but challenging. Front Endocrinol (Lausanne) 2024; 15:1426781. [PMID: 39371931] [PMCID: PMC11449739] [DOI: 10.3389/fendo.2024.1426781]
Abstract
In the clinical diagnosis and treatment of pituitary adenomas, MRI plays a crucial role. However, traditional manual interpretations are plagued by inter-observer variability and limitations in recognizing details. Radiomics, based on MRI, facilitates quantitative analysis by extracting high-throughput data from images. This approach elucidates correlations between imaging features and pituitary tumor characteristics, thereby establishing imaging biomarkers. Recent studies have demonstrated the extensive application of radiomics in differential diagnosis, subtype identification, consistency evaluation, invasiveness assessment, and treatment response in pituitary adenomas. This review succinctly presents the general workflow of radiomics, reviews pertinent literature with a summary table, and provides a comparative analysis with traditional methods. We further elucidate the connections between radiological features and biological findings in the field of pituitary adenoma. While promising, the clinical application of radiomics still has a considerable distance to traverse, considering the issues with reproducibility of imaging features and the significant heterogeneity in pituitary adenoma patients.
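As a rough illustration of the feature-extraction step in the radiomics workflow summarized above, the sketch below computes a few first-order features from voxels inside a tumour mask; array names and the bin count are assumptions, and real pipelines rely on dedicated toolkits and far richer feature classes (shape, texture, wavelet).

```python
# First-order radiomics features from voxels inside a segmentation mask.
# A simplified illustration only, not a full radiomics pipeline.
import numpy as np


def first_order_features(image: np.ndarray, mask: np.ndarray, n_bins: int = 32) -> dict:
    voxels = image[mask > 0].astype(np.float64)
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "std": voxels.std(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / (voxels.std() ** 3 + 1e-8),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram (Shannon) entropy
        "volume_voxels": int(mask.sum()),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64, 64))
    msk = np.zeros_like(img, dtype=np.uint8)
    msk[20:40, 20:40, 20:40] = 1                      # toy "tumour" region
    print(first_order_features(img, msk))
```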
Affiliation(s)
- Baoping Zheng
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhen Zhao
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Pingping Zheng
- Department of Neurosurgery, People’s Hospital of Biyang County, Zhumadian, China
- Qiang Liu
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shuang Li
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaobing Jiang
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xing Huang
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Youfan Ye
- Department of Ophthalmology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Haijun Wang
- Department of Neurosurgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
3
Asadi F, Angsuwatanakul T, O’Reilly JA. Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network. IBRO Neurosci Rep 2024; 16:57-66. [PMID: 39007088] [PMCID: PMC11240293] [DOI: 10.1016/j.ibneur.2023.12.002]
Abstract
Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ada to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate computational costs associated with training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation and test set performances of 0.01 with, and 0.04 without geometric augmentation). Based on the modest performance gains for automatic glioma segmentation we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
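The geometric augmentation compared in this study (translation, zoom and shear applied identically to an image slice and its mask) can be sketched as follows; the parameter ranges and helper name are illustrative assumptions rather than the authors' settings.

```python
# Geometric augmentation (zoom, shear, translation) applied identically to a
# FLAIR slice and its segmentation mask. Parameter ranges are assumptions.
import numpy as np
from scipy.ndimage import affine_transform


def random_geometric_pair(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    zoom = rng.uniform(0.9, 1.1)
    shear = rng.uniform(-0.1, 0.1)
    shift = rng.uniform(-10, 10, size=2)
    # 2x2 affine matrix combining isotropic zoom and a horizontal shear.
    matrix = np.array([[zoom, shear],
                       [0.0,  zoom]])
    warped_img = affine_transform(image, matrix, offset=shift, order=1)   # bilinear
    warped_msk = affine_transform(mask, matrix, offset=shift, order=0)    # nearest, keeps labels
    return warped_img, warped_msk


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    img = rng.normal(size=(240, 240))
    msk = (rng.random((240, 240)) > 0.99).astype(np.uint8)
    aug_img, aug_msk = random_geometric_pair(img, msk, rng)
    print(aug_img.shape, aug_msk.dtype)
```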
Affiliation(s)
- Fawad Asadi
- College of Biomedical Engineering, Rangsit University, Pathum Thani 12000, Thailand
- Jamie A. O’Reilly
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
4
Garcea F, Serra A, Lamberti F, Morra L. Data augmentation for medical imaging: A systematic literature review. Comput Biol Med 2023; 152:106391. [PMID: 36549032] [DOI: 10.1016/j.compbiomed.2022.106391]
Abstract
Recent advances in Deep Learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging is still a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to augment specific classes that are underrepresented in the training set, e.g., to generate artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018-2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
Affiliation(s)
- Fabio Garcea
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Alessio Serra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Fabrizio Lamberti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy.
5
Yu X, Wu Y, Bai Y, Han H, Chen L, Gao H, Wei H, Wang M. A lightweight 3D UNet model for glioma grading. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7d33]
Abstract
Objective. Glioma is one of the most fatal cancers in the world; it is divided into low-grade glioma (LGG) and high-grade glioma (HGG), and its image-based grading has become a hot topic of contemporary research. Magnetic resonance imaging (MRI) is a vital diagnostic tool for brain tumor detection, analysis, and surgical planning. Accurate and automatic glioma grading is crucial for speeding up diagnosis and treatment planning. Addressing the problems of (1) large numbers of parameters, (2) complex computation, and (3) poor speed in current deep learning-based glioma grading algorithms, this paper proposes a lightweight 3D UNet deep learning framework that improves classification accuracy in comparison with existing methods. Approach. To improve efficiency while maintaining accuracy, the existing 3D UNet was streamlined: depthwise separable convolution was applied to the 3D convolutions to reduce the number of network parameters, and a spatial and channel squeeze-and-excitation module was used to reweight the feature maps, down-weighting redundant parameters and strengthening model performance. Main results. A total of 560 patients with glioma were retrospectively reviewed. All patients underwent MRI before surgery. The experiments were carried out on T1w, T2w, fluid-attenuated inversion recovery, and CET1w images. Additionally, a way of marking the tumor area with a cubic bounding box is presented that shows no significant difference in model performance compared with the manually drawn ground truth. Evaluation on the test dataset showed good results, with an accuracy of 89.29%. Significance. This work achieves LGG/HGG grading through a simple, effective, and non-invasive diagnostic approach, providing diagnostic suggestions for clinical use and thereby hastening treatment decisions.
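The depthwise separable 3D convolution mentioned above, a standard trick for cutting parameter counts, can be sketched in PyTorch as follows; layer sizes are assumptions, and this is not the cited architecture.

```python
# Depthwise-separable 3D convolution: a per-channel (depthwise) 3D convolution
# followed by a 1x1x1 (pointwise) convolution. Generic building block only.
import torch
import torch.nn as nn


class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    block = DepthwiseSeparableConv3d(32, 64)
    x = torch.randn(1, 32, 16, 64, 64)
    print(block(x).shape)                          # torch.Size([1, 64, 16, 64, 64])
    dense = nn.Conv3d(32, 64, 3, padding=1)
    print(sum(p.numel() for p in block.parameters()),   # far fewer parameters
          sum(p.numel() for p in dense.parameters()))
```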
6
LGMSU-Net: Local Features, Global Features, and Multi-Scale Features Fused the U-Shaped Network for Brain Tumor Segmentation. Electronics 2022. [DOI: 10.3390/electronics11121911]
Abstract
Brain tumors are among the deadliest cancers in the world. Owing to the rapid development of deep learning, a great deal of research has been conducted on brain tumor segmentation to assist doctors in diagnosis and treatment, with good performance. However, most of these methods cannot fully combine multiple types of feature information, and their performance needs improvement. This study developed a novel network that fuses local features representing detailed information, global features representing global context, and multi-scale features enhancing the model’s robustness, so as to fully extract the features of brain tumors; it also proposed a novel axial-deformable attention module for modeling global information, improving brain tumor segmentation performance and assisting clinicians in automatic segmentation. Moreover, positional embeddings were used to speed up network training and improve the method’s performance. Six metrics were used to evaluate the proposed method on the BraTS2018 dataset. Outstanding performance was obtained for the whole tumor, with a Dice score of 0.8735, mean Intersection over Union of 0.7756, precision of 0.9477, recall of 0.8769, 69.02 M parameters, and an inference time of 15.66 ms. Extensive experiments demonstrated that the proposed network achieves excellent performance and can provide supplementary advice to clinicians.
7
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172] [DOI: 10.1016/j.compbiomed.2022.105273]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews on brain tumor segmentation are available, but none of them describe a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focused on linking RoB and the different AI-based architectural clusters found in popular DL frameworks. Further, given the variance in these designs and in the input data types used in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on their architectural evolution, DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, with clusters including AI architecture, imaging modalities, hyper-parameters, performance evaluation metrics, and clinical evaluation. After the studies were scored on all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
Affiliation(s)
- Suchismita Das
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India; CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- G K Nayak
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- Luca Saba
- Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
- Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
- Jasjit S Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA.
- Sanjay Saxena
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
8
Ahuja S, Panigrahi BK, Gandhi TK. Enhanced performance of Dark-Nets for brain tumor classification and segmentation using colormap-based superpixel techniques. Machine Learning with Applications 2022. [DOI: 10.1016/j.mlwa.2021.100212]
9
Bento M, Fantini I, Park J, Rittner L, Frayne R. Deep Learning in Large and Multi-Site Structural Brain MR Imaging Datasets. Front Neuroinform 2022; 15:805669. [PMID: 35126080] [PMCID: PMC8811356] [DOI: 10.3389/fninf.2021.805669]
Abstract
Large, multi-site, heterogeneous brain imaging datasets are increasingly required for the training, validation, and testing of advanced deep learning (DL)-based automated tools, including structural magnetic resonance (MR) image-based diagnostic and treatment monitoring approaches. When assembling a number of smaller datasets to form a larger dataset, understanding the underlying variability between different acquisition and processing protocols across the aggregated dataset (termed “batch effects”) is critical. The presence of variation in the training dataset is important as it more closely reflects the true underlying data distribution and, thus, may enhance the overall generalizability of the tool. However, the impact of batch effects must be carefully evaluated in order to avoid undesirable effects that, for example, may reduce performance measures. Batch effects can result from many sources, including differences in acquisition equipment, imaging technique and parameters, as well as applied processing methodologies. Their impact, both beneficial and adversarial, must be considered when developing tools to ensure that their outputs are related to the proposed clinical or research question (i.e., actual disease-related or pathological changes) and are not simply due to the peculiarities of underlying batch effects in the aggregated dataset. We reviewed applications of DL in structural brain MR imaging that aggregated images from neuroimaging datasets, typically acquired at multiple sites. We examined datasets containing both healthy control participants and patients that were acquired using varying acquisition protocols. First, we discussed issues around Data Access and enumerated the key characteristics of some commonly used publicly available brain datasets. Then we reviewed methods for correcting batch effects by exploring the two main classes of approaches: Data Harmonization that uses data standardization, quality control protocols or other similar algorithms and procedures to explicitly understand and minimize unwanted batch effects; and Domain Adaptation that develops DL tools that implicitly handle the batch effects by using approaches to achieve reliable and robust results. In this narrative review, we highlighted the advantages and disadvantages of both classes of DL approaches, and described key challenges to be addressed in future studies.
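As a toy illustration of the data-harmonization class of approaches discussed above, the sketch below standardizes intensities per acquisition site to reduce simple batch effects; real harmonization methods (for example ComBat-style models) are considerably more involved, and the function and variable names are assumptions.

```python
# Toy data-harmonization step: per-site intensity standardization.
# Illustration only; real harmonization models site effects explicitly.
import numpy as np


def per_site_zscore(images: list[np.ndarray], site_ids: list[str]) -> list[np.ndarray]:
    out = [None] * len(images)
    for site in set(site_ids):
        idx = [i for i, s in enumerate(site_ids) if s == site]
        voxels = np.concatenate([images[i].ravel() for i in idx])
        mu, sigma = voxels.mean(), voxels.std() + 1e-8
        for i in idx:
            out[i] = (images[i] - mu) / sigma       # shared statistics per site
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.normal(loc=100, scale=20, size=(8, 8)),   # "site A" scanner
            rng.normal(loc=300, scale=50, size=(8, 8)),   # "site B" scanner
            rng.normal(loc=110, scale=22, size=(8, 8))]
    harmonized = per_site_zscore(imgs, ["A", "B", "A"])
    print([round(h.mean(), 3) for h in harmonized])
```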
Affiliation(s)
- Mariana Bento
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada
- Correspondence: Mariana Bento
- Irene Fantini
- School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Justin Park
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Leticia Rittner
- School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
- Richard Frayne
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Calgary Image Processing and Analysis Centre, Foothills Medical Centre, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, AB, Canada
10
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_284]
11
Spherical coordinates transformation pre-processing in Deep Convolution Neural Networks for brain tumor segmentation in MRI. Med Biol Eng Comput 2021; 60:121-134. [PMID: 34729681] [DOI: 10.1007/s11517-021-02464-1]
Abstract
Magnetic Resonance Imaging (MRI) is used in everyday clinical practice to assess brain tumors. Deep Convolutional Neural Networks (DCNN) have recently shown very promising results in brain tumor segmentation tasks; however, DCNN models fail the task when applied to volumes that differ from the training dataset. One reason is the lack of data standardization to adjust for different models and MR machines. In this work, a 3D spherical coordinates transform applied during the pre-processing phase is hypothesized to improve DCNN models' accuracy and to allow more generalizable results, even when the model is trained on small and heterogeneous datasets and translated into different domains. Indeed, the spherical coordinate system avoids several standardization issues since it works independently of resolution and imaging settings. The model trained on spherical-transform pre-processed inputs showed superior performance over the Cartesian-input trained model in predicting glioma segmentation for the Tumor Core and Enhancing Tumor classes, with a further improvement in accuracy achieved by merging the two models. The proposed model is not resolution-dependent, thus improving segmentation accuracy and theoretically solving some transfer learning problems related to domain shift, at least in terms of image resolution in the datasets.
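A minimal sketch of the Cartesian-to-spherical resampling idea is shown below, assuming the transform is taken about the volume centre; grid sizes and interpolation order are illustrative choices rather than the authors' settings.

```python
# Resample a 3D volume from Cartesian to spherical coordinates (r, theta, phi)
# around the volume centre, as a pre-processing step. Illustration only.
import numpy as np
from scipy.ndimage import map_coordinates


def to_spherical(volume: np.ndarray, n_r=64, n_theta=64, n_phi=128) -> np.ndarray:
    centre = (np.array(volume.shape) - 1) / 2.0
    r = np.linspace(0, np.linalg.norm(centre), n_r)
    theta = np.linspace(0, np.pi, n_theta)        # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)        # azimuth
    R, T, P = np.meshgrid(r, theta, phi, indexing="ij")
    # Spherical -> Cartesian sampling locations (in voxel coordinates).
    x = centre[0] + R * np.sin(T) * np.cos(P)
    y = centre[1] + R * np.sin(T) * np.sin(P)
    z = centre[2] + R * np.cos(T)
    coords = np.stack([x, y, z])                  # shape (3, n_r, n_theta, n_phi)
    return map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)


if __name__ == "__main__":
    vol = np.random.default_rng(0).normal(size=(96, 96, 96))
    print(to_spherical(vol).shape)                # (64, 64, 128)
```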
12
Wang J, Gao J, Ren J, Luan Z, Yu Z, Zhao Y, Zhao Y. DFP-ResUNet: Convolutional Neural Network with a Dilated Convolutional Feature Pyramid for Multimodal Brain Tumor Segmentation. Comput Methods Programs Biomed 2021; 208:106208. [PMID: 34174763] [DOI: 10.1016/j.cmpb.2021.106208]
Abstract
BACKGROUND AND OBJECTIVE Manual brain tumor segmentation by radiologists is time-consuming and subjective. Therefore, fully automatic segmentation of different brain tumor subregions is essential to the treatment of patients. In this paper, we propose a neural network for automatically segmenting the enhancing tumor (ET), whole tumor (WT), and tumor core (TC) brain tumor subregions. METHODS The network is based on a U-Net with an encoding and decoding structure, a residual module, and a spatial dilated feature pyramid (DFP) module, namely DFP-ResUNet. First, we propose using a spatial DFP module composed of multiple parallel dilated convolution layers to extract multiscale image features. This spatial DFP structure improves the ability of the neural network to extract and utilize multiscale image features. Then, we use the residual module to deepen the network structure. Further, we propose using a multiclass Dice loss function to suppress the impact of class imbalance on brain tumor segmentation. We carried out a large number of ablation experiments to verify the feasibility and superiority of our approach using the Multimodal Brain Tumor Segmentation (BraTS) challenge dataset. RESULTS Using the proposed method, the mean Dice scores for the different subregions were 0.8431 (ET), 0.897 (WT), and 0.9068 (TC) on the BraTS 2018 challenge validation set and 0.7985, 0.90281, and 0.8453, respectively, on the BraTS 2019 challenge. Further, high sensitivity and specificity and a low Hausdorff distance were obtained. CONCLUSIONS The analysis of the experimental results shows that the proposed DFP-ResUNet has great potential for segmenting different subregions of brain tumors and can be applied in clinical practice.
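The multiclass Dice loss used above to counter class imbalance can be written generically as below; this is a standard soft-Dice formulation and not necessarily identical to the loss in the cited paper.

```python
# Multi-class soft Dice loss for segmentation with small foreground classes.
import torch
import torch.nn.functional as F


def multiclass_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    # logits: (B, C, H, W) raw network outputs; target: (B, H, W) integer labels.
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)                                # sum over batch and spatial dims
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()              # average over classes


if __name__ == "__main__":
    logits = torch.randn(2, 4, 64, 64, requires_grad=True)   # 4 classes incl. background
    labels = torch.randint(0, 4, (2, 64, 64))
    loss = multiclass_dice_loss(logits, labels)
    loss.backward()
    print(float(loss))
```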
Affiliation(s)
- Jingjing Wang
- School of Physics and Electronics, Shandong Normal University, Jinan, China.
- Jun Gao
- School of Physics and Electronics, Shandong Normal University, Jinan, China
- Jinwen Ren
- School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenye Luan
- School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zishu Yu
- School of Physics and Electronics, Shandong Normal University, Jinan, China
- Yanhua Zhao
- Xigang Central Health Center, Tengzhou City, Shandong Province, China
- Yuefeng Zhao
- School of Physics and Electronics, Shandong Normal University, Jinan, China
13
Fawzi A, Achuthan A, Belaton B. Brain Image Segmentation in Recent Years: A Narrative Review. Brain Sci 2021; 11:1055. [PMID: 34439674] [PMCID: PMC8392552] [DOI: 10.3390/brainsci11081055]
Abstract
Brain image segmentation is one of the most time-consuming and challenging procedures in a clinical environment. Recently, a drastic increase in the number of brain disorders has been noted. This has indirectly led to an increased demand for automated brain segmentation solutions to assist medical experts in early diagnosis and treatment interventions. This paper aims to present a critical review of the recent trend in segmentation and classification methods for brain magnetic resonance images. Various segmentation methods ranging from simple intensity-based to high-level segmentation approaches such as machine learning, metaheuristic, deep learning, and hybridization are included in the present review. Common issues, advantages, and disadvantages of brain image segmentation methods are also discussed to provide a better understanding of the strengths and limitations of existing methods. From this review, it is found that deep learning-based and hybrid-based metaheuristic approaches are more efficient for the reliable segmentation of brain tumors. However, these methods fall behind in terms of computation and memory complexity.
Affiliation(s)
- Anusha Achuthan
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia; (A.F.); (B.B.)
14
Wang YL, Zhao ZJ, Hu SY, Chang FL. CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation. Comput Methods Programs Biomed 2021; 207:106154. [PMID: 34034031] [DOI: 10.1016/j.cmpb.2021.106154]
Abstract
BACKGROUND AND OBJECTIVE Brain tumors are among the deadliest cancers worldwide. Owing to the development of deep convolutional neural networks, many brain tumor segmentation methods help clinicians diagnose and operate. However, most of these methods make insufficient use of multi-scale features, reducing their ability to extract brain tumor features and details. To assist clinicians in accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features to improve the performance of brain tumor segmentation. METHODS We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features at different scales to fully utilize multi-scale features. Besides, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different-scale features for selectively aggregating features, which provides a more efficient connection of different-scale features. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to further improve the method's performance. RESULTS We evaluated our method on the BRATS 2018 dataset using five indexes and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34 M parameters, and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of the heatmaps of different attention modules showed that the attention module proposed in this study is more suitable for segmentation tasks than other existing popular attention modules. CONCLUSION Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with a selective feature aggregation attention module can achieve accurate brain tumor segmentation and could be quite instrumental in clinical practice.
Affiliation(s)
- Y L Wang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Z J Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- S Y Hu
- Department of General Surgery, First Affiliated Hospital of Shandong First Medical University, Jinan 250012, China
- F L Chang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
15
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766] [DOI: 10.1111/1754-9485.13261]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on the use of deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they do rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets aren't typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data which is used to train the model and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace; therefore, to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature where data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give an insight into these techniques and confidence in the validity of the models produced.
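As an example of the "deformable" augmentation category reviewed above, the sketch below applies a random smooth displacement field to an image and its mask; the parameters alpha and sigma and the helper name are illustrative assumptions.

```python
# Elastic ("deformable") augmentation: a random smooth displacement field
# applied identically to an image and its segmentation mask.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def elastic_deform_pair(image, mask, alpha=30.0, sigma=5.0, rng=None):
    rng = rng or np.random.default_rng()
    shape = image.shape
    # Random displacement field, smoothed so the deformation is locally coherent.
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    xx, yy = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.stack([xx + dx, yy + dy])
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    warped_msk = map_coordinates(mask, coords, order=0, mode="reflect")   # keeps labels
    return warped_img, warped_msk


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.normal(size=(128, 128))
    msk = (img > 1.5).astype(np.uint8)
    out_img, out_msk = elastic_deform_pair(img, msk, rng=rng)
    print(out_img.shape, out_msk.max())
```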
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
16
Ghosal P, Chowdhury T, Kumar A, Bhadra AK, Chakraborty J, Nandi D. MhURI: A Supervised Segmentation Approach to Leverage Salient Brain Tissues in Magnetic Resonance Images. Comput Methods Programs Biomed 2021; 200:105841. [PMID: 33221057] [PMCID: PMC9096474] [DOI: 10.1016/j.cmpb.2020.105841]
Abstract
BACKGROUND AND OBJECTIVES Accurate segmentation of critical tissues from brain MRI is pivotal for characterization and quantitative pattern analysis of the human brain, and thereby for identifying the earliest signs of various neurodegenerative diseases. To date, in most cases, it is done manually by radiologists. The overwhelming workload in some densely populated nations may cause exhaustion among doctors, which may pose a continuing threat to patient safety. A novel fusion method, called U-Net inception, based on 3D convolutions and transition layers is proposed to address this issue. METHODS A 3D deep learning method called Multi-headed U-Net with Residual Inception (MhURI), accompanied by a morphological gradient channel, is proposed for brain tissue segmentation; it incorporates the Residual Inception 2-Residual (RI2R) module as its basic building block. The model exploits the benefits of morphological pre-processing for structural enhancement of MR images. A multi-path data encoding pipeline is introduced on top of the U-Net backbone, which encapsulates initial global features and captures the information from each MRI modality. RESULTS The proposed model achieved encouraging outcomes, demonstrating its adequacy in terms of several established quality metrics when compared with state-of-the-art methods on two popular publicly available datasets. CONCLUSION The model is entirely automatic and able to segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from brain MRI effectively and with sufficient accuracy. Hence, it may be considered a potential computer-aided diagnostic (CAD) tool for radiologists and other medical practitioners in their clinical diagnosis workflow.
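The morphological gradient channel used above for structural enhancement can be computed directly with SciPy, as in the short sketch below; the structuring-element size is an assumption.

```python
# Morphological gradient = grey dilation minus grey erosion; highlights edges.
import numpy as np
from scipy.ndimage import morphological_gradient

vol = np.random.default_rng(0).normal(size=(32, 64, 64))
grad = morphological_gradient(vol, size=(3, 3, 3))
# Stack with the original intensities to form a two-channel network input.
two_channel = np.stack([vol, grad], axis=0)
print(two_channel.shape)                              # (2, 32, 64, 64)
```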
Affiliation(s)
- Palash Ghosal
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Tamal Chowdhury
- Department of Electronics and Communication Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Amish Kumar
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Ashok Kumar Bhadra
- Department of Radiology, KPC Medical College and Hospital, Jadavpur, 700032, West Bengal, India.
- Jayasree Chakraborty
- Department of Hepatopancreatobiliary Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
17
Magadza T, Viriri S. Deep Learning for Brain Tumor Segmentation: A Survey of State-of-the-Art. J Imaging 2021; 7:19. [PMID: 34460618] [PMCID: PMC8321266] [DOI: 10.3390/jimaging7020019]
Abstract
Quantitative analysis of brain tumors provides valuable information for better understanding tumor characteristics and planning treatment. Accurate segmentation of lesions requires more than one imaging modality with varying contrasts; as a result, manual segmentation, which is arguably the most accurate approach, would be impractical for more extensive studies. Deep learning has recently emerged as a solution for quantitative analysis due to its record-shattering performance. However, medical image analysis has its unique challenges. This paper presents a review of state-of-the-art deep learning methods for brain tumor segmentation, clearly highlighting their building blocks and various strategies. We end with a critical discussion of open challenges in medical image analysis.
Affiliation(s)
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
18
Khosravanian A, Rahmanimanesh M, Keshavarzi P, Mozaffari S. Fast level set method for glioma brain tumor segmentation based on Superpixel fuzzy clustering and lattice Boltzmann method. Comput Methods Programs Biomed 2021; 198:105809. [PMID: 33130495] [DOI: 10.1016/j.cmpb.2020.105809]
Abstract
BACKGROUND AND OBJECTIVE Brain tumor segmentation is a challenging issue due to noise, artifacts, and intensity non-uniformity in magnetic resonance images (MRI). Manual MRI segmentation is a very tedious, time-consuming, and user-dependent task. This paper presents a novel level set method to address the aforementioned challenges and enable reliable, automatic brain tumor segmentation. METHODS In the proposed method, a new functional, based on the level set method, is presented for medical image segmentation. Firstly, we define a superpixel fuzzy clustering objective function. To create superpixel regions, a multiscale morphological gradient reconstruction (MMGR) operation is used. Secondly, a novel fuzzy energy functional is defined based on superpixel segmentation and histogram computation. Then, level set equations are obtained by using the gradient descent method. Finally, we solve the level set equations by using the lattice Boltzmann method (LBM). To evaluate the performance of the proposed method, both a synthetic image dataset and real glioma brain tumor images from the BraTS 2017 dataset are used. RESULTS Experiments indicate that our proposed method is robust to noise, initialization, and intensity non-uniformity. Moreover, it is faster and more accurate than other state-of-the-art segmentation methods, with an average running time of 3.25 seconds and Dice and Jaccard coefficients for automatic tumor segmentation against ground truth of 0.93 and 0.87, respectively. The mean values of the Hausdorff distance, mean absolute distance (MAD), accuracy, sensitivity, and specificity are 2.70, 0.005, 0.9940, 0.9183, and 0.9972, respectively. CONCLUSIONS Our proposed method shows satisfactory results for glioma brain tumor segmentation owing to the accurate segmentation produced by superpixel fuzzy clustering. Moreover, our method is fast and robust to noise, initialization, and intensity non-uniformity. Since most medical images suffer from these problems, the proposed method can be more effective for complicated medical image segmentation.
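The superpixel fuzzy-clustering step can be illustrated with a hand-rolled fuzzy c-means over region mean intensities, as sketched below; for simplicity the regions are a regular block grid rather than the MMGR superpixels used in the paper, and all parameters are assumptions.

```python
# Fuzzy c-means on region mean intensities, a simplified stand-in for the
# superpixel fuzzy clustering described above.
import numpy as np


def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=100, rng=None):
    rng = rng or np.random.default_rng(0)
    centres = rng.choice(values, size=n_clusters, replace=False)
    for _ in range(n_iter):
        dist = np.abs(values[:, None] - centres[None, :]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))           # membership update (fuzzifier m)
        u /= u.sum(axis=1, keepdims=True)
        centres = (u ** m * values[:, None]).sum(0) / (u ** m).sum(0)
    return u, centres


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.normal(size=(128, 128))
    image[40:80, 40:80] += 3.0                      # bright "tumour-like" region
    block = 8
    # Mean intensity of each 8x8 block stands in for a superpixel feature.
    means = image.reshape(128 // block, block, 128 // block, block).mean(axis=(1, 3))
    u, centres = fuzzy_cmeans_1d(means.ravel())
    labels = u.argmax(axis=1).reshape(means.shape)
    print(centres, int(labels.sum()))
```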
Affiliation(s)
- Asieh Khosravanian
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran.
- Parviz Keshavarzi
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran.
- Saeed Mozaffari
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran.
19
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_284-1]
20
Zhang Z, Gao S, Huang Z. An Automatic Glioma Segmentation System Using a Multilevel Attention Pyramid Scene Parsing Network. Curr Med Imaging 2020; 17:751-761. [PMID: 33390119] [DOI: 10.2174/1573405616666201231100623]
Abstract
BACKGROUND Due to the significant variances in their shape and size, it is a challenging task to automatically segment gliomas. To improve the performance of glioma segmentation tasks, this paper proposed a multilevel attention pyramid scene parsing network (MLAPSPNet) that aggregates the multiscale context and multilevel features. METHODS First, T1 pre-contrast, T2-weighted fluid-attenuated inversion recovery (FLAIR) and T1 post-contrast sequences of each slice are combined to form the input. Afterwards, image normalization and augmentation techniques are applied to accelerate the training process and avoid overfitting, respectively. Furthermore, the proposed MLAPSPNet that introduces multilevel pyramid pooling modules (PPMs) and attention gates is constructed. Eventually, the proposed network is compared with some existing networks. RESULTS The dice similarity coefficient (DSC), sensitivity and Jaccard score of the proposed system can reach 0.885, 0.933 and 0.8, respectively. The introduction of multilevel pyramid pooling modules and attention gates can improve the DSC by 0.029 and 0.022, respectively. Moreover, compared with Res-UNet, Dense-UNet, residual channel attention UNet (RCA-UNet), DeepLab V3+ and UNet++, the DSC is improved by 0.032, 0.026, 0.014, 0.041 and 0.011, respectively. CONCLUSION The proposed multilevel attention pyramid scene parsing network can achieve state-of-the-art performance, and the introduction of multilevel pyramid pooling modules and attention gates can improve the performance of glioma segmentation tasks.
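The pyramid pooling module (PPM) at the heart of PSPNet-style decoders such as the one above can be sketched as follows; bin sizes and channel counts are assumptions, not the authors' configuration.

```python
# Pyramid pooling module: pool the feature map at several scales, project each
# pooled map, upsample, and concatenate with the input before a final 1x1 conv.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPoolingModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, out_ch, 1))
            for b in bins
        ])
        self.project = nn.Conv2d(in_ch + len(bins) * out_ch, out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for stage in self.stages:
            pooled = stage(x)
            feats.append(F.interpolate(pooled, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return self.project(torch.cat(feats, dim=1))


if __name__ == "__main__":
    ppm = PyramidPoolingModule(in_ch=256, out_ch=64)
    x = torch.randn(1, 256, 30, 30)
    print(ppm(x).shape)                               # torch.Size([1, 64, 30, 30])
```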
Affiliation(s)
- Zhenyu Zhang
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Shouwei Gao
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Zheng Huang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
21
Towards Personalized Diagnosis of Glioblastoma in Fluid-Attenuated Inversion Recovery (FLAIR) by Topological Interpretable Machine Learning. Mathematics 2020. [DOI: 10.3390/math8050770]
Abstract
Glioblastoma multiforme (GBM) is a fast-growing and highly invasive brain tumor, which tends to occur in adults between the ages of 45 and 70 and accounts for 52 percent of all primary brain tumors. Usually, GBMs are detected by magnetic resonance imaging (MRI). Among MRI sequences, fluid-attenuated inversion recovery (FLAIR) produces a high-quality digital representation of the tumor. Fast computer-aided detection and segmentation techniques are needed to overcome the subjective judgment of medical doctors (MDs). This study has three main novelties, demonstrating the role of topological features as a new set of radiomics features that can serve as pillars of a personalized diagnostic system for GBM analysis from FLAIR. For the first time, topological data analysis is used to analyze GBM from three complementary perspectives: tumor growth at the cell level, temporal evolution of GBM during the follow-up period, and GBM detection. The second novelty is the definition of a new Shannon-like topological entropy, the so-called Generator Entropy. The third novelty is the combination of topological and textural features for training automatic interpretable machine learning. These novelties are demonstrated by three numerical experiments. Topological data analysis of a simplified 2D mathematical model of tumor growth allowed the bio-chemical conditions that facilitate tumor growth to be understood: the higher the concentration of chemical nutrients, the more virulent the process. Topological data analysis was then used to evaluate GBM temporal progression on FLAIR recorded within 90 days following treatment completion and at progression. The experiment confirmed that persistent entropy is a viable statistic for monitoring GBM evolution during the follow-up period. In the third experiment, we developed a novel methodology based on topological and textural features and automatic interpretable machine learning for automatic GBM classification on FLAIR. The algorithm reached a classification accuracy of up to 97%.
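Persistent entropy, the statistic the study reports as viable for monitoring GBM evolution, can be computed from a persistence diagram as below; note that the Generator Entropy introduced in the paper is a related but distinct quantity, so this sketch only shows the standard definition.

```python
# Persistent entropy: Shannon entropy of the normalized lifetimes
# (death - birth) of the features in a persistence diagram.
import numpy as np


def persistent_entropy(diagram: np.ndarray) -> float:
    # diagram: array of (birth, death) pairs with finite deaths.
    lifetimes = diagram[:, 1] - diagram[:, 0]
    lifetimes = lifetimes[lifetimes > 0]
    p = lifetimes / lifetimes.sum()
    return float(-(p * np.log(p)).sum())


if __name__ == "__main__":
    # Toy diagram: a few long-lived and several short-lived features.
    dgm = np.array([[0.0, 1.0], [0.1, 0.9], [0.2, 0.25], [0.3, 0.32], [0.5, 0.55]])
    print(round(persistent_entropy(dgm), 4))
```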
22
Nalepa J, Ribalta Lorenzo P, Marcinkiewicz M, Bobek-Billewicz B, Wawrzyniak P, Walczak M, Kawulok M, Dudzik W, Kotowski K, Burda I, Machura B, Mrukwa G, Ulrych P, Hayball MP. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors. Artif Intell Med 2020; 102:101769. [DOI: 10.1016/j.artmed.2019.101769]
23
Nalepa J, Marcinkiewicz M, Kawulok M. Data Augmentation for Brain-Tumor Segmentation: A Review. Front Comput Neurosci 2019; 13:83. [PMID: 31920608] [PMCID: PMC6917660] [DOI: 10.3389/fncom.2019.00083]
Abstract
Data augmentation is a popular technique which helps improve generalization capabilities of deep neural networks, and can be perceived as implicit regularization. It plays a pivotal role in scenarios in which the amount of high-quality ground-truth data is limited, and acquiring new examples is costly and time-consuming. This is a very common problem in medical image analysis, especially tumor delineation. In this paper, we review the current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors. To better understand the practical aspects of such algorithms, we investigate the papers submitted to the Multimodal Brain Tumor Segmentation Challenge (BraTS 2018 edition), as the BraTS dataset became a standard benchmark for validating existent and emerging brain-tumor detection and segmentation techniques. We verify which data augmentation approaches were exploited and what was their impact on the abilities of underlying supervised learners. Finally, we highlight the most promising research directions to follow in order to synthesize high-quality artificial brain-tumor examples which can boost the generalization abilities of deep models.
Affiliation(s)
- Jakub Nalepa
- Future Processing, Gliwice, Poland
- Silesian University of Technology, Gliwice, Poland