1
Kundal K, Rao KV, Majumdar A, Kumar N, Kumar R. Comprehensive benchmarking of CNN-based tumor segmentation methods using multimodal MRI data. Comput Biol Med 2024; 178:108799. PMID: 38925087. DOI: 10.1016/j.compbiomed.2024.108799.
Abstract
Magnetic resonance imaging (MRI) has become an essential, frontline technique for detecting brain tumors. However, segmenting tumors manually from scans is laborious and time-consuming, which has driven an increasing trend towards fully automated methods for precise tumor segmentation in MRI scans. Accurate tumor segmentation is crucial for improved diagnosis, treatment, and prognosis. This study benchmarks and evaluates four widely used CNN-based methods for brain tumor segmentation: CaPTk, 2DVNet, EnsembleUNets, and ResNet50. Using 1251 multimodal MRI scans from the BraTS2021 dataset, we compared the performance of these methods against a reference dataset of radiologist-assisted segmented images. This comparison was conducted both on the segmented images directly and on radiomic features extracted from them using pyRadiomics. Performance was assessed using the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD). EnsembleUNets excelled, achieving a DSC of 0.93 and an HD of 18, outperforming the other methods. Further comparative analysis of radiomic features confirmed EnsembleUNets as the most precise segmentation method, with a Concordance Correlation Coefficient (CCC) of 0.79, a Total Deviation Index (TDI) of 1.14, and a Root Mean Square Error (RMSE) of 0.53, underscoring its superior performance. We also performed validation on an independent dataset of 611 samples (UPENN-GBM), which further supported the accuracy of EnsembleUNets, with a DSC of 0.85 and an HD of 17.5. These findings provide valuable insight into the efficacy of EnsembleUNets and support informed decisions for accurate brain tumor segmentation.
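The two headline metrics above, DSC and HD, can be sketched for a pair of binary masks as follows. This is a minimal NumPy illustration: the toy 2D masks and the mask-level (rather than boundary-level) Hausdorff computation are assumptions for clarity, not the paper's implementation.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hausdorff_distance(pred, ref):
    """Symmetric Hausdorff distance (HD) between the foreground
    voxel sets of two binary masks, via brute-force pairwise distances."""
    p = np.argwhere(pred)[:, None, :].astype(float)  # (Np, 1, ndim)
    r = np.argwhere(ref)[None, :, :].astype(float)   # (1, Nr, ndim)
    d = np.sqrt(((p - r) ** 2).sum(axis=-1))         # (Np, Nr) pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D masks: two overlapping 4x4 squares
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
ref = np.zeros((10, 10)); ref[3:7, 3:7] = 1
dsc = dice_coefficient(pred, ref)   # 2*9 / (16+16) = 0.5625
hd = hausdorff_distance(pred, ref)  # sqrt(2): corner offset of the two squares
```

On real volumes, HD is usually computed over segmentation boundaries (often as the robust 95th-percentile HD95) rather than all foreground voxels.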
Affiliation(s)
- Kavita Kundal
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- K Venkateswara Rao
- Department of Neurosurgical Oncology, Basavatarakam Indo American Cancer Hospital & Research Institute, Hyderabad, Telangana, 500034, India
- Arunabha Majumdar
- Department of Mathematics, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- Neeraj Kumar
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India; Department of Liberal Arts, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- Rahul Kumar
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
2
Barhoumi Y, Fattah AH, Bouaynaya N, Moron F, Kim J, Fathallah-Shaykh HM, Chahine RA, Sotoudeh H. Robust AI-Driven Segmentation of Glioblastoma T1c and FLAIR MRI Series and the Low Variability of the MRIMath© Smart Manual Contouring Platform. Diagnostics (Basel) 2024; 14:1066. PMID: 38893592. PMCID: PMC11172016. DOI: 10.3390/diagnostics14111066.
Abstract
Patients diagnosed with glioblastoma multiforme (GBM) continue to face a dire prognosis. Developing accurate and efficient contouring methods is crucial, as they can significantly advance both clinical practice and research. This study evaluates the AI models developed by MRIMath© for GBM T1c and fluid attenuation inversion recovery (FLAIR) images by comparing their contours to those of three neuro-radiologists using a smart manual contouring platform. The mean overall Sørensen-Dice similarity coefficient (DSC) for the post-contrast T1 (T1c) AI was 95%, with a 95% confidence interval (CI) of 93% to 96%, closely aligning with the radiologists' scores. For true positive T1c images, AI segmentation achieved a mean DSC of 81%, compared to radiologists' scores ranging from 80% to 86%. Sensitivity and specificity for the T1c AI were 91.6% and 97.5%, respectively. The FLAIR AI exhibited a mean DSC of 90% with a 95% CI of 87% to 92%, comparable to the radiologists' scores. It also achieved a mean DSC of 78% for true positive FLAIR slices versus radiologists' scores of 75% to 83%, and recorded a median sensitivity and specificity of 92.1% and 96.1%, respectively. The T1c and FLAIR AI models produced mean Hausdorff distances (<5 mm), volume measurements, kappa scores, and Bland-Altman differences that align closely with those measured by radiologists. Moreover, the inter-user variability between radiologists using the smart manual contouring platform was under 5% for T1c and under 10% for FLAIR images. These results underscore the MRIMath© platform's low inter-user variability and the high accuracy of its T1c and FLAIR AI models.
Affiliation(s)
- Yassine Barhoumi
- MRIMath, 3473 Birchwood Lane, Birmingham, AL 35243, USA
- Abdul Hamid Fattah
- MRIMath, 3473 Birchwood Lane, Birmingham, AL 35243, USA
- Nidhal Bouaynaya
- Department of Electrical and Computer Science, Rowan University, Glassboro, NJ 08028, USA
- Fanny Moron
- Department of Radiology, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA
- Jinsuh Kim
- Department of Radiology, Emory University, 100 Woodruff Circle, Atlanta, GA 30322, USA
- Hassan M. Fathallah-Shaykh
- Department of Neurology, University of Alabama at Birmingham, 510 20th Street South, Birmingham, AL 35294, USA
- Houman Sotoudeh
- Department of Neurology, University of Alabama at Birmingham, 510 20th Street South, Birmingham, AL 35294, USA
3
Zhou R, Wang J, Xia G, Xing J, Shen H, Shen X. Cascade Residual Multiscale Convolution and Mamba-Structured UNet for Advanced Brain Tumor Image Segmentation. Entropy (Basel) 2024; 26:385. PMID: 38785634. PMCID: PMC11120374. DOI: 10.3390/e26050385.
Abstract
In brain imaging segmentation, precise tumor delineation is crucial for diagnosis and treatment planning. Traditional approaches include convolutional neural networks (CNNs), which struggle with processing sequential data, and transformer models, which face limitations in maintaining computational efficiency on large-scale data. This study introduces MambaBTS: a model that synergizes the strengths of CNNs and transformers, is inspired by the Mamba architecture, and integrates cascade residual multi-scale convolutional kernels. The model employs a mixed loss function that blends Dice loss with cross-entropy to refine segmentation accuracy effectively. This novel approach reduces computational complexity, enhances the receptive field, and demonstrates superior performance in accurately segmenting brain tumors in MRI images. Experiments on the MICCAI BraTS 2019 dataset show that MambaBTS achieves Dice coefficients of 0.8450 for the whole tumor (WT), 0.8606 for the tumor core (TC), and 0.7796 for the enhancing tumor (ET), and outperforms existing models in terms of accuracy, computational efficiency, and parameter efficiency. These results underscore the model's potential to offer a balanced, efficient, and effective segmentation method, overcoming the constraints of existing models and promising significant improvements in clinical diagnostics and planning.
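A mixed loss blending Dice loss with cross-entropy, as described above, can be sketched as follows. This is an illustrative NumPy version: the equal 0.5/0.5 blend and the `alpha` parameter are assumptions, since the abstract does not give the actual weighting used in MambaBTS.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """1 - soft Dice: a differentiable surrogate of the Dice coefficient."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def cross_entropy_loss(probs, target, eps=1e-7):
    """Binary cross-entropy over per-voxel foreground probabilities."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean()

def mixed_loss(probs, target, alpha=0.5):
    """Blend of Dice loss and cross-entropy; `alpha` is a hypothetical weight."""
    return alpha * soft_dice_loss(probs, target) + (1.0 - alpha) * cross_entropy_loss(probs, target)

# Example: predicted foreground probabilities vs. a binary ground truth
probs = np.array([0.9, 0.1, 0.8])
target = np.array([1.0, 0.0, 1.0])
loss = mixed_loss(probs, target)
```

The Dice term counters class imbalance (tumor voxels are rare), while the cross-entropy term gives smooth per-voxel gradients; blending the two is a common design in segmentation losses.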
Affiliation(s)
- Rui Zhou
- School of Zhang Jian, Nantong University, Nantong 226019, China
- Ju Wang
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Guijiang Xia
- School of Zhang Jian, Nantong University, Nantong 226019, China
- Jingyang Xing
- School of Zhang Jian, Nantong University, Nantong 226019, China
- Hongming Shen
- School of Microelectronics and School of Integrated Circuits, Nantong University, Nantong 226019, China
- Xiaoyan Shen
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Nantong Research Institute for Advanced Communication Technologies, Nantong University, Nantong 226019, China
4
Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023; 110:102313. PMID: 38011781. DOI: 10.1016/j.compmedimag.2023.102313.
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results on computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, using automated methods to determine tumor location, size, and shape. Many researchers have explored various machine and deep learning approaches to determine the most optimal convolutional solution. In this review paper, we discuss the most effective segmentation techniques based on widely used, publicly available datasets. We also survey federated learning methodologies that enhance global segmentation performance while preserving privacy. Drawing on more than 100 papers, we provide a comprehensive literature review that generalizes the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and a client-based federated model training strategy. This review should help future researchers identify the most promising paths to solving these issues.
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK
5
Rauch P, Stefanits H, Aichholzer M, Serra C, Vorhauer D, Wagner H, Böhm P, Hartl S, Manakov I, Sonnberger M, Buckwar E, Ruiz-Navarro F, Heil K, Glöckel M, Oberndorfer J, Spiegl-Kreinecker S, Aufschnaiter-Hiessböck K, Weis S, Leibetseder A, Thomae W, Hauser T, Auer C, Katletz S, Gruber A, Gmeiner M. Deep learning-assisted radiomics facilitates multimodal prognostication for personalized treatment strategies in low-grade glioma. Sci Rep 2023; 13:9494. PMID: 37302994. PMCID: PMC10258197. DOI: 10.1038/s41598-023-36298-8.
Abstract
Determining the optimal course of treatment for low-grade glioma (LGG) patients is challenging and frequently reliant on subjective judgment and limited scientific evidence. Our objective was to develop a comprehensive deep learning-assisted radiomics model for assessing not only overall survival in LGG, but also the likelihood of future malignancy and glioma growth velocity. We therefore retrospectively included 349 LGG patients to develop a prediction model using clinical, anatomical, and preoperative MRI data. Before performing radiomics analysis, a U2-model for glioma segmentation was utilized to prevent bias, yielding a mean whole-tumor Dice score of 0.837. Overall survival and time to malignancy were estimated using Cox proportional hazard models. In a postoperative model, we derived a C-index of 0.82 (CI 0.79-0.86) for the training cohort over 10 years and 0.74 (CI 0.64-0.84) for the test cohort. Preoperative models showed a C-index of 0.77 (CI 0.73-0.82) for the training and 0.67 (CI 0.57-0.80) for the test sets. Our findings suggest that we can reliably predict the survival of a heterogeneous population of glioma patients in both preoperative and postoperative scenarios. Further, we demonstrate the utility of radiomics in predicting biological tumor activity, such as the time to malignancy and the LGG growth rate.
Affiliation(s)
- P Rauch
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- H Stefanits
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- M Aichholzer
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- C Serra
- Department of Neurosurgery, Clinical Neuroscience Center, University Hospital, University of Zurich, Zurich, Switzerland
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Frauenklinikstrasse 10, 8091, Zurich, Switzerland
- D Vorhauer
- Institute of Statistics, Johannes Kepler University, Linz, Austria
- H Wagner
- Institute of Statistics, Johannes Kepler University, Linz, Austria
- P Böhm
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- S Hartl
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- M Sonnberger
- Institute of Neuroradiology, Kepler University Hospital and Johannes Kepler University, Linz, Austria
- E Buckwar
- Institute of Stochastics, Johannes Kepler University, Linz, Austria
- F Ruiz-Navarro
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- K Heil
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- M Glöckel
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- J Oberndorfer
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- S Spiegl-Kreinecker
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- K Aufschnaiter-Hiessböck
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- S Weis
- Institute of Pathology and Neuropathology, Kepler University Hospital and Johannes Kepler University, Linz, Austria
- A Leibetseder
- Department of Neurology, Kepler University Hospital and Johannes Kepler University, Linz, Austria
- W Thomae
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- T Hauser
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- C Auer
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- S Katletz
- Department of Neurology, Kepler University Hospital and Johannes Kepler University, Linz, Austria
- A Gruber
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
- M Gmeiner
- Department of Neurosurgery, Kepler University Hospital, Wagner-Jauregg Weg 15, 4020, Linz, Austria
- Johannes Kepler University, Altenberger Strasse 69, 4040, Linz, Austria
6
Nalepa J, Kotowski K, Machura B, Adamski S, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Krason A, Arcadu F, Tessier J. Deep learning automates bidimensional and volumetric tumor burden measurement from MRI in pre- and post-operative glioblastoma patients. Comput Biol Med 2023; 154:106603. PMID: 36738710. DOI: 10.1016/j.compbiomed.2023.106603.
Abstract
Tumor burden assessment by magnetic resonance imaging (MRI) is central to evaluating treatment response in glioblastoma. This assessment is, however, complex to perform and subject to high variability due to the heterogeneity and complexity of the disease. In this work, we tackle this issue and propose a deep learning pipeline for the fully automated end-to-end analysis of glioblastoma patients. Our approach first identifies tumor sub-regions, including the enhancing tumor, peritumoral edema, and surgical cavity, and then calculates the volumetric and bidimensional measurements that follow the current Response Assessment in Neuro-Oncology (RANO) criteria. We also introduce a rigorous manual annotation process in which human experts delineated the tumor sub-regions and recorded their segmentation confidences, which are later used while training the deep learning models. Our extensive experimental study covered 760 pre-operative and 504 post-operative adult glioma patients obtained from a public database (acquired at 19 sites in years 2021-2020) and from a clinical treatment trial (47 and 69 sites for pre-/post-operative patients, 2009-2011). Backed by thorough quantitative, qualitative, and statistical analysis, the results show that our pipeline accurately segments pre- and post-operative MRIs in a fraction of the manual delineation time (up to 20 times faster than humans). Volumetric measurements were in strong agreement with experts, with Intraclass Correlation Coefficients (ICC) of 0.959, 0.703, and 0.960 for ET, ED, and cavity, respectively. Similarly, automated RANO compared favorably with experienced readers (ICC: 0.681 and 0.866), producing consistent and accurate results. Additionally, we showed that RANO measurements are not always sufficient to quantify tumor burden. The high performance of the automated tumor burden measurement highlights the tool's potential to considerably improve and simplify the radiological evaluation of glioblastoma in clinical trials and clinical practice.
Affiliation(s)
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Department of Algorithmics and Software, Silesian University of Technology, Gliwice, Poland
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Agata Krason
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Filippo Arcadu
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
7
Singh S, Singh BK, Kumar A. Magnetic Resonance Imaging Image-Based Segmentation of Brain Tumor Using the Modified Transfer Learning Method. J Med Phys 2022; 47:315-321. PMID: 36908498. PMCID: PMC9997534. DOI: 10.4103/jmp.jmp_52_22.
Abstract
Purpose: The goal of this study was to improve overall brain tumor segmentation (BraTS) accuracy. A form of convolutional neural network called the three-dimensional (3D) U-Net was utilized to segment various tumor regions on brain 3D magnetic resonance imaging images using a transfer learning technique. Materials and Methods: The dataset used for this study was obtained from the multimodal BraTS challenges. The total number of studies was 2240, obtained from the BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 challenges, and each study had five series: T1, contrast-enhanced T1, FLAIR, T2, and a segmented mask file (seg), all in Neuroimaging Informatics Technology Initiative (NIfTI) format. The proposed method employs a 3D U-Net that was trained separately on each of the four datasets by transferring weights across them. Results: The overall training accuracy, validation accuracy, mean Dice coefficient, and mean intersection over union achieved were 99.35%, 98.93%, 0.9875, and 0.8738, respectively. Conclusion: The proposed method for tumor segmentation outperforms existing methods.
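The sequential transfer-learning scheme described in this abstract — train on one BraTS release, then reuse those weights to initialise training on the next — can be sketched as below. This is a toy illustration only: `train_one_dataset` is a hypothetical stand-in for a full 3D U-Net training loop, and the numeric "datasets" are invented values.

```python
# Sequential fine-tuning across BraTS releases: the weights learned on one
# dataset initialise training on the next, rather than restarting from scratch.

def train_one_dataset(weights, dataset):
    # Placeholder "training" step: nudge each weight toward the dataset mean.
    mean = sum(dataset) / len(dataset)
    return [w + 0.1 * (mean - w) for w in weights]

datasets = {
    "BraTS2018": [0.2, 0.4],
    "BraTS2019": [0.3, 0.5],
    "BraTS2020": [0.4, 0.6],
    "BraTS2021": [0.5, 0.7],
}

weights = [0.0, 0.0]  # stand-in for random initialisation
for name, data in datasets.items():
    # Transfer learning: reuse the weights produced by the previous round.
    weights = train_one_dataset(weights, data)
```

In a real implementation the weight handoff would be a checkpoint load (e.g. saving the model after each dataset and restoring it before the next training run) rather than an in-memory list.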
Affiliation(s)
- Sandeep Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Department of Radiation Oncology, Lady Hardinge Medical College and Associated Hospitals, New Delhi, India
- Benoy Kumar Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Anuj Kumar
- Department of Radiotherapy, SN Medical College, Agra, Uttar Pradesh, India
8
Khorasani A, Kafieh R, Saboori M, Tavakoli MB. Glioma segmentation with DWI weighted images, conventional anatomical images, and post-contrast enhancement magnetic resonance imaging images by U-Net. Phys Eng Sci Med 2022; 45:925-934. PMID: 35997927. DOI: 10.1007/s13246-022-01164-w.
Abstract
Glioma segmentation is believed to be one of the most important stages of treatment management. Recent developments in magnetic resonance imaging (MRI) protocols have led to renewed interest in automatic glioma segmentation with different MRI image weights, and U-Net is a major area of interest within this field. This paper examines the impact of different input MRI image weights on U-Net output performance for glioma segmentation. One hundred forty-nine glioma patients were scanned with a 1.5 T MRI scanner. The main MRI image weights acquired were diffusion-weighted imaging (DWI) weighted images (b50, b500, b1000, the apparent diffusion coefficient (ADC) map, and the exponential apparent diffusion coefficient (eADC) map), anatomical image weights (T2, T1, T2-FLAIR), and post-enhancement image weights (T1Gd). U-Net with data augmentation was used to segment the glioma tumors, and the Dice coefficient and accuracy enabled comparison with previous studies. A first set of analyses examined the impact of epoch number on U-Net accuracy, and n_epoch = 20 was selected for U-Net training. The mean Dice coefficients for the b50, b500, b1000, ADC map, eADC map, T2, T1, T2-FLAIR, and T1Gd image weights were 0.892, 0.872, 0.752, 0.931, 0.944, 0.762, 0.721, 0.896, and 0.694, respectively. This study found that DWI image weights have a higher diagnostic value for glioma segmentation with U-Net than anatomical and post-enhancement image weights; in particular, the ADC and eADC maps yielded the highest performance.
Affiliation(s)
- Amir Khorasani
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Rahele Kafieh
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Department of Engineering, Durham University, Durham, UK
- Masih Saboori
- Department of Neurosurgery, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohamad Bagher Tavakoli
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
9
Shaukat Z, Farooq QUA, Tu S, Xiao C, Ali S. A state-of-the-art technique to perform cloud-based semantic segmentation using deep learning 3D U-Net architecture. BMC Bioinformatics 2022; 23:251. PMID: 35751030. PMCID: PMC9229514. DOI: 10.1186/s12859-022-04794-9.
Abstract
Glioma is the most aggressive and dangerous primary brain tumor, with a survival time of less than 14 months. Tumor segmentation is a necessary task in the image processing of gliomas and is important for timely diagnosis and starting treatment. In this paper, we present a unique cloud-based 3D U-Net method to perform brain tumor segmentation using the BraTS dataset. The system was trained effectively with the Adam optimizer over multiple hyperparameters. We obtained an average Dice score of 95%, calculated using the Sørensen-Dice similarity coefficient, making ours the first cloud-based method to achieve this accuracy. We also performed an extensive literature review of brain tumor segmentation methods implemented in the last five years to obtain a state-of-the-art picture of well-known methodologies with high Dice scores. In comparison to previously implemented architectures, our cloud-based 3D U-Net framework ranks on top in terms of accuracy for glioma segmentation.
Affiliation(s)
- Zeeshan Shaukat
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Faculty of Computer Science, University of South Asia, Lahore, Pakistan
- Qurat Ul Ain Farooq
- Faculty of Environmental and Life Sciences, Beijing University of Technology, Beijing, People's Republic of China
- Shanshan Tu
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Chuangbai Xiao
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
- Saqib Ali
- Faculty of Information Technology, Beijing University of Technology, Beijing, People's Republic of China
10
De Asis-Cruz J, Krishnamurthy D, Jose C, Cook KM, Limperopoulos C. FetalGAN: Automated Segmentation of Fetal Functional Brain MRI Using Deep Generative Adversarial Learning and Multi-Scale 3D U-Net. Front Neurosci 2022; 16:887634. PMID: 35747213. PMCID: PMC9209698. DOI: 10.3389/fnins.2022.887634.
Abstract
An important step in the preprocessing of resting-state functional magnetic resonance images (rs-fMRI) is the separation of brain from non-brain voxels. Widely used imaging tools such as FSL's BET2 and AFNI's 3dSkullStrip accomplish this task effectively in children and adults. In fetal functional brain imaging, however, the presence of maternal tissue around the brain, coupled with the non-standard position of the fetal head, limits the usefulness of these tools. Accurate brain masks are thus generated manually, a time-consuming and tedious process that slows down preprocessing of fetal rs-fMRI. Recently, deep learning-based segmentation models such as convolutional neural networks (CNNs) have been increasingly used for automated segmentation of medical images, including the fetal brain. Here, we propose a computationally efficient end-to-end generative adversarial neural network (GAN) for segmenting the fetal brain. This method, which we call FetalGAN, yielded whole-brain masks that closely approximated the manually labeled ground truth. FetalGAN performed better than the 3D U-Net model and BET2: FetalGAN, Dice score = 0.973 ± 0.013, precision = 0.977 ± 0.015; 3D U-Net, Dice score = 0.954 ± 0.054, precision = 0.967 ± 0.037; BET2, Dice score = 0.856 ± 0.084, precision = 0.758 ± 0.113. FetalGAN was also faster than 3D U-Net and the manual method (7.35 s vs. 10.25 s vs. ∼5 min/volume). To the best of our knowledge, this is the first successful implementation of a 3D CNN with a GAN on fetal fMRI brain images and represents a significant advance toward fully automated processing of rs-fMRI images.
Affiliation(s)
- Josepheen De Asis-Cruz
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Dhineshvikram Krishnamurthy
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Chris Jose
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Kevin M. Cook
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Catherine Limperopoulos
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
Collapse
|