1
Kundal K, Rao KV, Majumdar A, Kumar N, Kumar R. Comprehensive benchmarking of CNN-based tumor segmentation methods using multimodal MRI data. Comput Biol Med 2024; 178:108799. [PMID: 38925087] [DOI: 10.1016/j.compbiomed.2024.108799] [Received: 02/07/2024] [Revised: 06/12/2024] [Accepted: 06/19/2024]
Abstract
Magnetic resonance imaging (MRI) has become an essential, frontline technique for detecting brain tumors. However, segmenting tumors manually from scans is laborious and time-consuming, which has driven a trend towards fully automated methods for precise tumor segmentation in MRI scans. Accurate tumor segmentation is crucial for improved diagnosis, treatment, and prognosis. This study benchmarks and evaluates four widely used CNN-based methods for brain tumor segmentation: CaPTk, 2DVNet, EnsembleUNets, and ResNet50. Using 1251 multimodal MRI scans from the BraTS2021 dataset, we compared the performance of these methods against a radiologist-assisted reference dataset of segmented images. This comparison was conducted both on the segmented images directly and on radiomic features extracted from them using pyRadiomics. Performance was assessed using the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD). EnsembleUNets excelled, achieving a DSC of 0.93 and an HD of 18, outperforming the other methods. Further comparative analysis of the radiomic features confirmed EnsembleUNets as the most precise segmentation method: it recorded a Concordance Correlation Coefficient (CCC) of 0.79, a Total Deviation Index (TDI) of 1.14, and a Root Mean Square Error (RMSE) of 0.53, underscoring its superior performance. We also performed validation on an independent dataset of 611 samples (UPENN-GBM), which further supported the accuracy of EnsembleUNets, with a DSC of 0.85 and an HD of 17.5. These findings provide valuable insight into the efficacy of EnsembleUNets, supporting informed decisions for accurate brain tumor segmentation.
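The Dice Similarity Coefficient reported throughout this entry is a simple overlap ratio between the predicted and reference masks; the sketch below is an illustrative NumPy computation on toy binary masks, not the authors' benchmarking code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 masks standing in for a predicted and a reference segmentation
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```

The Hausdorff Distance quoted alongside it measures worst-case boundary disagreement rather than overlap, which is why the two metrics are usually reported together.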
Affiliation(s)
- Kavita Kundal
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- K Venkateswara Rao
- Department of Neurosurgical Oncology, Basavatarakam Indo American Cancer Hospital & Research Institute, Hyderabad, Telangana, 500034, India
- Arunabha Majumdar
- Department of Mathematics, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- Neeraj Kumar
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India; Department of Liberal Arts, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India
- Rahul Kumar
- Department of Biotechnology, Indian Institute of Technology Hyderabad, Kandi, Telangana, 502284, India.
2
Khodadadi Shoushtari F, Dehkordi ANV, Sina S. Quantitative and Visual Analysis of Data Augmentation and Hyperparameter Optimization in Deep Learning-Based Segmentation of Low-Grade Glioma Tumors Using Grad-CAM. Ann Biomed Eng 2024; 52:1359-1377. [PMID: 38409433] [DOI: 10.1007/s10439-024-03461-9] [Received: 10/25/2023] [Accepted: 01/29/2024]
Abstract
This study performs a quantitative and visual investigation of the effect of data augmentation and hyperparameter optimization on the accuracy of deep learning-based segmentation of LGG tumors. The study employed the MobileNetV2 and ResNet backbones with atrous convolution in the DeepLabV3+ structure. The Grad-CAM tool was used to interpret the effect of augmentation and network optimization on segmentation performance. A wide investigation was performed to optimize the network hyperparameters, and 35 different models were examined to evaluate different data augmentation techniques. The results indicated that incorporating data augmentation and optimization can improve the performance of segmenting brain LGG tumors by up to 10%. Our extensive investigation of augmentation techniques indicated that enlarging the data with 90° and 225° rotations and with up-to-down and left-to-right flips is most effective. MobileNetV2 as the backbone, "Focal Loss" as the loss function, and "Adam" as the optimizer showed the superior results. The optimal model (DLG-Net) achieved an overall accuracy of 96.1% with a loss value of 0.006. Specifically, the segmentation performance for Whole Tumor (WT), Tumor Core (TC), and Enhanced Tumor (ET) reached a Dice Similarity Coefficient (DSC) of 89.4%, 70.1%, and 49.9%, respectively. Simultaneous visual and quantitative assessment of data augmentation and network optimization can thus lead to an optimal model with reasonable performance in segmenting LGG tumors.
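The flip and 90° rotation augmentations the abstract singles out are plain array operations; the sketch below is a minimal NumPy illustration (the 225° rotation also used in the study requires an interpolating rotation, e.g. scipy.ndimage.rotate, and is omitted here).

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Geometric augmentations of the kind the study found most effective."""
    return [
        np.rot90(image, k=1),  # 90° rotation
        np.flipud(image),      # up-to-down flip
        np.fliplr(image),      # left-to-right flip
    ]

img = np.arange(9).reshape(3, 3)  # stand-in for an MRI slice
views = augment(img)              # three extra training views per slice
print(len(views))
```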
Affiliation(s)
- Azimeh N V Dehkordi
- Department of Physics, Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran.
- Najafabad Branch, Islamic Azad University, Najafabad, 8514143131, Iran.
- Sedigheh Sina
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
3
Hou W, Zou L, Wang D. Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks. Surg Innov 2024:15533506241246576. [PMID: 38619039] [DOI: 10.1177/15533506241246576]
Abstract
OBJECTIVE To propose a transfer learning-based method for tumor segmentation in intraoperative fluorescence images, assisting surgeons to efficiently and accurately identify tumor boundaries. METHODS We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and against the traditional level-set method. RESULTS The transfer learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained DeepLabv3+ network performed exceptionally well, with a segmentation accuracy of 96.48%; indeed, all models achieved segmentation accuracies of over 90% on this dataset. CONCLUSION To the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescence images for the first time. The results show that, compared to traditional methods, deep learning has significant advantages in improving segmentation performance, and transfer learning enables deep learning models to perform better on small-sample fluorescence image data than end-to-end training. This provides strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.
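The transfer-learning recipe described here — reuse frozen low-level features, fine-tune only a task-specific head on the small fluorescence datasets — can be illustrated without any deep-learning framework. The NumPy sketch below uses a fixed random projection as a stand-in for pre-trained features and made-up labels; it is not the paper's DCNN pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" frozen feature extractor: a fixed random projection stands in
# for low-level features learned on ImageNet (illustrative only).
W_frozen = rng.standard_normal((16, 8))

def features(x: np.ndarray) -> np.ndarray:
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

# Small "fluorescence" dataset: 64 pixels with 16 raw intensities each,
# paired with hypothetical tumor/background labels.
x = rng.standard_normal((64, 16))
y = (rng.random(64) > 0.5).astype(float)

# Fine-tune only the per-pixel logistic head; W_frozen is never updated.
w = np.zeros(8)
for _ in range(200):
    p = sigmoid(features(x) @ w)
    w -= 0.1 * features(x).T @ (p - y) / len(y)  # cross-entropy gradient step

print(w.shape)  # (8,)
```

Freezing the feature extractor is what lets the small dataset update only a few parameters, which is the advantage the abstract attributes to transfer learning on small samples.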
Affiliation(s)
- Weijia Hou
- College of Science, Nanjing Forestry University, Nanjing, China
- Liwen Zou
- Department of Mathematics, Nanjing University, Nanjing, China
- Dong Wang
- Group A: Large-Scale Scientific Computing and Media Imaging, Nanjing Center for Applied Mathematics, Nanjing, China
4
Wang L, Zhang X, Tian C, Chen S, Deng Y, Liao X, Wang Q, Si W. PlaqueNet: deep learning enabled coronary artery plaque segmentation from coronary computed tomography angiography. Vis Comput Ind Biomed Art 2024; 7:6. [PMID: 38514491] [PMCID: PMC11349722] [DOI: 10.1186/s42492-024-00157-8] [Received: 10/25/2023] [Accepted: 03/03/2024]
Abstract
Cardiovascular disease, primarily caused by atherosclerotic plaque formation, is a significant health concern. The early detection of these plaques is crucial for targeted therapies and for reducing the risk of cardiovascular disease. This study presents PlaqueNet, a solution for segmenting coronary artery plaques from coronary computed tomography angiography (CCTA) images. For feature extraction, an advanced residual net module was utilized, which integrates a depthwise residual optimization module into the network branches, enhancing feature extraction, avoiding information loss, and addressing gradient issues during training. To improve segmentation accuracy, a depthwise atrous spatial pyramid pooling module based on bicubic efficient channel attention (DASPP-BICECA) is introduced. The BICECA component amplifies local feature sensitivity, whereas the DASPP component expands the network's information-gathering scope, resulting in elevated segmentation accuracy. Additionally, BINet, a module for joint network loss evaluation, is proposed. It optimizes the segmentation model without affecting the segmentation results; combined with the DASPP-BICECA module, BINet enhances overall efficiency. The proposed CCTA segmentation algorithm outperformed the other three comparative algorithms, achieving an intersection over union of 87.37%, Dice of 93.26%, accuracy of 93.12%, mean intersection over union of 93.68%, mean Dice of 96.63%, and mean pixel accuracy of 96.55%.
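The intersection-over-union and Dice figures quoted above are related overlap measures (Dice = 2·IoU / (1 + IoU)); the NumPy sketch below demonstrates both on toy masks and is not the PlaqueNet evaluation code.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0

pred = np.array([1, 1, 1, 1, 0, 0])   # toy flattened plaque masks
truth = np.array([1, 1, 1, 0, 0, 0])
j = iou(pred, truth)                  # 3 / 4 = 0.75
dice = 2 * j / (1 + j)                # same overlap expressed as Dice: 6/7
print(j, dice)
```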
Affiliation(s)
- Linyuan Wang
- Department of Cardiovascular Surgery, the Affiliated Hospital of Shanxi Medical University, Shanxi Cardiovascular Hospital (Institute), Shanxi Clinical Medical Research Center for Cardiovascular Disease, Taiyuan, 030024, Shanxi, China
- Xiaofeng Zhang
- Department of Mechanical Engineering, Nantong University, Nantong, 226019, Jiangsu, China
- Congyu Tian
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China
- Shu Chen
- Department of Cardiovascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, Hubei, China
- Yongzhi Deng
- Department of Cardiovascular Surgery, the Affiliated Hospital of Shanxi Medical University, Shanxi Cardiovascular Hospital (Institute), Shanxi Clinical Medical Research Center for Cardiovascular Disease, Taiyuan, 030024, Shanxi, China.
- Xiangyun Liao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China.
- Qiong Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China
- Weixin Si
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China
5
Domadia SG, Thakkar FN, Ardeshana MA. Segmenting brain glioblastoma using dense-attentive 3D DAF 2. Phys Med 2024; 119:103304. [PMID: 38340694] [DOI: 10.1016/j.ejmp.2024.103304] [Received: 09/21/2023] [Revised: 12/18/2023] [Accepted: 01/29/2024]
Abstract
Precise delineation of brain glioblastoma through segmentation is pivotal for diagnosis, formulating treatment strategies, and evaluating therapeutic progress. Precisely identifying glioblastoma within multimodal MRI scans poses a significant challenge in medical image analysis, as different intensity profiles are observed across the sub-regions, reflecting diverse tumor biological properties. Convolutional neural networks have displayed astounding performance in segmenting glioblastoma in recent years. This paper introduces a methodology for brain glioblastoma segmentation that combines a Dense-Attention 3D U-Net with a fusion strategy and the focal Tversky loss function. By fusing information from segmentation maps at multiple resolutions, the model enhances its ability to discern intricate tumor boundaries, and the focal Tversky loss effectively emphasizes critical regions and mitigates class imbalance. Recursive Convolution Block 2 is applied after fusion to ensure efficient utilization of all accessible features while maintaining rapid convergence. The network's effectiveness is assessed on the BraTS 2020 and BraTS 2021 datasets. Results show a Dice similarity coefficient comparable to other methods, with increased efficiency and segmentation performance. The architecture achieved an average Dice similarity coefficient of 82.4% and an average Hausdorff distance (HD95) of 10.426, a consistent improvement over baseline models such as U-Net, Attention U-Net, V-Net, and Res U-Net, indicating the effectiveness of the proposed architecture.
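The focal Tversky loss named here generalizes Dice loss with separate weights for false negatives and false positives plus a focusing exponent; the sketch below is a NumPy illustration using commonly cited parameter values (α = 0.7, β = 0.3, γ = 0.75 are assumptions, not values stated in the abstract).

```python
import numpy as np

def focal_tversky_loss(p, g, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for soft predictions p against binary ground truth g."""
    tp = np.sum(p * g)               # soft true positives
    fn = np.sum((1.0 - p) * g)       # soft false negatives (weighted by alpha)
    fp = np.sum(p * (1.0 - g))       # soft false positives (weighted by beta)
    tversky = tp / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma  # gamma < 1 focuses training on hard cases

p = np.array([0.9, 0.8, 0.1, 0.2])   # toy voxel probabilities
g = np.array([1.0, 1.0, 0.0, 0.0])   # toy ground-truth labels
print(focal_tversky_loss(p, g))
```

Setting α > β penalizes missed tumor voxels more than false alarms, which is how this loss mitigates the class imbalance the abstract mentions.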
6
Karimipourfard M, Sina S, Mahani H, Alavi M, Yazdi M. Impact of deep learning-based multiorgan segmentation methods on patient-specific internal dosimetry in PET/CT imaging: A comparative study. J Appl Clin Med Phys 2024; 25:e14254. [PMID: 38214349] [PMCID: PMC10860559] [DOI: 10.1002/acm2.14254] [Received: 08/27/2023] [Revised: 10/29/2023] [Accepted: 11/30/2023]
Abstract
PURPOSE Accurate and fast multiorgan segmentation is essential in image-based internal dosimetry in nuclear medicine. Conventional manual PET image segmentation is widely used, but it is time-consuming and subject to human error. This study exploited 2D and 3D deep learning (DL) models: key organs in the trunk of the body were segmented and used as the reference for the networks. METHODS The pre-trained p2p-U-Net-GAN and HighRes3D architectures were fine-tuned with PET-only images as inputs. Additionally, the HighRes3D model was alternatively trained with PET/CT images. Evaluation metrics such as sensitivity (SEN), specificity (SPC), intersection over union (IoU), and Dice scores were used to assess the performance of the networks. The impact of the DL-assisted PET image segmentation methods was further assessed using Monte Carlo (MC)-derived S-values for internal dosimetry. RESULTS A fair comparison with manual low-dose CT-aided segmentation of the PET images was also conducted. Although both 2D and 3D models performed well, HighRes3D offers superior performance, with Dice scores higher than 0.90. Key evaluation metrics such as SEN, SPC, and IoU vary within the 0.89-0.93, 0.98-0.99, and 0.87-0.89 intervals, respectively, indicating the encouraging performance of the models. The percentage differences between the manual and DL segmentation methods in the calculated S-values varied between 0.1% and 6%, with the maximum attributed to the stomach. CONCLUSION The findings show that while the incorporation of anatomical information from the CT data offers superior performance in terms of Dice score, the performance of HighRes3D remains comparable without the extra CT channel. Both proposed DL-based methods provide automated and fast segmentation of whole-body PET/CT images with promising evaluation metrics; of the two, HighRes3D performs better and can therefore be the method of choice for 18F-FDG-PET image segmentation.
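The sensitivity and specificity intervals quoted above are per-voxel confusion-matrix rates; the sketch below is a toy NumPy computation, not the study's PET/CT evaluation pipeline.

```python
import numpy as np

def sen_spc(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Per-voxel sensitivity (true-positive rate) and specificity (true-negative rate)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

pred = np.array([1, 1, 0, 0, 1, 0])   # toy flattened organ masks
truth = np.array([1, 1, 1, 0, 0, 0])
print(sen_spc(pred, truth))  # (2/3, 2/3): one missed voxel, one false alarm
```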
Affiliation(s)
- Sedigheh Sina
- Department of Ray-Medical Engineering, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
- Hojjat Mahani
- Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran, Iran
- Mehrosadat Alavi
- Department of Nuclear Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Mehran Yazdi
- School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
7
Sheikhi M, Sina S, Karimipourfard M. Deep-learned generation of renal dual-energy CT from a single-energy scan. Clin Radiol 2024; 79:e17-e25. [PMID: 37923626] [DOI: 10.1016/j.crad.2023.09.021] [Received: 02/28/2023] [Revised: 09/14/2023] [Accepted: 09/24/2023]
Abstract
AIM To investigate the role of a deep-learning (DL) method in generating dual-energy computed tomography (DECT) images from single-energy images for precise diagnosis of kidney stone type. MATERIALS AND METHODS DECT scans of 23 patients were acquired, and the stone types were investigated based on the DECT software suggestions. The data were divided into two paired groups: 120 kVp input with 80 kVp target, and 120 kVp input with 135 kVp target. A p2p-UNet-GAN was exploited to generate the different energy images based on the common CT protocols. RESULTS The images generated by the generative adversarial network (GAN) were evaluated using the SSIM, PSNR, and MSE metrics, with values of 0.85-0.95, 28-32, and 0.85-0.89, respectively. The attenuation ratio of the test patient images was estimated and compared with the real patient reports. The network achieved high accuracy in stone region localisation and produced accurate stone type predictions. CONCLUSION This study presents a useful DL-based method to reduce patient radiation dose and facilitate the prediction of urinary stone types using single-energy CT imaging.
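The PSNR range quoted above follows directly from the mean squared error; the sketch below is a toy NumPy check (SSIM involves windowed local statistics and is omitted), not the study's evaluation code.

```python
import numpy as np

def psnr(generated: np.ndarray, reference: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, computed from the mean squared error."""
    mse = float(np.mean((generated - reference) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8))     # stand-in reference DECT image
gen = ref + 0.1            # uniform 0.1 error -> MSE = 0.01
print(psnr(gen, ref))      # 10 * log10(1 / 0.01) = 20 dB
```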
Affiliation(s)
- M Sheikhi
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Abu Ali Sina Hospital, Shiraz, Iran
- S Sina
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran.
- M Karimipourfard
- Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran