1
Liu H, Huang J, Li Q, Guan X, Tseng M. A deep convolutional neural network for the automatic segmentation of glioblastoma brain tumor: Joint spatial pyramid module and attention mechanism network. Artif Intell Med 2024; 148:102776. [PMID: 38325925] [DOI: 10.1016/j.artmed.2024.102776]
Abstract
This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming to replace the manual segmentation method that is both time-consuming and labor-intensive. Automatic segmentation faces many challenges in finely segmenting sub-regions from multi-sequence magnetic resonance images because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and varying sub-region sizes. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method was tested on the public BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 benchmark datasets. The Dice scores on the enhancing tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018; 77.14%, 89.58%, and 83.33% on BraTS 2019; 77.80%, 90.04%, and 83.18% on BraTS 2020; and 83.48%, 90.70%, and 88.94% on BraTS 2021, offering performance on par with state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduces the hardware requirements, and the average time taken to segment one case is only 1.48 s; these two benefits make the proposed network highly competitive for clinical practice.
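The Dice scores reported throughout these benchmark entries quantify the overlap between a predicted and a reference segmentation mask. A minimal sketch in plain Python (the function name and flat-list mask representation are illustrative, not taken from the paper):

```python
def dice_score(pred, target):
    """Dice similarity coefficient of two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Empty masks on both sides are treated as a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 5-voxel masks: 2 overlapping voxels, 3 predicted and 3 reference voxels.
pred = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
score = dice_score(pred, target)  # 2*2 / (3+3) = 2/3
```

A Dice score of 1.0 means perfect overlap; the per-region scores above (enhancing tumor, whole tumor, tumor core) are averages of this quantity over the test cases.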
Affiliation(s)
- Hengxin Liu
- School of Microelectronics, Tianjin University, Tianjin, China
- Jingteng Huang
- School of Microelectronics, Tianjin University, Tianjin, China
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin, China
- Xin Guan
- School of Microelectronics, Tianjin University, Tianjin, China
- Minglang Tseng
- Institute of Innovation and Circular Economy, Asia University, Taichung, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan; UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, 43000 Bangi, Selangor, Malaysia; Department of Industrial Engineering, Khon Kaen University, 40002, Thailand
2
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
3
Ranjbarzadeh R, Zarbakhsh P, Caputo A, Tirkolaee EB, Bendechache M. Brain tumor segmentation based on optimized convolutional neural network and improved chimp optimization algorithm. Comput Biol Med 2024; 168:107723. [PMID: 38000242] [DOI: 10.1016/j.compbiomed.2023.107723]
Abstract
Reliable and accurate brain tumor segmentation is a challenging task even with appropriate acquisition of brain images. Tumor grading and segmentation using Magnetic Resonance Imaging (MRI) are necessary steps for correct diagnosis and treatment planning. Different MRI sequence images (T1, Flair, T1ce, T2, etc.) identify different parts of the tumor. Due to the diversity in the illumination of each brain imaging modality, different information and details can be obtained from each input modality. Therefore, by using various MRI modalities, the diagnosis system is capable of finding more unique details that lead to a better segmentation result, especially at fuzzy borders. In this study, to achieve an automatic and robust brain tumor segmentation framework using four MRI sequence images, an optimized Convolutional Neural Network (CNN) is proposed. All weight and bias values of the CNN model are adjusted using an Improved Chimp Optimization Algorithm (IChOA). In the first step, all four input images are normalized to find potential areas of the existing tumor. Next, by employing the IChOA, the best features are selected using a Support Vector Machine (SVM) classifier. Finally, the best extracted features are fed to the optimized CNN model to classify each object for brain tumor segmentation. Accordingly, the proposed IChOA is utilized both for feature selection and for optimizing hyperparameters in the CNN model. Experimental results on the BraTS 2018 dataset demonstrate superior performance (precision of 97.41%, recall of 95.78%, and Dice score of 97.04%) compared to existing frameworks.
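The precision, recall, and Dice figures above all derive from the same voxel-wise confusion counts. A small, hypothetical helper (not from the paper) shows how the three metrics relate:

```python
def segmentation_metrics(pred, target):
    """Precision, recall, and Dice from voxel-wise confusion counts
    (masks given as flat 0/1 lists)."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Dice is the harmonic mean of precision and recall: 2TP / (2TP + FP + FN).
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dice

# tp=2, fp=1, fn=1 -> precision 2/3, recall 2/3, Dice 2/3
precision, recall, dice = segmentation_metrics([1, 1, 0, 1], [1, 0, 1, 1])
```

Because Dice is the harmonic mean of precision and recall, a high Dice score requires both over-segmentation (FP) and under-segmentation (FN) to be low.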
Affiliation(s)
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland.
- Payam Zarbakhsh
- Electrical and Electronic Engineering Department, Cyprus International University, Via Mersin 10, Nicosia, Northern Cyprus, Turkey
- Annalina Caputo
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Erfan Babaee Tirkolaee
- Department of Industrial Engineering, Istinye University, Istanbul, Turkey; Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan, Taiwan; Department of Industrial and Mechanical Engineering, Lebanese American University, Byblos, Lebanon
- Malika Bendechache
- Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland
4
Anand V, Gupta S, Gupta D, Gulzar Y, Xin Q, Juneja S, Shah A, Shaikh A. Weighted Average Ensemble Deep Learning Model for Stratification of Brain Tumor in MRI Images. Diagnostics (Basel) 2023; 13:1320. [PMID: 37046538] [PMCID: PMC10093740] [DOI: 10.3390/diagnostics13071320]
Abstract
Brain tumor diagnosis at an early stage can improve the chances of successful treatment and better patient outcomes. In the biomedical industry, non-invasive diagnostic procedures, such as magnetic resonance imaging (MRI), can be used to diagnose brain tumors. Deep learning, a type of artificial intelligence, can analyze MRI images in a matter of seconds, reducing the time required for diagnosis and potentially improving patient outcomes. Furthermore, an ensemble model can help increase classification accuracy by combining the strengths of multiple models and compensating for their individual weaknesses. Therefore, in this research, a weighted average ensemble deep learning model is proposed for the classification of brain tumors. For the weighted ensemble classification model, three different feature spaces are taken from a transfer-learning VGG19 model, a Convolutional Neural Network (CNN) model without augmentation, and a CNN model with augmentation. These three feature spaces are ensembled with the best combination of weights, i.e., weight1, weight2, and weight3, found using grid search. The dataset used for simulation is taken from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, with 3929 MRI images of 110 patients. The ensemble model helps reduce overfitting by combining multiple models that have learned different aspects of the data. The proposed ensemble model outperforms the three individual models for detecting brain tumors in terms of accuracy, precision, and F1-score. Therefore, the proposed model can act as a second-opinion tool for radiologists to diagnose the tumor from MRI images of the brain.
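The weight search described above can be sketched as an exhaustive grid over convex combinations of three models' class probabilities. The function names, the 0.1 grid step, and the toy data below are illustrative assumptions, not details taken from the paper:

```python
import itertools

def fuse(prob_vectors, weights):
    """Weighted average of per-model class-probability vectors."""
    n_classes = len(prob_vectors[0])
    return [sum(w * p[i] for w, p in zip(weights, prob_vectors))
            for i in range(n_classes)]

def grid_search_weights(model_probs, labels, step=0.1):
    """Exhaustive search over (w1, w2, w3), w1 + w2 + w3 = 1, maximizing accuracy.
    model_probs[m][n] is model m's class-probability vector for sample n."""
    grid = [round(i * step, 2) for i in range(int(round(1 / step)) + 1)]
    best_w, best_acc = None, -1.0
    for w1, w2 in itertools.product(grid, repeat=2):
        w3 = round(1.0 - w1 - w2, 2)
        if w3 < 0:
            continue  # weights must form a convex combination
        correct = 0
        for n, label in enumerate(labels):
            fused = fuse([m[n] for m in model_probs], (w1, w2, w3))
            correct += fused.index(max(fused)) == label
        acc = correct / len(labels)
        if acc > best_acc:
            best_w, best_acc = (w1, w2, w3), acc
    return best_w, best_acc

# Three toy models on two samples with true labels [0, 1];
# only model 1 classifies both samples correctly.
model_probs = [
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.2, 0.8], [0.8, 0.2]],
    [[0.4, 0.6], [0.6, 0.4]],
]
best_w, best_acc = grid_search_weights(model_probs, [0, 1])
```

On this toy data, the search should up-weight the only accurate model; a real run would use held-out validation predictions rather than training data.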
Affiliation(s)
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Deepali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia
- Qin Xin
- Faculty of Science and Technology, University of the Faroe Islands, Vestarabryggja 15, FO 100 Torshavn, Faroe Islands, Denmark
- Sapna Juneja
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Gombak 53100, Selangor, Malaysia
- Asadullah Shah
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Gombak 53100, Selangor, Malaysia
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 55461, Saudi Arabia
5
SSO-RBNN driven brain tumor classification with Saliency-K-means segmentation technique. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104356]
6
Tong J, Wang C. A dual tri-path CNN system for brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104411]
7
Cao Y, Zhou W, Zang M, An D, Feng Y, Yu B. MBANet: A 3D convolutional neural network with multi-branch attention for brain tumor segmentation from MRI images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104296]
8
Zhang R, Jia S, Adamu MJ, Nie W, Li Q, Wu T. HMNet: Hierarchical Multi-Scale Brain Tumor Segmentation Network. J Clin Med 2023; 12:538. [PMID: 36675470] [PMCID: PMC9861819] [DOI: 10.3390/jcm12020538]
Abstract
An accurate and efficient automatic brain tumor segmentation algorithm is important for clinical practice. In recent years, there has been much interest in automatic segmentation algorithms that use convolutional neural networks. In this paper, we propose a novel hierarchical multi-scale segmentation network (HMNet), which contains a high-resolution branch and parallel multi-resolution branches. The high-resolution branch keeps track of the brain tumor's spatial details, while multi-resolution feature exchange and fusion allow the network's receptive fields to adapt to brain tumors of different shapes and sizes. In particular, to overcome the large computational overhead caused by expensive 3D convolution, we propose a lightweight conditional channel weighting block to reduce GPU memory usage and improve the efficiency of HMNet. We also propose a lightweight multi-resolution feature fusion (LMRF) module to further reduce model complexity and the redundancy of the feature maps. We evaluated the proposed network on the BraTS 2020 dataset. The Dice similarity coefficients of HMNet for ET, WT, and TC are 0.781, 0.901, and 0.823, respectively. Extensive comparative experiments on the BraTS 2020 dataset and two other datasets show that the proposed HMNet achieves satisfactory performance compared with SOTA approaches.
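A channel weighting block of this general kind rescales each channel by a gate computed from globally pooled statistics, broadly in the spirit of squeeze-and-excitation. The sketch below is a drastically simplified, parameter-free illustration of that idea, not the conditional block defined in HMNet:

```python
import math

def channel_weighting(feature_maps):
    """Rescale each channel of a feature map by a sigmoid gate on its global
    average (a parameter-free stand-in for a learned channel-attention gate).
    feature_maps: list of channels, each a 2D list of floats."""
    out = []
    for channel in feature_maps:
        # Global average pooling over the spatial dimensions.
        mean = sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        gate = 1.0 / (1.0 + math.exp(-mean))  # sigmoid of the pooled statistic
        out.append([[value * gate for value in row] for row in channel])
    return out

# A flat zero channel gets a neutral gate (0.5, so zeros stay zero); a strongly
# positive channel is passed through almost unchanged (gate ~ 0.88).
scaled = channel_weighting([[[0.0, 0.0], [0.0, 0.0]],
                            [[2.0, 2.0], [2.0, 2.0]]])
```

The appeal over 3D convolution is cost: the gate needs only one pooled scalar per channel, whereas a learned gate adds a small fully-connected layer rather than a full convolution.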
Affiliation(s)
- Ruifeng Zhang
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Shasha Jia
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Weizhi Nie
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Correspondence: (W.N.); (Q.L.)
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Correspondence: (W.N.); (Q.L.)
- Ting Wu
- Department of Cardiopulmonary Bypass, Chest Hospital, Tianjin University, Tianjin 300072, China
9
Chang Y, Zheng Z, Sun Y, Zhao M, Lu Y, Zhang Y. DPAFNet: A Residual Dual-Path Attention-Fusion Convolutional Neural Network for Multimodal Brain Tumor Segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104037]
10
Research on CT Lung Segmentation Method of Preschool Children based on Traditional Image Processing and ResUnet. Comput Math Methods Med 2022; 2022:7321330. [PMID: 36262868] [PMCID: PMC9576440] [DOI: 10.1155/2022/7321330]
Abstract
Lung segmentation using computed tomography (CT) images is important for diagnosing various lung diseases. Currently, no lung segmentation method has been developed for assessing the CT images of preschool children, which may differ from those of adults due to (1) artifacts caused by the shaking of children, (2) loss of a localized lung area due to a failure to hold their breath, and (3) a smaller chest area on CT compared with adults. To solve these unique problems, this study developed an automatic lung segmentation method combining traditional image processing with ResUnet, using the CT images of 60 children aged 0-6 years. First, the CT images were cropped and zoomed through morphological operations to concentrate the segmentation task on the chest area. Then, a ResUnet model with an improved loss was used for lung segmentation, and case-based connected domain operations were performed to filter the segmentation results and improve segmentation accuracy. The proposed method demonstrated promising segmentation results on a test set of 12 cases, with average accuracy, Dice, precision, and recall of 0.9479, 0.9678, 0.9711, and 0.9715, respectively, achieving the best performance relative to six other models. This study shows that the proposed method can achieve good segmentation results on CT images of preschool children, laying a good foundation for the diagnosis of children's lung diseases.
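Connected-domain filtering of the kind mentioned above can be illustrated with a plain-Python connected-component pass that drops regions below a size threshold (4-connectivity; the function name and threshold are illustrative assumptions, not the paper's implementation):

```python
def filter_small_components(mask, min_size):
    """Remove 4-connected components smaller than min_size from a binary mask
    (a 2D list of 0/1 values); returns a new mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one component with an explicit stack.
                stack, component = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    component.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Keep only components that reach the size threshold.
                if len(component) >= min_size:
                    for y, x in component:
                        out[y][x] = 1
    return out

# A 3-pixel lung-like region survives; the isolated bottom-right pixel is dropped.
mask = [[1, 1, 0],
        [1, 0, 0],
        [0, 0, 1]]
cleaned = filter_small_components(mask, 2)
```

In practice this kind of post-processing removes small false-positive islands from a network's raw prediction; libraries such as scikit-image provide the same operation as a single call.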
11
Yu Y, Tao Y, Guan H, Xiao S, Li F, Yu C, Liu Z, Li J. A multi-branch hierarchical attention network for medical target segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104021]
12
Zhao C, Chen W, Qin J, Yang P, Xiang Z, Frangi AF, Chen M, Fan S, Yu W, Chen X, Xia B, Wang T, Lei B. IFT-Net: Interactive Fusion Transformer Network for Quantitative Analysis of Pediatric Echocardiography. Med Image Anal 2022; 82:102648. [DOI: 10.1016/j.media.2022.102648]
13
Raza R, Ijaz Bajwa U, Mehmood Y, Waqas Anwar M, Hassan Jamal M. dResU-Net: 3D deep residual U-Net based brain tumor segmentation from multimodal MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103861]
14
Liu J, Zheng J, Jiao G. Transition Net: 2D backbone to segment 3D brain tumor. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103622]
15
Liu Y, Du J, Vong CM, Yue G, Yu J, Wang Y, Lei B, Wang T. Scale-adaptive super-feature based MetricUNet for brain tumor segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103442]
16
Road Surface Crack Detection Method Based on Conditional Generative Adversarial Networks. Sensors 2021; 21:7405. [PMID: 34770711] [PMCID: PMC8587934] [DOI: 10.3390/s21217405]
Abstract
Constant monitoring of road surfaces helps to reveal urgent deterioration or problems in road construction and to improve the safety level of the road surface. Conditional generative adversarial networks (cGAN) are a powerful tool to generate or transform the images used for crack detection. The advantage of this method is the highly accurate results in vector-based images, which are convenient for later mathematical analysis of the detected cracks. However, images taken under controlled parameters differ from images in real-world contexts. Another potential problem of cGAN is that it is difficult to detect the shape of an object when the resulting accuracy is low, which can seriously affect any further mathematical analysis of the detected crack. To tackle this issue, this paper proposes a method called improved cGAN with attention gate (ICGA) for roadway surface crack detection. To obtain a more accurate shape of the detected target object, ICGA establishes a multi-level model with independent stages. In the first stage, everything except the road is treated as noise and removed from the image, and the results are stored in a new dataset. In the second stage, ICGA detects the cracks. ICGA therefore focuses on the cracks themselves, not on auxiliary elements in the image. ICGA adds two attention gates to a U-net architecture and improves the segmentation capacity of the generator in pix2pix. Extensive experimental results on dashboard camera images of the Unsupervised Llamas dataset show that our method performs better than other state-of-the-art methods.
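An attention gate of the kind added to the U-net generator combines a skip-connection feature with a coarser gating signal and suppresses pixels with low attention coefficients. The sketch below uses fixed scalar weights in place of the learned 1x1 convolutions, so it illustrates only the data flow, not ICGA itself:

```python
import math

def attention_gate(x, g, w_x=1.0, w_g=1.0, w_psi=1.0, bias=0.0):
    """Additive attention gate on per-pixel scalars: the skip feature x is
    scaled by sigmoid(psi(relu(w_x*x + w_g*g))). In a real gate, w_x, w_g,
    and w_psi are learned 1x1 convolutions; scalars are used here only as
    illustrative stand-ins. x and g are 2D lists of the same shape."""
    out = []
    for x_row, g_row in zip(x, g):
        row = []
        for xv, gv in zip(x_row, g_row):
            q = max(0.0, w_x * xv + w_g * gv)  # ReLU of the summed projections
            alpha = 1.0 / (1.0 + math.exp(-(w_psi * q + bias)))  # coefficient in (0, 1)
            row.append(xv * alpha)  # gate the skip feature
        out.append(row)
    return out

# With a zero gating signal, the single pixel is scaled by sigmoid(1) ~ 0.731.
gated = attention_gate([[1.0]], [[0.0]])
```

The gating signal comes from a coarser decoder layer, so pixels the coarse context deems irrelevant (here, anything that is not road or crack) receive attention coefficients near zero and are effectively masked out of the skip connection.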