1. Xu Y, Quan R, Xu W, Huang Y, Chen X, Liu F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering (Basel) 2024; 11:1034. [PMID: 39451409] [PMCID: PMC11505408] [DOI: 10.3390/bioengineering11101034]
Abstract
Medical image segmentation plays a critical role in accurate diagnosis and treatment planning, enabling precise analysis across a wide range of clinical tasks. This review begins by offering a comprehensive overview of traditional segmentation techniques, including thresholding, edge-based methods, region-based approaches, clustering, and graph-based segmentation. While these methods are computationally efficient and interpretable, they often face significant challenges when applied to complex, noisy, or variable medical images. The central focus of this review is the transformative impact of deep learning on medical image segmentation. We delve into prominent deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), U-Net, Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and Autoencoders (AEs). Each architecture is analyzed in terms of its structural foundation and specific application to medical image segmentation, illustrating how these models have enhanced segmentation accuracy across various clinical contexts. Finally, the review examines the integration of deep learning with traditional segmentation methods, addressing the limitations of both approaches. These hybrid strategies offer improved segmentation performance, particularly in challenging scenarios involving weak edges, noise, or inconsistent intensities. By synthesizing recent advancements, this review provides a detailed resource for researchers and practitioners, offering valuable insights into the current landscape and future directions of medical image segmentation.
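The traditional techniques this review surveys are simple enough to sketch directly. As one illustration (my own example; the abstract names thresholding only generically, and Otsu's method is a classic instance of it), a global threshold can be chosen by maximizing between-class variance:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Return the global threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()        # per-bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    mu0 = np.cumsum(p * centers)               # unnormalized background mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = mu0 / w0                          # background mean
        m1 = (mu_total - mu0) / w1             # foreground mean
        between = w0 * w1 * (m0 - m1) ** 2     # between-class variance
    between = np.nan_to_num(between)
    return float(centers[np.argmax(between)])

# Bimodal toy "image": dark background around 30, bright region around 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(30, 5, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(img)  # lands between the two modes
```

With well-separated modes the chosen threshold cleanly splits background from foreground; on the noisy, low-contrast images discussed in the review this is exactly where such global methods break down.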
Affiliation(s)
- Yan Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Rixiang Quan
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Weiting Xu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK
- Yi Huang
- Bristol Medical School, University of Bristol, Bristol BS8 1UD, UK
- Xiaolong Chen
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK
- Fengyuan Liu
- School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1QU, UK

2. Shaheema SB, K. SD, Muppalaneni NB. Explainability based Panoptic brain tumor segmentation using a hybrid PA-NET with GCNN-ResNet50. Biomed Signal Process Control 2024; 94:106334. [DOI: 10.1016/j.bspc.2024.106334]

3. Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. Math Biosci Eng 2024; 21:5250-5282. [PMID: 38872535] [DOI: 10.3934/mbe.2024232]
Abstract
The increasing global incidence of glioma tumors has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This research paper explores the recent advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results with models such as hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persisted in the form of limited datasets, lack of external validation, and variations in grading formulations across diverse literature sources. Addressing these challenges through expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Ashim Saha
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India

4. Saluja S, Trivedi MC, Sarangdevot SS. Advancing glioma diagnosis: Integrating custom U-Net and VGG-16 for improved grading in MR imaging. Math Biosci Eng 2024; 21:4328-4350. [PMID: 38549330] [DOI: 10.3934/mbe.2024191]
Abstract
In the realm of medical imaging, the precise segmentation and classification of gliomas represent fundamental challenges with profound clinical implications. Leveraging the BraTS 2018 dataset as a standard benchmark, this study delves into the potential of advanced deep learning models for addressing these challenges. We propose a novel approach that integrates a customized U-Net for segmentation and VGG-16 for classification. The U-Net, with its tailored encoder-decoder pathways, accurately identifies glioma regions, thus improving tumor localization. The fine-tuned VGG-16, featuring a customized output layer, precisely differentiates between low-grade and high-grade gliomas. To ensure consistency in data pre-processing, a standardized methodology involving gamma correction, data augmentation, and normalization is introduced. This novel integration surpasses existing methods, offering significantly improved glioma diagnosis, validated by high segmentation Dice scores (WT: 0.96, TC: 0.92, ET: 0.89), and a remarkable overall classification accuracy of 97.89%. The experimental findings underscore the potential of integrating deep learning-based methodologies for tumor segmentation and classification in enhancing glioma diagnosis and formulating subsequent treatment strategies.
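The WT/TC/ET figures quoted above are Dice similarity coefficients, the standard overlap measure 2|A∩B| / (|A| + |B|) between a predicted and a reference binary mask. A minimal illustrative computation (my own sketch with made-up masks, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: predicted mask covers 4 voxels, reference covers 6, overlap is 4.
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
target = np.zeros((4, 4), dtype=int); target[1:3, 1:4] = 1
dice_score(pred, target)  # → 0.8, i.e. 2*4 / (4 + 6)
```

In BraTS-style evaluation this is computed separately per region (whole tumor, tumor core, enhancing tumor), which is why three scores are reported.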
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India

5. Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023; 110:102313. [PMID: 38011781] [DOI: 10.1016/j.compmedimag.2023.102313]
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment the tumor manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging to determine the tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches to determine the optimal solution using convolutional methodology. In this review paper, we discuss the most effective segmentation techniques based on widely used, publicly available datasets. We also survey federated learning methodologies that enhance global segmentation performance while ensuring privacy. Having studied more than 100 papers, we present a comprehensive literature review that generalizes the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on a client-based federated model training strategy. Based on this review, future researchers will understand the optimal path to solving these issues.
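The client-based federated training this review advocates is typically built on federated averaging (FedAvg): each client trains locally on its private scans, and a server aggregates the parameters weighted by local dataset size, so no image data leaves the client. A minimal numpy sketch of one aggregation round (an illustration of the general scheme, not code from any surveyed paper):

```python
import numpy as np

def fedavg(client_weights: list, client_sizes: list) -> np.ndarray:
    """One FedAvg aggregation step: sample-size-weighted mean of the
    clients' (flattened) model parameter vectors."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals with local parameter vectors and local dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(clients, sizes)  # → array([3.5, 4.5])
```

The larger hospital (200 scans) pulls the global model toward its parameters; in a real system this aggregate would be broadcast back to the clients for the next local-training round.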
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK

6. Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217] [PMCID: PMC10670924] [DOI: 10.3390/ijms242216028]
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, routine microscopy images of cells, which are taken while constant division and differentiation occur, are notoriously difficult to analyze due to changes in the cells' appearance and number. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require large amounts of manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To reduce this labor-intensive annotation cost, we propose a novel weakly supervised cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results to obtain a model with better robustness. Our method was evaluated using two fields of the iPS cell dataset, along with the cell detection accuracy (DET) evaluation metric from the Cell Tracking Challenge (CTC) initiative, and it achieved 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested on the public dataset Fluo-N2DH-GOWT1 from the CTC, which contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled dataset to simulate incomplete initial annotations. After training the model on the two datasets with labels comprising 10% of the cell markers, the DET improved from 0.130 to 0.903 and from 0.116 to 0.877. When trained with labels comprising 60% of the cell markers, the performance was better than that of the model trained using the supervised learning method. This outcome indicates that the model's performance improved as the quality of the labels used for training increased.
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China

7. Kalantar R, Curcean S, Winfield JM, Lin G, Messiou C, Blackledge MD, Koh DM. Deep Learning Framework with Multi-Head Dilated Encoders for Enhanced Segmentation of Cervical Cancer on Multiparametric Magnetic Resonance Imaging. Diagnostics (Basel) 2023; 13:3381. [PMID: 37958277] [PMCID: PMC10647438] [DOI: 10.3390/diagnostics13213381]
Abstract
T2-weighted magnetic resonance imaging (MRI) and diffusion-weighted imaging (DWI) are essential components of cervical cancer diagnosis. However, combining these channels for the training of deep learning models is challenging due to image misalignment. Here, we propose a novel multi-head framework that uses dilated convolutions and shared residual connections for the separate encoding of multiparametric MRI images. We employ a residual U-Net model as a baseline, and perform a series of architectural experiments to evaluate the tumor segmentation performance based on multiparametric input channels and different feature encoding configurations. All experiments were performed on a cohort of 207 patients with locally advanced cervical cancer. Our proposed multi-head model using separate dilated encoding for T2W MRI and combined b1000 DWI and apparent diffusion coefficient (ADC) maps achieved the best median Dice similarity coefficient (DSC) score, 0.823 (confidence interval (CI), 0.595-0.797), outperforming the conventional multi-channel model, DSC 0.788 (95% CI, 0.568-0.776), although the difference was not statistically significant (p > 0.05). We investigated channel sensitivity using 3D GRAD-CAM and channel dropout, and highlighted the critical importance of T2W and ADC channels for accurate tumor segmentation. However, our results showed that b1000 DWI had a minor impact on the overall segmentation performance. We demonstrated that the use of separate dilated feature extractors and independent contextual learning improved the model's ability to reduce the boundary effects and distortion of DWI, leading to improved segmentation performance. Our findings could have significant implications for the development of robust and generalizable models that can extend to other multi-modal segmentation applications.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Sebastian Curcean
- Department of Radiation Oncology, Iuliu Hatieganu University of Medicine and Pharmacy, 400347 Cluj-Napoca, Romania
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Chang Gung University, Guishan, Taoyuan 333, Taiwan
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SW7 3RP, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK

8. Khan MKH, Guo W, Liu J, Dong F, Li Z, Patterson TA, Hong H. Machine learning and deep learning for brain tumor MRI image segmentation. Exp Biol Med (Maywood) 2023; 248:1974-1992. [PMID: 38102956] [PMCID: PMC10798183] [DOI: 10.1177/15353702231214259]
Abstract
Brain tumors are often fatal. Therefore, accurate brain tumor image segmentation is critical for the diagnosis, treatment, and monitoring of patients with these tumors. Magnetic resonance imaging (MRI) is a commonly used imaging technique for capturing brain images. Both machine learning and deep learning techniques are popular in analyzing MRI images. This article reviews some commonly used machine learning and deep learning techniques for brain tumor MRI image segmentation. The limitations and advantages of the reviewed machine learning and deep learning methods are discussed. Even though each of these methods has a well-established status in their individual domains, the combination of two or more techniques is currently an emerging trend.
Affiliation(s)
- Md Kamrul Hasan Khan
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Wenjing Guo
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Jie Liu
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Fan Dong
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Zoe Li
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Tucker A Patterson
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Huixiao Hong
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA

9. Ryu J, Rehman MU, Nizami IF, Chong KT. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation. Comput Biol Med 2023; 163:107132. [PMID: 37343468] [DOI: 10.1016/j.compbiomed.2023.107132]
Abstract
Retinal vessel segmentation is an important task in medical image analysis with a variety of applications in the diagnosis and treatment of retinal diseases. In this paper, we propose SegR-Net, a deep learning framework for robust retinal vessel segmentation. SegR-Net utilizes a combination of feature extraction and embedding, deep feature magnification, feature precision and interference, and dense multiscale feature fusion to generate accurate segmentation masks. The model consists of an encoder module that extracts high-level features from the input images and a decoder module that reconstructs the segmentation masks by combining features from the encoder module. The encoder module consists of a feature extraction and embedding block enhanced by dense multiscale feature fusion, followed by a deep feature magnification (DFM) block that magnifies the retinal vessels. To further improve the quality of the extracted features, we use a group of two convolutional layers after each DFM block. In the decoder module, we utilize a feature precision and interference block and a dense multiscale feature fusion (DMFF) block to combine features from the encoder module and reconstruct the segmentation mask. We also incorporate data augmentation and pre-processing techniques to improve the generalization of the trained model. Experimental results on three publicly available fundus image datasets (CHASE_DB1, STARE, and DRIVE) demonstrate that SegR-Net outperforms state-of-the-art models in terms of accuracy, sensitivity, specificity, and F1 score. The proposed framework can provide more accurate and more efficient segmentation of retinal blood vessels than state-of-the-art techniques, which is essential for clinical decision-making and diagnosis of various eye diseases.
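The accuracy/sensitivity/specificity/F1 comparison reported here follows the standard pixel-wise confusion-matrix definitions for binary vessel masks. A small generic sketch of those definitions (my illustration, not the SegR-Net evaluation code):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Pixel-wise accuracy, sensitivity, specificity and F1 for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = int(np.sum(pred & target))    # vessel pixels correctly found
    tn = int(np.sum(~pred & ~target))  # background correctly rejected
    fp = int(np.sum(pred & ~target))   # background called vessel
    fn = int(np.sum(~pred & target))   # vessel pixels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. recall
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {
        "accuracy": (tp + tn) / pred.size,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "f1": f1,
    }

pred = np.array([1, 1, 0, 0], dtype=bool)
target = np.array([1, 0, 1, 0], dtype=bool)
m = segmentation_metrics(pred, target)  # tp=tn=fp=fn=1, so every metric is 0.5
```

Because vessel pixels are a small minority of a fundus image, accuracy alone is misleading; that is why sensitivity, specificity and F1 are reported alongside it.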
Affiliation(s)
- Jihyoung Ryu
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
- Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan
- Kil To Chong
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea; Advanced Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea

10. Sarala B, Sumathy G, Kalpana A, Jasmine Hephzipah J. Glioma brain tumor detection using dual convolutional neural networks and histogram density segmentation algorithm. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104859]

11. Yang Z, Hu Z, Ji H, Lafata K, Vaios E, Floyd S, Yin FF, Wang C. A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation. Med Phys 2023; 50:4825-4838. [PMID: 36840621] [PMCID: PMC10440249] [DOI: 10.1002/mp.16286]
Abstract
PURPOSE: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability.

METHODS: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity.

RESULTS: All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns.

CONCLUSION: The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep learning applications.
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Zongsheng Hu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Hangjie Ji
- Department of Mathematics, North Carolina State University, Raleigh, North Carolina, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Scott Floyd
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA

12. Tanveerul Hassan M, Tayara H, To Chong K. Meta-IL4: An Ensemble Learning Approach for IL-4-Inducing Peptide Prediction. Methods 2023:S1046-2023(23)00113-5. [PMID: 37454743] [DOI: 10.1016/j.ymeth.2023.07.002]
Abstract
The cytokine interleukin-4 (IL-4) plays an important role in our immune system. IL-4 leads the way in the differentiation of naïve T-helper 0 cells (Th0) to T-helper 2 cells (Th2). The Th2 responses are characterized by the release of IL-4. CD4+ T cells produce the cytokine IL-4 in response to exogenous parasites. IL-4 has a critical role in the growth of CD8+ cells, inflammation, and responses of T-cells. We propose an ensemble model for the prediction of IL-4 inducing peptides. Four feature encodings were extracted to build an efficient predictor: pseudo-amino acid composition, amphiphilic pseudo-amino acid composition, quasi-sequence-order, and Shannon entropy. We developed an ensemble learning model fusing random forest, extreme gradient boosting, light gradient boosting machine, and extra tree classifiers in the first layer, with a Gaussian process classifier as a meta classifier in the second layer. The outcome on the benchmark test dataset, with a Matthews correlation coefficient of 0.793, showed that the meta-model (Meta-IL4) outperformed the individual classifiers. The highest accuracy achieved by the Meta-IL4 model is 90.70%. These findings suggest that peptides that induce IL-4 can be predicted with reasonable accuracy. These models could aid in the development of peptides that trigger the appropriate Th2 response.
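The two-layer design described here is a stacking ensemble: base learners produce out-of-fold predictions that become features for a second-layer meta-classifier. A scikit-learn sketch of the same pattern on synthetic data (my analogue of the setup, not the authors' pipeline; GradientBoostingClassifier stands in for the XGBoost/LightGBM learners, and the features are random stand-ins for the peptide encodings):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the peptide feature vectors (PAAC, QSO, entropy, ...).
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Layer 1: tree-based ensembles; layer 2: Gaussian process meta-classifier
# trained on the base learners' cross-validated predictions.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=50, random_state=0)),
    ],
    final_estimator=GaussianProcessClassifier(random_state=0),
    cv=3,
)
acc = stack.fit(X_tr, y_tr).score(X_te, y_te)
```

The `cv=3` argument is what makes this stacking rather than simple blending: the meta-classifier never sees predictions made on a base learner's own training points.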
Affiliation(s)
- Mir Tanveerul Hassan
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju, South Korea
- Hilal Tayara
- School of International Engineering and Science, Jeonbuk National University, Jeonju, South Korea
- Kil To Chong
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju, South Korea; Advanced Electronics and Information Research Centre, Jeonbuk National University, Jeonju, South Korea

13. Kodipalli A, Fernandes SL, Gururaj V, Varada Rameshbabu S, Dasar S. Performance Analysis of Segmentation and Classification of CT-Scanned Ovarian Tumours Using U-Net and Deep Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:2282. [PMID: 37443676] [DOI: 10.3390/diagnostics13132282]
Abstract
Despite advancements in ovarian cancer treatment and research, difficulty in detecting tumours at early stages remains the major cause of patient mortality. Deep learning algorithms were applied as a diagnostic tool to CT scan images of the ovarian region. The images went through a series of pre-processing techniques, and the tumour was then segmented using the U-Net model. The instances were classified into two categories, benign and malignant tumours. Classification was performed using deep learning models such as CNN, ResNet, DenseNet, Inception-ResNet, VGG16 and Xception, along with machine learning models such as Random Forest, Gradient Boosting, AdaBoost and XGBoost. DenseNet-121 emerged as the best model on this dataset, obtaining an accuracy of 95.7% after optimization of the machine learning models. The current work demonstrates a comparison of multiple CNN architectures with common machine learning algorithms, with and without optimization techniques applied.
Collapse
Affiliation(s)
- Ashwini Kodipalli
- Department of Artificial Intelligence & Data Science, Global Academy of Technology, Bangalore 560098, India
| | - Steven L Fernandes
- Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
| | - Vaishnavi Gururaj
- Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
| | - Shriya Varada Rameshbabu
- Department of Computer Science & Engineering, Global Academy of Technology, Bangalore 560098, India
| | - Santosh Dasar
- Department of Radiologist, SDM College of Medical Sciences and Hospital, Dharwad 580009, India
| |
Collapse
|
14
|
Messaoudi H, Belaid A, Ben Salem D, Conze PH. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med Image Anal 2023; 88:102868. [PMID: 37384952 DOI: 10.1016/j.media.2023.102868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 06/06/2023] [Accepted: 06/08/2023] [Indexed: 07/01/2023]
Abstract
Over the last decade, convolutional neural networks have emerged and advanced the state-of-the-art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving and being trained on databases made of millions of natural images. Conversely, in the field of medical image analysis, the progress is also remarkable but has mainly slowed down due to the relative lack of annotated data and besides, the inherent constraints related to the acquisition process. These limitations are even more pronounced given the volumetry of medical imaging data. In this paper, we introduce an efficient way to transfer the efficiency of a 2D classification network trained on natural images to 2D, 3D uni- and multi-modal medical image segmentation applications. In this direction, we designed novel architectures based on two key principles: weight transfer by embedding a 2D pre-trained encoder into a higher dimensional U-Net, and dimensional transfer by expanding a 2D segmentation network into a higher dimension one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echo-cardiographic data segmentation and surpassed the state-of-the-art. Regarding 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core and 81.75% (83.88%) for enhanced tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
Collapse
Affiliation(s)
- Hicham Messaoudi
- Laboratory of Medical Informatics (LIMED), Faculty of Technology, University of Bejaia, 06000 Bejaia, Algeria.
| | - Ahror Belaid
- Laboratory of Medical Informatics (LIMED), Faculty of Exact Sciences, University of Bejaia, 06000 Bejaia, Algeria; Data Science & Applications Research Unit - CERIST, 06000, Bejaia, Algeria
| | - Douraied Ben Salem
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200, Brest, France; Neuroradiology Department, University Hospital of Brest, 29200, Brest, France
| | - Pierre-Henri Conze
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200, Brest, France; IMT Atlantique, 29200, Brest, France
| |
Collapse
|
15
|
Hussain S, Haider S, Maqsood S, Damaševičius R, Maskeliūnas R, Khan M. ETISTP: An Enhanced Model for Brain Tumor Identification and Survival Time Prediction. Diagnostics (Basel) 2023; 13:diagnostics13081456. [PMID: 37189556 DOI: 10.3390/diagnostics13081456] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 03/30/2023] [Accepted: 04/14/2023] [Indexed: 05/17/2023] Open
Abstract
Technology-assisted diagnosis is increasingly important in healthcare systems. Brain tumors are a leading cause of death worldwide, and treatment plans rely heavily on accurate survival predictions. Gliomas, a type of brain tumor, have particularly high mortality rates and can be further classified as low- or high-grade, making survival prediction challenging. Existing literature provides several survival prediction models that use different parameters, such as patient age, gross total resection status, tumor size, or tumor grade. However, accuracy is often lacking in these models. The use of tumor volume instead of size may improve the accuracy of survival prediction. In response to this need, we propose a novel model, the enhanced brain tumor identification and survival time prediction (ETISTP), which computes tumor volume, classifies it into low- or high-grade glioma, and predicts survival time with greater accuracy. The ETISTP model integrates four parameters: patient age, survival days, gross total resection (GTR) status, and tumor volume. Notably, ETISTP is the first model to employ tumor volume for prediction. Furthermore, our model minimizes the computation time by allowing for parallel execution of tumor volume computation and classification. The simulation results demonstrate that ETISTP outperforms prominent survival prediction models.
Collapse
Affiliation(s)
- Shah Hussain
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
| | - Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
| | - Sarmad Maqsood
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
| | - Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
| | - Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| | - Muzammil Khan
- Department of Computer & Software Technology, University of Swat, Swat 19200, Pakistan
| |
Collapse
|
16
|
Pedada KR, A. BR, Patro KK, Allam JP, Jamjoom MM, Samee NA. A novel approach for brain tumour detection using deep learning based technique. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
17
|
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:diagnostics13061153. [PMID: 36980463 PMCID: PMC10046932 DOI: 10.3390/diagnostics13061153] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/14/2023] [Accepted: 03/14/2023] [Indexed: 03/22/2023] Open
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. In order to precisely categorize brain tumors, researchers developed a variety of segmentation algorithms. Segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method was proposed. The proposed approach consisted of many phases, including pre-processing MRI images, segmenting images, extracting features, and classifying images. During the pre-processing portion of an MRI scan, an adaptive filter was utilized to eliminate background noise. For feature extraction, the local-binary grey level co-occurrence matrix (LBGLCM) was used, and for image segmentation, enhanced fuzzy c-means clustering (EFCMC) was used. After extracting the scan features, we used a deep learning model to classify MRI images into two groups: glioma and normal. The classifications were created using a convolutional recurrent neural network (CRNN). The proposed technique improved brain image classification from a defined input dataset. MRI scans from the REMBRANDT dataset, which consisted of 620 testing and 2480 training sets, were used for the research. The data demonstrate that the newly proposed method outperformed its predecessors. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, which are three of the most prevalent classification approaches currently being used. For brain tumor classification, the proposed system outcomes were 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
Collapse
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
| | | | - Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
| | - Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
| | - Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia
- Correspondence:
| |
Collapse
|
18
|
Rehman MU, Ryu J, Nizami IF, Chong KT. RAAGR2-Net: A brain tumor segmentation network using parallel processing of multiple spatial frames. Comput Biol Med 2023; 152:106426. [PMID: 36565485 DOI: 10.1016/j.compbiomed.2022.106426] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 11/16/2022] [Accepted: 12/13/2022] [Indexed: 12/24/2022]
Abstract
Brain tumors are one of the most fatal cancers. Magnetic Resonance Imaging (MRI) is a non-invasive method that provides multi-modal images containing important information regarding the tumor. Many contemporary techniques employ four modalities: T1-weighted (T1), T1-weighted with contrast (T1c), T2-weighted (T2), and fluid-attenuation-inversion-recovery (FLAIR), each of which provides unique and important characteristics for the location of each tumor. Although several modern procedures provide decent segmentation results on the multimodal brain tumor image segmentation benchmark (BraTS) dataset, they lack performance when evaluated simultaneously on all the regions of MRI images. Furthermore, there is still room for improvement due to parameter limitations and computational complexity. Therefore, in this work, a novel encoder-decoder-based architecture is proposed for the effective segmentation of brain tumor regions. Data pre-processing is performed by applying N4 bias field correction, z-score, and 0 to 1 resampling to facilitate model training. To minimize the loss of location information in different modules, a residual spatial pyramid pooling (RASPP) module is proposed. RASPP is a set of parallel layers using dilated convolution. In addition, an attention gate (AG) module is used to efficiently emphasize and restore the segmented output from extracted feature maps. The proposed modules attempt to acquire rich feature representations by combining knowledge from diverse feature maps and retaining their local information. The performance of the proposed deep network based on RASPP, AG, and recursive residual (R2) block termed RAAGR2-Net is evaluated on the BraTS benchmarks. The experimental results show that the suggested network outperforms existing networks that exhibit the usefulness of the proposed modules for "fine" segmentation. The code for this work is made available online at: https://github.com/Rehman1995/RAAGR2-Net.
Collapse
Affiliation(s)
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea.
| | - Jihyoung Ryu
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea.
| | - Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan.
| | - Kil To Chong
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea; Advances Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, South Korea.
| |
Collapse
|
19
|
Samee NA, Ahmad T, Mahmoud NF, Atteia G, Abdallah HA, Rizwan A. Clinical Decision Support Framework for Segmentation and Classification of Brain Tumor MRIs Using a U-Net and DCNN Cascaded Learning Algorithm. Healthcare (Basel) 2022; 10:healthcare10122340. [PMID: 36553864 PMCID: PMC9777942 DOI: 10.3390/healthcare10122340] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/11/2022] [Accepted: 11/15/2022] [Indexed: 11/23/2022] Open
Abstract
Brain tumors (BTs) are an uncommon but fatal kind of cancer. Therefore, the development of computer-aided diagnosis (CAD) systems for classifying brain tumors in magnetic resonance imaging (MRI) has been the subject of many research papers so far. However, research in this sector is still in its early stage. The ultimate goal of this research is to develop a lightweight effective implementation of the U-Net deep network for use in performing exact real-time segmentation. Moreover, a simplified deep convolutional neural network (DCNN) architecture for the BT classification is presented for automatic feature extraction and classification of the segmented regions of interest (ROIs). Five convolutional layers, rectified linear unit, normalization, and max-pooling layers make up the DCNN's proposed simplified architecture. The introduced method was verified on multimodal brain tumor segmentation (BRATS 2015) datasets. Our experimental results on BRATS 2015 acquired Dice similarity coefficient (DSC) scores, sensitivity, and classification accuracy of 88.8%, 89.4%, and 88.6% for high-grade gliomas. When it comes to segmenting BRATS 2015 BT images, the performance of our proposed CAD framework is on par with existing state-of-the-art methods. However, the accuracy achieved in this study for the classification of BT images has improved upon the accuracy reported in prior studies. Image classification accuracy for BRATS 2015 BT has been improved from 88% to 88.6%.
Collapse
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Tahir Ahmad
- Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
| | - Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence: (N.F.M.); (G.A.); (A.R.)
| | - Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence: (N.F.M.); (G.A.); (A.R.)
| | - Hanaa A. Abdallah
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Atif Rizwan
- Department of Computer Engineering, Jeju National University, Jejusi 63243, Republic of Korea
- Correspondence: (N.F.M.); (G.A.); (A.R.)
| |
Collapse
|
20
|
Khorasani A. Automated irreversible electroporated region prediction using deep neural network, a preliminary study for treatment planning. Electromagn Biol Med 2022; 41:379-388. [PMID: 35989633 DOI: 10.1080/15368378.2022.2114493] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
The primary purpose of cancer treatment with irreversible electroporation (IRE) is to maximize tumor damage and minimize surrounding healthy tissue damage. Finite element analysis is one of the popular ways to calculate electric field and cell kill probability in IRE. However, this method also has limitations. This paper will focus on using a deep neural network (DNN) in IRE to predict irreversible electroporated regions for treatment planning purposes. COMSOL Multiphysics was used to simulate the IRE. The electric conductivity change during IRE was considered to create accurate data sets of electric field distribution and cell kill probability distributions. We used eight pulses with a pulse width of 100 μs, frequency of 1 Hz, and different voltages. To create masks for DNN training, a 90% cell kill probability contour was used. After data set creation, U-Net architecture was trained to predict irreversible electroporated regions. In this study, the average U-Net DICE coefficient on test data was 0.96. Also, the average accuracy of U-Net for predicting irreversible electroporated regions was 0.97. As far as we are aware, this is the first time that U-Net was used to predict an irreversible electroporated region in IRE. The present study provides significant evidence for U-Net's use for predicting an irreversible electroporated region in treatment planning.
Collapse
Affiliation(s)
- Amir Khorasani
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
| |
Collapse
|
21
|
Rehman MU, Tayara H, Zou Q, Chong KT. i6mA-Caps: a CapsuleNet-based framework for identifying DNA N6-methyladenine sites. Bioinformatics 2022; 38:3885-3891. [PMID: 35771648 DOI: 10.1093/bioinformatics/btac434] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 05/19/2022] [Accepted: 06/28/2022] [Indexed: 12/24/2022] Open
Abstract
MOTIVATION DNA N6-methyladenine (6mA) has been demonstrated to have an essential function in epigenetic modification in eukaryotic species in recent research. 6mA has been linked to various biological processes. It's critical to create a new algorithm that can rapidly and reliably detect 6mA sites in genomes to investigate their biological roles. The identification of 6mA marks in the genome is the first and most important step in understanding the underlying molecular processes, as well as their regulatory functions. RESULTS In this article, we proposed a novel computational tool called i6mA-Caps which CapsuleNet based a framework for identifying the DNA N6-methyladenine sites. The proposed framework uses a single encoding scheme for numerical representation of the DNA sequence. The numerical data is then used by the set of convolution layers to extract low-level features. These features are then used by the capsule network to extract intermediate-level and later high-level features to classify the 6mA sites. The proposed network is evaluated on three datasets belonging to three genomes which are Rosaceae, Rice and Arabidopsis thaliana. Proposed method has attained an accuracy of 96.71%, 94% and 86.83% for independent Rosaceae dataset, Rice dataset and A.thaliana dataset respectively. The proposed framework has exhibited improved results when compared with the existing top-of-the-line methods. AVAILABILITY AND IMPLEMENTATION A user-friendly web-server is made available for the biological experts which can be accessed at: http://nsclbio.jbnu.ac.kr/tools/i6mA-Caps/. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea
| | - Hilal Tayara
- School of International Engineering and Science, Jeonbuk National University, Jeonju 54896, South Korea
| | - Quan Zou
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
| | - Kil To Chong
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea.,Advances Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, South Korea
| |
Collapse
|
22
|
SIP-UNet: Sequential Inputs Parallel UNet Architecture for Segmentation of Brain Tissues from Magnetic Resonance Images. MATHEMATICS 2022. [DOI: 10.3390/math10152755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Proper analysis of changes in brain structure can lead to a more accurate diagnosis of specific brain disorders. The accuracy of segmentation is crucial for quantifying changes in brain structure. In recent studies, UNet-based architectures have outperformed other deep learning architectures in biomedical image segmentation. However, improving segmentation accuracy is challenging due to the low resolution of medical images and insufficient data. In this study, we present a novel architecture that combines three parallel UNets using a residual network. This architecture improves upon the baseline methods in three ways. First, instead of using a single image as input, we use three consecutive images. This gives our model the freedom to learn from neighboring images as well. Additionally, the images are individually compressed and decompressed using three different UNets, which prevents the model from merging the features of the images. Finally, following the residual network architecture, the outputs of the UNets are combined in such a way that the features of the image corresponding to the output are enhanced by a skip connection. The proposed architecture performed better than using a single conventional UNet and other UNet variants.
Collapse
|
23
|
Research and Analysis of Brain Glioma Imaging Based on Deep Learning. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2021:3426080. [PMID: 35911847 PMCID: PMC9334044 DOI: 10.1155/2021/3426080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/13/2021] [Accepted: 10/18/2021] [Indexed: 12/24/2022]
Abstract
The incidence of glioma is increasing year by year, seriously endangering people's health. Magnetic resonance imaging (MRI) can effectively provide intracranial images of brain tumors and provide strong support for the diagnosis and treatment of the disease. Accurate segmentation of brain glioma has positive significance in medicine. However, due to the strong variability of the size, shape, and location of glioma and the large differences between different cases, the recognition and segmentation of glioma images are very difficult. Traditional methods are time-consuming, labor-intensive, and inefficient, and single-modal MRI images cannot provide comprehensive information about gliomas. Therefore, it is necessary to synthesize multimodal MRI images to identify and segment glioma MRI images. This work is based on multimodal MRI images and based on deep learning technology to achieve automatic and efficient segmentation of gliomas. The main tasks are as follows. A deep learning model based on dense blocks of holes, 3D U-Net, is proposed. It can automatically segment multimodal MRI glioma images. U-Net network is often used in image segmentation and has good performance. However, due to the strong specificity of glioma, the U-Net model cannot effectively obtain more details. Therefore, the 3D U-Net model proposed in this paper can integrate hollow convolution and densely connected blocks. In addition, this paper also combines classification loss and cross-entropy loss as the loss function of the network to improve the problem of category imbalance in glioma image segmentation tasks. The algorithm proposed in this paper has been used to perform a lot of experiments on the BraTS2018 dataset, and the results prove that this model has good segmentation performance.
Collapse
|
24
|
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172 DOI: 10.1016/j.compbiomed.2022.105273] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 01/15/2022] [Accepted: 01/24/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and represents an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews on brain tumor segmentation are available, but none of them describe a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focused on linking RoB and different AI-based architectural Cluster in popular DL framework. Further, due to variance in these designs and input data types in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on the architectural evolution, DL studies were subsequently categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, with clusters including AI architecture, imaging modalities, hyper-parameters, performance evaluation metrics, and clinical evaluation. Then, after these studies were scored for all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate- and high-bias studies. CONCLUSION The four classes of architectures, from best-to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
Collapse
Affiliation(s)
- Suchismita Das
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India; CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
| | - G K Nayak
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
| | - Luca Saba
- Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
| | - Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
| | - Jasjit S Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA.
| | - Sanjay Saxena
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
| |
Collapse
|
25
|
Optimal DeepMRSeg based tumor segmentation with GAN for brain tumor classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103537] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
26
|
Sambath Kumar K, Rajendran A. An automatic brain tumor segmentation using modified inception module based U-Net model. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-211879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Manual segmentation of brain tumor is not only a tedious task that may bring human mistakes. An automatic segmentation gives results faster, and it extends the survival rate with an earlier treatment plan. So, an automatic brain tumor segmentation model, modified inception module based U-Net (IMU-Net) proposed. It takes Magnetic resonance (MR) images from the BRATS 2017 training dataset with four modalities (FLAIR, T1, T1ce, and T2). The concatenation of two series 3×3 kernels, one 5×5, and one 1×1 convolution kernels are utilized to extract the whole tumor (WT), core tumor (CT), and enhance tumor (ET). The modified inception module (IM) collects all the relevant features and provides better segmentation results. The proposed deep learning model contains 40 convolution layers and utilizes intensity normalization and data augmentation operation for further improvement. It achieved the mean dice similarity coefficient (DSC) of 0.90, 0.77, 0.74, and the mean Intersection over Union (IOU) of 0.79, 0.70, 0.70 for WT, CT, and ET during the evaluation.
Collapse
Affiliation(s)
- K. Sambath Kumar
- Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamilnadu, India
| | - A. Rajendran
- Department of Electronics and Communication Engineering, Karpagam College of Engineering, Myleripalayam Village, Othakalmandapam, Coimbatore, Tamilnadu, India
| |
Collapse
|
27
|
Ottom MA, Rahman HA, Dinov ID. Znet: Deep Learning Approach for 2D MRI Brain Tumor Segmentation. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1800508. [PMID: 35774412 PMCID: PMC9236306 DOI: 10.1109/jtehm.2022.3176737] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 05/12/2022] [Accepted: 05/16/2022] [Indexed: 11/22/2022]
Abstract
Background: Detection and segmentation of brain tumors in MR images are challenging and valuable tasks in the medical field. Early diagnosis and localization of brain tumors can save lives and give physicians timely options for selecting efficient treatment plans. Deep learning approaches have attracted researchers in medical imaging due to their capacity, performance, and potential to assist in accurate diagnosis, prognosis, and medical treatment technologies. Methods and procedures: This paper presents a novel framework for segmenting 2D brain tumors in MR images using deep neural networks (DNNs) and data augmentation strategies. The proposed approach (Znet) is based on skip connections, encoder-decoder architectures, and data amplification, propagating the intrinsic affinities of a relatively small number of expert-delineated tumors, e.g., hundreds of low-grade glioma (LGG) patients, to many thousands of synthetic cases. Results: Our experimental results showed high values of the mean dice similarity coefficient (dice = 0.96 during model training and dice = 0.92 for the independent testing dataset). Other evaluation measures were also relatively high, e.g., pixel accuracy = 0.996, F1 score = 0.81, and Matthews Correlation Coefficient, MCC = 0.81. The results and visualization of the DNN-derived tumor masks in the testing dataset showcase the Znet model's capability to localize and auto-segment brain tumors in MR images. This approach can further be generalized to 3D brain volumes, other pathologies, and a wide range of image modalities. Conclusion: We can confirm the ability of deep learning methods and the proposed Znet framework to detect and segment tumors in MR images. Furthermore, pixel accuracy may not be a suitable evaluation measure for semantic segmentation when classes are imbalanced, because the dominant class in the ground-truth images is the background.
A high pixel accuracy can therefore be misleading in some computer vision applications. Alternative evaluation metrics, such as dice and IoU (Intersection over Union), are more faithful measures for semantic segmentation. Clinical impact: Artificial intelligence (AI) applications in medicine are advancing swiftly; however, few techniques have been deployed in clinical practice. This research demonstrates a practical example of AI in medical imaging that can be deployed as a tool for auto-segmentation of tumors in MR images.
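The abstract's point about pixel accuracy versus dice and IoU can be illustrated with a toy example (not from the paper): on a 100-pixel image where only 4 pixels are tumour, a model that predicts pure background still scores 96% pixel accuracy, while dice and IoU expose the failure.

```python
# Toy illustration of why pixel accuracy is misleading under class imbalance,
# while Dice and IoU are more informative. Masks are flat lists of 0/1 labels.

def dice(pred, truth):
    """Dice similarity coefficient for binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over Union (Jaccard index) for binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def pixel_accuracy(pred, truth):
    """Fraction of pixels labelled correctly, regardless of class."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# 100-pixel image: 4 tumour pixels, 96 background pixels.
truth = [1] * 4 + [0] * 96
all_background = [0] * 100          # a model that predicts no tumour at all

print(pixel_accuracy(all_background, truth))  # 0.96 -- looks good
print(dice(all_background, truth))            # 0.0  -- reveals total failure
print(iou(all_background, truth))             # 0.0
```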
Affiliation(s)
- Hanif Abdul Rahman
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
- Ivo D. Dinov
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
28
Rehman MU, Akhtar S, Zakwan M, Mahmood MH. Novel architecture with selected feature vector for effective classification of mitotic and non-mitotic cells in breast cancer histology images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103212] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
29
Chegraoui H, Philippe C, Dangouloff-Ros V, Grigis A, Calmon R, Boddaert N, Frouin F, Grill J, Frouin V. Object Detection Improves Tumour Segmentation in MR Images of Rare Brain Tumours. Cancers (Basel) 2021; 13:6113. [PMID: 34885222 PMCID: PMC8657375 DOI: 10.3390/cancers13236113] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 11/26/2021] [Accepted: 11/30/2021] [Indexed: 11/16/2022] Open
Abstract
Simple Summary This study evaluates the impact of adding an object detection framework to brain tumour segmentation models, especially when the models are applied to different domains. In recent years, multiple models have been successfully applied to brain tumour segmentation tasks. However, the performance and stability of these models have never been evaluated when the training and target domains differ. In this study, we identify object detection as a simpler problem that can be injected into a segmentation model as an a priori constraint and which can increase the performance of our models. We propose an automatic segmentation model that, without retraining or adaptation, showed good results when applied to a rare brain tumour. Abstract Tumour lesion segmentation is a key step in studying and characterising cancer from MR neuroradiological images. Presently, numerous deep learning segmentation architectures have been shown to perform well on the specific tumour type they are trained on (e.g., glioblastoma in the brain hemispheres). However, a high-performing network heavily trained on a given tumour type may perform poorly on a rare tumour type for which no labelled cases allow training or transfer learning. Yet, because some visual similarities exist between common and rare tumours, in the lesion and around it, the problem can be split into two steps: object detection and segmentation. For each step, networks trained on common lesions can be applied to rare ones following a domain adaptation scheme without extra fine-tuning. This work proposes a resilient tumour lesion delineation strategy based on the combination of established elementary networks that perform detection and segmentation. Our strategy allowed us to achieve robust segmentation inference on a rare tumour located in a tumour context region unseen during training.
On Diffuse Intrinsic Pontine Glioma (DIPG), an example of such a rare tumour, we achieve an average dice score of 0.62 without further training or network architecture adaptation.
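The two-step idea — detect first, then segment only inside the detected region — can be sketched with a minimal, hypothetical example. Plain thresholding stands in for the segmentation network, and a coarse binary mask stands in for the object detector's output; both are illustrative simplifications, not the paper's networks.

```python
def bounding_box(mask):
    """Smallest row/column box containing all nonzero entries of a 2D 0/1 mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return min(rows), max(rows), min(cols), max(cols)

def segment_within_box(image, box, threshold):
    """Threshold intensities only inside the detected box; outside is background."""
    r0, r1, c0, c1 = box
    return [[1 if (r0 <= r <= r1 and c0 <= c <= c1 and v >= threshold) else 0
             for c, v in enumerate(row)]
            for r, row in enumerate(image)]

# Coarse detection mask (e.g., from a detector trained on common tumours)...
detection = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
# ...restricts where the segmenter may fire: the bright artefact at (0, 3)
# lies outside the detected box and is therefore ignored.
image = [
    [0.1, 0.2, 0.1, 0.9],
    [0.1, 0.8, 0.7, 0.1],
    [0.2, 0.9, 0.6, 0.1],
    [0.1, 0.1, 0.2, 0.1],
]
seg = segment_within_box(image, bounding_box(detection), threshold=0.5)
```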
Affiliation(s)
- Hamza Chegraoui
- Université Paris-Saclay, Neurospin, CEA, 91191 Gif-sur-Yvette, France; (C.P.); (A.G.)
- Correspondence: (H.C.); (V.F.)
- Cathy Philippe
- Université Paris-Saclay, Neurospin, CEA, 91191 Gif-sur-Yvette, France; (C.P.); (A.G.)
- Volodia Dangouloff-Ros
- Pediatric Radiology Department, Hôpital Necker Enfants Malades, APHP, IMAGINE Institute, Inserm, Université de Paris, 75015 Paris, France; (V.D.-R.); (R.C.); (N.B.)
- Antoine Grigis
- Université Paris-Saclay, Neurospin, CEA, 91191 Gif-sur-Yvette, France; (C.P.); (A.G.)
- Raphael Calmon
- Pediatric Radiology Department, Hôpital Necker Enfants Malades, APHP, IMAGINE Institute, Inserm, Université de Paris, 75015 Paris, France; (V.D.-R.); (R.C.); (N.B.)
- Nathalie Boddaert
- Pediatric Radiology Department, Hôpital Necker Enfants Malades, APHP, IMAGINE Institute, Inserm, Université de Paris, 75015 Paris, France; (V.D.-R.); (R.C.); (N.B.)
- Jacques Grill
- Department of Pediatric and Adolescent Oncology, Gustave Roussy, Inserm U981, Université Paris-Saclay, 94800 Villejuif, France;
- Vincent Frouin
- Université Paris-Saclay, Neurospin, CEA, 91191 Gif-sur-Yvette, France; (C.P.); (A.G.)
- Correspondence: (H.C.); (V.F.)
30
Brain Cancer Prediction Based on Novel Interpretable Ensemble Gene Selection Algorithm and Classifier. Diagnostics (Basel) 2021; 11:1936. [PMID: 34679634 PMCID: PMC8535043 DOI: 10.3390/diagnostics11101936] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 10/12/2021] [Accepted: 10/13/2021] [Indexed: 11/17/2022] Open
Abstract
The growth of abnormal cells in the brain causes human brain tumors. Identifying the type of tumor is crucial for the prognosis and treatment of the patient. Data from cancer microarrays typically include few samples with many gene expression levels as features, reflecting the curse of dimensionality and making the classification of microarray data challenging. In most of the examined studies, cancer classification (malignant vs. benign) accuracy was reported without disclosing biological information related to the classification process. A new approach was proposed to bridge the gap between cancer classification and the biological interpretation of the genes implicated in cancer. This study aims to develop a new hybrid model for cancer classification, using mRMRe feature selection as a key step to improve the performance of the classification methods and distributed hyperparameter optimization for gradient boosting ensemble methods. To evaluate the proposed method, NB, RF, and SVM classifiers were chosen. In terms of AUC, sensitivity, and specificity, the optimized CatBoost classifier performed better than the optimized XGBoost in 5-, 6-, 8-, and 10-fold cross-validation. With an accuracy of 0.91 ± 0.12, the optimized CatBoost classifier is more accurate than the CatBoost classifier without optimization, which achieves 0.81 ± 0.24. With the hybrid algorithms, the SVM, RF, and NB classifiers also become more accurate. Furthermore, in terms of accuracy, SVM and RF (0.97 ± 0.08) achieve equivalent and higher classification accuracy than NB (0.91 ± 0.12). The findings of relevant biomedical studies confirm the relevance of the selected genes.
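The mRMR ("minimum redundancy, maximum relevance") step described above greedily picks features that correlate with the label but not with already-chosen features. A minimal sketch of that idea, using absolute Pearson correlation as the score (the mRMRe package's actual scoring and estimators differ; the toy gene data are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mrmr_select(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing relevance to the
    target minus mean redundancy with the features already selected."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(j):
            relevance = abs(pearson(features[j], target))
            redundancy = (sum(abs(pearson(features[j], features[s]))
                              for s in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Three candidate genes: g0 tracks the label, g1 duplicates g0 (redundant),
# g2 is less correlated with the label but also less redundant with g0.
target = [1, 2, 3, 4, 5]
genes = [[1, 2, 3, 4, 6],   # g0: highly relevant
         [1, 2, 3, 4, 6],   # g1: exact duplicate of g0
         [1, 3, 2, 5, 4]]   # g2: moderately relevant, less redundant

print(mrmr_select(genes, target, 2))  # [0, 2]: the duplicate g1 is skipped
```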
31
iRG-4mC: Neural Network Based Tool for Identification of DNA 4mC Sites in Rosaceae Genome. Symmetry (Basel) 2021. [DOI: 10.3390/sym13050899] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
DNA N4-methylcytosine (4mC) is an epigenetic modification with an essential role in altering different biological processes such as DNA conformation, DNA replication, DNA stability, cell development, and structural alteration in DNA. Because of its adverse effects, it is important to identify modified 4mC sites. Methylcytosine may arise at any cytosine residue; however, clonal gene expression patterns are most likely transmitted only for cytosine residues in strand-symmetrical sequences. Various experimental assays have been introduced for this purpose, but they have proved impractical due to time limitations and high expense. Therefore, there is still a need for an efficient computational method for identifying 4mC sites. With this in mind, this research proposes an efficient model for the Fragaria vesca (F. vesca) and Rosa chinensis (R. chinensis) genomes. The proposed iRG-4mC tool is based on a neural network architecture with two encoding schemes to identify 4mC sites. The iRG-4mC predictor outperformed the existing state-of-the-art computational model by an accuracy margin of 9.95% on F. vesca (training dataset), 8.7% on R. chinensis (training dataset), 6.2% on F. vesca (independent dataset), and 10.6% on R. chinensis (independent dataset). We have also established a web server that is freely accessible to the research community.
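As an illustration of the kind of sequence encoding such predictors consume, here is simple one-hot encoding of a DNA window centred on a candidate cytosine (illustrative only; the paper's two encoding schemes are not detailed in the abstract):

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element indicator vectors, one per base."""
    return [[1 if base == b else 0 for b in BASES] for base in seq.upper()]

# A 5-base window with the candidate cytosine in the middle; a neural
# predictor would take this 5 x 4 matrix (flattened or not) as input.
window = one_hot("AACGT")
```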
32
Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi-Dirac Correction Functions. Entropy (Basel) 2021; 23:223. [PMID: 33670368 PMCID: PMC7918890 DOI: 10.3390/e23020223] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/29/2021] [Accepted: 02/07/2021] [Indexed: 11/16/2022]
Abstract
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then reconstruct the Fermi-Dirac distribution as a correction function for normalizing the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for algorithmic validation, and the proposed Fermi-Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time cost on a low-cost hardware architecture. Even though the global histogram equalization correction function has the lowest computational time among the correction functions employed, the proposed Fermi-Dirac correction function exhibits better image augmentation and segmentation capabilities.
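The Fermi-Dirac distribution the abstract refers to is f(x) = 1 / (exp((x − μ)/T) + 1). A hedged sketch of using it as a soft intensity weight follows; the μ and T values, the rescaling, and the direction of the cut-off are illustrative assumptions, not the authors' parameters:

```python
import math

def fermi_dirac(x, mu, temperature):
    """Fermi-Dirac distribution: 1 / (exp((x - mu) / T) + 1)."""
    return 1.0 / (math.exp((x - mu) / temperature) + 1.0)

def fd_correct(voxels, mu=0.5, temperature=0.05):
    """Rescale intensities to [0, 1], then weight them with the Fermi-Dirac
    function: intensities well below mu keep weight ~1, those well above mu
    are suppressed toward 0 -- a soft, differentiable cut-off, in contrast
    to a hard threshold."""
    lo, hi = min(voxels), max(voxels)
    if hi == lo:
        return [0.5] * len(voxels)   # flat image: every voxel sits at mu
    scaled = [(v - lo) / (hi - lo) for v in voxels]
    return [fermi_dirac(s, mu, temperature) for s in scaled]

weights = fd_correct([0.0, 10.0, 20.0])   # ~[1.0, 0.5, ~0.0]
```

The small temperature makes the transition around μ sharp; raising it softens the filter toward a plain linear normalization.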
33
Ilyas T, Khan A, Umraiz M, Jeong Y, Kim H. Multi-Scale Context Aggregation for Strawberry Fruit Recognition and Disease Phenotyping. IEEE Access 2021; 9:124491-124504. [DOI: 10.1109/access.2021.3110978] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]