1
Prasad V, Jeba Jingle ID, Sriramakrishnan GV. DTDO: Driving Training Development Optimization enabled deep learning approach for brain tumour classification using MRI. Network (Bristol, England) 2024; 35:520-561. PMID: 38801074. DOI: 10.1080/0954898x.2024.2351159.
Abstract
A brain tumour is an abnormal mass of tissue. Brain tumours vary widely in size, and their variations in location, shape, and size add complexity to their detection. The accurate delineation of tumour regions poses a challenge due to their irregular boundaries. In this research, these issues are addressed by introducing DTDO-ZFNet for the detection of brain tumours. The input Magnetic Resonance Imaging (MRI) image is fed to the pre-processing stage. Tumour areas are segmented using SegNet, whose parameters are tuned by DTDO. Image augmentation is carried out with established techniques, such as geometric transformation and colour-space transformation. Features such as the GIST descriptor, PCA-NGIST, statistical and Haralick features, the SLBT feature, and CNN features are then extracted. Finally, the tumour is classified with ZFNet, which is trained using DTDO. The devised DTDO is a consolidation of DTBO and CDDO. Compared with existing methods, the proposed DTDO-ZFNet achieves the highest accuracy of 0.944, a positive predictive value (PPV) of 0.936, a true positive rate (TPR) of 0.939, a negative predictive value (NPV) of 0.937, and a minimal false-negative rate (FNR) of 0.061.
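The Haralick features listed above derive from a gray-level co-occurrence matrix (GLCM). A minimal NumPy sketch of the idea (not the authors' implementation; the horizontal-neighbor offset and the three statistics chosen here are illustrative assumptions):

```python
import numpy as np

def glcm(img, levels):
    """Normalized gray-level co-occurrence matrix for horizontal neighbor pairs."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def haralick_features(p):
    """Contrast, homogeneity, and energy computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    energy = (p ** 2).sum()
    return contrast, homogeneity, energy

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 2, 2]])          # toy quantized image with 3 gray levels
p = glcm(img, levels=3)
contrast, homogeneity, energy = haralick_features(p)
```

In practice the image is first quantized to a small number of gray levels, and several offsets and angles are accumulated into one matrix.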
Affiliation(s)
- Vadamodula Prasad
- Department of Computer Science & Engineering, Lendi Institute of Engineering & Technology, Jonnada, India
- Issac Diana Jeba Jingle
- Department of Computer Science & Engineering, Christ (Deemed to be University), Bangalore, India
2
Dénes-Fazakas L, Kovács L, Eigner G, Szilágyi L. Enhancing Brain Tumor Diagnosis with L-Net: A Novel Deep Learning Approach for MRI Image Segmentation and Classification. Biomedicines 2024; 12:2388. PMID: 39457700. PMCID: PMC11505252. DOI: 10.3390/biomedicines12102388.
Abstract
Background: Brain tumors are highly complex, making their detection and classification a significant challenge in modern medical diagnostics. The accurate segmentation and classification of brain tumors from MRI images are crucial for effective treatment planning. This study aims to develop an advanced neural network architecture that addresses these challenges. Methods: We propose L-net, a novel architecture combining U-net for tumor boundary segmentation and a convolutional neural network (CNN) for tumor classification. The two units are coupled in such a way that the CNN classifies the MRI images based on the features extracted by the U-net while segmenting the tumor, instead of relying on the original input images. The model is trained on a dataset of 3064 high-resolution MRI images, encompassing gliomas, meningiomas, and pituitary tumors, ensuring robust performance across different tumor types. Results: L-net achieved a classification accuracy of up to 99.6%, surpassing existing models in both segmentation and classification tasks. The model demonstrated effectiveness even with lower image resolutions, making it suitable for diverse clinical settings. Conclusions: The proposed L-net model provides an accurate and unified approach to brain tumor segmentation and classification. Its enhanced performance contributes to more reliable and precise diagnosis, supporting early detection and treatment in clinical applications.
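The coupling described above, where the classifier consumes features extracted by the U-net rather than the raw image, can be caricatured in a few lines. Everything below (the pooling "encoder", the threshold "decoder", and the label names) is a hypothetical toy, not L-net itself:

```python
import numpy as np

def encoder(img):
    # stand-in for the U-net encoder: a 2x2 average-pooled feature map
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def segment(feat, thresh=0.5):
    # stand-in for the U-net decoder: threshold the feature map into a mask
    return (feat > thresh).astype(int)

def classify(feat, mask):
    # the coupling idea: the classifier sees encoder features inside the
    # segmented region, never the raw input image
    region = feat[mask == 1]
    if region.size == 0:
        return "no_tumor"
    return "bright_lesion" if region.mean() > 0.8 else "faint_lesion"

img = np.zeros((4, 4))
img[:2, :2] = 1.0                    # toy "tumor" patch
feat = encoder(img)
label = classify(feat, segment(feat))
```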
Affiliation(s)
- Lehel Dénes-Fazakas
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (G.E.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- Doctoral School of Applied Informatics and Applied Mathematics, Obuda University, 1034 Budapest, Hungary
- Levente Kovács
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (G.E.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- György Eigner
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (G.E.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- László Szilágyi
- Physiological Controls Research Center, University Research and Innovation Center, Obuda University, 1034 Budapest, Hungary; (L.D.-F.); (L.K.); (G.E.)
- Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Obuda University, 1034 Budapest, Hungary
- Computational Intelligence Research Group, Sapientia Hungarian University of Transylvania, 547367 Târgu Mureș, Romania
3
Ullah MS, Khan MA, Albarakati HM, Damaševičius R, Alsenan S. Multimodal brain tumor segmentation and classification from MRI scans based on optimized DeepLabV3+ and interpreted networks information fusion empowered with explainable AI. Comput Biol Med 2024; 182:109183. PMID: 39357134. DOI: 10.1016/j.compbiomed.2024.109183.
Abstract
Explainable artificial intelligence (XAI) aims to offer machine learning (ML) methods that enable people to comprehend, properly trust, and create more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models and demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable AI-based framework for segmenting and classifying brain tumors. The framework consists of two parts. In the first part, an encoder-decoder DeepLabv3+ architecture is implemented with Bayesian Optimization (BO) based hyperparameter initialization; features are extracted at multiple scales through the Atrous Spatial Pyramid Pooling (ASPP) technique and passed to the output layer for tumor segmentation. In the second part, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. Features are fused using a serial approach, and classification is performed. BO-based hyperparameter optimization of the neural network classifiers is then performed to refine the classification results. An XAI method named LIME is implemented to check the interpretability of the proposed models. Experiments on the Figshare dataset yielded an average segmentation accuracy of 92.68% and a classification accuracy of 95.42%. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
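LIME, the XAI method named above, explains a single prediction by perturbing the input, querying the black-box model, and fitting a locally weighted linear surrogate whose coefficients act as feature attributions. A crude NumPy sketch under simplifying assumptions (binary on/off perturbations of a feature vector, and a stand-in `black_box` model rather than the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # stand-in for a trained classifier: probability driven mostly by feature 0
    return 1.0 / (1.0 + np.exp(-(2.0 * x[..., 0] - 1.0)))

def lime_explain(x, predict, n_samples=500):
    """Crude LIME: perturb features on/off, query the model, and fit a
    proximity-weighted linear surrogate; its coefficients are attributions."""
    d = x.shape[0]
    masks = rng.integers(0, 2, size=(n_samples, d)).astype(float)
    samples = masks * x                              # zero out switched-off features
    y = predict(samples)
    w = np.exp(-np.sum(1.0 - masks, axis=1) / d)     # proximity kernel weights
    A = np.hstack([masks, np.ones((n_samples, 1))])  # linear model with intercept
    Aw = A * np.sqrt(w)[:, None]                     # weighted least squares
    coef, *_ = np.linalg.lstsq(Aw, np.sqrt(w) * y, rcond=None)
    return coef[:d]                                  # per-feature attribution

attr = lime_explain(np.ones(3), black_box)
```

For images, the perturbed units are superpixels rather than individual features, but the surrogate-fitting step is the same.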
Affiliation(s)
- Muhammad Attique Khan
- Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia.
- Hussain Mubarak Albarakati
- Computer and Network Engineering Department, College of Computing, Umm Al-Qura University, Makkah, 24382, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100, Gliwice, Poland
- Shrooq Alsenan
- Information Systems Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia.
4
Alshomrani F. A Unified Pipeline for Simultaneous Brain Tumor Classification and Segmentation Using Fine-Tuned CNN and Residual UNet Architecture. Life (Basel) 2024; 14:1143. PMID: 39337926. PMCID: PMC11433524. DOI: 10.3390/life14091143.
Abstract
In this paper, I present a comprehensive pipeline integrating a Fine-Tuned Convolutional Neural Network (FT-CNN) and a Residual-UNet (RUNet) architecture for the automated analysis of MRI brain scans. The proposed system addresses the dual challenges of brain tumor classification and segmentation, which are crucial tasks in medical image analysis for precise diagnosis and treatment planning. Initially, the pipeline preprocesses the FigShare brain MRI image dataset, comprising 3064 images, by normalizing and resizing them to achieve uniformity and compatibility with the model. The FT-CNN model then classifies the preprocessed images into distinct tumor types: glioma, meningioma, and pituitary tumor. Following classification, the RUNet model performs pixel-level segmentation to delineate tumor regions within the MRI scans. The FT-CNN leverages the VGG19 architecture, pre-trained on large datasets and fine-tuned for specific tumor classification tasks. Features extracted from MRI images are used to train the FT-CNN, demonstrating robust performance in discriminating between tumor types. Subsequently, the RUNet model, inspired by the U-Net design and enhanced with residual blocks, effectively segments tumors by combining high-resolution spatial information from the encoding path with context-rich features from the bottleneck. My experimental results indicate that the integrated pipeline achieves high accuracy in both classification (96%) and segmentation (98%), showcasing its potential for clinical applications in brain tumor diagnosis. For the classification task, the metrics are loss, accuracy, the confusion matrix, and the classification report; for the segmentation task, the metrics are loss, accuracy, Dice coefficient, intersection over union, and Jaccard distance. To further validate the generalizability and robustness of the integrated pipeline, I evaluated the model on two additional datasets. The first consists of 7023 images for classification, expanding to a four-class task. The second contains approximately 3929 images for both classification and segmentation, including a binary classification scenario. The model demonstrated robust performance, achieving 95% accuracy on the four-class task and 96% accuracy in the binary classification and segmentation tasks, with a Dice coefficient of 95%.
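The segmentation metrics named above (Dice coefficient, intersection over union, Jaccard distance) are straightforward to compute for binary masks; a small NumPy sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)

def jaccard_distance(pred, target):
    # the Jaccard distance reported alongside IoU is simply 1 - IoU
    return 1.0 - iou(pred, target)

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
scores = (dice(pred, target), iou(pred, target), jaccard_distance(pred, target))
```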
Affiliation(s)
- Faisal Alshomrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Science, Taibah University, Medinah 42353, Saudi Arabia
5
Zhang S, Wang J, Sun S, Zhang Q, Zhai Y, Wang X, Ge P, Shi Z, Zhang D. CT Angiography Radiomics Combining Traditional Risk Factors to Predict Brain Arteriovenous Malformation Rupture: a Machine Learning, Multicenter Study. Transl Stroke Res 2024; 15:784-794. PMID: 37311939. DOI: 10.1007/s12975-023-01166-0.
Abstract
This study aimed to develop a machine learning model for predicting brain arteriovenous malformation (bAVM) rupture using a combination of traditional risk factors and radiomics features. This multicenter retrospective study enrolled 586 patients with unruptured bAVMs from 2010 to 2020. All patients were grouped into the hemorrhage (n = 368) and non-hemorrhage (n = 218) groups. Each bAVM nidus was segmented on CT angiography images using Slicer software, and radiomic features were extracted using Pyradiomics. The dataset was split into a training set and an independent testing set. The machine learning model was developed on the training set and validated on the testing set by combining several base estimators with a final estimator using the stacking method. The area under the receiver operating characteristic (ROC) curve, precision, and F1 score were evaluated to determine the performance of the model. The original dataset contained 1790 radiomics features and 8 traditional risk factors, and 241 features remained for model training after L1 regularization filtering. The base estimator of the ensemble model was Logistic Regression, and the final estimator was Random Forest. The area under the ROC curve of the model was 0.982 (0.967-0.996) in the training set and 0.893 (0.826-0.960) in the testing set. This study indicates that radiomics features are a valuable addition to traditional risk factors for predicting bAVM rupture, and that ensemble learning can effectively improve the performance of a prediction model.
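The L1-regularization filtering step described above, which cut 1790 radiomics features down to 241, can be illustrated with a hand-rolled lasso. The sketch below uses iterative soft-thresholding on synthetic data; the data, hyperparameters, and selection threshold are illustrative assumptions, not the study's settings:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, lr=0.01, n_iter=2000):
    """L1-regularized least squares via iterative soft-thresholding (ISTA);
    the nonzero coefficients mark the selected features."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n                  # least-squares gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[0], w_true[1] = 3.0, -2.0                      # only two informative features
y = X @ w_true
w_hat = lasso_ista(X, y)
selected = np.where(np.abs(w_hat) > 0.5)[0]           # surviving feature indices
```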
Affiliation(s)
- Shaosen Zhang
- Department of Neurosurgery, Beijing Hospital, National Center of Gerontology; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Junjie Wang
- Department of Neurosurgery, Beijing Hospital, National Center of Gerontology; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Shengjun Sun
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Qian Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yuanren Zhai
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiaochen Wang
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Peicong Ge
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Zhiyong Shi
- Department of Neurosurgery, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Dong Zhang
- Department of Neurosurgery, Beijing Hospital, National Center of Gerontology; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China.
6
Al-Ostoot FH, Salah S, Khanum SA. An Overview of Cancer Biology, Pathophysiological Development and It's Treatment Modalities: Current Challenges of Cancer anti-Angiogenic Therapy. Cancer Invest 2024; 42:559-604. PMID: 38874308. DOI: 10.1080/07357907.2024.2361295.
Abstract
A number of conditions and factors can cause the transformation of normal cells in the body into malignant tissue by changing the normal functions of a wide range of regulatory, apoptotic, and signal transduction pathways. Although the mechanisms of cancer are not yet fully and clearly understood, numerous genes and proteins that are causally involved in the initiation, progression, and metastasis of cancer have been identified. Given the abundance of detail on this complex topic, we emphasize here more recent advances in our understanding of the principles underlying tumor cell transformation, development, invasion, angiogenesis, and metastasis. Inhibition of angiogenesis is a significant strategy for the treatment of various solid tumors; it essentially depends on cutting, or at least limiting, the supply of blood to micro-regions of tumors, leading to pan-hypoxia and pan-necrosis inside solid tumor tissues. Over the past two decades, researchers have continued to enhance the efficiency of anti-angiogenic drugs, to identify their potential drug interactions, and to find reasonable interpretations for possible resistance to treatment. In this review, we give an overview of cancer history and recent methods used in cancer therapy, focusing on anti-angiogenic inhibitors targeting angiogenesis. We explain the molecular mechanism of action of these inhibitors in various tumor types and the limitations of their use, describe the synergistic mechanisms of immunotherapy and anti-angiogenic therapy, and summarize current clinical trials of these combinations; many phase III trials found that combining immunotherapy and anti-angiogenic therapy improved survival. Therefore, targeting the blood supply that cancer cells need to grow and spread with new anti-angiogenic agents, in combination with different conventional therapies, is a novel approach to reducing cancer progression. The aim of this paper is to survey the varying concepts of cancer, focusing on the mechanisms involved in tumor angiogenesis, and to provide an overview of recent trends in anti-angiogenic strategies for cancer therapy.
Affiliation(s)
- Fares Hezam Al-Ostoot
- Department of Chemistry, Yuvaraja's College, University of Mysore, Mysuru, India
- Department of Biochemistry, Faculty of Education & Science, Albaydha University, Al-Baydha, Yemen
- Salma Salah
- Faculty of Medicine and Health Sciences, Thamar University, Dhamar, Yemen
- Shaukath Ara Khanum
- Department of Chemistry, Yuvaraja's College, University of Mysore, Mysuru, India
7
Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Choudhry IA, Anwar MS. Hybrid deep spatial and statistical feature fusion for accurate MRI brain tumor classification. Front Comput Neurosci 2024; 18:1423051. PMID: 38978524. PMCID: PMC11228303. DOI: 10.3389/fncom.2024.1423051.
Abstract
The classification of medical images is crucial in the biomedical field, and despite attempts to address the issue, significant challenges persist. To effectively categorize medical images, collecting and integrating statistical information that accurately describes the image is essential. This study proposes a unique method for feature extraction that combines deep spatial characteristics with handcrafted statistical features. The approach involves extracting statistical radiomics features using advanced techniques, followed by a novel handcrafted feature fusion method inspired by the ResNet deep learning model. A new feature fusion framework (FusionNet) is then used to reduce image dimensionality and simplify computation. The proposed approach is tested on MRI images of brain tumors from the BraTS dataset, and the results show that it outperforms existing methods in classification accuracy. The study presents three models, a handcrafted-feature model and two CNN models, which completed the binary classification task. The recommended hybrid approach achieved a high F1 score of 96.12 ± 0.41, precision of 97.77 ± 0.32, and accuracy of 97.53 ± 0.24, indicating its potential to serve as a valuable tool for pathologists.
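The fusion of handcrafted statistical features with deep features described above can be illustrated with a toy sketch. The descriptors and the serial concatenation below are illustrative stand-ins for the paper's radiomics features and FusionNet, not their actual implementation:

```python
import numpy as np

def statistical_features(img):
    """Simple handcrafted descriptors: intensity statistics plus histogram entropy."""
    flat = img.ravel()
    hist, _ = np.histogram(flat, bins=8, range=(0.0, 1.0))
    p = hist / hist.sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([flat.mean(), flat.std(), flat.min(), flat.max(), entropy])

def serial_fusion(deep_feat, handcrafted_feat):
    # "serial" fusion = concatenation into one longer feature vector
    return np.concatenate([deep_feat, handcrafted_feat])

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy normalized image
stats = statistical_features(img)
fused = serial_fusion(np.zeros(16), stats)      # 16 mock deep features + 5 handcrafted
```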
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology and Computer Science, University of Central Punjab, Lahore, Pakistan
- Adnan N. Qureshi
- Faculty of Arts, Society, and Professional Studies, Newman University, Birmingham, United Kingdom
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Imran Arshad Choudhry
- Department of Computer Science, Faculty of Information Technology and Computer Science, University of Central Punjab, Lahore, Pakistan
8
Saran Raj S, Surendiran B, Raja SP. Designing a deep hybridized residual and SE model for MRI image-based brain tumor prediction. J Clin Ultrasound 2024; 52:588-599. PMID: 38567722. DOI: 10.1002/jcu.23679.
Abstract
Deep learning techniques have become crucial in the detection of brain tumors, but classifying numerous images is time-consuming and error-prone, which can hinder timely diagnosis. To address this limitation, this study introduces a novel brain tumor detection system. The main objective is to overcome the challenges associated with acquiring a large and well-classified dataset. The proposed approach involves generating synthetic Magnetic Resonance Imaging (MRI) images that mimic the patterns commonly found in brain MRI images. The system starts from a dataset of small images with an unbalanced class distribution. To enhance the accuracy of tumor detection, two deep learning models are employed. A hybrid ResNet+SE model captures feature distributions within the unbalanced classes to create a more balanced dataset; a second, tailored classifier then identifies brain tumors in MRI images. The proposed method has shown promising results, achieving a high detection accuracy of 98.79%. This highlights the potential of the model as an efficient and cost-effective system for brain tumor detection.
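The class-balancing idea above, generating extra samples for under-represented classes, can be sketched with simple label-preserving augmentations standing in for the hybrid ResNet+SE generator (the transforms and the toy dataset below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def augment(img):
    # cheap label-preserving transforms: identity, flips, 90-degree rotation
    ops = [lambda a: a, np.fliplr, np.flipud, np.rot90]
    return ops[rng.integers(len(ops))](img)

def balance(images, labels):
    """Oversample minority classes with augmented copies until all classes match."""
    labels = np.asarray(labels)
    counts = {c: int((labels == c).sum()) for c in np.unique(labels)}
    target = max(counts.values())
    out_imgs, out_labels = list(images), labels.tolist()
    for c, n in counts.items():
        idx = np.where(labels == c)[0]
        for _ in range(target - n):
            out_imgs.append(augment(images[rng.choice(idx)]))
            out_labels.append(c)
    return out_imgs, out_labels

images = [np.full((4, 4), float(i)) for i in range(6)]
imgs_bal, labs_bal = balance(images, [0, 0, 0, 0, 1, 1])
```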
Affiliation(s)
- S Saran Raj
- Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
- B Surendiran
- Department of Computer Science and Engineering, National Institute of Technology Puducherry, Puducherry, India
- S P Raja
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India
9
Batool A, Byun YC. Brain tumor detection with integrating traditional and computational intelligence approaches across diverse imaging modalities - Challenges and future directions. Comput Biol Med 2024; 175:108412. PMID: 38691914. DOI: 10.1016/j.compbiomed.2024.108412.
Abstract
Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring various approaches based on image processing, machine learning, and deep learning. It reviews existing methodologies, discusses their advantages and limitations, and highlights recent advancements in the field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, the study highlights the challenges posed by segmentation and classification techniques and by datasets spanning various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.
Affiliation(s)
- Amreen Batool
- Department of Electronic Engineering, Institute of Information Science & Technology, Jeju National University, Jeju, 63243, South Korea
- Yung-Cheol Byun
- Department of Computer Engineering, Major of Electronic Engineering, Jeju National University, Institute of Information Science Technology, Jeju, 63243, South Korea.
10
Moghtaderi S, Einlou M, Wahid KA, Lukong KE. Advancing multimodal medical image fusion: an adaptive image decomposition approach based on multilevel Guided filtering. Royal Society Open Science 2024; 11:rsos.231762. PMID: 38601031. PMCID: PMC11004680. DOI: 10.1098/rsos.231762.
Abstract
With the rapid development of medical imaging methods, multimodal medical image fusion techniques have caught the interest of researchers. The aim is to preserve information from diverse sensors using various models to generate a single informative image. The main challenge is to derive a trade-off between the spatial and spectral qualities of the resulting fused image and the computing efficiency. This article proposes a fast and reliable method for medical image fusion based on a multilevel Guided edge-preserving filtering (MLGEPF) decomposition rule. First, each multimodal medical image is divided into three sublayer categories using the MLGEPF decomposition scheme: a small-scale component, a large-scale component, and a background component. Second, two fusion strategies, a pulse-coupled neural network based on the structure tensor and a maximum-based rule, are applied to combine the three types of layers according to their properties. The three fused sublayers are then combined to create the final fused image. A total of 40 pairs of brain images from four separate categories of medical conditions were tested in experiments. The image pairs cover various case studies, including magnetic resonance imaging (MRI), TITc, single-photon emission computed tomography (SPECT), and positron emission tomography (PET). Qualitative analysis demonstrates that the visual contrast between the structure and the surrounding tissue is increased by our proposed method. To further support the visual comparison, a group of observers compared our method's outputs with those of other methods and scored them. Overall, our proposed fusion scheme increased visual contrast and received positive subjective reviews. Objective assessment indicators for each category of medical conditions are also included. Our method achieves high scores on the feature mutual information (FMI), sum of correlation of differences (SCD), Qabf, and Qy indexes, implying better performance in information preservation and structural and visual transfer.
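The decompose-fuse-recombine pattern described above can be illustrated with a two-scale toy version. A plain mean filter stands in for the multilevel Guided filter, and average/max-absolute rules stand in for the paper's three-layer fusion strategies:

```python
import numpy as np

def mean_filter(img, r=1):
    # edge-padded box filter; only a crude stand-in for guided filtering
    pad = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def fuse_two_scale(a, b):
    """Decompose each source into base + detail layers, fuse bases by
    averaging and details by the maximum-absolute rule, then recombine."""
    base_a, base_b = mean_filter(a), mean_filter(b)
    detail_a, detail_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    return base + detail

a = np.zeros((5, 5)); a[2, 2] = 1.0   # modality with a bright focal detail
b = np.full((5, 5), 0.5)              # modality with a flat background
fused = fuse_two_scale(a, b)
```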
Affiliation(s)
- Shiva Moghtaderi
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Mokarrameh Einlou
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Khan A. Wahid
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9, Canada
- Kiven Erique Lukong
- Department of Biochemistry, Microbiology and Immunology, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5E5, Canada
11
Saluja S, Trivedi MC, Saha A. Deep CNNs for glioma grading on conventional MRIs: Performance analysis, challenges, and future directions. Math Biosci Eng 2024; 21:5250-5282. PMID: 38872535. DOI: 10.3934/mbe.2024232.
Abstract
The increasing global incidence of glioma tumors has raised significant healthcare concerns due to their high mortality rates. Traditionally, tumor diagnosis relies on visual analysis of medical imaging and invasive biopsies for precise grading. As an alternative, computer-assisted methods, particularly deep convolutional neural networks (DCNNs), have gained traction. This research paper explores the recent advancements in DCNNs for glioma grading using brain magnetic resonance images (MRIs) from 2015 to 2023. The study evaluated various DCNN architectures and their performance, revealing remarkable results, with models such as hybrid and ensemble-based DCNNs achieving accuracy levels of up to 98.91%. However, challenges persist in the form of limited datasets, lack of external validation, and variations in grading formulations across the literature. Addressing these challenges by expanding datasets, conducting external validation, and standardizing grading formulations can enhance the performance and reliability of DCNNs in glioma grading, thereby advancing brain tumor classification and extending its applications to other neurological disorders.
Affiliation(s)
- Sonam Saluja
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Munesh Chandra Trivedi
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
- Ashim Saha
- Department of Computer Science and Engineering, National Institute of Technology Agartala, Tripura 799046, India
12
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. PMID: 38399314. PMCID: PMC10892549. DOI: 10.3390/pharmaceutics16020260.
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. Machine learning (ML) and mathematical modeling (MM) based techniques are now widely used, and these two approaches have fueled the advancement of cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes across different types of cancer research, vast databases of varied, high-dimensional information are required, and this information can only be managed by automated systems developed using ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and to refine existing treatment models by being incorporated at all steps of cancer research and development and in routine patient care. This review consolidates the advancements and benefits of ML and MM techniques with a special focus on cancer prognosis and anticancer therapy, and identifies challenges (data quantity, ethical considerations, and data privacy) that are yet to be fully addressed by current studies.
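The PK/PD relationship mentioned above is often probed with compartment models. A minimal illustrative sketch, assuming a one-compartment IV-bolus model with unit volume of distribution and an Emax response; all parameter values here are arbitrary, not drawn from the review:

```python
import numpy as np

def concentration(dose, ke, t):
    """One-compartment IV-bolus pharmacokinetics: C(t) = dose * exp(-ke * t),
    with the volume of distribution taken as 1 for simplicity."""
    return dose * np.exp(-ke * t)

def emax_response(conc, emax=1.0, ec50=2.0):
    """Emax pharmacodynamic model linking concentration to effect."""
    return emax * conc / (ec50 + conc)

t = np.linspace(0.0, 24.0, 100)            # hours after dosing
c = concentration(dose=10.0, ke=0.2, t=t)  # elimination rate constant 0.2/h
effect = emax_response(c)
```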
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh; (J.H.); (S.M.S.)
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
13
Pal S, Singh RP, Kumar A. Analysis of Hybrid Feature Optimization Techniques Based on the Classification Accuracy of Brain Tumor Regions Using Machine Learning and Further Evaluation Based on the Institute Test Data. J Med Phys 2024; 49:22-32. [PMID: 38828069 PMCID: PMC11141750 DOI: 10.4103/jmp.jmp_77_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 02/23/2024] [Accepted: 02/23/2024] [Indexed: 06/05/2024] Open
Abstract
Aim The goal of this study was to obtain optimal brain tumor features from magnetic resonance imaging (MRI) images and classify them into three tumor regions: peritumoral edema, enhancing core, and necrotic tumor core, using machine learning classification models. Materials and Methods This study's dataset was obtained from the multimodal brain tumor segmentation challenge. A total of 599 brain MRI studies were employed, all in Neuroimaging Informatics Technology Initiative format. The dataset was divided into training, validation, and testing subsets; the testing subset is referred to as the online test dataset (OTD). The dataset includes four types of MRI series, which were combined and processed for intensity normalization using contrast-limited adaptive histogram equalization. To extract radiomics features, a Python-based library called pyRadiomics was employed. Particle swarm optimization (PSO) with varying inertia weights was used for feature optimization. Inertia weight with a linearly decreasing strategy (W1), inertia weight with a nonlinear coefficient decreasing strategy (W2), and inertia weight with a logarithmic strategy (W3) were the strategies used to vary the inertia weight in PSO. The selected features were further optimized using the principal component analysis (PCA) method to further reduce dimensionality, remove noise, and improve the performance and efficiency of subsequent algorithms. Support vector machine (SVM), light gradient boosting (LGB), and extreme gradient boosting (XGB) machine learning classification algorithms were utilized for the classification of images into different tumor regions using the optimized features. The proposed method was also tested on institute test data (ITD) comprising 30 patient images.
Results For the OTD, the classification accuracy of SVM was 0.989, of the LGB model (LGBM) 0.992, and of the XGB model (XGBM) 0.994 using the varying inertia weight-PSO optimization method; the classification accuracy of SVM was 0.996, of the LGBM 0.998, and of the XGBM 0.994 using PSO and PCA, a hybrid optimization technique. For the ITD, the classification accuracy of SVM was 0.994, of the LGBM 0.993, and of the XGBM 0.997 using the hybrid optimization technique. Conclusion The results suggest that the proposed method can be used to classify a brain tumor, as in this study, into three regions: peritumoral edema, enhancing core, and necrotic tumor core. This was achieved by extracting features of the tumor, such as its shape, gray level, and gray-level co-occurrence matrix, and then choosing the best features using hybrid optimal feature selection techniques, without much human expertise and in far less time than a manual analysis would take.
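The three inertia-weight strategies named in the abstract (W1 linear, W2 nonlinear coefficient, W3 logarithmic) can be sketched as below. The endpoints (w_max = 0.9, w_min = 0.4), the nonlinear exponent k, and the helper pso_velocity are illustrative assumptions, not the paper's exact settings.

```python
import math
import random

def w_linear(t, t_max, w_max=0.9, w_min=0.4):
    """W1: inertia weight with a linearly decreasing strategy."""
    return w_max - (w_max - w_min) * t / t_max

def w_nonlinear(t, t_max, w_max=0.9, w_min=0.4, k=1.2):
    """W2: inertia weight with a nonlinear coefficient decreasing strategy."""
    return w_min + (w_max - w_min) * (1 - t / t_max) ** k

def w_log(t, t_max, w_max=0.9, w_min=0.4):
    """W3: inertia weight with a logarithmic decreasing strategy."""
    return w_max - (w_max - w_min) * math.log(1 + t) / math.log(1 + t_max)

def pso_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0):
    """Standard PSO velocity update; w is the (scheduled) inertia weight."""
    r1, r2 = random.random(), random.random()
    return [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

All three schedules start at w_max and decay to w_min over t_max iterations; only the shape of the decay differs, which is what the W1/W2/W3 comparison in the paper varies.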
Affiliation(s)
- Soniya Pal
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Batra Hospital and Medical Research Center, New Delhi, India
- Raj Pal Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Anuj Kumar
- Department of Radiotherapy, S. N. Medical College, Agra, Uttar Pradesh, India
14
Rasa SM, Islam MM, Talukder MA, Uddin MA, Khalid M, Kazi M, Kazi MZ. Brain tumor classification using fine-tuned transfer learning models on magnetic resonance imaging (MRI) images. Digit Health 2024; 10:20552076241286140. [PMID: 39381813 PMCID: PMC11459499 DOI: 10.1177/20552076241286140] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2024] [Accepted: 08/30/2024] [Indexed: 10/10/2024] Open
Abstract
Objective Brain tumors are a leading global cause of mortality, often leading to reduced life expectancy and challenging recovery. Early detection significantly improves survival rates. This paper introduces an efficient deep learning model to expedite brain tumor detection through timely and accurate identification using magnetic resonance imaging images. Methods Our approach leverages deep transfer learning with six transfer learning algorithms: VGG16, ResNet50, MobileNetV2, DenseNet201, EfficientNetB3, and InceptionV3. We optimize data preprocessing, upsample data through augmentation, and train the models using two optimizers: Adam and AdaMax. We perform three experiments with binary and multi-class datasets, fine-tuning parameters to reduce overfitting. Model effectiveness is analyzed using various performance scores with and without cross-validation. Results With smaller datasets, the models achieve 100% accuracy in both training and testing without cross-validation. After applying cross-validation, the framework records an outstanding accuracy of 99.96% with a receiver operating characteristic of 100% on average across five tests. For larger datasets, accuracy ranges from 96.34% to 98.20% across different models. The methodology also demonstrates a small computation time, contributing to its reliability and speed. Conclusion The study establishes a new standard for brain tumor classification, surpassing existing methods in accuracy and efficiency. Our deep learning approach, incorporating advanced transfer learning algorithms and optimized data processing, provides a robust and rapid solution for brain tumor detection.
Affiliation(s)
- Sadia Maduri Rasa
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
- Mohammed Alamin Talukder
- Department of Computer Science and Engineering, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Majdi Khalid
- Department of Computer Science and Artificial Intelligence, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
- Mohsin Kazi
- Department of Pharmaceutics, College of Pharmacy, King Saud University, Riyadh, Saudi Arabia
- Mohammed Zobayer Kazi
- Department of Computer Science and Engineering, Jagannath University, Dhaka, Bangladesh
15
Zheng Y, Huang D, Hao X, Wei J, Lu H, Liu Y. UniVisNet: A Unified Visualization and Classification Network for accurate grading of gliomas from MRI. Comput Biol Med 2023; 165:107332. [PMID: 37598632 DOI: 10.1016/j.compbiomed.2023.107332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 07/30/2023] [Accepted: 08/07/2023] [Indexed: 08/22/2023]
Abstract
Accurate grading of brain tumors plays a crucial role in the diagnosis and treatment of glioma. While convolutional neural networks (CNNs) have shown promising performance in this task, their clinical applicability is still constrained by the interpretability and robustness of the models. In the conventional framework, the classification model is trained first, and then visual explanations are generated. However, this approach often leads to models that prioritize classification performance or complexity, making it difficult to achieve a precise visual explanation. Motivated by these challenges, we propose the Unified Visualization and Classification Network (UniVisNet), a novel framework that aims to improve both the classification performance and the generation of high-resolution visual explanations. UniVisNet addresses attention misalignment by introducing a subregion-based attention mechanism, which replaces traditional down-sampling operations. Additionally, multiscale feature maps are fused to achieve higher resolution, enabling the generation of detailed visual explanations. To streamline the process, we introduce the Unified Visualization and Classification head (UniVisHead), which directly generates visual explanations without the need for additional separation steps. Through extensive experiments, our proposed UniVisNet consistently outperforms strong baseline classification models and prevalent visualization methods. Notably, UniVisNet achieves remarkable results on the glioma grading task, including an AUC of 94.7%, an accuracy of 89.3%, a sensitivity of 90.4%, and a specificity of 85.3%. Moreover, UniVisNet provides visually interpretable explanations that surpass existing approaches. In conclusion, UniVisNet innovatively generates visual explanations in brain tumor grading by simultaneously improving the classification performance and generating high-resolution visual explanations. 
This work contributes to the clinical application of deep learning, empowering clinicians with comprehensive insights into the spatial heterogeneity of glioma.
Affiliation(s)
- Yao Zheng
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Dong Huang
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Xiaoshuo Hao
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Jie Wei
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Hongbing Lu
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Yang Liu
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
16
Yang Z, Hu Z, Ji H, Lafata K, Vaios E, Floyd S, Yin FF, Wang C. A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation. Med Phys 2023; 50:4825-4838. [PMID: 36840621 PMCID: PMC10440249 DOI: 10.1002/mp.16286] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 01/26/2023] [Accepted: 01/30/2023] [Indexed: 02/26/2023] Open
Abstract
PURPOSE To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns. 
CONCLUSION The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
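The core idea above, deep feature extraction governed by an ODE dh/dt = f(h, t) parameterized by a network, can be illustrated with a fixed-step Euler solver whose intermediate states form the visualizable trajectory. The scalar tanh "network" f and its weights here are stand-ins for illustration, not the paper's trained model.

```python
import math

def f(h, t, w=0.5, b=0.1):
    """Stand-in network-parameterized dynamics for a scalar hidden state.
    In a real Neural ODE, f would be a trained neural network."""
    return math.tanh(w * h + b)

def odeint_euler(h0, t0, t1, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h, t).
    Returns the full trajectory of hidden states, which is what
    makes the feature-extraction process visualizable."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    traj = [h]
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
        traj.append(h)
    return traj
```

Solving the ODE yields every intermediate state, not just the final one, which is the property the paper exploits to visualize how MR image features evolve toward the segmentation.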
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Zongsheng Hu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Hangjie Ji
- Department of Mathematics, North Carolina State University, Raleigh, North Carolina, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Scott Floyd
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
17
Zheng Y, Huang D, Feng Y, Hao X, He Y, Liu Y. CSF-Glioma: A Causal Segmentation Framework for Accurate Grading and Subregion Identification of Gliomas. Bioengineering (Basel) 2023; 10:887. [PMID: 37627772 PMCID: PMC10451284 DOI: 10.3390/bioengineering10080887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 07/22/2023] [Accepted: 07/24/2023] [Indexed: 08/27/2023] Open
Abstract
Deep networks have shown strong performance in glioma grading; however, interpreting their decisions remains challenging due to glioma heterogeneity. To address these challenges, the proposed solution is the Causal Segmentation Framework (CSF). This framework aims to accurately predict high- and low-grade gliomas while simultaneously highlighting key subregions. Our framework utilizes a shrinkage segmentation method to identify subregions containing essential decision information. Moreover, we introduce a glioma grading module that combines deep learning and traditional approaches for precise grading. Our proposed model achieves the best performance among all models, with an AUC of 96.14%, an F1 score of 93.74%, an accuracy of 91.04%, a sensitivity of 91.83%, and a specificity of 88.88%. Additionally, our model exhibits efficient resource utilization, completing predictions within 2.31s and occupying only 0.12 GB of memory during the test phase. Furthermore, our approach provides clear and specific visualizations of key subregions, surpassing other methods in terms of interpretability. In conclusion, the Causal Segmentation Framework (CSF) demonstrates its effectiveness at accurately predicting glioma grades and identifying key subregions. The inclusion of causality in the CSF model enhances the reliability and accuracy of preoperative decision-making for gliomas. The interpretable results provided by the CSF model can assist clinicians in their assessment and treatment planning.
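The sensitivity, specificity, and accuracy figures reported above follow the standard confusion-matrix definitions, sketched below for binary labels (1 = high-grade, 0 = low-grade in this illustration; the function name is ours, not the paper's).

```python
def grading_metrics(y_true, y_pred):
    """Sensitivity (TPR), specificity (TNR), and accuracy from
    paired binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```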
Affiliation(s)
- Yao Zheng
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Dong Huang
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi’an 710032, China
- Yuefei Feng
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Xiaoshuo Hao
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Yutao He
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Yang Liu
- School of Biomedical Engineering, Air Force Medical University, No. 169 Changle West Road, Xi’an 710032, China
- Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi’an 710032, China
18
Lim CC, Ling AHW, Chong YF, Mashor MY, Alshantti K, Aziz ME. Comparative Analysis of Image Processing Techniques for Enhanced MRI Image Quality: 3D Reconstruction and Segmentation Using 3D U-Net Architecture. Diagnostics (Basel) 2023; 13:2377. [PMID: 37510120 PMCID: PMC10377862 DOI: 10.3390/diagnostics13142377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 06/29/2023] [Accepted: 07/05/2023] [Indexed: 07/30/2023] Open
Abstract
Osteosarcoma is a common type of bone tumor, particularly prevalent in children and adolescents between the ages of 5 and 25 who are experiencing growth spurts during puberty. Manual delineation of tumor regions in MRI images can be laborious and time-consuming, and results may be subjective and difficult to replicate. Therefore, a convolutional neural network (CNN) was developed to automatically segment osteosarcoma cancerous cells in three types of MRI images. The study consisted of five main stages. First, 3692 DICOM format MRI images were acquired from 46 patients, including T1-weighted, T2-weighted, and T1-weighted with injection of Gadolinium (T1W + Gd) images. Contrast stretching and median filter were applied to enhance image intensity and remove noise, and the pre-processed images were reconstructed into NIfTI format files for deep learning. The MRI images were then transformed to fit the CNN's requirements. A 3D U-Net architecture was proposed with optimized parameters to build an automatic segmentation model capable of segmenting osteosarcoma from the MRI images. The 3D U-Net segmentation model achieved excellent results, with mean dice similarity coefficients (DSC) of 83.75%, 85.45%, and 87.62% for T1W, T2W, and T1W + Gd images, respectively. However, the study found that the proposed method had some limitations, including poorly defined borders, missing lesion portions, and other confounding factors. In summary, an automatic segmentation method based on a CNN has been developed to address the challenge of manually segmenting osteosarcoma cancerous cells in MRI images. While the proposed method showed promise, the study revealed limitations that need to be addressed to improve its efficacy.
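The mean dice similarity coefficient (DSC) quoted above follows the standard overlap definition; a minimal sketch on flattened binary masks (the empty-mask convention of returning 1.0 is a common choice, assumed here):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary segmentation
    masks, given as flattened lists of 0/1; 1.0 is perfect overlap."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```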
Affiliation(s)
- Chee Chin Lim
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Apple Ho Wei Ling
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Yen Fook Chong
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Yusoff Mashor
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Ezane Aziz
- Department of Radiology, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
19
Mishra M, Pati UC. A classification framework for Autism Spectrum Disorder detection using sMRI: Optimizer based ensemble of deep convolution neural network with on-the-fly data augmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/26/2023]
20
Yepes-Calderon F, McComb JG. Eliminating the need for manual segmentation to determine size and volume from MRI. A proof of concept on segmenting the lateral ventricles. PLoS One 2023; 18:e0285414. [PMID: 37167315 PMCID: PMC10174587 DOI: 10.1371/journal.pone.0285414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Accepted: 04/23/2023] [Indexed: 05/13/2023] Open
Abstract
Manual segmentation, which is tedious, time-consuming, and operator-dependent, is currently used as the gold standard to validate automatic and semiautomatic methods that quantify geometries from 2D and 3D MR images. This study examines the accuracy of manual segmentation and generalizes a strategy to eliminate its use. Trained individuals manually measured MR images of the lateral ventricles of normal and hydrocephalic infants from 1 month to 9.5 years of age. We created 3D-printed models of the lateral ventricles from the MRI studies and accurately estimated their volume by water displacement. MRI phantoms were made from the 3D models and images obtained. Using a previously developed artificial intelligence (AI) algorithm that employs four features extracted from the images, we estimated the ventricular volume of the phantom images. The algorithm was certified when discrepancies between the volumes (gold standards) yielded by the water displacement device and those measured by the automation were smaller than 2%. Then, we compared volumes after manual segmentation with those obtained with the certified automation. As determined by manual segmentation, lateral ventricular volume yielded inter- and intra-operator variation of up to 50% and 48%, respectively, while manually segmenting sagittal images generated errors of up to 71%. These errors were determined by direct comparison with the volumes yielded by the certified automation. The errors induced by manual segmentation are large enough to adversely affect decisions that may lead to less-than-optimal treatment; therefore, we suggest avoiding manual segmentation whenever possible.
Affiliation(s)
- Fernando Yepes-Calderon
- Science-Based Platforms, Fort Pierce, Florida, United States of America
- GYM Group SA, Cali, Colombia
- J. Gordon McComb
- Division of Neurosurgery, Children’s Hospital Los Angeles, Los Angeles, CA, United States of America
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America
21
Alshamrani HA, Rashid M, Alshamrani SS, Alshehri AHD. Osteo-NeT: An Automated System for Predicting Knee Osteoarthritis from X-ray Images Using Transfer-Learning-Based Neural Networks Approach. Healthcare (Basel) 2023; 11:healthcare11091206. [PMID: 37174748 PMCID: PMC10178688 DOI: 10.3390/healthcare11091206] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 04/20/2023] [Accepted: 04/20/2023] [Indexed: 05/15/2023] Open
Abstract
Knee osteoarthritis is a challenging problem affecting many adults around the world. There are currently no medications that cure knee osteoarthritis. The only way to control the progression of knee osteoarthritis is early detection. Currently, X-ray imaging is a central technique used for the prediction of osteoarthritis. However, the manual X-ray technique is prone to errors due to the lack of expertise of radiologists. Recent studies have described the use of automated systems based on machine learning for the effective prediction of osteoarthritis from X-ray images. However, most of these techniques still need to achieve higher predictive accuracy to detect osteoarthritis at an early stage. This paper suggests a method with higher predictive accuracy that can be employed in the real world for the early detection of knee osteoarthritis. In this paper, we suggest the use of transfer learning models based on sequential convolutional neural networks (CNNs), Visual Geometry Group 16 (VGG-16), and Residual Neural Network 50 (ResNet-50) for the early detection of osteoarthritis from knee X-ray images. In our analysis, we found that all the suggested models achieved a higher level of predictive accuracy, greater than 90%, in detecting osteoarthritis. However, the best-performing model was the pretrained VGG-16 model, which achieved a training accuracy of 99% and a testing accuracy of 92%.
Affiliation(s)
- Hassan A Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 11001, Saudi Arabia
- Mamoon Rashid
- Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune 411048, India
- Research Center of Excellence for Health Informatics, Vishwakarma University, Pune 411048, India
- Sultan S Alshamrani
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
- Ali H D Alshehri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 11001, Saudi Arabia
22
Reddy KR, Batchu RK, Polinati S, Bavirisetti DP. Design of a medical decision-supporting system for the identification of brain tumors using entropy-based thresholding and non-local texture features. Front Hum Neurosci 2023; 17:1157155. [PMID: 37033909 PMCID: PMC10073563 DOI: 10.3389/fnhum.2023.1157155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 03/06/2023] [Indexed: 04/11/2023] Open
Abstract
Introduction Brain tumors arise due to abnormal growth of cells at any brain location with uneven boundaries and shapes. Usually, they proliferate rapidly, and their size increases by approximately 1.4% a day, resulting in invisible illness and psychological and behavioral changes in the human body. It is one of the leading causes of the increase in the mortality rate of adults worldwide. Therefore, early prediction of brain tumors is crucial in saving a patient's life. In addition, selecting a suitable imaging sequence also plays a significant role in treating brain tumors. Among available techniques, the magnetic resonance (MR) imaging modality is widely used due to its noninvasive nature and ability to represent the inherent details of brain tissue. Several computer-assisted diagnosis (CAD) approaches have recently been developed based on these observations. However, there is scope for improvement due to tumor characteristics and image noise variations. Hence, it is essential to establish a new paradigm. Methods This paper attempts to develop a new medical decision-support system for detecting and differentiating brain tumors from MR images. In the implemented approach, initially, we improve the contrast and brightness using the tuned single-scale retinex (TSSR) approach. Then, we extract the infected tumor region(s) using maximum entropy-based thresholding and morphological operations. Furthermore, we obtain the relevant texture features based on the non-local binary pattern (NLBP) feature descriptor. Finally, the extracted features are subjected to a support vector machine (SVM), K-nearest neighbors (KNN), random forest (RF), and GentleBoost (GB). Results The presented CAD model achieved 99.75% classification accuracy with 5-fold cross-validation and a 91.88% dice similarity score, which is higher than the existing models. 
Discussion By analyzing the experimental outcomes, we conclude that our method can be used as a supportive clinical tool for physicians during the diagnosis of brain tumors.
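Maximum entropy-based thresholding is classically formulated as Kapur's method, which picks the gray level that maximizes the summed entropies of the foreground and background classes. A minimal histogram-based sketch follows; the paper's exact variant may differ.

```python
import math

def kapur_threshold(hist):
    """Kapur's maximum-entropy threshold for a grayscale histogram
    (list of pixel counts per gray level). Returns the level t that
    maximizes the entropy of class [0..t] plus class (t..L-1]."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(len(hist) - 1):
        w0 = sum(p[: t + 1])          # background class probability
        w1 = 1.0 - w0                 # foreground class probability
        if w0 == 0 or w1 == 0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[: t + 1] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t + 1:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal histogram the maximizer falls in the valley between the two modes, which is what makes the criterion useful for separating tumor from background intensities.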
Affiliation(s)
- K. Rasool Reddy
- Department of Electronics and Communication Engineering, Malla Reddy College of Engineering and Technology (MRCET), Hyderabad, India
- Raj Kumar Batchu
- Department of Computer Science and Engineering (Data Science), Prasad V. Potluri Siddhartha Institute of Technology (PVPSIT), Vijayawada, India
- Srinivasu Polinati
- Department of Electronics and Communication Engineering, Vignan’s Institute of Engineering for Women (VIEW), Visakhapatnam, India
- Durga Prasad Bavirisetti
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
23
Bhavani MR, Vasanth DK. Classification of brain tumor using a multistage approach based on RELM and MLBP. EAI ENDORSED TRANSACTIONS ON PERVASIVE HEALTH AND TECHNOLOGY 2023. [DOI: 10.4108/eetpht.v8i4.3082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023] Open
Abstract
INTRODUCTION: Automatic segmentation and classification of brain tumors help improve treatment and extend patient survival. A tumor may be noncancerous (benign) or cancerous (malignant), and precancerous cells may also develop into cancer. OBJECTIVES: A Hough CNN is applied to the selected section, using a Hough casting technique for segmentation. METHODS: A multistage method of feature extraction with multistage neighbouring is used to develop an accurate brain tumor classification methodology. RESULTS: The dataset contains three types of brain tumors: meningioma, glioma, and pituitary. CONCLUSION: This paper presented an efficient brain tumor classification approach involving multiscale preprocessing, multiscale feature extraction, and classification.
24
Shahin AI, Aly S, Aly W. A novel multi-class brain tumor classification method based on unsupervised PCANet features. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08281-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/03/2023]
|
25
|
Maurya S, Tiwari S, Mothukuri MC, Tangeda CM, Nandigam RNS, Addagiri DC. A review on recent developments in cancer detection using Machine Learning and Deep Learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
26
|
Zhang H, Xi Q, Zhang F, Li Q, Jiao Z, Ni X. Application of Deep Learning in Cancer Prognosis Prediction Model. Technol Cancer Res Treat 2023; 22:15330338231199287. [PMID: 37709267 PMCID: PMC10503281 DOI: 10.1177/15330338231199287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/16/2023] Open
Abstract
As an important branch of artificial intelligence and machine learning, deep learning (DL) has been widely used in various aspects of cancer auxiliary diagnosis, among which cancer prognosis is the most important part. High-accuracy cancer prognosis is beneficial to the clinical management of patients with cancer. Compared with other methods, DL models can significantly improve the accuracy of prediction. Therefore, this article is a systematic review of the latest research on DL in cancer prognosis prediction. First, the data type, construction process, and performance evaluation index of the DL model are introduced in detail. Then, the current mainstream baseline DL cancer prognosis prediction models, namely, deep neural networks, convolutional neural networks, deep belief networks, deep residual networks, and vision transformers, including network architectures, the latest application in cancer prognosis, and their respective characteristics, are discussed. Next, some key factors that affect the predictive performance of the model and common performance enhancement techniques are listed. Finally, the limitations of the DL cancer prognosis prediction model in clinical practice are summarized, and the future research direction is prospected. This article could provide relevant researchers with a comprehensive understanding of DL cancer prognostic models and is expected to promote the research progress of cancer prognosis prediction.
Affiliation(s)
- Heng Zhang
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
| | - Qianyi Xi
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
| | - Fan Zhang
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
| | - Qixuan Li
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
| | - Zhuqing Jiao
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
| | - Xinye Ni
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
| |
|
27
|
Fang L, Wang X. Multi-input Unet model based on the integrated block and the aggregation connection for MRI brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104027] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
28
|
Reddy PG, Ramashri T, Krishna KL. Brain Tumour Region Extraction Using Novel Self-Organising Map-Based KFCM Algorithm. PERTANIKA JOURNAL OF SCIENCE AND TECHNOLOGY 2022. [DOI: 10.47836/pjst.31.1.33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Medical professionals need help finding tumours in ground-truth brain images because tumour location, contrast, intensity, size, and shape vary between images owing to different acquisition methods, modalities, and patient ages. The medical examiner has difficulty manually separating a tumour from other parts of a Magnetic Resonance Imaging (MRI) image. Many semi- and fully automated brain tumour detection systems have been described in the literature, and they keep improving. The segmentation literature has seen several transformations over the years, and an in-depth examination of these methods is the focus of this investigation. We survey the most recent soft computing technologies used in MRI brain analysis through several review papers. This study examines Self-Organising Maps (SOM) with K-means and the kernel Fuzzy C-Means (KFCM) method for segmentation. The suggested SOM networks were first compared to K-means in an experiment based on datasets with well-known cluster solutions. The SOM was then combined with KFCM, reducing time complexity and producing more accurate results than other methods. Experiments show that skewed data improves the networks' performance with more SOMs. Finally, performance measures on real-time datasets are analysed using machine learning approaches. The results show that the proposed algorithm has good sensitivity and better accuracy than K-means and other state-of-the-art methods.
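As a point of reference for the comparison above, the plain K-means baseline can be sketched in a few lines of Python (an illustrative 1-D toy, not the authors' implementation):

```python
import random

def kmeans_1d(xs, k=2, iters=20, seed=0):
    """Plain 1-D K-means: the hard-assignment baseline that SOM/KFCM refine."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        # Recompute each centre as its cluster mean (keep old centre if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]
print(kmeans_1d(data))  # two centres, one per well-separated group
```

The SOM and KFCM stages discussed in the abstract replace exactly this hard assignment with topology-preserving and fuzzy memberships, which is where the reported accuracy gains come from.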
|
29
|
Zheng P, Zhu X, Guo W. Brain tumour segmentation based on an improved U-Net. BMC Med Imaging 2022; 22:199. [PMCID: PMC9673428 DOI: 10.1186/s12880-022-00931-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 11/08/2022] [Indexed: 11/19/2022] Open
Abstract
Background
Automatic segmentation of brain tumours using deep learning algorithms is currently one of the research hotspots in the medical image segmentation field. An improved U-Net network is proposed to enhance brain tumour segmentation performance.
Methods
Existing brain tumour segmentation models such as U-Net suffer from an insufficient ability to segment edge details and reuse feature information, poor extraction of location information, and loss functions (the commonly used binary cross-entropy and Dice) that are often ineffective for brain tumour segmentation. To address these problems, we propose a serial encoding–decoding structure that achieves improved segmentation performance by adding hybrid dilated convolution (HDC) modules and concatenation between each module of the two serial networks. In addition, we propose a new loss function that focuses the model on samples that are difficult to segment and classify. We compared our proposed model with commonly used segmentation models under the IoU, PA, Dice, precision, Hausdorff95, and ASD metrics.
Results
The proposed method outperforms the other segmentation models on every metric. In addition, the schematic diagram of the segmentation results shows that our algorithm's output is closer to the ground truth and preserves more brain tumour detail, while the other algorithms' results are smoother.
Conclusions
Our algorithm achieves better semantic segmentation performance than other commonly used segmentation algorithms. The proposed technology can be used in brain tumour diagnosis to better support patients' subsequent treatment.
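The IoU and Dice metrics used in the comparison above can be computed directly from binary masks; a minimal Python sketch (illustrative, not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect match

def iou(pred, truth):
    """Intersection over union for the same masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(round(dice(pred, truth), 3))  # 2*2/(3+3) = 0.667
print(round(iou(pred, truth), 3))   # 2/4 = 0.5
```

Dice weights the overlap twice, so for any imperfect mask it is always at least as large as IoU, which is worth remembering when comparing scores across papers.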
|
30
|
Arif M, Jims A, Ajesh F, Geman O, Craciun MD, Leuciuc F. Application of Genetic Algorithm and U-Net in Brain Tumor Segmentation and Classification: A Deep Learning Approach. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5625757. [PMID: 36156956 PMCID: PMC9499747 DOI: 10.1155/2022/5625757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 08/01/2022] [Accepted: 08/16/2022] [Indexed: 11/18/2022]
Abstract
Brain cancer is caused by the development of abnormal cells in the cerebrum. It is classified primarily into two classes: noncarcinogenic (benign) growth and cancerous (malignant) growth. Early detection of this disease is a quintessential task for all medical practice professionals. Traditional approaches to tumor detection have certain limitations: low effectiveness, failure to detect due to low-quality image processing, small datasets for training and testing, poorly predictive models, and the skipping of quintessential stages, all of which lead to inaccurate tumor detection. To overcome these issues, this paper presents an effective deep learning technique for brain tumor detection with the following stages: (a) data collection from the REMBRANDT dataset containing multisequence MRI of 130 patients; (b) preprocessing using conversion to greyscale, skull stripping, and histogram equalization; (c) segmentation using a genetic algorithm; (d) feature extraction using the discrete wavelet transform (DWT); (e) feature selection using particle swarm optimization; (f) classification using U-Net. Experimental evaluation shows that the proposed model (GA-UNET) outperforms other advanced models (accuracy: 0.97, sensitivity: 0.98, specificity: 0.98).
Affiliation(s)
- Muhammad Arif
- Department of Computer Science, Superior University, Lahore, Pakistan
| | - Anupama Jims
- Department of Computer Science and Information Technology JAIN (Deemed-to-be University), Bangalore, India
| | - Ajesh F.
- Department of Computer Science and Engineering, Sree Buddha College of Engineering, Pattoor Alappuzha, Kerala, India
| | - Oana Geman
- Stefan cel Mare University of Suceava, Suceava, Romania
| | | | - Florin Leuciuc
- Stefan cel Mare University of Suceava, Suceava, Romania
| |
|
31
|
Brain tumor detection using deep ensemble model with wavelet features. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00699-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
32
|
Koteswara Rao Chinnam S, Sistla V, Krishna Kishore Kolli V. Multimodal attention-gated cascaded U-Net model for automatic brain tumor detection and segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
33
|
Haq EU, Jianjun H, Huarong X, Li K, Weng L. A Hybrid Approach Based on Deep CNN and Machine Learning Classifiers for the Tumor Segmentation and Classification in Brain MRI. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:6446680. [PMID: 36035291 PMCID: PMC9400402 DOI: 10.1155/2022/6446680] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 04/13/2022] [Accepted: 04/20/2022] [Indexed: 11/17/2022]
Abstract
Conventional medical imaging and machine learning techniques are not accurate enough to correctly segment brain tumors in MRI, as the proper identification and segmentation of tumor borders is one of the most important criteria of tumor extraction. Existing approaches are time-consuming, invasive, and susceptible to human error. These drawbacks highlight the importance of developing a fully automated deep learning-based approach for the segmentation and classification of brain tumors. Expedient and prompt segmentation and classification of a brain tumor are critical for accurate clinical diagnosis and adequate treatment. As a result, deep learning-based brain tumor segmentation and classification algorithms are extensively employed; among them, CNN models achieve an excellent segmentation and classification effect. In this work, an integrated hybrid approach based on a deep convolutional neural network and machine learning classifiers is proposed for the accurate segmentation and classification of brain MRI tumors. A CNN is proposed in the first stage to learn the feature map from the image space of the brain MRI into the tumor marker region. In the second step, a faster region-based CNN is developed for the localization of the tumor region, followed by a region proposal network (RPN). In the last step, a deep convolutional neural network and machine learning classifiers are incorporated in series to further refine the segmentation and classification process and obtain more accurate results. The proposed model's performance is assessed using evaluation metrics extensively used in medical image processing. The experimental results validate that the proposed deep CNN and SVM-RBF classifier achieved an accuracy of 98.3% and a dice similarity coefficient (DSC) of 97.8% when classifying brain tumors as glioma, meningioma, or pituitary on brain dataset-1, and an accuracy of 98.0% with a DSC of 97.1% on the Figshare dataset. The segmentation and classification results demonstrate that the proposed model outperforms state-of-the-art techniques by a significant margin.
Affiliation(s)
- Ejaz Ul Haq
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
- School of Computer and Information Engineering, Xiamen University of Technology, China
| | - Huang Jianjun
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
| | - Xu Huarong
- School of Computer and Information Engineering, Xiamen University of Technology, China
| | - Kang Li
- Guangdong Key Laboratory of Intelligent Information Processing, School of Electronics and Information Engineering, Shenzhen University, China
| | - Lifen Weng
- School of Computer and Information Engineering, Xiamen University of Technology, China
| |
|
34
|
Xie Y, Zaccagna F, Rundo L, Testa C, Agati R, Lodi R, Manners DN, Tonon C. Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics (Basel) 2022; 12:diagnostics12081850. [PMID: 36010200 PMCID: PMC9406354 DOI: 10.3390/diagnostics12081850] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 07/20/2022] [Accepted: 07/28/2022] [Indexed: 12/21/2022] Open
Abstract
Convolutional neural networks (CNNs) constitute a widely used deep learning approach that has frequently been applied to the problem of brain tumor diagnosis. Such techniques still face some critical challenges in moving towards clinic application. The main objective of this work is to present a comprehensive review of studies using CNN architectures to classify brain tumors using MR images with the aim of identifying useful strategies for and possible impediments in the development of this technology. Relevant articles were identified using a predefined, systematic procedure. For each article, data were extracted regarding training data, target problems, the network architecture, validation methods, and the reported quantitative performance criteria. The clinical relevance of the studies was then evaluated to identify limitations by considering the merits of convolutional neural networks and the remaining challenges that need to be solved to promote the clinical application and development of CNN algorithms. Finally, possible directions for future research are discussed for researchers in the biomedical and machine learning communities. A total of 83 studies were identified and reviewed. They differed in terms of the precise classification problem targeted and the strategies used to construct and train the chosen CNN. Consequently, the reported performance varied widely, with accuracies of 91.63–100% in differentiating meningiomas, gliomas, and pituitary tumors (26 articles) and of 60.0–99.46% in distinguishing low-grade from high-grade gliomas (13 articles). The review provides a survey of the state of the art in CNN-based deep learning methods for brain tumor classification. Many networks demonstrated good performance, and it is not evident that any specific methodological choice greatly outperforms the alternatives, especially given the inconsistencies in the reporting of validation methods, performance metrics, and training data encountered. 
Few studies have focused on clinical usability.
Affiliation(s)
- Yuting Xie
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (F.Z.); (R.L.); (C.T.)
| | - Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (F.Z.); (R.L.); (C.T.)
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy;
| | - Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy;
| | - Claudia Testa
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy;
- Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
| | - Raffaele Agati
- Programma Neuroradiologia con Tecniche ad elevata complessità, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy;
| | - Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (F.Z.); (R.L.); (C.T.)
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
| | - David Neil Manners
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (F.Z.); (R.L.); (C.T.)
- Correspondence:
| | - Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; (Y.X.); (F.Z.); (R.L.); (C.T.)
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy;
| |
|
35
|
Application of Smooth Fuzzy Model in Image Denoising and Edge Detection. MATHEMATICS 2022. [DOI: 10.3390/math10142421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
In this paper, the bounded variation property of fuzzy models with smooth compositions has been studied and compared with the standard fuzzy composition (e.g., min–max). Moreover, the contribution of the bounded variation of the smooth fuzzy model to noise removal and edge preservation in digital images has been investigated. Different simulations on test images have been employed to verify the results. The performance index for the edges detected by the smooth fuzzy models in the presence of both Gaussian and impulse (salt-and-pepper) noise of different densities has been found to be higher than that of the standard well-known fuzzy models (e.g., min–max composition), which demonstrates the efficiency of smooth compositions in comparison to the standard composition.
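The standard composition the smooth variants are measured against is usually written as the max–min (sup–min) rule, (R∘S)(x,z) = max_y min(R(x,y), S(y,z)); a toy Python illustration for finite fuzzy relations (not the paper's code):

```python
def max_min_compose(R, S):
    """Standard max-min composition of fuzzy relations given as nested lists.

    R is n x m, S is m x p; the result is n x p with entries
    max over k of min(R[i][k], S[k][j])."""
    n, m, p = len(R), len(S), len(S[0])
    return [[max(min(R[i][k], S[k][j]) for k in range(m)) for j in range(p)]
            for i in range(n)]

R = [[0.2, 0.8],
     [1.0, 0.4]]
S = [[0.5, 0.9],
     [0.3, 0.6]]
print(max_min_compose(R, S))  # [[0.3, 0.6], [0.5, 0.9]]
```

A "smooth" composition in the paper's sense replaces the non-differentiable min/max pair with smooth surrogates, which is what makes the bounded-variation analysis tractable.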
|
36
|
Segmentation and classification of brain tumors from MRI images based on adaptive mechanisms and ELDP feature descriptor. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
37
|
Kawahara K, Ishikawa R, Sasano S, Shibata N, Ikuhara Y. Atomic-Resolution STEM Image Denoising by Total Variation Regularization. Microscopy (Oxf) 2022; 71:302-310. [PMID: 35713554 DOI: 10.1093/jmicro/dfac032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 05/31/2022] [Accepted: 06/16/2022] [Indexed: 11/13/2022] Open
Abstract
Atomic-resolution electron microscopy imaging of solid-state materials is a powerful method for structural analysis. Scanning transmission electron microscopy (STEM) is one of the techniques actively used to directly observe atoms in materials. However, some materials are easily damaged by electron beam irradiation, and only noisy images are available when the electron dose is decreased to avoid beam damage. Therefore, a denoising process is necessary for precise structural analysis in low-dose STEM. In this study, we propose a total variation (TV) denoising algorithm to remove quantum noise in a STEM image. We defined an entropy of the STEM image that corresponds to the image contrast in order to determine a hyperparameter, and we found that there is a hyperparameter value that maximizes the entropy. We acquired an atomic-resolution STEM image of CaF2 viewed along the [001] direction and executed TV denoising. The atomic columns of Ca and F are clearly visualized by TV denoising, and the atomic positions of Ca and F are determined with errors of ±1 pm and ±4 pm, respectively.
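TV denoising minimizes a data-fidelity term plus a weighted total variation, argmin_u ½‖u − f‖² + λ·TV(u). A minimal 1-D gradient-descent sketch with a smoothed absolute value (illustrative only: the paper works on 2-D STEM images and selects λ via its image-entropy criterion; all parameter values here are arbitrary):

```python
import math
import random

def tv(u):
    """Discrete total variation of a 1-D signal."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def tv_denoise(f, lam=0.3, step=0.05, iters=800, eps=0.01):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(d^2 + eps),
    where d are the first differences of u (smooth surrogate for |d|)."""
    u = list(f)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(len(u))]       # data-fidelity gradient
        for i in range(len(u) - 1):
            d = u[i + 1] - u[i]
            s = d / math.sqrt(d * d + eps)             # smoothed sign(d)
            g[i] -= lam * s                             # d depends on u[i] with -1
            g[i + 1] += lam * s                         # ...and on u[i+1] with +1
        u = [u[i] - step * g[i] for i in range(len(u))]
    return u

random.seed(0)
clean = [0.0] * 20 + [1.0] * 20                        # a step edge
noisy = [x + random.gauss(0, 0.2) for x in clean]
den = tv_denoise(noisy)
print(tv(noisy), tv(den))                              # TV drops after denoising
```

The key property, visible even in this toy, is that TV regularization suppresses small oscillations while largely preserving the step edge, which is why it suits atomic columns on a flat background.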
Affiliation(s)
- Kazuaki Kawahara
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
| | - Ryo Ishikawa
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
| | - Shun Sasano
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
| | - Naoya Shibata
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan.,Nanostructures Research Laboratory, Japan Fine Ceramics Center, Atsuta, Nagoya 456-8587, Japan
| | - Yuichi Ikuhara
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan.,Nanostructures Research Laboratory, Japan Fine Ceramics Center, Atsuta, Nagoya 456-8587, Japan
| |
|
38
|
Cheng J, Liu J, Kuang H, Wang J. A Fully Automated Multimodal MRI-Based Multi-Task Learning for Glioma Segmentation and IDH Genotyping. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1520-1532. [PMID: 35020590 DOI: 10.1109/tmi.2022.3142321] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The accurate prediction of isocitrate dehydrogenase (IDH) mutation and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). The two tasks are ongoing challenges due to the significant inter-tumor and intra-tumor heterogeneity. The existing methods to address them are mostly single-task approaches that do not consider the correlation between the two tasks. In addition, the acquisition of IDH genetic labels is expensive, resulting in a limited amount of IDH mutation data for modeling. To comprehensively address these problems, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder, consisting of a convolutional neural network and a transformer, that extracts shared spatial and global information, followed by a decoder for glioma segmentation and a multi-scale classifier for IDH genotyping. A multi-task learning loss is then designed to balance the two tasks by combining the segmentation and classification loss functions with uncertain weights. Finally, an uncertainty-aware pseudo-label selection is proposed to generate IDH pseudo-labels from larger unlabeled data, improving the accuracy of IDH genotyping through semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that our proposed multi-task network achieves promising performance and outperforms the single-task learning counterparts and other existing state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source code of our framework is publicly available at https://github.com/miacsu/MTTU-Net.git.
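Combining task losses "with uncertain weights" is commonly done with learnable log-variances, L = e^{-s₁}·L_seg + s₁ + e^{-s₂}·L_cls + s₂ (Kendall-style homoscedastic-uncertainty weighting). A sketch of this generic formulation, which may differ in detail from the paper's exact loss:

```python
import math

def multitask_loss(l_seg, l_cls, log_var_seg, log_var_cls):
    """Uncertainty-weighted sum of two task losses.

    Each log-variance s down-weights its task by exp(-s) while the additive
    +s term penalizes declaring every task maximally uncertain."""
    return (math.exp(-log_var_seg) * l_seg + log_var_seg
            + math.exp(-log_var_cls) * l_cls + log_var_cls)

# With both log-variances at 0 the tasks are weighted equally:
print(multitask_loss(0.8, 0.3, 0.0, 0.0))  # 0.8 + 0.3 = 1.1
# Raising a task's log-variance shrinks its effective weight exp(-s):
print(multitask_loss(0.8, 0.3, 1.0, 0.0))  # 0.8*e^-1 + 1 + 0.3
```

In training, the log-variances are ordinary parameters updated by the same optimizer as the network weights, so the balance between segmentation and genotyping adapts automatically.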
|
39
|
Modified Artificial Bee Colony Algorithm-Based Strategy for Brain Tumor Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5465279. [PMID: 35602633 PMCID: PMC9117055 DOI: 10.1155/2022/5465279] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 04/09/2022] [Accepted: 04/27/2022] [Indexed: 11/18/2022]
Abstract
Medical image segmentation is a technique for detecting boundaries in a 2D or 3D image automatically or semiautomatically. The enormous range of medical images poses a considerable challenge for image segmentation. Magnetic resonance imaging (MRI) scans aid in detecting the presence of brain tumors, but this approach requires exact delineation of the tumor location inside the brain scan. An optimization algorithm is one of the most successful techniques for distinguishing pixels of interest from the background, but its performance depends on the starting values of the centroids. The primary goal of this work is to segment tumor areas within brain MRI images. After converting the grey MRI image to a color image, a multiobjective modified ABC algorithm is utilized to separate the tumor from the brain; the RGB color generated in the image is determined by intensity. The simulation results are assessed in terms of performance metrics such as accuracy, precision, specificity, recall, F-measure, and the time in seconds required by the system to segment the tumor from the brain. The performance of the proposed algorithm is compared with other algorithms such as the single-objective ABC algorithm and the multiobjective ABC algorithm. The results prove that the proposed multiobjective modified ABC algorithm is efficient in analyzing and segmenting tumors from brain images.
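The evaluation measures listed above all follow from confusion-matrix counts; a generic Python sketch (not the authors' code):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall (sensitivity), specificity and F-measure
    from confusion-matrix counts: true/false positives and negatives."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure   = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_measure

# Example counts (hypothetical, not from the paper):
print(metrics(90, 10, 85, 15))
```

For segmentation, the counts are taken pixel-wise against the ground-truth mask, which is why a model can score high accuracy yet low recall when the tumor occupies only a small fraction of the image.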
|
40
|
Treder MS, Codrai R, Tsvetanov KA. Quality assessment of anatomical MRI images from generative adversarial networks: Human assessment and image quality metrics. J Neurosci Methods 2022; 374:109579. [PMID: 35364110 DOI: 10.1016/j.jneumeth.2022.109579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 03/01/2022] [Accepted: 03/20/2022] [Indexed: 10/18/2022]
Abstract
BACKGROUND Generative Adversarial Networks (GANs) can synthesize brain images from image or noise input. So far, the gold standard for assessing the quality of the generated images has been human expert ratings. However, due to limitations of human assessment in terms of cost, scalability, and the limited sensitivity of the human eye to more subtle statistical relationships, a more automated approach towards evaluating GANs is required. NEW METHOD We investigated to what extent visual quality can be assessed using image quality metrics and we used group analysis and spatial independent components analysis to verify that the GAN reproduces multivariate statistical relationships found in real data. Reference human data was obtained by recruiting neuroimaging experts to assess real Magnetic Resonance (MR) images and images generated by a GAN. Image quality was manipulated by exporting images at different stages of GAN training. RESULTS Experts were sensitive to changes in image quality as evidenced by ratings and reaction times, and the generated images reproduced group effects (age, gender) and spatial correlations moderately well. We also surveyed a number of image quality metrics. Overall, Fréchet Inception Distance (FID), Maximum Mean Discrepancy (MMD) and Naturalness Image Quality Evaluator (NIQE) showed sensitivity to image quality and good correspondence with the human data, especially for lower-quality images (i.e., images from early stages of GAN training). However, only a Deep Quality Assessment (QA) model trained on human ratings was able to reproduce the subtle differences between higher-quality images. CONCLUSIONS We recommend a combination of group analyses, spatial correlation analyses, and both distortion metrics (FID, MMD, NIQE) and perceptual models (Deep QA) for a comprehensive evaluation and comparison of brain images produced by GANs.
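Of the surveyed metrics, MMD is simple enough to sketch directly; an illustrative 1-D version with an RBF kernel (real GAN evaluation applies this to image feature vectors, and the bandwidth here is an arbitrary choice):

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=0.5):
    """Biased estimate of squared Maximum Mean Discrepancy between two samples:
    mean kernel within xs + mean kernel within ys - 2 * mean kernel across."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

same = [0.1, 0.2, 0.3]
far  = [5.1, 5.2, 5.3]
print(mmd2(same, same))  # 0 for identical samples
print(mmd2(same, far))   # large when the distributions differ
```

The appeal over human rating is exactly this behaviour: MMD is zero only when the two sample distributions match in the kernel's feature space, so it picks up statistical mismatches the eye misses.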
Affiliation(s)
- Matthias S Treder
- School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK.
| | - Ryan Codrai
- School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK
| | - Kamen A Tsvetanov
- Department of Clinical Neurosciences, University of Cambridge, CB2 0SZ, UK; Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
| |
|
41
|
A New Model for Brain Tumor Detection Using Ensemble Transfer Learning and Quantum Variational Classifier. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3236305. [PMID: 35463245 PMCID: PMC9023211 DOI: 10.1155/2022/3236305] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 02/03/2022] [Accepted: 03/14/2022] [Indexed: 12/14/2022]
Abstract
A brain tumor is an abnormal enlargement of cells. Early detection of a brain tumor is critical for clinical practice and survival rates. Brain tumors arise in a variety of shapes, sizes, and features, with variable treatment options. Manual detection of tumors is difficult, time-consuming, and error-prone, so there is a significant need for computerized diagnostic systems for accurate brain tumor detection. In this research, deep features are extracted from the InceptionV3 model, in which a score vector is acquired from softmax and supplied to the quantum variational classifier (QVR) for discrimination between glioma, meningioma, no tumor, and pituitary tumor. The classified tumor images are then passed to the proposed Seg-network, where the actual infected region is segmented to analyze the tumor severity level. The outcomes of the reported research have been evaluated on three benchmark datasets: Kaggle, 2020-BRATS, and locally collected images. The model achieved detection scores greater than 90%, demonstrating the proposed model's effectiveness.
Collapse
|
42
|
Wang X, Wang R, Yang S, Zhang J, Wang M, Zhong D, Zhang J, Han X. Combining Radiology and Pathology for Automatic Glioma Classification. Front Bioeng Biotechnol 2022; 10:841958. [PMID: 35387307 PMCID: PMC8977526 DOI: 10.3389/fbioe.2022.841958] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Accepted: 02/23/2022] [Indexed: 11/17/2022] Open
Abstract
Subtype classification is critical in the treatment of gliomas because different subtypes lead to different treatment options and postoperative care. Although many radiology- or histology-based glioma classification algorithms have been developed, most of them focus on single-modality data. In this paper, we propose an innovative two-stage model to classify gliomas into three subtypes (i.e., glioblastoma, oligodendroglioma, and astrocytoma) based on radiology and histology data. In the first stage, our model classifies each image as having glioblastoma or not. Based on the obtained non-glioblastoma images, the second stage aims to accurately distinguish astrocytoma and oligodendroglioma. The radiological images and histological images pass through the two-stage design with 3D and 2D models, respectively. Then, an ensemble classification network is designed to automatically integrate the features of the two modalities. We verified our method by participating in the MICCAI 2020 CPM-RadPath Challenge, winning 1st place. Our proposed model achieves high performance on the validation set, with a balanced accuracy of 0.889, Cohen's Kappa of 0.903, and an F1-score of 0.943. Our model could advance multimodal glioma research and provide assistance to pathologists and neurologists in diagnosing glioma subtypes. The code is publicly available at https://github.com/Xiyue-Wang/1st-in-MICCAI2020-CPM.
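The two-stage cascade described in this abstract can be sketched in a few lines (a schematic illustration only, not the authors' released code; `stage1` and `stage2` are hypothetical stand-ins for the trained 3D/2D models):

```python
def classify_glioma(radiology_volume, histology_image, stage1, stage2):
    """Two-stage glioma subtype cascade (schematic).

    stage1: callable returning "glioblastoma" or "non-glioblastoma".
    stage2: callable distinguishing the remaining lower-grade subtypes.
    Both are hypothetical stand-ins for trained models.
    """
    # Stage 1: screen for glioblastoma first.
    if stage1(radiology_volume, histology_image) == "glioblastoma":
        return "glioblastoma"
    # Stage 2: astrocytoma vs. oligodendroglioma on non-glioblastoma cases.
    return stage2(radiology_volume, histology_image)
```

The design concentrates the easier, more prevalent glioblastoma decision in the first stage so the second-stage model only has to separate the two harder lower-grade classes.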
Collapse
Affiliation(s)
- Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu, China.,College of Computer Science, Sichuan University, Chengdu, China
| | - Ruijie Wang
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
| | | | | | - Minghui Wang
- College of Biomedical Engineering, Sichuan University, Chengdu, China.,College of Computer Science, Sichuan University, Chengdu, China
| | - Dexing Zhong
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China.,Pazhou Lab, Guangzhou, China.,State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
| | - Jing Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | | |
Collapse
|
43
|
A survey of deep learning methods for multiple sclerosis identification using brain MRI images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07099-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
44
|
Guo S, Wang L, Chen Q, Wang L, Zhang J, Zhu Y. Multimodal MRI Image Decision Fusion-Based Network for Glioma Classification. Front Oncol 2022; 12:819673. [PMID: 35280828 PMCID: PMC8907622 DOI: 10.3389/fonc.2022.819673] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 01/24/2022] [Indexed: 12/12/2022] Open
Abstract
Purpose Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is very important for treatment planning and prognosis prediction. The main purpose of this study is to design a novel, effective algorithm for further improving the performance of glioma subtype classification using multimodal MRI images. Method MRI images of four modalities (T1, T2, T1ce, and fluid-attenuated inversion recovery, FLAIR) for 221 glioma patients were collected from the Computational Precision Medicine: Radiology-Pathology 2020 challenge to classify astrocytoma, oligodendroglioma, and glioblastoma. We propose a multimodal MRI image decision fusion-based network for improving glioma classification accuracy. First, the MRI images of each modality were input into a pre-trained tumor segmentation model to delineate the regions of tumor lesions. Then, the whole tumor regions were centrally cropped from the original MRI images, followed by max-min normalization. Subsequently, a deep learning-based network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks. After that, two fully connected layers were used to map the features into three glioma subtypes. During the training stage, we used the images of each modality after tumor segmentation to train the network to obtain its best accuracy on our testing set. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the pre-trained models obtained in the training stage. Finally, the performance of our method was evaluated in terms of accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), etc.
Results The proposed method achieved an accuracy of 0.878, an AUC of 0.902, a sensitivity of 0.772, a specificity of 0.930, a PPV of 0.862, an NPV of 0.949, and a Cohen's Kappa of 0.773, significantly higher than existing state-of-the-art methods. Conclusion This study demonstrates the effectiveness and superiority of the proposed multimodal MRI image decision fusion-based network for glioma subtype classification, which could be of substantial value in clinical practice.
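The linear weighted decision fusion used at the inference stage can be sketched as follows (a minimal sketch with hypothetical probabilities and weights, not the authors' implementation):

```python
# Hypothetical per-modality predicted probabilities over the three subtypes
# (astrocytoma, oligodendroglioma, glioblastoma), one model per MRI modality.
probs = {
    "T1":    [0.60, 0.30, 0.10],
    "T2":    [0.50, 0.40, 0.10],
    "T1ce":  [0.70, 0.20, 0.10],
    "FLAIR": [0.40, 0.40, 0.20],
}
# Hypothetical fusion weights, e.g. proportional to each model's validation accuracy.
weights = {"T1": 0.20, "T2": 0.20, "T1ce": 0.35, "FLAIR": 0.25}

# Weighted sum of the per-modality probability vectors.
fused = [sum(weights[m] * probs[m][k] for m in probs) for k in range(3)]
subtypes = ["astrocytoma", "oligodendroglioma", "glioblastoma"]
prediction = subtypes[max(range(3), key=fused.__getitem__)]
```

Because the weights sum to 1 and each probability vector sums to 1, the fused vector remains a valid probability distribution; the argmax then gives the ensemble decision.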
Collapse
Affiliation(s)
- Shunchao Guo
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China.,College of Computer and Information, Qiannan Normal University for Nationalities, Duyun, China
| | - Lihui Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Qijian Chen
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Li Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Jian Zhang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Yuemin Zhu
- CREATIS, CNRS UMR 5220, Inserm U1044, INSA Lyon, University of Lyon, Lyon, France
| |
Collapse
|
45
|
Veeramuthu A, Meenakshi S, Mathivanan G, Kotecha K, Saini JR, Vijayakumar V, Subramaniyaswamy V. MRI Brain Tumor Image Classification Using a Combined Feature and Image-Based Classifier. Front Psychol 2022; 13:848784. [PMID: 35310201 PMCID: PMC8931531 DOI: 10.3389/fpsyg.2022.848784] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 01/19/2022] [Indexed: 11/17/2022] Open
Abstract
Brain tumor classification plays a key role in medical prognosis and effective treatment planning. We have proposed a combined feature and image-based classifier (CFIC) for brain tumor image classification in this study. Various deep neural network and deep convolutional neural network (DCNN)-based architectures are proposed for image classification, namely, actual image feature-based classifier (AIFC), segmented image feature-based classifier (SIFC), actual and segmented image feature-based classifier (ASIFC), actual image-based classifier (AIC), segmented image-based classifier (SIC), actual and segmented image-based classifier (ASIC), and, finally, CFIC. The Kaggle Brain Tumor Detection 2020 dataset was used to train and test the proposed classifiers. Among the various classifiers proposed, CFIC performs better than all the others. The proposed CFIC method gives significantly better results in terms of sensitivity, specificity, and accuracy, with 98.86%, 97.14%, and 98.97%, respectively, compared with existing classification methods.
Collapse
Affiliation(s)
- A. Veeramuthu
- Department of Information Technology, School of Computing, Sathyabama Institute of Science and Technology, Chennai, India
| | - S. Meenakshi
- Department of Information Technology, Jeppiaar SRR Engineering College, Chennai, India
| | - G. Mathivanan
- Department of Information Technology, School of Computing, Sathyabama Institute of Science and Technology, Chennai, India
| | - Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International (Deemed University), Pune, India
- Correspondence: Ketan Kotecha
| | - Jatinderkumar R. Saini
- Symbiosis Institute of Computer Studies and Research, Symbiosis International (Deemed University), Pune, India
| | - V. Vijayakumar
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
| | - V. Subramaniyaswamy
- School of Computing, Shanmugha Arts, Science, Technology & Research Academy Deemed University, Thanjavur, India
| |
Collapse
|
46
|
Liu Y, Du J, Vong CM, Yue G, Yu J, Wang Y, Lei B, Wang T. Scale-adaptive super-feature based MetricUNet for brain tumor segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103442] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
47
|
Tripathi PC, Bag S. A computer-aided grading of glioma tumor using deep residual networks fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106597. [PMID: 34974232 DOI: 10.1016/j.cmpb.2021.106597] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 10/19/2021] [Accepted: 12/20/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES Among different cancer types, glioma is considered a potentially fatal brain cancer that arises from glial cells. Early diagnosis of glioma helps the physician offer effective treatment to patients. Magnetic Resonance Imaging (MRI)-based Computer-Aided Diagnosis of brain tumors has attracted a lot of attention in the literature in recent years. In this paper, we propose a novel deep learning-based computer-aided diagnosis for glioma tumors. METHODS The proposed method incorporates a two-level classification of gliomas. Firstly, the tumor is classified into low- or high-grade, and secondly, the low-grade tumors are classified into two types based on 1p/19q chromosome arm status. The feature representations of four residual networks have been used to solve the problem by utilizing a transfer learning approach. Furthermore, we have fused these trained models using a novel Dempster-Shafer Theory (DST)-based fusion scheme in order to enhance classification performance. Extensive data augmentation strategies are also utilized to avoid over-fitting of the discrimination models. RESULTS Extensive experiments have been performed on an MRI dataset to show the effectiveness of the method. Our method achieves 95.87% accuracy for glioma classification. The results obtained by our method have also been compared with different existing methods. The comparative study reveals that our method not only outperforms traditional machine learning-based methods, but also produces better results than state-of-the-art deep learning-based methods. CONCLUSION The fusion of different residual networks enhances tumor classification performance. The experimental findings indicate that the Dempster-Shafer Theory (DST)-based fusion technique produces superior performance compared with other fusion schemes.
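Dempster-Shafer combination of two classifiers' outputs can be sketched as below. This is an illustrative simplification that assigns mass only to singleton hypotheses (the low/high-grade labels), not the paper's full fusion scheme; the example mass values are hypothetical:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    m1, m2: dicts mapping each class label to its belief mass.
    With singleton-only focal elements, the rule reduces to a
    conflict-normalized product of the agreeing masses.
    """
    unnorm = {c: m1[c] * m2[c] for c in m1}
    conflict = 1.0 - sum(unnorm.values())  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {c: v / (1.0 - conflict) for c, v in unnorm.items()}

# Two residual networks grading the same scan (hypothetical outputs).
fused = dempster_combine({"low": 0.7, "high": 0.3}, {"low": 0.6, "high": 0.4})
```

Unlike simple averaging, the conflict normalization sharpens the combined belief toward hypotheses both sources agree on, which is the usual motivation for DST-based fusion over weighted voting.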
Collapse
Affiliation(s)
- Prasun Chandra Tripathi
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India.
| | - Soumen Bag
- Department of Computer Science and Engineering, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826004, India.
| |
Collapse
|
48
|
Le WT, Vorontsov E, Romero FP, Seddik L, Elsharief MM, Nguyen-Tan PF, Roberge D, Bahig H, Kadoury S. Cross-institutional outcome prediction for head and neck cancer patients using self-attention neural networks. Sci Rep 2022; 12:3183. [PMID: 35210482 PMCID: PMC8873259 DOI: 10.1038/s41598-022-07034-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 02/10/2022] [Indexed: 12/13/2022] Open
Abstract
In radiation oncology, predicting patient risk stratification allows specialization of therapy intensification as well as selecting between systemic and regional treatments, all of which helps to improve patient outcome and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while the large capacity of such models allows them to combine high-level medical imaging data for outcome prediction, they lack the generalization needed for use across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for predicting the probabilities of distant metastasis (DM), locoregional recurrence (LR), and overall survival (OS) within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model can process multi-modal inputs of variable scan length and integrate patient clinical data into the prediction. These architectural features and additional modalities all serve to extract additional information from the available data when additional samples are limited. The model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset, consisting of 298 patients undergoing curative radio/chemo-radiotherapy and acquired from 4 different institutions, and further validated on an internal retrospective dataset of 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments tested the utility of the proposed model characteristics, achieving AUROCs of 80%, 80%, and 82% for DM, LR, and OS, respectively, on the public TCIA Head-Neck-PET-CT dataset. External validation on the retrospective dataset of 371 patients achieved 69% AUROC in all outcomes. To test model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used; the mean accuracies across the 4 institutions were 72%, 70%, and 71% for DM, LR, and OS, respectively. The proposed model demonstrates an effective method for multi-site, multi-modal tumor outcome prediction combining volumetric imaging data with structured patient clinical data.
Collapse
Affiliation(s)
- William Trung Le
- Polytechnique Montréal, 500 Chemin de Polytechnique, Montreal, QC, H3T 1J4, Canada.,Centre de recherche du Centre Hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Pavillon R, Montreal, QC, H2X 0A9, Canada
| | - Eugene Vorontsov
- Polytechnique Montréal, 500 Chemin de Polytechnique, Montreal, QC, H3T 1J4, Canada
| | | | - Lotfi Seddik
- Centre Hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montreal, QC, H2X 3E4, Canada
| | | | - Phuc Felix Nguyen-Tan
- Centre Hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montreal, QC, H2X 3E4, Canada
| | - David Roberge
- Centre Hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montreal, QC, H2X 3E4, Canada
| | - Houda Bahig
- Centre Hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montreal, QC, H2X 3E4, Canada
| | - Samuel Kadoury
- Polytechnique Montréal, 500 Chemin de Polytechnique, Montreal, QC, H3T 1J4, Canada. .,Centre de recherche du Centre Hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Pavillon R, Montreal, QC, H2X 0A9, Canada.
| |
Collapse
|
49
|
Intelligent Model for Brain Tumor Identification Using Deep Learning. APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING 2022. [DOI: 10.1155/2022/8104054] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Brain tumors can be a major cause of psychiatric complications such as depression and panic attacks. Quick and timely recognition of a brain tumor improves the effectiveness of treatment. The processing of medical images plays a crucial role in assisting humans in identifying different diseases. The classification of brain tumors is a significant task that depends on the expertise and knowledge of the physician, so an intelligent system for detecting and classifying brain tumors is essential to help physicians. The novel feature of the study is the classification of brain tumors into glioma, meningioma, and pituitary types using a hierarchical deep learning method. Diagnosis and tumor classification are significant for a quick and productive cure, and medical image processing using a convolutional neural network (CNN) is giving excellent outcomes in this capacity. The CNN is trained on image fragments and classifies them into tumor types. Hierarchical Deep Learning-Based Brain Tumor (HDL2BT) classification is proposed with the help of a CNN for the detection and classification of brain tumors. The proposed system categorizes the tumor into four types: glioma, meningioma, pituitary, and no-tumor. The suggested model achieves 92.13% precision and a miss rate of 7.87%, outperforming earlier methods for detecting and segmenting brain tumors. The proposed system will provide clinical assistance in the area of medicine.
Collapse
|
50
|
Classification of Brain MRI Tumor Images Based on Deep Learning PGGAN Augmentation. Diagnostics (Basel) 2021; 11:diagnostics11122343. [PMID: 34943580 PMCID: PMC8700152 DOI: 10.3390/diagnostics11122343] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 12/02/2021] [Accepted: 12/07/2021] [Indexed: 12/16/2022] Open
Abstract
The wide prevalence of brain tumors in all age groups necessitates the ability to make an early and accurate identification of the tumor type and thus select the most appropriate treatment plan. The application of convolutional neural networks (CNNs) has helped radiologists classify the type of brain tumor from magnetic resonance images (MRIs) more accurately. CNN training suffers from overfitting if an insufficient number of MRIs is introduced to the system. Recognized as the current best solution to this problem, augmentation allows for the optimization of the learning stage and thus maximizes overall efficiency. The main objective of this study is to examine the efficacy of a new approach to the classification of brain tumor MRIs through the use of a VGG19 feature extractor coupled with one of three types of classifiers. A progressive growing generative adversarial network (PGGAN) augmentation model is used to produce ‘realistic’ MRIs of brain tumors and help overcome the shortage of images needed for deep learning. Results indicated the ability of our framework to classify gliomas, meningiomas, and pituitary tumors more accurately than in previous studies, with an accuracy of 98.54%. Other performance metrics were also examined.
Collapse
|