1.
Saeed T, Khan MA, Hamza A, Shabaz M, Khan WZ, Alhayan F, Jamel L, Baili J. Neuro-XAI: Explainable deep learning framework based on DeepLabV3+ and Bayesian optimization for segmentation and classification of brain tumor in MRI scans. J Neurosci Methods 2024;410:110247. PMID: 39128599. DOI: 10.1016/j.jneumeth.2024.110247. Received 03/25/2024; revised 06/30/2024; accepted 08/05/2024.
Abstract
The prevalence of brain tumor disorders is currently a global issue. Radiography, which produces large numbers of images, is an efficient method for diagnosing these life-threatening disorders, but reviewing every image is time-consuming and physically strenuous for a radiologist. As a result, research into machine learning systems that assist radiologists in diagnosis continues to grow. Convolutional neural networks (CNNs), one type of deep learning approach, have achieved state-of-the-art results in several medical imaging applications, including the identification of brain tumors. CNN hyperparameters are typically set manually for segmentation and classification, which can be slow and increases the risk of using suboptimal hyperparameters for both tasks. Bayesian optimization is a useful method for finding the optimal hyperparameters of a deep CNN. However, a CNN can be considered a "black box" model because its complexity makes the information it stores difficult to interpret. This problem can be addressed with Explainable Artificial Intelligence (XAI) tools, which provide doctors with a realistic explanation of the CNN's assessments. Deployment of deep learning systems in real-time diagnosis also remains rare, partly because these methods do not quantify the uncertainty in their predictions, which can undermine trust in AI-based diagnosis of diseases. To be usable in real-time medical diagnosis, CNN-based models must be interpretable, and their uncertainty must be evaluated. A novel three-phase strategy is therefore proposed for segmenting and classifying brain tumors. First, brain tumors are segmented with the DeepLabV3+ model, with its hyperparameters tuned by Bayesian optimization.
For classification, features extracted from the state-of-the-art deep learning models DarkNet-53 and MobileNetV2 are fed to an SVM, whose hyperparameters are also optimized with a Bayesian approach. The second phase uses XAI algorithms to reveal which portions of the images the CNN uses for feature extraction. Finally, the uncertainty of the Bayesian-optimized classifier is quantified using confusion entropy. The experimental findings demonstrate that the proposed Bayesian-optimized deep learning framework outperforms earlier techniques, achieving a 97% classification accuracy and a 0.98 global accuracy.
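The confusion-entropy step of the uncertainty analysis can be sketched with a toy measure. This is a minimal illustration under stated assumptions, not the exact formulation used in the paper: it averages the Shannon entropy of each row of a confusion matrix, so a perfectly diagonal matrix scores 0 and maximally scattered predictions score highest.

```python
import math

def row_entropy(row):
    # Shannon entropy (bits) of one row's normalised class distribution.
    total = sum(row)
    if total == 0:
        return 0.0
    probs = [c / total for c in row if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def confusion_entropy(cm):
    # Average per-class entropy of a confusion matrix: 0 when every
    # prediction lands on the diagonal, larger when predictions scatter.
    return sum(row_entropy(r) for r in cm) / len(cm)

perfect = [[50, 0], [0, 50]]
noisy   = [[25, 25], [25, 25]]
print(confusion_entropy(perfect))  # 0.0
print(confusion_entropy(noisy))    # 1.0
```

A low score indicates the classifier's errors are concentrated and predictable; a high score flags diffuse, untrustworthy predictions.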
Affiliation(s)
- Tallha Saeed
- Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan.
- Muhammad Attique Khan
- Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, P.O. Box 1664, AlKhobar 31952, Saudi Arabia.
- Ameer Hamza
- Department of Computer Science and Mathematics, Lebanese American University, Lebanon; Department of Computer Science, HITEC University, Taxila 47080, Pakistan.
- Mohammad Shabaz
- Model Institute of Engineering and Technology, Jammu, J&K, India.
- Wazir Zada Khan
- Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan.
- Fatimah Alhayan
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
- Leila Jamel
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
- Jamel Baili
- Department of Computer Engineering, College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia.
2.
Morano J, Aresta G, Grechenig C, Schmidt-Erfurth U, Bogunovic H. Deep Multimodal Fusion of Data With Heterogeneous Dimensionality via Projective Networks. IEEE J Biomed Health Inform 2024;28:2235-2246. PMID: 38206782. DOI: 10.1109/jbhi.2024.3352970.
Abstract
The use of multimodal imaging has led to significant improvements in the diagnosis and treatment of many diseases. Similar to clinical practice, some works have demonstrated the benefits of multimodal fusion for automatic segmentation and classification using deep learning-based methods. However, current segmentation methods are limited to fusion of modalities with the same dimensionality (e.g., 3D + 3D, 2D + 2D), which is not always possible, and the fusion strategies implemented by classification methods are incompatible with localization tasks. In this work, we propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D + 2D) that is compatible with localization tasks. The proposed framework extracts the features of the different modalities and projects them into the common feature subspace. The projected features are then fused and further processed to obtain the final prediction. The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging. Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
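The core idea of bringing modalities with different dimensionality into a common subspace can be sketched as below. This is only an assumption-laden toy: the paper's projective networks learn the projection, whereas this sketch uses a fixed mean-pooling projection of the 3D volume onto the 2D plane before fusion.

```python
import numpy as np

def project_3d_to_2d(feat3d):
    # Collapse the depth axis by mean pooling so a (D, H, W) volume
    # lands in the same (H, W) plane as a 2D modality's features.
    return feat3d.mean(axis=0)

def fuse(feat3d, feat2d):
    # Stack the projected 3D features with the 2D features along a new
    # channel axis, giving one (2, H, W) tensor for further processing.
    return np.stack([project_3d_to_2d(feat3d), feat2d])

vol = np.ones((8, 4, 4))     # toy 3D feature map (D, H, W)
img = np.zeros((4, 4))       # toy 2D feature map (H, W)
print(fuse(vol, img).shape)  # (2, 4, 4)
```

Because the fused tensor keeps the (H, W) grid, per-pixel localization tasks such as segmentation remain possible after fusion, which is the property the paper emphasizes.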
3.
S SP, A S, T K, S D. Self-attention-based generative adversarial network optimized with color harmony algorithm for brain tumor classification. Electromagn Biol Med 2024:1-15. PMID: 38369844. DOI: 10.1080/15368378.2024.2312363. Received 04/11/2023; accepted 01/25/2024.
Abstract
This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a self-attention-based generative adversarial network (SAGAN) optimized with a Color Harmony Algorithm. Brain cancer, particularly brain tumors, has a high fatality rate worldwide and necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including glioma, meningioma, and pituitary tumors. This approach shows promise in enhancing the precision and efficiency of brain tumor classification, with potential for improved diagnostic outcomes in medical imaging. The accuracy achieved for brain tumor identification with the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy, and 67.92%, 54.04%, and 59.08% less computation time, than the existing models BTC-KNN-SVM-MRI (brain tumor diagnosis using a deep learning convolutional neural network with a transfer learning approach), BTC-CNN-DEMFOA-MRI (M3BTCNet: multi-model brain tumor categorization with metaheuristic deep neural network feature optimization), and BTC-Hie DNN-MRI (an efficient hierarchical deep learning neural network classifier for brain tumor categorization), respectively.
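The self-attention mechanism at the heart of a SAGAN can be sketched in a few lines. The version below is a deliberately simplified assumption: it uses identity query/key/value projections over flattened spatial positions, whereas a real SAGAN learns 1x1-convolution projections.

```python
import numpy as np

def self_attention(x):
    # x: (N, C) array of N flattened spatial positions with C channels.
    # Toy single-head attention with identity Q/K/V projections.
    scores = x @ x.T / np.sqrt(x.shape[1])       # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ x                           # (N, C) attended features

x = np.array([[1.0, 0.0], [0.0, 1.0]])
out = self_attention(x)
print(out.shape)  # (2, 2)
```

Each output row is a convex combination of all positions' features, which is how self-attention lets every location attend to global context rather than a local receptive field.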
Affiliation(s)
- Senthil Pandi S
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, Tamil Nadu, India
- Senthilselvi A
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Kumaragurubaran T
- Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Dhanasekaran S
- Department of Information Technology, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputtur, Tamil Nadu, India
4.
Çetin-Kaya Y, Kaya M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics (Basel) 2024;14:383. PMID: 38396422. PMCID: PMC10888105. DOI: 10.3390/diagnostics14040383. Open access. Received 01/09/2024; revised 02/01/2024; accepted 02/06/2024.
Abstract
Brain tumors can have fatal consequences, affecting many body functions. For this reason, it is essential to detect brain tumor types accurately and at an early stage to start the appropriate treatment process. Although convolutional neural networks (CNNs) are widely used in disease detection from medical images, they face the problem of overfitting when trained on limited, insufficiently diverse labeled datasets. Existing studies use transfer learning and ensemble models to overcome these problems, but an examination of those studies reveals a lack of guidance on which models and weight ratios to use with the ensemble technique. With the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm determined the optimum weights for combining the five most successful CNN models with the ensemble technique. The results across the three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20% F1-score; Dataset 2, 98.77% accuracy and 98.92% F1-score; and Dataset 3, 99.92% accuracy and 99.92% F1-score. These performances show that the proposed framework is reliable in classification. As a result, the proposed framework outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance.
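The ensemble-combination step can be sketched as a weighted average of per-model class probabilities. In the paper the weights come from particle swarm optimization; here they are fixed by hand (hypothetical values) purely to illustrate how the combination works.

```python
def weighted_ensemble(prob_lists, weights):
    # Combine each model's class-probability vector with its weight
    # (the paper obtains these weights via particle swarm optimisation).
    total = sum(weights)
    w = [x / total for x in weights]             # normalise weights
    n_classes = len(prob_lists[0])
    return [sum(w[m] * prob_lists[m][c] for m in range(len(w)))
            for c in range(n_classes)]

p1 = [0.7, 0.2, 0.1]   # model 1's class probabilities
p2 = [0.5, 0.4, 0.1]   # model 2's class probabilities
fused = weighted_ensemble([p1, p2], [2, 1])  # model 1 trusted twice as much
print(fused)           # class 0 keeps the highest fused probability
```

The search over weights (here hand-picked) is what the PSO-based algorithm automates: it evaluates candidate weight vectors on validation accuracy and keeps the best.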
Affiliation(s)
- Yasemin Çetin-Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
- Mahir Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
5.
Haque R, Hassan MM, Bairagi AK, Shariful Islam SM. NeuroNet19: an explainable deep neural network model for the classification of brain tumors using magnetic resonance imaging data. Sci Rep 2024;14:1524. PMID: 38233516. PMCID: PMC10794704. DOI: 10.1038/s41598-024-51867-1. Open access. Received 10/20/2023; accepted 01/10/2024.
Abstract
Brain tumors (BTs) are one of the deadliest diseases that can significantly shorten a person's life. In recent years, deep learning has become increasingly popular for detecting and classifying BTs. In this paper, we propose a deep neural network architecture called NeuroNet19. It utilizes VGG19 as its backbone and incorporates a novel module named the Inverted Pyramid Pooling Module (iPPM). The iPPM captures multi-scale feature maps, ensuring the extraction of both local and global image contexts. This enhances the feature maps produced by the backbone, regardless of the spatial positioning or size of the tumors. To ensure the model's transparency and accountability, we employ Explainable AI. Specifically, we use Local Interpretable Model-Agnostic Explanations (LIME), which highlights the features or areas focused on while predicting individual images. NeuroNet19 is trained on four classes of BTs: glioma, meningioma, no tumor, and pituitary tumors. It is tested on a public dataset containing 7023 images. Our research demonstrates that NeuroNet19 achieves the highest accuracy at 99.3%, with precision, recall, and F1 scores at 99.2% and a Cohen Kappa coefficient (CKC) of 99%.
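The multi-scale idea behind a pyramid-pooling module like the iPPM can be sketched as follows. The bin sizes and the average-pooling choice below are illustrative assumptions, not NeuroNet19's exact iPPM configuration: the point is that pooling the same feature map at several grid resolutions captures both global and local context.

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    # Average-pool a square (H, W) feature map into a b x b grid for
    # each bin size, then concatenate: coarse (global) to fine (local).
    h, w = feat.shape
    pooled = []
    for b in bins:
        for i in range(b):
            for j in range(b):
                patch = feat[i*h//b:(i+1)*h//b, j*w//b:(j+1)*w//b]
                pooled.append(patch.mean())
    return np.array(pooled)

fm = np.arange(16.0).reshape(4, 4)  # toy 4x4 feature map
print(pyramid_pool(fm).shape)       # (21,) = 1 + 4 + 16 values
```

The first value is the global average (whole-image context) and later values describe progressively smaller regions, which is why such modules help regardless of where or how large a tumor appears.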
Affiliation(s)
- Rezuana Haque
- Computer Science and Engineering, BGC Trust University Bangladesh, Chittagong, Bangladesh
- Md Mehedi Hassan
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Anupam Kumar Bairagi
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
6.
Epizitone A, Moyane SP, Agbehadji IE. A Data-Driven Paradigm for a Resilient and Sustainable Integrated Health Information Systems for Health Care Applications. J Multidiscip Healthc 2023;16:4015-4025. PMID: 38107085. PMCID: PMC10725635. DOI: 10.2147/jmdh.s433299. Open access. Received 08/01/2023; accepted 11/02/2023.
Abstract
Introduction: Many transformations and uncertainties, such as the fourth industrial revolution and pandemics, have propelled healthcare's acceptance and deployment of health information systems (HIS). Their deployments are influenced by external and internal determinants aligning with the global course. At the epicenter is digitalization, which generates endless data that has permeated healthcare. The continuous proliferation of complex and dynamic healthcare data is the digitalization frontier in healthcare that necessitates attention.
Objective: This study explores the existing body of information on HIS for healthcare through a data lens, to present a data-driven paradigm for healthcare augmentation that is paramount to attaining a sustainable and resilient HIS.
Method: A PRISMA-compliant (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) in-depth literature review was conducted systematically to synthesize and analyze the literature and ascertain the value disposition of HIS data in healthcare delivery.
Results: This study details the aspects of a data-driven paradigm for a robust and sustainable HIS for healthcare applications. The data-driven features expounded are data sources, data actions and decisions, data science techniques, serialization of data science techniques in the HIS, and data-insight implementation and application. These are essential building blocks of the data-driven paradigm that need iteration to succeed.
Discussion: Existing literature considers the surge of data in healthcare challenging, disruptive, and potentially revolutionary. This view echoes the current healthcare quandary of both good and bad data availability. Thus, data-driven insights are essential for building a resilient and sustainable HIS. Prior HIS frameworks were dominated by people, technology, and tasks, with few data-centric facets. Improving healthcare and the HIS requires identifying and integrating crucial data elements.
Conclusion: The paper presents a data-driven paradigm for a resilient and sustainable HIS. The findings show that a data-driven track and its components are essential to improving healthcare using data-analytics insights, providing an integrated footing for data analytics to support and effectively assist healthcare delivery.
Affiliation(s)
- Ayogeboh Epizitone
- ICT and Society Research Group, Department of Information and Corporate Management, Durban University of Technology, Durban, South Africa
- Smangele Pretty Moyane
- Department of Information and Corporate Management, Durban University of Technology, Durban, South Africa
- Israel Edem Agbehadji
- Centre for Transformative Agricultural and Food Systems, School of Agricultural, Earth and Environmental Sciences, University of KwaZulu-Natal, Pietermaritzburg, South Africa
7.
Automatic Intelligent System Using Medical of Things for Multiple Sclerosis Detection. Comput Intell Neurosci 2023;2023:4776770. PMID: 36864930. PMCID: PMC9974276. DOI: 10.1155/2023/4776770. Received 05/24/2022; revised 07/31/2022; accepted 08/16/2022.
Abstract
Malfunctions in the immune system cause multiple sclerosis (MS), which initiates mild to severe nerve damage. MS disturbs signal communication between the brain and other body parts, and early diagnosis helps reduce its harshness. Magnetic resonance imaging (MRI)-supported MS detection is a standard clinical procedure in which a bio-image recorded with a chosen modality is used to assess the severity of the disease. The proposed research implements a convolutional neural network (CNN)-supported scheme to detect MS lesions in chosen brain MRI slices. The stages of this framework are (i) image collection and resizing, (ii) deep feature mining, (iii) hand-crafted feature mining, (iv) feature optimization with the firefly algorithm, and (v) serial feature integration and classification. In this work, five-fold cross-validation is executed, and the final result is considered for the assessment. Brain MRI slices with and without the skull section are examined separately, and the attained results are presented. The experimental outcome confirms that VGG16 with a random forest (RF) classifier offered a classification accuracy of >98% for MRI with the skull, and VGG16 with a K-nearest neighbor (KNN) classifier provided an accuracy of >98% without the skull.
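The five-fold cross-validation protocol mentioned above can be sketched as a simple index partition. This is a generic sketch, not the paper's code; libraries such as scikit-learn provide the same behavior via `KFold`.

```python
def kfold_indices(n_samples, k=5):
    # Partition sample indices 0..n_samples-1 into k contiguous folds;
    # each fold serves once as the test set while the rest train.
    base, extra = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(10)
print([len(f) for f in folds])  # [2, 2, 2, 2, 2]
```

Averaging the per-fold accuracies then gives the single reported figure, which is less sensitive to any one train/test split than a single hold-out evaluation.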