1. Ahmed A, Imran AS, Kastrati Z, Daudpota SM, Ullah M, Noor W. Learning from the few: Fine-grained approach to pediatric wrist pathology recognition on a limited dataset. Comput Biol Med 2024; 181:109044. PMID: 39180859. DOI: 10.1016/j.compbiomed.2024.109044. Received 11/01/2023; revised 05/25/2024; accepted 08/17/2024.
Abstract
Wrist pathologies, particularly fractures common among children and adolescents, present a critical diagnostic challenge. While X-ray imaging remains a prevalent diagnostic tool, the increasing misinterpretation rates highlight the need for more accurate analysis, especially considering the lack of specialized training among many surgeons and physicians. Recent advancements in deep convolutional neural networks offer promise in automating pathology detection in trauma X-rays. However, distinguishing subtle variations between pediatric wrist pathologies in X-rays remains challenging. Traditional manual annotation, though effective, is laborious, costly, and requires specialized expertise. In this paper, we address the challenge of pediatric wrist pathology recognition with a fine-grained approach, aimed at automatically identifying discriminative regions in X-rays without manual intervention. We refine our fine-grained architecture through ablation analysis and the integration of LION. Leveraging Grad-CAM, an explainable AI technique, we highlight these regions. Despite using limited data, reflective of real-world medical study constraints, our method consistently outperforms state-of-the-art image recognition models on both augmented and original (challenging) test sets. Our proposed refined architecture achieves an increase in accuracy of 1.06% and 1.25% compared to the baseline method, resulting in accuracies of 86% and 84%, respectively. Moreover, our approach demonstrates the highest fracture sensitivity of 97%, highlighting its potential to enhance wrist pathology recognition.
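The entry above relies on Grad-CAM to surface discriminative wrist regions without manual annotation. The core computation is small enough to sketch; the following is a minimal pure-Python illustration on toy 2x2 feature maps, not the authors' implementation (in practice the maps and gradients come from the last convolutional layer of a trained CNN):

```python
# Schematic Grad-CAM: weight each feature map by the global average of the
# class score's gradient over that map, sum the weighted maps, apply ReLU.

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: lists of HxW nested lists, one per channel."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: global-average-pooled gradient for channel k
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # weighted combination of the maps, clipped at zero (ReLU)
    return [[max(0.0, sum(a * fm[i][j] for a, fm in zip(alphas, feature_maps)))
             for j in range(w)] for i in range(h)]

# Toy example with two 2x2 channels; channel 1 gets a negative weight.
maps = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],      # alpha_0 = 1.0
         [[-1.0, -1.0], [-1.0, -1.0]]]  # alpha_1 = -1.0
print(grad_cam(maps, grads))  # -> [[1.0, 0.0], [0.0, 2.0]]
```

Cells where the ReLU output stays positive are the ones the network used as evidence for the predicted class; upsampling and overlaying this map on the X-ray gives the highlighted regions the abstract describes.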
Affiliation(s)
- Ammar Ahmed
- Intelligent Systems and Analytics (ISA) Research Group, Department of Computer Science (IDI), Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
- Ali Shariq Imran
- Intelligent Systems and Analytics (ISA) Research Group, Department of Computer Science (IDI), Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
- Zenun Kastrati
- Department of Informatics, Linnaeus University, Växjö, 351 95, Sweden.
- Mohib Ullah
- Intelligent Systems and Analytics (ISA) Research Group, Department of Computer Science (IDI), Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
- Waheed Noor
- Department of Computer Science & Information Technology, University of Balochistan, Quetta, 87300, Pakistan.

2. Pasvantis K, Protopapadakis E. Enhancing Deep Learning Model Explainability in Brain Tumor Datasets Using Post-Heuristic Approaches. J Imaging 2024; 10:232. PMID: 39330452. PMCID: PMC11433079. DOI: 10.3390/jimaging10090232. Received 07/30/2024; revised 09/02/2024; accepted 09/05/2024.
Abstract
The application of deep learning models in medical diagnosis has showcased considerable efficacy in recent years. Nevertheless, a notable limitation is the inherent lack of explainability in their decision-making processes. This study addresses that constraint by making the generated interpretations more robust. The primary focus is on refining the explanations generated by the LIME library and its image explainer, achieved through post-processing mechanisms based on scenario-specific rules. Multiple experiments have been conducted using publicly accessible datasets related to brain tumor detection. Our proposed post-heuristic approach demonstrates significant advancements, yielding more robust and concrete results in the context of medical diagnosis.
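The pipeline above (perturbation-based attribution followed by rule-based post-processing) can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in for the LIME image explainer, not the paper's code: segment importance is estimated as the score drop when a segment is masked, and a threshold rule plays the role of the scenario-specific post-heuristics; all segment names and weights are invented for illustration.

```python
# Simplified perturbation-based attribution with a post-heuristic filter.

def segment_importances(score_fn, segments):
    """Importance of each segment = score drop when that segment is masked."""
    base = score_fn(segments)
    out = {}
    for s in segments:
        masked = [t for t in segments if t != s]
        out[s] = base - score_fn(masked)  # contribution of segment s
    return out

def post_heuristic(importances, min_weight=0.1):
    # Scenario-specific rule: keep only segments above a relevance threshold.
    return {s: w for s, w in importances.items() if w >= min_weight}

# Hypothetical black box: responds mostly to the "tumor_core" segment.
weights = {"tumor_core": 0.8, "edema": 0.15, "background": 0.02}
score = lambda segs: sum(weights[s] for s in segs)

imp = segment_importances(score, list(weights))
print(post_heuristic(imp))  # the weak 'background' segment is dropped
```

A real implementation would perturb superpixels of the MRI slice and fit a local linear surrogate, but the filtering step shown last is where the post-heuristic rules of the paper would act.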
Affiliation(s)
- Konstantinos Pasvantis
- Department of Applied Informatics, University of Macedonia, Egnatia 156, 546 36 Thessaloniki, Greece
- Eftychios Protopapadakis
- Department of Applied Informatics, University of Macedonia, Egnatia 156, 546 36 Thessaloniki, Greece

3. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024. PMID: 39105745. DOI: 10.1007/s00066-024-02262-2. Received 01/21/2024; accepted 06/13/2024.
Abstract
The rapid development of artificial intelligence (AI) has gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools and also state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany

4. Ghadimi DJ, Vahdani AM, Karimi H, Ebrahimi P, Fathi M, Moodi F, Habibzadeh A, Khodadadi Shoushtari F, Valizadeh G, Mobarak Salari H, Saligheh Rad H. Deep Learning-Based Techniques in Glioma Brain Tumor Segmentation Using Multi-Parametric MRI: A Review on Clinical Applications and Future Outlooks. J Magn Reson Imaging 2024. PMID: 39074952. DOI: 10.1002/jmri.29543. Received 03/15/2024; revised 07/07/2024; accepted 07/08/2024.
Abstract
This comprehensive review explores the role of deep learning (DL) in glioma segmentation using multiparametric magnetic resonance imaging (MRI) data. The study surveys advanced techniques such as multiparametric MRI for capturing the complex nature of gliomas. It delves into the integration of DL with MRI, focusing on convolutional neural networks (CNNs) and their remarkable capabilities in tumor segmentation. Clinical applications of DL-based segmentation are highlighted, including treatment planning, monitoring treatment response, and distinguishing between tumor progression and pseudo-progression. Furthermore, the review examines the evolution of DL-based segmentation studies, from early CNN models to recent advancements such as attention mechanisms and transformer models. Challenges in data quality, gradient vanishing, and model interpretability are discussed. The review concludes with insights into future research directions, emphasizing the importance of addressing tumor heterogeneity, integrating genomic data, and ensuring responsible deployment of DL-driven healthcare technologies. Evidence level: N/A. Technical efficacy: Stage 2.
Affiliation(s)
- Delaram J Ghadimi
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Amir M Vahdani
- Image Guided Surgery Lab, Research Center for Biomedical Technologies and Robotics, Advanced Medical Technologies and Equipment Institute, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Hanie Karimi
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Pouya Ebrahimi
- Cardiovascular Diseases Research Institute, Tehran Heart Center, Tehran University of Medical Sciences, Tehran, Iran
- Mobina Fathi
- School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farzan Moodi
- School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Adrina Habibzadeh
- Student Research Committee, Fasa University of Medical Sciences, Fasa, Iran
- Gelareh Valizadeh
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Hanieh Mobarak Salari
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Hamidreza Saligheh Rad
- Quantitative MR Imaging and Spectroscopy Group (QMISG), Tehran University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran

5. Albalawi E, Mahesh TR, Thakur A, Kumar VV, Gupta M, Khan SB, Almusharraf A. Integrated approach of federated learning with transfer learning for classification and diagnosis of brain tumor. BMC Med Imaging 2024; 24:110. PMID: 38750436. PMCID: PMC11097560. DOI: 10.1186/s12880-024-01261-0. Received 12/24/2023; accepted 03/27/2024.
Abstract
Brain tumor classification using MRI images is a crucial yet challenging task in medical imaging. Accurate diagnosis is vital for effective treatment planning but is often hindered by the complex nature of tumor morphology and variations in imaging. Traditional methodologies primarily rely on manual interpretation of MRI images, supplemented by conventional machine learning techniques. These approaches often lack the robustness and scalability needed for precise and automated tumor classification. The major limitations include a high degree of manual intervention, potential for human error, limited ability to handle large datasets, and lack of generalizability to diverse tumor types and imaging conditions. To address these challenges, we propose a federated learning-based deep learning model that leverages the power of Convolutional Neural Networks (CNN) for automated and accurate brain tumor classification. This innovative approach not only emphasizes the use of a modified VGG16 architecture optimized for brain MRI images but also highlights the significance of federated learning and transfer learning in the medical imaging domain. Federated learning enables decentralized model training across multiple clients without compromising data privacy, addressing the critical need for confidentiality in medical data handling. This model architecture benefits from the transfer learning technique by utilizing a pre-trained CNN, which significantly enhances its ability to classify brain tumors accurately by leveraging knowledge gained from vast and diverse datasets. Our model is trained on a diverse dataset combining the figshare, SARTAJ, and Br35H datasets, employing a federated learning approach for decentralized, privacy-preserving model training. The adoption of transfer learning further bolsters the model's performance, making it adept at handling the intricate variations in MRI images associated with different types of brain tumors.
The model demonstrates high precision (0.99 for glioma, 0.95 for meningioma, 1.00 for no tumor, and 0.98 for pituitary), recall, and F1-scores in classification, outperforming existing methods. The overall accuracy stands at 98%, showcasing the model's efficacy in classifying various tumor types accurately, thus highlighting the transformative potential of federated learning and transfer learning in enhancing brain tumor classification using MRI images.
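The decentralized training described in this abstract rests on an aggregation step in the style of federated averaging (FedAvg): each client trains on its private data, and the server averages the returned weights, weighted by local dataset size, so raw MRI data never leaves the client. The paper does not publish its aggregation code, so the following is a generic sketch on toy weight vectors:

```python
# FedAvg aggregation: size-weighted mean of the clients' weight vectors.

def fedavg(client_weights, client_sizes):
    """client_weights: one flat weight list per client; client_sizes: #samples."""
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]

# Three hypothetical clients holding different amounts of local MRI data;
# the larger client pulls the average toward its weights.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(fedavg(weights, sizes))  # -> [3.5, 4.5]
```

In a real system each element of `weights` would be the flattened parameters of the modified VGG16 after a round of local training, and the averaged vector would be broadcast back to the clients for the next round.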
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, 31982, Hofuf, Saudi Arabia
- Mahesh T R
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, 632014, Vellore, India
- Muskan Gupta
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), 562112, Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, M5 4WT, Manchester, UK.
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon.
- Ahlam Almusharraf
- Department of Business Administration, College of Business and Administration, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia

6. Khodadadi Shoushtari F, Dehkordi ANV, Sina S. Quantitative and Visual Analysis of Data Augmentation and Hyperparameter Optimization in Deep Learning-Based Segmentation of Low-Grade Glioma Tumors Using Grad-CAM. Ann Biomed Eng 2024; 52:1359-1377. PMID: 38409433. DOI: 10.1007/s10439-024-03461-9. Received 10/25/2023; accepted 01/29/2024.
Abstract
This study executes a quantitative and visual investigation on the effectiveness of data augmentation and hyperparameter optimization on the accuracy of deep learning-based segmentation of LGG tumors. The study employed the MobileNetV2 and ResNet backbones with atrous convolution in the DeepLabV3+ structure. The Grad-CAM tool was also used to interpret the effect of augmentation and network optimization on segmentation performance. A wide investigation was performed to optimize the network hyperparameters. In addition, the study examined 35 different models to evaluate different data augmentation techniques. The results of the study indicated that incorporating data augmentation techniques and optimization can improve the performance of segmenting brain LGG tumors by up to 10%. Our extensive investigation of the data augmentation techniques indicated that enlarging the data with 90° and 225° rotations, together with up-down and left-right flipping, is the most effective approach. MobileNetV2 as the backbone, "Focal Loss" as the loss function, and "Adam" as the optimizer showed the superior results. The optimal model (DLG-Net) achieved an overall accuracy of 96.1% with a loss value of 0.006. Specifically, the segmentation performance for Whole Tumor (WT), Tumor Core (TC), and Enhanced Tumor (ET) reached a Dice Similarity Coefficient (DSC) of 89.4%, 70.1%, and 49.9%, respectively. Simultaneous visual and quantitative assessment of data augmentation and network optimization can lead to an optimal model with a reasonable performance in segmenting the LGG tumors.
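The winning augmentations (quarter-turn rotation and up-down / left-right flips) and the Dice Similarity Coefficient used for evaluation are easy to make concrete. Below is a minimal pure-Python sketch on nested lists; a real pipeline would use numpy or an augmentation library, and the 225° rotation, which needs interpolation, is omitted here:

```python
# Geometric augmentations on a 2D image (nested lists) plus the Dice score.

def rot90(img):
    """Counter-clockwise quarter turn (matches numpy.rot90's default)."""
    return [list(row) for row in zip(*img)][::-1]

def flip_ud(img):
    """Up-down flip: reverse the row order."""
    return img[::-1]

def flip_lr(img):
    """Left-right flip: reverse each row."""
    return [row[::-1] for row in img]

def dice(pred, truth):
    """Dice Similarity Coefficient for binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = sum(p & t for rp, rt in zip(pred, truth) for p, t in zip(rp, rt))
    total = sum(sum(r) for r in pred) + sum(sum(r) for r in truth)
    return 2.0 * inter / total if total else 1.0

img = [[1, 2],
       [3, 4]]
print(rot90(img))    # -> [[2, 4], [1, 3]]
print(flip_ud(img))  # -> [[3, 4], [1, 2]]

mask = [[1, 1], [0, 0]]
print(dice(mask, mask))  # -> 1.0 (perfect overlap)
```

Applying the same flips and rotations to the image and its ground-truth mask multiplies the training set without new annotation effort, which is the enlargement strategy the abstract reports as most effective.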
Affiliation(s)
- Azimeh N V Dehkordi
- Department of Physics, Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran.
- Najafabad Branch, Islamic Azad University, Najafabad, 8514143131, Iran.
- Sedigheh Sina
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran

7. Frosina G. Advancements in Image-Based Models for High-Grade Gliomas Might Be Accelerated. Cancers (Basel) 2024; 16:1566. PMID: 38672647. PMCID: PMC11048778. DOI: 10.3390/cancers16081566. Received 03/05/2024; revised 04/08/2024; accepted 04/18/2024.
Abstract
The first half of 2022 saw the publication of several major research advances in image-based models and artificial intelligence applications to optimize treatment strategies for high-grade gliomas, the deadliest brain tumors. We review them and discuss the barriers that delay their entry into clinical practice, particularly the small sample sizes and the heterogeneity of the study designs and methodologies used. We also discuss the poor and late palliative care that patients suffering from high-grade glioma can count on at the end of life, as well as the current legislative instruments, with particular reference to Italy. We suggest measures to accelerate the gradual progress in image-based models and in end-of-life care for patients with high-grade glioma.
Affiliation(s)
- Guido Frosina
- Mutagenesis & Cancer Prevention Unit, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi 10, 16132 Genova, Italy

8. Metta C, Beretta A, Pellungrini R, Rinzivillo S, Giannotti F. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence. Bioengineering (Basel) 2024; 11:369. PMID: 38671790. PMCID: PMC11048122. DOI: 10.3390/bioengineering11040369. Received 03/15/2024; revised 04/09/2024; accepted 04/10/2024.
Abstract
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians' and patients' understanding of machine learning models and their outcomes. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
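LORE's core idea (fit an interpretable rule-based surrogate on a neighborhood of the instance being explained and read the explanation off the rules) can be illustrated with a one-level decision stump standing in for the rule learner. This is a toy sketch, not the LORE algorithm itself; the black box, feature semantics, and thresholds are all hypothetical:

```python
# Local surrogate explanation: query the black box on a neighborhood of the
# instance, then find the single-feature rule that best mimics its decisions.

def best_stump(samples, labels):
    """Return the (feature, threshold) split that best separates labels."""
    n_features = len(samples[0])
    best, best_acc = None, -1.0
    for f in range(n_features):
        for thr in sorted({s[f] for s in samples}):
            pred = [s[f] > thr for s in samples]
            acc = sum(p == l for p, l in zip(pred, labels)) / len(labels)
            acc = max(acc, 1.0 - acc)  # allow either side to be positive
            if acc > best_acc:
                best_acc, best = acc, (f, thr)
    return best, best_acc

# Hypothetical black box: flags risk when feature 0 ("age") exceeds 50.
black_box = lambda x: x[0] > 50
neighborhood = [(40, 1.0), (45, 3.0), (55, 2.0),
                (60, 0.5), (48, 2.5), (52, 1.5)]
labels = [black_box(x) for x in neighborhood]

(feature, threshold), acc = best_stump(neighborhood, labels)
# The recovered rule, e.g. "feature 0 > 48", is the local explanation.
print(feature, threshold, acc)
```

The real LORE method grows the neighborhood with a genetic algorithm and extracts both a factual rule and counterfactual rules from a full decision tree; the stump above only shows the surrogate-fitting step in miniature.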
Affiliation(s)
- Carlo Metta
- Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Andrea Beretta
- Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Roberto Pellungrini
- Faculty of Sciences, Scuola Normale Superiore, P.za dei Cavalieri 7, 56126 Pisa, Italy
- Salvatore Rinzivillo
- Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Fosca Giannotti
- Faculty of Sciences, Scuola Normale Superiore, P.za dei Cavalieri 7, 56126 Pisa, Italy

9. Metta C, Beretta A, Guidotti R, Yin Y, Gallinari P, Rinzivillo S, Giannotti F. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification. Diagnostics (Basel) 2024; 14:753. PMID: 38611666. PMCID: PMC11011805. DOI: 10.3390/diagnostics14070753. Received 03/19/2024; revised 03/30/2024; accepted 03/30/2024.
Abstract
A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model's ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model's latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.
Affiliation(s)
- Carlo Metta
- Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Andrea Beretta
- Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Riccardo Guidotti
- Department of Computer Science, Università di Pisa, 56124 Pisa, Italy
- Yuan Yin
- Laboratoire d'Informatique de Paris 6, Sorbonne Université, 75005 Paris, France
- Patrick Gallinari
- Laboratoire d'Informatique de Paris 6, Sorbonne Université, 75005 Paris, France
- Salvatore Rinzivillo
- Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Fosca Giannotti
- Faculty of Sciences, Scuola Normale Superiore di Pisa, 56126 Pisa, Italy

10. Khalighi S, Reddy K, Midya A, Pandav KB, Madabhushi A, Abedalthagafi M. Artificial intelligence in neuro-oncology: advances and challenges in brain tumor diagnosis, prognosis, and precision treatment. NPJ Precis Oncol 2024; 8:80. PMID: 38553633. PMCID: PMC10980741. DOI: 10.1038/s41698-024-00575-0. Received 10/19/2023; accepted 03/13/2024.
Abstract
This review delves into the most recent advancements in applying artificial intelligence (AI) within neuro-oncology, specifically emphasizing work on gliomas, a class of brain tumors that represent a significant global health issue. AI has brought transformative innovations to brain tumor management, utilizing imaging, histopathological, and genomic tools for efficient detection, categorization, outcome prediction, and treatment planning. Assessing its influence across all facets of malignant brain tumor management (diagnosis, prognosis, and therapy), AI models outperform human evaluations in terms of accuracy and specificity. Their ability to discern molecular aspects from imaging may reduce reliance on invasive diagnostics and may accelerate the time to molecular diagnoses. The review covers AI techniques, from classical machine learning to deep learning, highlighting current applications and challenges. Promising directions for future research include multimodal data integration, generative AI, large medical language models, precise tumor delineation and characterization, and addressing racial and gender disparities. Adaptive personalized treatment strategies are also emphasized for optimizing clinical outcomes. Ethical, legal, and social implications are discussed, advocating for transparency and fairness in AI integration for neuro-oncology and providing a holistic understanding of its transformative impact on patient care.
Affiliation(s)
- Sirvan Khalighi: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kartik Reddy: Department of Radiology, Emory University, Atlanta, GA, USA
- Abhishek Midya: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Krunal Balvantbhai Pandav: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Anant Madabhushi: Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA; Atlanta Veterans Administration Medical Center, Atlanta, GA, USA
- Malak Abedalthagafi: Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA; The Cell and Molecular Biology Program, Winship Cancer Institute, Atlanta, GA, USA
11
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. PMID: 38393660. DOI: 10.1007/s10916-024-02037-3. Received 08/23/2023; accepted 02/03/2024.
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
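The modular flow described above (tumor segmentation, patient registration, explainable output) can be sketched at the block-diagram level. Everything below is a hypothetical stand-in for illustration: the function names, the threshold "segmentation", the affine "registration", and the intensity "saliency" are assumptions, not NeuroIGN's actual components.

```python
import numpy as np

# Hypothetical sketch of a modular image-guided-neurosurgery pipeline in the
# spirit of NeuroIGN (segmentation, registration, explainable output). All
# functions are toy placeholders, not the published system.

def segment_tumor(volume):
    """Toy 'segmentation': mark voxels brighter than the mean intensity."""
    return (volume > volume.mean()).astype(np.uint8)

def register_patient(mask, transform):
    """Toy rigid 'registration': map tumor voxel coordinates with a 3x3 matrix."""
    coords = np.argwhere(mask)          # N x 3 voxel indices of the tumor
    return coords @ transform.T         # coordinates in the navigation frame

def explain_prediction(volume, mask):
    """Toy 'saliency': normalized intensities inside the predicted mask."""
    sal = volume * mask
    return sal / (sal.max() + 1e-8)

def ign_pipeline(volume, transform):
    mask = segment_tumor(volume)
    registered = register_patient(mask, transform)
    saliency = explain_prediction(volume, mask)
    return {"mask": mask, "registered": registered, "saliency": saliency}

rng = np.random.default_rng(0)
out = ign_pipeline(rng.random((8, 8, 8)), np.eye(3))
```

In the real system each stage would be a deep segmentation network, a tracked registration step, and an XAI module; the sketch only mirrors the paper's modular decomposition.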
Affiliation(s)
- Ramy A Zeineldin: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany; Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany; Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Mohamed E Karar: Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
12
Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, Mathis-Ullrich F. Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI. Sci Rep 2024; 14:3713. PMID: 38355678. PMCID: PMC10866944. DOI: 10.1038/s41598-024-54186-7. Received 06/14/2023; accepted 02/09/2024. Open access.
Abstract
Accurate localization of gliomas, the most common malignant primary brain cancer, and of their different sub-regions from multimodal magnetic resonance imaging (MRI) volumes is highly important for interventional procedures. Recently, deep learning models have been applied widely to assist automatic lesion segmentation for neurosurgical interventions. However, these models are often complex "black boxes", which limits their applicability in clinical practice. This article introduces a new hybrid of vision Transformers and convolutional neural networks for accurate and robust glioma segmentation in brain MRI scans. Our proposed method, TransXAI, provides surgeon-understandable heatmaps that make the neural network transparent. TransXAI employs a post-hoc explanation technique that provides visual interpretation after the brain tumor localization is made, without any network architecture modifications or accuracy tradeoffs. Our experimental findings showed that TransXAI achieves competitive performance in extracting both local and global context, in addition to generating explainable saliency maps that help understand the deep network's predictions. Further, visualization maps are obtained to trace the flow of information through the internal layers of the encoder-decoder network and to understand the contribution of each MRI modality to the final prediction. The explainability process could provide medical professionals with additional information about the tumor segmentation results and thereby aid in understanding how the deep learning model processes MRI data successfully. It thus supports physicians' trust in such deep learning systems toward applying them clinically. To facilitate TransXAI model development and results reproducibility, we will share the source code and the pre-trained models after acceptance at https://github.com/razeineldin/TransXAI .
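The post-hoc principle, explaining a frozen model after its prediction with no architecture changes, can be illustrated with a toy, model-agnostic sketch. The scoring function below is an invented stand-in (not the hybrid Transformer-CNN), and real saliency methods backpropagate gradients rather than using this brute-force finite-difference probe:

```python
import math

# Toy black-box scorer: a fixed weighted sum that favors the image center.
# This stands in for any frozen model we may probe but not modify.
def black_box_score(image):
    h, w = len(image), len(image[0])
    return sum(
        image[y][x] * math.exp(-((y - h / 2) ** 2 + (x - w / 2) ** 2) / 8.0)
        for y in range(h)
        for x in range(w)
    )

def finite_difference_saliency(score, image, eps=1e-4):
    """Post-hoc heatmap: |change in score| when each pixel is nudged."""
    base = score(image)
    sal = [[0.0] * len(image[0]) for _ in image]
    for y in range(len(image)):
        for x in range(len(image[0])):
            bumped = [row[:] for row in image]
            bumped[y][x] += eps
            sal[y][x] = abs(score(bumped) - base) / eps
    peak = max(max(row) for row in sal)
    return [[v / peak for v in row] for row in sal]  # normalize to [0, 1]

img = [[1.0] * 9 for _ in range(9)]
heatmap = finite_difference_saliency(black_box_score, img)
# Central pixels dominate the heatmap, matching the toy model's center bias.
```

Because the probe only calls `score`, it works for any architecture, which is exactly why post-hoc methods need no modification of the underlying network.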
Affiliation(s)
- Ramy A Zeineldin: Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany; Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany; Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Mohamed E Karar: Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Ziad Elshaer: Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Jan Coburger: Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Christian R Wirtz: Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich: Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052, Erlangen, Germany
13
Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. PMID: 38243677. DOI: 10.1002/jmri.29247. Received 09/01/2023; revised 01/05/2024; accepted 01/08/2024. Open access.
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. Level of Evidence: 5. Technical Efficacy: Stage 1.
Affiliation(s)
- Pegah Khosravi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri: Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri: Department of Neuroscience, University of California San Diego, San Diego, California, USA
14
Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. PMID: 37976760. DOI: 10.1016/j.ejrad.2023.111159. Received 07/25/2023; revised 09/26/2023; accepted 10/19/2023.
Abstract
PURPOSE: To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS: 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The most frequently provided explanations were local (78.1%), 5.7% were global, and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, sometimes interchangeably using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3). CONCLUSION: The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Henning Müller: Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- John O Prior: Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Cláudia Sá Dos Reis: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
15
Ali S, Akhlaq F, Imran AS, Kastrati Z, Daudpota SM, Moosa M. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Comput Biol Med 2023; 166:107555. PMID: 37806061. DOI: 10.1016/j.compbiomed.2023.107555. Received 05/07/2023; revised 08/13/2023; accepted 09/28/2023.
Abstract
In domains such as medicine and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how they actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by quantifying feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published between 2018 and 2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
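The LIME idea mentioned above, fitting an interpretable local surrogate around a single prediction of a black-box model, can be sketched in a few lines. The "model", the Gaussian perturbations, and the kernel width below are toy assumptions for illustration, not the actual LIME library or any medical model:

```python
import numpy as np

# Toy black-box regressor: only features 0 and 2 actually matter.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 2]

def lime_style_explain(f, x, n_samples=500, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise around x.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    y = f(Z)
    # 2. Weight perturbed samples by proximity to the original instance.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    # 3. Fit a weighted linear surrogate (with intercept) by least squares.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[:-1]                   # per-feature local importance

x0 = np.array([1.0, 0.0, 1.0, 0.0])
importance = lime_style_explain(black_box, x0)
# The surrogate's coefficients recover the two influential features.
```

The surrogate's coefficients play the role of the feature-importance scores that LIME reports; SHAP arrives at related attributions through Shapley values instead of a single local fit.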
Affiliation(s)
- Subhan Ali: Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
- Filza Akhlaq: Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan
- Ali Shariq Imran: Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
- Zenun Kastrati: Department of Informatics, Linnaeus University, Växjö, 351 95, Sweden
- Muhammad Moosa: Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
16
de Vries BM, Zwezerijnen GJC, Burchell GL, van Velden FHP, Menke-van der Houven van Oordt CW, Boellaard R. Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review. Front Med (Lausanne) 2023; 10:1180773. PMID: 37250654. PMCID: PMC10213317. DOI: 10.3389/fmed.2023.1180773. Received 03/06/2023; accepted 04/17/2023. Open access.
Abstract
RATIONALE: Deep learning (DL) has demonstrated remarkable performance in diagnostic imaging for various diseases and modalities and therefore has high potential to be used as a clinical tool. However, current practice shows low deployment of these algorithms in clinical practice, because DL algorithms lack transparency and trust due to their underlying black-box mechanism. For successful deployment, explainable artificial intelligence (XAI) could be introduced to close the gap between medical professionals and DL algorithms. In this literature review, XAI methods available for magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) imaging are discussed and future suggestions are made. METHODS: PubMed, Embase.com and Clarivate Analytics/Web of Science Core Collection were screened. Articles were considered eligible for inclusion if XAI was used (and well described) to describe the behavior of a DL model used in MR, CT, or PET imaging. RESULTS: A total of 75 articles were included, of which 54 and 17 described post hoc and ad hoc XAI methods, respectively, and 4 described both. Major variations in performance are seen between the methods. Overall, post hoc XAI lacks the ability to provide class-discriminative and target-specific explanations. Ad hoc XAI seems to tackle this because of its intrinsic ability to explain. However, quality control of XAI methods is rarely applied, making systematic comparison between methods difficult. CONCLUSION: There is currently no clear consensus on how XAI should be deployed in order to close the gap between medical professionals and DL algorithms for clinical implementation. We advocate systematic technical and clinical quality assessment of XAI methods. Also, to ensure end-to-end unbiased and safe integration of XAI into clinical workflows, (anatomical) data minimization and quality control methods should be included.
Affiliation(s)
- Bart M. de Vries: Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Gerben J. C. Zwezerijnen: Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Ronald Boellaard: Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
17
Qian J, Li H, Wang J, He L. Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging. Diagnostics (Basel) 2023; 13:1571. PMID: 37174962. PMCID: PMC10178221. DOI: 10.3390/diagnostics13091571. Received 02/28/2023; revised 03/29/2023; accepted 04/26/2023. Open access.
Abstract
Advances in artificial intelligence (AI), especially deep learning (DL), have facilitated magnetic resonance imaging (MRI) data analysis, enabling AI-assisted medical image diagnoses and prognoses. However, most DL models are considered "black boxes". There is an unmet need to demystify DL models so domain experts can trust these high-performance models; this need has given rise to a sub-domain of AI research called explainable artificial intelligence (XAI). In the last decade, many experts have dedicated their efforts to developing novel XAI methods that are competent at visualizing and explaining the logic behind data-driven DL models. However, XAI techniques are still in their infancy for medical MRI analysis. This study aims to outline the XAI applications that are able to interpret DL models for MRI data analysis. We first introduce several common MRI data modalities. Then, a brief history of DL models is discussed. Next, we highlight XAI frameworks and elaborate on the principles of multiple popular XAI methods. Moreover, studies on XAI applications in MRI image analysis are reviewed across the tissues/organs of the human body. A quantitative analysis is conducted to reveal the insights of MRI researchers on these XAI techniques. Finally, evaluations of XAI methods are discussed. This survey presents recent advances in the XAI domain for explaining the DL models that have been utilized in MRI applications.
Affiliation(s)
- Jinzhao Qian: Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH 45221, USA
- Hailong Li: Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA; Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH 45221, USA
- Junqi Wang: Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Lili He: Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA; Department of Computer Science, University of Cincinnati, Cincinnati, OH 45221, USA; Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH 45221, USA
18
Karar ME, El-Fishawy N, Radad M. Automated classification of urine biomarkers to diagnose pancreatic cancer using 1-D convolutional neural networks. J Biol Eng 2023; 17:28. PMID: 37069681. PMCID: PMC10111836. DOI: 10.1186/s13036-023-00340-0. Received 01/14/2023; accepted 03/13/2023. Open access.
Abstract
BACKGROUND: Early diagnosis of Pancreatic Ductal Adenocarcinoma (PDAC) is the main key to patient survival. The urine proteomic biomarkers creatinine, LYVE1, REG1B, and TFF1 present a promising non-invasive and inexpensive diagnostic method for PDAC. Recent utilization of both microfluidics technology and artificial intelligence techniques enables accurate detection and analysis of these biomarkers. This paper proposes a new deep-learning model to identify urine biomarkers for the automated diagnosis of pancreatic cancers. The proposed model is composed of one-dimensional convolutional neural networks (1D-CNNs) and long short-term memory (LSTM). It can automatically categorize patients into healthy pancreas, benign hepatobiliary disease, and PDAC cases. RESULTS: Experiments and evaluations were successfully performed on a public dataset of 590 urine samples of three classes: 183 healthy pancreas samples, 208 benign hepatobiliary disease samples, and 199 PDAC samples. The results demonstrated that our proposed 1D-CNN + LSTM model achieved the best accuracy score of 97% and an area under the curve (AUC) of 98% versus the state-of-the-art models for diagnosing pancreatic cancers using urine biomarkers. CONCLUSION: A new efficient 1D CNN-LSTM model has been successfully developed for early PDAC diagnosis using four proteomic urine biomarkers: creatinine, LYVE1, REG1B, and TFF1. The developed model showed performance superior to other machine learning classifiers in previous studies. The main prospect of this study is the laboratory realization of our proposed deep classifier on urinary biomarker panels for assisting diagnostic procedures of pancreatic cancer patients.
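As a minimal illustration of the 1-D convolution such a CNN front end slides across a four-value biomarker panel, consider the sketch below; the kernel weights, the standardized panel values, and the ReLU choice are assumptions for illustration, not the published architecture:

```python
# Illustrative 'valid' 1-D convolution (implemented, as in deep learning
# frameworks, as cross-correlation) followed by ReLU over a four-value
# urine biomarker panel. All numbers are made-up toy values.

def conv1d_valid(x, kernel):
    k = len(kernel)
    out = [
        sum(xi * ki for xi, ki in zip(x[i:i + k], kernel))
        for i in range(len(x) - k + 1)
    ]
    return [max(v, 0.0) for v in out]  # ReLU non-linearity

# Toy standardized panel: [creatinine, LYVE1, REG1B, TFF1]
panel = [0.8, 1.2, 0.5, 2.1]
feature_map = conv1d_valid(panel, [1.0, -1.0])
# Output length = 4 - 2 + 1 = 3 sliding-window responses.
```

In the published model, many such learned kernels would produce feature maps that an LSTM layer then consumes before the final three-class decision.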
Affiliation(s)
- Mohamed Esmail Karar: Department of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menoufia University, Al Minufiyah, Egypt
- Nawal El-Fishawy: Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Al Minufiyah, Egypt
- Marwa Radad: Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Al Minufiyah, Egypt
19
Alsubai S, Khan HU, Alqahtani A, Sha M, Abbas S, Mohammad UG. Ensemble deep learning for brain tumor detection. Front Comput Neurosci 2022; 16:1005617. PMID: 36118133. PMCID: PMC9480978. DOI: 10.3389/fncom.2022.1005617. Received 07/28/2022; accepted 08/18/2022. Open access.
Abstract
With the quick evolution of medical technology, the era of big data in medicine is fast approaching. The analysis and mining of these data significantly influence the prediction, monitoring, diagnosis, and treatment of tumor disorders. Because it has a wide range of traits, a low survival rate, and an aggressive nature, brain tumor is regarded as the deadliest and most devastating disease. Misdiagnosed brain tumors lead to inadequate medical treatment, reducing the patient's chances of survival. Brain tumor detection is highly challenging due to the difficulty of distinguishing between aberrant and normal tissues, while a correct diagnosis makes effective therapy and long-term survival possible for the patient. Despite extensive research, there are still certain limitations in detecting brain tumors because of the unusual distribution pattern of the lesions. Finding a region with a small number of lesions can be difficult because small areas tend to look healthy; this directly reduces classification accuracy, and extracting and choosing informative features is challenging. Automatically classifying early-stage brain tumors with deep and machine learning approaches therefore plays a significant role. This paper proposes a hybrid deep learning model, Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), for classifying and predicting brain tumors from Magnetic Resonance Images (MRI). We experiment on an MRI brain image dataset. First, the data are preprocessed efficiently; then, the Convolutional Neural Network (CNN) is applied to extract the significant features from images. The proposed model predicts the brain tumor with a significant classification accuracy of 99.1%, a precision of 98.8%, a recall of 98.9%, and an F1-measure of 99.0%.
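The reported accuracy, precision, recall, and F1-measure all derive from the same four prediction counts; a minimal sketch with made-up counts (not the paper's confusion matrix):

```python
# Standard binary classification metrics from confusion-matrix counts.
# The counts below are invented for illustration only.

def binary_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)         # how many flagged cases are real
    recall = tp / (tp + fn)            # how many real cases are caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics(tp=95, fp=5, fn=5, tn=95)
```

Because F1 is the harmonic mean of precision and recall, it always lies between them, which is why the paper's 99.0% F1 sits between its 98.8% precision and 98.9% recall.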
Affiliation(s)
- Shtwai Alsubai: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Habib Ullah Khan (corresponding author): Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha, Qatar
- Abdullah Alqahtani: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Mohemmed Sha: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Sidra Abbas: Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Uzma Ghulam Mohammad: Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan