1
Mahmud T, Barua K, Habiba SU, Sharmen N, Hossain MS, Andersson K. An Explainable AI Paradigm for Alzheimer's Diagnosis Using Deep Transfer Learning. Diagnostics (Basel) 2024; 14:345. [PMID: 38337861 PMCID: PMC10855149 DOI: 10.3390/diagnostics14030345]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. The early and accurate diagnosis of AD is crucial for effective intervention and disease management. In recent years, deep learning techniques have shown promising results in medical image analysis, including AD diagnosis from neuroimaging data. However, the lack of interpretability in deep learning models hinders their adoption in clinical settings, where explainability is essential for gaining trust and acceptance from healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer's disease, leveraging the power of deep transfer learning and ensemble modeling. The proposed framework aims to enhance the interpretability of deep learning models by incorporating XAI techniques, allowing clinicians to understand the decision-making process and providing valuable insights into disease diagnosis. By leveraging popular pre-trained convolutional neural networks (CNNs) such as VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performances on a comprehensive dataset. The proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), demonstrated superior accuracy, precision, recall, and F1 scores compared to individual models, reaching up to 95%. To enhance interpretability and transparency in Alzheimer's diagnosis, we introduced a novel model achieving an impressive accuracy of 96%. This model incorporates explainable AI techniques, including saliency maps and Grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only contributes to the model's exceptional accuracy but also provides clinicians and researchers with visual insights into the neural regions influencing the diagnosis. Our findings showcase the potential of combining deep transfer learning with explainable AI in the realm of Alzheimer's disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.
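Saliency maps and Grad-CAM are standard, reproducible techniques. As a point of reference, the sketch below shows a minimal Grad-CAM computation in PyTorch over a pretrained VGG16 — an illustrative reconstruction of the general technique, not the authors' code; the random input tensor stands in for a preprocessed neuroimaging slice.

```python
# Minimal Grad-CAM sketch (illustrative; not the authors' implementation).
# Assumes a recent torchvision with the pretrained-weights API.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

activations, gradients = {}, {}
last_conv = model.features[28]  # final Conv2d layer of VGG16

last_conv.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
last_conv.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed brain-scan slice
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top-scoring class

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # pooled channel weights
cam = F.relu((weights * activations["value"]).sum(dim=1))    # weighted feature-map sum
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
```

The class-wise gradients are average-pooled into channel weights, and the weighted sum of feature maps is upsampled to highlight the image regions driving the prediction.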
Affiliation(s)
- Tanjim Mahmud
- Department of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati 4500, Bangladesh
- Koushick Barua
- Department of Computer Science and Engineering, Rangamati Science and Technology University, Rangamati 4500, Bangladesh
- Sultana Umme Habiba
- Department of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna 9203, Bangladesh
- Nahed Sharmen
- Department of Obstetrics and Gynecology, Chattogram Maa-O-Shishu Hospital Medical College, Chittagong 4100, Bangladesh
- Mohammad Shahadat Hossain
- Department of Computer Science and Engineering, University of Chittagong, Chittagong 4331, Bangladesh
- Karl Andersson
- Pervasive and Mobile Computing Laboratory, Luleå University of Technology, 97187 Luleå, Sweden
2
Kerz E, Zanwar S, Qiao Y, Wiechmann D. Toward explainable AI (XAI) for mental health detection based on language behavior. Front Psychiatry 2023; 14:1219479. [PMID: 38144474 PMCID: PMC10748510 DOI: 10.3389/fpsyt.2023.1219479]
Abstract
Advances in artificial intelligence (AI) in general and Natural Language Processing (NLP) in particular are paving the way forward for the automated detection and prediction of mental health disorders among the population. Recent research in this area has prioritized predictive accuracy over model interpretability by relying on deep learning methods. However, prioritizing predictive accuracy over model interpretability can result in a lack of transparency in the decision-making process, which is critical in sensitive applications such as healthcare. There is thus a growing need for explainable AI (XAI) approaches to psychiatric diagnosis and prediction. The main aim of this work is to address this gap by conducting a systematic investigation of XAI approaches in the realm of automatic detection of mental disorders from language behavior, leveraging textual data from social media. In pursuit of this aim, we perform extensive experiments to evaluate the balance between accuracy and interpretability across predictive mental health models. More specifically, we build BiLSTM models trained on a comprehensive set of human-interpretable features, encompassing syntactic complexity, lexical sophistication, readability, cohesion, stylistics, as well as topics and sentiment/emotions derived from lexicon-based dictionaries, to capture multiple dimensions of language production. We conduct extensive feature ablation experiments to determine the most informative feature groups associated with specific mental health conditions. We juxtapose the performance of these models against a "black-box" domain-specific pretrained transformer adapted for mental health applications. To enhance the interpretability of the transformer models, we utilize a multi-task fusion learning framework infusing information from two relevant domains (emotion and personality traits). Moreover, we employ two distinct explanation techniques: the local interpretable model-agnostic explanations (LIME) method and a model-specific self-explaining method (AGRAD). These methods allow us to discern the specific categories of words that the information-infused models rely on when generating predictions. Our proposed approaches are evaluated on two public English benchmark datasets, covering five mental health conditions (attention-deficit/hyperactivity disorder, anxiety, bipolar disorder, depression, and psychological stress).
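For readers unfamiliar with LIME, the sketch below shows how word-level explanations are typically obtained with the public lime package; the TF-IDF plus logistic-regression pipeline and the toy sentences are stand-ins for the authors' BiLSTM and transformer models, not their setup.

```python
# Hedged LIME sketch using the public `lime` package; the TF-IDF +
# logistic-regression pipeline and toy texts are stand-ins, not the
# authors' BiLSTM/AGRAD setup.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I feel hopeless and exhausted", "Had a great and productive day"]
labels = [1, 0]  # 1 = condition-positive (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["control", "condition"])
explanation = explainer.explain_instance(
    "I feel exhausted all the time",
    clf.predict_proba,  # LIME perturbs the text and queries this function
    num_features=5,     # report the top word-level contributions
)
print(explanation.as_list())  # (word, weight) pairs driving the prediction
```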
Affiliation(s)
- Elma Kerz
- Department of English and American Studies, RWTH Aachen University, Aachen, North Rhine-Westphalia, Germany
- Sourabh Zanwar
- Department of English and American Studies, RWTH Aachen University, Aachen, North Rhine-Westphalia, Germany
- Yu Qiao
- Department of English and American Studies, RWTH Aachen University, Aachen, North Rhine-Westphalia, Germany
- Daniel Wiechmann
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
3
Nambiar A, S H, S S. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data. Front Artif Intell 2023; 6:1272506. [PMID: 38111787 PMCID: PMC10726049 DOI: 10.3389/frai.2023.1272506]
Abstract
Introduction: The COVID-19 pandemic had a global impact and created an unprecedented emergency in healthcare and other related frontline sectors. Various Artificial-Intelligence-based models were developed to effectively manage medical resources and identify patients at high risk. However, many of these AI models were of limited practical use in high-risk applications due to their "black-box" nature, i.e., the lack of interpretability of the model. To tackle this problem, Explainable Artificial Intelligence (XAI) was introduced, aiming to explore the "black box" behavior of machine learning models and offer definitive and interpretable evidence. XAI provides interpretable analysis in a human-compliant way, thus boosting our confidence in the successful implementation of AI systems in the wild.
Methods: In this regard, this study explores the use of model-agnostic XAI methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), for COVID-19 symptom analysis in Indian patients toward a COVID severity prediction task. Machine learning models such as a Decision Tree classifier, an XGBoost classifier, and a Neural Network classifier are leveraged for the prediction task.
Results and discussion: The proposed XAI tools are found to augment the high performance of AI systems with human-interpretable evidence and reasoning, as shown through the interpretation of various explainability plots. Our comparative analysis illustrates the significance of XAI tools and their impact within a healthcare context. The study suggests that SHAP and LIME analysis are promising methods for incorporating explainability in model development and can lead to better and more trustworthy ML models in the future.
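A minimal sketch of the SHAP workflow the study describes, using synthetic stand-in data (the feature names are invented for illustration; the authors' Indian COVID-19 dataset is not reproduced here):

```python
# Hedged SHAP sketch on an XGBoost severity classifier; data and feature
# names are synthetic stand-ins, not the study's Indian COVID-19 dataset.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5)).astype(float)          # binary symptom flags
y = (X[:, 0] + X[:, 3] + rng.random(200) > 1.5).astype(int)  # toy severity label

model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)   # fast, exact explainer for tree ensembles
shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions
shap.summary_plot(shap_values, X,
                  feature_names=["fever", "cough", "fatigue", "dyspnea", "anosmia"])
```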
Affiliation(s)
- Athira Nambiar
- Department of Computational Intelligence, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India
4
Mylrea M, Robinson N. Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI. Entropy (Basel) 2023; 25:1429. [PMID: 37895550 PMCID: PMC10606888 DOI: 10.3390/e25101429]
Abstract
Recent advancements in artificial intelligence (AI) technology have raised concerns about ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing the security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an "entropy lens" to root the study in information theory and to enhance transparency and trust in "black box" AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human-machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework's ability to measure trust in the design and management of AI systems.
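The paper proposes a framework rather than code, but the central quantity of an "entropy lens" can be illustrated concretely: the Shannon entropy of a model's predictive distribution, a standard measure of its uncertainty. The snippet below is illustrative only, not part of the proposed framework.

```python
# Illustrative only: Shannon entropy of a model's predictive distribution,
# the kind of quantity an "entropy lens" on AI trust might monitor.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """H(p) = -sum_i p_i log2 p_i, in bits, for one predictive distribution."""
    probs = probs[probs > 0]  # drop zero entries to avoid log(0)
    return float(-(probs * np.log2(probs)).sum())

confident = np.array([0.97, 0.01, 0.01, 0.01])
uncertain = np.array([0.25, 0.25, 0.25, 0.25])
print(predictive_entropy(confident))  # ~0.24 bits: low entropy, easier to trust
print(predictive_entropy(uncertain))  # 2.0 bits: maximum entropy over 4 classes
```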
Affiliation(s)
- Michael Mylrea
- Department of Computer Science & Engineering, Institute of Data Science and Computing, University of Miami, Coral Gables, FL 33146, USA
- Nikki Robinson
- Department of Computer and Data Science, Capitol Technology University, Laurel, MD 20708, USA
5
Gabralla LA, Hussien AM, AlMohimeed A, Saleh H, Alsekait DM, El-Sappagh S, Ali AA, Refaat Hassan M. Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence. Diagnostics (Basel) 2023; 13:2939. [PMID: 37761306 PMCID: PMC10529133 DOI: 10.3390/diagnostics13182939]
Abstract
Colon cancer is the third most common cancer type worldwide; in 2020, almost two million cases were diagnosed. As a result, providing new, highly accurate techniques for detecting colon cancer leads to early and successful treatment of this disease. This paper aims to propose a heterogeneous stacking deep learning model to predict colon cancer. Stacking deep learning integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated using the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance for the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared to the existing models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the output of multiple single models and trains and evaluates a metalearner on that output to produce better predictive results than any single model. The black-box deep learning models are interpreted using explainable AI (XAI).
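The stacking idea described here — feeding the outputs of several base models into an SVM metalearner — can be sketched with scikit-learn. In the sketch below the base estimators are generic classifiers standing in for the paper's CNNs, and the synthetic data is a placeholder:

```python
# Hedged sketch of the stacking idea: each base model's class probabilities
# become the metalearner's features. The scikit-learn estimators and synthetic
# data are stand-ins for the paper's CNN outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),  # stand-in for CNN #1
        ("lr", LogisticRegression(max_iter=1000)),       # stand-in for CNN #2
    ],
    final_estimator=SVC(),         # the SVM metalearner ("Stacking-SVM")
    stack_method="predict_proba",  # feed probabilities, not hard labels
    cv=5,                          # out-of-fold predictions avoid leakage
)
stack.fit(X, y)
print(stack.score(X, y))
```

Using out-of-fold probabilities (`cv=5`) is the standard way to keep the metalearner from overfitting to the base models' training predictions.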
Affiliation(s)
- Lubna Abdelkareim Gabralla
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ali Mohamed Hussien
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
- Abdulaziz AlMohimeed
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt
- Deema Mohammed Alsekait
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez 34511, Egypt
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Abdelmgeid A. Ali
- Faculty of Computers and Information, Minia University, Minia 61519, Egypt
- Moatamad Refaat Hassan
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
6
Ghnemat R, Alodibat S, Abu Al-Haija Q. Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification. J Imaging 2023; 9:177. [PMID: 37754941 PMCID: PMC10532018 DOI: 10.3390/jimaging9090177]
Abstract
Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification to enhance the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and the COVID-19 Radiography Database. We achieved testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. Our proposed model improved accuracy and reduced time complexity, making it more practical for medical diagnosis. Our approach offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.
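The abstract does not spell out the segmentation-based explanation procedure, but one common way to realize the idea is segment-level occlusion: partition the image into superpixels and measure how much each segment's removal changes the model's output. The sketch below is our assumption of such a scheme, with a toy `predict_fn` standing in for a real classifier:

```python
# Our assumption of a segmentation-driven explanation (the abstract does not
# spell out the method): occlude each SLIC superpixel and record how much the
# model's score drops. `predict_fn` is a placeholder for any image classifier.
import numpy as np
from skimage.segmentation import slic

def segment_importance(image: np.ndarray, predict_fn, n_segments: int = 50):
    """Per-segment importance = score drop when that segment is grayed out."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    baseline = predict_fn(image)
    scores = {}
    for seg_id in np.unique(segments):
        occluded = image.copy()
        occluded[segments == seg_id] = image.mean()  # gray out one segment
        scores[seg_id] = baseline - predict_fn(occluded)
    return segments, scores

# Toy usage: a "model" that scores mean brightness of the upper-left quadrant.
img = np.random.rand(64, 64, 3)
segments, scores = segment_importance(img, lambda im: float(im[:32, :32].mean()))
print(max(scores, key=scores.get))  # segment whose occlusion hurts the score most
```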
Affiliation(s)
- Rawan Ghnemat
- Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan
- Sawsan Alodibat
- Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan
- Qasem Abu Al-Haija
- Department of Cybersecurity, Princess Sumaya University for Technology, Amman 11941, Jordan
7
Marques-Silva J, Ignatiev A. No silver bullet: interpretable ML models must be explained. Front Artif Intell 2023; 6:1128212. [PMID: 37168320 PMCID: PMC10165097 DOI: 10.3389/frai.2023.1128212]
Abstract
Recent years have witnessed a number of proposals for the use of so-called interpretable models in specific application domains. These include high-risk, but also safety-critical, domains. In contrast, other works have reported some pitfalls of machine learning model interpretability, in part justified by the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, this study reveals additional limitations of interpretable models. Concretely, this study considers application domains where the purpose is to help human decision makers understand why some prediction was made, or why some other prediction was not made, and where irreducible (and so minimal) information is sought. In such domains, this study argues that answers to such why (or why-not) questions can exhibit arbitrary redundancy, i.e., the answers can be simplified, whenever these answers are obtained by human inspection of the interpretable ML model representation.
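The redundancy argument can be made concrete with a toy example: a rule read off an interpretable model may mention features that a minimal sufficient explanation does not need. The brute-force sketch below (our illustration, not the paper's algorithm) shrinks a three-literal rule to its single necessary literal:

```python
# Our toy illustration of the paper's point (not its algorithm): a rule read
# off an interpretable model can mention redundant features; brute force finds
# a minimal sufficient subset.
from itertools import combinations, product

def model(x):  # an "interpretable" rule: feature 0 alone decides the output
    return x[0] == 1

instance = (1, 1, 0)
full_explanation = [0, 1, 2]  # a rule citing all three features, as read off
                              # the model's representation by inspection

def sufficient(feature_subset):
    """Fixing these features to the instance's values must fix the prediction."""
    free = [i for i in range(3) if i not in feature_subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if model(x) != model(instance):
            return False
    return True

def minimal_explanation():
    for size in range(len(full_explanation) + 1):
        for subset in combinations(full_explanation, size):
            if sufficient(subset):
                return subset

print(minimal_explanation())  # -> (0,): two of the three literals were redundant
```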
Affiliation(s)
- Alexey Ignatiev
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Melbourne, VIC, Australia
8
Zia Ur Rehman M, Ahmed F, Alsuhibany SA, Jamal SS, Zulfiqar Ali M, Ahmad J. Classification of Skin Cancer Lesions Using Explainable Deep Learning. Sensors (Basel) 2022; 22:6915. [PMID: 36146271 PMCID: PMC9505745 DOI: 10.3390/s22186915]
Abstract
Skin cancer is among the most prevalent and life-threatening forms of cancer that occur worldwide. Traditional methods of skin cancer detection need an in-depth physical examination by a medical professional, which is time-consuming in some cases. Recently, computer-aided medical diagnostic systems have gained popularity due to their effectiveness and efficiency. These systems can assist dermatologists in the early detection of skin cancer, which can be lifesaving. In this paper, the pre-trained MobileNetV2 and DenseNet201 deep learning models are modified by adding additional convolution layers to effectively detect skin cancer. Specifically, the modification consists of stacking three convolutional layers at the end of each model. A thorough comparison shows that the modified models outperform the original pre-trained MobileNetV2 and DenseNet201 models. The proposed method can detect both benign and malignant classes. The results indicate that the proposed Modified DenseNet201 model achieves 95.50% accuracy and state-of-the-art performance when compared with other techniques present in the literature. In addition, the sensitivity and specificity of the Modified DenseNet201 model are 93.96% and 97.03%, respectively.
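A hedged Keras sketch of the described modification — a frozen pretrained DenseNet201 backbone with three convolutional layers stacked on top. The filter counts and the classification head are our assumptions; the abstract does not specify them:

```python
# Hedged sketch of the described modification: a frozen pretrained DenseNet201
# with three extra convolutional layers stacked on top. Filter counts and the
# sigmoid head are our assumptions; the abstract does not fix them.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False  # transfer learning: freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.Conv2D(256, 3, padding="same", activation="relu"),  # added conv #1
    layers.Conv2D(128, 3, padding="same", activation="relu"),  # added conv #2
    layers.Conv2D(64, 3, padding="same", activation="relu"),   # added conv #3
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```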
Affiliation(s)
- Fawad Ahmed
- Department of Cyber Security, Pakistan Navy Engineering College, National University of Sciences & Technology, Karachi 75350, Pakistan
- Suliman A. Alsuhibany
- Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
- Sajjad Shaukat Jamal
- Department of Mathematics, College of Science, King Khalid University, Abha 61413, Saudi Arabia
- Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
9
Fahim MANI, Saqib N, Siam SK, Jung HY. Rethinking Gradient Weight's Influence over Saliency Map Estimation. Sensors (Basel) 2022; 22:6516. [PMID: 36080974 PMCID: PMC9460162 DOI: 10.3390/s22176516]
Abstract
Class activation map (CAM) helps to formulate saliency maps that aid in interpreting the deep neural network's prediction. Gradient-based methods are generally faster than other branches of vision interpretability and independent of human guidance. The performance of CAM-like studies depends on the governing model's layer response and the influences of the gradients. Typical gradient-oriented CAM studies rely on weighted aggregation for saliency map estimation by projecting the gradient maps into single-weight values, which may lead to an over-generalized saliency map. To address this issue, we use a global guidance map to rectify the weighted aggregation operation during saliency estimation, where resultant interpretations are comparatively cleaner and instance-specific. We obtain the global guidance map by performing elementwise multiplication between the feature maps and their corresponding gradient maps. To validate our study, we compare the proposed study with nine different saliency visualizers. In addition, we use seven commonly used evaluation metrics for quantitative comparison. The proposed scheme achieves significant improvement over the test images from the ImageNet, MS-COCO 14, and PASCAL VOC 2012 datasets.
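The abstract's core operation — an elementwise product of feature maps and their gradients used as a global guidance map to rectify the weighted aggregation — can be sketched as below. How the guidance map is combined with the classic Grad-CAM aggregation is our assumption; the paper's exact formulation may differ:

```python
# Sketch of the abstract's core operation. The guidance map is the elementwise
# product of activations and gradients; how it modulates the classic weighted
# aggregation below is our assumption, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def guided_cam(activations: torch.Tensor, gradients: torch.Tensor) -> torch.Tensor:
    """activations, gradients: (C, H, W) captured at a chosen conv layer."""
    weights = gradients.mean(dim=(1, 2), keepdim=True)  # classic Grad-CAM weights
    guidance = F.relu(activations * gradients)          # elementwise guidance map
    cam = F.relu((weights * activations).sum(dim=0)) * guidance.sum(dim=0)
    return cam / (cam.max() + 1e-8)                     # normalize to [0, 1]

acts = torch.rand(512, 14, 14)        # stand-in activations
grads = torch.randn(512, 14, 14)      # stand-in gradients
print(guided_cam(acts, grads).shape)  # torch.Size([14, 14])
```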
10
Iliadou E, Su Q, Kikidis D, Bibas T, Kloukinas C. Profiling hearing aid users through big data explainable artificial intelligence techniques. Front Neurol 2022; 13:933940. [PMID: 36090867 PMCID: PMC9459083 DOI: 10.3389/fneur.2022.933940]
Abstract
Debilitating hearing loss (HL) affects ~6% of the human population. Only 20% of the people in need of a hearing assistive device will eventually seek and acquire one. The number of people that are satisfied with their Hearing Aids (HAids) and continue using them in the long term is even lower. Understanding the personal, behavioral, environmental, or other factors that correlate with the optimal HAid fitting and with users' experience of HAids is a significant step in improving patient satisfaction and quality of life, while reducing the societal and financial burden. In SMART BEAR we are addressing this need by making use of the capacity of modern HAids to provide dynamic logging of their operation and by combining this information with a large amount of information about the medical, environmental, and social context of each HAid user. We are studying hearing rehabilitation through a 12-month continuous monitoring of HL patients, collecting data such as participants' demographics, audiometric and medical data, their cognitive and mental status, and their habits and preferences, through a set of medical devices and wearables, as well as through face-to-face and remote clinical assessments and fitting/fine-tuning sessions. Descriptive, AI-based analysis and assessment of the relationships between heterogeneous data and HL-related parameters will help clinical researchers to better understand the overall health profiles of HL patients, and to identify patterns or relations that may prove essential for future clinical trials. In addition, the future state and behavior of the patients (e.g., HAid satisfaction and usage) will be predicted with time-dependent machine learning models to assist the clinical researchers in deciding on the nature of the interventions. Explainable Artificial Intelligence (XAI) techniques will be leveraged to better understand the factors that play a significant role in the success of a hearing rehabilitation program, constructing patient profiles. This is a conceptual paper describing the upcoming data collection process and the proposed framework for providing a comprehensive profile for patients with HL in the context of the EU-funded SMART BEAR project. Such patient profiles can be invaluable in HL treatment, as they can help to identify the characteristics that make patients more prone to drop out and stop using their HAids, to use their HAids sufficiently long during the day, or to be more satisfied with their HAids experience. They can also help decrease the number of remote sessions needed with an audiologist for counseling and/or HAid fine-tuning, and the number of manual changes of the HAid program (an indication of poor sound quality and poor adaptation of the HAid configuration to patients' real needs and daily challenges), leading to reduced healthcare costs.
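Purely as an illustration of the kind of time-dependent prediction the project plans, the sketch below trains a classifier on invented monthly HAid-log features to flag patients at risk of drop-out; all feature names and data are stand-ins, not SMART BEAR data:

```python
# Purely illustrative: the kind of time-dependent prediction the project
# describes, flagging drop-out risk from monthly HAid-log features. All
# feature names and data are invented stand-ins, not SMART BEAR data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 300
hours_per_day = rng.uniform(0, 12, n)  # rolling mean of daily HAid usage
program_changes = rng.poisson(2, n)    # manual program switches per month
remote_sessions = rng.poisson(1, n)    # fine-tuning sessions this month
X = np.column_stack([hours_per_day, program_changes, remote_sessions])
y = ((hours_per_day < 2) | (program_changes > 4)).astype(int)  # toy risk label

model = GradientBoostingClassifier().fit(X[:200], y[:200])
print(model.score(X[200:], y[200:]))  # later records held out as a crude temporal split
```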
Affiliation(s)
- Eleftheria Iliadou
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Qiqi Su
- Department of Computer Science, University of London, London, United Kingdom
- Dimitrios Kikidis
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Thanos Bibas
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Christos Kloukinas
- Department of Computer Science, University of London, London, United Kingdom
11
Tack A, Shestakov A, Lüdke D, Zachow S. A Multi-Task Deep Learning Method for Detection of Meniscal Tears in MRI Data from the Osteoarthritis Initiative Database. Front Bioeng Biotechnol 2021; 9:747217. [PMID: 34926416 PMCID: PMC8675251 DOI: 10.3389/fbioe.2021.747217]
Abstract
We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on complete 3D MRI scans. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data and how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences.
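The multi-task design — one shared backbone feeding a tear-detection head (three sub-regions times two menisci) and a bounding-box regression head — can be sketched in PyTorch as below; the layer sizes and the joint loss weighting are our assumptions, not the paper's architecture:

```python
# Hedged sketch of the multi-task design: one 3D-CNN backbone feeding a
# tear-detection head (6 outputs: 3 sub-regions x 2 menisci) and a bounding-box
# regression head. Layer sizes and loss weighting are our assumptions.
import torch
import torch.nn as nn

class MultiTaskMeniscusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.tear_head = nn.Linear(32, 6)   # logits per (meniscus, sub-region)
        self.bbox_head = nn.Linear(32, 12)  # two boxes x (x, y, z, w, h, d)

    def forward(self, x):
        features = self.backbone(x)
        return self.tear_head(features), self.bbox_head(features)

net = MultiTaskMeniscusNet()
scan = torch.randn(2, 1, 32, 64, 64)  # batch of 3D MRI volumes (toy size)
tear_logits, boxes = net(scan)
loss = nn.BCEWithLogitsLoss()(tear_logits, torch.zeros(2, 6)) \
     + nn.SmoothL1Loss()(boxes, torch.zeros(2, 12))  # joint multi-task loss
print(tear_logits.shape, boxes.shape)
```

Training the box regressor jointly with the detector is what lets the network implicitly attend to the menisci's regions of interest, as the abstract describes.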
Affiliation(s)
- Alexander Tack
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Alexey Shestakov
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- David Lüdke
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Stefan Zachow
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Charité–University Medicine, Berlin, Germany