1. Kaur A, Mittal M, Bhatti JS, Thareja S, Singh S. A systematic literature review on the significance of deep learning and machine learning in predicting Alzheimer's disease. Artif Intell Med 2024; 154:102928. [PMID: 39029377] [DOI: 10.1016/j.artmed.2024.102928]
Abstract
BACKGROUND Alzheimer's disease (AD) is the most prevalent cause of dementia, characterized by a steady decline in mental, behavioral, and social abilities that impairs a person's capacity for independent functioning. It is a fatal neurodegenerative disease primarily affecting older adults. OBJECTIVES The purpose of this literature review is to investigate the AD detection techniques, datasets, input modalities, algorithms, libraries, and performance evaluation metrics used, to determine which model or strategy may provide superior performance. METHOD The initial search yielded 807 papers, of which 100 research articles were retained after applying the inclusion-exclusion criteria. This SLR analyzed research articles published between January 2019 and December 2022. The ACM, Elsevier, IEEE Xplore, PubMed, Springer, and Taylor & Francis digital libraries were systematically searched. The study considers articles that used Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), APOE4 genotype, Diffusion Tensor Imaging (DTI), and Cerebrospinal Fluid (CSF) biomarkers, and was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. CONCLUSION According to the literature survey, most studies (n = 76) used a DL strategy. The datasets were primarily derived from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The majority of studies (n = 73) used single-modality neuroimaging data, while the remainder used multi-modal input data; in multi-modality approaches, the combination of MRI and PET scans is commonly preferred. Regarding the algorithms used, the Convolutional Neural Network (CNN) showed the highest accuracy, 100%, in classifying AD vs. CN subjects, whereas the SVM was the most common ML algorithm, with a maximum accuracy of 99.82%.
Affiliation(s)
- Arshdeep Kaur: Dept. of Computer Science & Technology, Central University of Punjab, Bathinda, India
- Meenakshi Mittal: Dept. of Computer Science & Technology, Central University of Punjab, Bathinda, India
- Jasvinder Singh Bhatti: Dept. of Human Genetics and Molecular Medicine, Central University of Punjab, Bathinda, India
- Suresh Thareja: Dept. of Pharmaceutical Sciences and Natural Products, Central University of Punjab, Bathinda, India
- Satwinder Singh: Dept. of Computer Science & Technology, Central University of Punjab, Bathinda, India
2. Fathi S, Ahmadi A, Dehnad A, Almasi-Dooghaee M, Sadegh M. A Deep Learning-Based Ensemble Method for Early Diagnosis of Alzheimer's Disease using MRI Images. Neuroinformatics 2024; 22:89-105. [PMID: 38042764] [PMCID: PMC10917836] [DOI: 10.1007/s12021-023-09646-2]
Abstract
Recently, the early diagnosis of Alzheimer's disease (AD) has gained major attention due to the growing prevalence of the disease and the resulting costs imposed on individuals and society. The main objective of this study was to propose an ensemble method based on deep learning for the early diagnosis of AD using MRI images. The methodology consisted of collecting the dataset, preprocessing, creating the individual and ensemble models, evaluating the models on ADNI data, and validating the trained model on a local dataset. The proposed method was an ensemble approach selected through a comparative analysis of various ensemble scenarios; the six best individual CNN-based classifiers were combined to constitute the ensemble model. The evaluation showed accuracies of 98.57%, 96.37%, 94.22%, 99.83%, 93.88%, and 93.92% for the NC/AD, NC/EMCI, EMCI/LMCI, LMCI/AD, four-way, and three-way classification groups, respectively. Validation on the local dataset revealed an accuracy of 88.46% for three-way classification. Our performance results were higher than most reviewed studies and comparable with others. Although the comparative analysis showed superior results for ensemble methods against individual architectures, there were no significant differences among the various ensemble approaches. The validation results revealed the low performance of individual models in practice, whereas the ensemble method showed promising results. However, further studies on varied and larger datasets are required to validate the generalizability of the model.
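The ensemble idea described in this abstract, combining several CNN classifiers, commonly reduces to soft voting over class probabilities. A minimal sketch of that general scheme, with made-up probability matrices standing in for the individual CNN outputs (the abstract does not specify the exact combination rule):

```python
import numpy as np

def soft_vote(prob_matrices):
    """Average class-probability matrices from several classifiers
    and return the ensemble's predicted class per sample."""
    avg = np.mean(np.stack(prob_matrices, axis=0), axis=0)
    return avg.argmax(axis=1)

# Three hypothetical classifiers scoring 4 samples on 2 classes (e.g. NC vs. AD).
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9], [0.6, 0.4]])
p3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.3, 0.7], [0.8, 0.2]])

pred = soft_vote([p1, p2, p3])
print(pred)  # → [0 1 1 0]
```

Averaging smooths out disagreements among individual models, which is one reason ensembles tend to outperform single architectures on small medical datasets.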
Affiliation(s)
- Sina Fathi: Department of Health Information Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Ahmadi: Surrey Business School, University of Surrey, Guildford, Surrey, GU2 7XH, UK
- Afsaneh Dehnad: School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Mostafa Almasi-Dooghaee: Neurology Department, Firoozgar Hospital, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Melika Sadegh: Neurology Department, Firoozgar Hospital, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
3. Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. [PMID: 37976760] [DOI: 10.1016/j.ejrad.2023.111159]
Abstract
PURPOSE To review the eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The explanations provided were most frequently local (78.1%), 5.7% were global, and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology varied, sometimes using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) interchangeably. CONCLUSION The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.
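Many of the post-hoc local visual explanations counted in this review follow the same pattern as occlusion sensitivity: mask part of the input and measure how the model's score changes. A minimal illustration, using a toy scoring function in place of a real trained model:

```python
import numpy as np

def occlusion_map(model, image, patch=4, baseline=0.0):
    """Post-hoc local explanation: slide a masking patch over the image
    and record how much the model's score drops at each location."""
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Toy "model": scores an image by the mean intensity of its top-left quadrant.
model = lambda img: float(img[:8, :8].mean())

img = np.ones((16, 16))
heat = occlusion_map(model, img, patch=8)
print(heat)  # only the top-left patch matters to this toy model
```

The resulting heatmap is a local explanation in the review's sense: it explains one prediction for one image, rather than the model's global behavior.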
Affiliation(s)
- Mélanie Champendal: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Henning Müller: Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- John O Prior: Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Cláudia Sá Dos Reis: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
4. Yamada S, Otani T, Ii S, Kawano H, Nozaki K, Wada S, Oshima M, Watanabe Y. Aging-related volume changes in the brain and cerebrospinal fluid using artificial intelligence-automated segmentation. Eur Radiol 2023; 33:7099-7112. [PMID: 37060450] [PMCID: PMC10511609] [DOI: 10.1007/s00330-023-09632-x]
Abstract
OBJECTIVES To verify the reliability of volumes automatically segmented using a new artificial intelligence (AI)-based application and to evaluate changes in brain and CSF volume with healthy aging. METHODS The intracranial space was automatically segmented into 21 brain subregions and 5 CSF subregions using the AI-based application on 3D T1-weighted images in healthy volunteers aged > 20 years. Additionally, the automatically segmented volumes of the total ventricles and subarachnoid spaces were compared with the manually segmented volumes extracted from 3D T2-weighted images using intra-class correlation and Bland-Altman analysis. RESULTS In this study, 133 healthy volunteers aged 21-92 years were included. The mean intra-class correlations between the automatically and manually segmented volumes of the total ventricles and subarachnoid spaces were 0.986 and 0.882, respectively. The increase in CSF volume was estimated at approximately 30 mL (2%) per decade, from 265 mL (18.7%) in the 20s to 488 mL (33.7%) in ages above 80 years; however, the total ventricular volume remained around 20 mL (< 2%) until the 60s and increased in ages above 60 years. CONCLUSIONS This study confirmed the reliability of the CSF volumes obtained with the AI-based auto-segmentation application. The intracranial CSF volume increased linearly because of brain volume reduction with aging; however, the ventricular volume did not change until the age of 60 years and then gradually increased. This finding could help elucidate the pathogenesis of chronic hydrocephalus in adults. KEY POINTS • The brain and CSF spaces were automatically segmented using an artificial intelligence-based application. • The total subarachnoid spaces increased linearly with aging, whereas the total ventricular volume remained around 20 mL (< 2%) until the 60s and increased in ages above 60 years. • The cortical gray matter gradually decreases with aging, whereas the subcortical gray matter maintains its volume, and the cerebral white matter increases slightly until the 40s and begins to decrease from the 50s.
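The Bland-Altman analysis used in this study to compare automatic and manual volumes reduces to the mean difference (bias) and its 95% limits of agreement. A short sketch with hypothetical ventricular-volume pairs, not the study's data:

```python
import numpy as np

def bland_altman(auto, manual):
    """Bland-Altman agreement statistics: mean difference (bias) and
    95% limits of agreement between two measurement methods."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical ventricular volumes (mL): AI-automated vs. manual segmentation.
auto_vol   = [31.0, 45.2, 28.4, 60.1, 52.3]
manual_vol = [30.5, 44.8, 29.0, 59.5, 51.9]

bias, lo, hi = bland_altman(auto_vol, manual_vol)
print(f"bias={bias:.2f} mL, LoA=[{lo:.2f}, {hi:.2f}] mL")
```

A small bias with narrow limits of agreement indicates that the automatic method can be substituted for manual segmentation, which is the conclusion the intra-class correlations of 0.986 and 0.882 support here.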
Affiliation(s)
- Shigeki Yamada: Department of Neurosurgery, Nagoya City University Graduate School of Medical Science, 1 Kawasumi, Mizuho-cho, Mizuho-ku, Nagoya, Aichi, 467-8601, Japan; Interfaculty Initiative in Information Studies / Institute of Industrial Science, The University of Tokyo, Tokyo, Japan; Department of Neurosurgery, Shiga University of Medical Science, Ōtsu, Shiga, Japan
- Tomohiro Otani: Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Satoshi Ii: Faculty of System Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Hiroto Kawano: Department of Neurosurgery, Shiga University of Medical Science, Ōtsu, Shiga, Japan
- Kazuhiko Nozaki: Department of Neurosurgery, Shiga University of Medical Science, Ōtsu, Shiga, Japan
- Shigeo Wada: Department of Mechanical Science and Bioengineering, Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Marie Oshima: Interfaculty Initiative in Information Studies / Institute of Industrial Science, The University of Tokyo, Tokyo, Japan
- Yoshiyuki Watanabe: Department of Radiology, Shiga University of Medical Science, Ōtsu, Shiga, Japan
5. Liu F, Wang H, Liang SN, Jin Z, Wei S, Li X. MPS-FFA: A multiplane and multiscale feature fusion attention network for Alzheimer's disease prediction with structural MRI. Comput Biol Med 2023; 157:106790. [PMID: 36958239] [DOI: 10.1016/j.compbiomed.2023.106790]
Abstract
Structural magnetic resonance imaging (sMRI) is a popular technique that is widely applied in Alzheimer's disease (AD) diagnosis. However, only a few structural atrophy areas in sMRI scans are highly associated with AD. The degree of atrophy in patients' brain tissues and the distribution of lesion areas differ among patients. Therefore, a key challenge in sMRI-based AD diagnosis is identifying discriminating atrophy features. To address this, we propose a multiplane and multiscale feature-level fusion attention (MPS-FFA) model. The model has three components: (1) a feature encoder that uses a multiscale feature extractor with hybrid attention layers to simultaneously capture and fuse multiple pathological features in the sagittal, coronal, and axial planes; (2) a global attention classifier that combines clinical scores and two global attention layers to evaluate the feature impact scores and balance the relative contributions of different feature blocks; and (3) a feature similarity discriminator that minimizes the feature similarities among heterogeneous labels to enhance the ability of the network to discriminate atrophy features. The MPS-FFA model provides improved interpretability for identifying discriminating features through feature visualization. The experimental results on baseline sMRI scans from two databases confirm the effectiveness (e.g., accuracy and generalizability) of our method in locating pathological locations. The source code is available at https://github.com/LiuFei-AHU/MPSFFA.
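Balancing the relative contributions of feature blocks with attention, as this abstract describes, in its most general form amounts to a softmax-weighted fusion of per-block features. A toy sketch of that general idea only, not the MPS-FFA architecture itself (the scores would be learned in practice, here they are fixed):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(blocks, scores):
    """Fuse feature blocks using attention weights derived from
    per-block impact scores (softmax-normalised)."""
    w = softmax(np.asarray(scores, float))
    return sum(wi * b for wi, b in zip(w, np.asarray(blocks, float)))

# Hypothetical feature vectors from the sagittal, coronal, and axial planes.
blocks = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
fused = attention_fuse(blocks, scores=[2.0, 1.0, 0.5])
print(fused)  # weighted combination, dominated by the highest-scoring block
```

The softmax ensures the block weights are positive and sum to one, so a high-impact plane dominates the fused representation without discarding the others.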
Affiliation(s)
- Fei Liu: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Huabin Wang: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Shiuan-Ni Liang: School of Engineering, Monash University Malaysia, Kuala Lumpur, Malaysia
- Zhe Jin: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China
- Shicheng Wei: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China
- Xuejun Li: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
6. Ravikumar A, Sriraman H. Real-time pneumonia prediction using pipelined spark and high-performance computing. PeerJ Comput Sci 2023; 9:e1258. [PMID: 37346542] [PMCID: PMC10280684] [DOI: 10.7717/peerj-cs.1258]
Abstract
Background Pneumonia is a respiratory disease caused by bacteria; it affects many people, particularly in impoverished countries where pollution, unclean living standards, overpopulation, and insufficient medical infrastructure are prevalent. To guarantee curative therapy and boost survival chances, it is vital to detect pneumonia early. Chest X-ray imaging is the most common way of detecting pneumonia. However, analyzing chest X-rays is a complex process vulnerable to subjective variation. Moreover, the available data is growing exponentially, and it can take hours or days to train a model to predict pneumonia. Timely prediction is significant to guarantee a better cure and treatment. Existing work by different authors lacks precision, and the computation time for predicting pneumonia is also long. Therefore, there is a requirement for early forecasting: using X-ray image samples, the system must support continuous and unsupervised learning for early diagnosis. Methods In this article, the training time of the model is accelerated using a distributed data-parallel approach and the computational power of high-performance computing devices. This research aims to diagnose pneumonia from X-ray images with more precision, greater speed, and fewer processing resources. Distributed deep learning techniques are gaining popularity owing to the rising need for computational resources for deep learning models with many parameters. In contrast to conventional training methods, data-parallel training enables several compute nodes to concurrently train massive deep learning models, improving training efficiency. Deploying the model in Spark addresses both scalability and acceleration: Spark's distributed processing capability reads data from multiple nodes, and the results demonstrate that training time can be drastically reduced by utilizing these techniques, a significant necessity when dealing with large datasets. Results The proposed model makes the prediction 1.5 times faster than the traditional CNN model used for pneumonia prediction and achieved an accuracy of 98.72%. Speed-ups varying from 1.2 to 1.5 were obtained in the synchronous and asynchronous parallel models; the speed-up is lower in the asynchronous parallel model due to the presence of straggler nodes.
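The synchronous data-parallel scheme this abstract relies on computes per-node gradients on data shards and averages them (an all-reduce) before a single shared weight update. A self-contained simulation on a least-squares problem, with the nodes simulated in a loop; this is a sketch of the general technique, not the article's Spark pipeline:

```python
import numpy as np

def parallel_sgd_step(w, shards, lr=0.1):
    """One synchronous data-parallel step for least-squares regression:
    each 'node' computes a gradient on its shard; gradients are averaged
    (an all-reduce) before a single shared weight update."""
    grads = []
    for X, y in shards:                     # in practice, one shard per node
        err = X @ w - y
        grads.append(2 * X.T @ err / len(y))
    g = np.mean(grads, axis=0)              # all-reduce: average gradients
    return w - lr * g

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

shards = [(X[i::4], y[i::4]) for i in range(4)]  # 4 simulated nodes
w = np.zeros(3)
for _ in range(200):
    w = parallel_sgd_step(w, shards)
print(w)  # converges toward [1.0, -2.0, 0.5]
```

Because every node waits for the averaged gradient, one slow node stalls the whole step, which is exactly the straggler effect the Results attribute to the asynchronous variant's reduced speed-up as well.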
7. Fathi S, Ahmadi M, Dehnad A. Early diagnosis of Alzheimer's disease based on deep learning: A systematic review. Comput Biol Med 2022; 146:105634. [DOI: 10.1016/j.compbiomed.2022.105634]
8. Lu P, Hu L, Zhang N, Liang H, Tian T, Lu L. A Two-Stage Model for Predicting Mild Cognitive Impairment to Alzheimer's Disease Conversion. Front Aging Neurosci 2022; 14:826622. [PMID: 35386114] [PMCID: PMC8979209] [DOI: 10.3389/fnagi.2022.826622]
Abstract
Early detection of Alzheimer's disease (AD), such as predicting conversion from mild cognitive impairment (MCI) to AD, is critical for slowing disease progression and increasing quality of life. Although deep learning is a promising technique for structural MRI-based diagnosis, the paucity of training samples limits its power, especially for three-dimensional (3D) models. To this end, we propose a two-stage model combining transfer learning and contrastive learning that can achieve high accuracy in MRI-based early AD diagnosis even when sample numbers are restricted. Specifically, a 3D CNN model was pretrained using publicly available medical image data to learn common medical features, and contrastive learning was further utilized to learn more specific features of MCI images. The two-stage model outperformed each benchmark method. Compared with previous studies, our model achieves superior performance on progressive MCI patients, with an accuracy of 0.82 and an AUC of 0.84. We further enhance the interpretability of the model by using 3D Grad-CAM, which highlights brain regions with high predictive weights. Brain regions including the hippocampus, temporal lobe, and precuneus are associated with the classification of MCI, which is supported by the literature. Our approach provides a novel way to avoid overfitting caused by a lack of medical data and enables the early detection of AD.
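The contrastive-learning stage described in this abstract typically uses an InfoNCE/NT-Xent-style loss that pulls matched views of the same sample together and pushes other samples apart. A minimal numpy sketch of such a loss, an illustrative stand-in rather than the authors' exact formulation:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE/NT-Xent-style contrastive loss for paired embeddings
    (z1[i], z2[i]): pull matched views together, push others apart."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine-similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = len(z1)
    losses = []
    for i in range(n):
        for a, b in ((i, i + n), (i + n, i)):           # both views as anchor
            losses.append(-sim[a, b] + np.log(np.exp(sim[a]).sum()))
    return float(np.mean(losses))

# Hypothetical embeddings: two augmented "views" of the same two scans.
z1 = np.array([[1.0, 0.0], [0.0, 1.0]])
z2 = np.array([[0.9, 0.1], [0.1, 0.9]])
print(nt_xent(z1, z2))  # lower when matched views are more similar
```

Minimizing this loss shapes the embedding space before the small labeled MCI set is used, which is how the two-stage design compensates for limited training samples.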
Affiliation(s)
- Peixin Lu: School of Information Management, Wuhan University, Wuhan, China
- Lianting Hu: Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital, Guangzhou, China
- Ning Zhang: School of Business, Qingdao University, Qingdao, China
- Huiying Liang: Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital, Guangzhou, China
- Tao Tian: The First Division of Psychiatry, Jingmen No. 2 People's Hospital, Jingmen, China
- Long Lu: School of Information Management, Wuhan University, Wuhan, China