1
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024;177:108635. PMID: 38796881. DOI: 10.1016/j.compbiomed.2024.108635.
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the management of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
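The three fusion schemes this abstract distinguishes can be illustrated with a minimal toy sketch (hypothetical features and weights, not the review's code): input fusion concatenates raw modalities before any processing, intermediate fusion merges per-modality embeddings, and output fusion averages independent per-modality predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
mri = rng.normal(size=(4, 16))   # toy "MRI" features, batch of 4
pet = rng.normal(size=(4, 16))   # toy "PET" features

def encoder(x, w):
    """Tiny stand-in for a per-modality network layer."""
    return np.tanh(x @ w)

def classifier(z, w):
    """Linear head producing one logit per sample."""
    return z @ w

w_in   = rng.normal(size=(32, 8))  # encoder over concatenated raw inputs
w_enc  = rng.normal(size=(16, 8))  # shared toy per-modality encoder
w_mid  = rng.normal(size=(16, 1))  # head over concatenated embeddings
w_head = rng.normal(size=(8, 1))   # head over a single embedding

# (1) Input fusion: concatenate raw modalities before any processing.
logit_input = classifier(encoder(np.concatenate([mri, pet], axis=1), w_in), w_head)

# (2) Intermediate fusion: encode each modality, then merge embeddings.
z = np.concatenate([encoder(mri, w_enc), encoder(pet, w_enc)], axis=1)
logit_mid = classifier(z, w_mid)

# (3) Output fusion: average independent per-modality predictions.
logit_out = 0.5 * (classifier(encoder(mri, w_enc), w_head)
                   + classifier(encoder(pet, w_enc), w_head))

print(logit_input.shape, logit_mid.shape, logit_out.shape)  # (4, 1) each
```

Attention-based and hierarchical variants refine scheme (2) by learning where and how the embeddings are merged.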
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
2
Wang J, Zuo L, Cordente Martínez C. Basketball technique action recognition using 3D convolutional neural networks. Sci Rep 2024;14:13156. PMID: 38849454. PMCID: PMC11161614. DOI: 10.1038/s41598-024-63621-8.
Abstract
This research investigates the recognition of basketball technique actions through the implementation of three-dimensional (3D) Convolutional Neural Networks (CNNs), aiming to enhance the accurate and automated identification of various actions in basketball games. Initially, basketball action sequences are extracted from publicly available basketball action datasets, followed by data preprocessing, including image sampling, data augmentation, and label processing. Subsequently, a novel action recognition model is proposed, combining 3D convolutions and Long Short-Term Memory (LSTM) networks to model temporal features and capture the spatiotemporal relationships and temporal information of actions. This facilitates the automatic learning of the spatiotemporal features associated with basketball actions. The model's performance and robustness are further improved through the adoption of optimization algorithms, such as adaptive learning rate adjustment and regularization. The efficacy of the proposed method is verified through experiments conducted on three publicly available basketball action datasets: NTU RGB+D, Basketball-Action-Dataset, and B3D Dataset. The results indicate that this approach achieves outstanding performance in basketball technique action recognition tasks across different datasets compared with two common traditional methods. Specifically, compared with the frame-difference-based method, this model exhibits a significant accuracy improvement of 15.1%; compared with the optical-flow-based method, it demonstrates a substantial accuracy improvement of 12.4%. Moreover, the method shows strong robustness, accurately recognizing actions under diverse lighting conditions and scenes, with an average accuracy of 93.1%. The research demonstrates that the method reported here effectively captures the spatiotemporal relationships of basketball actions, thereby providing reliable technical assessment tools for basketball coaches and players.
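The frame-difference baseline that the 3D CNN is compared against is not specified in the abstract; a common minimal variant (assumed here, not the authors' exact method) scores motion as the mean absolute intensity change between consecutive frames:

```python
import numpy as np

def frame_difference_energy(clip):
    """Mean absolute intensity change between consecutive frames.

    clip: array of shape (T, H, W) with T frames.
    Returns a (T-1,) motion-energy curve, a crude action descriptor.
    """
    diffs = np.abs(np.diff(clip.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

# Toy clip: a bright 2x2 square that shifts right by one pixel per frame.
T, H, W = 5, 8, 8
clip = np.zeros((T, H, W))
for t in range(T):
    clip[t, 2:4, t:t + 2] = 1.0

energy = frame_difference_energy(clip)
print(energy)  # constant motion energy of 4/64 between every frame pair
```

Such hand-crafted motion cues ignore appearance and long-range temporal context, which is why learned spatiotemporal features from a 3D CNN + LSTM can outperform them.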
Affiliation(s)
- Jingfei Wang
- Physical Education Department, Northwestern Polytechnical University, Xi'an, 710129, Shaanxi, People's Republic of China.
- Departamento de Deportes, Facultad de Ciencias de la Actividad Física y del Deporte (INEF), Universidad Politécnica de Madrid, 28040, Madrid, Spain.
- Liang Zuo
- Department of Sports, Chang'an University, Xi'an, 710064, Shaanxi, China
- Carlos Cordente Martínez
- Departamento de Deportes, Facultad de Ciencias de la Actividad Física y del Deporte (INEF), Universidad Politécnica de Madrid, 28040, Madrid, Spain
3
Cotta Ramusino M, Massa F, Festari C, Gandolfo F, Nicolosi V, Orini S, Nobili F, Frisoni GB, Morbelli S, Garibotto V. Diagnostic performance of molecular imaging methods in predicting the progression from mild cognitive impairment to dementia: an updated systematic review. Eur J Nucl Med Mol Imaging 2024;51:1876-1890. PMID: 38355740. DOI: 10.1007/s00259-024-06631-y.
Abstract
PURPOSE Epidemiological and logistical reasons are slowing the clinical validation of molecular imaging biomarkers in the initial stages of neurocognitive disorders. We provide an updated systematic review of the recent advances (2017-2022), highlighting methodological shortcomings. METHODS Studies reporting the diagnostic accuracy of molecular imaging techniques (i.e., amyloid-, tau-, and [18F]FDG-PET, DaT-SPECT, and cardiac [123I]-MIBG scintigraphy) in predicting progression from mild cognitive impairment (MCI) to dementia were selected according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method and evaluated with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The main eligibility criteria were: (1) ≥ 50 subjects with MCI, (2) follow-up ≥ 3 years, (3) gold standard: progression to dementia or diagnosis on pathology, and (4) measures of prospective accuracy. RESULTS Sensitivity (SE) and specificity (SP) in predicting progression to dementia, mainly Alzheimer's dementia, were 43-100% and 63-94% for [18F]FDG-PET and 64-94% and 48-93% for amyloid-PET. Longitudinal studies were lacking for less common disorders (dementia with Lewy bodies, DLB, and frontotemporal lobar degeneration, FTLD) and for tau-PET, DaT-SPECT, and [123I]-MIBG scintigraphy. Therefore, accuracy values from cross-sectional studies in smaller samples (n > 20, also including the mild dementia stage) were chosen as surrogate outcomes. DaT-SPECT showed 47-100% SE and 71-100% SP in differentiating Lewy body disease (LBD) from non-LBD conditions; tau-PET showed 88% SE and 100% SP in differentiating DLB from posterior cortical atrophy; [123I]-MIBG scintigraphy differentiated LBD from non-LBD conditions with 47-100% SE and 71-100% SP. CONCLUSION Molecular imaging has moderate-to-good accuracy in predicting the progression of MCI to Alzheimer's dementia. Longitudinal studies are sparse in non-AD conditions, requiring additional efforts in these settings.
Affiliation(s)
- Matteo Cotta Ramusino
- Unit of Behavior Neurology and Dementia Research Center, IRCCS Mondino Foundation, via Mondino 2, 27100, Pavia, Italy.
- Federico Massa
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), University of Genoa, Genoa, Italy
- IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Cristina Festari
- Laboratory of Alzheimer's Neuroimaging and Epidemiology, IRCCS Istituto Centro San Giovanni Di Dio Fatebenefratelli, Brescia, Italy
- Federica Gandolfo
- Department of Geriatric Care, Orthogeriatrics and Rehabilitation, E.O. Galliera Hospital, Genoa, Italy
- Valentina Nicolosi
- UOC Neurologia, Ospedale Magalini di Villafranca di Verona (VR), ULSS 9, Verona, Italy
- Stefania Orini
- Alzheimer's Unit-Memory Clinic, IRCCS Istituto Centro San Giovanni Di Dio Fatebenefratelli, Brescia, Italy
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- Flavio Nobili
- IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Giovanni B Frisoni
- Laboratory of Neuroimaging of Aging (LANVIE), University of Geneva, Geneva, Switzerland
- Geneva Memory Center, Department of Rehabilitation and Geriatrics, Geneva University and University Hospitals, Geneva, Switzerland
- Silvia Morbelli
- IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Health Sciences (DISSAL), University of Genoa, Genoa, Italy
- Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Diagnostic Department, University Hospitals of Geneva, Geneva, Switzerland
- NIMTLab, Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, Switzerland
4
Odusami M, Maskeliūnas R, Damaševičius R, Misra S. Machine learning with multimodal neuroimaging data to classify stages of Alzheimer's disease: a systematic review and meta-analysis. Cogn Neurodyn 2024;18:775-794. PMID: 38826669. PMCID: PMC11143094. DOI: 10.1007/s11571-023-09993-5.
Abstract
In recent years, Alzheimer's disease (AD) has been a serious threat to human health. Researchers and clinicians alike encounter a significant obstacle when trying to accurately identify and classify AD stages. Several studies have shown that multimodal neuroimaging input can provide valuable insights into the structural and functional changes in the brain related to AD. Machine learning (ML) algorithms can accurately categorize AD phases by identifying patterns and linkages in multimodal neuroimaging data using powerful computational methods. This study aims to assess the contribution of ML methods to the accurate classification of the stages of AD using multimodal neuroimaging data. A systematic search was carried out in the IEEE Xplore, ScienceDirect/Elsevier, ACM Digital Library, and PubMed databases, with forward snowballing performed on Google Scholar. The quantitative analysis used 47 studies. The explainable analysis was performed on the classification algorithms and fusion methods used in the selected studies. The pooled sensitivity and specificity, including diagnostic efficiency, were evaluated by conducting a meta-analysis based on a bivariate model with the hierarchical summary receiver operating characteristic (ROC) curve of multimodal neuroimaging data and ML methods in the classification of AD stages. The Wilcoxon signed-rank test was further used to statistically compare the accuracy scores of the existing models. Pooled sensitivity was 83.77% (95% CI: 78.87-87.71%) for distinguishing mild cognitive impairment (MCI) from healthy controls (NC), 94.60% (90.76-96.89%) for distinguishing AD from NC, 80.41% (74.73-85.06%) for distinguishing progressive MCI (pMCI) from stable MCI (sMCI), and 86.63% (82.43-89.95%) for distinguishing early MCI (EMCI) from NC. Pooled specificity was 79.16% (70.97-87.71%) for MCI versus NC, 93.49% (91.60-94.90%) for AD versus NC, 81.44% (76.32-85.66%) for pMCI versus sMCI, and 85.68% (81.62-88.96%) for EMCI versus NC. The Wilcoxon signed-rank test showed a low P-value across all classification tasks. Multimodal neuroimaging data with ML is promising for classifying the stages of AD, but more research is required to increase the validity of its application in clinical practice.
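The pooling idea behind such meta-analytic sensitivity figures can be sketched in a simplified form. The sketch below uses fixed-effect inverse-variance pooling on the logit scale with a continuity correction; note this is an illustration only, since the review itself fits a bivariate hierarchical model that pools sensitivity and specificity jointly.

```python
import math

def pool_sensitivity(studies):
    """Fixed-effect pooling of per-study sensitivities on the logit scale.

    studies: list of (true_positives, false_negatives) per study.
    Returns (pooled_sensitivity, lower_95, upper_95).
    """
    num, den = 0.0, 0.0
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5          # continuity correction
        logit = math.log(tp / fn)            # log-odds of sensitivity
        var = 1.0 / tp + 1.0 / fn            # approximate variance of the logit
        num += logit / var                   # inverse-variance weighting
        den += 1.0 / var
    mean, se = num / den, math.sqrt(1.0 / den)
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv(mean), inv(mean - 1.96 * se), inv(mean + 1.96 * se)

# Hypothetical study counts, for illustration only.
sens, lo, hi = pool_sensitivity([(45, 10), (80, 15), (60, 20)])
print(round(sens, 3), round(lo, 3), round(hi, 3))
```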
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Sanjay Misra
- Department of Applied Data Science, Institute for Energy Technology, Halden, Norway
5
Zheng D, Ruan Y, Cao X, Guo W, Zhang X, Qi W, Yuan Q, Liang X, Zhang D, Huang Q, Xue C. Directed Functional Connectivity Changes of Triple Networks for Stable and Progressive Mild Cognitive Impairment. Neuroscience 2024;545:47-58. PMID: 38490330. DOI: 10.1016/j.neuroscience.2024.03.003.
Abstract
Mild cognitive impairment includes two distinct subtypes, namely progressive mild cognitive impairment and stable mild cognitive impairment. While alterations in extensive functional connectivity have been observed in both subtypes, limited attention has been given to directed functional connectivity. A triple network, composed of the central executive network, default mode network, and salience network, is considered to be the core cognitive network. We evaluated the alterations in directed functional connectivity within and between the triple network in progressive and stable mild cognitive impairment groups and investigated its role in predicting disease conversion. Resting-state functional magnetic resonance imaging was used to analyze directed functional connectivity within the triple networks. A correlation analysis was performed to investigate potential associations between altered directed functional connectivity within the triple networks and the neurocognitive performance of the participants. Our study revealed significant differences in directed functional connectivity within and between the triple network in the progressive and stable mild cognitive impairment groups. Altered directed functional connectivity within the triple network was involved in episodic memory and executive function. Thus, the directed functional connectivity of the triple network may be used as an imaging marker of mild cognitive impairment.
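Directed (as opposed to undirected) functional connectivity asks whether one region's past activity helps predict another's future. The abstract does not state the estimator used; a minimal Granger-style sketch over two toy time series, for illustration only, is:

```python
import numpy as np

def directed_influence(x, y):
    """Granger-style score: how much x's past improves prediction of y.

    Compares residual variance of y predicted from its own lag alone
    versus its own lag plus x's lag. Larger -> stronger x -> y influence.
    """
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    # Restricted model: y(t) ~ y(t-1)
    r_res = yt - np.polyval(np.polyfit(y1, yt, 1), y1)
    # Full model: y(t) ~ y(t-1) + x(t-1), via least squares
    A = np.column_stack([y1, x1, np.ones_like(y1)])
    coef, *_ = np.linalg.lstsq(A, yt, rcond=None)
    r_full = yt - A @ coef
    return np.log(np.var(r_res) / np.var(r_full))

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):            # y is driven by x with a one-step lag
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print(directed_influence(x, y) > directed_influence(y, x))  # True: x drives y
```

Repeating this over all region pairs of the central executive, default mode, and salience networks yields an asymmetric (directed) connectivity matrix, in contrast to symmetric correlation-based connectivity.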
Affiliation(s)
- Darui Zheng
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Yiming Ruan
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Xuan Cao
- Division of Statistics and Data Science, Department of Mathematical Sciences, University of Cincinnati, Cincinnati, USA
- Wenxuan Guo
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Xulian Zhang
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Wenzhang Qi
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Qianqian Yuan
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Xuhong Liang
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Da Zhang
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Qingling Huang
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Chen Xue
- Department of Radiology, the Affiliated Brain Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
6
Castellano G, Esposito A, Lella E, Montanaro G, Vessio G. Automated detection of Alzheimer's disease: a multi-modal approach with 3D MRI and amyloid PET. Sci Rep 2024;14:5210. PMID: 38433282. PMCID: PMC10909869. DOI: 10.1038/s41598-024-56001-9.
Abstract
Recent advances in deep learning and imaging technologies have revolutionized automated medical image analysis, especially in diagnosing Alzheimer's disease through neuroimaging. Despite the availability of various imaging modalities for the same patient, the development of multi-modal models leveraging these modalities remains underexplored. This paper addresses this gap by proposing and evaluating classification models using 2D and 3D MRI images and amyloid PET scans in uni-modal and multi-modal frameworks. Our findings demonstrate that models using volumetric data learn more effective representations than those using only 2D images. Furthermore, integrating multiple modalities significantly enhances model performance over single-modality approaches. We achieved state-of-the-art performance on the OASIS-3 cohort. Additionally, explainability analyses with Grad-CAM indicate that our model focuses on crucial AD-related regions for its predictions, underscoring its potential to aid in understanding the disease's causes.
Affiliation(s)
- Andrea Esposito
- Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
- Eufemia Lella
- Sirio - Research & Innovation, Sidea Group, Bari, Italy
- Gennaro Vessio
- Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
7
Cheng J, Wang H, Wei S, Mei J, Liu F, Zhang G. Alzheimer's disease prediction algorithm based on de-correlation constraint and multi-modal feature interaction. Comput Biol Med 2024;170:108000. PMID: 38232453. DOI: 10.1016/j.compbiomed.2024.108000.
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease characterized by various pathological changes. Utilizing multimodal data from fluorodeoxyglucose positron emission tomography (FDG-PET) and magnetic resonance imaging (MRI) of the brain can offer comprehensive information about the lesions from different perspectives and improve the accuracy of prediction. However, there are significant differences in the feature spaces of multimodal data. Commonly, the simple concatenation of multimodal features can cause the model to struggle to distinguish and utilize the complementary information between different modalities, thus affecting the accuracy of predictions. Therefore, we propose an AD prediction model based on a de-correlation constraint and multi-modal feature interaction. This model consists of the following three parts: (1) The feature extractor employs residual connections and attention mechanisms to capture distinctive lesion features from FDG-PET and MRI data within their respective modalities. (2) The de-correlation constraint function enhances the model's capacity to extract complementary information from different modalities by reducing the feature similarity between them. (3) The mutual attention feature fusion module interacts with the features within and between modalities to enhance the modality-specific features and adaptively adjust the weights of these features based on information from other modalities. The experimental results on the ADNI database demonstrate that the proposed model achieves a prediction accuracy of 86.79% for AD, MCI, and NC, which is higher than that of existing multi-modal AD prediction models.
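The de-correlation idea in part (2), reducing similarity between the two modalities' feature vectors so each branch carries complementary information, can be sketched as a penalty term. The cosine-similarity form below is an assumed toy formulation; the paper's exact loss may differ.

```python
import numpy as np

def decorrelation_penalty(f_pet, f_mri):
    """Mean squared cosine similarity between paired modality features.

    f_pet, f_mri: (batch, dim) feature matrices from the two branches.
    Driving this toward zero pushes the branches to encode
    complementary rather than redundant information.
    """
    a = f_pet / np.linalg.norm(f_pet, axis=1, keepdims=True)
    b = f_mri / np.linalg.norm(f_mri, axis=1, keepdims=True)
    cos = np.sum(a * b, axis=1)        # per-sample cosine similarity
    return float(np.mean(cos ** 2))

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 32))
identical = decorrelation_penalty(f, f)          # fully redundant features
independent = decorrelation_penalty(f, rng.normal(size=(8, 32)))
print(identical, independent)  # 1.0 vs. a value near 0
```

In training, such a term would be added to the classification loss with a weighting coefficient, so minimizing the total loss trades off accuracy against cross-modal redundancy.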
Affiliation(s)
- Jiayuan Cheng
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Huabin Wang
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Shicheng Wei
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, Australia
- Jiahao Mei
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Fei Liu
- School of Engineering, Monash University Malaysia, Kuala Lumpur, Malaysia
- Gong Zhang
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; Hubei Key Laboratory of Intelligent Conveying Technology and Device, Hubei Polytechnic University, Huangshi, China
8
Gravina M, García-Pedrero A, Gonzalo-Martín C, Sansone C, Soda P. Multi input-Multi output 3D CNN for dementia severity assessment with incomplete multimodal data. Artif Intell Med 2024;149:102774. PMID: 38462278. DOI: 10.1016/j.artmed.2024.102774.
Abstract
Alzheimer's Disease is the most common cause of dementia, whose progression spans different stages, from very mild cognitive impairment to mild and severe conditions. In clinical trials, Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are mostly used for the early diagnosis of neurodegenerative disorders, since they provide volumetric and metabolic-function information of the brain, respectively. In recent years, Deep Learning (DL) has been employed in medical imaging with promising results. Moreover, the use of deep neural networks, especially Convolutional Neural Networks (CNNs), has enabled the development of DL-based solutions in domains characterized by the need to leverage information coming from multiple data sources, giving rise to Multimodal Deep Learning (MDL). In this paper, we conduct a systematic analysis of MDL approaches for dementia severity assessment exploiting MRI and PET scans. We propose a Multi Input-Multi Output 3D CNN whose training iterations change according to the characteristics of the input, as it is able to handle incomplete acquisitions in which one image modality is missing. Experiments performed on the OASIS-3 dataset show the satisfactory results of the implemented network, which outperforms approaches exploiting both a single image modality and different MDL fusion techniques.
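The key mechanism, training iterations that adapt to which modalities a sample carries, can be sketched as dispatch logic. The scalar stand-in "branches" below are hypothetical placeholders, not the authors' network:

```python
# Sketch of modality-aware training dispatch: each sample may carry
# MRI only, PET only, or both, and the loss is built accordingly.
def training_step(mri, pet, label, branches):
    """branches: dict of per-modality scoring functions plus a fusion head.

    Returns a toy squared-error loss using whichever inputs are present.
    """
    if mri is not None and pet is not None:
        pred = branches["fuse"](branches["mri"](mri), branches["pet"](pet))
    elif mri is not None:
        pred = branches["mri"](mri)   # PET missing: MRI branch alone
    else:
        pred = branches["pet"](pet)   # MRI missing: PET branch alone
    return (pred - label) ** 2

# Toy scalar "networks" for illustration.
branches = {
    "mri": lambda x: 0.6 * x,
    "pet": lambda x: 0.4 * x,
    "fuse": lambda a, b: a + b,
}

# One complete sample, one missing-PET sample, one missing-MRI sample.
batch = [(1.0, 1.0, 1.0), (1.0, None, 1.0), (None, 1.0, 1.0)]
losses = [training_step(m, p, y, branches) for m, p, y in batch]
print(losses)
```

Because every branch still receives gradients from incomplete samples, no acquisition has to be discarded, which is the practical payoff of the multi input-multi output design.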
Affiliation(s)
- Michela Gravina
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Napoli, 80125, Italy
- Angel García-Pedrero
- Department of Computer Architecture and Technology, Universidad Politécnica de Madrid, Boadilla del Monte, 28660, Madrid, Spain; Center for Biomedical Technology, Campus de Montegancedo, Universidad Politécnica de Madrid, Pozuelo de Alarcón, 28233, Madrid, Spain
- Consuelo Gonzalo-Martín
- Department of Computer Architecture and Technology, Universidad Politécnica de Madrid, Boadilla del Monte, 28660, Madrid, Spain; Center for Biomedical Technology, Campus de Montegancedo, Universidad Politécnica de Madrid, Pozuelo de Alarcón, 28233, Madrid, Spain
- Carlo Sansone
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Napoli, 80125, Italy
- Paolo Soda
- Department of Engineering, Unit of Computer Systems and Bioinformatics, University of Rome Campus Bio-Medico, Roma, 00128, Italy; Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, 90187, Umeå, Sweden
9
Chen E, Barile B, Durand-Dubief F, Grenier T, Sappey-Marinier D. Multiple sclerosis clinical forms classification with graph convolutional networks based on brain morphological connectivity. Front Neurosci 2024;17:1268860. PMID: 38304076. PMCID: PMC10830765. DOI: 10.3389/fnins.2023.1268860.
Abstract
Multiple Sclerosis (MS) is an autoimmune disease that combines chronic inflammatory and neurodegenerative processes underlying different clinical forms of evolution, such as relapsing-remitting, secondary progressive, or primary progressive MS. This identification is usually performed by clinical evaluation at diagnosis or, for the secondary progressive phase, during the course of the disease. In parallel, magnetic resonance imaging (MRI) analysis is a mandatory diagnostic complement. Identifying the clinical form from MR images is therefore a helpful and challenging task. Here, we propose a new approach for the automatic classification of MS forms based on conventional MRI (i.e., T1-weighted images) commonly used in the clinical context. For this purpose, we investigated morphological connectome features using a graph-based convolutional neural network. Our results, obtained from a longitudinal study of 91 MS patients, highlight the performance (F1-score) of this approach, which is better than that of state-of-the-art methods such as 3D convolutional neural networks. These results open the way for clinical applications such as disability correlation using only T1-weighted images.
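A single graph-convolution layer over a morphological connectivity graph follows the standard propagation rule H' = σ(D^(-1/2) Â D^(-1/2) H W). The Kipf-Welling form is assumed here for illustration; the paper's exact variant is not stated in the abstract.

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution layer: normalize adjacency, propagate, ReLU.

    adj: (n, n) symmetric connectivity matrix (e.g., morphological
    similarity between brain regions); feats: (n, d) node features.
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt        # symmetric normalization
    return np.maximum(norm @ feats @ weights, 0)  # ReLU activation

rng = np.random.default_rng(0)
n_regions, d_in, d_out = 6, 4, 3
adj = (rng.random((n_regions, n_regions)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                      # make the graph undirected
h = gcn_layer(adj, rng.normal(size=(n_regions, d_in)),
              rng.normal(size=(d_in, d_out)))
print(h.shape)  # (6, 3)
```

Stacking a few such layers and pooling the node features yields a graph-level embedding that a classifier can map to a clinical form.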
Affiliation(s)
- Enyi Chen
- CREATIS, CNRS UMR 5220, INSERM U1294, Université de Lyon, Université Claude Bernard-Lyon 1, INSA Lyon, Lyon, France
- Berardino Barile
- CREATIS, CNRS UMR 5220, INSERM U1294, Université de Lyon, Université Claude Bernard-Lyon 1, INSA Lyon, Lyon, France
- Françoise Durand-Dubief
- CREATIS, CNRS UMR 5220, INSERM U1294, Université de Lyon, Université Claude Bernard-Lyon 1, INSA Lyon, Lyon, France
- Service de Sclérose en Plaques, des Pathologies de la Myéline et Neuro-Inflammation, Groupement Hospitalier Est, Hôpital Neurologique, Bron, France
- Thomas Grenier
- CREATIS, CNRS UMR 5220, INSERM U1294, Université de Lyon, Université Claude Bernard-Lyon 1, INSA Lyon, Lyon, France
- Dominique Sappey-Marinier
- CREATIS, CNRS UMR 5220, INSERM U1294, Université de Lyon, Université Claude Bernard-Lyon 1, INSA Lyon, Lyon, France
- CERMEP - Imagerie du Vivant, Université de Lyon, Bron, France
10
Tong L, Shi W, Isgut M, Zhong Y, Lais P, Gloster L, Sun J, Swain A, Giuste F, Wang MD. Integrating Multi-Omics Data With EHR for Precision Medicine Using Advanced Artificial Intelligence. IEEE Rev Biomed Eng 2024;17:80-97. PMID: 37824325. DOI: 10.1109/rbme.2023.3324264.
Abstract
With the recent advancement of novel biomedical technologies such as high-throughput sequencing and wearable devices, multi-modal biomedical data, ranging from multi-omics molecular data to real-time continuous bio-signals, are generated at an unprecedented speed and scale every day. For the first time, these multi-modal biomedical data can bring precision medicine close to reality. However, due to their volume and complexity, making good use of these multi-modal biomedical data requires major effort. Researchers and clinicians are actively developing artificial intelligence (AI) approaches for data-driven knowledge discovery and causal inference using a variety of biomedical data modalities. These AI-based approaches have demonstrated promising results in various biomedical and healthcare applications. In this review paper, we summarize the state-of-the-art AI models for integrating multi-omics data and electronic health records (EHRs) for precision medicine. We discuss the challenges and opportunities in integrating multi-omics data with EHRs, as well as future directions. We hope this review can inspire future research and development in integrating multi-omics data with EHRs for precision medicine.
11
Guo H, Jian S, Zhou Y, Chen X, Chen J, Zhou J, Huang Y, Ma G, Li X, Ning Y, Wu F, Wu K. Discriminative analysis of schizophrenia patients using an integrated model combining 3D CNN with 2D CNN: A multimodal MR image and connectomics analysis. Brain Res Bull 2024;206:110846. PMID: 38104672. DOI: 10.1016/j.brainresbull.2023.110846.
Abstract
OBJECTIVE Few studies have applied deep learning to the discriminative analysis of schizophrenia (SZ) patients using fused features of multimodal MRI data. Here, we proposed an integrated model combining a 3D convolutional neural network (CNN) with a 2D CNN to classify SZ patients. METHOD Structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) data were acquired for 140 SZ patients and 205 normal controls. We computed structural connectivity (SC) from the sMRI data, as well as functional connectivity (FC), amplitude of low-frequency fluctuation (ALFF), and regional homogeneity (ReHo) from the rs-fMRI data. The 3D images of T1, ReHo, and ALFF were used as the inputs for the 3D CNN model, while the SC and FC matrices were used as the inputs for the 2D CNN model. Moreover, we added squeeze-and-excitation blocks (SE-blocks) to each layer of the integrated model and used a support vector machine (SVM) to replace the softmax classifier. RESULTS The integrated model proposed in this study, using the fused features of the T1 images and the FC matrices, showed the best performance. The use of the SE-blocks and the SVM classifier significantly improved the performance of the integrated model, in which the accuracy, sensitivity, specificity, area under the curve, and F1-score were 89.86%, 86.21%, 92.50%, 89.35%, and 87.72%, respectively. CONCLUSIONS Our findings indicate that an integrated model combining a 3D CNN with a 2D CNN is a promising method for improving the classification performance of SZ patients and has potential for the clinical diagnosis of psychiatric diseases.
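The squeeze-and-excitation blocks added to each layer reweight feature channels through a learned gate. A minimal numpy sketch of the mechanism (random toy weights standing in for learned ones, and a 2D feature map rather than the model's 3D volumes):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation over a (channels, height, width) feature map.

    Squeeze: global average pool per channel. Excite: two small dense
    layers ending in a sigmoid gate. Scale: reweight each channel.
    """
    squeeze = x.mean(axis=(1, 2))                     # (C,) channel summary
    hidden = np.maximum(squeeze @ w1, 0)              # dimensionality reduction + ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # (C,) gate in (0, 1)
    return x * gate[:, None, None]                    # channel-wise rescaling

rng = np.random.default_rng(0)
c, r = 8, 2                                           # channels, reduction ratio
x = rng.normal(size=(c, 5, 5))
out = se_block(x, rng.normal(size=(c, c // r)), rng.normal(size=(c // r, c)))
print(out.shape)  # (8, 5, 5)
```

Because the gate is computed from the whole feature map, the block lets the network emphasize informative channels and suppress uninformative ones at negligible parameter cost.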
Affiliation(s)
- Haiman Guo
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China
- Shuyi Jian
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China
- Yubin Zhou
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China
- Xiaoyi Chen
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China
- Jinbiao Chen
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China
- Jing Zhou
- School of Material Sciences and Engineering, South China University of Technology, Guangzhou 510610, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou 510370, China; Guangdong Engineering Technology Research Center for Diagnosis and Rehabilitation of Dementia, Guangzhou 510500, China
- Yuanyuan Huang
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou 510370, China
- Guolin Ma
- Department of Radiology, China-Japan Friendship Hospital, Beijing 100029, China
- Xiaobo Li
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
- Yuping Ning
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou 510370, China
- Fengchun Wu
- The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou 510370, China
- Kai Wu
- School of Biomedical Sciences and Engineering, South China University of Technology, Guangzhou International Campus, Guangzhou 511442, China; National Engineering Research Center for Tissue Restoration and Reconstruction, South China University of Technology, Guangzhou 510006, China; Guangdong Province Key Laboratory of Biomedical Engineering, South China University of Technology, Guangzhou 510006, China; Department of Nuclear Medicine and Radiology, Institute of Development, Aging and Cancer, Tohoku University, Sendai 980-8575, Japan
12
Bapat R, Ma D, Duong TQ. Predicting Four-Year's Alzheimer's Disease Onset Using Longitudinal Neurocognitive Tests and MRI Data Using Explainable Deep Convolutional Neural Networks. J Alzheimers Dis 2024; 97:459-469. [PMID: 38143361] [DOI: 10.3233/jad-230893]
Abstract
BACKGROUND Prognosis of future risk of dementia from neuroimaging and cognitive data is important for optimizing clinical management for patients at an early stage of Alzheimer's disease (AD). However, existing studies lack an efficient way to integrate longitudinal information from both modalities to improve prognosis performance. OBJECTIVE In this study, we aim to develop and evaluate an explainable deep learning-based framework to predict mild cognitive impairment (MCI) to AD conversion within four years using longitudinal whole-brain 3D MRI and neurocognitive tests. METHODS We proposed a two-stage framework that first uses a 3D convolutional neural network to extract single-timepoint MRI-based AD-related latent features, followed by multi-modal longitudinal feature concatenation and a 1D convolutional neural network to predict the risk of future dementia onset within four years. RESULTS The proposed deep learning framework showed promise in predicting MCI-to-AD conversion within four years using longitudinal whole-brain 3D MRI and cognitive data, without extracting regional brain volumes or cortical thickness, reaching a balanced accuracy of 0.834, significantly improved over models trained on a single timepoint or a single modality. The post hoc model explainability revealed heatmaps indicating regions that are important for predicting future risk of AD. CONCLUSIONS The proposed framework sets the stage for future studies using multi-modal longitudinal data to achieve optimal prediction of AD onset, leading to better management of the disease and improved quality of life.
Affiliation(s)
- Rohan Bapat
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, USA
- Da Ma
- Department of Internal Medicine Section of Gerontology and Geriatric Medicine, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Tim Q Duong
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY, USA
13
Borchert RJ, Azevedo T, Badhwar A, Bernal J, Betts M, Bruffaerts R, Burkhart MC, Dewachter I, Gellersen HM, Low A, Lourida I, Machado L, Madan CR, Malpetti M, Mejia J, Michopoulou S, Muñoz-Neira C, Pepys J, Peres M, Phillips V, Ramanan S, Tamburin S, Tantiangco HM, Thakur L, Tomassini A, Vipin A, Tang E, Newby D, Ranson JM, Llewellyn DJ, Veldsman M, Rittman T. Artificial intelligence for diagnostic and prognostic neuroimaging in dementia: A systematic review. Alzheimers Dement 2023; 19:5885-5904. [PMID: 37563912] [DOI: 10.1002/alz.13412]
Abstract
INTRODUCTION Artificial intelligence (AI) and neuroimaging offer new opportunities for diagnosis and prognosis of dementia. METHODS We systematically reviewed studies reporting AI for neuroimaging in diagnosis and/or prognosis of cognitive neurodegenerative diseases. RESULTS A total of 255 studies were identified. Most studies relied on the Alzheimer's Disease Neuroimaging Initiative dataset. Algorithmic classifiers were the most commonly used AI method (48%) and discriminative models performed best for differentiating Alzheimer's disease from controls. The accuracy of algorithms varied with the patient cohort, imaging modalities, and stratifiers used. Few studies performed validation in an independent cohort. DISCUSSION The literature has several methodological limitations, including a lack of sufficient algorithm-development descriptions and standard definitions. We make recommendations to improve model validation, including addressing key clinical questions, providing sufficient description of AI methods, and validating findings in independent datasets. Collaborative approaches between experts in AI and medicine will help achieve the promising potential of AI tools in practice. HIGHLIGHTS There has been a rapid expansion in the use of machine learning for diagnosis and prognosis in neurodegenerative disease. Most studies (71%) relied on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, with no other individual dataset used more than five times. There has been a recent rise in the use of more complex discriminative models (e.g., neural networks) that performed better than other classifiers for classification of AD vs healthy controls. We make recommendations addressing methodological considerations, key clinical questions, and validation. We also make recommendations for the field more broadly: standardize outcome measures, address gaps in the literature, and monitor sources of bias.
Affiliation(s)
- Robin J Borchert
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Department of Radiology, University of Cambridge, Cambridge, UK
- Tiago Azevedo
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- AmanPreet Badhwar
- Department of Pharmacology and Physiology, University of Montreal, Montreal, Canada
- Centre de recherche de l'Institut Universitaire de Gériatrie (CRIUGM), Montreal, Canada
- Jose Bernal
- Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Matthew Betts
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Center for Behavioral Brain Sciences, University of Magdeburg, Magdeburg, Germany
- Rose Bruffaerts
- Computational Neurology, Experimental Neurobiology Unit, Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium
- Biomedical Research Institute, Hasselt University, Diepenbeek, Belgium
- Ilse Dewachter
- Biomedical Research Institute, Hasselt University, Diepenbeek, Belgium
- Helena M Gellersen
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Department of Psychology, University of Cambridge, Cambridge, UK
- Audrey Low
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- Luiza Machado
- Department of Biochemistry, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
- Maura Malpetti
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Jhony Mejia
- Department of Biomedical Engineering, Universidad de Los Andes, Bogotá, Colombia
- Sofia Michopoulou
- Imaging Physics, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Carlos Muñoz-Neira
- Research into Memory, Brain sciences and dementia Group (ReMemBr Group), Translational Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- Artificial Intelligence & Computational Neuroscience Group (AICN Group), Sheffield Institute for Translational Neuroscience (SITraN), Department of Neuroscience, University of Sheffield, Sheffield, UK
- Jack Pepys
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy
- Marion Peres
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Siddharth Ramanan
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Stefano Tamburin
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Lokendra Thakur
- Division of Genetics and Genomics, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Broad Institute of MIT and Harvard, Cambridge, Massachusetts, USA
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Alessandro Tomassini
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Eugene Tang
- Population Health Sciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Danielle Newby
- Department of Psychiatry, University of Oxford, Oxford, UK
- David J Llewellyn
- University of Exeter Medical School, Exeter, UK
- Alan Turing Institute, London, UK
- Michele Veldsman
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Timothy Rittman
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
14
Liu S, Zheng Y, Li H, Pan M, Fang Z, Liu M, Qiao Y, Pan N, Jia W, Ge X. Improving Alzheimer Diagnoses With An Interpretable Deep Learning Framework: Including Neuropsychiatric Symptoms. Neuroscience 2023; 531:86-98. [PMID: 37709003] [DOI: 10.1016/j.neuroscience.2023.09.003]
Abstract
Alzheimer's disease (AD) is a prevalent neurodegenerative disorder characterized by progressive cognitive decline. Among the various clinical symptoms, neuropsychiatric symptoms (NPS) commonly occur during the course of AD. Previous research has demonstrated a strong association between NPS and the severity of AD, but the methods used are not sufficiently intuitive. Here, we report a hybrid deep learning framework for AD diagnosis using multimodal inputs such as structural MRI, behavioral scores, age, and gender information. The framework uses a 3D convolutional neural network to automatically extract features from MRI. The imaging features are passed to principal component analysis for dimensionality reduction and then fused with non-imaging information to improve the diagnosis of AD. According to the experimental results, our model achieves an accuracy of 0.91 and an area under the curve of 0.97 in the task of classifying AD and cognitively normal individuals. SHapley Additive exPlanations are used to visually exhibit the contribution of specific NPS in the proposed model. Among all behavioral symptoms, apathy plays a particularly important role in the diagnosis of AD and can be considered a valuable factor in further studies, as well as in clinical trials.
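The fusion step described above (CNN features reduced by PCA, then combined with non-imaging variables) can be sketched as follows; the subject count, feature dimensions, and random data are hypothetical, not taken from the study.

```python
import numpy as np

def pca_reduce(features, k):
    """Project rows of `features` onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data: the rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T                     # (n_subjects, k)

rng = np.random.default_rng(1)
imaging = rng.normal(size=(20, 512))   # 20 subjects, 512 CNN features each
age = rng.uniform(60, 90, size=(20, 1))
sex = rng.integers(0, 2, size=(20, 1)).astype(float)
nps = rng.normal(size=(20, 12))        # 12 neuropsychiatric sub-scores

reduced = pca_reduce(imaging, k=16)
# Fuse reduced imaging features with non-imaging variables column-wise;
# the fused matrix would feed the final classifier.
fused = np.hstack([reduced, age, sex, nps])
print(fused.shape)                     # (20, 30)
```

In practice the non-imaging columns would be standardized before concatenation so no modality dominates by scale.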
Affiliation(s)
- Shujuan Liu
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Hongzhuang Li
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Minmin Pan
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Zhicong Fang
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Mengting Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Yuchuan Qiao
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Ningning Pan
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Weikuan Jia
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
- Xinting Ge
- School of Information Science and Engineering, Shandong Normal University, Shandong, China
15
Qiang YR, Zhang SW, Li JN, Li Y, Zhou QY. Diagnosis of Alzheimer's disease by joining dual attention CNN and MLP based on structural MRIs, clinical and genetic data. Artif Intell Med 2023; 145:102678. [PMID: 37925204] [DOI: 10.1016/j.artmed.2023.102678]
Abstract
Alzheimer's disease (AD) is an irreversible degenerative disease of the central nervous system, while mild cognitive impairment (MCI) is a precursor state of AD. Accurate early diagnosis of AD is conducive to the prevention and early intervention treatment of AD. Although some computational methods have been developed for AD diagnosis, most employ only neuroimaging, ignoring other data (e.g., genetic, clinical) that may carry disease-relevant information. In addition, the results of some methods lack interpretability. In this work, we proposed a novel method (called DANMLP) joining a dual attention convolutional neural network (CNN) and a multilayer perceptron (MLP) for computer-aided AD diagnosis by integrating multi-modality data: structural magnetic resonance imaging (sMRI), clinical data (i.e., demographics, neuropsychology), and APOE genetic data. Our DANMLP consists of four primary components: (1) a patch-CNN for extracting image characteristics from each local patch, (2) a position self-attention block for capturing the dependencies between features within a patch, (3) a channel self-attention block for capturing dependencies of inter-patch features, and (4) two MLP networks for extracting clinical features and outputting the AD classification results, respectively. Compared with other state-of-the-art methods in the 5CV test, DANMLP achieves 93% and 82.4% classification accuracy for the AD vs. MCI and MCI vs. NC tasks on the ADNI database, which is 0.2%∼15.2% and 3.4%∼26.8% higher than that of the other five methods, respectively. The individualized visualization of focal areas can also help clinicians in the early diagnosis of AD. These results indicate that DANMLP can be effectively used for diagnosing AD and MCI patients.
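The position and channel self-attention blocks above follow the general scaled dot-product attention pattern; here is a minimal sketch over patch features with illustrative dimensions and random weights, not the DANMLP implementation itself.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, wq, wk, wv):
    """Scaled dot-product self-attention across patch features.

    patches: (n_patches, d) array, one feature row per local patch.
    """
    q, k, v = patches @ wq, patches @ wk, patches @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])   # (n_patches, n_patches)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(2)
feats = rng.normal(size=(27, 64))            # e.g. 27 sMRI patches, 64-d each
wq, wk, wv = (rng.normal(size=(64, 32), scale=64 ** -0.5) for _ in range(3))

attended, attn = self_attention(feats, wq, wk, wv)
print(attended.shape, attn.shape)            # (27, 32) (27, 27)
```

A channel-attention variant would instead transpose the feature matrix so attention weights relate feature channels rather than patch positions.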
Affiliation(s)
- Yan-Rui Qiang
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Shao-Wu Zhang
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Jia-Ni Li
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Yan Li
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Qin-Yi Zhou
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
16
Mora-Rubio A, Bravo-Ortíz MA, Quiñones Arredondo S, Saborit Torres JM, Ruz GA, Tabares-Soto R. Classification of Alzheimer's disease stages from magnetic resonance images using deep learning. PeerJ Comput Sci 2023; 9:e1490. [PMID: 37705614] [PMCID: PMC10495979] [DOI: 10.7717/peerj-cs.1490]
Abstract
Alzheimer's disease (AD) is a progressive type of dementia characterized by loss of memory and other cognitive abilities, including speech. Since AD is progressive, detection in the early stages is essential for the appropriate care of the patient throughout its development, which goes from asymptomatic to a stage known as mild cognitive impairment (MCI) and then progresses to dementia and severe dementia. It is worth mentioning that everyone suffers from cognitive impairment to some degree with age; the relevant task here is to identify which people are most likely to develop AD. Along with cognitive tests, evaluation of brain morphology is the primary tool for AD diagnosis, where atrophy and loss of volume of the frontotemporal lobe are common features in patients who suffer from the disease. Regarding medical imaging techniques, magnetic resonance imaging (MRI) scans are one of the methods used by specialists to assess brain morphology. Recently, with the rise of deep learning (DL) and its successful implementation in medical imaging applications, there is growing interest in the research community in developing computer-aided diagnosis systems that can help physicians detect this disease, especially in the early stages where macroscopic changes are not so easily identified. This article presents a DL-based approach to classifying MRI scans in the different stages of AD, using a curated set of images from the Alzheimer's Disease Neuroimaging Initiative and Open Access Series of Imaging Studies databases. Our methodology involves image pre-processing using FreeSurfer; spatial data-augmentation operations such as rotation, flip, and random zoom during training; state-of-the-art 3D convolutional neural networks such as EfficientNet, DenseNet, and a custom siamese network; and the relatively new vision transformer architecture. With this approach, the best detection percentage among all four architectures was around 89% for AD vs. Control, 80% for Late MCI vs. Control, 66% for MCI vs. Control, and 67% for Early MCI vs. Control.
Affiliation(s)
- Alejandro Mora-Rubio
- Department of Electronics and Automation, Universidad Autónoma de Manizales, Manizales, Caldas, Colombia
- Jose Manuel Saborit Torres
- Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación para el Fomento de la Investigación Sanitario y Biomédica de la Comunidad Valenciana, Valencia, Spain
- Gonzalo A. Ruz
- Center of Applied Ecology and Sustainability (CAPES), Santiago, Chile
- Data Observatory Foundation, Santiago, Chile
- Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile
- Reinel Tabares-Soto
- Department of Electronics and Automation, Universidad Autónoma de Manizales, Manizales, Caldas, Colombia
- Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile
- Department of Systems and Informatics, Universidad de Caldas, Manizales, Caldas, Colombia
17
Saleh H, Elrashidy N, Elaziz MA, Aseeri AO, El-sappagh S. Genetic algorithms based optimized hybrid deep learning model for explainable Alzheimer's prediction based on temporal multimodal cognitive data. [DOI: 10.21203/rs.3.rs-3250006/v1]
Abstract
Alzheimer's Disease (AD) is an irreversible neurodegenerative disease. Its early detection is crucial to stop disease progression at an early stage. Most deep learning (DL) literature has focused on neuroimage analysis, yet these studies have had no noticeable effect in real clinical environments. Model robustness, cost, and interpretability are considered the main reasons for this limitation. The medical intuition of physicians is to evaluate the clinical biomarkers of patients and then examine their neuroimages. Cognitive scores provide a medically acceptable and cost-effective alternative to neuroimages for predicting AD progression. Each score is calculated from a collection of sub-scores which provide deeper insight into patient condition. No study in the literature has explored the role of these multimodal time-series sub-scores in predicting AD progression.
We propose a hybrid CNN-LSTM DL model for predicting AD progression based on the fusion of four longitudinal cognitive sub-score modalities. A Bayesian optimizer has been used to select the best DL architecture, and a genetic algorithm-based feature selection step has been added to the pipeline to select the best features from the extracted deep representations of the CNN-LSTM. The softmax classifier has been replaced by a robust and optimized random forest classifier. Extensive experiments using the ADNI dataset investigated the role of each optimization step, and the proposed model achieved the best results compared to other DL and classical machine learning models. The resulting model is robust, but it is a black box and it is difficult to understand the logic behind its decisions. Trustworthy AI models must be robust and explainable. We used SHAP and LIME to provide explainability for the proposed model. The resulting trustworthy model has great potential to be used for decision support in real environments.
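A genetic-algorithm feature-selection step of the kind described can be sketched as follows; the toy data, the fitness function (a simple nearest-centroid training accuracy standing in for the paper's random forest), and the GA hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for deep CNN-LSTM representations: 40 features, of which
# features 0-4 carry the label signal by construction (a demo assumption).
n, d = 200, 40
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Score a boolean feature subset by nearest-centroid training accuracy."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Standard GA loop: elitism, tournament selection, uniform crossover,
# and bit-flip mutation over boolean feature masks.
pop = rng.random((30, d)) < 0.5
for _ in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]          # keep the elite
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if scores[i] >= scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if scores[i] >= scores[j] else pop[j]
        child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(d) < 0.02                # bit-flip mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(fitness(best))
```

In the paper's pipeline the fitness would instead be the validation score of the downstream classifier on the selected deep features.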
Affiliation(s)
- Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada, Egypt
- Nora ElRashidy
- Machine Learning and Information Retrieval Department, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafrelsheikh 13518, Egypt
- Mohamed Abd Elaziz
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Ahmad O. Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
18
Zhang ZC, Zhao X, Dong G, Zhao XM. Improving Alzheimer's Disease Diagnosis With Multi-Modal PET Embedding Features by a 3D Multi-Task MLP-Mixer Neural Network. IEEE J Biomed Health Inform 2023; 27:4040-4051. [PMID: 37247318] [DOI: 10.1109/jbhi.2023.3280823]
Abstract
Positron emission tomography (PET) with fluorodeoxyglucose (FDG) or florbetapir (AV45) has proven effective in the diagnosis of Alzheimer's disease. However, the expensive and radioactive nature of PET has limited its application. Here, employing a multi-layer perceptron mixer architecture, we present a deep learning model, namely the 3-dimensional multi-task multi-layer perceptron mixer, for simultaneously predicting the standardized uptake value ratios (SUVRs) for FDG-PET and AV45-PET from cheap and widely used structural magnetic resonance imaging data; the model can be further used for Alzheimer's disease diagnosis based on embedding features derived from SUVR prediction. Experimental results demonstrate the high prediction accuracy of the proposed method for FDG/AV45-PET SUVRs, where we achieved Pearson's correlation coefficients of 0.66 and 0.61, respectively, between the estimated and actual SUVRs; the estimated SUVRs also show high sensitivity and distinct longitudinal patterns for different disease statuses. By taking into account PET embedding features, the proposed method outperforms other competing methods on five independent datasets in the diagnosis of Alzheimer's disease and in discriminating between stable and progressive mild cognitive impairment, achieving areas under the receiver operating characteristic curve of 0.968 and 0.776, respectively, on the ADNI dataset, and generalizes better to other external datasets. Moreover, the top-weighted patches extracted from the trained model involve important brain regions related to Alzheimer's disease, suggesting good biological interpretability of our proposed method.
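Pearson's correlation coefficient, used above to score the SUVR predictions, is straightforward to compute; the SUVR values below are invented for illustration, not taken from the paper.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1-D arrays."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ac, bc = a - a.mean(), b - b.mean()
    # Covariance over the product of standard deviations (n cancels).
    return float((ac @ bc) / np.sqrt((ac @ ac) * (bc @ bc)))

# Hypothetical predicted vs. "actual" SUVRs for six scans.
actual = np.array([1.10, 1.25, 0.95, 1.40, 1.05, 1.30])
predicted = np.array([1.05, 1.20, 1.00, 1.35, 1.10, 1.28])
print(round(pearson_r(actual, predicted), 3))
```

A coefficient near 1 indicates the predictions preserve the ordering and spread of the true uptake ratios, which is what matters for the downstream diagnostic embedding.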
19
Azevedo T, Bethlehem RAI, Whiteside DJ, Swaddiwudhipong N, Rowe JB, Lió P, Rittman T. Identifying healthy individuals with Alzheimer's disease neuroimaging phenotypes in the UK Biobank. Commun Med 2023; 3:100. [PMID: 37474615] [PMCID: PMC10359360] [DOI: 10.1038/s43856-023-00313-w]
Abstract
BACKGROUND Identifying prediagnostic neurodegenerative disease is a critical issue in neurodegenerative disease research, and Alzheimer's disease (AD) in particular, to identify populations suitable for preventive and early disease-modifying trials. Evidence from genetic and other studies suggests the neurodegeneration of Alzheimer's disease measured by brain atrophy starts many years before diagnosis, but it is unclear whether these changes can be used to reliably detect prediagnostic sporadic disease. METHODS We trained a Bayesian machine learning neural network model to generate a neuroimaging phenotype and AD score representing the probability of AD using structural MRI data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) Cohort (cut-off 0.5, AUC 0.92, PPV 0.90, NPV 0.93). We go on to validate the model in an independent real-world dataset of the National Alzheimer's Coordinating Centre (AUC 0.74, PPV 0.65, NPV 0.80) and demonstrate the correlation of the AD-score with cognitive scores in those with an AD-score above 0.5. We then apply the model to a healthy population in the UK Biobank study to identify a cohort at risk for Alzheimer's disease. RESULTS We show that the cohort with a neuroimaging Alzheimer's phenotype has a cognitive profile in keeping with Alzheimer's disease, with strong evidence for poorer fluid intelligence, and some evidence of poorer numeric memory, reaction time, working memory, and prospective memory. We found some evidence in the AD-score positive cohort for modifiable risk factors of hypertension and smoking. CONCLUSIONS This approach demonstrates the feasibility of using AI methods to identify a potentially prediagnostic population at high risk for developing sporadic Alzheimer's disease.
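The PPV and NPV quoted above follow directly from confusion-matrix counts at the chosen cut-off; the counts below are hypothetical, chosen only so the ratios echo the ADNI figures.

```python
def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix counts."""
    ppv = tp / (tp + fp)   # P(disease | test positive)
    npv = tn / (tn + fn)   # P(no disease | test negative)
    return ppv, npv

# Hypothetical counts for a cohort of 190 scans (not from the paper).
ppv, npv = ppv_npv(tp=81, fp=9, tn=93, fn=7)
print(round(ppv, 2), round(npv, 2))   # 0.9 0.93
```

Note that, unlike sensitivity and specificity, both quantities shift with disease prevalence, which is one reason the values drop in the real-world validation cohort.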
Affiliation(s)
- Tiago Azevedo
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Richard A I Bethlehem
- Brain Mapping Unit, Department of Psychiatry, University of Cambridge, Cambridge, UK
- Autism Research Centre, Department of Psychiatry, University of Cambridge, Cambridge, UK
- David J Whiteside
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, UK
- Nol Swaddiwudhipong
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, UK
- James B Rowe
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, UK
- Pietro Lió
- Department of Computer Science and Technology, University of Cambridge, Cambridge, UK
- Timothy Rittman
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, UK
20
Illakiya T, Ramamurthy K, Siddharth MV, Mishra R, Udainiya A. AHANet: Adaptive Hybrid Attention Network for Alzheimer's Disease Classification Using Brain Magnetic Resonance Imaging. Bioengineering (Basel) 2023; 10:714. [PMID: 37370645] [PMCID: PMC10294993] [DOI: 10.3390/bioengineering10060714]
Abstract
Alzheimer's disease (AD) is a progressive neurological disorder that causes brain atrophy and affects the memory and thinking skills of an individual. Accurate detection of AD has long been a challenging research topic in the area of medical image processing. Detecting AD at its earliest stage is crucial for the successful treatment of the disease. The proposed Adaptive Hybrid Attention Network (AHANet) has two attention modules, namely Enhanced Non-Local Attention (ENLA) and Coordinate Attention. These modules extract global-level and local-level features separately from brain Magnetic Resonance Imaging (MRI), thereby boosting the feature extraction power of the network. The ENLA module extracts spatial and contextual information on a global scale while also capturing important long-range dependencies. The Coordinate Attention module captures local features from the input images, embedding positional information into the channel attention mechanism for enhanced feature extraction. Moreover, an Adaptive Feature Aggregation (AFA) module is proposed to fuse features from the global and local levels in an effective way. As a result of incorporating the above architectural enhancements into the DenseNet architecture, the proposed network exhibited better performance compared to existing works. The proposed network was trained and tested on the ADNI dataset, yielding a classification accuracy of 98.53%.
Affiliation(s)
- T. Illakiya
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Karthik Ramamurthy
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- M. V. Siddharth
- School of Mechanical Engineering, Vellore Institute of Technology, Chennai 600127, India
- Rashmi Mishra
- School of Electronics Engineering, Vellore Institute of Technology, Chennai 600127, India
- Ashish Udainiya
- School of Electronics Engineering, Vellore Institute of Technology, Chennai 600127, India
21
Liu Y, Mazumdar S, Bath PA. An unsupervised learning approach to diagnosing Alzheimer's disease using brain magnetic resonance imaging scans. Int J Med Inform 2023; 173:105027. [PMID: 36921480] [DOI: 10.1016/j.ijmedinf.2023.105027]
Abstract
BACKGROUND Alzheimer's disease (AD) is the most common cause of dementia, characterised by behavioural and cognitive impairment. Due to the lack of effectiveness of manual diagnosis by doctors, machine learning is now being applied to diagnose AD in many recent studies. Most research developing machine learning algorithms to diagnose AD uses supervised learning to classify magnetic resonance imaging (MRI) scans. However, supervised learning requires a considerable volume of labelled data and MRI scans are difficult to label. OBJECTIVE This study applied a statistical method and unsupervised learning methods to discriminate between scans from cognitively normal (CN) individuals and people with AD using a limited number of labelled structural MRI scans. METHODS We used two-sample t-tests to detect the AD-relevant regions, and then employed an unsupervised learning neural network to extract features from the regions. Finally, a clustering algorithm was implemented to discriminate between CN and AD data based on the extracted features. The approach was tested on baseline brain structural MRI scans from 429 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI), of which 231 were CN and 198 had AD. RESULTS The abnormal regions around the lower parts of the limbic system were indicated as AD-relevant regions based on the two-sample t-test (p < 0.001), and the proposed method yielded an accuracy of 0.84 for discriminating between CN and AD. CONCLUSION The study combined statistical and unsupervised learning methods to identify scans of people with AD. This method can detect AD-relevant regions and could be used to accurately diagnose AD; it does not require large amounts of labelled MRI scans. Our research could help in the automatic diagnosis of AD and provide a basis for diagnosing stable mild cognitive impairment (stable MCI) and progressive mild cognitive impairment (progressive MCI).
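The three-stage pipeline sketched in the METHODS (t-test region selection, feature extraction, clustering) can be illustrated on synthetic data. This is a hypothetical sketch, not the authors' code: the unsupervised feature-extraction network is omitted, and a plain Welch t statistic plus a minimal two-cluster k-means stand in for the full method.

```python
import numpy as np

def voxelwise_t(x_a, x_b):
    """Two-sample (Welch) t statistic per voxel.
    x_a: (n_a, n_voxels), x_b: (n_b, n_voxels)."""
    ma, mb = x_a.mean(0), x_b.mean(0)
    va, vb = x_a.var(0, ddof=1), x_b.var(0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(x_a) + vb / len(x_b))

def two_means(x, iters=20):
    """Minimal 2-cluster k-means with a farthest-point initialization."""
    c0 = x[0]
    c1 = x[np.argmax(((x - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = x[labels == k].mean(axis=0)
    return labels

# Synthetic "scans": 40 CN and 40 AD subjects, 100 voxels; the first
# 10 voxels are disease-relevant (shifted mean in the AD group).
rng = np.random.default_rng(1)
cn = rng.normal(0.0, 1.0, size=(40, 100))
ad = rng.normal(0.0, 1.0, size=(40, 100))
ad[:, :10] += 2.5

t = voxelwise_t(cn, ad)
relevant = np.argsort(-np.abs(t))[:10]          # top-|t| voxels
labels = two_means(np.vstack([cn, ad])[:, relevant])
truth = np.array([0] * 40 + [1] * 40)
acc = max((labels == truth).mean(), (labels != truth).mean())
```

The t statistic recovers the disease-relevant voxels, and clustering on only those voxels separates the two groups without using the labels.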
Affiliation(s)
- Yuyang Liu
- Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
- Suvodeep Mazumdar
- Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
- Peter A Bath
- Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
22
Wang D, Honnorat N, Fox PT, Ritter K, Eickhoff SB, Seshadri S, Habes M. Deep neural network heatmaps capture Alzheimer's disease patterns reported in a large meta-analysis of neuroimaging studies. Neuroimage 2023; 269:119929. [PMID: 36740029] [PMCID: PMC11155416] [DOI: 10.1016/j.neuroimage.2023.119929]
Abstract
Deep neural networks currently provide the most advanced and accurate machine learning models to distinguish between structural MRI scans of subjects with Alzheimer's disease and healthy controls. Unfortunately, the subtle brain alterations captured by these models are difficult to interpret because of the complexity of these multi-layer and non-linear models. Several heatmap methods have been proposed to address this issue and analyze the imaging patterns extracted from the deep neural networks, but no quantitative comparison between these methods has been carried out so far. In this work, we explore these questions by deriving heatmaps from Convolutional Neural Networks (CNN) trained using T1 MRI scans of the ADNI data set and by comparing these heatmaps with brain maps corresponding to Support Vector Machine (SVM) activation patterns. Three prominent heatmap methods are studied: Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and Guided Grad-CAM (GGC). Contrary to prior studies where the quality of heatmaps was visually or qualitatively assessed, we obtained precise quantitative measures by computing overlap with a ground-truth map from a large meta-analysis that combined 77 voxel-based morphometry (VBM) studies independently from ADNI. Our results indicate that all three heatmap methods were able to capture brain regions covering the meta-analysis map and achieved better results than SVM activation patterns. Among them, IG produced the heatmaps with the best overlap with the independent meta-analysis.
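The quantitative comparison described above boils down to measuring overlap between a thresholded heatmap and a ground-truth map. A Dice coefficient on the top-k voxels is one plausible way to do this; the paper's exact overlap measure may differ, and the example below is a hypothetical illustration on toy volumes.

```python
import numpy as np

def dice_overlap(heatmap, truth_mask, keep_frac=0.1):
    """Binarize a relevance heatmap by keeping its top `keep_frac`
    fraction of voxels, then compute the Dice coefficient with a
    ground-truth (e.g. meta-analysis) mask."""
    k = max(1, int(keep_frac * heatmap.size))
    thresh = np.sort(heatmap.ravel())[-k]
    pred = heatmap >= thresh
    inter = np.logical_and(pred, truth_mask).sum()
    return 2.0 * inter / (pred.sum() + truth_mask.sum())

# Toy example: the truth mask covers a corner cube; one heatmap has
# strong relevance inside that cube, the other is pure noise.
rng = np.random.default_rng(2)
truth = np.zeros((10, 10, 10), dtype=bool)
truth[:5, :5, :5] = True                    # 125 of 1000 voxels
hm = rng.uniform(0, 0.1, size=(10, 10, 10))
hm[truth] += 1.0
score_match = dice_overlap(hm, truth, keep_frac=0.125)
score_rand = dice_overlap(rng.uniform(size=(10, 10, 10)), truth, keep_frac=0.125)
```

A heatmap concentrated in the ground-truth region scores near 1.0, while an uninformative heatmap scores near the chance level, which is the kind of separation the study reports between IG and weaker attribution methods.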
Affiliation(s)
- Di Wang
- Neuroimage Analytics Laboratory and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Nicolas Honnorat
- Neuroimage Analytics Laboratory and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Peter T Fox
- Biomedical Image Analytics Division, Research Imaging Institute, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Kerstin Ritter
- Department of Psychiatry and Neurosciences, Charite - University of Medicine Berlin and Humboldt-University Berlin, Berlin, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich-Heine University Düsseldorf, Germany
- Sudha Seshadri
- Neuroimage Analytics Laboratory and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Mohamad Habes
- Neuroimage Analytics Laboratory and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA; Biomedical Image Analytics Division, Research Imaging Institute, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
23
Leng Y, Cui W, Peng Y, Yan C, Cao Y, Yan Z, Chen S, Jiang X, Zheng J. Multimodal cross enhanced fusion network for diagnosis of Alzheimer's disease and subjective memory complaints. Comput Biol Med 2023; 157:106788. [PMID: 36958233] [DOI: 10.1016/j.compbiomed.2023.106788]
Abstract
Deep learning methods using multimodal imaging have been proposed for the diagnosis of Alzheimer's disease (AD) and its early stages (SMC, subjective memory complaints), which may help to slow the progression of the disease through early intervention. However, current fusion methods for multimodal imaging are generally coarse and may lead to suboptimal results through the use of shared extractors or simple downscaling stitching. Another issue with diagnosing brain diseases is that they often affect multiple areas of the brain, making it important to consider potential connections throughout the brain. However, traditional convolutional neural networks (CNNs) may struggle with this issue due to their limited local receptive fields. To address this, many researchers have turned to transformer networks, which can provide global information about the brain but can be computationally intensive and perform poorly on small datasets. In this work, we propose a novel lightweight network called MENet that adaptively recalibrates the multiscale long-range receptive field to localize discriminative brain regions in a computationally efficient manner. Based on this, the network extracts the intensity and location responses between structural magnetic resonance imaging (sMRI) and 18-fluorodeoxyglucose positron emission tomography (FDG-PET) as an enhancement fusion for AD and SMC diagnosis. Our method is evaluated on the publicly available ADNI datasets and achieves 97.67% accuracy in AD diagnosis tasks and 81.63% accuracy in SMC diagnosis tasks using sMRI and FDG-PET. These results achieve state-of-the-art (SOTA) performance in both tasks. To the best of our knowledge, this is one of the first deep learning research methods for SMC diagnosis with FDG-PET.
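The idea of cross-modal enhancement between sMRI and FDG-PET features can be sketched in a few lines. This is a hypothetical illustration, not MENet: here each modality's channels are simply reweighted by the other modality's pooled channel response before concatenation, which captures the spirit of "enhancement fusion" without the paper's multiscale receptive-field machinery.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_enhance(f_mri, f_pet):
    """Cross-modal enhancement: each modality's channels are reweighted
    by the other modality's pooled channel response, and the enhanced
    maps are concatenated for classification.
    f_mri, f_pet: (C, N) feature maps (C channels, N locations)."""
    w_mri = softmax(f_pet.mean(axis=1))      # PET guides MRI channels
    w_pet = softmax(f_mri.mean(axis=1))      # MRI guides PET channels
    e_mri = f_mri * w_mri[:, None]
    e_pet = f_pet * w_pet[:, None]
    return np.concatenate([e_mri, e_pet], axis=0)   # (2C, N)

rng = np.random.default_rng(3)
fused = cross_enhance(rng.normal(size=(8, 32)), rng.normal(size=(8, 32)))
```

The key design choice is that neither modality is merely stitched onto the other: each contributes a response that modulates the other's features, which is what distinguishes cross-enhanced fusion from "simple downscaling stitching".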
Affiliation(s)
- Yilin Leng
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Wenju Cui
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Yunsong Peng
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou, 550002, China
- Caiying Yan
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 211103, China
- Yuzhu Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
- Shuangqing Chen
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 211103, China
- Xi Jiang
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
24
Duan J, Liu Y, Wu H, Wang J, Chen L, Chen CLP. Broad learning for early diagnosis of Alzheimer's disease using FDG-PET of the brain. Front Neurosci 2023; 17:1137567. [PMID: 36992851] [PMCID: PMC10040750] [DOI: 10.3389/fnins.2023.1137567]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disease, and the development of AD is irreversible. However, preventive measures in the presymptomatic stage of AD can effectively slow down deterioration. Fluorodeoxyglucose positron emission tomography (FDG-PET) can detect the metabolism of glucose in patients' brains, which can help to identify changes related to AD before brain damage occurs. Machine learning is useful for early diagnosis of patients with AD using FDG-PET, but it requires a sufficiently large dataset, and it is easy for overfitting to occur in small datasets. Previous studies using machine learning for early diagnosis with FDG-PET have either involved the extraction of elaborately handcrafted features or validation on a small dataset, and few studies have explored the refined classification of early mild cognitive impairment (EMCI) and late mild cognitive impairment (LMCI). This article presents a broad network-based model for early diagnosis of AD (BLADNet) through PET imaging of the brain; this method employs a novel broad neural network to enhance the features of FDG-PET extracted via 2D CNN. BLADNet can search for information over a broad space through the addition of new BLS blocks without retraining of the whole network, thus improving the accuracy of AD classification. Experiments conducted on a dataset containing 2,298 FDG-PET images of 1,045 subjects from the ADNI database demonstrate that our methods are superior to those used in previous studies on early diagnosis of AD with FDG-PET. In particular, our methods achieved state-of-the-art results in EMCI and LMCI classification with FDG-PET.
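The defining property of broad learning claimed above, adding new blocks without retraining the whole network, comes from the fact that the output weights have a closed-form ridge solution that is simply re-solved after new nodes are appended. The sketch below illustrates this mechanism on synthetic features; it is not BLADNet, and the feature dimensions, node counts, and toy labels are all assumptions.

```python
import numpy as np

def ridge_readout(a, y, lam=1e-3):
    """Closed-form output weights W = (A^T A + lam I)^-1 A^T Y."""
    return np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ y)

rng = np.random.default_rng(4)
x = rng.normal(size=(200, 16))                 # e.g. CNN features of FDG-PET
y = np.eye(2)[(x[:, 0] + 0.3 * x[:, 1] > 0).astype(int)]  # one-hot toy labels

# Feature nodes: random nonlinear maps of the input.
wf = rng.normal(size=(16, 20))
z = np.tanh(x @ wf)

# Output weights from the feature nodes alone.
w1 = ridge_readout(z, y)
acc1 = ((z @ w1).argmax(1) == y.argmax(1)).mean()

# "Broad" expansion: append new enhancement nodes and re-solve the
# cheap closed-form readout -- no retraining of the earlier nodes.
we = rng.normal(size=(20, 30))
a = np.hstack([z, np.tanh(z @ we)])
w2 = ridge_readout(a, y)
acc2 = ((a @ w2).argmax(1) == y.argmax(1)).mean()
```

Because only the linear readout is re-solved, widening the network costs one matrix factorization rather than a full gradient-descent retraining pass, which is what makes broad expansion attractive on small medical datasets.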
Affiliation(s)
- Junwei Duan
- College of Information Science and Technology, Jinan University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- *Correspondence: Junwei Duan
- Yang Liu
- College of Information Science and Technology, Jinan University, Guangzhou, China
- Huanhua Wu
- Department of Nuclear Medicine and PET/CT-MRI Centre, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Jing Wang
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Long Chen
- Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- C. L. Philip Chen
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
25
Nguyen HD, Clément M, Mansencal B, Coupé P. Towards better interpretable and generalizable AD detection using collective artificial intelligence. Comput Med Imaging Graph 2023; 104:102171. [PMID: 36640484] [DOI: 10.1016/j.compmedimag.2022.102171]
Abstract
Alzheimer's Disease is the most common cause of dementia. Accurate diagnosis and prognosis of this disease are essential to design an appropriate treatment plan, increasing the life expectancy of the patient. Intense research has been conducted on the use of machine learning to identify Alzheimer's Disease from neuroimaging data, such as structural magnetic resonance imaging. In recent years, advances of deep learning in computer vision suggest a new research direction for this problem. Current deep learning-based approaches in this field, however, have a number of drawbacks, including the interpretability of model decisions, a lack of generalizability information and a lower performance compared to traditional machine learning techniques. In this paper, we design a two-stage framework to overcome these limitations. In the first stage, an ensemble of 125 U-Nets is used to grade the input image, producing a 3D map that reflects the disease severity at voxel-level. This map can help to localize abnormal brain areas caused by the disease. In the second stage, we model a graph per individual using the generated grading map and other information about the subject. We propose to use a graph convolutional neural network classifier for the final classification. As a result, our framework demonstrates comparative performance to the state-of-the-art methods in different datasets for both diagnosis and prognosis. We also demonstrate that the use of a large ensemble of U-Nets offers a better generalization capacity for our framework.
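The second stage above classifies a per-subject graph with a graph convolutional network. A single normalized graph-convolution layer, the standard building block of such a classifier, can be sketched in numpy; this is an illustrative layer on a toy graph, not the authors' architecture, and the graph, feature sizes, and weights are assumptions.

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One graph-convolution layer:
    H' = ReLU( D^-1/2 (A + I) D^-1/2  H  W )."""
    a_hat = adj + np.eye(len(adj))               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)       # propagate, project, ReLU

# Toy subject graph: 4 nodes (e.g. brain regions carrying grading-map
# features), chain connectivity, 3 input features, 2 output features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(5)
h = rng.normal(size=(4, 3))
out = gcn_layer(adj, h, rng.normal(size=(3, 2)))
```

Each node's output mixes its own grading features with those of its neighbors, so stacking a few such layers followed by pooling yields a subject-level diagnosis score.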
Affiliation(s)
- Huy-Dung Nguyen
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, 33400 Talence, France
- Michaël Clément
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, 33400 Talence, France
- Boris Mansencal
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, 33400 Talence, France
- Pierrick Coupé
- Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, 33400 Talence, France
26
Effects of Patchwise Sampling Strategy to Three-Dimensional Convolutional Neural Network-Based Alzheimer's Disease Classification. Brain Sci 2023; 13:254. [PMID: 36831797] [PMCID: PMC9953929] [DOI: 10.3390/brainsci13020254]
Abstract
In recent years, the rapid development of artificial intelligence has promoted the widespread application of convolutional neural networks (CNNs) in neuroimaging analysis. Although three-dimensional (3D) CNNs can utilize the spatial information in 3D volumes, there are still some challenges related to high-dimensional features and potential overfitting issues. To overcome these problems, patch-based CNNs have been used, which are beneficial for model generalization. However, it is unclear how the choice of a patchwise sampling strategy affects the performance of the Alzheimer's Disease (AD) classification. To this end, the present work investigates the impact of a patchwise sampling strategy for 3D CNN based AD classification. A 3D framework cascaded by two-stage subnetworks was used for AD classification. The patch-level subnetworks learned feature representations from local image patches, and the subject-level subnetwork combined discriminative feature representations from all patch-level subnetworks to generate a classification score at the subject level. Experiments were conducted to determine the effect of patch partitioning methods, the effect of patch size, and interactions between patch size and training set size for AD classification. With the same data size and identical network structure, the 3D CNN model trained with 48 × 48 × 48 cubic image patches showed the best performance in AD classification (ACC = 89.6%). The model trained with hippocampus-centered, region of interest (ROI)-based image patches showed suboptimal performance. If the pathological features are concentrated only in some regions affected by the disease, the empirically predefined ROI patches might be the right choice. The better performance of cubic image patches compared with cuboidal image patches is likely related to the pathological distribution of AD. The image patch size and training sample size together have a complex influence on the performance of the classification. The size of the image patches should be determined based on the size of the training sample to compensate for noisy labels and the problem of the curse of dimensionality. The conclusions of the present study can serve as a reference for the researchers who wish to develop a superior 3D patch-based CNN model with an appropriate patch sampling strategy.
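The cubic patch partitioning studied above is straightforward to implement. Below is a minimal sketch of extracting non-overlapping 48 × 48 × 48 patches from a 3D volume (a generic illustration under my own naming, not the study's code; overlapping sampling would use a smaller stride).

```python
import numpy as np

def extract_patches(volume, size=48, stride=48):
    """Partition a 3D volume into cubic patches of size^3 voxels.
    Returns an array of shape (n_patches, size, size, size); trailing
    voxels that do not fill a complete patch are discarded."""
    d, h, w = volume.shape
    patches = [volume[i:i + size, j:j + size, k:k + size]
               for i in range(0, d - size + 1, stride)
               for j in range(0, h - size + 1, stride)
               for k in range(0, w - size + 1, stride)]
    return np.stack(patches)

# A 96^3 toy volume splits into 2 x 2 x 2 = 8 non-overlapping 48^3 patches.
vol = np.arange(96 ** 3, dtype=np.float32).reshape(96, 96, 96)
patches = extract_patches(vol, size=48, stride=48)
```

Each patch would feed one patch-level subnetwork, and the subject-level subnetwork aggregates their feature representations into a single classification score.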
27
Xu X, Lin L, Sun S, Wu S. A review of the application of three-dimensional convolutional neural networks for the diagnosis of Alzheimer's disease using neuroimaging. Rev Neurosci 2023:revneuro-2022-0122. [PMID: 36729918] [DOI: 10.1515/revneuro-2022-0122]
Abstract
Alzheimer's disease (AD) is a degenerative disorder that leads to progressive, irreversible cognitive decline. To obtain an accurate and timely diagnosis and detect AD at an early stage, numerous approaches based on convolutional neural networks (CNNs) using neuroimaging data have been proposed. Because 3D CNNs can extract more spatial discrimination information than 2D CNNs, they have emerged as a promising research direction in the diagnosis of AD. The aim of this article is to present the current state of the art in the diagnosis of AD using 3D CNN models and neuroimaging modalities, focusing on the 3D CNN architectures and classification methods used, and to highlight potential future research topics. To give the reader a better overview of the content mentioned in this review, we briefly introduce the commonly used imaging datasets and the fundamentals of CNN architectures. Then we carefully analyzed the existing studies on AD diagnosis, which are divided into two levels according to their inputs: 3D subject-level CNNs and 3D patch-level CNNs, highlighting their contributions and significance in the field. In addition, this review discusses the key findings and challenges from the studies and highlights the lessons learned as a roadmap for future research. Finally, we summarize the paper by presenting some major findings, identifying open research challenges, and pointing out future research directions.
Affiliation(s)
- Xinze Xu
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing 100124, China
- Lan Lin
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing 100124, China
- Shen Sun
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing 100124, China
- Shuicai Wu
- Intelligent Physiological Measurement and Clinical Translation, Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing 100124, China
28
Chen Z, Liu Y, Zhang Y, Li Q. Orthogonal latent space learning with feature weighting and graph learning for multimodal Alzheimer's disease diagnosis. Med Image Anal 2023; 84:102698. [PMID: 36462372] [DOI: 10.1016/j.media.2022.102698]
Abstract
Recent studies have shown that multimodal neuroimaging data provide complementary information of the brain and latent space-based methods have achieved promising results in fusing multimodal data for Alzheimer's disease (AD) diagnosis. However, most existing methods treat all features equally and adopt nonorthogonal projections to learn the latent space, which cannot retain enough discriminative information in the latent space. Besides, they usually preserve the relationships among subjects in the latent space based on the similarity graph constructed on original features for performance boosting. However, the noises and redundant features significantly corrupt the graph. To address these limitations, we propose an Orthogonal Latent space learning with Feature weighting and Graph learning (OLFG) model for multimodal AD diagnosis. Specifically, we map multiple modalities into a common latent space by orthogonal constrained projection to capture the discriminative information for AD diagnosis. Then, a feature weighting matrix is utilized to sort the importance of features in AD diagnosis adaptively. Besides, we devise a regularization term with learned graph to preserve the local structure of the data in the latent space and integrate the graph construction into the learning processing for accurately encoding the relationships among samples. Instead of constructing a similarity graph for each modality, we learn a joint graph for multiple modalities to capture the correlations among modalities. Finally, the representations in the latent space are projected into the target space to perform AD diagnosis. An alternating optimization algorithm with proved convergence is developed to solve the optimization objective. Extensive experimental results show the effectiveness of the proposed method.
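The orthogonality constraint on the latent-space projections can be illustrated concretely: a standard way to enforce W^T W = I is to project a candidate matrix onto the nearest column-orthogonal matrix via its SVD. The sketch below is a generic illustration of that idea, not the OLFG optimization algorithm, and the modality dimensions and names are assumptions.

```python
import numpy as np

def orthogonalize(w):
    """Project a d x k matrix onto the nearest column-orthogonal matrix
    (W^T W = I) via its SVD -- a common way to enforce an orthogonality
    constraint on a latent-space projection."""
    u, _, vt = np.linalg.svd(w, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(6)
x_mri = rng.normal(size=(50, 30))     # one modality's features (50 subjects)
x_pet = rng.normal(size=(50, 20))     # another modality's features

# Map both modalities into a common 10-dim latent space through
# column-orthogonal projections, which preserve feature geometry and
# avoid collapsing discriminative directions.
p_mri = orthogonalize(rng.normal(size=(30, 10)))
p_pet = orthogonalize(rng.normal(size=(20, 10)))
z = np.hstack([x_mri @ p_mri, x_pet @ p_pet])
```

Unlike an unconstrained projection, an orthogonal one cannot inflate or squash directions arbitrarily, which is why the paper argues it retains more discriminative information in the latent space.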
Affiliation(s)
- Zhi Chen
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yongguo Liu
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yun Zhang
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Qiaoqin Li
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
29
Liu F, Tao W, Yang J, Wu W, Wang J. STNet: A novel spiking neural network combining its own time signal with the spatial signal of an artificial neural network. Front Neurosci 2023; 17:1151949. [PMID: 37144088] [PMCID: PMC10153670] [DOI: 10.3389/fnins.2023.1151949]
Abstract
INTRODUCTION This article proposes a novel hybrid network that combines the temporal signal of a spiking neural network (SNN) with the spatial signal of an artificial neural network (ANN), namely the Spatio-Temporal Combined Network (STNet). METHODS Inspired by the way the visual cortex in the human brain processes visual information, two versions of STNet are designed: a concatenated one (C-STNet) and a parallel one (P-STNet). In the C-STNet, the ANN, simulating the primary visual cortex, extracts the simple spatial information of objects first, and then the obtained spatial information is encoded as spiking time signals for transmission to the rear SNN which simulates the extrastriate visual cortex to process and classify the spikes. With the view that information from the primary visual cortex reaches the extrastriate visual cortex via ventral and dorsal streams, in P-STNet, the parallel combination of the ANN and the SNN is employed to extract the original spatio-temporal information from samples, and the extracted information is transferred to a posterior SNN for classification. RESULTS The experimental results of the two STNets obtained on six small and two large benchmark datasets were compared with eight commonly used approaches, demonstrating that the two STNets can achieve improved performance in terms of accuracy, generalization, stability, and convergence. DISCUSSION These prove that the idea of combining ANN and SNN is feasible and can greatly improve the performance of SNN.
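The C-STNet step of encoding ANN activations "as spiking time signals" can be illustrated with time-to-first-spike (latency) coding, one common ANN-to-SNN encoding: stronger activation fires earlier. This is a generic sketch of that encoding family, not the paper's exact scheme.

```python
import numpy as np

def latency_encode(act, t_max=20):
    """Encode non-negative ANN activations as spike times: stronger
    activation -> earlier spike (time-to-first-spike coding). Returns
    integer spike times in [0, t_max]; a zero activation never spikes
    within the window (time t_max)."""
    a = np.clip(act, 0.0, None)       # SNN input must be non-negative
    peak = a.max()
    if peak == 0:
        return np.full(a.shape, t_max, dtype=int)
    return np.round((1.0 - a / peak) * t_max).astype(int)

# Four ANN activations: the strongest unit spikes first (t = 0),
# the silent unit last (t = t_max).
act = np.array([0.0, 0.25, 0.5, 1.0])
times = latency_encode(act)
```

The downstream SNN then operates purely on these spike times, which is how the spatial information extracted by the ANN front-end is handed to the temporal domain.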
Affiliation(s)
- Fang Liu
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
- Wentao Tao
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
- Jie Yang
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
- *Correspondence: Jie Yang
- Wei Wu
- School of Mathematical Sciences, Dalian University of Technology, Dalian, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian, China
- Jian Wang
- College of Science, China University of Petroleum (East China), Qingdao, China
30
Kwak K, Stanford W, Dayan E. Identifying the regional substrates predictive of Alzheimer's disease progression through a convolutional neural network model and occlusion. Hum Brain Mapp 2022; 43:5509-5519. [PMID: 35904092] [PMCID: PMC9704798] [DOI: 10.1002/hbm.26026]
Abstract
Progressive brain atrophy is a key neuropathological hallmark of Alzheimer's disease (AD) dementia. However, atrophy patterns along the progression of AD dementia are diffuse and variable and are often missed by univariate methods. Consequently, identifying the major regional atrophy patterns underlying AD dementia progression is challenging. In the current study, we propose a method that evaluates the degree to which specific regional atrophy patterns are predictive of AD dementia progression, while holding all other atrophy changes constant using a total sample of 334 subjects. We first trained a dense convolutional neural network model to differentiate individuals with mild cognitive impairment (MCI) who progress to AD dementia versus those with a stable MCI diagnosis. Then, we retested the model multiple times, each time occluding different regions of interest (ROIs) from the model's testing set's input. We also validated this approach by occluding ROIs based on Braak's staging scheme. We found that the hippocampus, fusiform, and inferior temporal gyri were the strongest predictors of AD dementia progression, in agreement with established staging models. We also found that occlusion of limbic ROIs defined according to Braak stage III had the largest impact on the performance of the model. Our predictive model reveals the major regional patterns of atrophy predictive of AD dementia progression. These results highlight the potential for early diagnosis and stratification of individuals with prodromal AD dementia based on patterns of cortical atrophy, prior to interventional clinical trials.
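The occlusion procedure described above, retesting the model with each ROI masked out and measuring the performance drop, can be sketched generically. The toy "model" below is a hypothetical stand-in (its score is just a weighted regional mean), not the authors' CNN; the point is the occlusion bookkeeping.

```python
import numpy as np

def roi_importance(predict, volume, roi_masks):
    """Occlusion analysis: zero out each ROI in turn and measure the
    drop in the model's output score. A larger drop means the ROI is
    more predictive."""
    base = predict(volume)
    drops = {}
    for name, mask in roi_masks.items():
        occluded = volume.copy()
        occluded[mask] = 0.0          # occlude only this region
        drops[name] = base - predict(occluded)
    return drops

# Toy model: its score depends mostly on a hippocampus-like region, so
# occluding that region should produce the largest drop.
vol = np.ones((8, 8, 8))
hippo = np.zeros_like(vol, dtype=bool); hippo[2:4, 2:4, 2:4] = True
other = np.zeros_like(vol, dtype=bool); other[6:, 6:, 6:] = True
predict = lambda v: v[hippo].mean() + 0.1 * v[other].mean()
drops = roi_importance(predict, vol, {"hippocampus": hippo, "other": other})
```

Ranking the ROIs by their drop recovers the model's regional dependence, which is how the study identified the hippocampus, fusiform, and inferior temporal gyri as the strongest predictors.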
Affiliation(s)
- Kichang Kwak
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- William Stanford
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Eran Dayan
- Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Neuroscience Curriculum, Biological and Biomedical Sciences Program, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
31
Francis A, Pandian IA, Anitha J. A boon to aged society: Early diagnosis of Alzheimer's disease-An opinion. Front Public Health 2022; 10:1076472. [PMID: 36530651] [PMCID: PMC9751990] [DOI: 10.3389/fpubh.2022.1076472]
Affiliation(s)
- Ambily Francis
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India; Department of Electronics and Communication Engineering, Sahrdaya College of Engineering and Technology, Kodakara, India
- Immanuel Alex Pandian
- Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- J. Anitha (corresponding author)
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
32
Dong A, Zhang G, Liu J, Wei Z. Latent feature representation learning for Alzheimer's disease classification. Comput Biol Med 2022; 150:106116. [PMID: 36215848] [DOI: 10.1016/j.compbiomed.2022.106116]
Abstract
Early detection and treatment of Alzheimer's Disease (AD) are critical. Recently, multi-modality imaging data have promoted the development of automatic AD diagnosis. This paper proposes a method based on latent feature fusion to make full use of the information in multi-modality image data. Specifically, we learn a specific projection matrix for each modality by introducing a binary label matrix and local geometry constraints, and then project the original features of each modality into a low-dimensional target space. In this space, we fuse the latent feature representations of the different modalities for AD classification. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate the proposed method's effectiveness in classifying AD.
Affiliation(s)
- Aimei Dong
- Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
- Guodong Zhang
- Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
- Jian Liu
- Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
- Zhonghe Wei
- Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250353, China
33
Ouyang J, Zhao Q, Adeli E, Zaharchuk G, Pohl KM. Self-supervised learning of neighborhood embedding for longitudinal MRI. Med Image Anal 2022; 82:102571. [PMID: 36115098] [PMCID: PMC10168684] [DOI: 10.1016/j.media.2022.102571]
Abstract
In recent years, several deep learning models have recommended first representing Magnetic Resonance Imaging (MRI) as latent features before performing a downstream task of interest (such as classification or regression). The performance of the downstream task generally improves when these latent representations are explicitly associated with factors of interest. For example, we previously derived such a representation for capturing brain aging by applying self-supervised learning to longitudinal MRIs and then used the resulting encoding to automatically identify diseases that accelerate aging of the brain. We now propose a refinement of this representation by replacing the linear modeling of brain aging with one that is consistent in local neighborhoods of the latent space. Called Longitudinal Neighborhood Embedding (LNE), the encoding is derived so that neighborhoods are age-consistent (i.e., brain MRIs of different subjects with similar brain ages lie in close proximity to each other) and progression-consistent, i.e., the latent space is defined by a smooth trajectory field in which each trajectory captures changes in brain age between a pair of MRIs extracted from a longitudinal sequence. To make the problem computationally tractable, we further propose a mini-batch sampling strategy so that the resulting local neighborhoods accurately approximate those that would be defined on the whole cohort.
We evaluate LNE on three downstream tasks: (1) predicting chronological age from T1-w MRI of 274 healthy subjects participating in a study at SRI International; (2) distinguishing Normal Control (NC) from Alzheimer's Disease (AD) and stable Mild Cognitive Impairment (sMCI) from progressive Mild Cognitive Impairment (pMCI) based on T1-w MRI of 632 participants of the Alzheimer's Disease Neuroimaging Initiative (ADNI); and (3) distinguishing no-to-low from moderate-to-heavy alcohol drinkers based on fractional anisotropy derived from diffusion tensor MRIs of 764 adolescents recruited by the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). Across the three data sets, the visualization of the smooth trajectory vector fields and the superior accuracy on downstream tasks demonstrate the strength of the proposed method over existing self-supervised methods in extracting information related to brain aging, which could help study the impact of substance use and neurodegenerative disorders. The code is available at https://github.com/ouyangjiahong/longitudinal-neighbourhood-embedding.
Affiliation(s)
- Jiahong Ouyang
- Department of Electrical Engineering, Stanford University, Stanford, United States of America
- Qingyu Zhao
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, United States of America
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, United States of America
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, United States of America
- Kilian M Pohl
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, United States of America; Center for Health Sciences, SRI International, Menlo Park, United States of America
34
Deatsch A, Perovnik M, Namías M, Trošt M, Jeraj R. Development of a deep learning network for Alzheimer’s disease classification with evaluation of imaging modality and longitudinal data. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8f10]
Abstract
Objective. Neuroimaging uncovers important information about disease in the brain. Yet in Alzheimer’s disease (AD), there remains a clear clinical need for reliable tools to extract diagnoses from neuroimages. Significant work has been done to develop deep learning (DL) networks using neuroimaging for AD diagnosis. However, no particular model has emerged as optimal. Due to a lack of direct comparisons and evaluations on independent data, there is no consensus on which modality is best for diagnostic models or whether longitudinal information enhances performance. The purpose of this work was (1) to develop a generalizable DL model to distinguish neuroimaging scans of AD patients from controls and (2) to evaluate the influence of imaging modality and longitudinal data on performance. Approach. We trained a 2-class convolutional neural network (CNN) with and without a cascaded recurrent neural network (RNN). We used datasets of 772 (N_AD = 364, N_control = 408) 3D 18F-FDG PET scans and 780 (N_AD = 280, N_control = 500) T1-weighted volumetric-3D MR images (containing 131 and 144 patients with multiple timepoints) from the Alzheimer’s Disease Neuroimaging Initiative, plus an independent set of 104 (N_AD = 63, N_NC = 41) 18F-FDG PET scans (one per patient) for validation. Main Results. ROC analysis showed that PET-trained models outperformed MRI-trained models, achieving a maximum AUC with the CNN + RNN model of 0.93 ± 0.08, with accuracy 82.5 ± 8.9%. Adding longitudinal information offered significant improvement to performance on 18F-FDG PET, but not on T1-MRI. CNN model validation with an independent 18F-FDG PET dataset achieved an AUC of 0.99. Layer-wise relevance propagation heatmaps added CNN interpretability. Significance. The development of a high-performing tool for AD diagnosis, with the direct evaluation of key influences, reveals the advantage of using 18F-FDG PET and longitudinal data over MRI and single-timepoint analysis. This has significant implications for the potential of neuroimaging for future research on AD diagnosis and clinical management of suspected AD patients.
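The CNN + cascaded RNN design evaluated above — a convolutional feature extractor applied to each scan, with a recurrent network aggregating the longitudinal sequence — can be illustrated schematically. Everything below (a 1-D convolution standing in for the 3D CNN, a plain tanh recurrence standing in for the RNN, and random untrained weights) is a toy assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def cnn_features(volume, kernel):
    """Stand-in for the CNN branch: one valid 1-D convolution + global pooling."""
    conv = np.convolve(volume, kernel, mode="valid")
    return np.array([conv.max(), conv.mean()])     # 2-D feature vector per scan

def rnn_classify(feature_seq, Wh, Wx, Wo):
    """Stand-in for the cascaded RNN: tanh recurrence over timepoints."""
    h = np.zeros(Wh.shape[0])
    for x in feature_seq:                          # one step per longitudinal scan
        h = np.tanh(Wh @ h + Wx @ x)
    logits = Wo @ h
    return logits.argmax()                         # 2-class decision

kernel = rng.normal(size=5)
Wh, Wx, Wo = rng.normal(size=(4, 4)), rng.normal(size=(4, 2)), rng.normal(size=(2, 4))

scans = [rng.normal(size=32) for _ in range(3)]    # 3 timepoints for one subject
feats = [cnn_features(s, kernel) for s in scans]
label = rnn_classify(feats, Wh, Wx, Wo)
```

The point of the cascade is visible in the shapes: the CNN collapses each scan to a fixed-length feature vector, and only the recurrence sees the temporal ordering, which is where the longitudinal gain reported for 18F-FDG PET would enter.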
35
Multi-class classification of Alzheimer’s disease through distinct neuroimaging computational approaches using Florbetapir PET scans. Evolving Systems 2022. [DOI: 10.1007/s12530-022-09467-9]
36
Ouyang J, Zhao Q, Adeli E, Zaharchuk G, Pohl KM. Disentangling Normal Aging From Severity of Disease via Weak Supervision on Longitudinal MRI. IEEE Trans Med Imaging 2022; 41:2558-2569. [PMID: 35404811] [PMCID: PMC9578549] [DOI: 10.1109/tmi.2022.3166131]
Abstract
The continuous progression of neurological diseases is often categorized into conditions according to severity. To relate severity to changes in brain morphometry, there is growing interest in replacing these categories with a continuous severity scale onto which longitudinal MRIs are mapped via deep learning algorithms. However, existing methods based on supervised learning require large numbers of samples, and those that do not, such as self-supervised models, fail to clearly separate the disease effect from normal aging. Here, we propose to explicitly disentangle those two factors via weak supervision. In other words, training is based on longitudinal MRIs labelled either normal or diseased, so that the training data can be augmented with samples from disease categories that are not of primary interest to the analysis. We do so by encouraging trajectories of controls to be fully encoded by the direction associated with brain aging. An orthogonal direction linked to disease severity then captures the component residual to normal aging in the diseased cohort. Hence, the proposed method quantifies disease severity and its progression speed in individuals without knowing their condition. We apply the proposed method to data from the Alzheimer's Disease Neuroimaging Initiative (ADNI, N = 632). We then show that the model properly disentangles normal aging from the severity of cognitive impairment by plotting the resulting disentangled factors for each subject and generating simulated MRIs for a given chronological age and condition. Moreover, our representation obtains higher balanced accuracy when used for two downstream classification tasks compared to other pre-training approaches. The code for our weakly supervised approach is available at https://github.com/ouyangjiahong/longitudinal-direction-disentangle.
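The geometric core of the disentangling idea above — encode each longitudinal trajectory, force control trajectories onto a brain-aging direction, and read disease severity off an orthogonal direction — reduces to a pair of projections. The 2-D latent vectors and both directions below are hypothetical illustrations, not the learned representation:

```python
import numpy as np

def disentangle(dz, aging_dir, disease_dir):
    """Project a latent trajectory onto the aging axis and an orthogonal
    disease-severity axis, mirroring the weak-supervision idea sketched above."""
    a = aging_dir / np.linalg.norm(aging_dir)
    d = disease_dir - (disease_dir @ a) * a        # enforce orthogonality to aging
    d /= np.linalg.norm(d)
    return dz @ a, dz @ d                          # (aging score, severity score)

a = np.array([1.0, 0.0])                           # learned brain-aging direction
d = np.array([0.3, 1.0])                           # raw disease direction (not yet orthogonal)
control_traj = np.array([2.0, 0.0])                # pure normal aging
patient_traj = np.array([2.0, 1.5])                # aging plus a disease component

c_age, c_sev = disentangle(control_traj, a, d)
p_age, p_sev = disentangle(patient_traj, a, d)
```

By construction, the control trajectory loads only on the aging axis (severity score 0), while the patient trajectory shows the same aging score plus a positive severity residual — the property that lets the method score severity without knowing the diagnosis.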
37
Noella RSN, Priyadarshini J. Diagnosis of Alzheimer’s, Parkinson’s disease and frontotemporal dementia using a generative adversarial deep convolutional neural network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07750-z]
38
Chen T, Su P, Shen Y, Chen L, Mahmud M, Zhao Y, Antoniou G. A dominant set-informed interpretable fuzzy system for automated diagnosis of dementia. Front Neurosci 2022; 16:867664. [PMID: 35979331] [PMCID: PMC9376621] [DOI: 10.3389/fnins.2022.867664]
Abstract
Dementia is an incurable neurodegenerative disease primarily affecting the older population, and the World Health Organisation has set promoting early diagnosis and timely management as one of the primary goals for dementia care. While a range of popular machine learning algorithms and their variants have been applied to dementia diagnosis, fuzzy systems, which are known to be effective in dealing with uncertainty and can explicitly show how a diagnosis is inferred, appear only sporadically in recent literature. Given the advantages of a fuzzy rule-based model, which could potentially result in a clinical decision support system offering understandable rules and a transparent inference process to support dementia diagnosis, this paper proposes a novel fuzzy inference system by adapting the concept of dominant sets from graph theory. A peeling-off strategy is used to iteratively extract a collection of dominant sets from the constructed edge-weighted graph. Each dominant set is then converted into a parameterized fuzzy rule, which is finally optimized in a supervised adaptive network-based fuzzy inference framework. An illustrative example demonstrates the interpretable rules and the transparent reasoning process of reaching a decision. Systematic experiments conducted on data from the Open Access Series of Imaging Studies (OASIS) repository further validate its superior performance over alternative methods.
Affiliation(s)
- Tianhua Chen
- Department of Computer Science, School of Computing and Engineering, University of Huddersfield, Huddersfield, United Kingdom
- Pan Su
- School of Control and Computer Engineering, North China Electric Power University, Beijing, China
- Yinghua Shen
- School of Economics and Business Administration, Chongqing University, Chongqing, China
- Lu Chen
- Institute of Big Data Science and Industry, Shanxi University, Taiyuan, China
- Mufti Mahmud
- Department of Computer Science, Nottingham Trent University, Nottingham, United Kingdom
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Grigoris Antoniou
- Department of Computer Science, School of Computing and Engineering, University of Huddersfield, Huddersfield, United Kingdom
39
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332]
40
Qian S, Chou CA, Li JS. Deep multi-modal learning for joint linear representation of nonlinear dynamical systems. Sci Rep 2022; 12:12807. [PMID: 35896569] [PMCID: PMC9329370] [DOI: 10.1038/s41598-022-15669-7]
Abstract
Dynamical systems, pervasive in real-life applications, are complex and evolve by following certain rules or dynamical patterns, which may be linear, non-linear, or stochastic. The underlying dynamics (or evolution rule) of such a complex system, if found, can be used for understanding the system's behavior and, furthermore, for system prediction and control. It is common to analyze a system's dynamics through observations from different modalities. For instance, recognizing patient deterioration in acute care usually relies on monitoring and analyzing vital signs and other observations, such as blood pressure, heart rate, respiration, and electroencephalography. These observations convey information describing the same target system, but the dynamics cannot be directly characterized due to the high complexity of each individual modality and possible time-delayed interactions among modalities. In this work, we suppose that the state behavior of a dynamical system follows an intrinsic dynamics shared among these modalities. We specifically propose a new deep auto-encoder framework that uses Koopman operator theory to derive the joint linear dynamics of a target system in a space spanned by intrinsic coordinates. The proposed method aims to reconstruct the original system states by learning the information provided by multiple modalities. Furthermore, with the derived intrinsic dynamics, our method is capable of restoring missing observations within and across modalities and of predicting future states of the system that follow the same evolution rule.
Affiliation(s)
- Shaodi Qian
- Mechanical and Industrial Engineering, Northeastern University, Boston, MA, 02215, USA
- Chun-An Chou
- Mechanical and Industrial Engineering, Northeastern University, Boston, MA, 02215, USA
- Jr-Shin Li
- Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, 63130, USA
41
Zhang J, He X, Qing L, Xu Y, Liu Y, Chen H. Multi-scale discriminative regions analysis in FDG-PET imaging for early diagnosis of Alzheimer's disease. J Neural Eng 2022; 19. [PMID: 35882218] [DOI: 10.1088/1741-2552/ac8450]
Abstract
OBJECTIVE Alzheimer's disease (AD) is a degenerative brain disorder and one of the main causes of death in elderly people, so early diagnosis of AD is vital for prompt access to medication and medical care. Fluorodeoxyglucose positron emission tomography (FDG-PET) has proven effective in helping to understand neurological changes via measuring glucose uptake. Our aim is to explore information-rich regions of FDG-PET imaging that enhance the accuracy and interpretability of AD-related diagnosis. APPROACH We develop a novel method for early diagnosis of AD based on multi-scale discriminative regions in FDG-PET imaging, which takes diagnostic interpretability into account. Specifically, a multi-scale region localization (MSRL) module automatically identifies disease-related discriminative regions in full-volume FDG-PET images in an unsupervised manner, upon which a confidence score is designed to prioritize regions according to the density distribution of anomalies. Then, the proposed multi-scale region classification (MSRC) module adaptively fuses multi-scale region representations and performs decision fusion, which not only reduces useless information but also offers complementary information. Most previous methods concentrate on discriminating AD from cognitively normal (CN) subjects, whereas mild cognitive impairment (MCI), a transitional state, facilitates early diagnosis. Therefore, our method is further applied to multiple AD-related diagnosis tasks, not limited to AD vs. CN. MAIN RESULTS Experimental results on the ADNI dataset show that the proposed method achieves superior performance over state-of-the-art FDG-PET-based approaches. Moreover, some of the cerebral cortices highlighted by the extracted regions cohere with medical research, further demonstrating its superiority. SIGNIFICANCE This work offers an effective method to achieve AD diagnosis and detect disease-affected regions in FDG-PET imaging. Our results could provide an additional opinion for clinical diagnosis.
Affiliation(s)
- Jin Zhang
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Xiaohai He
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Linbo Qing
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Yining Xu
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Yan Liu
- Department of Neurology, The Third People's Hospital of Chengdu, The Affiliated Hospital of Southwest Jiaotong University, Chengdu, Sichuan, 610014, China
- Honggang Chen
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
42
Establishing an Intelligent Emotion Analysis System for Long-Term Care Application Based on LabVIEW. Sustainability 2022. [DOI: 10.3390/su14148932]
Abstract
In this study, the authors implemented an intelligent long-term care system based on deep learning techniques, using an AI model that can be integrated with the Laboratory Virtual Instrument Engineering Workbench (LabVIEW) application for sentiment analysis. The collected input data form a database of facial features and environmental variables that are processed and analyzed; the output decisions are the corresponding controls for sentiment analysis and prediction. A convolutional neural network (CNN) handles the complex deep learning processing: after the convolutional layers simplify processing of the image matrix, the results are computed by the fully connected layer. Furthermore, a Multilayer Perceptron (MLP) model embedded in LabVIEW is constructed for numerical transformation, analysis, and predictive control; it predicts the corresponding control of emotional and environmental variables. LabVIEW is also used to design sensor components, data displays, and control interfaces, and remote sensing and control are achieved through LabVIEW's built-in web publishing tools.
43
Jia H, Lao H. Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer's disease. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07501-0]
44
Liu L, Wang YP, Wang Y, Zhang P, Xiong S. An enhanced multi-modal brain graph network for classifying neuropsychiatric disorders. Med Image Anal 2022; 81:102550. [PMID: 35872360] [DOI: 10.1016/j.media.2022.102550]
Abstract
It has been shown that neuropsychiatric disorders (NDs) can be associated with both the structure and function of brain regions, so structural and functional data can usefully be combined in a comprehensive analysis. While brain structural MRI (sMRI) images contain anatomic and morphological information about NDs, functional MRI (fMRI) images carry complementary information. However, efficient extraction and fusion of sMRI and fMRI data remains challenging. In this study, we develop an enhanced multi-modal graph convolutional network (MME-GCN) for binary classification between patients with NDs and healthy controls, based on the fusion of the structural and functional graphs of brain regions. First, using the same brain atlas, we construct structural and functional graphs from sMRI and fMRI data, respectively. Second, we use machine learning to extract important features from the structural graph network. Third, we use these extracted features to adjust the corresponding edge weights in the functional graph network. Finally, we train a multi-layer GCN and use it for the binary classification task. MME-GCN achieved 93.71% classification accuracy on the open dataset provided by the Consortium for Neuropsychiatric Phenomics. In addition, we analyzed the important features selected from the structural graph and verified them in the functional graph. Using MME-GCN, we found several specific brain connections important to NDs.
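Steps two to four of the pipeline above — importances learned on the structural graph adjusting the functional edge weights, followed by graph convolution — can be sketched as follows. The outer-product reweighting rule and the single mean-aggregation GCN layer are illustrative simplifications of the MME-GCN idea, not the paper's exact formulation:

```python
import numpy as np

def reweight_edges(func_adj, struct_importance):
    """Scale functional edge (i, j) by the structural importance of its endpoints,
    a simplified stand-in for the feature-informed adjustment described above."""
    s = struct_importance / struct_importance.max()
    return func_adj * np.outer(s, s)

def gcn_layer(adj, H, W):
    """One mean-aggregation graph-convolution step: H' = tanh(D^-1 (A + I) H W)."""
    A = adj + np.eye(adj.shape[0])                 # add self-loops
    D_inv = np.diag(1.0 / A.sum(axis=1))           # row-normalise by degree
    return np.tanh(D_inv @ A @ H @ W)

rng = np.random.default_rng(3)
n_rois = 5
func_adj = np.abs(rng.normal(size=(n_rois, n_rois)))
func_adj = (func_adj + func_adj.T) / 2             # symmetric functional connectivity
importance = np.array([1.0, 0.2, 0.2, 0.9, 0.5])   # hypothetical structural-graph scores

adj = reweight_edges(func_adj, importance)
H = rng.normal(size=(n_rois, 3))                   # node features per ROI
out = gcn_layer(adj, H, rng.normal(size=(3, 4)))
```

The reweighting preserves the symmetry of the connectivity matrix while damping edges between structurally unimportant ROIs, so the subsequent GCN propagation is dominated by the structurally informed connections.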
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Yu-Ping Wang
- Biomedical Engineering Department, Tulane University, New Orleans, LA 70118, USA
- Yi Wang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Pei Zhang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Shufeng Xiong
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
45
Classification of Alzheimer's disease in MRI images using knowledge distillation framework: an investigation. Int J Comput Assist Radiol Surg 2022; 17:1235-1243. [PMID: 35633492] [DOI: 10.1007/s11548-022-02661-9]
Abstract
PURPOSE Computer-aided MRI analysis is helpful for early detection of Alzheimer's disease (AD). Recently, 3D convolutional neural networks (CNNs) have been widely used to analyse MRI images. However, 3D CNNs incur a huge memory cost. In this paper, we introduce cascaded CNN and long short-term memory (LSTM) networks, and we use knowledge distillation to improve the accuracy of the model on a small medical image dataset. METHODS We propose a cascade structure, CNN-LSTM, in which the CNN serves as the feature extractor and the LSTM as the classifier. In this way, the correlation between different slices can be considered and the computational cost caused by 3D data can be reduced. To overcome the problem of limited training data, transfer learning offers a more reasonable way of extracting features. We use the knowledge distillation algorithm to improve the performance of student models for AD diagnosis, with a powerful teacher model guiding the work of the student models. RESULTS The accuracy of the proposed model is improved by knowledge distillation: the accuracy of the student models reached 85.96% after the guidance of the teacher models, an increase of 3.83%. CONCLUSION We propose a cascaded CNN-LSTM to classify 3D ADNI data and use knowledge distillation to improve model accuracy when training with a small dataset. The approach processes 3D data efficiently while reducing computational cost.
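The teacher-student training this abstract reports can be illustrated with the standard Hinton-style distillation objective — a temperature-softened KL term toward the teacher's outputs plus the usual cross-entropy on the hard label. This is a generic sketch with made-up logits and hyperparameters, not necessarily the paper's exact loss:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Soft KL term against the temperature-softened teacher distribution,
    blended with cross-entropy on the ground-truth label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))       # soft-target term
    ce = -np.log(softmax(student_logits)[label])         # hard-label term
    return alpha * (T ** 2) * kl + (1 - alpha) * ce      # T^2 rescales soft gradients

teacher = np.array([4.0, 1.0, 0.5])       # confident teacher, class 0
good_student = np.array([3.5, 1.2, 0.4])  # close to the teacher
bad_student = np.array([0.2, 3.0, 0.5])   # disagrees with teacher and label

loss_good = distillation_loss(good_student, teacher, label=0)
loss_bad = distillation_loss(bad_student, teacher, label=0)
```

A student that tracks the teacher's softened distribution (and the true label) receives a much smaller loss, which is the signal that lets a small student trained on limited data inherit the teacher's behaviour.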
46
Deep Learning-Based Diagnosis of Alzheimer’s Disease. J Pers Med 2022; 12:jpm12050815. [PMID: 35629237] [PMCID: PMC9143671] [DOI: 10.3390/jpm12050815]
Abstract
Alzheimer’s disease (AD), the most common type of dementia, is a severe concern in modern healthcare. Around 5.5 million people aged 65 and above have AD, and it is the sixth leading cause of mortality in the US. AD is an irreversible, degenerative brain disorder characterized by a loss of cognitive function and has no proven cure. Deep learning techniques have gained popularity in recent years, particularly in the domains of natural language processing and computer vision. Since 2014, these techniques have received substantial consideration in AD diagnosis research, and the number of papers published in this arena is rising drastically. Deep learning techniques have been reported to be more accurate for AD diagnosis than conventional machine learning models. Motivated to explore the potential of deep learning in AD diagnosis, this study reviews the current state of the art in AD diagnosis using deep learning. We summarize the most recent trends and findings through a thorough literature review. The study also explores the different biomarkers and datasets for AD diagnosis. Even though deep learning has shown promise in AD diagnosis, several challenges still need to be addressed.
47
Frizzell TO, Glashutter M, Liu CC, Zeng A, Pan D, Hajra SG, D’Arcy RC, Song X. Artificial intelligence in brain MRI analysis of Alzheimer's disease over the past 12 years: A systematic review. Ageing Res Rev 2022; 77:101614. [PMID: 35358720] [DOI: 10.1016/j.arr.2022.101614]
Abstract
INTRODUCTION Multiple structural brain changes in Alzheimer's disease (AD) and mild cognitive impairment (MCI) have been revealed on magnetic resonance imaging (MRI), and there is a fast-growing effort to apply artificial intelligence (AI) to analyze these data. Here, we review, evaluate, and synthesize the AI studies in brain MRI analysis. METHODS A systematic review of the literature, spanning the years 2009 to 2020, was completed using the PubMed database. AI studies using MRI to investigate normal aging, mild cognitive impairment, and AD-dementia were retrieved for review. Bias assessment was completed using the PROBAST criteria. RESULTS 97 relevant studies were included in the review. The studies typically focused on the classification of AD, MCI, and normal aging (71% of the reported studies) and the prediction of MCI conversion to AD (25%). The best performance was achieved by deep learning-based convolutional neural network algorithms (weighted average accuracy 89%), in contrast to 76-86% for Logistic Regression, Support Vector Machines, and other AI methods. DISCUSSION The synthesized evidence is paramount to developing sophisticated AI approaches that reliably capture and quantify the multiple subtle MRI changes across the whole brain which exemplify the complexity and heterogeneity of AD and brain aging.
|
48
|
Wang Q, Li L, Qiao L, Liu M. Adaptive Multimodal Neuroimage Integration for Major Depression Disorder Detection. Front Neuroinform 2022; 16:856175. [PMID: 35571867 PMCID: PMC9100686 DOI: 10.3389/fninf.2022.856175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/16/2022] [Accepted: 04/05/2022] [Indexed: 11/13/2022] Open
Abstract
Major depressive disorder (MDD) is one of the most common mental health disorders that can affect sleep, mood, appetite, and behavior of people. Multimodal neuroimaging data, such as functional and structural magnetic resonance imaging (MRI) scans, have been widely used in computer-aided detection of MDD. However, previous studies usually treat these two modalities separately, without considering their potentially complementary information. Even though a few studies propose integrating these two modalities, they usually suffer from significant inter-modality data heterogeneity. In this paper, we propose an adaptive multimodal neuroimage integration (AMNI) framework for automated MDD detection based on functional and structural MRIs. The AMNI framework consists of four major components: (1) a graph convolutional network to learn feature representations of functional connectivity networks derived from functional MRIs, (2) a convolutional neural network to learn features of T1-weighted structural MRIs, (3) a feature adaptation module to alleviate inter-modality difference, and (4) a feature fusion module to integrate feature representations extracted from two modalities for classification. To the best of our knowledge, this is among the first attempts to adaptively integrate functional and structural MRIs for neuroimaging-based MDD analysis by explicitly alleviating inter-modality heterogeneity. Extensive evaluations are performed on 533 subjects with resting-state functional MRI and T1-weighted MRI, with results suggesting the efficacy of the proposed method.
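As a rough illustration only (not the authors' implementation), the four AMNI components described above — graph-based functional features, convolutional structural features, inter-modality adaptation, and fusion — can be sketched with plain NumPy stand-ins and hypothetical shapes (90-node connectivity, an 8x8x8x4 structural feature volume):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_branch(conn, w):
    """Toy stand-in for the GCN branch: one graph-convolution step
    (row-normalized adjacency times weights) followed by mean pooling."""
    deg = conn.sum(axis=1, keepdims=True) + 1e-8
    return ((conn / deg) @ w).mean(axis=0)

def smri_branch(vol, w):
    """Toy stand-in for the CNN branch: global average pooling of a
    feature volume followed by a linear projection."""
    return vol.mean(axis=(0, 1, 2)) @ w

def adapt(f):
    """Feature adaptation as simple standardization, so both modalities
    land on a comparable scale before fusion (a crude proxy for the
    paper's learned adaptation module)."""
    return (f - f.mean()) / (f.std() + 1e-8)

# Hypothetical inputs: functional connectivity matrix and sMRI features.
conn = rng.random((90, 90))
vol = rng.random((8, 8, 8, 4))
w_f = rng.random((90, 16))
w_s = rng.random((4, 16))

# Fusion by concatenation of the adapted per-modality features,
# followed by a linear head for MDD vs. control.
fused = np.concatenate([adapt(fc_branch(conn, w_f)),
                        adapt(smri_branch(vol, w_s))])
logit = fused @ rng.random(32)
```

In the actual framework the branch and adaptation parameters are learned end-to-end; this sketch only shows how the two 16-dimensional modality features are brought to a common scale and concatenated.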
Affiliation(s)
- Qianqian Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Long Li
- Taian Tumor Prevention and Treatment Hospital, Taian, China
- Lishan Qiao
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
|
49
|
Goenka N, Tiwari S. AlzVNet: A volumetric convolutional neural network for multiclass classification of Alzheimer’s disease through multiple neuroimaging computational approaches. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103500] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Indexed: 12/24/2022]
|
50
|
Zhang J, He X, Qing L, Gao F, Wang B. BPGAN: Brain PET synthesis from MRI using generative adversarial network for multi-modal Alzheimer's disease diagnosis. Comput Methods Programs Biomed 2022; 217:106676. [PMID: 35167997 DOI: 10.1016/j.cmpb.2022.106676] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 10/05/2021] [Revised: 01/30/2022] [Accepted: 01/30/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE: Multi-modal medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), have been widely used for the diagnosis of brain disorders such as Alzheimer's disease (AD) because they provide complementary information. PET scans can detect cellular changes in organs and tissues earlier than MRI. Unlike MRI, PET data are difficult to acquire due to cost, radiation, or other limitations; moreover, PET data are missing for many subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. To solve this problem, a 3D end-to-end generative adversarial network (named BPGAN) is proposed to synthesize brain PET from MRI scans, which can serve as a data completion scheme for multi-modal medical image research.
METHODS: We propose BPGAN, which learns an end-to-end mapping function that transforms input MRI scans into their underlying PET scans. First, we design a 3D multiple convolution U-Net (MCU) generator architecture to improve the visual quality of synthetic results while preserving the diverse brain structures of different subjects. By further employing a 3D gradient profile (GP) loss and a structural similarity index measure (SSIM) loss, the synthetic PET scans achieve higher similarity to the ground truth. We also explore alternative data-partitioning schemes to study their impact on the performance of the proposed method in different medical scenarios.
RESULTS: We conduct experiments on the publicly available ADNI database. BPGAN is evaluated by mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and SSIM, outperforming the compared models on these quantitative metrics. Qualitative evaluations also validate the effectiveness of our approach. Additionally, combining MRI with our synthetic PET scans, the accuracies of multi-class AD diagnosis on dataset-A and dataset-B are 85.00% and 56.47%, each improved by about 1% compared to stand-alone MRI.
CONCLUSIONS: The quantitative measures, qualitative displays, and classification evaluation demonstrate that the PET images synthesized by BPGAN are reasonable and of high quality, providing complementary information that improves AD diagnosis. This work provides a valuable reference for multi-modal medical image analysis.
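The gradient-matching idea behind the GP loss mentioned in the abstract can be illustrated with a much-simplified NumPy sketch (the paper's exact formulation may differ; volumes and noise levels here are hypothetical):

```python
import numpy as np

def gradient_profile_loss(pred, target):
    """Mean L1 difference between the spatial gradients of a synthetic
    and a real volume along each of the three axes, encouraging the
    generator to reproduce edge structure (a simplified stand-in for
    BPGAN's 3D gradient profile loss)."""
    loss = 0.0
    for axis in range(3):
        loss += np.abs(np.diff(pred, axis=axis) - np.diff(target, axis=axis)).mean()
    return loss / 3.0

rng = np.random.default_rng(1)
real = rng.random((16, 16, 16))                # hypothetical real PET volume
fake = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0.0, 1.0)

print(gradient_profile_loss(real, real))  # identical volumes give 0.0
print(gradient_profile_loss(fake, real))  # noisy synthesis incurs a positive penalty
```

Because the penalty is computed on gradients rather than raw intensities, it specifically punishes blurred or displaced edges, which is why it complements an SSIM term during GAN training.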
Affiliation(s)
- Jin Zhang
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610064, China
- Xiaohai He
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610064, China
- Linbo Qing
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610064, China
- Feng Gao
- National Interdisciplinary Institute on Aging (NIIA), Southwest Jiaotong University, Chengdu, Sichuan, 611756, China; External Cooperation and Liaison Office, Southwest Jiaotong University, Chengdu, Sichuan, 611756, China
- Bin Wang
- College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, 610064, China
|