1. Tang C, Xi M, Sun J, Wang S, Zhang Y. MACFNet: Detection of Alzheimer's disease via multiscale attention and cross-enhancement fusion network. Comput Methods Programs Biomed 2024; 254:108259. [PMID: 38865795] [DOI: 10.1016/j.cmpb.2024.108259]
Abstract
BACKGROUND AND OBJECTIVE: Alzheimer's disease (AD) is a devastating degenerative disease that results in a profound decline in human cognition and memory. Because of its intricate pathogenesis and the lack of effective therapeutic interventions, early diagnosis plays a paramount role in AD. Recent neuroimaging research has shown that deep learning methods applied to multimodal neuroimages can effectively detect AD. However, these methods only concatenate and fuse the high-level features extracted from different modalities, ignoring the fusion and interaction of low-level features across modalities, which leads to unsatisfactory classification performance.

METHOD: In this paper, we propose a novel multiscale attention and cross-enhancement fusion network, MACFNet, which enables the interaction of multi-stage low-level features between inputs to learn shared feature representations. We first construct a novel Cross-Enhanced Fusion Module (CEFM), which fuses low-level features from different modalities through a multi-stage cross-structure. In addition, an Efficient Channel-Spatial Attention (ECSA) module is proposed, which focuses on important AD-related features in images more efficiently and achieves feature enhancement across modalities through two-stage residual concatenation. Finally, we propose a Multiscale Attention Guiding block (MSAG) based on dilated convolution, which obtains rich receptive fields without increasing model parameters or computation, effectively improving the efficiency of multiscale feature extraction.

RESULTS: Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that MACFNet achieves better classification performance than existing multimodal methods, with classification accuracies of 99.59%, 98.85%, 99.61%, and 98.23% for AD vs. CN, AD vs. MCI, CN vs. MCI, and AD vs. CN vs. MCI, respectively; specificities of 98.92%, 97.07%, 99.58%, and 99.04%; and sensitivities of 99.91%, 99.89%, 99.63%, and 97.75%.

CONCLUSIONS: The proposed MACFNet is a high-accuracy multimodal AD diagnostic framework. Through its cross mechanism and efficient attention, MACFNet makes full use of the low-level features of different modal medical images and effectively attends to both local and global information in the images. This work provides a valuable reference for multimodal AD diagnosis.
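The MSAG idea of widening the receptive field via dilated convolutions can be made concrete with a short sketch. The block below is a minimal, hypothetical PyTorch rendering (the module name and layer sizes are illustrative assumptions, not the authors' implementation): parallel 3×3 convolutions with growing dilation rates cover progressively larger receptive fields at the same parameter cost as plain 3×3 convolutions, and a 1×1 convolution fuses the multiscale responses.

```python
# Minimal sketch of a multiscale dilated-convolution block in the spirit of
# MSAG; hypothetical, not the authors' implementation.
import torch
import torch.nn as nn

class MultiscaleDilatedBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Each branch is a 3x3 conv; dilation widens the receptive field
        # without adding parameters (the kernel stays 3x3).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        )
        # A 1x1 conv fuses the concatenated multiscale responses back to
        # the input channel count.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(multiscale) + x)  # residual connection

x = torch.randn(1, 32, 64, 64)              # a feature map from one modality
print(MultiscaleDilatedBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```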
Affiliation(s)
- Chaosheng Tang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454000, PR China
- Mengbo Xi
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454000, PR China
- Junding Sun
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454000, PR China
- Shuihua Wang
- Department of Biological Sciences, Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu 215123, China
- Yudong Zhang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan 454000, PR China; School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2. Kang S, Kim SW, Seong JK. Disentangling brain atrophy heterogeneity in Alzheimer's disease: A deep self-supervised approach with interpretable latent space. Neuroimage 2024; 297:120737. [PMID: 39004409] [DOI: 10.1016/j.neuroimage.2024.120737]
Abstract
Alzheimer's disease (AD) is heterogeneous, but existing methods for capturing this heterogeneity through dimensionality reduction and unsupervised clustering have limitations when it comes to extracting intricate atrophy patterns. In this study, we propose a deep learning-based self-supervised framework that characterizes complex atrophy features using latent space representation. It integrates feature engineering, classification, and clustering to synergistically disentangle heterogeneity in Alzheimer's disease. Through this representation learning, we trained a clustered latent space with distinct atrophy patterns and clinical characteristics in AD, and replicated the findings in prodromal Alzheimer's disease. Moreover, we discovered that these clusters are not attributable solely to subtypes but also reflect disease progression in the latent space, representing the two core dimensions of heterogeneity: progression and subtype. Furthermore, longitudinal latent space analysis revealed two distinct disease progression pathways: a medial temporal pathway and a parietotemporal pathway. The proposed approach enables effective latent representations that can be integrated with individual-level cognitive profiles, thereby facilitating a comprehensive understanding of AD heterogeneity.
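As a rough illustration of the "cluster a learned latent space" step, the toy sketch below encodes per-subject atrophy features and clusters the embeddings with k-means. The encoder is a stand-in and the feature dimensions are invented; this is not the paper's self-supervised pipeline.

```python
# Toy sketch: cluster subjects in a learned latent space (illustrative only;
# the encoder is a stand-in, not the paper's self-supervised model).
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

encoder = nn.Sequential(             # stand-in for a trained encoder
    nn.Linear(100, 32), nn.ReLU(),   # 100 regional atrophy features -> 32-d
    nn.Linear(32, 8),                # 8-d latent representation
)

atrophy = torch.randn(200, 100)      # 200 subjects x 100 regional features
with torch.no_grad():
    latent = encoder(atrophy).numpy()

# Each cluster in the latent space is a candidate atrophy pattern.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(latent)
print(np.bincount(labels))           # subjects per cluster
```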
Affiliation(s)
- Sohyun Kang
- Department of Artificial Intelligence, College of Informatics, Korea University, Seoul 02841, South Korea
- Sung-Woo Kim
- School of Biomedical Engineering, College of Health Science, Korea University, Seoul 02841, South Korea; Department of Neurology, Wonju Severance Christian Hospital, Yonsei University Wonju College of Medicine, Wonju 26426, South Korea; Research Institute of Metabolism and Inflammation, Yonsei University Wonju College of Medicine, Wonju 26426, South Korea
- Joon-Kyung Seong
- Department of Artificial Intelligence, College of Informatics, Korea University, Seoul 02841, South Korea; School of Biomedical Engineering, College of Health Science, Korea University, Seoul 02841, South Korea; Interdisciplinary Program in Precision Public Health, College of Health Science, Korea University, Seoul 02841, South Korea
3. Young AL, Oxtoby NP, Garbarino S, Fox NC, Barkhof F, Schott JM, Alexander DC. Data-driven modelling of neurodegenerative disease progression: thinking outside the black box. Nat Rev Neurosci 2024; 25:111-130. [PMID: 38191721] [DOI: 10.1038/s41583-023-00779-6]
Abstract
Data-driven disease progression models are an emerging set of computational tools that reconstruct disease timelines for long-term chronic diseases, providing unique insights into disease processes and their underlying mechanisms. Such methods combine a priori human knowledge and assumptions with large-scale data processing and parameter estimation to infer long-term disease trajectories from short-term data. In contrast to 'black box' machine learning tools, data-driven disease progression models typically require fewer data and are inherently interpretable, thereby aiding disease understanding in addition to enabling classification, prediction and stratification. In this Review, we place the current landscape of data-driven disease progression models in a general framework and discuss their enhanced utility for constructing a disease timeline compared with wider machine learning tools that construct static disease profiles. We review the insights they have enabled across multiple neurodegenerative diseases, notably Alzheimer disease, for applications such as determining temporal trajectories of disease biomarkers, testing hypotheses about disease mechanisms and uncovering disease subtypes. We outline key areas for technological development and translation to a broader range of neuroscience and non-neuroscience applications. Finally, we discuss potential pathways and barriers to integrating disease progression models into clinical practice and trial settings.
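To make the "long-term trajectory from short-term data" idea concrete, here is a deliberately simplified toy example (my own, not a model from the Review): disease progression models often posit a parametric form, such as a sigmoid, for a biomarker's trajectory over disease time, and estimate its parameters from noisy observations.

```python
# Toy illustration: fit a parametric (sigmoid) biomarker trajectory, a common
# building block in disease progression modelling; not a model from the Review.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, b, t0):
    """Biomarker value at disease time t: amplitude a, steepness b, midpoint t0."""
    return a / (1.0 + np.exp(-b * (t - t0)))

rng = np.random.default_rng(0)
t = rng.uniform(0, 20, size=150)                  # synthetic disease times
y = sigmoid(t, 2.0, 0.6, 10.0) + rng.normal(0, 0.1, size=t.size)

(a, b, t0), _ = curve_fit(sigmoid, t, y, p0=(1.0, 0.5, 8.0))
print(f"amplitude={a:.2f}, steepness={b:.2f}, midpoint={t0:.2f}")
```

In practice such trajectories are typically fitted jointly across biomarkers and subjects, with each subject's position on the disease timeline itself a latent variable to be inferred; the sketch compresses all of that away.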
Affiliation(s)
- Alexandra L Young
- UCL Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Neil P Oxtoby
- UCL Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Sara Garbarino
- Life Science Computational Laboratory, IRCCS Ospedale Policlinico San Martino, Genova, Italy
- Nick C Fox
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, UK
- Frederik Barkhof
- UCL Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Jonathan M Schott
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, UK
- Daniel C Alexander
- UCL Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
4. Fedorov A, Geenjaar E, Wu L, Sylvain T, DeRamus TP, Luck M, Misiura M, Mittapalle G, Hjelm RD, Plis SM, Calhoun VD. Self-supervised multimodal learning for group inferences from MRI data: Discovering disorder-relevant brain regions and multimodal links. Neuroimage 2024; 285:120485. [PMID: 38110045] [PMCID: PMC10872501] [DOI: 10.1016/j.neuroimage.2023.120485]
Abstract
In recent years, deep learning approaches have gained significant attention for predicting brain disorders from neuroimaging data. However, conventional methods often rely on single-modality data and supervised models, which provide only a limited perspective on the intricacies of the highly complex brain. Moreover, the scarcity of accurate diagnostic labels in clinical settings hinders the applicability of supervised models. To address these limitations, we propose a novel self-supervised framework for extracting multiple representations from multimodal neuroimaging data to enhance group inferences and enable analysis without resorting to labeled data during pre-training. Our approach leverages Deep InfoMax (DIM), a self-supervised methodology renowned for its efficacy in learning representations by estimating mutual information without the need for explicit labels. While DIM has shown promise in predicting brain disorders from single-modality MRI data, its potential for multimodal data remains untapped. This work extends DIM to multimodal neuroimaging data, allowing us to identify disorder-relevant brain regions and explore multimodal links. We present compelling evidence of the efficacy of our multimodal DIM analysis in uncovering disorder-relevant brain regions, including the hippocampus, caudate, and insula, and multimodal links involving the thalamus, precuneus, subthalamus, and hypothalamus. Our self-supervised representations demonstrate promising capabilities in predicting the presence of brain disorders across a spectrum of Alzheimer's phenotypes. Comparative evaluations against state-of-the-art unsupervised methods based on autoencoders, canonical correlation analysis, and supervised models highlight the superiority of the proposed method in classification performance, capture of joint information, and interpretability. The computational efficiency of the decoder-free strategy enhances its practical utility, as it saves compute resources without compromising performance. This work offers a significant step forward in addressing the challenge of understanding multimodal links in complex brain disorders, with potential applications in neuroimaging research and clinical diagnosis.
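A rough flavour of the mutual-information objective behind DIM can be conveyed with an InfoNCE-style contrastive loss between paired embeddings from two modalities, a standard mutual-information lower-bound estimator. The sketch below is illustrative and does not reproduce the authors' local/global DIM objectives; names and shapes are assumptions.

```python
# Illustrative InfoNCE-style loss between two modalities (a standard mutual-
# information lower bound); not the authors' exact Deep InfoMax objectives.
import torch
import torch.nn.functional as F

def cross_modal_infonce(z_a: torch.Tensor, z_b: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings; row i of each comes from the same
    subject, so matching rows are positives and all other rows negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_a.size(0))    # the diagonal holds the positives
    return F.cross_entropy(logits, targets)

z_smri = torch.randn(16, 64)   # e.g. structural MRI embeddings (toy)
z_fmri = torch.randn(16, 64)   # e.g. functional MRI embeddings (toy)
print(cross_modal_infonce(z_smri, z_fmri))
```

Pulling matched cross-modal pairs together while repelling mismatched ones is what lets objectives of this kind learn shared structure without diagnostic labels.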
Affiliation(s)
- Alex Fedorov
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Eloy Geenjaar
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Lei Wu
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Thomas P DeRamus
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Margaux Luck
- Mila - Quebec AI Institute, Montréal, QC, Canada
- Maria Misiura
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Girish Mittapalle
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- R Devon Hjelm
- Mila - Quebec AI Institute, Montréal, QC, Canada; Apple Machine Learning Research, Seattle, WA, USA
- Sergey M Plis
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
- Vince D Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, USA
5. Zhao Y, Guo Q, Zhang Y, Zheng J, Yang Y, Du X, Feng H, Zhang S. Application of Deep Learning for Prediction of Alzheimer's Disease in PET/MR Imaging. Bioengineering (Basel) 2023; 10:1120. [PMID: 37892850] [PMCID: PMC10604050] [DOI: 10.3390/bioengineering10101120]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide. Positron emission tomography/magnetic resonance (PET/MR) imaging is a promising technique that combines the advantages of PET and MR to provide both functional and structural information of the brain. Deep learning (DL) is a subfield of machine learning (ML) and artificial intelligence (AI) that focuses on developing algorithms and models inspired by the structure and function of the human brain's neural networks. DL has been applied to various aspects of PET/MR imaging in AD, such as image segmentation, image reconstruction, diagnosis and prediction, and visualization of pathological features. In this review, we introduce the basic concepts and types of DL algorithms, such as feedforward neural networks, convolutional neural networks, recurrent neural networks, and autoencoders. We then summarize the current applications and challenges of DL in PET/MR imaging in AD, and discuss future directions and opportunities in automated diagnosis, predictive modeling, and personalized medicine. We conclude that DL has great potential to improve the quality and efficiency of PET/MR imaging in AD and to provide new insights into the pathophysiology and treatment of this devastating disease.
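Of the DL families the review enumerates, the autoencoder is the simplest to show in code. The sketch below is a generic, self-contained illustration (invented shapes; not tied to any PET/MR pipeline from the review): an encoder compresses the input to a low-dimensional code, a decoder reconstructs it, and the reconstruction error is the training signal.

```python
# Generic autoencoder sketch, one of the DL families named in the review
# (illustrative shapes; not a PET/MR-specific model).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 784, code_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.randn(8, 784)                     # a batch of flattened image patches
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
print(float(loss))
```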
Affiliation(s)
- Yan Zhao
- Department of Information Center, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Qianrui Guo
- Department of Nuclear Medicine, Beijing Cancer Hospital, Beijing 100142, China
- Yukun Zhang
- Department of Radiology, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Jia Zheng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Yang Yang
- Beijing United Imaging Research Institute of Intelligent Imaging, Beijing 100094, China
- Xuemei Du
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Hongbo Feng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Shuo Zhang
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China