1
Zhou F, Hu S, Du X, Lu Z. Motico: An attentional mechanism network model for smart aging disease risk prediction based on image data classification. Comput Biol Med 2024; 178:108763. [PMID: 38889629] [DOI: 10.1016/j.compbiomed.2024.108763] [Received: 12/03/2023] [Revised: 06/06/2024] [Accepted: 06/13/2024]
Abstract
Disease risk prediction models with large numbers of parameters struggle to run smoothly on mobile terminals such as tablets and phones in smart elderly-care application scenarios. To further reduce the parameter count and enable disease risk prediction to run smoothly on mobile terminals, we designed Motico (an attention mechanism network model for image data classification). To preserve image features during implementation, we designed an image data preprocessing method and an attention mechanism network for image data classification. Motico's parameters occupy only 5.26 MB, and the model uses only 135.69 MB of memory. In our experiments, disease risk prediction achieved 96% accuracy, 97% precision, 93% recall, 98% specificity, an F1 score of 95%, and an AUC of 95%. These results show that our Motico model can perform classification-based prediction with an image data classification attention mechanism network on mobile terminals.
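The abstract above describes attention-based classification of image data but gives no implementation detail. As a rough illustration of the core mechanism only (function names and the toy data are ours, not the authors'), the sketch below pools a set of image-patch feature vectors with dot-product attention weights, the building block such classifiers typically rely on:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(patch_feats, query):
    """Pool patch feature vectors into one vector, weighting each
    patch by its scaled dot-product similarity to a query vector."""
    d = len(query)
    scores = [sum(p * q for p, q in zip(patch, query)) / math.sqrt(d)
              for patch in patch_feats]
    weights = softmax(scores)
    dim = len(patch_feats[0])
    return [sum(w * patch[i] for w, patch in zip(weights, patch_feats))
            for i in range(dim)]

# Three 2-D "patch" features; the query attends mostly to the second patch.
patches = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
pooled = attention_pool(patches, query=[0.0, 4.0])
```

The attention weights sum to one, so the pooled vector stays in the convex hull of the patch features, which is one reason attention pooling is parameter-light and suited to mobile deployment.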
Affiliation(s)
- Feng Zhou
- School of Computer Science, Fudan University, No. 2005 Songhu Road, Shanghai, 200438, China
- Shijing Hu
- School of Computer Science, Fudan University, No. 2005 Songhu Road, Shanghai, 200438, China
- Xin Du
- School of Computer Science, Fudan University, No. 2005 Songhu Road, Shanghai, 200438, China
- Zhihui Lu
- School of Computer Science, Fudan University, No. 2005 Songhu Road, Shanghai, 200438, China
2
Wang Y, Gao R, Wei T, Johnston L, Yuan X, Zhang Y, Yu Z. Predicting long-term progression of Alzheimer's disease using a multimodal deep learning model incorporating interaction effects. J Transl Med 2024; 22:265. [PMID: 38468358] [PMCID: PMC10926590] [DOI: 10.1186/s12967-024-05025-w] [Received: 11/08/2023] [Accepted: 02/24/2024]
Abstract
BACKGROUND Identifying individuals with mild cognitive impairment (MCI) at risk of progressing to Alzheimer's disease (AD) provides a unique opportunity for early interventions. Therefore, accurate and long-term prediction of the conversion from MCI to AD is desired but, to date, remains challenging. Here, we developed an interpretable deep learning model featuring a novel design that incorporates interaction effects and multimodality to improve the prediction accuracy and horizon for MCI-to-AD progression. METHODS This multi-center, multi-cohort retrospective study collected structural magnetic resonance imaging (sMRI), clinical assessments, and genetic polymorphism data of 252 patients with MCI at baseline from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our deep learning model was cross-validated on the ADNI-1 and ADNI-2/GO cohorts and further generalized in the ongoing ADNI-3 cohort. We evaluated the model performance using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score. RESULTS On the cross-validation set, our model achieved superior results for predicting MCI conversion within 4 years (AUC, 0.962; accuracy, 92.92%; sensitivity, 88.89%; specificity, 95.33%) compared to all existing studies. In the independent test, our model exhibited consistent performance with an AUC of 0.939 and an accuracy of 92.86%. Integrating interaction effects and multimodal data into the model significantly increased prediction accuracy by 4.76% (P = 0.01) and 4.29% (P = 0.03), respectively. Furthermore, our model demonstrated robustness to inter-center and inter-scanner variability, while generating interpretable predictions by quantifying the contribution of multimodal biomarkers. 
CONCLUSIONS The proposed deep learning model presents a novel perspective by combining interaction effects and multimodality, leading to more accurate and longer-term predictions of AD progression, which promises to improve pre-dementia patient care.
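The study above reports that modeling interaction effects between modalities significantly improved accuracy, without detailing the mechanism. One common way to expose such effects to a model is to augment the inputs with cross-modal product terms; the sketch below illustrates this general idea only and is not the authors' architecture (names and toy values are assumptions):

```python
def interaction_features(imaging, genetic):
    """Augment concatenated modality features with pairwise
    imaging x genetic products, so a downstream model can weight
    interaction effects explicitly rather than infer them."""
    crosses = [x * g for x in imaging for g in genetic]
    return imaging + genetic + crosses

# 2 imaging features and 3 genetic features yield 2 + 3 + 6 = 11 inputs.
feats = interaction_features([0.5, 2.0], [1.0, 3.0, -1.0])
```

The quadratic growth in cross terms is why real models usually learn a low-rank or attention-based form of this expansion instead of enumerating every product.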
Affiliation(s)
- Yifan Wang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- Ruitian Gao
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- Ting Wei
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- Luke Johnston
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Xin Yuan
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- Yue Zhang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- Zhangsheng Yu
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai, 200240, China
- SJTU-Yale Joint Center for Biostatistics and Data Science, Shanghai Jiao Tong University, Shanghai, China
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
3
Guo R, Tian X, Lin H, McKenna S, Li HD, Guo F, Liu J. Graph-based fusion of imaging, genetic and clinical data for degenerative disease diagnosis. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:57-68. [PMID: 37991907] [DOI: 10.1109/tcbb.2023.3335369]
Abstract
Graph learning methods have achieved noteworthy performance in disease diagnosis owing to their ability to represent unstructured information such as inter-subject relationships. Although imaging, genetic and clinical data are all known to be crucial for degenerative disease diagnosis, how best to exploit the relationships among them remains a challenging and largely unaddressed problem. This study proposes a novel graph-based fusion (GBF) approach to meet this challenge. To extract effective imaging-genetic features, we propose an imaging-genetic fusion module that uses an attention mechanism to obtain modality-specific and joint representations within and between imaging and genetic data. Then, given the value of clinical information for diagnosing degenerative diseases, we propose a multi-graph fusion module that further fuses imaging-genetic and clinical features, adopting a learnable graph construction strategy and a graph ensemble method. Experimental results on two degenerative disease diagnosis benchmarks (the Alzheimer's Disease Neuroimaging Initiative and the Parkinson's Progression Markers Initiative) demonstrate its effectiveness compared to state-of-the-art graph-based methods. Our findings should help guide further development of graph-based models for imaging, genetic and clinical data.
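The abstract above relies on constructing a subject graph and propagating features over it. As a minimal sketch of that general pattern (not the paper's learnable construction; thresholds, names and data are our own assumptions), the code below links subjects by feature similarity and runs one mean-aggregation step, a simplified graph-convolution layer:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def build_adjacency(feats, threshold=0.9):
    """Connect subjects whose cosine similarity exceeds the threshold;
    self-loops keep each node's own signal in the aggregation."""
    n = len(feats)
    return [[1.0 if i == j or cosine(feats[i], feats[j]) > threshold else 0.0
             for j in range(n)] for i in range(n)]

def propagate(adj, feats):
    """One propagation step: each subject averages the features of its
    graph neighbours (degree-normalized aggregation)."""
    out = []
    for row in adj:
        deg = sum(row)
        out.append([sum(row[j] * feats[j][k] for j in range(len(feats))) / deg
                    for k in range(len(feats[0]))])
    return out

# Two similar subjects and one outlier: only the similar pair is linked.
subjects = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
adj = build_adjacency(subjects)
smoothed = propagate(adj, subjects)
```

In the paper's setting the adjacency itself is learned and multiple graphs are ensembled; this fixed-threshold version only shows why linked subjects end up with smoothed, more similar representations.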
4
Wang T, Chen X, Zhang J, Feng Q, Huang M. Deep multimodality-disentangled association analysis network for imaging genetics in neurodegenerative diseases. Med Image Anal 2023; 88:102842. [PMID: 37247468] [DOI: 10.1016/j.media.2023.102842] [Received: 11/02/2022] [Revised: 03/01/2023] [Accepted: 05/15/2023]
Abstract
Imaging genetics is a crucial tool for exploring potentially disease-related biomarkers, particularly for neurodegenerative diseases (NDs). With advances in imaging technology, association analysis between multimodal imaging data and genetic data has drawn growing attention in imaging genetics studies. In traditional methods, however, multimodal data are fused first and then correlated with genetic data, leaving their common and complementary information incompletely explored. In addition, inaccurate modeling of the complex relationships between imaging and genetic data, together with the information loss caused by missing multimodal data, remains an open problem in imaging genetics. Therefore, this study proposes a deep multimodality-disentangled association analysis network (DMAAN) that addresses these issues while detecting disease-related biomarkers of NDs. First, the imaging data are nonlinearly projected into a latent space to obtain imaging representations, which a multimodal-disentangled module further separates into common and specific parts. Second, the genetic data are encoded into genetic representations, which are then nonlinearly mapped to the common and specific imaging representations, building nonlinear associations between imaging and genetic data through an association analysis module. Moreover, modality mask vectors are synchronously synthesized to integrate the genetic and imaging data, which aids the subsequent disease diagnosis. Finally, the proposed method achieves reasonable diagnostic performance via a disease diagnosis module and uses label information to detect disease-related modality-shared and modality-specific biomarkers. Furthermore, under our learning strategy the genetic representation can be used to impute missing multimodal data.
Two publicly available datasets with different NDs are used to demonstrate the effectiveness of the proposed DMAAN. The experimental results show that DMAAN can identify disease-related biomarkers, suggesting it may provide new insights into the pathological mechanism and early diagnosis of NDs. The code is publicly available at https://github.com/Meiyan88/DMAAN.
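The central decomposition in the abstract above, splitting modality representations into a common part and modality-specific parts, is learned by encoders in the actual DMAAN. Purely to illustrate the decomposition itself (the mean/residual split and all names below are our simplification, not the authors' method):

```python
def disentangle(modality_reprs):
    """Toy disentanglement: the common part is the elementwise mean
    across modality representations; each specific part is the residual
    left after removing the common part, so common + specific
    reconstructs every modality exactly."""
    n = len(modality_reprs)
    dim = len(modality_reprs[0])
    common = [sum(r[i] for r in modality_reprs) / n for i in range(dim)]
    specific = [[r[i] - common[i] for i in range(dim)]
                for r in modality_reprs]
    return common, specific

# Two toy modality representations (e.g., MRI-derived and PET-derived).
mri = [1.0, 2.0]
pet = [3.0, 2.0]
common, specific = disentangle([mri, pet])
```

The exact-reconstruction property is what lets a shared representation stand in for a missing modality, which is the intuition behind DMAAN's imputation of missing multimodal data.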
Affiliation(s)
- Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
5
Mulyadi AW, Jung W, Oh K, Yoon JS, Lee KH, Suk HI. Estimating explainable Alzheimer's disease likelihood map via clinically-guided prototype learning. Neuroimage 2023; 273:120073. [PMID: 37037063] [DOI: 10.1016/j.neuroimage.2023.120073] [Received: 12/27/2022] [Revised: 03/03/2023] [Accepted: 03/30/2023]
Abstract
Identifying Alzheimer's disease (AD) requires a deliberate diagnostic process owing to its irreversibility and its subtle, gradual progression. These characteristics make AD biomarker identification from structural brain imaging (e.g., structural MRI) scans quite challenging. Using clinically-guided prototype learning, we propose a novel deep-learning approach, eXplainable AD Likelihood Map Estimation (XADLiME), for AD progression modeling over 3D sMRIs. Specifically, we establish a set of topologically-aware prototypes over clusters of latent clinical features, uncovering an AD spectrum manifold. Treating this pseudo map as an enriched reference, we employ an estimating network to approximate the AD likelihood map for a 3D sMRI scan. We further promote the explainability of the likelihood map by providing a comprehensible overview from clinical and morphological perspectives. During inference, the estimated likelihood map serves as a substitute for unseen sMRI scans, effectively supporting the downstream task while providing thoroughly explainable states.
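A core step in prototype learning as described above is scoring how close a scan's latent features lie to each clinical prototype. The sketch below shows one standard way to turn prototype distances into a normalized likelihood (a softmax over negative squared distances); the prototype values, spectrum labels and names are illustrative assumptions, not the XADLiME implementation:

```python
import math

def prototype_likelihood(feature, prototypes, temperature=1.0):
    """Softmax over negative squared distances to prototypes:
    closer prototypes receive higher likelihood mass."""
    d2 = [sum((f - p) ** 2 for f, p in zip(feature, proto))
          for proto in prototypes]
    m = min(d2)  # subtract the minimum for numerical stability
    exps = [math.exp(-(d - m) / temperature) for d in d2]
    s = sum(exps)
    return [e / s for e in exps]

# Toy prototypes ordered along an assumed CN -> MCI -> AD spectrum.
protos = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
likelihood = prototype_likelihood([1.9, 2.1], protos)
```

Because the scores are distances to named clinical prototypes, the resulting likelihoods are directly interpretable, which is the explainability argument the abstract makes.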
Affiliation(s)
- Ahmad Wisnu Mulyadi
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kwanseok Oh
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kun Ho Lee
- Gwangju Alzheimer's & Related Dementia Cohort Research Center, Chosun University, Gwangju 61452, Republic of Korea; Department of Biomedical Science, Chosun University, Gwangju 61452, Republic of Korea; Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Heung-Il Suk
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea