1. Zuo Q, Wu H, Chen CLP, Lei B, Wang S. Prior-Guided Adversarial Learning With Hypergraph for Predicting Abnormal Connections in Alzheimer's Disease. IEEE Transactions on Cybernetics 2024; 54:3652-3665. [PMID: 38236677] [DOI: 10.1109/tcyb.2023.3344641]
Abstract
Alzheimer's disease (AD) is characterized by alterations of the brain's structural and functional connectivity during its progressive degenerative processes. Existing auxiliary diagnostic methods have accomplished the classification task, but few of them can accurately evaluate the changing characteristics of brain connectivity. In this work, a prior-guided adversarial learning with hypergraph (PALH) model is proposed to predict abnormal brain connections using triple-modality medical images. Concretely, a prior distribution from anatomical knowledge is estimated to guide multimodal representation learning using an adversarial strategy. Also, the pairwise collaborative discriminator structure is further utilized to narrow the difference in representation distribution. Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images. Experimental results demonstrate that the proposed model outperforms other related methods in analyzing and predicting AD progression. More importantly, the identified abnormal connections are partly consistent with previous neuroscience discoveries. The proposed model can evaluate the characteristics of abnormal brain connections at different stages of AD, which is helpful for cognitive disease study and early treatment.
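The hypergraph perceptual network rests on standard hypergraph machinery. As an illustrative sketch only (not the authors' PALH code), the normalized hypergraph Laplacian popularized by Zhou et al. can be built from a vertex-hyperedge incidence matrix; the toy incidence matrix `H` and all names below are assumptions for illustration:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    from a binary incidence matrix H (n_vertices x n_edges) and edge weights w."""
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    Dv = (H * w).sum(axis=1)                        # weighted vertex degrees
    De = H.sum(axis=0)                              # hyperedge degrees
    Dv_inv_sqrt = 1.0 / np.sqrt(np.maximum(Dv, 1e-12))
    M = Dv_inv_sqrt[:, None] * H                    # Dv^{-1/2} H
    Theta = (M * (w / np.maximum(De, 1e-12))) @ M.T # Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    return np.eye(n) - Theta

# Toy example: three brain regions, two hyperedges {0,1} and {1,2}.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
L = hypergraph_laplacian(H)
```

By construction L is symmetric and annihilates Dv^{1/2}, the hypergraph analogue of the constant null vector of an ordinary graph Laplacian.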
2. Odusami M, Maskeliūnas R, Damaševičius R, Misra S. Machine learning with multimodal neuroimaging data to classify stages of Alzheimer's disease: a systematic review and meta-analysis. Cogn Neurodyn 2024; 18:775-794. [PMID: 38826669] [PMCID: PMC11143094] [DOI: 10.1007/s11571-023-09993-5]
Abstract
In recent years, Alzheimer's disease (AD) has become a serious threat to human health. Researchers and clinicians alike face a significant obstacle in accurately identifying and classifying AD stages. Several studies have shown that multimodal neuroimaging input can provide valuable insights into the structural and functional changes in the brain related to AD. Machine learning (ML) algorithms can accurately categorize AD phases by identifying patterns and linkages in multimodal neuroimaging data using powerful computational methods. This study aims to assess the contribution of ML methods to the accurate classification of the stages of AD using multimodal neuroimaging data. A systematic search was carried out in the IEEE Xplore, Science Direct/Elsevier, ACM Digital Library, and PubMed databases, with forward snowballing performed on Google Scholar. The quantitative analysis used 47 studies. The explainable analysis was performed on the classification algorithms and fusion methods used in the selected studies. Pooled sensitivity and specificity, including diagnostic efficiency, were evaluated via a meta-analysis based on a bivariate model with the hierarchical summary receiver operating characteristic (ROC) curve. The Wilcoxon signed-rank test was further used to statistically compare the accuracy scores of the existing models. Pooled sensitivity was 83.77% (95% CI: 78.87-87.71%) for distinguishing participants with mild cognitive impairment (MCI) from healthy controls (NC), 94.60% (90.76-96.89%) for AD versus NC, 80.41% (74.73-85.06%) for progressive MCI (pMCI) versus stable MCI (sMCI), and 86.63% (82.43-89.95%) for early MCI (EMCI) versus NC. Pooled specificity was 79.16% (70.97-87.71%) for MCI versus NC, 93.49% (91.60-94.90%) for AD versus NC, 81.44% (76.32-85.66%) for pMCI versus sMCI, and 85.68% (81.62-88.96%) for EMCI versus NC. The Wilcoxon signed-rank test showed a low P-value across all classification tasks. Multimodal neuroimaging data with ML is promising for classifying the stages of AD, but more research is required to increase the validity of its application in clinical practice.
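For intuition about how per-study sensitivities combine into a pooled estimate, a simplified fixed-effect pooling on the logit scale can be sketched as follows. Note this is not the bivariate hierarchical model the review actually fits (it ignores between-study heterogeneity and the sensitivity/specificity correlation), and the study counts below are invented:

```python
import math

def pool_proportions(events, totals):
    """Fixed-effect pooling of per-study proportions (e.g. sensitivities)
    on the logit scale with inverse-variance weights. Illustration only:
    a full diagnostic-accuracy meta-analysis would use a bivariate
    hierarchical (random-effects) model instead."""
    num, den = 0.0, 0.0
    for e, n in zip(events, totals):
        e_adj = e + 0.5                               # continuity correction
        f_adj = (n - e) + 0.5
        logit = math.log(e_adj / f_adj)
        weight = 1.0 / (1.0 / e_adj + 1.0 / f_adj)    # ~ 1 / Var(logit)
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))      # back to proportion scale

# Two hypothetical studies: 80/100 and 90/100 true positives.
pooled = pool_proportions([80, 90], [100, 100])
```

The pooled value necessarily falls between the smallest and largest per-study proportions, weighted toward the more precise study.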
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Sanjay Misra
- Department of Applied Data Science, Institute for Energy Technology, Halden, Norway
3. Chen Z, Liu Y, Zhang Y, Zhu J, Li Q, Wu X. Shared Manifold Regularized Joint Feature Selection for Joint Classification and Regression in Alzheimer's Disease Diagnosis. IEEE Transactions on Image Processing 2024; 33:2730-2745. [PMID: 38578858] [DOI: 10.1109/tip.2024.3382600]
Abstract
In Alzheimer's disease (AD) diagnosis, joint feature selection for predicting disease labels (classification) and estimating cognitive scores (regression) with neuroimaging data has received increasing attention. In this paper, we propose a model named Shared Manifold regularized Joint Feature Selection (SMJFS) that performs classification and regression in a unified framework for AD diagnosis. For classification, unlike existing works that build least squares regression models, which are limited in their ability to extract discriminative information, we design an objective function that integrates linear discriminant analysis and subspace sparsity regularization to acquire an informative feature subset. Furthermore, local data relationships are learned from the samples' transformed distances to exploit the local data structure adaptively. For regression, in contrast to previous works that overlook the correlations among cognitive scores, we learn a latent score space to capture these correlations and use it to design a regression model with l2,1-norm regularization, facilitating feature selection in the regression task. Moreover, missing cognitive scores can be recovered in the latent space, increasing the number of available training samples. Meanwhile, to capture the correlations between the two tasks and describe the local relationships between samples, we construct an adaptive shared graph that simultaneously guides the subspace learning in classification and the latent cognitive score learning in regression. An efficient iterative optimization algorithm is proposed to solve the optimization problem. Extensive experiments on three datasets validate the discriminability of the features selected by SMJFS.
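The l2,1-norm regularization used here for joint feature selection has a simple closed form (the sum of row-wise l2 norms of the coefficient matrix), and its proximal operator shrinks entire rows to zero, which is what discards uninformative features across tasks. A minimal numpy sketch, not the SMJFS solver itself:

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm: sum_i ||W[i, :]||_2. Penalizing it encourages whole rows
    (i.e. whole features, across all tasks) to vanish."""
    return np.linalg.norm(W, axis=1).sum()

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise soft thresholding.
    Rows with l2 norm below t are zeroed out entirely."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0],    # strong feature (row norm 5)
              [0.3, 0.4]])   # weak feature (row norm 0.5)
W_shrunk = prox_l21(W, 1.0)
```

With threshold 1.0, the strong row is scaled by 0.8 while the weak row is eliminated, illustrating the feature-selection effect.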
4. Zhang H, Chen J, Liao B, Wu FX, Bi XA. Deep Canonical Correlation Fusion Algorithm Based on Denoising Autoencoder for ASD Diagnosis and Pathogenic Brain Region Identification. Interdiscip Sci 2024. [PMID: 38573456] [DOI: 10.1007/s12539-024-00625-y]
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition distinguished by unconventional neural activities. Early intervention is key to managing the progression of ASD, and current research primarily focuses on the use of structural magnetic resonance imaging (sMRI) or resting-state functional magnetic resonance imaging (rs-fMRI) for diagnosis. Moreover, the use of autoencoders for disease classification has not been sufficiently explored. In this study, we introduce a new autoencoder-based framework, the Deep Canonical Correlation Fusion algorithm based on Denoising Autoencoder (DCCF-DAE), which proves effective in handling high-dimensional data. The framework efficiently extracts features from different types of data with an advanced autoencoder, fuses these features through the DCCF model, and then uses the fused features for disease classification. DCCF integrates functional and structural data to help accurately diagnose ASD and identify critical Regions of Interest (ROIs) in disease mechanisms. We compare the proposed framework with other methods on the Autism Brain Imaging Data Exchange (ABIDE) database, and the results demonstrate its outstanding performance in ASD diagnosis. The superiority of DCCF-DAE highlights its potential as a crucial tool for early ASD diagnosis and monitoring.
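The canonical-correlation step at the heart of DCCF can be illustrated with plain linear CCA on raw features; the paper applies a deep variant to denoising-autoencoder features, so this whitening-plus-SVD sketch (with synthetic data) is only a simplified stand-in:

```python
import numpy as np

def cca(X, Y, n_comp=2, reg=1e-6):
    """Linear CCA: whiten each view's covariance, then SVD of the
    cross-covariance. Returns the projected views and the canonical
    correlations (singular values)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    Wx = Kx @ U[:, :n_comp]
    Wy = Ky @ Vt[:n_comp].T
    return X @ Wx, Y @ Wy, s[:n_comp]

# Synthetic "two-view" data: Y is a noisy linear function of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
Y = X @ rng.normal(size=(3, 4)) + 0.01 * rng.normal(size=(300, 4))
Zx, Zy, corrs = cca(X, Y)
```

Because the two views share almost all their variation, the leading canonical correlation is close to 1; fusing `Zx` and `Zy` (e.g. by concatenation) gives features aligned across modalities.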
Affiliation(s)
- Huilian Zhang
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Jie Chen
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Bo Liao
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- Fang-Xiang Wu
- Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, S7N5A9, Canada
- Xia-An Bi
- Key Laboratory of Data Science and Intelligence Education, Ministry of Education, Hainan Normal University, Haikou, 571126, China
- College of Mathematics and Statistics, Hainan Normal University, Haikou, 571126, China
- College of Information Science and Engineering, Hunan Normal University, Changsha, Hunan, 410081, China
5. Vedaei F, Mashhadi N, Alizadeh M, Zabrecky G, Monti D, Wintering N, Navarreto E, Hriso C, Newberg AB, Mohamed FB. Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging. Front Neurosci 2024; 17:1333725. [PMID: 38312737] [PMCID: PMC10837852] [DOI: 10.3389/fnins.2023.1333725]
Abstract
Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework that uses an autoencoder (AE) with rectified linear unit (ReLU) and sigmoid activation functions to extract high-level latent features. Single-modality and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify the top-performing features learned by the AEs. Our method delivered a classification accuracy in the range of 79-91.67% for single neuroimaging modalities; with the multimodality model, classification accuracy improved to 95.83%. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to provide objective biomarkers for predicting mTBI in clinical settings.
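The AE forward pass described above (ReLU encoder, sigmoid decoder) can be sketched with untrained numpy weights; the layer sizes and names are hypothetical and the training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyAutoencoder:
    """One-hidden-layer autoencoder forward pass: ReLU encoder producing
    non-negative latent features, sigmoid decoder suited to inputs scaled
    to [0, 1]. Weights are random here -- architecture sketch only."""
    def __init__(self, n_in, n_latent):
        self.We = rng.normal(0.0, 0.1, size=(n_in, n_latent))
        self.Wd = rng.normal(0.0, 0.1, size=(n_latent, n_in))

    def encode(self, X):
        return relu(X @ self.We)

    def forward(self, X):
        Z = self.encode(X)              # latent features used for classification
        return sigmoid(Z @ self.Wd), Z  # reconstruction in (0, 1)

ae = TinyAutoencoder(n_in=6, n_latent=2)
X = rng.random((5, 6))                  # 5 subjects, 6 imaging metrics (toy)
recon, Z = ae.forward(X)
```

In the study's setting, the latent codes `Z` (one AE per modality) would feed the downstream single- or multimodality classifier.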
Affiliation(s)
- Faezeh Vedaei
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, United States
- Mahdi Alizadeh
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- George Zabrecky
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Daniel Monti
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Nancy Wintering
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Emily Navarreto
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Chloe Hriso
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Andrew B. Newberg
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Feroze B. Mohamed
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
6. Zhang Y, Hu Y, Li K, Pan X, Mo X, Zhang H. Exploring the influence of transformer-based multimodal modeling on clinicians' diagnosis of skin diseases: A quantitative analysis. Digit Health 2024; 10:20552076241257087. [PMID: 38784049] [PMCID: PMC11113036] [DOI: 10.1177/20552076241257087]
Abstract
Objectives: The study aimed to propose a multimodal model that incorporates both macroscopic and microscopic images and to analyze its influence on the decision-making of clinicians with different levels of experience.
Methods: First, we constructed a multimodal dataset for five skin disorders. Next, we trained unimodal models on three different types of images and selected the best-performing models as the base learners. Then, we used a soft voting strategy to create the multimodal model. Finally, 12 clinicians were divided into three groups, each including one director dermatologist, one dermatologist-in-charge, one resident dermatologist, and one general practitioner. They were asked to diagnose the skin disorders in four unaided situations (macroscopic images only; dermatopathological images only; macroscopic and dermatopathological images; all images and metadata) and three aided situations (macroscopic images with model 1 aid; dermatopathological images with models 2 and 3 aid; all images with multimodal model 4 aid). Each clinician's diagnostic accuracy and time per diagnosis were recorded.
Results: Among the trained models, the vision transformer (ViT) achieved the best performance, with accuracies of 0.8636, 0.9545, and 0.9673 and AUCs of 0.9823, 0.9952, and 0.9989 on the training set, respectively; on the external validation set, the unimodal models only achieved accuracies of 0.70, 0.90, and 0.94. The multimodal model performed well compared to the unimodal models, achieving an accuracy of 0.98 on the external validation set. Logistic regression analysis indicates that all models help clinicians make diagnostic decisions [odds ratios (OR) > 1], while metadata does not [OR < 1]. Linear analysis indicates that metadata significantly increases clinicians' diagnosis time (P < 0.05), while model assistance does not (P > 0.05).
Conclusions: These results suggest that the multimodal model effectively improves clinicians' diagnostic performance without significantly increasing diagnostic time. However, further large-scale prospective studies are necessary.
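The soft-voting fusion used to build the multimodal model amounts to averaging the base learners' class-probability outputs and predicting the argmax class. A minimal sketch with made-up probabilities (three hypothetical unimodal models, two cases, three classes):

```python
import numpy as np

def soft_vote(probas, weights=None):
    """Soft voting: (optionally weighted) average of base-learner class
    probabilities, then argmax over classes."""
    probas = np.asarray(probas, dtype=float)      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(probas), 1.0 / len(probas))
    avg = np.tensordot(weights, probas, axes=1)   # (n_samples, n_classes)
    return avg.argmax(axis=1), avg

# Hypothetical probability outputs of three unimodal base learners:
p1 = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
p2 = [[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]]
p3 = [[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]]
labels, avg = soft_vote([p1, p2, p3])
```

For the second case, soft voting picks class 2 even though only one base learner is individually confident in it; averaging the probabilities preserves that evidence, which hard (majority) voting would discard.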
Affiliation(s)
- Yujiao Zhang
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Yunfeng Hu
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Ke Li
- School of the First Clinical Medicine, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Xiangjun Pan
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Xiaoling Mo
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
- Hong Zhang
- Department of Dermatology, The First Affiliated Hospital of Jinan University, Guangzhou, Guangdong, China
7. Gao X, Shi F, Shen D, Liu M. Multimodal transformer network for incomplete image generation and diagnosis of Alzheimer's disease. Comput Med Imaging Graph 2023; 110:102303. [PMID: 37832503] [DOI: 10.1016/j.compmedimag.2023.102303]
Abstract
Multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) could provide complementary information about the brain and have been widely investigated for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD). However, multimodal brain images are often incomplete in clinical practice. It is still challenging to make use of multimodality for disease diagnosis with missing data. In this paper, we propose a deep learning framework with the multi-level guided generative adversarial network (MLG-GAN) and multimodal transformer (Mul-T) for incomplete image generation and disease classification, respectively. First, MLG-GAN is proposed to generate the missing data, guided by multi-level information from voxels, features, and tasks. In addition to voxel-level supervision and task-level constraint, a feature-level auto-regression branch is proposed to embed the features of target images for an accurate generation. With the complete multimodal images, we propose a Mul-T network for disease diagnosis, which can not only combine the global and local features but also model the latent interactions and correlations from one modality to another with the cross-modal attention mechanism. Comprehensive experiments on three independent datasets (i.e., ADNI-1, ADNI-2, and OASIS-3) show that the proposed method achieves superior performance in the tasks of image generation and disease diagnosis compared to state-of-the-art methods.
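The cross-modal attention mechanism in Mul-T follows the generic scaled dot-product pattern, with queries drawn from one modality and keys/values from the other. A numpy sketch of that generic mechanism only (token counts and dimensions are invented, and the learned query/key/value projections of a real transformer layer are omitted):

```python
import numpy as np

def cross_modal_attention(Q, K, V):
    """Scaled dot-product attention where queries come from one modality
    (e.g. MRI tokens) and keys/values from another (e.g. PET tokens)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)             # softmax over PET tokens
    return A @ V, A

rng = np.random.default_rng(0)
mri_tokens = rng.normal(size=(4, 8))   # 4 MRI tokens, dim 8 (hypothetical)
pet_tokens = rng.normal(size=(6, 8))   # 6 PET tokens, dim 8 (hypothetical)
fused, attn = cross_modal_attention(mri_tokens, pet_tokens, pet_tokens)
```

Each fused MRI token is a convex combination of PET tokens, which is how latent interactions from one modality to the other are modeled.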
Affiliation(s)
- Xingyu Gao
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China; School of Biomedical Engineering, ShanghaiTech University, China
- Manhua Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; MoE Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
8. Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. [PMID: 37940064] [DOI: 10.1016/j.phrs.2023.106984]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we examine the transformative impact of ML and DL in this domain. First, we briefly analyze how these algorithms have evolved and which are most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems, all enabled by their ability to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
9. Zhang Q, Sheng J, Zhang Q, Wang L, Yang Z, Xin Y. Enhanced Harris hawks optimization-based fuzzy k-nearest neighbor algorithm for diagnosis of Alzheimer's disease. Comput Biol Med 2023; 165:107392. [PMID: 37669585] [DOI: 10.1016/j.compbiomed.2023.107392]
Abstract
Correctly diagnosing Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI), is crucial for halting deterioration and giving patients early therapy. This paper presents a framework for diagnosing AD that includes magnetic resonance imaging (MRI) preprocessing, feature extraction, and a fuzzy k-nearest neighbor (FKNN) model. In particular, the framework's novelty lies in the use of an improved Harris Hawks Optimization (HHO) algorithm named SSFSHHO, which integrates the Sobol sequence and Stochastic Fractal Search (SFS) mechanisms for optimizing the parameters of FKNN. Incorporating the Sobol sequence improves the overall quality of the initial population, and the SFS mechanism increases the algorithm's capacity to escape local optima. Comparisons with classical meta-heuristic algorithms, state-of-the-art HHO variants in low and high dimensions, and enhanced meta-heuristic algorithms on 30 typical IEEE CEC2014 benchmark problems show that the overall performance of SSFSHHO is significantly better than that of the comparison algorithms. Moreover, the SSFSHHO-FKNN framework is employed to classify AD and MCI using MRI scans from the ADNI dataset, achieving high classification performance in six representative cases. Experimental findings indicate that the proposed algorithm outperforms a number of high-performance optimization algorithms and classical machine learning algorithms, offering a promising approach for AD classification. Additionally, the proposed strategy successfully identifies relevant features and enhances classification performance for AD diagnosis.
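The FKNN classifier at the core of the framework assigns fuzzy class memberships weighted by inverse neighbor distance. A minimal sketch in the style of Keller et al., using crisp training memberships (a simplification of the full fuzzy initialization); the fuzzifier `m` is one of the parameters SSFSHHO would tune:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, X_test, k=3, m=2.0):
    """Fuzzy k-NN: class membership of a query is the distance-weighted
    vote of its k nearest neighbors, weights ~ 1/d^(2/(m-1))."""
    classes = np.unique(y_train)
    preds, memberships = [], []
    for x in np.atleast_2d(X_test):
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]                         # k nearest neighbors
        w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
        w = w / w.sum()                                 # normalize weights
        u = np.array([w[y_train[idx] == c].sum() for c in classes])
        memberships.append(u)                           # soft class memberships
        preds.append(classes[u.argmax()])
    return np.array(preds), np.array(memberships)

# Toy two-class data: two clusters far apart.
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y_train = np.array([0, 0, 1, 1])
preds, U = fuzzy_knn_predict(X_train, y_train,
                             np.array([[0.0, 0.5], [5.0, 5.5]]), k=3)
```

Unlike crisp k-NN, the membership vector `U` quantifies how confidently each query belongs to each class, which is useful near class boundaries.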
Affiliation(s)
- Qian Zhang
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang, 310018, China; School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, Zhejiang, 325035, China
- Jinhua Sheng
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang, 310018, China
- Qiao Zhang
- Beijing Hospital, Beijing, 100730, China; National Center of Gerontology, Beijing, 100730, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Luyun Wang
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang, 310018, China
- Ze Yang
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang, 310018, China
- Yu Xin
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, Zhejiang, 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang, 310018, China
10. Saleh H, Amer E, Abuhmed T, Ali A, Al-Fuqaha A, El-Sappagh S. Computer aided progression detection model based on optimized deep LSTM ensemble model and the fusion of multivariate time series data. Sci Rep 2023; 13:16336. [PMID: 37770490] [PMCID: PMC10539296] [DOI: 10.1038/s41598-023-42796-6]
Abstract
Alzheimer's disease (AD) is the most common form of dementia. Early and accurate detection of AD is crucial for planning disease-modifying therapies that could prevent or delay conversion to severe stages of the disease. As AD is a chronic disease, a patient's multivariate time series data, including neuroimaging, genetics, cognitive scores, and neuropsychological battery results, provides a complete profile of the patient's status. These data have been used to build machine learning and deep learning (DL) models for early detection of the disease. However, these models still have limited performance and are not stable enough to be trusted in real medical settings. The literature shows that DL models outperform classical machine learning models, and ensemble learning has proven to achieve better results than standalone models. This study proposes a novel deep stacking framework that combines multiple DL models to accurately predict AD at an early stage. The study uses long short-term memory (LSTM) models as base models over patients' multivariate time series data to learn deep longitudinal features. Each base LSTM classifier is optimized with Bayesian optimization over a different feature set. As a result, the final optimized ensemble model employs heterogeneous base models trained on heterogeneous data. The performance of the resulting ensemble model was explored using a cohort of 685 patients from the University of Washington's National Alzheimer's Coordinating Center dataset. Compared to the classical machine learning models and the base LSTM classifiers, the proposed ensemble model achieves the highest testing results (82.02% accuracy, 82.25% precision, 82.02% recall, and 82.12% F1-score). The resulting model improves on the state of the art and could be used to build an accurate clinical decision support tool to assist domain experts in AD progression detection.
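Stacking trains a meta-learner on the base models' outputs rather than averaging them. As a lightweight stand-in for the paper's Bayesian-optimized LSTM ensemble, the idea can be sketched with a least-squares meta-learner over base-model probabilities (the data and model names below are invented):

```python
import numpy as np

def fit_stack(base_probas, y):
    """Fit a least-squares meta-learner on stacked base-model probabilities.
    Stand-in for the paper's stacking layer; real pipelines would use
    out-of-fold base predictions to avoid leakage."""
    Z = np.column_stack(list(base_probas) + [np.ones(len(y))])  # add bias column
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return w

def predict_stack(base_probas, w):
    n = len(base_probas[0])
    Z = np.column_stack(list(base_probas) + [np.ones(n)])
    return (Z @ w >= 0.5).astype(int)

rng = np.random.default_rng(1)
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
good_model = y.copy()         # hypothetical base learner matching the labels
weak_model = rng.random(8)    # hypothetical uninformative base learner
w = fit_stack([good_model, weak_model], y)
preds = predict_stack([good_model, weak_model], w)
```

The meta-learner discovers which base learners to trust: here it reproduces the labels by weighting the informative model and ignoring the noisy one.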
Affiliation(s)
- Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada, Egypt
- Eslam Amer
- Institute of Electronics, Communications and Information Technology, Queen's University Belfast, Belfast, UK
- Tamer Abuhmed
- Information Laboratory (InfoLab), College of Computing and Informatics, Sungkyunkwan University, Suwon, 16419, South Korea
- Amjad Ali
- Information and Computing Technology (ICT) Division, College of Science and Engineering (CSE), Hamad Bin Khalifa University, Doha, Qatar
- Ala Al-Fuqaha
- Information and Computing Technology (ICT) Division, College of Science and Engineering (CSE), Hamad Bin Khalifa University, Doha, Qatar
- Shaker El-Sappagh
- Information Laboratory (InfoLab), College of Computing and Informatics, Sungkyunkwan University, Suwon, 16419, South Korea
- Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt
- Faculty of Computers and Artificial Intelligence, Benha University, Banha, 13518, Egypt
11. Liu Y, Chakraborty N, Qin ZS, Kundu S. Integrative Bayesian tensor regression for imaging genetics applications. Front Neurosci 2023; 17:1212218. [PMID: 37680967] [PMCID: PMC10481528] [DOI: 10.3389/fnins.2023.1212218]
Abstract
Identifying biomarkers for Alzheimer's disease with a goal of early detection is a fundamental problem in clinical research. Both medical imaging and genetics have contributed informative biomarkers in the literature. To further improve performance, there has recently been increasing interest in developing analytic approaches that combine data across modalities such as imaging and genetics. However, few methods in the literature can systematically combine high-dimensional voxel-level imaging and genetic data for accurate prediction of clinical outcomes of interest. Existing prediction models that integrate imaging and genetic features often use region-level imaging summaries, and they typically neither consider the spatial configurations of the voxels in the image nor incorporate the dependence between genes, which may compromise prediction ability. We propose a novel integrative Bayesian scalar-on-image regression model for predicting cognitive outcomes based on high-dimensional spatially distributed voxel-level imaging data, along with correlated transcriptomic features. We account for the spatial dependencies in the imaging voxels via a tensor approach that also enables massive dimension reduction to address the curse of dimensionality, and we model the dependencies between the transcriptomic features via a Graph-Laplacian prior. We implement this approach via an efficient Markov chain Monte Carlo (MCMC) computation strategy. We apply the proposed method to the analysis of longitudinal ADNI data for predicting cognitive scores at different visits by integrating voxel-level cortical thickness measurements derived from T1w-MRI scans and transcriptomics data. We illustrate that the proposed imaging-transcriptomics approach yields significant improvements in prediction compared to prediction using a subset of features from only one modality (imaging or genetics), as well as to using imaging and transcriptomics features while ignoring the inherent dependencies between them. Our analysis is one of the first to conclusively demonstrate the advantages of prediction based on combining voxel-level cortical thickness measurements with transcriptomics features, while accounting for inherent structural information.
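As a minimal sketch of how a Graph-Laplacian prior encodes dependence between features, the snippet below computes a MAP (penalized least-squares) estimate whose penalty smooths coefficients of connected features. The adjacency matrix, data, and penalty weight are hypothetical, and the paper's full Bayesian tensor regression fit by MCMC is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 1.0, 0.9, 0.0, 0.0])  # connected features share effects
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Hypothetical dependence graph: features 0-1 and 1-2 are connected.
A = np.zeros((p, p))
A[0, 1] = A[1, 0] = 1.0
A[1, 2] = A[2, 1] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

# A Gaussian prior with precision proportional to L gives the MAP estimate
# (X'X + lam*L)^{-1} X'y, which shrinks connected coefficients together.
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * L, X.T @ y)
```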
Collapse
Affiliation(s)
- Yajie Liu
- Department of Biostatistics and Data Science, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Nilanjana Chakraborty
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Zhaohui S. Qin
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, United States
| | - Suprateek Kundu
- Department of Biostatistics, Division of Basic Science Research, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | | |
Collapse
|
12
|
Fu X, Song C, Zhang R, Shi H, Jiao Z. Multimodal Classification Framework Based on Hypergraph Latent Relation for End-Stage Renal Disease Associated with Mild Cognitive Impairment. Bioengineering (Basel) 2023; 10:958. [PMID: 37627843 PMCID: PMC10451373 DOI: 10.3390/bioengineering10080958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 08/09/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023] Open
Abstract
Combined arterial spin labeling (ASL) and functional magnetic resonance imaging (fMRI) can reveal more comprehensive spatiotemporal and quantitative properties of brain networks, from which imaging markers of end-stage renal disease associated with mild cognitive impairment (ESRDaMCI) can be sought. Current multimodal classification methods often fail to capture high-order relationships among brain regions and to remove noise from the feature matrix. A multimodal classification framework based on hypergraph latent relation (HLR) is proposed to address this issue. A brain functional network with hypergraph structural information is constructed from fMRI data, and a feature matrix is obtained through graph theory (GT). The cerebral blood flow (CBF) from ASL is selected as the second modal feature matrix. Then, an adaptive similarity matrix is constructed by learning the latent relation between feature matrices. Latent relation adaptive similarity learning (LRAS) is introduced into multi-task feature learning to construct a multimodal feature selection method based on latent relation (LRMFS). The experimental results show that the best classification accuracy (ACC) reaches 88.67%, at least 2.84% better than state-of-the-art methods. The proposed framework preserves more valuable information between brain regions and reduces noise among feature matrices, providing an essential reference for ESRDaMCI recognition.
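For context, a hypergraph's high-order relations are usually encoded through an incidence matrix and its normalized Laplacian. The Zhou-style construction below is a common choice; the paper's HLR framework builds on such structure but is considerably more involved, so this is only an orientation sketch with a toy hypergraph:

```python
import numpy as np

# Toy hypergraph: 4 brain regions, 2 hyperedges, each hyperedge
# connecting three regions at once (a high-order relation).
H = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)   # incidence matrix (nodes x hyperedges)
w = np.array([1.0, 1.0])             # hyperedge weights
Dv = H @ w                           # node degrees
De = H.sum(axis=0)                   # hyperedge degrees

Dv_isqrt = np.diag(1.0 / np.sqrt(Dv))
Theta = Dv_isqrt @ H @ np.diag(w / De) @ H.T @ Dv_isqrt
L = np.eye(H.shape[0]) - Theta       # normalized hypergraph Laplacian
```

The Laplacian is symmetric positive semi-definite with a zero eigenvalue for a connected hypergraph, which is what makes it usable as a smoothness regularizer over node features.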
Collapse
Affiliation(s)
- Xidong Fu
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
| | - Chaofan Song
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
| | - Rupu Zhang
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
| | - Haifeng Shi
- Department of Radiology, The Affiliated Changzhou No.2 People’s Hospital of Nanjing Medical University, Changzhou 213003, China
| | - Zhuqing Jiao
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
| |
Collapse
|
13
|
Odusami M, Maskeliūnas R, Damaševičius R. Pareto Optimized Adaptive Learning with Transposed Convolution for Image Fusion Alzheimer's Disease Classification. Brain Sci 2023; 13:1045. [PMID: 37508977 PMCID: PMC10377099 DOI: 10.3390/brainsci13071045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 06/30/2023] [Accepted: 07/04/2023] [Indexed: 07/30/2023] Open
Abstract
Alzheimer's disease (AD) is a neurological condition that gradually weakens the brain and impairs cognition and memory. Multimodal imaging techniques have become increasingly important in the diagnosis of AD because they provide a more complete picture of the brain changes that occur in AD, helping to monitor disease progression over time. Medical image fusion is crucial in that it combines data from various image modalities into a single, better-understood output. The present study explores the feasibility of employing Pareto-optimized deep learning methodologies to integrate Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images through pre-existing models, namely the Visual Geometry Group (VGG) 11, VGG16, and VGG19 architectures. Morphological operations are carried out on MRI and PET images using Analyze 14.0 software, after which the PET images are rotated to the desired angle of alignment with the MRI image using the GNU Image Manipulation Program (GIMP). To enhance the network's performance, a transposed convolution layer is incorporated into the previously extracted feature maps before image fusion. This process generates feature maps and fusion weights that facilitate the fusion process. The investigation assesses the efficacy of the three VGG models in capturing significant features from the MRI and PET data. The hyperparameters of the models are tuned using Pareto optimization. The models' performance is evaluated on the ADNI dataset using the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean-Square Error (MSE), and Entropy (E). Experimental results show that VGG19 outperforms VGG16 and VGG11, with average SSIMs of 0.668, 0.802, and 0.664 for the CN, AD, and MCI stages from ADNI (MRI modality), respectively, and of 0.669, 0.815, and 0.660 for the CN, AD, and MCI stages from ADNI (PET modality), respectively.
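Two of the evaluation metrics named above have simple closed forms. A small sketch of MSE and PSNR, using a toy 8×8 image rather than ADNI data:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((8, 8), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16                    # corrupt a single pixel
print(round(psnr(ref, noisy), 1))   # → 42.1
```

SSIM, in contrast, compares local luminance, contrast, and structure statistics and is usually computed with a library implementation rather than by hand.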
Collapse
Affiliation(s)
- Modupe Odusami
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
| | - Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
| | - Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| |
Collapse
|
14
|
Wang X, Feng Y, Tong B, Bao J, Ritchie MD, Saykin AJ, Moore JH, Urbanowicz R, Shen L. Exploring Automated Machine Learning for Cognitive Outcome Prediction from Multimodal Brain Imaging using STREAMLINE. AMIA JOINT SUMMITS ON TRANSLATIONAL SCIENCE PROCEEDINGS. AMIA JOINT SUMMITS ON TRANSLATIONAL SCIENCE 2023; 2023:544-553. [PMID: 37350896 PMCID: PMC10283099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/24/2023]
Abstract
STREAMLINE is a simple, transparent, end-to-end automated machine learning (AutoML) pipeline for easily conducting rigorous machine learning (ML) modeling and analysis. The initial version is limited to binary classification. In this work, we extend STREAMLINE by implementing multiple regression-based ML models, including linear regression, elastic net, group lasso, and the L21 norm. We demonstrate the effectiveness of the regression version of STREAMLINE by applying it to the prediction of Alzheimer's disease (AD) cognitive outcomes using multimodal brain imaging data. Our empirical results demonstrate the feasibility and effectiveness of the newly expanded STREAMLINE as an AutoML pipeline for evaluating AD regression models and for discovering multimodal imaging biomarkers.
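The kind of regression-model comparison such a pipeline automates can be sketched in a few lines of scikit-learn. This is a generic illustration on synthetic data, not STREAMLINE itself; group lasso and the L21 norm have no stock scikit-learn estimator, so only linear regression and elastic net appear here:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet, LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)
scores = {}
for name, model in [("linear", LinearRegression()),
                    ("elastic_net", ElasticNet(alpha=1.0, max_iter=5000))]:
    # 5-fold cross-validated R^2: the same evaluation protocol an
    # AutoML pipeline applies uniformly to every candidate model.
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
```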
Collapse
Affiliation(s)
- Xinkai Wang
- University of Pennsylvania, Philadelphia, PA
| | - Yanbo Feng
- University of Pennsylvania, Philadelphia, PA
| | - Boning Tong
- University of Pennsylvania, Philadelphia, PA
| | | | | | | | | | | | - Li Shen
- University of Pennsylvania, Philadelphia, PA
| |
Collapse
|
15
|
Zhu Q, Xu B, Huang J, Wang H, Xu R, Shao W, Zhang D. Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1472-1483. [PMID: 37015464 DOI: 10.1109/tmi.2022.3230750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Multi-modal fusion has become an important data analysis technology in Alzheimer's disease (AD) diagnosis, which aims to effectively extract and utilize complementary information among different modalities. However, most existing fusion methods focus on pursuing a common feature representation by transformation and ignore discriminative structural information among samples. In addition, most fusion methods use high-order feature extraction, such as deep neural networks, which makes it difficult to identify biomarkers. In this paper, we propose a novel method named deep multi-modal discriminative and interpretability network (DMDIN), which aligns samples in a discriminative common space and provides a new approach to identify significant brain regions (ROIs) in AD diagnosis. Specifically, we reconstruct each modality with a hierarchical representation through a multilayer perceptron (MLP), and take advantage of shared self-expression coefficients constrained by diagonal blocks to embed inter-class and intra-class structural information. Further, generalized canonical correlation analysis (GCCA) is adopted as a correlation constraint to generate a discriminative common space, in which samples of the same category gather while samples of different categories stay apart. Finally, to enhance the interpretability of the deep learning model, we utilize knowledge distillation to reproduce coordinated representations and capture the influence of brain regions in AD classification. Experiments show that the proposed method performs better than several state-of-the-art methods in AD diagnosis.
Collapse
|
16
|
Leng Y, Cui W, Peng Y, Yan C, Cao Y, Yan Z, Chen S, Jiang X, Zheng J. Multimodal cross enhanced fusion network for diagnosis of Alzheimer's disease and subjective memory complaints. Comput Biol Med 2023; 157:106788. [PMID: 36958233 DOI: 10.1016/j.compbiomed.2023.106788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 02/09/2023] [Accepted: 03/11/2023] [Indexed: 03/15/2023]
Abstract
Deep learning methods using multimodal imaging have been proposed for the diagnosis of Alzheimer's disease (AD) and its early stages (SMC, subjective memory complaints), which may help to slow the progression of the disease through early intervention. However, current fusion methods for multimodal imaging are generally coarse and may lead to suboptimal results through the use of shared extractors or simple downscaling and stitching. Another issue with diagnosing brain diseases is that they often affect multiple areas of the brain, making it important to consider potential connections throughout the brain. However, traditional convolutional neural networks (CNNs) may struggle with this issue due to their limited local receptive fields. To address this, many researchers have turned to transformer networks, which can provide global information about the brain but are computationally intensive and perform poorly on small datasets. In this work, we propose a novel lightweight network called MENet that adaptively recalibrates the multiscale long-range receptive field to localize discriminative brain regions in a computationally efficient manner. Based on this, the network extracts the intensity and location responses between structural magnetic resonance imaging (sMRI) and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) as an enhancement fusion for AD and SMC diagnosis. Our method is evaluated on the publicly available ADNI datasets and achieves 97.67% accuracy in AD diagnosis tasks and 81.63% accuracy in SMC diagnosis tasks using sMRI and FDG-PET. These results achieve state-of-the-art (SOTA) performance in both tasks. To the best of our knowledge, this is one of the first deep learning methods for SMC diagnosis with FDG-PET.
Collapse
Affiliation(s)
- Yilin Leng
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China.
| | - Wenju Cui
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
| | - Yunsong Peng
- Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou, 550002, China
| | - Caiying Yan
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 211103, China
| | - Yuzhu Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
| | - Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China
| | - Shuangqing Chen
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, 211103, China.
| | - Xi Jiang
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China.
| | - Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China.
| | | |
Collapse
|
17
|
Chen Z, Liu Y, Zhang Y, Li Q. Orthogonal latent space learning with feature weighting and graph learning for multimodal Alzheimer's disease diagnosis. Med Image Anal 2023; 84:102698. [PMID: 36462372 DOI: 10.1016/j.media.2022.102698] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 10/18/2022] [Accepted: 11/17/2022] [Indexed: 11/23/2022]
Abstract
Recent studies have shown that multimodal neuroimaging data provide complementary information about the brain, and latent space-based methods have achieved promising results in fusing multimodal data for Alzheimer's disease (AD) diagnosis. However, most existing methods treat all features equally and adopt nonorthogonal projections to learn the latent space, which cannot retain enough discriminative information in the latent space. Besides, they usually preserve the relationships among subjects in the latent space based on a similarity graph constructed on the original features for performance boosting; however, noise and redundant features significantly corrupt such a graph. To address these limitations, we propose an Orthogonal Latent space learning with Feature weighting and Graph learning (OLFG) model for multimodal AD diagnosis. Specifically, we map multiple modalities into a common latent space by orthogonally constrained projection to capture the discriminative information for AD diagnosis. Then, a feature weighting matrix is utilized to adaptively sort the importance of features in AD diagnosis. Besides, we devise a regularization term with a learned graph to preserve the local structure of the data in the latent space, and we integrate graph construction into the learning process to accurately encode the relationships among samples. Instead of constructing a similarity graph for each modality, we learn a joint graph for multiple modalities to capture the correlations among them. Finally, the representations in the latent space are projected into the target space to perform AD diagnosis. An alternating optimization algorithm with proven convergence is developed to solve the optimization objective. Extensive experimental results show the effectiveness of the proposed method.
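One standard way to enforce an orthogonality constraint on a projection matrix during alternating optimization is an SVD retraction to the nearest matrix with orthonormal columns. This is a generic sketch of that device, not the authors' specific update rules:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))   # unconstrained projection update

# Polar/SVD retraction: U @ Vt is the closest matrix to W (in Frobenius
# norm) that satisfies the orthogonality constraint W.T @ W = I.
U, _, Vt = np.linalg.svd(W, full_matrices=False)
W_orth = U @ Vt
```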
Collapse
Affiliation(s)
- Zhi Chen
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
| | - Yongguo Liu
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China.
| | - Yun Zhang
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
| | - Qiaoqin Li
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
| |
Collapse
|
18
|
Dar SA, Imtiaz N. Classification of neuroimaging data in Alzheimer's disease using particle swarm optimization: A systematic review. APPLIED NEUROPSYCHOLOGY. ADULT 2023:1-12. [PMID: 36719791 DOI: 10.1080/23279095.2023.2169886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
AIM Particle swarm optimization (PSO) is an algorithm for optimizing non-linear and multidimensional problems to reach the best solutions with minimal parameterization. This metaheuristic model has frequently been used in the pathology domain and has been applied in diverse forms to predict Alzheimer's disease. It is a robust algorithm that works on linear and multi-modal data when predicting Alzheimer's disease. PSO techniques have been in use for quite some time for detecting various diseases, and this paper systematically reviews papers on the various kinds of PSO techniques. METHODS To perform the systematic review, PRISMA guidelines were followed and a Boolean search ("particle swarm optimization" OR "PSO") AND Neuroimaging AND (Alzheimer's disease prediction OR classification OR diagnosis) was performed. The query was run in four reputable databases: Google Scholar, Scopus, Science Direct, and Wiley publications. RESULTS For the final analysis, 10 papers were incorporated for qualitative and quantitative synthesis. PSO has shown a dominant character in handling uni-modal as well as multi-modal data when predicting the conversion from MCI to Alzheimer's. It can be seen from the table that almost all of the 10 reviewed papers used MRI-driven data. The accuracy rate improved when other modalities or neurocognitive measures were added. CONCLUSIONS Through this algorithm, we provide an opportunity for other researchers to compare it with other state-of-the-art algorithms in terms of classification accuracy, with the aim of early prediction of the progression of MCI into Alzheimer's disease.
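For readers unfamiliar with the algorithm under review, a minimal global-best PSO (the textbook form, not any specific variant from the reviewed papers) can be written in a few lines; here it minimizes a toy sphere function:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO; returns the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g

best = pso(lambda p: float(np.sum(p ** 2)), dim=4)   # sphere function
```

In the reviewed papers, the objective `f` would instead score a feature subset or a classifier's hyperparameters on neuroimaging data.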
Collapse
Affiliation(s)
- Suhail Ahmad Dar
- Department of Psychology, Aligarh Muslim University, Aligarh, India
| | - Nasheed Imtiaz
- Department of Psychology, Aligarh Muslim University, Aligarh, India
| |
Collapse
|
19
|
Khan R, Akbar S, Mehmood A, Shahid F, Munir K, Ilyas N, Asif M, Zheng Z. A transfer learning approach for multiclass classification of Alzheimer's disease using MRI images. Front Neurosci 2023; 16:1050777. [PMID: 36699527 PMCID: PMC9869687 DOI: 10.3389/fnins.2022.1050777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 12/05/2022] [Indexed: 01/11/2023] Open
Abstract
Alzheimer's is a progressive degenerative disease affecting the elderly population all over the world. Detecting the disease at an early stage in the absence of a large-scale annotated dataset is crucial for clinical treatment aimed at the prevention and early detection of Alzheimer's disease (AD). In this study, we propose a transfer learning-based approach to classify various stages of AD. The proposed model can distinguish between normal control (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and AD. In this regard, we apply tissue segmentation to extract the gray matter from MRI scans obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We use this gray matter to tune a pre-trained VGG architecture while freezing the features learned on the ImageNet database. This is achieved through the addition of a layer and step-wise freezing of the existing blocks in the network, which not only assists transfer learning but also contributes to learning new features efficiently. Extensive experiments are conducted, and the results demonstrate the superiority of the proposed approach.
Collapse
Affiliation(s)
- Rizwan Khan
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, China
| | - Saeed Akbar
- School of Computer Science, Huazhong University of Science and Technology, Wuhan, China
| | - Atif Mehmood
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Computer Science, National University of Modern Languages, Islamabad, Pakistan
| | - Farah Shahid
- Department of Computer Science, University of Agriculture, Sub Campus Burewala-Vehari, Faisalabad, Pakistan
| | - Khushboo Munir
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
| | - Naveed Ilyas
- Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
| | - M. Asif
- Department of Radiology, Emory Brain Health Center-Neurosurgery, School of Medicine, Emory University, Atlanta, GA, United States
| | - Zhonglong Zheng
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
| |
Collapse
|
20
|
Gonzalez-Gomez R, Ibañez A, Moguilner S. Multiclass characterization of frontotemporal dementia variants via multimodal brain network computational inference. Netw Neurosci 2023; 7:322-350. [PMID: 37333999 PMCID: PMC10270711 DOI: 10.1162/netn_a_00285] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 10/03/2022] [Indexed: 04/03/2024] Open
Abstract
Characterizing a particular neurodegenerative condition against other possible diseases remains a challenge at the clinical, biomarker, and neuroscientific levels. This is particularly the case for frontotemporal dementia (FTD) variants, whose specific characterization requires high levels of expertise and multidisciplinary teams to subtly distinguish among similar physiopathological processes. Here, we used a computational approach based on multimodal brain networks to address simultaneous multiclass classification of 298 subjects (one group against all others), including five FTD variants: behavioral variant FTD, corticobasal syndrome, nonfluent variant primary progressive aphasia, progressive supranuclear palsy, and semantic variant primary progressive aphasia, along with healthy controls. Fourteen machine learning classifiers were trained with functional and structural connectivity metrics calculated through different methods. Due to the large number of variables, dimensionality was reduced by employing statistical comparisons and progressive elimination to assess feature stability under nested cross-validation. Machine learning performance was measured through the area under the receiver operating characteristic curve, reaching 0.81 on average with a standard deviation of 0.09. Furthermore, the contributions of demographic and cognitive data were also assessed via multifeatured classifiers. An accurate simultaneous multiclass classification of each FTD variant against other variants and controls was obtained based on the selection of an optimum set of features. Classifiers incorporating the brain's network and cognitive assessment increased performance metrics. Multimodal classifiers revealed variant-specific compromise across modalities and methods through feature-importance analysis. If replicated and validated, this approach may help to support clinical decision tools aimed at detecting specific affectations in the context of overlapping diseases.
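The one-group-against-all-others evaluation above corresponds to one-vs-rest ROC analysis. A compact sketch with scikit-learn on synthetic data, where six classes stand in for the five FTD variants plus controls (the study's fourteen classifiers and connectivity features are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Six classes as stand-ins for five FTD variants plus healthy controls
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)

# Macro-averaged one-vs-rest AUC: each class scored against all others
auc = roc_auc_score(y_te, proba, multi_class="ovr", average="macro")
```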
Collapse
Affiliation(s)
- Raul Gonzalez-Gomez
- Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Santiago de Chile, Chile
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibañez, Santiago de Chile, Chile
| | - Agustín Ibañez
- Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Santiago de Chile, Chile
- Cognitive Neuroscience Center, Universidad de San Andres, Buenos Aires, Argentina
- Global Brain Health Institute, University of California San Francisco, San Francisco, CA, USA
- Trinity College Dublin, Dublin, Ireland
| | - Sebastian Moguilner
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibañez, Santiago de Chile, Chile
- Cognitive Neuroscience Center, Universidad de San Andres, Buenos Aires, Argentina
- Global Brain Health Institute, University of California San Francisco, San Francisco, CA, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| |
Collapse
|
21
|
Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
22
|
Pala D, Lee B, Ning X, Kim D, Shen L. Mediation Analysis and Mixed-Effects Models for the Identification of Stage-specific Imaging Genetics Patterns in Alzheimer's Disease. PROCEEDINGS. IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE 2022; 2022:2667-2673. [PMID: 36824222 PMCID: PMC9942815 DOI: 10.1109/bibm55620.2022.9995405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Alzheimer's disease (AD) is one of the most common and severe forms of senile dementia. Genome-wide association studies (GWAS) have identified dozens of AD-susceptibility loci. To better understand potential mechanisms of action for AD, quantitative brain imaging features have been studied as mediators linking genetic variants to AD outcomes. In this study, mediation analysis, the Chow test, and mixed-effects models are used to investigate the biological pathways by which genetic variants affect both brain structure/function and disease diagnosis. We analyzed the imaging and genetics data collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project, including a Polygenic Hazard Score (PHS) and 13 imaging quantitative traits (QTs) extracted from AV45 PET scans quantifying amyloid deposition in different brain regions of subjects from four separate diagnostic groups. Mediation analysis assessed the mediating effects of imaging QTs between the PHS and diagnosis, whereas the Chow test and linear mixed-effects models were used to characterize intra-group differences in the associations between genetic scores and imaging QTs at different disease stages. Promising stage-specific imaging QTs that mediate the genetic effect of the studied PHS on disease status were identified, providing novel insights into the predictive power of the PHS and the mediating power of amyloid imaging QTs across multiple stages of AD progression.
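The core of a product-of-coefficients mediation analysis reduces to three regressions. A sketch on simulated data with hypothetical effect sizes (no mixed effects or Chow test), exploiting the OLS identity that the total effect equals direct plus indirect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
phs = rng.normal(size=n)                        # genetic score (exposure)
qt = 0.6 * phs + 0.5 * rng.normal(size=n)       # imaging QT (mediator)
dx = 0.8 * qt + 0.2 * phs + rng.normal(size=n)  # outcome (continuous proxy)

def ols(X, y):
    """Least-squares slopes (intercept dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

a = ols(phs, qt)[0]                               # exposure -> mediator
b, c_prime = ols(np.column_stack([qt, phs]), dx)  # mediator effect, direct effect
indirect = a * b                                  # mediated (indirect) effect
total = ols(phs, dx)[0]                           # total effect
```

For linear models fit by OLS on the same sample, `total == c_prime + indirect` holds exactly; real analyses add bootstrap or Sobel inference on the indirect effect.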
Collapse
Affiliation(s)
- Daniele Pala
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, USA
| | - Brian Lee
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, USA
| | - Xia Ning
- Department of Biomedical Informatics, The Ohio State University, Columbus, USA
| | - Dokyoon Kim
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, USA
| | - Li Shen
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, USA
| | | |
Collapse
|
23
|
Liu Z, Johnson TS, Shao W, Zhang M, Zhang J, Huang K. Optimal transport- and kernel-based early detection of mild cognitive impairment patients based on magnetic resonance and positron emission tomography images. Alzheimers Res Ther 2022; 14:4. [PMID: 34996518 PMCID: PMC8742368 DOI: 10.1186/s13195-021-00915-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 10/05/2021] [Indexed: 11/26/2022]
Abstract
Background To help clinicians provide timely treatment and delay disease progression, it is crucial to identify dementia patients during the mild cognitive impairment (MCI) stage and stratify these MCI patients into early and late MCI stages before they progress to Alzheimer's disease (AD). In the process of diagnosing MCI and AD in living patients, brain scans are collected using neuroimaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET). These brain scans measure the volume and molecular activity within the brain, making them a very promising avenue for diagnosing patients early in a minimally invasive manner. Methods We have developed an optimal transport-based transfer learning model to discriminate between early and late MCI. Combining this transfer learning model with a bootstrap aggregation strategy, we overcome the overfitting problem and improve model stability and prediction accuracy. Results With the transfer learning methods that we have developed, we outperform current state-of-the-art MCI stage classification frameworks and show that it is crucial to leverage Alzheimer's disease and normal control subjects to accurately predict early- and late-stage cognitive impairment. Conclusions Our method is the current state of the art based on benchmark comparisons. This method is a necessary technological stepping stone to widespread clinical use of MRI-based early detection of AD. Supplementary Information The online version contains supplementary material available at (10.1186/s13195-021-00915-3).
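The optimal transport ingredient can be illustrated with a plain entropic (Sinkhorn) coupling between two small synthetic feature sets. The paper's transfer learning model is far more involved; everything below — the shapes, the cost, and the regularization strength — is invented for the sketch:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, n_iters=200):
    """Entropic-regularized optimal transport plan between histograms a and b
    under cost matrix C (Sinkhorn fixed-point iterations)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(5, 3))   # e.g. features from AD/NC subjects
tgt = rng.normal(0.5, 1.0, size=(4, 3))   # e.g. features from EMCI/LMCI subjects
C = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
a, b = np.full(5, 1 / 5), np.full(4, 1 / 4)              # uniform sample weights

P = sinkhorn(a, b, C)
mapped = (P / P.sum(1, keepdims=True)) @ tgt   # barycentric map of src onto tgt
print(P.shape, mapped.shape)
```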
|
24
|
Li J, Xu H, Yu H, Jiang Z, Zhu L. Multi-modal feature selection with anchor graph for Alzheimer's disease. Front Neurosci 2022; 16:1036244. [DOI: 10.3389/fnins.2022.1036244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Accepted: 10/21/2022] [Indexed: 11/10/2022] Open
Abstract
Research on Alzheimer's disease has shown that treating patients at an early stage of the disease can effectively delay its progression. At present, multi-modal feature selection is widely used in the early diagnosis of Alzheimer's disease. However, existing multi-modal feature selection algorithms focus on learning the internal information of multiple modalities; they ignore the relationships between modalities, the importance of each modality, and the local structure in the multi-modal data. In this paper, we propose a multi-modal feature selection algorithm with an anchor graph for Alzheimer's disease. Specifically, we first use the least-squares loss and the ℓ2,1-norm to obtain the weight of the features under each modality. Then we embed a modal weight factor into the objective function to obtain the importance of each modality. Finally, we use an anchor graph to quickly learn the local structure information in the multi-modal data. We also verify the validity of the proposed algorithm on the public ADNI dataset.
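A standard way to realize a least-squares loss with an ℓ2,1-norm penalty is iteratively reweighted least squares. The sketch below uses synthetic data and omits the anchor graph and the modal weight factor; it only shows how the row norms of W rank features:

```python
import numpy as np

def l21_feature_select(X, Y, lam=1.0, n_iters=50):
    """Minimize ||XW - Y||_F^2 + lam * ||W||_{2,1} by iterative reweighting.
    Rows of W with large norms mark informative features."""
    d = X.shape[1]
    D = np.eye(d)
    for _ in range(n_iters):
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        row_norms = np.sqrt((W ** 2).sum(1)) + 1e-8
        D = np.diag(1.0 / (2.0 * row_norms))   # reweight by current row norms
    return W

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
true_W = np.zeros((10, 2))
true_W[:3] = [[1.0, -1.0], [2.0, 0.5], [-1.5, 1.0]]   # 3 informative features
Y = X @ true_W + 0.01 * rng.normal(size=(100, 2))

W = l21_feature_select(X, Y, lam=0.5)
scores = np.sqrt((W ** 2).sum(1))
print(np.sort(np.argsort(scores)[-3:]))   # the informative features should rank highest
```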
|
25
|
Multi-class classification of Alzheimer’s disease through distinct neuroimaging computational approaches using Florbetapir PET scans. EVOLVING SYSTEMS 2022. [DOI: 10.1007/s12530-022-09467-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/07/2022]
|
26
|
Xiao L, Cai B, Qu G, Zhang G, Stephen JM, Wilson TW, Calhoun VD, Wang YP. Distance Correlation-Based Brain Functional Connectivity Estimation and Non-Convex Multi-Task Learning for Developmental fMRI Studies. IEEE Trans Biomed Eng 2022; 69:3039-3050. [PMID: 35316180 PMCID: PMC9594860 DOI: 10.1109/tbme.2022.3160447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Resting-state functional magnetic resonance imaging (rs-fMRI)-derived functional connectivity (FC) patterns have been extensively used to delineate global functional organization of the human brain in healthy development and neuropsychiatric disorders. In this paper, we investigate how FC in males and females differs in an age prediction framework. METHODS We first estimate FC between regions-of-interest (ROIs) using distance correlation instead of Pearson's correlation. Distance correlation, as a multivariate statistical method, explores spatial relations of voxel-wise time courses within individual ROIs and measures both linear and nonlinear dependence, capturing more complex between-ROI interactions. Then, we propose a novel non-convex multi-task learning (NC-MTL) model to study age-related gender differences in FC, where age prediction for each gender group is viewed as one task, and a composite regularizer with a combination of the non-convex ℓ2,1-2 and ℓ1-2 terms is introduced for selecting both common and task-specific features. RESULTS AND CONCLUSION We validate the effectiveness of our NC-MTL model with distance correlation-based FC derived from rs-fMRI for predicting ages of both genders. The experimental results on the Philadelphia Neurodevelopmental Cohort demonstrate that our NC-MTL model outperforms several other competing MTL models in age prediction. We also compare the age prediction performance of our NC-MTL model using FC estimated by Pearson's correlation and distance correlation, which shows that distance correlation-based FC is more discriminative for age prediction than Pearson's correlation-based FC. SIGNIFICANCE This paper presents a novel framework for functional connectome developmental studies, characterizing developmental gender differences in FC patterns.
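Distance correlation itself is straightforward to compute. The toy below (synthetic data, not from the paper) shows it detecting a nonlinear dependence that Pearson's correlation misses:

```python
import numpy as np

def distance_correlation(X, Y):
    """Empirical distance correlation between paired samples X (n,p) and Y (n,q)."""
    def centered(Z):
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()   # double centering
    A, B = centered(X), centered(Y)
    dcov2 = (A * B).mean()   # squared distance covariance (V-statistic, >= 0)
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 1))
y = x ** 2 + 0.1 * rng.normal(size=(200, 1))   # purely nonlinear dependence

pearson = np.corrcoef(x.ravel(), y.ravel())[0, 1]
dcor = distance_correlation(x, y)
print(f"pearson={pearson:.2f}, dcor={dcor:.2f}")
```

Pearson's correlation stays near zero for this symmetric quadratic relation, while the distance correlation is clearly positive.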
|
27
|
|
28
|
Cobbinah BM, Sorg C, Yang Q, Ternblom A, Zheng C, Han W, Che L, Shao J. Reducing variations in multi-center Alzheimer's disease classification with convolutional adversarial autoencoder. Med Image Anal 2022; 82:102585. [PMID: 36057187 DOI: 10.1016/j.media.2022.102585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 07/22/2022] [Accepted: 08/15/2022] [Indexed: 11/29/2022]
Abstract
Based on brain magnetic resonance imaging (MRI), multiple variations ranging from MRI scanners to center-specific parameter settings, imaging protocols, and brain region-of-interest (ROI) definitions pose a big challenge for multi-center Alzheimer's disease characterization and classification. Existing approaches to reducing such variations require intricate multi-step, often manual preprocessing pipelines, including skull stripping, segmentation, registration, cortical reconstruction, and ROI outlining. Such procedures are time-consuming and, more importantly, tend to be user biased. In contrast to costly and biased preprocessing pipelines, the question arises of whether a deep learning model can automatically reduce these variations across centers for Alzheimer's disease classification. In this study, we used T1- and T2-weighted structural MRI from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset based on three groups with 375 subjects: patients with Alzheimer's disease (AD) dementia, patients with mild cognitive impairment (MCI), and healthy controls (HC); to test our approach, we defined AD classification as classifying an individual's structural image into one of the three group labels. We first introduced a convolutional adversarial autoencoder (CAAE) to reduce the variations existing in multi-center raw MRI scans by automatically registering them into a common aligned space. Afterward, a convolutional residual soft attention network (CRAT) was further proposed for AD classification. Canonical classification procedures demonstrated that our model achieved classification accuracies of 91.8%, 90.05%, and 88.10% for the two-way classification tasks (AD vs. HC, AD vs. MCI, and MCI vs. HC, respectively) using the raw aligned MRI scans. Thus, our automated approach achieves comparable or even better classification performance than many baselines with dedicated conventional preprocessing pipelines. Furthermore, the uncovered brain hotspots, i.e., the hippocampus, amygdala, and temporal pole, are consistent with previous studies.
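A far simpler stand-in for removing center-specific variation is z-scoring features within each center. This illustrates only the goal of harmonization, not the paper's CAAE; the two-center data, offset, and gain below are invented:

```python
import numpy as np

def per_center_zscore(features, centers):
    """Remove per-center location/scale shifts by z-scoring within each center."""
    out = np.empty_like(features, dtype=float)
    for c in np.unique(centers):
        m = centers == c
        mu, sd = features[m].mean(0), features[m].std(0) + 1e-8
        out[m] = (features[m] - mu) / sd
    return out

rng = np.random.default_rng(4)
centers = np.repeat([0, 1], 50)
f = rng.normal(size=(100, 3))
f[centers == 1] = 2.0 + 1.5 * f[centers == 1]   # center 1 adds an offset and a gain

harmonized = per_center_zscore(f, centers)
gap = abs(harmonized[centers == 0].mean() - harmonized[centers == 1].mean())
print(f"between-center mean gap after harmonization: {gap:.1e}")
```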
Affiliation(s)
- Bernard M Cobbinah
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Christian Sorg
- Department of Neuroradiology, TUM-NIC Neuroimaging Center of Technical University Munich, Germany
- Qinli Yang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Arvid Ternblom
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Changgang Zheng
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Wei Han
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Liwei Che
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China
- Junming Shao
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, 611731 Chengdu, China; Center for Information in BioMedicine, University of Electronic Science and Technology of China, 611731 Chengdu, China; Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313001, China
|
29
|
Tu Y, Lin S, Qiao J, Zhuang Y, Zhang P. Alzheimer's disease diagnosis via multimodal feature fusion. Comput Biol Med 2022; 148:105901. [PMID: 35908497 DOI: 10.1016/j.compbiomed.2022.105901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 06/26/2022] [Accepted: 07/16/2022] [Indexed: 11/19/2022]
Abstract
Alzheimer's disease (AD) is the most common neurodegenerative disorder in the elderly. Early diagnosis of AD plays a vital role in slowing down its progression because there is no effective drug to treat the disease. Some deep learning models have recently been presented for AD diagnosis and achieve more satisfactory performance than classic machine learning methods. Nevertheless, most existing computer-aided diagnostic models use neuroimaging features alone, ignoring patients' clinical and biological information, which makes AD diagnosis less accurate. In this study, we propose a novel multimodal feature transformation and fusion model for AD diagnosis. The feature transformation aims to avoid the difference in feature dimensions between different modal data and to further mine the significant features for AD diagnosis. A geometric algebra-based feature extension method is proposed to obtain different levels of high-dimensional features from patients' clinical and personal biological data. Then, an influence degree-based feature filtration algorithm is proposed to filter out features that have no apparent guiding significance for AD diagnosis. Finally, an ANN (artificial neural network)-based framework is designed to fuse the transformed features with neuroimaging features extracted by a CNN (convolutional neural network) for AD diagnosis. This deeper mining of patients' clinical and biological information can significantly improve the performance of computer-aided AD diagnosis. The experiments are conducted on the ADNI dataset. Our proposed model converges faster and achieves 96.2% accuracy on the AD diagnostic task and 87.4% accuracy on the MCI (mild cognitive impairment) diagnostic task. Compared with other methods, our proposed approach performs excellently in AD diagnosis and surpasses SOTA (state-of-the-art) methods. Therefore, our model can provide more reasonable suggestions for clinicians to diagnose and treat the disease.
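The core fusion idea, concatenating transformed clinical features with imaging features before a classifier, can be sketched with a tiny logistic-regression head on synthetic data. The paper's ANN/CNN pipeline is far richer; all names, dimensions, and signal strengths below are invented:

```python
import numpy as np

def fuse_and_classify(img_feats, clin_feats, y, lr=0.5, epochs=300):
    """Concatenate imaging and clinical features, then fit logistic regression
    by gradient descent (a minimal stand-in for the paper's ANN fusion head).
    Returns predictions on the training data."""
    X = np.hstack([img_feats, clin_feats])              # late fusion by concatenation
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)             # standardize
    X = np.hstack([X, np.ones((len(X), 1))])            # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)                # gradient step
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)

rng = np.random.default_rng(5)
n = 200
y = rng.integers(0, 2, n)
img = rng.normal(size=(n, 4)); img[:, 0] += y           # weak imaging signal
clin = rng.normal(size=(n, 2)); clin[:, 0] += 1.5 * y   # stronger clinical signal

pred = fuse_and_classify(img, clin, y)
acc = (pred == y).mean()
print(f"fused-feature training accuracy: {acc:.2f}")
```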
Affiliation(s)
- Yue Tu
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shukuan Lin
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Jianzhong Qiao
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yilin Zhuang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Peng Zhang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
|
30
|
A Practical Multiclass Classification Network for the Diagnosis of Alzheimer’s Disease. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Patients who have Alzheimer's disease (AD) pass through several irreversible stages, which ultimately result in the patient's death. It is crucial to understand and detect AD at an early stage to slow down its progression due to the non-curable nature of the disease. Diagnostic techniques are primarily based on magnetic resonance imaging (MRI) and expensive, high-dimensional 3D imaging data. Classic methods can hardly discriminate among the nearly identical pixel-level brain patterns of various age groups. Recent deep learning-based methods can contribute to detecting the various stages of AD but require large-scale datasets and face several challenges when using the 3D volumes directly. Extant deep learning-based work is mainly focused on binary classification, and it is challenging to detect multiple stages with these methods. In this work, we propose a deep learning-based multiclass classification method to distinguish among the various stages for the early diagnosis of Alzheimer's. The proposed method handles data shortage challenges by augmentation and manages to classify the 2D images obtained after efficient pre-processing of the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our method achieves an accuracy of 98.9% with an F1 score of 96.3. Extensive experiments are performed, and overall results demonstrate that the proposed method outperforms the state-of-the-art methods in terms of overall performance.
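The augmentation used to handle data shortage can be as simple as flips and 90-degree rotations of 2D slices. A minimal sketch (not the paper's exact augmentation set):

```python
import numpy as np

def augment(img):
    """Simple augmentations for a 2D slice: flips and 90-degree rotations."""
    return [
        img,                  # original
        np.fliplr(img),       # horizontal flip
        np.flipud(img),       # vertical flip
        np.rot90(img, 1),     # 90-degree rotation
        np.rot90(img, 2),     # 180-degree rotation
    ]

slice_2d = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for an MRI slice
batch = augment(slice_2d)
print(len(batch))   # 5 variants per input slice
```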
|
31
|
Shang Q, Zhang Q, Liu X, Zhu L. Prediction of Early Alzheimer Disease by Hippocampal Volume Changes under Machine Learning Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:3144035. [PMID: 35572832 PMCID: PMC9106502 DOI: 10.1155/2022/3144035] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 04/06/2022] [Accepted: 04/15/2022] [Indexed: 11/17/2022]
Abstract
This research aimed to assess the application value of different machine learning algorithms in the prediction of early Alzheimer's disease (AD) based on hippocampal volume changes in magnetic resonance imaging (MRI). In the research, 84 cases in the American Alzheimer's Disease Neuroimaging Initiative (ADNI) database were selected as the research data. Based on the scoring results of cognitive function, all cases were divided into three groups: normal cognitive function (normal group), early mild cognitive impairment (e-MCI group), and late mild cognitive impairment (l-MCI group). Each group included 28 cases. The features of hippocampal volume changes in the MRI images of patients in the different groups were extracted, and training-set and test-set samples were established. The established support vector machine (SVM), decision tree (DT), and random forest (RF) prediction models were then used to predict e-MCI. Metalinear regression was utilized to analyze the MRI feature data, and the predictive accuracy, sensitivity, and specificity of the different models were calculated. The results showed that the volumes of hippocampal left CA1, left CA2-3, left CA4-DG, left presubiculum, left tail, right CA2-3, right CA4-DG, right presubiculum, and right tail in the e-MCI group were all smaller than those in the normal group (P < 0.01). The corresponding volumes of hippocampal subregions in the l-MCI group were remarkably reduced compared with those in the normal group (P < 0.001). The volumes of the left CA1, left CA2-3, left CA4-DG, right CA2-3, right CA4-DG, and right presubiculum regions were all positively correlated with the logical memory test-delayed recall (LMT-DR) score (R² = 0.1702, 0.3779, 0.1607, 0.1620, 0.0426, and 0.1309; P < 0.001). On the training set, the predictive accuracy of DT, SVM, and RF was 86.67%, 93.33%, and 98.33%, respectively. Based on the changes in the volumes of left CA4-DG, right CA2-3, and right CA4-DG, the predictive accuracy for both e-MCI and l-MCI by the RF model was higher than that by the DT model (P < 0.01). Besides, the predictive accuracy, sensitivity, and specificity for e-MCI by the RF model were all notably higher than those by the DT model (P < 0.01). These results demonstrated that effective early AD prediction models were established from the volume changes in hippocampal subregions based on RF. The establishment of early AD prediction models offers a reference basis for the diagnosis and treatment of AD patients.
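The bootstrap-aggregation idea behind the RF model can be sketched with bagged decision stumps on synthetic "hippocampal volume" features. The group sizes, volumes, and atrophy shift below are invented, and decision stumps stand in for full trees:

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_stump(X, y):
    """Axis-aligned decision stump: returns (feature, threshold, polarity)."""
    best, best_acc = None, -1.0
    for j in range(X.shape[1]):
        for t in X[:, j]:
            for pol in (1, -1):
                pred = (pol * (X[:, j] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best, best_acc = (j, t, pol), acc
    return best

def predict_stump(stump, X):
    j, t, pol = stump
    return (pol * (X[:, j] - t) > 0).astype(int)

# Synthetic "hippocampal subregion volumes": the MCI group has smaller volumes.
n = 120
y = rng.integers(0, 2, n)                 # 0 = normal, 1 = e-MCI
vols = rng.normal(2.0, 0.3, size=(n, 4))
vols[y == 1] -= 0.4                       # atrophy shrinks all subregion volumes

# Bootstrap aggregation: fit each stump on a resampled training set, then vote.
stumps = [fit_stump(vols[rng.integers(0, n, n)], y[rng.integers(0, n, n)] if False else y[rng.integers(0, n, n)]) for _ in range(0)]
stumps = []
for _ in range(25):
    idx = rng.integers(0, n, n)           # bootstrap sample with replacement
    stumps.append(fit_stump(vols[idx], y[idx]))
votes = np.mean([predict_stump(s, vols) for s in stumps], axis=0)
acc = ((votes > 0.5).astype(int) == y).mean()
print(f"bagged-stump training accuracy: {acc:.2f}")
```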
Affiliation(s)
- Qun Shang
- Department of Radiology, Zibo Central Hospital, Zibo, 255000 Shandong, China
- Qi Zhang
- Department of Radiology, Zibo Central Hospital, Zibo, 255000 Shandong, China
- Xiao Liu
- Department of Radiology, Zibo Central Hospital, Zibo, 255000 Shandong, China
- Lingchen Zhu
- Department of Radiology, Zibo Central Hospital, Zibo, 255000 Shandong, China
|
32
|
Abdelaziz M, Wang T, Elazab A. Fusing Multimodal and Anatomical Volumes of Interest Features Using Convolutional Auto-Encoder and Convolutional Neural Networks for Alzheimer's Disease Diagnosis. Front Aging Neurosci 2022; 14:812870. [PMID: 35572142 PMCID: PMC9096261 DOI: 10.3389/fnagi.2022.812870] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 03/11/2022] [Indexed: 11/16/2022] Open
Abstract
Alzheimer's disease (AD) is an age-related disease that affects a large proportion of the elderly. Currently, neuroimaging techniques [e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)] are promising modalities for AD diagnosis. Since not all brain regions are affected by AD, a common technique is to study some regions-of-interest (ROIs) that are believed to be closely related to AD. Conventional methods used ROIs identified by handcrafted features through the Automated Anatomical Labeling (AAL) atlas rather than utilizing the original images, which may cause informative features to be missed. In addition, they learned their frameworks based on discriminative patches instead of full images for AD diagnosis in a multistage learning scheme. In this paper, we integrate the original image features from MRI and PET with their ROI features in one learning process. Furthermore, we use the ROI features to force the network to focus on the regions that are highly related to AD, and hence the performance of AD diagnosis can be improved. Specifically, we first obtain the ROI features from the AAL atlas; then we register every ROI with its corresponding region of the original image to get a synthetic image for each modality of every subject. We then employ a convolutional auto-encoder network for learning the synthetic image features and a convolutional neural network (CNN) for learning the original image features. Meanwhile, we concatenate the features from both networks after each convolution layer. Finally, the learned features from the MRI and PET are concatenated for brain disease classification. Experiments are carried out on the ADNI datasets, including ADNI-1 and ADNI-2, to evaluate the performance of our method. Our method demonstrates higher performance in brain disease classification than recent studies.
Affiliation(s)
- Mohammed Abdelaziz
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Department of Communications and Electronics, Delta Higher Institute for Engineering and Technology (DHIET), Mansoura, Egypt
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Ahmed Elazab
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Computer Science Department, Misr Higher Institute of Commerce and Computers, Mansoura, Egypt
|
33
|
Bi XA, Zhou W, Luo S, Mao Y, Hu X, Zeng B, Xu L. Feature aggregation graph convolutional network based on imaging genetic data for diagnosis and pathogeny identification of Alzheimer's disease. Brief Bioinform 2022; 23:6572662. [PMID: 35453149 DOI: 10.1093/bib/bbac137] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/15/2022] [Accepted: 03/23/2022] [Indexed: 12/30/2022] Open
Abstract
The roles of brain region activities and gene expressions in the development of Alzheimer's disease (AD) remain unclear. Existing imaging genetics studies usually suffer from inefficiency and inadequate fusion of data. This study proposes a novel deep learning method to efficiently capture the development pattern of AD. First, we model the interaction between brain regions and genes as node-to-node feature aggregation in a brain region-gene network. Second, we propose a feature aggregation graph convolutional network (FAGCN) to transmit and update the node features. Compared with the trivial graph convolutional procedure, we replace the adjacency-matrix input with a weight matrix based on correlation analysis and consider common-neighbor similarity to discover broader associations among nodes. Finally, we use a full-gradient saliency graph mechanism to score and extract the pathogenic brain regions and risk genes. According to the results, FAGCN achieved the best performance among both traditional and cutting-edge methods and extracted AD-related brain regions and genes, providing theoretical and methodological support for the research of related diseases.
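The correlation-weighted propagation step can be sketched as a single graph-convolution layer whose adjacency matrix is replaced by a thresholded correlation matrix, loosely following the FAGCN description (all dimensions and the threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Node features for a small brain-region/gene graph (10 nodes, 6 features each).
X = rng.normal(size=(10, 6))

# Weight matrix from correlation analysis instead of a binary adjacency matrix.
A = np.abs(np.corrcoef(X))
A[A < 0.2] = 0.0              # sparsify weak associations
np.fill_diagonal(A, 1.0)      # self-loops

# Symmetric normalization: D^{-1/2} A D^{-1/2}
d = A.sum(1)
A_hat = A / np.sqrt(np.outer(d, d))

# One graph-convolution layer: aggregate neighbors, project, apply ReLU.
W = rng.normal(scale=0.3, size=(6, 4))
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)   # updated node embeddings
```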
Affiliation(s)
- Xia-An Bi
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and the College of Information Science and Engineering in Hunan Normal University, P.R. China
- Wenyan Zhou
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Sheng Luo
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yuhua Mao
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Xi Hu
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Bin Zeng
- Hunan Youdao Information Technology Co., Ltd, P.R. China
- Luyun Xu
- College of Business in Hunan Normal University, P.R. China
|
34
|
Goenka N, Tiwari S. AlzVNet: A volumetric convolutional neural network for multiclass classification of Alzheimer’s disease through multiple neuroimaging computational approaches. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103500] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
35
|
Bi XA, Li L, Wang Z, Wang Y, Luo X, Xu L. IHGC-GAN: influence hypergraph convolutional generative adversarial network for risk prediction of late mild cognitive impairment based on imaging genetic data. Brief Bioinform 2022; 23:6554128. [PMID: 35348583 DOI: 10.1093/bib/bbac093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 01/28/2022] [Accepted: 02/23/2022] [Indexed: 11/13/2022] Open
Abstract
Predicting disease progression in the initial stage to implement early intervention and treatment can effectively prevent further deterioration of the condition. Traditional methods for medical data analysis usually fail to perform well because they cannot mine the correlation patterns of pathogenies. Therefore, many computational methods have been drawn from the field of deep learning. In this study, we propose a novel influence hypergraph convolutional generative adversarial network (IHGC-GAN) method for disease risk prediction. First, a hypergraph is constructed with genes and brain regions as nodes. Then, an influence transmission model is built to portray the associations between nodes and the transmission rules of disease information. Third, an IHGC-GAN method is constructed based on this model. This method innovatively combines a graph convolutional network (GCN) with a GAN: the GCN is used as the generator in the GAN to spread and update the lesion information of nodes in the brain region-gene hypergraph. Finally, the prediction accuracy of the method is improved by mutual competition and repeated iteration between the generator and discriminator. This method can not only capture the evolutionary pattern from early mild cognitive impairment (EMCI) to late MCI (LMCI) but also extract the pathogenic factors and predict the deterioration risk from EMCI to LMCI. The results on the two datasets indicate that the IHGC-GAN method has better prediction performance than advanced methods on a variety of indicators.
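One standard form of hypergraph convolution, plausibly underlying the hypergraph propagation described here, uses the normalized operator Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}. The sketch below builds it for a toy incidence matrix; all nodes, hyperedges, and weights are invented:

```python
import numpy as np

rng = np.random.default_rng(8)

# Incidence matrix H: 6 nodes (brain regions/genes) x 3 hyperedges;
# each hyperedge groups nodes that share disease information.
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1],
              [1, 0, 1]], dtype=float)
w = np.array([1.0, 0.5, 2.0])   # hyperedge weights

dv = H @ w       # node degrees
de = H.sum(0)    # hyperedge degrees

# Normalized hypergraph propagation operator:
# Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
Theta = (np.diag(dv ** -0.5) @ H @ np.diag(w)
         @ np.diag(1.0 / de) @ H.T @ np.diag(dv ** -0.5))

X = rng.normal(size=(6, 4))     # node features
X_new = Theta @ X               # one hypergraph smoothing step
print(Theta.shape, X_new.shape)
```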
Affiliation(s)
- Xia-An Bi
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and the College of Information Science and Engineering in Hunan Normal University, Changsha 410081, P.R. China
- Lou Li
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Zizheng Wang
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yu Wang
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Xun Luo
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and the College of Information Science and Engineering in Hunan Normal University, Changsha 410081, P.R. China
- Luyun Xu
- College of Business, Hunan Normal University, Changsha 410081, P.R. China
|
36
|
Cui W, Yan C, Yan Z, Peng Y, Leng Y, Liu C, Chen S, Jiang X, Zheng J, Yang X. BMNet: A New Region-Based Metric Learning Method for Early Alzheimer's Disease Identification With FDG-PET Images. Front Neurosci 2022; 16:831533. [PMID: 35281501 PMCID: PMC8908419 DOI: 10.3389/fnins.2022.831533] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 01/11/2022] [Indexed: 12/21/2022] Open
Abstract
18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) reveals altered brain metabolism in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD). Some biomarkers derived from FDG-PET by computer-aided diagnosis (CAD) technologies have been shown to accurately distinguish normal controls (NC), MCI, and AD. However, existing FDG-PET-based research is still insufficient for the identification of early MCI (EMCI) and late MCI (LMCI). Compared with methods based on other modalities, current FDG-PET methods also make inadequate use of inter-region features for the diagnosis of early AD. Moreover, considering the variability among individuals, some hard samples that are very similar to both classes limit classification performance. To tackle these problems, in this paper we propose a novel bilinear pooling and metric learning network (BMNet), which can extract inter-region representation features and distinguish hard samples by constructing an embedding space. To validate the proposed method, we collected 898 FDG-PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 263 normal control (NC) patients, 290 EMCI patients, 147 LMCI patients, and 198 AD patients. Following common preprocessing steps, 90 features are extracted from each FDG-PET image according to the Automated Anatomical Labeling (AAL) template and then fed into the proposed network. Extensive fivefold cross-validation experiments are performed for multiple two-class classifications. Experiments show that most metrics improve after adding the bilinear pooling module and the metric losses to the baseline model, respectively. Specifically, in the classification task between EMCI and LMCI, the specificity improves by 6.38% after adding the triplet metric loss, and the negative predictive value (NPV) improves by 3.45% after using the bilinear pooling module. In addition, the accuracy of classification between EMCI and LMCI reaches 79.64% using imbalanced FDG-PET images, illustrating that the proposed method yields a state-of-the-art classification accuracy between EMCI and LMCI based on PET images.
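Bilinear pooling of two region-feature vectors amounts to an outer product followed by the common signed square-root and L2-normalization post-processing. A minimal sketch with invented 8-dimensional features, not the BMNet implementation:

```python
import numpy as np

def bilinear_pool(u, v):
    """Bilinear pooling: outer product, flatten, signed square root, L2 normalize."""
    z = np.outer(u, v).ravel()            # pairwise feature interactions
    z = np.sign(z) * np.sqrt(np.abs(z))   # signed square root
    return z / (np.linalg.norm(z) + 1e-8)

rng = np.random.default_rng(9)
region_a = rng.normal(size=8)   # features of one ROI group (invented)
region_b = rng.normal(size=8)   # features of another ROI group (invented)

feat = bilinear_pool(region_a, region_b)
print(feat.shape)   # 64-dimensional inter-region representation
```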
Affiliation(s)
- Wenju Cui
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Caiying Yan
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Yunsong Peng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Yilin Leng
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Chenlu Liu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Shuangqing Chen
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Xi Jiang
- School of Life Sciences and Technology, The University of Electronic Science and Technology of China, Chengdu, China
- Jian Zheng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xiaodong Yang
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
|
37
|
Bi XA, Xing Z, Zhou W, Li L, Xu L. Pathogeny Detection for Mild Cognitive Impairment via Weighted Evolutionary Random Forest with Brain Imaging and Genetic Data. IEEE J Biomed Health Inform 2022; 26:3068-3079. [PMID: 35157601 DOI: 10.1109/jbhi.2022.3151084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Medical imaging and gene sequencing technologies have long been widely used to analyze the pathogenesis of mild cognitive impairment (MCI) and to make precise diagnoses. However, few studies fuse radiomics data with genomics data to make full use of the complementarity between different omics for detecting pathogenic factors of MCI. This paper performs multimodal fusion analysis based on functional magnetic resonance imaging (fMRI) data and single nucleotide polymorphism (SNP) data of MCI patients. Specifically, fusion features of the samples are first constructed by applying correlation analysis methods to the sequence information of regions of interest (ROIs) and digitized gene sequences. Then, by introducing a weighted evolution strategy into ensemble learning, a novel weighted evolutionary random forest (WERF) model is built to eliminate inefficient features. Consequently, with the help of WERF, an overall multimodal data analysis framework is established to effectively identify MCI patients and extract pathogenic factors. Based on data of MCI patients from the ADNI database and comparisons with existing popular methods, the superior performance of the framework is verified. Our study has great potential to become an effective tool for detecting pathogenic factors of MCI.
Collapse
|
38
|
A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics. Comput Biol Med 2022; 144:105253. [DOI: 10.1016/j.compbiomed.2022.105253] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 01/20/2022] [Indexed: 11/20/2022]
|
39
|
Yang J, Sui H, Jiao R, Zhang M, Zhao X, Wang L, Deng W, Liu X. Random-Forest-Algorithm-Based Applications of the Basic Characteristics and Serum and Imaging Biomarkers to Diagnose Mild Cognitive Impairment. Curr Alzheimer Res 2022; 19:76-83. [PMID: 35088670 PMCID: PMC9189735 DOI: 10.2174/1567205019666220128120927] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 12/04/2021] [Accepted: 01/13/2022] [Indexed: 11/24/2022]
Abstract
Background
Mild cognitive impairment (MCI) is considered the early stage of Alzheimer's disease (AD). The purpose of our study was to analyze basic characteristics and serum and imaging biomarkers for the diagnosis of MCI as a more objective and accurate approach.
Methods
The Montreal Cognitive Assessment was used to test 119 patients aged ≥65 years. Serum biomarkers including preprandial blood glucose, triglyceride, total cholesterol, Aβ1-40, Aβ1-42, and P-tau were measured. All subjects were scanned with a 1.5T MRI (GE Healthcare, WI, USA) to obtain DWI, DTI, and ASL images. DTI was used to calculate fractional anisotropy (FA), DWI to calculate the apparent diffusion coefficient (ADC), and ASL to calculate cerebral blood flow (CBF). All images were then registered to Montreal Neurological Institute (MNI) space. In 116 brain regions, the medians of FA, ADC, and CBF were extracted by automated anatomical labeling. Basic characteristics included gender, education level, and previous disease history of hypertension, diabetes, and coronary heart disease. The data were randomly divided into training and test sets. A recursive random forest algorithm was applied to the diagnosis of MCI, and recursive feature elimination (RFE) was used to screen the significant basic features and serum and imaging biomarkers. Overall accuracy, sensitivity, and specificity were calculated, as were the ROC curve and the area under the curve (AUC) of the test set.
Results
When the variables of the MCI diagnostic model were imaging biomarkers, the training accuracy of the random forest was 100%, test accuracy was 86.23%, sensitivity was 78.26%, and specificity was 100%. When combining the basic characteristics with the serum and imaging biomarkers as variables of the MCI diagnostic model, the training accuracy of the random forest was 100%, test accuracy was 97.23%, sensitivity was 94.44%, and specificity was 100%. RFE analysis showed that age, Aβ1-40, and cerebellum_4_6 were the most important basic feature, serum biomarker, and imaging biomarker, respectively.
Conclusion
Imaging biomarkers can effectively diagnose MCI. The diagnostic capacity of the basic-trait or serum biomarkers alone is limited, but their combination with imaging biomarkers improves diagnostic capacity, as indicated by the sensitivity of 94.44% and specificity of 100% in our model. As a machine learning method, random forest can help diagnose MCI effectively while screening important influencing factors.
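The random forest with recursive feature elimination described above maps directly onto scikit-learn; synthetic data stands in for the clinical and imaging biomarkers:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# 30 synthetic features stand in for basic, serum, and imaging biomarkers.
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# RFE repeatedly drops the least important features (by forest importance)
# until the requested number remain.
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=5).fit(X, y)
print(sorted(rfe.get_support(indices=True)))  # indices of retained features
```

The retained indices identify which biomarkers the forest relied on most, mirroring how the study surfaced age, Aβ1-40, and cerebellum_4_6.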
Collapse
Affiliation(s)
- Juan Yang
- Department of Neurology, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, 200092, China
- Department of Neurology, Shanghai Pudong New Area People's Hospital, Shanghai, 201299, China
| | - Haijing Sui
- Department of Radiology, Shanghai Pudong New Area People's Hospital, Shanghai, People's Republic of China
| | - Ronghong Jiao
- Department of Clinical Laboratory, Shanghai Pudong New Area People's Hospital, Shanghai, People's Republic of China
| | - Min Zhang
- hcit.ai Co., Shanghai, People's Republic of China
| | - Xiaohui Zhao
- Department of Neurology, Shanghai Pudong New Area People's Hospital, Shanghai, People's Republic of China
| | - Lingling Wang
- Department of Neurology, Shanghai Pudong New Area People's Hospital, Shanghai, People's Republic of China
| | - Wenping Deng
- Huawei Technology Co., Ltd., Shanghai, People's Republic of China
| | - Xueyuan Liu
- Department of Neurology, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, 200092, China
- Department of Neurology, Shanghai Pudong New Area People's Hospital, Shanghai, 201299, China
| |
Collapse
|
40
|
Zeng A, Rong H, Pan D, Jia L, Zhang Y, Zhao F, Peng S. Discovery of Genetic Biomarkers for Alzheimer's Disease Using Adaptive Convolutional Neural Networks Ensemble and Genome-Wide Association Studies. Interdiscip Sci 2021; 13:787-800. [PMID: 34410590 DOI: 10.1007/s12539-021-00470-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 07/01/2021] [Accepted: 08/01/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To identify candidate neuroimaging and genetic biomarkers for Alzheimer's disease (AD) and other brain disorders, especially little-investigated brain diseases, we advocate a data-driven approach that incorporates an adaptive classifier ensemble model, obtained by integrating a Convolutional Neural Network (CNN) and Ensemble Learning (EL) with a Genetic Algorithm (GA) (the CNN-EL-GA method), into Genome-Wide Association Studies (GWAS). METHODS First, a large number of CNN models were trained as base classifiers using coronal, sagittal, or transverse magnetic resonance imaging slices, respectively, and the CNN models with strong discriminability were then selected with the GA to build a single classifier ensemble for classifying AD. While the acquired classifier ensemble exhibited the highest generalization capability, the points of intersection were determined with the most discriminative coronal, sagittal, and transverse slices. Finally, we conducted GWAS on the genotype data and the phenotypes, i.e., the gray matter volumes of the ten most discriminative brain regions, namely those containing the most points of intersection. RESULTS Six genes (PCDH11X/Y, TPTE2, LOC107985902, MUC16, and LINC01621) as well as Single-Nucleotide Polymorphisms, e.g., rs36088804, rs34640393, rs2451078, rs10496214, rs17016520, rs2591597, rs9352767, and rs5941380, were identified. CONCLUSION This approach overcomes the limitations associated with the impact of subjective factors and dependence on prior knowledge, while adaptively achieving more robust and effective candidate biomarkers in a data-driven way. SIGNIFICANCE The approach is promising for discovering effective candidate genetic biomarkers for brain disorders, as well as for improving the effectiveness of identified candidate neuroimaging biomarkers for brain diseases.
Collapse
Affiliation(s)
- An Zeng
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Huabin Rong
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Dan Pan
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, 510665, People's Republic of China.
| | - Longfei Jia
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Yiqun Zhang
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Fengyi Zhao
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Shaoliang Peng
- College of Computer Science and Electronic Engineering, Hunan University, School of Computer Science, National University of Defense Technology, Peng Cheng Lab, Shenzhen, 518000, People's Republic of China.
| |
Collapse
|
41
|
Song X, Mao M, Qian X. Auto-Metric Graph Neural Network Based on a Meta-Learning Strategy for the Diagnosis of Alzheimer's Disease. IEEE J Biomed Health Inform 2021; 25:3141-3152. [PMID: 33493122 DOI: 10.1109/jbhi.2021.3053568] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Alzheimer's disease (AD) is the most common cognitive disorder. In recent years, many computer-aided diagnosis techniques have been proposed for AD diagnosis and progression predictions. Among them, graph neural networks (GNNs) have received extensive attention owing to their ability to effectively fuse multimodal features and model the correlation between samples. However, many GNNs for node classification use an entire dataset to construct a large fixed-graph structure, which cannot be used for independent testing. To overcome this limitation while maintaining the advantages of the GNN, we propose an auto-metric GNN (AMGNN) model for AD diagnosis. First, a metric-based meta-learning strategy is introduced to realize inductive learning for independent testing through multiple node classification tasks. In the meta-tasks, the small graphs help make the model insensitive to the sample size, thus improving the performance under small sample size conditions. Furthermore, an AMGNN layer with a probability constraint is designed to realize node similarity metric learning and effectively fuse multimodal data. We verified the model on two tasks based on the TADPOLE dataset: early AD diagnosis and mild cognitive impairment (MCI) conversion prediction. Our model provides excellent performance on both tasks with accuracies of 94.44% and 87.50% and median accuracies of 94.19% and 86.25%, respectively. These results show that our model improves flexibility while ensuring a good classification performance, thus promoting the development of graph-based deep learning algorithms for disease diagnosis.
Collapse
|
42
|
Bi XA, Zhou W, Li L, Xing Z. Detecting Risk Gene and Pathogenic Brain Region in EMCI Using a Novel GERF Algorithm Based on Brain Imaging and Genetic Data. IEEE J Biomed Health Inform 2021; 25:3019-3028. [PMID: 33750717 DOI: 10.1109/jbhi.2021.3067798] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Fusion analysis of disease-related multi-modal data is becoming increasingly important for illuminating the pathogenesis of complex brain diseases. However, owing to the small amount and high dimensionality of multi-modal data, current machine learning methods do not fully achieve high veracity and reliability in fusion feature selection. In this paper, we propose a genetic-evolutionary random forest (GERF) algorithm to discover the risk genes and disease-related brain regions of early mild cognitive impairment (EMCI) based on genetic data and resting-state functional magnetic resonance imaging (rs-fMRI) data. A classical correlation analysis method is used to explore the association between brain regions and genes, and fusion features are constructed. The genetic-evolutionary idea is introduced to enhance classification performance and to extract the optimal features effectively. The proposed GERF algorithm is evaluated on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the results show that the algorithm achieves satisfactory classification accuracy in small-sample learning. Moreover, we compare the GERF algorithm with other methods to demonstrate its superiority. Furthermore, we propose an overall framework for detecting pathogenic factors, which can be accurately and efficiently applied to multi-modal data analysis of EMCI and extended to other diseases. This work provides novel insight for the early diagnosis and clinicopathologic analysis of EMCI, helping clinical medicine control further deterioration of the disease and supporting accurate targeting of transcranial magnetic stimulation.
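The correlation-based fusion-feature construction can be sketched as a Pearson correlation matrix between ROI features and SNPs across subjects; the data below are synthetic placeholders for the rs-fMRI and genetic measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
roi = rng.standard_normal((40, 90))               # 40 subjects x 90 ROI features
snp = rng.integers(0, 3, (40, 50)).astype(float)  # 40 subjects x 50 SNP dosages

# Standardize each column, then a matrix product gives all pairwise
# Pearson correlations at once.
roi_c = (roi - roi.mean(0)) / roi.std(0)
snp_c = (snp - snp.mean(0)) / snp.std(0)
corr = roi_c.T @ snp_c / roi.shape[0]             # (90, 50) ROI-gene correlations
print(corr.shape)
```

Each entry measures how strongly one brain region co-varies with one SNP across subjects; thresholding or ranking this matrix yields candidate fusion features.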
Collapse
|
43
|
Alzheimer's disease diagnosis framework from incomplete multimodal data using convolutional neural networks. J Biomed Inform 2021; 121:103863. [PMID: 34229061 DOI: 10.1016/j.jbi.2021.103863] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 06/09/2021] [Accepted: 07/01/2021] [Indexed: 11/23/2022]
Abstract
Alzheimer's disease (AD) is a severe, irreversible neurodegenerative disease that causes great suffering to patients and eventually leads to death. Early detection of AD and its prodromal stage, mild cognitive impairment (MCI), which can be either stable (sMCI) or progressive (pMCI), is highly desirable for effective treatment planning and tailored therapy. Recent studies recommend using multimodal data fusion of genetic data (single nucleotide polymorphisms, SNPs) and neuroimaging data (magnetic resonance imaging (MRI) and positron emission tomography (PET)) to discriminate AD/MCI from normal control (NC) subjects. However, missing multimodal data in the cohort under study is inevitable. In addition, heterogeneity between phenotype and genotype biomarkers makes model learning more challenging. Moreover, current studies mainly focus on brain disease classification and ignore the regression task, and they rely on multistage pipelines to predict brain disease progression. To address these issues, we propose a novel multimodal neuroimaging and genetic data fusion method for joint classification and clinical score regression that uses the maximum number of available samples in one unified framework based on a convolutional neural network (CNN). Specifically, we first apply a technique based on linear interpolation to fill the missing features for each incomplete sample. Then, we learn neuroimaging features from MRI, PET, and SNPs using the CNN to alleviate the heterogeneity between genotype and phenotype data. Meanwhile, the high-level features learned from each modality are combined for jointly identifying brain diseases and predicting clinical scores. To validate the performance of the proposed method, we test it on 805 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We also verify the similarity between the synthetic and real data using statistical analysis. Moreover, the experimental results demonstrate that the proposed method yields better performance in both classification and regression tasks. Specifically, it achieves accuracies of 98.22%, 93.11%, and 97.35% for NC vs. AD, NC vs. sMCI, and NC vs. pMCI, respectively. It also attains the lowest root mean square error and the highest correlation coefficient for different clinical score regression tasks compared with state-of-the-art methods.
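The missing-feature filling step can be sketched with column-wise linear interpolation (using pandas here; the paper's exact interpolation scheme may differ):

```python
import numpy as np
import pandas as pd

# Each row is a subject, each column a feature; NaN marks a missing value.
X = pd.DataFrame([[1.0, 2.0],
                  [np.nan, 4.0],
                  [3.0, np.nan],
                  [5.0, 8.0]])

# Linear interpolation along each column; limit_direction="both" also
# fills leading/trailing gaps by extension.
X_filled = X.interpolate(method="linear", limit_direction="both")
print(X_filled.values.tolist())
```

Each gap is replaced by the value midway between its nearest observed neighbors in the same column, so the completed matrix can feed a CNN that expects dense inputs.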
Collapse
|
44
|
Veluppal A, Sadhukhan D, Gopinath V, Swaminathan R. Detection of Mild Cognitive Impairment using Kernel Density Estimation based texture analysis of the Corpus Callosum in brain MR images. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
45
|
Ouyang Y, Cui D, Yuan Z, Liu Z, Jiao Q, Yin T, Qiu J. Analysis of Age-Related White Matter Microstructures Based on Diffusion Tensor Imaging. Front Aging Neurosci 2021; 13:664911. [PMID: 34262444 PMCID: PMC8273390 DOI: 10.3389/fnagi.2021.664911] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Accepted: 04/14/2021] [Indexed: 12/04/2022] Open
Abstract
Population aging has become a serious social problem. Accordingly, much research focuses on changes in the brains of the elderly. In this study, we used multiple parameters to analyze age-related changes in white matter fibers. A sample cohort of 58 individuals was divided into young and middle-age groups, and tract-based spatial statistics (TBSS) were used to analyze the differences in fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) between the two groups. Deterministic fiber tracking was used to investigate the correlation of fiber number and fiber length with age. The TBSS analysis revealed significant differences in FA, MD, AD, and RD in multiple white matter fibers between the two groups. In the middle-age group, FA and AD were lower than in young people, whereas the MD and RD values were higher. Deterministic fiber tracking showed that the length of some fibers correlated positively with age. These fibers were observed in the splenium of the corpus callosum (SCC), the posterior limb of the internal capsule (PLIC), the right posterior corona radiata (PCR_R), the anterior corona radiata (ACR), the left posterior thalamic radiation (including optic radiation; PTR_L), and the left superior longitudinal fasciculus (SLF_L), among others. The results showed that the SCC, PLIC, PCR_R, ACR, PTR_L, and SLF_L significantly differed between young and middle-age people. Therefore, we believe that these fibers could be used as imaging markers of age-related white matter changes.
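The four DTI scalars compared above are standard functions of the diffusion tensor eigenvalues; a minimal sketch with typical white-matter values (the eigenvalues here are illustrative, not from the study):

```python
import numpy as np

def dti_metrics(l1, l2, l3):
    """Standard DTI scalars from tensor eigenvalues (l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0                 # mean diffusivity
    ad = l1                                   # axial diffusivity
    rd = (l2 + l3) / 2.0                      # radial diffusivity
    num = np.sqrt((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
    den = np.sqrt(l1**2 + l2**2 + l3**2)
    fa = np.sqrt(1.5) * num / den             # fractional anisotropy in [0, 1]
    return fa, md, ad, rd

# Illustrative eigenvalues (mm^2/s) for a coherent white-matter voxel.
fa, md, ad, rd = dti_metrics(1.7e-3, 0.3e-3, 0.2e-3)
print(round(fa, 3))
```

FA near 1 indicates highly directional diffusion (coherent fibers), while higher RD with lower FA is the pattern the study reports in the middle-age group.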
Collapse
Affiliation(s)
- Yahui Ouyang
- Medical Engineering and Technology Research Center, Shandong First Medical University (Shandong Academy of Medical Sciences), Tai'an, China; College of Radiology, Shandong First Medical University (Shandong Academy of Medical Sciences), Tai'an, China
| | - Dong Cui
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
| | - Zilong Yuan
- Department of Radiology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Zhipeng Liu
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
| | - Qing Jiao
- College of Radiology, Shandong First Medical University (Shandong Academy of Medical Sciences), Tai'an, China
| | - Tao Yin
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin, China
| | - Jianfeng Qiu
- Medical Engineering and Technology Research Center, Shandong First Medical University (Shandong Academy of Medical Sciences), Tai'an, China; College of Radiology, Shandong First Medical University (Shandong Academy of Medical Sciences), Tai'an, China
| |
Collapse
|
46
|
Wang M, Shao W, Hao X, Zhang D. Identify Complex Imaging Genetic Patterns via Fusion Self-Expressive Network Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1673-1686. [PMID: 33661732 DOI: 10.1109/tmi.2021.3063785] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
In brain imaging genetics studies, it is a challenging task to estimate the association between quantitative traits (QTs) extracted from neuroimaging data and genetic markers such as single-nucleotide polymorphisms (SNPs). Most existing association studies are based on extensions of sparse canonical correlation analysis (SCCA) for the identification of complex bi-multivariate associations, which can take specific structure and group information into consideration. However, they often take the original data as input without considering its underlying complex multi-subspace structure, which deteriorates the performance of the subsequent integrative analysis. Accordingly, in this paper, the self-expressive property is exploited to reconstruct the original data before the association analysis, which describes the similarity structure well. Specifically, we first apply within-class similarity information to construct self-expressive networks by sparse representation. Then, we use a fusion method to iteratively fuse the self-expressive networks from multi-modality brain phenotypes into one network. Finally, we calculate the imaging genetic association based on the fused self-expressive network. We conduct experiments on both single-modality and multi-modality phenotype data. The experimental results validate that our method can not only better estimate the potential association between genetic markers and quantitative traits but also identify consistent multi-modality imaging genetic biomarkers to guide the interpretation of Alzheimer's disease.
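The self-expressive network construction via sparse representation can be sketched with a per-sample Lasso, where each sample is reconstructed from the others (a generic formulation on synthetic data; the paper additionally uses within-class similarity constraints):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 15))          # 30 features x 15 samples (columns)
n = X.shape[1]
C = np.zeros((n, n))                       # self-expressive coefficient matrix

for i in range(n):
    # Express sample i as a sparse combination of all other samples;
    # excluding column i keeps the diagonal at zero (no self-reconstruction).
    others = np.delete(X, i, axis=1)
    coef = Lasso(alpha=0.1).fit(others, X[:, i]).coef_
    C[np.arange(n) != i, i] = coef

W = (np.abs(C) + np.abs(C).T) / 2          # symmetrized similarity network
print(W.shape)
```

Nonzero entries of W link samples that help reconstruct each other, which is the similarity structure the fusion step then combines across modalities.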
Collapse
|
47
|
Ning Z, Xiao Q, Feng Q, Chen W, Zhang Y. Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1632-1645. [PMID: 33651685 DOI: 10.1109/tmi.2021.3063150] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD) by providing complementary structural and functional information. However, most existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, how to overcome the overfitting caused by high-dimensional multi-modal data remains an open challenge. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of our proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2), and the experimental results demonstrate that our proposed method outperforms several state-of-the-art methods.
Collapse
|
48
|
Feng Q, Ding Z. MRI Radiomics Classification and Prediction in Alzheimer's Disease and Mild Cognitive Impairment: A Review. Curr Alzheimer Res 2021; 17:297-309. [PMID: 32124697 DOI: 10.2174/1567205017666200303105016] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Revised: 02/03/2020] [Accepted: 03/01/2020] [Indexed: 01/18/2023]
Abstract
BACKGROUND Alzheimer's Disease (AD) is a progressive neurodegenerative disease that threatens the health of the elderly. Mild Cognitive Impairment (MCI) is considered the prodromal stage of AD. To date, AD or MCI diagnosis is established only after irreversible brain structure alterations. Therefore, the development of new biomarkers is crucial to the early detection and treatment of this disease. At present, several studies show that radiomics analysis can be a good diagnosis and classification method for AD and MCI. OBJECTIVE An extensive review of the literature was carried out to explore the application of radiomics analysis in diagnosis and classification among AD patients, MCI patients, and Normal Controls (NCs). RESULTS Thirty completed MRI radiomics studies were finally selected for inclusion. The process of radiomics analysis usually includes image data acquisition, Region of Interest (ROI) segmentation, feature extraction, feature selection, and classification or prediction. Among these radiomics methods, texture analysis accounted for a large share. The extracted features include histogram, shape-based, texture-based, and wavelet features, the Gray Level Co-Occurrence Matrix (GLCM), and the Run-Length Matrix (RLM). CONCLUSION Although radiomics analysis is already applied to AD and MCI diagnosis and classification, there is still a long way to go from these computer-aided diagnostic methods to clinical application.
Collapse
Affiliation(s)
- Qi Feng
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Zhongxiang Ding
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, China; Translational Medicine Research Center, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, China
| |
Collapse
|
49
|
|
50
|
Zhang X, Yang Y, Li T, Zhang Y, Wang H, Fujita H. CMC: A consensus multi-view clustering model for predicting Alzheimer's disease progression. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 199:105895. [PMID: 33341477 DOI: 10.1016/j.cmpb.2020.105895] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 11/29/2020] [Indexed: 06/12/2023]
Abstract
Machine learning has been used for the auxiliary diagnosis of Alzheimer's Disease (AD). However, most existing technologies explore only single-view data, require manual parameter setting, and focus on two-class (i.e., dementia or not) classification problems. Unlike single-view data, multi-view data provide more powerful feature representation capability. Learning with multi-view data is referred to as multi-view learning, which has received growing attention in recent years. In this paper, we propose a new multi-view clustering model called Consensus Multi-view Clustering (CMC), based on nonnegative matrix factorization, for predicting the multiple stages of AD progression. The proposed CMC applies the multi-view learning idea to fully capture data features with limited medical images, approximates the similarity relations between different entities, addresses the shortcoming of multi-view fusion that requires manually set parameters, and further acquires a consensus representation containing the shared features and complementary knowledge of the multiple views. It not only improves the prediction performance for AD but also screens and classifies the symptoms of different AD phases. Experimental results using data with twelve views, constructed from the brain Magnetic Resonance Imaging (MRI) database of the Alzheimer's Disease Neuroimaging Initiative, demonstrate the effectiveness of the proposed model.
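The consensus idea can be sketched with scikit-learn: factor each view by NMF, then cluster an averaged consensus representation (the paper learns the consensus jointly and sets parameters automatically rather than by simple averaging; the two views below are synthetic):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic nonnegative views of the same 60 subjects.
views = [np.abs(rng.standard_normal((60, d))) for d in (20, 35)]

# Factor each view X ~ W @ H; W is the per-view sample representation.
H = [NMF(n_components=3, init="nndsvda", max_iter=500,
         random_state=0).fit_transform(v) for v in views]

consensus = sum(H) / len(H)                 # shared sample representation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(consensus)
print(labels.shape)
```

Clustering the consensus rather than any single view is what lets shared structure dominate view-specific noise, which is the core of the CMC design.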
Collapse
Affiliation(s)
- Xiaobo Zhang
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China; Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China
| | - Yan Yang
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China; Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China.
| | - Tianrui Li
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China; Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China
| | - Yiling Zhang
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China; Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China
| | - Hao Wang
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu 611756, China; Institute of Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Southwest Jiaotong University, Chengdu 611756, China
| | - Hamido Fujita
- Faculty of Software and Information Science, Iwate Prefectural University, Iwate, Japan
| |
Collapse
|