1. Xi D, Cui D, Zhang M, Zhang J, Shang M, Guo L, Han J, Du L. Identification of genetic basis of brain imaging by group sparse multi-task learning leveraging summary statistics. Comput Struct Biotechnol J 2024;23:3288-3299. [PMID: 39296810] [PMCID: PMC11409045] [DOI: 10.1016/j.csbj.2024.08.027]
Abstract
Brain imaging genetics is an evolving neuroscience topic aiming to identify genetic variations related to neuroimaging measurements of interest. Traditional linear regression methods have shown success, but their reliance on individual-level imaging and genetic data limits their applicability. Herein, we proposed S-GsMTLR, a group sparse multi-task linear regression method designed to harness summary statistics from genome-wide association studies (GWAS) of neuroimaging quantitative traits. S-GsMTLR directly employs GWAS summary statistics, bypassing the requirement for raw imaging genetic data, and applies multivariate multi-task sparse learning to these univariate GWAS results. It amalgamates the strengths of conventional sparse learning methods, including sophisticated modeling techniques and efficient feature selection. Additionally, we implemented a rapid optimization strategy to alleviate computational burdens by identifying genetic variants associated with phenotypes of interest across the entire chromosome. We first evaluated S-GsMTLR using summary statistics derived from the Alzheimer's Disease Neuroimaging Initiative. The results were remarkably encouraging, demonstrating its comparability to conventional methods in modeling and identification of risk loci. Furthermore, our method was evaluated with two additional GWAS summary statistics datasets: One focused on white matter microstructures and the other on whole brain imaging phenotypes, where the original individual-level data was unavailable. The results not only highlighted S-GsMTLR's ability to pinpoint significant loci but also revealed intriguing structures within genetic variations and loci that went unnoticed by GWAS. These findings suggest that S-GsMTLR is a promising multivariate sparse learning method in brain imaging genetics. It eliminates the need for original individual-level imaging and genetic data while demonstrating commendable modeling and feature selection capabilities.
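The core idea behind this family of methods is an l2,1 (group-sparse) penalty that zeroes out entire rows of the coefficient matrix, so a SNP is either selected jointly for all imaging traits or dropped entirely. The toy sketch below illustrates that penalty with a plain proximal-gradient solver on individual-level data; it is not the authors' S-GsMTLR code (which works from GWAS summary statistics instead of raw data), and all variable names and parameter values are hypothetical.

```python
import numpy as np

def l21_prox(B, t):
    """Row-wise group soft-thresholding: prox of t * ||B||_{2,1}."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return B * scale

def group_sparse_mtl(X, Y, lam, n_iter=500):
    """Minimize 0.5*||Y - XB||_F^2 + lam*||B||_{2,1} by proximal gradient.

    Rows of B correspond to SNPs, columns to imaging QTs; the l2,1
    penalty removes whole rows, selecting SNPs shared across all QTs.
    """
    p, q = X.shape[1], Y.shape[1]
    B = np.zeros((p, q))
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y)
        B = l21_prox(B - grad / L, lam / L)
    return B

# toy data: 200 subjects, 50 SNPs, 3 imaging QTs; only 5 SNPs are causal
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
B_true = np.zeros((50, 3))
B_true[:5] = rng.standard_normal((5, 3))
Y = X @ B_true + 0.1 * rng.standard_normal((200, 3))
B_hat = group_sparse_mtl(X, Y, lam=5.0)
selected = np.where(np.linalg.norm(B_hat, axis=1) > 1e-6)[0]
```

On this toy problem, only rows among the five causal SNPs survive the row-wise shrinkage, which is the feature-selection behavior the abstract attributes to multivariate sparse learning.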
Affiliation(s)
- Duo Xi, Northwestern Polytechnical University, Xi'an, 710072, China
- Dingnan Cui, Northwestern Polytechnical University, Xi'an, 710072, China
- Jin Zhang, Northwestern Polytechnical University, Xi'an, 710072, China
- Muheng Shang, Northwestern Polytechnical University, Xi'an, 710072, China
- Lei Guo, Northwestern Polytechnical University, Xi'an, 710072, China
- Junwei Han, Northwestern Polytechnical University, Xi'an, 710072, China
- Lei Du, Northwestern Polytechnical University, Xi'an, 710072, China
2. Cheek CL, Lindner P, Grigorenko EL. Statistical and Machine Learning Analysis in Brain-Imaging Genetics: A Review of Methods. Behav Genet 2024;54:233-251. [PMID: 38336922] [DOI: 10.1007/s10519-024-10177-y]
Abstract
Brain-imaging-genetic analysis is an emerging field of research that aims at aggregating data from neuroimaging modalities, which characterize brain structure or function, and genetic data, which capture the structure and function of the genome, to explain or predict normal (or abnormal) brain performance. Brain-imaging-genetic studies offer great potential for understanding complex brain-related diseases/disorders of genetic etiology. Still, a combined brain-wide genome-wide analysis is difficult to perform as typical datasets fuse multiple modalities, each with high dimensionality, unique correlational landscapes, and often low statistical signal-to-noise ratios. In this review, we outline the progress in brain-imaging-genetic methodologies starting from early massive univariate to current deep learning approaches, highlighting each approach's strengths and weaknesses and elongating it with the field's development. We conclude by discussing selected remaining challenges and prospects for the field.
Affiliation(s)
- Connor L Cheek, Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA; Department of Physics, University of Houston, Houston, TX, USA
- Peggy Lindner, Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA; Department of Information Science Technology, University of Houston, Houston, TX, USA
- Elena L Grigorenko, Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA; Department of Psychology, University of Houston, Houston, TX, USA; Baylor College of Medicine, Houston, TX, USA; Sirius University of Science and Technology, Sochi, Russia
3. Wang T, Chen X, Zhang J, Feng Q, Huang M. Deep multimodality-disentangled association analysis network for imaging genetics in neurodegenerative diseases. Med Image Anal 2023;88:102842. [PMID: 37247468] [DOI: 10.1016/j.media.2023.102842]
Abstract
Imaging genetics is a crucial tool that is applied to explore potentially disease-related biomarkers, particularly for neurodegenerative diseases (NDs). With the development of imaging technology, the association analysis between multimodal imaging data and genetic data is gradually being concerned by a wide range of imaging genetics studies. However, multimodal data are fused first and then correlated with genetic data in traditional methods, which leads to an incomplete exploration of their common and complementary information. In addition, the inaccurate formulation in the complex relationships between imaging and genetic data and information loss caused by missing multimodal data are still open problems in imaging genetics studies. Therefore, in this study, a deep multimodality-disentangled association analysis network (DMAAN) is proposed to solve the aforementioned issues and detect the disease-related biomarkers of NDs simultaneously. First, the imaging data are nonlinearly projected into a latent space and imaging representations can be achieved. The imaging representations are further disentangled into common and specific parts by using a multimodal-disentangled module. Second, the genetic data are encoded to achieve genetic representations, and then, the achieved genetic representations are nonlinearly mapped to the common and specific imaging representations to build nonlinear associations between imaging and genetic data through an association analysis module. Moreover, modality mask vectors are synchronously synthesized to integrate the genetic and imaging data, which helps the following disease diagnosis. Finally, the proposed method achieves reasonable diagnosis performance via a disease diagnosis module and utilizes the label information to detect the disease-related modality-shared and modality-specific biomarkers. Furthermore, the genetic representation can be used to impute the missing multimodal data with our learning strategy. Two publicly available datasets with different NDs are used to demonstrate the effectiveness of the proposed DMAAN. The experimental results show that the proposed DMAAN can identify the disease-related biomarkers, which suggests the proposed DMAAN may provide new insights into the pathological mechanism and early diagnosis of NDs. The codes are publicly available at https://github.com/Meiyan88/DMAAN.
Affiliation(s)
- Tao Wang, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiumei Chen, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
4. Huang W, Tan K, Zhang Z, Hu J, Dong S. A Review of Fusion Methods for Omics and Imaging Data. IEEE/ACM Trans Comput Biol Bioinform 2023;20:74-93. [PMID: 35044920] [DOI: 10.1109/TCBB.2022.3143900]
Abstract
The development of omics data and biomedical images has greatly advanced the progress of precision medicine in diagnosis, treatment, and prognosis. The fusion of omics and imaging data, i.e., omics-imaging fusion, offers a new strategy for understanding complex diseases. However, due to a variety of issues such as the limited number of samples, high dimensionality of features, and heterogeneity of different data types, efficiently learning complementary or associated discriminative fusion information from omics and imaging data remains a challenge. Recently, numerous machine learning methods have been proposed to alleviate these problems. In this review, from the perspective of fusion levels and fusion methods, we first provide an overview of preprocessing and feature extraction methods for omics and imaging data, and comprehensively analyze and summarize the basic forms and variations of commonly used and newly emerging fusion methods, along with their advantages, disadvantages and the applicable scope. We then describe public datasets and compare experimental results of various fusion methods on the ADNI and TCGA datasets. Finally, we discuss future prospects and highlight remaining challenges in the field.
5. Durge AR, Shrimankar DD, Sawarkar AD. Heuristic Analysis of Genomic Sequence Processing Models for High Efficiency Prediction: A Statistical Perspective. Curr Genomics 2022;23:299-317. [PMID: 36778194] [PMCID: PMC9878859] [DOI: 10.2174/1389202923666220927105311]
Abstract
Genome sequences indicate a wide variety of characteristics, which include species and sub-species type, genotype, diseases, growth indicators, yield quality, etc. To analyze and study the characteristics of the genome sequences across different species, various deep learning models have been proposed by researchers, such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), Multilayer Perceptrons (MLPs), etc., which vary in terms of evaluation performance, area of application and species that are processed. Due to a wide differentiation between the algorithmic implementations, it becomes difficult for research programmers to select the best possible genome processing model for their application. In order to facilitate this selection, the paper reviews a wide variety of such models and compares their performance in terms of accuracy, area of application, computational complexity, processing delay, precision and recall. Thus, in the present review, various deep learning and machine learning models have been presented that possess different accuracies for different applications. For multiple genomic data, Repeated Incremental Pruning to Produce Error Reduction with Support Vector Machine (Ripper SVM) outputs 99.7% of accuracy, and for cancer genomic data, it exhibits 99.27% of accuracy using the CNN Bayesian method. Whereas for Covid genome analysis, Bidirectional Long Short-Term Memory with CNN (BiLSTM CNN) exhibits the highest accuracy of 99.95%. A similar analysis of precision and recall of different models has been reviewed. Finally, this paper concludes with some interesting observations related to the genomic processing models and recommends applications for their efficient use.
Affiliation(s)
- Aditi R. Durge, Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), Nagpur, India
- Deepti D. Shrimankar, Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), Nagpur, India (corresponding author; Tel: 9860606477)
- Ankush D. Sawarkar, Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology (VNIT), Nagpur, India
6. Zhang Y, Li C, Chen D, Tian R, Yan X, Zhou Y, Song Y, Yang Y, Wang X, Zhou B, Gao Y, Jiang Y, Zhang X. Repeated High-Definition Transcranial Direct Current Stimulation Modulated Temporal Variability of Brain Regions in Core Neurocognitive Networks Over the Left Dorsolateral Prefrontal Cortex in Mild Cognitive Impairment Patients. J Alzheimers Dis 2022;90:655-666. [DOI: 10.3233/JAD-220539]
Abstract
Background: Early intervention of amnestic mild cognitive impairment (aMCI) may be the most promising way for delaying or even preventing the progression to Alzheimer's disease. Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique that has been recognized as a promising approach for the treatment of aMCI. Objective: In this paper, we aimed to investigate the modulating mechanism of tDCS on the core neurocognitive networks of the brain. Methods: We used repeated anodal high-definition transcranial direct current stimulation (HD-tDCS) over the left dorsolateral prefrontal cortex and assessed the effect on cognition and the dynamic functional brain network in aMCI patients. We used a novel method called temporal variability to depict the characteristics of the dynamic brain functional networks. Results: We found that true anodal stimulation significantly improved cognitive performance as measured by the Montreal Cognitive Assessment after stimulation. Meanwhile, the Mini-Mental State Examination scores showed a clear upward trend. More importantly, we found significantly altered temporal variability of dynamic functional connectivity of regions belonging to the default mode network, central executive network, and the salience network after true anodal stimulation, indicating anodal HD-tDCS may enhance brain function by modulating the temporal variability of the brain regions. Conclusion: These results imply that ten days of repeated anodal HD-tDCS over the left dorsolateral prefrontal cortex exerts beneficial effects on the temporal variability of the functional architecture of the brain, which may be a potential neural mechanism by which HD-tDCS enhances brain functions. Repeated HD-tDCS may have clinical uses for the intervention of brain function decline in aMCI patients.
Affiliation(s)
- Yanchun Zhang, Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China; Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Chenxi Li, Department of the Psychology of Military Medicine, Air Force Medical University, Xi'an, Shaanxi, P.R. China
- Deqiang Chen, Department of CT, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Rui Tian, Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xinyue Yan, Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yingwen Zhou, Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yancheng Song, Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yanlong Yang, Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xiaoxuan Wang, Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Bo Zhou, Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China
- Yuhong Gao, Institute of Geriatrics, Second Medical Center, Chinese PLA General Hospital, Beijing, China
- Yujuan Jiang, Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xi Zhang, Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China
7. Zhang L, Yu M, Wang L, Steffens DC, Wu R, Potter GG, Liu M. Understanding Clinical Progression of Late-Life Depression to Alzheimer's Disease Over 5 Years with Structural MRI. Mach Learn Med Imaging (MLMI Workshop) 2022;13583:259-268. [PMID: 36594904] [PMCID: PMC9805302] [DOI: 10.1007/978-3-031-21014-3_27]
Abstract
Previous studies have shown that late-life depression (LLD) may be a precursor of neurodegenerative diseases and may increase the risk of dementia. At present, the pathological relationship between LLD and dementia, in particular Alzheimer's disease (AD), is unclear. Structural MRI (sMRI) can provide objective biomarkers for the computer-aided diagnosis of LLD and AD, providing a promising solution to understand the clinical progression of brain disorders. But few studies have focused on sMRI-based predictive analysis of clinical progression from LLD to AD. In this paper, we develop a deep learning method to predict the clinical progression of LLD to AD up to 5 years after baseline time using T1-weighted structural MRIs. We also analyze several important factors that limit the diagnostic performance of learning-based methods, including data imbalance, small sample size, and multi-site data heterogeneity, by leveraging a relatively large-scale database to aid model training. Experimental results on 308 subjects with sMRIs acquired from 2 imaging sites and the publicly available ADNI database demonstrate the potential of deep learning in predicting the clinical progression of LLD to AD. To the best of our knowledge, this is among the first attempts to explore the complex pathophysiological relationship between LLD and AD based on structural MRI using a deep learning method.
Affiliation(s)
- Lintao Zhang, School of Information Science and Engineering, Linyi University, Shandong, China; Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Minhui Yu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Lihong Wang, Department of Psychiatry, University of Connecticut School of Medicine, University of Connecticut, Farmington, CT, USA
- David C. Steffens, Department of Psychiatry, University of Connecticut School of Medicine, University of Connecticut, Farmington, CT, USA
- Rong Wu, Connecticut Convergence Institute for Translation in Regenerative Engineering, University of Connecticut Health, Farmington, USA
- Guy G. Potter, Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, NC, USA
- Mingxia Liu, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
8. Structure-constrained combination-based nonlinear association analysis between incomplete multimodal imaging and genetic data for biomarker detection of neurodegenerative diseases. Med Image Anal 2022;78:102419. [DOI: 10.1016/j.media.2022.102419]
9. Xin Y, Sheng J, Miao M, Wang L, Yang Z, Huang H. A review of imaging genetics in Alzheimer's disease. J Clin Neurosci 2022;100:155-163. [PMID: 35487021] [DOI: 10.1016/j.jocn.2022.04.017]
Abstract
Determining the association between genetic variation and phenotype is a key step in studying the mechanism of Alzheimer's disease (AD), laying the foundation for studying drug therapies and biomarkers. AD is the most common type of dementia in the aged population. At present, three early-onset AD genes (APP, PSEN1, PSEN2) and one late-onset AD susceptibility gene, apolipoprotein E (APOE), have been determined. However, the pathogenesis of AD remains unknown. Imaging genetics, an emerging interdisciplinary field, is able to reveal the complex mechanisms from the genetic level to human cognition and mental disorders via macroscopic intermediates. This paper reviews methods of establishing genotype-phenotype correlations, including sparse canonical correlation analysis, sparse reduced rank regression, sparse partial least squares, and so on. We found that most research work did poorly in supervised learning and in exploring the nonlinear relationships between SNPs and quantitative traits (QTs).
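Sparse canonical correlation analysis, the first technique this review names, seeks a sparse weight vector over SNPs and another over imaging QTs whose projections are maximally correlated. The sketch below is a minimal toy version via penalized alternating updates (in the spirit of penalized matrix decomposition SCCA), not any specific published implementation; the data, penalty values, and variable names are all hypothetical.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: prox of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_cca(X, Y, c1=0.2, c2=0.2, n_iter=100):
    """One pair of sparse canonical vectors by penalized alternating
    updates on the cross-covariance. X (n x p) and Y (n x q) are
    assumed column-centered; c1, c2 control sparsity of u and v."""
    n = X.shape[0]
    C = X.T @ Y / n                    # empirical cross-covariance
    v = np.linalg.svd(C)[2][0]         # init from leading right singular vector
    for _ in range(n_iter):
        u = soft(C @ v, c1)
        u /= max(np.linalg.norm(u), 1e-12)
        v = soft(C.T @ u, c2)
        v /= max(np.linalg.norm(v), 1e-12)
    return u, v

# toy data: one shared latent factor links SNP 0 to imaging QT 0
rng = np.random.default_rng(1)
n, p, q = 300, 20, 15
z = rng.standard_normal(n)
X = rng.standard_normal((n, p)); X[:, 0] += 2 * z
Y = rng.standard_normal((n, q)); Y[:, 0] += 2 * z
X -= X.mean(axis=0); Y -= Y.mean(axis=0)
u, v = sparse_cca(X, Y)
```

The soft-thresholding drives most weights exactly to zero, so the surviving nonzero entries of u and v name the associated SNP-QT pair directly, which is why these bi-multivariate models are popular for interpretable imaging-genetics association.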
Affiliation(s)
- Yu Xin, College of Computer Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang 310018, China
- Jinhua Sheng, College of Computer Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang 310018, China
- Miao Miao, Beijing Hospital, Beijing 100730, China; National Center of Gerontology, Beijing 100730, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing 100730, China
- Luyun Wang, College of Computer Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang 310018, China; Hangzhou Vocational & Technical College, Hangzhou, Zhejiang 310018, China
- Ze Yang, College of Computer Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang 310018, China
- He Huang, College of Computer Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China; Key Laboratory of Intelligent Image Analysis for Sensory and Cognitive Health, Ministry of Industry and Information Technology of China, Hangzhou, Zhejiang 310018, China
10. Lian C, Liu M, Pan Y, Shen D. Attention-Guided Hybrid Network for Dementia Diagnosis With Structural MR Images. IEEE Trans Cybern 2022;52:1992-2003. [PMID: 32721906] [PMCID: PMC7855081] [DOI: 10.1109/TCYB.2020.3005859]
Abstract
Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. As it is practically challenging to capture local and subtle disease-associated abnormalities directly from the whole-brain sMRI, most of those deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, very few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, those methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully utilize the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
11. Nghiem LH, Hui FKC, Muller S, Welsh AH. Screening methods for linear errors-in-variables models in high dimensions. Biometrics 2022. [PMID: 35191015] [DOI: 10.1111/biom.13628]
Abstract
Microarray studies, in order to identify genes associated with an outcome of interest, usually produce noisy measurements for a large number of gene expression features from a small number of subjects. One common approach to analyzing such high-dimensional data is to use linear errors-in-variables models; however, current methods for fitting such models are computationally expensive. In this paper, we present two efficient screening procedures, namely corrected penalized marginal screening and corrected sure independence screening, to reduce the number of variables for final model building. Both screening procedures are based on fitting corrected marginal regression models relating the outcome to each contaminated covariate separately, which can be computed efficiently even with a large number of features. Under mild conditions, we show that these procedures achieve screening consistency and reduce the number of features substantially, even when the number of covariates grows exponentially with sample size. Additionally, if the true covariates are weakly correlated, we show that corrected penalized marginal screening can achieve full variable selection consistency. Through a simulation study and an analysis of gene expression data for bone mineral density of Norwegian women, we demonstrate that the two new screening procedures make estimation of linear errors-in-variables models computationally scalable in high dimensional settings, and improve finite sample estimation and selection performance compared with estimators that do not employ a screening stage.
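The key idea of corrected marginal screening is that when covariates are observed with additive noise, W = X + U, the naive marginal slope cov(W_j, y)/var(W_j) is attenuated; dividing instead by var(W_j) minus the known noise variance removes the bias before ranking features. The sketch below is an illustrative toy version of that correction under a known, homogeneous noise variance; it is not the authors' procedure as published, and the data and parameter names are hypothetical.

```python
import numpy as np

def corrected_sis(W, y, sigma_u2, k):
    """Corrected marginal screening for W = X + U, Var(U_j) = sigma_u2.

    For each feature, compute the marginal slope with the denominator
    debiased by the measurement-error variance, then keep the k
    features with the largest corrected slopes in absolute value."""
    W = W - W.mean(axis=0)
    y = y - y.mean()
    n = W.shape[0]
    cov_wy = W.T @ y / n                       # marginal cross-covariances
    var_w = (W ** 2).mean(axis=0)              # contaminated variances
    beta = cov_wy / np.maximum(var_w - sigma_u2, 1e-12)  # bias correction
    return np.argsort(-np.abs(beta))[:k]

# toy data: y depends on the first three *true* covariates, but we only
# observe W = X + U with noise standard deviation 0.5 (sigma_u2 = 0.25)
rng = np.random.default_rng(2)
n, p = 500, 100
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] + 2 * X[:, 1] + 1.5 * X[:, 2] + rng.standard_normal(n)
W = X + 0.5 * rng.standard_normal((n, p))
selected = corrected_sis(W, y, sigma_u2=0.25, k=10)
```

Because each feature is handled separately, the screen costs only O(np), which is what makes it viable before fitting a full errors-in-variables model in high dimensions.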
Affiliation(s)
- Linh H Nghiem, Research School of Finance, Actuarial Studies and Statistics, Australian National University, ACT 2600, Australia; School of Mathematics and Statistics, The University of Sydney, NSW 2006, Australia
- Francis K C Hui, Research School of Finance, Actuarial Studies and Statistics, Australian National University, ACT 2600, Australia
- Samuel Muller, Department of Mathematics and Statistics, Macquarie University, NSW 2109, Australia
- A H Welsh, Research School of Finance, Actuarial Studies and Statistics, Australian National University, ACT 2600, Australia
12. Liu Y, Yue L, Xiao S, Yang W, Shen D, Liu M. Assessing clinical progression from subjective cognitive decline to mild cognitive impairment with incomplete multi-modal neuroimages. Med Image Anal 2022;75:102266. [PMID: 34700245] [PMCID: PMC8678365] [DOI: 10.1016/j.media.2021.102266]
Abstract
Accurately assessing clinical progression from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) is crucial for early intervention of pathological cognitive decline. Multi-modal neuroimaging data such as T1-weighted magnetic resonance imaging (MRI) and positron emission tomography (PET), help provide objective and supplementary disease biomarkers for computer-aided diagnosis of MCI. However, there are few studies dedicated to SCD progression prediction since subjects usually lack one or more imaging modalities. Besides, one usually has a limited number (e.g., tens) of SCD subjects, negatively affecting model robustness. To this end, we propose a Joint neuroimage Synthesis and Representation Learning (JSRL) framework for SCD conversion prediction using incomplete multi-modal neuroimages. The JSRL contains two components: 1) a generative adversarial network to synthesize missing images and generate multi-modal features, and 2) a classification network to fuse multi-modal features for SCD conversion prediction. The two components are incorporated into a joint learning framework by sharing the same features, encouraging effective fusion of multi-modal features for accurate prediction. A transfer learning strategy is employed in the proposed framework by leveraging model trained on the Alzheimer's Disease Neuroimaging Initiative (ADNI) with MRI and fluorodeoxyglucose PET from 863 subjects to both the Chinese Longitudinal Aging Study (CLAS) with only MRI from 76 SCD subjects and the Australian Imaging, Biomarkers and Lifestyle (AIBL) with MRI from 235 subjects. Experimental results suggest that the proposed JSRL yields superior performance in SCD and MCI conversion prediction and cross-database neuroimage synthesis, compared with several state-of-the-art methods.
Affiliation(s)
- Yunbi Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Ling Yue
- Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200240, China; Corresponding authors: M. Liu and L. Yue
- Shifu Xiao
- Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200240, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Corresponding authors: M. Liu and L. Yue
13
Zeng A, Rong H, Pan D, Jia L, Zhang Y, Zhao F, Peng S. Discovery of Genetic Biomarkers for Alzheimer's Disease Using Adaptive Convolutional Neural Networks Ensemble and Genome-Wide Association Studies. Interdiscip Sci 2021; 13:787-800. [PMID: 34410590] [DOI: 10.1007/s12539-021-00470-3]
Abstract
OBJECTIVE To identify candidate neuroimaging and genetic biomarkers for Alzheimer's disease (AD) and other brain disorders, especially little-investigated brain diseases, we advocate a data-driven approach that incorporates an adaptive classifier ensemble model, acquired by integrating a Convolutional Neural Network (CNN) and Ensemble Learning (EL) with a Genetic Algorithm (GA) (the CNN-EL-GA method), into Genome-Wide Association Studies (GWAS). METHODS First, a large number of CNN models were trained as base classifiers using coronal, sagittal, or transverse magnetic resonance imaging slices, and the CNN models with strong discriminability were then selected with the GA to build a single classifier ensemble for classifying AD. While the acquired classifier ensemble exhibited the highest generalization capability, the points of intersection were determined with the most discriminative coronal, sagittal, and transverse slices. Finally, we conducted GWAS on the genotype data and the phenotypes, i.e., the gray matter volumes of the ten most discriminative brain regions, which contained the most points of intersection. RESULTS Six genes (PCDH11X/Y, TPTE2, LOC107985902, MUC16 and LINC01621) as well as single-nucleotide polymorphisms (e.g., rs36088804, rs34640393, rs2451078, rs10496214, rs17016520, rs2591597, rs9352767 and rs5941380) were identified. CONCLUSION This approach overcomes the limitations associated with the impact of subjective factors and dependence on prior knowledge, while adaptively achieving more robust and effective candidate biomarkers in a data-driven way. SIGNIFICANCE The approach is promising for discovering effective candidate genetic biomarkers for brain disorders, as well as for helping improve the effectiveness of identified candidate neuroimaging biomarkers for brain diseases.
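The GA-driven ensemble selection described above can be illustrated with a toy sketch. This is not the authors' CNN-EL-GA implementation: precomputed base-classifier predictions stand in for trained CNNs, and the GA (uniform crossover, bit-flip mutation, elitist selection) evolves a bit-mask over classifiers to maximize majority-vote accuracy on a validation set.

```python
import numpy as np

def majority_vote(preds):
    """Majority vote over base classifiers; preds is (n_models, n_samples) in {0, 1}."""
    return (preds.mean(axis=0) >= 0.5).astype(int)

def ga_select(preds, y, pop_size=20, gens=30, p_mut=0.1, seed=0):
    """Toy GA: evolve a bit-mask over base classifiers that maximises
    majority-vote accuracy of the masked ensemble on validation labels y."""
    rng = np.random.default_rng(seed)
    n = preds.shape[0]

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return float((majority_vote(preds[mask.astype(bool)]) == y).mean())

    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]          # sort best-first (elitism)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size // 2):
            a, b = parents[rng.integers(0, len(parents), 2)]
            cross = rng.integers(0, 2, n).astype(bool)   # uniform crossover
            child = np.where(cross, a, b)
            child = np.where(rng.random(n) < p_mut, 1 - child, child)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in pop])
    return pop[scores.argmax()].astype(bool)
```

With a validation set containing some accurate and some unreliable base classifiers, the evolved mask tends to keep the accurate ones, mirroring the paper's idea of selecting CNNs with strong discriminability before ensembling.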
Affiliation(s)
- An Zeng
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Huabin Rong
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Dan Pan
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, 510665, People's Republic of China
- Longfei Jia
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Yiqun Zhang
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Fengyi Zhao
- Faculty of Computer, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
- Shaoliang Peng
- College of Computer Science and Electronic Engineering, Hunan University; School of Computer Science, National University of Defense Technology; Peng Cheng Lab, Shenzhen, 518000, People's Republic of China
14
Dai M, Xiao G, Fiondella L, Shao M, Zhang YS. Deep Learning-Enabled Resolution-Enhancement in Mini- and Regular Microscopy for Biomedical Imaging. Sensors and Actuators A: Physical 2021; 331:112928. [PMID: 34393376] [PMCID: PMC8362924] [DOI: 10.1016/j.sna.2021.112928]
Abstract
Artificial intelligence algorithms that aid mini-microscope imaging are attractive for numerous applications. In this paper, we optimize artificial intelligence techniques to provide clear and natural biomedical imaging. We demonstrate that a deep learning-enabled super-resolution method can significantly enhance the spatial resolution of mini-microscopy and regular microscopy. This data-driven approach trains a generative adversarial network to transform low-resolution images into super-resolved ones. Mini-microscopic and regular-microscopic images acquired with different optical microscopes under various magnifications are collected as our experimental benchmark datasets. The only inputs to this generative-adversarial-network-based method are images from the datasets down-sampled by bicubic interpolation. We use an independent test set to evaluate this deep learning approach against other deep learning-based algorithms through qualitative and quantitative comparisons. To clearly present the improvements achieved by this method, we zoom into local features to explore and highlight the qualitative differences. We also employ the peak signal-to-noise ratio and the structural similarity to quantitatively compare alternative super-resolution methods. The quantitative results illustrate that super-resolution images obtained from our approach with interpolation parameter α=0.25 more closely match the original high-resolution images than those obtained by any of the alternative state-of-the-art methods. These results are significant for fields that use microscopy tools, such as biomedical imaging of engineered living systems. We also utilize this generative-adversarial-network-based algorithm to optimize the resolution of biomedical specimen images and then generate three-dimensional reconstructions, so as to enhance three-dimensional imaging throughout entire volumes for spatio-temporal analyses of specimen structures.
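The peak signal-to-noise ratio used for the quantitative comparison above is a standard metric and can be sketched in a few lines (SSIM, the other metric mentioned, is omitted here for brevity):

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, an image differing from its 8-bit reference by a uniform error of 16 gray levels scores about 24.05 dB; higher values indicate a closer match to the high-resolution reference.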
Affiliation(s)
- Manna Dai
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA
- Gao Xiao
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Lance Fiondella
- Department of Electrical and Computer Engineering, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Ming Shao
- Department of Computer and Information Science, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA 02139, USA
15
Wen C, Ba H, Pan W, Huang M. Co-sparse reduced-rank regression for association analysis between imaging phenotypes and genetic variants. Bioinformatics 2021; 36:5214-5222. [PMID: 32683450] [DOI: 10.1093/bioinformatics/btaa650]
Abstract
MOTIVATION Association analysis between genetic variants and imaging phenotypes must be carried out to understand inherited neuropsychiatric disorders via imaging genetic studies. Given the high dimensionality of imaging and genetic data, traditional methods based on massive univariate regression entail a large computational cost and disregard many-to-many correlations between phenotypes and genetic variants. Several multivariate imaging genetic methods have been proposed to alleviate these problems. However, most of these methods are based on the l1 penalty, which might cause over-selection of variables and thus mislead scientists analyzing data in neuroimaging genetics. RESULTS To address these challenges in both statistics and computation, we propose a novel co-sparse reduced-rank regression model that identifies complex correlations in a dimension-reduction manner. We developed an iterative algorithm based on a group primal-dual active set formulation to simultaneously detect important genetic variants and imaging phenotypes efficiently and precisely via a non-convex penalty. Simulation studies showed that our method achieves accurate and stable performance in parameter estimation and variable selection. In a real application, the proposed approach successfully detected several novel Alzheimer's disease-related genetic variants and regions of interest, which indicates that our method may be a valuable statistical toolbox for imaging genetic studies. AVAILABILITY AND IMPLEMENTATION The R package csrrr and the code for the experiments in this article are available on GitHub: https://github.com/hailongba/csrrr. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
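For readers unfamiliar with the reduced-rank backbone of such models, here is a minimal sketch of classical reduced-rank regression (ordinary least squares followed by an SVD projection of the fitted values). The paper's csrrr method adds co-sparsity via a non-convex penalty and a group primal-dual active set algorithm, which this sketch deliberately omits:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression: OLS fit, then project the
    coefficient matrix onto the top-`rank` right singular directions
    of the fitted responses X @ B_ols."""
    # OLS coefficient matrix (p x q)
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # SVD of the fitted values; keep the leading `rank` response directions
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V_r = Vt[:rank].T                  # q x rank
    return B_ols @ V_r @ V_r.T         # rank-constrained coefficient matrix
```

On data generated from a genuinely low-rank coefficient matrix, the rank-constrained estimate recovers the truth while exposing the shared low-dimensional structure linking the two data views, which is the "many-to-many correlations" idea in the abstract.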
Affiliation(s)
- Canhong Wen
- International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
- Hailong Ba
- International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
- Wenliang Pan
- Department of Statistical Science, School of Mathematics, Sun Yat-Sen University, Guangzhou 510275, China
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
16
Huang M, Lai H, Yu Y, Chen X, Wang T, Feng Q. Deep-gated recurrent unit and diet network-based genome-wide association analysis for detecting the biomarkers of Alzheimer's disease. Med Image Anal 2021; 73:102189. [PMID: 34343841] [DOI: 10.1016/j.media.2021.102189]
Abstract
Genome-wide association analysis (GWAS) is a commonly used method to detect potential biomarkers of Alzheimer's disease (AD). Most existing GWAS methods entail a high computational cost, disregard correlations among imaging data and among genetic data, and ignore various associations between longitudinal imaging and genetic data. A novel GWAS method was proposed to identify potential AD biomarkers and address these problems. A network based on a gated recurrent unit was applied, without imputing incomplete longitudinal imaging data, to integrate longitudinal data of variable lengths and extract an image representation. In this study, a modified diet network that considerably reduces the number of parameters in the genetic network was proposed to perform GWAS between the image representation and genetic data; a genetic representation can be extracted in this way. A link between the genetic representation and AD was established to detect potential AD biomarkers. The proposed method was tested on a set of simulated data and a real AD dataset. Results on the simulated data showed that the proposed method can accurately detect relevant biomarkers. Moreover, results on the real AD dataset showed that the proposed method can detect some new AD risk genes. Based on previous reports, no research has incorporated a deep-learning model into a GWAS framework to investigate the potential information in super-high-dimensional genetic data and longitudinal imaging data and to create a link between imaging genetics and AD for detecting potential AD biomarkers. Therefore, the proposed method may provide new insights into the underlying pathological mechanism of AD.
Affiliation(s)
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yuwei Yu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
17
Huang M, Chen X, Yu Y, Lai H, Feng Q. Imaging Genetics Study Based on a Temporal Group Sparse Regression and Additive Model for Biomarker Detection of Alzheimer's Disease. IEEE Trans Med Imaging 2021; 40:1461-1473. [PMID: 33556003] [DOI: 10.1109/tmi.2021.3057660]
Abstract
Imaging genetics is an effective tool used to detect potential biomarkers of Alzheimer's disease (AD) in imaging and genetic data. Most existing imaging genetics methods analyze the association between brain imaging quantitative traits (QTs) and genetic data [e.g., single nucleotide polymorphisms (SNPs)] using a linear model, ignoring correlations between a set of QTs and SNP groups and disregarding the varied associations between longitudinal imaging QTs and SNPs. To solve these problems, we propose a novel temporal group sparsity regression and additive model (T-GSRAM) to identify associations between longitudinal imaging QTs and SNPs for the detection of potential AD biomarkers. We first construct a nonparametric regression model to analyze the nonlinear association between QTs and SNPs, which can accurately model the complex influence of SNPs on QTs. We then use longitudinal QTs to identify the trajectory of imaging genetic patterns over time. Moreover, SNP information at the group and individual levels is incorporated into the proposed method to boost the power of biomarker detection. Finally, we propose an efficient algorithm to solve the whole T-GSRAM model. We evaluated our method using simulated data and real data obtained from the Alzheimer's Disease Neuroimaging Initiative. Experimental results show that our proposed method outperforms several state-of-the-art methods in terms of receiver operating characteristic curves and area under the curve. Moreover, the detected AD-related genes and QTs have been confirmed in previous studies, further verifying the effectiveness of our approach and helping understand the genetic basis of disease progression over time.
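The group-level SNP sparsity referred to above is typically enforced with a group-lasso-type penalty, whose key operation is a blockwise shrinkage. This is a generic proximal-gradient sketch of group-sparse regression, not the authors' T-GSRAM (which is additionally nonparametric, additive, and temporal):

```python
import numpy as np

def group_prox(w, groups, lam):
    """Proximal operator of the group-lasso penalty: each group's
    coefficient block is shrunk toward zero as a unit, so a whole
    SNP group can be zeroed out together."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * w[g]
    return out

def group_sparse_regression(X, y, groups, lam=0.1, n_iter=500):
    """Proximal gradient descent for 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = group_prox(w - step * grad, groups, step * lam)
    return w
```

On synthetic data where only one SNP group truly affects the trait, the irrelevant group's coefficients are driven exactly to zero as a block, which is the feature-selection behavior the abstract relies on.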
18
Zhou J, Hu L, Jiang Y, Liu L. A Correlation Analysis between SNPs and ROIs of Alzheimer's Disease Based on Deep Learning. Biomed Res Int 2021; 2021:8890513. [PMID: 33628827] [PMCID: PMC7886593] [DOI: 10.1155/2021/8890513]
Abstract
Motivation. At present, machine learning-based research methods in Alzheimer's disease imaging genetics mainly involve three steps: first, preprocess the original image and gene information into digital signals that are easy to compute; second, perform feature selection to eliminate redundant signals and obtain representative features; and third, build a learning model and predict unknown data with regression or bivariate correlation analysis. This type of method requires manual extraction of feature single-nucleotide polymorphisms (SNPs), and the extraction process relies to a certain extent on empirical knowledge, such as linkage disequilibrium and gene function information in a group sparse model, which constrains the applicable scenarios and the expertise required of users. To solve the problems of insufficient biological significance and large errors in previous methods of association analysis and disease diagnosis, this paper presents a method for correlation analysis and disease diagnosis between SNPs and regions of interest (ROIs) based on a deep learning model. It is a data-driven method with no explicit feature selection process. Results. The deep learning method adopted in this paper has no explicit feature extraction process relying on prior knowledge and model assumptions. From the results of the correlation analysis between SNPs and ROIs, this method is complementary to regression-based methods in its application scenarios. To improve the disease diagnosis performance of deep learning, we use the deep learning model to integrate SNP and ROI features. The SNP feature, the ROI feature, and the SNP-ROI joint feature were input into the deep learning model and trained with cross-validation. The experimental results show that the SNP-ROI joint feature describes the information of the samples from different angles, yielding higher diagnosis accuracy.
Affiliation(s)
- Juan Zhou
- School of Software, East China Jiaotong University, Nanchang 330013, China
- Linfeng Hu
- School of Software, East China Jiaotong University, Nanchang 330013, China
- Yu Jiang
- School of Software, East China Jiaotong University, Nanchang 330013, China
- Liyue Liu
- School of Software, East China Jiaotong University, Nanchang 330013, China
19
Zhang Z, Ding J, Xu J, Tang J, Guo F. Multi-Scale Time-Series Kernel-Based Learning Method for Brain Disease Diagnosis. IEEE J Biomed Health Inform 2021; 25:209-217. [PMID: 32248130] [DOI: 10.1109/jbhi.2020.2983456]
Abstract
Functional magnetic resonance imaging (fMRI) is a noninvasive technique for studying brain activity, supporting tasks such as brain network analysis and automated diagnosis of neural diseases. However, many existing methods have drawbacks, such as the limitations of graph theory, lack of a global topology characteristic, local sensitivity of functional connectivity, and absence of temporal or context information. In addition to many numerical features, fMRI time series data also carry specific contextual knowledge and global fluctuation information. Here, we propose a multi-scale time-series kernel-based learning model for brain disease diagnosis, based on the Jensen-Shannon divergence. First, we calculate correlation values within and between brain regions over time. In addition, we extract a multi-scale synergy expression probability distribution (interactional relation) between brain regions, and we produce a state transition probability distribution (sequential relation) on single brain regions. Then, we build a time-series kernel-based learning model based on the Jensen-Shannon divergence to measure the similarity of brain functional connectivity. Finally, we provide an efficient system for brain network analysis and automated diagnosis of neural diseases. On the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, our proposed method achieves an accuracy of 0.8994 and an AUC of 0.8623; on a Major Depressive Disorder (MDD) dataset, it achieves an accuracy of 0.9166 and an AUC of 0.9263. Experiments show that our proposed method outperforms other existing automated diagnosis approaches, identifying brain diseases as accurately as or better than existing outstanding prediction tools.
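The Jensen-Shannon divergence at the heart of the kernel above compares two probability distributions symmetrically and, with a base-2 logarithm, is bounded in [0, 1]. A minimal sketch (the multi-scale distribution extraction from fMRI is not shown; the kernel form is a generic similarity construction, not necessarily the authors' exact one):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions
    (log base 2, so the value lies in [0, 1])."""
    p = np.asarray(p, dtype=float) + eps   # eps guards zero-probability bins
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_similarity(p, q):
    """Turn the bounded divergence into a similarity score in [0, 1]."""
    return 1.0 - js_divergence(p, q)
```

Identical distributions give divergence 0 (similarity 1), distributions with disjoint support give divergence 1, and the measure is symmetric, which makes it a convenient building block for kernel matrices over brain-region state distributions.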
20
Mao L, Zhang D, Chen Y, Zhang T, Song X. Deep aligned feature extraction for collaborative-representation-based face classification with group dictionary selection. Int J Adv Robot Syst 2020. [DOI: 10.1177/1729881420967577]
Abstract
Face recognition plays an important role in many robotic and human–computer interaction systems. To this end, in recent years, sparse-representation-based classification and its variants have drawn extensive attention in compressed sensing and pattern recognition. For image classification, one key to the success of a sparse-representation-based approach is to extract consistent image feature representations for images of the same subject captured under a wide spectrum of appearance variations, for example, in pose, expression and illumination. These variations can be categorized into two main types: geometric and textural. To eliminate the difficulties posed by such appearance variations, this article presents a new collaborative-representation-based face classification approach using deep aligned neural network features. More specifically, we first apply a facial landmark detection network to an input face image to obtain fine-grained geometric information in the form of a set of 2D facial landmarks. These landmarks are then used to perform 2D geometric alignment across different face images. Second, we apply a deep neural network for facial image feature extraction, owing to the robustness of deep image features to a variety of appearance variations. We use the term deep aligned features for this two-step feature extraction approach. Finally, a new collaborative-representation-based classification method is used to perform face classification. Specifically, we propose a group dictionary selection method for representation-based face classification to further boost performance and reduce uncertainty in decision-making. Experimental results obtained on several facial landmark detection and face classification datasets validate the effectiveness of the proposed method.
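The collaborative-representation classifier referenced above can be sketched in its basic ridge-regularized form (in the spirit of CRC-RLS): code the probe over the whole dictionary, then assign the class whose atoms give the smallest reconstruction residual. The group dictionary selection and deep aligned features of the paper are not included in this sketch:

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative representation classification: ridge-regularised
    coding of probe y over dictionary D (columns = training samples),
    followed by per-class residual comparison."""
    # coding vector x = (D^T D + lam I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    best_c, best_r = None, np.inf
    for c in sorted(set(labels)):
        idx = [i for i, l in enumerate(labels) if l == c]
        r = np.linalg.norm(y - D[:, idx] @ x[idx])   # class-specific residual
        if r < best_r:
            best_c, best_r = c, r
    return best_c
```

Because all classes collaborate in coding the probe, the closed-form ridge solution is cheap to compute, and classification reduces to comparing how well each class's atoms alone reconstruct the probe.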
Affiliation(s)
- Li Mao
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Delei Zhang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Youming Chen
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Tao Zhang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Xiaoning Song
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
21
Zhou J, Qiu Y, Chen S, Liu L, Liao H, Chen H, Lv S, Li X. A Novel Three-Stage Framework for Association Analysis Between SNPs and Brain Regions. Front Genet 2020; 11:572350. [PMID: 33193677] [PMCID: PMC7542238] [DOI: 10.3389/fgene.2020.572350]
Abstract
Motivation: At present, a number of correlation analysis methods between SNPs and ROIs have been devised to explore the pathogenic mechanism of Alzheimer's disease. However, these methods have inherent deficiencies, including a lack of statistical efficacy and of biological meaning. This study addresses two issues: the insufficient correlation achieved by previous methods (relatively high regression error) and the lack of biological meaning in association analysis. Results: In this paper, a novel three-stage SNP-ROI correlation analysis framework is proposed. First, a clustering algorithm is applied to remove potential linkage disequilibrium structure between SNPs. Then, a group sparse model is used to introduce prior information, such as gene structure and linkage disequilibrium structure, to select feature SNPs. After these steps, each SNP has a weight vector with one entry per ROI; the importance of an SNP can be judged from the weights in this vector, and the feature SNPs can then be selected. Finally, for the selected feature SNPs, a support vector regression model is used to predict the ROI phenotype values. Experimental results under multiple performance measures show that the proposed method achieves better accuracy than other methods.
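The second-stage selection step, judging each SNP by the weights it receives across ROIs, reduces to ranking rows of the learned weight matrix. A minimal sketch (the ranking criterion here is a plain row-wise l2 norm, assumed for illustration; the paper's group sparse model determines how W is learned):

```python
import numpy as np

def select_feature_snps(W, k):
    """Rank SNPs by the l2 norm of their weight vector across ROIs
    (one row of W per SNP, one column per ROI) and return the
    indices of the top-k SNPs."""
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(scores)[::-1][:k]
```

The selected indices would then feed the third stage, where a support vector regression model predicts ROI phenotype values from the retained feature SNPs.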
Affiliation(s)
- Juan Zhou
- School of Software, East China Jiaotong University, Nanchang, China
- Yangping Qiu
- School of Software, East China Jiaotong University, Nanchang, China
- Shuo Chen
- School of Software, East China Jiaotong University, Nanchang, China
- Liyue Liu
- School of Software, East China Jiaotong University, Nanchang, China
- Huifa Liao
- School of Software, East China Jiaotong University, Nanchang, China
- Hongli Chen
- School of Software, East China Jiaotong University, Nanchang, China
- Shanguo Lv
- School of Software, East China Jiaotong University, Nanchang, China
- Xiong Li
- School of Software, East China Jiaotong University, Nanchang, China
22
Self-calibrated brain network estimation and joint non-convex multi-task learning for identification of early Alzheimer's disease. Med Image Anal 2020; 61:101652. [DOI: 10.1016/j.media.2020.101652]
23
Yang J, Feng X, Laine AF, Angelini ED. Characterizing Alzheimer's Disease With Image and Genetic Biomarkers Using Supervised Topic Models. IEEE J Biomed Health Inform 2020; 24:1180-1187. [PMID: 31380772] [PMCID: PMC8938901] [DOI: 10.1109/jbhi.2019.2928831]
Abstract
Neuroimaging and genetic biomarkers have been widely studied from discriminative perspectives towards Alzheimer's disease (AD) classification, since neuroanatomical patterns and genetic variants are jointly critical indicators for AD diagnosis. Generative methods, designed to model common occurring patterns, could potentially advance the understanding of this disease, but have not been fully explored for AD characterization. Moreover, the introduction of a supervised component into the generative process can constrain the model for more discriminative characterization. In this study, we propose an original method based on supervised topic modeling to characterize AD from a generative perspective, yet maintaining discriminative power at differentiating disease populations. Our topic modeling jointly exploits discretized image features and categorical genetic features. Diagnostic information - cognitively normal (CN), mild cognitive impairment (MCI) and AD - is introduced as a supervision variable. Experimental results on the ADNI cohort demonstrate that our model, while achieving competitive discriminative performance, can discover topics revealing both well-known and novel neuroanatomical patterns including temporal, parietal and frontal regions; as well as associations between genetic factors and neuroanatomical patterns.
Affiliation(s)
- Jie Yang
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Xinyang Feng
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Andrew F. Laine
- Departments of Biomedical Engineering, Radiology and Columbia Magnetic Resonance Research Center, Columbia University, New York, NY, USA
- Elsa D. Angelini
- Departments of Biomedical Engineering and Radiology, Columbia University, New York, NY, USA; NIHR Imperial Biomedical Research Centre, ITMAT Data Science Group, Imperial College London, UK
24
Zhou T, Thung KH, Liu M, Shi F, Zhang C, Shen D. Multi-modal latent space inducing ensemble SVM classifier for early dementia diagnosis with neuroimaging data. Med Image Anal 2020; 60:101630. [PMID: 31927474] [PMCID: PMC8260095] [DOI: 10.1016/j.media.2019.101630]
Abstract
Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities can provide complementary perspectives on complex neurodegenerative diseases. However, there are at least four common issues associated with existing fusion methods. First, many existing fusion methods simply concatenate features from each modality without considering the correlations among different modalities. Second, most existing methods make predictions based on a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many existing methods employ feature selection (or reduction) and classifier training in two independent steps, without considering the fact that the two pipelined steps are highly related to each other. Fourth, there are missing neuroimaging data for some of the participants (e.g., missing PET data), due to the participants' "no-show" or dropout. In this paper, to address the above issues, we propose an early AD diagnosis framework via a novel multi-modality latent-space-inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset have demonstrated that our proposed models outperform other state-of-the-art methods.
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates.
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Feng Shi
- United Imaging Intelligence, Shanghai, China.
- Changqing Zhang
- School of Computer Science and Technology, Tianjin University, Tianjin 300072, China.
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
25
Adaptive sparse learning using multi-template for neurodegenerative disease diagnosis. Med Image Anal 2020; 61:101632. [PMID: 32028212] [DOI: 10.1016/j.media.2019.101632] [Received: 09/27/2019] [Revised: 12/17/2019] [Accepted: 12/20/2019]
Abstract
Neurodegenerative diseases affect millions of patients, especially elderly people. Early detection and management of these diseases are crucial, as clinical symptoms take years to appear after the onset of neurodegeneration. This paper proposes an adaptive feature learning framework using multiple templates for early diagnosis. A multi-classification scheme is developed based on multiple brain parcellation atlases with various regions of interest. Different sets of features are extracted and then fused, and feature selection is applied with an adaptively chosen sparsity degree. In addition, both linear discriminant analysis and locality preserving projections are integrated to construct a least-squares regression model. Finally, we propose a feature space to predict the severity of the disease under the guidance of clinical scores. Our proposed method is validated on both the Alzheimer's Disease Neuroimaging Initiative and Parkinson's Progression Markers Initiative databases. Extensive experimental results suggest that the proposed method outperforms state-of-the-art methods such as multi-modal multi-task learning and joint sparse learning. Our method demonstrates that accurate feature learning facilitates the identification of highly relevant brain regions that contribute significantly to the prediction of disease progression. This may pave the way for further medical analysis and diagnosis in practical applications.
26
Shen L, Thompson PM. Brain Imaging Genomics: Integrated Analysis and Machine Learning. Proc IEEE Inst Electr Electron Eng 2020; 108:125-162. [PMID: 31902950] [PMCID: PMC6941751] [DOI: 10.1109/jproc.2019.2947272]
Abstract
Brain imaging genomics is an emerging data science field in which integrated analysis of brain imaging and genomics data, often combined with other biomarker, clinical, and environmental data, is performed to gain new insights into the phenotypic, genetic, and molecular characteristics of the brain, as well as their impact on normal and disordered brain function and behavior. It has enormous potential to contribute significantly to biomedical discoveries in brain science. Given the increasingly important role of statistical and machine learning methods in biomedicine and the rapidly growing literature in brain imaging genomics, we provide an up-to-date and comprehensive review of statistical and machine learning methods for brain imaging genomics, as well as a practical discussion on method selection for various biomedical applications.
Affiliation(s)
- Li Shen
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, PA 19104, USA
- Paul M Thompson
- Imaging Genetics Center, Mark & Mary Stevens Institute for Neuroimaging & Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA 90232, USA
27
Zhou T, Liu M, Thung KH, Shen D. Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data. IEEE Trans Med Imaging 2019; 38:2411-2422. [PMID: 31021792] [PMCID: PMC8034601] [DOI: 10.1109/tmi.2019.2913158]
Abstract
The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality based AD diagnostic models are often hindered by missing data, i.e., not all subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain regions), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of the proposed method.
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
28
Fan J, Yang J, Wang Y, Yang S, Ai D, Huang Y, Song H, Wang Y, Shen D. Deep feature descriptor based hierarchical dense matching for X-ray angiographic images. Comput Methods Programs Biomed 2019; 175:233-242. [PMID: 31104711] [DOI: 10.1016/j.cmpb.2019.04.006] [Received: 10/16/2018] [Revised: 03/09/2019] [Accepted: 04/07/2019]
Abstract
BACKGROUND AND OBJECTIVE: X-ray angiography, a powerful technique for blood vessel visualization, is widely used for interventional diagnosis of coronary artery disease because of its fast imaging speed and perspective inspection ability. Matching feature points in angiographic images is a considerably challenging task due to repetitive weak-textured regions. METHODS: In this paper, we propose an angiographic image matching method based on a hierarchical dense matching framework, where a novel deep feature descriptor is designed to compute multilevel correlation maps. In particular, the deep feature descriptor is computed by a deep learning model specifically designed and trained for angiographic images, thereby making the correlation maps more distinctive for corresponding feature points in different angiographic images. Moreover, point correspondences are further hierarchically extracted from multilevel correlation maps with the highest similarity responses, which is relatively robust and accurate. To overcome the lack of training samples, the convolutional neural network (designed for the deep feature descriptor) is initially trained on samples from natural images and then fine-tuned on manually annotated angiographic images. Finally, a dense matching completion method, based on the distance between deep feature descriptors, is proposed to generate dense matches between images. RESULTS: The proposed method has been evaluated on the number and accuracy of extracted matches and on the performance of subtraction images. Experiments on a variety of angiographic images show promising matching accuracy compared with state-of-the-art methods. CONCLUSIONS: The proposed angiographic image matching method is shown to be accurate and effective for feature matching in angiographic images, and further achieves good performance in image subtraction.
Affiliation(s)
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
- Yachen Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Siyuan Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
29
Xiong J, Li X, Lu L, Lawrence SH, Fu X, Zhao J, Zhao B. Implementation strategy of a CNN model affects the performance of CT assessment of EGFR mutation status in lung cancer patients. IEEE Access 2019; 7:64583-64591. [PMID: 32953368] [PMCID: PMC7500487] [DOI: 10.1109/access.2019.2916557]
Abstract
OBJECTIVE: To compare CNN models implemented using different strategies in the CT assessment of EGFR mutation status in patients with lung adenocarcinoma. METHODS: 1,010 consecutive lung adenocarcinoma patients with known EGFR mutation status were randomly divided into a training set (n=810) and a testing set (n=200). CNN models were constructed based on the ResNet-101 architecture but implemented using different strategies: dimension filters (2D/3D), input sizes (small/middle/large and their fusion), slicing methods (transverse plane only and arbitrary multi-view planes), and training approaches (from scratch and fine-tuning a pre-trained CNN). The performance of the CNN models was compared using AUC. RESULTS: The fusion approach yielded consistently better performance than other input sizes, although the effect often did not reach statistical significance. Multi-view slicing was significantly superior to the transverse method when fine-tuning a pre-trained 2D CNN but not a CNN trained from scratch. The 3D CNN was significantly better than the 2D transverse plane method but only marginally better than the multi-view slicing method when trained from scratch. The highest performance (AUC=0.838) was achieved by the fine-tuned 2D CNN model built using the fusion input size and multi-view slicing method. CONCLUSION: The assessment of EGFR mutation status is more accurate when CNN models use more spatial information and are fine-tuned by transfer learning. Our findings on CNN implementation strategy could serve as guidance for other 3D medical imaging applications. Compared with other published studies that used medical images to identify EGFR mutation status, our CNN model achieved the best performance in the largest patient cohort.
Affiliation(s)
- Junfeng Xiong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
- Xiaoyang Li
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030 China
- Lin Lu
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030 China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China
- Binsheng Zhao
- Department of Radiology, Columbia University Medical Center, NY 10032 USA
30
Fan J, Cao X, Yap PT, Shen D. BIRNet: Brain image registration using dual-supervised fully convolutional networks. Med Image Anal 2019; 54:193-206. [PMID: 30939419] [DOI: 10.1016/j.media.2019.03.006] [Received: 01/25/2018] [Revised: 03/09/2019] [Accepted: 03/21/2019]
Abstract
In this paper, we propose a deep learning approach for image registration by predicting deformation from image appearance. Since obtaining ground-truth deformation fields for training can be challenging, we design a fully convolutional network that is subject to dual-guidance: (1) Ground-truth guidance using deformation fields obtained by an existing registration method; and (2) Image dissimilarity guidance using the difference between the images after registration. The latter guidance helps avoid overly relying on the supervision from the training deformation fields, which could be inaccurate. For effective training, we further improve the deep convolutional network with gap filling, hierarchical loss, and multi-source strategies. Experiments on a variety of datasets show promising registration accuracy and efficiency compared with state-of-the-art methods.
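The dual-guidance idea in this abstract (combining supervision from an existing method's deformation fields with an image-dissimilarity term) can be sketched as a two-term loss. The toy below is a 1-D numpy stand-in, not the paper's 3-D network: the field shapes, the linear-interpolation warp, and the trade-off weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "images" and displacement fields.
fixed    = rng.normal(size=64)
moving   = rng.normal(size=64)
phi_gt   = rng.normal(scale=0.5, size=64)               # field from an existing registration method
phi_pred = phi_gt + rng.normal(scale=0.1, size=64)      # the network's predicted field

def warp_1d(img, phi):
    """Warp img by displacement field phi using linear interpolation."""
    x = np.arange(img.size) + phi
    return np.interp(x, np.arange(img.size), img)

# Guidance 1: supervision from the (possibly imperfect) training deformation.
loss_deform = np.mean((phi_pred - phi_gt) ** 2)

# Guidance 2: image dissimilarity after applying the predicted deformation,
# which keeps training honest when the "ground-truth" field is inaccurate.
loss_dissim = np.mean((warp_1d(moving, phi_pred) - fixed) ** 2)

lam = 0.5  # trade-off weight; an arbitrary value for this sketch
loss = loss_deform + lam * loss_dissim
```

In the paper both terms guide one fully convolutional network; here they simply add into a scalar that a gradient-based optimizer would minimize.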
Affiliation(s)
- Jingfan Fan
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Xiaohuan Cao
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea.
31
Zhou T, Thung KH, Zhu X, Shen D. Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Hum Brain Mapp 2018; 40:1001-1016. [PMID: 30381863] [DOI: 10.1002/hbm.24428] [Received: 10/05/2017] [Revised: 09/04/2018] [Accepted: 10/03/2018] Open
Abstract
In this article, we aim to maximally utilize multimodality neuroimaging and genetic data to identify Alzheimer's disease (AD) and its prodromal status, mild cognitive impairment (MCI), from normal aging subjects. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. When these data are used together, the accuracy of AD diagnosis may be improved. However, these data are heterogeneous (e.g., with different data distributions) and have different numbers of samples (e.g., far fewer PET samples than MRI or SNP samples). Thus, learning an effective model using these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework in which a deep neural network is trained stage-wise. Each stage of the network learns feature representations for different combinations of modalities, via effective training using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity among modalities can be partially addressed and high-level features from different modalities can be combined in the next stage. In the second stage, we learn joint latent features for each pair of modalities by using the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. To further increase the number of samples during training, we also use data at multiple scanning time points for each training subject. We evaluate the proposed framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for AD diagnosis, and the experimental results show that it outperforms other state-of-the-art methods.
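The stage-wise structure described above (per-modality latents, pairwise joint features, then a fused readout) can be sketched with numpy stand-ins. Everything here is an illustrative assumption: the SVD projection replaces the paper's learned deep layers, the sample counts are invented, and pairing by row prefix replaces real subject matching.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy features for three modalities with different sample counts
# (e.g., fewer PET scans than MRI scans or SNP profiles).
n_mri, n_pet, n_snp, k = 80, 40, 80, 6
X = {"mri": rng.normal(size=(n_mri, 30)),
     "pet": rng.normal(size=(n_pet, 25)),
     "snp": rng.normal(size=(n_snp, 50))}

def learn_latent(Xm, k):
    """Stage 1 stand-in: per-modality latent features via truncated SVD,
    so each modality uses all of ITS available samples."""
    Xc = Xm - Xm.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

H = {m: learn_latent(Xm, k) for m, Xm in X.items()}

def joint_features(Ha, Hb):
    """Stage 2 stand-in: joint features for a modality pair, restricted to
    the samples both modalities share (here: the first min(n_a, n_b) rows)."""
    n = min(len(Ha), len(Hb))
    return np.hstack([Ha[:n], Hb[:n]])

J = joint_features(H["mri"], H["pet"])

# Stage 3 stand-in: fuse the joint features and fit a least-squares
# "diagnosis" readout on the subjects available at this stage.
y = rng.integers(0, 2, size=len(J)).astype(float)
w = np.linalg.lstsq(J, y, rcond=None)[0]
```

The point of the staging survives even in this toy: stage 1 trains on 80 MRI samples while stage 2 is limited to the 40 subjects with both modalities.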
Affiliation(s)
- Tao Zhou
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
- Kim-Han Thung
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
- Xiaofeng Zhu
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina
- Dinggang Shen
- Department of Radiology and the Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
32
Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer's Disease Diagnosis. Predictive Intelligence in Medicine. PRIME (Workshop) 2018; 11121:76-84. [PMID: 30788463] [PMCID: PMC6378693] [DOI: 10.1007/978-3-030-00320-3_10]
Abstract
Recent studies have shown that fusing multi-modal neuroimaging data can improve the performance of Alzheimer's disease (AD) diagnosis. However, most existing methods simply concatenate features from each modality without appropriate consideration of the correlations among modalities. In addition, existing methods often employ feature selection (or fusion) and classifier training as two independent steps, without considering the fact that the two pipelined steps are highly related to each other. Furthermore, existing methods that make predictions based on a single classifier may not be able to address the heterogeneity of AD progression. To address these issues, we propose a novel AD diagnosis framework based on latent space learning with ensemble classifiers, integrating latent representation learning and the learning of multiple diversified classifiers into a unified framework. To this end, we first project the neuroimaging data from different modalities into a common latent space and impose a joint sparsity constraint on the concatenated projection matrices. Then, we map the learned latent representations into the label space to learn multiple diversified classifiers and aggregate their predictions to obtain the final classification result. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that our method outperforms other state-of-the-art methods.