1. Qu G, Orlichenko A, Wang J, Zhang G, Xiao L, Zhang K, Wilson TW, Stephen JM, Calhoun VD, Wang YP. Interpretable Cognitive Ability Prediction: A Comprehensive Gated Graph Transformer Framework for Analyzing Functional Brain Networks. IEEE Trans Med Imaging 2024; 43:1568-1578. [PMID: 38109241] [PMCID: PMC11090410] [DOI: 10.1109/tmi.2023.3343365]
Abstract
Graph convolutional deep learning has emerged as a promising method to explore the functional organization of the human brain in neuroscience research. This paper presents a novel framework that utilizes the gated graph transformer (GGT) model to predict individuals' cognitive ability based on functional connectivity (FC) derived from fMRI. Our framework incorporates prior spatial knowledge and uses a random-walk diffusion strategy that captures the intricate structural and functional relationships between different brain regions. Specifically, our approach employs learnable structural and positional encodings (LSPE) in conjunction with a gating mechanism to efficiently disentangle the learning of positional encoding (PE) and graph embeddings. Additionally, we utilize the attention mechanism to derive multi-view node feature embeddings and dynamically distribute propagation weights between each node and its neighbors, which facilitates the identification of significant biomarkers from functional brain networks and thus enhances the interpretability of the findings. To evaluate our proposed model in cognitive ability prediction, we conduct experiments on two large-scale brain imaging datasets: the Philadelphia Neurodevelopmental Cohort (PNC) and the Human Connectome Project (HCP). The results show that our approach not only outperforms existing methods in prediction accuracy but also provides superior explainability, which can be used to identify important FCs underlying cognitive behaviors.
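The gating idea in the abstract, mixing an attention-weighted neighbourhood message with a separate positional-encoding stream, can be sketched in a few lines of numpy. This is a generic single-head illustration in which the random matrices (`Wq`, `Wk`, the gate vector) and the encodings `P` are invented stand-ins for learned quantities; it is not the authors' GGT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Toy functional-connectivity graph: 6 brain regions ("nodes"),
# each with a 4-dimensional feature vector.
n_nodes, d = 6, 4
X = rng.standard_normal((n_nodes, d))   # node features
P = rng.standard_normal((n_nodes, d))   # positional encodings (stand-in for LSPE)

# Attention scores between every node pair (single head, scaled dot-product);
# each row distributes propagation weight over that node's neighbours.
Wq = rng.standard_normal((d, d))
Wk = rng.standard_normal((d, d))
A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d), axis=1)  # rows sum to 1

# Gating: a per-node sigmoid gate decides how much of the attended
# neighbourhood message to mix with the positional stream.
gate = 1.0 / (1.0 + np.exp(-(X @ rng.standard_normal((d, 1)))))  # (n_nodes, 1)
H = gate * (A @ X) + (1.0 - gate) * P   # gated fusion of the two streams
```

The row-stochastic matrix `A` is what makes the attended weights interpretable: row i tells how strongly region i draws on each other region.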
2. Wang X, Zhao K, Yao L, Fonzo GA, Satterthwaite TD, Rekik I, Zhang Y. Delineating Transdiagnostic Subtypes in Neurodevelopmental Disorders via Contrastive Graph Machine Learning of Brain Connectivity Patterns. bioRxiv [Preprint] 2024:2024.02.29.582790. [PMID: 38496573] [PMCID: PMC10942316] [DOI: 10.1101/2024.02.29.582790]
Abstract
Neurodevelopmental disorders, such as Attention Deficit/Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD), are characterized by comorbidity and heterogeneity. Identifying distinct subtypes within these disorders can illuminate the underlying neurobiological and clinical characteristics, paving the way for more tailored treatments. We adopted a novel transdiagnostic approach across ADHD and ASD, using cutting-edge contrastive graph machine learning to determine subtypes based on brain network connectivity as revealed by resting-state functional magnetic resonance imaging. Our approach identified two generalizable subtypes characterized by robust and distinct functional connectivity patterns, prominently within the frontoparietal control network and the somatomotor network. These subtypes exhibited pronounced differences in major cognitive and behavioural measures. We further demonstrated the generalizability of these subtypes using data collected from independent study sites. Our data-driven approach provides a novel solution for parsing biological heterogeneity in neurodevelopmental disorders.
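The contrastive objective underlying this family of graph methods can be illustrated with a minimal NT-Xent (InfoNCE) loss in numpy. The embeddings and "augmented views" below are synthetic stand-ins; the paper's actual graph encoder and augmentation scheme are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss between two views: row i of z1 and z2 form a positive pair.
    z1, z2 are (n, d) L2-normalised embeddings."""
    n = z1.shape[0]
    z = np.vstack([z1, z2])                 # (2n, d)
    sim = (z @ z.T) / tau                   # cosine similarity (inputs normalised)
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive indices
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

# Two "augmented views" of 8 subjects' connectivity embeddings.
a = rng.standard_normal((8, 16))
a /= np.linalg.norm(a, axis=1, keepdims=True)
b = a + 0.01 * rng.standard_normal(a.shape)
b /= np.linalg.norm(b, axis=1, keepdims=True)

loss_aligned = nt_xent(a, b)                      # matched positives -> low loss
loss_random = nt_xent(a, np.roll(b, 1, axis=0))   # mismatched positives -> higher loss
```

Minimising this loss pulls the two views of each subject together while pushing different subjects apart, which is what yields clusterable subject embeddings for subtyping.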
Affiliation(s)
- Xuesong Wang: Data 61, Commonwealth Scientific and Industrial Research Organisation, New South Wales, Australia
- Kanhao Zhao: Department of Bioengineering, Lehigh University, Bethlehem, PA, USA
- Lina Yao: Data 61, Commonwealth Scientific and Industrial Research Organisation, New South Wales, Australia; School of Computer Science and Engineering, University of New South Wales, New South Wales, Australia
- Gregory A Fonzo: Center for Psychedelic Research and Therapy, Department of Psychiatry and Behavioral Sciences, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Islem Rekik: BASIRA Lab, Imperial-X and Department of Computing, Imperial College London, London, UK
- Yu Zhang: Department of Bioengineering, Lehigh University, Bethlehem, PA, USA; Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA, USA
3. Liang M, Wang R, Liang J, Wang L, Li B, Jia X, Zhang Y, Chen Q, Zhang T, Zhang C. Interpretable Inference and Classification of Tissue Types in Histological Colorectal Cancer Slides Based on Ensembles Adaptive Boosting Prototype Tree. IEEE J Biomed Health Inform 2023; 27:6006-6017. [PMID: 37871093] [DOI: 10.1109/jbhi.2023.3326467]
Abstract
Digital pathology images are treated as the "gold standard" for diagnosing colorectal lesions, especially colon cancer. Real-time, objective, and accurate inspection results help clinicians choose appropriate treatment promptly, which is of great significance in clinical medicine. However, manual inspection suffers from long turnaround cycles and relies heavily on subjective interpretation. Obtaining models that are both accurate and interpretable also remains challenging for existing computer-aided diagnosis methods: highly accurate models tend to be complex and opaque, while interpretable models may lack the necessary accuracy. Therefore, a framework of ensemble adaptive boosting prototype trees is proposed to classify colorectal pathology images and to provide interpretable inference by visualizing the decision-making process of each base learner. The results showed that the proposed method could effectively address the accuracy-interpretability trade-off by ensembling m adaptive boosting neural prototype trees. The superior performance of the framework provides a novel paradigm for interpretable inference and high-precision prediction of pathology image patches in computational pathology.
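The boosting half of the framework follows the standard AdaBoost recipe: each new base learner is fit on a re-weighted sample that emphasises previously misclassified examples, and the ensemble vote is a weighted combination of the learners. As a rough sketch, the snippet below uses scikit-learn's `AdaBoostClassifier` with shallow decision trees standing in for the paper's neural prototype trees, on synthetic multi-class data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class stand-in for tissue-type patches.
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# m boosted base learners; shallow trees stand in for neural prototype trees.
m = 25
ensemble = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                              n_estimators=m, random_state=0)
ensemble.fit(Xtr, ytr)          # boosting re-weights samples each round
acc = ensemble.score(Xte, yte)  # held-out accuracy of the weighted vote
```

In the paper's setting, each base learner is additionally a prototype tree, so each round's decision path can be visualized against learned prototypes; that interpretability layer is not reproduced here.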
4. Orlichenko A, Daly G, Zhou Z, Liu A, Shen H, Deng HW, Wang YP. ImageNomer: Description of a functional connectivity and omics analysis tool and case study identifying a race confound. Neuroimage Rep 2023; 3:100191. [PMID: 38125823] [PMCID: PMC10732473] [DOI: 10.1016/j.ynirp.2023.100191]
Abstract
Most packages for the analysis of fMRI-based functional connectivity (FC) and genomic data are used with a programming language interface, lacking an easy-to-navigate GUI frontend. This exacerbates two problems found in these types of data: demographic confounds and quality control in the face of high dimensionality of features. The reason is that it is too slow and cumbersome to use a programming interface to create all the necessary visualizations required to identify all correlations, confounding effects, or quality control problems in a dataset. FC in particular usually contains tens of thousands of features per subject, and can only be summarized and efficiently explored using visualizations. To remedy this situation, we have developed ImageNomer, a data visualization and analysis tool that allows inspection of both subject-level and cohort-level demographic, genomic, and imaging features. The software is Python-based, runs in a self-contained Docker image, and contains a browser-based GUI frontend. We demonstrate the usefulness of ImageNomer by identifying an unexpected race confound when predicting achievement scores in the Philadelphia Neurodevelopmental Cohort (PNC) dataset, which contains multitask fMRI and single nucleotide polymorphism (SNP) data of healthy adolescents. In the past, many studies have attempted to use FC to identify achievement-related features in fMRI. Using ImageNomer to visualize trends in achievement scores between races, we find a clear potential for confounding effects if race can be predicted using FC. Using correlation analysis in the ImageNomer software, we show that FCs correlated with Wide Range Achievement Test (WRAT) score are in fact more highly correlated with race. Investigating further, we find that whereas both FC and SNP (genomic) features can account for 10-15% of WRAT score variation, this predictive ability disappears when controlling for race. 
We also use ImageNomer to investigate race-FC correlation in the Bipolar and Schizophrenia Network for Intermediate Phenotypes (BSNIP) dataset. In this work, we demonstrate the advantage of our ImageNomer GUI tool in data exploration and confound detection. Additionally, this work identifies race as a strong confound in FC data and casts doubt on the possibility of finding unbiased achievement-related features in fMRI and SNP data of healthy adolescents.
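The confound pattern described above, a feature that appears predictive of a score only because both are driven by a third variable, can be checked by comparing the raw correlation with the correlation that remains after regressing the confound out of both sides. The sketch below uses synthetic data with invented effect sizes, purely to illustrate the check, not the PNC results:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Synthetic illustration: a binary group variable (the confound) shifts both
# an FC feature and the achievement score.
group = rng.integers(0, 2, n).astype(float)
fc = 2.0 * group + rng.standard_normal(n)      # FC edge driven partly by the confound
score = 1.5 * group + rng.standard_normal(n)   # score likewise

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_fc_score = corr(fc, score)   # apparent FC-score association

def residual(v, c):
    """Residual of v after regressing out confound c (with intercept)."""
    c1 = np.column_stack([np.ones_like(c), c])
    beta, *_ = np.linalg.lstsq(c1, v, rcond=None)
    return v - c1 @ beta

# Partial correlation controlling for the confound: here it collapses to ~0,
# because the FC-score link was entirely confound-driven by construction.
r_partial = corr(residual(fc, group), residual(score, group))
```

This is the same logic as the abstract's finding that FC-WRAT predictive ability disappears when controlling for race.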
Affiliation(s)
- Anton Orlichenko: Department of Biomedical Engineering, Tulane University, New Orleans, LA, USA
- Grant Daly: College of Medicine, University of South Alabama, Mobile, AL, USA
- Ziyu Zhou: Department of Biomedical Engineering, Tulane University, New Orleans, LA, USA
- Anqi Liu: School of Medicine, Tulane University, New Orleans, LA, USA
- Hui Shen: School of Medicine, Tulane University, New Orleans, LA, USA
- Hong-Wen Deng: School of Medicine, Tulane University, New Orleans, LA, USA
- Yu-Ping Wang: Department of Biomedical Engineering, Tulane University, New Orleans, LA, USA
5. Sui J, Zhi D, Calhoun VD. Data-driven multimodal fusion: approaches and applications in psychiatric research. Psychoradiology 2023; 3:kkad026. [PMID: 38143530] [PMCID: PMC10734907] [DOI: 10.1093/psyrad/kkad026]
Abstract
In the era of big data, where vast amounts of information are generated and collected at an unprecedented rate, there is a pressing demand for innovative data-driven multimodal fusion methods. These methods aim to integrate diverse neuroimaging perspectives to extract meaningful insights and attain a more comprehensive understanding of complex psychiatric disorders. Analyzing each modality separately may reveal only partial insights or miss important correlations between different types of data; data-driven multimodal fusion techniques address this by combining information from multiple modalities in a synergistic manner, uncovering hidden patterns and relationships that would otherwise remain unnoticed. In this paper, we present an extensive overview of data-driven multimodal fusion approaches with or without prior information, with specific emphasis on canonical correlation analysis and independent component analysis. The applications of such fusion methods are wide-ranging and allow the incorporation of multiple factors such as genetics, environment, cognition, and treatment outcomes across various brain disorders. After summarizing the diverse neuropsychiatric magnetic resonance imaging fusion applications, we further discuss emerging trends in big-data neuroimaging analysis, such as N-way multimodal fusion, deep learning approaches, and clinical translation. Overall, multimodal fusion emerges as an imperative approach that provides valuable insights into the underlying neural basis of mental disorders and can uncover subtle abnormalities or potential biomarkers that may benefit targeted treatments and personalized medical interventions.
Affiliation(s)
- Jing Sui: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Dongmei Zhi: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Vince D Calhoun: Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia Institute of Technology, Emory University and Georgia State University, Atlanta, GA 30303, United States
6. Liu X, Chen Z, Zhang H, Li J, Jiang Q, Ren L, Luo Y. The interpretability of the activity signal detection model for wood-boring pests Semanotus bifasciatus in the larval stage. Pest Manag Sci 2023; 79:3830-3842. [PMID: 37218108] [DOI: 10.1002/ps.7566]
Abstract
BACKGROUND Deep-learning-based acoustic detection of activity signals can identify wood-boring pests accurately and reliably. However, the black-box character of deep learning models limits the credibility of their results and hinders their application. To address model reliability and interpretability, this paper designs an interpretable model called the Dynamic Acoustic Larvae Prototype Network (DalPNet), which uses prototypes to support model decisions and achieves more flexible explanations through dynamic feature-patch computation. RESULTS In the experiments, the average recognition accuracy of DalPNet for Semanotus bifasciatus larval activity signals reached 99.3% on the simple test set and 98.5% on the anti-noise test set. Interpretability was evaluated quantitatively with the relative area under the curve (RAUC) and the cumulative slope (CS) of the accuracy change curve; DalPNet achieved an RAUC of 0.2923 and a CS of -2.0105. The visualization results further showed that DalPNet located larval bite pulses more accurately and attended to multiple bite pulses within a single signal, outperforming the baseline model. CONCLUSION The experimental results demonstrated that DalPNet provides better explanations while maintaining recognition accuracy. This can increase the trust of forestry custodians in activity-signal detection models and support their practical deployment in the forestry field. © 2023 Society of Chemical Industry.
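The paper defines RAUC and CS precisely; as a rough illustration of the kind of quantities involved, the sketch below computes a normalised area under a perturbation-style accuracy change curve and the summed per-step slopes. Both formulas and the curve values are assumptions made here for illustration, not the paper's definitions:

```python
import numpy as np

# Hypothetical accuracy change curve: model accuracy as the regions the
# explanation ranks most important are progressively removed. A faithful
# explanation makes accuracy drop quickly.
removed_frac = np.linspace(0.0, 1.0, 11)
accuracy = np.array([0.99, 0.90, 0.78, 0.65, 0.55,
                     0.47, 0.40, 0.35, 0.31, 0.28, 0.25])

# Area under the curve (trapezoid rule), normalised by the initial accuracy.
rauc = float(np.sum((accuracy[1:] + accuracy[:-1]) / 2
                    * np.diff(removed_frac)) / accuracy[0])

# Cumulative slope: summed per-step slopes of the accuracy change curve
# (negative when removing explained regions consistently hurts accuracy).
cs = float(np.sum(np.diff(accuracy) / np.diff(removed_frac)))
```

Under this reading, a smaller RAUC and a more negative CS both indicate an explanation that points at genuinely decision-relevant parts of the signal.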
Affiliation(s)
- Xuanxin Liu: School of Information Science and Technology, Beijing Forestry University, Beijing, China; Engineering Research Center for Forestry-oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing, China
- Zhibo Chen: School of Information Science and Technology, Beijing Forestry University, Beijing, China; Engineering Research Center for Forestry-oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing, China
- Haiyan Zhang: School of Information Science and Technology, Beijing Forestry University, Beijing, China; Engineering Research Center for Forestry-oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing, China
- Juhu Li: School of Information Science and Technology, Beijing Forestry University, Beijing, China; Engineering Research Center for Forestry-oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing, China
- Qi Jiang: Beijing Key Laboratory for Forest Pest Control, Beijing Forestry University, Beijing, China
- Lili Ren: Beijing Key Laboratory for Forest Pest Control, Beijing Forestry University, Beijing, China
- Youqing Luo: Beijing Key Laboratory for Forest Pest Control, Beijing Forestry University, Beijing, China
7. Zeibich R, Kwan P, O'Brien TJ, Perucca P, Ge Z, Anderson A. Applications for Deep Learning in Epilepsy Genetic Research. Int J Mol Sci 2023; 24:14645. [PMID: 37834093] [PMCID: PMC10572791] [DOI: 10.3390/ijms241914645]
Abstract
Epilepsy is a group of brain disorders characterised by an enduring predisposition to generate unprovoked seizures. Fuelled by advances in sequencing technologies and computational approaches, more than 900 genes have now been implicated in epilepsy. The development and optimisation of tools and methods for analysing the vast quantity of genomic data is a rapidly evolving area of research. Deep learning (DL), a subset of machine learning (ML), opens novel investigative strategies that can be harnessed to gain new insights into the genomic risk of people with epilepsy. DL is being used to address accuracy limitations of long-read sequencing technologies, which otherwise improve on short-read methods. Tools that predict the functional consequences of genetic variation are breaking new ground in addressing critical knowledge gaps, while methods that integrate independent but complementary data enhance the predictive power of genetic data. We provide an overview of these DL tools and discuss how they may be applied to the analysis of genetic data for epilepsy research.
Affiliation(s)
- Robert Zeibich: Department of Neuroscience, Central Clinical School, Monash University, Melbourne, VIC 3800, Australia
- Patrick Kwan: Department of Neuroscience, Central Clinical School, Monash University, Melbourne, VIC 3800, Australia; Department of Neurology, Alfred Health, Melbourne, VIC 3004, Australia; Department of Neurology, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia; Department of Medicine, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia
- Terence J. O'Brien: Department of Neuroscience, Central Clinical School, Monash University, Melbourne, VIC 3800, Australia; Department of Neurology, Alfred Health, Melbourne, VIC 3004, Australia; Department of Neurology, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia; Department of Medicine, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia
- Piero Perucca: Department of Neuroscience, Central Clinical School, Monash University, Melbourne, VIC 3800, Australia; Department of Neurology, Alfred Health, Melbourne, VIC 3004, Australia; Department of Neurology, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia; Epilepsy Research Centre, Department of Medicine, Austin Health, The University of Melbourne, Melbourne, VIC 3084, Australia; Bladin-Berkovic Comprehensive Epilepsy Program, Department of Neurology, Austin Health, The University of Melbourne, Melbourne, VIC 3084, Australia
- Zongyuan Ge: Faculty of Engineering, Monash University, Melbourne, VIC 3800, Australia; Monash-Airdoc Research, Monash University, Melbourne, VIC 3800, Australia
- Alison Anderson: Department of Neuroscience, Central Clinical School, Monash University, Melbourne, VIC 3800, Australia; Department of Medicine, The Royal Melbourne Hospital, The University of Melbourne, Parkville, VIC 3052, Australia
8. Orlichenko A, Qu G, Zhang G, Patel B, Wilson TW, Stephen JM, Calhoun VD, Wang YP. Latent Similarity Identifies Important Functional Connections for Phenotype Prediction. IEEE Trans Biomed Eng 2023; 70:1979-1989. [PMID: 37015625] [PMCID: PMC10284019] [DOI: 10.1109/tbme.2022.3232964]
Abstract
OBJECTIVE Endophenotypes such as brain age and fluid intelligence are important biomarkers of disease status. However, brain imaging studies to identify these biomarkers often have limited numbers of subjects but high-dimensional imaging features, hindering reproducibility. Therefore, we develop an interpretable, multivariate classification/regression algorithm, called Latent Similarity (LatSim), suitable for datasets with small sample sizes and high feature dimensions. METHODS LatSim combines metric learning with a kernel similarity function and softmax aggregation to identify task-related similarities between subjects. Inter-subject similarity is used to improve performance on three prediction tasks using multi-paradigm fMRI data. A greedy selection algorithm, made possible by LatSim's computational efficiency, is developed as an interpretability method. RESULTS LatSim achieved significantly higher predictive accuracy at small sample sizes on the Philadelphia Neurodevelopmental Cohort (PNC) dataset. Connections identified by LatSim gave superior discriminative power compared to those identified by other methods. We identified four functional brain networks enriched in connections for predicting brain age, sex, and intelligence. CONCLUSION We find that most information for a predictive task comes from only a few (1-5) connections, and that the default mode network is over-represented in the top connections of all predictive tasks. SIGNIFICANCE We propose a novel prediction algorithm for small-sample, high-dimensional datasets and use it to identify connections in task fMRI data. Our work can lead to new insights in both algorithm design and neuroscience research.
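The core LatSim mechanic, embedding subjects with a learned metric, computing kernel similarities, normalising them softmax-style, and predicting each subject from a similarity-weighted average of the others, can be sketched as follows. The projection `W` here is random rather than learned, and the phenotype is generated from the latent space to mimic a well-trained metric, so this illustrates only the aggregation step, not the paper's training procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 100, 12, 4

X = rng.standard_normal((n, d))          # subjects' FC feature vectors
W = rng.standard_normal((d, k))          # random stand-in for the learned metric
Z = X @ W                                # latent embeddings

# Hypothetical phenotype tied to the latent space (mimics a trained metric).
v = rng.standard_normal(k)
y = Z @ v + 0.1 * rng.standard_normal(n)

# Kernel similarity between subjects in latent space.
d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
S = np.exp(-d2 / d2.mean())
np.fill_diagonal(S, 0.0)                 # leave-one-out: exclude self-similarity
P = S / S.sum(axis=1, keepdims=True)     # softmax-style row normalisation

# Predict each subject as a similarity-weighted average of the other subjects.
y_hat = P @ y
r = float(np.corrcoef(y, y_hat)[0, 1])   # positive when similar subjects share phenotype
```

Because predictions are weighted sums over subjects, the weights themselves are inspectable, which is the hook for identifying the connections that drive task-related similarity.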
9. Wang T, Chen X, Zhang J, Feng Q, Huang M. Deep multimodality-disentangled association analysis network for imaging genetics in neurodegenerative diseases. Med Image Anal 2023; 88:102842. [PMID: 37247468] [DOI: 10.1016/j.media.2023.102842]
Abstract
Imaging genetics is a crucial tool for exploring potentially disease-related biomarkers, particularly in neurodegenerative diseases (NDs). With the development of imaging technology, association analysis between multimodal imaging data and genetic data has attracted growing attention in imaging genetics studies. However, traditional methods fuse the multimodal data first and then correlate them with genetic data, leaving their common and complementary information incompletely explored. In addition, inaccurate modeling of the complex relationships between imaging and genetic data, and information loss caused by missing multimodal data, remain open problems in imaging genetics studies. Therefore, in this study, a deep multimodality-disentangled association analysis network (DMAAN) is proposed to solve these issues and simultaneously detect disease-related biomarkers of NDs. First, the imaging data are nonlinearly projected into a latent space to obtain imaging representations, which are further disentangled into common and specific parts by a multimodal-disentangled module. Second, the genetic data are encoded into genetic representations, which are then nonlinearly mapped to the common and specific imaging representations to build nonlinear associations between imaging and genetic data through an association analysis module. Moreover, modality mask vectors are synchronously synthesized to integrate the genetic and imaging data, which aids the subsequent disease diagnosis. Finally, the proposed method achieves reasonable diagnostic performance via a disease diagnosis module and uses the label information to detect disease-related modality-shared and modality-specific biomarkers. Furthermore, the genetic representation can be used to impute missing multimodal data under our learning strategy.
Two publicly available datasets with different NDs are used to demonstrate the effectiveness of the proposed DMAAN. The experimental results show that the proposed DMAAN can identify the disease-related biomarkers, which suggests the proposed DMAAN may provide new insights into the pathological mechanism and early diagnosis of NDs. The codes are publicly available at https://github.com/Meiyan88/DMAAN.
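Disentangling objectives of this kind typically combine a similarity term that pulls the modalities' common representations together with an orthogonality term that pushes each modality's common and specific parts apart. A minimal sketch of those two penalty terms, with linear random encoders standing in for DMAAN's learned networks (this is not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, k = 50, 20, 5

X1 = rng.standard_normal((n, d))   # modality 1 (e.g. one imaging modality)
X2 = rng.standard_normal((n, d))   # modality 2 (e.g. another imaging modality)

# Linear encoders (random stand-ins for learned networks): each modality is
# mapped to a common part and a specific part of its representation.
Ec1, Es1 = rng.standard_normal((d, k)), rng.standard_normal((d, k))
Ec2, Es2 = rng.standard_normal((d, k)), rng.standard_normal((d, k))
c1, s1 = X1 @ Ec1, X1 @ Es1
c2, s2 = X2 @ Ec2, X2 @ Es2

# 1) Similarity loss pulls the two modalities' common parts together.
sim_loss = float(np.mean((c1 - c2) ** 2))

# 2) Orthogonality loss pushes each modality's common and specific parts apart.
orth_loss = float(np.linalg.norm(c1.T @ s1) ** 2 + np.linalg.norm(c2.T @ s2) ** 2)
```

During training, minimising a weighted sum of such terms (plus reconstruction and diagnosis losses) is what forces the common part to carry shared information and the specific part to carry modality-unique information.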
Affiliation(s)
- Tao Wang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiumei Chen: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
10. Jiang L, Li F, Chen Z, Zhu B, Yi C, Li Y, Zhang T, Peng Y, Si Y, Cao Z, Chen A, Yao D, Chen X, Xu P. Information transmission velocity-based dynamic hierarchical brain networks. Neuroimage 2023; 270:119997. [PMID: 36868393] [DOI: 10.1016/j.neuroimage.2023.119997]
Abstract
The brain functions as a precise circuit that regulates information to be sequentially propagated and processed in a hierarchical manner. However, it is still unknown how the brain is hierarchically organized and how information is dynamically propagated during high-level cognition. In this study, we developed a new scheme for quantifying the information transmission velocity (ITV) by combining electroencephalography (EEG) and diffusion tensor imaging (DTI), and then mapped the cortical ITV network (ITVN) to explore the information transmission mechanism of the human brain. Application to MRI-EEG data of the P300 revealed bottom-up and top-down ITVN interactions subserving P300 generation, comprising four hierarchical modules. Among these modules, information exchange between visual- and attention-activated regions occurred at high velocity, so the related cognitive processes could be accomplished efficiently, owing to the heavy myelination of these regions. Moreover, inter-individual variability in the P300 was traced to differences in the information transmission efficiency of the brain, which may provide new insight into cognitive degeneration in clinical neurodegenerative disorders, such as Alzheimer's disease, from the transmission-velocity perspective. Together, these findings confirm the capacity of ITV to effectively determine the efficiency of information propagation in the brain.
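A transmission velocity is, generically, a path length divided by a transmission time; the paper's ITV estimator combines DTI geometry with EEG timing in its own specific way, which is not reproduced here. The sketch below only shows the generic idea on toy signals: a delay between two regions is read off the cross-correlation peak and divided into an assumed fibre length:

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 500.0                                  # assumed EEG sampling rate (Hz)

# Toy signals: region B receives region A's signal with an 8-sample delay.
n_t, delay = 2000, 8
a = rng.standard_normal(n_t)
b = np.roll(a, delay) + 0.2 * rng.standard_normal(n_t)

# Estimate the transmission time from the cross-correlation peak lag.
lags = np.arange(-50, 51)
xc = [float(np.corrcoef(a, np.roll(b, -L))[0, 1]) for L in lags]
lag_samples = int(lags[int(np.argmax(xc))])
t_seconds = lag_samples / fs

# Fibre length between the two regions would come from DTI tractography;
# the value here is made up for illustration.
fibre_len_mm = 120.0
itv_mm_per_s = fibre_len_mm / t_seconds
```

Repeating this over all region pairs yields a velocity matrix, the raw material for an ITVN-style network.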
Affiliation(s)
- Lin Jiang: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Fali Li: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhaojin Chen: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Bin Zhu: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chanlin Yi: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yuqin Li: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Tao Zhang: School of Science, Xihua University, Chengdu 610039, China
- Yueheng Peng: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yajing Si: School of Psychology, Xinxiang Medical University, Xinxiang 453003, China
- Zehong Cao: STEM, University of South Australia, Adelaide, SA 5000, Australia
- Antao Chen: Faculty of Psychology, Southwest University, Chongqing 400715, China
- Dezhong Yao: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu 2019RU035, China
- Xun Chen: Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China; Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
- Peng Xu: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave, West Hi-Tech Zone, Chengdu, Sichuan 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
11
|
Liu L, Chang J, Zhang P, Ma Q, Zhang H, Sun T, Qiao H. A joint multi-modal learning method for early-stage knee osteoarthritis disease classification. Heliyon 2023; 9:e15461. [PMID: 37123973 PMCID: PMC10130858 DOI: 10.1016/j.heliyon.2023.e15461] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 04/05/2023] [Accepted: 04/10/2023] [Indexed: 05/02/2023] Open
Abstract
Osteoarthritis (OA) is a progressive and chronic disease. Identifying the early stages of OA is important for the treatment and care of patients. However, most state-of-the-art methods use only single-modal data to predict disease status and thus ignore the complementary information available in multi-modal data. In this study, we develop an integrated multi-modal learning method (MMLM) that uses an interpretable strategy to select and fuse clinical, imaging, and demographic features to classify the grade of early-stage knee OA. MMLM applies XGBoost and ResNet50 to extract two heterogeneous feature sets from the clinical data and imaging data, respectively, and then integrates these extracted features with demographic data. To avoid the negative effects of redundant features in a direct integration of multiple features, we propose an L1-norm-based optimization method to regularize the inter-correlations among the multiple features. MMLM was assessed on the Osteoarthritis Initiative (OAI) dataset with machine learning classifiers. Extensive experiments demonstrate that MMLM improves the performance of the classifiers. Furthermore, a visual analysis of the important features in the multi-modal data verified the relations among the modalities when classifying the grade of knee OA.
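The L1-norm regularization described in this abstract works by soft-thresholding: coefficients of redundant fused features are shrunk exactly to zero. A minimal pure-Python sketch of that mechanism (plain Lasso via cyclic coordinate descent, not the authors' MMLM objective; the toy feature matrix and penalty value are illustrative):

```python
def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iters):
        for j in range(d):
            # residual with feature j's contribution excluded
            r = [y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding: small correlations are zeroed out exactly
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# toy fused-feature matrix: column 0 is informative, column 1 is irrelevant
X = [[1.0, 1.0], [2.0, 1.0], [3.0, -1.0], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]  # depends only on column 0
w = lasso_coordinate_descent(X, y, lam=3.0)
```

The irrelevant column receives a weight of exactly zero, which is the feature-selection effect the L1 penalty provides; in practice a library solver such as scikit-learn's `Lasso` would be used instead.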
Collapse
|
12
|
Xu F, Qiao C, Zhou H, Calhoun VD, Stephen JM, Wilson TW, Wang Y. An explainable autoencoder with multi-paradigm fMRI fusion for identifying differences in dynamic functional connectivity during brain development. Neural Netw 2023; 159:185-197. [PMID: 36580711 DOI: 10.1016/j.neunet.2022.12.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/19/2022] [Accepted: 12/12/2022] [Indexed: 12/24/2022]
Abstract
Multi-paradigm deep learning models show great potential for dynamic functional connectivity (dFC) analysis by integrating complementary information. However, many of them cannot use information from different paradigms effectively and have poor explainability, that is, the ability to identify the significant features that contribute to decision making. In this paper, we propose a multi-paradigm fusion-based explainable deep sparse autoencoder (MF-EDSAE) to address these issues. For explainability, the MF-EDSAE is constructed on a deep sparse autoencoder (DSAE); for effective information integration, the MF-EDSAE contains a nonlinear fusion layer and multi-paradigm hypergraph regularization. We apply the model to the Philadelphia Neurodevelopmental Cohort and demonstrate that it achieves better performance than the single-paradigm DSAE in detecting dFC patterns that differ significantly during brain development. The experimental results show that children have more dispersive dFC patterns than adults, and that brain function transitions from undifferentiated systems to specialized networks during development. Meanwhile, adults have stronger connectivity between task-related functional networks for a given task than children. As the brain develops, global dFC patterns change more quickly when stimulated by a task.
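The sparsity constraint in a deep sparse autoencoder (DSAE) is commonly enforced with a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target value rho. A sketch of that penalty, assuming the standard Bernoulli-KL form rather than the authors' exact loss:

```python
import math

def kl_sparsity_penalty(rho, rho_hat):
    """Sum of KL(rho || rho_hat_j) over hidden units, the usual sparse-
    autoencoder penalty; rho is the target mean activation, rho_hat the
    observed mean activation of each hidden unit (all in (0, 1))."""
    total = 0.0
    for r in rho_hat:
        total += (rho * math.log(rho / r)
                  + (1 - rho) * math.log((1 - rho) / (1 - r)))
    return total

# units whose mean activation matches the target incur no penalty;
# units that fire too often are penalized heavily
low = kl_sparsity_penalty(0.05, [0.05, 0.05])
high = kl_sparsity_penalty(0.05, [0.5])
```

Adding this term to the reconstruction loss keeps most hidden units near-silent, which is what makes the learned features sparse and easier to interpret.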
Collapse
Affiliation(s)
- Faming Xu
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
| | - Chen Qiao
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
| | - Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, LE1 7RH, UK.
| | - Vince D Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30030, USA.
| | - Tony W Wilson
- Institute for Human Neuroscience, Boys Town National Research Hospital, Boys Town, NE 68010, USA.
| | - Yuping Wang
- Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA.
| |
Collapse
|
13
|
Ding W, Abdel-Basset M, Hawash H, Ali AM. Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.10.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
14
|
Xiao L, Cai B, Qu G, Zhang G, Stephen JM, Wilson TW, Calhoun VD, Wang YP. Distance Correlation-Based Brain Functional Connectivity Estimation and Non-Convex Multi-Task Learning for Developmental fMRI Studies. IEEE Trans Biomed Eng 2022; 69:3039-3050. [PMID: 35316180 PMCID: PMC9594860 DOI: 10.1109/tbme.2022.3160447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Resting-state functional magnetic resonance imaging (rs-fMRI)-derived functional connectivity (FC) patterns have been extensively used to delineate global functional organization of the human brain in healthy development and neuropsychiatric disorders. In this paper, we investigate how FC in males and females differs in an age prediction framework. METHODS We first estimate FC between regions-of-interest (ROIs) using distance correlation instead of Pearson's correlation. Distance correlation, as a multivariate statistical method, explores spatial relations of voxel-wise time courses within individual ROIs and measures both linear and nonlinear dependence, capturing more complex between-ROI interactions. Then, we propose a novel non-convex multi-task learning (NC-MTL) model to study age-related gender differences in FC, where age prediction for each gender group is viewed as one task, and a composite regularizer with a combination of the non-convex l2,1-2 and l1-2 terms is introduced for selecting both common and task-specific features. RESULTS AND CONCLUSION We validate the effectiveness of our NC-MTL model with distance correlation-based FC derived from rs-fMRI for predicting ages of both genders. The experimental results on the Philadelphia Neurodevelopmental Cohort demonstrate that our NC-MTL model outperforms several other competing MTL models in age prediction. We also compare the age prediction performance of our NC-MTL model using FC estimated by Pearson's correlation and distance correlation, which shows that distance correlation-based FC is more discriminative for age prediction than Pearson's correlation-based FC. SIGNIFICANCE This paper presents a novel framework for functional connectome developmental studies, characterizing developmental gender differences in FC patterns.
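Distance correlation itself is straightforward to compute from double-centered pairwise-distance matrices. A univariate sketch (the paper applies the multivariate version to voxel-wise time courses within ROIs), illustrating that it detects the nonlinear dependence Pearson's correlation misses:

```python
import math

def _centered_dists(xs):
    """Pairwise |xi - xj| matrix with row, column, and grand means removed."""
    n = len(xs)
    d = [[abs(a - b) for b in xs] for a in xs]
    row = [sum(r) / n for r in d]
    col = [sum(d[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - col[j] + grand for j in range(n)]
            for i in range(n)]

def distance_correlation(x, y):
    n = len(x)
    A, B = _centered_dists(x), _centered_dists(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2
    dvarx = sum(a * a for r in A for a in r) / n**2
    dvary = sum(b * b for r in B for b in r) / n**2
    if dvarx == 0 or dvary == 0:
        return 0.0
    return math.sqrt(max(dcov2, 0.0) / math.sqrt(dvarx * dvary))

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
linear = distance_correlation(x, [3 * v + 1 for v in x])   # 1 for any linear map
quad = distance_correlation(x, [v * v for v in x])         # positive, though Pearson r = 0
```

For y = x**2 on a symmetric grid, Pearson's correlation is exactly zero while distance correlation is clearly positive, which is the extra discriminative power the abstract refers to.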
Collapse
|
15
|
Zhang Y, Zhang H, Xiao L, Bai Y, Calhoun VD, Wang YP. Multi-Modal Imaging Genetics Data Fusion via a Hypergraph-Based Manifold Regularization: Application to Schizophrenia Study. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2263-2272. [PMID: 35320094 PMCID: PMC9661879 DOI: 10.1109/tmi.2022.3161828] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recent studies show that multi-modal data fusion techniques combine information from diverse sources for comprehensive diagnosis and prognosis of complex brain disorders, often resulting in improved accuracy compared to single-modality approaches. However, many existing data fusion methods extract features from homogeneous networks, ignoring heterogeneous structural information among multiple modalities. To this end, we propose a Hypergraph-based Multi-modal data Fusion algorithm, namely HMF. Specifically, we first generate a hypergraph similarity matrix to represent the high-order relationships among subjects, and then enforce a regularization term based upon both the inter- and intra-modality relationships of the subjects. Finally, we apply HMF to integrate imaging and genetics datasets. Validation of the proposed method is performed on both synthetic data and real samples from a schizophrenia study. Results show that our algorithm outperforms several competing methods, and reveals significant interactions among risk genes, environmental factors, and abnormal brain regions.
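A common way to build the kind of hypergraph similarity structure described here follows the standard normalized hypergraph Laplacian, L = I - Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2), computed from a vertex-edge incidence matrix H. A pure-Python sketch under that assumption (the paper's exact regularizer may differ; the tiny hypergraph is illustrative):

```python
import math

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian from incidence matrix H
    (n_vertices x n_edges, entries 0/1) and edge weights w."""
    n, m = len(H), len(H[0])
    dv = [sum(H[i][e] * w[e] for e in range(m)) for i in range(n)]  # vertex degrees
    de = [sum(H[i][e] for i in range(n)) for e in range(m)]         # edge degrees
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = sum(H[i][e] * w[e] * H[j][e] / de[e] for e in range(m))
            L[i][j] = (1.0 if i == j else 0.0) - s / math.sqrt(dv[i] * dv[j])
    return L

# 4 subjects; hyperedge 0 links {v0, v1, v2}, hyperedge 1 links {v2, v3}
H = [[1, 0], [1, 0], [1, 1], [0, 1]]
L = hypergraph_laplacian(H, [1.0, 1.0])
```

Penalizing f^T L f then smooths a learned representation f across subjects that share hyperedges, which is the role the regularization term plays in the abstract.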
Collapse
|
16
|
Liu L, Chang J, Wang Y, Liang G, Wang YP, Zhang H. Decomposition-Based Correlation Learning for Multi-Modal MRI-Based Classification of Neuropsychiatric Disorders. Front Neurosci 2022; 16:832276. [PMID: 35692429 PMCID: PMC9174798 DOI: 10.3389/fnins.2022.832276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 04/21/2022] [Indexed: 11/13/2022] Open
Abstract
Multi-modal magnetic resonance imaging (MRI) is widely used for diagnosing brain disease in clinical practice. However, the high dimensionality of MRI images makes training a convolutional neural network challenging, and utilizing multiple MRI modalities jointly is even more so. To overcome these challenges, we developed a decomposition-based correlation learning (DCL) method that captures the complex relationship between structural MRI and functional MRI data. Guided by matrix decomposition, DCL takes into account the spike magnitude of the leading eigenvalues, the number of samples, and the dimensionality of the matrix. Canonical correlation analysis (CCA) was used to analyze the correlations and construct the matrices. We evaluated DCL on the classification of multiple neuropsychiatric disorders listed in the Consortium for Neuropsychiatric Phenomics (CNP) dataset. In experiments, our method achieved higher accuracy than several existing methods. Moreover, we found interesting feature connections in the brain matrices derived by DCL that can differentiate disease from normal cases and distinguish different subtypes of a disease. Furthermore, in extended experiments on a large-sample dataset and a small-sample dataset, compared with several other well-established methods designed for multiple-neuropsychiatric-disorder classification, our proposed method achieved state-of-the-art performance on all three datasets.
Collapse
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| | - Jing Chang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| | - Ying Wang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| | - Gongbo Liang
- Department of Computer Science, Eastern Kentucky University, Richmond, KY, United States
| | - Yu-Ping Wang
- Biomedical Engineering Department, Tulane University, New Orleans, LA, United States
| | - Hui Zhang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
- *Correspondence: Hui Zhang
| |
Collapse
|
17
|
Using Convolutional Neural Networks for the Assessment Research of Mental Health. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1636855. [PMID: 35586088 PMCID: PMC9110170 DOI: 10.1155/2022/1636855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 04/11/2022] [Accepted: 04/12/2022] [Indexed: 11/18/2022]
Abstract
Existing mental health assessment methods rely mainly on experts' experience and are therefore subject to bias, so convolutional neural networks are applied to mental health assessment to fuse face, voice, and gait information. The OpenPose algorithm is used to extract facial and posture features, openSMILE is used to extract voice features, and an attention mechanism is introduced to reasonably allocate the weights of the different modal features. This enables the effective identification and evaluation of 10 mental health indicators, such as somatization, depression, and anxiety. Simulation results show that the proposed method can accurately assess mental health, with an overall recognition accuracy of 77.20% and an F1 value of 0.77. Compared with recognition methods based on face-only, face + voice dual-modal, and face + voice + gait multi-modal fusion, the recognition accuracy and F1 value of the proposed method are improved to varying degrees, giving it practical application value.
Collapse
|
18
|
Liu L, Zhang J, Wang JX, Xiong S, Zhang H. Co-optimization Learning Network for MRI Segmentation of Ischemic Penumbra Tissues. Front Neuroinform 2021; 15:782262. [PMID: 34975444 PMCID: PMC8717777 DOI: 10.3389/fninf.2021.782262] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 11/25/2021] [Indexed: 11/17/2022] Open
Abstract
Convolutional neural networks (CNNs) have brought hope for medical image auxiliary diagnosis. However, the shortage of labeled medical image data is the bottleneck that limits the performance of supervised CNN methods, and annotating large numbers of medical images is often expensive and time-consuming. In this study, we propose a co-optimization learning network (COL-Net) for magnetic resonance imaging (MRI) segmentation of ischemic penumbra tissues. COL-Net is built on limited labeled samples and consists of an unsupervised reconstruction network (R), a supervised segmentation network (S), and a transfer block (T). The reconstruction network extracts robust features by reconstructing pseudo unlabeled samples and serves as an auxiliary branch of the segmentation network. The segmentation network segments the target lesions under the limited labeled samples with the aid of the reconstruction network. The transfer block co-optimizes the feature maps between the bottlenecks of the reconstruction and segmentation networks. We propose a mixed loss function to optimize COL-Net. COL-Net is verified on the public ischemic penumbra segmentation challenge (SPES) with two dozen labeled samples. Results demonstrate that COL-Net has high predictive accuracy and generalization, with a Dice coefficient of 0.79. An extended experiment also shows that COL-Net outperforms most supervised segmentation methods. COL-Net is a meaningful attempt to alleviate the limited-labeled-sample problem in medical image segmentation.
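The Dice coefficient reported above (0.79) measures the overlap between a predicted segmentation mask and the ground truth. A minimal illustration on flat binary masks:

```python
def dice_coefficient(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 sequences:
    2*|P & T| / (|P| + |T|); 1.0 means perfect agreement."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    # both masks empty: conventionally treated as perfect agreement
    return 2.0 * inter / denom if denom else 1.0

half = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])   # one pixel overlaps -> 0.5
perfect = dice_coefficient([1, 0, 1], [1, 0, 1])      # identical masks -> 1.0
```

Unlike plain pixel accuracy, Dice is insensitive to the large true-negative background, which is why it is the standard metric for small lesions such as penumbra tissue.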
Collapse
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| | - Jing Zhang
- Department of Computer Science, Henan Quality Engineering Vocational College, Pingdingshan, China
| | - Jin-xiang Wang
- Department of Computer Science, University of Melbourne, Parkville, VIC, Australia
| | - Shufeng Xiong
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| | - Hui Zhang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, China
| |
Collapse
|
19
|
Qu G, Hu W, Xiao L, Wang J, Bai Y, Patel B, Zhang K, Wang YP. Brain Functional Connectivity Analysis via Graphical Deep Learning. IEEE Trans Biomed Eng 2021; 69:1696-1706. [PMID: 34882539 PMCID: PMC9219112 DOI: 10.1109/tbme.2021.3127173] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Graphical deep learning models provide a promising way to analyze brain functional connectivity. However, applying current graph deep learning models to brain network analysis is challenging due to the limited sample size and the complex relationships between different brain regions. METHOD In this work, a graph convolutional network (GCN) based framework is proposed that exploits information from both the region-to-region connectivities of the brain and subject-subject relationships. We first construct an affinity subject-subject graph, followed by GCN analysis. A Laplacian regularization term is introduced in our model to tackle the overfitting problem. We apply and validate the proposed model on the Philadelphia Neurodevelopmental Cohort for a brain cognition study. RESULTS Experimental analysis shows that our proposed framework outperforms other competing models in classifying groups with low and high Wide Range Achievement Test (WRAT) scores. Moreover, to examine each brain region's contribution to cognitive function, we use the occlusion sensitivity analysis method to identify cognition-related brain functional networks. The results are consistent with previous research yet yield new findings. CONCLUSION AND SIGNIFICANCE Our study demonstrates that a GCN incorporating prior knowledge about brain networks offers a powerful way to detect important brain networks and regions associated with cognitive functions.
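The Laplacian regularization term mentioned here typically takes the form trace(F^T L F) with L = D - W over the subject-subject affinity graph, which is equivalent to a pairwise smoothness penalty. A sketch of that standard term (the paper's exact weighting scheme is an assumption):

```python
def laplacian_reg(W, F):
    """trace(F^T L F) with L = D - W: penalizes embedding differences
    between subjects that the affinity graph W connects strongly.
    W: n x n symmetric affinity matrix, F: n x k embedding matrix."""
    n, k = len(W), len(F[0])
    deg = [sum(row) for row in W]
    reg = 0.0
    for i in range(n):
        for j in range(n):
            lij = (deg[i] if i == j else 0.0) - W[i][j]
            reg += lij * sum(F[i][c] * F[j][c] for c in range(k))
    return reg

def pairwise_form(W, F):
    """Equivalent pairwise form: 0.5 * sum_ij W_ij * ||f_i - f_j||^2."""
    n, k = len(W), len(F[0])
    return 0.5 * sum(W[i][j] * sum((F[i][c] - F[j][c]) ** 2 for c in range(k))
                     for i in range(n) for j in range(n))

W = [[0.0, 1.0], [1.0, 0.0]]          # two strongly connected subjects
F_diff = [[1.0, 0.0], [0.0, 1.0]]     # different embeddings -> penalized
F_same = [[1.0, 0.0], [1.0, 0.0]]     # identical embeddings -> zero penalty
```

Adding this term to the training loss discourages connected subjects from receiving dissimilar embeddings, which is one standard way to combat overfitting on small cohorts.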
Collapse
|
20
|
Huang M, Lai H, Yu Y, Chen X, Wang T, Feng Q. Deep-gated recurrent unit and diet network-based genome-wide association analysis for detecting the biomarkers of Alzheimer's disease. Med Image Anal 2021; 73:102189. [PMID: 34343841 DOI: 10.1016/j.media.2021.102189] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 05/30/2021] [Accepted: 07/16/2021] [Indexed: 01/01/2023]
Abstract
Genome-wide association analysis (GWAS) is a commonly used method to detect potential biomarkers of Alzheimer's disease (AD). Most existing GWAS methods entail a high computational cost, disregard correlations among imaging data and among genetic data, and ignore various associations between longitudinal imaging and genetic data. A novel GWAS method was proposed to identify potential AD biomarkers and address these problems. A network based on a gated recurrent unit was applied to integrate longitudinal data of variable lengths and extract an image representation, without imputing incomplete longitudinal imaging data. A modified diet network that can considerably reduce the number of parameters in the genetic network was proposed to perform GWAS between the image representation and genetic data, from which a genetic representation can be extracted. A link between the genetic representation and AD was then established to detect potential AD biomarkers. The proposed method was tested on a set of simulated data and a real AD dataset. Results on the simulated data showed that the proposed method can accurately detect relevant biomarkers, and results on the real AD dataset showed that it can detect some new risk-related genes of AD. To the best of our knowledge, no previous research has incorporated a deep-learning model into a GWAS framework to investigate the potential information in super-high-dimensional genetic data and longitudinal imaging data and to create a link between imaging genetics and AD for detecting potential AD biomarkers. The proposed method may therefore provide new insights into the underlying pathological mechanism of AD.
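A gated recurrent unit can consume longitudinal sequences of any length directly, which is why no imputation of missing visits is needed: shorter records simply take fewer update steps. A scalar-state sketch of the standard GRU update (illustrative hand-set weights, not the paper's trained network):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, p):
    """One GRU step for a scalar input and scalar hidden state; p holds
    the weights/biases of the update gate z, reset gate r, and candidate."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])       # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])       # reset gate
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                       # gated interpolation

def encode(seq, p):
    """Fold a variable-length longitudinal sequence into one representation;
    an empty or short record needs no imputation, it just takes fewer steps."""
    h = 0.0
    for x in seq:
        h = gru_step(h, x, p)
    return h

# hypothetical weights for illustration only
p = dict(wz=1.0, uz=0.5, bz=0.0, wr=1.0, ur=0.5, br=0.0, wh=1.0, uh=0.5)
short = encode([0.2], p)
long = encode([0.2, 1.5, -0.3], p)
```

Because the final state is a gated interpolation of the previous state and a tanh candidate, the representation stays bounded in (-1, 1) regardless of sequence length; a real model would use vector states (e.g. `torch.nn.GRU`) rather than scalars.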
Collapse
Affiliation(s)
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
| | - Yuwei Yu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
| | - Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
| | - Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China.
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| |
Collapse
|
21
|
Qu G, Xiao L, Hu W, Wang J, Zhang K, Calhoun V, Wang YP. Ensemble Manifold Regularized Multi-Modal Graph Convolutional Network for Cognitive Ability Prediction. IEEE Trans Biomed Eng 2021; 68:3564-3573. [PMID: 33974537 DOI: 10.1109/tbme.2021.3077875] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Multi-modal functional magnetic resonance imaging (fMRI) can be used to make predictions about individual behavioral and cognitive traits based on brain connectivity networks. METHODS To take advantage of complementary information from multi-modal fMRI, we propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating both fMRI time series and functional connectivity (FC) between each pair of brain regions. Specifically, our model learns a graph embedding from individual brain networks derived from multi-modal data. A manifold-based regularization term is enforced to consider the relationships of subjects both within and between modalities. Furthermore, we propose gradient-weighted regression activation mapping (Grad-RAM) and edge mask learning to interpret the model, which are then used to identify significant cognition-related biomarkers. RESULTS We validate our MGCN model on the Philadelphia Neurodevelopmental Cohort to predict individual Wide Range Achievement Test (WRAT) scores. Our model obtains superior predictive performance over a GCN with a single modality and other competing approaches. The identified biomarkers are cross-validated across different approaches. CONCLUSION AND SIGNIFICANCE This paper develops a new interpretable graph deep learning framework for cognition prediction, with the potential to overcome the limitations of several current data-fusion models. The results demonstrate the power of MGCN in analyzing multi-modal fMRI and discovering significant biomarkers for human brain studies.
Collapse
|