1.
Tang H, Ma G, Guo L, Fu X, Huang H, Zhan L. Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model. IEEE Transactions on Neural Networks and Learning Systems 2024;35:7363-7375. PMID: 36374890; PMCID: PMC10183052; DOI: 10.1109/tnnls.2022.3220220.
Abstract
Recently, brain networks have been widely adopted to study brain dynamics, brain development, and brain diseases. Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases. However, current graph learning techniques have several shortcomings for brain network mining. First, most current graph learning models are designed for unsigned graphs, which hinders the analysis of many signed network data (e.g., brain functional networks). Meanwhile, the scarcity of brain network data limits model performance on clinical phenotype prediction. Moreover, few current graph learning models are interpretable, so they may not be capable of providing biological insights for model outcomes. Here, we propose an interpretable hierarchical signed graph representation learning (HSGPL) model to extract graph-level representations from brain functional networks, which can be used for different prediction tasks. To further improve model performance, we also propose a new strategy to augment functional brain network data for contrastive learning. We evaluate this framework on different classification and regression tasks using data from the Human Connectome Project (HCP) and the Open Access Series of Imaging Studies (OASIS). Our results from extensive experiments demonstrate the superiority of the proposed model compared with several state-of-the-art techniques. In addition, we use graph saliency maps derived from these prediction tasks to demonstrate the detection and interpretation of phenotypic biomarkers.
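The augmentation-plus-contrastive strategy this abstract describes can be illustrated with a minimal sketch: two augmented views of the same signed connectivity matrix form a positive pair, and other subjects' networks serve as negatives under an NT-Xent loss. The edge-dropping augmentation, the one-step encoder, and all sizes below are illustrative assumptions, not the authors' HSGPL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(adj, drop_prob=0.1):
    """Randomly zero out a fraction of edges (sign-preserving),
    a simple augmentation for signed functional networks."""
    keep = rng.random(adj.shape) >= drop_prob
    keep = np.triu(keep, 1)
    keep = keep + keep.T            # keep the mask symmetric
    return adj * keep

def embed(adj):
    """Stand-in encoder: one propagation step + mean pooling,
    L2-normalized to a unit-length graph embedding."""
    x = adj @ adj                   # toy message passing
    v = x.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def nt_xent(z1, z2, negatives, tau=0.5):
    """NT-Xent loss for one positive pair against negatives."""
    pos = np.exp(z1 @ z2 / tau)
    neg = sum(np.exp(z1 @ z / tau) for z in negatives)
    return -np.log(pos / (pos + neg))

adj = rng.standard_normal((8, 8))
adj = (adj + adj.T) / 2             # symmetric signed "connectivity"
z1, z2 = embed(augment(adj)), embed(augment(adj))
negs = [embed(rng.standard_normal((8, 8))) for _ in range(4)]
loss = nt_xent(z1, z2, negs)        # smaller when the two views agree
```

The loss is driven down when the two augmented views of one brain network embed close together relative to other networks, which is the mechanism the abstract's augmentation strategy exploits.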
2.
Zhang S, Yang J, Zhang Y, Zhong J, Hu W, Li C, Jiang J. The Combination of a Graph Neural Network Technique and Brain Imaging to Diagnose Neurological Disorders: A Review and Outlook. Brain Sci 2023;13:1462. PMID: 37891830; PMCID: PMC10605282; DOI: 10.3390/brainsci13101462.
Abstract
Neurological disorders (NDs), such as Alzheimer's disease, pose a threat to human health all over the world. Diagnosing NDs by combining artificial intelligence technology with brain imaging is therefore of great importance. A graph neural network (GNN) can model and analyze brain imaging in terms of morphology, anatomical structure, functional features, and other aspects, making it one of the best-suited deep learning models for diagnosing NDs. Some researchers have investigated the application of GNNs in the medical field, but the scope of those surveys is broad, and their treatment of NDs is infrequent and not detailed enough. This review focuses on the research progress of GNNs in the diagnosis of NDs. First, we systematically examined the GNN frameworks used for NDs, including graph construction, graph convolution, graph pooling, and graph prediction. Second, we surveyed common NDs addressed with GNN diagnostic models in terms of data modality, number of subjects, and diagnostic accuracy. Third, we discussed research challenges and future research directions. The results of this review may be a valuable contribution to the ongoing intersection of artificial intelligence technology and brain imaging.
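The four-stage pipeline this review surveys (graph construction, graph convolution, graph pooling, graph prediction) can be sketched end to end in a few lines. The correlation threshold, feature sizes, and linear read-out below are illustrative assumptions, not any particular surveyed model.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) Graph construction: threshold a functional-connectivity matrix
#    (random here; in practice, ROI-to-ROI correlations) into an
#    adjacency matrix over brain regions.
n_rois = 10
conn = rng.random((n_rois, n_rois))
conn = (conn + conn.T) / 2
adj = (conn > 0.6).astype(float)
np.fill_diagonal(adj, 0.0)

# 2) Graph convolution: one GCN-style layer with the symmetric
#    normalization A_hat = D^{-1/2} (A + I) D^{-1/2}.
a_tilde = adj + np.eye(n_rois)
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt

x = rng.standard_normal((n_rois, 4))    # node features per ROI
w = rng.standard_normal((4, 4))         # layer weights
h = np.maximum(a_hat @ x @ w, 0.0)      # propagate + ReLU

# 3) Graph pooling: mean-pool node embeddings to one graph embedding.
g = h.mean(axis=0)

# 4) Graph prediction: a linear read-out, e.g. a patient-vs-control logit.
w_out = rng.standard_normal(4)
logit = float(g @ w_out)
```

Real diagnostic models replace each stage with learned components (learned adjacency, multiple convolution layers, hierarchical pooling), but the data flow is the same.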
Affiliation(s)
- Shuoyan Zhang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiacheng Yang
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Ying Zhang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiayi Zhong
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Wenjing Hu
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Chenyang Li
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Jiehui Jiang
- Shanghai Institute of Biomedical Engineering, Shanghai University, Shanghai 200444, China
3.
Ye K, Tang H, Dai S, Guo L, Liu JY, Wang Y, Leow A, Thompson PM, Huang H, Zhan L. Bidirectional Mapping with Contrastive Learning on Multimodal Neuroimaging Data. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023;14222:138-148. PMID: 39005889; PMCID: PMC11245326; DOI: 10.1007/978-3-031-43898-1_14.
Abstract
The modeling of the interaction between brain structure and function using deep learning techniques has yielded remarkable success in identifying potential biomarkers for different clinical phenotypes and brain diseases. However, most existing studies focus on one-way mapping, either projecting brain function onto brain structure or the inverse. Such unidirectional approaches neglect the intrinsic unity between the two modalities. Moreover, for the same biological brain, mapping from structure to function and from function to structure yield dissimilar outcomes, highlighting the likelihood of bias in one-way mapping. To address this issue, we propose a novel bidirectional mapping model, named Bidirectional Mapping with Contrastive Learning (BMCL), which reduces the bias between the two unidirectional mappings via ROI-level contrastive learning. We evaluate our framework on clinical phenotype and neurodegenerative disease prediction using two publicly available datasets (HCP and OASIS). Our results demonstrate the superiority of BMCL compared to several state-of-the-art methods.
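ROI-level contrastive alignment of the kind the abstract mentions can be sketched with a symmetric InfoNCE objective over ROI embeddings: the embedding of ROI i from one modality should match ROI i, not ROI j, from the other. The toy encoders (random matrices), noise level, and temperature below are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize_rows(m):
    """Unit-normalize each row (each ROI embedding)."""
    return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-8)

def roi_contrastive_loss(z_a, z_b, tau=0.1):
    """Symmetric InfoNCE over ROIs: matching ROI indices across the
    two modalities are the positive pairs, all others are negatives."""
    za, zb = normalize_rows(z_a), normalize_rows(z_b)
    sim = za @ zb.T / tau                                    # ROI-by-ROI similarity
    logp_ab = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    logp_ba = sim - np.log(np.exp(sim).sum(axis=0, keepdims=True))
    diag = np.arange(len(sim))
    return float(-(logp_ab[diag, diag] + logp_ba[diag, diag]).mean() / 2)

# Toy ROI embeddings from two placeholder modality encoders.
z_struct = rng.standard_normal((6, 16))
z_func = z_struct + 0.1 * rng.standard_normal((6, 16))       # roughly aligned
loss_aligned = roi_contrastive_loss(z_struct, z_func)
loss_random = roi_contrastive_loss(z_struct, rng.standard_normal((6, 16)))
```

Well-aligned ROI embeddings yield a much smaller loss than unrelated ones, which is the signal a bidirectional model can use to pull the two mapping directions toward agreement.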
Affiliation(s)
- Kai Ye
- University of Pittsburgh, Pittsburgh, PA 15260, USA
- Haoteng Tang
- University of Texas Rio Grande Valley, Edinburg, TX 78539, USA
- Siyuan Dai
- University of Pittsburgh, Pittsburgh, PA 15260, USA
- Lei Guo
- University of Pittsburgh, Pittsburgh, PA 15260, USA
- Johnny Yuehan Liu
- Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA
- Yalin Wang
- Arizona State University, Tempe, AZ 85287, USA
- Alex Leow
- University of Illinois at Chicago, Chicago, IL 60612, USA
- Paul M Thompson
- University of Southern California, Los Angeles, CA 90032, USA
- Heng Huang
- University of Maryland, College Park, MD 20742, USA
- Liang Zhan
- University of Pittsburgh, Pittsburgh, PA 15260, USA
4.
Geenjaar EPT, Lewis NL, Fedorov A, Wu L, Ford JM, Preda A, Plis SM, Calhoun VD. Chromatic fusion: generative multimodal neuroimaging data fusion provides multi-informed insights into schizophrenia. medRxiv 2023:2023.05.18.23290184. PMID: 37292973; PMCID: PMC10246163; DOI: 10.1101/2023.05.18.23290184.
Abstract
This work proposes a novel generative multimodal approach to jointly analyze multimodal data while linking the multimodal information to colors. By linking colors to private and shared information from the modalities, we introduce chromatic fusion, a framework that allows multimodal data to be interpreted intuitively. We test our framework on structural, functional, and diffusion modality pairs. In this framework, we use a multimodal variational autoencoder to learn separate latent subspaces: a private space for each modality and a shared space between both modalities. These subspaces are then used to cluster subjects, colored based on their distance from the variational prior, to obtain meta-chromatic patterns (MCPs). Each subspace corresponds to a different color: red is the private space of the first modality, green is the shared space, and blue is the private space of the second modality. We further analyze the most schizophrenia-enriched MCPs for each modality pair and find that distinct schizophrenia subgroups are captured by schizophrenia-enriched MCPs for different modality pairs, emphasizing schizophrenia's heterogeneity. For the FA-sFNC, sMRI-ICA, and sMRI-ICA MCPs, we generally find decreased fractional anisotropy in the corpus callosum and decreased spatial ICA map and voxel-based morphometry strength in the superior frontal lobe for schizophrenia patients. To additionally highlight the importance of the shared space between modalities, we perform a robustness analysis of the latent dimensions in the shared space across folds. These robust latent dimensions are subsequently correlated with schizophrenia to reveal that, for each modality pair, multiple shared latent dimensions strongly correlate with schizophrenia. In particular, for the FA-sFNC and sMRI-sFNC shared latent dimensions, we respectively observe a reduction in the modularity of functional connectivity and a decrease in visual-sensorimotor connectivity for schizophrenia patients. The reduction in modularity couples with increased fractional anisotropy in the dorsal left cerebellum. The reduction in visual-sensorimotor connectivity couples with a general reduction in voxel-based morphometry but increased dorsal cerebellum voxel-based morphometry. Since the modalities are trained jointly, we can also use the shared space to reconstruct one modality from the other. We show that cross-reconstruction is possible with our network and is generally much better than relying on the variational prior. In sum, we introduce a powerful new multimodal neuroimaging framework designed to provide a rich and intuitive understanding of the data that we hope challenges the reader to think differently about how modalities interact.
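The red/green/blue scheme described above, where the two private subspaces and the shared subspace each contribute one color channel scaled by distance from the variational prior, can be sketched as follows. The latent sizes and the max-normalization are illustrative assumptions, not the paper's exact coloring rule.

```python
import numpy as np

rng = np.random.default_rng(2)

def prior_distance(z):
    """Distance of a latent code from the standard-normal prior mean."""
    return float(np.linalg.norm(z))

def chromatic_color(z_private1, z_shared, z_private2):
    """Map subspace distances to an RGB triple in [0, 1]:
    red  = private space of modality 1,
    green = shared space,
    blue = private space of modality 2."""
    d = np.array([prior_distance(z)
                  for z in (z_private1, z_shared, z_private2)])
    return tuple(d / (d.max() + 1e-8))   # normalize so colors are comparable

# One subject's (toy) latent codes from a multimodal VAE.
color = chromatic_color(rng.standard_normal(8),
                        rng.standard_normal(8),
                        rng.standard_normal(8))
```

A subject whose information is mostly modality-shared would come out green-dominant; a subject dominated by one modality's private space would skew red or blue, which is what makes the meta-chromatic patterns readable at a glance.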
Affiliation(s)
- Eloy P T Geenjaar
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- Noah L Lewis
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Alex Fedorov
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- Lei Wu
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- Judith M Ford
- San Francisco Veterans Affairs Medical Center, San Francisco, CA, USA
- Department of Psychiatry and Behavioral Sciences, University of California San Francisco, San Francisco, CA, USA
- Adrian Preda
- Department of Psychiatry and Human Behavior, University of California Irvine, Irvine, CA, USA
- Sergey M Plis
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- Dept. of Computer Science, Georgia State University, Atlanta, GA, USA
- Vince D Calhoun
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA, 30303, USA
- School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Dept. of Computer Science, Georgia State University, Atlanta, GA, USA
- Dept. of Psychology, Georgia State University, Atlanta, GA, USA
5.
Bessadok A, Mahjoub MA, Rekik I. Graph Neural Networks in Network Neuroscience. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023;45:5833-5848. PMID: 36155474; DOI: 10.1109/tpami.2022.3209686.
Abstract
Noninvasive medical neuroimaging has yielded many discoveries about brain connectivity. Several substantial techniques for mapping morphological, structural, and functional brain connectivity have been developed to create a comprehensive road map of neuronal activity in the human brain, namely the brain graph. Because brain graphs are non-Euclidean data, graph neural networks (GNNs) provide a natural way of learning deep graph structure, and they are rapidly becoming the state of the art, leading to enhanced performance in various network neuroscience tasks. Here we review current GNN-based methods, highlighting the ways they have been used in several applications related to brain graphs, such as missing brain graph synthesis and disease classification. We conclude by charting a path toward better application of GNN models in the network neuroscience field for neurological disorder diagnosis and population graph integration. The list of papers cited in our work is available at https://github.com/basiralab/GNNs-in-Network-Neuroscience.
6.
Yin C, Imms P, Cheng M, Amgalan A, Chowdhury NF, Massett RJ, Chaudhari NN, Chen X, Thompson PM, Bogdan P, Irimia A. Anatomically interpretable deep learning of brain age captures domain-specific cognitive impairment. Proc Natl Acad Sci U S A 2023;120:e2214634120. PMID: 36595679; PMCID: PMC9926270; DOI: 10.1073/pnas.2214634120.
Abstract
The gap between chronological age (CA) and biological brain age, as estimated from magnetic resonance images (MRIs), reflects how individual patterns of neuroanatomic aging deviate from their typical trajectories. MRI-derived brain age (BA) estimates are often obtained using deep learning models that may perform relatively poorly on new data or that lack neuroanatomic interpretability. This study introduces a convolutional neural network (CNN) to estimate BA after training on the MRIs of 4,681 cognitively normal (CN) participants and testing on 1,170 CN participants from an independent sample. BA estimation errors are notably lower than those of previous studies. At both individual and cohort levels, the CNN provides detailed anatomic maps of brain aging patterns that reveal sex dimorphisms and neurocognitive trajectories in adults with mild cognitive impairment (MCI, N = 351) and Alzheimer's disease (AD, N = 359). In individuals with MCI (54% of whom were diagnosed with dementia within 10.9 y from MRI acquisition), BA is significantly better than CA in capturing dementia symptom severity, functional disability, and executive function. Profiles of sex dimorphism and lateralization in brain aging also map onto patterns of neuroanatomic change that reflect cognitive decline. Significant associations between BA and neurocognitive measures suggest that the proposed framework can map, systematically, the relationship between aging-related neuroanatomy changes in CN individuals and in participants with MCI or AD. Early identification of such neuroanatomy changes can help to screen individuals according to their AD risk.
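The quantity driving these analyses is the brain-age gap, simply BA minus CA, which is then related to cognitive measures. A minimal sketch with made-up toy numbers and a plain Pearson correlation helper (not the paper's CNN or its statistical pipeline):

```python
import numpy as np

def brain_age_gap(predicted_ba, chronological_age):
    """Brain-age gap: positive values suggest older-looking anatomy
    than expected for the subject's chronological age."""
    return np.asarray(predicted_ba) - np.asarray(chronological_age)

def pearson_r(x, y):
    """Pearson correlation between two 1-D sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))

# Toy cohort: chronological ages, model-estimated brain ages,
# and a hypothetical cognitive score per subject.
ca = [62, 70, 75, 68, 80]
ba = [64, 69, 81, 71, 86]
score = [20, 26, 12, 18, 10]

gap = brain_age_gap(ba, ca)          # [2, -1, 6, 3, 6]
r = pearson_r(gap, score)            # association with the toy score
```

The study's claim that BA captures symptom severity better than CA amounts to the correlation with the gap (or with BA) being stronger than the corresponding correlation with CA alone.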
Affiliation(s)
- Chenzhong Yin
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Phoebe Imms
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Mingxi Cheng
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Anar Amgalan
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Nahian F. Chowdhury
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Roy J. Massett
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Nikhil N. Chaudhari
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Corwin D. Denney Research Center, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Xinghe Chen
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Paul M. Thompson
- Corwin D. Denney Research Center, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA 90033
- Department of Quantitative & Computational Biology, Dana & David Dornsife College of Arts & Sciences, University of Southern California, Los Angeles, CA 90089
- Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Department of Neurology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Department of Psychiatry, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Department of Behavioral Sciences, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Paul Bogdan
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Andrei Irimia
- Ethel Percy Andrus Gerontology Center, Leonard Davis School of Gerontology, University of Southern California, Los Angeles, CA 90089
- Corwin D. Denney Research Center, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Department of Quantitative & Computational Biology, Dana & David Dornsife College of Arts & Sciences, University of Southern California, Los Angeles, CA 90089