1. Zeng X, Gong J, Li W, Yang Z. Knowledge-driven multi-graph convolutional network for brain network analysis and potential biomarker discovery. Med Image Anal 2024;99:103368. PMID: 39418829. DOI: 10.1016/j.media.2024.103368.
Abstract
In brain network analysis, individual-level data can provide biological features of individuals, while population-level data can provide demographic information about populations. However, existing methods mostly use either individual- or population-level features separately, inevitably neglecting the multi-level characteristics of brain disorders. To address this issue, we propose an end-to-end multi-graph neural network model called KMGCN. This model simultaneously leverages individual- and population-level features for brain network analysis. At the individual level, we construct a multi-graph using both knowledge-driven and data-driven approaches: the knowledge-driven approach builds a knowledge graph from prior knowledge, while the data-driven approach learns a data graph from the data itself. At the population level, we construct a multi-graph using both imaging and phenotypic data. Additionally, we devise a pooling method tailored to brain networks that can select the brain regions that influence brain disorders. We evaluate the performance of our model on two large datasets, ADNI and ABIDE; experimental results demonstrate that it achieves state-of-the-art performance, with 86.87% classification accuracy on ADNI and 86.40% on ABIDE, along with around 10% improvements in all evaluation metrics over state-of-the-art models. Additionally, the biomarkers identified by our model align well with recent neuroscience research, indicating the effectiveness of our model for brain network analysis and potential biomarker discovery. The code is available at https://github.com/GN-gjh/KMGCN.
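The authors' released code is at the GitHub link above; the block below is only a minimal, illustrative PyTorch sketch of the individual-level idea the abstract describes: one branch convolves over a knowledge-driven graph (a prior adjacency), another over a data-driven graph computed from the regional time series, and a score-based pooling keeps the top-k regions as candidate biomarkers. All class, parameter and dimension names are assumptions, not the authors' KMGCN.

```python
# Minimal sketch (not the authors' KMGCN): combine a knowledge-driven graph
# (prior adjacency) with a data-driven graph (correlation of regional time
# series), run one GCN layer on each, and keep the top-k regions by a learned
# importance score. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiGraphGCN(nn.Module):
    def __init__(self, n_rois, in_dim, hid_dim, k=20):
        super().__init__()
        self.w_know = nn.Linear(in_dim, hid_dim)   # branch for the knowledge graph
        self.w_data = nn.Linear(in_dim, hid_dim)   # branch for the data graph
        self.score = nn.Linear(2 * hid_dim, 1)     # region importance for pooling
        self.k = k

    @staticmethod
    def normalize(adj):
        # symmetric normalization D^-1/2 (A + I) D^-1/2
        adj = adj + torch.eye(adj.size(-1), device=adj.device)
        d = adj.sum(-1).clamp(min=1e-6).pow(-0.5)
        return d.unsqueeze(-1) * adj * d.unsqueeze(-2)

    def forward(self, x, adj_know, ts):
        # x: (n_rois, in_dim) node features; ts: (n_rois, T) regional time series
        adj_data = torch.corrcoef(ts).abs()        # data-driven graph from the signals
        h_k = torch.relu(self.normalize(adj_know) @ self.w_know(x))
        h_d = torch.relu(self.normalize(adj_data) @ self.w_data(x))
        h = torch.cat([h_k, h_d], dim=-1)
        scores = self.score(h).squeeze(-1)          # which regions matter most
        top = scores.topk(self.k).indices           # pooled candidate biomarker ROIs
        return h[top].mean(0), top                  # graph embedding + selected ROIs
```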
Affiliation(s)
- Xianhua Zeng
- Chongqing Key Laboratory of Image Cognition, School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security (Ministry of Education), Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Jianhua Gong
- Chongqing Key Laboratory of Image Cognition, School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security (Ministry of Education), Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Weisheng Li
- Chongqing Key Laboratory of Image Cognition, School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security (Ministry of Education), Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Zhuoya Yang
- Chongqing Key Laboratory of Image Cognition, School of Computer Science and Technology/School of Artificial Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China; Key Laboratory of Cyberspace Big Data Intelligent Security (Ministry of Education), Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
2. Chen L, Qiao C, Ren K, Qu G, Calhoun VD, Stephen JM, Wilson TW, Wang YP. Explainable spatio-temporal graph evolution learning with applications to dynamic brain network analysis during development. Neuroimage 2024;298:120771. PMID: 39111376. PMCID: PMC11533345. DOI: 10.1016/j.neuroimage.2024.120771.
Abstract
Modeling dynamic interactions among network components is crucial to uncovering the evolution mechanisms of complex networks. Recently, spatio-temporal graph learning methods have achieved noteworthy results in characterizing the dynamic changes of inter-node relations (INRs). However, challenges remain: the spatial neighborhood of an INR is underexploited, and the spatio-temporal dependencies in the dynamic changes of INRs are overlooked, ignoring the influence of historical states and local information. In addition, model explainability has been understudied. To address these issues, we propose an explainable spatio-temporal graph evolution learning (ESTGEL) model to capture the dynamic evolution of INRs. Specifically, an edge attention module is proposed to exploit the spatial neighborhood of an INR at multiple levels, i.e., a hierarchy of nested subgraphs derived from decomposing the initial node-relation graph. Subsequently, a dynamic relation learning module is proposed to capture the spatio-temporal dependencies of INRs. The INRs are then used as adjacency information to improve the node representations, resulting in a comprehensive delineation of the dynamic evolution of the network. Finally, the approach is validated with real data from a brain development study. Experimental results on dynamic brain network analysis reveal that brain functional networks transition from dispersed to more convergent and modular structures throughout development. Significant changes are observed in the dynamic functional connectivity (dFC) associated with functions including emotional control, decision-making, and language processing.
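As a rough illustration of the dynamic-relation idea (not the ESTGEL implementation), the sketch below builds a sliding-window dynamic functional-connectivity sequence and models each edge's trajectory with a GRU, so that the refined inter-node relation reflects its historical states. Window length, stride and hidden sizes are placeholder choices.

```python
# Illustrative sketch only (not the ESTGEL code): build a sliding-window dynamic
# functional-connectivity (dFC) sequence and model the temporal dependence of
# each inter-node relation (edge) with a GRU.
import torch
import torch.nn as nn

def dynamic_fc(ts, win=30, stride=5):
    """ts: (n_rois, T) regional time series -> (n_windows, n_rois, n_rois) dFC."""
    mats = []
    for start in range(0, ts.shape[1] - win + 1, stride):
        mats.append(torch.corrcoef(ts[:, start:start + win]))
    return torch.stack(mats)

class EdgeGRU(nn.Module):
    """Treats every edge's correlation trajectory as a 1-D sequence."""
    def __init__(self, hidden=16):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, dfc):
        # dfc: (n_windows, n_rois, n_rois) -> edge sequences (n_edges, n_windows, 1)
        n_w, n_r, _ = dfc.shape
        edges = dfc.reshape(n_w, n_r * n_r).T.unsqueeze(-1)
        _, h = self.gru(edges)                    # h: (1, n_edges, hidden)
        refined = self.out(h.squeeze(0))          # one history-aware value per edge
        return refined.reshape(n_r, n_r)          # temporally informed INR matrix

ts = torch.randn(90, 200)                         # 90 ROIs, 200 time points (toy data)
print(EdgeGRU()(dynamic_fc(ts)).shape)            # torch.Size([90, 90])
```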
Affiliation(s)
- Longyun Chen
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, China.
- Chen Qiao
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, China.
- Kai Ren
- Xijing Hospital, Fourth Military Medical University, Xi'an, 710032, China.
- Gang Qu
- Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA.
- Vince D Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA.
- Tony W Wilson
- Institute for Human Neuroscience, Boys Town National Research Hospital, Boys Town, NE 68010, USA.
- Yu-Ping Wang
- Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA.
3. Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP. Integrated Brain Connectivity Analysis with fMRI, DTI, and sMRI Powered by Interpretable Graph Neural Networks. arXiv 2024; arXiv:2408.14254v1. PMID: 39253637. PMCID: PMC11383444.
Abstract
Multimodal neuroimaging modeling has become a widely used approach but confronts considerable challenges due to heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates advanced computational methods to integrate and interpret these diverse datasets within a cohesive analytical framework. In our research, we amalgamate functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) into a cohesive framework. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Using the Glasser atlas for parcellation, we integrate imaging-derived features from the various modalities (functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI) within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating a holistic amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved predictive accuracy and uncovers crucial anatomical features and essential neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for the integrated analysis of diverse imaging modalities but also improves the understanding of the intricate relationship between the brain's structural and functional networks and cognitive development.
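The following is a minimal sketch, under assumed dimensions, of the masking idea the abstract mentions: functional and structural connectivity are fused through a learnable connection-level mask, sMRI-derived regional measures serve as node features, and a single graph convolution feeds a cognitive-score readout. It illustrates the general strategy, not the paper's model.

```python
# Minimal illustrative sketch (not the paper's implementation): fuse functional
# (FC) and structural (SC) connectivity with a learnable connection-level mask,
# use sMRI-derived regional measures as node features, and regress a cognitive
# score. Dimensions and names are assumptions.
import torch
import torch.nn as nn

class MaskedMultimodalGNN(nn.Module):
    def __init__(self, n_rois, n_anat_feats, hid=32):
        super().__init__()
        self.mask = nn.Parameter(torch.zeros(n_rois, n_rois))  # learnable edge weights
        self.gc = nn.Linear(n_anat_feats, hid)
        self.readout = nn.Linear(hid, 1)

    def forward(self, fc, sc, x_anat):
        # fc, sc: (n_rois, n_rois); x_anat: (n_rois, n_anat_feats), e.g. thickness, area
        adj = torch.sigmoid(self.mask) * (fc.abs() + sc)   # differentially weight edges
        adj = adj / adj.sum(-1, keepdim=True).clamp(min=1e-6)
        h = torch.relu(adj @ self.gc(x_anat))              # one graph convolution
        return self.readout(h.mean(0))                     # predicted cognitive score

model = MaskedMultimodalGNN(n_rois=360, n_anat_feats=4)    # 360 ~ Glasser parcels (toy)
score = model(torch.randn(360, 360), torch.rand(360, 360), torch.randn(360, 4))
```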
Affiliation(s)
- Gang Qu
- Biomedical Engineering Department, Tulane University, New Orleans, LA 70118, USA
- Ziyu Zhou
- Computer Science Department, Tulane University, New Orleans, LA 70118, USA
- Vince D. Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA 30303, USA
- Aiying Zhang
- School of Data Science, University of Virginia, Charlottesville, VA 22903, USA
- Yu-Ping Wang
- Biomedical Engineering Department, Tulane University, New Orleans, LA 70118, USA
4. Wu Y, Yao J, Xu XM, Zhou LL, Salvi R, Ding S, Gao X. Combination of static and dynamic neural imaging features to distinguish sensorineural hearing loss: a machine learning study. Front Neurosci 2024;18:1402039. PMID: 38933814. PMCID: PMC11201293. DOI: 10.3389/fnins.2024.1402039.
Abstract
Purpose: Sensorineural hearing loss (SNHL) is the most common form of sensory deprivation and is often unrecognized by patients, inducing not only auditory but also non-auditory symptoms. Data-driven classifier modeling combining static and dynamic neural imaging features could be used to classify SNHL individuals and healthy controls (HCs) effectively.
Methods: We conducted hearing evaluations, neurological scale tests and resting-state MRI on 110 SNHL patients and 106 HCs. A total of 1,267 static and dynamic imaging features were extracted from the MRI data, and three feature-selection methods were applied: the Spearman rank correlation test, the least absolute shrinkage and selection operator (LASSO), and the t test combined with LASSO. Linear, polynomial, radial basis function (RBF) kernel and sigmoid support vector machine (SVM) models were chosen as the classifiers with fivefold cross-validation. The receiver operating characteristic curve, area under the curve (AUC), sensitivity, specificity and accuracy were calculated for each model.
Results: SNHL subjects had higher hearing thresholds at each frequency, as well as worse performance on cognitive and emotional evaluations, than HCs. The brain regions selected by LASSO from the static and dynamic features were consistent with the between-group analysis and included auditory and non-auditory areas. The AUCs of the four SVM models (linear, polynomial, RBF and sigmoid) were 0.8075, 0.7340, 0.8462 and 0.8562, respectively. The RBF and sigmoid SVMs had relatively higher accuracy, sensitivity and specificity.
Conclusion: Our findings draw attention to the static and dynamic alterations underlying hearing deprivation. Machine learning-based models may provide useful biomarkers for the classification and diagnosis of SNHL.
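A generic scikit-learn pipeline in the spirit of this design (LASSO-based feature selection followed by SVMs with the four kernels under five-fold cross-validation) might look like the sketch below. The synthetic arrays merely stand in for the 1,267 imaging features and the 110/106 subject split, and the paper's exact feature-selection variants are not reproduced.

```python
# Illustrative sketch of the general pipeline described in the abstract (not the
# authors' code): LASSO-driven feature selection feeding SVM classifiers with
# four kernels, evaluated by five-fold cross-validated AUC on toy data.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(216, 1267))          # 110 SNHL + 106 HC, 1,267 features (synthetic)
y = np.r_[np.ones(110), np.zeros(106)]

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(
        StandardScaler(),
        # keep the 50 features with the largest LASSO coefficients (placeholder count)
        SelectFromModel(LassoCV(max_iter=5000), threshold=-np.inf, max_features=50),
        SVC(kernel=kernel),
    )
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{kernel:8s} mean AUC = {auc.mean():.3f}")
```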
Affiliation(s)
- Yuanqing Wu
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jun Yao
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Xiao-Min Xu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Lei-Lei Zhou
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Richard Salvi
- Center for Hearing and Deafness, University at Buffalo, The State University of New York, Buffalo, NY, United States
- Shaohua Ding
- Department of Radiology, The Affiliated Taizhou People's Hospital of Nanjing Medical University, Taizhou School of Clinical Medicine, Nanjing Medical University, Taizhou, China
- Xia Gao
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
5. Qu G, Orlichenko A, Wang J, Zhang G, Xiao L, Zhang K, Wilson TW, Stephen JM, Calhoun VD, Wang YP. Interpretable Cognitive Ability Prediction: A Comprehensive Gated Graph Transformer Framework for Analyzing Functional Brain Networks. IEEE Trans Med Imaging 2024;43:1568-1578. PMID: 38109241. PMCID: PMC11090410. DOI: 10.1109/TMI.2023.3343365.
Abstract
Graph convolutional deep learning has emerged as a promising method to explore the functional organization of the human brain in neuroscience research. This paper presents a novel framework that utilizes the gated graph transformer (GGT) model to predict individuals' cognitive ability based on functional connectivity (FC) derived from fMRI. Our framework incorporates prior spatial knowledge and uses a random-walk diffusion strategy that captures the intricate structural and functional relationships between different brain regions. Specifically, our approach employs learnable structural and positional encodings (LSPE) in conjunction with a gating mechanism to efficiently disentangle the learning of positional encoding (PE) and graph embeddings. Additionally, we utilize the attention mechanism to derive multi-view node feature embeddings and dynamically distribute propagation weights between each node and its neighbors, which facilitates the identification of significant biomarkers from functional brain networks and thus enhances the interpretability of the findings. To evaluate our proposed model in cognitive ability prediction, we conduct experiments on two large-scale brain imaging datasets: the Philadelphia Neurodevelopmental Cohort (PNC) and the Human Connectome Project (HCP). The results show that our approach not only outperforms existing methods in prediction accuracy but also provides superior explainability, which can be used to identify important FCs underlying cognitive behaviors.
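Two ingredients named in the abstract, a learnable positional encoding per ROI and a gate controlling how much of it enters the node embedding before self-attention, can be sketched as below. This is an assumed, simplified stand-in for the GGT framework, with placeholder sizes, atlas and readout, not the authors' model.

```python
# Not the paper's GGT code; a minimal sketch of a learnable per-ROI positional
# encoding fused into node embeddings through a gate, followed by multi-head
# self-attention over ROIs and a cognitive-score readout. Sizes are placeholders.
import torch
import torch.nn as nn

class GatedGraphAttention(nn.Module):
    def __init__(self, n_rois=264, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(n_rois, dim)                 # node feature = its FC row
        self.pos = nn.Parameter(torch.randn(n_rois, dim))   # learnable positional encoding
        self.gate = nn.Linear(2 * dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)                       # cognitive-score readout

    def forward(self, fc):
        h = self.embed(fc)                                  # (n_rois, dim)
        g = torch.sigmoid(self.gate(torch.cat([h, self.pos], dim=-1)))
        h = h + g * self.pos                                # gated positional information
        h, attn_w = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return self.head(h.squeeze(0).mean(0)), attn_w.squeeze(0)  # score + attention map

model = GatedGraphAttention()
score, attention = model(torch.randn(264, 264))             # toy 264-ROI FC matrix
```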
6. Wang Y, Long H, Zhou Q, Bo T, Zheng J. PLSNet: Position-aware GCN-based autism spectrum disorder diagnosis via FC learning and ROIs sifting. Comput Biol Med 2023;163:107184. PMID: 37356292. DOI: 10.1016/j.compbiomed.2023.107184.
Abstract
Brain functional connectivity, derived from functional magnetic resonance imaging (fMRI), has enjoyed high popularity in studies of Autism Spectrum Disorder (ASD) diagnosis. Although rapid progress has been made, most studies still suffer from several knotty issues: (1) the difficulty of modeling the sophisticated brain neuronal connectivity; (2) the mismatch between an identical graph-node setup and the variations across different brain regions; (3) the dimensionality explosion resulting from the excessive voxels in each fMRI sample; and (4) poor interpretability, which gives rise to unpersuasive diagnoses. To ameliorate these issues, we propose a position-aware graph-convolution-network-based model, namely PLSNet, with superior accuracy and compelling built-in interpretability for ASD diagnosis. Specifically, a time-series encoder is designed for context-rich feature extraction, followed by a functional connectivity generator to model correlations with long-range dependencies. In addition, to discriminate between brain nodes at different locations, a position embedding technique is adopted, giving a unique identity to each graph region. We then embed a rarefying method to sift the salient nodes during message diffusion, which also helps reduce the dimensionality complexity. Extensive experiments conducted on the Autism Brain Imaging Data Exchange demonstrate that PLSNet achieves state-of-the-art performance. Notably, on the CC200 atlas, PLSNet reaches an accuracy of 76.4% and a specificity of 78.6%, surpassing the previous state of the art by 2.5% and 6.5% under a five-fold cross-validation policy. Moreover, the most salient brain regions predicted by PLSNet are closely consistent with theoretical knowledge in the medical domain, providing potential biomarkers for ASD clinical diagnosis. Our code is available at https://github.com/CodeGoat24/PLSNet.
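A toy sketch of the pipeline's front end as described (not the released PLSNet code, which is linked above): a 1-D convolutional time-series encoder per ROI, a learned functional-connectivity matrix from embedding similarity, and a per-ROI position embedding that gives each graph node a unique identity. Hyper-parameters are placeholders.

```python
# Illustrative sketch only (not PLSNet itself): encode each ROI's BOLD signal
# with a small 1-D CNN, add a learned per-ROI position embedding, and derive a
# functional-connectivity matrix from the normalized embedding similarity.
import torch
import torch.nn as nn

class PositionAwareFC(nn.Module):
    def __init__(self, n_rois=200, dim=32):
        super().__init__()
        self.encoder = nn.Sequential(                  # context-rich temporal features
            nn.Conv1d(1, dim, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.position = nn.Embedding(n_rois, dim)      # unique identity per ROI
        self.n_rois = n_rois

    def forward(self, ts):
        # ts: (n_rois, t_len) regional BOLD signals
        z = self.encoder(ts.unsqueeze(1)).squeeze(-1)          # (n_rois, dim)
        z = z + self.position(torch.arange(self.n_rois))       # add position embedding
        z = z / z.norm(dim=-1, keepdim=True).clamp(min=1e-6)   # unit-normalize embeddings
        return z @ z.T                                          # learned FC matrix

fc = PositionAwareFC()(torch.randn(200, 176))                   # 200 ~ CC200 ROIs (toy)
print(fc.shape)                                                 # torch.Size([200, 200])
```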
Affiliation(s)
- Yibin Wang
- College of Computer Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, Zhejiang, China
- Haixia Long
- College of Computer Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, Zhejiang, China
- Qianwei Zhou
- College of Computer Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, Zhejiang, China
- Tao Bo
- Scientific Center, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan 250021, Shandong, China
- Jianwei Zheng
- College of Computer Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, Zhejiang, China.
7. Wang J, Li H, Qu G, Cecil KM, Dillman JR, Parikh NA, He L. Dynamic weighted hypergraph convolutional network for brain functional connectome analysis. Med Image Anal 2023;87:102828. PMID: 37130507. PMCID: PMC10247416. DOI: 10.1016/j.media.2023.102828.
Abstract
The hypergraph structure has been utilized to characterize the brain functional connectome (FC) by capturing the high-order relationships among multiple brain regions of interest (ROIs), compared with a simple graph. Accordingly, hypergraph neural network (HGNN) models have emerged and provided efficient tools for hypergraph embedding learning. However, most existing HGNN models can only be applied to pre-constructed hypergraphs with a static structure during model training, which might not be a sufficient representation of complex brain networks. In this study, we propose a dynamic weighted hypergraph convolutional network (dwHGCN) framework that considers a dynamic hypergraph with learnable hyperedge weights. Specifically, we generate hyperedges based on sparse representation and calculate the hyper-similarity as node features. The hypergraph and node features are fed into a neural network model, where the hyperedge weights are updated adaptively during training. The dwHGCN facilitates the learning of brain FC features by assigning larger weights to hyperedges with higher discriminative power. The weighting strategy also improves the interpretability of the model by identifying the highly active interactions among ROIs shared by a common hyperedge. We validate the performance of the proposed model on two classification tasks with three paradigms of functional magnetic resonance imaging (fMRI) data from the Philadelphia Neurodevelopmental Cohort. Experimental results demonstrate the superiority of our proposed method over existing hypergraph neural networks. We believe our model can be applied to other applications in neuroimaging for its strength in representation learning and interpretation.
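The core operation can be illustrated with a weighted hypergraph convolution in which the hyperedge weights are trainable parameters, following the standard HGNN propagation rule X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta. Hyperedge construction via sparse representation is omitted here, and the code is a sketch rather than the authors' dwHGCN.

```python
# Minimal sketch of a weighted hypergraph convolution (in the spirit of dwHGCN,
# not the authors' code): node features propagate through an incidence matrix H
# with learnable, non-negative hyperedge weights updated during training.
import torch
import torch.nn as nn

class WeightedHypergraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, n_hyperedges):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim)
        self.edge_weight = nn.Parameter(torch.ones(n_hyperedges))  # learnable w_e

    def forward(self, x, H):
        # x: (n_nodes, in_dim); H: (n_nodes, n_hyperedges) incidence matrix
        w = torch.relu(self.edge_weight)                 # keep hyperedge weights >= 0
        d_v = (H * w).sum(1).clamp(min=1e-6).pow(-0.5)   # D_v^{-1/2}
        d_e = H.sum(0).clamp(min=1e-6).pow(-1.0)         # D_e^{-1}
        # X' = D_v^-1/2 H W D_e^-1 H^T D_v^-1/2 X Theta
        m = H.T @ (d_v[:, None] * self.theta(x))         # gather node features per edge
        m = (w * d_e)[:, None] * m                       # weight and normalize hyperedges
        return torch.relu(d_v[:, None] * (H @ m))        # scatter back to nodes

x, H = torch.randn(90, 16), (torch.rand(90, 40) > 0.8).float()   # toy ROIs / hyperedges
print(WeightedHypergraphConv(16, 32, 40)(x, H).shape)             # torch.Size([90, 32])
```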
Affiliation(s)
- Junqi Wang
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Hailong Li
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Gang Qu
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, USA
- Kim M Cecil
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jonathan R Dillman
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Nehal A Parikh
- Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Lili He
- Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Artificial Intelligence Imaging Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, USA.
8. Pitsik EN, Maximenko VA, Kurkin SA, Sergeev AP, Stoyanov D, Paunova R, Kandilarova S, Simeonova D, Hramov AE. The topology of fMRI-based networks defines the performance of a graph neural network for the classification of patients with major depressive disorder. Chaos Solitons Fractals 2023;167:113041. DOI: 10.1016/j.chaos.2022.113041.