1. Song R, Cao P, Wen G, Zhao P, Huang Z, Zhang X, Yang J, Zaiane OR. BrainDAS: Structure-aware domain adaptation network for multi-site brain network analysis. Med Image Anal 2024; 96:103211. [PMID: 38796945] [DOI: 10.1016/j.media.2024.103211]
Abstract
In the medical field, datasets are typically pooled across sites because data acquisition is difficult and single-site samples are insufficient. The domain shift caused by the heterogeneous distribution of multi-site data makes autism spectrum disorder (ASD) hard to identify. Recently, domain adaptation has received considerable attention as a promising solution. However, domain adaptation on graph data such as brain networks has not been fully studied. It faces two major challenges: (1) complex graph structure; and (2) multiple source domains. To overcome these issues, we propose an end-to-end structure-aware domain adaptation framework for brain network analysis (BrainDAS) using resting-state functional magnetic resonance imaging (rs-fMRI). The proposed approach contains two stages: supervision-guided multi-site graph domain adaptation with dynamic kernel generation, and graph classification with attention-based graph pooling. We evaluate BrainDAS on a public dataset provided by the Autism Brain Imaging Data Exchange (ABIDE), which includes 871 subjects from 17 different sites, where it surpasses state-of-the-art algorithms in several different evaluation settings. Furthermore, our promising results demonstrate the interpretability and generalization of the proposed method. Our code is available at https://github.com/songruoxian/BrainDAS.
Affiliation(s)
- Ruoxian Song
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Peng Cao
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China
- Guangqi Wen
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Pengfei Zhao
- Early Intervention Unit, Department of Psychiatry, Affiliated Nanjing Brain Hospital, Nanjing, China
- Ziheng Huang
- College of Software, Northeastern University, Shenyang, China
- Xizhe Zhang
- Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing, China
- Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China
2. Zuo Q, Wu H, Chen CLP, Lei B, Wang S. Prior-Guided Adversarial Learning With Hypergraph for Predicting Abnormal Connections in Alzheimer's Disease. IEEE Trans Cybern 2024; 54:3652-3665. [PMID: 38236677] [DOI: 10.1109/tcyb.2023.3344641]
Abstract
Alzheimer's disease (AD) is characterized by alterations of the brain's structural and functional connectivity during its progressive degenerative processes. Existing auxiliary diagnostic methods have accomplished the classification task, but few of them can accurately evaluate the changing characteristics of brain connectivity. In this work, a prior-guided adversarial learning with hypergraph (PALH) model is proposed to predict abnormal brain connections using triple-modality medical images. Concretely, a prior distribution from anatomical knowledge is estimated to guide multimodal representation learning using an adversarial strategy. Also, the pairwise collaborative discriminator structure is further utilized to narrow the difference in representation distribution. Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images. Experimental results demonstrate that the proposed model outperforms other related methods in analyzing and predicting AD progression. More importantly, the identified abnormal connections are partly consistent with previous neuroscience discoveries. The proposed model can evaluate the characteristics of abnormal brain connections at different stages of AD, which is helpful for cognitive disease study and early treatment.
3. Liu Y, Zhang Z, Zhang H, Wang X, Wang K, Yang R, Han P, Luan K, Zhou Y. Clinical prediction of microvascular invasion in hepatocellular carcinoma using an MRI-based graph convolutional network model integrated with nomogram. Br J Radiol 2024; 97:938-946. [PMID: 38552308] [DOI: 10.1093/bjr/tqae056]
Abstract
OBJECTIVES Based on enhanced MRI, a prediction model of microvascular invasion (MVI) for hepatocellular carcinoma (HCC) was developed using a graph convolutional network (GCN) combined with a nomogram. METHODS We retrospectively collected 182 histopathologically confirmed HCC patients, all of whom underwent enhanced MRI before surgery. The patients were randomly divided into training and validation groups. Radiomics features were extracted from the arterial phase (AP), portal venous phase (PVP), and delayed phase (DP), respectively. After removing redundant features, the graph structure was built by constructing a distance matrix from the feature matrix. The superior phase was screened and a GCN score (GS) was acquired. Finally, clinical and radiological characteristics were combined with the GS to establish the predictive nomogram. RESULTS Fifty of 182 patients (27.5%) were MVI-positive. In the radiological analysis, intratumoural artery (P = 0.007) was an independent predictor of MVI. The GCN model with grey-level co-occurrence matrix and grey-level run length matrix features yielded areas under the curve of 0.532, 0.690, and 0.885 in the training group and 0.583, 0.580, and 0.854 in the validation group for AP, PVP, and DP, respectively. DP was selected to develop the final model and obtain the GS. Combining the GS with diameter, corona enhancement, mosaic architecture, and intratumoural artery produced a nomogram with a C-index of 0.884 (95% CI: 0.829-0.927). CONCLUSIONS The GCN model based on DP has high predictive ability. A nomogram combining the GS with clinical and radiological characteristics can be a simple and effective guiding tool for selecting HCC treatment options. ADVANCES IN KNOWLEDGE GCN based on MRI could predict MVI in HCC.
Affiliation(s)
- Yang Liu
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin 150010, Heilongjiang, China
- Ziqian Zhang
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin 150010, Heilongjiang, China
- Hongxia Zhang
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin 150010, Heilongjiang, China
- Xinxin Wang
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin 150010, Heilongjiang, China
- Kun Wang
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
- Rui Yang
- Department of Medical Oncology, Harbin Medical University Cancer Hospital, No.150 Haping Road, Nangang District, Harbin 150081, Heilongjiang Province, China
- Peng Han
- Department of Surgical Oncology, Harbin Medical University Cancer Hospital, No.150 Haping Road, Nangang District, Harbin 150081, Heilongjiang Province, China
- Kuan Luan
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
- Yang Zhou
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin 150010, Heilongjiang, China
4. Xia Z, Zhou T, Mamoon S, Lu J. Inferring brain causal and temporal-lag networks for recognizing abnormal patterns of dementia. Med Image Anal 2024; 94:103133. [PMID: 38458094] [DOI: 10.1016/j.media.2024.103133]
Abstract
Brain functional network analysis has become a popular method to explore the laws of brain organization and identify biomarkers of neurological diseases. However, constructing an ideal brain network is still a challenging task due to our limited understanding of the human brain. Existing methods often ignore the impact of temporal lag on the results of brain network modeling, which may lead to unreliable conclusions. To overcome this issue, we propose a novel brain functional network estimation method that can simultaneously infer the causal mechanisms and temporal-lag values among brain regions. Specifically, our method converts lag learning into an instantaneous-effect estimation problem and embeds the search objectives into a deep neural network model as learnable parameters. To verify the effectiveness of the proposed estimation method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, comparing the proposed model with several existing correlation-based and causality-based methods. The experimental results show that the brain networks constructed by the proposed estimation method not only achieve promising classification performance but also exhibit characteristics of physiological mechanisms. Our approach provides a new perspective for understanding the pathogenesis of brain diseases. The source code is released at https://github.com/NJUSTxiazw/CTLN.
Affiliation(s)
- Zhengwang Xia
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Tao Zhou
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Saqib Mamoon
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Jianfeng Lu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
5. Hao X, Li J, Ma M, Qin J, Zhang D, Liu F. Hypergraph convolutional network for longitudinal data analysis in Alzheimer's disease. Comput Biol Med 2024; 168:107765. [PMID: 38042101] [DOI: 10.1016/j.compbiomed.2023.107765]
Abstract
Alzheimer's disease (AD) is an irreversible, progressive neurodegenerative disease. Longitudinal structural magnetic resonance imaging (sMRI) data have been widely used for tracking AD pathogenesis and diagnosis. However, existing methods tend to treat each time point equally, without considering the temporal characteristics of longitudinal data. In this paper, we propose a weighted hypergraph convolution network (WHGCN) that exploits the internal correlations among different time points and leverages high-order relationships between subjects for AD detection. Specifically, we construct hypergraphs for the sMRI data at each time point using the K-nearest neighbor (KNN) method to represent relationships between subjects, and then fuse the hypergraphs according to the importance of the data at each time point to obtain the final hypergraph. Subsequently, we use hypergraph convolution to learn high-order information between subjects while performing feature dimensionality reduction. Finally, we conduct experiments on 518 subjects selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database; the results show that the WHGCN achieves higher AD detection performance and has the potential to improve our understanding of AD pathogenesis.
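The KNN-based hypergraph construction described in this abstract is a standard technique: each subject anchors one hyperedge joining it to its k nearest neighbors in feature space. A minimal NumPy sketch (variable names are illustrative, not taken from the paper, and the per-time-point weighting is omitted):

```python
import numpy as np

def knn_hypergraph_incidence(X, k=5):
    """Build a hypergraph incidence matrix H (subjects x hyperedges).

    Each subject is the centroid of one hyperedge connecting it to its
    k nearest neighbors under Euclidean distance, so H is n x n with
    exactly k+1 ones per column (the centroid plus its k neighbors).
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between subjects
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of subject i; d2[i, i] = 0, so i is included
        nbrs = np.argsort(d2[i])[: k + 1]
        H[nbrs, i] = 1.0
    return H

# Toy example: 10 subjects with 4 features each
X = np.random.default_rng(0).normal(size=(10, 4))
H = knn_hypergraph_incidence(X, k=3)
print(H.shape)        # (10, 10): one hyperedge per subject
print(H.sum(axis=0))  # each hyperedge connects k+1 = 4 subjects
```

Hypergraph convolution then propagates features through this incidence matrix, which is what lets the model capture relations among groups of subjects rather than only pairs.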
Affiliation(s)
- Xiaoke Hao
- School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China
- Jiawang Li
- School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China
- Mingming Ma
- School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China
- Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Feng Liu
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
6. Chen G, Qin J, Amor BB, Zhou W, Dai H, Zhou T, Huang H, Shao L. Automatic Detection of Tooth-Gingiva Trim Lines on Dental Surfaces. IEEE Trans Med Imaging 2023; 42:3194-3204. [PMID: 37015112] [DOI: 10.1109/tmi.2023.3263161]
Abstract
Detecting the tooth-gingiva trim line from a dental surface plays a critical role in dental treatment planning and aligner 3D printing. Existing methods treat this task as a segmentation problem, which is resolved with geometric deep learning based mesh segmentation techniques. However, these methods can only provide indirect results (i.e., segmented teeth) and suffer from unsatisfactory accuracy due to the incapability of making full use of high-resolution dental surfaces. To this end, we propose a two-stage geometric deep learning framework for automatically detecting tooth-gingiva trim lines from dental surfaces. Our framework consists of a trim line proposal network (TLP-Net) for predicting an initial trim line from the low-resolution dental surface as well as a trim line refinement network (TLR-Net) for refining the initial trim line with the information from the high-resolution dental surface. Specifically, our TLP-Net predicts the initial trim line by fusing the multi-scale features from a U-Net with a proposed residual multi-scale attention fusion module. Moreover, we propose feature bridge modules and a trim line loss to further improve the accuracy. The resulting trim line is then fed to our TLR-Net, which is a deep-based LDDMM model with the high-resolution dental surface as input. In addition, dense connections are incorporated into TLR-Net for improved performance. Our framework provides an automatic solution to trim line detection by making full use of raw high-resolution dental surfaces. Extensive experiments on a clinical dental surface dataset demonstrate that our TLP-Net and TLR-Net are superior trim line detection methods and outperform cutting-edge methods in both qualitative and quantitative evaluations.
7. Jiao Z, Peng X, Wang Y, Xiao J, Nie D, Wu X, Wang X, Zhou J, Shen D. TransDose: Transformer-based radiotherapy dose prediction from CT images guided by super-pixel-level GCN classification. Med Image Anal 2023; 89:102902. [PMID: 37482033] [DOI: 10.1016/j.media.2023.102902]
Abstract
Radiotherapy is a mainstay treatment for cancer in the clinic. An excellent radiotherapy treatment plan is always based on a high-quality dose distribution map, which is produced by repeated manual trial and error by experienced experts. To accelerate the radiotherapy planning process, many automatic dose distribution prediction methods have been proposed recently and have achieved considerable success. Nevertheless, these methods require certain auxiliary inputs besides CT images, such as segmentation masks of the tumor and organs at risk (OARs), which limits their prediction efficiency and application potential. To address this issue, we design a novel approach named TransDose for dose distribution prediction that treats CT images as the sole input. Specifically, instead of inputting segmentation masks to provide prior anatomical information, we utilize a super-pixel-based graph convolutional network (GCN) to extract category-specific features, thereby compensating the network with the necessary anatomical knowledge. Besides, considering the strong continuous dependency between adjacent CT slices as well as adjacent dose maps, we embed a Transformer into the backbone and make use of its superior long-range sequence modeling to endow input features with inter-slice continuity information. To our knowledge, this is the first network specially designed for the task of dose prediction from only CT images without ignoring necessary anatomical structure. Finally, we evaluate our model on two real datasets, and extensive experiments demonstrate the generalizability and advantages of our method.
Affiliation(s)
- Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
- Xingchen Peng
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Jianghong Xiao
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Dong Nie
- Department of Computer Science, University of North Carolina at Chapel Hill, USA
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
8. Liang Y, Long M, Yang P, Wang T, Jiao J, Lei B. Fused Brain Functional Connectivity Network and Edge-attention Graph Convolution Network for Fibromyalgia Syndrome Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083477] [DOI: 10.1109/embc40787.2023.10340485]
Abstract
Fibromyalgia syndrome (FMS) is a rheumatic disorder that seriously affects patients' normal lives. Because the clinical manifestations of FMS are complex, it is challenging to detect, and an automatic FMS diagnosis model is urgently needed to assist physicians. Brain functional connectivity networks (BFCNs) constructed from resting-state functional magnetic resonance imaging (rs-fMRI) to describe brain function have been widely used to distinguish individuals with relevant diseases from normal controls (NC). We therefore propose a novel model based on a BFCN and a graph convolutional network (GCN) for automatic FMS diagnosis. First, a novel fused BFCN method is proposed that fuses Pearson's correlation (PC) and low-rank (LR) BFCNs, retaining information while reducing data redundancy. Then we combine the BFCN features with subjects' non-imaging information to obtain the nodes and adjacency matrices of a graph with edge attention. Finally, the graph is fed to the GCN layer for FMS diagnosis. Our model achieves 82.48% accuracy on an in-house FMS dataset. The experimental results show that our method outperforms state-of-the-art competing methods.
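The Pearson's correlation component of the fused BFCN described above is built directly from ROI time series. A minimal NumPy sketch of that PC step only (the low-rank term and the edge attention are beyond this illustration; all names are illustrative, not from the paper):

```python
import numpy as np

def pearson_bfcn(ts):
    """Pearson-correlation brain functional connectivity network.

    ts: rs-fMRI time series, shape (T timepoints, N regions of interest).
    Returns an N x N symmetric matrix whose (i, j) entry is the Pearson
    correlation between the time series of regions i and j; the diagonal
    is zeroed because self-connections carry no information.
    """
    fc = np.corrcoef(ts.T)       # N x N correlation matrix
    np.fill_diagonal(fc, 0.0)
    return fc

# Toy example: 100 timepoints, 8 ROIs
rng = np.random.default_rng(1)
ts = rng.normal(size=(100, 8))
fc = pearson_bfcn(ts)
print(fc.shape)               # (8, 8)
print(np.allclose(fc, fc.T))  # True: the network is symmetric
```

In the fused scheme, a matrix like `fc` would be combined with a low-rank reconstruction of the same time series before the graph is assembled.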
9. Zhang Y, Jiang J, Ling R, Wang L, Jiang J, Wang M. Early Diagnosis and Biomarkers of Alzheimer's Disease Based on Spatio-temporal Graph Convolution Network. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083072] [DOI: 10.1109/embc40787.2023.10341155]
Abstract
Functional magnetic resonance imaging (fMRI) can detect the dynamic activity of brain function and communication. Previous studies have found reduced brain functional connectivity in Alzheimer's disease (AD) patients. In this study, we propose to process fMRI data with a spatio-temporal graph convolution network (ST-GCN) to achieve early differential diagnosis of AD and to extract imaging markers using gradient-weighted class activation mapping (Grad-CAM). The data used in this study were from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, Xuanwu Hospital, and Tongji Hospital, and included 1105 normal controls and 790 patients with mild cognitive impairment (MCI). The model was trained by grid search under K-fold cross-validation. In addition, we used Grad-CAM to extract imaging markers and performed visualization analysis. The model achieves strong AD diagnostic power: accuracy = 0.92, sensitivity = 0.97, specificity = 0.89, and area under the curve = 0.96. Salient brain regions extracted by Grad-CAM include the paracentral lobule, inferior occipital gyrus, middle frontal gyrus, superior temporal gyrus, cuneus, posterior cingulate gyrus, and superior parietal gyrus. Our proposed ST-GCN model will help to explore objective markers that can be used for the early diagnosis of AD. Clinical relevance: the proposed model shows great potential for enhancing the understanding of AD pathology by detecting functional connectivity interruptions.
10. Yang Y, Ye C, Ma T. A deep connectome learning network using graph convolution for connectome-disease association study. Neural Netw 2023; 164:91-104. [PMID: 37148611] [DOI: 10.1016/j.neunet.2023.04.025]
Abstract
Multivariate analysis approaches provide insights into the identification of phenotype associations in brain connectome data. In recent years, deep learning methods, including convolutional neural networks (CNNs) and graph neural networks (GNNs), have shifted the development of connectome-wide association studies (CWAS) and made breakthroughs in connectome representation learning by leveraging deep embedded features. However, most existing studies remain limited because they potentially ignore region-specific features, which play a key role in distinguishing brain disorders with high intra-class variation, such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). Here, we propose a multivariate distance-based connectome network (MDCN) that addresses the local specificity problem through efficient parcellation-wise learning and associates population and parcellation dependencies to map individual differences. The approach, incorporating an explainable method, parcellation-wise gradient and class activation map (p-GradCAM), is feasible for identifying individual patterns of interest and pinpointing connectome associations with diseases. We demonstrate the utility of our method on two large aggregated multicenter public datasets by distinguishing ASD and ADHD from healthy controls and assessing their associations with the underlying diseases. Extensive experiments demonstrate the superiority of MDCN in classification and interpretation: MDCN outperformed competitive state-of-the-art methods and achieved a high proportion of overlap with previous findings. As a CWAS-guided deep learning method, the proposed MDCN framework may help bridge the gap between deep learning and CWAS approaches and provide new insights for connectome-wide association studies.
Affiliation(s)
- Yanwu Yang
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Chenfei Ye
- Peng Cheng Laboratory, Shenzhen, China; International Research Institute for Artificial Intelligence, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Ting Ma
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China; International Research Institute for Artificial Intelligence, Harbin Institute of Technology at Shenzhen, Shenzhen, China; Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, China
11. Manikantan K, Jaganathan S. A Model for Diagnosing Autism Patients Using Spatial and Statistical Measures Using rs-fMRI and sMRI by Adopting Graphical Neural Networks. Diagnostics (Basel) 2023; 13:1143. [PMID: 36980452] [PMCID: PMC10047680] [DOI: 10.3390/diagnostics13061143]
Abstract
This article proposes a model to diagnose autism patients using graph neural networks. A graph neural network relates subjects (nodes) using features (edges). In our model, radiomic features obtained from sMRI are used as edges, and spatio-temporal data obtained through rs-fMRI are used as nodes. Similarities between the first-order and texture features of subjects' sMRI data are derived using radiomics to construct the edges of the graph. The features from brain summaries are assembled and learned using a 3D CNN to represent the features of each node. Using structural similarities of the brain, rather than phenotypic data or graph kernel functions, provides better accuracy. The proposed model was applied to a standard dataset, ABIDE, and the classification results improved with the use of both spatial (sMRI) and statistical measures (brain summaries of rs-fMRI) instead of medical images alone.
12. Song X, Zhou F, Frangi AF, Cao J, Xiao X, Lei Y, Wang T, Lei B. Multicenter and Multichannel Pooling GCN for Early AD Diagnosis Based on Dual-Modality Fused Brain Network. IEEE Trans Med Imaging 2023; 42:354-367. [PMID: 35767511] [DOI: 10.1109/tmi.2022.3187141]
Abstract
Classification of significant memory concern (SMC) and mild cognitive impairment (MCI) is limited by confounding features, diverse imaging protocols, and limited sample sizes. To address these limitations, we introduce a dual-modality fused brain connectivity network combining resting-state functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), and propose three mechanisms in the current graph convolutional network (GCN) to improve classifier performance. First, we introduce a DTI-strength penalty term for constructing functional connectivity networks. Stronger structural connectivity and greater structural strength diversity between groups provide a higher opportunity for retaining connectivity information. Second, a multi-center attention graph with each node representing a subject is proposed to consider the influence of data source, gender, acquisition equipment, and disease status of the training samples in the GCN. The attention mechanism captures their different impacts on edge weights. Third, we propose a multi-channel mechanism that assigns different filters to features based on feature statistics to improve filter performance. Because performing convolution on nodes with low-quality features would also degrade filter performance, we further propose a pooling mechanism that uses the disease status of the training samples to evaluate node quality. Finally, we obtain the final classification results by inputting the multi-center attention graph into the multi-channel pooling GCN. The proposed method is tested on three datasets (an ADNI 2 dataset, an ADNI 3 dataset, and an in-house dataset). Experimental results indicate that the proposed method is effective and superior to other related algorithms, with a mean classification accuracy of 93.05% in our binary classification tasks. Our code is available at: https://github.com/Xuegang-S.
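The population-graph GCNs recurring across these entries build on the standard normalized-adjacency propagation rule of Kipf and Welling. A minimal single-layer NumPy sketch (names are illustrative; the paper's attention, multi-channel, and pooling mechanisms are not reproduced here):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer with the standard propagation rule:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W), where D is the degree
    matrix of A + I. A: n x n adjacency, X: n x f node features,
    W: f x f' trainable weights.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A + I
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Toy population graph: 4 subject nodes on a path, 3 features, 2 channels
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.ones((4, 3))
W = np.full((3, 2), 0.5)
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2): one 2-channel embedding per subject
```

In a population graph, each row of `A` encodes a subject's similarity links (e.g., shared site or gender), so one layer mixes each subject's features with those of its neighbors before classification.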
13. Peng L, Wang N, Xu J, Zhu X, Li X. GATE: Graph CCA for Temporal Self-Supervised Learning for Label-Efficient fMRI Analysis. IEEE Trans Med Imaging 2023; 42:391-402. [PMID: 36018878] [DOI: 10.1109/tmi.2022.3201974]
Abstract
In this work, we focus on the challenging task of neuro-disease classification using functional magnetic resonance imaging (fMRI). In population-graph-based disease analysis, graph convolutional neural networks (GCNs) have achieved remarkable success. However, these achievements are inseparable from abundant labeled data and are sensitive to spurious signals. To improve fMRI representation learning and classification in a label-efficient setting, we propose a novel and theory-driven self-supervised learning (SSL) framework on GCNs, namely Graph CCA for Temporal sElf-supervised learning on fMRI analysis (GATE). Concretely, it is demanding to design a suitable and effective SSL strategy that extracts informative and robust features for fMRI. To this end, we investigate several new graph augmentation strategies from fMRI dynamic functional connectivity (FC) for SSL training. Further, we leverage canonical correlation analysis (CCA) on different temporal embeddings and present the theoretical implications. Consequently, this yields a novel two-step GCN learning procedure comprising (i) SSL on an unlabeled fMRI population graph and (ii) fine-tuning on a small labeled fMRI dataset for a classification task. Our method is tested on two independent fMRI datasets, demonstrating superior performance on autism and dementia diagnosis. Our code is available at https://github.com/LarryUESTC/GATE.
14. Warren SL, Moustafa AA. Functional magnetic resonance imaging, deep learning, and Alzheimer's disease: A systematic review. J Neuroimaging 2023; 33:5-18. [PMID: 36257926] [PMCID: PMC10092597] [DOI: 10.1111/jon.13063]
Abstract
Alzheimer's disease (AD) is currently diagnosed using a mixture of psychological tests and clinical observations. However, these diagnoses are not perfect, and additional diagnostic tools (e.g., MRI) can help improve our understanding of AD as well as our ability to detect the disease. Accordingly, a large amount of research has been invested into innovative diagnostic methods for AD. Functional MRI (fMRI) is a form of neuroimaging technology that has been used to diagnose AD; however, fMRI data are extremely noisy and complex, which has limited their clinical use. Nonetheless, recent innovations in deep learning technology could enable simplified and streamlined analysis of fMRI. Deep learning is a form of artificial intelligence that uses computer algorithms inspired by human neural networks to solve complex problems. For example, in fMRI research, deep learning models can automatically denoise images and classify AD by detecting patterns in participants' brain scans. In this systematic review, we investigate how fMRI (specifically resting-state fMRI) and deep learning methods are used to diagnose AD. We outline the common deep neural network, preprocessing, and classification methods used in the literature, and discuss the accuracy, strengths, limitations, and future directions of fMRI deep learning methods. In doing so, we aim to summarize the current field for new researchers, suggest specific areas for future research, and highlight the potential of fMRI to aid AD diagnoses.
Affiliation(s)
- Samuel L Warren
- School of Psychology, Faculty of Society and Design, Bond University, Gold Coast, Queensland, Australia
- Ahmed A Moustafa
- School of Psychology, Faculty of Society and Design, Bond University, Gold Coast, Queensland, Australia; Department of Human Anatomy and Physiology, Faculty of Health Sciences, University of Johannesburg, Johannesburg, South Africa
15
Zhang S, Wang J, Yu S, Wang R, Han J, Zhao S, Liu T, Lv J. An explainable deep learning framework for characterizing and interpreting human brain states. Med Image Anal 2023; 83:102665. [PMID: 36370512 DOI: 10.1016/j.media.2022.102665] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 08/01/2022] [Accepted: 10/13/2022] [Indexed: 11/11/2022]
Abstract
Deep learning approaches have been widely adopted in the medical image analysis field. However, most existing deep learning approaches focus on achieving promising performance in tasks such as classification, detection, and segmentation, and much less effort is devoted to explaining the designed models. Similarly, in the brain imaging field, many deep learning approaches have been designed and applied to characterize and predict human brain states, but these models lack interpretation. In response, we propose a novel domain-knowledge-informed self-attention graph pooling (SAGPool) graph convolutional neural network to study human brain states. Specifically, the dense individualized and common connectivity-based cortical landmarks system (DICCCOL, structural brain connectivity profiles) and the holistic atlases of functional networks and interactions system (HAFNI, functional brain connectivity profiles) are integrated with the SAGPool model to better characterize and interpret brain states. Extensive experiments are designed and carried out on the large-scale Human Connectome Project (HCP) Q1 and S1200 datasets. Promising brain state classification performance is observed (e.g., an average of 93.7% for seven-task classification and 100% for binary classification). In addition, the importance of the brain regions that contribute most to accurate classification is successfully quantified and visualized. A thorough neuroscientific interpretation suggests that these extracted brain regions and their importance, calculated from the self-attention graph pooling layer, offer substantial explainability.
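The self-attention graph pooling step can be sketched as follows: node scores come from one graph-convolution pass, and only the top-scoring nodes (and their induced subgraph) are kept. The random projection `theta` stands in for a learned parameter and is purely illustrative; it is not from the paper.

```python
import numpy as np

def sag_pool(x, adj, ratio=0.5):
    """SAGPool-style self-attention graph pooling, minimal NumPy sketch.

    `x` holds node features (N x F), `adj` the binary adjacency (N x N).
    Node scores come from one normalized graph-convolution pass; the top
    `ratio` of nodes is kept and their features are gated by the scores.
    """
    n = x.shape[0]
    # Symmetrically normalized adjacency with self-loops.
    a = adj + np.eye(n)
    d = 1.0 / np.sqrt(a.sum(1))
    a_norm = a * d[:, None] * d[None, :]
    rng = np.random.default_rng(0)
    theta = rng.normal(size=(x.shape[1], 1))      # attention projection (illustrative stand-in)
    scores = np.tanh(a_norm @ x @ theta).ravel()  # one GCN pass -> per-node attention scores
    k = max(1, int(np.ceil(ratio * n)))
    keep = np.argsort(scores)[-k:]                # indices of the top-k nodes
    x_pooled = x[keep] * scores[keep, None]       # gate kept features by their scores
    adj_pooled = adj[np.ix_(keep, keep)]          # induced subgraph of kept nodes
    return x_pooled, adj_pooled, keep
```

The `keep` indices are what make the pooling interpretable: the retained nodes mark the brain regions the attention layer deems most discriminative.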
Affiliation(s)
- Shu Zhang
- Center for Brain and Brain-Inspired Computing Research, Department of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Junxin Wang
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Sigang Yu
- Center for Brain and Brain-Inspired Computing Research, Department of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Ruoyang Wang
- Center for Brain and Brain-Inspired Computing Research, Department of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Junwei Han
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Shijie Zhao
- School of Automation, Northwestern Polytechnical University, Xi'an, China; Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen, China
- Tianming Liu
- Department of Computer Science and Bioimaging Research Center, University of Georgia, Athens, GA, United States
- Jinglei Lv
- School of Biomedical Engineering & Brain and Mind Centre, University of Sydney, Sydney, Australia
16
Song X, Yang P, Han H, Lei B. Research on the intelligent diagnosis of dementia. Lancet Reg Health Am 2022; 17:100421. [PMID: 36776565 PMCID: PMC9904105 DOI: 10.1016/j.lana.2022.100421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 11/28/2022] [Accepted: 12/09/2022] [Indexed: 12/29/2022]
Affiliation(s)
- Xuegang Song
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, 518060, China
- Peng Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, 518060, China
- Hongbin Han (corresponding author)
- Department of Radiology, Peking University Third Hospital, Beijing, 100191, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China; NMPA Key Laboratory for Evaluation of Medical Imaging Equipment and Technique, Beijing, 100191, China
- Baiying Lei (corresponding author)
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, 518060, China
17
Bi XA, Mao Y, Luo S, Wu H, Zhang L, Luo X, Xu L. A novel generation adversarial network framework with characteristics aggregation and diffusion for brain disease classification and feature selection. Brief Bioinform 2022; 23:6762742. [PMID: 36259367 DOI: 10.1093/bib/bbac454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 09/01/2022] [Accepted: 09/23/2022] [Indexed: 12/14/2022] Open
Abstract
Imaging genetics provides unique insights into the pathological studies of complex brain diseases by integrating the characteristics of multi-level medical data. However, most current imaging genetics research performs incomplete data fusion, and there is a lack of effective deep learning methods to analyze neuroimaging and genetic data jointly. Therefore, this paper first constructs brain region-gene networks to intuitively represent the association pattern of pathogenetic factors. Second, a novel feature information aggregation model is constructed to accurately describe the information aggregation process among brain region nodes and gene nodes. Finally, a deep learning method called feature information aggregation and diffusion generative adversarial network (FIAD-GAN) is proposed to efficiently classify samples and select features. We focus on improving the generator with the proposed convolution and deconvolution operations, with which the interpretability of the deep learning framework has been dramatically improved. The experimental results indicate that FIAD-GAN can not only achieve superior results in various disease classification tasks but also extract brain regions and genes closely related to Alzheimer's disease (AD). This work provides a novel method for intelligent clinical decisions. The relevant biomedical discoveries provide a reliable reference and technical basis for the clinical diagnosis, treatment and pathological analysis of disease.
Affiliation(s)
- Xia-An Bi
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and College of Information Science and Engineering, Hunan Normal University, Changsha, P.R. China
- Yuhua Mao
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Sheng Luo
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Hao Wu
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Lixia Zhang
- School of Information Science and Engineering, Hunan Normal University, Changsha, P.R. China
- Xun Luo
- College of Information Science and Engineering, Hunan Normal University, Changsha, P.R. China
- Luyun Xu
- College of Business, Hunan Normal University, Changsha, P.R. China
18
Qiao J, Wang R, Liu H, Xu G, Wang Z. Brain disorder prediction with dynamic multivariate spatio-temporal features: Application to Alzheimer’s disease and autism spectrum disorder. Front Aging Neurosci 2022; 14:912895. [PMID: 36110425 PMCID: PMC9468323 DOI: 10.3389/fnagi.2022.912895] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 08/05/2022] [Indexed: 11/16/2022] Open
Abstract
The dynamic functional connectivity (dFC) in functional magnetic resonance imaging (fMRI) is beneficial for the analysis and diagnosis of neurological brain diseases. The dFCs between regions of interest (ROIs) are generally delineated by a specific template and clustered into multiple distinct states. However, such models are confined to a model-driven, self-contained system that ignores the spatial diversity and temporal dynamics of the data. In this study, we proposed a spatial and time domain feature extraction approach for Alzheimer's disease (AD) and autism spectrum disorder (ASD)-assisted diagnosis which exploited the dynamic connectivity among independent functional subnetworks in the brain. Briefly, independent subnetworks were obtained by applying spatial independent component analysis (SICA) to the preprocessed fMRI data. Then, a sliding window approach was used to segment the time series of the spatial components. After that, the functional connections within each window were obtained sequentially. Finally, a temporal signal-sensitive long short-term memory (LSTM) network was used for classification. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Autism Brain Imaging Data Exchange (ABIDE) datasets showed that the proposed method effectively predicted the diseases at an early stage and outperformed the existing algorithms. The dFCs between the different components of the brain could be used as biomarkers for the diagnosis of diseases such as AD and ASD, providing a reliable basis for the study of brain connectomics.
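The sliding-window step described above can be sketched in a few lines: segment the component time series into overlapping windows and compute one Pearson-correlation matrix per window. The window length and stride here are illustrative choices, not values from the paper.

```python
import numpy as np

def sliding_window_fc(ts, win=30, step=5):
    """Dynamic FC from component time series (T timepoints x C components).

    Returns a stack of C x C Pearson-correlation matrices, one per window.
    Window length `win` and stride `step` are illustrative defaults.
    """
    t, c = ts.shape
    mats = []
    for start in range(0, t - win + 1, step):
        seg = ts[start:start + win]
        mats.append(np.corrcoef(seg, rowvar=False))  # C x C correlations for this window
    return np.stack(mats)
```

The resulting sequence of correlation matrices is exactly the kind of temporal input an LSTM classifier can consume, one window per timestep.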
Affiliation(s)
- Jianping Qiao (corresponding author)
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Rong Wang
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Hongjia Liu
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Guangrun Xu (corresponding author)
- Department of Neurology, Qilu Hospital of Shandong University, Jinan, China
- Zhishun Wang (corresponding author)
- Department of Psychiatry, Columbia University, New York, NY, United States
19
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 04/29/2022] [Accepted: 05/30/2022] [Indexed: 11/28/2022]
20
Liu L, Wang YP, Wang Y, Zhang P, Xiong S. An enhanced multi-modal brain graph network for classifying neuropsychiatric disorders. Med Image Anal 2022; 81:102550. [PMID: 35872360 DOI: 10.1016/j.media.2022.102550] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 07/06/2022] [Accepted: 07/13/2022] [Indexed: 10/17/2022]
Abstract
It has been proven that neuropsychiatric disorders (NDs) can be associated with both the structures and functions of brain regions; thus, structural and functional data can be usefully combined in a comprehensive analysis. While brain structural MRI (sMRI) images contain anatomic and morphological information about NDs, functional MRI (fMRI) images carry complementary information. However, efficient extraction and fusion of sMRI and fMRI data remains challenging. In this study, we develop an enhanced multi-modal graph convolutional network (MME-GCN) for binary classification between patients with NDs and healthy controls, based on the fusion of the structural and functional graphs of brain regions. First, based on the same brain atlas, we construct structural and functional graphs from sMRI and fMRI data, respectively. Second, we use machine learning to extract important features from the structural graph network. Third, we use these extracted features to adjust the corresponding edge weights in the functional graph network. Finally, we train a multi-layer GCN and use it for the binary classification task. MME-GCN achieved 93.71% classification accuracy on the open dataset provided by the Consortium for Neuropsychiatric Phenomics. In addition, we analyzed the important features selected from the structural graph and verified them in the functional graph. Using MME-GCN, we found several specific brain connections important to NDs.
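The third step, using structural-graph feature importances to adjust functional edge weights, might look like the following. The multiplicative form and the `alpha` scale are assumptions for illustration; the abstract does not give the exact update rule.

```python
import numpy as np

def reweight_functional_edges(func_adj, struct_importance, alpha=1.0):
    """Adjust functional edge weights with structural-feature importance.

    A minimal sketch of the fusion idea in MME-GCN: edges that machine
    learning flags as important in the structural graph are amplified in
    the functional graph. `func_adj` and `struct_importance` are R x R
    matrices over the same atlas; the multiplicative form is assumed.
    """
    # Scale importances to [0, 1] so alpha controls the amplification.
    imp = struct_importance / (struct_importance.max() + 1e-8)
    return func_adj * (1.0 + alpha * imp)
```

The reweighted adjacency then replaces the raw functional graph as input to the multi-layer GCN classifier.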
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Yu-Ping Wang
- Biomedical Engineering Department, Tulane University, New Orleans, LA 70118, USA
- Yi Wang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Pei Zhang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Shufeng Xiong
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
21
Zhao F, Li N, Pan H, Chen X, Li Y, Zhang H, Mao N, Cheng D. Multi-View Feature Enhancement Based on Self-Attention Mechanism Graph Convolutional Network for Autism Spectrum Disorder Diagnosis. Front Hum Neurosci 2022; 16:918969. [PMID: 35911592 PMCID: PMC9334869 DOI: 10.3389/fnhum.2022.918969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 06/16/2022] [Indexed: 12/01/2022] Open
Abstract
Functional connectivity (FC) networks based on resting-state functional magnetic resonance imaging (rs-fMRI) have become an important tool to explore and understand the brain, and can provide an objective basis for the diagnosis of neurodevelopmental disorders such as autism spectrum disorder (ASD). However, most FC networks only consider the unilateral features of nodes or edges, and the interaction between them is ignored. In fact, their integration can provide more comprehensive and crucial information for diagnosis. To address this issue, a new multi-view brain network feature enhancement method based on a self-attention mechanism graph convolutional network (SA-GCN) is proposed in this article, which can enhance node features through the connection relationships among different nodes and then extract deeper, more discriminative features. Specifically, we first integrate the pooling operation of the self-attention mechanism into the graph convolutional network (GCN), which jointly considers the node features and topology of the graph network and thereby captures more discriminative features. In addition, the sample size is augmented by a "sliding window" strategy, which helps avoid overfitting and enhances generalization ability. Furthermore, to fully explore the complex connection relationships among brain regions, we constructed a low-order functional graph network (Lo-FGN) and a high-order functional graph network (Ho-FGN) and enhanced the features of the two functional graph networks (FGNs) with SA-GCN. The experimental results on benchmark datasets show that: (1) SA-GCN plays a role in feature enhancement and can effectively extract more discriminative features, and (2) the integration of Lo-FGN and Ho-FGN achieves the best ASD classification accuracy (79.9%), which reveals the information complementarity between them.
Affiliation(s)
- Feng Zhao
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Na Li
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Hongxin Pan
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Xiaobo Chen
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Yuan Li
- School of Management Science and Engineering, Shandong Technology and Business University, Yantai, China
- Haicheng Zhang
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
- Dapeng Cheng
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
22
Predicting brain structural network using functional connectivity. Med Image Anal 2022; 79:102463. [PMID: 35490597 DOI: 10.1016/j.media.2022.102463] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 03/06/2022] [Accepted: 04/15/2022] [Indexed: 12/13/2022]
Abstract
Uncovering the non-trivial brain structure-function relationship is fundamentally important for revealing organizational principles of the human brain. However, it is challenging to infer a reliable relationship between individual brain structure and function, e.g., the relations between individual brain structural connectivity (SC) and functional connectivity (FC). Brain structure-function displays a distributed and heterogeneous pattern: many functional relationships arise from non-overlapping sets of anatomical connections. This complex relation can be interwoven with widespread individual structural and functional variations. Motivated by advances in generative adversarial networks (GANs) and graph convolutional networks (GCNs) in the deep learning field, in this work we proposed a multi-GCN based GAN (MGCN-GAN) to infer individual SC from the corresponding FC by automatically learning the complex associations between individual brain structural and functional networks. The generator of MGCN-GAN is composed of multiple multi-layer GCNs designed to model complex indirect connections in the brain network. The discriminator of MGCN-GAN is a single multi-layer GCN that aims to distinguish the predicted SC from real SC. To overcome the inherent unstable behavior of GANs, we designed a new structure-preserving (SP) loss function to guide the generator to learn the intrinsic SC patterns more effectively. Using the Human Connectome Project (HCP) and Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets as test beds, our MGCN-GAN model can generate reliable individual SC from FC. This result implies that there may exist a common regulation between specific brain structural and functional architectures across different individuals.
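A minimal sketch of the two ingredients named above, assuming a standard symmetric-normalized GCN propagation rule for the generator/discriminator blocks and an MSE-plus-correlation form for the SP loss; the paper's exact SP loss is not specified in the abstract, so that combination is an assumption.

```python
import numpy as np

def gcn_layer(x, adj, w):
    """One GCN propagation step, the basic block of MGCN-GAN's networks:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = adj.shape[0]
    a = adj + np.eye(n)                      # add self-loops
    d = 1.0 / np.sqrt(a.sum(1))              # D^-1/2 as a vector
    a_norm = a * d[:, None] * d[None, :]     # symmetric normalization
    return np.maximum(a_norm @ x @ w, 0.0)   # ReLU activation

def sp_loss(pred_sc, real_sc):
    """Structure-preserving loss sketch: element-wise MSE plus a term that
    rewards high Pearson correlation between predicted and real SC. The
    exact form used in the paper may differ."""
    mse = ((pred_sc - real_sc) ** 2).mean()
    corr = np.corrcoef(pred_sc.ravel(), real_sc.ravel())[0, 1]
    return mse + (1.0 - corr)
```

In the full model, several stacked `gcn_layer` passes over the FC graph would produce the predicted SC matrix, and `sp_loss` would be added to the adversarial objective to stabilize training.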
23
Lin QH, Niu YW, Sui J, Zhao WD, Zhuo C, Calhoun VD. SSPNet: An interpretable 3D-CNN for classification of schizophrenia using phase maps of resting-state complex-valued fMRI data. Med Image Anal 2022; 79:102430. [PMID: 35397470 DOI: 10.1016/j.media.2022.102430] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 03/16/2022] [Accepted: 03/18/2022] [Indexed: 01/05/2023]
Abstract
Convolutional neural networks (CNNs) have shown promising results in classifying individuals with mental disorders such as schizophrenia using resting-state fMRI data. However, complex-valued fMRI data is rarely used, since the additional phase data introduce high levels of noise even though they are potentially useful for classification. As such, we propose to use spatial source phase (SSP) maps derived from complex-valued fMRI data as the CNN input. The SSP maps are not only less noisy, but also more sensitive to spatial activation changes caused by mental disorders than magnitude maps. We build a 3D-CNN framework with two convolutional layers (named SSPNet) to fully explore the 3D structure and voxel-level relationships in the SSP maps. Two interpretability modules, consisting of saliency map generation and gradient-weighted class activation mapping (Grad-CAM), are incorporated into the well-trained SSPNet to provide additional information helpful for understanding the output. Experimental results from classifying schizophrenia patients (SZs) and healthy controls (HCs) show that the proposed SSPNet significantly improved accuracy and AUC compared to CNNs using magnitude maps extracted from either magnitude-only (by 23.4 and 23.6% for DMN) or complex-valued fMRI data (by 10.6 and 5.8% for DMN). SSPNet captured more prominent HC-SZ differences in saliency maps, and Grad-CAM localized all contributing brain regions with opposite strengths for HCs and SZs within SSP maps. These results indicate the potential of SSPNet as a sensitive tool that may be useful for the development of brain-based biomarkers of mental disorders.
Affiliation(s)
- Qiu-Hua Lin
- School of Information and Communication Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Yan-Wei Niu
- School of Information and Communication Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Jing Sui
- State Key Laboratory of Brain Cognition and Learning, Beijing Normal University, Beijing, 100875, China
- Wen-Da Zhao
- School of Information and Communication Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Chuanjun Zhuo
- Department of Psychiatry, The Fourth Center Hospital of Tianjin, Tianjin Medical University Affiliated Fourth Center Hospital, Tianjin 300140, China
- Vince D Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, USA
24
Bi XA, Li L, Wang Z, Wang Y, Luo X, Xu L. IHGC-GAN: influence hypergraph convolutional generative adversarial network for risk prediction of late mild cognitive impairment based on imaging genetic data. Brief Bioinform 2022; 23:6554128. [PMID: 35348583 DOI: 10.1093/bib/bbac093] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 01/28/2022] [Accepted: 02/23/2022] [Indexed: 11/13/2022] Open
Abstract
Predicting disease progression at the initial stage, to enable early intervention and treatment, can effectively prevent further deterioration of the condition. Traditional methods for medical data analysis usually perform poorly because they cannot mine the correlation patterns of pathogenies. Therefore, many computational methods have been adopted from the field of deep learning. In this study, we propose a novel method, the influence hypergraph convolutional generative adversarial network (IHGC-GAN), for disease risk prediction. First, a hypergraph is constructed with genes and brain regions as nodes. Then, an influence transmission model is built to portray the associations between nodes and the transmission rule of disease information. Third, the IHGC-GAN method is constructed based on this model. This method innovatively combines the graph convolutional network (GCN) and the GAN: the GCN is used as the generator in the GAN to spread and update the lesion information of nodes in the brain region-gene hypergraph. Finally, the prediction accuracy of the method is improved through mutual competition and repeated iteration between the generator and discriminator. The method can not only capture the evolutionary pattern from early mild cognitive impairment (EMCI) to late MCI (LMCI) but also extract the pathogenic factors and predict the deterioration risk from EMCI to LMCI. The results on the two datasets indicate that the IHGC-GAN method has better prediction performance than advanced methods on a variety of indicators.
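Spreading information over a brain region-gene hypergraph can be sketched with a standard hypergraph convolution (the HGNN-style propagation rule); unit hyperedge weights are assumed here, and the paper's influence-transmission weighting may differ from this generic form.

```python
import numpy as np

def hypergraph_conv(x, h, theta):
    """One hypergraph convolution over an incidence matrix `h` (nodes x
    hyperedges), propagating node features `x` through shared hyperedges:
    X' = D_v^-1/2 H D_e^-1 H^T D_v^-1/2 X Theta  (unit edge weights assumed).
    """
    dv = h.sum(1)                                 # node degrees
    de = h.sum(0)                                 # hyperedge degrees
    dv_inv = 1.0 / np.sqrt(np.maximum(dv, 1e-8))  # D_v^-1/2
    de_inv = 1.0 / np.maximum(de, 1e-8)           # D_e^-1
    # Normalized propagation operator D_v^-1/2 H D_e^-1 H^T D_v^-1/2.
    prop = (dv_inv[:, None] * h * de_inv[None, :]) @ (h.T * dv_inv[None, :])
    return prop @ x @ theta
```

Stacking such layers as the GAN's generator lets lesion information diffuse between genes and brain regions that share a hyperedge, which is the mechanism the abstract describes.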
Affiliation(s)
- Xia-An Bi
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and College of Information Science and Engineering, Hunan Normal University, Changsha 410081, P.R. China
- Lou Li
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Zizheng Wang
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yu Wang
- Department of Computing, School of Information Science and Engineering, Hunan Normal University, Changsha, China
- Xun Luo
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, and College of Information Science and Engineering, Hunan Normal University, Changsha 410081, P.R. China
- Luyun Xu
- College of Business, Hunan Normal University, Changsha 410081, P.R. China
25
Cui W, Yan C, Yan Z, Peng Y, Leng Y, Liu C, Chen S, Jiang X, Zheng J, Yang X. BMNet: A New Region-Based Metric Learning Method for Early Alzheimer's Disease Identification With FDG-PET Images. Front Neurosci 2022; 16:831533. [PMID: 35281501 PMCID: PMC8908419 DOI: 10.3389/fnins.2022.831533] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 01/11/2022] [Indexed: 12/21/2022] Open
Abstract
18F-fluorodeoxyglucose (FDG)-positron emission tomography (PET) reveals altered brain metabolism in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD). Some biomarkers derived from FDG-PET by computer-aided diagnosis (CAD) technologies have been shown to accurately distinguish normal controls (NC), MCI, and AD. However, existing FDG-PET-based research is still insufficient for the identification of early MCI (EMCI) and late MCI (LMCI). Compared with methods based on other modalities, current FDG-PET methods also make inadequate use of inter-region features for the diagnosis of early AD. Moreover, considering the variability between individuals, some hard samples that are very similar to both classes limit classification performance. To tackle these problems, in this paper, we propose a novel bilinear pooling and metric learning network (BMNet), which can extract inter-region representation features and distinguish hard samples by constructing an embedding space. To validate the proposed method, we collect 898 FDG-PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 263 NC subjects, 290 EMCI patients, 147 LMCI patients, and 198 AD patients. Following the common preprocessing steps, 90 features are extracted from each FDG-PET image according to the automated anatomical labeling (AAL) template and then fed into the proposed network. Extensive fivefold cross-validation experiments are performed for multiple two-class classifications. Experiments show that most metrics improve after adding the bilinear pooling module and the metric losses to the baseline model, respectively. Specifically, in the classification task between EMCI and LMCI, specificity improves by 6.38% after adding the triplet metric loss, and the negative predictive value (NPV) improves by 3.45% after using the bilinear pooling module. In addition, the accuracy of classification between EMCI and LMCI reaches 79.64% using imbalanced FDG-PET images, a state-of-the-art result for EMCI versus LMCI classification based on PET images.
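A minimal sketch of the bilinear pooling idea, turning region-wise features into an inter-region representation via an outer product; the signed square root and l2 normalization are standard post-processing choices for bilinear features and are assumed here, not details from the paper.

```python
import numpy as np

def bilinear_pool(features):
    """Bilinear pooling of region-wise features (R regions x D dims).

    The R x R outer-product matrix captures pairwise inter-region
    interactions; it is flattened, passed through a signed square root,
    and l2-normalized (both steps are assumed post-processing).
    """
    b = features @ features.T                # R x R inter-region interactions
    v = b.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square root
    return v / (np.linalg.norm(v) + 1e-8)    # l2 normalization
```

The resulting vector is what a metric-learning head (e.g., a triplet loss) would embed, so that hard samples near the class boundary are pushed apart.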
Affiliation(s)
- Wenju Cui
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Caiying Yan
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Yunsong Peng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China; School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Yilin Leng
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Chenlu Liu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Shuangqing Chen
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Xi Jiang
- School of Life Sciences and Technology, The University of Electronic Science and Technology of China, Chengdu, China
- Jian Zheng
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xiaodong Yang
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
26
Chu Y, Wang G, Cao L, Qiao L, Liu M. Multi-Scale Graph Representation Learning for Autism Identification With Functional MRI. Front Neuroinform 2022; 15:802305. [PMID: 35095453 PMCID: PMC8792610 DOI: 10.3389/fninf.2021.802305] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 12/06/2021] [Indexed: 11/16/2022] Open
Abstract
Resting-state functional MRI (rs-fMRI) has been widely used for the early diagnosis of autism spectrum disorder (ASD). With rs-fMRI, functional connectivity networks (FCNs) are usually constructed to represent each subject, with each element representing the pairwise relationship between brain regions of interest (ROIs). Previous studies often first extract handcrafted network features (such as node degree and clustering coefficient) from FCNs and then construct a prediction model for ASD diagnosis, which largely requires expert knowledge. Graph convolutional networks (GCNs) have recently been employed to jointly perform FCN feature extraction and ASD identification in a data-driven manner. However, existing studies tend to focus on the single-scale topology of FCNs by using a single atlas for ROI partition, thus ignoring potentially complementary topology information of FCNs at different spatial scales. In this paper, we develop a multi-scale graph representation learning (MGRL) framework for rs-fMRI-based ASD diagnosis. The MGRL consists of three major components: (1) multi-scale FCN construction using multiple brain atlases for ROI partition, (2) FCN representation learning via multi-scale GCNs, and (3) multi-scale feature fusion and classification for ASD diagnosis. The proposed MGRL is evaluated on 184 subjects with rs-fMRI scans from the public Autism Brain Imaging Data Exchange (ABIDE) database. Experimental results suggest the efficacy of our MGRL in FCN feature extraction and ASD identification, compared with several state-of-the-art methods.
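The per-scale graph convolution and multi-scale fusion described above can be sketched with the standard GCN propagation rule, one layer per atlas. This is a hedged numpy toy: identity node features, mean pooling over nodes, and concatenation as the fusion step are all assumptions, not the MGRL code.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, as in a standard GCN layer."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degrees >= 1 with self-loops
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

def gcn_layer(A_norm, X, W):
    """One graph-convolution layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(A_norm @ X @ W, 0.0)

def multi_scale_embed(fcns, weights):
    """One GCN layer per atlas scale (identity node features), mean-pooled
    over nodes, then concatenated across scales for the classifier."""
    parts = []
    for A, W in zip(fcns, weights):
        H = gcn_layer(normalize_adj(A), np.eye(A.shape[0]), W)
        parts.append(H.mean(axis=0))  # readout: average node embeddings
    return np.concatenate(parts)
```

Each atlas (e.g., a 90-ROI and a 200-ROI parcellation) contributes its own FCN and weight matrix; concatenation lets complementary topology at different spatial scales reach the classifier.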
Collapse
Affiliation(s)
- Ying Chu
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Department of Information Science and Technology, Taishan University, Taian, China
| | - Guangyu Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, China
| | - Liang Cao
- Taian Tumor Prevention and Treatment Hospital, Taian, China
| | - Lishan Qiao
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- *Correspondence: Lishan Qiao
| | - Mingxia Liu
- Department of Information Science and Technology, Taishan University, Taian, China
| |
Collapse
|
27
|
Li Y, Liu J, Jiang Y, Liu Y, Lei B. Virtual Adversarial Training-Based Deep Feature Aggregation Network From Dynamic Effective Connectivity for MCI Identification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:237-251. [PMID: 34491896 DOI: 10.1109/tmi.2021.3110829] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Dynamic functional connectivity (dFC) networks inferred from resting-state fMRI reveal macroscopic dynamic neural activity patterns for brain disease identification. However, dFC methods ignore the causal influence between brain regions. Furthermore, due to the complex non-Euclidean structure of brain networks, advanced deep neural networks are difficult to apply to learning high-dimensional representations from brain networks. In this paper, a group constrained Kalman filter (gKF) algorithm is proposed to construct dynamic effective connectivity (dEC), where the gKF provides a more comprehensive understanding of the directional interactions within dynamic brain networks than dFC methods. Then, a novel virtual adversarial training convolutional neural network (VAT-CNN) is employed to extract local features of the dEC. The VAT strategy improves the robustness of the model to adversarial perturbations and therefore effectively avoids overfitting. Finally, we propose high-order connectivity weight-guided graph attention networks (cwGAT) to aggregate features of the dEC. By injecting the weight information of high-order connectivity into the attention mechanism, the cwGAT provides more effective high-level feature representations than the conventional GAT. The high-level features generated by the cwGAT are applied to binary and multiclass classification tasks for mild cognitive impairment (MCI). Experimental results indicate that the proposed framework achieves classification accuracies of 90.9%, 89.8%, and 82.7% for normal control (NC) vs. early MCI (EMCI), EMCI vs. late MCI (LMCI), and NC vs. EMCI vs. LMCI classification, respectively, significantly outperforming state-of-the-art methods.
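A minimal sketch of the connectivity-weight-guided attention idea: a GAT-style score for each neighbor is scaled by that edge's high-order connectivity weight before the softmax. The exact point where the weight enters the mechanism is an assumption here, and all names are placeholders; this is not the cwGAT release.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def cw_attention(h_i, neighbors, conn_weights, W, a, slope=0.2):
    """GAT-style attention for one node, with each raw LeakyReLU score
    scaled by the edge's high-order connectivity weight (assumed form).
    Returns (attention coefficients, aggregated feature)."""
    z_i = W @ h_i
    raw = []
    for h_j, w_ij in zip(neighbors, conn_weights):
        e = a @ np.concatenate([z_i, W @ h_j])   # attention logit
        e = e if e > 0 else slope * e            # LeakyReLU
        raw.append(w_ij * e)                     # inject connectivity weight
    att = softmax(np.array(raw))
    out = sum(al * (W @ h_j) for al, h_j in zip(att, neighbors))
    return att, out
```

Compared with plain GAT, strongly connected neighbors get their logits amplified, so aggregation leans toward edges the high-order connectivity deems important.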
Collapse
|
28
|
Ghorbani M, Kazi A, Soleymani Baghshah M, Rabiee HR, Navab N. RA-GCN: Graph convolutional network for disease prediction problems with imbalanced data. Med Image Anal 2021; 75:102272. [PMID: 34731774 DOI: 10.1016/j.media.2021.102272] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Revised: 10/03/2021] [Accepted: 10/15/2021] [Indexed: 10/20/2022]
Abstract
Disease prediction is a well-known classification problem in medical applications. Graph Convolutional Networks (GCNs) provide a powerful tool for analyzing patients' features relative to each other. This can be achieved by modeling the problem as a graph node classification task, where each node is a patient. Due to the nature of such medical datasets, class imbalance is a prevalent issue in disease prediction, where the distribution of classes is skewed. When class imbalance is present in the data, existing graph-based classifiers tend to be biased towards the major class(es) and neglect samples in the minor class(es). On the other hand, correct diagnosis of the rare positive cases (true positives) among all patients is vital in a healthcare system. In conventional methods, such imbalance is tackled by assigning appropriate weights to classes in the loss function, which is still dependent on the relative values of the weights, sensitive to outliers, and in some cases biased towards the minor class(es). In this paper, we propose a Re-weighted Adversarial Graph Convolutional Network (RA-GCN) to prevent the graph-based classifier from emphasizing the samples of any particular class. This is accomplished by associating a graph-based neural network with each class, which is responsible for weighting the class samples and changing the importance of each sample for the classifier. Therefore, the classifier adjusts itself and determines the boundary between classes with more attention to the important samples. The parameters of the classifier and the weighting networks are trained by an adversarial approach. We present experiments on synthetic and three publicly available medical datasets. Our results demonstrate the superiority of RA-GCN compared to recent methods in identifying patient status on all three datasets. A detailed analysis of our method is provided through quantitative and qualitative experiments on the synthetic datasets.
Collapse
Affiliation(s)
- Mahsa Ghorbani
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran; Computer Aided Medical Procedures, Department of Informatics, Technical University of Munich, Germany.
| | - Anees Kazi
- Computer Aided Medical Procedures, Department of Informatics, Technical University of Munich, Germany
| | | | - Hamid R Rabiee
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran.
| | - Nassir Navab
- Computer Aided Medical Procedures, Department of Informatics, Technical University of Munich, Germany; Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
| |
Collapse
|
29
|
Zhang L, Wang L, Gao J, Risacher SL, Yan J, Li G, Liu T, Zhu D. Deep Fusion of Brain Structure-Function in Mild Cognitive Impairment. Med Image Anal 2021; 72:102082. [PMID: 34004495 DOI: 10.1016/j.media.2021.102082] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 03/20/2021] [Accepted: 04/13/2021] [Indexed: 01/22/2023]
Abstract
Multimodal fusion of different types of neural image data provides an irreplaceable opportunity to take advantage of complementary cross-modal information that may only partially be contained in a single modality. To jointly analyze multimodal data, deep neural networks can be especially useful because many studies have suggested that deep learning is effective at revealing the complex and non-linear relations buried in the data. However, most deep models, e.g., convolutional neural networks and their numerous extensions, can only operate on regular Euclidean data like voxels in 3D MRI. The interrelated and hidden structures beyond grid neighbors, such as brain connectivity, may be overlooked. Moreover, how to effectively incorporate neuroscience knowledge into multimodal data fusion within a single deep framework is understudied. In this work, we developed a graph-based deep neural network to simultaneously model brain structure and function in Mild Cognitive Impairment (MCI): the topology of the graph is initialized using the structural network (from diffusion MRI) and iteratively updated by incorporating functional information (from functional MRI) to maximize the capability of differentiating MCI patients from elderly normal controls. This results in a new connectome that explores "deep relations" between brain structure and function in MCI patients, which we name the Deep Brain Connectome. Though the deep brain connectome is learned individually, it shows consistent patterns of alteration compared with the structural network at the group level. With the deep brain connectome, our deep model achieves 92.7% classification accuracy on the ADNI dataset.
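One simple way to picture the structure-initialized, function-updated topology is an iterative blend of the structural adjacency with the functional correlation matrix. The paper learns this update end-to-end against the classification objective; the fixed blending rule below is purely an illustrative assumption.

```python
import numpy as np

def functional_corr(ts):
    """|Pearson correlation| between region time series (rows = regions)."""
    return np.abs(np.corrcoef(ts))

def update_connectome(A_struct, ts, lam=0.5, steps=3):
    """Initialize topology from the structural network, then repeatedly
    blend in functional correlation (a hand-set stand-in for the learned
    update in the paper)."""
    A = A_struct.astype(float)
    F = functional_corr(ts)
    for _ in range(steps):
        A = (1.0 - lam) * A + lam * F   # mix structure with function
        np.fill_diagonal(A, 0.0)        # no self-connections
        A /= A.max() + 1e-12            # rescale edge weights to [0, 1]
    return A
```

Since both inputs are symmetric, the blended connectome stays symmetric, and the diffusion-MRI topology anchors the result while fMRI correlations reshape edge strengths.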
Collapse
Affiliation(s)
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Li Wang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA; Department of Mathematics, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Jean Gao
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Shannon L Risacher
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202 USA
| | - Jingwen Yan
- School of Informatics and Computing, Indiana University School of Medicine, Indianapolis, IN 46202 USA
| | - Gang Li
- Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7160, USA
| | - Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
| | - Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA.
| |
Collapse
|