1. Qiu B, Wang Q, Li X, Li W, Shao W, Wang M. Adaptive spatial-temporal neural network for ADHD identification using functional fMRI. Front Neurosci 2024; 18:1394234. PMID: 38872940; PMCID: PMC11169645; DOI: 10.3389/fnins.2024.1394234. Received: 03/01/2024; Accepted: 05/15/2024. Open Access.
Abstract
Computer aided diagnosis methods play an important role in Attention Deficit Hyperactivity Disorder (ADHD) identification. Dynamic functional connectivity (dFC) analysis has been widely used for ADHD diagnosis based on resting-state functional magnetic resonance imaging (rs-fMRI), which can help capture abnormalities of brain activity. However, most existing dFC-based methods only focus on dependencies between two adjacent timestamps, ignoring global dynamic evolution patterns. Furthermore, the majority of these methods fail to adaptively learn dFCs. In this paper, we propose an adaptive spatial-temporal neural network (ASTNet) comprising three modules for ADHD identification based on rs-fMRI time series. Specifically, we first partition rs-fMRI time series into multiple segments using non-overlapping sliding windows. Then, adaptive functional connectivity generation (AFCG) is used to model spatial relationships among regions-of-interest (ROIs) with adaptive dFCs as input. Finally, we employ a temporal dependency mining (TDM) module which combines local and global branches to capture global temporal dependencies from the spatially-dependent pattern sequences. Experimental results on the ADHD-200 dataset demonstrate the superiority of the proposed ASTNet over competing approaches in automated ADHD classification.
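The preprocessing step described above, partitioning an rs-fMRI time series into non-overlapping windows and deriving one connectivity matrix per window, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the window length is an assumption, and Pearson correlation is used only as a stand-in for the adaptive connectivity that ASTNet's AFCG module actually learns.

```python
import math

def sliding_segments(ts, win):
    """Split a multivariate time series (one list per ROI) into
    consecutive, non-overlapping windows of length `win`."""
    n = len(ts[0])
    return [[roi[s:s + win] for roi in ts] for s in range(0, n - win + 1, win)]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def dfc_sequence(ts, win):
    """One ROI-by-ROI correlation matrix per window: the dFC sequence
    that a temporal module would then mine for dependencies."""
    return [[[pearson(a, b) for b in seg] for a in seg]
            for seg in sliding_segments(ts, win)]
```

The resulting sequence of matrices is what the temporal-dependency module consumes; ASTNet's contribution is replacing the fixed correlation above with adaptively learned connectivity.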
Affiliation(s)
- Bo Qiu: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Qianqian Wang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Xizhi Li: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Wenyang Li: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China
- Wei Shao: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mingliang Wang: School of Computer Science, Nanjing University of Information Science and Technology, Nanjing, China; Nanjing Xinda Institute of Safety and Emergency Management, Nanjing, China
2. Wang M, Zhu L, Li X, Pan Y, Li L. Dynamic functional connectivity analysis with temporal convolutional network for attention deficit/hyperactivity disorder identification. Front Neurosci 2023; 17:1322967. PMID: 38148943; PMCID: PMC10750397; DOI: 10.3389/fnins.2023.1322967. Received: 10/17/2023; Accepted: 11/24/2023. Open Access.
Abstract
Introduction: Dynamic functional connectivity (dFC), which can capture abnormalities of brain activity over time in resting-state functional magnetic resonance imaging (rs-fMRI) data, has a natural advantage in revealing the abnormal mechanisms of brain activity in patients with Attention Deficit/Hyperactivity Disorder (ADHD). Several deep learning methods have been proposed to learn dynamic changes from rs-fMRI for FC analysis, achieving superior performance over methods that use static FC. However, most existing methods consider only the dependencies between two adjacent timestamps, which is limiting when a change spans many timestamps. Methods: In this paper, we propose a novel Temporal Dependence neural Network (TDNet) for FC representation learning and temporal-dependence tracking from rs-fMRI time series for automated ADHD identification. Specifically, we first partition rs-fMRI time series into a sequence of consecutive, non-overlapping segments. For each segment, we design an FC generation module to learn more discriminative representations for constructing dynamic FCs. We then employ a Temporal Convolutional Network (TCN) to efficiently capture long-range temporal patterns with dilated convolutions, followed by three fully connected layers for disease prediction. Results: We found that considering the dynamic characteristics of rs-fMRI time series is beneficial for diagnostic performance, and that dynamic FC networks generated in a data-driven manner are more informative than those constructed from Pearson correlation coefficients. Discussion: We validate the effectiveness of the proposed approach through extensive experiments on the public ADHD-200 database; the results demonstrate the superiority of the proposed model over state-of-the-art methods in ADHD identification.
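The TCN mentioned above captures long-range patterns through dilated causal convolutions, whose receptive field grows with the dilation schedule. A minimal sketch of that mechanism follows; the kernel values, dilation factors, and left zero-padding are assumptions for illustration, not TDNet's actual configuration.

```python
def causal_dilated_conv1d(x, kernel, dilation):
    """Causal 1-D convolution with dilation: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (left zero-padding)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation
            if j >= 0:  # positions before the series start contribute zero
                acc += w * x[j]
        out.append(acc)
    return out

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated causal conv layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

With kernel size 2 and dilations 1, 2, 4, 8, four layers already see 16 timestamps, which is why a TCN can model dependencies far beyond adjacent segments.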
Affiliation(s)
- Mingliang Wang: School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China; Nanjing Xinda Institute of Safety and Emergency Management, Nanjing, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lingyao Zhu: School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
- Xizhi Li: School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
- Yong Pan: School of Accounting, Nanjing University of Finance and Economics, Nanjing, China
- Long Li: Taian Tumor Prevention and Treatment Hospital, Taian, China
3. Teng J, Mi C, Shi J, Li N. Brain disease research based on functional magnetic resonance imaging data and machine learning: a review. Front Neurosci 2023; 17:1227491. PMID: 37662098; PMCID: PMC10469689; DOI: 10.3389/fnins.2023.1227491. Received: 05/23/2023; Accepted: 07/13/2023. Open Access.
Abstract
Brain diseases, including neurodegenerative and neuropsychiatric diseases, have long plagued affected populations and impose a huge burden on public health. Functional magnetic resonance imaging (fMRI) is an excellent neuroimaging technology for measuring brain activity, providing clinicians with new insight to help diagnose brain diseases. In recent years, machine learning methods have displayed superior performance in diagnosing brain diseases compared to conventional methods, attracting great attention from researchers. This paper reviews representative research applying machine learning to fMRI-based brain disease diagnosis over the past three years, focusing on the four most actively studied brain diseases: Alzheimer's disease/mild cognitive impairment, autism spectrum disorder, schizophrenia, and Parkinson's disease. We summarize these 55 articles from multiple perspectives, including the effect of sample size, extracted features, feature selection methods, classification models, validation methods, and the corresponding accuracies. Finally, we analyze these articles and introduce future research directions to provide neuroimaging scientists and researchers in the interdisciplinary fields of computing and medicine with new ideas for AI-aided brain disease diagnosis.
Affiliation(s)
- Jing Teng: School of Control and Computer Engineering, North China Electric Power University, Beijing, China
- Chunlin Mi: School of Control and Computer Engineering, North China Electric Power University, Beijing, China
- Jian Shi: Department of Hematology and Critical Care Medicine, The Third Xiangya Hospital of Central South University, Changsha, China
- Na Li: Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, China
4. Wan Z, Cheng W, Li M, Zhu R, Duan W. GDNet-EEG: An attention-aware deep neural network based on group depth-wise convolution for SSVEP stimulation frequency recognition. Front Neurosci 2023; 17:1160040. PMID: 37123356; PMCID: PMC10133471; DOI: 10.3389/fnins.2023.1160040. Received: 02/06/2023; Accepted: 03/27/2023. Open Access.
Abstract
Background: Steady-state visually evoked potential (SSVEP)-based early glaucoma diagnosis requires effective data processing (e.g., deep learning) to provide accurate stimulation frequency recognition. We therefore propose GDNet-EEG, a novel electroencephalography (EEG)-oriented deep learning model built on group depth-wise convolution, tailored to learn the regional and network characteristics of EEG-based brain activity for SSVEP stimulation frequency recognition. Method: Group depth-wise convolution is proposed to extract temporal and spectral features from the EEG signal of each brain region and to represent regional characteristics as diversely as possible. Furthermore, an EEG attention mechanism, consisting of EEG channel-wise attention and specialized network-wise attention, is designed to identify essential brain regions and form significant feature maps as specialized brain functional networks. Two public SSVEP datasets (the large-scale benchmark and the BETA dataset) and their combination are used to validate the classification performance of our model. Results: With input samples of 1 s signal length, GDNet-EEG achieves average classification accuracies of 84.11%, 85.93%, and 93.35% on the benchmark, BETA, and combined datasets, respectively. Trained on the combined dataset, GDNet-EEG improves average classification accuracy over the comparison baselines by 1.96% to 18.2%. Conclusion: Our approach is potentially suitable for accurate SSVEP stimulation frequency recognition and for use in early glaucoma diagnosis.
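Depth-wise convolution, the building block named in this abstract, filters each channel independently, with no mixing across channels (unlike a standard convolution, which sums over all input channels). A minimal 1-D sketch under an assumed 'valid' padding follows; in a real model the per-channel kernels are learned and grouped/pointwise mixing stages are added afterward.

```python
def depthwise_conv1d(signals, kernels):
    """Depth-wise 1-D convolution: channel i is filtered only by
    kernel i, so no information crosses channels ('valid' padding)."""
    out = []
    for ch, k in zip(signals, kernels):
        klen = len(k)
        out.append([sum(w * ch[t + i] for i, w in enumerate(k))
                    for t in range(len(ch) - klen + 1)])
    return out
```

Grouping channels by brain region before applying such per-channel filters is what lets the model keep regional features separate until a later attention stage fuses them.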
Affiliation(s)
- Zhijiang Wan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China; School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China; Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Wangxinjun Cheng: Queen Mary College of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li: School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Renping Zhu: School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China; Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China; School of Information Management, Wuhan University, Wuhan, China
- Wenfeng Duan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
5. Wang C, Zhang L, Zhang J, Qiao L, Liu M. Fusing Multiview Functional Brain Networks by Joint Embedding for Brain Disease Identification. J Pers Med 2023; 13:jpm13020251. PMID: 36836485; PMCID: PMC9958959; DOI: 10.3390/jpm13020251. Received: 11/12/2022; Revised: 12/27/2022; Accepted: 01/13/2023. Open Access.
Abstract
Background: Functional brain networks (FBNs) derived from resting-state functional MRI (rs-fMRI) have shown great potential in identifying brain disorders such as autism spectrum disorder (ASD), and many FBN estimation methods have been proposed in recent years. Most existing methods model the functional connections between brain regions of interest (ROIs) from only a single view (e.g., by estimating FBNs through one specific strategy), failing to capture the complex interactions among ROIs in the brain. Methods: To address this problem, we propose fusing multiview FBNs through joint embedding, which makes full use of the information common to FBNs estimated by different strategies. More specifically, we first stack the adjacency matrices of FBNs estimated by different methods into a tensor and use tensor factorization to learn a joint embedding (i.e., a factor common to all FBNs) for each ROI. Then, we use Pearson's correlation between the embedded ROIs to reconstruct a new FBN. Results: Experimental results on the public ABIDE dataset with rs-fMRI data reveal that our method is superior to several state-of-the-art methods in automated ASD diagnosis. Moreover, by exploring the FBN "features" that contributed most to ASD identification, we discovered potential biomarkers for ASD diagnosis. The proposed framework achieves an accuracy of 74.46%, generally better than the compared individual-FBN methods, and outperforms other multinetwork methods by at least 2.72% in accuracy. Conclusions: We present a multiview FBN fusion strategy through joint embedding for fMRI-based ASD identification. The proposed fusion method has an elegant theoretical explanation from the perspective of eigenvector centrality.
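The abstract's closing remark ties the fusion to eigenvector centrality. The idea of one factor shared by all views can be illustrated with a rank-1 surrogate: sum the view-wise adjacency matrices and take the leading eigenvector by power iteration, giving each ROI a scalar joint score. This is a simplified stand-in for the paper's tensor factorization (which learns a multi-dimensional embedding per ROI), and the matrices in the example are toy values.

```python
def joint_embedding(views, iters=50):
    """Rank-1 surrogate for a multiview joint embedding: power iteration
    on the element-wise sum of the view adjacency matrices yields the
    leading eigenvector, i.e. an eigenvector-centrality-style score."""
    n = len(views[0])
    # Element-wise sum of all view matrices (the 'common' structure).
    s = [[sum(v[i][j] for v in views) for j in range(n)] for i in range(n)]
    vec = [1.0] * n
    for _ in range(iters):
        vec = [sum(s[i][j] * vec[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in vec) or 1.0  # max-norm to avoid overflow
        vec = [x / norm for x in vec]
    return vec
```

Because every view contributes to the summed matrix, ROIs that are consistently central across views receive the largest scores, which is the intuition behind the eigenvector-centrality interpretation.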
Affiliation(s)
- Chengcheng Wang: School of Mathematics Science, Liaocheng University, Liaocheng 252000, China
- Limei Zhang (corresponding author): School of Computer Science and Technology, Shandong Jianzhu University, Jinan 250101, China
- Jinshan Zhang: College of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong 643000, China
- Lishan Qiao: School of Mathematics Science, Liaocheng University, Liaocheng 252000, China
- Mingxia Liu (corresponding author): Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
6. Ma P, Xue T. Learning a spatial-temporal texture transformer network for video inpainting. Front Neurorobot 2022; 16:1002453. PMID: 36310632; PMCID: PMC9606320; DOI: 10.3389/fnbot.2022.1002453. Received: 07/25/2022; Accepted: 09/20/2022. Open Access.
Abstract
We study video inpainting, which aims to recover realistic textures in damaged frames. Recent progress has been made by taking other frames as references so that relevant textures can be transferred to the damaged frames. However, existing video inpainting approaches neglect the model's capacity to extract information and reconstruct content, and so fail to accurately reconstruct the textures that should be transferred. In this paper, we propose a novel and effective spatial-temporal texture transformer network (STTTN) for video inpainting. STTTN consists of six closely related modules optimized for the inpainting task: a feature similarity measure for more accurate frame pre-repair, an encoder with strong information extraction ability, an embedding module for finding correlations, coarse low-frequency feature transfer, refined high-frequency feature transfer, and a decoder with accurate content reconstruction ability. Such a design encourages joint feature learning across the input and reference frames. To demonstrate the effectiveness of the proposed model, we conduct comprehensive ablation studies and qualitative and quantitative experiments on multiple datasets, using both standard stationary masks and more realistic moving-object masks. The experimental results demonstrate the effectiveness and reliability of STTTN.