1
Lei B, Li Y, Fu W, Yang P, Chen S, Wang T, Xiao X, Niu T, Fu Y, Wang S, Han H, Qin J. Alzheimer's disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network. Med Image Anal 2024; 97:103213. [PMID: 38850625] [DOI: 10.1016/j.media.2024.103213] [Received: 09/12/2023] [Revised: 05/13/2024] [Accepted: 05/17/2024] [Indexed: 06/10/2024]
Abstract
Multi-modal data can provide complementary information about Alzheimer's disease (AD) and its development from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, and hence it is necessary and critical to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features at the last stage. Furthermore, most of these methods focus on either local or global fusion features alone, neglecting the complementarity of features at different levels and thus not sufficiently leveraging the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in the same feature space for different modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. Furthermore, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available at https://github.com/xiankantingqianxue/MIA-code.git.
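The two-stage idea in this abstract (first induce heterogeneous modality features into one shared space, then fuse locally via graph message passing and globally via pooling) can be sketched in plain Python. This is a toy illustration under assumed random projection weights and made-up feature vectors, not the paper's FIL/DMGNN architecture:

```python
import random

random.seed(0)

def linear_map(x, W):
    """Project a modality-specific feature vector x into the shared space via W."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def mean_neighbors(h, adj, i):
    """One message-passing step: average shared-space features of node i's neighbors."""
    nbrs = [j for j in range(len(h)) if adj[i][j]]
    if not nbrs:
        return h[i]
    return [sum(h[j][k] for j in nbrs) / len(nbrs) for k in range(len(h[i]))]

# Hypothetical toy data: three modalities with different raw dimensionalities.
modalities = {"gene": [0.2, 0.7, 0.1, 0.4], "imaging": [1.0, 0.3], "clinical": [0.5, 0.9, 0.6]}
shared_dim = 2

# Per-modality projections into a common feature space (random stand-ins for learned maps).
maps = {m: [[random.uniform(-1, 1) for _ in v] for _ in range(shared_dim)]
        for m, v in modalities.items()}

# Step 1: induce every modality into the shared space, removing heterogeneity in shape.
h = [linear_map(v, maps[m]) for m, v in modalities.items()]

# Step 2: local fusion via message passing over a modality graph (fully connected here).
adj = [[i != j for j in range(len(h))] for i in range(len(h))]
local = [mean_neighbors(h, adj, i) for i in range(len(h))]

# Step 3: global fusion by pooling all node features into one subject-level vector.
global_repr = [sum(node[k] for node in local) / len(local) for k in range(shared_dim)]
print(len(global_repr))  # 2
```

In the actual framework the projections are learned and the graphs are built per level; the sketch only shows why a shared space must precede graph fusion.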
Affiliation(s)
- Baiying Lei
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Yafeng Li
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Wanyi Fu
- Department of Electronic Engineering, Tsinghua University, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, China
- Peng Yang
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Shaobin Chen
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Xiaohua Xiao
- The First Affiliated Hospital of Shenzhen University, Shenzhen University Medical School, Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, 530031, China
- Tianye Niu
- Shenzhen Bay Laboratory, Shenzhen, 518067, China
- Yu Fu
- Department of Neurology, Peking University Third Hospital, No. 49, North Garden Rd., Haidian District, Beijing, 100191, China
- Shuqiang Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hongbin Han
- Institute of Medical Technology, Peking University Health Science Center, Department of Radiology, Peking University Third Hospital, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, Beijing, 100191, China; Research and Developing Center of Medical Technology, The Second Hospital of Dalian Medical University, Dalian, 116027, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
2
Tao T, Chen Y, Shang Y, He J, Hao J. SMMF: a self-attention-based multi-parametric MRI feature fusion framework for the diagnosis of bladder cancer grading. Front Oncol 2024; 14:1337186. [PMID: 38515574] [PMCID: PMC10955083] [DOI: 10.3389/fonc.2024.1337186] [Received: 11/12/2023] [Accepted: 02/21/2024] [Indexed: 03/23/2024] [Open Access]
Abstract
Background: Multi-parametric magnetic resonance imaging (MP-MRI) may provide comprehensive information for the graded diagnosis of bladder cancer (BCa). Nevertheless, existing methods ignore the complex correlations between these MRI sequences and thus fail to provide adequate information. The main objective of this study is therefore to enhance feature fusion and extract comprehensive features from MP-MRI using deep learning methods to achieve an accurate diagnosis of BCa grading. Methods: A self-attention-based MP-MRI feature fusion framework (SMMF) is proposed to enhance model performance by extracting and fusing features from both T2-weighted imaging (T2WI) and dynamic contrast-enhanced imaging (DCE) sequences. A new multiscale attention (MA) module is embedded at the end of the convolutional neural network (CNN) to further extract rich features from T2WI and DCE. Finally, a self-attention feature fusion strategy (SAFF) is used to effectively capture and fuse the common and complementary features of patients' MP-MRI sequences. Results: On a clinically collected sample of 138 BCa patients, the SMMF network demonstrated superior performance compared to existing deep learning-based bladder cancer grading models, with an accuracy of 0.9488, an F1-score of 0.9426, and an AUC of 0.9459. Conclusion: Our proposed SMMF framework, combined with MP-MRI information, can accurately predict the pathological grading of BCa and can better assist physicians in diagnosing BCa.
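The self-attention fusion step described in this abstract amounts to letting each sequence's feature vector attend to the other's before concatenation. A minimal single-head, scaled dot-product sketch in plain Python, assuming identity query/key/value maps and made-up T2WI/DCE feature vectors (the paper's SAFF uses learned projections):

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def self_attention(tokens):
    """Single-head scaled dot-product self-attention with identity Q/K/V maps,
    a minimal stand-in for the learned projections in a real fusion module."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Similarity of this token (query) to every token (keys), scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in tokens]
        w = softmax(scores)
        # Output is the attention-weighted average of all tokens (values).
        out.append([sum(w[j] * tokens[j][i] for j in range(len(tokens)))
                    for i in range(d)])
    return out

# Hypothetical per-sequence feature vectors, as if extracted by CNN backbones.
t2wi_feat = [0.8, 0.1, 0.3, 0.5]
dce_feat  = [0.2, 0.9, 0.4, 0.6]

fused_tokens = self_attention([t2wi_feat, dce_feat])
# Concatenate the attended tokens into one fused representation for a classifier head.
fused = fused_tokens[0] + fused_tokens[1]
print(len(fused))  # 8
```

Because the weights come from a softmax, each attended feature is a convex combination of the two sequences' features, which is what lets the module mix common and complementary information.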
Affiliation(s)
- Tingting Tao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Ying Chen
- Department of Radiology, Second Affiliated Hospital of Kunming Medical University, Kunming, China
- Yunyun Shang
- Department of Radiology, Second Affiliated Hospital of Kunming Medical University, Kunming, China
- Jianfeng He
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- School of Physics and Electronic Engineering, Yuxi Normal University, Yuxi, China
- Jingang Hao
- Department of Radiology, Second Affiliated Hospital of Kunming Medical University, Kunming, China
3
Wang X, Xin J, Wang Z, Qu L, Li J, Wang Z. Graph kernel of brain networks considering functional similarity measures. Comput Biol Med 2024; 171:108148. [PMID: 38367448] [DOI: 10.1016/j.compbiomed.2024.108148] [Received: 08/28/2023] [Revised: 02/07/2024] [Accepted: 02/12/2024] [Indexed: 02/19/2024]
Abstract
As a tool for brain network analysis, the graph kernel is often used to assist the diagnosis of neurodegenerative diseases: it judges whether a subject is sick by measuring the similarity between brain networks. Most existing graph kernels calculate the similarity of brain networks based on structural similarity, which captures the topology of brain networks well, but they ignore functional information, such as the lobe, functional center, and hemisphere to which each brain region belongs, as well as the functions of the brain regions themselves. Functional similarities can help locate more accurately the specific brain regions affected by disease, so that the similarity measurement can focus on them. Therefore, a multi-attribute graph kernel for brain networks is proposed, which assigns multiple attributes to the nodes of the brain network and computes the graph kernel according to the Weisfeiler-Lehman color refinement algorithm. In addition, to capture the interaction between multiple brain regions, a multi-attribute hypergraph kernel is proposed, which takes into account functional and structural similarities as well as higher-order correlations between the nodes of the brain network. Finally, experiments are conducted on real datasets, and the results show that the proposed methods can significantly improve the performance of neurodegenerative disease diagnosis. Moreover, statistical tests show that the proposed methods differ significantly from the compared methods.
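The Weisfeiler-Lehman color refinement the abstract builds on is simple enough to show in full. This plain-Python sketch refines node colors by repeatedly hashing each node's label together with its neighbors' labels, then compares two graphs by the dot product of their color histograms. The three-node "brain networks" and single node attribute are made up for illustration; the paper's multi-attribute variant assigns several labels per node:

```python
from collections import Counter

def wl_step(adj, colors):
    """One Weisfeiler-Lehman refinement step: each node's new color is a
    compressed hash of (own color, sorted multiset of neighbor colors)."""
    sigs = [(colors[i], tuple(sorted(colors[j] for j in adj[i])))
            for i in range(len(adj))]
    palette = {s: c for c, s in enumerate(sorted(set(sigs)))}
    return [palette[s] for s in sigs]

def wl_kernel(adj_a, lab_a, adj_b, lab_b, iters=2):
    """WL subtree kernel: sum over iterations of the dot product of the two
    graphs' color histograms. Both graphs are refined jointly (as one disjoint
    graph) so that color ids mean the same thing in both."""
    n = len(adj_a)
    adj = adj_a + [[j + n for j in nb] for nb in adj_b]
    colors = list(lab_a) + list(lab_b)
    k = 0
    for _ in range(iters + 1):  # iteration 0 compares the raw node labels
        ha, hb = Counter(colors[:n]), Counter(colors[n:])
        k += sum(ha[c] * hb[c] for c in ha)
        colors = wl_step(adj, colors)
    return k

# Toy "brain networks": adjacency lists plus one node attribute (e.g. a lobe id).
g1 = ([[1, 2], [0, 2], [0, 1]], [0, 0, 1])  # triangle
g2 = ([[1], [0, 2], [1]], [0, 0, 1])        # path
print(wl_kernel(g1[0], g1[1], g2[0], g2[1]))  # 7
```

Here the raw labels match closely (contribution 5), one refinement round still finds a shared pattern (contribution 2), and by the second round the triangle's and path's subtree patterns have fully diverged.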
Affiliation(s)
- Xinlei Wang
- School of Computer Science and Engineering, Northeastern University, 110169, China
- Junchang Xin
- School of Computer Science and Engineering, Northeastern University, 110169, China; Key Laboratory of Big Data Management and Analytics, Northeastern University, 110169, China
- Zhongyang Wang
- School of Computer Science and Engineering, Shenyang Jianzhu University, 110169, China
- Luxuan Qu
- School of Computer Science and Engineering, Northeastern University, 110169, China
- Jiani Li
- School of Computer Science and Engineering, Northeastern University, 110169, China
- Zhiqiong Wang
- College of Medicine and Biological Information Engineering, Northeastern University, 110169, China
4
Guo R, Tian X, Lin H, McKenna S, Li HD, Guo F, Liu J. Graph-based fusion of imaging, genetic and clinical data for degenerative disease diagnosis. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:57-68. [PMID: 37991907] [DOI: 10.1109/tcbb.2023.3335369] [Indexed: 11/24/2023]
Abstract
Graph learning methods have achieved noteworthy performance in disease diagnosis due to their ability to represent unstructured information such as inter-subject relationships. While imaging, genetic, and clinical data have been shown to be crucial for degenerative disease diagnosis, how best to utilize these data and their relationships remains a challenging problem that existing methods rarely address. This study proposes a novel graph-based fusion (GBF) approach to meet this challenge. To extract effective imaging-genetic features, we propose an imaging-genetic fusion module that uses an attention mechanism to obtain modality-specific and joint representations within and between imaging and genetic data. Then, considering the effectiveness of clinical information for diagnosing degenerative diseases, we propose a multi-graph fusion module to further fuse imaging-genetic and clinical features, which adopts a learnable graph construction strategy and a graph ensemble method. Experimental results on two benchmarks for degenerative disease diagnosis (Alzheimer's Disease Neuroimaging Initiative and Parkinson's Progression Markers Initiative) demonstrate its effectiveness compared to state-of-the-art graph-based methods. Our findings should help guide further development of graph-based models for dealing with imaging, genetic, and clinical data.
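The multi-graph fusion idea in this abstract (build one population graph per feature source, then combine the graphs) can be illustrated with a fixed kNN construction in place of the paper's learnable strategy. All feature vectors below are made up, and the majority-vote ensemble is only one simple stand-in for the paper's graph ensemble method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def knn_graph(feats, k=1):
    """Population graph: connect each subject to its k most similar subjects
    (a fixed stand-in for a learnable graph-construction strategy)."""
    n = len(feats)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        sims = sorted(((cosine(feats[i], feats[j]), j) for j in range(n) if j != i),
                      reverse=True)
        for _, j in sims[:k]:
            adj[i][j] = adj[j][i] = 1  # keep the graph symmetric
    return adj

def ensemble_adj(adjs):
    """Graph ensemble: keep an edge where at least half of the per-source graphs agree."""
    n = len(adjs[0])
    need = (len(adjs) + 1) // 2
    return [[1 if sum(a[i][j] for a in adjs) >= need else 0 for j in range(n)]
            for i in range(n)]

# Hypothetical fused features for four subjects, from two sources:
# subjects 0-1 resemble each other, as do subjects 2-3.
imaging_genetic = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
clinical        = [[1.0, 0.0], [0.9, 0.3], [0.0, 1.0], [0.3, 0.9]]

g = ensemble_adj([knn_graph(imaging_genetic), knn_graph(clinical)])
print(sum(map(sum, g)))  # 4 (two symmetric edges: 0-1 and 2-3)
```

A GNN classifier would then run on the ensembled graph; the point of the sketch is that each data source proposes its own subject-similarity structure before fusion.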