1
Li S, Zhang R. A novel interactive deep cascade spectral graph convolutional network with multi-relational graphs for disease prediction. Neural Netw 2024; 175:106285. [PMID: 38593556] [DOI: 10.1016/j.neunet.2024.106285]
Abstract
Graph neural networks (GNNs) have recently grown in popularity for disease prediction. Existing GNN-based methods primarily build the graph topology around a single modality and combine it with other modalities to obtain feature representations of acquisitions. However, the complex relationships within each modality may not be well captured because of this single-modality focus. Moreover, relatively shallow networks restrict the extraction of high-level features, limiting disease prediction performance. Accordingly, this paper develops a new interactive deep cascade spectral graph convolutional network with multi-relational graphs (IDCGN) for disease prediction tasks. Its crucial points lie in constructing multiple relational graphs and dual cascade spectral graph convolution branches with interaction (DCSGBI). Specifically, the former designs a pairwise imaging-based edge generator and a pairwise non-imaging-based edge generator from different modalities by devising two learnable networks, which adaptively capture graph structures and provide different views of the same acquisition to aid disease diagnosis. The DCSGBI, in turn, enriches the high-level semantic information and low-level details of disease data. It devises a cascade spectral graph convolution operator for each branch and incorporates an interaction strategy between branches, successfully forming a deep model that captures complementary information from diverse branches. In this manner, more favorable and sufficient features are learned for reliable diagnosis. Experiments on several disease datasets reveal that IDCGN exceeds state-of-the-art models and achieves promising results.
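The spectral graph convolutions at the heart of models like IDCGN can be illustrated with a minimal sketch. This is not the paper's implementation (the cascade operators, edge generators, and interaction strategy are omitted); it is just the standard single propagation step H' = D^{-1/2}(A+I)D^{-1/2} H W on a toy patient-similarity graph, with assumed toy data:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One spectral graph convolution step: ReLU(D^{-1/2}(A+I)D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy patient-similarity graph: 3 patients, 2 features each
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Stacking such layers naively causes over-smoothing, which is why deep designs like IDCGN need cascade and interaction mechanisms rather than plain layer stacking.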
Affiliation(s)
- Sihui Li
- Medical Big data Research Center, School of Mathematics, Northwest University, Xi'an 710127, Shaanxi, China.
- Rui Zhang
- Medical Big data Research Center, School of Mathematics, Northwest University, Xi'an 710127, Shaanxi, China.
2
Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognit 2024; 151:110424. [PMID: 38559674] [PMCID: PMC10976951] [DOI: 10.1016/j.patcog.2024.110424]
Abstract
Machine learning in medical imaging often faces a fundamental dilemma: the small-sample-size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot easily be shared to build large datasets for model training due to privacy protection. As a promising solution, federated learning, which enables collaborative training of machine learning models on data from different sites without cross-site data sharing, has recently attracted considerable attention. In this paper, we conduct a comprehensive survey of recent developments in federated learning methods for medical image analysis. We systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023, searching IEEE Xplore, the ACM Digital Library, ScienceDirect, Springer Link, Web of Science, Google Scholar, and PubMed. We first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis, categorizing existing methods by three critical aspects of a federated learning system: the client end, the server end, and communication techniques. In each category, we summarize methods according to specific research problems in medical image analysis and provide insights into the motivations of different approaches. In addition, we review existing benchmark medical imaging datasets and software platforms for federated learning research, and conduct an experimental study empirically evaluating typical federated learning methods for medical image analysis. This survey can help readers better understand the current research status, challenges, and potential research opportunities in this promising field.
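The server-side aggregation at the core of most federated learning systems the survey categorizes can be sketched in a few lines. This is a generic FedAvg illustration on assumed toy parameters, not any specific method from the survey:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: average client model parameters weighted by local
    sample counts, so raw images never leave each site."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# two hypothetical sites, each holding one parameter tensor
site_a = [np.array([1.0, 2.0])]   # 10 local samples
site_b = [np.array([3.0, 4.0])]   # 30 local samples
global_w = fedavg([site_a, site_b], client_sizes=[10, 30])
print(global_w[0])  # weighted toward site_b: [2.5, 3.5]
```

The client-end and communication techniques the survey reviews (local regularization, compression, personalization) all modify pieces of this basic loop.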
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
3
Wang R, Guo W, Wang Y, Zhou X, Leung JC, Yan S, Cui L. Hybrid multimodal fusion for graph learning in disease prediction. Methods 2024; 229:41-48. [PMID: 38880433] [DOI: 10.1016/j.ymeth.2024.06.003]
Abstract
Graph neural networks (GNNs) have gained significant attention in disease prediction where the latent embeddings of patients are modeled as nodes and the similarities among patients are represented through edges. The graph structure, which determines how information is aggregated and propagated, plays a crucial role in graph learning. Recent approaches typically create graphs based on patients' latent embeddings, which may not accurately reflect their real-world closeness. Our analysis reveals that raw data, such as demographic attributes and laboratory results, offers a wealth of information for assessing patient similarities and can serve as a compensatory measure for graphs constructed exclusively from latent embeddings. In this study, we first construct adaptive graphs from both latent representations and raw data respectively, and then merge these graphs via weighted summation. Given that the graphs may contain extraneous and noisy connections, we apply degree-sensitive edge pruning and kNN sparsification techniques to selectively sparsify and prune these edges. We conducted intensive experiments on two diagnostic prediction datasets, and the results demonstrate that our proposed method surpasses current state-of-the-art techniques.
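The weighted graph merging and kNN sparsification steps described above can be sketched roughly as follows. This is a toy illustration with assumed random adjacency matrices and a simple top-k rule, not the authors' code (their degree-sensitive pruning is omitted):

```python
import numpy as np

def fuse_and_sparsify(A_latent, A_raw, alpha=0.5, k=2):
    """Merge two adjacency matrices by weighted summation, then keep only
    each node's k strongest outgoing edges (kNN sparsification)."""
    A = alpha * A_latent + (1 - alpha) * A_raw
    np.fill_diagonal(A, 0.0)                 # no self-edges
    A_sparse = np.zeros_like(A)
    for i in range(A.shape[0]):
        keep = np.argsort(A[i])[-k:]         # indices of the k largest weights
        A_sparse[i, keep] = A[i, keep]
    return np.maximum(A_sparse, A_sparse.T)  # symmetrize

np.random.seed(0)
A1 = np.random.rand(5, 5)   # graph from latent embeddings (toy)
A2 = np.random.rand(5, 5)   # graph from raw attributes (toy)
A = fuse_and_sparsify(A1, A2, alpha=0.7, k=2)
print((A > 0).sum(axis=1))  # retained edges per node after sparsification
```

The alpha weight controls how much the raw-data graph compensates for the latent-embedding graph, which is the paper's central design question.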
Affiliation(s)
- Wei Guo
- Shandong University, Jinan, 250210, China.
- Xin Zhou
- Nanyang Technological University, Singapore.
- Shuo Yan
- Shandong University, Jinan, 250210, China.
- Lizhen Cui
- Shandong University, Jinan, 250210, China.
4
Zhang S, Yang J, Zhang Y, Zhong J, Hu W, Li C, Jiang J. The Combination of a Graph Neural Network Technique and Brain Imaging to Diagnose Neurological Disorders: A Review and Outlook. Brain Sci 2023; 13:1462. [PMID: 37891830] [PMCID: PMC10605282] [DOI: 10.3390/brainsci13101462]
Abstract
Neurological disorders (NDs), such as Alzheimer's disease, are a threat to human health all over the world. Diagnosing NDs by combining artificial intelligence with brain imaging is therefore of great importance. A graph neural network (GNN) can model and analyze brain imaging in terms of morphology, anatomical structure, functional features, and other aspects, making it one of the best-suited deep learning models for ND diagnosis. Some researchers have surveyed the application of GNNs in the medical field, but those surveys are broad in scope and cover NDs less frequently and in less detail. This review focuses on the research progress of GNNs in ND diagnosis. First, we systematically investigate the GNN framework for NDs, including graph construction, graph convolution, graph pooling, and graph prediction. Second, we examine common NDs addressed with GNN diagnostic models in terms of data modality, number of subjects, and diagnostic accuracy. Third, we discuss research challenges and future directions. The results of this review may be a valuable contribution to the ongoing intersection of artificial intelligence and brain imaging.
Affiliation(s)
- Shuoyan Zhang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiacheng Yang
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Ying Zhang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiayi Zhong
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Wenjing Hu
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Chenyang Li
- School of Life Sciences, Shanghai University, Shanghai 200444, China
- Jiehui Jiang
- Shanghai Institute of Biomedical Engineering, Shanghai University, Shanghai 200444, China
5
Guo S, Zhang H, Gao Y, Wang H, Xu L, Gao Z, Guzzo A, Fortino G. Survival prediction of heart failure patients using motion-based analysis method. Comput Methods Programs Biomed 2023; 236:107547. [PMID: 37126888] [DOI: 10.1016/j.cmpb.2023.107547]
Abstract
BACKGROUND AND OBJECTIVE: Survival prediction of heart failure patients is critical to improving the prognostic management of cardiovascular disease. Existing survival prediction methods focus on clinical information while lacking cardiac motion information. We propose a motion-based analysis method to predict the survival risk of heart failure patients, to aid clinical diagnosis and treatment.
METHODS: First, our method proposes a hierarchical spatial-temporal structure to capture the myocardial border, which promotes model discrimination on border features. Second, it explores a dense optical flow structure to capture motion fields, improving tracking capability on cardiac images. The cardiac motion information is obtained by fusing the boundary information and motion fields of cardiac images. Finally, our method proposes a multi-modality deep-Cox structure to predict the survival risk of heart failure patients, improving the accuracy of survival risk estimation.
RESULTS: The motion-based analysis method is confirmed to improve the survival prediction of heart failure patients. The precision, recall, F1-score, and C-index are 0.8519, 0.8333, 0.8425, and 0.8478, respectively, which are superior to other state-of-the-art methods.
CONCLUSIONS: The experimental results show that the proposed model can effectively predict the survival risk of heart failure patients, facilitating the application of robust clinical treatment strategies.
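The C-index reported in the results measures how well predicted risks rank patients by observed survival time. A minimal sketch of Harrell's C-index follows; this is the standard metric definition on an assumed toy cohort, not the authors' code:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs in which the
    higher predicted risk belongs to the patient with the earlier event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i had an event before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5       # ties count half
    return concordant / comparable

# toy cohort: predicted risk ordering matches event ordering perfectly
times  = [2, 5, 7, 9]       # follow-up times
events = [1, 1, 0, 1]       # 1 = event observed, 0 = censored
risks  = [0.9, 0.6, 0.4, 0.2]
print(concordance_index(times, events, risks))  # 1.0
```

A C-index of 0.5 corresponds to random ranking, so the reported 0.8478 indicates strong risk discrimination.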
Affiliation(s)
- Saidi Guo
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Heye Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China.
- Yifeng Gao
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Hui Wang
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Lei Xu
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Zhifan Gao
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Antonella Guzzo
- Department of Informatics, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Rende, Italy
- Giancarlo Fortino
- Department of Informatics, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Rende, Italy
6
Zhang H, Gao Z, Zhang D, Hau WK, Zhang H. Progressive Perception Learning for Main Coronary Segmentation in X-Ray Angiography. IEEE Trans Med Imaging 2023; 42:864-879. [PMID: 36327189] [DOI: 10.1109/tmi.2022.3219126]
Abstract
Main coronary segmentation from X-ray angiography images is important for the computer-aided diagnosis and treatment of coronary disease. However, it confronts challenges at three image granularities (the semantic, surrounding, and local levels): semantic confusion between the main and collateral vessels, low contrast between the foreground vessel and the background surroundings, and local ambiguity near the vessel boundaries. Traditional hand-crafted feature-based methods may be insufficient because they lack semantic relationship information and may not distinguish the main and collateral vessels. Existing deep learning-based methods fall short in capturing long-distance semantic relationships, adapting to foreground and background interference, and preserving boundary detail. To address these challenges, we propose the progressive perception learning (PPL) framework, which inspects the three image granularities through context, interference, and boundary perception modules. The context perception module focuses on the main coronary vessel by capturing semantic dependence among different coronary segments. The interference perception module purifies the feature maps through foreground vessel enhancement and background artifact suppression. The boundary perception module highlights boundary details by extracting boundary features from the intersection between the foreground and background predictions. Extensive experiments on 1085 subjects show that the PPL is effective (e.g., overall Dice greater than 95%) and superior to thirteen state-of-the-art coronary segmentation methods.
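The Dice score used to evaluate PPL measures overlap between the predicted and ground-truth vessel masks. A minimal sketch of the standard definition follows (toy masks, not tied to the paper's evaluation code):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between a binary prediction and a ground-truth mask:
    2|P ∩ G| / (|P| + |G|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# toy 2x3 segmentation masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, gt), 3))  # 2*2/(3+3) -> 0.667
```

A Dice above 95%, as reported, means the predicted main-vessel mask and the annotation agree on the vast majority of vessel pixels.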