1
Wang X, Chen G, Hu H, Zhang M, Rao Y, Yue Z. PDDGCN: A Parasitic Disease-Drug Association Predictor Based on Multi-view Fusion Graph Convolutional Network. Interdiscip Sci 2024;16:231-242. PMID: 38294648. DOI: 10.1007/s12539-023-00600-z. Received 07/29/2023; revised 12/20/2023; accepted 12/21/2023.
Abstract
The precise identification of associations between diseases and drugs is paramount for understanding the etiology and mechanisms underlying parasitic diseases. Computational approaches are highly effective in discovering and predicting disease-drug associations. However, most of these approaches rely primarily on link-based methodologies within distinct biomedical bipartite networks. In this study, we reorganized a fundamental dataset of parasitic disease-drug associations using the latest databases and proposed a prediction model, PDDGCN, based on a multi-view graph convolutional network. First, we fused similarity networks with binary networks to establish multi-view heterogeneous networks. We then used neighborhood information aggregation layers to refine node embeddings within each view of the multi-view heterogeneous networks, leveraging inter- and intra-domain message passing to aggregate information from neighboring nodes. Subsequently, we integrated the embeddings from each view and fed them into the final discriminator. The experimental results demonstrate that PDDGCN outperforms five state-of-the-art methods and four compared machine learning algorithms. Case studies further substantiate the effectiveness of PDDGCN in identifying associations between parasitic diseases and drugs. In summary, the PDDGCN model has the potential to facilitate the discovery of potential treatments for parasitic diseases and to advance our understanding of their etiology. The source code is available at https://github.com/AhauBioinformatics/PDDGCN.
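As a rough illustration of the pipeline the abstract describes, and not the published PDDGCN code, one layer of neighborhood aggregation, view fusion, and pair scoring could be sketched as follows (all function names, embedding sizes, and the toy data are hypothetical):

```python
# Hypothetical sketch of multi-view neighborhood aggregation and pair
# scoring for a disease-drug bipartite graph; a minimal toy, not PDDGCN.

def aggregate(node_emb, neighbor_embs):
    """One GCN-style layer: mean of a node's embedding and its neighbors'."""
    pooled = [node_emb] + neighbor_embs
    dim = len(node_emb)
    return [sum(e[i] for e in pooled) / len(pooled) for i in range(dim)]

def fuse_views(view_embs):
    """Integrate one node's refined embeddings from several views by averaging."""
    dim = len(view_embs[0])
    return [sum(e[i] for e in view_embs) / len(view_embs) for i in range(dim)]

def score(disease_emb, drug_emb):
    """Dot-product discriminator for a candidate disease-drug pair."""
    return sum(a * b for a, b in zip(disease_emb, drug_emb))

# A disease embedding refined by one drug neighbor, in a single view:
d = aggregate([1.0, 0.0], [[0.0, 1.0]])   # -> [0.5, 0.5]
fused = fuse_views([d, [0.5, 0.5]])       # two views agree in this toy case
print(score(fused, [1.0, 1.0]))           # -> 1.0
```

The averaging fusion stands in for the paper's learned integration of per-view embeddings; any weighted or attention-based combination would slot into `fuse_views` the same way.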
Affiliation(s)
- Xiaosong Wang
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
- Guojun Chen
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
- Hang Hu
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
- Min Zhang
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
- Yuan Rao
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
- Zhenyu Yue
- School of Information and Artificial Intelligence, Anhui Provincial Engineering Research Center for Beidou Precision Agriculture Information, Key Laboratory of Agricultural Sensors for Ministry of Agriculture and Rural Affairs, Anhui Agricultural University, Hefei, 230036, Anhui, People's Republic of China
2
Sun X, Qian X, Nai C, Xu Y, Liu Y, Yao G, Dong L. LDI-MVFNet: A multi-view fusion deep network for leachate distribution imaging. Waste Manag 2023;157:180-189. PMID: 36563516. DOI: 10.1016/j.wasman.2022.12.020. Received 07/17/2022; revised 11/23/2022; accepted 12/14/2022.
Abstract
The accurate monitoring and early warning of groundwater pollution caused by concealed landfill leakage is a major global challenge in solid waste management and groundwater protection. Electrical resistivity tomography (ERT) is a promising solution owing to its fast and nondestructive characteristics. However, traditional ERT based on a single electrode array carries limited information and cannot reveal the distribution and dynamics of pollution in complex underground media. We designed a novel multi-view fusion deep network, named LDI-MVFNet, to invert the true resistivity distribution of the medium as altered by leachate and thereby infer the leachate distribution. To support model establishment and validation, ERT instances collected from synthetic models and a salt tracer experiment were inverted. Compared with single-array inversions, LDI-MVFNet performed best overall: its average root mean square error (RMSE) on the synthetic models was 0.98, better than the separately inverted Dipole-Dipole (3.86), Wenner-Schlumberger (3.37), and Pole-Pole (6.61) arrays. The inverted true subsurface resistivity was presented as two-dimensional (2D) cross sections, whose imaging results showed that LDI-MVFNet was superior to the alternatives in noise suppression and inversion accuracy. These results indicate that fusing data from multiple views reflects the true resistivity more accurately than inverting a single array.
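The intuition behind the fusion result can be shown with a toy numeric example (purely illustrative, not the LDI-MVFNet network): when sections inverted from complementary arrays err in opposite directions in different cells, a pointwise combination cancels part of the error.

```python
import math

def rmse(pred, true):
    """Root mean square error between two resistivity sections."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def fuse_arrays(views):
    """Pointwise average of resistivity sections inverted from different arrays."""
    n = len(views)
    return [sum(v[i] for v in views) / n for i in range(len(views[0]))]

true = [10.0, 10.0, 10.0]
dipole = [12.0, 8.0, 11.0]   # toy single-array inversions, biased
wenner = [8.0, 12.0, 9.0]    # high/low in complementary cells
fused = fuse_arrays([dipole, wenner])
print(rmse(dipole, true), rmse(wenner, true), rmse(fused, true))
```

In the paper the combination is learned by a deep network rather than a plain average, but the benefit it exploits is the same complementarity between arrays.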
Affiliation(s)
- Xiaochen Sun
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology-Beijing, Beijing 100091, China; Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
- Xu Qian
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology-Beijing, Beijing 100091, China
- Changxin Nai
- Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
- Ya Xu
- Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
- Yuqiang Liu
- Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
- Guangyuan Yao
- Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
- Lu Dong
- Research Institute of Soil and Solid Waste, Chinese Research Academy of Environment Sciences, Beijing 100012, China
3
Song D, Zhang Z, Li W, Yuan L, Zhang W. Judgment of benign and early malignant colorectal tumors from ultrasound images with deep multi-view fusion. Comput Methods Programs Biomed 2022;215:106634. PMID: 35081497. DOI: 10.1016/j.cmpb.2022.106634. Received 06/19/2021; revised 11/28/2021; accepted 01/11/2022.
Abstract
BACKGROUND AND OBJECTIVE: Colorectal cancer (CRC) is currently one of the main cancers worldwide, with a high incidence in the elderly. In the diagnosis of CRC, endorectal ultrasound plays an important role in distinguishing benign from early malignant tumors. However, early-stage malignant tumors are not easy to identify visually, and experts usually consult multi-view images, which increases the workload and still carries a certain probability of misdiagnosis. With the widespread use of deep learning in medical image analysis in recent years, it has become necessary to design an effective computer-aided diagnosis (CAD) system for CRC based on multi-view endorectal ultrasound images. METHOD: We proposed a CAD system for judging benign and early malignant colorectal tumors, and constructed the first multi-view ultrasound image dataset of CRC to validate it. The system is an end-to-end deep neural network (DNN) comprising a dense-block feature extraction module, a multi-view fusion module, and a Multi-Layer Perceptron-based classifier. A center loss was used, for the first time in CAD tasks, to optimize the model. RESULT: On the constructed dataset, the proposed system surpasses expert diagnosis in accuracy, sensitivity, specificity, and F1-score, and reaches the best performance among popular deep classification networks and other CAD methods. Comparative experiments with different feature extraction methods, view fusion strategies, and classifiers verify the effectiveness of each part of the algorithm. CONCLUSION: We propose a DNN-based CAD system for judging benign and early malignant colorectal tumors that combines ultrasound information from different views. On the first CRC multi-view ultrasound image dataset, which we constructed, our method outperforms expert diagnosis and all other compared methods, and the effectiveness of each part of the system has been verified. The system has application value for the early diagnosis of CRC in future medical practice.
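The center loss mentioned in the abstract has a standard generic form: each sample's feature vector is penalized by its squared distance to the learned center of its class, pulling same-class features together. A minimal sketch of that formulation (not the paper's exact code):

```python
# Generic center loss: mean over a batch of 0.5 * ||x_i - c_{y_i}||^2,
# where c_{y_i} is the learned center of sample i's class. Illustrative
# only; the paper's hyperparameters and implementation may differ.

def center_loss(features, labels, centers):
    """Mean of 0.5 * squared distance from each feature to its class center."""
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return total / len(features)

# Features sitting exactly on their class centers incur zero loss:
centers = {0: [0.0, 0.0], 1: [1.0, 1.0]}
print(center_loss([[0.0, 0.0], [1.0, 1.0]], [0, 1], centers))  # -> 0.0
```

In training, this term is typically added to the usual classification loss with a small weight, and the centers themselves are updated alongside the network parameters.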
Affiliation(s)
- Dan Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Zheqi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Wenhui Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lijun Yuan
- Department of Colorectal Surgery, Tianjin Union Medical Center, Tianjin 300121, China; Tianjin Institute of Coloproctology, Tianjin 300121, China
- Wenshu Zhang
- EUREKA Robotics Centre, School of Technologies, Cardiff Metropolitan University, Cardiff, Wales, United Kingdom
4
Qu Y, Li X, Yan Z, Zhao L, Zhang L, Liu C, Xie S, Li K, Metaxas D, Wu W, Hao Y, Dai K, Zhang S, Tao X, Ai S. Surgical planning of pelvic tumor using multi-view CNN with relation-context representation learning. Med Image Anal 2021;69:101954. PMID: 33550006. DOI: 10.1016/j.media.2020.101954. Received 06/09/2020; revised 11/21/2020; accepted 12/28/2020.
Abstract
Limb salvage surgery for malignant pelvic tumors is the most challenging procedure in musculoskeletal oncology, owing to the complex anatomy of the pelvic bones and soft tissues. It is crucial to resect pelvic tumors accurately with appropriate margins, yet many hospitals still lack efficient and repeatable image planning methods for tumor identification and segmentation. In this paper, we present a novel deep learning-based method to accurately segment pelvic bone tumors in MRI. Our method uses a multi-view fusion network to extract pseudo-3D information from two scans in different directions and improves the feature representation by learning a relational context. In this way, it can fully utilize the spatial information in thick MRI scans and reduce over-fitting when learning from a small dataset. The proposed method was evaluated on two independent datasets collected from 90 and 15 patients, respectively. Its segmentation accuracy was superior to several competing methods and comparable to expert annotation, while the average time consumed decreased roughly 100-fold, from 1820.3 seconds to 19.2 seconds. In addition, we incorporated the method into an efficient workflow to improve the surgical planning process: in a phantom study the workflow took only 15 minutes to complete surgical planning, a dramatic acceleration compared with the 2-day span of a traditional workflow.
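At its simplest, fusing pseudo-3D information from two scan directions can be pictured as combining per-voxel predictions from the two views before thresholding. The sketch below is a hedged toy of that idea only (the paper's network fuses learned features with relation-context representation, not raw probabilities; all names and weights here are illustrative):

```python
# Toy voxel-wise fusion of tumor probabilities predicted from two scan
# directions; illustrative only, not the paper's multi-view CNN.

def fuse_pseudo3d(axial_prob, sagittal_prob, w=0.5):
    """Weighted average of per-voxel tumor probabilities from two views."""
    return [w * a + (1.0 - w) * s for a, s in zip(axial_prob, sagittal_prob)]

def to_mask(probs, thr=0.5):
    """Threshold fused probabilities into a binary tumor mask."""
    return [1 if p >= thr else 0 for p in probs]

fused = fuse_pseudo3d([0.9, 0.2, 0.6], [0.7, 0.4, 0.8])
print(to_mask(fused))  # -> [1, 0, 1]
```

Fusing at the feature level, as the paper does, lets the network weigh the two directions adaptively per region instead of with a fixed scalar `w`.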
Affiliation(s)
- Yang Qu
- Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Xiaomin Li
- Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Zhennan Yan
- SenseBrain Technology, Princeton, NJ 08540, USA
- Liang Zhao
- SenseTime Research, Shanghai 200233, China
- Lichi Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
- Chang Liu
- SenseTime Research, Shanghai 200233, China
- Kang Li
- Department of Orthopaedics, Rutgers New Jersey Medical School, Newark, NJ 07103, USA
- Dimitris Metaxas
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Wen Wu
- Shanghai Key Laboratory of Orthopaedic Implants, Department of Orthopaedics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Yongqiang Hao
- Shanghai Key Laboratory of Orthopaedic Implants, Department of Orthopaedics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Kerong Dai
- Shanghai Key Laboratory of Orthopaedic Implants, Department of Orthopaedics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China; Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200240, China
- Shaoting Zhang
- SenseTime Research, Shanghai 200233, China; Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiaofeng Tao
- Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Songtao Ai
- Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China