1
Wang M, Zhao SW, Wu D, Zhang YH, Han YK, Zhao K, Qi T, Liu Y, Cui LB, Wei Y. Transcriptomic and neuroimaging data integration enhances machine learning classification of schizophrenia. Psychoradiology 2024; 4:kkae005. [PMID: 38694267; PMCID: PMC11061866; DOI: 10.1093/psyrad/kkae005]
Abstract
Background: Schizophrenia is a polygenic disorder associated with changes in brain structure and function. Integrating macroscale brain features with microscale genetic data may provide a more complete overview of disease etiology and may yield potential diagnostic markers for schizophrenia.
Objective: We aim to systematically evaluate the impact of fusing multi-scale neuroimaging and transcriptomic data in schizophrenia classification models.
Methods: We collected brain imaging data and blood RNA-sequencing data from 43 patients with schizophrenia and 60 age- and gender-matched healthy controls, and extracted multi-omics features covering macroscale brain morphology, brain structural and functional connectivity, and transcription of schizophrenia risk genes. Multi-scale data fusion was performed within a machine learning integration framework, using several conventional machine learning methods and neural networks for patient classification.
Results: Multi-omics data fusion in conventional machine learning models achieved the highest accuracy (AUC ~0.76-0.92), with AUC improvements of 8.88% to 22.64% over the single-modality models. Similar findings were observed for the neural network: the multimodal classification model (accuracy 71.43%) showed a 16.57% increase over the single-modality average. In addition, we identified several brain regions in the left posterior cingulate and right frontal pole that made a major contribution to disease classification.
Conclusion: We provide empirical evidence for the increased accuracy achieved by integrating imaging and genetic data in schizophrenia classification. Multi-scale data fusion holds promise for enhancing diagnostic precision, facilitating early detection, and personalizing treatment regimens in schizophrenia.
Affiliation(s)
- Mengya Wang
  - Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Shu-Wan Zhao
  - Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
  - Schizophrenia Imaging Lab, Xijing 986 Hospital, Fourth Military Medical University, Xi'an, 710054, China
- Di Wu
  - Department of Psychiatry, Xijing Hospital, Fourth Military Medical University, Xi'an, 710032, China
- Ya-Hong Zhang
  - Department of Psychiatry, Xi'an Gaoxin Hospital, Xi'an, 710075, China
- Yan-Kun Han
  - Schizophrenia Imaging Lab, Xijing 986 Hospital, Fourth Military Medical University, Xi'an, 710054, China
- Kun Zhao
  - Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Ting Qi
  - Department of Neurology, School of Medicine, University of California San Francisco, San Francisco, CA 94143, USA
- Yong Liu
  - Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Long-Biao Cui
  - Schizophrenia Imaging Lab, Xijing 986 Hospital, Fourth Military Medical University, Xi'an, 710054, China
  - Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710061, China
  - Department of Radiology, The Second Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
  - Shaanxi Provincial Key Laboratory of Clinic Genetics, Fourth Military Medical University, Xi'an, 710032, China
- Yongbin Wei
  - Center for Artificial Intelligence in Medical Imaging, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China
2
Huang W, Zhang H, Cheng Y, Quan X. DRCM: a disentangled representation network based on coordinate and multimodal attention for medical image fusion. Front Physiol 2023; 14:1241370. [PMID: 38028809; PMCID: PMC10656763; DOI: 10.3389/fphys.2023.1241370]
Abstract
Recent deep learning-based studies of medical image fusion have made remarkable progress, but the common and exclusive features of different modalities, and especially their subsequent enhancement, are often ignored. Since medical images of different modalities carry unique information, dedicated learning of exclusive features is needed to express this unique information and obtain a fused medical image with more information and detail. We therefore propose an attention-based disentangled representation network for medical image fusion (DRCM), which uses coordinate attention and multimodal attention to extract and strengthen common and exclusive features. First, the common and exclusive features of each modality are obtained by cross mutual information and adversarial objective methods, respectively. Then, coordinate attention enhances the common and exclusive features of the different modalities, and the exclusive features are weighted by multimodal attention. Finally, the two kinds of features are fused. The effectiveness of the three novel modules is verified by ablation experiments. In addition, eight comparison methods are selected for qualitative analysis, and four metrics are used for quantitative comparison. DRCM achieves better results on the SCD, Nabf, and MS-SSIM metrics, indicating that it further improves the visual quality of the fused image, retaining more information from the source images with less noise. Comprehensive comparison and analysis of the experimental results show that DRCM outperforms the comparison methods.
Affiliation(s)
- Han Zhang
  - College of Artificial Intelligence, Nankai University, Tianjin, China
3
Zhang W, Lu Y, Zheng H, Yu L. MBRARN: multibranch residual attention reconstruction network for medical image fusion. Med Biol Eng Comput 2023; 61:3067-3085. [PMID: 37624534; DOI: 10.1007/s11517-023-02902-2]
Abstract
Medical image fusion aims to integrate complementary information from multimodal medical images and has been widely applied in medicine, for example in clinical diagnosis, pathology analysis, and healing examinations. For the fusion task, feature extraction is a crucial step. Many deep learning-based algorithms have recently been proposed to capture the significant information embedded in medical images, and they achieve good fusion results. However, most can hardly capture the independent and underlying features, which leads to unsatisfactory fusion results. To address these issues, a multibranch residual attention reconstruction network (MBRARN) is proposed for the medical image fusion task. The proposed network consists of three main parts: feature extraction, feature fusion, and feature reconstruction. First, the input medical images are converted into three scales by an image pyramid operation and fed into the three branches of the network, respectively, in order to capture both local detail and global structure. Then, convolutions with residual attention modules are designed, which not only enhance the salient captured features but also make the network converge quickly and stably. Finally, feature fusion is performed with the designed fusion strategy; for MRI-SPECT fusion, a new, more effective strategy based on the Euclidean norm, called the feature distance ratio (FDR), is designed. Experimental results on the Harvard Whole Brain Atlas dataset demonstrate that the proposed network achieves better results in both subjective and objective evaluation than some state-of-the-art medical image fusion algorithms.
Affiliation(s)
- Weihao Zhang
  - College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
- Yuting Lu
  - School of Big Data and Software Engineering, Chongqing University, Chongqing, 401331, China
- Haodong Zheng
  - College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
- Lei Yu
  - College of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
4
Dinh PH. Medical image fusion based on enhanced three-layer image decomposition and Chameleon swarm algorithm. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104740]
5
Dinh PH. Combining spectral total variation with dynamic threshold neural P systems for medical image fusion. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104343]
6
7
Yadav AS, Kumar S, Karetla GR, Cotrina-Aliaga JC, Arias-Gonzáles JL, Kumar V, Srivastava S, Gupta R, Ibrahim S, Paul R, Naik N, Singla B, Tatkar NS. A Feature Extraction Using Probabilistic Neural Network and BTFSC-Net Model with Deep Learning for Brain Tumor Classification. J Imaging 2022; 9:10. [PMID: 36662108; PMCID: PMC9865827; DOI: 10.3390/jimaging9010010]
Abstract
Background and Objectives: Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combines medical image fusion, segmentation, feature extraction, and classification procedures.
Materials and Methods: To reduce noise in medical images, the hybrid probabilistic Wiener filter (HPWF) is first applied as a preprocessing step. Then, to combine robust edge analysis (REA) properties in magnetic resonance imaging (MRI) and computed tomography (CT) medical images, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed. Here, the slopes and borders of the brain images are detected using REA. To separate the diseased region from the color image, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is then applied. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level co-occurrence matrix (GLCM) are also used. Finally, to distinguish between benign and malignant tumors, a deep learning probabilistic neural network (DLPNN) is deployed.
Results: According to the findings, the proposed BTFSC-Net model performed better than more traditional preprocessing, fusion, segmentation, and classification techniques, reaching 99.21% segmentation accuracy and 99.46% classification accuracy.
Conclusions: Earlier approaches have not performed as well as the presented method for image fusion, segmentation, feature extraction, and brain tumor classification. These results illustrate that the designed approach performs more effectively, with better accuracy in quantitative evaluation as well as better visual performance.
Affiliation(s)
- Arun Singh Yadav
  - Department of Computer Science, University of Lucknow, Lucknow 226007, Uttar Pradesh, India
- Surendra Kumar
  - Department of Computer Application, Marwadi University, Rajkot 360003, Gujarat, India
- Girija Rani Karetla
  - School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW 2751, Australia
- José Luis Arias-Gonzáles
  - Department of Business, Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel 15088, Peru
- Vinod Kumar
  - Department of Computer Applications, ABES Engineering College, Ghaziabad 201009, Uttar Pradesh, India
- Satyajee Srivastava
  - Department of Computer Science and Engineering, University of Engineering and Technology Roorkee, Roorkee 247667, Uttarakhand, India
- Reena Gupta
  - Department of Pharmacognosy, Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India
- Sufyan Ibrahim
  - Neuro-Informatics Laboratory, Department of Neurological Surgery, Mayo Clinic, Rochester, MN 55905, USA
- Rahul Paul
  - Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA
  - iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Nithesh Naik
  - iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
  - Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
  - Curiouz TechLab Private Limited, BIRAC-BioNEST, Manipal Government of Karnataka Bioincubator, Manipal 576104, Karnataka, India
  - Correspondence: Tel.: +91-83-1087-4339
- Babita Singla
  - Chitkara Business School, Chitkara University, Chandigarh 140401, Punjab, India
- Nisha S. Tatkar
  - Department of Postgraduate Diploma in Management, Institute of PGDM, Mumbai Education Trust, Mumbai 400050, Maharashtra, India