1
Yang S, Bornot JMS, Fernandez RB, Deravi F, Wong-Lin K, Prasad G. Integrated space-frequency-time domain feature extraction for MEG-based Alzheimer's disease classification. Brain Inform 2021; 8:24. PMID: 34725742; PMCID: PMC8560870; DOI: 10.1186/s40708-021-00145-1.
Abstract
Magnetoencephalography (MEG) has been combined with machine learning techniques to recognize Alzheimer's disease (AD), one of the most common forms of dementia. However, most previous studies are limited to binary classification and do not fully utilize the two available MEG modalities (extracted using magnetometer and gradiometer sensors). Because AD progresses through several stages, this study addresses these limitations by using both magnetometer and gradiometer data to discriminate among participants with AD, participants with AD-related mild cognitive impairment (MCI), and healthy control (HC) participants in a three-class classification problem. A series of wavelet-based biomarkers are developed and evaluated, which concurrently leverage the spatial, frequency and time domain characteristics of the signal. A bimodal recognition system based on an improved score-level fusion approach is proposed to reinforce interpretation of the brain activity captured by magnetometers and gradiometers. In this preliminary study, it was found that markers derived from gradiometers tend to outperform magnetometer-based markers. Interestingly, out of the 10 regions of interest, the left frontal lobe demonstrates a mean recognition rate about 8% higher than that of the second-best performing region (the left temporal lobe) for AD/MCI/HC classification. Among the four types of markers proposed in this work, the spatial marker developed using wavelet coefficients provided the best recognition performance for the three-way classification. Overall, the proposed approach provides promising results for AD/MCI/HC three-way classification using bimodal MEG data.
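To make the described pipeline more concrete, the following is a minimal sketch (not the authors' implementation) of wavelet-based markers plus score-level fusion of magnetometer and gradiometer classifiers. It assumes PyWavelets and scikit-learn; the db4 wavelet, the log-energy marker, the logistic-regression scorer, the fusion weight, and all function names are illustrative assumptions.

```python
# Sketch only: wavelet sub-band markers per sensor type, then weighted
# score-level fusion of the magnetometer and gradiometer classifiers.
import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

def wavelet_markers(epochs, wavelet="db4", level=4):
    """epochs: array (n_trials, n_channels, n_samples) -> feature matrix."""
    feats = []
    for trial in epochs:
        trial_feats = []
        for ch in trial:
            coeffs = pywt.wavedec(ch, wavelet, level=level)
            # one simple marker: log-energy of each wavelet sub-band
            trial_feats.extend(np.log(np.sum(c ** 2) + 1e-12) for c in coeffs)
        feats.append(trial_feats)
    return np.asarray(feats)

def fused_predict(X_mag, X_grad, y, X_mag_test, X_grad_test, w=0.5):
    """X_mag/X_grad: magnetometer and gradiometer epochs; y: AD/MCI/HC labels (0, 1, 2)."""
    clf_m = LogisticRegression(max_iter=1000).fit(wavelet_markers(X_mag), y)
    clf_g = LogisticRegression(max_iter=1000).fit(wavelet_markers(X_grad), y)
    # score-level fusion: weighted sum of the two modalities' class probabilities
    p = w * clf_m.predict_proba(wavelet_markers(X_mag_test)) \
        + (1 - w) * clf_g.predict_proba(wavelet_markers(X_grad_test))
    return p.argmax(axis=1)
```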
Affiliation(s)
- Su Yang
- Department of Computer Science, Swansea University, Swansea, UK.
- Jose Miguel Sanchez Bornot
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry-Londonderry, Northern Ireland, UK.
- Farzin Deravi
- School of Engineering and Digital Arts, University of Kent, Canterbury, UK.
- KongFatt Wong-Lin
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry-Londonderry, Northern Ireland, UK.
- Girijesh Prasad
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Derry-Londonderry, Northern Ireland, UK.
2
Song J, Zheng J, Li P, Lu X, Zhu G, Shen P. An Effective Multimodal Image Fusion Method Using MRI and PET for Alzheimer's Disease Diagnosis. Front Digit Health 2021; 3:637386. PMID: 34713109; PMCID: PMC8521941; DOI: 10.3389/fdgth.2021.637386.
Abstract
Alzheimer's disease (AD) is an irreversible brain disease that severely damages human thinking and memory. Early diagnosis plays an important part in the prevention and treatment of AD. Neuroimaging-based computer-aided diagnosis (CAD) has shown that deep learning methods using multimodal images are beneficial for guiding AD detection. In recent years, many methods based on multimodal feature learning have been proposed to extract and fuse latent representation information from different neuroimaging modalities, including magnetic resonance imaging (MRI) and 18-fluorodeoxyglucose positron emission tomography (FDG-PET). However, these methods lack the interpretability required to clearly explain the specific meaning of the extracted information. To make the multimodal fusion process more persuasive, we propose an image fusion method to aid AD diagnosis. Specifically, we fuse the gray matter (GM) tissue area of brain MRI and FDG-PET images by registration and mask coding to obtain a new fused modality called "GM-PET." The resulting single composite image emphasizes the GM area that is critical for AD diagnosis, while retaining both the contour and metabolic characteristics of the subject's brain tissue. In addition, we use a three-dimensional simple convolutional neural network (3D Simple CNN) and a 3D Multi-Scale CNN to evaluate the effectiveness of our image fusion method in binary classification and multi-classification tasks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset indicate that the proposed image fusion method achieves better overall performance than unimodal and feature fusion methods, and that it outperforms state-of-the-art methods for AD diagnosis.
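The following is a rough, hedged illustration of the mask-coding fusion idea (keeping FDG-PET metabolism inside the gray-matter region and the MRI contour elsewhere); it is not the authors' code. It assumes volumes that are already co-registered and loaded with nibabel, and the file names, the 0.5 GM threshold, and the identity affine are placeholders.

```python
# Illustrative GM-PET fusion sketch: PET values inside the gray-matter mask,
# MRI values outside, normalized before being fed to a 3D CNN.
import numpy as np
import nibabel as nib

mri = nib.load("subject_mri.nii.gz").get_fdata()                 # structural MRI
pet = nib.load("subject_pet_registered.nii.gz").get_fdata()      # PET, registered to MRI space
gm_prob = nib.load("subject_gm_segmentation.nii.gz").get_fdata() # GM probability map

gm_mask = gm_prob > 0.5                       # binary gray-matter mask (threshold is illustrative)
fused = np.where(gm_mask, pet, mri)           # metabolic signal in GM, anatomical contour elsewhere
fused = (fused - fused.min()) / (np.ptp(fused) + 1e-8)  # min-max normalization

nib.save(nib.Nifti1Image(fused.astype(np.float32), affine=np.eye(4)),
         "subject_gm_pet.nii.gz")             # single composite "GM-PET" volume
```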
Affiliation(s)
- Juan Song
- School of Computer Science and Technology, Xidian University, Shaanxi, China
- Jian Zheng
- School of Computer Science and Technology, Xidian University, Shaanxi, China
- Ping Li
- Data and Virtual Research Room, Shanghai Broadband Network Center, Shanghai, China
- Xiaoyuan Lu
- Data and Virtual Research Room, Shanghai Broadband Network Center, Shanghai, China
- Guangming Zhu
- School of Computer Science and Technology, Xidian University, Shaanxi, China
- Peiyi Shen
- School of Computer Science and Technology, Xidian University, Shaanxi, China
3
Zhou T, Zhang C, Peng X, Bhaskar H, Yang J. Dual Shared-Specific Multiview Subspace Clustering. IEEE Trans Cybern 2020; 50:3517-3530. PMID: 31226094; DOI: 10.1109/tcyb.2019.2918495.
Abstract
Multiview subspace clustering has received significant attention as the availability of diverse multidomain and multiview real-world data has rapidly increased in recent years. Boosting the performance of multiview clustering algorithms is challenged by two major factors. First, since original features from multiview data are highly redundant, reconstruction based on these attributes inevitably results in inferior performance. Second, since each view of such multiview data may contain knowledge that the other views lack, it remains a challenge to exploit complementary information across multiple views while simultaneously investigating the uniqueness of each view. In this paper, we present a novel dual shared-specific multiview subspace clustering (DSS-MSC) approach that simultaneously learns the correlations between shared information across multiple views and utilizes view-specific information to capture the specific properties of each individual view. Further, we formulate a dual learning framework to capture shared-specific information in the dimensionality reduction and self-representation processes, which strengthens the ability of our approach to exploit shared information while effectively preserving view-specific properties. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach against other state-of-the-art techniques.
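The DSS-MSC objective itself is not given in the abstract; the sketch below only illustrates the general shared/specific self-representation idea behind multiview subspace clustering, not the paper's optimization. It uses a ridge-regularized self-expressive coding per view, an element-wise minimum as a crude "shared" affinity, and scikit-learn spectral clustering; the helper names, regularization, and mixing weight are illustrative assumptions.

```python
# Greatly simplified shared/specific affinity construction for multiview
# subspace clustering (not the DSS-MSC algorithm).
import numpy as np
from sklearn.cluster import SpectralClustering

def self_representation(X, lam=0.1):
    """X: (n_samples, d). Returns |C| with X ~ C @ X (ridge-regularized self-expression)."""
    G = X @ X.T
    C = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    return np.abs(C)

def cluster_multiview(views, n_clusters, alpha=0.5):
    """views: list of (n_samples, d_v) arrays, one per view."""
    C_specific = [self_representation(X) for X in views]
    shared = np.minimum.reduce(C_specific)    # structure agreed on by all views
    specific = np.mean(C_specific, axis=0)    # average of per-view structure
    A = alpha * shared + (1 - alpha) * specific
    A = 0.5 * (A + A.T)                       # symmetrize before spectral clustering
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)
```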
4
Zhou T, Thung KH, Liu M, Shi F, Zhang C, Shen D. Multi-modal latent space inducing ensemble SVM classifier for early dementia diagnosis with neuroimaging data. Med Image Anal 2020; 60:101630. PMID: 31927474; PMCID: PMC8260095; DOI: 10.1016/j.media.2019.101630.
Abstract
Fusing multi-modality data is crucial for accurate identification of brain disorders, as different modalities can provide complementary perspectives on complex neurodegenerative diseases. However, there are at least four common issues associated with existing fusion methods. First, many existing fusion methods simply concatenate features from each modality without considering the correlations among different modalities. Second, most existing methods make predictions based on a single classifier, which might not be able to address the heterogeneity of Alzheimer's disease (AD) progression. Third, many existing methods perform feature selection (or reduction) and classifier training in two independent steps, without considering the fact that the two pipelined steps are highly related to each other. Fourth, neuroimaging data are missing for some participants (e.g., missing PET data) due to participants' no-shows or dropout. In this paper, to address the above issues, we propose an early AD diagnosis framework via a novel multi-modality latent-space-inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space, and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that our proposed models outperform other state-of-the-art methods.
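As a hedged sketch of the overall pipeline (project modalities into a latent space, train several diversified classifiers, ensemble their scores), the snippet below uses scikit-learn's CCA as a stand-in for the paper's learned latent space and simple bootstrapping for classifier diversity. It does not reproduce the CMLS/IMLS models; all names and parameters are illustrative, and labels are assumed to be coded 0..K-1.

```python
# Minimal latent-space + ensemble-SVM sketch (CCA and bootstrapping are
# stand-ins, not the paper's learning objective).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

def train_latent_ensemble(X_mri, X_pet, y, n_components=10, n_classifiers=5, seed=0):
    cca = CCA(n_components=n_components).fit(X_mri, X_pet)
    Z_mri, Z_pet = cca.transform(X_mri, X_pet)
    Z = np.hstack([Z_mri, Z_pet])                 # joint latent representation
    rng = np.random.default_rng(seed)
    clfs = []
    for _ in range(n_classifiers):
        idx = rng.choice(len(y), size=len(y), replace=True)  # bootstrap for diversity
        clfs.append(SVC(kernel="rbf", probability=True).fit(Z[idx], y[idx]))
    return cca, clfs

def predict_ensemble(cca, clfs, X_mri, X_pet):
    Z = np.hstack(cca.transform(X_mri, X_pet))
    probs = np.mean([c.predict_proba(Z) for c in clfs], axis=0)  # average the scores
    return probs.argmax(axis=1)                   # assumes labels 0..K-1
```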
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates.
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA.
- Feng Shi
- United Imaging Intelligence, Shanghai, China.
- Changqing Zhang
- School of Computer Science and Technology, Tianjin University, Tianjin 300072, China.
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
5
Zhou T, Liu M, Thung KH, Shen D. Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data. IEEE Trans Med Imaging 2019; 38:2411-2422. PMID: 31021792; PMCID: PMC8034601; DOI: 10.1109/tmi.2019.2913158.
Abstract
The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality-based AD diagnostic models are often hindered by missing data, i.e., not all subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain regions), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality-based AD diagnosis. Specifically, we use all available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.
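A greatly simplified sketch of learning a shared latent representation from incomplete multi-modality data is given below: alternating least-squares updates that use only the samples actually observed in each modality. This is not the paper's objective; the factorization form, regularization, latent dimensionality, and variable names are all assumptions. The learned latent codes could then be mapped to the label space by any classifier.

```python
# Sketch: shared latent codes H and per-modality projections W_m learned from
# incomplete multi-modality data (alternating ridge-regularized least squares).
import numpy as np

def learn_latent(modalities, masks, k=20, lam=1e-2, n_iter=50, seed=0):
    """modalities: list of (n_samples, d_m) arrays (missing rows may be zeros);
    masks: list of boolean (n_samples,) arrays marking which samples have that modality."""
    rng = np.random.default_rng(seed)
    n = modalities[0].shape[0]
    H = rng.standard_normal((n, k)) * 0.01                       # shared latent representation
    Ws = [rng.standard_normal((k, X.shape[1])) * 0.01 for X in modalities]
    for _ in range(n_iter):
        # update each modality's projection using only its observed samples
        for m, (X, obs) in enumerate(zip(modalities, masks)):
            Ho = H[obs]
            Ws[m] = np.linalg.solve(Ho.T @ Ho + lam * np.eye(k), Ho.T @ X[obs])
        # update each sample's latent code from whichever modalities it has
        for i in range(n):
            A, b = lam * np.eye(k), np.zeros(k)
            for X, obs, W in zip(modalities, masks, Ws):
                if obs[i]:
                    A += W @ W.T
                    b += W @ X[i]
            H[i] = np.linalg.solve(A, b)
    return H, Ws
```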
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea