1.
Qiu Z, Yang P, Xiao C, Wang S, Xiao X, Qin J, Liu CM, Wang T, Lei B. 3D Multimodal Fusion Network With Disease-Induced Joint Learning for Early Alzheimer's Disease Diagnosis. IEEE Trans Med Imaging 2024; 43:3161-3175. [PMID: 38607706] [DOI: 10.1109/tmi.2024.3386937]
Abstract
Multimodal neuroimaging provides complementary information critical for accurate early diagnosis of Alzheimer's disease (AD). However, the inherent variability between multimodal neuroimages hinders the effective fusion of multimodal features. Moreover, achieving reliable and interpretable diagnoses from multimodal fusion remains challenging. To address these challenges, we propose a novel multimodal diagnosis network based on multi-fusion and disease-induced learning (MDL-Net) to enhance early AD diagnosis by efficiently fusing multimodal data. Specifically, MDL-Net introduces a multi-fusion joint learning (MJL) module, which effectively fuses multimodal features and enhances the feature representation from global, local, and latent learning perspectives. MJL consists of three modules: global-aware learning (GAL), local-aware learning (LAL), and outer latent-space learning (LSL). GAL learns the global relationships among the modalities via a self-adaptive Transformer (SAT). LAL constructs local-aware convolutions to learn local associations. LSL introduces latent information through an outer-product operation to further enhance the feature representation. MDL-Net also integrates a disease-induced region-aware learning (DRL) module that uses gradient weights to enhance interpretability, iteratively learning weight matrices to identify AD-related brain regions. We conduct extensive experiments on public datasets, and the results confirm the superiority of the proposed method. Our code will be available at: https://github.com/qzf0320/MDL-Net.
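The outer latent-space learning idea in this abstract — fusing two modality feature vectors through an outer product so that every pairwise feature interaction is represented — can be sketched minimally as follows. This is an illustration of the general technique, not the authors' implementation; the function name and toy values are hypothetical.

```python
import numpy as np

def outer_latent_fusion(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Fuse two modality feature vectors via their outer product.

    The outer product captures multiplicative interactions between every
    feature of modality A and every feature of modality B; flattening it
    yields a single latent representation of both modalities.
    """
    latent = np.outer(feat_a, feat_b)   # shape (len_a, len_b)
    return latent.reshape(-1)           # flatten to a 1-D latent vector

# Toy example: 4-dim MRI features fused with 3-dim PET features
mri = np.array([0.2, 0.5, 0.1, 0.9])
pet = np.array([1.0, 0.3, 0.7])
fused = outer_latent_fusion(mri, pet)
print(fused.shape)  # (12,)
```

In MDL-Net such a fused representation would feed subsequent layers alongside the global and local branches; here it simply demonstrates the dimensionality blow-up (len_a × len_b) that makes the outer product expressive.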
2.
Odusami M, Damaševičius R, Milieškaitė-Belousovienė E, Maskeliūnas R. Alzheimer's disease stage recognition from MRI and PET imaging data using Pareto-optimal quantum dynamic optimization. Heliyon 2024; 10:e34402. [PMID: 39145034] [PMCID: PMC11320145] [DOI: 10.1016/j.heliyon.2024.e34402]
Abstract
The threat posed by Alzheimer's disease (AD) to human health has grown significantly. However, the precise diagnosis and classification of AD stages remain a challenge. Neuroimaging methods such as structural magnetic resonance imaging (sMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) have been used to diagnose and categorize AD, but the feature-selection approaches frequently used to extract additional information from multimodal imaging are prone to errors. This paper proposes using a static pulse-coupled neural network and a Laplacian pyramid to combine sMRI and FDG-PET data. The fused images, augmented to avoid overfitting, are then used to train the Mobile Vision Transformer (MViT), whose architectural hyperparameters are optimized with Pareto-Optimal Quantum Dynamic Optimization for Neural Architecture Search, to classify unfused MRI and FDG-PET images obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and Open Access Series of Imaging Studies (OASIS) datasets into the various stages of AD. The Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Structural Similarity Index Method (SSIM) are used to measure the quality of the fused images. We found that the fused image was consistent across all metrics, with 0.64 SSIM, 35.60 PSNR, and 0.21 MSE against the FDG-PET image. In the classification of AD vs. cognitively normal (CN), AD vs. mild cognitive impairment (MCI), and CN vs. MCI, the precision of the proposed method is 94.73%, 92.98%, and 89.36%, respectively; sensitivity is 90.70%, 90.70%, and 90.91%, while specificity is 100%, 100%, and 85.71%, respectively, on the ADNI MRI test data.
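Two of the fusion-quality metrics reported in this abstract, MSE and PSNR, have simple closed forms; a minimal sketch is below (SSIM involves local windows, luminance/contrast terms, and stabilizing constants, so it is omitted here). The array values are invented for illustration.

```python
import numpy as np

def mse(ref: np.ndarray, img: np.ndarray) -> float:
    """Mean squared error between a reference image and a (fused) image."""
    diff = ref.astype(np.float64) - img.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref: np.ndarray, img: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * np.log10(max_val ** 2 / e)

a = np.full((8, 8), 100.0)
b = a + 5.0          # "fused" image uniformly offset by 5 intensity levels
print(mse(a, b))     # 25.0
print(round(psnr(a, b), 2))
```

Note that PSNR depends on the assumed dynamic range (`max_val`), so comparisons across papers are only meaningful when the range convention matches.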
Affiliation(s)
- Modupe Odusami
- Faculty of Informatics, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, Kaunas, Lithuania
3.
Odusami M, Maskeliūnas R, Damaševičius R, Misra S. Machine learning with multimodal neuroimaging data to classify stages of Alzheimer's disease: a systematic review and meta-analysis. Cogn Neurodyn 2024; 18:775-794. [PMID: 38826669] [PMCID: PMC11143094] [DOI: 10.1007/s11571-023-09993-5]
Abstract
In recent years, Alzheimer's disease (AD) has become a serious threat to human health. Researchers and clinicians alike face a significant obstacle in accurately identifying and classifying AD stages. Several studies have shown that multimodal neuroimaging input can provide valuable insights into the structural and functional brain changes related to AD. Machine learning (ML) algorithms can accurately categorize AD phases by identifying patterns and linkages in multimodal neuroimaging data using powerful computational methods. This study aims to assess the contribution of ML methods to the accurate classification of AD stages using multimodal neuroimaging data. A systematic search was carried out in the IEEE Xplore, ScienceDirect/Elsevier, ACM Digital Library, and PubMed databases, with forward snowballing performed on Google Scholar. The quantitative analysis used 47 studies. An explainability analysis was performed on the classification algorithms and fusion methods used in the selected studies. Pooled sensitivity and specificity, including diagnostic efficiency, were evaluated by a meta-analysis based on a bivariate model with hierarchical summary receiver operating characteristic (ROC) curves, and the Wilcoxon signed-rank test was used to statistically compare the accuracy scores of the existing models. Pooled sensitivity (95% confidence interval) was 83.77% (78.87%, 87.71%) for distinguishing participants with mild cognitive impairment (MCI) from healthy controls (NC), 94.60% (90.76%, 96.89%) for AD vs. NC, 80.41% (74.73%, 85.06%) for progressive MCI (pMCI) vs. stable MCI (sMCI), and 86.63% (82.43%, 89.95%) for early MCI (EMCI) vs. NC. Pooled specificity was 79.16% (70.97%, 87.71%) for MCI vs. NC, 93.49% (91.60%, 94.90%) for AD vs. NC, 81.44% (76.32%, 85.66%) for pMCI vs. sMCI, and 85.68% (81.62%, 88.96%) for EMCI vs. NC. The Wilcoxon signed-rank test showed a low P-value across all classification tasks. Multimodal neuroimaging data with ML holds promise for classifying the stages of AD, but more research is required to increase the validity of its application in clinical practice.
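To make the pooling step concrete: a fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale is sketched below. This is a deliberate simplification of the bivariate hierarchical model the review actually fits (which pools sensitivity and specificity jointly with random effects); the (TP, FN) counts are invented.

```python
import math

def pooled_sensitivity(tp_fn_pairs):
    """Fixed-effect inverse-variance pooling of sensitivities on the
    logit scale. Each study contributes (true positives, false negatives);
    a 0.5 continuity correction guards against zero cells."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        sens = (tp + 0.5) / (tp + fn + 1.0)
        logit = math.log(sens / (1.0 - sens))
        var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)   # approx. var of the logit
        weight = 1.0 / var
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))    # back-transform

studies = [(90, 10), (80, 20), (45, 5)]             # invented (TP, FN) counts
print(round(pooled_sensitivity(studies), 3))
```

Larger, more precise studies (smaller logit variance) dominate the pooled estimate, which is why the back-transformed value lands between the per-study sensitivities but closer to the better-powered ones.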
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Sanjay Misra
- Department of Applied Data Science, Institute for Energy Technology, Halden, Norway
4.
Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations. J Xray Sci Technol 2024; 32:857-911. [PMID: 38701131] [DOI: 10.3233/xst-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea
5.
Wang M, Shao W, Huang S, Zhang D. Hypergraph-regularized multimodal learning by graph diffusion for imaging genetics based Alzheimer's Disease diagnosis. Med Image Anal 2023; 89:102883. [PMID: 37467641] [DOI: 10.1016/j.media.2023.102883]
Abstract
Recent studies show that multi-modal data fusion techniques combining information from diverse sources are helpful for diagnosing and predicting complex brain disorders. However, most existing diagnosis methods have simply employed a feature combination strategy for multiple imaging and genetic data, ignoring the imaging phenotypes associated with risk-gene information. To this end, we present hypergraph-regularized multimodal learning by graph diffusion (HMGD) for joint association learning and outcome prediction. Specifically, we first present a graph diffusion method for enhancing similarity measures among subjects given multi-modality phenotypes, which makes full use of multiple input similarity graphs and integrates them into a unified graph with valuable geometric structure across the different imaging phenotypes. Then, we employ the unified graph to represent the high-order similarity relationships among subjects and enforce a hypergraph-regularized term to incorporate both inter- and cross-modality information for selecting the imaging phenotypes associated with risk single nucleotide polymorphisms (SNPs). Finally, a multi-kernel support vector machine (MK-SVM) is adopted to fuse the phenotypic features selected from different modalities for the final diagnosis and prediction. The proposed approach is evaluated on brain imaging genetic data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets. Results show that it is superior to several competing algorithms, realizes strong associations, and discovers consistent and robust ROIs across different imaging phenotypes associated with the genetic risk biomarkers, guiding disease interpretation and prediction.
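The core move — integrating several modality-specific similarity graphs into one unified graph by diffusion — can be sketched as a cross-diffusion loop. This is a simplified illustration of the general technique (in the spirit of similarity-network-fusion-style updates), not the exact HMGD formulation; all names and values are hypothetical.

```python
import numpy as np

def diffuse_and_fuse(graphs, alpha=0.5, iters=20):
    """Cross-diffusion fusion of modality-specific similarity graphs.

    Each graph is row-normalised into a transition matrix, then repeatedly
    smoothed against the average of the other graphs, so that structure
    shared across modalities is reinforced while modality-specific noise
    is washed out. Returns the average of the diffused graphs.
    """
    P = [g / g.sum(axis=1, keepdims=True) for g in graphs]
    n = P[0].shape[0]
    for _ in range(iters):
        updated = []
        for i, p in enumerate(P):
            others = sum(q for j, q in enumerate(P) if j != i) / (len(P) - 1)
            s = alpha * p @ others @ p.T + (1 - alpha) * np.eye(n)
            updated.append(s / s.sum(axis=1, keepdims=True))
        P = updated
    return sum(P) / len(P)

# Toy example: two modalities, RBF-style similarity graphs over 10 subjects
rng = np.random.default_rng(0)
X1, X2 = rng.random((10, 5)), rng.random((10, 3))
rbf = lambda X: np.exp(-np.square(X[:, None] - X[None]).sum(-1))
W = diffuse_and_fuse([rbf(X1), rbf(X2)])
print(W.shape)  # (10, 10)
```

The `(1 - alpha) * I` term keeps each subject anchored to itself, which is what prevents the diffusion from collapsing all rows to a single stationary distribution.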
Affiliation(s)
- Meiling Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Wei Shao
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Shuo Huang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
6.
Dai Y, Zou B, Zhu C, Li Y, Chen Z, Ji Z, Kui X, Zhang W. DE-JANet: A unified network based on dual encoder and joint attention for Alzheimer's disease classification using multi-modal data. Comput Biol Med 2023; 165:107396. [PMID: 37703717] [DOI: 10.1016/j.compbiomed.2023.107396]
Abstract
Structural magnetic resonance imaging (sMRI), which can reflect cerebral atrophy, plays an important role in the early detection of Alzheimer's disease (AD). However, the information provided by analyzing only the morphological changes in sMRI is relatively limited, and assessment of the degree of atrophy is subjective. It is therefore meaningful to combine sMRI with other clinical information to acquire complementary diagnostic information and achieve a more accurate classification of AD. Nevertheless, how to fuse these multi-modal data effectively remains challenging. In this paper, we propose DE-JANet, a unified AD classification network that integrates sMRI image data with non-image clinical data, such as age and Mini-Mental State Examination (MMSE) score, for more effective multi-modal analysis. DE-JANet consists of three key components: (1) a dual encoder module for extracting low-level features from the image and non-image data according to modality-specific encoding regularities, (2) a joint attention module for fusing multi-modal features, and (3) a token classification module for performing AD-related classification on the fused multi-modal features. DE-JANet is evaluated on the ADNI dataset, achieving mean accuracies of 0.9722 for AD classification and 0.9538 for mild cognitive impairment (MCI) classification, which is superior to existing methods and indicates advanced performance on AD-related diagnosis tasks.
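The joint-attention fusion step can be sketched as plain scaled dot-product cross-attention, where image tokens attend to embedded clinical variables. This is a minimal stand-in for the paper's joint attention module, which is more elaborate; shapes and values are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(img_tokens: np.ndarray, clin_tokens: np.ndarray) -> np.ndarray:
    """Cross-attention sketch: each image token attends over the clinical
    tokens and mixes in a weighted sum of them (with a residual connection).
    Shapes: img_tokens (n, d), clin_tokens (m, d)."""
    d = img_tokens.shape[1]
    scores = img_tokens @ clin_tokens.T / np.sqrt(d)   # (n, m) affinities
    weights = softmax(scores, axis=-1)                 # rows sum to 1
    return img_tokens + weights @ clin_tokens          # residual fusion

rng = np.random.default_rng(1)
img = rng.standard_normal((6, 8))      # 6 image tokens, dim 8
clin = rng.standard_normal((2, 8))     # e.g. embedded age and MMSE score
fused_tokens = joint_attention(img, clin)
print(fused_tokens.shape)  # (6, 8)
```

In a trained network the queries, keys, and values would pass through learned projection matrices; they are omitted here to keep the mechanism visible.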
Affiliation(s)
- Yulan Dai
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Chengzhang Zhu
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Yang Li
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Zhi Chen
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Zexin Ji
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Xiaoyan Kui
- School of Computer Science and Engineering, Central South University, Changsha, China
- Wensheng Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
7.
Moradi F, van den Berg M, Mirjebreili M, Kosten L, Verhoye M, Amiri M, Keliris GA. Early classification of Alzheimer's disease phenotype based on hippocampal electrophysiology in the TgF344-AD rat model. iScience 2023; 26:107454. [PMID: 37599835] [PMCID: PMC10432721] [DOI: 10.1016/j.isci.2023.107454]
Abstract
The hippocampus plays a vital role in navigation, learning, and memory, and is affected in Alzheimer's disease (AD). This study investigated the classification of AD-transgenic rats versus wild-type littermates using electrophysiological activity recorded from the hippocampus at an early, presymptomatic stage of the disease (6 months old) in the TgF344-AD rat model. The recorded signals were filtered into low-frequency (local field potential, LFP) and high-frequency (spiking activity) signals, and machine learning classifiers were employed to identify the rat genotype (TG vs. WT). Accurate classification was achieved by analyzing specific frequency bands of the low-frequency signals and calculating distance metrics between spike trains in the high-frequency signals. Gamma-band power emerged as a particularly valuable feature for classification, and combining information from the low- and high-frequency signals improved accuracy further. These findings provide valuable insights into the early-stage effects of AD on different regions of the hippocampus.
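A band-power feature like the gamma power used above can be estimated from a simple periodogram. The sketch below is a generic illustration (not the study's pipeline); the band edges and sampling rate are assumptions for the example.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Average periodogram power of `signal` in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[mask].mean())

fs = 1000                               # assumed sampling rate, Hz
t = np.arange(fs) / fs                  # 1 second of signal
lfp = np.sin(2 * np.pi * 40 * t)        # pure 40 Hz "gamma" oscillation
gamma = band_power(lfp, fs, 30, 50)     # gamma band (assumed 30-50 Hz)
theta = band_power(lfp, fs, 4, 8)       # theta band (assumed 4-8 Hz)
print(gamma > theta)  # True
```

In practice one would use Welch's method (averaged, windowed segments) rather than a raw periodogram to reduce variance, but the feature definition is the same.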
Affiliation(s)
- Faraz Moradi
- Faculty of Engineering, University of Ottawa, Ottawa, ON, Canada
- Monica van den Berg
- Bio-Imaging Lab, University of Antwerp, Antwerp, Belgium
- μNEURO Research Centre of Excellence, University of Antwerp, Antwerp, Belgium
- Lauren Kosten
- Bio-Imaging Lab, University of Antwerp, Antwerp, Belgium
- μNEURO Research Centre of Excellence, University of Antwerp, Antwerp, Belgium
- Marleen Verhoye
- Bio-Imaging Lab, University of Antwerp, Antwerp, Belgium
- μNEURO Research Centre of Excellence, University of Antwerp, Antwerp, Belgium
- Mahmood Amiri
- Medical Technology Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran
- Georgios A. Keliris
- Bio-Imaging Lab, University of Antwerp, Antwerp, Belgium
- μNEURO Research Centre of Excellence, University of Antwerp, Antwerp, Belgium
- Institute of Computer Science, Foundation for Research & Technology - Hellas, Heraklion, Crete, Greece
8.
Odusami M, Maskeliūnas R, Damaševičius R. Pareto Optimized Adaptive Learning with Transposed Convolution for Image Fusion Alzheimer's Disease Classification. Brain Sci 2023; 13:1045. [PMID: 37508977] [PMCID: PMC10377099] [DOI: 10.3390/brainsci13071045]
Abstract
Alzheimer's disease (AD) is a neurological condition that gradually weakens the brain and impairs cognition and memory. Multimodal imaging techniques have become increasingly important in the diagnosis of AD because they provide a more complete picture of the brain changes that occur over the course of the disease, helping to monitor its progression. Medical image fusion is crucial in that it combines data from various image modalities into a single, better-understood output. The present study explores the feasibility of employing Pareto-optimized deep learning methodologies to integrate Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images using pre-existing models, namely the Visual Geometry Group (VGG) 11, VGG16, and VGG19 architectures. Morphological operations are carried out on the MRI and PET images using Analyze 14.0 software, after which the PET images are rotated into the desired alignment with the MRI images using the GNU Image Manipulation Program (GIMP). To enhance the network's performance, a transposed convolution layer is applied to the previously extracted feature maps before image fusion, generating the feature maps and fusion weights that drive the fusion process. This investigation assesses the efficacy of the three VGG models in capturing significant features from the MRI and PET data, with the models' hyperparameters tuned using Pareto optimization. Performance is evaluated on the ADNI dataset using the Structural Similarity Index Method (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean-Square Error (MSE), and Entropy (E). Experimental results show that VGG19 outperforms VGG16 and VGG11, with average SSIM of 0.668, 0.802, and 0.664 for the CN, AD, and MCI stages from ADNI (MRI modality), respectively, and 0.669, 0.815, and 0.660 for the CN, AD, and MCI stages from ADNI (PET modality), respectively.
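The transposed convolution used here to upsample feature maps before fusion has a simple scatter-add semantics, shown below in 1-D for clarity. This is a generic illustration of the operator, not the paper's layer; the kernel and stride are arbitrary.

```python
import numpy as np

def transposed_conv1d(x: np.ndarray, kernel, stride: int = 2) -> np.ndarray:
    """Minimal 1-D transposed convolution (a.k.a. deconvolution):
    each input element scatters a scaled copy of the kernel into the
    output at stride-spaced positions, upsampling the feature map.
    Output length = (len(x) - 1) * stride + len(kernel)."""
    kernel = np.asarray(kernel, dtype=float)
    k = len(kernel)
    out = np.zeros((len(x) - 1) * stride + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out

print(transposed_conv1d(np.array([1.0, 2.0]), [1.0, 1.0], stride=2))
# [1. 1. 2. 2.]
```

The overlapping scatter-add is also where the well-known checkerboard artifacts of transposed convolutions come from when the stride does not divide the kernel size.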
Affiliation(s)
- Modupe Odusami
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
9.
Zhao Z, Chuah JH, Lai KW, Chow CO, Gochoo M, Dhanalakshmi S, Wang N, Bao W, Wu X. Conventional machine learning and deep learning in Alzheimer's disease diagnosis using neuroimaging: A review. Front Comput Neurosci 2023; 17:1038636. [PMID: 36814932] [PMCID: PMC9939698] [DOI: 10.3389/fncom.2023.1038636]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive impairment in elderly people. The irreversible and devastating cognitive decline places large burdens on patients and society. So far there is no effective treatment that can cure AD, but its early-stage progression can be slowed, so early and accurate detection is critical. In recent years, deep-learning-based approaches have achieved great success in AD diagnosis. The main objective of this paper is to review popular conventional machine learning and deep learning methods used for the classification and prediction of AD using Magnetic Resonance Imaging (MRI), including support vector machines (SVM), random forests (RF), convolutional neural networks (CNN), autoencoders, and transformers. The paper also reviews widely used feature extractors and the different input forms of convolutional neural networks. Finally, it discusses challenges such as class imbalance and data leakage, along with trade-offs and suggestions concerning pre-processing techniques, deep learning versus conventional machine learning methods, new techniques, and input-type selection.
Affiliation(s)
- Zhen Zhao
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Joon Huang Chuah
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Chee-Onn Chow
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Munkhjargal Gochoo
- Department of Computer Science and Software Engineering, United Arab Emirates University, Al Ain, United Arab Emirates
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Chennai, India
- Na Wang
- School of Automation, Guangdong Polytechnic Normal University, Guangzhou, China
- Wei Bao
- China Electronics Standardization Institute, Beijing, China
- Xiang Wu
- School of Medical Information Engineering, Xuzhou Medical University, Xuzhou, China
10.
Chen Z, Liu Y, Zhang Y, Li Q. Orthogonal latent space learning with feature weighting and graph learning for multimodal Alzheimer's disease diagnosis. Med Image Anal 2023; 84:102698. [PMID: 36462372] [DOI: 10.1016/j.media.2022.102698]
Abstract
Recent studies have shown that multimodal neuroimaging data provide complementary information about the brain, and latent space-based methods have achieved promising results in fusing multimodal data for Alzheimer's disease (AD) diagnosis. However, most existing methods treat all features equally and adopt nonorthogonal projections to learn the latent space, which cannot retain enough discriminative information in the latent space. Besides, they usually preserve the relationships among subjects in the latent space based on a similarity graph constructed on the original features for performance boosting, yet noise and redundant features significantly corrupt such a graph. To address these limitations, we propose an Orthogonal Latent space learning with Feature weighting and Graph learning (OLFG) model for multimodal AD diagnosis. Specifically, we map multiple modalities into a common latent space by an orthogonally constrained projection to capture the discriminative information for AD diagnosis. Then, a feature weighting matrix is utilized to adaptively sort the importance of features for AD diagnosis. Besides, we devise a regularization term with a learned graph to preserve the local structure of the data in the latent space, and integrate the graph construction into the learning process to accurately encode the relationships among samples. Instead of constructing a similarity graph for each modality, we learn a joint graph for multiple modalities to capture the correlations among them. Finally, the representations in the latent space are projected into the target space to perform AD diagnosis. An alternating optimization algorithm with proven convergence is developed to solve the optimization objective. Extensive experimental results show the effectiveness of the proposed method.
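The orthogonality constraint at the heart of this method can be illustrated with a projection whose columns are orthonormal (W.T @ W = I), so latent directions stay decorrelated and no discriminative direction is double-counted. Here W is simply a random orthogonal matrix from a QR factorization; OLFG itself learns W under the same constraint, so this is a sketch of the constraint, not the algorithm.

```python
import numpy as np

def orthogonal_latent(X: np.ndarray, latent_dim: int, seed: int = 0):
    """Map features X (n, d) into a latent space through a projection W
    (d, latent_dim) with orthonormal columns. The reduced QR of a random
    Gaussian matrix yields such a W; a learning method would instead
    optimise W over the Stiefel manifold of orthonormal frames."""
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], latent_dim)))
    return X @ W, W

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))   # 20 subjects, 10 multimodal features
Z, W = orthogonal_latent(X, latent_dim=4)
print(Z.shape)  # (20, 4)
```

Because W.T @ W = I, the projection preserves inner products within the latent subspace, which is the property the abstract appeals to when arguing that orthogonal projections retain more discriminative information than nonorthogonal ones.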
Affiliation(s)
- Zhi Chen
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yongguo Liu
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yun Zhang
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Qiaoqin Li
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
11.
Khan R, Akbar S, Mehmood A, Shahid F, Munir K, Ilyas N, Asif M, Zheng Z. A transfer learning approach for multiclass classification of Alzheimer's disease using MRI images. Front Neurosci 2023; 16:1050777. [PMID: 36699527] [PMCID: PMC9869687] [DOI: 10.3389/fnins.2022.1050777]
Abstract
Alzheimer's disease (AD) is a progressive degenerative disease affecting the elderly population all over the world. Detecting the disease at an early stage, in the absence of large-scale annotated datasets, is crucial for clinical treatment and prevention. In this study, we propose a transfer learning-based approach to classify the various stages of AD. The proposed model can distinguish between normal control (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and AD. We apply tissue segmentation to extract the gray matter from MRI scans obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and use this gray matter to tune a pre-trained VGG architecture while freezing the features learned on the ImageNet database. This is achieved by adding a layer with step-wise freezing of the existing blocks in the network, which not only assists transfer learning but also contributes to learning new features efficiently. Extensive experiments demonstrate the superiority of the proposed approach.
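The step-wise freezing idea — start with all pretrained blocks frozen, then progressively unfreeze deeper blocks for fine-tuning — can be expressed as a simple schedule. The function and its parameters below are hypothetical (the paper does not specify this exact schedule); in a framework like PyTorch the returned flags would be applied to each block's `requires_grad`.

```python
def stepwise_freeze_schedule(n_blocks: int, epoch: int, unfreeze_every: int = 5):
    """Step-wise freezing sketch: at epoch 0 only the deepest pretrained
    block is trainable; every `unfreeze_every` epochs one more block
    (working from deepest to shallowest) is unfrozen. Returns a list of
    trainable flags, index 0 = shallowest block."""
    unfrozen = min(n_blocks, epoch // unfreeze_every + 1)
    return [i >= n_blocks - unfrozen for i in range(n_blocks)]

# A 5-block VGG-style backbone across training
print(stepwise_freeze_schedule(5, epoch=0))   # only the last block trains
print(stepwise_freeze_schedule(5, epoch=12))  # three deepest blocks train
```

Shallow blocks encode generic edges and textures that transfer well from ImageNet, so unfreezing them last (or never) is the usual rationale behind such schedules.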
Affiliation(s)
- Rizwan Khan
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, China (*Correspondence: Rizwan Khan)
- Saeed Akbar
- School of Computer Science, Huazhong University of Science and Technology, Wuhan, China
- Atif Mehmood
- Division of Biomedical Imaging, Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Computer Science, National University of Modern Languages, Islamabad, Pakistan
- Farah Shahid
- Department of Computer Science, University of Agriculture, Sub Campus Burewala-Vehari, Faisalabad, Pakistan
- Khushboo Munir
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, AB, Canada
- Naveed Ilyas
- Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- M. Asif
- Department of Radiology, Emory Brain Health Center-Neurosurgery, School of Medicine, Emory University, Atlanta, GA, United States
- Zhonglong Zheng
- Department of Computer Science and Mathematics, Zhejiang Normal University, Jinhua, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China
12
Subramanyam Rallabandi V, Seetharaman K. Classification of cognitively normal controls, mild cognitive impairment and Alzheimer’s disease using transfer learning approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
13
Kim JS, Han JW, Bae JB, Moon DG, Shin J, Kong JE, Lee H, Yang HW, Lim E, Kim JY, Sunwoo L, Cho SJ, Lee D, Kim I, Ha SW, Kang MJ, Suh CH, Shim WH, Kim SJ, Kim KW. Deep learning-based diagnosis of Alzheimer's disease using brain magnetic resonance images: an empirical study. Sci Rep 2022; 12:18007. [PMID: 36289390 PMCID: PMC9606115 DOI: 10.1038/s41598-022-22917-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Accepted: 10/20/2022] [Indexed: 01/24/2023] Open
Abstract
The limited accessibility of medical specialists for Alzheimer's disease (AD) can make obtaining an accurate diagnosis in a timely manner challenging and may influence prognosis. We investigated whether VUNO Med-DeepBrain AD (DBAD), which uses a deep learning algorithm, can be employed as a decision-support service for the diagnosis of AD. This study included 98 elderly participants aged 60 years or older who visited the Seoul Asan Medical Center and the Korea Veterans Health Service. We administered a standard diagnostic assessment for diagnosing AD. DBAD and three panels of medical experts (ME) diagnosed participants with normal cognition (NC) or AD using T1-weighted magnetic resonance imaging. The accuracy (87.1% for DBAD and 84.3% for ME), sensitivity (93.3% for DBAD and 80.0% for ME), and specificity (85.5% for both DBAD and ME) of DBAD and ME for diagnosing AD were comparable; however, DBAD trended higher than ME in every analysis. DBAD may support the clinical decisions of physicians who are not specialized in AD; this may enhance the accessibility of AD diagnosis and treatment.
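The reported accuracy, sensitivity, and specificity are standard confusion-matrix quantities; a minimal sketch (the counts below are illustrative only, not the study's actual confusion matrix):

```python
# Standard confusion-matrix metrics behind the reported numbers.

def diagnostic_metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # fraction of AD cases correctly flagged
    specificity = tn / (tn + fp)  # fraction of NC cases correctly cleared
    return accuracy, sensitivity, specificity

# e.g. 28 of 30 AD cases caught, 59 of 69 NC cases cleared (hypothetical counts)
acc, sen, spe = diagnostic_metrics(tp=28, fn=2, tn=59, fp=10)
```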
Affiliation(s)
- Jun Sung Kim
- Institute of Human Behavioral Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Ji Won Han
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea; Department of Psychiatry, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jong Bin Bae
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Dong Gyu Moon
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Jin Shin
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Juhee Eliana Kong
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Hyungji Lee
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea
- Hee Won Yang
- Department of Psychiatry, Chungnam National University Hospital, Daejeon, Republic of Korea
- Eunji Lim
- Department of Neuropsychiatry, Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Jun Yup Kim
- Department of Neurology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Injoong Kim
- Department of Radiology, Veterans Health Service Medical Center, Seoul, Republic of Korea
- Sang Won Ha
- Department of Neurology, Veterans Health Service Medical Center, Seoul, Republic of Korea
- Min Ju Kang
- Department of Neurology, Veterans Health Service Medical Center, Seoul, Republic of Korea
- Chong Hyun Suh
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Woo Hyun Shim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Joon Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Ki Woong Kim
- Institute of Human Behavioral Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Department of Neuropsychiatry, Seoul National University Bundang Hospital, 82, Gumi-Ro 173 Beon-Gil, Bundang-Gu, Seongnam-Si, Gyeonggi-Do 13620, Republic of Korea; Department of Psychiatry, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Republic of Korea
14
Zhang J, Zhou L, Wang L, Liu M, Shen D. Diffusion Kernel Attention Network for Brain Disorder Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2814-2827. [PMID: 35471877 DOI: 10.1109/tmi.2022.3170701] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Constructing and analyzing functional brain networks (FBN) has become a promising approach to brain disorder classification. However, the conventional successive construct-and-analyze process limits performance due to the lack of interaction and adaptivity among the subtasks in the process. Recently, Transformer has demonstrated remarkable performance in various tasks, attributed to its effective attention mechanism for modeling complex feature relationships. In this paper, for the first time, we develop Transformer for integrated FBN modeling, analysis, and brain disorder classification with rs-fMRI data by proposing a Diffusion Kernel Attention Network to address the specific challenges. Specifically, directly applying Transformer does not necessarily admit optimal performance in this task due to its extensive parameters in the attention module against the limited training samples usually available. To address this issue, we propose to use kernel attention to replace the original dot-product attention module in Transformer. This significantly reduces the number of parameters to train, and thus alleviates the small-sample issue, while introducing a non-linear attention mechanism to model complex functional connections. Another limitation of Transformer for FBN applications is that it only considers pair-wise interactions between directly connected brain regions and ignores important indirect connections. Therefore, we further explore a diffusion process over the kernel attention to incorporate wider interactions among indirectly connected brain regions. Extensive experiments are conducted on the ADHD-200 dataset for ADHD classification and on the ADNI dataset for Alzheimer's disease classification, and the results demonstrate the superior performance of the proposed method over competing methods.
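The kernel-attention-plus-diffusion idea can be sketched in a few lines (a generic illustration under assumed Gaussian-kernel and truncated power-series choices; not the authors' implementation):

```python
import numpy as np

# Generic sketch: a parameter-free kernel replaces the learned Wq/Wk
# projections of dot-product attention, and a diffusion over the resulting
# attention matrix lets indirectly connected regions interact.

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kernel_attention(X, gamma=0.1):
    # Gaussian kernel on pairwise region distances: no projection matrices
    # to train, which shrinks the parameter count for small samples.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return softmax(-gamma * sq)

def diffused_attention(A, steps=3, beta=0.5):
    # A + beta*A^2 + beta^2*A^3 + ... accumulates multi-hop paths, covering
    # indirect connections; rows are then renormalised.
    D = sum(beta ** k * np.linalg.matrix_power(A, k + 1) for k in range(steps))
    return D / D.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))  # 8 brain regions, 16-dim features each
A = kernel_attention(X)
A_diff = diffused_attention(A)
```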
15
Sun W, Ji S, Cambria E, Marttinen P. Multitask Balanced and Recalibrated Network for Medical Code Prediction. ACM T INTEL SYST TEC 2022. [DOI: 10.1145/3563041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Human coders assign standardized medical codes to clinical documents generated during patients' hospitalization, a process that is error-prone and labor-intensive. Automated medical coding approaches have been developed using machine learning methods such as deep neural networks. Nevertheless, automated medical coding remains challenging because of complex code associations, noise in lengthy documents, and the imbalanced class problem. We propose a novel neural network, called the Multitask Balanced and Recalibrated Neural Network, to solve these issues. Notably, the multitask learning scheme shares relationship knowledge between different coding branches to capture code associations. A recalibrated aggregation module is developed by cascading convolutional blocks to extract high-level semantic features that mitigate the impact of noise in documents; the cascaded structure of the recalibrated module also benefits learning from lengthy notes. To address the imbalanced class problem, we deploy the focal loss to redistribute attention between low- and high-frequency medical codes. Experimental results show that our proposed model outperforms competitive baselines on a real-world clinical dataset, MIMIC-III.
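The focal loss mentioned for the class-imbalance problem has the standard form FL(p) = -α(1-p)^γ log p; a minimal sketch (generic formulation, not the paper's code):

```python
import numpy as np

# Focal loss sketch: p is the predicted probability of the true code.
# Easy (well-classified) examples are down-weighted by (1-p)**gamma, so
# training attention shifts toward rare, low-frequency codes.

def cross_entropy(p_true):
    return -np.log(np.clip(p_true, 1e-12, 1.0))

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    p_true = np.clip(p_true, 1e-12, 1.0)
    return -alpha * (1.0 - p_true) ** gamma * np.log(p_true)

easy = focal_loss(0.95)  # confidently correct prediction: heavily damped
hard = focal_loss(0.10)  # badly missed prediction: barely damped
```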
16
A Practical Multiclass Classification Network for the Diagnosis of Alzheimer’s Disease. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Patients who have Alzheimer's disease (AD) pass through several irreversible stages, which ultimately result in the patient's death. Because the disease is incurable, it is crucial to understand and detect AD at an early stage to slow its progression. Diagnostic techniques are primarily based on magnetic resonance imaging (MRI) and expensive, high-dimensional 3D imaging data. Classic methods can hardly discriminate among the nearly identical brain patterns of various age groups. Recent deep learning-based methods can contribute to detecting the various stages of AD but require large-scale datasets and face several challenges when using 3D volumes directly. Existing deep learning work is mainly focused on binary classification, and detecting multiple stages with these methods is challenging. In this work, we propose a deep learning-based multiclass classification method to distinguish among various stages for the early diagnosis of Alzheimer's. The proposed method handles data-shortage challenges by augmentation and classifies the 2D images obtained after efficient pre-processing of the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our method achieves an accuracy of 98.9% with an F1 score of 96.3%. Extensive experiments are performed, and the overall results demonstrate that the proposed method outperforms state-of-the-art methods.
17
Tabarestani S, Eslami M, Cabrerizo M, Curiel RE, Barreto A, Rishe N, Vaillancourt D, DeKosky ST, Loewenstein DA, Duara R, Adjouadi M. A Tensorized Multitask Deep Learning Network for Progression Prediction of Alzheimer's Disease. Front Aging Neurosci 2022; 14:810873. [PMID: 35601611 PMCID: PMC9120529 DOI: 10.3389/fnagi.2022.810873] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 03/14/2022] [Indexed: 11/13/2022] Open
Abstract
With the advances in machine learning for the diagnosis of Alzheimer's disease (AD), most studies have focused on either identifying the subject's status through classification algorithms or on predicting their cognitive scores through regression methods, neglecting the potential association between these two tasks. Motivated by the need to enhance the prospects for early diagnosis along with the ability to predict future disease states, this study proposes a deep neural network based on modality fusion, kernelization, and tensorization that performs multiclass classification and longitudinal regression simultaneously within a unified multitask framework. This relationship between multiclass classification and longitudinal regression is found to boost the efficacy of the final model in dealing with both tasks. Different multimodality scenarios are investigated, and complementary aspects of the multimodal features are exploited to simultaneously delineate the subject's label and predict related cognitive scores at future timepoints using baseline data. The main intent in this multitask framework is to consolidate the highest accuracy possible in terms of precision, sensitivity, F1 score, and area under the curve (AUC) in the multiclass classification task, while maintaining the highest similarity in the MMSE score as measured through the correlation coefficient and the RMSE for all time points under the prediction task, with both tasks run simultaneously under the same set of hyperparameters. The overall accuracy for multiclass classification of the proposed KTMnet method is 66.85 ± 3.77. The prediction results show an average RMSE of 2.32 ± 0.52 and a correlation of 0.71 ± 5.98 for predicting MMSE throughout the time points. These results are compared to state-of-the-art techniques reported in the literature.
A discovery from the multitasking in this consolidated machine learning framework is that the set of hyperparameters that optimizes the prediction results is not necessarily the same as the one that optimizes multiclass classification. In other words, there is a breakpoint beyond which further enhancing the results of one task degrades the accuracy of the other.
Affiliation(s)
- Solale Tabarestani
- Center for Advanced Technology and Education, Florida International University, Miami, FL, United States
- Mohammad Eslami
- Harvard Ophthalmology AI Lab and Harvard Medical School, Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, United States
- Mercedes Cabrerizo
- Center for Advanced Technology and Education, Florida International University, Miami, FL, United States
- Rosie E. Curiel
- Center for Cognitive Neuroscience and Aging, Psychiatry and Behavioral Sciences, University of Miami School of Medicine, Miami, FL, United States
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
- Armando Barreto
- Center for Advanced Technology and Education, Florida International University, Miami, FL, United States
- Naphtali Rishe
- Center for Advanced Technology and Education, Florida International University, Miami, FL, United States
- David Vaillancourt
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
- Department of Neurology, University of Florida, Gainesville, FL, United States
- Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, United States
- Steven T. DeKosky
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
- Department of Neurology, University of Florida, Gainesville, FL, United States
- David A. Loewenstein
- Center for Cognitive Neuroscience and Aging, Psychiatry and Behavioral Sciences, University of Miami School of Medicine, Miami, FL, United States
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
- Wien Center for Alzheimer’s Disease and Memory Disorders, Mount Sinai Medical Center, Miami Beach, FL, United States
- Ranjan Duara
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
- Wien Center for Alzheimer’s Disease and Memory Disorders, Mount Sinai Medical Center, Miami Beach, FL, United States
- Malek Adjouadi
- Center for Advanced Technology and Education, Florida International University, Miami, FL, United States
- Florida Alzheimer’s Disease Research Center, University of Florida, Gainesville, FL, United States
18
Tufail AB, Ullah K, Khan RA, Shakir M, Khan MA, Ullah I, Ma YK, Ali MS. On Improved 3D-CNN-Based Binary and Multiclass Classification of Alzheimer's Disease Using Neuroimaging Modalities and Data Augmentation Methods. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1302170. [PMID: 35186220 PMCID: PMC8856791 DOI: 10.1155/2022/1302170] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 01/17/2022] [Accepted: 01/20/2022] [Indexed: 12/15/2022]
Abstract
Alzheimer's disease (AD) is an irreversible illness of the brain impacting the functional and daily activities of the elderly population worldwide. Neuroimaging modalities such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) measure the pathological changes in the brain associated with this disorder, especially in its early stages. Deep learning (DL) architectures such as Convolutional Neural Networks (CNNs) are successfully used in recognition, classification, segmentation, detection, and other domains for data interpretation. Data augmentation schemes work alongside DL techniques and may impact the final task performance positively or negatively. In this work, we study and compare the impact of three data augmentation techniques on the final performance of 3D CNN architectures for the early diagnosis of AD, on both binary and multiclass classification problems using MRI and PET neuroimaging modalities. We find the performance of random zoomed-in/out augmentation to be the best among all the augmentation methods, and observe that combining different augmentation methods may deteriorate performance on the classification tasks. Furthermore, architecture engineering has less impact on the final classification performance than the data manipulation schemes, and deeper architectures may not provide performance advantages over their shallower counterparts. Finally, these augmentation schemes do not alleviate the class imbalance issue.
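Random zoomed-in/out augmentation can be sketched as follows (a 2D nearest-neighbour sketch for brevity; the paper operates on 3D volumes, and all function names here are hypothetical):

```python
import numpy as np

# Random zoom augmentation: zoom in = crop the centre and resize back up;
# zoom out = shrink and zero-pad back to the original shape.

def resize_nearest(img, out_h, out_w):
    rows = (np.arange(out_h) * img.shape[0] // out_h).astype(int)
    cols = (np.arange(out_w) * img.shape[1] // out_w).astype(int)
    return img[np.ix_(rows, cols)]

def random_zoom(img, rng, low=0.85, high=1.15):
    h, w = img.shape
    s = rng.uniform(low, high)
    if s >= 1.0:  # zoom in
        ch, cw = max(1, int(h / s)), max(1, int(w / s))
        top, left = (h - ch) // 2, (w - cw) // 2
        return resize_nearest(img[top:top + ch, left:left + cw], h, w)
    # zoom out
    sh, sw = max(1, int(h * s)), max(1, int(w * s))
    small = resize_nearest(img, sh, sw)
    out = np.zeros_like(img)
    top, left = (h - sh) // 2, (w - sw) // 2
    out[top:top + sh, left:left + sw] = small
    return out

rng = np.random.default_rng(0)
slice2d = rng.standard_normal((64, 64))
aug = random_zoom(slice2d, rng)  # same shape, randomly zoomed content
```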
Affiliation(s)
- Ahsan Bin Tufail
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Department of Electrical and Computer Engineering, COMSATS University Islamabad Sahiwal Campus, Sahiwal, Pakistan
- Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
- Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
- Mustafa Shakir
- Department of Electrical Engineering, Superior University, Lahore 54000, Pakistan
- Muhammad Abbas Khan
- Department of Electrical Engineering, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Balochistan 87300, Pakistan
- Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus 213022, China
- Yong-Kui Ma
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia-7003, Bangladesh
19
Grueso S, Viejo-Sobera R. Machine learning methods for predicting progression from mild cognitive impairment to Alzheimer's disease dementia: a systematic review. Alzheimers Res Ther 2021; 13:162. [PMID: 34583745 PMCID: PMC8480074 DOI: 10.1186/s13195-021-00900-w] [Citation(s) in RCA: 60] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 09/12/2021] [Indexed: 01/18/2023]
Abstract
BACKGROUND An increase in lifespan in our society is a double-edged sword that entails a growing number of patients with neurocognitive disorders, Alzheimer's disease being the most prevalent. Advances in medical imaging and computational power enable new methods for the early detection of neurocognitive disorders, with the goal of preventing or reducing cognitive decline. Computer-aided image analysis and early detection of changes in cognition are a promising approach for patients with mild cognitive impairment, sometimes a prodromal stage of Alzheimer's disease dementia. METHODS We conducted a systematic review, following PRISMA guidelines, of studies where machine learning was applied to neuroimaging data in order to predict whether patients with mild cognitive impairment might develop Alzheimer's disease dementia or remain stable. After removing duplicates, we screened 452 studies and selected 116 for qualitative analysis. RESULTS Most studies used magnetic resonance imaging (MRI) and positron emission tomography (PET) data, but also magnetoencephalography. The datasets were mainly extracted from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, with some exceptions. Regarding the algorithms used, the most common was the support vector machine, with a mean accuracy of 75.4%, but convolutional neural networks achieved a higher mean accuracy of 78.5%. Studies combining MRI and PET achieved overall better classification accuracy than studies that used only one neuroimaging technique. In general, the more complex models, such as those based on deep learning, combined with multimodal and multidimensional data (neuroimaging, clinical, cognitive, genetic, and behavioral), achieved the best performance. CONCLUSIONS Although the performance of the different methods still has room for improvement, the results are promising, and this methodology has great potential as a support tool for clinicians and healthcare professionals.
Affiliation(s)
- Sergio Grueso
- Cognitive NeuroLab, Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Rambla del Poblenou 156, 08018, Barcelona, Spain
- Raquel Viejo-Sobera
- Cognitive NeuroLab, Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Rambla del Poblenou 156, 08018, Barcelona, Spain
20
Zhang YY, Dong LX, Bao HL, Liu Y, An FM, Zhang GW. RETRACTED: Inhibition of interleukin-1β plays a protective role in Alzheimer's disease by promoting microRNA-9-5p and downregulating targeting protein for xenopus kinesin-like protein 2. Int Immunopharmacol 2021; 97:107578. [PMID: 33892301 DOI: 10.1016/j.intimp.2021.107578] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Revised: 03/02/2021] [Accepted: 03/08/2021] [Indexed: 11/24/2022]
Abstract
This article has been retracted: please see the Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy). This article has been retracted at the request of the Editor-in-Chief. Concern was raised about the reliability of the Western blot results in Figs. 2C, 4C, and 5B+E, which appear to have the same eyebrow-shaped phenotype as many other publications tabulated here (https://docs.google.com/spreadsheets/d/149EjFXVxpwkBXYJOnOHb6RhAqT4a2llhj9LM60MBffM/edit#gid=0). The journal requested that the corresponding author comment on these concerns and provide the raw data. However, the authors were not able to satisfactorily fulfil this request, and therefore the Editor-in-Chief decided to retract the article.
Affiliation(s)
- Yan-Yun Zhang
- College of Nursing, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia Autonomous Region, PR China; Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China
- Li-Xia Dong
- College of Nursing, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia Autonomous Region, PR China; Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China
- Hai-Lan Bao
- College of Nursing, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia Autonomous Region, PR China; Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China
- Yu Liu
- College of Nursing, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia Autonomous Region, PR China; Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China
- Feng-Mao An
- Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China; Inner Mongolia Key Laboratory of Mongolian Medicine Pharmacology for Cardio-Cerebral Vascular System, Tongliao 028000, Inner Mongolia, PR China
- Guo-Wei Zhang
- College of Nursing, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia Autonomous Region, PR China; Institute of Dementia, Inner Mongolia University for Nationalities, Tongliao 028000, Inner Mongolia, PR China
21
Zhang T, Liao Q, Zhang D, Zhang C, Yan J, Ngetich R, Zhang J, Jin Z, Li L. Predicting MCI to AD Conversation Using Integrated sMRI and rs-fMRI: Machine Learning and Graph Theory Approach. Front Aging Neurosci 2021; 13:688926. [PMID: 34421570 PMCID: PMC8375594 DOI: 10.3389/fnagi.2021.688926] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 06/23/2021] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Graph theory and machine learning have been shown to be effective ways of classifying different stages of Alzheimer's disease (AD). Most previous studies have only focused on inter-subject classification with single-mode neuroimaging data. However, whether this classification can truly reflect the changes in the structure and function of the brain region in disease progression remains unverified. In the current study, we aimed to evaluate the classification framework, which combines structural Magnetic Resonance Imaging (sMRI) and resting-state functional Magnetic Resonance Imaging (rs-fMRI) metrics, to distinguish mild cognitive impairment non-converters (MCInc)/AD from MCI converters (MCIc) by using graph theory and machine learning. METHODS With the intra-subject (MCInc vs. MCIc) and inter-subject (MCIc vs. AD) design, we employed cortical thickness features, structural brain network features, and sub-frequency (full-band, slow-4, slow-5) functional brain network features for classification. Three feature selection methods [random subset feature selection algorithm (RSFS), minimal redundancy maximal relevance (mRMR), and sparse linear regression feature selection algorithm based on stationary selection (SS-LR)] were used respectively to select discriminative features in the iterative combinations of MRI and network measures. Then a support vector machine (SVM) classifier with nested cross-validation was employed for classification. We also compared the performance of multiple classifiers (Random Forest, K-nearest neighbor, Adaboost, SVM) and verified the reliability of our results by upsampling. RESULTS We found that in the classifications of MCIc vs. MCInc and MCIc vs. AD, the proposed RSFS algorithm achieved higher accuracies (84.71% and 89.80%, respectively) than the other algorithms, and the high-sensitivity brain regions found for the two classification groups were inconsistent. Specifically, in MCIc vs. MCInc, the high-sensitivity brain regions associated with both structural and functional features included frontal, temporal, caudate, entorhinal, parahippocampal, and calcarine fissure and surrounding cortex, while in MCIc vs. AD, the high-sensitivity brain regions associated only with functional features included frontal, temporal, thalamus, olfactory, and angular. CONCLUSIONS These results suggest that our proposed method could effectively predict the conversion of MCI to AD, and the inconsistency of specific brain regions provides novel insight for clinical AD diagnosis.
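The nested cross-validation protocol described in METHODS can be sketched as follows (a generic skeleton, not the study's code; a toy nearest-centroid rule stands in for the SVM, and the `params` values are hypothetical):

```python
import numpy as np

# Nested CV: the outer loop estimates generalisation accuracy; the inner loop
# selects a hyperparameter without ever touching the outer test fold.

def kfold_indices(n, k, rng):
    return np.array_split(rng.permutation(n), k)

def nested_cv(X, y, fit_score, params, outer_k=5, inner_k=3, seed=0):
    rng = np.random.default_rng(seed)
    outer_scores = []
    for test_idx in kfold_indices(len(y), outer_k, rng):
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        best_p, best_s = None, -np.inf
        for p in params:  # inner model selection on the training folds only
            scores = []
            for val_idx in np.array_split(rng.permutation(train_idx), inner_k):
                fit_idx = np.setdiff1d(train_idx, val_idx)
                scores.append(fit_score(X[fit_idx], y[fit_idx],
                                        X[val_idx], y[val_idx], p))
            if np.mean(scores) > best_s:
                best_p, best_s = p, np.mean(scores)
        outer_scores.append(fit_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx], best_p))
    return float(np.mean(outer_scores))

def fit_score(Xtr, ytr, Xte, yte, p):
    # Toy stand-in for "train a classifier with parameter p, return accuracy".
    c0, c1 = Xtr[ytr == 0].mean(0) * p, Xtr[ytr == 1].mean(0) * p
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
acc = nested_cv(X, y, fit_score, params=[0.5, 1.0])
```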
Affiliation(s)
- Zhenlan Jin
- Key Laboratory for NeuroInformation of Ministry of Education, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Ling Li
- Key Laboratory for NeuroInformation of Ministry of Education, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
22
Wang M, Shao W, Hao X, Zhang D. Identify Complex Imaging Genetic Patterns via Fusion Self-Expressive Network Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1673-1686. [PMID: 33661732 DOI: 10.1109/tmi.2021.3063785] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
In the brain imaging genetic studies, it is a challenging task to estimate the association between quantitative traits (QTs) extracted from neuroimaging data and genetic markers such as single-nucleotide polymorphisms (SNPs). Most of the existing association studies are based on the extensions of sparse canonical correlation analysis (SCCA) for the identification of complex bi-multivariate associations, which can take the specific structure and group information into consideration. However, they often take the original data as input without considering its underlying complex multi-subspace structure, which will deteriorate the performance of the following integrative analysis. Accordingly, in this paper, the self-expressive property is exploited for the reconstruction of the original data before the association analysis, which can well describe the similarity structure. Specifically, we first apply the within-class similarity information to construct self-expressive networks by sparse representation. Then, we use the fusion method to iteratively fuse the self-expressive networks from multi-modality brain phenotypes into one network. Finally, we calculate the imaging genetic association based on the fused self-expressive network. We conduct the experiments on both single-modality and multi-modality phenotype data. Related experimental results validate that our method can not only better estimate the potential association between genetic markers and quantitative traits but also identify consistent multi-modality imaging genetic biomarkers to guide the interpretation of Alzheimer's disease.
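The first step the abstract describes — building a sparse self-expressive network in which each sample is reconstructed from the others — can be sketched with a simple proximal-gradient (ISTA) solver; the solver choice and parameters are illustrative assumptions, and the multi-modality fusion step is omitted.

```python
import numpy as np

def sparse_self_expression(X, lam=0.1, n_iter=300):
    """Sparse self-expressive coding: X[:, i] ~ X @ C[:, i] with C[i, i] = 0.

    Minimizes 0.5*||X - XC||_F^2 + lam*||C||_1 by ISTA, then symmetrizes
    |C| into an affinity network. X has shape (features, samples)."""
    d, n = X.shape
    C = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)        # 1 / Lipschitz constant
    for _ in range(n_iter):
        C = C - step * (X.T @ (X @ C - X))          # gradient step on the quadratic
        C = np.sign(C) * np.maximum(np.abs(C) - step * lam, 0.0)  # L1 soft threshold
        np.fill_diagonal(C, 0.0)                    # forbid trivial self-representation
    A = np.abs(C)
    return 0.5 * (A + A.T)                          # symmetric self-expressive network

# Toy data: two groups of three near-duplicate samples each
rng = np.random.default_rng(0)
b1, b2 = rng.normal(size=10), rng.normal(size=10)
cols = [b1 + 0.01 * rng.normal(size=10) for _ in range(3)] + \
       [b2 + 0.01 * rng.normal(size=10) for _ in range(3)]
W = sparse_self_expression(np.stack(cols, axis=1))
```

Samples drawn from the same subspace end up reconstructing each other, so the resulting affinity matrix reflects the underlying multi-subspace structure the paper exploits.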
23
Ning Z, Xiao Q, Feng Q, Chen W, Zhang Y. Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1632-1645. [PMID: 33651685 DOI: 10.1109/tmi.2021.3063150] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) has been prevalent for accurate identification of Alzheimer's disease (AD) by providing complementary structural and functional information. However, most of the existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, how to overcome the overfitting issue caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and the shared space. Within this shared space, we utilize several relational regularizers (including feature-feature, feature-label, and sample-sample regularizers) and auxiliary regularizers to encourage learning the underlying associations inherent in multi-modal data and to alleviate overfitting, respectively. Next, we project the shared representations into the target space for AD diagnosis. To validate the effectiveness of the proposed approach, we conduct extensive experiments on two independent datasets (i.e., ADNI-1 and ADNI-2); the experimental results demonstrate that our proposed method outperforms several state-of-the-art methods.
24
Jung W, Jun E, Suk HI. Deep recurrent model for individualized prediction of Alzheimer's disease progression. Neuroimage 2021; 237:118143. [PMID: 33991694 DOI: 10.1016/j.neuroimage.2021.118143] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 03/15/2021] [Accepted: 04/13/2021] [Indexed: 01/27/2023] Open
Abstract
Alzheimer's disease (AD) is known as one of the major causes of dementia and is characterized by slow progression over several years, with no treatments or available medicines. In this regard, there have been efforts to identify the risk of developing AD at the earliest possible stage. While many of the previous works considered cross-sectional analysis, more recent studies have focused on the diagnosis and prognosis of AD with longitudinal or time series data in a way of disease progression modeling. Under the same problem settings, in this work, we propose a novel computational framework that can predict the phenotypic measurements of MRI biomarkers and trajectories of clinical status along with cognitive scores at multiple future time points. However, in handling time series data, one generally faces many unexpected missing observations. In regard to such an unfavorable situation, we define a secondary problem of estimating those missing values and tackle it in a systematic way by taking account of temporal and multivariate relations inherent in time series data. Concretely, we propose a deep recurrent network that jointly tackles the four problems of (i) missing value imputation, (ii) phenotypic measurement forecasting, (iii) trajectory estimation of a cognitive score, and (iv) clinical status prediction of a subject based on his/her longitudinal imaging biomarkers. Notably, the learnable parameters of all the modules in our predictive models are trained in an end-to-end manner by taking the morphological features and cognitive scores as input, with our circumspectly defined loss function. In our experiments on The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) challenge cohort, we measured performance for various metrics and compared our method to competing methods in the literature. Exhaustive analyses and ablation studies were also conducted to better confirm the effectiveness of our method.
Affiliation(s)
- Wonsik Jung
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Eunji Jun
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
25
Yin L, Cao Z, Wang K, Tian J, Yang X, Zhang J. A review of the application of machine learning in molecular imaging. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:825. [PMID: 34268438 PMCID: PMC8246214 DOI: 10.21037/atm-20-5877] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 10/02/2020] [Indexed: 12/12/2022]
Abstract
Molecular imaging (MI) uses imaging methods to reflect molecular-level changes in the living state and to study the biological behavior of the imaged target qualitatively and quantitatively. Optical molecular imaging (OMI) and nuclear medical imaging are two key research fields of MI. OMI technology refers to the optical information generated by the imaging target (such as a tumor) due to drug intervention and other causes. By collecting this optical information, researchers can track the motion trajectory of the imaging target at the molecular level. Owing to its high specificity and sensitivity, OMI has been widely used in preclinical research and clinical surgery. Nuclear medical imaging mainly detects ionizing radiation emitted by radioactive substances. It can provide molecular information for the early diagnosis, effective treatment, and basic research of diseases, and has become one of the frontiers and hot topics of medicine worldwide. Both OMI and nuclear medical imaging technology require a great deal of data processing and analysis. In recent years, artificial intelligence technology, especially neural-network-based machine learning (ML) technology, has been widely used in MI because of its powerful data processing capability. It provides a feasible strategy for dealing with the large and complex data that MI requires. In this review, we focus on the applications of ML methods in OMI and nuclear medical imaging.
Affiliation(s)
- Lin Yin
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Zhen Cao
- Peking University First Hospital, Beijing, China
- Kun Wang
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China.,School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.,Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
- Xing Yang
- Peking University First Hospital, Beijing, China
26
Messina D, Borrelli P, Russo P, Salvatore M, Aiello M. Voxel-Wise Feature Selection Method for CNN Binary Classification of Neuroimaging Data. Front Neurosci 2021; 15:630747. [PMID: 33958980 PMCID: PMC8093438 DOI: 10.3389/fnins.2021.630747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 02/26/2021] [Indexed: 11/23/2022] Open
Abstract
Voxel-wise group analysis is presented as a novel feature selection (FS) technique for a deep learning (DL) approach to brain imaging data classification. The method, based on a voxel-wise two-sample t-test and denoted as t-masking, is integrated into the learning procedure as a data-driven FS strategy. t-Masking has been introduced in a convolutional neural network (CNN) for the test bench of binary classification of very-mild Alzheimer’s disease vs. normal control, using a structural magnetic resonance imaging dataset of 180 subjects. To better characterize the t-masking impact on CNN classification performance, six different experimental configurations were designed. Moreover, the performances of the presented FS method were compared to those of similar machine learning (ML) models that relied on different FS approaches. Overall, our results show an enhancement of about 6% in performance when t-masking was applied. Moreover, the reported performance enhancement was higher with respect to similar FS-based ML models. In addition, evaluation of the impact of t-masking on various selection rates has been provided, serving as a useful characterization for future insights. The proposed approach is also highly generalizable to other DL architectures, neuroimaging modalities, and brain pathologies.
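The t-masking idea — a voxel-wise two-sample t-test used as a data-driven feature-selection mask before CNN training — can be sketched as follows; the synthetic data and the significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def t_mask(group_a, group_b, alpha=0.01):
    """Voxel-wise two-sample t-test; keep voxels whose p-value falls below alpha.

    group_a, group_b: arrays of shape (n_subjects, n_voxels).
    Returns a boolean mask of shape (n_voxels,)."""
    _, p = ttest_ind(group_a, group_b, axis=0)
    return p < alpha

# Illustrative data: only the first 10 voxels carry a strong group difference
rng = np.random.default_rng(1)
a = rng.normal(size=(30, 60))
b = rng.normal(size=(30, 60))
b[:, :10] += 5.0          # injected between-group effect
mask = t_mask(a, b)       # voxels passing the test feed the classifier
```

Only voxels surviving the mask would be passed to the downstream model, which is what makes the selection rate a tunable property of the method.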
Affiliation(s)
- Paolo Russo
- Dipartimento di Fisica "Ettore Pancini", Università Degli Studi di Napoli "Federico II" - Complesso Universitario di Monte Sant'Angelo, Naples, Italy
27
Dong A, Li Z, Wang M, Shen D, Liu M. High-Order Laplacian Regularized Low-Rank Representation for Multimodal Dementia Diagnosis. Front Neurosci 2021; 15:634124. [PMID: 33776639 PMCID: PMC7994898 DOI: 10.3389/fnins.2021.634124] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 01/25/2021] [Indexed: 11/15/2022] Open
Abstract
Multimodal heterogeneous data, such as structural magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF), are effective in improving the performance of automated dementia diagnosis by providing complementary information on degenerated brain disorders, such as Alzheimer's prodromal stage, i.e., mild cognitive impairment. Effectively integrating multimodal data has remained a challenging problem, especially when these heterogeneous data are incomplete due to poor data quality and patient dropout. Besides, multimodal data usually contain noise information caused by different scanners or imaging protocols. The existing methods usually fail to well handle these heterogeneous and noisy multimodal data for automated brain dementia diagnosis. To this end, we propose a high-order Laplacian regularized low-rank representation method for dementia diagnosis using block-wise missing multimodal data. The proposed method was evaluated on 805 subjects (with incomplete MRI, PET, and CSF data) from the real Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. Experimental results suggest the effectiveness of our method in three tasks of brain disease classification, compared with the state-of-the-art methods.
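The Laplacian-regularization ingredient of this method penalizes disagreement between strongly connected samples, which in its basic (first-order) form reduces to the quadratic f^T L f. A minimal numpy sketch of that building block, with the high-order and low-rank components omitted:

```python
import numpy as np

def graph_laplacian(W):
    """Graph Laplacian L = D - W for a symmetric, non-negative affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(W, f):
    """f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2: small when connected samples agree."""
    L = graph_laplacian(W)
    return float(f @ L @ f)

# Two samples connected with unit affinity
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
penalty_disagree = smoothness(W, np.array([1.0, 0.0]))  # differing representations
penalty_agree = smoothness(W, np.array([3.0, 3.0]))     # identical representations
```

Adding this penalty to a representation-learning objective pulls the learned representations of similar subjects toward each other, which is the intuition the paper extends to higher orders.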
Affiliation(s)
- Aimei Dong
- School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Science), Jinan, China
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Zhigang Li
- School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Science), Jinan, China
- Mingliang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics & Astronautics, Nanjing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
28
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, its implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
- Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA
29
Venugopalan J, Tong L, Hassanzadeh HR, Wang MD. Multimodal deep learning models for early detection of Alzheimer's disease stage. Sci Rep 2021; 11:3254. [PMID: 33547343 PMCID: PMC7864942 DOI: 10.1038/s41598-020-74399-w] [Citation(s) in RCA: 115] [Impact Index Per Article: 38.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Accepted: 01/22/2020] [Indexed: 02/06/2023] Open
Abstract
Most current Alzheimer's disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions, such as AD stage. The fusion of multiple data modalities can provide a holistic view of AD staging analysis. Thus, we use deep learning (DL) to integrally analyze imaging (magnetic resonance imaging (MRI)), genetic (single nucleotide polymorphisms (SNPs)), and clinical test data to classify patients into AD, MCI, and controls (CN). We use stacked denoising auto-encoders to extract features from clinical and genetic data, and use 3D convolutional neural networks (CNNs) for imaging data. We also develop a novel data interpretation method to identify top-performing features learned by the deep models via clustering and perturbation analysis. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we demonstrate that deep models outperform shallow models, including support vector machines, decision trees, random forests, and k-nearest neighbors. In addition, we demonstrate that integrating multi-modality data outperforms single-modality models in terms of accuracy, precision, recall, and mean F1 scores. Our models have identified the hippocampus, the amygdala, and the Rey Auditory Verbal Learning Test (RAVLT) as top distinguishing features, consistent with the known AD literature.
Affiliation(s)
- Janani Venugopalan
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Li Tong
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Hamid Reza Hassanzadeh
- School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- May D Wang
- Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA.
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA.
- Winship Cancer Institute, Parker H. Petit Institute for Bioengineering and Biosciences, Institute of People and Technology, Georgia Institute of Technology and Emory University, Atlanta, GA, USA.
30
Shirbandi K, Khalafi M, Mirza-Aghazadeh-Attari M, Tahmasbi M, Kiani Shahvandi H, Javanmardi P, Rahim F. Accuracy of deep learning model-assisted amyloid positron emission tomography scan in predicting Alzheimer's disease: A Systematic Review and meta-analysis. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100710] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
31
Nasser M, Salim N, Hamza H, Saeed F, Rabiu I. Improved Deep Learning Based Method for Molecular Similarity Searching Using Stack of Deep Belief Networks. Molecules 2020; 26:E128. [PMID: 33383976 PMCID: PMC7795308 DOI: 10.3390/molecules26010128] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 12/24/2020] [Accepted: 12/25/2020] [Indexed: 11/24/2022] Open
Abstract
Virtual screening (VS) is a computational practice applied in drug discovery research. VS is popularly applied in a computer-based search for new lead molecules based on molecular similarity searching. In chemical databases, similarity searching is used to identify molecules that have similarities to a user-defined reference structure and is evaluated by quantitative measures of intermolecular structural similarity. Among existing approaches, 2D fingerprints are widely used. The similarity of a reference structure and a database structure is measured by the computation of association coefficients. In most classical similarity approaches, it is assumed that the molecular features in both biological and non-biologically-related activity carry the same weight. However, based on the chemical structure, it has been found that some distinguishable features are more important than others. Hence, this difference should be taken into consideration by placing more weight on each important fragment. The main aim of this research is to enhance the performance of similarity searching by using multiple descriptors. In this paper, a deep learning method known as deep belief networks (DBN) has been used to reweight the molecule features. Several descriptors have been used for the MDL Drug Data Report (MDDR) dataset, each of which represents different important features. The proposed method was implemented with each descriptor individually to select the important features based on a new weight, with a lower error rate, and the new features from all descriptors were then merged to produce a new descriptor for similarity searching. Based on the extensive experiments conducted, the results show that the proposed method outperformed several existing benchmark similarity methods, including Bayesian inference networks (BIN), the Tanimoto similarity method (TAN), the adapted similarity measure of text processing (ASMTP), and the quantum-based similarity method (SQB). The proposed multi-descriptor method based on a stack of deep belief networks (SDBN) demonstrated higher accuracy than existing methods on structurally heterogeneous datasets.
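The Tanimoto coefficient used as a baseline (TAN) is, for binary fingerprints, simply the intersection-over-union of the on bits; a minimal sketch, with fingerprints represented as sets of on-bit indices:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity of two binary fingerprints.

    fp_a, fp_b: iterables of on-bit indices."""
    a, b = set(fp_a), set(fp_b)
    common = len(a & b)
    return common / (len(a) + len(b) - common)
```

For example, fingerprints {1, 2, 3} and {2, 3, 4} share two of four distinct bits, giving a similarity of 0.5; descriptor reweighting schemes like the one above change which bits contribute, not the coefficient itself.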
Affiliation(s)
- Maged Nasser
- School of Computing, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia; (H.H.); (I.R.)
- Naomie Salim
- School of Computing, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia; (H.H.); (I.R.)
- Hentabli Hamza
- School of Computing, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia; (H.H.); (I.R.)
- Faisal Saeed
- College of Computer Science and Engineering, Taibah University, Medina 344, Saudi Arabia
- Idris Rabiu
- School of Computing, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia; (H.H.); (I.R.)
32
Prabhakar SK, Rajaguru H. Alcoholic EEG signal classification with Correlation Dimension based distance metrics approach and Modified Adaboost classification. Heliyon 2020; 6:e05689. [PMID: 33364482 PMCID: PMC7750377 DOI: 10.1016/j.heliyon.2020.e05689] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Revised: 09/14/2020] [Accepted: 12/04/2020] [Indexed: 11/16/2022] Open
Abstract
The basic function of the brain is severely affected by alcoholism. Electroencephalography (EEG) signals are highly useful for depicting and assessing the mental condition of a human brain, as they can record and measure the electrical activities of the brain much to the satisfaction of doctors and researchers. Deriving useful information from these signals with standard conventional techniques is difficult, as they are highly non-linear and non-stationary in nature. While recording EEG signals, the activities of the neurons are recorded from various scalp regions, which have varied characteristics and very low magnitudes. Therefore, human interpretation of such signals is very difficult and consumes a lot of time. Hence, with the advent of Computer Aided Diagnosis (CAD) techniques, identifying normal versus alcoholic EEG signals has been of great utility in the medical field. In this work, we perform the initial clustering of the alcoholic EEG signals by means of Correlation Dimension (CD) for easy feature extraction, and then the suitable features are selected by employing various distance metrics such as correlation distance, city block distance, cosine distance, and Chebyshev distance. Proceeding in such a methodology helps ensure that a good discrimination can be achieved between normal and alcoholic EEG signals using non-linear features. Finally, classification is carried out with suitably chosen classifiers: the Adaboost.RT classifier; the proposed Modified Adaboost.RT classifier, obtained by introducing a Ridge- and Lasso-based soft thresholding technique; Random Forest with bootstrap resampling; Artificial Neural Networks (ANN) such as Radial Basis Functions (RBF) and Multi-Layer Perceptron (MLP); Support Vector Machine (SVM) with Linear, Polynomial, and RBF kernels; the Naïve Bayesian Classifier (NBC); the K-means classifier; and the K Nearest Neighbor (KNN) classifier, and the results are analyzed. Results report a comparatively high classification accuracy of about 98.99% when correlation distance metrics are utilized with CD and the proposed Modified Adaboost.RT classifier using the Ridge-based soft thresholding technique.
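The four distance metrics named in the abstract are all available in scipy; a quick sketch of computing them between two feature vectors (the toy vectors stand in for CD feature templates and are an illustrative assumption):

```python
import numpy as np
from scipy.spatial.distance import chebyshev, cityblock, correlation, cosine

x = np.array([1.0, 2.0, 3.0])   # e.g. CD features of one signal
y = np.array([2.0, 4.0, 6.0])   # e.g. CD features of another signal

dists = {
    "cityblock": cityblock(x, y),       # sum of absolute differences
    "chebyshev": chebyshev(x, y),       # largest absolute difference
    "cosine": cosine(x, y),             # 1 - cosine of the angle between vectors
    "correlation": correlation(x, y),   # 1 - Pearson correlation
}
```

Note that since y is an exact scalar multiple of x, the cosine and correlation distances vanish while the magnitude-sensitive cityblock and Chebyshev distances do not, which is precisely why comparing metrics can change which features look discriminative.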
Affiliation(s)
- Sunil Kumar Prabhakar
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 02841, South Korea
- Harikumar Rajaguru
- Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam, 638402, India
33
Ienca M, Ignatiadis K. Artificial Intelligence in Clinical Neuroscience: Methodological and Ethical Challenges. AJOB Neurosci 2020; 11:77-87. [PMID: 32228387 DOI: 10.1080/21507740.2020.1740352] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Clinical neuroscience is increasingly relying on the collection of large volumes of differently structured data and the use of intelligent algorithms for data analytics. In parallel, the ubiquitous collection of unconventional data sources (e.g. mobile health, digital phenotyping, consumer neurotechnology) is increasing the variety of data points. Big data analytics and approaches to Artificial Intelligence (AI) such as advanced machine learning are showing great potential to make sense of these larger and heterogeneous data flows. AI provides great opportunities for making new discoveries about the brain, improving current preventative and diagnostic models in both neurology and psychiatry and developing more effective assistive neurotechnologies. Concurrently, it raises many new methodological and ethical challenges. Given their transformative nature, it is still largely unclear how AI-driven approaches to the study of the human brain will meet adequate standards of scientific validity and affect normative instruments in neuroethics and research ethics. This manuscript provides an overview of current AI-driven approaches to clinical neuroscience and an assessment of the associated key methodological and ethical challenges. In particular, it will discuss what ethical principles are primarily affected by AI approaches to human neuroscience, and what normative safeguards should be enforced in this domain.
Affiliation(s)
- Marcello Ienca
- Swiss Federal Institute of Technology, ETH Zurich, Department of Health Sciences and Technology
- Karolina Ignatiadis
- Swiss Federal Institute of Technology, ETH Zurich, Department of Health Sciences and Technology
34
Affiliation(s)
- David Z Wang
- Neurovascular Division, Department of Neurology, Barrow Neurological Institute, St Joseph Hospital and Medical Center, Phoenix, AZ, USA
- Lee H Schwamm
- Comprehensive Stroke Center, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Tianyi Qian
- Department of Public Health, School of Medicine, Tsinghua University, Beijing
- Tencent Healthcare, Tencent AIMIS, Shenzhen, China
- Qionghai Dai
- Tencent Healthcare, Tencent AIMIS, Shenzhen, China
35
Zhang L, Wang M, Liu M, Zhang D. A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis. Front Neurosci 2020; 14:779. [PMID: 33117114 PMCID: PMC7578242 DOI: 10.3389/fnins.2020.00779] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Accepted: 06/02/2020] [Indexed: 12/12/2022] Open
Abstract
Deep learning has recently been used for the analysis of neuroimages, such as structural magnetic resonance imaging (MRI), functional MRI, and positron emission tomography (PET), and it has achieved significant performance improvements over traditional machine learning in computer-aided diagnosis of brain disorders. This paper reviews the applications of deep learning methods for neuroimaging-based brain disorder analysis. We first provide a comprehensive overview of deep learning techniques and popular network architectures by introducing various types of deep neural networks and recent developments. We then review deep learning methods for computer-aided analysis of four typical brain disorders, including Alzheimer's disease, Parkinson's disease, Autism spectrum disorder, and Schizophrenia, where the first two diseases are neurodegenerative disorders and the last two are neurodevelopmental and psychiatric disorders, respectively. More importantly, we discuss the limitations of existing studies and present possible future directions.
Affiliation(s)
- Li Zhang
- College of Computer Science and Technology, Nanjing Forestry University, Nanjing, China
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mingliang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
36
Abrol A, Bhattarai M, Fedorov A, Du Y, Plis S, Calhoun V. Deep residual learning for neuroimaging: An application to predict progression to Alzheimer's disease. J Neurosci Methods 2020; 339:108701. [PMID: 32275915 PMCID: PMC7297044 DOI: 10.1016/j.jneumeth.2020.108701] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 01/03/2020] [Accepted: 03/25/2020] [Indexed: 01/22/2023]
Abstract
BACKGROUND The unparalleled performance of deep learning approaches in generic image processing has motivated their extension to neuroimaging data. These approaches learn abstract neuroanatomical and functional brain alterations that could enable exceptional performance in classification of brain disorders, predicting disease progression, and localizing brain abnormalities. NEW METHOD This work investigates the suitability of a modified form of deep residual neural networks (ResNet) for studying neuroimaging data in the specific application of predicting progression from mild cognitive impairment (MCI) to Alzheimer's disease (AD). Prediction was conducted first by training the deep models using MCI individuals only, followed by a domain transfer learning version that additionally trained on AD and controls. We also demonstrate a network-occlusion-based method to localize abnormalities. RESULTS The implemented framework captured non-linear features that successfully predicted AD progression and also conformed to the spectrum of various clinical scores. In a repeated cross-validated setup, the learnt predictive models showed highly similar peak activations that corresponded to previous AD reports. COMPARISON WITH EXISTING METHODS The implemented architecture achieved a significant performance improvement over the classical support vector machine and the stacked autoencoder frameworks (p < 0.005), numerically better than state-of-the-art performance using sMRI data alone (> 7% higher than the second-best performing method) and within 1% of the state-of-the-art performance considering learning using multiple neuroimaging modalities as well. CONCLUSIONS The explored frameworks reflected the high potential of deep learning architectures in learning subtle predictive features and utility in critical applications such as predicting and understanding disease progression.
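The network-occlusion localization described in this abstract has a simple core: mask a region, re-score, and attribute the score drop to that region. A minimal sketch follows, using a toy 2D image and a hand-built scorer (not the authors' 3D ResNet; all names and sizes are illustrative):

```python
import numpy as np

# Occlusion-based localization sketch: a region matters if masking it lowers
# the classifier's score. The "classifier" here is a toy scorer that sums a
# fixed weight map, so the ground truth is known: only the top-left 4x4 block
# carries weight, and the heat map should light up exactly there.

def occlusion_map(image, score_fn, patch=4, stride=4):
    """Slide a zero patch over the image; record the score drop at each spot."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // stride, w // stride))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // stride, j // stride] = base - score_fn(occluded)
    return heat

# Toy setup: all discriminative "evidence" sits in the top-left 4x4 block.
weights = np.zeros((16, 16))
weights[:4, :4] = 1.0
score = lambda x: float((x * weights).sum())

heat = occlusion_map(np.ones((16, 16)), score)
print(heat[0, 0], heat.max())  # the informative corner produces the largest drop
```

The same loop generalizes to 3D volumes by sliding a cubic patch; the cost is one forward pass per patch position.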
Affiliation(s)
- Anees Abrol
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
- Manish Bhattarai
- Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA; Los Alamos National Laboratory, Los Alamos, NM, 87545, USA
- Alex Fedorov
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA
- Yuhui Du
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; School of Computer and Information Technology, Shanxi University, Taiyuan, China
- Sergey Plis
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA
- Vince Calhoun
- Joint (GSU/GaTech/Emory) Center for Translational Research in Neuroimaging and Data Science, Atlanta, GA, 30303, USA; The Mind Research Network, 1101 Yale Blvd NE, Albuquerque, NM, 87106, USA; Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM, 87131, USA

37
Robust multitask feature learning for amnestic mild cognitive impairment diagnosis based on multidimensional surface measures. MEDICINE IN NOVEL TECHNOLOGY AND DEVICES 2020. [DOI: 10.1016/j.medntd.2020.100035] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
38
Zhang J, Wang Y. TEMPORALLY ADAPTIVE-DYNAMIC SPARSE NETWORK FOR MODELING DISEASE PROGRESSION. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2020; 2020:19725835. [PMID: 34012506 PMCID: PMC8130893 DOI: 10.1109/isbi45749.2020.9098321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder with progressive impairment of memory and cognitive functions. Sparse coding (SC) has been demonstrated to be an efficient and effective method for AD diagnosis and prognosis. However, previous SC methods usually focus on the baseline data while ignoring the consistent longitudinal features with strong sparsity pattern along the disease progression. Additionally, SC methods extract sparse features from image patches separately rather than learn with the dictionary atoms across the entire subject. To address these two concerns and comprehensively capture temporal-subject sparse features towards earlier and better discriminability of AD, we propose a novel supervised SC network termed Temporally Adaptive-Dynamic Sparse Network (TADsNet) to uncover the sequential correlation and native subject-level codes from the longitudinal brain images. Our work adaptively updates the sparse codes to impose the temporal regularized correlation and dynamically mine the dictionary atoms to make use of entire subject-level features. Experimental results on ADNI-I cohort validate the superiority of our approach.
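For orientation, the sparse coding (SC) baseline this work builds on solves a lasso-type problem per sample. A minimal sketch using plain ISTA, a standard solver (not TADsNet's temporally regularized, subject-level variant); the dictionary and signal below are synthetic:

```python
import numpy as np

# Sparse coding sketch: solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with
# ISTA (iterative soft thresholding). D is a random unit-norm dictionary and
# x is built from two known atoms, so a sparse code should reconstruct it.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.1, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(50); a_true[[3, 17]] = 1.0
x = D @ a_true
a = ista(x, D, lam=0.05)
print(np.count_nonzero(a), float(np.linalg.norm(x - D @ a)))
# a handful of active atoms reconstruct x closely
```

The paper's contribution sits on top of this kind of solver: the codes of one subject at successive time points are coupled by a temporal regularizer rather than fit independently as above.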
Affiliation(s)
- Jie Zhang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA
- Yalin Wang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA

39
Ebrahimighahnavieh MA, Luo S, Chiong R. Deep learning to detect Alzheimer's disease from neuroimaging: A systematic literature review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105242. [PMID: 31837630 DOI: 10.1016/j.cmpb.2019.105242] [Citation(s) in RCA: 90] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Revised: 11/13/2019] [Accepted: 11/25/2019] [Indexed: 06/10/2023]
Abstract
Alzheimer's Disease (AD) is one of the leading causes of death in developed countries. From a research point of view, impressive results have been reported using computer-aided algorithms, but clinically no practical diagnostic method is available. In recent years, deep models have become popular, especially in dealing with images. Since 2013, deep learning has begun to gain considerable attention in AD detection research, with the number of published papers in this area increasing drastically since 2017. Deep models have been reported to be more accurate for AD detection compared to general machine learning techniques. Nevertheless, AD detection is still challenging, and for classification, it requires a highly discriminative feature representation to separate similar brain patterns. This paper reviews the current state of AD detection using deep learning. Through a systematic literature review of over 100 articles, we set out the most recent findings and trends. Specifically, we review useful biomarkers and features (personal information, genetic data, and brain scans), the necessary pre-processing steps, and different ways of dealing with neuroimaging data originating from single-modality and multi-modality studies. Deep models and their performance are described in detail. Although deep learning has achieved notable performance in detecting AD, there are several limitations, especially regarding the availability of datasets and training procedures.
Affiliation(s)
- Suhuai Luo
- The University of Newcastle, University Drive, Callaghan 2308, Australia
- Raymond Chiong
- The University of Newcastle, University Drive, Callaghan 2308, Australia

40
Wang L, Liu Y, Zeng X, Cheng H, Wang Z, Wang Q. Region-of-Interest based sparse feature learning method for Alzheimer's disease identification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105290. [PMID: 31927305 DOI: 10.1016/j.cmpb.2019.105290] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Revised: 12/17/2019] [Accepted: 12/19/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE In recent years, some clinical parameters, such as the volume of gray matter (GM) and cortical thickness, have been used as anatomical features to identify Alzheimer's disease (AD) from Healthy Controls (HC) in some feature-based machine learning methods. However, fewer image-based feature parameters equivalent to these clinical parameters have been proposed to describe the atrophy of regions-of-interest (ROIs) of the brain. In this study, we aim to extract effective image-based feature parameters to improve the diagnostic performance of AD with magnetic resonance imaging (MRI) data. METHODS A new subspace-based sparse feature learning method is proposed, which builds a union-of-subspace representation model to realize feature extraction and disease identification. Specifically, the proposed method estimates feature dimensions reasonably; at the same time, it protects local features for the specified ROIs of the brain, and realizes image-based feature extraction and classification automatically instead of computing the volume of GM or cortical thickness preliminarily. RESULTS Experimental results illustrate the effectiveness and robustness of the proposed method on feature extraction and classification, based on a clinical dataset sampled from Peking University Third Hospital of China and the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The extracted image-based feature parameters describe the atrophy of ROIs of the brain as well as clinical parameters do, but show better performance in AD identification than clinical parameters. Based on them, the important ROIs for AD identification can be identified even for correlated variables. CONCLUSION The extracted features and the proposed identification parameters show high correlation with the volume of GM and the clinical mini-mental state examination (MMSE) score, respectively. The proposed method will be useful in denoting the changes of cerebral pathology and cognitive function in AD patients.
Affiliation(s)
- Ling Wang
- Center for Robotics, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yan Liu
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiangzhu Zeng
- Department of Radiology, Peking University Third Hospital, Beijing, 100191, China
- Hong Cheng
- Center for Robotics, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zheng Wang
- Department of Radiology, Peking University Third Hospital, Beijing, 100191, China
- Qiang Wang
- Beijing Union University, Beijing, 100101, China

41
Liu L, Xu J, Huan Y, Zou Z, Yeh SC, Zheng LR. A Smart Dental Health-IoT Platform Based on Intelligent Hardware, Deep Learning, and Mobile Terminal. IEEE J Biomed Health Inform 2020; 24:898-906. [DOI: 10.1109/jbhi.2019.2919916] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
42
Multi-modal neuroimaging feature selection with consistent metric constraint for diagnosis of Alzheimer's disease. Med Image Anal 2020. [DOI: 10.1016/j.media.2019.101625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]

43
Hao X, Bao Y, Guo Y, Yu M, Zhang D, Risacher SL, Saykin AJ, Yao X, Shen L. Multi-modal neuroimaging feature selection with consistent metric constraint for diagnosis of Alzheimer's disease. Med Image Anal 2020; 60:101625. [PMID: 31841947 PMCID: PMC6980345 DOI: 10.1016/j.media.2019.101625] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Revised: 11/25/2019] [Accepted: 11/25/2019] [Indexed: 12/12/2022]
Abstract
The accurate diagnosis of Alzheimer's disease (AD) and its early stage, e.g., mild cognitive impairment (MCI), is essential for timely treatment or possible intervention to slow down AD progression. Recent studies have demonstrated that multiple neuroimaging and biological measures contain complementary information for diagnosis and prognosis. Therefore, information fusion strategies with multi-modal neuroimaging data, such as voxel-based measures extracted from structural MRI (VBM-MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET), have shown their effectiveness for AD diagnosis. However, most existing methods are proposed to simply integrate the multi-modal data, but do not make full use of structure information across the different modalities. In this paper, we propose a novel multi-modal neuroimaging feature selection method with consistent metric constraint (MFCC) for AD analysis. First, the similarity is calculated for each modality (i.e. VBM-MRI or FDG-PET) individually by random forest strategy, which can extract pairwise similarity measures for multiple modalities. Then the group sparsity regularization term and the sample similarity constraint regularization term are used to constrain the objective function to conduct feature selection from multiple modalities. Finally, the multi-kernel support vector machine (MK-SVM) is used to fuse the features selected from different models for final classification. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) show that the proposed method has better classification performance than the state-of-the-art multimodality-based methods. Specifically, we achieved higher accuracy and area under the curve (AUC) for AD versus normal controls (NC), MCI versus NC, and MCI converters (MCI-C) versus MCI non-converters (MCI-NC) on ADNI datasets. Therefore, the proposed model not only outperforms the traditional method in terms of AD/MCI classification, but also discovers the characteristics associated with the disease, demonstrating its promise for improving disease-related mechanistic understanding.
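The final fusion step above, multi-kernel SVM, rests on the fact that a nonnegative weighted sum of kernels is itself a valid (positive semidefinite) kernel, so per-modality kernels can be combined before classification. A minimal sketch of just that combination step (random stand-in features and illustrative weights; not the paper's MFCC pipeline):

```python
import numpy as np

# Multi-kernel fusion sketch: build one RBF kernel per modality, then take a
# nonnegative weighted sum. The result is still symmetric and PSD, so any
# kernel classifier (e.g. an SVM with a precomputed kernel) can consume it.

def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
X_mri = rng.standard_normal((8, 5))    # stand-ins for VBM-MRI features
X_pet = rng.standard_normal((8, 5))    # stand-ins for FDG-PET features

beta = (0.6, 0.4)                      # modality weights: beta >= 0, sum to 1
K = beta[0] * rbf_kernel(X_mri) + beta[1] * rbf_kernel(X_pet)

eigvals = np.linalg.eigvalsh(K)
print(K.shape, bool(eigvals.min() >= -1e-10))  # combined kernel remains PSD
```

In MK-SVM the weights `beta` are learned (or tuned on validation data) rather than fixed by hand as here.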
Affiliation(s)
- Xiaoke Hao
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Yongjin Bao
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Yingchun Guo
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Ming Yu
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Daoqiang Zhang
- School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Shannon L Risacher
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis 46202, USA
- Andrew J Saykin
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis 46202, USA
- Xiaohui Yao
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia 19104, USA
- Li Shen
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia 19104, USA
46
Mostapha M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Reson Imaging 2019; 64:171-189. [PMID: 31229667 PMCID: PMC6874895 DOI: 10.1016/j.mri.2019.06.009] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 06/06/2019] [Accepted: 06/08/2019] [Indexed: 12/17/2022]
Abstract
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely low data size restrictions, class imbalance problems, and lack of interpretation of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues as well as how generative models seem to be a particularly strong contender to address them.
Affiliation(s)
- Mahmoud Mostapha
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America
- Martin Styner
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America; Neuro Image Research and Analysis Lab, Department of Psychiatry, University of North Carolina at Chapel Hill, NC 27599, United States of America

47
Abstract
Deciphering the massive volume of complex electronic data that has been compiled by hospital systems over the past decades has the potential to revolutionize modern medicine, as well as present significant challenges. Deep learning is uniquely suited to address these challenges, and recent advances in techniques and hardware have poised the field of medical machine learning for transformational growth. The clinical neurosciences are particularly well positioned to benefit from these advances given the subtle presentation of symptoms typical of neurologic disease. Here we review the various domains in which deep learning algorithms have already provided impetus for change: areas such as medical image analysis for the improved diagnosis of Alzheimer's disease and the early detection of acute neurologic events; medical image segmentation for quantitative evaluation of neuroanatomy and vasculature; connectome mapping for the diagnosis of Alzheimer's, autism spectrum disorder, and attention deficit hyperactivity disorder; and mining of microscopic electroencephalogram signals and granular genetic signatures. We additionally note important challenges in the integration of deep learning tools in the clinical setting and discuss the barriers to tackling those challenges.
Affiliation(s)
- Aly Al-Amyn Valliani
- Department of Neurological Surgery, Mount Sinai Health System, 1 Gustave Levy Pl, New York, NY, 10029, USA
- Daniel Ranti
- Department of Neurological Surgery, Mount Sinai Health System, 1 Gustave Levy Pl, New York, NY, 10029, USA
- Eric Karl Oermann
- Department of Neurological Surgery, Mount Sinai Health System, 1 Gustave Levy Pl, New York, NY, 10029, USA

48
Feng CM, Xu Y, Liu JX, Gao YL, Zheng CH. Supervised Discriminative Sparse PCA for Com-Characteristic Gene Selection and Tumor Classification on Multiview Biological Data. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:2926-2937. [PMID: 30802874 DOI: 10.1109/tnnls.2019.2893190] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Principal component analysis (PCA) has been used to study the pathogenesis of diseases. To enhance the interpretability of classical PCA, various improved PCA methods have been proposed to date. Among these, a typical method is the so-called sparse PCA, which focuses on seeking sparse loadings. However, the performance of these methods is still far from satisfactory due to their limitation of using unsupervised learning methods; moreover, the class ambiguity within the sample is high. To overcome this problem, this paper developed a new PCA method, which is named the supervised discriminative sparse PCA (SDSPCA). The main innovation of this method is the incorporation of discriminative information and sparsity into the PCA model. Specifically, in contrast to the traditional sparse PCA, which imposes sparsity on the loadings, here, sparse components are obtained to represent the data. Furthermore, via the linear transformation, the sparse components approximate the given label information. On the one hand, sparse components improve interpretability over the traditional PCA, while on the other hand, they have discriminative abilities suitable for classification purposes. A simple algorithm is developed, and its convergence proof is provided. SDSPCA has been applied to the common-characteristic gene selection and tumor classification on multiview biological data. The sparsity and classification performance of SDSPCA are empirically verified via abundant, reasonable, and effective experiments, and the obtained results demonstrate that SDSPCA outperforms other state-of-the-art methods.
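The "sparse components" idea can be previewed with a thresholded power iteration, a common sparse-PCA heuristic. This sketch deliberately omits SDSPCA's supervised label-approximation term; the data and the `lam` threshold below are synthetic and illustrative:

```python
import numpy as np

# Sparse-PCA sketch (not SDSPCA itself): power iteration on centered data,
# soft-thresholding the component at every step so its loadings go sparse.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_pc(X, lam=0.1, n_iter=100):
    """Leading sparse component of centered data X (samples x features)."""
    Xc = X - X.mean(0)
    v = np.linalg.svd(Xc, full_matrices=False)[2][0]   # dense PC as a start
    for _ in range(n_iter):
        u = Xc @ v
        u /= np.linalg.norm(u)
        v = soft_threshold(Xc.T @ u, lam)              # sparsify the loadings
        nv = np.linalg.norm(v)
        if nv == 0:
            break
        v /= nv
    return v

rng = np.random.default_rng(2)
# Signal lives in the first 3 of 30 features; the remaining 27 are noise.
X = rng.standard_normal((100, 1)) @ np.r_[np.ones(3), np.zeros(27)][None, :]
X += 0.1 * rng.standard_normal((100, 30))
v = sparse_pc(X, lam=2.0)
print(np.count_nonzero(v))  # loadings concentrate on the signal features
```

A classical dense PC would spread small loadings over all 30 features; the threshold zeroes the noise dimensions, which is what makes the component interpretable.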
49
Zhang J, Wang Y. Continually Modeling Alzheimer's Disease Progression via Deep Multi-Order Preserving Weight Consolidation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2019; 11765:850-859. [PMID: 34378008 PMCID: PMC8351547 DOI: 10.1007/978-3-030-32245-8_94] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Alzheimer's disease (AD) is the most common type of dementia. Identifying biomarkers that can track AD at early stages is crucial for therapy to be successful. Many researchers have developed models to predict cognitive impairments by employing valuable longitudinal imaging information along the progression of the disease. However, previous methods model the problem either in the isolated single-task mode or multi-task batch mode, which ignores the fact that the longitudinal data always arrive in a continuous time sequence and, in reality, there are rich types of longitudinal data to apply our learned model to. To this end, we continually model the AD progression in time sequence via a proposed novel Deep Multi-order Preserving Weight Consolidation (DMoPWC) to simultaneously 1) discover the inter and inner relations among different cognitive measures at different time points and utilize such relations to enhance the learning of associations between imaging features and clinical scores; 2) continually learn new longitudinal patients' images to overcome forgetting the previously learned knowledge without access to the old data. Moreover, inspired by recent breakthroughs of Recurrent Neural Network, we consider time-order knowledge to further reinforce the statistical power of DMoPWC and ensure features at a particular time will be temporally ahead of the features at its subsequential times. Empirical studies on the longitudinal brain image dataset demonstrate that DMoPWC achieves superior performance over other AD prognosis algorithms.
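The weight-consolidation ingredient can be illustrated in its simplest, EWC-style form: a quadratic penalty that anchors each weight in proportion to its importance on earlier data, so new longitudinal batches can be learned without erasing old knowledge. A toy sketch (scalar quadratic tasks with hand-set importance values; not DMoPWC's multi-order scheme):

```python
import numpy as np

# Weight-consolidation sketch: minimize loss_new + (lam/2)*sum(F*(theta - theta_old)^2).
# F plays the role of a Fisher-information estimate: high F means the weight
# mattered for the old task and should barely move.

def consolidated_update(theta, grad_new, theta_old, F, lam, lr=0.1):
    """One gradient step on the penalized objective."""
    return theta - lr * (grad_new + lam * F * (theta - theta_old))

# Toy: the old task wants theta = 0, the new task wants theta = 1.
theta_old = np.zeros(2)
F = np.array([10.0, 0.1])        # first weight was important for the old task
theta = theta_old.copy()
for _ in range(500):
    grad_new = theta - 1.0       # gradient of 0.5*||theta - 1||^2
    theta = consolidated_update(theta, grad_new, theta_old, F, lam=1.0)

print(theta)  # important weight stays near 0, unimportant one moves toward 1
```

The closed-form fixed point here is theta = 1/(1 + F) per coordinate, which makes the trade-off explicit: importance F = 10 pins the weight near its old value, while F = 0.1 lets it follow the new task.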
Affiliation(s)
- Jie Zhang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA
- Yalin Wang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA

50
Zhang Y, Wang S, Sui Y, Yang M, Liu B, Cheng H, Sun J, Jia W, Phillips P, Gorriz JM. Multivariate Approach for Alzheimer's Disease Detection Using Stationary Wavelet Entropy and Predator-Prey Particle Swarm Optimization. J Alzheimers Dis 2019; 65:855-869. [PMID: 28731432 DOI: 10.3233/jad-170069] [Citation(s) in RCA: 93] [Impact Index Per Article: 18.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND The number of patients with Alzheimer's disease is increasing rapidly every year. Scholars often use computer vision and machine learning methods to develop an automatic diagnosis system. OBJECTIVE In this study, we developed a novel machine learning system that can make diagnoses automatically from brain magnetic resonance images. METHODS First, the brain imaging was processed, including skull stripping and spatial normalization. Second, one axial slice was selected from the volumetric image, and stationary wavelet entropy (SWE) was used to extract the texture features. Third, a single-hidden-layer neural network was used as the classifier. Finally, a predator-prey particle swarm optimization was proposed to train the weights and biases of the classifier. RESULTS Our method used 4-level decomposition and yielded 13 SWE features. The classification yielded an overall accuracy of 92.73±1.03%, a sensitivity of 92.69±1.29%, and a specificity of 92.78±1.51%. The area under the curve is 0.95±0.02. Additionally, this method only cost 0.88 s to identify a subject in the online stage, after its volumetric image is preprocessed. CONCLUSION In terms of classification performance, our method performs better than 10 state-of-the-art approaches and the performance of human observers. Therefore, this proposed method is effective in the detection of Alzheimer's disease.
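The SWE feature reduces to decomposing the signal with an undecimated wavelet transform and taking the Shannon entropy of the relative subband energies. A minimal 1D sketch with a one-level undecimated Haar transform (the paper uses a 4-level 2D decomposition; the signals below are synthetic):

```python
import numpy as np

# Stationary-wavelet-entropy sketch: decompose, measure the fraction of
# energy in each subband, and take the Shannon entropy of those fractions.
# A smooth signal keeps its energy in the approximation band (low entropy);
# noise spreads energy across bands (high entropy).

def stationary_haar(x):
    """Undecimated one-level Haar: approximation and detail, same length as x."""
    x_shift = np.roll(x, -1)
    approx = (x + x_shift) / np.sqrt(2)
    detail = (x - x_shift) / np.sqrt(2)
    return approx, detail

def wavelet_entropy(x):
    subbands = stationary_haar(x)
    energies = np.array([float((s ** 2).sum()) for s in subbands])
    p = energies / energies.sum()               # relative energy per subband
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())       # Shannon entropy in bits

t = np.linspace(0, 1, 256, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)              # energy mostly in the approximation
noisy = np.random.default_rng(3).standard_normal(256)
print(wavelet_entropy(smooth), wavelet_entropy(noisy))
# the noisy signal spreads energy across subbands, so its entropy is higher
```

With a 4-level decomposition there are more subbands, so the entropy (and related per-band statistics) yields a small fixed-length feature vector per slice, matching the 13 SWE features reported above.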
Affiliation(s)
- Yudong Zhang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, P. R. China; School of Computer Science and Technology, Nanjing Normal University, Nanjing, Jiangsu, P. R. China
- Shuihua Wang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, P. R. China; School of Electronic Science and Engineering, Nanjing University, Nanjing, Jiangsu, P. R. China
- Yuxiu Sui
- Department of Psychiatry, Affiliated Nanjing Brain Hospital of Nanjing Medical University, Nanjing, P. R. China
- Ming Yang
- Department of Radiology, Children's Hospital of Nanjing Medical University, Nanjing, P. R. China
- Bin Liu
- Department of Radiology, Zhong-Da Hospital of Southeast University, Nanjing, P. R. China
- Hong Cheng
- Department of Neurology, First Affiliated Hospital of Nanjing Medical University, Nanjing, P. R. China
- Junding Sun
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, P. R. China
- Wenjuan Jia
- School of Computer Science and Technology, Nanjing Normal University, Nanjing, Jiangsu, P. R. China
- Preetha Phillips
- West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain