1. Sun J, Han JDJ, Chen W. Exploring the relationship among Alzheimer's disease, aging and cognitive scores through neuroimaging-based approach. Sci Rep 2024; 14:27472. [PMID: 39523370] [PMCID: PMC11551169] [DOI: 10.1038/s41598-024-78712-9]
Abstract
Alzheimer's disease (AD) is a fatal neurodegenerative disorder, with the Mini-Mental State Examination (MMSE) and Clinical Dementia Rating (CDR) serving significant roles in monitoring its progression. We hypothesize that while cognitive assessment scores can detect AD-related brain changes, the targeted brain regions may differ. Additionally, given AD's strong association with aging, we propose that specific brain regions are influenced by both AD pathology and aging, exhibiting strong correlations with both. To test these hypotheses, we developed a 3D convolutional network with a mixed-attention mechanism to recognize AD subjects from structural magnetic resonance imaging (sMRI) data and utilized 3D convolutional methods to pinpoint brain regions significantly correlated with AD, MMSE, CDR and age. All models were trained and internally validated on 417 samples from the Alzheimer's Disease Neuroimaging Initiative (ADNI), and the classification model was externally validated on 382 samples from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study. This approach provided robust support for using MMSE and CDR to assess AD progression and visually illustrated the relationship between aging and AD. The analysis revealed correlations among the four identification tasks (AD, MMSE, CDR and age) and highlighted asymmetric brain lesions in both AD and aging. Notably, we found that AD can accelerate aging to some extent, and that a significant correlation exists between the rate of aging and cognitive assessment scores. This offers new insights into the relationship between AD and aging.
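The abstract describes a 3D CNN with a mixed-attention mechanism for sMRI classification. As a rough illustration only (the paper's layer counts and attention design are not given here), the sketch below combines channel and spatial attention, CBAM-style, inside a tiny 3D classifier; the CBAM-style reading of "mixed attention" and all layer sizes are assumptions.

```python
# Minimal sketch, not the authors' code: channel + spatial attention in a 3D CNN.
import torch
import torch.nn as nn

class MixedAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: squeeze channels, produce a 3D saliency map.
        self.spatial = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                       # re-weight channels
        avg = x.mean(dim=1, keepdim=True)                 # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)                # channel-wise max
        return x * self.spatial(torch.cat([avg, mx], 1))  # re-weight voxels

class Tiny3DClassifier(nn.Module):
    """Toy AD-vs-control classifier over a 3D sMRI volume."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            MixedAttention3D(16),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            MixedAttention3D(32),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 1, D, H, W)
        return self.fc(self.features(x).flatten(1))

logits = Tiny3DClassifier()(torch.randn(2, 1, 96, 112, 96))  # -> shape (2, 2)
```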
Affiliation(s)
- Jinhui Sun: School of Cyber Science and Engineering, Qufu Normal University, Qufu, 273165, People's Republic of China
- Jing-Dong J Han: Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Center for Quantitative Biology (CQB), Peking University, Beijing, 100871, People's Republic of China
- Weiyang Chen: School of Cyber Science and Engineering, Qufu Normal University, Qufu, 273165, People's Republic of China
2. Luo M, He Z, Cui H, Ward P, Chen YPP. Dual attention based fusion network for MCI conversion prediction. Comput Biol Med 2024; 182:109039. [PMID: 39232405] [DOI: 10.1016/j.compbiomed.2024.109039]
Abstract
Alzheimer's disease (AD) severely impacts the lives of many patients and their families. Predicting the progression of the disease from the early stage of mild cognitive impairment (MCI) is of substantial value for treatment, medical research and clinical trials. In this paper, we propose a novel dual attention network to classify progressive MCI (pMCI) and stable MCI (sMCI) using both magnetic resonance imaging (MRI) and neurocognitive metadata. A 3D CNN ShuffleNet V2 model is used as the network backbone to extract MRI image features. Then, neurocognitive metadata is used to guide the spatial attention mechanism to steer the model to focus attention on the most discriminative regions of the brain. In contrast to traditional fusion methods, we propose a ViT-based self-attention fusion mechanism to fuse the neurocognitive metadata with the 3D CNN feature maps. The experimental results show that our proposed model achieves an accuracy, AUC, and sensitivity of 81.34%, 0.874, and 0.85, respectively, under 5-fold cross-validation. A comprehensive experimental study shows our proposed approach significantly outperforms all previous methods for MCI progression classification. In addition, an ablation study shows both fusion methods contribute to the high final performance.
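As a rough sketch of the kind of metadata-image fusion described above (not the paper's implementation; the embedding width, single-metadata-token layout and readout are assumptions), a ViT-style self-attention block can fuse flattened 3D CNN feature-map tokens with a projected metadata token:

```python
# Minimal sketch, assumed sizes: self-attention fusion of CNN tokens + metadata.
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    def __init__(self, feat_channels=64, meta_dim=6, embed_dim=64, n_heads=4):
        super().__init__()
        self.meta_proj = nn.Linear(meta_dim, embed_dim)    # metadata -> one token
        self.feat_proj = nn.Linear(feat_channels, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 2)                # pMCI vs sMCI logits

    def forward(self, fmap, meta):
        # fmap: (B, C, D, H, W) from a 3D CNN backbone; meta: (B, meta_dim)
        tokens = self.feat_proj(fmap.flatten(2).transpose(1, 2))  # (B, N, E)
        meta_tok = self.meta_proj(meta).unsqueeze(1)              # (B, 1, E)
        seq = torch.cat([meta_tok, tokens], dim=1)
        fused, _ = self.attn(seq, seq, seq)                       # self-attention fusion
        return self.head(fused[:, 0])          # read out at the metadata token

fmap = torch.randn(2, 64, 4, 5, 4)              # toy backbone feature map
meta = torch.randn(2, 6)                        # toy neurocognitive scores
print(SelfAttentionFusion()(fmap, meta).shape)  # torch.Size([2, 2])
```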
Affiliation(s)
- Min Luo: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Zhen He: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Hui Cui: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Phillip Ward: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Yi-Ping Phoebe Chen: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
3. Li H, Yang J, Xuan Z, Qu M, Wang Y, Feng C. A spatio-temporal graph convolutional network for ultrasound echocardiographic landmark detection. Med Image Anal 2024; 97:103272. [PMID: 39024972] [DOI: 10.1016/j.media.2024.103272]
Abstract
Landmark detection is a crucial task in medical image analysis, with applications across various fields. However, current methods struggle to accurately locate landmarks in medical images with blurred tissue boundaries due to low image quality. In particular, in echocardiography, sparse annotations make it challenging to predict landmarks with position stability and temporal consistency. In this paper, we propose a spatio-temporal graph convolutional network tailored for echocardiography landmark detection. We specifically sample landmark labels from the left ventricular endocardium and pre-calculate their correlations to establish structural priors. Our approach involves a graph convolutional neural network that learns the interrelationships among landmarks, significantly enhancing landmark accuracy within ambiguous tissue contexts. Additionally, we integrate gated recurrent units to capture the temporal consistency of landmarks across consecutive images, augmenting the model's resilience against unlabeled data. Through validation across three echocardiography datasets, our method demonstrates superior accuracy when contrasted with alternative landmark detection models.
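A minimal sketch of the spatial-plus-temporal idea (assumed shapes and layer sizes; the actual network is more elaborate): one graph convolution over landmark nodes using a pre-computed correlation adjacency, followed by a GRU across frames for temporal consistency.

```python
# Minimal sketch, not the authors' architecture: graph conv over landmarks + GRU over frames.
import torch
import torch.nn as nn

class LandmarkGCNGRU(nn.Module):
    def __init__(self, n_landmarks, in_dim=2, hid_dim=32):
        super().__init__()
        self.n, self.h = n_landmarks, hid_dim
        self.w = nn.Linear(in_dim, hid_dim)            # graph-convolution weight
        self.gru = nn.GRU(hid_dim * n_landmarks, hid_dim * n_landmarks,
                          batch_first=True)            # temporal smoothing
        self.out = nn.Linear(hid_dim, 2)               # refined (x, y) per landmark

    def forward(self, coords, adj):
        # coords: (B, T, N, 2) coarse per-frame landmark positions
        # adj:    (N, N) row-normalized correlation/adjacency matrix
        b, t, n, _ = coords.shape
        h = torch.relu(adj @ self.w(coords))           # one graph convolution per frame
        h, _ = self.gru(h.reshape(b, t, n * self.h))
        return self.out(h.reshape(b, t, n, self.h))    # temporally consistent coords

adj = torch.softmax(torch.randn(7, 7), dim=-1)         # toy 7-landmark graph
out = LandmarkGCNGRU(7)(torch.randn(2, 10, 7, 2), adj)
print(out.shape)                                       # torch.Size([2, 10, 7, 2])
```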
Affiliation(s)
- Honghe Li: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Jinzhu Yang: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Zhanfeng Xuan: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Mingjun Qu: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
- Yonghuai Wang: Department of Cardiovascular Ultrasound, The First Hospital of China Medical University, China
- Chaolu Feng: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, China
4. Yu Q, Ma Q, Da L, Li J, Wang M, Xu A, Li Z, Li W. A transformer-based unified multimodal framework for Alzheimer's disease assessment. Comput Biol Med 2024; 180:108979. [PMID: 39098237] [DOI: 10.1016/j.compbiomed.2024.108979]
Abstract
In Alzheimer's disease (AD) assessment, traditional deep learning approaches have often employed separate methodologies to handle the diverse modalities of input data. Recognizing the critical need for a cohesive and interconnected analytical framework, we propose the AD-Transformer, a novel transformer-based unified deep learning model. This innovative framework seamlessly integrates structural magnetic resonance imaging (sMRI), clinical, and genetic data from the extensive Alzheimer's Disease Neuroimaging Initiative (ADNI) database, encompassing 1651 subjects. By employing a Patch-CNN block, the AD-Transformer efficiently transforms image data into image tokens, while a linear projection layer adeptly converts non-image data into corresponding tokens. As the core, a transformer block learns comprehensive representations of the input data, capturing the intricate interplay between modalities. The AD-Transformer sets a new benchmark in AD diagnosis and Mild Cognitive Impairment (MCI) conversion prediction, achieving remarkable average area under curve (AUC) values of 0.993 and 0.845, respectively, surpassing those of traditional image-only models and non-unified multimodal models. Our experimental results confirmed the potential of the AD-Transformer as a potent tool in AD diagnosis and MCI conversion prediction. By providing a unified framework that jointly learns holistic representations of both image and non-image data, the AD-Transformer paves the way for more effective and precise clinical assessments, offering a clinically adaptable strategy for leveraging diverse data modalities in the battle against AD.
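The tokenization scheme described above (a Patch-CNN for image tokens, a linear projection for non-image tokens, a shared transformer) can be sketched roughly as follows; the patch size, embedding width, [CLS]-token readout and every other hyperparameter are assumptions, not the AD-Transformer's actual configuration.

```python
# Minimal sketch, assumed hyperparameters: unified transformer over image + tabular tokens.
import torch
import torch.nn as nn

class UnifiedMultimodalTransformer(nn.Module):
    def __init__(self, n_nonimage=10, d=64, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_cnn = nn.Conv3d(1, d, kernel_size=16, stride=16)  # patches -> tokens
        self.tab_proj = nn.Linear(1, d)        # one token per clinical/genetic variable
        self.cls = nn.Parameter(torch.zeros(1, 1, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d, 2)

    def forward(self, vol, tabular):
        # vol: (B, 1, D, H, W); tabular: (B, n_nonimage)
        img_tok = self.patch_cnn(vol).flatten(2).transpose(1, 2)   # (B, Np, d)
        tab_tok = self.tab_proj(tabular.unsqueeze(-1))             # (B, Nt, d)
        cls = self.cls.expand(vol.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, img_tok, tab_tok], dim=1))
        return self.head(z[:, 0])                                  # AD vs CN logits

out = UnifiedMultimodalTransformer()(torch.randn(1, 1, 96, 112, 96),
                                     torch.randn(1, 10))
print(out.shape)   # torch.Size([1, 2])
```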
Affiliation(s)
- Qi Yu: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Qian Ma: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Lijuan Da: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Jiahui Li: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Mengying Wang: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Andi Xu: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Zilin Li: School of Mathematics and Statistics, Northeast Normal University, Changchun, 130024, Jilin, China
- Wenyuan Li: Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
5. Liu R, Huang ZA, Hu Y, Zhu Z, Wong KC, Tan KC. Spatial-Temporal Co-Attention Learning for Diagnosis of Mental Disorders From Resting-State fMRI Data. IEEE Trans Neural Netw Learn Syst 2024; 35:10591-10605. [PMID: 37027556] [DOI: 10.1109/tnnls.2023.3243000]
Abstract
Neuroimaging techniques have been widely adopted to detect the neurological brain structures and functions of the nervous system. As an effective noninvasive neuroimaging technique, functional magnetic resonance imaging (fMRI) has been extensively used in computer-aided diagnosis (CAD) of mental disorders, e.g., autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). In this study, we propose a spatial-temporal co-attention learning (STCAL) model for diagnosing ASD and ADHD from fMRI data. In particular, a guided co-attention (GCA) module is developed to model the intermodal interactions of spatial and temporal signal patterns. A novel sliding cluster attention module is designed to address global feature dependency of self-attention mechanism in fMRI time series. Comprehensive experimental results demonstrate that our STCAL model can achieve competitive accuracies of 73.0 ± 4.5%, 72.0 ± 3.8%, and 72.5 ± 4.2% on the ABIDE I, ABIDE II, and ADHD-200 datasets, respectively. Moreover, the potential for feature pruning based on the co-attention scores is validated by the simulation experiment. The clinical interpretation analysis of STCAL can allow medical professionals to concentrate on the discriminative regions of interest and key time frames from fMRI data.
6. Han K, Li G, Fang Z, Yang F. Multi-Template Meta-Information Regularized Network for Alzheimer's Disease Diagnosis Using Structural MRI. IEEE Trans Med Imaging 2024; 43:1664-1676. [PMID: 38109240] [DOI: 10.1109/tmi.2023.3344384]
Abstract
Structural magnetic resonance imaging (sMRI) has been widely applied in computer-aided Alzheimer's disease (AD) diagnosis, owing to its capabilities in providing detailed brain morphometric patterns and anatomical features in vivo. Although previous works have validated the effectiveness of incorporating metadata (e.g., age, gender, and educational years) for sMRI-based AD diagnosis, existing methods paid attention solely to metadata-associated correlation to AD (e.g., gender bias in AD prevalence) or confounding effects (e.g., the issue of normal aging and metadata-related heterogeneity). Hence, it is difficult to fully exploit the influence of metadata on AD diagnosis. To address these issues, we constructed a novel Multi-template Meta-information Regularized Network (MMRN) for AD diagnosis. Specifically, considering diagnostic variation resulting from different spatial transformations onto different brain templates, we first regarded different transformations as data augmentation for self-supervised learning after template selection. Since the confounding effects may arise from excessive attention to meta-information owing to its correlation with AD, we then designed the modules of weakly supervised meta-information learning and mutual information minimization to learn and disentangle meta-information from learned class-related representations, which accounts for meta-information regularization for disease diagnosis. We have evaluated our proposed MMRN on two public multi-center cohorts, including the Alzheimer's Disease Neuroimaging Initiative (ADNI) with 1,950 subjects and the National Alzheimer's Coordinating Center (NACC) with 1,163 subjects. The experimental results have shown that our proposed method outperformed the state-of-the-art approaches in the tasks of AD diagnosis, mild cognitive impairment (MCI) conversion prediction, and normal control (NC) vs. MCI vs. AD classification.
7. Bergen RV, Rajotte JF, Yousefirizi F, Rahmim A, Ng RT. Assessing privacy leakage in synthetic 3-D PET imaging using transversal GAN. Comput Methods Programs Biomed 2024; 243:107910. [PMID: 37976611] [DOI: 10.1016/j.cmpb.2023.107910]
Abstract
BACKGROUND AND OBJECTIVE: Training computer-vision related algorithms on medical images for disease diagnosis or image segmentation is difficult in large part due to privacy concerns. For this reason, generative image models are highly sought after to facilitate data sharing. However, 3-D generative models are understudied, and investigation of their privacy leakage is needed. METHODS: We introduce our 3-D generative model, Transversal GAN (TrGAN), using head & neck PET images which are conditioned on tumor masks as a case study. We define quantitative measures of image fidelity and utility, and propose a novel framework for evaluating the privacy-utility trade-off through a membership inference attack. These metrics are evaluated in the course of training to identify ideal fidelity, utility and privacy trade-offs and establish the relationships between these parameters. RESULTS: We show that the discriminator of the TrGAN is vulnerable to attack, and that an attacker can identify which samples were used in training with almost perfect accuracy (AUC = 0.99). We also show that an attacker with access to only the generator cannot reliably classify whether a sample had been used for training (AUC = 0.51). We also propose and demonstrate a general decision procedure for any deep learning based generative model, which allows the user to quantify and evaluate the decision trade-off between downstream utility and privacy protection. CONCLUSIONS: TrGAN can generate 3-D medical images that retain important image features and statistical properties of the training data set, with minimal privacy loss as determined by a membership inference attack. Our utility-privacy decision procedure may be beneficial to researchers who wish to share data or lack a sufficient number of large labeled image datasets.
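The membership inference evaluation described above boils down to scoring training members and held-out non-members (for example, with the GAN discriminator) and measuring how separable the two groups are; AUC near 0.5 means little leakage. A toy illustration with synthetic scores, not the paper's data or attack model:

```python
# Minimal sketch with synthetic scores: AUC of a score-based membership inference test.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
member_scores = rng.normal(loc=1.0, scale=1.0, size=500)      # samples seen in training
nonmember_scores = rng.normal(loc=0.0, scale=1.0, size=500)   # held-out samples

labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([member_scores, nonmember_scores])
print(f"membership-inference AUC: {roc_auc_score(labels, scores):.3f}")
# Values near 0.99 (as reported for discriminator access) indicate severe leakage;
# values near 0.51 (generator-only access) indicate the attacker learns almost nothing.
```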
Affiliation(s)
- Robert V Bergen: Data Science Institute, University of British Columbia, BC V6T 1Z4, Canada
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, BC V5Z 1L3, Canada
- Arman Rahmim: Department of Integrative Oncology, BC Cancer Research Institute, BC V5Z 1L3, Canada; Department of Radiology, University of British Columbia, BC V5Z 1M9, Canada
- Raymond T Ng: Data Science Institute, University of British Columbia, BC V6T 1Z4, Canada
8. Fan X, Li H, Liu L, Zhang K, Zhang Z, Chen Y, Wang Z, He X, Xu J, Hu Q. Early Diagnosing and Transformation Prediction of Alzheimer's Disease Using Multi-Scaled Self-Attention Network on Structural MRI Images with Occlusion Sensitivity Analysis. J Alzheimers Dis 2024; 97:909-926. [PMID: 38160355] [DOI: 10.3233/jad-230705]
Abstract
BACKGROUND: Structural magnetic resonance imaging (sMRI) is vital for early Alzheimer's disease (AD) diagnosis, though confirming specific biomarkers remains challenging. Our proposed Multi-Scale Self-Attention Network (MUSAN) enhances the classification of cognitively normal (CN) and AD individuals and distinguishes stable MCI (sMCI) from progressive mild cognitive impairment (pMCI). OBJECTIVE: This study leverages AD structural atrophy properties to achieve precise AD classification, combining different scales of brain region features. The ultimate goal is an interpretable algorithm for this method. METHODS: MUSAN takes whole-brain sMRI as input, enabling automatic extraction of brain region features and modeling of correlations between different scales of brain regions, and achieves personalized disease interpretation of brain regions. Furthermore, we employed an occlusion sensitivity algorithm to localize and visualize brain regions sensitive to disease. RESULTS: Our method is applied to ADNI-1, ADNI-2, and ADNI-3, and achieves high performance on the classification of CN from AD with accuracy (0.93), specificity (0.82), sensitivity (0.96), and area under the curve (AUC) (0.95), as well as notable performance in distinguishing sMCI from pMCI with accuracy (0.85), specificity (0.84), sensitivity (0.74), and AUC (0.86). Our sensitivity masking algorithm identified key regions in distinguishing CN from AD: hippocampus, amygdala, and vermis. Moreover, the cingulum, pallidum, and inferior frontal gyrus are crucial for sMCI and pMCI discrimination. These discoveries align with existing literature, confirming the dependability of our model in AD research. CONCLUSION: Our method provides an effective AD diagnostic and conversion prediction method. The occlusion sensitivity algorithm enhances deep learning interpretability, bolstering AD research reliability.
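Occlusion sensitivity itself is model-agnostic: mask a region, re-run the classifier, and record the drop in the target-class probability. A generic sketch (any trained 3D classifier taking a (1, 1, D, H, W) input; the patch and stride values are arbitrary and `trained_net` is hypothetical):

```python
# Generic occlusion-sensitivity sketch, not the MUSAN code.
import torch

def occlusion_sensitivity_3d(model, volume, target_class, patch=16, stride=16):
    """volume: (1, 1, D, H, W); returns a coarse grid of probability drops."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(volume), dim=1)[0, target_class]
        _, _, D, H, W = volume.shape
        zs = list(range(0, D - patch + 1, stride))
        ys = list(range(0, H - patch + 1, stride))
        xs = list(range(0, W - patch + 1, stride))
        sens = torch.zeros(len(zs), len(ys), len(xs))
        for i, z in enumerate(zs):
            for j, y in enumerate(ys):
                for k, x in enumerate(xs):
                    occluded = volume.clone()
                    occluded[..., z:z + patch, y:y + patch, x:x + patch] = 0
                    prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                    sens[i, j, k] = base - prob   # drop in AD probability
    return sens

# Hypothetical usage: sens = occlusion_sensitivity_3d(trained_net, smri, target_class=1)
# Cells with large drops can be mapped back to atlas regions (hippocampus, amygdala, ...).
```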
Affiliation(s)
- Xinxin Fan: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Haining Li: Department of Neurology, General Hospital of Ningxia Medical University, Yinchuan, China
- Lin Liu: University of Chinese Academy of Sciences, Beijing, China
- Kai Zhang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhewei Zhang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yi Chen: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhen Wang: Zhuhai Institute of Advanced Technology, Zhuhai, China
- Xiaoli He: Department of Psychology, Ningxia University, Yinchuan, China
- Jinping Xu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingmao Hu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
9. Xue C, Li J, Hao M, Chen L, Chen Z, Tang Z, Tang H, Fang Q. High prevalence of subjective cognitive decline in older Chinese adults: a systematic review and meta-analysis. Front Public Health 2023; 11:1277995. [PMID: 38106895] [PMCID: PMC10722401] [DOI: 10.3389/fpubh.2023.1277995]
Abstract
Background: Subjective cognitive decline (SCD) is considered a preclinical stage of Alzheimer's disease. However, reliable prevalence estimates of SCD in the Chinese population are lacking, underscoring the importance of such metrics for policymakers to formulate appropriate healthcare strategies. Objective: To systematically evaluate SCD prevalence among older Chinese adults. Methods: PubMed, Web of Science, The Cochrane Library, Embase, CNKI, Wanfang, VIP, CBM, and Airiti Library databases were searched for studies on SCD in older Chinese individuals published before May 2023. Two investigators independently screened the literature, extracted the information, and assessed the bias risk of the included studies. A meta-analysis was then conducted using Stata 16.0 software via a random-effects model to analyze SCD prevalence in older Chinese adults. Results: A total of 17 studies were included (n = 31,782). The SCD prevalence in older Chinese adults was 46.4% (95% CI, 40.6-52.2%). Subgroup analyses indicated that SCD prevalence was 50.8% in men and 58.9% among women. Additionally, SCD prevalence in individuals aged 60-69, 70-79, and ≥80 years was 38.0%, 45.2%, and 60.3%, respectively. Furthermore, SCD prevalence in older adults with BMI <18.5, 18.5-24.0, and >24.0 was 59.3%, 54.0%, and 52.9%, respectively. Geographically, SCD prevalence among older Chinese individuals was 41.3% in North China and 50.0% in South China. In terms of residence, SCD prevalence was 47.1% in urban residents and 50.0% among rural residents. As for retired individuals, SCD prevalence was 44.2% in non-manual workers and 49.2% among manual workers. In the case of education, SCD prevalence was 62.8% for "elementary school and below", 52.4% for "middle school", 55.0% for "high school", and 51.3% for "college and above". Finally, SCD prevalence was lower among married individuals with surviving spouses than in single adults who were divorced, widowed, or unmarried. Conclusion: Our systematic review and meta-analysis identified significant and widespread SCD prevalence in the older population in China. Therefore, our review findings highlight the urgent requirement for medical institutions and policymakers across all levels to prioritize and rapidly develop and implement comprehensive preventive and therapeutic strategies for SCD. Systematic review registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023406950, identifier: CRD42023406950.
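The pooled prevalence above comes from a random-effects model fitted in Stata. The same calculation can be sketched by hand; below is a minimal DerSimonian-Laird pooling of logit-transformed proportions with entirely hypothetical study counts (the review's study-level data are not reproduced here).

```python
# Minimal random-effects (DerSimonian-Laird) pooling of proportions, synthetic inputs.
import numpy as np

events = np.array([420, 510, 380, 655, 290])     # hypothetical SCD cases per study
totals = np.array([900, 1100, 820, 1400, 640])   # hypothetical sample sizes

y = np.log(events / (totals - events))           # logit(prevalence) per study
v = 1.0 / events + 1.0 / (totals - events)       # approximate variance of the logit

w = 1.0 / v                                      # fixed-effect weights
y_fix = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fix) ** 2)                 # Cochran's Q
tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1.0 / (v + tau2)                          # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

expit = lambda z: 1.0 / (1.0 + np.exp(-z))       # back-transform to a proportion
lo, hi = expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)
print(f"pooled prevalence {expit(y_re):.3f} (95% CI {lo:.3f}-{hi:.3f}), tau^2 = {tau2:.3f}")
```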
Affiliation(s)
- Chao Xue: School of Nursing, Guizhou University of Traditional Chinese Medicine, Guiyang, Guizhou, China; Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Juan Li: Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Mingqing Hao: Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Lihua Chen: Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Zuoxiu Chen: Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Zeli Tang: School of Nursing, Zunyi Medical University, Zunyi, Guizhou, China
- Huan Tang: School of Nursing, Zunyi Medical University, Zunyi, Guizhou, China
- Qian Fang: Department of Nursing, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
10. Cao G, Zhang M, Wang Y, Zhang J, Han Y, Xu X, Huang J, Kang G. End-to-end automatic pathology localization for Alzheimer's disease diagnosis using structural MRI. Comput Biol Med 2023; 163:107110. [PMID: 37321102] [DOI: 10.1016/j.compbiomed.2023.107110]
Abstract
Structural magnetic resonance imaging (sMRI) is an essential part of the clinical assessment of patients at risk of Alzheimer dementia. One key challenge in sMRI-based computer-aided dementia diagnosis is to localize local pathological regions for discriminative feature learning. Existing solutions predominantly depend on generating saliency maps for pathology localization and handle the localization task independently of the dementia diagnosis task, leading to a complex multi-stage training pipeline that is hard to optimize with weakly-supervised sMRI-level annotations. In this work, we aim to simplify the pathology localization task and construct an end-to-end automatic localization framework (AutoLoc) for Alzheimer's disease diagnosis. To this end, we first present an efficient pathology localization paradigm that directly predicts the coordinate of the most disease-related region in each sMRI slice. Then, we approximate the non-differentiable patch-cropping operation with the bilinear interpolation technique, which eliminates the barrier to gradient backpropagation and thus enables the joint optimization of localization and diagnosis tasks. Extensive experiments on commonly used ADNI and AIBL datasets demonstrate the superiority of our method. Especially, we achieve 93.38% and 81.12% accuracy on Alzheimer's disease classification and mild cognitive impairment conversion prediction tasks, respectively. Several important brain regions, such as rostral hippocampus and globus pallidus, are identified to be highly associated with Alzheimer's disease.
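The key trick in the abstract, replacing hard patch cropping with bilinear interpolation so the localization and diagnosis branches can be optimized jointly, can be illustrated with PyTorch's affine_grid/grid_sample. This is a generic 2D sketch under assumed sizes, not AutoLoc itself.

```python
# Minimal sketch: differentiable patch cropping around a predicted coordinate.
import torch
import torch.nn.functional as F

def differentiable_crop(slices, centers, patch=64):
    """slices: (B, 1, H, W); centers: (B, 2) normalized (x, y) in [-1, 1]."""
    B, _, H, W = slices.shape
    sx = torch.full_like(centers[:, 0], patch / W)   # patch width as a fraction
    sy = torch.full_like(centers[:, 1], patch / H)   # patch height as a fraction
    zero = torch.zeros_like(sx)
    row0 = torch.stack([sx, zero, centers[:, 0]], dim=1)   # [scale_x, 0, tx]
    row1 = torch.stack([zero, sy, centers[:, 1]], dim=1)   # [0, scale_y, ty]
    theta = torch.stack([row0, row1], dim=1)               # (B, 2, 3) affine params
    grid = F.affine_grid(theta, size=(B, 1, patch, patch), align_corners=False)
    return F.grid_sample(slices, grid, mode="bilinear", align_corners=False)

slices = torch.randn(2, 1, 224, 224)                 # toy sMRI slices
coord_logits = torch.randn(2, 2, requires_grad=True) # output of a localization head
patches = differentiable_crop(slices, torch.tanh(coord_logits))
patches.sum().backward()                             # gradients reach the coordinates
print(patches.shape, coord_logits.grad is not None)  # torch.Size([2, 1, 64, 64]) True
```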
Affiliation(s)
- Gongpeng Cao: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Manli Zhang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Yiping Wang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Jing Zhang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Xin Xu: Department of Neurosurgery, Chinese PLA General Hospital, No. 28 Fuxing Road, Haidian District, Beijing, 100853, China
- Jinguo Huang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Guixia Kang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
11. Wen J, Li Y, Fang M, Zhu L, Feng DD, Li P. Fine-Grained and Multiple Classification for Alzheimer's Disease With Wavelet Convolution Unit Network. IEEE Trans Biomed Eng 2023; 70:2592-2603. [PMID: 37030751] [DOI: 10.1109/tbme.2023.3256042]
Abstract
In this article, we propose a novel wavelet convolution unit for image-oriented neural networks that integrates wavelet analysis with a vanilla convolution operator to extract deep abstract features more efficiently. On one hand, in order to acquire non-local receptive fields and avoid information loss, we define a new convolution operation by composing a traditional convolution function with the approximate and detailed representations obtained from single-scale wavelet decomposition of source images. On the other hand, multi-scale wavelet decomposition is introduced to obtain more comprehensive multi-scale feature information. We then fuse all these cross-scale features to alleviate the problem of inaccurate localization of singular points. Given the novel wavelet convolution unit, we further design a network based on it for fine-grained Alzheimer's disease classifications (i.e., Alzheimer's disease, normal controls, early mild cognitive impairment, late mild cognitive impairment). To date, only a few methods have studied one or several fine-grained classifications, and even fewer can achieve both fine-grained and multi-class classifications. We adopt the novel network and diffusion tensor images to achieve fine-grained classifications, obtaining state-of-the-art accuracy for all eight kinds of fine-grained classifications: up to 97.30%, 95.78%, 95.00%, 94.00%, 97.89%, 95.71%, 95.07%, and 93.79%. In order to build a reference standard for Alzheimer's disease classifications, we implemented all twelve coarse-grained and fine-grained classifications. The results show that the proposed method achieves consistently high accuracy across them, and its classification ability greatly exceeds that of existing Alzheimer's disease classification methods.
12. IDA-Net: Inheritable Deformable Attention Network of structural MRI for Alzheimer's Disease Diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104787]
13. Luo M, He Z, Cui H, Chen YPP, Ward P. Class activation attention transfer neural networks for MCI conversion prediction. Comput Biol Med 2023; 156:106700. [PMID: 36871338] [DOI: 10.1016/j.compbiomed.2023.106700]
Abstract
Accurate prediction of the trajectory of Alzheimer's disease (AD) from an early stage is of substantial value for treatment and for planning to delay the onset of AD. We propose a novel attention transfer method to train a 3D convolutional neural network to predict which patients with mild cognitive impairment (MCI) will progress to AD within 3 years. A model is first trained on a separate but related source task (the task we transfer information from) to automatically learn regions of interest (ROIs) from a given image. Next, we train a model to simultaneously classify progressive MCI (pMCI) versus stable MCI (sMCI) (the target task we want to solve) and predict the ROIs learned from the source task. The predicted ROIs are then used to focus the model's attention on certain areas of the brain when classifying pMCI versus sMCI. Thus, in contrast to traditional transfer learning, we transfer attention maps instead of transferring model weights from a source task to the target classification task. Our method outperformed all methods tested, including traditional transfer learning and methods that used expert knowledge to define ROIs. Furthermore, the attention map transferred from the source task highlights known Alzheimer's pathology.
Affiliation(s)
- Min Luo: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Zhen He: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Hui Cui: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Yi-Ping Phoebe Chen: Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC 3086, Australia
- Phillip Ward: Monash Biomedical Imaging, Melbourne, VIC 3800, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC 3800, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Melbourne 3800, Australia
14. Shi R, Sheng C, Jin S, Zhang Q, Zhang S, Zhang L, Ding C, Wang L, Wang L, Han Y, Jiang J. Generative adversarial network constrained multiple loss autoencoder: A deep learning-based individual atrophy detection for Alzheimer's disease and mild cognitive impairment. Hum Brain Mapp 2023; 44:1129-1146. [PMID: 36394351] [PMCID: PMC9875916] [DOI: 10.1002/hbm.26146]
Abstract
Exploring individual brain atrophy patterns is of great value in precision medicine for Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, current individual brain atrophy detection models are deficient. Here, we proposed a framework called the generative adversarial network constrained multiple loss autoencoder (GANCMLAE) for precisely depicting individual atrophy patterns. The GANCMLAE model was trained using normal controls (NCs) from the Alzheimer's Disease Neuroimaging Initiative cohort, and the Xuanwu cohort was employed to validate the robustness of the model. The potential of the model for identifying different atrophy patterns of MCI subtypes was also assessed. Furthermore, the clinical application potential of the GANCMLAE model was investigated. The results showed that the model can achieve good image reconstruction performance on the structural similarity index measure (0.929 ± 0.003), peak signal-to-noise ratio (31.04 ± 0.09), and mean squared error (0.0014 ± 0.0001) with less latent loss in the Xuanwu cohort. The individual atrophy patterns extracted from this model are more precise in reflecting the clinical symptoms of MCI subtypes. The individual atrophy patterns exhibit better discriminative power in identifying patients with AD and MCI from NCs than those of the t-test model, with areas under the receiver operating characteristic curve of 0.867 (95% CI: 0.837-0.897) and 0.752 (95% CI: 0.71-0.790), respectively. Similar findings are also reported in the AD and MCI subgroups. In conclusion, the GANCMLAE model can serve as an effective tool for individualised atrophy detection.
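The residual between an image and its pseudo-healthy reconstruction is the core of this style of atrophy detection, and the three fidelity metrics quoted above are standard. A toy sketch with random arrays standing in for real slices (skimage's metric functions; the autoencoder itself is omitted):

```python
# Minimal sketch: residual "atrophy map" plus the SSIM / PSNR / MSE fidelity metrics.
import numpy as np
from skimage.metrics import (structural_similarity, peak_signal_noise_ratio,
                             mean_squared_error)

rng = np.random.default_rng(0)
original = rng.random((128, 128)).astype(np.float32)          # stand-in for an sMRI slice
reconstruction = np.clip(original + rng.normal(0, 0.02, original.shape),
                         0, 1).astype(np.float32)             # pseudo-healthy output

atrophy_map = original - reconstruction   # large residuals flag candidate atrophy
print("SSIM:", structural_similarity(original, reconstruction, data_range=1.0))
print("PSNR:", peak_signal_noise_ratio(original, reconstruction, data_range=1.0))
print("MSE :", mean_squared_error(original, reconstruction))
# Thresholding |atrophy_map| (e.g., at a percentile of control residuals) would give
# the individual atrophy pattern used downstream for AD/MCI discrimination.
```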
Affiliation(s)
- Rong Shi: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Can Sheng: Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Shichen Jin: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Qi Zhang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Shuoyan Zhang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Liang Zhang: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
- Changchang Ding: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Luyao Wang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Lei Wang: College of Computing and Informatics, Drexel University, Philadelphia, Pennsylvania, USA
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China; Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
- Jiehui Jiang: Institute of Biomedical Engineering, School of Life Science, Shanghai University, Shanghai, China
15. Prabha S, Sakthidasan Sankaran K, Chitradevi D. Efficient optimization based thresholding technique for analysis of Alzheimer MRIs. Int J Neurosci 2023; 133:201-214. [PMID: 33715571] [DOI: 10.1080/00207454.2021.1901696]
Abstract
Purpose: Alzheimer's is a type of dementia that usually affects older adults by creating memory loss due to damaged brain cells. The damaged brain cells lead to shrinkage in the size of the brain, and it is very difficult to extract the grey matter (GM) and white matter (WM). The segmentation of GM and WM is a challenging task due to their homogeneous nature relative to the neighboring tissues. In this proposed system, an attempt has been made to extract GM and WM tissues using optimization-based segmentation techniques. Materials and methods: The optimization method is considered for the classification of normal and Alzheimer's disease (AD) subjects through magnetic resonance images (MRI) using a modified cuckoo search algorithm. Gray Level Co-occurrence Matrix (GLCM) features are calculated from the extracted GM and WM. Principal Component Analysis (PCA) is adopted for selecting the best features from the GLCM features. A Support Vector Machine (SVM) classifier is used to classify the normal and abnormal images. Results: The proposed optimization algorithm provides the most promising and efficient image segmentation compared with fuzzy c-means (FCM), Otsu, particle swarm optimization (PSO) and cuckoo search (CS). The modified cuckoo search yields a high accuracy of 96%, sensitivity of 97% and specificity of 94%, exceeding the other methods due to its powerful searching potential for the proper identification of GM and WM tissues. Conclusions: The results of the classification process proved the effectiveness of the proposed technique in identifying Alzheimer's-affected patients due to its very strong optimization ability. The proposed pipeline helps with early detection of AD and better assessment of the neuroprotective effect of a therapy.
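The classification tail of this pipeline (GLCM texture features, PCA for feature reduction, an SVM) maps directly onto scikit-image and scikit-learn. A toy sketch with random patches in place of the segmented GM/WM maps:

```python
# Minimal sketch: GLCM features -> PCA -> SVM, with random patches as stand-in inputs.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(40)])
y = rng.integers(0, 2, 40)                        # 0 = normal, 1 = AD (toy labels)

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X[:30], y[:30])
print("toy accuracy:", clf.score(X[30:], y[30:]))
```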
Affiliation(s)
- S Prabha: Department of Electronics and Communication Engineering, Hindustan Institute of Technology and Science, Chennai, India
- K Sakthidasan Sankaran: Department of Electronics and Communication Engineering, Hindustan Institute of Technology and Science, Chennai, India
- D Chitradevi: Department of Computer Science and Engineering, Hindustan Institute of Technology and Science, Chennai, India
16. Wu TH, Lian C, Lee S, Pastewait M, Piers C, Liu J, Wang F, Wang L, Chiu CY, Wang W, Jackson C, Chao WL, Shen D, Ko CC. Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans. IEEE Trans Med Imaging 2022; 41:3158-3166. [PMID: 35666796] [PMCID: PMC10547011] [DOI: 10.1109/tmi.2022.3180343]
Abstract
Accurately segmenting teeth and identifying the corresponding anatomical landmarks on dental mesh models are essential in computer-aided orthodontic treatment. Manually performing these two tasks is time-consuming, tedious, and, more importantly, highly dependent on orthodontists' experiences due to the abnormality and large-scale variance of patients' teeth. Some machine learning-based methods have been designed and applied in the orthodontic field to automatically segment dental meshes (e.g., intraoral scans). In contrast, the number of studies on tooth landmark localization is still limited. This paper proposes a two-stage framework based on mesh deep learning (called TS-MDL) for joint tooth labeling and landmark identification on raw intraoral scans. Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with both improved accuracy and efficiency) to label each tooth on the downsampled scan. Guided by the segmentation outputs, our TS-MDL further selects each tooth's region of interest (ROI) on the original mesh to construct a light-weight variant of the pioneering PointNet (i.e., PointNet-Reg) for regressing the corresponding landmark heatmaps. Our TS-MDL was evaluated on a real-clinical dataset, showing promising segmentation and localization performance. Specifically, iMeshSegNet in the first stage of TS-MDL reached an averaged Dice similarity coefficient (DSC) at 0.964±0.054 , significantly outperforming the original MeshSegNet. In the second stage, PointNet-Reg achieved a mean absolute error (MAE) of 0.597±0.761 mm in distances between the prediction and ground truth for 66 landmarks, which is superior compared with other networks for landmark detection. All these results suggest the potential usage of our TS-MDL in orthodontics.
17. Negrillo-Cárdenas J, Jiménez-Pérez JR, Cañada-Oya H, Feito FR, Delgado-Martínez AD. Hybrid curvature-geometrical detection of landmarks for the automatic analysis of the reduction of supracondylar fractures of the femur. Comput Methods Programs Biomed 2022; 226:107177. [PMID: 36242867] [DOI: 10.1016/j.cmpb.2022.107177]
Abstract
BACKGROUND AND OBJECTIVE: The analysis of the features of certain tissues is required by many procedures of modern medicine, allowing the development of more efficient treatments. The recognition of landmarks allows the planning of orthopedic and trauma surgical procedures, such as the design of prostheses or the treatment of fractures. Traditionally, their detection has been carried out by hand, making the workflow inaccurate and tedious. In this paper we propose an automatic algorithm for the detection of landmarks of human femurs and an analysis of the quality of the reduction of supracondylar fractures. METHODS: The detection of anatomical landmarks follows a knowledge-based approach, consisting of a hybrid strategy: curvature and spatial decomposition. Prior training is not required. The analysis of the reduction quality is performed by a side-to-side comparison between the healthy and fractured sides. The pre-clinical validation of the technique consists of a two-stage study: initially, we tested our algorithm with 14 healthy femurs, comparing the output with ground-truth values. Then, a total of 140 virtual fractures was processed to assess the validity of our analysis of the quality of reduction. A two-sample t test and correlation coefficients between the metrics and the degree of reduction were employed to determine the reliability of the algorithm. RESULTS: The average detection error of landmarks remained below 1.7 mm and 2° (p < 0.01) for points and axes, respectively. Regarding the contralateral analysis, the resulting P-values reveal the possibility of determining whether a supracondylar fracture is properly reduced with 95% confidence. Furthermore, the correlation between the metrics and the quality of the reduction is high. CONCLUSIONS: This research concludes that our technique allows supracondylar fracture reductions of the femur to be classified by analyzing only the detected anatomical landmarks. An initial training set is not required as input to our algorithm.
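The contralateral check described above reduces to a two-sample comparison of landmark-derived measurements from the healthy and operated sides. A minimal sketch with entirely synthetic measurements (scipy's t test; the 42 mm values are hypothetical):

```python
# Minimal sketch: two-sample t test between healthy-side and reduced-side measurements.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
healthy_side = rng.normal(loc=42.0, scale=0.8, size=20)   # e.g., landmark-derived widths (mm)
reduced_side = rng.normal(loc=42.3, scale=0.9, size=20)   # same measurement after reduction

t, p = ttest_ind(healthy_side, reduced_side)
verdict = "reduction consistent with healthy side" if p >= 0.05 else "malreduction suspected"
print(f"t = {t:.2f}, p = {p:.3f} -> {verdict}")
```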
Affiliation(s)
- Francisco R Feito: Graphics and Geomatics Group of Jaén, University of Jaén, Jaén, Spain
- Alberto D Delgado-Martínez: Department of Orthopedic Surgery, Complejo Hospitalario de Jaén, Jaén, Spain; Department of Health Sciences, University of Jaén, Jaén, Spain
18. Pan Y, Liu M, Xia Y, Shen D. Disease-Image-Specific Learning for Diagnosis-Oriented Neuroimage Synthesis With Incomplete Multi-Modality Data. IEEE Trans Pattern Anal Mach Intell 2022; 44:6839-6853. [PMID: 34156939] [PMCID: PMC9297233] [DOI: 10.1109/tpami.2021.3091214]
Abstract
The incomplete data problem commonly exists in classification tasks with multi-source data, particularly disease diagnosis with multi-modality neuroimages. To address it, some methods have been proposed to utilize all available subjects by imputing missing neuroimages. However, these methods usually treat image synthesis and disease diagnosis as two standalone tasks, thus ignoring the specificity conveyed in different modalities, i.e., different modalities may highlight different disease-relevant regions in the brain. To this end, we propose a disease-image-specific deep learning (DSDL) framework for joint neuroimage synthesis and disease diagnosis using incomplete multi-modality neuroimages. Specifically, with each whole-brain scan as input, we first design a Disease-image-Specific Network (DSNet) with a spatial cosine module to implicitly model the disease-image specificity. We then develop a Feature-consistency Generative Adversarial Network (FGAN) to impute missing neuroimages, where feature maps (generated by DSNet) of a synthetic image and its respective real image are encouraged to be consistent while preserving the disease-image-specific information. Since our FGAN is correlated with DSNet, missing neuroimages can be synthesized in a diagnosis-oriented manner. Experimental results on three datasets suggest that our method can not only generate reasonable neuroimages, but also achieve state-of-the-art performance in both tasks of Alzheimer's disease identification and mild cognitive impairment conversion prediction.
19. Yu W, Lei B, Ng MK, Cheung AC, Shen Y, Wang S. Tensorizing GAN With High-Order Pooling for Alzheimer's Disease Assessment. IEEE Trans Neural Netw Learn Syst 2022; 33:4945-4959. [PMID: 33729958] [DOI: 10.1109/tnnls.2021.3063516]
Abstract
It is of great significance to apply deep learning to the early diagnosis of Alzheimer's disease (AD). In this work, a novel tensorizing GAN with high-order pooling is proposed to assess mild cognitive impairment (MCI) and AD. By tensorizing a three-player cooperative game-based framework, the proposed model can benefit from the structural information of the brain. By incorporating the high-order pooling scheme into the classifier, the proposed model can make full use of the second-order statistics of holistic magnetic resonance imaging (MRI). To the best of our knowledge, the proposed Tensor-train, High-order pooling and Semi-supervised learning-based GAN (THS-GAN) is the first work to deal with classification of MR images for AD diagnosis. Extensive experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset are reported to demonstrate that the proposed THS-GAN achieves superior performance compared with existing methods, and to show that both tensor-train and high-order pooling can enhance classification performance. The visualization of generated samples also shows that the proposed model can generate plausible samples for semi-supervised learning purposes.
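"High-order pooling" here refers to summarizing feature maps by their second-order statistics rather than a plain average. A generic covariance-pooling layer is sketched below (assumed channel count; this is not THS-GAN's exact pooling scheme):

```python
# Minimal sketch: second-order (covariance) pooling of 3D CNN feature maps.
import torch
import torch.nn as nn

class CovariancePooling(nn.Module):
    """Pools a (B, C, D, H, W) feature map into its C x C covariance statistics."""
    def forward(self, x):
        b, c = x.shape[:2]
        feats = x.flatten(2)                               # (B, C, N spatial positions)
        feats = feats - feats.mean(dim=2, keepdim=True)    # center per channel
        cov = feats @ feats.transpose(1, 2) / (feats.shape[2] - 1)   # (B, C, C)
        iu = torch.triu_indices(c, c)
        return cov[:, iu[0], iu[1]]                        # upper triangle as a vector

pool = CovariancePooling()
vec = pool(torch.randn(2, 32, 6, 7, 6))                    # toy feature maps
classifier = nn.Linear(vec.shape[1], 3)                    # NC / MCI / AD logits
print(vec.shape, classifier(vec).shape)                    # (2, 528) (2, 3)
```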
20. Lian C, Liu M, Wang L, Shen D. Multi-Task Weakly-Supervised Attention Network for Dementia Status Estimation With Structural MRI. IEEE Trans Neural Netw Learn Syst 2022; 33:4056-4068. [PMID: 33656999] [PMCID: PMC8413399] [DOI: 10.1109/tnnls.2021.3055772]
Abstract
Accurate prediction of clinical scores (of neuropsychological tests) based on noninvasive structural magnetic resonance imaging (MRI) helps understand the pathological stage of dementia (e.g., Alzheimer's disease (AD)) and forecast its progression. Existing machine/deep learning approaches typically preselect dementia-sensitive brain locations for MRI feature extraction and model construction, potentially leading to undesired heterogeneity between different stages and degraded prediction performance. Besides, these methods usually rely on prior anatomical knowledge (e.g., brain atlas) and time-consuming nonlinear registration for the preselection of brain locations, thereby ignoring individual-specific structural changes during dementia progression because all subjects share the same preselected brain regions. In this article, we propose a multi-task weakly-supervised attention network (MWAN) for the joint regression of multiple clinical scores from baseline MRI scans. Three sequential components are included in MWAN: 1) a backbone fully convolutional network for extracting MRI features; 2) a weakly supervised dementia attention block for automatically identifying subject-specific discriminative brain locations; and 3) an attention-aware multitask regression block for jointly predicting multiple clinical scores. The proposed MWAN is an end-to-end and fully trainable deep learning model in which dementia-aware holistic feature learning and multitask regression model construction are integrated into a unified framework. Our MWAN method was evaluated on two public AD data sets for estimating clinical scores of mini-mental state examination (MMSE), clinical dementia rating sum of boxes (CDRSB), and AD assessment scale cognitive subscale (ADAS-Cog). Quantitative experimental results demonstrate that our method produces superior regression performance compared with state-of-the-art methods. Importantly, qualitative results indicate that the dementia-sensitive brain locations automatically identified by our MWAN method well retain individual specificities and are biologically meaningful.
21. Tu Y, Lin S, Qiao J, Zhuang Y, Zhang P. Alzheimer's disease diagnosis via multimodal feature fusion. Comput Biol Med 2022; 148:105901. [PMID: 35908497] [DOI: 10.1016/j.compbiomed.2022.105901]
Abstract
Alzheimer's disease (AD) is the most common neurodegenerative disorder in the elderly. Early diagnosis of AD plays a vital role in slowing down the progress of AD because there is no effective drug to treat the disease. Some deep learning models have recently been presented for AD diagnosis and have more satisfactory performance than classic machine learning methods. Nevertheless, most of the existing computer-aided diagnostic models use neuroimaging features alone for diagnosis, ignoring patients' clinical and biological information, which makes AD diagnosis less accurate. In this study, we propose a novel multimodal feature transformation and fusion model for AD diagnosis. The feature transformation aims to avoid the difference in feature dimensions between different modal data and to further mine the significant features for AD diagnosis. A geometric algebra-based feature extension method is proposed to obtain different levels of high-dimensional features from patients' clinical and personal biological data. Then, an influence degree-based feature filtration algorithm is proposed to filter out those features that have no apparent guiding significance for AD diagnosis. Finally, an ANN (artificial neural network)-based framework is designed to fuse the transformed features with neuroimaging features extracted by a CNN (convolutional neural network) for AD diagnosis. This deeper mining of patients' clinical and biological information can significantly improve the performance of computer-aided AD diagnosis. The experiments were conducted on the ADNI dataset. Our proposed model converges faster and achieves 96.2% accuracy in the AD diagnostic task and 87.4% accuracy in the MCI (mild cognitive impairment) diagnostic task. Compared with other methods, our proposed approach has excellent performance in AD diagnosis and surpasses SOTA (state-of-the-art) methods. Therefore, our model can provide more reasonable suggestions for clinicians to diagnose and treat disease.
Affiliation(s)
- Yue Tu: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Shukuan Lin: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Jianzhong Qiao: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yilin Zhuang: School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Peng Zhang: School of Computer Science and Engineering, Northeastern University, Shenyang, China
22
|
Chen R, Ma Y, Chen N, Liu L, Cui Z, Lin Y, Wang W. Structure-Aware Long Short-Term Memory Network for 3D Cephalometric Landmark Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1791-1801. [PMID: 35130151 DOI: 10.1109/tmi.2022.3149281] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Detecting 3D landmarks on cone-beam computed tomography (CBCT) is crucial to assessing and quantifying the anatomical abnormalities in 3D cephalometric analysis. However, the current methods are time-consuming and suffer from large biases in landmark localization, leading to unreliable diagnosis results. In this work, we propose a novel Structure-Aware Long Short-Term Memory framework (SA-LSTM) for efficient and accurate 3D landmark detection. To reduce the computational burden, SA-LSTM is designed in two stages. It first locates the coarse landmarks via heatmap regression on a down-sampled CBCT volume and then progressively refines landmarks by attentive offset regression using multi-resolution cropped patches. To boost accuracy, SA-LSTM captures global-local dependence among the cropping patches via self-attention. Specifically, a novel graph attention module implicitly encodes the landmark's global structure to rationalize the predicted position. Moreover, a novel attention-gated module recursively filters irrelevant local features and maintains high-confidence local predictions for aggregating the final result. Experiments conducted on an in-house dataset and a public dataset show that our method outperforms state-of-the-art methods, achieving 1.64 mm and 2.37 mm average errors, respectively. Furthermore, our method is very efficient, taking only 0.5 seconds for inferring the whole CBCT volume of resolution 768×768×576.
Collapse
|
23
|
Feng J, Zhang SW, Chen L. Extracting ROI-Based Contourlet Subband Energy Feature From the sMRI Image for Alzheimer's Disease Classification. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:1627-1639. [PMID: 33434134 DOI: 10.1109/tcbb.2021.3051177] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Structural magnetic resonance imaging (sMRI)-based classification of Alzheimer's disease (AD) and of its prodromal stage, mild cognitive impairment (MCI), has attracted much attention and been widely investigated in recent years. Owing to the high dimensionality, representation of the sMRI image becomes a difficult issue in AD classification. Furthermore, regions of interest (ROI) reflected in the sMRI image are not characterized properly by spatial analysis techniques, which has been a main cause of the weakened discriminating ability of the extracted spatial features. In this study, we propose a ROI-based contourlet subband energy (ROICSE) feature to represent the sMRI image in the frequency domain for AD classification. Specifically, a preprocessed sMRI image is first segmented into 90 ROIs by a constructed brain mask. Instead of extracting features from the 90 ROIs in the spatial domain, the contourlet transform is performed on each of these ROIs to obtain their energy subbands. Then, for each ROI, a subband energy (SE) feature vector is constructed to capture its energy distribution and contour information. Afterwards, the SE feature vectors of the 90 ROIs are concatenated to form the ROICSE feature of the sMRI image. Finally, a support vector machine (SVM) classifier is used to classify 880 subjects from the ADNI and OASIS databases. Experimental results show that the ROICSE approach outperforms six other state-of-the-art methods, demonstrating that energy and contour information of the ROIs is important for capturing differences between the sMRI images of AD and HC subjects. Meanwhile, brain regions related to AD can also be found using the ROICSE feature, indicating that the ROICSE feature can be a promising assistant imaging marker for AD diagnosis via the sMRI image. Code and sample IDs of this paper can be downloaded at https://github.com/NWPU-903PR/ROICSE.git.
Collapse
|
24
|
Chen L, Qiao H, Zhu F. Alzheimer's Disease Diagnosis With Brain Structural MRI Using Multiview-Slice Attention and 3D Convolution Neural Network. Front Aging Neurosci 2022; 14:871706. [PMID: 35557839 PMCID: PMC9088013 DOI: 10.3389/fnagi.2022.871706] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 03/17/2022] [Indexed: 01/01/2023] Open
Abstract
Numerous artificial intelligence (AI)-based approaches have been proposed for automatic Alzheimer's disease (AD) prediction with brain structural magnetic resonance imaging (sMRI). Previous studies extract features from the whole brain or from individual slices separately, ignoring the properties of multi-view slices and feature complementarity. For this reason, we present a novel AD diagnosis model based on multiview-slice attention and a 3D convolutional neural network (3D-CNN). Specifically, we begin by extracting local slice-level characteristics in various dimensions using multiple sub-networks. We then propose a slice-level attention mechanism that emphasizes specific 2D slices to exclude redundant features. After that, a 3D-CNN is employed to capture the global subject-level structural changes. Finally, all these 2D and 3D features are fused to obtain more discriminative representations. We conduct experiments on 1,451 subjects from the ADNI-1 and ADNI-2 datasets. Experimental results show the superiority of our model over state-of-the-art approaches for dementia classification. Specifically, our model achieves accuracies of 91.1% and 80.1% on ADNI-1 for AD diagnosis and mild cognitive impairment (MCI) conversion prediction, respectively.
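The general pattern of slice-level attention plus a 3D CNN branch can be sketched as below; this is a simplified stand-in under assumed layer sizes, not the published architecture, and only one (axial-like) view is shown.

```python
# Hedged sketch of combining attention-pooled 2D slice features with
# global 3D CNN features; sizes and class names are assumptions.
import torch
import torch.nn as nn

class SliceAttention2D(nn.Module):
    """Encode each 2D slice, then pool slices with learned attention weights."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(4 * 16, feat_dim), nn.ReLU())
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, slices):                        # (B, S, H, W)
        b, s, h, w = slices.shape
        f = self.encoder(slices.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        w_attn = torch.softmax(self.score(f), dim=1)  # (B, S, 1) slice weights
        return (w_attn * f).sum(dim=1)                # attention-pooled feature

class MultiViewFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.axial = SliceAttention2D()
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, 4, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.fc = nn.Linear(16 + 4, 2)

    def forward(self, volume):                        # (B, 1, D, H, W)
        axial_slices = volume[:, 0]                   # treat depth as slice axis
        fused = torch.cat([self.axial(axial_slices), self.cnn3d(volume)], dim=1)
        return self.fc(fused)

print(MultiViewFusion()(torch.randn(2, 1, 64, 64, 64)).shape)  # torch.Size([2, 2])
```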
Collapse
Affiliation(s)
- Lin Chen
- Chongqing Key Laboratory of Big Data and Intelligent Computing, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
| | - Hezhe Qiao
- Chongqing Key Laboratory of Big Data and Intelligent Computing, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Fan Zhu
- Chongqing Key Laboratory of Big Data and Intelligent Computing, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, China
| |
Collapse
|
25
|
Han K, He M, Yang F, Zhang Y. Multi-task multi-level feature adversarial network for joint Alzheimer’s disease diagnosis and atrophy localization using sMRI. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5ed5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 03/17/2022] [Indexed: 11/11/2022]
Abstract
Capitalizing on structural magnetic resonance imaging (sMRI), existing deep learning methods (especially convolutional neural networks, CNNs) have been widely and successfully applied to computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment, MCI). However, considering the generalization capability of models trained on a limited number of samples, we construct a multi-task multi-level feature adversarial network (M2FAN) for joint diagnosis and atrophy localization using baseline sMRI. Specifically, linearly aligned T1 MR images are first processed by a lightweight CNN backbone to capture shared intermediate feature representations, which are then branched into a global subnet for preliminary dementia diagnosis and a multiple-instance learning network for brain atrophy localization in a multi-task learning manner. As the global discriminative information captured by the global subnet might be unstable for disease diagnosis, we further design a multi-level feature adversarial learning module that acts as a regularizer, making the global features robust against adversarial perturbations synthesized from the local/instance features and thereby improving diagnostic performance. Our proposed method was evaluated on three public datasets (i.e. ADNI-1, ADNI-2, and AIBL), demonstrating competitive performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
Collapse
|
26
|
Lian C, Liu M, Pan Y, Shen D. Attention-Guided Hybrid Network for Dementia Diagnosis With Structural MR Images. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1992-2003. [PMID: 32721906 PMCID: PMC7855081 DOI: 10.1109/tcyb.2020.3005859] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. As it is practically challenging to capture local and subtle disease-associated abnormalities directly from the whole-brain sMRI, most of those deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, very few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, those methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully utilize the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
Collapse
|
27
|
Feng J, Zhang SW, Chen L, Zuo C. Detection of Alzheimer’s Disease Using Features of Brain Region-of-Interest-Based Individual Network Constructed with the sMRI Image. Comput Med Imaging Graph 2022; 98:102057. [DOI: 10.1016/j.compmedimag.2022.102057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 02/18/2022] [Accepted: 03/17/2022] [Indexed: 10/18/2022]
|
28
|
Han R, Liu Z, Chen CP. Multi-scale 3D convolution feature-based Broad Learning System for Alzheimer’s Disease diagnosis via MRI images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108660] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
29
|
Tufail AB, Ullah K, Khan RA, Shakir M, Khan MA, Ullah I, Ma YK, Ali MS. On Improved 3D-CNN-Based Binary and Multiclass Classification of Alzheimer's Disease Using Neuroimaging Modalities and Data Augmentation Methods. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1302170. [PMID: 35186220 PMCID: PMC8856791 DOI: 10.1155/2022/1302170] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 01/17/2022] [Accepted: 01/20/2022] [Indexed: 12/15/2022]
Abstract
Alzheimer's disease (AD) is an irreversible illness of the brain impacting the functional and daily activities of the elderly population worldwide. Neuroimaging modalities such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) measure the pathological changes in the brain associated with this disorder, especially in its early stages. Deep learning (DL) architectures such as Convolutional Neural Networks (CNNs) are successfully used in recognition, classification, segmentation, detection, and other domains for data interpretation. Data augmentation schemes work alongside DL techniques and may impact the final task performance positively or negatively. In this work, we have studied and compared the impact of three data augmentation techniques on the final performance of CNN architectures in the 3D domain for the early diagnosis of AD. We have studied both binary and multiclass classification problems using MRI and PET neuroimaging modalities. We found the performance of random zoomed in/out augmentation to be the best among all the augmentation methods. It is also observed that combining different augmentation methods may result in deteriorating performance on the classification tasks. Furthermore, we have seen that architecture engineering has less impact on the final classification performance than the data manipulation schemes. We have also observed that deeper architectures may not provide performance advantages over their shallower counterparts. Finally, these augmentation schemes do not alleviate the class imbalance issue.
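Random zoomed in/out augmentation for a 3D volume can be sketched as follows; the zoom range and the helper name random_zoom_3d are assumptions for illustration, not the exact scheme used in the study.

```python
# Hedged sketch of random zoom in/out augmentation for a 3D volume using scipy.
import numpy as np
from scipy.ndimage import zoom

def random_zoom_3d(volume, zoom_range=(0.9, 1.1), rng=None):
    """Zoom a 3D array by a random factor, then center-crop or pad back."""
    rng = rng or np.random.default_rng()
    factor = float(rng.uniform(*zoom_range))
    zoomed = zoom(volume, factor, order=1)   # linear interpolation resample
    out = np.zeros_like(volume)
    # Centered copy: crop when the zoomed volume is larger, pad when smaller.
    src, dst = [], []
    for n_out, n_in in zip(volume.shape, zoomed.shape):
        n = min(n_out, n_in)
        src.append(slice((n_in - n) // 2, (n_in - n) // 2 + n))
        dst.append(slice((n_out - n) // 2, (n_out - n) // 2 + n))
    out[tuple(dst)] = zoomed[tuple(src)]
    return out

aug = random_zoom_3d(np.random.rand(64, 64, 64))
print(aug.shape)  # (64, 64, 64)
```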
Collapse
Affiliation(s)
- Ahsan Bin Tufail
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Department of Electrical and Computer Engineering, COMSATS University Islamabad Sahiwal Campus, Sahiwal, Pakistan
| | - Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
| | - Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
| | - Mustafa Shakir
- Department of Electrical Engineering, Superior University, Lahore 54000, Pakistan
| | - Muhammad Abbas Khan
- Department of Electrical Engineering, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Balochistan 87300, Pakistan
| | - Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus 213022, China
| | - Yong-Kui Ma
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
| | - Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia-7003, Bangladesh
| |
Collapse
|
30
|
Li R, Wang X, Lawler K, Garg S, Bai Q, Alty J. Applications of Artificial Intelligence to aid detection of dementia: a scoping review on current capabilities and future directions. J Biomed Inform 2022; 127:104030. [DOI: 10.1016/j.jbi.2022.104030] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 01/21/2022] [Accepted: 02/12/2022] [Indexed: 12/17/2022]
|
31
|
Poloni KM, Ferrari RJ. Automated detection, selection and classification of hippocampal landmark points for the diagnosis of Alzheimer's disease. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106581. [PMID: 34923325 DOI: 10.1016/j.cmpb.2021.106581] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 11/12/2021] [Accepted: 12/04/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's disease (AD) is a neurodegenerative, progressive, and irreversible disease that accounts for up to 80% of all dementia cases. AD predominantly affects older adults, and its clinical diagnosis is a challenging evaluation process, with imprecision rates between 12 and 23%. Structural magnetic resonance (MR) imaging has been widely used in studies related to AD because this technique provides images with excellent anatomical detail and information about the structural changes induced by the disease in the brain. Current studies focus on detecting AD in its initial stage, i.e., mild cognitive impairment (MCI), since treatments for preventing or delaying the onset of symptoms are more effective when administered in the early stages of the disease. This study proposes a new technique to perform MR image classification in AD diagnosis using discriminative hippocampal landmark points among the cognitively normal (CN), MCI, and AD populations. METHODS Our approach, based on a two-level classification, first detects and selects discriminative landmark points from two diagnostic populations based on their matching distance to a probabilistic atlas of 3-D labeled landmark points. The points are classified using attributes computed in a spherical support region around each point, with brain tissue probability maps of gray matter, white matter, and cerebrospinal fluid as sources of information. Next, at the second level, the images are classified based on a quantitative evaluation of the first-level classifier outputs. RESULTS For the CN×MCI experiment, we achieved an AUC of 0.83 and an accuracy of 75.58%, with 72.9% sensitivity and 77.81% specificity. For the MCI×AD experiment, we achieved an AUC of 0.73, an accuracy of 69.8%, a sensitivity of 74.09%, and a specificity of 64.57%. Finally, for CN×AD, we achieved an AUC of 0.95 and an accuracy of 89.24%, with 85.58% sensitivity and 92.71% specificity. CONCLUSIONS The obtained classification results are similar to (or even higher than) those of other studies classifying AD against CN individuals, and comparable to those of studies classifying patients with MCI.
Collapse
Affiliation(s)
- Katia M Poloni
- Department of Computing, Federal University of São Carlos, Rod. Washington Luis, Km 235, São Carlos, 13565-905, SP, Brazil
| | - Ricardo J Ferrari
- Department of Computing, Federal University of São Carlos, Rod. Washington Luis, Km 235, São Carlos, 13565-905, SP, Brazil.
| |
Collapse
|
32
|
Qiao H, Chen L, Zhu F. Ranking convolutional neural network for Alzheimer's disease mini-mental state examination prediction at multiple time-points. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 213:106503. [PMID: 34798407 DOI: 10.1016/j.cmpb.2021.106503] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Accepted: 10/22/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's disease (AD) is a fatal neurodegenerative disease. Predicting the Mini-Mental State Examination (MMSE) score from magnetic resonance imaging (MRI) plays an important role in monitoring the progression of AD. Existing machine learning-based methods simply cast MMSE prediction as a single-metric regression problem and ignore the relationships between subjects with different scores. METHODS In this study, we propose a ranking convolutional neural network (rankCNN) that addresses MMSE prediction through multi-classification. Specifically, we use a 3D convolutional neural network with shared weights to extract features from MRI, followed by multiple sub-networks that transform the cognitive regression into a series of simpler binary classifications. In addition, we use a ranking layer to measure the ranking information between samples, strengthening the classifier by extracting more discriminative features. RESULTS We evaluated the proposed model on the ADNI-1 and ADNI-2 datasets with a total of 1,569 subjects. The Root Mean Squared Error (RMSE) of our proposed model at baseline is 2.238 and 2.434 on ADNI-1 and ADNI-2, respectively. Extensive experimental results on the ADNI-1 and ADNI-2 datasets demonstrate that our proposed model is superior to several state-of-the-art methods for both baseline and future MMSE prediction of subjects. CONCLUSION This paper provides a new method that can effectively predict the MMSE at baseline and at future time points using baseline MRI, making it possible to use MRI for accurate early diagnosis of AD. The source code is freely available at https://github.com/fengduqianhe/ADrankCNN-master.
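The core idea of turning score regression into a series of ordered binary classifications can be sketched as below; this generic ordinal-regression head (names and sizes assumed) only illustrates the principle, not the rankCNN implementation.

```python
# Hedged sketch: MMSE prediction as K binary "is the score greater than k?"
# classifiers on top of shared CNN features (ordinal regression).
import torch
import torch.nn as nn

class OrdinalMMSEHead(nn.Module):
    def __init__(self, feat_dim=32, max_score=30):
        super().__init__()
        # One sigmoid unit per threshold k = 0 .. max_score-1.
        self.thresholds = nn.Linear(feat_dim, max_score)

    def forward(self, feats):
        return torch.sigmoid(self.thresholds(feats))  # P(score > k) per threshold

    def predict_score(self, feats):
        # Expected score = sum_k P(score > k); a common ordinal decoding rule.
        return self.forward(feats).sum(dim=1)

head = OrdinalMMSEHead()
feats = torch.randn(4, 32)              # features from a shared 3D CNN backbone
print(head.predict_score(feats).shape)  # torch.Size([4])
```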
Collapse
Affiliation(s)
- Hezhe Qiao
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; University of Chinese Academy of Sciences, Beijing 100049, China.
| | - Lin Chen
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| | - Fan Zhu
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| |
Collapse
|
33
|
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. FRONTIERS IN RADIOLOGY 2021; 1:781868. [PMID: 37492170 PMCID: PMC10365109 DOI: 10.3389/fradi.2021.781868] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/08/2021] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and its potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). Particularly, recent deep learning-based methods for image reconstruction will be emphasized, in accordance with their methodology designs and performances in handling volumetric imaging data. It is expected that this review can help relevant researchers understand how to adapt AI for medical imaging and which advantages can be achieved with the assistance of AI.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
| | - Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
| | - Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| |
Collapse
|
34
|
Deepa PV, Joseph Jawhar S, Merry Geisa J. Diagnosis of Brain Tumor Using Nano Segmentation and Advanced-Convolutional Neural Networks Classification. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3891] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The field of nanotechnology has recently gained prominence owing to the improved identification accuracy and performance it brings to Computer-Aided Diagnosis (CAD). Nano-scale imaging enables a high level of precision and accuracy in determining whether a brain tumour is malignant or benign, which contributes to a better standard of living for people with brain tumours. In this study, we present a semantic nano-segmentation methodology for the nanoscale classification of brain tumours. The proposed Advanced-Convolutional Neural Network (A-CNN)-based semantic nano-segmentation is intended to aid radiologists in detecting brain tumours even when lesions are small, and it employs ResNet-50 as its backbone. The input is a nano-image, and the tumour image is partitioned using semantic nano-segmentation, which achieves average Dice and SSIM values of 0.9704 and 0.2133, respectively. The proposed semantic nano-segmentation achieves 93.2% and 92.7% accuracy for benign and malignant tumour images, respectively, while the A-CNN approach reaches segmentation accuracies of 99.57% and 95.7% for malignant and benign images, respectively. This nano-scale method is designed to detect tumour areas in nanometers (nm) and hence assess the illness accurately. The ROC curve, in terms of closeness to the true-positive axis, suggests that the proposed technique outperforms earlier approaches. A comparative analysis of ResNet-50 using training/testing splits of 90%-10%, 80%-20%, and 70%-30% further indicates the utility of the proposed work.
Collapse
Affiliation(s)
- P. V. Deepa
- Department of ECE, Arunachala College of Engineering for Women, Manavilai 629203, Tamilnadu, India
| | - S. Joseph Jawhar
- Department of EEE, Arunachala College of Engineering for Women, Manavilai 629203, Tamilnadu, India
| | - J. Merry Geisa
- Department of EEE, St. Xavier’s Catholic College of Engineering, 629003, Tamilnadu, India
| |
Collapse
|
35
|
Zhang P, Lin S, Qiao J, Tu Y. Diagnosis of Alzheimer's Disease with Ensemble Learning Classifier and 3D Convolutional Neural Network. SENSORS (BASEL, SWITZERLAND) 2021; 21:7634. [PMID: 34833710 PMCID: PMC8623279 DOI: 10.3390/s21227634] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 11/06/2021] [Accepted: 11/15/2021] [Indexed: 12/21/2022]
Abstract
Alzheimer's disease (AD), the most common type of dementia, is a progressive disease beginning with mild memory loss and possibly leading to loss of the ability to carry on a conversation and respond to the environment. It can seriously affect a person's ability to carry out daily activities. Therefore, early diagnosis of AD is conducive to better treatment and avoiding further deterioration of the disease. Magnetic resonance imaging (MRI) has become the main tool for studying brain tissue. It can clearly reflect the internal structure of the brain and plays an important role in the diagnosis of Alzheimer's disease. MRI data is widely used for disease diagnosis. In this paper, based on MRI data, a method combining a 3D convolutional neural network and ensemble learning is proposed to improve diagnostic accuracy. In addition, a data denoising module is proposed to reduce boundary noise. The experimental results on the ADNI dataset demonstrate that the proposed model improves the training speed of the neural network and achieves 95.2% accuracy in the AD vs. NC (normal control) task and 77.8% accuracy in the sMCI (stable mild cognitive impairment) vs. pMCI (progressive mild cognitive impairment) task for the diagnosis of Alzheimer's disease.
Collapse
Affiliation(s)
| | - Shukuan Lin
- Department of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; (P.Z.); (J.Q.); (Y.T.)
| | | | | |
Collapse
|
36
|
Qiao H, Chen L, Zhu F. A Fusion of Multi-view 2D and 3D Convolution Neural Network based MRI for Alzheimer's Disease Diagnosis. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3317-3321. [PMID: 34891950 DOI: 10.1109/embc46164.2021.9629923] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease leading to irreversible and progressive brain damage. Close monitoring is essential for slowing down the progression of AD. Magnetic Resonance Imaging (MRI) has been widely used for AD diagnosis and disease monitoring. Previous studies usually focused on extracting features from the whole image or from specific slices separately, ignoring the characteristics of each slice from multiple perspectives and the complementarity between features at different scales. In this study, we propose a novel classification method based on the fusion of multi-view 2D and 3D convolutions for MRI-based AD diagnosis. Specifically, we first use multiple sub-networks to extract the local slice-level features of each slice in different dimensions. Then a 3D convolution network is used to extract the global subject-level information of the MRI. Finally, local and global information are fused to acquire more discriminative features. Experiments conducted on the ADNI-1 and ADNI-2 datasets demonstrate the superiority of the proposed model over other state-of-the-art methods in discriminating AD from Normal Controls (NC). Our model achieves 90.2% and 85.2% accuracy on ADNI-2 and ADNI-1, respectively, and thus can be effective for AD diagnosis. The source code of our model is freely available at https://github.com/fengduqianhe/ADMultiView.
Collapse
|
37
|
Guan H, Wang C, Cheng J, Jing J, Liu T. A parallel attention-augmented bilinear network for early magnetic resonance imaging-based diagnosis of Alzheimer's disease. Hum Brain Mapp 2021; 43:760-772. [PMID: 34676625 PMCID: PMC8720194 DOI: 10.1002/hbm.25685] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 09/15/2021] [Accepted: 09/28/2021] [Indexed: 11/16/2022] Open
Abstract
Structural magnetic resonance imaging (sMRI) can capture the spatial patterns of brain atrophy in Alzheimer's disease (AD) and incipient dementia. Recently, many sMRI‐based deep learning methods have been developed for AD diagnosis. Some of these methods utilize neural networks to extract high‐level representations on the basis of handcrafted features, while others attempt to learn useful features from brain regions proposed by a separate module. However, these methods require considerable manual engineering. Their stepwise training procedures would introduce cascading errors. Here, we propose the parallel attention‐augmented bilinear network, a novel deep learning framework for AD diagnosis. Based on a 3D convolutional neural network, the framework directly learns both global and local features from sMRI scans without any prior knowledge. The framework is lightweight and suitable for end‐to‐end training. We evaluate the framework on two public datasets (ADNI‐1 and ADNI‐2) containing 1,340 subjects. On both the AD classification and mild cognitive impairment conversion prediction tasks, our framework achieves competitive results. Furthermore, we generate heat maps that highlight discriminative areas for visual interpretation. Experiments demonstrate the effectiveness of the proposed framework when medical priors are unavailable or the computing resources are limited. The proposed framework is general for 3D medical image analysis with both efficiency and interpretability.
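A bilinear fusion of a global branch and an attention-weighted local branch, the general mechanism the paper builds on, might look like the following sketch; the signed-square-root and L2 normalization steps are common practice for bilinear features and are assumptions here, not details from the paper.

```python
# Illustrative sketch of bilinear fusion of two feature streams
# (e.g., a global branch and an attention-weighted local branch).
import torch
import torch.nn as nn

class BilinearFusion(nn.Module):
    def __init__(self, dim_a=16, dim_b=16, n_classes=2):
        super().__init__()
        self.classifier = nn.Linear(dim_a * dim_b, n_classes)

    def forward(self, feat_a, feat_b):
        # Outer product captures pairwise interactions between the two streams.
        bilinear = torch.einsum('bi,bj->bij', feat_a, feat_b).flatten(1)
        bilinear = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-8)
        bilinear = nn.functional.normalize(bilinear, dim=1)  # signed sqrt + L2 norm
        return self.classifier(bilinear)

logits = BilinearFusion()(torch.randn(2, 16), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 2])
```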
Collapse
Affiliation(s)
- Hao Guan
- School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, New South Wales, Australia
| | - Chaoyue Wang
- School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, New South Wales, Australia
| | - Jian Cheng
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
| | - Jing Jing
- China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Tao Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China.,Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| |
Collapse
|
38
|
Zhu Q, Ye H, Sun L, Li Z, Wang R, Shi F, Shen D, Zhang D. GACDN: generative adversarial feature completion and diagnosis network for COVID-19. BMC Med Imaging 2021; 21:154. [PMID: 34674660 PMCID: PMC8529574 DOI: 10.1186/s12880-021-00681-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 10/05/2021] [Indexed: 01/12/2023] Open
Abstract
Background The outbreak of coronavirus disease 2019 (COVID-19) has caused tens of millions of infections worldwide. Many machine learning methods have been proposed for computer-aided discrimination between COVID-19 and community-acquired pneumonia (CAP) from chest computed tomography (CT) images. Most of these methods utilize location-specific handcrafted features based on segmentation results to improve diagnostic performance. However, the prerequisite segmentation step is time-consuming and requires intervention by many expert radiologists, which is not feasible in areas with limited medical resources. Methods We propose a generative adversarial feature completion and diagnosis network (GACDN) that simultaneously generates handcrafted features from their radiomic counterparts and makes accurate diagnoses based on both original and generated features. Specifically, we first calculate the radiomic features from the CT images. Then, to quickly obtain the location-specific handcrafted features, we use the proposed GACDN to generate them from the corresponding radiomic features. Finally, we use both radiomic features and location-specific handcrafted features for COVID-19 diagnosis. Results Regarding the performance of the generated location-specific handcrafted features, results from four basic classifiers show an average increase of 3.21% in diagnostic accuracy. In addition, experimental results on the COVID-19 dataset show that our proposed method achieves superior performance in COVID-19 vs. community-acquired pneumonia (CAP) classification compared with state-of-the-art methods. Conclusions The proposed method significantly improves the diagnostic accuracy of COVID-19 vs. CAP under the condition of incomplete location-specific handcrafted features. It is also applicable in regions lacking expert radiologists and high-performance computing resources.
Collapse
Affiliation(s)
- Qi Zhu
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China.,Corroborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, 210093, China
| | - Haizhou Ye
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| | - Liang Sun
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| | - Zhongnian Li
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| | - Ran Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 201807, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China. .,Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 201807, China.
| | - Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| |
Collapse
|
39
|
Gao K, Sun Y, Niu S, Wang L. Unified framework for early stage status prediction of autism based on infant structural magnetic resonance imaging. Autism Res 2021; 14:2512-2523. [PMID: 34643325 PMCID: PMC8665129 DOI: 10.1002/aur.2626] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 09/04/2021] [Accepted: 09/24/2021] [Indexed: 11/25/2022]
Abstract
Autism, or autism spectrum disorder (ASD), is a developmental disability that is diagnosed at about 2 years of age based on abnormal behaviors. Existing neuroimaging‐based methods for the prediction of ASD typically focus on functional magnetic resonance imaging (fMRI); however, most of these fMRI‐based studies include subjects older than 5 years of age. Due to challenges in the application of fMRI for infants, structural magnetic resonance imaging (sMRI) has increasingly received attention in the field for early status prediction of ASD. In this study, we propose an automated prediction framework based on infant sMRI at about 24 months of age. Specifically, by leveraging an infant‐dedicated pipeline, iBEAT V2.0 Cloud, we derived segmentation and parcellation maps from infant sMRI. We employed a convolutional neural network to extract features from pairwise maps and a Siamese network to distinguish whether paired subjects were from the same or different classes. As compared to T1w imaging without segmentation and parcellation maps, our proposed approach with segmentation and parcellation maps yielded greater sensitivity, specificity, and accuracy of ASD prediction, which was validated using two datasets with different imaging protocols/scanners and was confirmed by receiver operating characteristic analysis. Furthermore, comparison with state‐of‐the‐art methods demonstrated the superior effectiveness and robustness of the proposed method. Finally, attention maps were generated to identify subject‐specific autism effects, supporting the reasonability of the predictive results. Collectively, these findings demonstrate the utility of our unified framework for the early‐stage status prediction of ASD by sMRI.
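The same-versus-different pairing strategy can be sketched with a toy Siamese network as below; the encoder, feature sizes, and the absolute-difference head are illustrative assumptions rather than the authors' model on segmentation/parcellation maps.

```python
# Hedged sketch of a Siamese setup that decides whether two subjects
# belong to the same class (e.g., ASD vs. typically developing).
import torch
import torch.nn as nn

class SiameseSameDifferent(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, feat_dim), nn.ReLU())
        # Head sees the absolute feature difference of the pair.
        self.head = nn.Sequential(nn.Linear(feat_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x1, x2):
        d = torch.abs(self.encoder(x1) - self.encoder(x2))
        return self.head(d)  # logit for "same class"

net = SiameseSameDifferent()
logit = net(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
print(logit.shape)  # torch.Size([2, 1])
```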
Collapse
Affiliation(s)
- Kun Gao
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Yue Sun
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| | - Sijie Niu
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA.,School of Information Science and Engineering, University of Jinan, Jinan, China
| | - Li Wang
- Developing Brain Computing Lab, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
| |
Collapse
|
40
|
Zhu W, Sun L, Huang J, Han L, Zhang D. Dual Attention Multi-Instance Deep Learning for Alzheimer's Disease Diagnosis With Structural MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2354-2366. [PMID: 33939609 DOI: 10.1109/tmi.2021.3077079] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Structural magnetic resonance imaging (sMRI), which can reflect structural variations of the brain, is widely used for the diagnosis of neurological brain diseases. However, because brain atrophy is local, only a few regions in sMRI scans show obvious structural changes that are highly correlated with pathological features. Hence, the key challenge of sMRI-based brain disease diagnosis is to enhance the identification of discriminative features. To address this issue, we propose a dual attention multi-instance deep learning network (DA-MIDL) for the early diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Specifically, DA-MIDL consists of three primary components: 1) Patch-Nets with spatial attention blocks for extracting discriminative features within each sMRI patch while enhancing the features of abnormally changed micro-structures in the cerebrum, 2) an attention multi-instance learning (MIL) pooling operation that balances the relative contribution of each patch and yields a globally weighted representation of the whole brain structure, and 3) an attention-aware global classifier that further learns the integrated features and makes the AD-related classification decisions. Our proposed DA-MIDL model is evaluated on the baseline sMRI scans of 1689 subjects from two independent datasets (i.e., ADNI and AIBL). The experimental results show that our DA-MIDL model can identify discriminative pathological locations and achieve better classification performance in terms of accuracy and generalizability, compared with several state-of-the-art methods.
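Attention-based MIL pooling over patch features, the second component described above, is commonly formulated as in the following sketch (dimensions assumed); it is a generic illustration, not the DA-MIDL code.

```python
# Minimal sketch of attention-based multiple-instance pooling over patch features.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=32, hidden=16):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, patch_feats):                       # (B, n_patches, feat_dim)
        a = torch.softmax(self.attn(patch_feats), dim=1)  # per-patch weights
        bag = (a * patch_feats).sum(dim=1)                # weighted bag representation
        return bag, a.squeeze(-1)

pool = AttentionMILPooling()
bag, weights = pool(torch.randn(2, 60, 32))               # 60 sMRI patches per subject
print(bag.shape, weights.shape)  # torch.Size([2, 32]) torch.Size([2, 60])
```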
Collapse
|
41
|
Qiao H, Chen L, Ye Z, Zhu F. Early Alzheimer's disease diagnosis with the contrastive loss using paired structural MRIs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106282. [PMID: 34343744 DOI: 10.1016/j.cmpb.2021.106282] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 07/08/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's Disease (AD) is a chronic and fatal neurodegenerative disease with progressive impairment of memory. Brain structural magnetic resonance imaging (sMRI) has been widely applied as an important biomarker of AD. Various machine learning approaches, especially deep learning-based models, have been proposed for the early diagnosis of AD and for monitoring disease progression on sMRI data. However, the requirement for a large number of training images still hinders the extensive use of such models for AD diagnosis. In addition, because whole-brain structure is similar across individuals, finding subtle brain changes is essential for effectively extracting discriminative features from limited sMRI data. METHODS In this work, we propose two types of contrastive losses with paired sMRIs to improve diagnostic performance, using group categories (G-CAT) and varying subject Mini-Mental State Examination (S-MMSE) information, respectively. Specifically, the G-CAT contrastive loss layer is used to learn closer feature representations for sMRIs of the same category, while ranking information from S-MMSE helps the model explore subtle changes between individuals. RESULTS The model was trained on ADNI-1. Comparison with baseline methods was performed on MIRIAD and ADNI-2. For the classification task on MIRIAD, S-MMSE achieves 93.5% accuracy, 96.6% sensitivity, and 94.9% specificity. G-CAT and S-MMSE also reach remarkable performance in terms of classification sensitivity and specificity, respectively. Compared with state-of-the-art methods, the proposed method achieves comparable results. CONCLUSION The proposed model can extract discriminative features under whole-brain similarity. Extensive experiments also support the accuracy of this model, i.e., it is better able to identify uncertain samples, especially in the classification of subjects with MMSE scores in the range 22-27. Source code is freely available at https://github.com/fengduqianhe/ADComparative.
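A generic pairwise contrastive loss driven by group categories, in the spirit of the G-CAT loss, can be sketched as follows; the margin value and function name are assumptions, and the exact formulation in the paper may differ.

```python
# Hedged sketch of a pairwise contrastive loss: same-diagnosis pairs are pulled
# together, different-diagnosis pairs are pushed apart up to a margin.
import torch
import torch.nn.functional as F

def contrastive_loss(feat1, feat2, same_label, margin=1.0):
    """same_label: 1.0 if the pair shares a diagnostic category, else 0.0."""
    dist = F.pairwise_distance(feat1, feat2)
    pos = same_label * dist.pow(2)
    neg = (1.0 - same_label) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

f1, f2 = torch.randn(4, 32), torch.randn(4, 32)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(contrastive_loss(f1, f2, y).item())
```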
Collapse
Affiliation(s)
- Hezhe Qiao
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; University of Chinese Academy of Sciences, BeiJing 100049, China.
| | - Lin Chen
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| | - Zi Ye
- Johns Hopkins University, Baltimore, MD 21218, United States of America.
| | - Fan Zhu
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China.
| |
Collapse
|
42
|
Zhang J, Liu M, Lu K, Gao Y. Group-Wise Learning for Aurora Image Classification With Multiple Representations. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:4112-4124. [PMID: 30932858 DOI: 10.1109/tcyb.2019.2903591] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In conventional aurora image classification methods, it is common to employ only a single feature representation to capture the morphological characteristics of aurora images, which makes it difficult to describe the complicated morphologies of different aurora categories. Although several studies have proposed using multiple feature representations, the inherent correlation among these representations is usually neglected. To address this problem, we propose a group-wise learning (GWL) method for automatic aurora image classification using multiple representations. Specifically, we first extract multiple feature representations for aurora images and then construct a graph in each of the multiple feature spaces. To model the correlation among different representations, we partition the multiple graphs into several groups via a clustering algorithm. We further propose a GWL model to automatically estimate class labels for aurora images and optimal weights for the multiple representations in a data-driven manner. Finally, we develop a label fusion approach to make a final classification decision for new testing samples. The proposed GWL method accounts for the diverse properties of multiple feature representations by clustering the correlated representations into the same group. We evaluate our method on an aurora image dataset that contains 12,682 aurora images from 19 days. The experimental results demonstrate that the proposed GWL method achieves approximately 6% improvement in classification accuracy compared to methods using a single feature representation.
Collapse
|
43
|
Poloni KM, Duarte de Oliveira IA, Tam R, Ferrari RJ. Brain MR image classification for Alzheimer’s disease diagnosis using structural hippocampal asymmetrical attributes from directional 3-D log-Gabor filter responses. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.102] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
44
|
Liu Y, Fan L, Zhang C, Zhou T, Xiao Z, Geng L, Shen D. Incomplete multi-modal representation learning for Alzheimer's disease diagnosis. Med Image Anal 2021; 69:101953. [PMID: 33460880 DOI: 10.1016/j.media.2020.101953] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 12/25/2020] [Accepted: 12/28/2020] [Indexed: 11/29/2022]
Abstract
Alzheimer's disease (AD) is a complex neurodegenerative disease. Its early diagnosis and treatment have been a major concern of researchers. Currently, multi-modality data representation learning for this disease is becoming an emerging research field, attracting widespread attention. However, in practice, data from multiple modalities are often only partially available, and most existing multi-modal learning algorithms cannot deal with incomplete multi-modality data. In this paper, we propose an Auto-Encoder based Multi-View missing data Completion framework (AEMVC) to learn common representations for AD diagnosis. Specifically, we first map the original complete view to a latent space using an auto-encoder network framework. Then, the latent representations measuring statistical dependence learned from the complete view are used to complement the kernel matrix of the incomplete view in the kernel space. Meanwhile, the structural information of the original data and the inherent association between views are maintained by graph regularization and Hilbert-Schmidt Independence Criterion (HSIC) constraints. Finally, a kernel-based multi-view method is applied to the learned kernel matrix for the acquisition of common representations. Experimental results on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets validate the effectiveness of the proposed method.
Collapse
Affiliation(s)
- Yanbei Liu
- School of Life Sciences, Tiangong University, Tianjin 300387, China; Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, Tianjin, China
| | - Lianxi Fan
- School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
| | - Changqing Zhang
- College of Intelligence and Computing, Tianjin University, Tianjin, China.
| | - Tao Zhou
- Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
| | - Zhitao Xiao
- School of Life Sciences, Tiangong University, Tianjin 300387, China
| | - Lei Geng
- School of Life Sciences, Tiangong University, Tianjin 300387, China
| | - Dinggang Shen
- School of Biomedical Engineering, Shanghai Tech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
| |
Collapse
|
45
|
Liu C, Xie H, Zhang S, Mao Z, Sun J, Zhang Y. Misshapen Pelvis Landmark Detection With Local-Global Feature Learning for Diagnosing Developmental Dysplasia of the Hip. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3944-3954. [PMID: 32746137 DOI: 10.1109/tmi.2020.3008382] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Developmental dysplasia of the hip (DDH) is one of the most common orthopedic disorders in infants and young children. Accurately detecting and identifying the misshapen anatomical landmarks plays a crucial role in the diagnosis of DDH. However, the diversity during calcification and the deformity due to dislocation make it a difficult task for both human experts and computers to detect the misshapen pelvis landmarks. Generally, the anatomical landmarks exhibit stable morphological features in local regions and rigid structural features over long ranges, which can serve as strong identifying cues for the landmarks. In this paper, we investigate local morphological features and global structural features for misshapen landmark detection with a novel Pyramid Non-local UNet (PN-UNet). First, we mine the local morphological features with a series of convolutional neural network (CNN) stacks and convert the detection of a landmark into the segmentation of the landmark's local neighborhood by UNet. Second, a non-local module is employed to capture the global structural features with high-level structural knowledge. With end-to-end and accurate detection of pelvis landmarks, we realize a fully automatic and highly reliable diagnosis of DDH. In addition, a dataset with 10,000 pelvis X-ray images is constructed in our work. It is the first public dataset for diagnosing DDH and has already been released for open research. To the best of our knowledge, this is the first attempt to apply a deep learning method to the diagnosis of DDH. Experimental results show that our approach achieves excellent precision in landmark detection (average point-to-point error of 0.9286 mm) and surpasses human experts in illness diagnosis. The project is available at http://imcc.ustc.edu.cn/project/ddh/.
Collapse
|
46
|
Tuan TA, Pham TB, Kim JY, Tavares JMRS. Alzheimer's diagnosis using deep learning in segmenting and classifying 3D brain MR images. Int J Neurosci 2020; 132:689-698. [PMID: 33045895 DOI: 10.1080/00207454.2020.1835900] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
BACKGROUND AND OBJECTIVES Dementia is a brain disease with serious symptoms such as memory loss and thinking problems. According to the World Alzheimer Report 2016, 47 million people worldwide have dementia, and this number could reach 131 million by 2050. There is no standard method to diagnose dementia, and consequently patients are often unable to access treatment effectively. Hence, computational diagnosis of the disease from brain Magnetic Resonance Image (MRI) scans plays an important role in supporting early diagnosis. Alzheimer's Disease (AD), a common type of dementia, involves problems related to disorientation, mood swings, difficulty managing self-care, and behavioral issues. In this article, we present a new computational method to diagnose Alzheimer's disease from 3D brain MR images. METHODS An efficient approach to diagnosing Alzheimer's disease from brain MRI scans is proposed, comprising two phases: I) segmentation and II) classification, both based on deep learning. After the brain tissues are segmented by a model that combines a Gaussian Mixture Model (GMM) and a Convolutional Neural Network (CNN), a new model combining Extreme Gradient Boosting (XGBoost) and a Support Vector Machine (SVM) is used to classify Alzheimer's disease based on the segmented tissues. RESULTS We present two evaluations, for segmentation and classification. For comparison, the new method was evaluated on the AD-86 and AD-126 datasets, yielding a Dice coefficient of 0.96 for segmentation on both datasets and classification accuracies of 0.88 and 0.80, respectively. CONCLUSION Deep learning gives prominent results for segmentation and feature extraction in medical image processing. The combination of XGBoost and SVM improves the results obtained.
Collapse
Affiliation(s)
- Tran Anh Tuan
- Faculty of Mathematics and Computer Science, University of Science, Vietnam National University, Ho Chi Minh City, Vietnam
| | - The Bao Pham
- Department of Computer Science, Sai Gon University, Ho Chi Minh City, Vietnam
| | - Jin Young Kim
- Department of Electronic and Computer Engineering, Chonnam National University, Gwangju, South Korea
| | - João Manuel R S Tavares
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
| |
Collapse
|
47
|
Gray Matter Segmentation of Brain MRI Using Hybrid Enhanced Independent Component Analysis in Noisy and Noise Free Environment. JOURNAL OF BIOMIMETICS BIOMATERIALS AND BIOMEDICAL ENGINEERING 2020. [DOI: 10.4028/www.scientific.net/jbbbe.47.75] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Medical image segmentation is the primary task performed to diagnose abnormalities in the human body. The brain is a complex organ, and anatomical segmentation of brain tissues is a challenging task. In this paper, we use Enhanced Independent Component Analysis to perform the segmentation of gray matter. We use modified K-means, Expectation Maximization, and Hidden Markov Random Fields to provide better spatial correlation that overcomes inhomogeneity, noise, and low contrast. Our objective is achieved in two steps: (i) unwanted tissues are first clipped from the MRI image using a skull-stripping algorithm; (ii) Enhanced Independent Component Analysis is then used to perform the segmentation of gray matter. We apply the proposed method to both T1w and T2w MRI to segment gray matter under different noise conditions. We evaluate the performance of our proposed system with the Jaccard index, Dice coefficient, and accuracy, and we further compare the proposed system's performance with existing frameworks. Our proposed method gives better segmentation of gray matter, which is useful for diagnosing neurodegenerative disorders.
Collapse
|
48
|
Madusanka N, Choi HK, So JH, Choi BK. Alzheimer's Disease Classification Based on Multi-feature Fusion. Curr Med Imaging 2020; 15:161-169. [PMID: 31975662 DOI: 10.2174/1573405614666181012102626] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2018] [Revised: 10/01/2018] [Accepted: 10/05/2018] [Indexed: 11/22/2022]
Abstract
BACKGROUND In this study, we investigated the fusion of texture and morphometric features as a possible diagnostic biomarker for Alzheimer's Disease (AD). METHODS In particular, we classified subjects with Alzheimer's disease, Mild Cognitive Impairment (MCI) and Normal Control (NC) based on texture and morphometric features. Currently, neuropsychiatric categorization provides the ground truth for AD and MCI diagnosis, which can then be supported by biological data such as the results of imaging studies. Cerebral atrophy has been shown to correlate strongly with cognitive symptoms, so Magnetic Resonance (MR) images of the brain are important resources for AD diagnosis. In the proposed method, we used three different types of features identified from structural MR images: Gabor, hippocampus morphometric, and Two-Dimensional (2D) and Three-Dimensional (3D) Gray Level Co-occurrence Matrix (GLCM) features. The experimental results, obtained using a 5-fold cross-validated Support Vector Machine (SVM) with the 2D GLCM and 3D GLCM multi-feature fusion approaches, indicate that we achieved correct classification rates of 81.05% ±1.34 and 86.61% ±1.25, with 95% Confidence Intervals (CI) of (80.75-81.35) and (86.33-86.89) respectively, sensitivities of 83.33% ±2.15 and 84.21% ±1.42, and specificities of 80.95% ±1.52 and 85.00% ±1.24 in our classification of AD against NC subjects, thus outperforming recent works found in the literature. For the classification of MCI against AD, the SVM achieved correct classification rates of 76.31% ±2.18 and 78.95% ±2.26, sensitivities of 75.00% ±1.34 and 76.19% ±1.84, and specificities of 77.78% ±1.14 and 82.35% ±1.34. RESULTS AND CONCLUSION The results of the third experiment, with MCI against NC, also showed that the multiclass SVM provided highly accurate classification results. These findings suggest that this approach is efficient and may be a promising strategy for obtaining better AD, MCI and NC classification performance.
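A minimal sketch of the 2D GLCM texture features feeding a 5-fold cross-validated SVM, as described above. Only the GLCM+SVM part is mirrored; the Gabor and hippocampus morphometric features and the 3D GLCM variant are omitted, and the ROI patches and labels are placeholders.

```python
# Hedged sketch: GLCM texture descriptors per ROI patch, classified with a cross-validated SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

PROPS = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")

def glcm_features(patch_u8: np.ndarray) -> np.ndarray:
    # Co-occurrence matrices at one-pixel distance and four angles.
    glcm = graycomatrix(patch_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)  # placeholder ROI patches
labels = rng.integers(0, 2, size=60)                               # placeholder AD/NC labels

X = np.stack([glcm_features(p) for p in patches])
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, labels, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```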
Collapse
Affiliation(s)
- Nuwan Madusanka
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Gyeongsangnam, Korea
| | - Heung-Kook Choi
- Department of Computer Engineering, u-AHRC, Inje University, Gimhae, Gyeongsangnam, Korea
| | - Jae-Hong So
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, Gyeongsangnam, Korea
| | - Boo-Kyeong Choi
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, Gyeongsangnam, Korea
| |
Collapse
|
49
|
Pan Y, Liu M, Lian C, Xia Y, Shen D. Spatially-Constrained Fisher Representation for Brain Disease Identification With Incomplete Multi-Modal Neuroimages. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2965-2975. [PMID: 32217472 PMCID: PMC7485604 DOI: 10.1109/tmi.2020.2983085] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Multi-modal neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), can provide complementary structural and functional information about the brain, thus facilitating automated brain disease identification. The incomplete-data problem is unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard subjects with missing data, thus significantly reducing the number of training samples. Even though several deep learning methods have been proposed, they usually rely on pre-defined regions-of-interest in neuroimages, requiring disease-specific expert knowledge. To this end, we propose a spatially-constrained Fisher representation framework for brain disease diagnosis with incomplete multi-modal neuroimages. We first impute missing PET images from their corresponding MRI scans using a hybrid generative adversarial network. With the complete (after imputation) MRI and PET data, we then develop a spatially-constrained Fisher representation network to extract statistical descriptors of neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experimental results on three databases suggest that our method can synthesize reasonable neuroimages and achieve promising results in brain disease identification, compared with several state-of-the-art methods.
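A minimal sketch of a plain (unconstrained) Fisher-vector encoding of local descriptors under a diagonal-covariance GMM, to illustrate the kind of statistical descriptor referred to above. The paper's method additionally imposes the spatial constraint and learns the encoding inside a network; none of that is reproduced here, and the descriptors are hypothetical patch features.

```python
# Hedged sketch: standard Fisher-vector statistics (mean and variance gradients) under a GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    n, d = descriptors.shape
    gamma = gmm.predict_proba(descriptors)            # (n, K) soft assignments
    w, mu = gmm.weights_, gmm.means_                  # (K,), (K, d)
    sigma = np.sqrt(gmm.covariances_)                 # (K, d) diagonal std devs

    diff = (descriptors[:, None, :] - mu[None]) / sigma[None]            # (n, K, d)
    g_mu = (gamma[..., None] * diff).sum(0) / (n * np.sqrt(w)[:, None])
    g_sigma = (gamma[..., None] * (diff**2 - 1)).sum(0) / (n * np.sqrt(2 * w)[:, None])
    return np.concatenate([g_mu.ravel(), g_sigma.ravel()])               # length 2*K*d

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(2000, 16))              # placeholder patch descriptors
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(train_desc)

subject_desc = rng.normal(size=(300, 16))             # descriptors from one subject
print(fisher_vector(subject_desc, gmm).shape)         # (2 * 8 * 16,) = (256,)
```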
Collapse
Affiliation(s)
- Yongsheng Pan
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
| | - Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
| | - Chunfeng Lian
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
| | - Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
| |
Collapse
|
50
|
Liu M, Zhang J, Lian C, Shen D. Weakly Supervised Deep Learning for Brain Disease Prognosis Using MRI and Incomplete Clinical Scores. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:3381-3392. [PMID: 30932861 PMCID: PMC8034591 DOI: 10.1109/tcyb.2019.2904186] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
As a hot topic in brain disease prognosis, predicting clinical measures of subjects from brain magnetic resonance imaging (MRI) data helps to assess the stage of pathology and to predict the future development of the disease. Due to incomplete clinical labels/scores, previous learning-based studies often simply discard subjects without ground-truth scores, resulting in limited training data for learning reliable and robust models. Also, existing methods focus only on hand-crafted features (e.g., image intensity or tissue volume) of MRI data, and these features may not be well coordinated with the prediction models. In this paper, we propose a weakly supervised densely connected neural network (wiseDNN) for brain disease prognosis using baseline MRI data and incomplete clinical scores. Specifically, we first extract multiscale image patches (located by anatomical landmarks) from MRI to capture local-to-global structural information, and then develop a weakly supervised densely connected network for task-oriented extraction of imaging features and joint prediction of multiple clinical measures. A weighted loss function is further employed to make full use of all available subjects (even those without ground-truth scores at certain time points) for network training. Experimental results on 1469 subjects from the ADNI-1 and ADNI-2 datasets demonstrate that our proposed method can efficiently predict future clinical measures of subjects.
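A minimal sketch of a masked regression loss for training with incomplete clinical scores, in the spirit of the weighted loss mentioned above; the actual wiseDNN architecture and its landmark-based patch extraction are not reproduced, and the model, features and scores below are placeholders. Missing scores are encoded as NaN and simply excluded from the loss.

```python
# Hedged sketch: mean squared error computed only over observed (non-NaN) clinical scores.
import torch
import torch.nn as nn

class MaskedScoreLoss(nn.Module):
    """MSE over observed clinical scores; missing targets (NaN) contribute nothing."""
    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        mask = ~torch.isnan(target)                    # True where a score exists
        if mask.sum() == 0:                            # batch with no labels at all
            return pred.sum() * 0.0
        diff = pred[mask] - target[mask]
        return (diff ** 2).mean()

# Toy usage: predict two clinical measures from an imaging feature vector.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
criterion = MaskedScoreLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 64)                                 # placeholder imaging features
y = torch.randn(8, 2)
y[torch.rand(8, 2) < 0.3] = float("nan")               # simulate missing scores

loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```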
Collapse
|