1. Yu Q, Ma Q, Da L, Li J, Wang M, Xu A, Li Z, Li W. A transformer-based unified multimodal framework for Alzheimer's disease assessment. Comput Biol Med 2024; 180:108979. PMID: 39098237. DOI: 10.1016/j.compbiomed.2024.108979.
Abstract
In Alzheimer's disease (AD) assessment, traditional deep learning approaches have often employed separate methodologies to handle the diverse modalities of input data. Recognizing the critical need for a cohesive and interconnected analytical framework, we propose the AD-Transformer, a novel transformer-based unified deep learning model. This innovative framework seamlessly integrates structural magnetic resonance imaging (sMRI), clinical, and genetic data from the extensive Alzheimer's Disease Neuroimaging Initiative (ADNI) database, encompassing 1651 subjects. By employing a Patch-CNN block, the AD-Transformer efficiently transforms image data into image tokens, while a linear projection layer adeptly converts non-image data into corresponding tokens. At its core, a transformer block learns comprehensive representations of the input data, capturing the intricate interplay between modalities. The AD-Transformer sets a new benchmark in AD diagnosis and Mild Cognitive Impairment (MCI) conversion prediction, achieving remarkable average area under the curve (AUC) values of 0.993 and 0.845, respectively, surpassing those of traditional image-only models and non-unified multimodal models. Our experimental results confirmed the potential of the AD-Transformer as a potent tool in AD diagnosis and MCI conversion prediction. By providing a unified framework that jointly learns holistic representations of both image and non-image data, the AD-Transformer paves the way for more effective and precise clinical assessments, offering a clinically adaptable strategy for leveraging diverse data modalities in the battle against AD.
Affiliation(s)
- Qi Yu
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Qian Ma
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Lijuan Da
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Jiahui Li
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Mengying Wang
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Andi Xu
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Zilin Li
- School of Mathematics and Statistics, Northeast Normal University, Changchun, 130024, Jilin, China
- Wenyuan Li
- Department of Big Data in Health Science, School of Public Health and Center of Clinical Big Data and Analytics of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
2. Zabihi M, Kia SM, Wolfers T, de Boer S, Fraza C, Dinga R, Arenas AL, Bzdok D, Beckmann CF, Marquand A. Nonlinear latent representations of high-dimensional task-fMRI data: Unveiling cognitive and behavioral insights in heterogeneous spatial maps. PLoS One 2024; 19:e0308329. PMID: 39116147. PMCID: PMC11309387. DOI: 10.1371/journal.pone.0308329.
Abstract
Finding an interpretable and compact representation of complex neuroimaging data is extremely useful for understanding brain behavioral mapping and hence for explaining the biological underpinnings of mental disorders. However, hand-crafted representations, as well as linear transformations, may inadequately capture the considerable variability across individuals. Here, we implemented a data-driven approach using a three-dimensional autoencoder on two large-scale datasets. This approach provides a latent representation of high-dimensional task-fMRI data which can account for demographic characteristics whilst also being readily interpretable both in the latent space learned by the autoencoder and in the original voxel space. This was achieved by addressing a joint optimization problem that simultaneously reconstructs the data and predicts clinical or demographic variables. We then applied normative modeling to the latent variables to define summary statistics ('latent indices') and establish a multivariate mapping to non-imaging measures. Our model, trained with multi-task fMRI data from the Human Connectome Project (HCP) and UK biobank task-fMRI data, demonstrated high performance in age and sex predictions and successfully captured complex behavioral characteristics while preserving individual variability through a latent representation. Our model also performed competitively with respect to various baseline models including several variants of principal components analysis, independent components analysis and classical regions of interest, both in terms of reconstruction accuracy and strength of association with behavioral variables.
Affiliation(s)
- Mariam Zabihi
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- MRC Unit for Lifelong Health & Ageing, University College London (UCL), London, United Kingdom
- Seyed Mostafa Kia
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Department of Psychiatry, University Medical Center Utrecht, Utrecht, the Netherlands
- Thomas Wolfers
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- NORMENT, KG Jebsen Centre for Psychosis Research, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Tübingen, Germany
- Stijn de Boer
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Charlotte Fraza
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Richard Dinga
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Alberto Llera Arenas
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Danilo Bzdok
- Multimodal Imaging and Connectome Analysis Lab, McConnell Brain Imaging Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Mila - Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada
- Christian F. Beckmann
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Centre for Functional MRI of the Brain, University of Oxford, Oxford, United Kingdom
- Andre Marquand
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, the Netherlands
- Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, the Netherlands
- Department of Neuroimaging, Institute of Psychiatry, Psychology, & Neuroscience, King's College London, London, United Kingdom
3. Zhang M, Cui Q, Lü Y, Li W. A feature-aware multimodal framework with auto-fusion for Alzheimer's disease diagnosis. Comput Biol Med 2024; 178:108740. PMID: 38901184. DOI: 10.1016/j.compbiomed.2024.108740.
Abstract
Alzheimer's disease (AD), one of the most common dementias, accounts for about 4.6 million new cases worldwide each year. Given the large number of suspected AD patients, early screening for the disease has become particularly important. AD diagnosis data come in diverse types, such as cognitive tests, images, and risk factors, yet many prior investigations have concentrated on integrating only high-dimensional features with simple concatenation-based fusion, yielding less-than-optimal outcomes for AD diagnosis. We therefore propose an enhanced multimodal AD diagnostic framework comprising a feature-aware module and an automatic model fusion strategy (AMFS). To preserve correlated and significant features within a low-dimensional space, the feature-aware module first applies low-dimensional SHapley Additive exPlanation (SHAP) boosting feature selection; following this analysis, diverse tiers of low-dimensional features are extracted from patients' biological data. In the high-dimensional stage, the feature-aware module integrates cross-modal attention mechanisms to capture subtle relationships among different cognitive domains, neuroimaging modalities, and risk factors. We then integrate this feature-aware module with graph convolutional networks (GCNs) to address the heterogeneous data of multimodal AD while also perceiving relationships between different modalities. Lastly, our proposed AMFS autonomously learns optimal parameters for aligning the two sub-models. Validation tests using two ADNI datasets show high accuracies of 95.9% and 91.9%, respectively, in AD diagnosis. The method efficiently selects features from multimodal AD data and optimizes model fusion, offering potential clinical assistance in diagnostics.
Affiliation(s)
- Meiwei Zhang
- College of Electrical Engineering, Chongqing University, Chongqing, 400030, China
- Qiushi Cui
- College of Electrical Engineering, Chongqing University, Chongqing, 400030, China
- Yang Lü
- Department of Geriatrics, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Wenyuan Li
- College of Electrical Engineering, Chongqing University, Chongqing, 400030, China
4. Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. PMID: 38796881. DOI: 10.1016/j.compbiomed.2024.108635.
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the handling of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
5. Malik I, Iqbal A, Gu YH, Al-antari MA. Deep Learning for Alzheimer's Disease Prediction: A Comprehensive Review. Diagnostics (Basel) 2024; 14:1281. PMID: 38928696. PMCID: PMC11202897. DOI: 10.3390/diagnostics14121281.
Abstract
Alzheimer's disease (AD) is a neurological disorder that significantly impairs cognitive function, leading to memory loss and eventually death. AD progresses through three stages: early stage, mild cognitive impairment (MCI) (middle stage), and dementia. Early diagnosis of Alzheimer's disease is crucial and can improve survival rates among patients. Traditional methods for diagnosing AD through regular checkups and manual examinations are challenging. Advances in computer-aided diagnosis systems (CADs) have led to the development of various artificial intelligence and deep learning-based methods for rapid AD detection. This survey aims to explore the different modalities, feature extraction methods, datasets, machine learning techniques, and validation methods used in AD detection. We reviewed 116 relevant papers from repositories including Elsevier (45), IEEE (25), Springer (19), Wiley (6), PLOS One (5), MDPI (3), World Scientific (3), Frontiers (3), PeerJ (2), Hindawi (2), IOS Press (1), and other multiple sources (2). The review is presented in tables for ease of reference, allowing readers to quickly grasp the key findings of each study. Additionally, this review addresses the challenges in the current literature and emphasizes the importance of interpretability and explainability in understanding deep learning model predictions. The primary goal is to assess existing techniques for AD identification and highlight obstacles to guide future research.
Affiliation(s)
- Isra Malik
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 44000, Pakistan
- Ahmed Iqbal
- Department of Computer Science, Sir Syed Case Institute of Technology, Islamabad 45230, Pakistan
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
6. Zhou B, Zhao Y, Wu X. Differences of individual gray matter networks between MCI patients who converted to AD within 3 years and nonconverters. Heliyon 2024; 10:e28874. PMID: 38623255. PMCID: PMC11016615. DOI: 10.1016/j.heliyon.2024.e28874.
Abstract
Objective Here we aimed to explore the differences in individual gray matter (GM) networks at baseline in mild cognitive impairment patients who converted to Alzheimer's disease (AD) within 3 years (MCI-C) and nonconverters (MCI-NC). Materials and methods Data from 461 MCI patients (180 MCI-C and 281 MCI-NC) were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). For each subject, a GM network was constructed using 3D-T1 imaging and the Kullback-Leibler divergence method. Gradient and topological analyses of individual GM networks were performed, and partial correlations were calculated to evaluate relationships among network properties, cognitive function, and apolipoprotein E (APOE) ε4 alleles. Subsequently, a support vector machine (SVM) model was constructed to discriminate the MCI-C and MCI-NC patients at baseline. Results The gradient analysis revealed that the principal gradient score distribution was more compressed in the MCI-C group than in the MCI-NC group, with scores for the left lingual gyrus, right fusiform gyrus and left middle temporal gyrus being increased in the MCI-C group (p < 0.05, FDR corrected). The topological analysis showed significant differences in nodal efficiency in four nodes between the two groups. Furthermore, the regional gradient scores or nodal efficiency were found to be significantly related to the neuropsychological test scores, and the left middle temporal gyrus gradient scores were positively associated with the number of APOE ε4 alleles (r = 0.192, p = 0.002). Ultimately, the SVM model achieved a balanced accuracy of 79.4% in classifying MCI-C and MCI-NC patients (p < 0.001). Conclusion The whole-brain GM network hierarchy in the MCI-C group was more compressed than that in the MCI-NC group, suggesting more serious cognitive impairments in the MCI-C group. The left middle temporal gyrus gradient scores were related to both cognitive function and APOE ε4 alleles, thus serving as potential biomarkers distinguishing MCI-C from MCI-NC at baseline.
Affiliation(s)
- Baiwan Zhou
- Department of Radiology, the Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yueqi Zhao
- Department of Radiology, the Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Xiaojia Wu
- Department of Radiology, the Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
7. Chang C, Shi W, Wang Y, Zhang Z, Huang X, Jiao Y. The path from task-specific to general purpose artificial intelligence for medical diagnostics: A bibliometric analysis. Comput Biol Med 2024; 172:108258. PMID: 38467093. DOI: 10.1016/j.compbiomed.2024.108258.
Abstract
Artificial intelligence (AI) has revolutionized many fields, and its potential in healthcare has been increasingly recognized. Based on diverse data sources such as imaging, laboratory tests, medical records, and electrophysiological data, diagnostic AI has witnessed rapid development in recent years. A comprehensive understanding of the development status, contributing factors, and their relationships in the application of AI to medical diagnostics is essential to further promote its use in clinical practice. In this study, we conducted a bibliometric analysis to explore the evolution of task-specific to general-purpose AI for medical diagnostics. We used the Web of Science database to search for relevant articles published between 2010 and 2023, and applied VOSviewer, the R package Bibliometrix, and CiteSpace to analyze collaborative networks and keywords. Our analysis revealed that the field of AI in medical diagnostics has experienced rapid growth in recent years, with a focus on tasks such as image analysis, disease prediction, and decision support. Collaborative networks were observed among researchers and institutions, indicating a trend of global cooperation in this field. Additionally, we identified several key factors contributing to the development of AI in medical diagnostics, including data quality, algorithm design, and computational power. Challenges to progress in the field include model explainability, robustness, and equality, which will require multi-stakeholder, interdisciplinary collaboration to tackle. Our study provides a holistic understanding of the path from task-specific, mono-modal AI toward general-purpose, multimodal AI for medical diagnostics. With the continuous improvement of AI technology and the accumulation of medical data, we believe that AI will play a greater role in medical diagnostics in the future.
Affiliation(s)
- Chuheng Chang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; 4+4 Medical Doctor Program, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wen Shi
- Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Youyang Wang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Zhan Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaoming Huang
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Yang Jiao
- Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
8. Jang Y, Choi H, Yoo S, Park H, Park BY. Structural connectome alterations between individuals with autism and neurotypical controls using feature representation learning. Behav Brain Funct 2024; 20:2. PMID: 38267953. PMCID: PMC10807082. DOI: 10.1186/s12993-024-00228-z.
Abstract
Autism spectrum disorder is one of the most common neurodevelopmental conditions associated with sensory and social communication impairments. Previous neuroimaging studies reported that atypical nodal- or network-level functional brain organization in individuals with autism was associated with autistic behaviors. Although dimensionality reduction techniques have the potential to uncover new biomarkers, the analysis of whole-brain structural connectome abnormalities in a low-dimensional latent space is underinvestigated. In this study, we utilized autoencoder-based feature representation learning for diffusion magnetic resonance imaging-based structural connectivity in 80 individuals with autism and 61 neurotypical controls that passed strict quality controls. We generated low-dimensional latent features using the autoencoder model for each group and adopted an integrated gradient approach to assess the contribution of the input data for predicting latent features during the encoding process. Subsequently, we compared the integrated gradient values between individuals with autism and neurotypical controls and observed differences within the transmodal regions and between the sensory and limbic systems. Finally, we identified significant associations between integrated gradient values and communication abilities in individuals with autism. Our findings provide insights into the whole-brain structural connectome in autism and may help identify potential biomarkers for autistic connectopathy.
Affiliation(s)
- Yurim Jang
- Artificial Intelligence Convergence Research Center, Inha University, Incheon, Republic of Korea
- Hyoungshin Choi
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Seulki Yoo
- Convergence Research Institute, Sungkyunkwan University, Suwon, Republic of Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Bo-Yong Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Department of Data Science, Inha University, Incheon, Republic of Korea
9. Adarsh V, Gangadharan GR, Fiore U, Zanetti P. Multimodal classification of Alzheimer's disease and mild cognitive impairment using custom MKSCDDL kernel over CNN with transparent decision-making for explainable diagnosis. Sci Rep 2024; 14:1774. PMID: 38245656. PMCID: PMC10799876. DOI: 10.1038/s41598-024-52185-2.
Abstract
The study presents an innovative diagnostic framework that synergises Convolutional Neural Networks (CNNs) with a Multi-feature Kernel Supervised within-class-similar Discriminative Dictionary Learning (MKSCDDL). This integrative methodology is designed to facilitate the precise classification of individuals into categories of Alzheimer's Disease, Mild Cognitive Impairment (MCI), and Cognitively Normal (CN) statuses while also discerning the nuanced phases within the MCI spectrum. Our approach is distinguished by its robustness and interpretability, offering clinicians an exceptionally transparent tool for diagnosis and therapeutic strategy formulation. We use scandent decision trees to handle the unpredictability and complexity of neuroimaging data. Because brain scans vary from person to person, this enables the model to make more detailed, individualised assessments and to illuminate the specific neuroanatomical regions that are indicative of cognitive impairment. This explanation is beneficial for clinicians because it gives them concrete ideas for early intervention and targeted care. Empirical evaluation shows that the model achieves an unmatched classification efficacy of 98.27%, demonstrating its ability to identify the brain regions that may be affected by cognitive diseases.
Affiliation(s)
- V Adarsh
- National Institute of Technology Tiruchirappalli, Tiruchirappalli, India
- G R Gangadharan
- National Institute of Technology Tiruchirappalli, Tiruchirappalli, India
- Ugo Fiore
- University of Salerno, Fisciano, Italy
10. Choi H, Byeon K, Lee J, Hong S, Park B, Park H. Identifying subgroups of eating behavior traits unrelated to obesity using functional connectivity and feature representation learning. Hum Brain Mapp 2024; 45:e26581. PMID: 38224537. PMCID: PMC10789215. DOI: 10.1002/hbm.26581.
Abstract
Eating behavior is highly heterogeneous across individuals and cannot be fully explained using only the degree of obesity. We utilized unsupervised machine learning and functional connectivity measures to explore the heterogeneity of eating behaviors measured by a self-assessment instrument using 424 healthy adults (mean ± standard deviation [SD] age = 47.07 ± 18.89 years; 67% female). We generated low-dimensional representations of functional connectivity using resting-state functional magnetic resonance imaging and estimated latent features using the feature representation capabilities of an autoencoder by nonlinearly compressing the functional connectivity information. The clustering approaches applied to latent features identified three distinct subgroups. The subgroups exhibited different levels of hunger traits, while their body mass indices were comparable. The results were replicated in an independent dataset consisting of 212 participants (mean ± SD age = 38.97 ± 19.80 years; 35% female). The model interpretation technique of integrated gradients revealed that the between-group differences in the integrated gradient maps were associated with functional reorganization in heteromodal association and limbic cortices and reward-related subcortical structures such as the accumbens, amygdala, and caudate. The cognitive decoding analysis revealed that these systems are associated with reward- and emotion-related systems. Our findings provide insights into the macroscopic brain organization of eating behavior-related subgroups independent of obesity.
Affiliation(s)
- Hyoungshin Choi
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Jong-eun Lee
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Seok-Jun Hong
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Center for the Developing Brain, Child Mind Institute, New York, USA
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Bo-yong Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Department of Data Science, Inha University, Incheon, Republic of Korea
- Department of Statistics and Data Science, Inha University, Incheon, Republic of Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
11
Qiao C, Gao B, Liu Y, Hu X, Hu W, Calhoun VD, Wang YP. Deep learning with explainability for characterizing age-related intrinsic differences in dynamic brain functional connectivity. Med Image Anal 2023; 90:102941. [PMID: 37683445 DOI: 10.1016/j.media.2023.102941] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 08/19/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023]
Abstract
Although many deep learning-based medical applications are performance-driven, i.e., accuracy-oriented, their explainability is often more critical. This is especially the case in neuroimaging, where we are often interested in identifying biomarkers underlying brain development or disorders. Herein we propose an explainable deep learning approach that elucidates the information transmission mechanism between two layers of a deep network with a joint feature selection strategy, which combines several shallow-layer explainable machine learning models with sparse learning of the deep network. Finally, we apply and validate the proposed approach in an analysis of dynamic brain functional connectivity (FC) from fMRI in a brain development study. Our approach can identify the differences within and between functional brain networks over age during development. The results indicate that the brain network transits from undifferentiated structures to more specialized and organized ones, and that information processing becomes more efficient as age increases. In addition, we detect two developmental patterns in the brain network: the FCs in regions related to visual and sound processing and mental regulation weaken, while those between regions corresponding to emotional processing and cognitive activities are enhanced.
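The core idea of a joint feature selection that pools several shallow, explainable models can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's layer-wise method: it simply keeps the features on which a sparse (L1) linear model and a tree-based importance ranking agree.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 30 features, only 5 of which carry signal.
X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=0)

# Two shallow, explainable models each score every feature...
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...and the joint selection keeps the features both models agree on:
# nonzero L1 coefficients AND above-median tree importance.
l1_keep = np.abs(l1.coef_).ravel() > 0
rf_keep = rf.feature_importances_ > np.median(rf.feature_importances_)
selected = np.flatnonzero(l1_keep & rf_keep)
print(selected)
```

Requiring agreement between models of different inductive biases is a simple way to make the selected features more robust than either ranking alone.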
Affiliation(s)
- Chen Qiao
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
- Bin Gao
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
- Yuechen Liu
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
- Xinyu Hu
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, PR China.
- Wenxing Hu
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA.
- Vince D Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, USA; Emory University, Atlanta, GA 30303, USA.
- Yu-Ping Wang
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA.
12
Zhu X, Kim Y, Ravid O, He X, Suarez-Jimenez B, Zilcha-Mano S, Lazarov A, Lee S, Abdallah CG, Angstadt M, Averill CL, Baird CL, Baugh LA, Blackford JU, Bomyea J, Bruce SE, Bryant RA, Cao Z, Choi K, Cisler J, Cotton AS, Daniels JK, Davenport ND, Davidson RJ, DeBellis MD, Dennis EL, Densmore M, deRoon-Cassini T, Disner SG, Hage WE, Etkin A, Fani N, Fercho KA, Fitzgerald J, Forster GL, Frijling JL, Geuze E, Gonenc A, Gordon EM, Gruber S, Grupe DW, Guenette JP, Haswell CC, Herringa RJ, Herzog J, Hofmann DB, Hosseini B, Hudson AR, Huggins AA, Ipser JC, Jahanshad N, Jia-Richards M, Jovanovic T, Kaufman ML, Kennis M, King A, Kinzel P, Koch SBJ, Koerte IK, Koopowitz SM, Korgaonkar MS, Krystal JH, Lanius R, Larson CL, Lebois LAM, Li G, Liberzon I, Lu GM, Luo Y, Magnotta VA, Manthey A, Maron-Katz A, May G, McLaughlin K, Mueller SC, Nawijn L, Nelson SM, Neufeld RWJ, Nitschke JB, O'Leary EM, Olatunji BO, Olff M, Peverill M, Phan KL, Qi R, Quidé Y, Rektor I, Ressler K, Riha P, Ross M, Rosso IM, Salminen LE, Sambrook K, Schmahl C, Shenton ME, Sheridan M, Shih C, Sicorello M, Sierk A, Simmons AN, Simons RM, Simons JS, Sponheim SR, Stein MB, Stein DJ, Stevens JS, Straube T, Sun D, Théberge J, Thompson PM, Thomopoulos SI, van der Wee NJA, van der Werff SJA, van Erp TGM, van Rooij SJH, van Zuiden M, Varkevisser T, Veltman DJ, Vermeiren RRJM, Walter H, Wang L, Wang X, Weis C, Winternitz S, Xie H, Zhu Y, Wall M, Neria Y, Morey RA. Neuroimaging-based classification of PTSD using data-driven computational approaches: A multisite big data study from the ENIGMA-PGC PTSD consortium. Neuroimage 2023; 283:120412. [PMID: 37858907 PMCID: PMC10842116 DOI: 10.1016/j.neuroimage.2023.120412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 09/10/2023] [Accepted: 10/16/2023] [Indexed: 10/21/2023] Open
Abstract
BACKGROUND Recent advances in data-driven computational approaches have been helpful in devising tools to objectively diagnose psychiatric disorders. However, current machine learning studies are limited to small homogeneous samples, differing methodologies, and differing imaging collection protocols, which limits the ability to directly compare and generalize their results. Here we aimed to classify individuals with PTSD versus controls and to assess generalizability using large heterogeneous brain datasets from the ENIGMA-PGC PTSD Working Group. METHODS We analyzed brain MRI data from 3,477 structural MRI, 2,495 resting-state fMRI, and 1,952 diffusion MRI scans. First, we identified the brain features that best distinguish individuals with PTSD from controls using traditional machine learning methods. Second, we assessed the utility of the denoising variational autoencoder (DVAE) and evaluated its classification performance. Third, we assessed the generalizability and reproducibility of both models using a leave-one-site-out cross-validation procedure for each modality. RESULTS We found lower performance in classifying PTSD vs. controls with data from over 20 sites (60% test AUC for s-MRI, 59% for rs-fMRI and 56% for d-MRI), as compared to other studies run on single-site data. The performance increased when classifying PTSD versus healthy controls (HC) without trauma history in each modality (75% AUC). The classification performance remained intact when applying the DVAE framework, which reduced the number of features. Finally, we found that the DVAE framework achieved better generalization to unseen datasets than the traditional machine learning frameworks, albeit performance was only slightly above chance. CONCLUSION These results have the potential to provide a baseline classification performance for PTSD when using large-scale neuroimaging datasets. Our findings show that the choice of control group can heavily affect classification performance.
The DVAE framework provided better generalizability for the multi-site data. This may be more significant in clinical practice since the neuroimaging-based diagnostic DVAE classification models are much less site-specific, rendering them more generalizable.
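The leave-one-site-out scheme used here generalizes ordinary cross-validation by holding out all subjects from one acquisition site at a time. A minimal sketch with scikit-learn's `LeaveOneGroupOut` on synthetic multi-site data (the site effect, classifier, and sizes below are invented for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic multi-site data: 5 "sites" of 100 subjects each,
# with a crude additive site effect on the features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
sites = np.repeat(np.arange(5), 100)
X = X + sites[:, None] * 0.5

# Each fold trains on 4 sites and tests on the held-out fifth,
# so the test AUC reflects transfer to an unseen site.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=sites, cv=LeaveOneGroupOut(),
                         scoring="roc_auc")
print(scores.shape)  # one AUC per held-out site -> (5,)
```

Plain k-fold would mix every site into both train and test splits and therefore overstate how well the model travels to a new scanner or protocol.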
Affiliation(s)
- Xi Zhu
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA; New York State Psychiatric Institute, New York, NY, USA
- Yoojean Kim
- New York State Psychiatric Institute, New York, NY, USA
- Orren Ravid
- New York State Psychiatric Institute, New York, NY, USA
- Xiaofu He
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA
- Seonjoo Lee
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA; New York State Psychiatric Institute, New York, NY, USA
- Chadi G Abdallah
- Baylor College of Medicine, Houston, TX, USA; Yale University School of Medicine, New Haven, CT, USA
- Christopher L Averill
- Baylor College of Medicine, Houston, TX, USA; Yale University School of Medicine, New Haven, CT, USA
- Lee A Baugh
- Sanford School of Medicine, University of South Dakota, Vermillion, SD, USA
- Steven E Bruce
- Center for Trauma Recovery, Department of Psychological Sciences, University of Missouri-St. Louis, St. Louis, MO, USA
- Richard A Bryant
- School of Psychology, University of New South Wales, Sydney, NSW, Australia
- Zhihong Cao
- Department of Radiology, The Affiliated Yixing Hospital of Jiangsu University, Yixing, Jiangsu, China
- Kyle Choi
- University of California San Diego, La Jolla, CA, USA
- Josh Cisler
- Department of Psychiatry, University of Texas at Austin, Austin, TX, USA
- Emily L Dennis
- University of Utah School of Medicine, Salt Lake City, UT, USA
- Maria Densmore
- Departments of Psychology and Psychiatry, Neuroscience Program, Western University, London, ON, Canada; Department of Psychology, University of British Columbia, Okanagan, Kelowna, British Columbia, Canada
- Seth G Disner
- Minneapolis VA Health Care System, Minneapolis, MN, USA
- Wissam El Hage
- UMR 1253, CIC 1415, University of Tours, CHRU de Tours, INSERM, France
- Negar Fani
- Emory University Department of Psychiatry and Behavioral Sciences, Atlanta, GA, USA
- Kelene A Fercho
- Civil Aerospace Medical Institute, US Federal Aviation Administration, Oklahoma City, OK, USA
- Gina L Forster
- Brain Health Research Centre, Department of Anatomy, University of Otago, Dunedin, New Zealand
- Jessie L Frijling
- Department of Psychiatry, Amsterdam University Medical Centers, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Elbert Geuze
- Brain Research and Innovation Centre, Ministry of Defence, Utrecht, The Netherlands
- Atilla Gonenc
- Cognitive and Clinical Neuroimaging Core, McLean Hospital, Belmont, MA, USA
- Evan M Gordon
- Department of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Staci Gruber
- Cognitive and Clinical Neuroimaging Core, McLean Hospital, Belmont, MA, USA
- Jeffrey P Guenette
- Division of Neuroradiology, Brigham and Women's Hospital, Boston, MA, USA
- Ryan J Herringa
- School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, USA
- Neda Jahanshad
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, Marina del Rey, CA, USA
- Milissa L Kaufman
- Division of Women's Mental Health, McLean Hospital, Belmont, MA, USA
- Mitzy Kennis
- Brain Research and Innovation Centre, Ministry of Defence, Utrecht, The Netherlands
- Philipp Kinzel
- Department of Child and Adolescent Psychiatry, Psychosomatic and Psychotherapy, Ludwig Maximilian University of Munich, Munich, Germany; Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Boston, MA, USA
- Saskia B J Koch
- Donders Institute for Brain, Cognition and Behavior, Centre for Cognitive Neuroimaging, Radboud University Nijmegen, Nijmegen, The Netherlands
- Inga K Koerte
- Department of Child and Adolescent Psychiatry, Psychosomatic and Psychotherapy, Ludwig Maximilian University of Munich, Munich, Germany; Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Boston, MA, USA
- Ruth Lanius
- Department of Neuroscience, Western University, London, ON, Canada
- Lauren A M Lebois
- McLean Hospital, Belmont, MA, USA; Harvard Medical School, Boston, MA, USA
- Gen Li
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Israel Liberzon
- Psychiatry and Behavioral Science, Texas A&M University Health Science Center, College Station, TX, USA
- Guang Ming Lu
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Yifeng Luo
- Department of Radiology, The Affiliated Yixing Hospital of Jiangsu University, Yixing, Jiangsu, China
- Antje Manthey
- Charité Universitätsmedizin Berlin Campus Charite Mitte: Charite Universitatsmedizin Berlin, Berlin, Germany
- Geoffery May
- VISN 17 Center of Excellence for Research on Returning War Veterans, Waco, TX, USA
- Laura Nawijn
- Department of Psychiatry, Amsterdam University Medical Centers, VU University Medical Center, VU University, Amsterdam, The Netherlands
- Steven M Nelson
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
- Richard W J Neufeld
- Departments of Psychology and Psychiatry, Neuroscience Program, Western University, London, ON, Canada; Department of Psychology, University of British Columbia, Okanagan, Kelowna, British Columbia, Canada
- Bunmi O Olatunji
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Miranda Olff
- Department of Psychiatry, Amsterdam University Medical Centers, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- K Luan Phan
- Department of Psychiatry and Behavioral Health, Ohio State University, Columbus, OH, USA
- Rongfeng Qi
- Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Yann Quidé
- School of Psychology, University of New South Wales, Sydney, NSW, Australia; Neuroscience Research Australia, Randwick, NSW, Australia
- Kerry Ressler
- McLean Hospital, Belmont, MA, USA; Harvard Medical School, Boston, MA, USA
- Marisa Ross
- Northwestern Neighborhood and Networks Initiative, Northwestern University Institute for Policy Research, Evanston, IL, USA
- Isabelle M Rosso
- McLean Hospital, Belmont, MA, USA; Harvard Medical School, Boston, MA, USA
- Lauren E Salminen
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, Marina del Rey, CA, USA
- Martha E Shenton
- Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Boston, MA, USA
- Anika Sierk
- Charité Universitätsmedizin Berlin Campus Charite Mitte: Charite Universitatsmedizin Berlin, Berlin, Germany
- Alan N Simmons
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Scott R Sponheim
- Minneapolis VA Health Care System, Minneapolis, MN, USA; University of Minnesota, Minneapolis, MN, USA
- Dan J Stein
- University of Cape Town, Cape Town, South Africa
- Jennifer S Stevens
- Emory University Department of Psychiatry and Behavioral Sciences, Atlanta, GA, USA
- Jean Théberge
- Departments of Psychology and Psychiatry, Neuroscience Program, Western University, London, ON, Canada; Department of Psychology, University of British Columbia, Okanagan, Kelowna, British Columbia, Canada
- Paul M Thompson
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, Marina del Rey, CA, USA
- Sophia I Thomopoulos
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of the University of Southern California, Marina del Rey, CA, USA
- Sanne J H van Rooij
- Emory University Department of Psychiatry and Behavioral Sciences, Atlanta, GA, USA
- Mirjam van Zuiden
- Department of Psychiatry, Amsterdam University Medical Centers, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Tim Varkevisser
- Brain Research and Innovation Centre, Ministry of Defence, Utrecht, The Netherlands
- Dick J Veltman
- Department of Psychiatry, Amsterdam University Medical Centers, VU University Medical Center, VU University, Amsterdam, The Netherlands
- Henrik Walter
- Charité Universitätsmedizin Berlin Campus Charite Mitte: Charite Universitatsmedizin Berlin, Berlin, Germany
- Li Wang
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xin Wang
- University of Toledo, Toledo, OH, USA
- Carissa Weis
- Medical College of Wisconsin, Milwaukee, WI, USA
- Sherry Winternitz
- Division of Women's Mental Health, McLean Hospital, Belmont, MA, USA
- Hong Xie
- University of Toledo, Toledo, OH, USA
- Ye Zhu
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Melanie Wall
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA; New York State Psychiatric Institute, New York, NY, USA
- Yuval Neria
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA
13
Shanmugavadivel K, Sathishkumar VE, Cho J, Subramanian M. Advancements in computer-assisted diagnosis of Alzheimer's disease: A comprehensive survey of neuroimaging methods and AI techniques for early detection. Ageing Res Rev 2023; 91:102072. [PMID: 37709055 DOI: 10.1016/j.arr.2023.102072] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 09/05/2023] [Accepted: 09/10/2023] [Indexed: 09/16/2023]
Abstract
Alzheimer's Disease (AD) is a brain disorder that causes the brain to shrink and eventually causes brain cells to die. This neurological condition progressively hampers cognitive and memory functions, along with the ability to carry out fundamental tasks. Because early symptoms are subtle, AD is very difficult to detect in its early stage, which has made computer-assisted diagnostic models for early AD detection necessary. This survey reviews 110 published AD detection methods and techniques from 2011 to date. Its strength lies in a comprehensive exploration of AD detection methods across a range of artificial intelligence (AI) techniques and neuroimaging modalities. By collecting and analysing 50 papers on AD diagnosis datasets, the study provides a comprehensive understanding of the diversity of input types, subjects, and classes used in AD research. Summarizing 60 papers on methodologies gives researchers a succinct overview of approaches that contribute to enhancing detection accuracy. The review covers how data are acquired and pre-processed from multiple neuroimaging modalities, focusing on the datasets used, the feature extraction methods, and the parameters used in neuroimages. To diagnose Alzheimer's disease, existing methods rely on three common AI techniques: machine learning, deep learning, and transfer learning. We conclude this survey by providing future perspectives for AD diagnosis at an early stage.
Affiliation(s)
- V E Sathishkumar
- Department of Software Engineering, Jeonbuk National University, Jeonju-si, Jeollabuk-do 54896, Republic of Korea
- Jaehyuk Cho
- Department of Software Engineering, Jeonbuk National University, Jeonju-si, Jeollabuk-do 54896, Republic of Korea.
14
Duan H, Wang H, Chen Y, Liu F, Tao L. EAMNet: an Alzheimer's disease prediction model based on representation learning. Phys Med Biol 2023; 68:215005. [PMID: 37774713 DOI: 10.1088/1361-6560/acfec8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2023] [Accepted: 09/29/2023] [Indexed: 10/01/2023]
Abstract
Objective. Brain 18F-FDG PET images indicate the metabolic status of brain lesions and offer predictive potential for Alzheimer's disease (AD). However, the complexity of extracting relevant lesion features and dealing with extraneous information in PET images poses challenges for accurate prediction. Approach. To address these issues, we propose an efficient adaptive multiscale network (EAMNet) for predicting potential patient populations from positron emission tomography (PET) image slices, enabling effective intervention and treatment. First, we introduce an efficient convolutional strategy to enhance the receptive field of PET images during feature learning, avoiding excessive extraction of fine tissue features by deep-level networks while reducing the model's computational complexity. Second, we construct a channel attention module that enables the prediction model to adaptively allocate weights between different channels, compensating for the impact of spatial noise in PET images on classification. Finally, we use skip connections to merge features from different-scale lesion information. Main results. Through visualization analysis, our network aligns with regions of interest identified by clinical doctors. Experimental evaluations conducted on the ADNI (Alzheimer's Disease Neuroimaging Initiative) dataset demonstrate the outstanding classification performance of the proposed method. The accuracy rates for AD versus NC (normal controls), AD versus MCI (mild cognitive impairment), MCI versus NC, and AD versus MCI versus NC classifications reach 97.66%, 96.32%, 95.23%, and 95.68%, respectively. Significance. The proposed method surpasses advanced algorithms in the field, providing a promising advance in accurately predicting and classifying Alzheimer's disease using 18F-FDG PET images.
The source code has been uploaded to https://github.com/Haoliang-D-AHU/EAMNet/tree/master.
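The channel attention idea in this abstract — gating each feature channel by a weight learned from its global statistics — can be sketched in a few lines of NumPy. This is a generic squeeze-and-excitation-style illustration with invented shapes and random weights, not EAMNet's actual module (the repository above holds the real implementation).

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map."""
    squeeze = feat.mean(axis=(1, 2))                # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]              # reweight each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                             # channels, height, width, reduction
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C)) * 0.1             # random stand-ins for learned weights
w2 = rng.normal(size=(C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gates lie in (0, 1), noisy channels are attenuated while informative ones pass through nearly unchanged, which is the compensation mechanism the abstract describes.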
Affiliation(s)
- Haoliang Duan
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China
- School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Huabin Wang
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China
- School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Yonglin Chen
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China
- School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Fei Liu
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China
- School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Liang Tao
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China
- School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
15
Gao X, Liu H, Shi F, Shen D, Liu M. Brain Status Transferring Generative Adversarial Network for Decoding Individualized Atrophy in Alzheimer's Disease. IEEE J Biomed Health Inform 2023; 27:4961-4970. [PMID: 37607152 DOI: 10.1109/jbhi.2023.3304388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
Deep learning has been widely investigated in brain image computational analysis for diagnosing brain diseases such as Alzheimer's disease (AD). Most existing methods build end-to-end models to learn discriminative features by group-wise analysis. However, these methods cannot detect pathological changes in each subject, which is essential for the individualized interpretation of disease variance and precision medicine. In this article, we propose a brain status transferring generative adversarial network (BrainStatTrans-GAN) to generate corresponding healthy images of patients, which are further used to decode individualized brain atrophy. The BrainStatTrans-GAN consists of a generator, a discriminator, and a status discriminator. First, a normative GAN is built to generate healthy brain images from normal controls. However, it cannot generate healthy images from diseased ones due to the lack of paired healthy and diseased images. To address this problem, a status discriminator with adversarial learning is added to the training process to produce healthy brain images for patients. Then, the residual between the generated and input images can be computed to quantify pathological brain changes. Finally, a residual-based multi-level fusion network (RMFN) is built for more accurate disease diagnosis. Compared to existing methods, our method can model individualized brain atrophy, facilitating disease diagnosis and interpretation. Experimental results on T1-weighted magnetic resonance imaging (MRI) data of 1,739 subjects from three datasets demonstrate the effectiveness of our method.
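The residual step is the simplest part of this pipeline: once a pseudo-healthy counterpart of a patient scan exists, the individualized atrophy map is just the voxel-wise difference. A toy sketch with a synthetic 2-D "scan" and a hand-made pseudo-healthy image standing in for the GAN output (no GAN involved; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
patient = rng.normal(size=(64, 64))              # stand-in for a patient slice
pseudo_healthy = patient.copy()
pseudo_healthy[20:30, 20:30] += 0.8              # pretend the generator "restored" tissue here

# Residual between generated and input images quantifies individual change.
atrophy_map = np.abs(pseudo_healthy - patient)
print(atrophy_map.shape)  # (64, 64); nonzero only in the altered region
```

In the paper this residual feeds the RMFN classifier; here it simply localizes where the pseudo-healthy image diverges from the input.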
16
Ma T, Wang H, Ye Z. Artificial intelligence applications in computed tomography in gastric cancer: a narrative review. Transl Cancer Res 2023; 12:2379-2392. [PMID: 37859746 PMCID: PMC10583011 DOI: 10.21037/tcr-23-201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Accepted: 08/01/2023] [Indexed: 10/21/2023]
Abstract
Background and Objective Artificial intelligence (AI) is a revolutionary technology that is deeply impacting and reshaping clinical practice in oncology. This review aims to summarize the current status of clinical applications of AI-based computed tomography (CT) in gastric cancer (GC), focusing on diagnosis, genetic status detection, and risk prediction of metastasis, prognosis, and treatment efficacy. The challenges and prospects for future research are also discussed. Methods We searched the PubMed/MEDLINE database to identify clinical studies published between 1990 and November 2022 that investigated AI applications of CT in GC. The major findings of the verified studies were summarized. Key Content and Findings AI applications to CT images have attracted considerable attention in various fields, such as diagnosis and prediction of metastasis risk, survival, and treatment response. These emerging techniques have shown high potential to outperform clinicians in diagnostic accuracy and time savings. Conclusions AI-powered tools show great potential to increase diagnostic accuracy and reduce radiologists' workload. However, the goal of AI is not to replace human ability but to help oncologists make decisions in their practice. Therefore, radiologists should play a predominant role in AI applications and decide the best ways to integrate these complementary techniques within clinical practice.
Affiliation(s)
- Tingting Ma
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Hua Wang
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
17
Kim B, Lee GY, Park SH. Attention fusion network with self-supervised learning for staging of osteonecrosis of the femoral head (ONFH) using multiple MR protocols. Med Phys 2023; 50:5528-5540. [PMID: 36945733 DOI: 10.1002/mp.16380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 11/21/2022] [Accepted: 02/20/2023] [Indexed: 03/23/2023] Open
Abstract
BACKGROUND Osteonecrosis of the femoral head (ONFH) is characterized by bone cell death in the hip joint, involving severe pain in the groin. Staging of ONFH is commonly based on magnetic resonance imaging (MRI) and computed tomography (CT), which are important for establishing effective treatment plans. There have been some attempts to automate ONFH staging using deep learning, but few of them used only MR images. PURPOSE To propose a deep learning model for MR-only ONFH staging, which can reduce the additional cost and radiation exposure from the acquisition of CT images. METHODS We integrated information from MR images of five different imaging protocols with a newly proposed attention fusion method, composed of intra-modality attention and inter-modality attention. In addition, self-supervised learning was used to learn deep representations from a large paired MR-CT dataset. The encoder part of the MR-CT translation network was used to pretrain the staging network, which aimed to overcome the lack of annotated data for staging. Ablation studies were performed to investigate the contribution of each proposed method. The area under the receiver operating characteristic curve (AUROC) was used to evaluate the performance of the networks. RESULTS Our model improved the performance of the four-way classification of the Association Research Circulation Osseous (ARCO) stage using MR images of the multiple protocols by 6.8 percentage points in AUROC over a plain VGG network. Each proposed method increased the performance by 4.7 percentage points (self-supervised learning) and 2.6 percentage points (attention fusion) in AUROC, as demonstrated by the ablation experiments. CONCLUSIONS We have shown the feasibility of MR-only ONFH staging by using self-supervised learning and attention fusion.
A large amount of paired MR-CT data in hospitals can be used to further improve the performance of the staging, and the proposed method has potential to be used in the diagnosis of various diseases that require staging from multiple MR protocols.
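The inter-modality attention in this abstract amounts to scoring each protocol's feature vector, normalizing the scores with a softmax, and fusing the vectors as a weighted sum. A minimal NumPy sketch with random stand-ins for the learned features and scoring weights (the paper's module is richer and also includes intra-modality attention):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_modalities(feats, score_w):
    """Weight per-protocol feature vectors by attention scores and sum them."""
    scores = np.array([score_w @ f for f in feats])  # one scalar score per modality
    att = softmax(scores)                            # weights sum to 1
    fused = sum(a * f for a, f in zip(att, feats))
    return fused, att

rng = np.random.default_rng(0)
d = 16
feats = [rng.normal(size=d) for _ in range(5)]       # 5 MR protocols
score_w = rng.normal(size=d) * 0.1                   # stand-in for learned scoring weights
fused, att = fuse_modalities(feats, score_w)
print(fused.shape)  # (16,); attention weights sum to 1
```

The softmax lets the model lean on whichever protocol is most informative for a given case instead of averaging all five uniformly.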
Affiliation(s)
- Bomin Kim
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
- Geun Young Lee
- Department of Radiology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Republic of Korea
- Sung-Hong Park
- Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
18
Pruthviraja D, Nagaraju SC, Mudligiriyappa N, Raisinghani MS, Khan SB, Alkhaldi NA, Malibari AA. Detection of Alzheimer's Disease Based on Cloud-Based Deep Learning Paradigm. Diagnostics (Basel) 2023; 13:2687. [PMID: 37627946 PMCID: PMC10453097 DOI: 10.3390/diagnostics13162687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 07/19/2023] [Accepted: 07/28/2023] [Indexed: 08/27/2023] Open
Abstract
Deep learning plays a major role in identifying complicated structure and outperforms traditional algorithms in training and classification tasks. In this work, a local cloud-based solution is developed for classification of Alzheimer's disease (AD) using MRI scans as the input modality. A multi-class scheme is used to classify AD into four stages. To leverage the capabilities of the pre-trained GoogLeNet model, transfer learning is employed: GoogLeNet, pre-trained for image classification tasks, is fine-tuned for the specific purpose of multi-class AD classification. Through this process, an accuracy of 98% is achieved. As a result, a local cloud web application for Alzheimer's prediction is developed using the proposed GoogLeNet architectures. This application enables doctors to remotely check patients for the presence of AD.
Affiliation(s)
- Dayananda Pruthviraja: Department of Information Technology, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Sowmyarani C. Nagaraju: Department of Computer Science and Engineering, R V College of Engineering, Bengaluru 560059, India
- Niranjanamurthy Mudligiriyappa: Department of Artificial Intelligence and Machine Learning, BMS Institute of Technology and Management, Bengaluru 560064, India
- Surbhi Bhatia Khan: Department of Data Science, School of Science, Engineering and Environment, University of Salford, Manchester M54WT, UK
- Nora A. Alkhaldi: Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Areej A. Malibari: Department of Industrial and Systems Engineering, College of Engineering, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
19
Wagner DT, Tilmans L, Peng K, Niedermeier M, Rohl M, Ryan S, Yadav D, Takacs N, Garcia-Fraley K, Koso M, Dikici E, Prevedello LM, Nguyen XV. Artificial Intelligence in Neuroradiology: A Review of Current Topics and Competition Challenges. Diagnostics (Basel) 2023; 13:2670. [PMID: 37627929 PMCID: PMC10453240 DOI: 10.3390/diagnostics13162670]
Abstract
There is an expanding body of literature that describes the application of deep learning and other machine learning and artificial intelligence methods with potential relevance to neuroradiology practice. In this article, we performed a literature review to identify recent developments on the topics of artificial intelligence in neuroradiology, with particular emphasis on large datasets and large-scale algorithm assessments, such as those used in imaging AI competition challenges. Numerous applications relevant to ischemic stroke, intracranial hemorrhage, brain tumors, demyelinating disease, and neurodegenerative/neurocognitive disorders were discussed. The potential applications of these methods to spinal fractures, scoliosis grading, head and neck oncology, and vascular imaging were also reviewed. The AI applications examined perform a variety of tasks, including localization, segmentation, longitudinal monitoring, diagnostic classification, and prognostication. While research on this topic is ongoing, several applications have been cleared for clinical use and have the potential to augment the accuracy or efficiency of neuroradiologists.
Affiliation(s)
- Daniel T. Wagner: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Luke Tilmans: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Kevin Peng: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Matt Rohl: College of Arts and Sciences, The Ohio State University, Columbus, OH 43210, USA
- Sean Ryan: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Divya Yadav: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Noah Takacs: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Krystle Garcia-Fraley: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Mensur Koso: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Engin Dikici: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Luciano M. Prevedello: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Xuan V. Nguyen: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
20
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272 PMCID: PMC10377683 DOI: 10.3390/cancers15143608]
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is a research hotspot in artificial intelligence and computer vision. Because cancer diagnosis demands very high accuracy and timeliness, medical imaging has inherent particularity and complexity, and deep learning methods are developing rapidly, a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological modalities, X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology to medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, reconstruction, detection, segmentation, registration, and synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which still faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard cancer databases are needed. Pre-trained deep neural network models have potential for improvement, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
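Dropout, one of the overfitting-prevention methods this review lists, is easy to show concretely. A minimal "inverted dropout" sketch follows; the inverted-scaling convention (dividing survivors by 1-p during training) is assumed here, as in most frameworks.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout: zero activations with probability p during training,
    scale survivors by 1/(1-p) so the expected activation is unchanged
    and no rescaling is needed at inference time."""
    if not train or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p          # True = keep this activation
    return x * mask / (1.0 - p)

rng = np.random.default_rng(42)
acts = np.ones((4, 8))                       # toy activation map
out = dropout(acts, p=0.25, rng=rng)         # training-time pass
```

At inference (`train=False`) the layer is an identity, which is why the scaling is folded into training rather than test time.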
Grants
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
21
Messaoudi H, Belaid A, Ben Salem D, Conze PH. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med Image Anal 2023; 88:102868. [PMID: 37384952 DOI: 10.1016/j.media.2023.102868]
Abstract
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, with models trained on databases of millions of natural images. In medical image analysis, progress has also been remarkable but slower, mainly due to the relative lack of annotated data and the inherent constraints of the acquisition process. These limitations are even more pronounced for volumetric medical imaging data. In this paper, we introduce an efficient way to transfer the performance of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation, surpassing the state of the art. On 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
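The "dimensional transfer" principle above, expanding a 2D network into a higher-dimensional one, is often realized by inflating pretrained 2D kernels along a new depth axis. A minimal sketch of that idea follows; whether this paper normalizes by depth as done here (the I3D-style convention, preserving the response to a spatially constant input) is an assumption.

```python
import numpy as np

def inflate_kernel_2d_to_3d(w2d, depth):
    """Inflate a 2D conv kernel (out_c, in_c, kh, kw) into a 3D kernel
    (out_c, in_c, depth, kh, kw) by replicating it along the depth axis,
    dividing by depth so a constant input yields the same response."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

w2d = np.ones((8, 3, 3, 3))                    # toy pretrained 2D weights
w3d = inflate_kernel_2d_to_3d(w2d, depth=3)    # 3D initialization
```

Summing the inflated kernel over its depth axis recovers the original 2D kernel, which is exactly the property that lets 2D pretraining carry over to the 3D network's initialization.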
Affiliation(s)
- Hicham Messaoudi: Laboratory of Medical Informatics (LIMED), Faculty of Technology, University of Bejaia, 06000 Bejaia, Algeria
- Ahror Belaid: Laboratory of Medical Informatics (LIMED), Faculty of Exact Sciences, University of Bejaia, 06000 Bejaia, Algeria; Data Science & Applications Research Unit, CERIST, 06000 Bejaia, Algeria
- Douraied Ben Salem: Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200 Brest, France; Neuroradiology Department, University Hospital of Brest, 29200 Brest, France
- Pierre-Henri Conze: Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200 Brest, France; IMT Atlantique, 29200 Brest, France
22
Joshi F, Wang JZ, Vaden KI, Eckert MA. Deep Learning Classification of Reading Disability with Regional Brain Volume Features. Neuroimage 2023; 273:120075. [PMID: 37054828 PMCID: PMC10167676 DOI: 10.1016/j.neuroimage.2023.120075]
Abstract
Developmental reading disability is a prevalent and often enduring problem whose varied mechanisms contribute to its phenotypic heterogeneity. This mechanistic and phenotypic variation, together with relatively modest sample sizes, may have limited the development of accurate neuroimaging-based classifiers for reading disability, in part because of the large feature space of neuroimaging datasets. An unsupervised learning model was used to reduce deformation-based data to a lower-dimensional manifold, and supervised learning models were then used to classify these latent representations in a dataset of 96 reading disability cases and 96 controls (mean age: 9.86 ± 1.56 years). A combined unsupervised autoencoder and supervised convolutional neural network approach provided an effective classification of cases and controls (accuracy: 77%; precision: 0.75; recall: 0.78). Brain regions that contributed to this classification accuracy were identified by adding noise to the voxel-level image data, which showed that reading disability classification accuracy was most influenced by the superior temporal sulcus, dorsal cingulate, and lateral occipital cortex. Regions most important for the accurate classification of controls included the supramarginal gyrus, orbitofrontal cortex, and medial occipital cortex. The contribution of these regions reflected individual differences in reading-related abilities, such as non-word decoding and verbal comprehension. Together, the results demonstrate an optimal deep learning solution for classification using neuroimaging data. In contrast with standard mass-univariate test results, the deep learning model also provided evidence for regions that may be specifically affected in reading disability cases.
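The noise-perturbation analysis used above to identify influential regions can be sketched generically: perturb one input feature at a time and record the drop in a trained classifier's accuracy. The synthetic two-feature data and the threshold "classifier" below are placeholders, not the paper's autoencoder-CNN pipeline.

```python
import random

random.seed(0)
# Synthetic data: feature 0 separates the two classes, feature 1 is pure noise.
X = [[random.gauss(4.0 * y, 1.0), random.gauss(0.0, 1.0)]
     for y in (0, 1) for _ in range(100)]
y = [0] * 100 + [1] * 100

predict = lambda x: int(x[0] > 2.0)   # stand-in for a trained classifier
accuracy = lambda data: sum(predict(x) == t for x, t in zip(data, y)) / len(y)

base = accuracy(X)
drops = []
for f in (0, 1):                      # perturb one feature at a time
    noisy = [[v + random.gauss(0.0, 2.0) if i == f else v
              for i, v in enumerate(x)] for x in X]
    drops.append(base - accuracy(noisy))
# drops[0] is large (informative feature), drops[1] is zero (ignored feature)
```

The same recipe applied to voxel neighborhoods instead of scalar features yields a spatial importance map, which is the spirit of the analysis in this paper.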
Affiliation(s)
- Foram Joshi: School of Computing, Clemson University, Clemson, SC, USA
- James Z Wang: School of Computing, Clemson University, Clemson, SC, USA
- Kenneth I Vaden: Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, SC, USA
- Mark A Eckert: Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, Charleston, SC, USA
23
Cui C, Yang H, Wang Y, Zhao S, Asad Z, Coburn LA, Wilson KT, Landman BA, Huo Y. Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review. Prog Biomed Eng (Bristol) 2023; 5:10.1088/2516-1091/acc2fe. [PMID: 37360402 PMCID: PMC10288577 DOI: 10.1088/2516-1091/acc2fe]
Abstract
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology and camera images) and non-image data (e.g. clinical data and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include the (a) overview of current multimodal learning workflows, (b) summarization of multimodal fusion methods, (c) discussion of the performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
Affiliation(s)
- Can Cui: Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Haichun Yang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Yaohong Wang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Shilin Zhao: Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Zuhayr Asad: Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Lori A Coburn: Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Keith T Wilson: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America; Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America; Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Bennett A Landman: Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
- Yuankai Huo: Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
24
Mulyadi AW, Jung W, Oh K, Yoon JS, Lee KH, Suk HI. Estimating explainable Alzheimer's disease likelihood map via clinically-guided prototype learning. Neuroimage 2023; 273:120073. [PMID: 37037063 DOI: 10.1016/j.neuroimage.2023.120073]
Abstract
Identifying Alzheimer's disease (AD) requires a deliberate diagnostic process owing to the disease's irreversibility and its subtle, gradual progression. These characteristics make AD biomarker identification from structural brain imaging (e.g., structural MRI) scans quite challenging. Using clinically guided prototype learning, we propose a novel deep learning approach, eXplainable AD Likelihood Map Estimation (XADLiME), for AD progression modeling over 3D sMRIs. Specifically, we establish a set of topologically aware prototypes on clusters of latent clinical features, uncovering an AD spectrum manifold. Taking this pseudo map as an enriched reference, we employ an estimating network to approximate the AD likelihood map over a 3D sMRI scan. Additionally, we promote the explainability of the likelihood map by revealing a comprehensible overview from clinical and morphological perspectives. During inference, the estimated likelihood map serves as a substitute for unseen sMRI scans, effectively performing the downstream task while providing thorough explainable states.
Affiliation(s)
- Ahmad Wisnu Mulyadi: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kwanseok Oh: Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jee Seok Yoon: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kun Ho Lee: Gwangju Alzheimer's & Related Dementia Cohort Research Center, Chosun University, Gwangju 61452, Republic of Korea; Department of Biomedical Science, Chosun University, Gwangju 61452, Republic of Korea; Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Heung-Il Suk: Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
25
Zhou X, Zhao C, Sun J, Yao K, Xu M. Detection of lead content in oilseed rape leaves and roots based on deep transfer learning and hyperspectral imaging technology. Spectrochim Acta A Mol Biomol Spectrosc 2023; 290:122288. [PMID: 36608517 DOI: 10.1016/j.saa.2022.122288]
Abstract
The capability of hyperspectral imaging technology was studied for predicting heavy-metal lead (Pb) concentration in oilseed rape plants. In addition, a transfer stacked auto-encoder (T-SAE) algorithm with two network variants, the dual-model T-SAE and the single-model T-SAE, is proposed in this paper. Hyperspectral images of oilseed rape leaves and roots were acquired under different Pb stress concentrations. The entire region of the leaf (or root) was selected as the region of interest (ROI) for extracting spectral data, and standard normal variate (SNV), first derivative (1st Der), and second derivative (2nd Der) transforms were used to preprocess the ROI spectra. The principal component analysis (PCA) algorithm was used to reduce the dimensionality of the spectral data before and after preprocessing, and the best-preprocessed data were selected for subsequent analysis. SAE deep learning networks were then built on the leaf data, the root data, and the combined leaf-and-root data using the best-preprocessed spectra. Finally, the T-SAE models were obtained through transfer learning from the best SAE network. The results show that the best preprocessing algorithms for the leaf and root spectra were SNV and 1st Der, respectively. The prediction-set recognition accuracy of the best T-SAE model for the Pb stress gradient in oilseed rape plants was 98.75%, and the prediction-set coefficients of determination of the best T-SAE models for Pb content in the leaf and root data were 0.9215 and 0.9349, respectively. Therefore, deep transfer learning combined with hyperspectral imaging can effectively realize qualitative and quantitative detection of heavy-metal Pb in oilseed rape plants.
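Standard normal variate (SNV), one of the preprocessing transforms named above, simply centers and scales each spectrum by its own mean and standard deviation, which suppresses scatter and baseline effects. A minimal sketch:

```python
import math

def snv(spectrum):
    """Standard normal variate: per-spectrum centering and scaling,
    commonly used to reduce scatter effects in spectroscopy."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / sd for v in spectrum]

# Toy reflectance values for a single pixel's spectrum
corrected = snv([0.42, 0.51, 0.49, 0.60, 0.55])
```

Each corrected spectrum has zero mean and unit standard deviation, so downstream models (PCA, SAE) see shape differences rather than intensity offsets.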
Affiliation(s)
- Xin Zhou: School of Electrical and Information Engineering of Jiangsu University, Zhenjiang 212013, China
- Chunjiang Zhao: School of Electrical and Information Engineering of Jiangsu University, Zhenjiang 212013, China; National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China; National Engineering Laboratory for Agri-product Quality Traceability, Beijing 100097, China
- Jun Sun: School of Electrical and Information Engineering of Jiangsu University, Zhenjiang 212013, China
- Kunshan Yao: School of Electrical and Information Engineering of Jiangsu University, Zhenjiang 212013, China
- Min Xu: School of Electrical and Information Engineering of Jiangsu University, Zhenjiang 212013, China
26
El-Sappagh S, Alonso-Moral JM, Abuhmed T, Ali F, Bugarín-Diz A. Trustworthy artificial intelligence in Alzheimer's disease: state of the art, opportunities, and challenges. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10415-5]
27
Sensi SL, Russo M, Tiraboschi P. Biomarkers of diagnosis, prognosis, pathogenesis, response to therapy: Convergence or divergence? Lessons from Alzheimer's disease and synucleinopathies. Handb Clin Neurol 2023; 192:187-218. [PMID: 36796942 DOI: 10.1016/b978-0-323-85538-9.00015-8]
Abstract
Alzheimer's disease (AD) is the most common disorder associated with cognitive impairment. Recent observations emphasize the pathogenic role of multiple factors inside and outside the central nervous system, supporting the notion that AD is a syndrome of many etiologies rather than a "heterogeneous" but ultimately unifying disease entity. Moreover, the defining pathology of amyloid and tau coexists with many others, such as α-synuclein and TDP-43, as a rule rather than an exception. Thus, the paradigm of AD as an amyloidopathy must be reconsidered. Along with amyloid accumulating in its insoluble state, β-amyloid becomes depleted in its soluble, normal state as a result of biological, toxic, and infectious triggers, requiring a shift from convergence to divergence in our approach to neurodegeneration. These aspects are reflected, in vivo, by biomarkers, which have become increasingly strategic in dementia. Similarly, synucleinopathies are primarily characterized by abnormal deposition of misfolded α-synuclein in neurons and glial cells, which in the process depletes the levels of the normal, soluble α-synuclein that the brain needs for many physiological functions. The soluble-to-insoluble conversion also affects other normal brain proteins, such as TDP-43 and tau, which accumulate in their insoluble states in both AD and dementia with Lewy bodies (DLB). The two diseases have been distinguished by the differential burden and distribution of insoluble proteins, with neocortical phosphorylated tau deposition more typical of AD and neocortical α-synuclein deposition peculiar to DLB. We propose a reappraisal of the diagnostic approach to cognitive impairment, from convergence (based on clinicopathologic criteria) to divergence (based on what differs across affected individuals), as a necessary step for the launch of precision medicine.
Affiliation(s)
- Stefano L Sensi: Department of Neuroscience, Imaging, and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Molecular Neurology Unit, Center for Advanced Studies and Technology-CAST and ITAB Institute for Advanced Biotechnology, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Mirella Russo: Department of Neuroscience, Imaging, and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Molecular Neurology Unit, Center for Advanced Studies and Technology-CAST and ITAB Institute for Advanced Biotechnology, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Pietro Tiraboschi: Division of Neurology V-Neuropathology, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
28
Zheng X, Cawood J, Hayre C, Wang S. Computer assisted diagnosis of Alzheimer's disease using statistical likelihood-ratio test. PLoS One 2023; 18:e0279574. [PMID: 36800393 PMCID: PMC9937475 DOI: 10.1371/journal.pone.0279574]
Abstract
The purpose of this work is to present a computer-assisted diagnostic tool for radiologists in their diagnosis of Alzheimer's disease. A statistical likelihood-ratio procedure from signal detection theory was implemented for the detection of Alzheimer's disease. The probability density functions of the likelihood ratio were constructed using medial temporal lobe (MTL) volumes of patients with Alzheimer's disease (AD) and normal controls (NC). The volumes of the MTL and other anatomical brain regions were calculated by the FreeSurfer software using T1-weighted MRI images. The MRI images of AD and NC subjects were downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A separate dataset, Minimal Interval Resonance Imaging in Alzheimer's Disease (MIRIAD), was used for diagnostic testing. A sensitivity of 89.1% and a specificity of 87.0% were achieved on the MIRIAD dataset, better than the 85% sensitivity and specificity achieved by the best radiologists without input of other patient information.
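The likelihood-ratio procedure can be illustrated with one-dimensional Gaussian densities over MTL volume; the class means, standard deviations, and zero decision threshold below are illustrative assumptions, not values from the paper.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_likelihood_ratio(volume, ad=(8.5, 1.2), nc=(11.0, 1.1)):
    """Log likelihood ratio of AD vs. normal control for an MTL volume.
    Positive values favor AD (smaller volumes). The (mean, sd) pairs
    are hypothetical, not fitted values from the study."""
    return math.log(gaussian_pdf(volume, *ad) / gaussian_pdf(volume, *nc))

# Decide by comparing the log-LR to a threshold; 0 corresponds to equal
# priors and equal misclassification costs.
decision = "AD" if log_likelihood_ratio(8.0) > 0.0 else "NC"
```

Sweeping the threshold away from 0 trades sensitivity against specificity, which is how an ROC curve for such a detector is traced out.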
Affiliation(s)
- Xiaoming Zheng: Medical Radiation Science, School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, NSW, Australia
- Justin Cawood: Medical Radiation Science, School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, NSW, Australia
- Chris Hayre: Medical Radiation Science, School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, NSW, Australia
- Shaoyu Wang: Biomedical Sciences, School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, NSW, Australia
29
Zhao Z, Chuah JH, Lai KW, Chow CO, Gochoo M, Dhanalakshmi S, Wang N, Bao W, Wu X. Conventional machine learning and deep learning in Alzheimer's disease diagnosis using neuroimaging: A review. Front Comput Neurosci 2023; 17:1038636. [PMID: 36814932 PMCID: PMC9939698 DOI: 10.3389/fncom.2023.1038636]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive impairment in elderly people. The irreversible and devastating cognitive decline places large burdens on patients and society. So far, there is no effective treatment that can cure AD, but its progression in the early stage can be slowed. Early and accurate detection is therefore critical for treatment. In recent years, deep-learning-based approaches have achieved great success in Alzheimer's disease diagnosis. The main objective of this paper is to review popular conventional machine learning and deep learning methods used for the classification and prediction of AD using magnetic resonance imaging (MRI). The methods reviewed in this paper include support vector machine (SVM), random forest (RF), convolutional neural network (CNN), autoencoder, deep learning, and transformer approaches. This paper also reviews pervasively used feature extractors and the different input forms of convolutional neural networks. Finally, the review discusses challenges such as class imbalance and data leakage, as well as trade-offs and suggestions concerning pre-processing techniques, deep learning, conventional machine learning methods, new techniques, and input type selection.
Affiliation(s)
- Zhen Zhao
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Joon Huang Chuah (correspondence)
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khin Wee Lai (correspondence)
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Chee-Onn Chow
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Munkhjargal Gochoo
- Department of Computer Science and Software Engineering, United Arab Emirates University, Al Ain, United Arab Emirates
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Chennai, India
- Na Wang
- School of Automation, Guangdong Polytechnic Normal University, Guangzhou, China
- Wei Bao (correspondence)
- China Electronics Standardization Institute, Beijing, China
- Xiang Wu
- School of Medical Information Engineering, Xuzhou Medical University, Xuzhou, China
30
Liu F, Gao L, Wan J, Lyu ZL, Huang YY, Liu C, Han M. Recognition of Digital Dental X-ray Images Using a Convolutional Neural Network. J Digit Imaging 2023; 36:73-79. [PMID: 36109403 PMCID: PMC9984574 DOI: 10.1007/s10278-022-00694-9]
Abstract
Digital dental X-ray images are an important basis for diagnosing dental diseases, especially endodontic and periodontal diseases. Conventional diagnosis depends on the experience of doctors, making it highly subjective and labor-intensive, while current computer-aided interpretation technology has low accuracy and poor lesion classification. This study proposes an efficient and accurate method for identifying common lesions in digital dental X-ray images with a convolutional neural network (CNN). In total, 188 digital dental X-ray images previously diagnosed by dentists at Qilu Hospital of Shandong University as periapical periodontitis, dental caries, periapical cysts, and other common dental diseases were collected and augmented. The images and labels were input into four CNN models for training: visual geometry group (VGG)-16, InceptionV3, residual network (ResNet)-50, and densely connected convolutional networks (DenseNet)-121. The average classification accuracy of the four trained models on the test set was 95.9%, while the trained DenseNet-121 model reached 99.5%. This demonstrates that using CNNs to interpret digital dental X-ray images is an efficient and accurate way to conduct auxiliary diagnoses of dental diseases.
Affiliation(s)
- Feng Liu
- School of Information Science and Engineering, Shandong University, Qingdao, 266237, People's Republic of China
- Lei Gao
- Department of First Operating Room, Qilu Hospital of Shandong University, Jinan, 250012, People's Republic of China
- Jun Wan
- School of Information Science and Engineering, Shandong University, Qingdao, 266237, People's Republic of China
- Zhi-Lei Lyu
- Department of Oral Radiology, Qilu Hospital of Shandong University, Jinan, 250012, People's Republic of China
- Ying-Ying Huang
- Department of Oral Radiology, Qilu Hospital of Shandong University, Jinan, 250012, People's Republic of China
- Chao Liu
- Department of Oral and Maxillofacial Surgery, Qilu Hospital of Shandong University, Jinan, 250012, People's Republic of China
- Department of Oral Surgery, Shanghai Key Laboratory of Stomatology, National Clinical Research Center of Stomatology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, People's Republic of China
- Min Han
- School of Information Science and Engineering, Shandong University, Qingdao, 266237, People's Republic of China
31
Zhou Y, Bu F. An Overview of Advancements in Lie Detection Technology in Speech. International Journal of Information Technologies and Systems Approach 2023. [DOI: 10.4018/ijitsa.316935]
Abstract
Lie detection technology in speech recognizes a lying psychological state through speech signal analysis. People typically feel tension when they lie, and this tension produces subtle changes in the vocal tract: semantic characteristics, prosodic characteristics, formants, and psychoacoustic parameters can all differ from their baseline. This paper presents the current state of lie detection technology and introduces several public speech databases for lie detection. It then describes research on feature expression, selection, and extraction for lie detection, highlights progress in lie detection algorithms, and finally summarizes open problems and future directions for lie detection technology in speech.
Affiliation(s)
- Yan Zhou
- Suzhou Vocational University, China
- Feng Bu
- Suzhou Vocational University, China
32
Subramanyam Rallabandi V, Seetharaman K. Classification of cognitively normal controls, mild cognitive impairment and Alzheimer’s disease using transfer learning approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104092]
33
Simović A, Lutovac-Banduka M, Lekić S, Kuleto V. Smart Visualization of Medical Images as a Tool in the Function of Education in Neuroradiology. Diagnostics (Basel) 2022; 12:3208. [PMID: 36553215 PMCID: PMC9777748 DOI: 10.3390/diagnostics12123208]
Abstract
The smart visualization of medical images (SVMI) model is based on multi-detector computed tomography (MDCT) data sets and can provide a clearer view of changes in the brain, such as tumors (expansive changes), bleeding, and ischemia, on native imaging (i.e., a non-contrast MDCT scan). The new SVMI method provides a more precise representation of the brain image by hiding pixels that carry no information and by rescaling and coloring the range of pixels essential for detecting and visualizing the disease. In addition, SVMI can be used to avoid additional exposure of patients to ionizing radiation and to contrast media administration, which can cause allergic reactions. Results of the SVMI model were compared with the final diagnosis of the disease after additional diagnostics and confirmation by neuroradiologists, highly trained physicians with many years of experience. The presented SVMI model can optimize the engagement of material, medical, and human resources and has the potential for general application in medical training, education, and clinical research.
Affiliation(s)
- Aleksandar Simović
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
- Maja Lutovac-Banduka
- Department of RT-RK Institute, RT-RK for Computer Based Systems, 21000 Novi Sad, Serbia
- Snežana Lekić
- Department of Emergency Neuroradiology, University Clinical Centre of Serbia UKCS, 11000 Belgrade, Serbia
- Valentin Kuleto
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
34
Wang Y, Tang S, Ma R, Zamit I, Wei Y, Pan Y. Multi-modal intermediate integrative methods in neuropsychiatric disorders: A review. Comput Struct Biotechnol J 2022; 20:6149-6162. [PMID: 36420153 PMCID: PMC9674886 DOI: 10.1016/j.csbj.2022.11.008]
Abstract
The etiology of neuropsychiatric disorders involves complex biological processes at different omics layers, such as genomics, transcriptomics, epigenetics, proteomics, and metabolomics. The advent of high-throughput technology, together with the availability of large open-source datasets, has ushered in a new era of systems biology, necessitating the integration of various types of omics data. The complexity of biological mechanisms, the limitations of integrative strategies, and the heterogeneity of multi-omics data all present significant challenges to computational scientists. Compared with early and late integration, intermediate integration transforms each data type into an appropriate intermediate representation using various data transformation techniques, allowing it to capture more of the complementary information contained in each omics layer and to highlight new interactions across layers. Here, we reviewed multi-modal intermediate integrative techniques based on component analysis, matrix factorization, similarity networks, multiple kernel learning, Bayesian networks, artificial neural networks, and graph transformation, as well as their applications in neuropsychiatric domains. We depicted advancements in these approaches and compared the strengths and weaknesses of each method examined. We believe that our findings will aid researchers in understanding the transformation and integration of multi-omics data in neuropsychiatric disorders.
Affiliation(s)
- Yanlin Wang
- Center for High Performance Computing, Joint Engineering Research Center for Health Big Data Intelligent Analysis Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shi Tang
- Li Chiu Kong Family Sleep Assessment Unit, Department of Psychiatry, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- Ruimin Ma
- Center for High Performance Computing, Joint Engineering Research Center for Health Big Data Intelligent Analysis Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Ibrahim Zamit
- Center for High Performance Computing, Joint Engineering Research Center for Health Big Data Intelligent Analysis Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- University of Chinese Academy of Sciences, Beijing, China
- Yanjie Wei
- Center for High Performance Computing, Joint Engineering Research Center for Health Big Data Intelligent Analysis Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yi Pan
- Center for High Performance Computing, Joint Engineering Research Center for Health Big Data Intelligent Analysis Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
35
Sun H, He Q, Qi S, Yao Y, Teng Y. Improving the level of autism discrimination with augmented data by GraphRNN. Comput Biol Med 2022; 150:106141. [PMID: 36191394 DOI: 10.1016/j.compbiomed.2022.106141]
Abstract
Datasets are the key to deep learning in autism disease research. However, due to the small number and heterogeneity of samples in current public datasets such as the Autism Brain Imaging Data Exchange (ABIDE), recognition performance remains limited. Previous studies primarily focused on optimizing feature selection methods and on data augmentation to improve recognition accuracy. This research builds on the latter: it learns the edge distribution of real brain networks with a graph recurrent neural network (GraphRNN) and generates synthetic data that benefit the discriminative model. Experimental results show that the synthetic data greatly improve the classification ability of subsequent classifiers; for example, they can improve the classification accuracy of a 50-layer ResNet by up to 30% compared with the case without synthetic data.
Affiliation(s)
- Haonan Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Qiang He
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07102, USA
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110004, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
36
Murden RJ, Zhang Z, Guo Y, Risk BB. Interpretive JIVE: Connections with CCA and an application to brain connectivity. Front Neurosci 2022; 16:969510. [PMID: 36312020 PMCID: PMC9614436 DOI: 10.3389/fnins.2022.969510]
Abstract
Joint and Individual Variation Explained (JIVE) is a model that decomposes multiple datasets obtained on the same subjects into shared structure, structure unique to each dataset, and noise. JIVE is an important tool for multimodal data integration in neuroimaging. The two most common algorithms are R.JIVE, an iterative approach, and AJIVE, which uses principal angle analysis. The joint structure in JIVE is defined by shared subspaces, but interpreting these subspaces can be challenging. In this paper, we reinterpret AJIVE as a canonical correlation analysis of principal component scores. This reformulation, which we call CJIVE, (1) provides an intuitive view of AJIVE; (2) uses a permutation test for the number of joint components; (3) can be used to predict subject scores for out-of-sample observations; and (4) is computationally fast. We conduct simulation studies that show CJIVE and AJIVE are accurate when the total signal ranks are correctly specified, but generally inaccurate when the total ranks are too large. CJIVE and AJIVE can still extract joint signal even when the joint signal variance is relatively small. JIVE methods are applied to integrate functional connectivity (resting-state fMRI) and structural connectivity (diffusion MRI) from the Human Connectome Project. Surprisingly, the edges with largest loadings in the joint component in functional connectivity do not coincide with the same edges in the structural connectivity, indicating more complex patterns than assumed in spatial priors. Using these loadings, we accurately predict joint subject scores in new participants. We also find joint scores are associated with fluid intelligence, highlighting the potential for JIVE to reveal important shared structure.
Affiliation(s)
- Raphiel J. Murden
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, United States
- Zhengwu Zhang
- Department of Statistics and Operations Research, University of North Carolina, Chapel Hill, NC, United States
- Ying Guo
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, United States
- Benjamin B. Risk
- Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, United States
37
Alzheimer's Disease Prediction Algorithm Based on Group Convolution and a Joint Loss Function. Computational and Mathematical Methods in Medicine 2022; 2022:1854718. [PMID: 36277022 PMCID: PMC9581650 DOI: 10.1155/2022/1854718]
Abstract
Alzheimer's disease (AD) can be effectively predicted from 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) of the brain, but current PET images still suffer from indistinct lesion features, low signal-to-noise ratios, and severe artefacts, resulting in poor prediction accuracy for patients with mild cognitive impairment (MCI) and unclear lesion features. In this paper, an AD prediction algorithm based on group convolution and a joint loss function is proposed. First, a group convolutional backbone network based on ResNet18 is designed to extract lesion features from multiple channels, which greatly improves the expressive ability of the network. Then, a hybrid attention mechanism is presented that enables the network to focus on target regions and learn feature weights, enhancing its ability to learn the lesion regions relevant to disease diagnosis. Finally, a joint loss function is proposed that adds a regularization loss to the conventional cross-entropy function, avoiding overfitting, increasing the generalization of the model, and improving prediction accuracy. Experiments conducted on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that the proposed algorithm improves prediction accuracy by 2.4% over the current AD prediction algorithm, demonstrating its effectiveness and availability.
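A "joint" loss of the general kind this abstract describes, conventional cross-entropy plus an added regularization term, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the L2 penalty and the weighting factor `lam` are assumptions chosen for the example.

```python
# Minimal sketch: cross-entropy plus an L2 regularization term on the
# model weights, combined with a hypothetical weighting factor lam.
import numpy as np

def softmax_cross_entropy(logits, targets):
    z = logits - logits.max(axis=1, keepdims=True)            # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

def joint_loss(logits, targets, weights, lam=1e-4):
    ce = softmax_cross_entropy(logits, targets)
    reg = sum((w ** 2).sum() for w in weights)                # L2 penalty
    return ce + lam * reg

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
targets = np.array([0, 1])
weights = [np.ones((3, 3))]                                   # toy weight matrix
print(joint_loss(logits, targets, weights, lam=1e-3))
```

The regularization term discourages large weights, which is the mechanism the abstract credits for reducing overfitting and improving generalization.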
38
Shastry KA, Vijayakumar V, V MKM, B A M, B N C. Deep Learning Techniques for the Effective Prediction of Alzheimer's Disease: A Comprehensive Review. Healthcare (Basel) 2022; 10:1842. [PMID: 36292289 PMCID: PMC9601959 DOI: 10.3390/healthcare10101842]
Abstract
"Alzheimer's disease" (AD) is a neurodegenerative disorder in which the memory shrinks and neurons die. "Dementia" is described as a gradual decline in mental, psychological, and interpersonal qualities that hinders a person's ability to function autonomously. AD is the most common degenerative brain disease. Among the first signs of AD are missing recent incidents or conversations. "Deep learning" (DL) is a type of "machine learning" (ML) that allows computers to learn by doing, much like people do. DL techniques can attain cutting-edge precision, beating individuals in certain cases. A large quantity of tagged information with multi-layered "neural network" architectures is used to perform analysis. Because significant advancements in computed tomography have resulted in sizable heterogeneous brain signals, the use of DL for the timely identification as well as automatic classification of AD has piqued attention lately. With these considerations in mind, this paper provides an in-depth examination of the various DL approaches and their implementations for the identification and diagnosis of AD. Diverse research challenges are also explored, as well as current methods in the field.
Affiliation(s)
- K Aditya Shastry
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bangalore 560064, India
- V Vijayakumar
- School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
- School of NUOVOS, Ajeenkya D Y Patil University, Pune 412105, India
- Swiss School of Business and Management, 1213 Geneva, Switzerland
- Manoj Kumar M V
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bangalore 560064, India
- Manjunatha B A
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bangalore 560064, India
- Chandrashekhar B N
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bangalore 560064, India
39
Yan B, Li Y, Li L, Yang X, Li TQ, Yang G, Jiang M. Quantifying the impact of Pyramid Squeeze Attention mechanism and filtering approaches on Alzheimer's disease classification. Comput Biol Med 2022; 148:105944. [PMID: 35969934 DOI: 10.1016/j.compbiomed.2022.105944]
Abstract
Brain medical imaging and deep learning are important foundations for diagnosing and predicting Alzheimer's disease. In this study, we explored the impact of different image filtering approaches and the Pyramid Squeeze Attention (PSA) mechanism on Alzheimer's disease image classification. First, during image preprocessing, we register the MRI images and strip the skull, then apply median filtering, Gaussian blur filtering, and anisotropic diffusion filtering to obtain different experimental images. Next, we add the Squeeze and Excitation (SE) mechanism and the Pyramid Squeeze Attention (PSA) mechanism to the Fully Convolutional Network (FCN) model, respectively, to obtain each MRI image's disease probability map features. We also construct a Multi-Layer Perceptron (MLP) framework that combines the disease probability map features with each sample's age, gender, and Mini-Mental State Examination (MMSE) score to obtain the final classification performance. Among these models, the MLP-C model combining anisotropic diffusion filtering with the Pyramid Squeeze Attention mechanism reaches an accuracy of 98.85%. The corresponding quantitative results show that different image filtering approaches and attention mechanisms provide effective assistance for the diagnosis and classification of Alzheimer's disease.
Affiliation(s)
- Bin Yan
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Yang Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Lin Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Xiaocheng Yang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Tie-Qiang Li
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, 171 77, Stockholm, Sweden
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
- Mingfeng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
40
Ko W, Jung W, Jeon E, Suk HI. A Deep Generative-Discriminative Learning for Multimodal Representation in Imaging Genetics. IEEE Transactions on Medical Imaging 2022; 41:2348-2359. [PMID: 35344489 DOI: 10.1109/tmi.2022.3162870]
Abstract
Imaging genetics, one of the foremost emerging topics in the medical imaging field, analyzes the inherent relations between neuroimaging and genetic data. As deep learning has gained widespread acceptance in many applications, pioneering studies employed deep learning frameworks for imaging genetics. However, existing approaches suffer from some limitations. First, they often adopt a simple strategy for joint learning of phenotypic and genotypic features. Second, their findings have not been extended to biomedical applications, e.g., degenerative brain disease diagnosis and cognitive score prediction. Finally, existing studies perform insufficient and inappropriate analyses from the perspective of data science and neuroscience. In this work, we propose a novel deep learning framework to simultaneously tackle the aforementioned issues. Our proposed framework learns to effectively represent the neuroimaging and the genetic data jointly, and achieves state-of-the-art performance when used for Alzheimer's disease and mild cognitive impairment identification. Furthermore, unlike the existing methods, the framework enables learning the relation between imaging phenotypes and genotypes in a nonlinear way without any prior neuroscientific knowledge. To demonstrate the validity of our proposed framework, we conducted experiments on a publicly available dataset and analyzed the results from diverse perspectives. Based on our experimental results, we believe that the proposed framework has immense potential to provide new insights and perspectives in deep learning-based imaging genetics studies.
41
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332]
42
Ban Y, Zhang X, Lao H. Diagnosis of Alzheimer's Disease using Structure Highlighting Key Slice Stacking and Transfer Learning. Med Phys 2022; 49:5855-5869. [PMID: 35894542 DOI: 10.1002/mp.15888]
Abstract
BACKGROUND: In recent years, two-dimensional convolutional neural networks (2D CNNs) have been widely used in the diagnosis of Alzheimer's disease (AD) based on structural magnetic resonance imaging (sMRI). However, due to the lack of targeted processing of the key slices of sMRI images, the classification performance of CNN models needs improvement.
PURPOSE: Therefore, in this paper, we propose a key slice processing technique called structural highlighting key slice stacking (SHKSS) and apply it to a 2D transfer learning model for AD classification.
METHODS: First, the 3D MR images were preprocessed. Second, the 2D axial middle-layer image was extracted from each MR image as the key slice. The slice was then intensity-normalized and mapped to RGB space, and histogram specification was performed on the resulting RGB image to generate the final three-channel image, which was input into a pre-trained CNN model for AD classification. Finally, classification and generalization experiments were conducted to verify the validity of the proposed method.
RESULTS: The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that the SHKSS method effectively highlights the structural information in MRI slices. Compared with existing key slice processing techniques, it improves average accuracy by at least 26% on the same test dataset and shows better performance and generalization ability.
CONCLUSIONS: The SHKSS method not only converts single-channel images into three-channel images to match the input requirements of 2D transfer learning models but also highlights the structural information of MRI slices to improve the accuracy of AD diagnosis.
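The slice-preparation pipeline in this abstract (extract the middle axial slice, intensity-normalize it, and build a three-channel image for a pretrained 2D CNN) can be sketched roughly as below. This is a hedged approximation, not the authors' SHKSS procedure: plain histogram equalization is used as a stand-in for their histogram specification step, and the channel arrangement is a made-up example.

```python
# Rough sketch: key-slice extraction and three-channel conversion so a
# single-channel MRI slice fits a pretrained RGB 2D CNN input.
import numpy as np

def middle_axial_slice(volume):
    """Take the middle slice along the first (axial) axis as the key slice."""
    return volume[volume.shape[0] // 2]

def to_three_channel(slice2d):
    lo, hi = slice2d.min(), slice2d.max()
    norm = (slice2d - lo) / (hi - lo + 1e-8)          # intensity normalization
    # crude histogram equalization (stand-in for histogram specification):
    # map each pixel to its rank, scaled to [0, 1]
    flat = norm.ravel()
    ranks = np.argsort(np.argsort(flat)).reshape(norm.shape)
    eq = ranks / (flat.size - 1)
    return np.stack([norm, eq, norm], axis=-1)        # H x W x 3

vol = np.random.rand(8, 16, 16)                       # toy 3D volume
img = to_three_channel(middle_axial_slice(vol))
print(img.shape)  # (16, 16, 3)
```

The resulting H x W x 3 array matches the input shape expected by common ImageNet-pretrained 2D backbones.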
Affiliation(s)
- Yanjiao Ban
- School of Computer, Electronics and Information, Guangxi University, Nanning, Guangxi, 530004, PR China
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning, Guangxi, 530004, PR China
- School of Artificial Intelligence, Guangxi Minzu University, Guangxi, 530006, PR China
- Guangxi Key Laboratory of Multimedia Communications and Network Technology, Nanning, Guangxi, 530004, PR China
- Huan Lao
- School of Artificial Intelligence, Guangxi Minzu University, Guangxi, 530006, PR China
43
Li H, Song Q, Gui D, Wang M, Min X, Li A. Reconstruction-assisted Feature Encoding Network for Histologic Subtype Classification of Non-small Cell Lung Cancer. IEEE J Biomed Health Inform 2022; 26:4563-4574. [PMID: 35849680 DOI: 10.1109/jbhi.2022.3192010]
Abstract
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance in assisting clinicians to determine treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they often fail to capture effective tumor representations due to inadequate training data and consequently show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification that leverages an auxiliary image reconstruction task to provide extra guidance and regularization for enhanced tumor feature representations. Unlike existing reconstruction-assisted methods that directly use generalizable features obtained from a shared encoder for the primary task, RAFENet uses a dedicated task-aware encoding module to refine the generalizable features. Specifically, a cascade of cross-level non-local blocks is introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, learning multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to the widely adopted pixel-wise reconstruction loss, we introduce a semantic consistency loss to explicitly supervise the training of RAFENet, combining a feature consistency loss and a prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses issues that existing reconstruction-based methods cannot resolve and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets.
|
44
|
Xu L, Li X, Yang Q, Tan L, Liu Q, Liu Y. Application of Bidirectional Generative Adversarial Networks to Predict Potential miRNAs Associated With Diseases. Front Genet 2022; 13:936823. [PMID: 35903359 PMCID: PMC9314862 DOI: 10.3389/fgene.2022.936823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 06/08/2022] [Indexed: 11/18/2022] Open
Abstract
Substantial evidence has shown that microRNAs are crucial to biological processes within complex human diseases. Identifying miRNA–disease associations will help accelerate the discovery of potential biomarkers and pathogenesis. Researchers have begun constructing computational models that advance the study of disease pathology and clinical medicine by identifying potential disease-related miRNAs. However, most existing computational methods are expensive, and they cannot be applied to unobserved relationships involving unknown miRNAs (or diseases) that lack association information. In this manuscript, we propose a novel semi-supervised model named bidirectional generative adversarial network for miRNA–disease association prediction (BGANMDA). First, we constructed a microRNA similarity network, a disease similarity network, and a Gaussian interaction profile kernel similarity based on the known miRNA–disease associations and the comprehensive similarity of miRNAs (diseases). Next, we obtained an integrated similarity feature network capturing the full underlying relationships of miRNA–disease pairs. Then, the similarity feature network was fed into the BGANMDA model to learn advanced traits in latent space. Finally, we ranked an association score list and predicted the associations between miRNAs and diseases. In our experiments, five-fold cross validation was applied to estimate BGANMDA's performance, yielding an area under the curve (AUC) of 0.9319 with a standard deviation of 0.00021. In global and local leave-one-out cross validation (LOOCV), the AUC values of BGANMDA were 0.9116 ± 0.0025 and 0.8928 ± 0.0022, respectively. Furthermore, BGANMDA was employed in three case studies to validate its prediction capability and accuracy. The case-study results showed that 46, 46, and 48 of the top 50 predictions had been identified in previous studies.
Affiliation(s)
- Long Xu
- School of Computer Science and Technology, Heilongjiang University, Harbin, China
- Xiaokun Li
- School of Computer Science and Technology, Heilongjiang University, Harbin, China
- Postdoctoral Program of Heilongjiang Hengxun Technology Co., Ltd., Heilongjiang University, Harbin, China
- *Correspondence: Xiaokun Li; Yong Liu
- Qiang Yang
- School of Electronic Engineering, Heilongjiang University, Harbin, China
- Long Tan
- School of Computer Science and Technology, Heilongjiang University, Harbin, China
- Qingyuan Liu
- Postdoctoral Program of Heilongjiang Hengxun Technology Co., Ltd., Heilongjiang University, Harbin, China
- Yong Liu
- School of Computer Science and Technology, Heilongjiang University, Harbin, China
- *Correspondence: Xiaokun Li; Yong Liu
|
45
|
Wang H, Feng T, Zhao Z, Bai X, Han G, Wang J, Dai Z, Wang R, Zhao W, Ren F, Gao F. Classification of Alzheimer's Disease Based on Deep Learning of Brain Structural and Metabolic Data. Front Aging Neurosci 2022; 14:927217. [PMID: 35903535 PMCID: PMC9315355 DOI: 10.3389/fnagi.2022.927217] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Accepted: 06/08/2022] [Indexed: 11/30/2022] Open
Abstract
To improve the diagnosis and classification of Alzheimer's disease (AD), a modeling method is proposed that combines magnetic resonance imaging (MRI) brain structural data with metabolite levels of the frontal and parietal regions. First, multi-atlas brain segmentation based on T1-weighted images and edited magnetic resonance spectroscopy (MRS) were used to extract data from 279 brain regions and levels of 12 metabolites from regions of interest (ROIs) in the frontal and parietal regions. The t-test combined with false discovery rate (FDR) correction was used to reduce the dimensionality of the data, screening out MRI structural data of 54 brain regions and levels of 4 metabolites clearly correlated with AD. Lastly, a stacked auto-encoder neural network (SAE) was used to classify AD patients and healthy controls (HCs), with classification performance assessed by fivefold cross validation. The results indicated that the mean accuracy of the five experimental models increased from 96 to 100%, the AUC value increased from 0.97 to 1, specificity increased from 90 to 100%, and the F1 value increased from 0.97 to 1. Comparing the effect of each metabolite on model performance revealed that gamma-aminobutyric acid (GABA+) levels in the parietal region produced the most significant improvement: the accuracy rate increased from 96 to 98%, the AUC value from 0.97 to 0.99, and the specificity from 90 to 95%. Moreover, GABA+ levels in the parietal region were significantly correlated with Mini Mental State Examination (MMSE) scores of patients with AD (r = 0.627), and the F statistic was largest (F = 25.538), supporting the hypothesis that a dysfunctional GABAergic system plays an important role in the pathogenesis of AD. Overall, our findings support the conclusion that a comprehensive method combining MRI structural and metabolic data of brain regions can improve model classification efficiency for AD.
Affiliation(s)
- Huiquan Wang
- School of Life Sciences, Tiangong University, Tianjin, China
- Tianzi Feng
- School of Electrical and Information Engineering, Tiangong University, Tianjin, China
- Zhe Zhao
- School of Electrical and Information Engineering, Tiangong University, Tianjin, China
- Xue Bai
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Guang Han
- School of Life Sciences, Tiangong University, Tianjin, China
- Jinhai Wang
- School of Life Sciences, Tiangong University, Tianjin, China
- Zongrui Dai
- Westa College, Southwest University, Chongqing, China
- Rong Wang
- School of Life Sciences, Tiangong University, Tianjin, China
- Weibiao Zhao
- School of Life Sciences, Tiangong University, Tianjin, China
- Fuxin Ren
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Fei Gao
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
|
46
|
Bai T, Du M, Zhang L, Ren L, Ruan L, Yang Y, Qian G, Meng Z, Zhao L, Deen MJ. A novel Alzheimer’s disease detection approach using GAN-based brain slice image enhancement. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
47
|
Multigroup recognition of dementia patients with dynamic brain connectivity under multimodal cortex parcellation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103725] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
48
|
Mishra L, Verma S. Graph Attention Autoencoder Inspired CNN based Brain Tumor Classification using MRI. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
|
49
|
Zamani J, Sadr A, Javadi AH. Classification of early-MCI patients from healthy controls using evolutionary optimization of graph measures of resting-state fMRI, for the Alzheimer's disease neuroimaging initiative. PLoS One 2022; 17:e0267608. [PMID: 35727837 PMCID: PMC9212187 DOI: 10.1371/journal.pone.0267608] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Accepted: 04/11/2022] [Indexed: 11/21/2022] Open
Abstract
Identifying individuals with early mild cognitive impairment (EMCI) can be an effective strategy for early diagnosis and for delaying the progression of Alzheimer's disease (AD). Many approaches have been devised to discriminate those with EMCI from healthy control (HC) individuals. Selecting the most effective parameters has been one of the challenging aspects of these approaches. In this study we suggest an optimization method based on five evolutionary algorithms that can be used to optimize neuroimaging data with a large number of parameters. Resting-state functional magnetic resonance imaging (rs-fMRI) measures, which capture functional connectivity, have been shown to be useful in predicting cognitive decline. Analyzing functional connectivity data with graph measures is common practice and yields a great number of parameters. Using graph measures we calculated 1155 parameters from the functional connectivity data of HC (n = 72) and EMCI (n = 68) participants extracted from the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) database. These parameters were fed into the evolutionary algorithms to select a subset of parameters for classifying the data into the two categories of EMCI and HC using a two-layer artificial neural network. All algorithms achieved a classification accuracy of 94.55%, which is extremely high considering the single-modality input and the small number of participants. These results highlight the potential application of rs-fMRI and the efficiency of such optimization methods in classifying images as HC or EMCI. This is of particular importance considering that MRI images of EMCI individuals cannot be easily identified by experts.
Affiliation(s)
- Jafar Zamani
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Ali Sadr
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amir-Homayoun Javadi
- School of Psychology, University of Kent, Canterbury, United Kingdom
- School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
|
50
|
Fouladi S, Safaei AA, Mammone N, Ghaderi F, Ebadi MJ. Efficient Deep Neural Networks for Classification of Alzheimer’s Disease and Mild Cognitive Impairment from Scalp EEG Recordings. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10033-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|