1. Shi Y, Liao Y. A New Integrated Interpolation Method for High Missing Unstable Disease Surveillance Data - 12 Urban Agglomerations, China, 2009-2020. China CDC Wkly 2024; 6:670-676. PMID: 39027630; PMCID: PMC11252051; DOI: 10.46234/ccdcw2024.124.
Abstract
Introduction: The prevalence of unstable and incomplete monitoring data significantly complicates syndromic analysis, and many currently available data interpolation methods are inadequate for overcoming this issue.
Methods: To improve interpolation accuracy, we propose integrating the SHapley Additive exPlanations model (SHAP) with the structural equation model (SEM), forming a combined SHAP-SEM approach. A case study is then performed to assess the performance of this novel model against traditional methods.
Results: The SHAP-SEM model was used to develop an interpolation model employing data from the Chinese respiratory syndrome surveillance database. We executed three distinct experiments to establish the model datasets, comprising a total of 100 replicates. Performance was evaluated using the root mean square error (RMSE), correlation coefficient (r), and F-score. The findings demonstrate that the SHAP-SEM model consistently achieves superior interpolation accuracy, evident across different seasons and in overall performance.
Discussion: We conclude that the SHAP-SEM model has an exceptional capacity for accurately interpolating volatile and incomplete data. This capability is crucial for building the comprehensive databases needed for syndrome-related risk assessments.
Affiliation(s)
- Yuanhao Shi
- The State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yilan Liao
- The State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, China
2. Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. PMID: 38796881; DOI: 10.1016/j.compbiomed.2024.108635.
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, handling incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
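The three fusion schemes named in this abstract can be sketched numerically. Below is a minimal NumPy illustration with toy feature sizes and stand-in encoders; all names, dimensions, and weights are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
mri = rng.normal(size=(4, 64))   # batch of 4 MRI feature vectors (toy data)
pet = rng.normal(size=(4, 64))   # matching PET feature vectors

def encode(x, w):
    # Stand-in for a learned modality encoder.
    return np.tanh(x @ w)

w_mri = 0.1 * rng.normal(size=(64, 32))
w_pet = 0.1 * rng.normal(size=(64, 32))
w_joint = 0.1 * rng.normal(size=(128, 32))
w_cls = 0.1 * rng.normal(size=(32, 2))    # shared 2-class head

# 1. Input fusion: concatenate raw modalities, then encode jointly.
z_input = encode(np.concatenate([mri, pet], axis=1), w_joint)

# 2. Intermediate fusion: encode each modality, then fuse latent features.
z_mid = np.concatenate([encode(mri, w_mri), encode(pet, w_pet)], axis=1)

# 3. Output fusion: run a classifier per modality, average the logits.
logits = (encode(mri, w_mri) @ w_cls + encode(pet, w_pet) @ w_cls) / 2

print(z_input.shape, z_mid.shape, logits.shape)  # (4, 32) (4, 64) (4, 2)
```

The schemes differ only in where the modalities meet: before any encoder, between encoder and classifier, or after per-modality classifiers.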
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
3. Odusami M, Maskeliūnas R, Damaševičius R, Misra S. Machine learning with multimodal neuroimaging data to classify stages of Alzheimer's disease: a systematic review and meta-analysis. Cogn Neurodyn 2024; 18:775-794. PMID: 38826669; PMCID: PMC11143094; DOI: 10.1007/s11571-023-09993-5.
Abstract
In recent years, Alzheimer's disease (AD) has become a serious threat to human health. Researchers and clinicians alike encounter a significant obstacle when trying to accurately identify and classify AD stages. Several studies have shown that multimodal neuroimaging input can provide valuable insights into the structural and functional changes in the brain related to AD. Machine learning (ML) algorithms can accurately categorize AD phases by identifying patterns and linkages in multimodal neuroimaging data using powerful computational methods. This study aims to assess the contribution of ML methods to the accurate classification of the stages of AD using multimodal neuroimaging data. A systematic search was carried out in the IEEE Xplore, ScienceDirect/Elsevier, ACM Digital Library, and PubMed databases, with forward snowballing performed on Google Scholar. The quantitative analysis used 47 studies. An explainable analysis was performed on the classification algorithms and fusion methods used in the selected studies. Pooled sensitivity and specificity, including diagnostic efficiency, were evaluated through a meta-analysis based on a bivariate model with hierarchical summary receiver operating characteristic (ROC) curves of multimodal neuroimaging data and ML methods in the classification of AD stages. The Wilcoxon signed-rank test was further used to statistically compare the accuracy scores of the existing models. Pooled sensitivity for distinguishing participants with mild cognitive impairment (MCI) from healthy controls (NC) was 83.77% (95% CI: 78.87%, 87.71%); AD from NC, 94.60% (90.76%, 96.89%); progressive MCI (pMCI) from stable MCI (sMCI), 80.41% (74.73%, 85.06%); and early MCI (EMCI) from NC, 86.63% (82.43%, 89.95%). Pooled specificity for differentiating MCI from NC was 79.16% (70.97%, 87.71%); AD from NC, 93.49% (91.60%, 94.90%); pMCI from sMCI, 81.44% (76.32%, 85.66%); and EMCI from NC, 85.68% (81.62%, 88.96%). The Wilcoxon signed-rank test showed a low P-value across all classification tasks. Multimodal neuroimaging data with ML is promising for classifying the stages of AD, but more research is required to increase the validity of its application in clinical practice.
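As a simplified illustration of how per-study sensitivities are pooled in such a meta-analysis, here is a fixed-effect logit pooling sketch in NumPy. This is a simplification, not the hierarchical bivariate model the authors actually fit, and the per-study counts are invented:

```python
import numpy as np

# Hypothetical per-study counts for one task: true positives and false negatives.
tp = np.array([40, 55, 33])
fn = np.array([10, 9, 12])

sens = tp / (tp + fn)                  # per-study sensitivity
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                  # approximate variance of a logit proportion
w = 1 / var                            # inverse-variance weights

pooled_logit = np.sum(w * logit) / np.sum(w)
se = np.sqrt(1 / np.sum(w))

expit = lambda t: 1 / (1 + np.exp(-t))
pooled = expit(pooled_logit)
ci = (expit(pooled_logit - 1.96 * se), expit(pooled_logit + 1.96 * se))
print(f"pooled sensitivity {pooled:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```

The bivariate model additionally pools sensitivity and specificity jointly, with random effects across studies, which this sketch omits.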
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Sanjay Misra
- Department of Applied Data Science, Institute for Energy Technology, Halden, Norway
4. Gravina M, García-Pedrero A, Gonzalo-Martín C, Sansone C, Soda P. Multi input-Multi output 3D CNN for dementia severity assessment with incomplete multimodal data. Artif Intell Med 2024; 149:102774. PMID: 38462278; DOI: 10.1016/j.artmed.2024.102774.
Abstract
Alzheimer's Disease is the most common cause of dementia; its progression spans different stages, from very mild cognitive impairment to mild and severe conditions. In clinical trials, Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are mostly used for the early diagnosis of neurodegenerative disorders, since they provide volumetric and metabolic information about the brain, respectively. In recent years, Deep Learning (DL) has been employed in medical imaging with promising results. Moreover, the use of deep neural networks, especially Convolutional Neural Networks (CNNs), has enabled the development of DL-based solutions in domains that must leverage information from multiple data sources, giving rise to Multimodal Deep Learning (MDL). In this paper, we conduct a systematic analysis of MDL approaches for dementia severity assessment exploiting MRI and PET scans. We propose a Multi Input-Multi Output 3D CNN whose training iterations change according to the characteristics of the input, so that it can handle incomplete acquisitions in which one image modality is missing. Experiments performed on the OASIS-3 dataset show the satisfactory results of the implemented network, which outperforms approaches exploiting both a single image modality and different MDL fusion techniques.
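The idea of a network that adapts to whichever modalities were acquired can be sketched as branch routing. A toy NumPy version follows; the branch weights, sizes, and aggregation rule are invented for illustration, and a real model would learn the branches during training:

```python
import numpy as np

rng = np.random.default_rng(1)
w_mri = 0.1 * rng.normal(size=(32, 16))   # one branch per modality
w_pet = 0.1 * rng.normal(size=(32, 16))

def branch(x, w):
    return np.maximum(x @ w, 0.0)         # ReLU branch, stand-in for a 3D CNN

def forward(mri=None, pet=None):
    # Route only the modalities that were actually acquired; the shared
    # head then sees the mean of the available branch outputs, so its
    # input shape is identical for complete and incomplete cases.
    feats = [branch(x, w) for x, w in ((mri, w_mri), (pet, w_pet)) if x is not None]
    return np.mean(feats, axis=0)

x_mri = rng.normal(size=(2, 32))
x_pet = rng.normal(size=(2, 32))
print(forward(mri=x_mri, pet=x_pet).shape, forward(mri=x_mri).shape)  # (2, 16) (2, 16)
```

Because incomplete samples still produce a fixed-size representation, the same classifier head can be trained on both complete and modality-missing subjects.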
Affiliation(s)
- Michela Gravina
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Napoli, 80125, Italy
- Angel García-Pedrero
- Department of Computer Architecture and Technology, Universidad Politécnica de Madrid, Boadilla del Monte, 28660, Madrid, Spain; Center for Biomedical Technology, Campus de Montegancedo, Universidad Politécnica de Madrid, Pozuelo de Alarcón, 28233, Madrid, Spain
- Consuelo Gonzalo-Martín
- Department of Computer Architecture and Technology, Universidad Politécnica de Madrid, Boadilla del Monte, 28660, Madrid, Spain; Center for Biomedical Technology, Campus de Montegancedo, Universidad Politécnica de Madrid, Pozuelo de Alarcón, 28233, Madrid, Spain
- Carlo Sansone
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Napoli, 80125, Italy
- Paolo Soda
- Department of Engineering, Unit of Computer Systems and Bioinformatics, University of Rome Campus Bio-Medico, Roma, 00128, Italy; Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, 90187, Umeå, Sweden
5. Hu Z, Li Y, Wang Z, Zhang S, Hou W. Conv-Swinformer: Integration of CNN and shift window attention for Alzheimer's disease classification. Comput Biol Med 2023; 164:107304. PMID: 37549456; DOI: 10.1016/j.compbiomed.2023.107304.
Abstract
Deep learning (DL) algorithms based on brain MRI images have achieved great success in the prediction of Alzheimer's disease (AD), with classification accuracy exceeding even that of the most experienced clinical experts. As a novel feature fusion method, the Transformer has achieved excellent performance in many computer vision tasks, which has also greatly promoted its application to medical images. However, when Transformers are used for 3D MRI feature fusion, existing DL models treat the input local features equally, which is inconsistent with the fact that adjacent voxels have stronger semantic connections than spatially distant ones. In addition, because medical image datasets are relatively small, it is difficult to capture local lesion features in limited iterative training when all input features are treated equally. This paper proposes a deep learning model, Conv-Swinformer, that focuses on extracting and integrating local fine-grained features. Conv-Swinformer consists of a CNN module and a Transformer encoder module. The CNN module summarizes the planar features of the MRI slices, and the Transformer module establishes semantic connections in 3D space for these planar features. By introducing the shift window attention mechanism in the Transformer encoder, attention is focused on a small spatial area of the MRI image, which effectively reduces unnecessary background semantic information and enables the model to capture local features more accurately. In addition, the layer-by-layer enlarged attention window further integrates local fine-grained features, enhancing the model's attention ability. Compared with DL algorithms that indiscriminately fuse local features of MRI images, Conv-Swinformer extracts local lesion features at a fine granularity and thus achieves better classification results.
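The shift window attention idea can be illustrated in a stripped-down, 1D NumPy sketch: attention is computed within non-overlapping windows, and a half-window shift lets information cross window borders in the next layer. Token counts, window size, and the absence of learned projections are all simplifications, not the paper's actual 3D architecture:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def window_attention(x, win):
    # Self-attention restricted to non-overlapping windows of length `win`,
    # so each token attends only within its local neighbourhood.
    n, d = x.shape
    out = np.empty_like(x)
    for s in range(0, n, win):
        t = x[s:s + win]                        # queries = keys = values
        out[s:s + win] = softmax(t @ t.T / np.sqrt(d)) @ t
    return out

def shifted_window_attention(x, win):
    # Shift tokens by half a window before partitioning so information
    # can flow across the borders of the previous layer's windows.
    shift = win // 2
    y = window_attention(np.roll(x, -shift, axis=0), win)
    return np.roll(y, shift, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                     # 8 tokens, 4 channels
y = shifted_window_attention(window_attention(x, win=4), win=4)
print(y.shape)  # (8, 4)
```

Alternating plain and shifted windows is what gives locality in each layer while still mixing information globally over depth.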
Affiliation(s)
- Zhentao Hu
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Yanyang Li
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Zheng Wang
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Shuo Zhang
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Wei Hou
- College of Computer and Information Engineering, Henan University, Kaifeng, 475004, China
6. Morsy SE, Zayed N, Yassine IA. Hierarchical based classification method based on fusion of Gaussian map descriptors for Alzheimer diagnosis using T1-weighted magnetic resonance imaging. Sci Rep 2023; 13:13734. PMID: 37612307; PMCID: PMC10447428; DOI: 10.1038/s41598-023-40635-2.
Abstract
Alzheimer's disease (AD) is considered one of the fastest-growing diseases of the elderly; in 2015, AD was reported as the sixth leading cause of death in the US. Non-invasive imaging is widely employed to provide biomarkers supporting AD screening, diagnosis, and progression monitoring. In this study, Gaussian descriptor-based features are proposed as efficient new biomarkers, using T1-weighted Magnetic Resonance Imaging (MRI), to differentiate between Alzheimer's disease (AD), Mild Cognitive Impairment (MCI), and Normal controls (NC). Several Gaussian map-based features are extracted, such as the Gaussian shape operator, Gaussian curvature, and mean curvature. These features are then introduced to a Support Vector Machine (SVM). They were first calculated separately for the hippocampus and amygdala and then fused; fusing the regions before feature extraction was also explored. The Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, comprising 45, 55, and 65 cases of AD, MCI, and NC respectively, was used in this study. The shape operator feature outperformed the other features, with 74.6% and 98.9% accuracy for normal vs. abnormal and AD vs. MCI classification, respectively.
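Gaussian and mean curvature can be computed from first and second surface derivatives. A small NumPy sketch for a surface given as a height map z = f(x, y) follows; this is a simplification of descriptors that would, in practice, be computed on segmented 3D structure surfaces:

```python
import numpy as np

def curvatures(z, h):
    # Gaussian (K) and mean (H) curvature of the graph z = f(x, y),
    # with rows along y, columns along x, and uniform grid spacing h.
    zy, zx = np.gradient(z, h)
    zxy, zxx = np.gradient(zx, h)
    zyy, _ = np.gradient(zy, h)
    denom = 1 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = (zxx * (1 + zy**2) - 2 * zx * zy * zxy + zyy * (1 + zx**2)) / (2 * denom**1.5)
    return K, H

# Sanity check on the paraboloid z = (x^2 + y^2) / 2: at the origin
# both principal curvatures are 1, so K = 1 and H = 1.
x = np.linspace(-1, 1, 21)
X, Y = np.meshgrid(x, x)
K, H = curvatures(0.5 * (X**2 + Y**2), h=x[1] - x[0])
print(round(K[10, 10], 6), round(H[10, 10], 6))  # 1.0 1.0
```

Central differences are exact for quadratic surfaces at interior points, which is why the paraboloid check recovers the analytic values.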
Affiliation(s)
- Shereen E Morsy
- Systems and Biomedical Engineering, Cairo University, Cairo, Egypt
- Nourhan Zayed
- Computer and Systems Department, Electronics Research Institute, Cairo, Egypt
- Mechanical Engineering Department, The British University in Egypt, Cairo, Egypt
- Inas A Yassine
- Systems and Biomedical Engineering, Cairo University, Cairo, Egypt
7. Illakiya T, Karthik R. Automatic Detection of Alzheimer's Disease using Deep Learning Models and Neuro-Imaging: Current Trends and Future Perspectives. Neuroinformatics 2023; 21:339-364. PMID: 36884142; DOI: 10.1007/s12021-023-09625-7.
Abstract
Deep learning algorithms have a huge influence on tackling research issues in the field of medical image processing, acting as a vital aid to radiologists in producing accurate results for effective disease diagnosis. The objective of this research is to analyze the different deep learning methods used for detecting Alzheimer's Disease (AD). This study examines 103 research articles published in various research databases, selected based on specific criteria to find the most relevant findings in the field of AD detection. The review covers deep learning techniques such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transfer Learning (TL). To propose accurate methods for the detection, segmentation, and severity grading of AD, radiological features need to be examined in greater depth. This review analyzes deep learning methods applied to AD detection using neuroimaging modalities such as Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI). Its scope is restricted to deep learning works based on radiological imaging data; a few works have utilized other biomarkers to understand the effect of AD, and only articles published in English were considered for analysis. The work concludes by highlighting key research issues toward effective AD detection. Though several methods have yielded promising results in AD detection, the progression from Mild Cognitive Impairment (MCI) to AD needs to be analyzed in greater depth using DL models.
Affiliation(s)
- T Illakiya
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- R Karthik
- Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
8. Zhu J, Tan Y, Lin R, Miao J, Fan X, Zhu Y, Liang P, Gong J, He H. Efficient self-attention mechanism and structural distilling model for Alzheimer's disease diagnosis. Comput Biol Med 2022; 147:105737. DOI: 10.1016/j.compbiomed.2022.105737.
9. Okyay S, Adar N. Dementia-related user-based collaborative filtering for imputing missing data and generating a reliability scale on clinical test scores. PeerJ 2022; 10:e13425. PMID: 35642196; PMCID: PMC9148556; DOI: 10.7717/peerj.13425.
Abstract
Medical doctors may struggle to diagnose dementia, particularly when clinical test scores are missing or incorrect. In doubtful cases, both morphometrics and demographics are crucial when examining dementia. This study aims to impute and verify clinical test scores using brain MRI analysis and additional demographics, thereby proposing a decision support system that improves diagnosis and prognosis in an easy-to-understand manner. We impute missing clinical test score values by unsupervised, dementia-related, user-based collaborative filtering to minimize errors. By analyzing success rates, we propose a reliability scale that can be used to assess the consistency of existing clinical test scores. The complete base of 816 ADNI1-screening samples was processed, and a hybrid set of 603 features was handled. Moreover, the parameters in use, such as the best neighborhood size and input features, were evaluated for further comparative analysis. Overall, certain collaborative filtering configurations outperformed alternative state-of-the-art imputation techniques. The imputation system and reliability scale based on the proposed methodology are promising for supporting clinical tests.
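The user-based collaborative filtering step can be sketched as follows: patients play the role of "users", clinical tests of "items", and a missing score is filled from the k most similar patients. The toy score matrix, similarity measure, and k below are invented for illustration, not taken from the paper:

```python
import numpy as np

def impute_user_cf(scores, user, item, k=2):
    # Fill scores[user, item] (NaN = missing) with a similarity-weighted
    # mean of the same item over the k most similar other users.
    donors = np.flatnonzero(~np.isnan(scores[:, item]))
    donors = donors[donors != user]
    sims = np.zeros(len(donors))
    for i, other in enumerate(donors):
        common = ~np.isnan(scores[user]) & ~np.isnan(scores[other])
        common[item] = False
        a, b = scores[user, common], scores[other, common]
        norm = np.linalg.norm(a) * np.linalg.norm(b)
        sims[i] = (a @ b) / norm if norm else 0.0   # cosine similarity
    order = np.argsort(sims)[::-1][:k]
    top, w = donors[order], sims[order]
    return float(np.sum(w * scores[top, item]) / np.sum(w))

# Patients x tests; patient 0 is missing test 2.
scores = np.array([[1.0, 2.0, np.nan],
                   [1.0, 2.0, 3.0],
                   [2.0, 4.0, 6.0],
                   [9.0, 1.0, 0.0]])
print(impute_user_cf(scores, user=0, item=2))  # ~4.5 (patients 1 and 2 agree best)
```

Patients 1 and 2 have score profiles proportional to patient 0's, so they dominate the neighborhood, while the dissimilar patient 3 is ignored; averaging over a good neighborhood is what keeps the imputed value plausible.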
Affiliation(s)
- Savas Okyay
- Computer Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Computer Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Nihat Adar
- Computer Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
10. Abdelaziz M, Wang T, Elazab A. Fusing Multimodal and Anatomical Volumes of Interest Features Using Convolutional Auto-Encoder and Convolutional Neural Networks for Alzheimer's Disease Diagnosis. Front Aging Neurosci 2022; 14:812870. PMID: 35572142; PMCID: PMC9096261; DOI: 10.3389/fnagi.2022.812870.
Abstract
Alzheimer's disease (AD) is an age-related disease that affects a large proportion of the elderly. Currently, neuroimaging techniques [e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)] are promising modalities for AD diagnosis. Since not all brain regions are affected by AD, a common technique is to study regions of interest (ROIs) that are believed to be closely related to AD. Conventional methods used ROIs identified by handcrafted features through the Automated Anatomical Labeling (AAL) atlas rather than utilizing the original images, which may miss informative features. In addition, they learned their frameworks from discriminative patches instead of full images, in a multistage learning scheme. In this paper, we integrate the original image features from MRI and PET with their ROI features in one learning process. Furthermore, we use the ROI features to force the network to focus on the regions that are highly related to AD, so the performance of AD diagnosis can be improved. Specifically, we first obtain the ROI features from the AAL atlas, then register every ROI with its corresponding region of the original image to obtain a synthetic image for each modality of every subject. We then employ a convolutional auto-encoder network to learn the synthetic image features and a convolutional neural network (CNN) to learn the original image features, concatenating the features from both networks after each convolution layer. Finally, the learned features from MRI and PET are concatenated for brain disease classification. Experiments are carried out on the ADNI datasets, including ADNI-1 and ADNI-2, to evaluate our method's performance. Our method demonstrates higher performance in brain disease classification than recent studies.
Affiliation(s)
- Mohammed Abdelaziz
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Department of Communications and Electronics, Delta Higher Institute for Engineering and Technology (DHIET), Mansoura, Egypt
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Ahmed Elazab
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Computer Science Department, Misr Higher Institute of Commerce and Computers, Mansoura, Egypt
11. Jin L, Zhao K, Zhao Y, Che T, Li S. A Hybrid Deep Learning Method for Early and Late Mild Cognitive Impairment Diagnosis With Incomplete Multimodal Data. Front Neuroinform 2022; 16:843566. PMID: 35370588; PMCID: PMC8965366; DOI: 10.3389/fninf.2022.843566.
Abstract
Multimodality neuroimages have been widely applied to diagnose mild cognitive impairment (MCI). However, the missing data problem is unavoidable. Most previously developed methods first train a generative adversarial network (GAN) to synthesize missing data and then train a classification network with the completed data. These methods independently train two networks with no information communication. Thus, the resulting GAN cannot focus on the crucial regions that are helpful for classification. To overcome this issue, we propose a hybrid deep learning method. First, a classification network is pretrained with paired MRI and PET images. Afterward, we use the pretrained classification network to guide a GAN by focusing on the features that are helpful for classification. Finally, we synthesize the missing PET images and use them with real MR images to fine-tune the classification model to make it better adapt to the synthesized images. We evaluate our proposed method on the ADNI dataset, and the results show that our method improves the accuracies obtained on the validation and testing sets by 3.84 and 5.82%, respectively. Moreover, our method increases the accuracies for the validation and testing sets by 7.7 and 9.09%, respectively, when we synthesize the missing PET images via our method. An ablation experiment shows that the last two stages are essential for our method. We also compare our method with other state-of-the-art methods, and our method achieves better classification performance.
Affiliation(s)
- Leiming Jin
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Kun Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Shuyu Li
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- State Key Lab of Cognition Neuroscience and Learning, Beijing Normal University, Beijing, China
- Correspondence: Shuyu Li
12. Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. PMID: 35028878; DOI: 10.1007/s12149-021-01697-2.
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, together with increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and can potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships in multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. Applications in brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate AI adoption in medical imaging.
Affiliation(s)
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
- Donna Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA