1. Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. [PMID: 38796881] [DOI: 10.1016/j.compbiomed.2024.108635]
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, handling incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
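The three fusion schemes this abstract names can be sketched in plain NumPy. This is an illustrative toy, not any of the reviewed architectures: the extractors, weights, and dimensions below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(x, w):
    """Toy per-modality feature extractor (one ReLU layer)."""
    return np.maximum(x @ w, 0.0)

def classify(f, w):
    """Toy classifier head returning class probabilities via softmax."""
    z = f @ w
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Two modalities for a batch of 4 "patients" (e.g., MRI- and PET-derived vectors).
mri, pet = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))

# 1) Input fusion: concatenate raw inputs, then one shared pipeline.
x = np.concatenate([mri, pet], axis=1)                                 # (4, 32)
p_input = classify(extract(x, rng.normal(size=(32, 8))), rng.normal(size=(8, 2)))

# 2) Intermediate fusion: per-modality feature extraction, fused mid-network.
f = np.concatenate([extract(mri, rng.normal(size=(16, 8))),
                    extract(pet, rng.normal(size=(16, 8)))], axis=1)   # (4, 16)
p_mid = classify(f, rng.normal(size=(16, 2)))

# 3) Output fusion: separate per-modality classifiers, predictions averaged.
p_out = 0.5 * (classify(extract(mri, rng.normal(size=(16, 8))), rng.normal(size=(8, 2)))
               + classify(extract(pet, rng.normal(size=(16, 8))), rng.normal(size=(8, 2))))

for p in (p_input, p_mid, p_out):
    assert p.shape == (4, 2) and np.allclose(p.sum(axis=1), 1.0)
```

The only structural difference among the three is where the concatenation (or averaging) happens: before the network, between the extractor and the head, or after the per-modality predictions.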
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
2. Zuo Q, Wu H, Chen CLP, Lei B, Wang S. Prior-Guided Adversarial Learning With Hypergraph for Predicting Abnormal Connections in Alzheimer's Disease. IEEE Trans Cybern 2024; 54:3652-3665. [PMID: 38236677] [DOI: 10.1109/tcyb.2023.3344641]
Abstract
Alzheimer's disease (AD) is characterized by alterations of the brain's structural and functional connectivity during its progressive degenerative processes. Existing auxiliary diagnostic methods have accomplished the classification task, but few of them can accurately evaluate the changing characteristics of brain connectivity. In this work, a prior-guided adversarial learning with hypergraph (PALH) model is proposed to predict abnormal brain connections using triple-modality medical images. Concretely, a prior distribution from anatomical knowledge is estimated to guide multimodal representation learning using an adversarial strategy. Also, the pairwise collaborative discriminator structure is further utilized to narrow the difference in representation distribution. Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images. Experimental results demonstrate that the proposed model outperforms other related methods in analyzing and predicting AD progression. More importantly, the identified abnormal connections are partly consistent with previous neuroscience discoveries. The proposed model can evaluate the characteristics of abnormal brain connections at different stages of AD, which is helpful for cognitive disease study and early treatment.
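The paper's hypergraph perceptual network is not reproduced here. As a hedged illustration of the basic ingredient, the sketch below builds a k-nearest-neighbour hypergraph incidence matrix (one hyperedge per node), a common construction in hypergraph learning; the feature vectors and parameters are synthetic placeholders.

```python
import numpy as np

def knn_hypergraph(features, k=3):
    """Incidence matrix H (nodes x hyperedges): hyperedge j contains
    node j plus its k nearest neighbours in feature space."""
    n = features.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))
    for j in range(n):
        members = np.argsort(d2[j])[:k + 1]   # includes node j itself (distance 0)
        H[members, j] = 1.0
    return H

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))              # e.g., 10 brain regions, 5-dim features
H = knn_hypergraph(X, k=3)
assert H.shape == (10, 10)
assert np.all(H.sum(axis=0) == 4)         # each hyperedge: a node plus 3 neighbours
assert np.all(np.diag(H) == 1.0)
```

Unlike an ordinary adjacency matrix, each column of H groups several nodes at once, which is what lets hypergraph models express the high-order relations the abstract refers to.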
3. Ma Y, He J, Tan D, Han X, Feng R, Xiong H, Peng X, Pu X, Zhang L, Li Y, Chen S. The clinical and imaging data fusion model for single-period cerebral CTA collateral circulation assessment. J Xray Sci Technol 2024:XST240083. [PMID: 38820061] [DOI: 10.3233/xst-240083]
Abstract
Background: The Chinese population ranks among the highest globally in stroke prevalence. In the clinical diagnostic process, radiologists use computed tomography angiography (CTA) images to precisely assess collateral circulation in the brains of stroke patients. Recent studies frequently combine imaging and machine learning methods to develop computer-aided diagnostic algorithms; however, in studies of collateral circulation assessment, the extracted imaging features are primarily manually designed statistical features, which have limited representational capacity. Accurately assessing collateral circulation from image features in brain CTA images therefore remains challenging. Methods: To tackle this issue, and given the scarcity of publicly accessible medical datasets, we combined clinical data with imaging data to establish a dataset named RadiomicsClinicCTA. We devised two collateral circulation assessment models that exploit the synergistic potential of patients' clinical information and imaging data: data-level fusion and feature-level fusion. To remove redundant features from the dataset, we employed Levene's test and the t-test for feature pre-screening. We then performed feature dimensionality reduction using the LASSO and random forest algorithms and trained classification models with various machine learning algorithms on the data-level fusion dataset after feature engineering. Results: Experiments on the RadiomicsClinicCTA dataset demonstrate that the optimized data-level fusion model achieves an accuracy and AUC exceeding 86%. We then trained and assessed the feature-level fusion classification model, which outperforms the optimized data-level fusion model. Comparative experiments show that the fused dataset differentiates between good and poor collateral features better than the pure radiomics dataset. Conclusions: Our study underscores the efficacy of integrating clinical and imaging data through fusion models, significantly enhancing the accuracy of collateral circulation assessment in stroke patients.
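The pre-screening pipeline this abstract describes (Levene's test to pick the t-test variant, then LASSO for dimensionality reduction) can be sketched on synthetic data; the RadiomicsClinicCTA dataset is not public here, so the features, labels, and thresholds below are placeholders.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 120, 40
X = rng.normal(size=(n, p))                    # toy radiomics + clinical features
y = rng.integers(0, 2, size=n)                 # good vs. poor collateral label
X[:, 0] += 1.5 * y                             # plant one informative feature

# Pre-screening: keep features whose group means differ by t-test, choosing the
# equal- or unequal-variance form per feature according to Levene's test.
keep = []
for j in range(p):
    a, b = X[y == 0, j], X[y == 1, j]
    equal_var = stats.levene(a, b).pvalue > 0.05
    if stats.ttest_ind(a, b, equal_var=equal_var).pvalue < 0.05:
        keep.append(j)

# Dimensionality reduction: LASSO keeps features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X[:, keep], y)
selected = [keep[i] for i, c in enumerate(lasso.coef_) if c != 0.0]
assert 0 in selected                           # the planted feature survives
```

A random forest importance ranking (the paper's other reduction route) could be swapped in for the LASSO step with `RandomForestClassifier(...).feature_importances_`.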
Affiliation(s)
- Yuqi Ma
- College of Computer and Information Science, Southwest University, Chongqing, China
- Jingliu He
- College of Computer and Information Science, Southwest University, Chongqing, China
- Duo Tan
- The Second People's Hospital of Guizhou Province, Guizhou, China
- Xu Han
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Ruiqi Feng
- College of Computer and Information Science, Southwest University, Chongqing, China
- Hailing Xiong
- College of Electronic and Information Engineering, Southwest University, Chongqing, China
- Xihua Peng
- College of Computer and Information Science, Southwest University, Chongqing, China
- Xun Pu
- College of Computer and Information Science, Southwest University, Chongqing, China
- Lin Zhang
- College of Computer and Information Science, Southwest University, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Southwest University, Chongqing, China
- Big Data & Intelligence Engineering School, Chongqing College of International Business and Economics, Chongqing, China
4. Cheek CL, Lindner P, Grigorenko EL. Statistical and Machine Learning Analysis in Brain-Imaging Genetics: A Review of Methods. Behav Genet 2024; 54:233-251. [PMID: 38336922] [DOI: 10.1007/s10519-024-10177-y]
Abstract
Brain-imaging-genetic analysis is an emerging field of research that aims at aggregating data from neuroimaging modalities, which characterize brain structure or function, and genetic data, which capture the structure and function of the genome, to explain or predict normal (or abnormal) brain performance. Brain-imaging-genetic studies offer great potential for understanding complex brain-related diseases/disorders of genetic etiology. Still, a combined brain-wide genome-wide analysis is difficult to perform, as typical datasets fuse multiple modalities, each with high dimensionality, unique correlational landscapes, and often low statistical signal-to-noise ratios. In this review, we outline the progress in brain-imaging-genetic methodologies, from early massive univariate approaches to current deep learning approaches, highlighting each approach's strengths and weaknesses and situating each within the field's development. We conclude by discussing selected remaining challenges and prospects for the field.
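The "massive univariate" starting point this review describes amounts to testing every (SNP, voxel) pair independently and correcting for multiple comparisons. A minimal sketch on synthetic data (allele counts, regional measures, and the planted effect are all invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, n_snps, n_voxels = 200, 50, 30
snps = rng.integers(0, 3, size=(n, n_snps)).astype(float)   # 0/1/2 allele counts
brain = rng.normal(size=(n, n_voxels))                       # e.g., regional volumes
brain[:, 0] += 0.8 * snps[:, 0]                              # one true association

# Massive univariate analysis: test every (SNP, voxel) pair, then
# Bonferroni-correct across all tests.
pvals = np.ones((n_snps, n_voxels))
for i in range(n_snps):
    for j in range(n_voxels):
        pvals[i, j] = stats.pearsonr(snps[:, i], brain[:, j])[1]

hits = np.argwhere(pvals < 0.05 / pvals.size)                # Bonferroni threshold
assert [0, 0] in hits.tolist()                               # planted pair is found
```

The quadratic blow-up in the number of tests (here 1,500; brain-wide genome-wide it is billions) and the resulting brutal correction are exactly why the field moved toward the multivariate and deep learning approaches the review surveys.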
Affiliation(s)
- Connor L Cheek
- Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA
- Department of Physics, University of Houston, Houston, TX, USA
- Peggy Lindner
- Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA
- Department of Information Science Technology, University of Houston, Houston, TX, USA
- Elena L Grigorenko
- Texas Institute for Evaluation, Measurement, and Statistics, University of Houston, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
- Baylor College of Medicine, Houston, TX, USA
- Sirius University of Science and Technology, Sochi, Russia
5. Machado Reyes D, Chao H, Hahn J, Shen L, Yan P. Identifying Progression-Specific Alzheimer's Subtypes Using Multimodal Transformer. J Pers Med 2024; 14:421. [PMID: 38673048] [PMCID: PMC11051083] [DOI: 10.3390/jpm14040421]
Abstract
Alzheimer's disease (AD) is the most prevalent neurodegenerative disease, yet its current treatments are limited to stopping disease progression. Moreover, the effectiveness of these treatments remains uncertain due to the heterogeneity of the disease. Therefore, it is essential to identify disease subtypes at a very early stage. Current data-driven approaches can be used to classify subtypes during later stages of AD or related disorders, but making predictions in the asymptomatic or prodromal stage is challenging. Furthermore, the classifications of most existing models lack explainability, and these models rely solely on a single modality for assessment, limiting the scope of their analysis. Thus, we propose a multimodal framework that utilizes early-stage indicators, including imaging, genetics, and clinical assessments, to classify AD patients into progression-specific subtypes at an early stage. In our framework, we introduce a tri-modal co-attention mechanism (Tri-COAT) to explicitly capture cross-modal feature associations. Data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (slow progressing = 177, intermediate = 302, and fast = 15) were used to train and evaluate Tri-COAT using a 10-fold stratified cross-testing approach. Our proposed model outperforms baseline models and sheds light on essential associations across multimodal features supported by known biological mechanisms. The multimodal design behind Tri-COAT allows it to achieve the highest classification area under the receiver operating characteristic curve while simultaneously providing interpretability to the model predictions through the co-attention mechanism.
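Tri-COAT's tri-modal co-attention is not reproduced here; as a hedged two-modality sketch of the underlying mechanism, the NumPy snippet below shows scaled dot-product cross-modal attention, where tokens from one modality query another. Token counts, dimensions, and weights are arbitrary placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, wq, wk, wv):
    """Scaled dot-product attention: one modality queries another."""
    q, k, v = query_feats @ wq, context_feats @ wk, context_feats @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v, scores

rng = np.random.default_rng(4)
d = 8
imaging = rng.normal(size=(6, d))    # 6 imaging-derived tokens
genetics = rng.normal(size=(4, d))   # 4 genetic tokens (e.g., SNP groups)
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

fused, attn = cross_attention(imaging, genetics, wq, wk, wv)
assert fused.shape == (6, d)
assert np.allclose(attn.sum(axis=1), 1.0)   # each imaging token attends over genetics
```

The attention matrix `attn` is also what makes such models inspectable: its rows show which genetic tokens each imaging token drew on, the kind of cross-modal association the abstract credits for interpretability.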
Affiliation(s)
- Diego Machado Reyes
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hanqing Chao
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Juergen Hahn
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Li Shen
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Pingkun Yan
- Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
6. Lee S, Cho Y, Ji Y, Jeon M, Kim A, Ham BJ, Joo YY. Multimodal integration of neuroimaging and genetic data for the diagnosis of mood disorders based on computer vision models. J Psychiatr Res 2024; 172:144-155. [PMID: 38382238] [DOI: 10.1016/j.jpsychires.2024.02.036]
Abstract
Mood disorders, particularly major depressive disorder (MDD) and bipolar disorder (BD), are often underdiagnosed, leading to substantial morbidity. Harnessing the potential of emerging methodologies, we propose a novel multimodal fusion approach that integrates patient-oriented brain structural magnetic resonance imaging (sMRI) scans with DNA whole-exome sequencing (WES) data. Multimodal data fusion aims to improve the detection of mood disorders by employing established deep-learning architectures for computer vision and machine-learning strategies. We analyzed brain imaging genetic data of 321 East Asian individuals, including 147 patients with MDD, 78 patients with BD, and 96 healthy controls. We developed and evaluated six fusion models by leveraging common computer vision models for image classification, Vision Transformer (ViT), Inception-V3, and ResNet50, in conjunction with advanced machine-learning techniques (XGBoost and LightGBM) known for high-dimensional data analysis. Model validation was performed using 10-fold cross-validation. Our ViT ⊕ XGBoost fusion model with MRI scans, genomic single-nucleotide polymorphism (SNP) data, and unweighted polygenic risk score (PRS) outperformed baseline models, achieving an incremental area under the curve (AUC) of 0.2162 (+32.03%) and 0.0675 (+8.19%) and incremental accuracy of 0.1455 (+25.14%) and 0.0849 (+13.28%) compared to SNP-only and image-only baseline models, respectively. Our findings highlight the opportunity to refine mood disorder diagnostics by demonstrating the transformative potential of integrating diverse, yet complementary, data modalities and methodologies.
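The fusion pattern described here, a vision backbone embedding the scan and a gradient-boosting model consuming that embedding alongside SNPs and PRS, can be sketched with scikit-learn. Everything below is a stand-in: PCA replaces the ViT backbone, `GradientBoostingClassifier` replaces XGBoost, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 240
y = rng.integers(0, 2, size=n)                       # case vs. control
images = rng.normal(size=(n, 64)) + 0.5 * y[:, None] # flattened "scans" with signal
snps = rng.integers(0, 3, size=(n, 30)).astype(float)
snps[:, 0] += y                                      # one informative variant
prs = snps.sum(axis=1) / 30                          # toy unweighted-PRS stand-in

# Stand-in "vision backbone": PCA embedding of the image (the paper uses ViT).
img_emb = PCA(n_components=10, random_state=0).fit_transform(images)

# Fusion: concatenate image embedding, SNPs, and PRS; boost on the joint vector.
fused = np.hstack([img_emb, snps, prs[:, None]])
clf = GradientBoostingClassifier(random_state=0)
acc = cross_val_score(clf, fused, y, cv=5).mean()
acc_snp = cross_val_score(GradientBoostingClassifier(random_state=0), snps, y, cv=5).mean()
print(f"fused acc {acc:.2f} vs SNP-only acc {acc_snp:.2f}")
```

The comparison against the SNP-only model mirrors the paper's incremental-AUC evaluation, though on this toy data the gap depends entirely on the planted signal strengths.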
Affiliation(s)
- Seungeun Lee
- Department of Mathematics, Korea University, Anamro 145, Seoungbuk-gu, Seoul, 02841, Republic of Korea
- Yongwon Cho
- Department of Computer Science and Engineering, Soonchunhyang University, Republic of Korea
- Yuyoung Ji
- Division of Life Science, Korea University, Anamro 145, Seoungbuk-gu, Seoul, 02841, Republic of Korea
- Minhyek Jeon
- Division of Biotechnology, Korea University, Anamro 145, Seoungbuk-gu, Seoul, 02841, Republic of Korea; Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, 15213, United States
- Aram Kim
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, 02841, Republic of Korea
- Byung-Joo Ham
- Department of Psychiatry, Korea University Anam Hospital, 73, Goryeodae-ro, Seoungbuk-gu, Seoul, 02841, Republic of Korea
- Yoonjung Yoonie Joo
- Department of Digital Health, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, 115 Irwon-Ro, Gangnam-Gu, Seoul, 06355, Republic of Korea
7. Castellano G, Esposito A, Lella E, Montanaro G, Vessio G. Automated detection of Alzheimer's disease: a multi-modal approach with 3D MRI and amyloid PET. Sci Rep 2024; 14:5210. [PMID: 38433282] [PMCID: PMC10909869] [DOI: 10.1038/s41598-024-56001-9]
Abstract
Recent advances in deep learning and imaging technologies have revolutionized automated medical image analysis, especially in diagnosing Alzheimer's disease through neuroimaging. Despite the availability of various imaging modalities for the same patient, the development of multi-modal models leveraging these modalities remains underexplored. This paper addresses this gap by proposing and evaluating classification models using 2D and 3D MRI images and amyloid PET scans in uni-modal and multi-modal frameworks. Our findings demonstrate that models using volumetric data learn more effective representations than those using only 2D images. Furthermore, integrating multiple modalities significantly enhances model performance over single-modality approaches. We achieved state-of-the-art performance on the OASIS-3 cohort. Additionally, explainability analyses with Grad-CAM indicate that our model focuses on crucial AD-related regions for its predictions, underscoring its potential to aid in understanding the disease's causes.
Affiliation(s)
- Andrea Esposito
- Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
- Eufemia Lella
- Sirio - Research & Innovation, Sidea Group, Bari, Italy
- Gennaro Vessio
- Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
8. Vedaei F, Mashhadi N, Alizadeh M, Zabrecky G, Monti D, Wintering N, Navarreto E, Hriso C, Newberg AB, Mohamed FB. Deep learning-based multimodality classification of chronic mild traumatic brain injury using resting-state functional MRI and PET imaging. Front Neurosci 2024; 17:1333725. [PMID: 38312737] [PMCID: PMC10837852] [DOI: 10.3389/fnins.2023.1333725]
Abstract
Mild traumatic brain injury (mTBI) is a public health concern. The present study aimed to develop an automatic classifier to distinguish between patients with chronic mTBI (n = 83) and healthy controls (HCs) (n = 40). Resting-state functional MRI (rs-fMRI) and positron emission tomography (PET) imaging were acquired from the subjects. We proposed a novel deep-learning-based framework, including an autoencoder (AE) with rectified linear unit (ReLU) and sigmoid activation functions, to extract high-level latent features. Single and multimodality algorithms integrating multiple rs-fMRI metrics and PET data were developed. We hypothesized that combining different imaging modalities provides complementary information and improves classification performance. Additionally, a novel data interpretation approach was utilized to identify top-performing features learned by the AEs. Our method delivered a classification accuracy within the range of 79-91.67% for single neuroimaging modalities. However, classification performance improved to 95.83% when employing the multimodality model. The models identified several brain regions located in the default mode network, sensorimotor network, visual cortex, cerebellum, and limbic system as the most discriminative features. We suggest that this approach could be extended to identify objective biomarkers for predicting mTBI in clinical settings.
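This is not the authors' architecture, but the two ingredients the abstract names, a ReLU encoder and a sigmoid output, fit in a minimal NumPy autoencoder trained by plain gradient descent on synthetic data. The latent layer `H` is what a downstream classifier would consume.

```python
import numpy as np

rng = np.random.default_rng(6)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy AE: ReLU encoder to a 4-dim latent code, sigmoid decoder, MSE loss.
X = rng.uniform(size=(32, 12))          # e.g., 12 rs-fMRI/PET metrics scaled to [0, 1]
W1 = 0.1 * rng.normal(size=(12, 4))
W2 = 0.1 * rng.normal(size=(4, 12))
lr, losses = 0.5, []

for _ in range(200):
    H = relu(X @ W1)                    # latent features (classifier input)
    Xhat = sigmoid(H @ W2)
    losses.append(((Xhat - X) ** 2).mean())
    # Backprop: MSE -> sigmoid -> linear -> ReLU -> linear.
    dXhat = 2 * (Xhat - X) / X.size
    dZ2 = dXhat * Xhat * (1 - Xhat)
    dW2 = H.T @ dZ2
    dZ1 = (dZ2 @ W2.T) * (H > 0)
    dW1 = X.T @ dZ1
    W1 -= lr * dW1
    W2 -= lr * dW2

assert losses[-1] < losses[0]           # reconstruction improves with training
latent = relu(X @ W1)
assert latent.shape == (32, 4)
```

In the paper's setting the latent codes from per-modality AEs are then fed to single- and multimodality classifiers; here they are simply exposed as `latent`.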
Affiliation(s)
- Faezeh Vedaei
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, United States
- Mahdi Alizadeh
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- George Zabrecky
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Daniel Monti
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Nancy Wintering
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Emily Navarreto
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Chloe Hriso
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Andrew B. Newberg
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
- Department of Integrative Medicine and Nutritional Sciences, Marcus Institute of Integrative Health, Thomas Jefferson University, Philadelphia, PA, United States
- Feroze B. Mohamed
- Department of Radiology, Jefferson Integrated Magnetic Resonance Imaging Center, Thomas Jefferson University, Philadelphia, PA, United States
9. Zhang L, Wang L, Liu T, Zhu D. Disease2Vec: Encoding Alzheimer's progression via disease embedding tree. Pharmacol Res 2024; 199:107038. [PMID: 38072216] [DOI: 10.1016/j.phrs.2023.107038]
Abstract
For decades, a variety of predictive approaches have been proposed and evaluated in terms of their prediction capability for Alzheimer's Disease (AD) and its precursor, mild cognitive impairment (MCI). Most of them focused on prediction or identification of statistical differences among different clinical groups or phases, especially in the context of binary or multi-class classification. The continuous nature of AD development and the transition states between successive AD-related stages have typically been overlooked. Though a few progression models of AD have been studied recently, they were mainly designed to determine and compare the order of specific biomarkers. How to effectively predict an individual patient's status within a wide spectrum of continuous AD progression has been largely understudied. In this work, we developed a novel learning-based embedding framework to encode the intrinsic relations among AD-related clinical stages by a set of meaningful embedding vectors in the latent space (Disease2Vec). We name this process disease embedding. From these embeddings, our framework generates a disease embedding tree (DETree) which effectively represents different clinical stages as a tree trajectory reflecting AD progression and thus can be used to predict clinical status by projecting individuals onto this continuous trajectory. Through this model, DETree can not only perform efficient and accurate prediction for patients at any stage of AD development (across five fine-grained clinical groups instead of the typical two), but also provide richer status information by examining the projected locations within a wide and continuous AD progression process. (Code will be available: https://github.com/qidianzl/Disease2Vec.)
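DETree itself learns a tree of stage embeddings; as a much-simplified, hedged analogue of the projection idea, the sketch below places five stand-in stage embeddings by hand along a line and soft-assigns a patient vector to a continuous position on that trajectory. The stage names, embeddings, and kernel are all illustrative, not the paper's.

```python
import numpy as np

stages = ["CN", "SMC", "EMCI", "LMCI", "AD"]
# Stand-in stage embeddings along a 1-D axis (Disease2Vec learns these from
# data; here they are hand-placed for illustration).
positions = np.linspace(0.0, 1.0, len(stages))
centers = positions[:, None] * np.ones((1, 8))        # (5 stages, 8-dim latent)

def project(patient_vec, centers, positions):
    """Continuous position of a patient along the stage trajectory in [0, 1]."""
    d = np.linalg.norm(centers - patient_vec, axis=1)
    w = np.exp(-d ** 2)               # soft assignment to stage centres
    w /= w.sum()
    return float(w @ positions)

early = project(0.10 * np.ones(8), centers, positions)
late = project(0.90 * np.ones(8), centers, positions)
assert 0.0 <= early < late <= 1.0     # ordering along the trajectory is preserved
```

The payoff of such a projection, as the abstract argues, is that a patient's status becomes a point on a continuum between stages rather than a hard label from a classifier.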
Affiliation(s)
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA
- Li Wang
- Department of Mathematics, The University of Texas at Arlington, Arlington, TX, USA
- Tianming Liu
- Department of Computer Science, The University of Georgia, Athens, GA, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA
10. Gao X, Shi F, Shen D, Liu M. Multimodal transformer network for incomplete image generation and diagnosis of Alzheimer's disease. Comput Med Imaging Graph 2023; 110:102303. [PMID: 37832503] [DOI: 10.1016/j.compmedimag.2023.102303]
Abstract
Multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) could provide complementary information about the brain and have been widely investigated for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD). However, multimodal brain images are often incomplete in clinical practice. It is still challenging to make use of multimodality for disease diagnosis with missing data. In this paper, we propose a deep learning framework with the multi-level guided generative adversarial network (MLG-GAN) and multimodal transformer (Mul-T) for incomplete image generation and disease classification, respectively. First, MLG-GAN is proposed to generate the missing data, guided by multi-level information from voxels, features, and tasks. In addition to voxel-level supervision and task-level constraint, a feature-level auto-regression branch is proposed to embed the features of target images for an accurate generation. With the complete multimodal images, we propose a Mul-T network for disease diagnosis, which can not only combine the global and local features but also model the latent interactions and correlations from one modality to another with the cross-modal attention mechanism. Comprehensive experiments on three independent datasets (i.e., ADNI-1, ADNI-2, and OASIS-3) show that the proposed method achieves superior performance in the tasks of image generation and disease diagnosis compared to state-of-the-art methods.
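MLG-GAN learns the missing-modality mapping adversarially with multi-level guidance; none of that is reproduced here. As the simplest possible stand-in for the same task, the sketch below imputes a missing modality with a cross-modality linear regression on synthetic, deliberately correlated data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 150
mri = rng.normal(size=(n, 10))
# A correlated second modality: PET features as a noisy linear map of MRI.
pet = mri @ rng.normal(size=(10, 6)) + 0.1 * rng.normal(size=(n, 6))

# Suppose the last 30 subjects are missing PET. Fit the cross-modality map on
# complete subjects, then fill in the missing rows from their MRI.
complete, missing = slice(0, 120), slice(120, None)
model = LinearRegression().fit(mri[complete], pet[complete])
pet_filled = pet.copy()
pet_filled[missing] = model.predict(mri[missing])

err = np.abs(pet_filled[missing] - pet[missing]).mean()
assert err < 0.2    # on this toy linear data, imputation recovers PET closely
```

On real brain images the MRI-to-PET mapping is far from linear, which is precisely why the paper resorts to a guided GAN; this sketch only fixes the problem setup.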
Affiliation(s)
- Xingyu Gao
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China; School of Biomedical Engineering, ShanghaiTech University, China
- Manhua Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; MoE Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
11. Newby D, Orgeta V, Marshall CR, Lourida I, Albertyn CP, Tamburin S, Raymont V, Veldsman M, Koychev I, Bauermeister S, Weisman D, Foote IF, Bucholc M, Leist AK, Tang EYH, Tai XY, Llewellyn DJ, Ranson JM. Artificial intelligence for dementia prevention. Alzheimers Dement 2023; 19:5952-5969. [PMID: 37837420] [PMCID: PMC10843720] [DOI: 10.1002/alz.13463]
Abstract
INTRODUCTION: A wide range of modifiable risk factors for dementia have been identified. Considerable debate remains about these risk factors, possible interactions between them or with genetic risk, and causality, and about how they can help in clinical trial recruitment and drug development. Artificial intelligence (AI) and machine learning (ML) may refine understanding. METHODS: ML approaches are being developed in dementia prevention. We discuss exemplar uses and evaluate the current applications and limitations in the dementia prevention field. RESULTS: Risk-profiling tools may help identify high-risk populations for clinical trials; however, their performance needs improvement. New risk-profiling and trial-recruitment tools underpinned by ML models may be effective in reducing costs and improving future trials. ML can inform drug-repurposing efforts and the prioritization of disease-modifying therapeutics. DISCUSSION: ML is not yet widely used but has considerable potential to enhance precision in dementia prevention. HIGHLIGHTS: Artificial intelligence (AI) is not widely used in the dementia prevention field. Risk-profiling tools are not used in clinical practice. Causal insights are needed to understand risk factors over the lifespan. AI will help personalize risk-management tools for dementia prevention. AI could target the specific patient groups that will benefit most from clinical trials.
Affiliation(s)
- Danielle Newby
- University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK
- Vasiliki Orgeta
- Division of Psychiatry, University College London, London, W1T 7BN, UK
- Charles R Marshall
- Preventive Neurology Unit, Wolfson Institute of Population Health, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, E1 4NS, UK
- Department of Neurology, Royal London Hospital, London, E1 1BB, UK
- Ilianna Lourida
- Population Health Sciences Institute, Newcastle University, Newcastle, NE2 4AX, UK
- University of Exeter Medical School, Exeter, EX1 2HZ, UK
- Christopher P Albertyn
- Department of Old Age Psychiatry, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, SE5 8AF, UK
- Stefano Tamburin
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, 37129, Italy
- Vanessa Raymont
- University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK
- Michele Veldsman
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, OX3 9DU, UK
- Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
- Ivan Koychev
- University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK
- Sarah Bauermeister
- University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK
- David Weisman
- Abington Neurological Associates, Abington, PA 19001, USA
- Isabelle F Foote
- Preventive Neurology Unit, Wolfson Institute of Population Health, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, E1 4NS, UK
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO 80309, USA
- Magda Bucholc
- Cognitive Analytics Research Lab, School of Computing, Engineering & Intelligent Systems, Ulster University, Derry, BT48 7JL, UK
- Anja K Leist
- Institute for Research on Socio-Economic Inequality (IRSEI), Department of Social Sciences, University of Luxembourg, L-4365, Luxembourg
- Eugene Y H Tang
- Population Health Sciences Institute, Newcastle University, Newcastle, NE2 4AX, UK
- Xin You Tai
- Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, OX3 9DU, UK
- Division of Clinical Neurology, John Radcliffe Hospital, Oxford University Hospitals Trust, Oxford, OX3 9DU, UK
- David J. Llewellyn
- University of Exeter Medical School, Exeter, EX1 2HZ, UK
- The Alan Turing Institute, London, NW1 2DB, UK
12
|
Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, Lee SW, Shin SH, Chung YG, Kang SB, Jang WS, Yoon DK. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep 2023; 13:20431. [PMID: 37993627 PMCID: PMC10665312 DOI: 10.1038/s41598-023-47706-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023] Open
Abstract
Orthopaedic surgeons need to correctly identify bone fragments using 2D/3D CT images before trauma surgery. Advances in deep learning technology offer good insights for trauma surgery compared with manual diagnosis. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images, together with an evaluation of the segmentation performance. The deep learning model, which was trained using over 11 million images, showed good performance, with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, the model performed segmentation 5-8 times faster than the comparatively inefficient manual recognition by experts, with almost the same results. This study can play an important role in making preoperative surgical planning for trauma surgery convenient and fast.
Affiliation(s)
- Hyeonjoo Kim
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

- Young Dae Jeon
- Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea

- Ki Bong Park
- Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea

- Hayeong Cha
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

- Moo-Sub Kim
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

- Juyeon You
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

- Se-Won Lee
- Department of Orthopedic Surgery, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea

- Seung-Han Shin
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea

- Yang-Guk Chung
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea

- Sung Bin Kang
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

- Won Seuk Jang
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea

- Do-Kun Yoon
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
|
13
|
Zuo Q, Zhong N, Pan Y, Wu H, Lei B, Wang S. Brain Structure-Function Fusing Representation Learning Using Adversarial Decomposed-VAE for Analyzing MCI. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4017-4028. [PMID: 37815971 DOI: 10.1109/tnsre.2023.3323432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/12/2023]
Abstract
Integrating brain structural and functional connectivity features is of great significance both for exploring brain science and for the clinical analysis of cognitive impairment. However, it remains a challenge to effectively fuse structural and functional features in exploring the complex brain network. In this paper, a novel brain structure-function fusing-representation learning (BSFL) model is proposed to effectively learn fused representation from diffusion tensor imaging (DTI) and resting-state functional magnetic resonance imaging (fMRI) for mild cognitive impairment (MCI) analysis. Specifically, the decomposition-fusion framework is developed to first decompose the feature space into the union of the uniform and unique spaces for each modality, and then adaptively fuse the decomposed features to learn MCI-related representation. Moreover, a knowledge-aware transformer module is designed to automatically capture local and global connectivity features throughout the brain. Also, a uniform-unique contrastive loss is further devised to make the decomposition more effective and enhance the complementarity of structural and functional features. The extensive experiments demonstrate that the proposed model achieves better performance than other competitive methods in predicting and analyzing MCI. More importantly, the proposed model could be a potential tool for reconstructing unified brain networks and predicting abnormal connections during the degenerative processes in MCI.
|
14
|
Wu B, Li C, Zhang J, Lai H, Feng Q, Huang M. Unsupervised dual-domain disentangled network for removal of rigid motion artifacts in MRI. Comput Biol Med 2023; 165:107373. [PMID: 37611424 DOI: 10.1016/j.compbiomed.2023.107373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 07/28/2023] [Accepted: 08/12/2023] [Indexed: 08/25/2023]
Abstract
Motion artifacts in magnetic resonance imaging (MRI) have always been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Although unsupervised methods are widely proposed to fully use clinical unpaired data, they generally focus on anatomical structures generated by the spatial domain while ignoring phase error (deviations or inaccuracies in phase information that are possibly caused by rigid motion artifacts during image acquisition) provided by the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module was presented to capture different types of information from the spatial and frequency domains to enrich the information. Moreover, a cross-domain attention fusion module was proposed to effectively fuse information from different domains, reduce information redundancy, and improve the performance of motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that our method could effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of the supervised method. Therefore, our method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
Affiliation(s)
- Boya Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Caixia Li
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China

- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Haoran Lai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China

- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
|
15
|
Tong Y, Udupa JK, Chong E, Winchell N, Sun C, Zou Y, Schuster SJ, Torigian DA. Prediction of lymphoma response to CAR T cells by deep learning-based image analysis. PLoS One 2023; 18:e0282573. [PMID: 37478073 PMCID: PMC10361488 DOI: 10.1371/journal.pone.0282573] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 02/21/2023] [Indexed: 07/23/2023] Open
Abstract
Clinical prognostic scoring systems have limited utility for predicting treatment outcomes in lymphomas. We therefore tested the feasibility of a deep-learning (DL)-based image analysis methodology on pre-treatment diagnostic computed tomography (dCT), low-dose CT (lCT), and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images and rule-based reasoning to predict treatment response to chimeric antigen receptor (CAR) T-cell therapy in B-cell lymphomas. Pre-treatment images of 770 lymph node lesions from 39 adult patients with B-cell lymphomas treated with CD19-directed CAR T-cells were analyzed. Transfer learning using a pre-trained neural network model, then retrained for a specific task, was used to predict lesion-level treatment responses from separate dCT, lCT, and FDG-PET images. Patient-level response analysis was performed by applying rule-based reasoning to lesion-level prediction results. Patient-level response prediction was also compared to prediction based on the international prognostic index (IPI) for diffuse large B-cell lymphoma. The average accuracy of lesion-level response prediction based on single whole dCT slice-based input was 0.82 ± 0.05, with sensitivity 0.87 ± 0.07, specificity 0.77 ± 0.12, and AUC 0.91 ± 0.03. Patient-level response prediction from dCT, using the "Majority 60%" rule, had accuracy 0.81, sensitivity 0.75, and specificity 0.88 using 12-month post-treatment patient response as the reference standard and outperformed response prediction based on IPI risk factors (accuracy 0.54, sensitivity 0.38, and specificity 0.61 (p = 0.046)). Prediction of treatment outcome in B-cell lymphomas from pre-treatment medical images using DL-based image analysis and rule-based reasoning is feasible. This approach can potentially provide clinically useful prognostic information for decision-making in advance of initiating CAR T-cell therapy.
Affiliation(s)
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Emeline Chong
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Nicole Winchell
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Changjian Sun
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Yongning Zou
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Stephen J Schuster
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
|
16
|
Li X, Li Z, Liu Y, Su R, Xu Y, Jing J, Yin L. [Research on mild cognitive impairment diagnosis based on Bayesian optimized long-short-term neural network model]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2023; 40:450-457. [PMID: 37380383 PMCID: PMC10307618 DOI: 10.7507/1001-5515.202205005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 04/16/2023] [Indexed: 06/30/2023]
Abstract
The recurrent neural network architecture improves the processing ability of time-series data. However, issues such as exploding gradients and poor feature extraction limit its application in the automatic diagnosis of mild cognitive impairment (MCI). This paper proposed a research approach for building an MCI diagnostic model using a Bayesian-optimized bidirectional long short-term memory network (BO-BiLSTM) to address this problem. The diagnostic model was based on a Bayesian algorithm and combined prior distribution and posterior probability results to optimize the BO-BiLSTM network hyperparameters. It also used multiple feature quantities that fully reflected the cognitive state of the MCI brain, such as power spectral density, fuzzy entropy, and multifractal spectrum, as the input of the diagnostic model to achieve automatic MCI diagnosis. The results showed that the feature-fused Bayesian-optimized BiLSTM network model achieved an MCI diagnostic accuracy of 98.64% and effectively completed the diagnostic assessment of MCI. In conclusion, based on this optimization, the long short-term neural network model has achieved automatic diagnostic assessment of MCI, providing a new diagnostic model for intelligent diagnosis of MCI.
Affiliation(s)
- Xin Li
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Zhenyang Li
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Yi Liu
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Rui Su
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Yonghong Xu
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Jun Jing
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
- Measurement Technology and Instrumentation Key Lab of Hebei Province, Qinhuangdao, Hebei 066004, P. R. China

- Liyong Yin
- Institute of Biomedical Engineering, School of Electrical Engineering, Yanshan University, Qinhuangdao, Hebei 066004, P. R. China
|
17
|
Wang T, Chen X, Zhang J, Feng Q, Huang M. Deep multimodality-disentangled association analysis network for imaging genetics in neurodegenerative diseases. Med Image Anal 2023; 88:102842. [PMID: 37247468 DOI: 10.1016/j.media.2023.102842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 03/01/2023] [Accepted: 05/15/2023] [Indexed: 05/31/2023]
Abstract
Imaging genetics is a crucial tool that is applied to explore potentially disease-related biomarkers, particularly for neurodegenerative diseases (NDs). With the development of imaging technology, the association analysis between multimodal imaging data and genetic data has gradually drawn attention from a wide range of imaging genetics studies. However, in traditional methods, multimodal data are fused first and then correlated with genetic data, which leads to an incomplete exploration of their common and complementary information. In addition, the inaccurate formulation of the complex relationships between imaging and genetic data and the information loss caused by missing multimodal data are still open problems in imaging genetics studies. Therefore, in this study, a deep multimodality-disentangled association analysis network (DMAAN) is proposed to solve the aforementioned issues and detect the disease-related biomarkers of NDs simultaneously. First, the imaging data are nonlinearly projected into a latent space and imaging representations can be achieved. The imaging representations are further disentangled into common and specific parts by using a multimodal-disentangled module. Second, the genetic data are encoded to achieve genetic representations, and then, the achieved genetic representations are nonlinearly mapped to the common and specific imaging representations to build nonlinear associations between imaging and genetic data through an association analysis module. Moreover, modality mask vectors are synchronously synthesized to integrate the genetic and imaging data, which aids the subsequent disease diagnosis. Finally, the proposed method achieves reasonable diagnosis performance via a disease diagnosis module and utilizes the label information to detect the disease-related modality-shared and modality-specific biomarkers. Furthermore, the genetic representation can be used to impute the missing multimodal data with our learning strategy. Two publicly available datasets with different NDs are used to demonstrate the effectiveness of the proposed DMAAN. The experimental results show that the proposed DMAAN can identify the disease-related biomarkers, which suggests the proposed DMAAN may provide new insights into the pathological mechanism and early diagnosis of NDs. The codes are publicly available at https://github.com/Meiyan88/DMAAN.
Affiliation(s)
- Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China

- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
|
18
|
Ford E, Milne R, Curlewis K. Ethical issues when using digital biomarkers and artificial intelligence for the early detection of dementia. WILEY INTERDISCIPLINARY REVIEWS. DATA MINING AND KNOWLEDGE DISCOVERY 2023; 13:e1492. [PMID: 38439952 PMCID: PMC10909482 DOI: 10.1002/widm.1492] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 01/12/2023] [Accepted: 01/13/2023] [Indexed: 03/06/2024]
Abstract
Dementia poses a growing challenge for health services but remains stigmatized and under-recognized. Digital technologies to aid the earlier detection of dementia are approaching the market. These include traditional cognitive screening tools presented on mobile devices, native smartphone applications, passive data collection from wearable, in-home, and in-car sensors, as well as machine learning techniques applied to clinic and imaging data. It has been suggested that earlier detection and diagnosis may help patients plan for their future, achieve a better quality of life, and access clinical trials and possible future disease-modifying treatments. In this review, we explore whether digital tools for the early detection of dementia can or should be deployed, by assessing them against the principles of ethical screening programs. We conclude that while the importance of dementia as a health problem is unquestionable, significant challenges remain. There is no available treatment which improves the prognosis of diagnosed disease. Progression from early-stage disease to dementia is neither given nor currently predictable. Available technologies are generally not both minimally invasive and highly accurate. Digital deployment risks exacerbating health inequalities due to biased training data and inequity in digital access. Finally, the acceptability of early dementia detection is not established, and resources would be needed to ensure follow-up and support for those flagged by any new system. We conclude that early dementia detection deployed at scale via digital technologies does not meet standards for a screening program, and we offer recommendations for moving toward an ethical mode of implementation. This article is categorized under: Application Areas > Health Care; Commercial, Legal, and Ethical Issues > Ethical Considerations; Technologies > Artificial Intelligence.
Affiliation(s)
- Elizabeth Ford
- Department of Primary Care and Public Health, Brighton and Sussex Medical School, Brighton, UK

- Richard Milne
- Kavli Centre for Ethics, Science and the Public, University of Cambridge, Cambridge, UK
- Engagement and Society, Wellcome Connecting Science, Cambridge, UK
|
19
|
Cui C, Yang H, Wang Y, Zhao S, Asad Z, Coburn LA, Wilson KT, Landman BA, Huo Y. Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review. PROGRESS IN BIOMEDICAL ENGINEERING (BRISTOL, ENGLAND) 2023; 5:10.1088/2516-1091/acc2fe. [PMID: 37360402 PMCID: PMC10288577 DOI: 10.1088/2516-1091/acc2fe] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology and camera images) and non-image data (e.g. clinical data and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include the (a) overview of current multimodal learning workflows, (b) summarization of multimodal fusion methods, (c) discussion of the performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
Affiliation(s)
- Can Cui
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America

- Haichun Yang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America

- Yaohong Wang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America

- Shilin Zhao
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America

- Zuhayr Asad
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America

- Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America

- Keith T Wilson
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America

- Bennett A Landman
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America

- Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
|
20
|
Zhang L, Yu X, Lyu Y, Liu T, Zhu D. REPRESENTATIVE FUNCTIONAL CONNECTIVITY LEARNING FOR MULTIPLE CLINICAL GROUPS IN ALZHEIMER'S DISEASE. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2023; 2023. [PMID: 38414667 PMCID: PMC10897952 DOI: 10.1109/isbi53787.2023.10230521] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/29/2024]
Abstract
Mild cognitive impairment (MCI) is a high-risk dementia condition that progresses to probable Alzheimer's disease (AD) at a rate of approximately 10% to 15% per year. Characterizing the group-level differences between the two subtypes of MCI - stable MCI (sMCI) and progressive MCI (pMCI) - is the key step to understanding the mechanisms of MCI progression and enabling possible delay of the transition from MCI to AD. Functional connectivity (FC) is considered a promising way to study MCI progression, since it may show alterations even in preclinical stages and provide substrates for AD progression. However, the representative FC patterns during AD development for different clinical groups, especially for sMCI and pMCI, have been understudied. In this work, we integrated an autoencoder and multi-class classification into a single deep model and successfully learned a set of clinical-group-related feature vectors. Specifically, we trained two non-linear mappings which realized the mutual transformations between the original FC space and the feature space. By mapping the learned clinical-group-related feature vectors to the original FC space, representative FCs were constructed for each group. Moreover, based on these feature vectors, our model achieves a high classification accuracy of 68% for multi-class classification (NC vs SMC vs sMCI vs pMCI vs AD). Code has been released.
Affiliation(s)
- Lu Zhang
- Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA

- Xiaowei Yu
- Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA

- Yanjun Lyu
- Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA

- Tianming Liu
- Computer Science, The University of Georgia, Athens, USA

- Dajiang Zhu
- Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX, USA
|
21
|
Pei X, Zuo K, Li Y, Pang Z. A Review of the Application of Multi-modal Deep Learning in Medicine: Bibliometrics and Future Directions. INT J COMPUT INT SYS 2023. [DOI: 10.1007/s44196-023-00225-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/31/2023] Open
Abstract
In recent years, deep learning has been applied in the field of clinical medicine to process large-scale medical images, for large-scale data screening, and in the diagnosis and efficacy evaluation of various major diseases. Multi-modal medical data fusion based on deep learning can effectively extract and integrate characteristic information of different modes, improve clinical applicability in diagnosis and medical evaluation, and provide quantitative analysis, real-time monitoring, and treatment planning. This study investigates the performance of existing multi-modal fusion pre-training algorithms and medical multi-modal fusion methods and compares their key characteristics, such as supported medical data, diseases, target samples, and implementation performance. Additionally, we present the main challenges and goals of the latest trends in multi-modal medical convergence. To provide a clearer perspective on new trends, we also analyzed relevant papers on the Web of Science. We obtain some meaningful results based on the annual development trends, country, institution, and journal-level research, highly cited papers, and research directions. Finally, we perform co-authorship analysis, co-citation analysis, co-occurrence analysis, and bibliographic coupling analysis using the VOSviewer software.
22
Subramanyam Rallabandi V, Seetharaman K. Deep learning-based classification of healthy aging controls, mild cognitive impairment and Alzheimer’s disease using fusion of MRI-PET imaging. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104312]
23
Huang W, Tan K, Zhang Z, Hu J, Dong S. A Review of Fusion Methods for Omics and Imaging Data. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:74-93. [PMID: 35044920 DOI: 10.1109/tcbb.2022.3143900]
Abstract
The development of omics data and biomedical images has greatly advanced the progress of precision medicine in diagnosis, treatment, and prognosis. The fusion of omics and imaging data, i.e., omics-imaging fusion, offers a new strategy for understanding complex diseases. However, due to a variety of issues such as the limited number of samples, high dimensionality of features, and heterogeneity of different data types, efficiently learning complementary or associated discriminative fusion information from omics and imaging data remains a challenge. Recently, numerous machine learning methods have been proposed to alleviate these problems. In this review, from the perspective of fusion levels and fusion methods, we first provide an overview of preprocessing and feature extraction methods for omics and imaging data, and comprehensively analyze and summarize the basic forms and variations of commonly used and newly emerging fusion methods, along with their advantages, disadvantages and the applicable scope. We then describe public datasets and compare experimental results of various fusion methods on the ADNI and TCGA datasets. Finally, we discuss future prospects and highlight remaining challenges in the field.
24
Chai J, Wu R, Li A, Xue C, Qiang Y, Zhao J, Zhao Q, Yang Q. Classification of mild cognitive impairment based on handwriting dynamics and qEEG. Comput Biol Med 2023; 152:106418. [PMID: 36566627 DOI: 10.1016/j.compbiomed.2022.106418]
Abstract
Subtle changes in fine motor control and quantitative electroencephalography (qEEG) in patients with mild cognitive impairment (MCI) are important in screening for early dementia in primary care populations. In this study, an automated, non-invasive and rapid detection protocol for mild cognitive impairment based on handwriting kinetics and quantitative EEG analysis was proposed, and a classification model based on a dual fusion of feature and decision layers was designed for clinical decision-making. Seventy-nine volunteers (39 healthy elderly controls and 40 patients with mild cognitive impairment) were recruited for this study, and handwriting data and EEG signals were collected with a tablet and a MUSE headband under four designed handwriting tasks. Sixty-eight features were extracted from the EEG and handwriting parameters of each test. Features selected from both models were fused using a late feature fusion strategy with a weighted voting strategy for decision making, and classification accuracy was compared using three different classifiers on the handwriting features, EEG features and fused features, respectively. The results show that the dual fusion model can further improve the classification accuracy, with the highest classification accuracy for the combined features and a best classification result of 96.3% using an SVM with RBF kernel as the base classifier. In addition, this study not only supports the greater significance of multimodal data for differentiating MCI, but also tests the feasibility of using a portable EEG headband as a measure of EEG in patients with cognitive impairment.
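The decision-level half of the dual fusion described above can be sketched as weighted soft voting over per-modality class probabilities. All probabilities, weights, and labels below are hypothetical, chosen only to illustrate the arithmetic; the study's actual classifiers and weights are not reproduced.

```python
import numpy as np

# Hypothetical per-modality class probabilities for 3 subjects
# (columns: healthy control, MCI)
p_handwriting = np.array([[0.8, 0.2],
                          [0.3, 0.7],
                          [0.6, 0.4]])
p_eeg = np.array([[0.6, 0.4],
                  [0.2, 0.8],
                  [0.4, 0.6]])

# Hypothetical modality weights (e.g., tuned on a validation split)
w_hw, w_eeg = 0.6, 0.4

# Weighted soft voting: fuse the probability estimates, then take argmax
p_fused = w_hw * p_handwriting + w_eeg * p_eeg
labels = p_fused.argmax(axis=1)  # 0 = healthy control, 1 = MCI
```

Because the modality weights sum to 1, each fused row remains a valid probability distribution over the two classes.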
Affiliation(s)
- Jiali Chai
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Ruixuan Wu
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Aoyu Li
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Chen Xue
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China; Jinzhong College of Information, 030600, Taiyuan, Shanxi, China
- Qinghua Zhao
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Qianqian Yang
- Jinzhong College of Information, 030600, Taiyuan, Shanxi, China
25
Subramanyam Rallabandi V, Seetharaman K. Classification of cognitively normal controls, mild cognitive impairment and Alzheimer’s disease using transfer learning approach. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104092]
26
Mirabnahrazam G, Ma D, Beaulac C, Lee S, Popuri K, Lee H, Cao J, Galvin JE, Wang L, Beg MF. Predicting time-to-conversion for dementia of Alzheimer's type using multi-modal deep survival analysis. Neurobiol Aging 2023; 121:139-156. [PMID: 36442416 PMCID: PMC10535369 DOI: 10.1016/j.neurobiolaging.2022.10.005]
Abstract
Dementia of Alzheimer's Type (DAT) is a complex disorder influenced by numerous factors, and it is difficult to predict individual progression trajectory from normal or mildly impaired cognition to DAT. An in-depth examination of multiple modalities of data may yield an accurate estimate of time-to-conversion to DAT for preclinical subjects at various stages of disease development. We used a deep-learning model designed for survival analyses to predict subjects' time-to-conversion to DAT using the baseline data of 401 subjects with 63 features from MRI, genetic, and CDC (Cognitive tests, Demographic, and CSF) data in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our study demonstrated that CDC data outperform genetic or MRI data in predicting DAT time-to-conversion for subjects with Mild Cognitive Impairment (MCI). On the other hand, genetic data provided the most predictive power for subjects with Normal Cognition (NC) at the time of the visit. Furthermore, combining MRI and genetic features improved the time-to-event prediction over using either modality alone. Finally, adding CDC to any combination of features only worked as well as using only the CDC features.
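Deep survival models of this kind are commonly trained with a Cox-style negative log partial likelihood over the network's risk scores. The sketch below is a generic numpy version of that loss on toy data; the function name, data, and the assumption of no tied event times are illustrative, not the paper's actual architecture or objective.

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Cox-style loss: each observed event is scored against the set of
    subjects still at risk (follow-up time >= its own), ignoring ties."""
    order = np.argsort(-times)                    # sort by descending time
    scores, ev = risk_scores[order], events[order]
    log_risk = np.log(np.cumsum(np.exp(scores)))  # log of risk-set sums
    return -np.sum((scores - log_risk) * ev)

# Toy data: higher risk score = earlier predicted conversion
scores = np.array([0.5, 0.2, -0.1])
times = np.array([5.0, 3.0, 1.0])   # follow-up time per subject
events = np.array([1.0, 1.0, 0.0])  # 1 = converted to DAT, 0 = censored
loss = cox_neg_log_partial_likelihood(scores, times, events)
```

Censored subjects (events = 0) contribute only through the risk-set denominators, never their own term, which is how such models use incomplete follow-up.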
Affiliation(s)
- Ghazal Mirabnahrazam
- School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
- Da Ma
- School of Medicine, Wake Forest University, Winston-Salem, NC, USA; School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
- Cédric Beaulac
- Department of Mathematics and Statistics, University of Victoria, Victoria, British Columbia, Canada; School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
- Sieun Lee
- Mental Health & Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
- Karteek Popuri
- Department of Computer Science, Memorial University of Newfoundland, St. John's, Newfoundland & Labrador, Canada; School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
- Hyunwoo Lee
- Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Jiguo Cao
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia, Canada
- James E Galvin
- Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine, Miami, FL, USA
- Lei Wang
- Psychiatry and Behavioral Health, Ohio State University Wexner Medical Center, Columbus, OH, USA
- Mirza Faisal Beg
- School of Engineering, Simon Fraser University, Burnaby, British Columbia, Canada
27
Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103113]
28
Chen ZS, Kulkarni P, Galatzer-Levy IR, Bigio B, Nasca C, Zhang Y. Modern views of machine learning for precision psychiatry. Patterns (N Y) 2022; 3:100602. [PMID: 36419447 PMCID: PMC9676543 DOI: 10.1016/j.patter.2022.100602]
Abstract
In light of the National Institute of Mental Health (NIMH)'s Research Domain Criteria (RDoC) and the advent of functional neuroimaging, novel technologies and methods provide new opportunities to develop precise and personalized prognosis and diagnosis of mental disorders. Machine learning (ML) and artificial intelligence (AI) technologies are playing an increasingly critical role in the new era of precision psychiatry. Combining ML/AI with neuromodulation technologies can potentially provide explainable solutions in clinical practice and effective therapeutic treatment. Advanced wearable and mobile technologies also call for a new role of ML/AI in digital phenotyping for mobile mental health. Here, we provide a comprehensive review of ML methodologies and applications that combine neuroimaging, neuromodulation, and advanced mobile technologies in psychiatry practice. We further review the role of ML in molecular phenotyping and cross-species biomarker identification in precision psychiatry. We also discuss explainable AI (XAI) and neuromodulation in a closed human-in-the-loop manner and highlight the potential of ML in multi-media information extraction and multi-modal data fusion. Finally, we discuss conceptual and practical challenges in precision psychiatry and highlight ML opportunities in future research.
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Isaac R. Galatzer-Levy
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Meta Reality Lab, New York, NY, USA
- Benedetta Bigio
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Carla Nasca
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Yu Zhang
- Department of Bioengineering, Lehigh University, Bethlehem, PA 18015, USA
- Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA 18015, USA
29
Kline A, Wang H, Li Y, Dennis S, Hutch M, Xu Z, Wang F, Cheng F, Luo Y. Multimodal machine learning in precision health: A scoping review. NPJ Digit Med 2022; 5:171. [PMID: 36344814 PMCID: PMC9640667 DOI: 10.1038/s41746-022-00712-8]
Abstract
Machine learning is frequently leveraged to tackle problems in the health sector, including clinical decision support. Its use has historically focused on single-modal data. In the biomedical machine learning field, attempts to improve prediction and mimic the multimodal nature of clinical expert decision-making have been met by fusing disparate data. This review was conducted to summarize the current studies in this field and identify topics ripe for future research. We conducted this review in accordance with the PRISMA extension for Scoping Reviews to characterize multi-modal data fusion in health. Search strings were established and used in the PubMed, Google Scholar, and IEEE Xplore databases from 2011 to 2021. A final set of 128 articles was included in the analysis. The most common health areas utilizing multi-modal methods were neurology and oncology. Early fusion was the most common data merging strategy. Notably, there was an improvement in predictive performance when using data fusion. Lacking from the papers were clear clinical deployment strategies, FDA approval, and analysis of how using multimodal approaches on diverse sub-populations may reduce biases and healthcare disparities. These findings provide a summary of multimodal data fusion as applied to health diagnosis/prognosis problems. Few papers compared the outputs of a multimodal approach with a unimodal prediction; however, those that did achieved an average increase of 6.4% in predictive accuracy. Multi-modal machine learning, while more robust in its estimations than unimodal methods, has drawbacks in its scalability and the time-consuming nature of information concatenation.
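The early-fusion strategy noted above as most common simply concatenates per-modality feature vectors before any modeling; late fusion, for contrast, merges per-modality model outputs. A minimal numpy sketch on synthetic data (all array sizes and probability values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
imaging = rng.normal(size=(4, 10))   # 4 patients x 10 imaging-derived features
clinical = rng.normal(size=(4, 3))   # 4 patients x 3 clinical variables

# Early fusion: concatenate raw per-modality features BEFORE modeling,
# so a single downstream model sees one long feature vector per patient
early_fused = np.concatenate([imaging, clinical], axis=1)

# Late fusion, for contrast: merge per-modality model outputs instead
p_imaging = np.array([0.9, 0.4, 0.7, 0.2])   # hypothetical unimodal probabilities
p_clinical = np.array([0.7, 0.6, 0.5, 0.1])
late_fused = (p_imaging + p_clinical) / 2
```

Early fusion lets a single model learn cross-modal interactions, at the cost of the concatenation and alignment overhead the review flags as a scalability drawback.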
Affiliation(s)
- Adrienne Kline
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Hanyin Wang
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Yikuan Li
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Saya Dennis
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Meghan Hutch
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
- Zhenxing Xu
- Department of Population Health Sciences, Cornell University, New York, 10065, NY, USA
- Fei Wang
- Department of Population Health Sciences, Cornell University, New York, 10065, NY, USA
- Feixiong Cheng
- Cleveland Clinic Lerner College of Medicine, Case Western Reserve University, Cleveland, 44195, OH, USA
- Yuan Luo
- Department of Preventive Medicine, Northwestern University, Chicago, 60201, IL, USA
30
Wang Y, Song D, Wang W, Rao S, Wang X, Wang M. Self-supervised learning and semi-supervised learning for multi-sequence medical image classification. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.097]
31
Li Y, Shi X, Yang L, Pu C, Tan Q, Yang Z, Huang H. MC-GAT: multi-layer collaborative generative adversarial transformer for cholangiocarcinoma classification from hyperspectral pathological images. Biomed Opt Express 2022; 13:5794-5812. [PMID: 36733731 PMCID: PMC9872896 DOI: 10.1364/boe.472106]
Abstract
Accurate histopathological analysis is the core step in the early diagnosis of cholangiocarcinoma (CCA). Compared with color pathological images, hyperspectral pathological images have the advantage of providing rich band information. Existing algorithms for hyperspectral image (HSI) classification are dominated by convolutional neural networks (CNNs), which have the deficiency of distorting the spectral sequence information of HSI data. Although the vision transformer (ViT) alleviates this problem to a certain extent, the expressive power of the transformer encoder gradually decreases with an increasing number of layers, which still degrades classification performance. In addition, labeled HSI samples are limited in practical applications, which restricts the performance of these methods. To address these issues, this paper proposes a multi-layer collaborative generative adversarial transformer termed MC-GAT for CCA classification from hyperspectral pathological images. MC-GAT consists of two pure transformer-based neural networks: a generator and a discriminator. The generator learns the implicit probability of real samples and transforms noise sequences into band sequences, which produces fake samples. These fake samples and corresponding real samples are mixed together as input to confuse the discriminator, which increases model generalization. In the discriminator, a multi-layer collaborative transformer encoder is designed to integrate output features from different layers into collaborative features, which adaptively mines progressive relations from shallow to deep encoders and enhances the discriminating power of the discriminator. Experimental results on the Multidimensional Choledoch Datasets demonstrate that the proposed MC-GAT achieves better classification results than many state-of-the-art methods. This confirms the potential of the proposed method for aiding pathologists in CCA histopathological analysis from hyperspectral imagery.
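The layer-integration idea in the discriminator can be sketched as combining features from several encoder layers with softmax-normalized importance weights into one collaborative feature. The function name and weighting scheme below are illustrative assumptions; the paper's exact aggregation may differ.

```python
import numpy as np

def collaborative_features(layer_outputs, layer_logits):
    """Fuse per-layer encoder features into one collaborative feature
    using softmax-normalized layer-importance weights."""
    w = np.exp(layer_logits - np.max(layer_logits))  # stable softmax
    w = w / w.sum()
    # Weighted sum of the per-layer feature vectors
    return sum(wi * f for wi, f in zip(w, layer_outputs))

# Demo: two layers with equal importance simply average their features
feats = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
combined = collaborative_features(feats, np.array([0.0, 0.0]))
```

Letting the weights be learned is one way such a scheme could "adaptively mine" shallow-to-deep relations rather than relying on the last layer alone.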
Affiliation(s)
- Yuan Li
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Xu Shi
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Liping Yang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Chunyu Pu
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Qijuan Tan
- Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Zhengchun Yang
- Department of Ultrasound, Chongqing Health Center for Women and Children, Chongqing 401147, China
- Department of Ultrasound, Women and Children's Hospital of Chongqing Medical University, Chongqing 401147, China
- Hong Huang
- Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
32
Wang Y, Tang S, Ma R, Zamit I, Wei Y, Pan Y. Multi-modal intermediate integrative methods in neuropsychiatric disorders: A review. Comput Struct Biotechnol J 2022; 20:6149-6162. [DOI: 10.1016/j.csbj.2022.11.008]
33
Avberšek LK, Repovš G. Deep learning in neuroimaging data analysis: Applications, challenges, and solutions. Front Neuroimaging 2022; 1:981642. [PMID: 37555142 PMCID: PMC10406264 DOI: 10.3389/fnimg.2022.981642]
Abstract
Methods for the analysis of neuroimaging data have advanced significantly since the beginning of neuroscience as a scientific discipline. Today, sophisticated statistical procedures allow us to examine complex multivariate patterns; however, most of them are still constrained by the assumption of inherent linearity of neural processes. Here, we discuss a group of machine learning methods, called deep learning, which have drawn much attention in and outside the field of neuroscience in recent years and hold the potential to surpass the mentioned limitations. First, we describe and explain the essential concepts in deep learning: the structure and the computational operations that allow deep models to learn. We then move to the most common applications of deep learning in neuroimaging data analysis: prediction of outcome, interpretation of internal representations, generation of synthetic data, and segmentation. In the next section we present issues that deep learning poses, concerning the multidimensionality and multimodality of data, overfitting, and computational cost, and propose possible solutions. Lastly, we discuss the current reach of deep learning usage in all the common applications in neuroimaging data analysis, where we consider the promise of multimodality, the capability of processing raw data, and advanced visualization strategies. We identify research gaps, such as the focus on a limited number of criterion variables and the lack of a well-defined strategy for choosing architecture and hyperparameters. Furthermore, we discuss the possibility of conducting research with constructs that have so far been ignored, the move toward frameworks such as RDoC, the potential of transfer learning, and the generation of synthetic data.
Affiliation(s)
- Lev Kiar Avberšek
- Department of Psychology, Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
34
Thushara A, Saju R, John A, UshaDevi Amma C. DMSENet: Deep multi-modal squeeze and excitation network for the diagnosis of Alzheimer's disease. International Journal of Healthcare Management 2022. [DOI: 10.1080/20479700.2022.2130631]
Affiliation(s)
- Thushara A
- Department of Computer Science and Engineering, TKM College of Engineering Kollam, APJ Abdul Kalam Technological University, Thiruvananthapuram, India
- Reshma Saju
- Department of Computer Science and Engineering, TKM College of Engineering Kollam, APJ Abdul Kalam Technological University, Thiruvananthapuram, India
- Ansamma John
- Department of Computer Science and Engineering, TKM College of Engineering Kollam, APJ Abdul Kalam Technological University, Thiruvananthapuram, India
- UshaDevi Amma C
- Department of Electronics and Communications Engineering, Amrita Vishwa Vidyapeetham Amritapuri Campus, Amrita University, Kollam, India
35
Zhang Y, Li C, Chen D, Tian R, Yan X, Zhou Y, Song Y, Yang Y, Wang X, Zhou B, Gao Y, Jiang Y, Zhang X. Repeated High-Definition Transcranial Direct Current Stimulation Modulated Temporal Variability of Brain Regions in Core Neurocognitive Networks Over the Left Dorsolateral Prefrontal Cortex in Mild Cognitive Impairment Patients. J Alzheimers Dis 2022; 90:655-666. [DOI: 10.3233/jad-220539]
Abstract
Background: Early intervention in amnestic mild cognitive impairment (aMCI) may be the most promising way to delay or even prevent progression to Alzheimer’s disease. Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique that has been recognized as a promising approach for the treatment of aMCI. Objective: In this paper, we aimed to investigate the modulating mechanism of tDCS on the core neurocognitive networks of the brain. Methods: We applied repeated anodal high-definition transcranial direct current stimulation (HD-tDCS) over the left dorsolateral prefrontal cortex and assessed the effect on cognition and the dynamic functional brain network in aMCI patients. We used a novel method called temporal variability to depict the characteristics of the dynamic brain functional networks. Results: We found that true anodal stimulation significantly improved cognitive performance, as measured by the Montreal Cognitive Assessment, after stimulation. Meanwhile, the Mini-Mental State Examination scores showed a clear upward trend. More importantly, we found significantly altered temporal variability of dynamic functional connectivity in regions belonging to the default mode network, central executive network, and salience network after true anodal stimulation, indicating that anodal HD-tDCS may enhance brain function by modulating the temporal variability of these brain regions. Conclusion: These results imply that ten days of repeated anodal HD-tDCS over the left dorsolateral prefrontal cortex exerts beneficial effects on the temporal variability of the functional architecture of the brain, which may be a potential neural mechanism by which HD-tDCS enhances brain functions. Repeated HD-tDCS may have clinical uses for intervention against brain function decline in aMCI patients.
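Temporal variability of dynamic functional connectivity can be computed, in one common formulation, as 1 minus the mean similarity between a region's sliding-window connectivity profiles; the exact definition used in this study may differ. A numpy sketch on synthetic BOLD data (window length and step are arbitrary choices):

```python
import numpy as np

def temporal_variability(ts, win=30, step=15):
    """Per-region temporal variability of functional connectivity.

    ts: (timepoints, regions) BOLD time series. For each sliding window,
    compute the region-by-region correlation matrix; a region's variability
    is 1 minus the mean correlation between its connectivity profiles
    across all pairs of windows.
    """
    t, r = ts.shape
    profiles = []
    for s in range(0, t - win + 1, step):
        c = np.corrcoef(ts[s:s + win].T)  # (regions, regions) FC in window
        np.fill_diagonal(c, 0.0)          # drop self-connections
        profiles.append(c)
    profiles = np.stack(profiles)         # (windows, regions, regions)
    n = len(profiles)
    var = np.zeros(r)
    for i in range(r):
        p = profiles[:, i, :]             # region i's profile per window
        corr = np.corrcoef(p)             # (windows, windows) similarities
        off = corr[~np.eye(n, dtype=bool)]
        var[i] = 1.0 - off.mean()
    return var

rng = np.random.default_rng(1)
bold = rng.normal(size=(120, 5))  # 120 timepoints, 5 regions (synthetic)
tv = temporal_variability(bold)   # one variability value per region
```

Higher values mean a region reconfigures its connectivity pattern more across windows; the range is bounded by [0, 2] since window-to-window correlations lie in [-1, 1].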
Affiliation(s)
- Yanchun Zhang
- Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China
- Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Chenxi Li
- Department of the Psychology of Military Medicine, Air Force Medical University, Xi’an, Shaanxi, P.R. China
- Deqiang Chen
- Department of CT, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Rui Tian
- Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xinyue Yan
- Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yingwen Zhou
- Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yancheng Song
- Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Yanlong Yang
- Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xiaoxuan Wang
- Department of MR, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Bo Zhou
- Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China
- Yuhong Gao
- Institute of Geriatrics, Second Medical Center, Chinese PLA General Hospital, Beijing, China
- Yujuan Jiang
- Department of Rehabilitation, Cangzhou Central Hospital, Cangzhou, Hebei Province, China
- Xi Zhang
- Department of Neurology, Second Medical Center, National Clinical Research Center for Geriatric Disease, Chinese PLA General Hospital, Beijing, China
36
Biomedical Application of Identified Biomarkers Gene Expression Based Early Diagnosis and Detection in Cervical Cancer with Modified Probabilistic Neural Network. Contrast Media Mol Imaging 2022; 2022:4946154. [PMID: 36134120 PMCID: PMC9482500 DOI: 10.1155/2022/4946154]
Abstract
Cervical squamous cell carcinoma (CSC) is projected by the World Health Organization (WHO) to become the fourth most prevalent cancer in women globally and to overtake breast cancer as the leading cause of cancer death in women by 2050. According to the WHO, developing countries account for 86 percent of all cervical cancer cases worldwide among women aged 15 to 44, and low- and middle-income countries carry a far greater burden of cervical cancer mortality than high-income countries. Cervical cancer is thought to arise from aberrant proliferation of cells in the cervix that are capable of invading other organs of the body. Despite considerable technological advances, gene expression profiling remains a prominent approach in the investigation of cervical cancer. It has given researchers the opportunity to examine gene coexpression networks, which have evolved into an exceptionally comprehensive technique for microarray research and have improved understanding of the human genome. When applied to a specific biological question, gene coexpression networks retain a considerable proportion of their characteristic structure.
Feature selection is well known to identify genes that outperform the rest of the genes in a population, which is one of its several benefits. Commonly used gene selection approaches have proved insufficient for acquiring the best possible subset of genes for training, and classifier accuracy has likely suffered as a result. Recently, a considerable number of researchers have advocated optimization approaches for gene selection, and this trend is expected to continue; in line with this, a metaheuristic algorithm can be used to choose a suitable subset of genes. A Modified Probabilistic Neural Network (MPN) differs from other networks in that the gene expression underlying differentially expressed genes (DEGs) and normal data is not uniformly distributed. Selecting the most relevant genes is therefore a vital step in the prediction process, and this is the technique applied in this study of cervical cancer.
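The metaheuristic gene selection described above can be sketched as a wrapper loop: a search procedure proposes gene subsets and a classifier scores each subset. This is a minimal illustration, not the paper's method; the toy data, the hill-climbing search standing in for the metaheuristic, and the nearest-centroid scorer are all assumptions.

```python
import random
import numpy as np

def centroid_accuracy(X, y, genes):
    """Score a gene subset by leave-one-out nearest-centroid accuracy."""
    genes = sorted(genes)
    Xs = X[:, genes]
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = Xs[mask & (y == 0)].mean(axis=0)
        c1 = Xs[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(Xs[i] - c0) < np.linalg.norm(Xs[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

def select_genes(X, y, k, iters=200, seed=0):
    """Hill-climbing wrapper: swap one gene at a time, keep non-worsening moves."""
    rng = random.Random(seed)
    n_genes = X.shape[1]
    current = set(rng.sample(range(n_genes), k))
    best = centroid_accuracy(X, y, current)
    for _ in range(iters):
        out = rng.choice(sorted(current))
        inn = rng.choice([g for g in range(n_genes) if g not in current])
        cand = (current - {out}) | {inn}
        score = centroid_accuracy(X, y, cand)
        if score >= best:
            current, best = cand, score
    return sorted(current), best
```

On synthetic data with a few informative genes among many noise genes, the wrapper typically recovers the informative ones; a real metaheuristic (GA, PSO, ABC) would replace the single-swap move with a population-based search.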
Collapse
|
37
|
Ko W, Jung W, Jeon E, Suk HI. A Deep Generative-Discriminative Learning for Multimodal Representation in Imaging Genetics. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2348-2359. [PMID: 35344489 DOI: 10.1109/tmi.2022.3162870] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Imaging genetics, one of the foremost emerging topics in the medical imaging field, analyzes the inherent relations between neuroimaging and genetic data. As deep learning has gained widespread acceptance in many applications, pioneering studies employed deep learning frameworks for imaging genetics. However, existing approaches suffer from some limitations. First, they often adopt a simple strategy for joint learning of phenotypic and genotypic features. Second, their findings have not been extended to biomedical applications, e.g., degenerative brain disease diagnosis and cognitive score prediction. Finally, existing studies perform insufficient and inappropriate analyses from the perspective of data science and neuroscience. In this work, we propose a novel deep learning framework to simultaneously tackle the aforementioned issues. Our proposed framework learns to effectively represent the neuroimaging and the genetic data jointly, and achieves state-of-the-art performance when used for Alzheimer's disease and mild cognitive impairment identification. Furthermore, unlike the existing methods, the framework enables learning the relation between imaging phenotypes and genotypes in a nonlinear way without any prior neuroscientific knowledge. To demonstrate the validity of our proposed framework, we conducted experiments on a publicly available dataset and analyzed the results from diverse perspectives. Based on our experimental results, we believe that the proposed framework has immense potential to provide new insights and perspectives in deep learning-based imaging genetics studies.
Collapse
|
38
|
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 04/29/2022] [Accepted: 05/30/2022] [Indexed: 11/28/2022]
|
39
|
Automated Screening of COVID-19-Based Tongue Image on Chinese Medicine. BIOMED RESEARCH INTERNATIONAL 2022; 2022:6825576. [PMID: 35782081 PMCID: PMC9246631 DOI: 10.1155/2022/6825576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 05/01/2022] [Accepted: 05/11/2022] [Indexed: 12/02/2022]
Abstract
Objective Artificial intelligence-powered screening systems for coronavirus disease 2019 (COVID-19) are urgently needed given the ongoing worldwide outbreak of SARS-CoV-2. Chest CT or X-ray alone is not sufficient to support large-scale screening of COVID-19 because mildly infected patients show no imaging features on these scans. It is therefore imperative to exploit supplementary medical imaging strategies. Traditional Chinese medicine has played an essential role in the fight against COVID-19. Methods In this paper, we conduct two kinds of verification experiments on a newly collected multi-modality dataset consisting of three types of modalities: tongue images, chest CT scans, and X-ray images. First, we study a binary classification experiment on tongue images to verify their ability to discriminate between COVID-19 and non-COVID-19. Second, we design extensive multimodality experiments to validate whether introducing tongue images can improve the screening accuracy of COVID-19 based on chest CT or X-ray images. Results For tongue-image screening of COVID-19, the accuracy (ACC), sensitivity (SEN), specificity (SPEC), and Matthews correlation coefficient (MCC) of the improved AlexNet and GoogLeNet reached 98.39%, 98.97%, 96.67%, and 99.11%, respectively. The fusion of chest CT and tongue images, using a tandem multimodal classifier fusion strategy, achieved the best classification: the screening accuracy for COVID-19 reached 98.98%, a significant improvement of 4.75% over the highest accuracy of the single-modality model. The fusion of chest X-rays and tongue images also achieved good classification accuracy. Conclusions Both experimental results demonstrate that tongue images not only have excellent discriminative ability for screening COVID-19 but can also improve screening accuracy based on chest CT or X-rays.
To the best of our knowledge, this is the first work to verify the effectiveness of tongue images for screening COVID-19. This paper provides a new perspective and a novel solution that contributes to large-scale screening toward quickly halting the COVID-19 pandemic.
Collapse
|
40
|
Long Z, Li J, Liao H, Deng L, Du Y, Fan J, Li X, Miao J, Qiu S, Long C, Jing B. A Multi-Modal and Multi-Atlas Integrated Framework for Identification of Mild Cognitive Impairment. Brain Sci 2022; 12:brainsci12060751. [PMID: 35741636 PMCID: PMC9221217 DOI: 10.3390/brainsci12060751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Revised: 05/29/2022] [Accepted: 06/03/2022] [Indexed: 11/16/2022] Open
Abstract
Background: Multi-modal neuroimaging with an appropriate atlas is vital for effectively differentiating mild cognitive impairment (MCI) from healthy controls (HC). Methods: Resting-state functional magnetic resonance imaging (rs-fMRI) and structural MRI (sMRI) of 69 MCI patients and 61 HC subjects were collected. Then, gray matter volumes obtained from sMRI and Hurst exponent (HE) values calculated from rs-fMRI data were extracted in each of the Automated Anatomical Labeling (AAL-90), Brainnetome (BN-246), Harvard–Oxford (HOA-112) and AAL3-170 atlases. Next, these characteristics were selected with a minimum redundancy maximum relevance algorithm and a sequential feature selection method in single or multiple modalities, and only the optimal features were retained. Lastly, the retained characteristics served as input features for a support vector machine (SVM)-based method to classify MCI patients, and performance was estimated with leave-one-out cross-validation (LOOCV). Results: Our proposed method obtained its best performance of 92.00% accuracy, 94.92% specificity and 89.39% sensitivity with the sMRI in the AAL-90 atlas and the fMRI in the HOA-112 atlas, which was much better than using single-modal or single-atlas features. Conclusion: The results demonstrated that the multi-modal and multi-atlas integrated method can effectively recognize MCI patients and could be extended to various neurological and neuropsychiatric diseases.
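The leave-one-out cross-validation protocol used above can be sketched in a few lines: train on all subjects but one, predict the held-out subject, repeat. A minimal sketch; the nearest-centroid classifier here is an assumed stand-in, since the paper's SVM settings and selected features are not reproduced.

```python
import numpy as np

def loocv(X, y, fit_predict):
    """Leave-one-out CV: train on n-1 subjects, test on the held-out one."""
    n = len(y)
    hits = 0
    for i in range(n):
        train = np.arange(n) != i
        hits += fit_predict(X[train], y[train], X[i]) == y[i]
    return hits / n

def nearest_centroid(Xtr, ytr, x):
    """Toy classifier standing in for the SVM used in the paper."""
    dists = {c: np.linalg.norm(x - Xtr[ytr == c].mean(axis=0))
             for c in np.unique(ytr)}
    return min(dists, key=dists.get)
```

With well-separated synthetic classes, `loocv(X, y, nearest_centroid)` returns an accuracy near 1.0; any `fit_predict(Xtr, ytr, x)` callable can be plugged in.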
Collapse
Affiliation(s)
- Zhuqing Long
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
| | - Jie Li
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
| | - Haitao Liao
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
| | - Li Deng
- Department of Data Assessment and Examination, Hunan Children’s Hospital, Changsha 410007, China;
| | - Yukeng Du
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
| | - Jianghua Fan
- Department of Pediatric Emergency Center, Emergency Generally Department I, Hunan Children’s Hospital, Changsha 410007, China;
| | - Xiaofeng Li
- Hunan Guangxiu Hospital, Hunan Normal University, Changsha 410006, China;
| | - Jichang Miao
- Department of Medical Devices, Nanfang Hospital, Guangzhou 510515, China;
| | - Shuang Qiu
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
| | - Chaojie Long
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China; (Z.L.); (J.L.); (H.L.); (Y.D.); (S.Q.)
- Correspondence: (C.L.); (B.J.); Tel./Fax: +86-731-8560-0908 (C.L.); +86-10-8391-1552 (B.J.)
| | - Bin Jing
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Correspondence: (C.L.); (B.J.); Tel./Fax: +86-731-8560-0908 (C.L.); +86-10-8391-1552 (B.J.)
| |
Collapse
|
41
|
Deep-Learning-Based Cancer Profiles Classification Using Gene Expression Data Profile. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4715998. [PMID: 35035840 PMCID: PMC8759849 DOI: 10.1155/2022/4715998] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Accepted: 11/17/2021] [Indexed: 12/14/2022]
Abstract
The quantity of data required for a valid analysis grows exponentially with dimensionality in machine learning. In a single experiment, microarrays, or gene expression profiling, assess and determine gene expression levels and patterns in various cell types or tissues. The advent of DNA microarray technology has enabled simultaneous monitoring of hundreds of gene expression levels on a single chip, advancing cancer categorization. The most challenging aspect of categorization is extracting useful information from many data points across many sources. The proposed approach uses microarray data to train deep learning algorithms on extracted features and then uses the Latent Feature Selection Technique to reduce classification time and increase accuracy. Feature-selection-based techniques pick the important genes before classifying microarray data for cancer prediction and diagnosis; these methods improve classification accuracy by removing duplicate and superfluous information. The Artificial Bee Colony (ABC) technique of feature selection is proposed in this research using bone marrow PC gene expression data. The ABC algorithm, based on swarm intelligence, is used for gene identification. Here, the ABC serves as the feature selector: it generates candidate feature subsets that are evaluated by the onlooker bees, making this a wrapper-based feature selection system. This method's main goal is to choose the fewest genes that are critical to PC performance while also increasing prediction accuracy. Convolutional Neural Networks were used to classify tumors without manual labelling. Lung, kidney, and brain cancer datasets were used in the training and testing stages. Using k-fold cross-validation, the Convolutional Neural Network achieves an accuracy of 96.43%. The suggested research includes techniques for preprocessing and modifying gene expression data to enhance future cancer detection accuracy.
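In the ABC algorithm referenced above, the onlooker-bee phase picks food sources (candidate feature subsets) with probability proportional to their fitness, which reduces to roulette-wheel selection. A minimal sketch of just that step, with assumed toy fitness values; the employed-bee and scout-bee phases and the neighbourhood search are omitted.

```python
import random

def onlooker_select(fitnesses, rng):
    """Roulette-wheel selection: source i is chosen with
    probability fitnesses[i] / sum(fitnesses)."""
    total = sum(fitnesses)
    r = rng.random() * total
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

For fitnesses [1.0, 9.0], the second source should be selected roughly 90% of the time, which is what concentrates the onlooker bees on promising gene subsets.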
Collapse
|
42
|
Abdelaziz M, Wang T, Elazab A. Fusing Multimodal and Anatomical Volumes of Interest Features Using Convolutional Auto-Encoder and Convolutional Neural Networks for Alzheimer's Disease Diagnosis. Front Aging Neurosci 2022; 14:812870. [PMID: 35572142 PMCID: PMC9096261 DOI: 10.3389/fnagi.2022.812870] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 03/11/2022] [Indexed: 11/16/2022] Open
Abstract
Alzheimer's disease (AD) is an age-related disease that affects a large proportion of the elderly. Currently, neuroimaging techniques [e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)] are promising modalities for AD diagnosis. Since not all brain regions are affected by AD, a common technique is to study regions of interest (ROIs) believed to be closely related to AD. Conventional methods used ROIs identified by handcrafted features through the Automated Anatomical Labeling (AAL) atlas rather than utilizing the original images, which may cause informative features to be missed. In addition, they learned their frameworks from discriminative patches instead of full images for AD diagnosis, in a multistage learning scheme. In this paper, we integrate the original image features from MRI and PET with their ROI features in one learning process. Furthermore, we use the ROI features to force the network to focus on the regions that are highly related to AD, and hence the performance of AD diagnosis can be improved. Specifically, we first obtain the ROI features from the AAL atlas, then we register every ROI with its corresponding region of the original image to get a synthetic image for each modality of every subject. Then, we employ a convolutional auto-encoder network for learning the synthetic image features and a convolutional neural network (CNN) for learning the original image features. Meanwhile, we concatenate the features from both networks after each convolution layer. Finally, the learned features from the MRI and PET are concatenated for brain disease classification. Experiments are carried out on the ADNI datasets, including ADNI-1 and ADNI-2, to evaluate our method's performance. Our method demonstrates higher performance in brain disease classification than recent studies.
Collapse
Affiliation(s)
- Mohammed Abdelaziz
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Department of Communications and Electronics, Delta Higher Institute for Engineering and Technology (DHIET), Mansoura, Egypt
| | - Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Ahmed Elazab
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Computer Science Department, Misr Higher Institute of Commerce and Computers, Mansoura, Egypt
| |
Collapse
|
43
|
A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nat Commun 2022; 13:2096. [PMID: 35440592 PMCID: PMC9018763 DOI: 10.1038/s41467-022-29637-2] [Citation(s) in RCA: 62] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/03/2022] [Indexed: 12/20/2022] Open
Abstract
Accurate delineation of individual teeth and alveolar bones from dental cone-beam CT (CBCT) images is an essential step in digital dentistry for precision dental healthcare. In this paper, we present an AI system for efficient, precise, and fully automatic segmentation of real-patient CBCT images. Our AI system is evaluated on the largest dataset so far, i.e., a dataset of 4,215 patients (with 4,938 CBCT scans) from 15 different centers. This fully automatic AI system achieves a segmentation accuracy comparable to experienced radiologists (e.g., 0.5% improvement in terms of average Dice similarity coefficient), while offering a significant improvement in efficiency (i.e., 500 times faster). In addition, it consistently obtains accurate results on challenging cases with variable dental abnormalities, with average Dice scores of 91.5% and 93.0% for tooth and alveolar bone segmentation, respectively. These results demonstrate its potential as a powerful system to boost clinical workflows of digital dentistry.
Collapse
|
44
|
Lian C, Liu M, Pan Y, Shen D. Attention-Guided Hybrid Network for Dementia Diagnosis With Structural MR Images. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1992-2003. [PMID: 32721906 PMCID: PMC7855081 DOI: 10.1109/tcyb.2020.3005859] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Deep-learning methods (especially convolutional neural networks) using structural magnetic resonance imaging (sMRI) data have been successfully applied to computer-aided diagnosis (CAD) of Alzheimer's disease (AD) and its prodromal stage [i.e., mild cognitive impairment (MCI)]. As it is practically challenging to capture local and subtle disease-associated abnormalities directly from the whole-brain sMRI, most of those deep-learning approaches empirically preselect disease-associated sMRI brain regions for model construction. Considering that such isolated selection of potentially informative brain locations might be suboptimal, very few methods have been proposed to perform disease-associated discriminative region localization and disease diagnosis in a unified deep-learning framework. However, those methods based on task-oriented discriminative localization still suffer from two common limitations, that is: 1) identified brain locations are strictly consistent across all subjects, which ignores the unique anatomical characteristics of each brain and 2) only limited local regions/patches are used for model training, which does not fully utilize the global structural information provided by the whole-brain sMRI. In this article, we propose an attention-guided deep-learning framework to extract multilevel discriminative sMRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative brain regions in a weakly supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multilevel sMRI features for CAD model construction. Our proposed method was evaluated on three public datasets (i.e., ADNI-1, ADNI-2, and AIBL), showing superior performance compared with several state-of-the-art methods in both tasks of AD diagnosis and MCI conversion prediction.
Collapse
|
45
|
Jin L, Zhao K, Zhao Y, Che T, Li S. A Hybrid Deep Learning Method for Early and Late Mild Cognitive Impairment Diagnosis With Incomplete Multimodal Data. Front Neuroinform 2022; 16:843566. [PMID: 35370588 PMCID: PMC8965366 DOI: 10.3389/fninf.2022.843566] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2021] [Accepted: 02/21/2022] [Indexed: 11/13/2022] Open
Abstract
Multimodality neuroimages have been widely applied to diagnose mild cognitive impairment (MCI). However, the missing data problem is unavoidable. Most previously developed methods first train a generative adversarial network (GAN) to synthesize missing data and then train a classification network with the completed data. These methods independently train two networks with no information communication. Thus, the resulting GAN cannot focus on the crucial regions that are helpful for classification. To overcome this issue, we propose a hybrid deep learning method. First, a classification network is pretrained with paired MRI and PET images. Afterward, we use the pretrained classification network to guide a GAN by focusing on the features that are helpful for classification. Finally, we synthesize the missing PET images and use them with real MR images to fine-tune the classification model to make it better adapt to the synthesized images. We evaluate our proposed method on the ADNI dataset, and the results show that our method improves the accuracies obtained on the validation and testing sets by 3.84 and 5.82%, respectively. Moreover, our method increases the accuracies for the validation and testing sets by 7.7 and 9.09%, respectively, when we synthesize the missing PET images via our method. An ablation experiment shows that the last two stages are essential for our method. We also compare our method with other state-of-the-art methods, and our method achieves better classification performance.
Collapse
Affiliation(s)
- Leiming Jin
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| | - Kun Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| | - Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| | - Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| | - Shuyu Li
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- State Key Lab of Cognition Neuroscience and Learning, Beijing Normal University, Beijing, China
- *Correspondence: Shuyu Li,
| |
Collapse
|
46
|
Classification of Alzheimer’s Disease and Mild-Cognitive Impairment Base on High-Order Dynamic Functional Connectivity at Different Frequency Band. MATHEMATICS 2022. [DOI: 10.3390/math10050805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Functional brain connectivity networks obtained from resting-state functional magnetic resonance imaging (rs-fMRI) have been extensively utilized for the diagnosis of Alzheimer’s disease (AD). However, the traditional correlation analysis technique only explores pairwise relations, which may not reveal sufficient and proper functional connectivity links among brain regions. Additionally, previous literature typically focuses on lower-order dynamics, without considering higher-order dynamic network properties, and concentrates on a single frequency range of the rs-fMRI time series. To address these problems, in this article a new diagnosis scheme is proposed that constructs high-order dynamic functional networks from time series at different frequency levels (full-band, 0.01–0.08 Hz; slow-4, 0.027–0.08 Hz; and slow-5, 0.01–0.027 Hz) obtained from rs-fMRI, building the functional brain network over all brain regions. In particular, to tune the regularization parameters of the Support Vector Machine (SVM) precisely, a nested leave-one-out cross-validation (LOOCV) technique is adopted. Finally, the SVM classifier is trained to distinguish AD from HC based on these higher-order dynamic functional brain networks at different frequency ranges. The experimental results show that the all-band network achieves a LOOCV classification accuracy of 94.10%, with a sensitivity of 90.95% and a specificity of 96.75%, outperforming the individual-band networks. The proposed technique for identifying AD from HC is competitive with state-of-the-art methods in terms of diagnostic accuracy, and the all-band results further suggest that the proposed scheme achieves a high accuracy rate. These results validate the effectiveness and clinical value of the proposed methods for the identification of AD.
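The frequency sub-bands above (slow-5: 0.01–0.027 Hz; slow-4: 0.027–0.08 Hz) can be extracted from a regularly sampled time series with a simple FFT mask. A minimal sketch under the assumption of regular sampling; in practice a proper band-pass filter (e.g., Butterworth) would typically be applied to rs-fMRI data.

```python
import numpy as np

def band_limit(signal, tr, lo, hi):
    """Keep only Fourier components with lo <= f <= hi (Hz).
    tr is the sampling interval (repetition time) in seconds."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=tr)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.fft.irfft(spec * mask, n=len(signal))
```

Mixing 0.02 Hz and 0.05 Hz sinusoids and applying the slow-5 mask (0.01–0.027 Hz) should recover essentially only the 0.02 Hz component.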
Collapse
|
47
|
Predictive classification of Alzheimer’s disease using brain imaging and genetic data. Sci Rep 2022; 12:2405. [PMID: 35165327 PMCID: PMC8844076 DOI: 10.1038/s41598-022-06444-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Accepted: 01/24/2022] [Indexed: 02/06/2023] Open
Abstract
Alzheimer’s disease (AD) is currently incurable, but if it can be diagnosed early, appropriate treatment can delay the disease's progression. Most existing research methods use single- or multi-modal imaging features for prediction; relatively few studies combine brain imaging with genetic features for disease diagnosis. In order to accurately distinguish AD, healthy control (HC) and the two stages of mild cognitive impairment (MCI: early MCI, late MCI) by combining brain imaging and genetic characteristics, we proposed an integrated Fisher score and multi-modal multi-task feature selection method. We first applied the Fisher score to the genetic features for dimensionality reduction, to address the large difference between the feature scales of genetic and brain imaging data. We then learned the potentially related features of the brain imaging and genetic data, and multiplied the selected features by the learned weight coefficients. Through the feature selection program, five imaging and five genetic features were selected, achieving an average classification accuracy of 98% for HC vs. AD, 82% for HC vs. EMCI, 86% for HC vs. LMCI, 80% for EMCI vs. LMCI, 88% for EMCI vs. AD, and 72% for LMCI vs. AD. Compared with using imaging features alone, the classification accuracy improved to a certain extent, and a set of interrelated brain imaging phenotype and genetic factor features was selected.
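The Fisher score used above for genetic-feature dimensionality reduction ranks each feature by between-class separation relative to within-class spread. A minimal sketch of the standard formula; the paper's exact variant and cutoffs are not given here, so the toy data and the top-k selection are illustrative assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature:
    sum_c n_c * (mu_c - mu)^2  /  sum_c n_c * var_c."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den

def top_k_features(X, y, k):
    """Indices of the k highest-scoring features."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]
```

On synthetic data where one feature carries a large class-mean shift, that feature dominates the ranking; the selected indices would then be passed on to a downstream multi-task feature selection step.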
Collapse
|
48
|
The Road to Personalized Medicine in Alzheimer’s Disease: The Use of Artificial Intelligence. Biomedicines 2022; 10:biomedicines10020315. [PMID: 35203524 PMCID: PMC8869403 DOI: 10.3390/biomedicines10020315] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 01/21/2022] [Accepted: 01/24/2022] [Indexed: 02/05/2023] Open
Abstract
Dementia remains an extremely prevalent syndrome among older people and represents a major cause of disability and dependency. Alzheimer’s disease (AD) accounts for the majority of dementia cases and stands as the most common neurodegenerative disease. Since age is the major risk factor for AD, the increase in lifespan not only represents a rise in the prevalence but also adds complexity to the diagnosis. Moreover, the lack of disease-modifying therapies highlights another constraint. A shift from a curative to a preventive approach is imminent and we are moving towards the application of personalized medicine where we can shape the best clinical intervention for an individual patient at a given point. This new step in medicine requires the most recent tools and analysis of enormous amounts of data where the application of artificial intelligence (AI) plays a critical role on the depiction of disease–patient dynamics, crucial in reaching early/optimal diagnosis, monitoring and intervention. Predictive models and algorithms are the key elements in this innovative field. In this review, we present an overview of relevant topics regarding the application of AI in AD, detailing the algorithms and their applications in the fields of drug discovery, and biomarkers.
Collapse
|
49
|
E-Commerce Credit Risk Assessment Based on Fuzzy Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3088915. [PMID: 35035456 PMCID: PMC8759834 DOI: 10.1155/2022/3088915] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 12/14/2021] [Indexed: 12/28/2022]
Abstract
In this paper, we propose a cooperative strategy-based self-organization mechanism to reconstruct the network. The mechanism includes a comprehensive evaluation algorithm and a structure adjustment mechanism, and it can be carried out simultaneously with the parameter optimization process. By calculating the similarity and independent contribution of normative neurons, the effectiveness of fuzzy rules can be jointly evaluated and effective structural changes can be realized. Moreover, this mechanism does not require thresholds to be set in advance in practical applications. To optimize the parameters of SC-IR2FNN, we developed a parameter optimization mechanism based on an interaction strategy. This joint-strategy parameter optimization mechanism, a multilayer optimization engine, splits the SC-IR2FNN parameters into nonlinear and linear parameters for joint optimization: the nonlinear parameters are optimized by an advanced two-level algorithm, and the linear parameters are updated by a least-squares procedure. Together, the two optimization algorithms reduce the computational complexity of SC-IR2FNN and improve the learning rate. Using principal component factor analysis, seven representative common factors are selected to replace the original variables, including the profitability factor of the financing enterprise, the solvency factor of the financing enterprise, the profitability factor of the core enterprise, the operation guarantee factor, the growth ability factor of the financing enterprise, the supply chain online degree factor, and the financing enterprise quality and cooperation factor; together these can well measure the credit risk of online supply chains.
The logistic model shows that the profitability factor of the financing company, the debt repayment factor of the financing company, and the profitability of the core company are the three factors that have a significant impact on the credit risk of online supply chain finance. Based on the improved credit calculation model, we developed an online credit risk calculation method that evaluates credit risk from site conditions. The test results show that the improved credit scoring system stands up to speculative and circular credit fraud and suggest that risk assessors hold a leading position on each electronic trading platform. The results show that the risk analysis is effective in all the cases tested.
|
50
|
Mirabnahrazam G, Ma D, Lee S, Popuri K, Lee H, Cao J, Wang L, Galvin JE, Beg MF. Machine Learning Based Multimodal Neuroimaging Genomics Dementia Score for Predicting Future Conversion to Alzheimer's Disease. J Alzheimers Dis 2022; 87:1345-1365. [PMID: 35466939 PMCID: PMC9195128 DOI: 10.3233/jad-220021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND The increasing availability of databases containing both magnetic resonance imaging (MRI) and genetic data allows researchers to utilize multimodal data to better understand the characteristics of dementia of the Alzheimer's type (DAT). OBJECTIVE The goal of this study was to develop and analyze novel biomarkers that can help predict the development and progression of DAT. METHODS We used feature selection and an ensemble learning classifier to develop an image/genotype-based DAT score that represents a subject's likelihood of developing DAT in the future. Three feature types were used: MRI only, genetic only, and combined multimodal data. We used a novel data stratification method to better represent different stages of DAT. Using a pre-defined 0.5 threshold on DAT scores, we predicted whether a subject would develop DAT in the future. RESULTS Our results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database showed that dementia scores using genetic data better predicted future DAT progression for currently normal control subjects (accuracy = 0.857) than MRI (accuracy = 0.143), while MRI better characterized subjects with stable mild cognitive impairment (accuracy = 0.614) than genetics (accuracy = 0.356). Combining MRI and genetic data improved classification performance in the remaining stratified groups. CONCLUSION MRI and genetic data contribute to DAT prediction in different ways: MRI data reflect anatomical changes in the brain, while genetic data can detect the risk of DAT progression prior to symptomatic onset. Combining information from multimodal data appropriately can improve prediction performance.
Affiliation(s)
- Da Ma
- School of Engineering, Simon Fraser University, Burnaby, BC, Canada
- School of Medicine, Wake Forest University, Winston-Salem, NC, USA
- Sieun Lee
- School of Engineering, Simon Fraser University, Burnaby, BC, Canada
- Mental Health & Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Karteek Popuri
- School of Engineering, Simon Fraser University, Burnaby, BC, Canada
- Hyunwoo Lee
- Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Jiguo Cao
- Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, BC, Canada
- Lei Wang
- Psychiatry and Behavioral Health, Ohio State University Wexner Medical Center, Columbus, OH, USA
- James E Galvin
- Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine, Miami, FL, USA
- Mirza Faisal Beg
- School of Engineering, Simon Fraser University, Burnaby, BC, Canada
|