1. Ou Z, Wang H, Zhang B, Liang H, Hu B, Ren L, Liu Y, Zhang Y, Dai C, Wu H, Li W, Li X. Early identification of stroke through deep learning with multi-modal human speech and movement data. Neural Regen Res 2025;20:234-241. PMID: 38767488; PMCID: PMC11246124; DOI: 10.4103/1673-5374.393103.
Abstract
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multi-modal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
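The multi-modal design described above can be illustrated with a minimal late-fusion sketch. The function names, toy embedding sizes, and the normalise-then-concatenate scheme are illustrative assumptions, not the architecture reported in the paper:

```python
import numpy as np

def fuse_modalities(video_emb, audio_emb):
    """Late fusion: L2-normalise each modality embedding, then concatenate.

    A sketch only -- the paper's model fuses learned video and audio
    representations inside a deep network, not raw vectors like these.
    """
    v = np.asarray(video_emb, dtype=float)
    a = np.asarray(audio_emb, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-8)   # unit-norm video (limb/face) branch
    a = a / (np.linalg.norm(a) + 1e-8)   # unit-norm audio (speech) branch
    return np.concatenate([v, a])

# Toy embeddings standing in for the outputs of the two branches.
video_feat = np.array([3.0, 4.0])
audio_feat = np.array([0.0, 5.0, 12.0])
fused = fuse_modalities(video_feat, audio_feat)
print(fused.shape)  # (5,)
```

In a real system the two vectors would come from trained video and audio encoders; per-modality normalisation keeps either branch from dominating the downstream classifier.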
Affiliation(s)
- Zijun Ou
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong Province, China
- Haitao Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong Province, China
- Bin Zhang
- Department of Neurology, Guangdong Neuroscience Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Haobang Liang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong Province, China
- Bei Hu
- Department of Emergency Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Longlong Ren
- Department of Emergency Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Yanjuan Liu
- Department of Emergency Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Yuhu Zhang
- Department of Neurology, Guangdong Neuroscience Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Chengbo Dai
- Department of Neurology, Guangdong Neuroscience Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Hejun Wu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong Province, China
- Weifeng Li
- Department of Emergency Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
- Xin Li
- Department of Emergency Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong Province, China
2. Souza LA, Passos LA, Santana MCS, Mendel R, Rauber D, Ebigbo A, Probst A, Messmann H, Papa JP, Palm C. Layer-selective deep representation to improve esophageal cancer classification. Med Biol Eng Comput 2024;62:3355-3372. PMID: 38848031; DOI: 10.1007/s11517-024-03142-8.
Abstract
Although artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must improve before this success can transfer into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when they support medical diagnosis, which requires opening up the black-box nature of deep learning techniques. Hence, we investigated the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To build a two-step learning technique, the output of each convolutional layer in the ResNet-50 architecture was trained and classified separately to identify the layers with the greatest impact on the architecture. We showed that local information and high-dimensional features are essential to improving classification for this task. Moreover, we observed a significant improvement when the most discriminative layers were given more weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing can influence the correct learning of such a problem.
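The layer-selective idea, classifying each layer's output separately and keeping the most discriminative layers, can be sketched with synthetic features. The nearest-centroid probe and the synthetic "layer outputs" below are assumptions standing in for ResNet-50 activations and the paper's per-layer classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

def probe_accuracy(feats, labels):
    """Nearest-class-centroid probe: a simple stand-in for the per-layer
    classifiers used to rate each convolutional layer's usefulness."""
    c0 = feats[labels == 0].mean(axis=0)
    c1 = feats[labels == 1].mean(axis=0)
    pred = (np.linalg.norm(feats - c1, axis=1)
            < np.linalg.norm(feats - c0, axis=1)).astype(int)
    return float((pred == labels).mean())

# Synthetic "layer outputs": only layer 2 carries class signal by construction.
labels = rng.integers(0, 2, size=200)
layers = [rng.normal(size=(200, 16)) for _ in range(4)]
layers[2] = layers[2] + 3.0 * labels[:, None]   # inject a strong class signal
scores = [probe_accuracy(f, labels) for f in layers]
best_layer = int(np.argmax(scores))
print(best_layer)
```

Ranking layers by probe score is the generic version of the selection step; the paper's contribution is in then re-weighting the most discriminative layers during training.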
Affiliation(s)
- Luis A Souza
- Department of Informatics, Espírito Santo Federal University, Vitória, Brazil.
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany.
- Leandro A Passos
- CMI Lab, School of Engineering and Informatics, University of Wolverhampton, Wolverhampton, UK
- Robert Mendel
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- David Rauber
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
- Alanna Ebigbo
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Andreas Probst
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- Helmut Messmann
- Department of Gastroenterology, University Hospital Augsburg, Augsburg, Germany
- João Paulo Papa
- Department of Computing, São Paulo State University, Bauru, Brazil
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Germany
3. Li T, Hou N, Yu J, Zhao Z, Sun Q, Chen M, Yao Z, Ma S, Zhou J, Hu B. Evolutionary neural architecture search for automated MDD diagnosis using multimodal MRI imaging. iScience 2024;27:111020. PMID: 39429775; PMCID: PMC11490728; DOI: 10.1016/j.isci.2024.111020.
Abstract
Major depressive disorder (MDD) is a prevalent mental disorder with serious impacts on life and health. Neuroimaging offers valuable diagnostic insights, but traditional computer-aided diagnosis methods are limited by their reliance on researchers' experience. To address this, we proposed an evolutionary neural architecture search framework (M-ENAS) for automatically diagnosing MDD using multi-modal magnetic resonance imaging (MRI). M-ENAS determines the optimal weights and network architecture through a two-stage search: a one-shot neural architecture search (NAS) strategy trains the supernet weights, and a custom evolutionary search optimizes the network structure. M-ENAS was evaluated on two datasets, demonstrating that it outperforms existing hand-designed methods. Additionally, our findings reveal that brain regions within the somatomotor network play important roles in the diagnosis of MDD, providing additional insight into the biological mechanisms underlying the disorder.
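The evolutionary half of the two-stage search can be sketched as truncation-selection evolution over a toy architecture encoding. The encoding, the mutation operator, and the fitness function (a stand-in for validation accuracy under shared supernet weights) are all illustrative assumptions, not the M-ENAS objective:

```python
import random

random.seed(0)

# Toy search space: an architecture is a tuple of 6 per-layer operation
# indices in {0, 1, 2, 3}.  Fitness counts matches against a fixed optimum,
# standing in for the validation accuracy a real NAS would measure.
TARGET = (1, 3, 0, 2, 2, 1)

def fitness(arch):
    return sum(a == t for a, t in zip(arch, TARGET))

def mutate(arch):
    new = list(arch)
    new[random.randrange(len(new))] = random.randrange(4)
    return tuple(new)

def evolve(pop_size=20, generations=40):
    pop = [tuple(random.randrange(4) for _ in range(6)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Keeping the parents in the next population makes the best fitness monotone non-decreasing, which is why even this tiny budget reliably approaches the optimum on a toy landscape.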
Affiliation(s)
- Tongtong Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Ning Hou
- Medical Department, The Third People’s Hospital of Tianshui, Tianshui 741000, China
- Jiandong Yu
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Ziyang Zhao
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Qi Sun
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Miao Chen
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- Sujie Ma
- Sleep Department, The Third People’s Hospital of Tianshui, Tianshui 741000, China
- Jiansong Zhou
- National Clinical Research Center for Mental Disorders, and National Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha 410000, China
- Bin Hu
- School of Information Science and Engineering, Lanzhou University, Lanzhou 730000, China
- Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou 730000, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Joint Research Center for Cognitive Neurosensor Technology of Lanzhou University & Institute of Semiconductors, Chinese Academy of Sciences, Lanzhou 730000, China
4. Dagnew TM, Tseng CEJ, Yoo CH, Makary MM, Goodheart AE, Striar R, Meyer TN, Rattray AK, Kang L, Wolf KA, Fiedler SA, Tocci D, Shapiro H, Provost S, Sultana E, Liu Y, Ding W, Chen P, Kubicki M, Shen S, Catana C, Zürcher NR, Wey HY, Hooker JM, Weiss RD, Wang C. Toward AI-driven neuroepigenetic imaging biomarker for alcohol use disorder: A proof-of-concept study. iScience 2024;27:110159. PMID: 39021792; PMCID: PMC11253155; DOI: 10.1016/j.isci.2024.110159.
Abstract
Alcohol use disorder (AUD) is a disorder of clinical and public health significance requiring novel and improved therapeutic solutions. Both environmental and genetic factors play a significant role in its pathophysiology. However, the underlying epigenetic molecular mechanisms that link the gene-environment interaction in AUD remain largely unknown. In this proof-of-concept study, we showed, for the first time, the neuroepigenetic biomarker capability of non-invasive imaging of class I histone deacetylase (HDAC) epigenetic enzymes in the in vivo brain for distinguishing AUD patients from healthy controls using a machine learning approach in the context of precision diagnosis. Eleven AUD patients and 16 age- and sex-matched healthy controls completed a simultaneous positron emission tomography-magnetic resonance (PET/MR) scan with the HDAC-binding radiotracer [11C]Martinostat. Our results showed lower HDAC expression in the anterior cingulate region in AUD. Furthermore, by applying genetic algorithm feature selection, we identified five particular brain regions whose combined [11C]Martinostat standardized uptake value ratio (SUVR) features could reliably classify AUD vs. controls. We validated their classification reliability using a support vector machine classifier. These findings inform the potential of in vivo HDAC imaging biomarkers coupled with machine learning tools in the objective diagnosis and molecular translation of AUD, complementing the current Diagnostic and Statistical Manual of Mental Disorders (DSM)-based intervention to propel precision medicine forward.
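As a dependency-free stand-in for the paper's genetic-algorithm feature selection over regional [11C]Martinostat SUVR features, the sketch below ranks synthetic region features with a univariate Fisher separability score. The data, group sizes, signal placement, and the score choice are illustrative assumptions (the paper uses a GA plus an SVM, not this univariate ranking):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SUVR-like features: 27 subjects (11 AUD, 16 controls) x 20 brain
# regions; regions 0-4 are given lower "uptake" in AUD by construction.
labels = np.array([1] * 11 + [0] * 16)
X = rng.normal(size=(27, 20))
X[labels == 1, :5] -= 2.5

def fisher_score(feats, y):
    """Univariate class-separability per feature: squared mean difference
    over pooled variance (a simple stand-in for GA-based selection)."""
    m1, m0 = feats[y == 1].mean(0), feats[y == 0].mean(0)
    v1, v0 = feats[y == 1].var(0), feats[y == 0].var(0)
    return (m1 - m0) ** 2 / (v1 + v0 + 1e-8)

top5 = set(int(i) for i in np.argsort(fisher_score(X, labels))[-5:])
print(sorted(top5))
```

In the paper the selected region subset is then validated with a support vector machine; any such classifier could consume the five chosen columns here.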
Affiliation(s)
- Tewodros Mulugeta Dagnew
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Chieh-En J. Tseng
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Chi-Hyeon Yoo
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Meena M. Makary
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Systems and Biomedical Engineering Department, Cairo University, Giza, Egypt
- Anna E. Goodheart
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Robin Striar
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tyler N. Meyer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Anna K. Rattray
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Leyi Kang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kendall A. Wolf
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Stephanie A. Fiedler
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Darcy Tocci
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hannah Shapiro
- Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Scott Provost
- Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Eleanor Sultana
- Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Yan Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Wei Ding
- Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA
- Ping Chen
- Department of Engineering, University of Massachusetts Boston, Boston, MA, USA
- Marek Kubicki
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Psychiatry Neuroimaging Laboratory, Departments of Psychiatry and Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Shiqian Shen
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nicole R. Zürcher
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hsiao-Ying Wey
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jacob M. Hooker
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Roger D. Weiss
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Changning Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
5. Xia Z, Zhou T, Mamoon S, Lu J. Inferring brain causal and temporal-lag networks for recognizing abnormal patterns of dementia. Med Image Anal 2024;94:103133. PMID: 38458094; DOI: 10.1016/j.media.2024.103133.
Abstract
Brain functional network analysis has become a popular method for exploring the organization of the brain and identifying biomarkers of neurological diseases. However, constructing an ideal brain network remains challenging given our limited understanding of the human brain. Existing methods often ignore the impact of temporal lag on brain network modeling, which may lead to unreliable conclusions. To overcome this issue, we propose a novel brain functional network estimation method that simultaneously infers the causal mechanisms and temporal-lag values among brain regions. Specifically, our method converts lag learning into an instantaneous-effect estimation problem and embeds the search objectives into a deep neural network as learnable parameters. To verify the effectiveness of the proposed method, we performed experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, comparing the proposed model with several existing correlation-based and causality-based methods. The experimental results show that the brain networks constructed by the proposed method not only achieve promising classification performance but also exhibit characteristics consistent with physiological mechanisms. Our approach provides a new perspective for understanding the pathogenesis of brain diseases. The source code is released at https://github.com/NJUSTxiazw/CTLN.
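The core idea of estimating a temporal lag between two regional signals can be illustrated with a simple correlation scan. The paper instead learns lags as network parameters inside a deep model, so the function below is only a conceptual sketch on synthetic signals:

```python
import numpy as np

def estimate_lag(x, y, max_lag=10):
    """Return the lag (in samples) at which y best follows x, found by
    scanning shifted correlations -- a conceptual stand-in for the
    learned lag values in the paper's model."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        if lag == 0:
            c = np.corrcoef(x, y)[0, 1]
        else:
            c = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

rng = np.random.default_rng(0)
source = rng.normal(size=300)        # "driving" region's signal
target = np.roll(source, 3)          # the same signal delayed by 3 samples
target[:3] = rng.normal(size=3)      # replace the wrapped-around samples
lag = estimate_lag(source, target)
print(lag)  # 3
```

Recovering the injected 3-sample delay shows why ignoring lag biases zero-lag correlation networks: the same pair of regions looks only weakly coupled at lag 0.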
Affiliation(s)
- Zhengwang Xia
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Tao Zhou
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Saqib Mamoon
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Jianfeng Lu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
6. Liu H, Ma Z, Wei L, Chen Z, Peng Y, Jiao Z, Bai H, Jing B. A radiomics-based brain network in T1 images: construction, attributes, and applications. Cereb Cortex 2024;34:bhae016. PMID: 38300184; PMCID: PMC10839838; DOI: 10.1093/cercor/bhae016.
Abstract
The T1 image is a widely collected imaging sequence in neuroimaging datasets, but it is rarely used to construct an individual-level brain network. In this study, a novel individualized radiomics-based structural similarity network was proposed from T1 images. In detail, voxel-based morphometry was used to obtain preprocessed gray matter images, radiomic features were then extracted from each region of interest in the Brainnetome atlas, and the network was finally built from the correlations of radiomic features between each pair of regions of interest. The network's characteristics were then assessed, including graph theory attributes, test-retest reliability, and individual identification ability (fingerprinting). Finally, two representative applications, mild cognitive impairment subtype discrimination and fluid intelligence prediction, were exemplified and compared with other networks on large open-source datasets. The results revealed that the individualized radiomics-based structural similarity network displays remarkable network characteristics and performs advantageously in both applications. In summary, it provides a distinctive, reliable, and informative individualized structural brain network that can be combined with other networks, such as resting-state functional connectivity, for various phenotypic and clinical applications.
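The network-construction step, edge weights as correlations of radiomic feature vectors between pairs of regions of interest, can be sketched directly. The synthetic feature matrix and its dimensions are assumptions (real radiomic features would come from a toolkit such as PyRadiomics applied to gray matter images):

```python
import numpy as np

def build_similarity_network(roi_features):
    """Adjacency (i, j) = Pearson correlation between the radiomic feature
    vectors of ROI i and ROI j (rows are ROIs, columns are features)."""
    z = roi_features - roi_features.mean(axis=1, keepdims=True)
    z = z / (roi_features.std(axis=1, keepdims=True) + 1e-12)
    return z @ z.T / roi_features.shape[1]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 30))     # 8 ROIs x 30 radiomic features (synthetic)
adj = build_similarity_network(feats)
print(adj.shape)  # (8, 8)
```

Because the matrix is built per subject, each person gets their own network, which is what enables the fingerprinting and prediction analyses described above.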
Affiliation(s)
- Han Liu
- Department of Radiology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, No. 56, Nanlishilu Road, Xicheng District, Beijing 100045, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao Youanmenwai, Fengtai District, Beijing 100069, China
- Zhe Ma
- Department of Radiology, Henan Cancer Hospital, The Affiliated Cancer Hospital of Zhengzhou University, 127 Dongming Road, Jinshui District, Zhengzhou, Henan 450008, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao Youanmenwai, Fengtai District, Beijing 100069, China
- Lijiang Wei
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao Youanmenwai, Fengtai District, Beijing 100069, China
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, No. 19, Xinjiekouwai Street, Haidian District, Beijing 100875, China
- Zhenpeng Chen
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao Youanmenwai, Fengtai District, Beijing 100069, China
- Yun Peng
- Department of Radiology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, No. 56, Nanlishilu Road, Xicheng District, Beijing 100045, China
- Zhicheng Jiao
- Department of Diagnostic Imaging, Brown University, 593 Eddy Street, Providence, Rhode Island 02903, United States
- Harrison Bai
- Department of Radiology and Radiological Sciences, Johns Hopkins University, 1800 Orleans Street, Baltimore, Maryland 21205, United States
- Bin Jing
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao Youanmenwai, Fengtai District, Beijing 100069, China
7. Xie J, Zhong W, Yang R, Wang L, Zhen X. Discriminative fusion of moments-aligned latent representation of multimodality medical data. Phys Med Biol 2023;69:015015. PMID: 38052076; DOI: 10.1088/1361-6560/ad1271.
Abstract
Fusion of multimodal medical data provides multifaceted, disease-relevant information for diagnosis or prognosis prediction modeling. Traditional fusion strategies such as feature concatenation often fail to learn hidden complementary and discriminative manifestations from high-dimensional multimodal data. To this end, we proposed a methodology for integrating multimodality medical data by matching their moments in a latent space, where the hidden, shared information of multimodal data is gradually learned by optimization with multiple feature collinearity and correlation constraints. We first obtained the multimodal hidden representations by learning mappings between the original domain and the shared latent space. Within this shared space, we utilized several relational regularizations, including data attribute preservation, feature collinearity, and feature-task correlation, to encourage learning of the underlying associations inherent in multimodal data. The fused multimodal latent features were finally fed to a logistic regression classifier for diagnostic prediction. Extensive evaluations on three independent clinical datasets demonstrated the effectiveness of the proposed method in fusing multimodal data for medical prediction modeling.
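A minimal sketch of the moment-matching idea: align the first and second moments of each modality per feature before fusing them. The per-feature standardization below is a deliberate simplification of the latent-space optimization described in the abstract, and all names and shapes are illustrative:

```python
import numpy as np

def moment_align(X):
    """Per-feature standardization: match the first moment to 0 and the
    second to 1, so differently scaled modalities become comparable."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

rng = np.random.default_rng(0)
mri = rng.normal(5.0, 3.0, size=(50, 10))       # modality A, arbitrary scale
clinical = rng.normal(0.0, 0.1, size=(50, 4))   # modality B, different scale
fused = np.hstack([moment_align(mri), moment_align(clinical)])
print(fused.shape)  # (50, 14)
```

Without this alignment, a downstream classifier such as the logistic regression mentioned above would be dominated by whichever modality happens to have the larger numeric range.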
Affiliation(s)
- Jincheng Xie
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Weixiong Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Ruimeng Yang
- Department of Radiology, the Second Affiliated Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong 510180, People's Republic of China
- Linjing Wang
- Radiotherapy Center, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, Guangdong 510095, People's Republic of China
- Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
8. Gao X, Shi F, Shen D, Liu M. Multimodal transformer network for incomplete image generation and diagnosis of Alzheimer's disease. Comput Med Imaging Graph 2023;110:102303. PMID: 37832503; DOI: 10.1016/j.compmedimag.2023.102303.
Abstract
Multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) provide complementary information about the brain and have been widely investigated for the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD). However, multimodal brain images are often incomplete in clinical practice, and it remains challenging to exploit multimodal data for disease diagnosis when some modalities are missing. In this paper, we propose a deep learning framework with a multi-level guided generative adversarial network (MLG-GAN) and a multimodal transformer (Mul-T) for incomplete image generation and disease classification, respectively. First, MLG-GAN generates the missing data, guided by multi-level information from voxels, features, and tasks: in addition to voxel-level supervision and a task-level constraint, a feature-level auto-regression branch embeds the features of target images for accurate generation. With the completed multimodal images, the Mul-T network performs disease diagnosis; it can not only combine global and local features but also model the latent interactions and correlations from one modality to another with a cross-modal attention mechanism. Comprehensive experiments on three independent datasets (ADNI-1, ADNI-2, and OASIS-3) show that the proposed method achieves superior performance in image generation and disease diagnosis compared to state-of-the-art methods.
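The cross-modal attention mechanism described above can be sketched as one attention head in which queries come from one modality's tokens and keys/values from the other's. The random projection matrices stand in for learned weights, and all shapes are illustrative assumptions, not the Mul-T configuration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_modal_attention(q_tokens, kv_tokens, seed=0):
    """One attention head: queries from modality A's tokens, keys/values
    from modality B's.  Random projections stand in for learned weights."""
    d = q_tokens.shape[1]
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))   # (n_A, n_B) attention weights
    return attn @ V, attn

mri_tokens = np.random.default_rng(1).normal(size=(6, 8))   # 6 MRI patch tokens
pet_tokens = np.random.default_rng(2).normal(size=(4, 8))   # 4 PET patch tokens
out, attn = cross_modal_attention(mri_tokens, pet_tokens)
print(out.shape, attn.shape)  # (6, 8) (6, 4)
```

Each MRI token thus receives a convex combination of PET values, which is how one modality's representation is conditioned on the other.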
Affiliation(s)
- Xingyu Gao
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., China; School of Biomedical Engineering, ShanghaiTech University, China
- Manhua Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China; MoE Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
9. Odusami M, Maskeliūnas R, Damaševičius R. Optimized Convolutional Fusion for Multimodal Neuroimaging in Alzheimer's Disease Diagnosis: Enhancing Data Integration and Feature Extraction. J Pers Med 2023;13:1496. PMID: 37888107; PMCID: PMC10608760; DOI: 10.3390/jpm13101496.
Abstract
Multimodal neuroimaging has gained traction in Alzheimer's Disease (AD) diagnosis by integrating information from multiple imaging modalities to enhance classification accuracy. However, effectively handling heterogeneous data sources and overcoming the challenges posed by multiscale transform methods remains a significant hurdle. This article proposes a novel approach to address these challenges. To harness the power of diverse neuroimaging data, we employ a strategy that leverages optimized convolution techniques. These optimizations include varying kernel sizes and the incorporation of instance normalization, both of which play crucial roles in feature extraction from magnetic resonance imaging (MRI) and positron emission tomography (PET) images. Specifically, varying kernel sizes allow us to adapt the receptive field to different image characteristics, enhancing the model's ability to capture relevant information. Furthermore, we employ transposed convolution, which increases spatial resolution of feature maps, and it is optimized with varying kernel sizes and instance normalization. This heightened resolution facilitates the alignment and integration of data from disparate MRI and PET data. The use of larger kernels and strides in transposed convolution expands the receptive field, enabling the model to capture essential cross-modal relationships. Instance normalization, applied to each modality during the fusion process, mitigates potential biases stemming from differences in intensity, contrast, or scale between modalities. This enhancement contributes to improved model performance by reducing complexity and ensuring robust fusion. 
The performance of the proposed fusion method is assessed on three distinct neuroimaging datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI), consisting of 50 participants each at various stages of AD for both MRI and PET (Cognitive Normal, AD, and Early Mild Cognitive Impairment); the Open Access Series of Imaging Studies (OASIS), consisting of 50 participants each at various stages of AD for both MRI and PET (Cognitive Normal, Mild Dementia, Very Mild Dementia); and the whole-brain atlas neuroimaging dataset (AANLIB), consisting of 50 participants each at various stages of AD for both MRI and PET (Cognitive Normal, AD). To evaluate the quality of the fused images generated via our method, we employ a comprehensive set of evaluation metrics, including the Structural Similarity Index Measurement (SSIM), which assesses the structural similarity between two images; Peak Signal-to-Noise Ratio (PSNR), which measures how closely the generated image resembles the ground truth; Entropy (E), which assesses the amount of information preserved or lost during fusion; the Feature Similarity Indexing Method (FSIM), which assesses the structural and feature similarities between two images; and Edge-Based Similarity (EBS), which measures the similarity of edges between the fused and ground truth images. The obtained fused image is further evaluated using a Mobile Vision Transformer. In the classification of AD vs. Cognitive Normal, the model achieved an accuracy of 99.00%, specificity of 99.00%, and sensitivity of 98.44% on the AANLIB dataset.
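PSNR and entropy, two of the fusion-quality metrics listed above, are simple to compute from pixel intensities. A minimal pure-Python sketch (function names and toy images are illustrative, not the authors' evaluation code):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (flat intensity lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(a, b)
    if e == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / e)

def entropy(img, levels=256):
    """Shannon entropy (bits) of the intensity histogram."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    n = len(img)
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

ref = [10, 20, 30, 40]
fused = [12, 18, 30, 44]
print(round(psnr(ref, fused), 2))       # ~40.35 dB for this toy pair
print(entropy([0, 0, 255, 255]))        # two equally likely levels -> 1.0 bit
```

SSIM and FSIM follow the same pattern but compare local statistics (means, variances, covariances) rather than raw pixel differences.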
Affiliation(s)
- Modupe Odusami
- Department of Multimedia Engineering, Kaunas University of Technology, 51423 Kaunas, Lithuania
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, 51423 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 53361 Kaunas, Lithuania
10
Yu M, Liu Y, Wu J, Bozoki A, Qiu S, Yue L, Liu M. Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2023; 14394:265-275. [PMID: 38435413 PMCID: PMC10904401 DOI: 10.1007/978-3-031-47425-5_24] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/05/2024]
Abstract
Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine and deep learning approaches. But successfully fusing MRI and PET can be complex due to their unique characteristics and missing modalities. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and cognitive decline progression forecasting. Our HMF consists of three modules: 1) a module to impute missing PET images, 2) a module to extract multimodality features from MRI and PET images, and 3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset, which includes 795 subjects, to independent small-scale AD-related cohorts, in order to leverage the rich knowledge present within the ADNI. The proposed HMF is extensively evaluated in three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.
Affiliation(s)
- Minhui Yu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC 27599, USA
- Yunbi Liu
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China
- Jinjian Wu
- Department of Acupuncture and Rehabilitation, The Affiliated Hospital of TCM of Guangzhou Medical University, Guangzhou 510130, Guangdong, China
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Shijun Qiu
- Department of Radiology, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou 510000, Guangdong, China
- Ling Yue
- Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
11
Dai Y, Zou B, Zhu C, Li Y, Chen Z, Ji Z, Kui X, Zhang W. DE-JANet: A unified network based on dual encoder and joint attention for Alzheimer's disease classification using multi-modal data. Comput Biol Med 2023; 165:107396. [PMID: 37703717 DOI: 10.1016/j.compbiomed.2023.107396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 07/28/2023] [Accepted: 08/26/2023] [Indexed: 09/15/2023]
Abstract
Structural magnetic resonance imaging (sMRI), which can reflect cerebral atrophy, plays an important role in the early detection of Alzheimer's disease (AD). However, the information provided by analyzing only the morphological changes in sMRI is relatively limited, and the assessment of the atrophy degree is subjective. Therefore, it is meaningful to combine sMRI with other clinical information to acquire complementary diagnostic information and achieve a more accurate classification of AD. Nevertheless, how to fuse these multi-modal data effectively is still challenging. In this paper, we propose DE-JANet, a unified AD classification network that integrates sMRI image data with non-image clinical data, such as age and Mini-Mental State Examination (MMSE) score, for more effective multi-modal analysis. DE-JANet consists of three key components: (1) a dual encoder module for extracting low-level features from the image and non-image data according to specific encoding regularity, (2) a joint attention module for fusing multi-modal features, and (3) a token classification module for performing AD-related classification according to the fused multi-modal features. Our DE-JANet is evaluated on the ADNI dataset, with a mean accuracy of 0.9722 and 0.9538 for AD classification and mild cognitive impairment (MCI) classification, respectively, which is superior to existing methods and indicates advanced performance on AD-related diagnosis tasks.
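Joint attention modules of this kind typically let tokens from one modality attend over tokens from another. A toy, single-query scaled dot-product attention in pure Python (the names and the two-feature "clinical tokens" are hypothetical illustrations, not the DE-JANet implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: one query vector over key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Toy fusion: an image-derived token attends over encoded non-image tokens
# (e.g., age and MMSE), producing a fused representation.
image_token = [1.0, 0.0]
clinical_tokens = [[1.0, 0.0], [0.0, 1.0]]
fused = attend(image_token, clinical_tokens, clinical_tokens)
print([round(x, 3) for x in fused])  # weights favor the better-matching token
```

The fused vector is a convex combination of the clinical tokens, weighted by their similarity to the image token, which is the core mechanism attention-based fusion relies on.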
Affiliation(s)
- Yulan Dai
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Chengzhang Zhu
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Yang Li
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Zhi Chen
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Zexin Ji
- School of Computer Science and Engineering, Central South University, Changsha, China; Hunan Engineering Research Center of Machine Vision and Intelligent Medicine, Changsha, China
- Xiaoyan Kui
- School of Computer Science and Engineering, Central South University, Changsha, China
- Wensheng Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
12
Long Z, Li J, Fan J, Li B, Du Y, Qiu S, Miao J, Chen J, Yin J, Jing B. Identifying Alzheimer's disease and mild cognitive impairment with atlas-based multi-modal metrics. Front Aging Neurosci 2023; 15:1212275. [PMID: 37719872 PMCID: PMC10501142 DOI: 10.3389/fnagi.2023.1212275] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 08/21/2023] [Indexed: 09/19/2023] Open
Abstract
Introduction: Multi-modal neuroimaging metrics combined with advanced machine learning techniques have recently attracted increasing attention for the effective multi-class identification of Alzheimer's disease (AD), mild cognitive impairment (MCI), and healthy controls (HC). Methods: In this paper, a total of 180 subjects, consisting of 44 AD, 66 MCI, and 58 HC subjects, were enrolled, and both resting-state functional magnetic resonance imaging (rs-fMRI) and structural MRI (sMRI) data were obtained for all participants. Four kinds of metrics, including the Hurst exponent (HE) metric and connectivity metrics independently generated from bilateral hippocampus seeds using the fMRI data, and the gray matter volume (GMV) metric obtained from the sMRI data, were calculated and extracted in each region of interest (ROI) of the newly proposed Automated Anatomical Labeling (AAL3) atlas after data pre-processing. Next, these metrics were selected with a minimal redundancy maximal relevance (MRMR) method and a sequential feature collection (SFC) algorithm, so that only a subset of optimal features was retained after this step. Finally, support vector machine (SVM) based classification methods and an artificial neural network (ANN) algorithm were utilized for the multi-class identification of AD, MCI, and HC subjects with single-modal and multi-modal metrics respectively, and a nested ten-fold cross-validation was utilized to estimate the final classification performance. Results: The SVM and ANN based methods achieved best accuracies of 80.36% and 74.40%, respectively, by utilizing all the multi-modal metrics, and the optimal per-class accuracies for AD, MCI, and HC were 79.55%, 78.79%, and 82.76%, respectively, with the SVM based method. In contrast, when using a single modal metric, the SVM based method obtained a best accuracy of 72.62% with the HE metric, and the accuracies for AD, MCI, and HC subjects were just 56.82%, 80.30%, and 75.86%, respectively. Moreover, the overlapping abnormal brain regions detected by the multi-modal metrics were mainly located in the posterior cingulate gyrus, superior frontal gyrus, and cuneus. Conclusion: Taken together, the SVM based method with multi-modal metrics could provide effective diagnostic information for identifying AD, MCI, and HC subjects.
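Per-class accuracies like those quoted above (e.g., 79.55% for AD) are recall-style figures: the fraction of subjects of each true class that the classifier labels correctly. A minimal sketch of the computation (the helper name and toy labels are illustrative, not the study's code):

```python
def per_class_accuracy(y_true, y_pred, classes):
    """Per-class accuracy (recall): fraction of each true class predicted correctly."""
    acc = {}
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        acc[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    return acc

y_true = ["AD", "AD", "MCI", "MCI", "MCI", "HC", "HC"]
y_pred = ["AD", "MCI", "MCI", "MCI", "HC", "HC", "HC"]
print(per_class_accuracy(y_true, y_pred, ["AD", "MCI", "HC"]))  # AD 1/2, MCI 2/3, HC 2/2
```

In a nested cross-validation setup, such figures would be averaged over the outer folds, with feature selection repeated inside each inner loop to avoid leakage.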
Affiliation(s)
- Zhuqing Long
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha, Hunan Province, China
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Jie Li
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha, Hunan Province, China
- Jianghua Fan
- Department of Pediatric Emergency Center, Hunan Children’s Hospital, Changsha, Hunan Province, China
- Bo Li
- Department of Traditional Chinese Medicine, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, China
- Yukeng Du
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha, Hunan Province, China
- Shuang Qiu
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha, Hunan Province, China
- Jichang Miao
- Department of Medical Devices, Nanfang Hospital, Guangzhou, China
- Jian Chen
- School of Electronic, Electrical Engineering and Physics, Fujian University of Technology, Fuzhou, Fujian, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
- Juanwu Yin
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha, Hunan Province, China
- Bin Jing
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Beijing, China
13
Wang L, Zheng Z, Su Y, Chen K, Weidman D, Wu T, Lo S, Lure F, Li J. Early Prediction of Progression to Alzheimer's Disease using Multi-Modality Neuroimages by a Novel Ordinal Learning Model ADPacer. IISE TRANSACTIONS ON HEALTHCARE SYSTEMS ENGINEERING 2023; 14:167-177. [PMID: 39239251 PMCID: PMC11374100 DOI: 10.1080/24725579.2023.2249487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/07/2024]
Abstract
Machine learning has shown great promise for integrating multi-modality neuroimaging datasets to predict the risk of progression/conversion to Alzheimer's Disease (AD) for individuals with Mild Cognitive Impairment (MCI). Most existing work aims to classify MCI patients into converters versus non-converters within a pre-defined timeframe. The limitation is a lack of granularity in differentiating MCI patients who convert at different paces. Progression pace prediction has important clinical value, allowing for more personalized interventional strategies, better preparation of patients and their caregivers, and facilitation of patient selection in clinical trials. We proposed a novel ADPacer model that formulates pace prediction as an ordinal learning problem, with a unique capability of leveraging training samples with label ambiguity to augment the training set. This capability differentiates ADPacer from existing ordinal learning algorithms. We applied ADPacer to MCI patient cohorts from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL), and demonstrated the superior performance of ADPacer compared to existing ordinal learning algorithms. We also integrated the SHapley Additive exPlanations (SHAP) method with ADPacer to assess the contributions from different modalities to the model prediction. The findings are consistent with the AD literature.
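Ordinal learning exploits the natural ordering of labels (here, progression paces) rather than treating them as unrelated classes. One common generic encoding, shown below as a sketch and not the ADPacer formulation, represents an ordinal label as cumulative binary targets:

```python
def ordinal_encode(label, n_classes):
    """Encode an ordinal label k as n_classes-1 cumulative binary targets:
    the i-th target answers 'is the label greater than i?'."""
    return [1 if label > i else 0 for i in range(n_classes - 1)]

def ordinal_decode(binary_preds):
    """Decode thresholded cumulative predictions back into an ordinal label."""
    return sum(binary_preds)

# Three hypothetical progression paces: 0 = slow, 1 = medium, 2 = fast.
print(ordinal_encode(2, 3))    # [1, 1]
print(ordinal_decode([1, 0]))  # 1
```

Each cumulative target can be learned by an ordinary binary classifier, and ambiguous labels (e.g., "at least medium pace") still determine some of the binary targets, which is one way samples with label ambiguity can contribute to training.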
Affiliation(s)
- Lujia Wang
- H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, GA, USA
- Zhiyang Zheng
- H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, GA, USA
- Yi Su
- Banner Alzheimer's Institute, AZ, USA
- Teresa Wu
- School of Computing and Augmented Intelligence, Arizona State University, AZ, USA
- Jing Li
- H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, GA, USA
14
Odusami M, Maskeliūnas R, Damaševičius R. Pareto Optimized Adaptive Learning with Transposed Convolution for Image Fusion Alzheimer's Disease Classification. Brain Sci 2023; 13:1045. [PMID: 37508977 PMCID: PMC10377099 DOI: 10.3390/brainsci13071045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 06/30/2023] [Accepted: 07/04/2023] [Indexed: 07/30/2023] Open
Abstract
Alzheimer's disease (AD) is a neurological condition that gradually weakens the brain and impairs cognition and memory. Multimodal imaging techniques have become increasingly important in the diagnosis of AD because they provide a more complete picture of the changes that occur in the brain, helping to monitor disease progression over time. Medical image fusion is crucial in that it combines data from various image modalities into a single, better-understood output. The present study explores the feasibility of employing Pareto optimized deep learning methodologies to integrate Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images through the utilization of pre-existing models, namely the Visual Geometry Group (VGG) 11, VGG16, and VGG19 architectures. Morphological operations are carried out on the MRI and PET images using Analyze 14.0 software, after which the PET images are manipulated to the desired angle of alignment with the MRI image using the GNU Image Manipulation Program (GIMP). To enhance the network's performance, a transposed convolution layer is incorporated into the previously extracted feature maps before image fusion. This process generates the feature maps and fusion weights that facilitate the fusion process. The investigation then assesses the efficacy of the three VGG models in capturing significant features from the MRI and PET data. The hyperparameters of the models are tuned using Pareto optimization. Model performance is evaluated on the ADNI dataset using the Structural Similarity Index Method (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean-Square Error (MSE), and Entropy (E). Experimental results show that VGG19 outperforms VGG16 and VGG11, with average SSIMs of 0.668, 0.802, and 0.664 for the CN, AD, and MCI stages of ADNI (MRI modality), respectively, and average SSIMs of 0.669, 0.815, and 0.660 for the CN, AD, and MCI stages of ADNI (PET modality), respectively.
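SSIM, the headline metric above, compares luminance, contrast, and structure between two images. A simplified single-window variant in pure Python (the full metric averages many local Gaussian windows; this sketch is illustrative, not the authors' evaluation code):

```python
def global_ssim(a, b, max_val=255.0):
    """Single-window SSIM over whole images given as flat intensity lists.
    Real SSIM averages this quantity over sliding local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((x - mu_b) ** 2 for x in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = [10, 20, 30, 40]
print(global_ssim(img, img))                 # identical images -> 1.0
print(global_ssim(img, [40, 30, 20, 10]))    # anti-correlated -> negative here
```

SSIM scores of 0.6 to 0.8, as reported for the fused images, indicate substantial but imperfect structural agreement with the source modality.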
Affiliation(s)
- Modupe Odusami
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
15
Wang T, Chen X, Zhang J, Feng Q, Huang M. Deep multimodality-disentangled association analysis network for imaging genetics in neurodegenerative diseases. Med Image Anal 2023; 88:102842. [PMID: 37247468 DOI: 10.1016/j.media.2023.102842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 03/01/2023] [Accepted: 05/15/2023] [Indexed: 05/31/2023]
Abstract
Imaging genetics is a crucial tool applied to explore potentially disease-related biomarkers, particularly for neurodegenerative diseases (NDs). With the development of imaging technology, association analysis between multimodal imaging data and genetic data has gradually attracted attention from a wide range of imaging genetics studies. However, in traditional methods the multimodal data are fused first and then correlated with genetic data, which leads to an incomplete exploration of their common and complementary information. In addition, inaccurate modeling of the complex relationships between imaging and genetic data, and the information loss caused by missing multimodal data, are still open problems in imaging genetics studies. Therefore, in this study, a deep multimodality-disentangled association analysis network (DMAAN) is proposed to solve the aforementioned issues and simultaneously detect disease-related biomarkers of NDs. First, the imaging data are nonlinearly projected into a latent space to obtain imaging representations. The imaging representations are further disentangled into common and specific parts by a multimodal-disentangled module. Second, the genetic data are encoded to obtain genetic representations, which are then nonlinearly mapped to the common and specific imaging representations to build nonlinear associations between imaging and genetic data through an association analysis module. Moreover, modality mask vectors are synchronously synthesized to integrate the genetic and imaging data, which helps the subsequent disease diagnosis. Finally, the proposed method achieves reasonable diagnosis performance via a disease diagnosis module and utilizes the label information to detect the disease-related modality-shared and modality-specific biomarkers. Furthermore, the genetic representation can be used to impute the missing multimodal data with our learning strategy.
Two publicly available datasets with different NDs are used to demonstrate the effectiveness of the proposed DMAAN. The experimental results show that the proposed DMAAN can identify the disease-related biomarkers, which suggests the proposed DMAAN may provide new insights into the pathological mechanism and early diagnosis of NDs. The codes are publicly available at https://github.com/Meiyan88/DMAAN.
Affiliation(s)
- Tao Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiumei Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
16
Rhyou SY, Yoo JC. Aggregated micropatch-based deep learning neural network for ultrasonic diagnosis of cirrhosis. Artif Intell Med 2023; 139:102541. [PMID: 37100510 DOI: 10.1016/j.artmed.2023.102541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 03/15/2023] [Accepted: 03/27/2023] [Indexed: 04/28/2023]
Abstract
Despite advancements in the diagnosis of early-stage cirrhosis, accurate diagnosis from ultrasound remains challenging owing to various image artifacts, which degrade the visual quality of the textural and lower-frequency components. In this study, we propose an end-to-end multistep network called CirrhosisNet that includes two transfer-learned convolutional neural networks for semantic segmentation and classification tasks. It uses a uniquely designed image, called an aggregated micropatch (AMP), as the input to the classification network, thereby assessing whether the liver is in a cirrhotic stage. From a prototype AMP image, we synthesized a set of AMP images while retaining the textural features. This synthesis significantly increases the number of scarce cirrhosis-labeled images, thereby circumventing overfitting issues and optimizing network performance. Furthermore, the synthesized AMP images contain unique textural patterns, mostly generated on the boundaries between adjacent micropatches (μ-patches) during their aggregation. These newly created boundary patterns provide rich information about the texture features of the ultrasound image, making cirrhosis diagnosis more accurate and sensitive. The experimental results demonstrate that the proposed AMP image synthesis is highly effective in expanding the dataset of cirrhosis images, allowing liver cirrhosis to be diagnosed with considerably high accuracy. We achieved an accuracy of 99.95%, a sensitivity of 100%, and a specificity of 99.9% on the Samsung Medical Center dataset using 8 × 8 pixel μ-patches. The proposed approach offers an effective solution for deep-learning models with limited training data, such as medical imaging tasks.
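The μ-patch idea above can be sketched generically: split an image into small patches, then tile a permutation of them into a new AMP-style image, creating fresh boundary patterns while each patch keeps its original texture. A toy pure-Python example (function names and the reordering scheme are hypothetical, not the CirrhosisNet implementation):

```python
def split_micropatches(image, p):
    """Split a 2-D image (list of rows) into non-overlapping p x p micropatches."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h, p):
        for c in range(0, w, p):
            patches.append([row[c:c + p] for row in image[r:r + p]])
    return patches

def aggregate(patches, cols):
    """Tile micropatches back into one aggregated image, 'cols' patches per row."""
    p = len(patches[0])
    rows = []
    for start in range(0, len(patches), cols):
        band = patches[start:start + cols]
        for i in range(p):
            rows.append([v for patch in band for v in patch[i]])
    return rows

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
patches = split_micropatches(img, 2)          # 4 patches of 2 x 2
# Reversing the patch order yields a new aggregated image with new boundaries.
amp = aggregate([patches[3], patches[2], patches[1], patches[0]], cols=2)
print(amp[0])  # [11, 12, 9, 10]
```

Aggregating the patches in their original order reconstructs the input exactly, so every permutation is a valid, texture-preserving synthetic sample.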
Affiliation(s)
- Se-Yeol Rhyou
- Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 440-746, South Korea
- Jae-Chern Yoo
- Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 440-746, South Korea
17
Mulyadi AW, Jung W, Oh K, Yoon JS, Lee KH, Suk HI. Estimating explainable Alzheimer's disease likelihood map via clinically-guided prototype learning. Neuroimage 2023; 273:120073. [PMID: 37037063 DOI: 10.1016/j.neuroimage.2023.120073] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 03/03/2023] [Accepted: 03/30/2023] [Indexed: 04/12/2023] Open
Abstract
Identifying Alzheimer's disease (AD) involves a deliberate diagnostic process owing to its innate traits of irreversibility with subtle and gradual progression. These characteristics make AD biomarker identification from structural brain imaging (e.g., structural MRI) scans quite challenging. Using clinically-guided prototype learning, we propose a novel deep-learning approach through eXplainable AD Likelihood Map Estimation (XADLiME) for AD progression modeling over 3D sMRIs. Specifically, we establish a set of topologically-aware prototypes onto the clusters of latent clinical features, uncovering an AD spectrum manifold. Considering this pseudo map as an enriched reference, we employ an estimating network to approximate the AD likelihood map over a 3D sMRI scan. Additionally, we promote the explainability of such a likelihood map by revealing a comprehensible overview from clinical and morphological perspectives. During inference, this estimated likelihood map serves as a substitute for unseen sMRI scans, effectively conducting the downstream task while providing thoroughly explainable states.
Affiliation(s)
- Ahmad Wisnu Mulyadi
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kwanseok Oh
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kun Ho Lee
- Gwangju Alzheimer's & Related Dementia Cohort Research Center, Chosun University, Gwangju 61452, Republic of Korea; Department of Biomedical Science, Chosun University, Gwangju 61452, Republic of Korea; Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Heung-Il Suk
- Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
18
Gong W, Bai S, Zheng YQ, Smith SM, Beckmann CF. Supervised Phenotype Discovery From Multimodal Brain Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:834-849. [PMID: 36318559 DOI: 10.1109/tmi.2022.3218720] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Data-driven discovery of image-derived phenotypes (IDPs) from large-scale multimodal brain imaging data has enormous potential for neuroscientific and clinical research by linking IDPs to subjects' demographic, behavioural, clinical and cognitive measures (i.e., non-imaging derived phenotypes or nIDPs). However, current approaches are primarily unsupervised, without using the information in nIDPs. In this paper, we propose a semi-supervised, multimodal, multi-task fusion approach, termed SuperBigFLICA, for IDP discovery, which simultaneously integrates information from multiple imaging modalities as well as multiple nIDPs. SuperBigFLICA is computationally efficient and largely avoids the need for parameter tuning. Using the UK Biobank brain imaging dataset with around 40,000 subjects and 47 modalities, along with more than 17,000 nIDPs, we showed that SuperBigFLICA enhances the prediction power of nIDPs, benchmarked against IDPs derived by conventional expert-knowledge and unsupervised-learning approaches (with average nIDP prediction accuracy improvements of up to 46%). It also enables the learning of generic imaging features that can predict new nIDPs. Further empirical analysis of the SuperBigFLICA algorithm demonstrates its robustness in different prediction tasks and the ability to derive biologically meaningful IDPs in predicting health outcomes and cognitive nIDPs, such as fluid intelligence and hypertension.
19
Chen Z, Liu Y, Zhang Y, Li Q. Orthogonal latent space learning with feature weighting and graph learning for multimodal Alzheimer's disease diagnosis. Med Image Anal 2023; 84:102698. [PMID: 36462372 DOI: 10.1016/j.media.2022.102698] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 10/18/2022] [Accepted: 11/17/2022] [Indexed: 11/23/2022]
Abstract
Recent studies have shown that multimodal neuroimaging data provide complementary information about the brain, and latent space-based methods have achieved promising results in fusing multimodal data for Alzheimer's disease (AD) diagnosis. However, most existing methods treat all features equally and adopt nonorthogonal projections to learn the latent space, which cannot retain enough discriminative information in the latent space. Besides, they usually preserve the relationships among subjects in the latent space based on a similarity graph constructed on the original features for performance boosting. However, noises and redundant features significantly corrupt the graph. To address these limitations, we propose an Orthogonal Latent space learning with Feature weighting and Graph learning (OLFG) model for multimodal AD diagnosis. Specifically, we map multiple modalities into a common latent space by an orthogonality-constrained projection to capture the discriminative information for AD diagnosis. Then, a feature weighting matrix is utilized to rank the importance of features for AD diagnosis adaptively. Besides, we devise a regularization term with a learned graph to preserve the local structure of the data in the latent space, and integrate graph construction into the learning process to accurately encode the relationships among samples. Instead of constructing a similarity graph for each modality, we learn a joint graph for multiple modalities to capture the correlations among modalities. Finally, the representations in the latent space are projected into the target space to perform AD diagnosis. An alternating optimization algorithm with proven convergence is developed to solve the optimization objective. Extensive experimental results show the effectiveness of the proposed method.
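Orthogonality-constrained projections preserve discriminative information because orthonormal directions carry no redundancy. As a minimal illustration of orthonormalization (a generic Gram-Schmidt sketch, not the OLFG optimization, which solves a constrained joint objective):

```python
def gram_schmidt(vectors):
    """Orthonormalize a list of vectors (classical Gram-Schmidt).
    The result can serve as columns of an orthogonal projection matrix."""
    basis = []
    for v in vectors:
        w = v[:]
        # Remove the components already explained by earlier basis vectors.
        for b in basis:
            dot = sum(x * y for x, y in zip(w, b))
            w = [x - dot * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > 1e-12:  # skip linearly dependent inputs
            basis.append([x / norm for x in w])
    return basis

q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
dot01 = sum(a * b for a, b in zip(q[0], q[1]))
print(abs(dot01) < 1e-9)  # True: the basis vectors are orthogonal
```

Projecting data onto such a basis changes representation without collapsing directions, which is the property the orthogonal constraint in the latent space is meant to enforce.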
Affiliation(s)
- Zhi Chen
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yongguo Liu
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yun Zhang
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Qiaoqin Li
- Knowledge and Data Engineering Laboratory of Chinese Medicine, School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
20
Lin CT, Ghosh S, Hinkley LB, Dale CL, Souza ACS, Sabes JH, Hess CP, Adams ME, Cheung SW, Nagarajan SS. Multi-tasking deep network for tinnitus classification and severity prediction from multimodal structural MR images. J Neural Eng 2023; 20. [PMID: 36595270 DOI: 10.1088/1741-2552/acab33]
Abstract
Objective: Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools will advance clinical practice by detecting or confirming the condition, tracking change in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance images of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. Approach: We propose a deep multi-task multimodal framework for tinnitus classification and severity prediction using structural MRI (sMRI) data. To leverage the complementary information in multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI: T1-weighted (T1w) and T2-weighted (T2w) images. To explore the key components in the MR images that drove task performance, we segment both T1w and T2w images into three components (cerebrospinal fluid, grey matter, and white matter) and evaluate the performance of each segmented image. Main results: Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Significance: Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
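The "joint task" in this abstract boils down to optimising one objective that mixes a classification loss and a regression loss over a shared representation. The following numpy sketch is purely illustrative (the shared tanh embedding, linear heads, and mixing weight `lam` are assumptions, not the paper's architecture), but it shows the shape of such a multi-task loss:

```python
import numpy as np

def joint_multitask_loss(features, w_shared, w_cls, w_reg, y_cls, y_reg, lam=0.5):
    """Illustrative multi-task objective: a shared linear embedding feeds a
    classification head (logistic loss, e.g. tinnitus / no tinnitus) and a
    regression head (squared loss, e.g. severity). `lam` trades off the tasks."""
    h = np.tanh(features @ w_shared)          # shared representation
    logits = h @ w_cls
    p = 1.0 / (1.0 + np.exp(-logits))         # sigmoid class probability
    eps = 1e-12
    bce = -np.mean(y_cls * np.log(p + eps) + (1 - y_cls) * np.log(1 - p + eps))
    mse = np.mean((h @ w_reg - y_reg) ** 2)
    return bce + lam * mse
```

Training both heads against the shared features is what lets severity supervision regularise the classifier and vice versa.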
Affiliation(s)
- Chieh-Te Lin
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Sanjay Ghosh
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Leighton B Hinkley
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Corby L Dale
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Ana C S Souza
- Department of Telecommunication and Mechatronics Engineering, Federal University of Sao Joao del-Rei, Praca Frei Orlando, 170, Sao Joao del Rei 36307, MG, Brazil
- Jennifer H Sabes
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, United States of America
- Christopher P Hess
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Meredith E Adams
- Department of Otolaryngology-Head and Neck Surgery, University of Minnesota, Phillips Wangensteen Building, 516 Delaware St., Minneapolis, MN 55455, United States of America
- Steven W Cheung
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, United States of America
- Surgical Services, Veterans Affairs, 4150 Clement St., San Francisco, CA 94121, United States of America
- Srikantan S Nagarajan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 513 Parnassus Ave, San Francisco, CA 94143, United States of America
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, 2380 Sutter St., San Francisco, CA 94115, United States of America
- Surgical Services, Veterans Affairs, 4150 Clement St., San Francisco, CA 94121, United States of America
21
Hettiarachchi P, Niyangoda SS, Jarosova R, Johnson MA. Dopamine Release Impairments Accompany Locomotor and Cognitive Deficiencies in Rotenone-Treated Parkinson's Disease Model Zebrafish. Chem Res Toxicol 2022; 35:1974-1982. [PMID: 36178476 PMCID: PMC10127151 DOI: 10.1021/acs.chemrestox.2c00150]
Abstract
In this work, we carried out neurochemical and behavioral analysis of zebrafish (Danio rerio) treated with rotenone, an agent used to chemically induce a syndrome resembling Parkinson's disease (PD). Dopamine release, measured with fast-scan cyclic voltammetry (FSCV) at carbon-fiber electrodes in acutely harvested whole brains, was about 30% of that found in controls. Uptake, represented by the first-order rate constant (k) and the half-life (t1/2) determined by nonlinear regression modeling of the stimulated release plots, was also diminished. Behavioral analysis revealed that rotenone treatment increased the time required for zebrafish to reach a reward within a maze by more than 50% and caused fish to select the wrong pathway, suggesting that latent learning was impaired. Additionally, zebrafish treated with rotenone suffered from diminished locomotor activity, swimming shorter distances with lower mean velocity and acceleration. Thus, the neurochemical and behavioral approaches, as applied, were able to resolve rotenone-induced differences in key parameters. This approach may be effective for screening therapies in this and other models of neurodegeneration.
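For first-order uptake kinetics like those reported here, the rate constant and half-life are linked by t1/2 = ln 2 / k. The sketch below recovers both from the falling phase of a release curve; it uses a log-linear fit rather than the paper's nonlinear regression (an assumption that holds for clean exponential decays), and the function name is illustrative.

```python
import numpy as np

def first_order_uptake(t, c):
    """Estimate the first-order uptake rate constant k and half-life t1/2
    from the decaying phase of a stimulated release plot, assuming
    c(t) = c0 * exp(-k t). The slope of ln(c) vs t is -k."""
    k = -np.polyfit(t, np.log(c), 1)[0]
    return k, np.log(2) / k
```

A diminished uptake in this model would appear as a smaller k (and hence a longer half-life) in the treated fish than in controls.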
Affiliation(s)
- Piyanka Hettiarachchi
- Department of Chemistry and R.N. Adams Institute for Bioanalytical Chemistry, University of Kansas, Lawrence, Kansas 66045
- Sayuri S. Niyangoda
- Department of Chemistry and R.N. Adams Institute for Bioanalytical Chemistry, University of Kansas, Lawrence, Kansas 66045
- Romana Jarosova
- Department of Chemistry and R.N. Adams Institute for Bioanalytical Chemistry, University of Kansas, Lawrence, Kansas 66045
- Department of Analytical Chemistry, UNESCO Laboratory of Environmental Electrochemistry, Charles University, Prague 2, Czech Republic 12843
- Michael A. Johnson
- Department of Chemistry and R.N. Adams Institute for Bioanalytical Chemistry, University of Kansas, Lawrence, Kansas 66045
22
Resting-State Functional Magnetic Resonance Image to Analyze Electrical Biological Characteristics of Major Depressive Disorder Patients with Suicide Ideation. Comput Math Methods Med 2022; 2022:3741677. [PMID: 35734778 PMCID: PMC9208946 DOI: 10.1155/2022/3741677]
Abstract
This study aimed to explore the brain imaging characteristics of major depressive disorder (MDD) patients with suicide ideation (SI) through resting-state functional magnetic resonance imaging (rs-fMRI) and to investigate the potential neurobiological role in the occurrence of SI. 50 MDD patients were selected as the experimental group and 50 healthy people as the control group. Brain images of the patients were obtained by MRI, and electrical biological features were extracted from the rs-fMRI images. Because the MRI images were corrupted by noise, adaptive median filtering was applied to clean the collected images, and the initial cluster centers of the fuzzy c-means (FCM) segmentation model were determined by a particle swarm optimization algorithm; the images were then processed by the optimized model. The correlation between brain mALFF and clinical characteristics was analyzed. It was found that the FCM-based segmentation model could effectively eliminate the noise points in the images; that the zALFF values of the right superior temporal gyrus (R-STG), left middle occipital gyrus (L-MOG), and left middle temporal gyrus (L-MTG) in the observation group were significantly higher than those in the control group (P < 0.05); and that the average zALFF values of the left thalamus (L-THA) and left middle frontal gyrus (L-MFG) decreased. The mean zALFF values of L-MFG and L-SFG demonstrated good identification value for SI in MDD patients. In summary, the FCM-based segmentation of MRI images converged quickly, and the electrical biological characteristics of brain regions were abnormal in MDD patients with SI, which can be applied to the diagnosis and treatment of patients with depression in clinical practice.
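The segmentation pipeline here rests on the classic fuzzy c-means update equations. The sketch below shows one FCM iteration on intensity values; it is a minimal illustration in which the initial centers are simply passed in (the paper seeds them with particle swarm optimization, which is omitted here), and all names are illustrative.

```python
import numpy as np

def fcm_update(x, centers, m=2.0):
    """One fuzzy c-means iteration on 1-D intensities x:
    u[i, c] = 1 / sum_j (d_ic / d_ij)^(2/(m-1)), then centers are the
    u^m-weighted means. Memberships in each row sum to 1."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # N x C distances
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    um = u ** m
    new_centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u, new_centers
```

Iterating this update on well-separated intensity clusters drives the centers toward the cluster means, which is the convergence behaviour the abstract refers to.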
23
Long Z, Li J, Liao H, Deng L, Du Y, Fan J, Li X, Miao J, Qiu S, Long C, Jing B. A Multi-Modal and Multi-Atlas Integrated Framework for Identification of Mild Cognitive Impairment. Brain Sci 2022; 12:751. [PMID: 35741636 PMCID: PMC9221217 DOI: 10.3390/brainsci12060751]
Abstract
BACKGROUND Multi-modal neuroimaging with an appropriate atlas is vital for effectively differentiating mild cognitive impairment (MCI) from healthy controls (HC). METHODS The resting-state functional magnetic resonance imaging (rs-fMRI) and structural MRI (sMRI) of 69 MCI patients and 61 HC subjects were collected. Then, gray matter volumes from the sMRI and Hurst exponent (HE) values from the rs-fMRI data were extracted in the Automated Anatomical Labeling (AAL-90), Brainnetome (BN-246), Harvard-Oxford (HOA-112) and AAL3-170 atlases. Next, these characteristics were selected with a minimal redundancy maximal relevance (mRMR) algorithm and a sequential feature collection method in single or multiple modalities, and only the optimal features were retained after this procedure. Lastly, the retained characteristics served as the input features for the support vector machine (SVM)-based method to classify MCI patients, and performance was estimated with leave-one-out cross-validation (LOOCV). RESULTS Our proposed method achieved its best performance of 92.00% accuracy, 94.92% specificity and 89.39% sensitivity with the sMRI features in the AAL-90 atlas and the fMRI features in the HOA-112 atlas, which was much better than using single-modal or single-atlas features. CONCLUSION The results demonstrated that the multi-modal and multi-atlas integrated method could effectively recognize MCI patients, and could be extended to various neurological and neuropsychiatric diseases.
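The LOOCV protocol used in this evaluation is simple to state but easy to get subtly wrong (each subject must be held out of both training and scoring exactly once). The sketch below is a minimal illustration with a nearest-centroid stand-in for the paper's SVM; the helper names and the stand-in classifier are assumptions, not the authors' pipeline.

```python
import numpy as np

def loocv_accuracy(X, y, fit_predict):
    """Leave-one-out cross-validation: hold each sample out in turn, fit on
    the rest, and score the held-out prediction. `fit_predict(Xtr, ytr, xte)`
    can be any classifier with that signature (the paper used an SVM)."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        hits += fit_predict(X[mask], y[mask], X[i]) == y[i]
    return hits / len(y)

def nearest_centroid(Xtr, ytr, xte):
    """Minimal stand-in classifier: predict the class whose training mean
    is closest to the test point."""
    classes = np.unique(ytr)
    dists = [np.linalg.norm(xte - Xtr[ytr == c].mean(axis=0)) for c in classes]
    return classes[int(np.argmin(dists))]
```

With only 130 subjects, LOOCV uses the data maximally, which is why small neuroimaging studies like this one favour it over k-fold splits.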
Affiliation(s)
- Zhuqing Long
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Jie Li
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- Haitao Liao
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- Li Deng
- Department of Data Assessment and Examination, Hunan Children’s Hospital, Changsha 410007, China
- Yukeng Du
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- Jianghua Fan
- Department of Pediatric Emergency Center, Emergency Generally Department I, Hunan Children’s Hospital, Changsha 410007, China
- Xiaofeng Li
- Hunan Guangxiu Hospital, Hunan Normal University, Changsha 410006, China
- Jichang Miao
- Department of Medical Devices, Nanfang Hospital, Guangzhou 510515, China
- Shuang Qiu
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- Chaojie Long
- Medical Apparatus and Equipment Deployment, Hunan Children’s Hospital, Changsha 410007, China
- Bin Jing
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
24
Okyay S, Adar N. Dementia-related user-based collaborative filtering for imputing missing data and generating a reliability scale on clinical test scores. PeerJ 2022; 10:e13425. [PMID: 35642196 PMCID: PMC9148556 DOI: 10.7717/peerj.13425]
Abstract
Medical doctors may struggle to diagnose dementia, particularly when clinical test scores are missing or incorrect. When in doubt, both morphometrics and demographics are crucial for examining dementia in medicine. This study aims to impute and verify clinical test scores with brain MRI analysis and additional demographics, thereby proposing a decision support system that improves diagnosis and prognosis in an easy-to-understand manner. We therefore impute the missing clinical test score values by unsupervised, dementia-related, user-based collaborative filtering to minimize errors. By analyzing succession rates, we propose a reliability scale that can be utilized to check the consistency of existing clinical test scores. The complete base of 816 ADNI1-screening samples was processed, and a hybrid set of 603 features was handled. Moreover, the detailed parameters in use, such as the best neighborhood and input features, were evaluated for further comparative analysis. Overall, certain collaborative filtering configurations outperformed alternative state-of-the-art imputation techniques. The imputation system and reliability scale based on the proposed methodology are promising for supporting the clinical tests.
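User-based collaborative filtering transfers naturally from recommender systems to score imputation: a subject's missing clinical score is filled from the most similar subjects' scores. The sketch below is an illustrative minimal version (cosine similarity over commonly observed scores, a fixed neighborhood size k); the paper tunes the neighborhood and features, so treat every choice here as an assumption.

```python
import numpy as np

def impute_user_based(R, user, item, k=2):
    """Fill subject `user`'s missing score `item` in the subjects-by-scores
    matrix R (NaN = missing): average that score over the k most similar
    subjects, where similarity is cosine over the commonly observed scores."""
    target = R[user]
    sims = []
    for u in range(R.shape[0]):
        if u == user or np.isnan(R[u, item]):
            continue
        both = ~np.isnan(target) & ~np.isnan(R[u])
        if not both.any():
            continue
        a, b = target[both], R[u][both]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append((a @ b / denom, R[u, item]))
    sims.sort(reverse=True)            # most similar neighbours first
    top = sims[:k]
    return sum(v for _, v in top) / len(top)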
Affiliation(s)
- Savas Okyay
- Computer Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
- Computer Engineering, Eskisehir Technical University, Eskisehir, Turkey
- Nihat Adar
- Computer Engineering, Eskisehir Osmangazi University, Eskisehir, Turkey
25
Kamran SA, Hossain KF, Moghnieh H, Riar S, Bartlett A, Tavakkoli A, Sanders KM, Baker SA. New open-source software for subcellular segmentation and analysis of spatiotemporal fluorescence signals using deep learning. iScience 2022; 25:104277. [PMID: 35573197 PMCID: PMC9095751 DOI: 10.1016/j.isci.2022.104277]
Abstract
Advancements in cellular imaging instrumentation, together with readily available optogenetic and fluorescence sensors, have created a profound need for fast, accurate, and standardized analysis. Deep-learning architectures have revolutionized the field of biomedical image analysis and have achieved state-of-the-art accuracy. Despite these advancements, deep-learning architectures for the segmentation of subcellular fluorescence signals are lacking. Cellular dynamic fluorescence signals can be plotted and visualized using spatiotemporal maps (STMaps), and currently their segmentation and quantification are hindered by slow workflow speed and lack of accuracy, especially for large datasets. In this study, we provide a software tool that utilizes a deep-learning methodology to fundamentally overcome signal segmentation challenges. The software framework demonstrates highly optimized and accurate calcium signal segmentation and provides a fast analysis pipeline that can accommodate different patterns of signals across multiple cell types. The software allows seamless data accessibility, quantification, and graphical visualization and enables large dataset analysis throughput.
Affiliation(s)
- Sharif Amit Kamran
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
- Hussein Moghnieh
- Department of Electrical and Computer Engineering, McGill University, Montréal, QC H3A 0E9, Canada
- Sarah Riar
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Allison Bartlett
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Alireza Tavakkoli
- Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
- Kenton M. Sanders
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
- Salah A. Baker
- Department of Physiology and Cell Biology, University of Nevada, Reno School of Medicine, Anderson Medical Building MS352, Reno, NV 89557, USA
26
El-Sappagh S, Saleh H, Ali F, Amer E, Abuhmed T. Two-stage deep learning model for Alzheimer’s disease detection and prediction of the mild cognitive impairment time. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07263-9]
27
Oliver LD, Hawco C, Viviano JD, Voineskos AN. From the Group to the Individual in Schizophrenia Spectrum Disorders: Biomarkers of Social Cognitive Impairments and Therapeutic Translation. Biol Psychiatry 2022; 91:699-708. [PMID: 34799097 DOI: 10.1016/j.biopsych.2021.09.007]
Abstract
People with schizophrenia spectrum disorders (SSDs) often experience persistent social cognitive impairments, associated with poor functional outcome. There are currently no approved treatment options for these debilitating symptoms, highlighting the need for novel therapeutic strategies. Work to date has elucidated differential social processes and underlying neural circuitry affected in SSDs, which may be amenable to modulation using neurostimulation. Further, advances in functional connectivity mapping and electric field modeling may be used to identify individualized treatment targets to maximize the impact of brain stimulation on social cognitive networks. Here, we review literature supporting a roadmap for translating functional connectivity biomarker discovery to individualized treatment development for social cognitive impairments in SSDs. First, we outline the relevance of social cognitive impairments in SSDs. We review machine learning approaches for dimensional brain-behavior biomarker discovery, emphasizing the importance of individual differences. We synthesize research showing that brain stimulation techniques, such as repetitive transcranial magnetic stimulation, can be used to target relevant networks. Further, functional connectivity-based individualized targeting may enhance treatment response. We then outline recent approaches to account for neuroanatomical variability and optimize coil positioning to individually maximize target engagement. Overall, the synthesized literature provides support for the utility and feasibility of this translational approach to precision treatment. The proposed roadmap to translate biomarkers of social cognitive impairments to individualized treatment is currently under evaluation in precision-guided trials. Such a translational approach may also be applicable across conditions and generalizable for the development of individualized neurostimulation targeting other behavioral deficits.
Affiliation(s)
- Lindsay D Oliver
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Colin Hawco
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Joseph D Viviano
- Mila-Quebec Artificial Intelligence Institute, Montreal, Quebec, Canada
- Aristotle N Voineskos
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
28
Yue H, Liu J, Li J, Kuang H, Lang J, Cheng J, Peng L, Han Y, Bai H, Wang Y, Wang Q, Wang J. MLDRL: Multi-loss disentangled representation learning for predicting esophageal cancer response to neoadjuvant chemoradiotherapy using longitudinal CT images. Med Image Anal 2022; 79:102423. [DOI: 10.1016/j.media.2022.102423]
29
Nie Y, Huang C, Liang H, Xu H. Adversarial and Implicit Modality Imputation with Applications to Depression Early Detection. Artif Intell 2022. [DOI: 10.1007/978-3-031-20500-2_19]
30
Liu Y, Yue L, Xiao S, Yang W, Shen D, Liu M. Assessing clinical progression from subjective cognitive decline to mild cognitive impairment with incomplete multi-modal neuroimages. Med Image Anal 2022; 75:102266. [PMID: 34700245 PMCID: PMC8678365 DOI: 10.1016/j.media.2021.102266]
Abstract
Accurately assessing clinical progression from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) is crucial for early intervention of pathological cognitive decline. Multi-modal neuroimaging data such as T1-weighted magnetic resonance imaging (MRI) and positron emission tomography (PET) help provide objective and supplementary disease biomarkers for computer-aided diagnosis of MCI. However, there are few studies dedicated to SCD progression prediction since subjects usually lack one or more imaging modalities. Besides, one usually has a limited number (e.g., tens) of SCD subjects, negatively affecting model robustness. To this end, we propose a Joint neuroimage Synthesis and Representation Learning (JSRL) framework for SCD conversion prediction using incomplete multi-modal neuroimages. The JSRL contains two components: 1) a generative adversarial network to synthesize missing images and generate multi-modal features, and 2) a classification network to fuse multi-modal features for SCD conversion prediction. The two components are incorporated into a joint learning framework by sharing the same features, encouraging effective fusion of multi-modal features for accurate prediction. A transfer learning strategy is employed in the proposed framework by leveraging a model trained on the Alzheimer's Disease Neuroimaging Initiative (ADNI) with MRI and fluorodeoxyglucose PET from 863 subjects for both the Chinese Longitudinal Aging Study (CLAS) with only MRI from 76 SCD subjects and the Australian Imaging, Biomarkers and Lifestyle (AIBL) study with MRI from 235 subjects. Experimental results suggest that the proposed JSRL yields superior performance in SCD and MCI conversion prediction and cross-database neuroimage synthesis, compared with several state-of-the-art methods.
Affiliation(s)
- Yunbi Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Ling Yue
- Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200240, China
- Shifu Xiao
- Department of Geriatric Psychiatry, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200240, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
31
Yang S, Bornot JMS, Fernandez RB, Deravi F, Wong-Lin K, Prasad G. Integrated space-frequency-time domain feature extraction for MEG-based Alzheimer's disease classification. Brain Inform 2021; 8:24. [PMID: 34725742 PMCID: PMC8560870 DOI: 10.1186/s40708-021-00145-1]
Abstract
Magnetoencephalography (MEG) has been combined with machine learning techniques to recognize Alzheimer's disease (AD), one of the most common forms of dementia. However, most previous studies are limited to binary classification and do not fully utilize the two available MEG modalities (extracted using magnetometer and gradiometer sensors). Because AD consists of several stages of progression, this study addresses that limitation by using both magnetometer and gradiometer data to discriminate between participants with AD, AD-related mild cognitive impairment (MCI), and healthy control (HC) participants in a three-class classification problem. A series of wavelet-based biomarkers are developed and evaluated, which concurrently leverage the spatial, frequency and time domain characteristics of the signal. A bimodal recognition system based on an improved score-level fusion approach is proposed to reinforce interpretation of the brain activity captured by magnetometers and gradiometers. In this preliminary study, the markers derived from the gradiometer tended to outperform the magnetometer-based markers. Interestingly, out of the total 10 regions of interest, the left frontal lobe demonstrates about 8% higher mean recognition rate than the second-best performing region (the left temporal lobe) for AD/MCI/HC classification. Among the four types of markers proposed in this work, the spatial marker developed using wavelet coefficients provided the best recognition performance for the three-way classification. Overall, the proposed approach provides promising results for the potential of AD/MCI/HC three-way classification utilizing the bimodal MEG data.
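Score-level fusion, as used for the bimodal decision here, means each modality's classifier emits per-class scores and the final class comes from a combined score. The sketch below is a minimal convex-mixture version; the weight value and the choice to favour the gradiometer are illustrative assumptions (the paper uses an improved fusion scheme, not this plain mixture).

```python
import numpy as np

def fuse_scores(mag_scores, grad_scores, w=0.4):
    """Score-level fusion for a three-way (e.g. AD/MCI/HC) decision:
    combine per-class scores from the magnetometer and gradiometer
    classifiers as a convex mixture and return the arg-max class.
    w < 0.5 weights the gradiometer more heavily."""
    fused = w * np.asarray(mag_scores) + (1 - w) * np.asarray(grad_scores)
    return int(np.argmax(fused)), fused
```

Fusing at the score level (rather than concatenating features) lets each modality keep its own classifier, which is convenient when the two sensor types need different preprocessing.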
Affiliation(s)
- Su Yang
- Department of Computer Science, Swansea University, Swansea, UK
- Jose Miguel Sanchez Bornot
- Intelligent Systems Research Centre, School of Computing, Eng & Intel. Sys, Ulster University, Derry-Londonderry, Northern Ireland, UK
- Farzin Deravi
- School of Engineering and Digital Arts at the University of Kent, Canterbury, UK
- KongFatt Wong-Lin
- Intelligent Systems Research Centre, School of Computing, Eng & Intel. Sys, Ulster University, Derry-Londonderry, Northern Ireland, UK
- Girijesh Prasad
- Intelligent Systems Research Centre, School of Computing, Eng & Intel. Sys, Ulster University, Derry-Londonderry, Northern Ireland, UK
32
Lei B, Cheng N, Frangi AF, Wei Y, Yu B, Liang L, Mai W, Duan G, Nong X, Li C, Su J, Wang T, Zhao L, Deng D, Zhang Z. Auto-weighted centralised multi-task learning via integrating functional and structural connectivity for subjective cognitive decline diagnosis. Med Image Anal 2021; 74:102248. [PMID: 34597938 DOI: 10.1016/j.media.2021.102248]
Abstract
Early diagnosis and intervention of mild cognitive impairment (MCI) and its early stage (i.e., subjective cognitive decline (SCD)) can delay or reverse the disease progression. However, accurately discriminating between SCD, MCI and healthy subjects remains challenging. This paper proposes an auto-weighted centralised multi-task (AWCMT) learning framework for differential diagnosis of SCD and MCI. AWCMT is based on structural and functional connectivity information inferred from magnetic resonance imaging (MRI). Specifically, we devise a novel multi-task learning algorithm to combine neuroimaging functional and structural connectivity information. We construct a functional brain network through a sparse and low-rank machine learning method, and a structural brain network via fibre bundle tracking. The two networks are constructed separately and independently. Multi-task learning is then used to integrate features from the functional and structural connectivity, so each task's significance is learned automatically in a balanced way. By combining the functional and structural information, the most informative features of SCD and MCI are obtained for diagnosis. Extensive experiments on public and self-collected datasets demonstrate that the proposed algorithm classifies SCD, MCI and healthy people better than traditional algorithms. The newly proposed method has good interpretability, as it is able to discover the most disease-related brain regions and their connectivity. The results agree well with current clinical findings and provide new insights into early AD detection based on multi-modal neuroimaging.
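A functional brain network of the kind this framework starts from is, at its simplest, a correlation matrix over regional time series with weak edges removed. The sketch below replaces the paper's sparse low-rank learner with a plain proportional threshold (an explicit simplifying assumption), just to make the network-construction step concrete; all names are illustrative.

```python
import numpy as np

def functional_network(ts, density=0.2):
    """Illustrative functional-network construction: Pearson correlation
    between regional time series (regions x timepoints), keeping only the
    strongest |r| edges at the given edge density."""
    r = np.corrcoef(ts)                       # regions x regions
    np.fill_diagonal(r, 0.0)
    iu = np.triu_indices_from(r, k=1)
    cut = np.quantile(np.abs(r[iu]), 1 - density)
    return np.where(np.abs(r) >= cut, r, 0.0)
```

The structural network built from fibre tracking plays the same adjacency-matrix role, which is what allows the multi-task learner to treat the two modalities symmetrically.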
Affiliation(s)
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Nina Cheng
- CISTIB, School of Computing and LICAMM, School of Medicine, University of Leeds, Leeds, United Kingdom
- Alejandro F Frangi
- CISTIB, School of Computing and LICAMM, School of Medicine, University of Leeds, Leeds, United Kingdom; Department of Cardiovascular Sciences, and Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium; Alan Turing Institute, London, United Kingdom
- Yichen Wei
- Department of Radiology, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Bihan Yu
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Lingyan Liang
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China
- Wei Mai
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Gaoxiong Duan
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China
- Xiucheng Nong
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Chong Li
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Jiahui Su
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Lihua Zhao
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
| | - Demao Deng
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China.
| | - Zhiguo Zhang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China.
| |
Collapse
33
Grueso S, Viejo-Sobera R. Machine learning methods for predicting progression from mild cognitive impairment to Alzheimer's disease dementia: a systematic review. Alzheimers Res Ther 2021; 13:162. [PMID: 34583745 PMCID: PMC8480074 DOI: 10.1186/s13195-021-00900-w] [Citation(s) in RCA: 74] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 09/12/2021] [Indexed: 01/18/2023]
Abstract
BACKGROUND Increasing lifespans in our society are a double-edged sword that entails a growing number of patients with neurocognitive disorders, of which Alzheimer's disease is the most prevalent. Advances in medical imaging and computational power enable new methods for the early detection of neurocognitive disorders, with the goal of preventing or reducing cognitive decline. Computer-aided image analysis and early detection of changes in cognition are a promising approach for patients with mild cognitive impairment, sometimes a prodromal stage of Alzheimer's disease dementia. METHODS We conducted a systematic review, following PRISMA guidelines, of studies in which machine learning was applied to neuroimaging data to predict whether patients with mild cognitive impairment would develop Alzheimer's disease dementia or remain stable. After removing duplicates, we screened 452 studies and selected 116 for qualitative analysis. RESULTS Most studies used magnetic resonance imaging (MRI) and positron emission tomography (PET) data, and some used magnetoencephalography. The datasets were mainly drawn from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, with some exceptions. The most common algorithm was the support vector machine, with a mean accuracy of 75.4%, while convolutional neural networks achieved a higher mean accuracy of 78.5%. Studies combining MRI and PET achieved better overall classification accuracy than studies using a single neuroimaging technique. In general, more complex models, such as those based on deep learning combined with multimodal and multidimensional data (neuroimaging, clinical, cognitive, genetic, and behavioral), achieved the best performance. CONCLUSIONS Although the performance of the different methods still has room for improvement, the results are promising, and this methodology has great potential as a support tool for clinicians and healthcare professionals.
Affiliation(s)
- Sergio Grueso
- Cognitive NeuroLab, Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Rambla del Poblenou 156, 08018, Barcelona, Spain
- Raquel Viejo-Sobera
- Cognitive NeuroLab, Faculty of Health Sciences, Universitat Oberta de Catalunya (UOC), Rambla del Poblenou 156, 08018, Barcelona, Spain
34
Xia Z, Zhou T, Mamoon S, Lu J. Recognition of Dementia Biomarkers With Deep Finer-DBN. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1926-1935. [PMID: 34506288 DOI: 10.1109/tnsre.2021.3111989] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The treatment of neurodegenerative diseases is expensive, and long-term treatment places a heavy burden on families. Accumulating evidence suggests that the high conversion rate could be reduced if clinical interventions were applied at the early stage of brain disease. Thus, a variety of deep learning methods have been used to recognize the early stages of neurodegenerative diseases for clinical intervention and treatment. However, most existing methods ignore the issue of sample imbalance, which often makes it difficult to train an effective model owing to the lack of a large number of negative samples. To address this problem, we propose a two-stage method that learns the compression and recovery rules of normal subjects so that potential negative samples can be detected. The experimental results show that the proposed method not only achieves strong recognition results but also offers an explanation consistent with physiological mechanisms. Most importantly, the deep learning model does not need to be retrained for each type of disease, so it can be widely applied to the diagnosis of various brain diseases. Furthermore, this research has great potential for understanding regional dysfunction in various brain diseases.
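The two-stage compress-and-recover idea can be sketched as reconstruction-error scoring. `anomaly_scores` and its toy compress/recover functions below are illustrative stand-ins for the paper's trained deep network, not its actual implementation.

```python
def anomaly_scores(samples, compress, recover):
    """Score each subject by the error with which a model fitted on
    normal subjects compresses and then recovers it; subjects that
    reconstruct poorly are flagged as potential negative samples."""
    def reconstruction_error(x):
        r = recover(compress(x))
        return sum(abs(a - b) for a, b in zip(x, r)) / len(x)
    return [reconstruction_error(s) for s in samples]

# toy "model" learned from normals: keep the first feature, zero the rest
compress = lambda x: x[:1]
recover = lambda z: z + [0.0]
scores = anomaly_scores([[0.5, 0.0], [0.5, 0.8]], compress, recover)
```

The first subject matches the learned normal pattern (error 0), while the second deviates in its second feature and receives a higher score; only the model of normal subjects is ever trained, which matches the abstract's point that the model need not be retrained per disease.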
35
Abstract
To achieve sustainable development and improve market competitiveness, many manufacturers are shifting from traditional product manufacturing to service manufacturing. In this trend, the product service system (PSS) has become the mainstream form of supply, satisfying customers with individualized combinations of products and services. Diversified customer requirements can be met through PSS configuration based on modular design. PSS configuration can be framed as a multi-classification problem: customer requirements are the input, and a specific PSS is the output. This paper proposes an improved support vector machine (SVM) model optimized by principal component analysis (PCA) and the quantum particle swarm optimization (QPSO) algorithm, referred to as the PCA-QPSO-SVM model, to solve the PSS configuration problem. PCA is used to reduce the dimensionality of the customer requirements, and QPSO is used to optimize the internal parameters of the SVM to improve the classifier's prediction accuracy. In a case study, a dataset for central air conditioning PSS configuration is used to construct and test the PCA-QPSO-SVM model, which accurately predicts the optimal PSS configuration for specific customer requirements.
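A single QPSO update for the two SVM hyperparameters (e.g., C and gamma) can be sketched as below. This follows the standard quantum-behaved PSO position update; the paper's exact QPSO variant, contraction coefficient, and bounds are assumptions, and `qpso_step` is a hypothetical helper name.

```python
import math
import random

def qpso_step(positions, personal_best, global_best, beta=0.75):
    """One quantum-behaved PSO update: each particle is resampled
    around a local attractor (a random mix of its personal best and
    the global best), with spread set by its distance to the mean of
    all personal bests (mbest) and the contraction coefficient beta."""
    dims = len(global_best)
    mbest = [sum(p[d] for p in personal_best) / len(personal_best)
             for d in range(dims)]
    new_positions = []
    for x, pb in zip(positions, personal_best):
        particle = []
        for d in range(dims):
            phi = random.random()
            attractor = phi * pb[d] + (1.0 - phi) * global_best[d]
            u = 1.0 - random.random()  # in (0, 1], keeps log finite
            spread = beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
            particle.append(attractor + spread
                            if random.random() < 0.5
                            else attractor - spread)
        new_positions.append(particle)
    return new_positions
```

Each particle's fitness would then be the cross-validated accuracy of an SVM trained with that (C, gamma) pair on the PCA-reduced requirement vectors.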
36
Calhoun VD, Pearlson GD, Sui J. Data-driven approaches to neuroimaging biomarkers for neurological and psychiatric disorders: emerging approaches and examples. Curr Opin Neurol 2021; 34:469-479. [PMID: 34054110 PMCID: PMC8263510 DOI: 10.1097/wco.0000000000000967] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
PURPOSE OF REVIEW The 'holy grail' of clinical applications of neuroimaging to neurological and psychiatric disorders via personalized biomarkers has remained mostly elusive, despite considerable effort. However, there are many reasons to remain hopeful, as the field has made remarkable advances over the past few years, fueled by a variety of converging technical and data developments. RECENT FINDINGS We discuss a number of advances that are accelerating the push for neuroimaging biomarkers, including the advent of the 'neuroscience big data' era, biomarker data competitions, and the development of more sophisticated algorithms, including 'guided' data-driven approaches that facilitate automation of network-based analyses, dynamic connectivity, and deep learning. Another key advance is multimodal data fusion, which can provide convergent and complementary evidence pointing to possible mechanisms as well as increase predictive accuracy. SUMMARY The search for clinically relevant neuroimaging biomarkers for neurological and psychiatric disorders is rapidly accelerating. Here, we highlight some of these aspects, provide recent examples from studies in our group, and link to other ongoing work in the field. It is critical that access to and use of these advanced approaches become mainstream; this will help propel the community forward and facilitate the production of robust and replicable neuroimaging biomarkers.
Affiliation(s)
- Vince D Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia
- Godfrey D Pearlson
- Department of Psychiatry and Neuroscience, Yale School of Medicine, New Haven, Connecticut, USA
- Jing Sui
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia
- Institute of Automation, Chinese Academy of Sciences, and the University of Chinese Academy of Sciences, Beijing, China
37
Ning Z, Xiao Q, Feng Q, Chen W, Zhang Y. Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1632-1645. [PMID: 33651685 DOI: 10.1109/tmi.2021.3063150] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The fusion of multi-modal data (e.g., magnetic resonance imaging (MRI) and positron emission tomography (PET)) is widely used for accurate identification of Alzheimer's disease (AD), as it provides complementary structural and functional information. However, most existing methods simply concatenate multi-modal features in the original space and ignore their underlying associations, which may provide more discriminative characteristics for AD identification. Meanwhile, overcoming the overfitting caused by high-dimensional multi-modal data remains challenging. To this end, we propose a relation-induced multi-modal shared representation learning method for AD diagnosis. The proposed method integrates representation learning, dimension reduction, and classifier modeling into a unified framework. Specifically, the framework first obtains multi-modal shared representations by learning a bi-directional mapping between the original space and a shared space. Within this shared space, we use several relational regularizers (feature-feature, feature-label, and sample-sample) and auxiliary regularizers to encourage the learning of associations inherent in multi-modal data and to alleviate overfitting, respectively. We then project the shared representations into the target space for AD diagnosis. To validate the effectiveness of our approach, we conduct extensive experiments on two independent datasets (ADNI-1 and ADNI-2); the results demonstrate that our method outperforms several state-of-the-art methods.
38
Zhang GY, Chen XW, Zhou YR, Wang CD, Huang D, He XY. Kernelized multi-view subspace clustering via auto-weighted graph learning. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02365-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
39
Wang C, Yang G, Papanastasiou G, Tsaftaris SA, Newby DE, Gray C, Macnaught G, MacGillivray TJ. DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2021; 67:147-160. [PMID: 33658909 PMCID: PMC7763495 DOI: 10.1016/j.inffus.2020.10.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 10/19/2020] [Accepted: 10/21/2020] [Indexed: 05/22/2023]
Abstract
The cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly because of its ability to handle unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and the source-domain data, even with additional image alignment losses, because the CycleGAN generator network can encode the relative deformations and noise associated with different domains. This can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by a thin-plate spline (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations was evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. The results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality compared with several state-of-the-art CycleGAN-based methods.
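The cycle-consistency term that DiCyc builds on can be written in a few lines; the functions below are a minimal numeric sketch with plain Python callables standing in for the two generator networks, not the paper's implementation.

```python
def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(ai - bi) for ai, bi in zip(a, b)) / len(a)

def cycle_loss(batch, G, F):
    """CycleGAN-style cycle-consistency term: translating a sample to
    the other domain and back (F(G(x))) should reproduce the input.
    G and F stand in for the two generator networks."""
    return sum(l1(F(G(x)), x) for x in batch) / len(batch)
```

With invertible toy generators such as G(x) = 2x and F(y) = y/2 the loss is exactly zero; any deformation G introduces that F cannot undo shows up as a positive loss, which is precisely the residual that DiCyc's deformation-invariant formulation aims to separate from genuine appearance differences.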
Affiliation(s)
- Chengjia Wang
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK (corresponding author)
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Sotirios A. Tsaftaris
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK
- David E. Newby
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Calum Gray
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
- Gillian Macnaught
- Edinburgh Imaging Facility QMRI, University of Edinburgh, Edinburgh, UK
40
Zhou T, Fan DP, Cheng MM, Shen J, Shao L. RGB-D salient object detection: A survey. COMPUTATIONAL VISUAL MEDIA 2021; 7:37-69. [PMID: 33432275 PMCID: PMC7788385 DOI: 10.1007/s41095-020-0199-z] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 10/07/2020] [Indexed: 06/12/2023]
Abstract
Salient object detection, which simulates human visual perception in locating the most significant object(s) in a scene, has been widely applied to various computer vision tasks. Now, the advent of depth sensors means that depth maps can easily be captured; this additional spatial information can boost the performance of salient object detection. Although various RGB-D based salient object detection models with promising performance have been proposed over the past several years, an in-depth understanding of these models and the challenges in this field remains lacking. In this paper, we provide a comprehensive survey of RGB-D based salient object detection models from various perspectives, and review related benchmark datasets in detail. Further, as light fields can also provide depth maps, we review salient object detection models and popular benchmark datasets from this domain too. Moreover, to investigate the ability of existing models to detect salient objects, we have carried out a comprehensive attribute-based evaluation of several representative RGB-D based salient object detection models. Finally, we discuss several challenges and open directions of RGB-D based salient object detection for future research. All collected models, benchmark datasets, datasets constructed for attribute-based evaluation, and related code are publicly available at https://github.com/taozh2017/RGBD-SODsurvey.
Affiliation(s)
- Tao Zhou
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, United Arab Emirates
- Deng-Ping Fan
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, United Arab Emirates
- Jianbing Shen
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, United Arab Emirates
- Ling Shao
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, United Arab Emirates
41
Pan X, Phan TL, Adel M, Fossati C, Gaidon T, Wojak J, Guedj E. Multi-View Separable Pyramid Network for AD Prediction at MCI Stage by 18F-FDG Brain PET Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:81-92. [PMID: 32894711 DOI: 10.1109/tmi.2020.3022591] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Alzheimer's disease (AD), one of the main causes of death in elderly people, is characterized by mild cognitive impairment (MCI) at the prodromal stage. Nevertheless, only some MCI subjects progress to AD. The main objective of this paper is thus to identify, among MCI patients, those who will develop dementia of the AD type. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) serves as a neuroimaging modality for early diagnosis, as it reflects neural activity by measuring glucose uptake at rest. In this paper, we design a deep network on the 18F-FDG PET modality to address the problem of AD identification at the early MCI stage. To this end, we propose a Multi-view Separable Pyramid Network (MiSePyNet), in which representations are learned from the axial, coronal, and sagittal views of PET scans to provide complementary information and are then combined to make a joint decision. Unlike the 3D convolution operations widely and naturally used for 3D images, the proposed architecture applies separable convolutions, slice-wise and then spatial-wise, which retains spatial information and reduces the number of training parameters compared with 2D and 3D networks, respectively. Experiments on the ADNI dataset show that the proposed method yields better performance than both traditional and deep learning-based algorithms for predicting the progression of mild cognitive impairment, with a classification accuracy of 83.05%.
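The parameter saving from replacing full 3D kernels with successive slice-wise and spatial-wise convolutions can be illustrated with a simple count; the exact factorisation and layer shapes in MiSePyNet are assumptions here.

```python
def conv3d_params(c_in, c_out, k):
    """Weights in a full 3D convolution with a k x k x k kernel
    (bias terms omitted for simplicity)."""
    return c_in * c_out * k ** 3

def separable_params(c_in, c_out, k):
    """Weights in a slice-wise (k x k x 1) convolution followed by a
    spatial-wise (1 x 1 x k) convolution over the same channels."""
    return c_in * c_out * k * k + c_out * c_out * k

full = conv3d_params(64, 64, 3)
factored = separable_params(64, 64, 3)
```

For a 64-channel layer with 3x3x3 kernels the separable form needs fewer than half the weights (49,152 versus 110,592), which is the training-parameter reduction the abstract refers to.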
42
Zhou T, Fu H, Chen G, Shen J, Shao L. Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2772-2781. [PMID: 32086202 DOI: 10.1109/tmi.2020.2975344] [Citation(s) in RCA: 98] [Impact Index Per Article: 24.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution, whereby any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network learns representations for each individual modality, and a fusion network learns the common latent representation of the multi-modal data. A multi-modal synthesis network is then designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy effectively exploits the correlations among multiple modalities, with a proposed Mixed Fusion Block (MFB) adaptively weighting the different fusion strategies. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods.
43
Toward Effective Medical Image Analysis Using Hybrid Approaches—Review, Challenges and Applications. INFORMATION 2020. [DOI: 10.3390/info11030155] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Accurate medical image analysis plays a vital role in several clinical applications. Nevertheless, the immense and complex volume of data to be processed makes the design of effective algorithms difficult. The first aim of this paper is to examine this area of research and to provide relevant reference sources for medical image analysis. We then propose an effective hybrid solution to further improve the expected results. It combines the benefits of complementary approaches, such as statistical-based, variational-based, and atlas-based techniques, while reducing their individual drawbacks. In particular, a pipeline framework involving a preprocessing step, a classification step, and a refinement step with a variational-based method is developed to accurately identify pathological regions in biomedical images. The preprocessing step removes noise and improves image quality. Classification is then based on a symmetry-axis detection step and nonlinear learning with an SVM algorithm. Finally, a level set-based model refines the boundary detection of the region of interest. We show that an accurate initialization step can enhance final performance. Results are presented for the challenging application of brain tumor segmentation.