1. Wang X, Cui W, Wu H, Huo Y, Xu X. Hybrid-feature based spherical quasi-conformal registration for AD-induced hippocampal surface morphological changes. Computer Methods and Programs in Biomedicine 2024; 256:108372. [PMID: 39178503 DOI: 10.1016/j.cmpb.2024.108372]
Abstract
BACKGROUND AND OBJECTIVE Establishing accurate one-to-one morphological correspondence between different hippocampal surfaces is a solid foundation for the analysis of AD-induced hippocampal morphological changes. However, owing to the large variations between hippocampal surfaces, existing registration work either fails to obtain an accurate matching of local and overall morphological features or does not preserve bijectivity during parametric mapping. For this reason, this study proposes a hybrid-feature based spherical quasi-conformal registration (HSQR) method that can effectively maintain the diffeomorphic property while meeting hybrid-feature matching constraints in the spherical parameter domain. METHODS The HSQR algorithm is primarily achieved through hippocampal surface hybrid feature extraction and spherical quasi-conformal registration. First, hybrid features for a comprehensive morphological description of the hippocampal surface were established, which included essential anatomical features (landmarks) and mean curvature (intensity) features to ensure the accuracy of surface morphology alignment. Second, spherical parameterization was applied to genus-0 closed surfaces, such as the hippocampus, which maximized the preservation of the original local surface morphology through its area-preserving property. Third, a novel spherical quasi-conformal registration algorithm that can handle large deformations was developed. It transforms the 3D spherical parameter domain into a 2D planar parameter domain using iterative local stereographic projection to improve the efficiency of the registration algorithm. Subsequently, by controlling the Beltrami coefficient, the hybrid morphological features could be aligned while ensuring bijectivity before and after registration. RESULTS Using a cohort of 161 amyloid-β (Aβ)-positive patients with Alzheimer's disease (AD), 234 Aβ-positive individuals with mild cognitive impairment (MCI), and 266 Aβ-negative cognitively unimpaired (CU) individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, our experiments indicated that the HSQR-based whole bilateral hippocampal atrophy features demonstrated stronger statistical power for the group morphological differences of CU vs. MCI (q-value: 0.0453 for the left hippocampus and 0.0401 for the right) and of AD vs. MCI (q-value: 0.0282 for the left hippocampus and 0.0421 for the right). CONCLUSIONS Our registration algorithm may provide a solid foundation for the accurate quantification of hippocampal surface morphological changes for the differential diagnosis and tracking of AD.
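For orientation, a minimal NumPy sketch of the standard stereographic projection (and its inverse) that underlies the sphere-to-plane reduction described above; the paper's iterative local variant and its Beltrami-coefficient control are more involved and are not reproduced here.

```python
import numpy as np

def stereographic(points):
    # Project unit-sphere points (excluding the north pole) onto the
    # z = 0 plane: (x, y, z) -> (x / (1 - z), y / (1 - z)).
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([x / (1.0 - z), y / (1.0 - z)], axis=1)

def inverse_stereographic(uv):
    # Map planar points back onto the unit sphere.
    u, v = uv[:, 0], uv[:, 1]
    s = u**2 + v**2
    return np.stack([2 * u, 2 * v, s - 1], axis=1) / (s + 1)[:, None]
```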
Affiliation(s)
- Xiangying Wang
- First College of Clinical Medicine, Shandong University of Traditional Chinese Medicine, Jinan, China
- Wenqiang Cui
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Hongyun Wu
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Yongjun Huo
- Department of Radiology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Xiangqing Xu
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
2. Li M, Zheng H, Koh JC, Choe GY, Choi EJ, Nahm FS, Lee PB. Development of a Deep Learning Model for the Analysis of Dorsal Root Ganglion Chromatolysis in Rat Spinal Stenosis. J Pain Res 2024; 17:1369-1380. [PMID: 38600989 PMCID: PMC11005935 DOI: 10.2147/jpr.s444055]
Abstract
Objective To create a deep learning (DL) model that can accurately detect and classify three distinct types of rat dorsal root ganglion neurons: normal, segmental chromatolysis, and central chromatolysis. The DL model has the potential to improve the efficiency and precision of neuron classification in research related to spinal injuries and diseases. Methods H&E slide images were divided into an internal training set (80%) and a test set (20%). The training dataset was labeled by two pathologists using pre-defined grades. Using this dataset, a two-component DL model was developed: the first component was a convolutional neural network (CNN) trained to detect the region of interest (ROI), and the second was another CNN used for classification. Results A total of 240 lumbar dorsal root ganglion (DRG) pathology slide images from rats were analyzed. The internal testing results showed an accuracy of 93.13%, and the external dataset testing demonstrated an accuracy of 93.44%. Conclusion The DL model demonstrated a level of agreement comparable to that of pathologists in detecting and classifying normal and segmental chromatolysis neurons, although its agreement was slightly lower for central chromatolysis neurons. Significance The ability of DL to improve the accuracy and efficiency of pathological analysis suggests that it may have a role in enhancing medical decision-making.
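As a rough illustration of such a detect-then-classify cascade, a hedged PyTorch sketch follows; the paper does not specify the two CNN architectures, so an off-the-shelf Faster R-CNN detector and a ResNet-18 classifier are used here purely as stand-ins.

```python
import torch
import torchvision

# Stand-in architectures (assumptions, not the paper's networks):
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)          # stage 1: neuron vs. background
classifier = torchvision.models.resnet18(num_classes=3)  # stage 2: 3 grades

def classify_slide(image):
    # image: (3, H, W) float tensor scaled to [0, 1].
    detector.eval()
    classifier.eval()
    labels = []
    with torch.no_grad():
        rois = detector([image])[0]       # stage 1: locate candidate neurons
        for box, score in zip(rois["boxes"], rois["scores"]):
            if score < 0.5:               # assumed confidence cutoff
                continue
            x0, y0, x1, y1 = box.int().tolist()
            crop = image[:, y0:y1, x0:x1].unsqueeze(0)
            crop = torch.nn.functional.interpolate(crop, size=(224, 224))
            labels.append(classifier(crop).argmax(1).item())  # stage 2: grade
    return labels
```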
Affiliation(s)
- Meihui Li
- Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Anesthesiology and Pain Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Haiyan Zheng
- Department of Anesthesiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People’s Republic of China
- Jae Chul Koh
- Department of Anesthesiology and Pain Medicine, Korea University College of Medicine, Seoul, Korea
- Ghee Young Choe
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam, Korea
- Department of Pathology, Seoul National University College of Medicine, Seoul, Korea
- Eun Joo Choi
- Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Anesthesiology and Pain Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Francis Sahngun Nahm
- Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Anesthesiology and Pain Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Pyung Bok Lee
- Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul, Korea
- Department of Anesthesiology and Pain Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
3. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Systems with Applications 2024; 238:122094. [PMID: 38646063 PMCID: PMC11027476 DOI: 10.1016/j.eswa.2023.122094]
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step in quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose a novel 3D cell segmentation approach, DeepSeeded, a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances cell interior and border information using Euclidean distance transforms and detects cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances from a dense, intensity-inhomogeneous microscopy image volume. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population: bacterial biofilms. Experimental results on a synthetic and two real biofilm datasets suggest that the proposed method leads to superior segmentation results compared with state-of-the-art deep learning methods and a classical method.
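The classical seeded-watershed back end is easy to sketch. Assuming the CNN cascade has already produced a binary seed volume and a binary cell mask (both assumptions about the interface, not the paper's exact outputs), the final instance labels could be obtained along these lines with SciPy and scikit-image:

```python
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def seeded_watershed(seed_mask, cell_mask):
    # One integer label per connected seed component.
    seeds, _ = ndi.label(seed_mask)
    # Flood from the seeds over the inverted Euclidean distance transform,
    # so watershed lines form at the narrow necks between touching cells.
    distance = ndi.distance_transform_edt(cell_mask)
    return watershed(-distance, markers=seeds, mask=cell_mask)
```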
Affiliation(s)
- Tanjin Taher Toma
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, 22904, Virginia, USA
- Yibo Wang
- Department of Chemistry, University of Virginia, Charlottesville, 22904, Virginia, USA
- Andreas Gahlmann
- Department of Chemistry, University of Virginia, Charlottesville, 22904, Virginia, USA
- Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, 22903, Virginia, USA
- Scott T. Acton
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, 22904, Virginia, USA
4. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261 DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand the working mechanisms of the brain. However, processing and analyzing these data is challenging because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges, and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
5. Ma R, Hao L, Tao Y, Mendoza X, Khodeiry M, Liu Y, Shyu ML, Lee RK. RGC-Net: An Automatic Reconstruction and Quantification Algorithm for Retinal Ganglion Cells Based on Deep Learning. Transl Vis Sci Technol 2023; 12:7. [PMID: 37140906 PMCID: PMC10166122 DOI: 10.1167/tvst.12.5.7]
Abstract
Purpose The purpose of this study was to develop a deep learning-based, fully automated reconstruction and quantification algorithm that delineates the neurites and somas of retinal ganglion cells (RGCs). Methods We trained a deep learning-based multi-task image segmentation model, RGC-Net, that automatically segments the neurites and somas in RGC images. A total of 166 RGC scans with manual annotations from human experts were used to develop this model, of which 132 scans were used for training and the remaining 34 were reserved as testing data. Post-processing techniques removed speckles or dead cells from the soma segmentation results to further improve the robustness of the model. Quantification analyses were also conducted to compare five different metrics obtained by our automated algorithm and by manual annotation. Results Quantitatively, our segmentation model achieves an average foreground accuracy, background accuracy, overall accuracy, and Dice similarity coefficient of 0.692, 0.999, 0.997, and 0.691 for the neurite segmentation task, and 0.865, 0.999, 0.997, and 0.850 for the soma segmentation task, respectively. Conclusions The experimental results demonstrate that RGC-Net can accurately and reliably reconstruct neurites and somas in RGC images. We also demonstrate that our algorithm is comparable to human manually curated annotations in quantification analyses. Translational Relevance Our deep learning model provides a new tool that can trace and analyze RGC neurites and somas more efficiently and faster than manual analysis.
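The four reported metrics are standard for binary masks; a small NumPy helper along these lines would reproduce them, assuming foreground accuracy denotes recall on foreground pixels (the abstract does not spell out the exact definitions).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    # pred, truth: binary masks of the same shape.
    pred, truth = pred.astype(bool), truth.astype(bool)
    fg_acc = (pred & truth).sum() / max(truth.sum(), 1)        # recall on foreground
    bg_acc = (~pred & ~truth).sum() / max((~truth).sum(), 1)   # recall on background
    overall = (pred == truth).mean()                           # voxel-wise accuracy
    dice = 2 * (pred & truth).sum() / max(pred.sum() + truth.sum(), 1)
    return fg_acc, bg_acc, overall, dice
```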
Affiliation(s)
- Rui Ma
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Lili Hao
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Department of Ophthalmology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Yudong Tao
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ximena Mendoza
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mohamed Khodeiry
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Yuan Liu
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Mei-Ling Shyu
- School of Science and Engineering, University of Missouri-Kansas City, Kansas City, MO, USA
- Richard K. Lee
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
6. Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D Soma Detection in Large-Scale Whole Brain Images via a Two-Stage Neural Network. IEEE Transactions on Medical Imaging 2023; 42:148-157. [PMID: 36103445 DOI: 10.1109/tmi.2022.3206605]
Abstract
3D soma detection in whole-brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images with large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale, high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) filters out images without somas and generates coarse candidate images by combining the feature extraction abilities of multiple convolutional layers. This speeds up soma detection and reduces the computational complexity. In the second stage, to obtain the accurate locations of somas in the whole mouse brain images, the Scale Fusion Segmentation Network (SFS-Net) segments soma regions from the candidate images. Specifically, the SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining the encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. Experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for neuron reconstruction. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to research on soma detection, which will be released along with this paper.
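A hedged sketch of how such a classify-then-segment cascade might be driven over a terabyte-scale volume; here `classifier` and `segmenter` are placeholder callables standing in for MCC-Net and SFS-Net, whose architectures are given only in the paper.

```python
import numpy as np

def detect_somas(volume, classifier, segmenter, block=256, threshold=0.5):
    # classifier(sub) -> assumed soma-presence probability for a block;
    # segmenter(sub)  -> assumed binary soma mask of the same shape as sub.
    soma_mask = np.zeros(volume.shape, dtype=bool)
    for z in range(0, volume.shape[0], block):
        for y in range(0, volume.shape[1], block):
            for x in range(0, volume.shape[2], block):
                sub = volume[z:z+block, y:y+block, x:x+block]
                if classifier(sub) < threshold:   # stage 1: skip soma-free blocks
                    continue
                # Stage 2: run the costlier segmentation only on candidates.
                soma_mask[z:z+block, y:y+block, x:x+block] = segmenter(sub)
    return soma_mask
```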
7. Wang Y, Li X, Konanur M, Konkel B, Seyferth E, Brajer N, Liu JG, Bashir MR, Lafata KJ. Towards optimal deep fusion of imaging and clinical data via a model-based description of fusion quality. Med Phys 2022. [PMID: 36548913 DOI: 10.1002/mp.16181]
Abstract
BACKGROUND Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non-trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than is captured by any individual data source. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real-world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver. PURPOSE To develop a method for measuring optimal data fusion quality in deep learning problems utilizing both imaging data and clinical data. METHODS Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function, whose distribution takes the form of the classical Gibbs measure. The features of the FCL are then modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source-bias. To minimize this source-bias and optimize CNN performance, we implement a vector-growing encoding scheme called positional encoding, where low-dimensional clinical data are transcribed into a rich feature space that complements high-dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then apply our approach to patient data, where we optimize the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet-152 model that incorporates both images and blood markers as input. These two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework. RESULTS Numerical validation of our approach confirmed that the probability density function of a fused feature space converges to a source-specific probability density function when source data are improperly fused. Our numerical results demonstrate that this phenomenon can be quantified as a measure of fusion quality. On patient data, the fused model consisting of both imaging data and positionally encoded blood markers at the theoretically optimal fusion quality metric achieved an AUC of 0.74 and an accuracy of 0.71. This model was statistically better than the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood marker-only model (AUC = 0.58; accuracy = 0.60), and a variety of purposely sub-optimized fusion models (AUC = 0.61-0.70; accuracy = 0.58-0.69). CONCLUSIONS We introduced the concept of data fusion quality for multi-source deep learning problems involving both imaging and clinical data. We provided a theoretical framework, numerical validation, and a real-world application in abdominal radiology. Our data suggest that CT imaging and hepatic blood markers provide complementary diagnostic information when appropriately fused.
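The abstract's "vector-growing" positional encoding is not spelled out, so as an assumption the sketch below uses the standard transformer-style sinusoidal encoding to show how low-dimensional clinical scalars could be lifted into a richer feature space before concatenation with imaging features.

```python
import numpy as np

def positional_encoding(values, dim=32):
    # values: clinical scalars (e.g., blood markers), shape (n,).
    # Returns an (n, dim) matrix of sinusoids at geometrically spaced
    # frequencies; dim is an assumed hyperparameter.
    values = np.asarray(values, dtype=float).reshape(-1, 1)     # (n, 1)
    freqs = 1.0 / (10000.0 ** (np.arange(0, dim, 2) / dim))     # (dim / 2,)
    angles = values * freqs                                     # (n, dim / 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# The encoded markers would then be concatenated with the imaging
# features ahead of the fully connected layer described above.
```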
Affiliation(s)
- Yuqi Wang
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Xiang Li
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Meghana Konanur
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Brandon Konkel
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Nathan Brajer
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Jian-Guo Liu
- Department of Mathematics, Duke University, Durham, North Carolina, USA; Department of Physics, Duke University, Durham, North Carolina, USA
- Mustafa R Bashir
- Department of Radiology, Duke University, Durham, North Carolina, USA; Department of Medicine, Gastroenterology, Duke University, Durham, North Carolina, USA
- Kyle J Lafata
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA; Department of Radiology, Duke University, Durham, North Carolina, USA; Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
8. Mostafa FA, Elrefaei LA, Fouda MM, Hossam A. A Survey on AI Techniques for Thoracic Diseases Diagnosis Using Medical Images. Diagnostics (Basel) 2022; 12:3034. [PMID: 36553041 PMCID: PMC9777249 DOI: 10.3390/diagnostics12123034]
Abstract
Thoracic diseases refer to disorders that affect the lungs, heart, and other parts of the rib cage, such as pneumonia, novel coronavirus disease (COVID-19), tuberculosis, cardiomegaly, and fracture. Millions of people die every year from thoracic diseases; therefore, early detection of these diseases is essential and can save many lives. Previously, only highly experienced radiologists examined thoracic diseases, but recent developments in image processing and deep learning techniques are opening the door to the automated detection of these diseases. In this paper, we present a comprehensive review including: types of thoracic diseases; examination types of thoracic images; image pre-processing; models of deep learning applied to the detection of thoracic diseases (e.g., pneumonia, COVID-19, edema, fibrosis, tuberculosis, chronic obstructive pulmonary disease (COPD), and lung cancer); transfer learning background knowledge; ensemble learning; and future initiatives for improving the efficacy of deep learning models in applications that detect thoracic diseases. Through this survey paper, researchers may gain an overall and systematic understanding of deep learning applications in medical thoracic images. The review also includes a performance comparison of various models and a comparison of various datasets.
Affiliation(s)
- Fatma A. Mostafa
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Lamiaa A. Elrefaei
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, College of Science and Engineering, Idaho State University, Pocatello, ID 83209, USA
- Aya Hossam
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
9. Qiao H, Chen L, Ye Z, Zhu F. Early Alzheimer's disease diagnosis with the contrastive loss using paired structural MRIs. Computer Methods and Programs in Biomedicine 2021; 208:106282. [PMID: 34343744 DOI: 10.1016/j.cmpb.2021.106282]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's Disease (AD) is a chronic and fatal neurodegenerative disease with progressive impairment of memory. Brain structural magnetic resonance imaging (sMRI) has been widely applied as an important biomarker of AD. Various machine learning approaches, especially deep learning-based models, have been proposed for the early diagnosis of AD and for monitoring disease progression on sMRI data. However, the requirement for a large number of training images still hinders the extensive usage of deep learning for AD diagnosis. In addition, due to the similarities in human whole-brain structure, finding the subtle brain changes is essential to effectively extract discriminative features from limited sMRI data. METHODS In this work, we proposed two types of contrastive losses with paired sMRIs to promote diagnostic performance using group categories (G-CAT) and varying subject mini-mental state examination (S-MMSE) information, respectively. Specifically, the G-CAT contrastive loss layer was used to learn closer feature representations from sMRIs with the same categories, while ranking information from S-MMSE assists the model in exploring subtle changes between individuals. RESULTS The model was trained on ADNI-1. Comparison with baseline methods was performed on MIRIAD and ADNI-2. For the classification task on MIRIAD, S-MMSE achieves an accuracy of 93.5%, a sensitivity of 96.6%, and a specificity of 94.9%. G-CAT and S-MMSE reach remarkable performance in terms of classification sensitivity and specificity, respectively. Compared with state-of-the-art methods, the proposed method achieves comparable results. CONCLUSION The proposed model could extract discriminative features under whole-brain similarity. Extensive experiments also support the accuracy of this model; in particular, it provides a better ability to identify uncertain samples, especially for the classification of subjects with MMSE scores in the range 22-27. Source code is freely available at https://github.com/fengduqianhe/ADComparative.
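For reference, the classical pairwise contrastive loss on paired embeddings can be sketched in PyTorch as below; this is only the generic form, on top of which the paper's G-CAT and S-MMSE variants add group-category and MMSE-ranking information.

```python
import torch

def contrastive_loss(feat_a, feat_b, same_group, margin=1.0):
    # feat_a, feat_b: (N, D) embeddings of paired sMRIs.
    # same_group: (N,) float tensor, 1.0 if a pair shares a diagnostic
    # group, 0.0 otherwise. Similar pairs are pulled together; dissimilar
    # pairs are pushed apart up to the margin.
    dist = torch.nn.functional.pairwise_distance(feat_a, feat_b)
    pos = same_group * dist.pow(2)
    neg = (1.0 - same_group) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()
```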
Affiliation(s)
- Hezhe Qiao
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lin Chen
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Zi Ye
- Johns Hopkins University, Baltimore, MD 21218, United States of America
- Fan Zhu
- Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China