1
Huang S, Hao S, Si Y, Shen D, Cui L, Zhang Y, Lin H, Wang S, Gao Y, Guo X. Intelligent classification of major depressive disorder using rs-fMRI of the posterior cingulate cortex. J Affect Disord 2024; 358:399-407. [PMID: 38599253] [DOI: 10.1016/j.jad.2024.03.166]
Abstract
Major Depressive Disorder (MDD) is a widespread psychiatric condition that affects a significant portion of the global population. The classification and diagnosis of MDD are crucial for effective treatment. Traditional methods, based on clinical assessment, are subjective and rely on healthcare professionals' expertise. Recently, there has been growing interest in using resting-state functional magnetic resonance imaging (rs-fMRI) to objectively understand the neurobiology of MDD, complementing traditional diagnostics. The posterior cingulate cortex (PCC) is a pivotal brain region implicated in MDD, and its activity could be used to distinguish patients with MDD from healthy controls. Thus, this study presents an intelligent approach based on rs-fMRI data to enhance the classification of MDD. Original rs-fMRI data were collected from a cohort of 430 participants, comprising 197 patients and 233 healthy controls. The data were preprocessed using DPARSF, and amplitude of low-frequency fluctuation (ALFF) values were computed to reduce data dimensionality and feature count. Data associated with the PCC were then extracted. After eliminating redundant features, various types of support vector machines (SVMs) were employed as classifiers. Finally, we compared the performance of each algorithm, along with its respective optimal classifier, in terms of classification accuracy, true positive rate, and area under the receiver operating characteristic curve (AUC-ROC). The comparison showed that the random forest (RF) algorithm, combined with a Gaussian-kernel SVM classifier, performed best, achieving a classification accuracy of 81.9% and a true positive rate of 92.9%.
In conclusion, our study improves the classification of MDD by supplementing traditional methods with rs-fMRI and machine learning techniques, offering deeper neurobiological insight and improved accuracy, while emphasizing its role as an adjunct to clinical assessment.
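The pipeline this abstract describes (RF-based feature selection feeding a Gaussian-kernel SVM) can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic data, not the authors' code; the feature dimensionality, selection threshold, and injected group effect are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for PCC-masked ALFF features: 430 subjects
# (197 patients, 233 controls), 200 hypothetical voxel-level features.
rng = np.random.default_rng(0)
X = rng.normal(size=(430, 200))
y = np.array([1] * 197 + [0] * 233)
X[y == 1, :20] += 0.8  # inject a weak group difference for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random-forest feature selection followed by a Gaussian (RBF) SVM,
# mirroring the RF + Gaussian-SVM combination reported as best-performing.
clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0)),
    SVC(kernel="rbf", random_state=0),
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```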
Affiliation(s)
- Shihao Huang
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430000, China; National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China; Department of Neurobiology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing 100191, China
- Shisheng Hao
- Xiangyang No.1 People's Hospital, Hubei University of Medicine, China
- Yue Si
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China; Department of Neurobiology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing 100191, China
- Dan Shen
- Xinxiang Medical University, Xinxiang, Henan Province, China
- Lan Cui
- School of Automation, China University of Geosciences, China
- Yuandong Zhang
- School of Medicine, Wuhan University of Science and Technology, Wuhan, Hubei 430000, China
- Hang Lin
- School of Medicine, Wuhan University of Science and Technology, Wuhan, Hubei 430000, China
- Sanwang Wang
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430000, China
- Yujun Gao
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430000, China; Yichang Mental Health Center, China; Institute of Mental Health, Three Gorges University, China; Yichang City Clinical Research Center for Mental Disorders, China.
- Xin Guo
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan 430000, China.
2
Yu C, Pei H. Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification. Entropy (Basel) 2024; 26:400. [PMID: 38785649] [PMCID: PMC11119260] [DOI: 10.3390/e26050400]
Abstract
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance, which biases the model towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between the two domains, utilizing a confidence-based selection approach to pick the most useful synthesized images and create a pseudo-labeled, balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. The method maximizes the entropy of class distributions while minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into the framework, the method not only significantly enhances medical image classification in practical settings but also extends the application of entropy and information theory to deep learning and medical image processing. Extensive experiments demonstrate that DTTL achieves the best performance compared to existing state-of-the-art methods for imbalanced medical image classification tasks.
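One plausible reading of the DDT unit's confidence-based selection is an entropy filter on the current model's softmax outputs: synthesized or pseudo-labeled samples are kept only when their predictive entropy is low. A minimal sketch; the threshold value and the exact criterion are assumptions, not details taken from the paper.

```python
import numpy as np

def select_confident(probs: np.ndarray, entropy_threshold: float = 0.5):
    """Keep pseudo-labeled samples whose predictive entropy is below a
    threshold -- one simple reading of confidence-based selection.
    probs: (n_samples, n_classes) softmax outputs from the current model."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    keep = entropy < entropy_threshold
    pseudo_labels = probs.argmax(axis=1)
    return keep, pseudo_labels

probs = np.array([
    [0.95, 0.05],  # confident -> kept
    [0.55, 0.45],  # near-uniform, high entropy -> rejected
    [0.10, 0.90],  # confident -> kept
])
keep, labels = select_confident(probs)
```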
Affiliation(s)
- Chenglin Yu
- School of Electronic & Information Engineering and Communication Engineering, Guangzhou City University of Technology, Guangzhou 510800, China
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
3
Li T, Guo Y, Zhao Z, Chen M, Lin Q, Hu X, Yao Z, Hu B. Automated Diagnosis of Major Depressive Disorder With Multi-Modal MRIs Based on Contrastive Learning: A Few-Shot Study. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1566-1576. [PMID: 38512734] [DOI: 10.1109/tnsre.2024.3380357]
Abstract
Depression ranks among the most prevalent mood-related psychiatric disorders. Existing clinical diagnostic approaches relying on scale-based interviews are susceptible to individual and environmental variations. In contrast, the integration of neuroimaging techniques and computer science has provided compelling evidence for the quantitative assessment of major depressive disorder (MDD). However, one of the major challenges in computer-aided diagnosis of MDD is automatically and effectively mining complementary cross-modal information from limited datasets. In this study, we propose a few-shot learning framework that integrates multi-modal MRI data based on contrastive learning. The upstream task is designed to extract knowledge from heterogeneous data; the downstream task then transfers the acquired knowledge to the target dataset, where a hierarchical fusion paradigm integrates features across inter- and intra-modalities. The proposed model was evaluated on a set of multi-modal clinical data, achieving average accuracy and AUC of 73.52% and 73.09%, respectively. Our findings also reveal that brain regions within the default mode network and the cerebellum play a crucial role in diagnosis, which provides further direction in exploring reproducible biomarkers for MDD diagnosis.
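The hierarchical inter-/intra-modality fusion can be illustrated abstractly: each modality's features are first processed within-modality, then combined across modalities. A toy sketch; the normalization step and the mean-plus-concatenation recipe are illustrative assumptions, and the paper's actual fusion operators may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings from two MRI modalities for a batch of 4 subjects.
f_fmri = rng.normal(size=(4, 32))  # functional-MRI branch output
f_smri = rng.normal(size=(4, 32))  # structural-MRI branch output

def l2norm(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Intra-modality stage: process each branch independently.
h_fmri, h_smri = l2norm(f_fmri), l2norm(f_smri)

# Inter-modality stage: combine the branches -- here element-wise mean
# plus concatenation, one common fusion recipe.
fused = np.concatenate([h_fmri, h_smri, (h_fmri + h_smri) / 2], axis=1)
```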
4
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
5
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516] [DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts pertinent to methods such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of different methods in medical image analysis, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by application area, such as pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebral, and musculoskeletal imaging. Finally, open challenges and directions for future research are critically discussed; in particular, advanced algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China.
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China.
6
Luo Y, Chen W, Zhan L, Qiu J, Jia T. Multi-feature concatenation and multi-classifier stacking: An interpretable and generalizable machine learning method for MDD discrimination with rsfMRI. Neuroimage 2024; 285:120497. [PMID: 38142755] [DOI: 10.1016/j.neuroimage.2023.120497]
Abstract
Major depressive disorder (MDD) is a serious and heterogeneous psychiatric disorder that requires accurate diagnosis. Resting-state functional MRI (rsfMRI), which captures multiple perspectives on brain structure, function, and connectivity, is increasingly applied in the diagnosis and pathological research of MDD. Various machine learning algorithms have been developed to exploit the rich information in rsfMRI and discriminate MDD patients from normal controls. Despite recent advances, MDD discrimination accuracy has room for further improvement, and the generalizability and interpretability of discrimination methods are not sufficiently addressed. Here, we propose a machine learning method (MFMC) for MDD discrimination that concatenates multiple features and stacks multiple classifiers. MFMC is tested on the REST-meta-MDD data set, which contains 2428 subjects collected from 25 different sites. MFMC yields 96.9% MDD discrimination accuracy, a significant improvement over existing methods. In addition, the generalizability of MFMC is validated by its good performance when the training and testing subjects come from independent sites. The use of XGBoost as the meta-classifier allows us to probe the decision process of MFMC. We identify 13 feature values related to 9 brain regions, including the posterior cingulate gyrus, the orbital part of the superior frontal gyrus, and the angular gyrus, which contribute most to the classification and also show significant group-level differences. These 13 feature values alone reach 87% of MFMC's full performance with all feature values. These features may serve as clinically useful diagnostic and prognostic biomarkers for MDD in the future.
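The stacking structure this abstract describes — heterogeneous base classifiers whose out-of-fold predictions feed a boosted-tree meta-classifier — can be sketched with scikit-learn. Here `GradientBoostingClassifier` stands in for the XGBoost meta-classifier the paper uses, the base classifiers are arbitrary choices, and the data are synthetic; only the structure is the point.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for concatenated rsfMRI feature values.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Heterogeneous base classifiers stacked under a boosted-tree meta
# classifier; out-of-fold predictions (cv=5) train the meta level.
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=GradientBoostingClassifier(random_state=0),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```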
Affiliation(s)
- Yunsong Luo
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
- Wenyu Chen
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
- Ling Zhan
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
- Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education, Chongqing, 400715, PR China; School of Psychology, Southwest University (SWU), Chongqing, 400715, PR China; Southwest University Branch, Collaborative Innovation Center of Assessment Toward Basic Education Quality at Beijing Normal University, Chongqing, 400715, PR China.
- Tao Jia
- College of Computer and Information Science, Southwest University, Chongqing, 400715, PR China.
7
Wang X, Chu Y, Wang Q, Cao L, Qiao L, Zhang L, Liu M. Unsupervised contrastive graph learning for resting-state functional MRI analysis and brain disorder detection. Hum Brain Mapp 2023; 44:5672-5692. [PMID: 37668327] [PMCID: PMC10619386] [DOI: 10.1002/hbm.26469]
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) helps characterize regional interactions that occur in the human brain at rest. Existing research often attempts to explore fMRI biomarkers that best predict brain disease progression using machine/deep learning techniques. Previous fMRI studies have shown that learning-based methods usually require a large amount of labeled training data, limiting their utility in clinical practice, where annotating data is often time-consuming and labor-intensive. To this end, we propose an unsupervised contrastive graph learning (UCGL) framework for fMRI-based brain disease analysis, in which a pretext model is designed to generate informative fMRI representations from unlabeled training data, followed by model fine-tuning for downstream disease identification tasks. Specifically, in the pretext model, we first design a bi-level fMRI augmentation strategy to increase the sample size by augmenting blood-oxygen-level-dependent (BOLD) signals, and then employ two parallel graph convolutional networks for fMRI feature extraction in an unsupervised contrastive learning manner. The pretext model can be optimized on large-scale fMRI datasets without requiring labeled training data, and is then fine-tuned on the fMRI data to be analyzed for downstream disease detection in a task-oriented learning manner. We evaluate the proposed method on three rs-fMRI datasets for cross-site and cross-dataset learning tasks. Experimental results suggest that UCGL outperforms several state-of-the-art approaches in the automated diagnosis of three brain diseases (i.e., major depressive disorder, autism spectrum disorder, and Alzheimer's disease) with rs-fMRI data.
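The unsupervised contrastive objective at the heart of a pretext model like this is typically an InfoNCE-style loss over two augmented views of each sample. A minimal NumPy sketch; the temperature value and exact loss form are generic contrastive-learning choices, not details taken from the paper.

```python
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """InfoNCE-style contrastive loss: z1[i] and z2[i] embed two augmented
    views of sample i; matched pairs are pulled together while all other
    pairings are pushed apart."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))   # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
matched = info_nce(z, z + 0.01 * rng.normal(size=(8, 16)))  # aligned views
mismatched = info_nce(z, np.roll(z, 1, axis=0))             # broken pairing
```

The loss is low when each pair of views is correctly matched and high when the pairing is scrambled, which is exactly what drives the representation learning.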
Affiliation(s)
- Xiaochuan Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Ying Chu
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Qianqian Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Liang Cao
- Taian Tumor Prevention and Treatment Hospital, Taian, China
- Lishan Qiao
- School of Mathematics Science, Liaocheng University, Liaocheng, China
- Limei Zhang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
8
Yu C, Pei H. Dynamic Graph Clustering Learning for Unsupervised Diabetic Retinopathy Classification. Diagnostics (Basel) 2023; 13:3251. [PMID: 37892072] [PMCID: PMC10606586] [DOI: 10.3390/diagnostics13203251]
Abstract
Diabetic retinopathy (DR) is a common complication of diabetes that can lead to vision loss. Early diagnosis is crucial to prevent the progression of DR. In recent years, deep learning approaches have shown promising results in the development of intelligent and efficient systems for DR classification. However, one major drawback is the need for expert-annotated datasets, which are both time-consuming and costly to produce. To address these challenges, this paper proposes a novel dynamic graph clustering learning (DGCL) method for unsupervised classification of DR, which innovatively deploys Euclidean and topological features from fundus images for dynamic clustering. First, a multi-structural feature fusion (MFF) module extracts features from the structure of the fundus image and captures topological relationships among multiple samples, generating a fused representation. Second, a consistency smoothing clustering (CSC) module combines network updates with deep clustering to ensure stability and smooth performance improvement during model convergence, optimizing the clustering process by iteratively updating the network and refining the clustering results. Lastly, dynamic memory storage tracks and stores important information from previous iterations, enhancing training stability and convergence. In validation, experimental results on public datasets demonstrated the superiority of the proposed DGCL network.
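The CSC idea of smoothing cluster assignments across training iterations can be caricatured as an exponential moving average over soft assignments while re-clustering evolving embeddings. A toy sketch on synthetic two-cluster data; the EMA weight and the distance-based soft assignment are assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated synthetic "embedding" clusters of 50 samples each.
emb = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])

def soft_assign(emb: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Softmax over negative distances to cluster centers."""
    d = np.linalg.norm(emb[:, None] - centers[None], axis=2)
    logits = -d
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

prev = np.full((100, 2), 0.5)  # start from uniform assignments
for _ in range(3):  # stands in for alternating network update / re-clustering
    centers = KMeans(n_clusters=2, n_init=10,
                     random_state=0).fit(emb).cluster_centers_
    cur = soft_assign(emb, centers)
    prev = 0.7 * prev + 0.3 * cur  # consistency smoothing across iterations
labels = prev.argmax(axis=1)
```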
Affiliation(s)
- Chenglin Yu
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
9
Ma Y, Wang Q, Cao L, Li L, Zhang C, Qiao L, Liu M. Multi-Scale Dynamic Graph Learning for Brain Disorder Detection With Functional MRI. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3501-3512. [PMID: 37643109] [DOI: 10.1109/tnsre.2023.3309847]
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) has been widely used in the detection of brain disorders such as autism spectrum disorder based on various machine/deep learning techniques. Learning-based methods typically rely on functional connectivity networks (FCNs) derived from blood-oxygen-level-dependent time series of rs-fMRI data to capture interactions between brain regions of interest (ROIs). Graph neural networks have recently been used to extract fMRI features from graph-structured FCNs, but they cannot effectively characterize the spatiotemporal dynamics of FCNs; for example, the functional connectivity between brain ROIs changes dynamically over short periods of time. Moreover, many studies focus on the single-scale topology of FCNs, ignoring potentially complementary topological information at different spatial resolutions. To this end, this paper proposes a multi-scale dynamic graph learning (MDGL) framework to capture multi-scale spatiotemporal dynamic representations of rs-fMRI data for automated brain disorder diagnosis. The MDGL framework consists of three major components: 1) multi-scale dynamic FCN construction using multiple brain atlases to model multi-scale topological information, 2) multi-scale dynamic graph representation learning to capture the spatiotemporal information conveyed in fMRI data, and 3) multi-scale feature fusion and classification. Experimental results on two datasets show that MDGL outperforms several state-of-the-art methods.
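Dynamic FCN construction of the kind named in component 1 typically starts from sliding-window correlations of regional BOLD series. A minimal single-atlas sketch with synthetic data (the paper builds such networks at multiple scales from multiple atlases; window length and step here are arbitrary illustrative choices):

```python
import numpy as np

def dynamic_fcn(bold: np.ndarray, win: int = 30, step: int = 5) -> np.ndarray:
    """Sliding-window functional connectivity: one correlation matrix per
    window, a standard way to build a *dynamic* FCN from BOLD series.
    bold: (timepoints, n_rois) regional BOLD time series."""
    t, n = bold.shape
    mats = []
    for start in range(0, t - win + 1, step):
        segment = bold[start:start + win]
        mats.append(np.corrcoef(segment.T))  # (n_rois, n_rois) per window
    return np.stack(mats)                    # (n_windows, n_rois, n_rois)

rng = np.random.default_rng(0)
bold = rng.normal(size=(200, 90))            # e.g. a 90-ROI atlas
fc = dynamic_fcn(bold, win=30, step=5)
```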