1
Natarajan S, Humbert L, Ballester MAG. Domain adaptation using AdaBN and AdaIN for high-resolution IVD mesh reconstruction from clinical MRI. Int J Comput Assist Radiol Surg 2024; 19:2063-2068. PMID: 39002098. DOI: 10.1007/s11548-024-03233-9.
Abstract
PURPOSE Deep learning has firmly established its dominance in medical imaging applications. However, careful consideration must be exercised when transitioning a trained source model to adapt to an entirely distinct environment that deviates significantly from the training set. The majority of the efforts to mitigate this issue have predominantly focused on classification and segmentation tasks. In this work, we perform a domain adaptation of a trained source model to reconstruct high-resolution intervertebral disc meshes from low-resolution MRI. METHODS To address the outlined challenges, we use MRI2Mesh as the shape reconstruction network. It incorporates three major modules: image encoder, mesh deformation, and cross-level feature fusion. This feature fusion module is used to encapsulate local and global disc features. We evaluate two major domain adaptation techniques: adaptive batch normalization (AdaBN) and adaptive instance normalization (AdaIN) for the task of shape reconstruction. RESULTS Experiments conducted on distinct datasets, including data from different populations, machines, and test sites demonstrate the effectiveness of MRI2Mesh for domain adaptation. MRI2Mesh achieved up to a 14% decrease in Hausdorff distance (HD) and a 19% decrease in the point-to-surface (P2S) metric for both AdaBN and AdaIN experiments, indicating improved performance. CONCLUSION MRI2Mesh has demonstrated consistent superiority to the state-of-the-art Voxel2Mesh network across a diverse range of datasets, populations, and scanning protocols, highlighting its versatility. Additionally, AdaBN has emerged as a robust method compared to the AdaIN technique. Further experiments show that MRI2Mesh, when combined with AdaBN, holds immense promise for enhancing the precision of anatomical shape reconstruction in domain adaptation.
Affiliation(s)
- Sai Natarajan
- Galgo Medical S.L., Barcelona, Spain.
- BCNMedTech, Universitat Pompeu Fabra, Barcelona, Spain.
- Ludovic Humbert
- Galgo Medical S.L., Barcelona, Spain
- 3D-Shaper Medical S.L., Barcelona, Spain
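The two normalization-based adaptation techniques compared in this paper can be written down compactly. The sketch below is illustrative only: AdaBN re-estimates batch-norm statistics on target-domain features while keeping learned parameters fixed, and AdaIN aligns per-channel feature statistics of a content input to those of a style input. The function names and array shapes are assumptions for this toy NumPy version, not the MRI2Mesh implementation.

```python
import numpy as np

def adabn_stats(target_features, eps=1e-5):
    """AdaBN: re-estimate normalization statistics on target-domain features.

    The learned affine parameters (gamma, beta) stay fixed; only the running
    mean/variance are replaced by statistics computed on the target domain.
    """
    mu = target_features.mean(axis=0)
    var = target_features.var(axis=0)
    return mu, var

def batch_norm(x, mu, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch-norm transform using the supplied statistics."""
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def adain(content, style, eps=1e-5):
    """AdaIN: align per-channel statistics of content features to style features.

    content, style: arrays of shape (channels, n) holding flattened feature maps.
    """
    c_mu = content.mean(axis=1, keepdims=True)
    c_std = content.std(axis=1, keepdims=True)
    s_mu = style.mean(axis=1, keepdims=True)
    s_std = style.std(axis=1, keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu
```

In both cases the network weights are untouched; only normalization statistics change, which is why these methods are cheap to apply at test time.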
2
Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognition 2024; 151:110424. PMID: 38559674. PMCID: PMC10976951. DOI: 10.1016/j.patcog.2024.110424.
Abstract
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including client end, server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
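The server-end aggregation at the heart of most federated learning systems this survey covers is federated averaging (FedAvg): the server combines client model updates weighted by local dataset size, without ever seeing client data. A minimal NumPy sketch of that aggregation step (real systems add secure aggregation, communication compression, and multiple local epochs, none of which are shown here):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    client_weights: list (one entry per client) of lists of np.ndarray,
                    where each inner list holds that client's layer tensors.
    client_sizes:   number of local training samples on each client.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Weighted average of this layer's parameters across all clients.
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_avg)
    return aggregated
```

Only parameter tensors cross the network; the raw images stay on each site, which is what makes the approach attractive under the privacy constraints described above.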
3
Wang X, Chu Y, Wang Q, Cao L, Qiao L, Zhang L, Liu M. Unsupervised contrastive graph learning for resting-state functional MRI analysis and brain disorder detection. Hum Brain Mapp 2023; 44:5672-5692. PMID: 37668327. PMCID: PMC10619386. DOI: 10.1002/hbm.26469.
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) helps characterize regional interactions that occur in the human brain at a resting state. Existing research often attempts to explore fMRI biomarkers that best predict brain disease progression using machine/deep learning techniques. Previous fMRI studies have shown that learning-based methods usually require a large amount of labeled training data, limiting their utility in clinical practice where annotating data is often time-consuming and labor-intensive. To this end, we propose an unsupervised contrastive graph learning (UCGL) framework for fMRI-based brain disease analysis, in which a pretext model is designed to generate informative fMRI representations using unlabeled training data, followed by model fine-tuning to perform downstream disease identification tasks. Specifically, in the pretext model, we first design a bi-level fMRI augmentation strategy to increase the sample size by augmenting blood-oxygen-level-dependent (BOLD) signals, and then employ two parallel graph convolutional networks for fMRI feature extraction in an unsupervised contrastive learning manner. This pretext model can be optimized on large-scale fMRI datasets, without requiring labeled training data. This model is further fine-tuned on to-be-analyzed fMRI data for downstream disease detection in a task-oriented learning manner. We evaluate the proposed method on three rs-fMRI datasets for cross-site and cross-dataset learning tasks. Experimental results suggest that the UCGL outperforms several state-of-the-art approaches in automated diagnosis of three brain diseases (i.e., major depressive disorder, autism spectrum disorder, and Alzheimer's disease) with rs-fMRI data.
Affiliation(s)
- Xiaochuan Wang
- The School of Mathematics Science, Liaocheng University, Liaocheng, China
- Ying Chu
- The School of Mathematics Science, Liaocheng University, Liaocheng, China
- Qianqian Wang
- The Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Liang Cao
- Taian Tumor Prevention and Treatment Hospital, Taian, China
- Lishan Qiao
- The School of Mathematics Science, Liaocheng University, Liaocheng, China
- Limei Zhang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Mingxia Liu
- The Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
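The unsupervised contrastive objective that pretext models like UCGL optimize over two augmented views of each sample is typically an NT-Xent (normalized temperature-scaled cross-entropy) loss. The sketch below is a generic NumPy version of that loss over paired embeddings; the paper's graph-convolutional encoders and BOLD-signal augmentation are not reproduced here.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss for paired embeddings of two augmented views.

    z1, z2: (n, d) embedding matrices; row i of z1 and row i of z2 are the
    two views of the same sample (a positive pair). All other rows act as
    negatives.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # The positive partner of row i is row i+n (and vice versa).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

Minimizing this loss pulls the two views of each sample together while pushing apart embeddings of different samples, which is what lets the pretext model learn from unlabeled fMRI data.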
4
Zhang L, Wu J, Wang L, Wang L, Steffens DC, Qiu S, Potter GG, Liu M. Brain Anatomy-Guided MRI Analysis for Assessing Clinical Progression of Cognitive Impairment with Structural MRI. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023; 14227:109-119. PMID: 38390033. PMCID: PMC10883230. DOI: 10.1007/978-3-031-43993-3_11.
Abstract
Brain structural MRI has been widely used for assessing future progression of cognitive impairment (CI) based on learning-based methods. Previous studies generally suffer from the limited number of labeled training data, while there exists a huge amount of MRIs in large-scale public databases. Even without task-specific label information, brain anatomical structures provided by these MRIs can be used to boost learning performance intuitively. Unfortunately, existing research seldom takes advantage of such brain anatomy prior. To this end, this paper proposes a brain anatomy-guided representation (BAR) learning framework for assessing the clinical progression of cognitive impairment with T1-weighted MRIs. The BAR consists of a pretext model and a downstream model, with a shared brain anatomy-guided encoder for MRI feature extraction. The pretext model also contains a decoder for brain tissue segmentation, while the downstream model relies on a predictor for classification. We first train the pretext model through a brain tissue segmentation task on 9,544 auxiliary T1-weighted MRIs, yielding a generalizable encoder. The downstream model with the learned encoder is further fine-tuned on target MRIs for prediction tasks. We validate the proposed BAR on two CI-related studies with a total of 391 subjects with T1-weighted MRIs. Experimental results suggest that the BAR outperforms several state-of-the-art (SOTA) methods. The source code and pre-trained models are available at https://github.com/goodaycoder/BAR.
Affiliation(s)
- Lintao Zhang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jinjian Wu
- The First School of Clinical Medicine, Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Lihong Wang
- Department of Psychiatry, University of Connecticut School of Medicine, University of Connecticut, Farmington, CT, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- David C Steffens
- Department of Psychiatry, University of Connecticut School of Medicine, University of Connecticut, Farmington, CT, USA
- Shijun Qiu
- The First School of Clinical Medicine, Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
- Guy G Potter
- Department of Psychiatry and Behavioral Sciences, Duke University Medical Center, Durham, NC, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
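The two-stage transfer scheme BAR describes, pretext training of a shared encoder on an auxiliary segmentation task followed by downstream fine-tuning with a classification head, can be sketched structurally. Everything below (class names, shapes, the single linear-map "encoder") is a toy stand-in for illustration, not the authors' architecture; the actual training loops are omitted.

```python
import numpy as np

class SharedEncoder:
    """Stands in for the brain anatomy-guided encoder (one tanh linear map here)."""
    def __init__(self, in_dim, latent_dim, seed=0):
        self.W = np.random.RandomState(seed).randn(in_dim, latent_dim) * 0.1

    def __call__(self, x):
        return np.tanh(x @ self.W)

def pretext_features(encoder, images):
    """Pretext stage: the encoder feeds a segmentation decoder trained on
    auxiliary MRIs (decoder and training loop omitted); only the features
    it produces are shown."""
    return encoder(images)

def downstream_logits(encoder, images, predictor_w):
    """Downstream stage: reuse the pre-trained encoder and attach a
    task-specific prediction head for classification."""
    feats = encoder(images)
    return feats @ predictor_w
```

The point of the shared-encoder design is that the segmentation pretext task forces the encoder to capture anatomical structure, which the downstream predictor then exploits with far fewer labeled subjects.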
5
Wu M, Zhang L, Yap PT, Lin W, Zhu H, Liu M. Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation. Machine Learning in Medical Imaging (MLMI Workshop) 2023; 14348:1-11. PMID: 38389805. PMCID: PMC10883146. DOI: 10.1007/978-3-031-45673-2_1.
Abstract
Multi-site brain magnetic resonance imaging (MRI) has been widely used in clinical and research domains, but usually is sensitive to non-biological variations caused by site effects (e.g., field strengths and scanning protocols). Several retrospective data harmonization methods have shown promising results in removing these non-biological variations at feature or whole-image level. Most existing image-level harmonization methods are implemented through generative adversarial networks, which are generally computationally expensive and generalize poorly on independent data. To this end, this paper proposes a disentangled latent energy-based style translation (DLEST) framework for image-level structural MRI harmonization. Specifically, DLEST disentangles site-invariant image generation and site-specific style translation via a latent autoencoder and an energy-based model. The autoencoder learns to encode images into low-dimensional latent space, and generates faithful images from latent codes. The energy-based model is placed in between the encoding and generation steps, facilitating style translation from a source domain to a target domain implicitly. This allows highly generalizable image generation and efficient style translation through the latent space. We train our model on 4,092 T1-weighted MRIs in 3 tasks: histogram comparison, acquisition site classification, and brain tissue segmentation. Qualitative and quantitative results demonstrate the superiority of our approach, which generally outperforms several state-of-the-art methods.
Affiliation(s)
- Mengqi Wu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC 27599, USA
- Lintao Zhang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hongtu Zhu
- Department of Biostatistics and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
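The mechanism DLEST places between encoding and generation, moving a latent code toward a target-domain style by descending an energy function, can be illustrated with Langevin-style updates on a toy quadratic energy. In the real model the energy and the autoencoder are learned; the energy, its gradient, and the target center below are invented purely for this sketch.

```python
import numpy as np

def langevin_translate(z, energy_grad, steps=200, step_size=0.1, noise=0.0, seed=0):
    """Latent-space energy descent: z <- z - (s/2) * dE/dz + sqrt(s) * noise."""
    rng = np.random.RandomState(seed)
    z = z.copy()
    for _ in range(steps):
        z = z - 0.5 * step_size * energy_grad(z)
        if noise > 0:
            z = z + np.sqrt(step_size) * noise * rng.randn(*z.shape)
    return z

# Toy energy E(z) = ||z - target||^2 / 2: low where the latent code matches a
# hypothetical "target-domain" style center, so its gradient is (z - target).
target_center = np.array([2.0, -1.0])
grad_E = lambda z: z - target_center

z_source = np.array([0.0, 0.0])           # latent code of a source-domain image
z_translated = langevin_translate(z_source, grad_E)
```

Because the translation happens in the low-dimensional latent space rather than in image space, each update is cheap; the decoder then regenerates a harmonized image from the translated code.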
6
Kushol R, Wilman AH, Kalra S, Yang YH. DSMRI: Domain Shift Analyzer for Multi-Center MRI Datasets. Diagnostics (Basel) 2023; 13:2947. PMID: 37761314. PMCID: PMC10527875. DOI: 10.3390/diagnostics13182947.
Abstract
In medical research and clinical applications, the utilization of MRI datasets from multiple centers has become increasingly prevalent. However, inherent variability between these centers presents challenges due to domain shift, which can impact the quality and reliability of the analysis. Regrettably, the absence of adequate tools for domain shift analysis hinders the development and validation of domain adaptation and harmonization techniques. To address this issue, this paper presents a novel Domain Shift analyzer for MRI (DSMRI) framework designed explicitly for domain shift analysis in multi-center MRI datasets. The proposed model assesses the degree of domain shift within an MRI dataset by leveraging various MRI-quality-related metrics derived from the spatial domain. DSMRI also incorporates features from the frequency domain to capture low- and high-frequency information about the image. It further includes the wavelet domain features by effectively measuring the sparsity and energy present in the wavelet coefficients. Furthermore, DSMRI introduces several texture features, thereby enhancing the robustness of the domain shift analysis process. The proposed framework includes visualization techniques such as t-SNE and UMAP to demonstrate that similar data are grouped closely while dissimilar data are in separate clusters. Additionally, quantitative analysis is used to measure the domain shift distance, domain classification accuracy, and the ranking of significant features. The effectiveness of the proposed approach is demonstrated using experimental evaluations on seven large-scale multi-site neuroimaging datasets.
Affiliation(s)
- Rafsanjany Kushol
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Alan H. Wilman
- Departments of Radiology and Diagnostic Imaging and Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Sanjay Kalra
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Division of Neurology, Department of Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Yee-Hong Yang
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
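The kind of spatial- and frequency-domain descriptors DSMRI draws on for quantifying domain shift can be sketched for a single 2D slice. The specific features below (intensity statistics, a crude SNR proxy, and the share of spectral energy near DC) are illustrative examples of the feature families the abstract names, not the tool's exact metric set; per-dataset feature vectors like these would then feed t-SNE/UMAP or a domain classifier.

```python
import numpy as np

def domain_shift_features(img):
    """Compute simple spatial- and frequency-domain descriptors of a 2D slice."""
    img = img.astype(float)
    feats = {
        "mean": img.mean(),                      # spatial intensity statistics
        "std": img.std(),
        "snr": img.mean() / (img.std() + 1e-8),  # crude signal-to-noise proxy
    }
    # Frequency domain: fraction of spectral energy in a low-frequency block
    # around DC; scanner/protocol differences often shift this balance.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    feats["low_freq_ratio"] = low / (spec.sum() + 1e-8)
    return feats
```

Stacking such per-image feature vectors across sites is what makes the downstream visualization and domain-classification analyses in the paper possible.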