1
Vakli P, Weiss B, Rozmann D, Erőss G, Nárai Á, Hermann P, Vidnyánszky Z. The effect of head motion on brain age prediction using deep convolutional neural networks. Neuroimage 2024; 294:120646. [PMID: 38750907] [DOI: 10.1016/j.neuroimage.2024.120646]
Abstract
Deep learning can be used effectively to predict participants' age from brain magnetic resonance imaging (MRI) data, and a growing body of evidence suggests that the difference between predicted and chronological age, referred to as the brain-predicted age difference (brain-PAD), is related to various neurological and neuropsychiatric disease states. A crucial aspect of the applicability of brain-PAD as a biomarker of individual brain health is whether and how brain-predicted age is affected by MR image artifacts commonly encountered in clinical settings. To investigate this issue, we trained and validated two different 3D convolutional neural network (CNN) architectures from scratch and tested the models on a separate dataset consisting of motion-free and motion-corrupted T1-weighted MRI scans from the same participants, the quality of which was rated by neuroradiologists from a clinical diagnostic point of view. Our results revealed a systematic increase in brain-PAD with worsening image quality for both models. This effect was also observed for images deemed usable from a clinical perspective, with brains appearing older in medium-quality than in good-quality images. These findings were further supported by significant associations between brain-PAD and standard image quality metrics, indicating larger brain-PAD for lower-quality images. Our results demonstrate a spurious effect of advanced brain aging as a result of head motion and underline the importance of controlling for image quality when using brain-predicted age based on structural neuroimaging data as a proxy measure for brain health.
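The brain-PAD metric at the heart of this abstract is simple arithmetic on top of the CNN's output; a minimal sketch with made-up ages (the study's CNN predictor is not reproduced here, and the numbers are purely illustrative):

```python
def brain_pad(predicted_age, chronological_age):
    """Brain-predicted age difference: positive values suggest an 'older'-looking brain."""
    return predicted_age - chronological_age

# Hypothetical predictions for the same subject from a motion-free and a
# motion-corrupted scan; the study's finding is that motion inflates brain-PAD.
clean_pad = brain_pad(predicted_age=62.1, chronological_age=60.0)
corrupt_pad = brain_pad(predicted_age=66.8, chronological_age=60.0)
```

The study's quality-control point follows directly: if `corrupt_pad` systematically exceeds `clean_pad` for the same subject, brain-PAD partly reflects image quality rather than brain health.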
Affiliation(s)
- Pál Vakli
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- Béla Weiss
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary; Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Óbuda University, Budapest 1034, Hungary
- Dorina Rozmann
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- György Erőss
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- Ádám Nárai
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary; Doctoral School of Biology and Sportbiology, Institute of Biology, Faculty of Sciences, University of Pécs, Pécs 7624, Hungary
- Petra Hermann
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- Zoltán Vidnyánszky
- Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
2
Taylor HP, Thung KH, Huynh KM, Lin W, Ahmad S, Yap PT. Functional Hierarchy of the Human Neocortex from Cradle to Grave. bioRxiv 2024:2024.06.14.599109. [PMID: 38915694] [PMCID: PMC11195193] [DOI: 10.1101/2024.06.14.599109]
Abstract
Recent evidence indicates that the organization of the human neocortex is underpinned by smooth spatial gradients of functional connectivity (FC). These gradients provide crucial insight into the relationship between the brain's topographic organization and the texture of human cognition. However, no studies to date have charted how intrinsic FC gradient architecture develops across the entire human lifespan. In this work, we model developmental trajectories of the three primary gradients of FC using a large, high-quality, and temporally dense functional MRI dataset spanning from birth to 100 years of age. The gradient axes, denoted sensorimotor-association (SA), visual-somatosensory (VS), and modulation-representation (MR), encode crucial hierarchical organizing principles of the brain in development and aging. By tracking their evolution throughout the human lifespan, we provide the first comprehensive low-dimensional normative reference of global FC hierarchical architecture. We observe significant age-related changes in global network features, with global markers of hierarchical organization increasing from birth to early adulthood and decreasing thereafter. During infancy and early childhood, FC organization is shaped by primary sensory processing, dense short-range connectivity, and immature association and control hierarchies. Functional differentiation of transmodal systems supported by long-range coupling drives a convergence toward adult-like FC organization during late childhood, while adolescence and early adulthood are marked by the expansion and refinement of the SA and MR hierarchies. While gradient topographies remain stable during late adulthood and aging, we observe decreases in global gradient measures of FC differentiation and complexity from 30 to 100 years. Examining cortical microstructure gradients alongside our functional gradients, we observed that structure-function gradient coupling undergoes differential lifespan trajectories across multiple gradient axes.
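FC gradients of the kind discussed above are typically obtained as a low-dimensional spectral embedding of an FC-derived affinity matrix. A toy sketch of that general idea, using a plain graph-Laplacian embedding on random data (an assumption for illustration, not the authors' diffusion-map pipeline or their parcellation):

```python
import numpy as np

# 50 parcels x 200 time points of synthetic "fMRI" signal.
rng = np.random.default_rng(0)
ts = rng.standard_normal((50, 200))

fc = np.corrcoef(ts)          # parcel-by-parcel FC matrix
affinity = np.abs(fc)         # simple non-negative affinity (illustrative choice)

# Graph Laplacian; its low-order eigenvectors give smooth spatial modes
# analogous to the "gradients" described in the abstract.
d = affinity.sum(axis=1)
lap = np.diag(d) - affinity
evals, evecs = np.linalg.eigh(lap)   # eigenvalues in ascending order

gradients = evecs[:, 1:4]     # first three non-trivial components, one value per parcel
```

The first eigenvector (constant, eigenvalue ~0) is discarded; the next components order parcels along the principal axes of connectivity similarity.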
Affiliation(s)
- Hoyt Patrick Taylor
- Department of Computer Science, University of North Carolina, Chapel Hill, U.S.A
- Kim-Han Thung
- Department of Radiology, University of North Carolina, Chapel Hill, U.S.A
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Khoi Minh Huynh
- Department of Radiology, University of North Carolina, Chapel Hill, U.S.A
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Weili Lin
- Department of Radiology, University of North Carolina, Chapel Hill, U.S.A
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Sahar Ahmad
- Department of Radiology, University of North Carolina, Chapel Hill, U.S.A
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, U.S.A
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, U.S.A
3
Safari M, Yang X, Chang CW, Qiu RLJ, Fatemi A, Archambault L. Unsupervised MRI motion artifact disentanglement: introducing MAUDGAN. Phys Med Biol 2024; 69:115057. [PMID: 38714192] [DOI: 10.1088/1361-6560/ad4845]
Abstract
Objective. This study developed an unsupervised motion artifact reduction method for magnetic resonance imaging (MRI) of patients with brain tumors. The proposed novel design uses multi-parametric, multicenter contrast-enhanced T1W (ceT1W) and T2-FLAIR MRI images. Approach. The proposed framework included two generators, two discriminators, and two feature extractor networks. A 3-fold cross-validation was used to train and fine-tune the hyperparameters of the proposed model using 230 brain MRI images with tumors, which were then tested on 148 patients' in vivo datasets. An ablation study was performed to evaluate the model's components. Our model was compared with Pix2pix and CycleGAN. Six evaluation metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multi-scale SSIM (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multi-scale gradient magnitude similarity deviation (MS-GMSD). Artifact reduction and consistency of tumor regions, image contrast, and sharpness were evaluated by three evaluators using Likert scales and compared with ANOVA and Tukey's HSD tests. Main results. On average, our method outperforms comparative models in removing heavy motion artifacts, with the lowest NMSE (18.34 ± 5.07%) and MS-GMSD (0.07 ± 0.03) at the heavy motion artifact level. Additionally, our method creates motion-free images with the highest SSIM (0.93 ± 0.04), PSNR (30.63 ± 4.96), and VIF (0.45 ± 0.05) values, along with comparable MS-SSIM (0.96 ± 0.31). Similarly, our method outperformed comparative models in removing in vivo motion artifacts at different distortion levels, except for MS-SSIM and VIF, which were comparable to CycleGAN. Moreover, our method performed consistently across artifact levels. For the heavy motion artifact level, Likert scores were 2.82 ± 0.52, 1.88 ± 0.71, and 1.02 ± 0.14 for our method, CycleGAN, and Pix2pix, respectively (p-values ≪ 0.0001), with our method rated highest; similar trends were found for the other motion artifact levels. Significance. Our proposed unsupervised method was demonstrated to reduce motion artifacts from ceT1W brain images under a multi-parametric framework.
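Two of the metrics reported above (NMSE and PSNR) can be written down from their standard definitions; a hedged sketch, in which the normalization convention and `data_range` are assumptions rather than the paper's exact implementation:

```python
import numpy as np

def nmse(reference, estimate):
    """Normalized mean squared error: residual energy as a fraction of reference energy."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    return np.sum((ref - est) ** 2) / np.sum(ref ** 2)

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy images: a constant image and a uniformly offset copy.
clean = np.ones((8, 8))
noisy = clean + 0.1
```

On this toy pair, `nmse` evaluates to 0.01 and `psnr` to 20 dB, matching the 10·log10 relationship between the two.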
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, United States of America
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, MS, United States of America
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Québec, Québec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, Québec, Canada
4
Bao Q, Liu X, Xu J, Xia L, Otikovs M, Xie H, Liu K, Zhang Z, Zhou X, Liu C. Unsupervised deep learning model for correcting Nyquist ghosts of single-shot spatiotemporal encoding. Magn Reson Med 2024; 91:1368-1383. [PMID: 38073072] [DOI: 10.1002/mrm.29925]
Abstract
PURPOSE To design an unsupervised deep learning (DL) model for correcting Nyquist ghosts of single-shot spatiotemporal encoding (SPEN) and evaluate the model in real MRI applications. METHODS The proposed method consists of three main components: (1) an unsupervised network that combines a Residual Encoder and Restricted Subspace Mapping (RERSM-net) and is trained to generate a phase-difference map from the even and odd SPEN images; (2) a spin-physics forward model to obtain the corrected image with the learned phase-difference map; and (3) a cycle-consistency loss used to train the RERSM-net. RESULTS The proposed RERSM-net effectively generated smooth phase-difference maps and corrected Nyquist ghosts of single-shot SPEN. Both simulation and real in vivo MRI experiments demonstrated that our method outperforms the state-of-the-art SPEN Nyquist ghost correction method. Furthermore, ablation experiments on phase-difference map generation show the advantages of the proposed unsupervised model. CONCLUSION The proposed method can effectively correct Nyquist ghosts for the single-shot SPEN sequence.
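The cycle-consistency loss mentioned in component (3) has a simple shape: an input pushed through a forward mapping and back should return to itself. A minimal sketch with toy placeholder mappings (the paper's spin-physics forward model is not reproduced here):

```python
def l1_loss(a, b):
    """Mean absolute difference between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency(x, forward, backward):
    """L1 distance between x and backward(forward(x)); zero when the cycle is exact."""
    return l1_loss(x, backward(forward(x)))

# Toy invertible pair: backward exactly undoes forward, so the loss vanishes.
loss = cycle_consistency([1.0, 2.0, 3.0],
                         forward=lambda v: [2 * t for t in v],
                         backward=lambda v: [t / 2 for t in v])
```

During training, minimizing this term pushes the learned pair of mappings toward mutual consistency even without paired ground-truth targets.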
Affiliation(s)
- Qingjia Bao
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- Xinjie Liu
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Jingyun Xu
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Liyang Xia
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Han Xie
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- Kewen Liu
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Zhi Zhang
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- Xin Zhou
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Optics Valley Laboratory, Wuhan, China
- Chaoyang Liu
- Key Laboratory of Magnetic Resonance in Biological Systems, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Optics Valley Laboratory, Wuhan, China
5
Zhou Z, Hu P, Qi H. Stop moving: MR motion correction as an opportunity for artificial intelligence. MAGMA 2024. [PMID: 38386151] [DOI: 10.1007/s10334-023-01144-5]
Abstract
Subject motion is a long-standing problem in magnetic resonance imaging (MRI) that can seriously deteriorate image quality. Various prospective and retrospective methods have been proposed for MRI motion correction, among which deep learning approaches have achieved state-of-the-art performance. This survey provides a comprehensive review of deep learning-based MRI motion correction methods. Neural networks used for motion artifact reduction and motion estimation in the image or frequency domain are detailed. Beyond motion-corrected MRI reconstruction, how estimated motion is applied in other downstream tasks is briefly introduced, aiming to strengthen the interaction between different research areas. Finally, we identify current limitations and point out future directions of deep learning-based MRI motion correction.
Affiliation(s)
- Zijian Zhou
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
- Peng Hu
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
- Haikun Qi
- School of Biomedical Engineering, ShanghaiTech University, 4th Floor, BME Building, 393 Middle Huaxia Road, Pudong District, Shanghai, 201210, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
6
Liu S, Yap PT. Learning multi-site harmonization of magnetic resonance images without traveling human phantoms. Communications Engineering 2024; 3:6. [PMID: 38420332] [PMCID: PMC10898625] [DOI: 10.1038/s44172-023-00140-w]
Abstract
Harmonization improves magnetic resonance imaging (MRI) data consistency and is central to effective integration of diverse imaging data acquired across multiple sites. Recent deep learning techniques for harmonization are predominantly supervised in nature and hence require imaging data of the same human subjects to be acquired at multiple sites. Data collection as such requires the human subjects to travel across sites and is hence challenging, costly, and impractical, more so when sufficient sample size is needed for reliable network training. Here we show how harmonization can be achieved with a deep neural network that does not rely on traveling human phantom data. Our method disentangles site-specific appearance information and site-invariant anatomical information from images acquired at multiple sites and then employs the disentangled information to generate the image of each subject for any target site. We demonstrate with more than 6,000 multi-site T1- and T2-weighted images that our method is remarkably effective in generating images with realistic site-specific appearances without altering anatomical details. Our method allows retrospective harmonization of data in a wide range of existing modern large-scale imaging studies, conducted via different scanners and protocols, without additional data collection.
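A crude stand-in for the appearance/anatomy disentanglement idea above is moment matching: treat per-site intensity statistics as the "appearance" and the standardized values as the "anatomy", then recombine the anatomy with a target site's statistics. This toy is an assumption for illustration and is far simpler than the paper's network:

```python
import statistics

def harmonize(values, target_mean, target_std):
    """Map values to a target site's first- and second-order intensity statistics."""
    mu = statistics.fmean(values)
    sd = statistics.pstdev(values)
    z = [(v - mu) / sd for v in values]                 # site-invariant "content"
    return [target_mean + target_std * t for t in z]    # target-site "appearance"

site_a = [10.0, 12.0, 14.0]                             # toy intensities from site A
moved = harmonize(site_a, target_mean=100.0, target_std=5.0)
```

The relative ordering of the values (the "anatomy" in this toy) is preserved, while the mean and spread now match the target site.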
Affiliation(s)
- Siyuan Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
7
Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023; 51:40-53. [PMID: 37682303] [PMCID: PMC10684636] [DOI: 10.1007/s00259-023-06418-7]
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in gallium-68 (68Ga)-labelled compounds whole-body PET/CT imaging. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues for building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images to utilize centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed in 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (CI 95%: 0.38 to 0.47), 0.32 ± 0.23 (CI 95%: 0.27 to 0.37) and 0.28 ± 0.15 (CI 95%: 0.25 to 0.31), respectively. Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
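The privacy-preserving federated aggregation idea described above can be sketched in miniature: each centre shares only clipped, noised model weights, and a server averages them. The clipping bound and noise scale here are illustrative assumptions, not the study's calibrated differential-privacy parameters:

```python
import random

def dp_federated_average(center_weights, clip=1.0, noise_std=0.01, seed=0):
    """Average per-centre weight vectors after clipping and Gaussian noising."""
    rng = random.Random(seed)
    noised = []
    for w in center_weights:
        # Clip each centre's contribution to bound its influence on the average...
        scale = max(1.0, max(abs(x) for x in w) / clip)
        clipped = [x / scale for x in w]
        # ...then add Gaussian noise before sharing (the differential-privacy step).
        noised.append([x + rng.gauss(0.0, noise_std) for x in clipped])
    n = len(noised)
    return [sum(col) / n for col in zip(*noised)]

# Three hypothetical centres, each holding a 2-parameter "model".
avg = dp_federated_average([[0.2, -0.1], [0.4, 0.1], [0.3, 0.0]])
```

Only the noised weights ever leave a centre; the raw patient images stay local, which is the point of the FTL setup in the study.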
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini
- BC Cancer Research Institute, Vancouver, BC, Canada
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Peyman Sheikhzadeh
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Bitrafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- François Bénard
- BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Carlos Uribe
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
8
Shiri I, Salimi Y, Hervier E, Pezzoni A, Sanaat A, Mostafaei S, Rahmim A, Mainta I, Zaidi H. Artificial Intelligence-Driven Single-Shot PET Image Artifact Detection and Disentanglement: Toward Routine Clinical Image Quality Assurance. Clin Nucl Med 2023; 48:1035-1046. [PMID: 37883015] [PMCID: PMC10662584] [DOI: 10.1097/rlu.0000000000004912]
Abstract
PURPOSE Medical imaging artifacts compromise image quality and quantitative analysis and might confound interpretation and misguide clinical decision-making. The present work envisions and demonstrates a new paradigm, the PET image Quality Assurance NETwork (PET-QA-NET), in which various image artifacts are detected and disentangled from images without prior knowledge of a standard of reference or ground truth for routine PET image quality assurance. METHODS The network was trained and evaluated using training/validation/testing data sets consisting of 669/100/100 artifact-free oncological 18F-FDG PET/CT images and subsequently fine-tuned and evaluated on 384 (20% for fine-tuning) scans from 8 different PET centers. The developed DL model was quantitatively assessed using various image quality metrics calculated for 22 volumes of interest defined on each scan. In addition, 200 additional 18F-FDG PET/CT scans (this time with artifacts), generated using both CT-based attenuation and scatter correction (routine PET) and PET-QA-NET, were blindly evaluated by 2 nuclear medicine physicians for the presence of artifacts, diagnostic confidence, image quality, and the number of lesions detected in different body regions. RESULTS Across the volumes of interest of 100 patients, SUV MAE values of 0.13 ± 0.04, 0.24 ± 0.1, and 0.21 ± 0.06 were reached for SUVmean, SUVmax, and SUVpeak, respectively (no statistically significant difference). Qualitative assessment showed a general trend of improved image quality and diagnostic confidence and reduced image artifacts for PET-QA-NET compared with routine CT-based attenuation and scatter correction. CONCLUSION We developed a highly effective and reliable quality assurance tool that can be embedded routinely to detect and correct for 18F-FDG PET image artifacts in the clinical setting with notably improved PET image quality and quantitative capabilities.
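The VOI-wise evaluation described above reduces to a mean absolute error over SUV statistics; a minimal sketch, where the per-VOI SUV values are made-up numbers for illustration, not the study's data:

```python
def mae(reference, prediction):
    """Mean absolute error between paired per-VOI measurements."""
    return sum(abs(r - p) for r, p in zip(reference, prediction)) / len(reference)

# Hypothetical SUVmean per VOI: reference reconstruction vs. network output.
suv_mean_ref = [2.5, 1.8, 4.2, 0.9]
suv_mean_pred = [2.4, 1.9, 4.0, 1.0]
err = mae(suv_mean_ref, suv_mean_pred)
```

In the study, the same computation is repeated for SUVmean, SUVmax, and SUVpeak across 22 VOIs per scan.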
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Elsa Hervier
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Agathe Pezzoni
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
9
Pan F, Fan Q, Xie H, Bai C, Zhang Z, Chen H, Yang L, Zhou X, Bao Q, Liu C. Correction of Arterial-Phase Motion Artifacts in Gadoxetic Acid-Enhanced Liver MRI Using an Innovative Unsupervised Network. Bioengineering (Basel) 2023; 10:1192. [PMID: 37892922] [PMCID: PMC10604307] [DOI: 10.3390/bioengineering10101192]
Abstract
This study proposes and evaluates DR-CycleGAN, a disentangled unsupervised network with a novel content-consistency loss, for removing arterial-phase motion artifacts in gadoxetic acid-enhanced liver MRI examinations. From June 2020 to July 2021, gadoxetic acid-enhanced liver MRI data were retrospectively collected at this center to establish training and testing datasets. Motion artifacts were semi-quantitatively assessed using a five-point Likert scale (1 = no artifact, 2 = mild, 3 = moderate, 4 = severe, 5 = non-diagnostic) and quantitatively evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). The datasets comprised a training dataset (308 examinations: 58 with artifact grade 1 and 250 with artifact grade ≥ 2), a paired test dataset (320 examinations: 160 with artifact grade 1 and 160 paired examinations with simulated motion artifacts of grade ≥ 2), and an unpaired test dataset (474 examinations with artifact grades ranging from 1 to 5). The performance of DR-CycleGAN was evaluated and compared with a state-of-the-art network, Cycle-MedGAN V2.0. In the paired test dataset, DR-CycleGAN demonstrated significantly higher SSIM and PSNR values and lower motion artifact grades than Cycle-MedGAN V2.0 (0.89 ± 0.07 vs. 0.84 ± 0.09, 32.88 ± 2.11 vs. 30.81 ± 2.64, and 2.7 ± 0.7 vs. 3.0 ± 0.9, respectively; p < 0.001 each). In the unpaired test dataset, DR-CycleGAN also exhibited superior motion artifact correction, reducing motion artifact grades from 2.9 ± 1.3 to 2.0 ± 0.6, versus 2.4 ± 0.9 for Cycle-MedGAN V2.0 (p < 0.001). In conclusion, DR-CycleGAN effectively reduces motion artifacts in arterial-phase images of gadoxetic acid-enhanced liver MRI examinations, offering the potential to enhance image quality.
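The semi-quantitative Likert results above are aggregated as mean ± standard deviation per artifact grade; a minimal sketch of that summary (the grades below are made-up example values, not the study's data, and population SD is an assumption about the convention used):

```python
import statistics

def likert_summary(grades):
    """Return (mean, population standard deviation) of a list of Likert grades."""
    return statistics.fmean(grades), statistics.pstdev(grades)

# Hypothetical reader grades before and after motion artifact correction.
before = [3, 4, 2, 3, 5, 3]
after = [2, 2, 1, 2, 3, 2]
m_before, s_before = likert_summary(before)
m_after, s_after = likert_summary(after)
```

A drop in the mean grade after correction (as from 2.9 ± 1.3 to 2.0 ± 0.6 in the study) indicates milder perceived artifacts.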
Affiliation(s)
- Feng Pan: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Qianqian Fan: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Han Xie: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China
- Chongxin Bai: School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Zhi Zhang: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China
- Hebing Chen: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Lian Yang: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Xin Zhou: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China; University of Chinese Academy of Sciences, Beijing 100864, China; Optics Valley Laboratory, Wuhan 430074, China
- Qingjia Bao: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China
- Chaoyang Liu: State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China; University of Chinese Academy of Sciences, Beijing 100864, China; Optics Valley Laboratory, Wuhan 430074, China

10
Wu B, Li C, Zhang J, Lai H, Feng Q, Huang M. Unsupervised dual-domain disentangled network for removal of rigid motion artifacts in MRI. Comput Biol Med 2023; 165:107373. [PMID: 37611424 DOI: 10.1016/j.compbiomed.2023.107373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 07/28/2023] [Accepted: 08/12/2023] [Indexed: 08/25/2023]
Abstract
Motion artifacts in magnetic resonance imaging (MRI) have always been a serious issue because they can affect subsequent diagnosis and treatment. Supervised deep learning methods have been investigated for the removal of motion artifacts; however, they require paired data that are difficult to obtain in clinical settings. Although unsupervised methods are widely proposed to fully use clinical unpaired data, they generally focus on anatomical structures generated by the spatial domain while ignoring phase error (deviations or inaccuracies in phase information that are possibly caused by rigid motion artifacts during image acquisition) provided by the frequency domain. In this study, a 2D unsupervised deep learning method named unsupervised disentangled dual-domain network (UDDN) was proposed to effectively disentangle and remove unwanted rigid motion artifacts from images. In UDDN, a dual-domain encoding module was presented to capture different types of information from the spatial and frequency domains to enrich the information. Moreover, a cross-domain attention fusion module was proposed to effectively fuse information from different domains, reduce information redundancy, and improve the performance of motion artifact removal. UDDN was validated on a publicly available dataset and a clinical dataset. Qualitative and quantitative experimental results showed that our method could effectively remove motion artifacts and reconstruct image details. Moreover, the performance of UDDN surpasses that of several state-of-the-art unsupervised methods and is comparable with that of the supervised method. Therefore, our method has great potential for clinical application in MRI, such as real-time removal of rigid motion artifacts.
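The spatial/frequency duality that UDDN exploits is just the Fourier transform of the image: the spatial domain carries anatomy, while k-space carries the phase information in which rigid-motion errors manifest. A minimal numpy sketch of the two complementary views (function names are illustrative, not from the paper):

```python
import numpy as np

def to_kspace(image):
    """Spatial-domain image -> centered k-space (frequency domain)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

def from_kspace(kspace):
    """Centered k-space -> spatial-domain image (inverse of to_kspace)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def dual_domain_views(image):
    """The two complementary inputs a dual-domain encoder could consume:
    the spatial image plus the k-space magnitude and phase."""
    k = to_kspace(image)
    return image, np.abs(k), np.angle(k)
```

Feeding both views to separate encoders, as the cross-domain fusion module described above does, lets the network see phase errors that are invisible in the magnitude image alone.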
Affiliation(s)
- Boya Wu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Caixia Li: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Jiawei Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Haoran Lai: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Meiyan Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China

11
Chen Z, Pawar K, Ekanayake M, Pain C, Zhong S, Egan GF. Deep Learning for Image Enhancement and Correction in Magnetic Resonance Imaging-State-of-the-Art and Challenges. J Digit Imaging 2023; 36:204-230. [PMID: 36323914 PMCID: PMC9984670 DOI: 10.1007/s10278-022-00721-9] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 09/09/2022] [Accepted: 10/17/2022] [Indexed: 11/06/2022] Open
Abstract
Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast for clinical diagnoses and research, underpinning many recent breakthroughs in medicine and biology. The post-processing of reconstructed MR images is often automated for incorporation into MRI scanners by the manufacturers and increasingly plays a critical role in the final image quality for clinical reporting and interpretation. For image enhancement and correction, the post-processing steps include noise reduction, image artefact correction, and image resolution improvements. With the recent success of deep learning in many research fields, there is great potential to apply deep learning to MR image enhancement, and recent publications have demonstrated promising results. Motivated by the rapidly growing literature in this area, in this review paper we provide a comprehensive overview of deep learning-based methods for post-processing MR images to enhance image quality and correct image artefacts. We aim to provide researchers in MRI or other research fields, including computer vision and image processing, with a literature survey of deep learning approaches for MR image enhancement. We discuss the current limitations of the application of artificial intelligence in MRI and highlight possible directions for future developments. In the era of deep learning, we highlight the importance of a critical appraisal of the explanatory information provided and of the generalizability of deep learning algorithms in medical imaging.
Affiliation(s)
- Zhaolin Chen: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia; Department of Data Science and AI, Monash University, Melbourne, VIC, Australia
- Kamlesh Pawar: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia
- Mevan Ekanayake: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Cameron Pain: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Shenjun Zhong: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia; National Imaging Facility, Brisbane, QLD, Australia
- Gary F Egan: Monash Biomedical Imaging, Monash University, Melbourne, VIC, 3168, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia

12
Hossbach J, Splitthoff DN, Cauley S, Clifford B, Polak D, Lo WC, Meyer H, Maier A. Deep learning-based motion quantification from k-space for fast model-based magnetic resonance imaging motion correction. Med Phys 2022; 50:2148-2161. [PMID: 36433748 DOI: 10.1002/mp.16119] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 10/19/2022] [Accepted: 10/21/2022] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Intra-scan rigid-body motion is a costly and ubiquitous problem in clinical magnetic resonance imaging (MRI) of the head. PURPOSE State-of-the-art methods for retrospective motion correction in MRI are often computationally expensive or, in the case of image-to-image deep learning (DL) based methods, prone to undesired alterations of the image ('hallucinations'). In this work we introduce a novel rigid-body motion correction method which combines the advantages of classical model-driven and data-consistency (DC) preserving approaches with a novel DL algorithm to provide fast and robust retrospective motion correction. METHODS The proposed Motion Parameter Estimating Densenet (MoPED) retrospectively estimates subject head motion during MRI acquisitions using a DL network with DenseBlocks and multitask learning. It quantifies the 2D rigid in-plane motion parameters slice-wise for each echo train (ET) of a Cartesian T2-weighted 2D Turbo-Spin-Echo sequence. The network receives a center patch of the motion-corrupted k-space as well as an additional motion-free low-resolution reference scan to provide the ground-truth orientation. The supervised training utilizes motion simulations based on 28 acquisitions with subject-wise training, validation, and test data splits of 70%, 23%, and 7%. During inference, MoPED is embedded in an iterative DC-driven motion correction algorithm which alternatingly updates estimates of the motion parameters and motion-corrected low-resolution k-space data. The estimated motion parameters are then used to reconstruct the final motion-corrected image. The mean absolute/squared error and the Pearson correlation coefficient were used to analyze the motion parameter estimation quality on in-silico data in a quantitative evaluation. Structural similarity (SSIM), DC error, and root mean squared error (RMSE) were used as metrics of image quality improvement. Furthermore, the generalization capability of the network was analyzed on two in-vivo motion volumes with 28 slices each and on one simulated T1-weighted volume. RESULTS The motion estimation achieves a Pearson correlation of 0.968 to the simulated ground truth of the 2433 test data slices used. In-silico results indicate that MoPED decreases the time for the optimization by a factor of around 27 compared to a conventional method and is able to reduce the RMSE of the reconstructions and the average DC error by more than a factor of two compared to uncorrected images. In-vivo experiments show a decrease in computation time by a factor of around 20, an RMSE decrease from 0.055 to 0.033, and an SSIM increase from 0.795 to 0.862. Furthermore, contrast independence is demonstrated as MoPED is also able to correct T1-weighted images in simulations without retraining. Due to the model-based correction, no hallucinations were observed. CONCLUSIONS Incorporating DL in a model-based motion correction algorithm shows great benefit for the optimization and computation time. The k-space-based estimation also allows a data-consistent correction and therefore avoids the risk of hallucinations of image-to-image approaches.
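The k-space parameterization of rigid in-plane motion that MoPED estimates rests on the Fourier shift theorem: an in-plane translation of the image multiplies its k-space by a linear phase ramp, which is also how such motion can be simulated for training data. A minimal numpy sketch (illustrative, not the authors' simulation code):

```python
import numpy as np

def translate_via_kspace(image, dy, dx):
    """Shift an image by (dy, dx) pixels by multiplying its k-space with a
    linear phase ramp (Fourier shift theorem); for integer shifts this
    matches a circular shift of the image."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]  # per-row spatial frequencies
    kx = np.fft.fftfreq(nx)[None, :]  # per-column spatial frequencies
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(image) * ramp).real
```

Applying a different ramp (and rotation) to the k-space lines of each echo train mimics motion occurring between shots, which is the corruption model a slice-wise estimator such as MoPED is trained to invert.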
Affiliation(s)
- Julian Hossbach: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Siemens Healthcare GmbH, Erlangen, Germany
- Stephen Cauley: Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Bryan Clifford: Siemens Medical Solutions USA, Boston, Massachusetts, USA
- Daniel Polak: Siemens Healthcare GmbH, Erlangen, Germany; Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Wei-Ching Lo: Siemens Medical Solutions USA, Boston, Massachusetts, USA
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany

13
Abstract
Neonatal care is becoming increasingly complex with large amounts of rich, routinely recorded physiological, diagnostic and outcome data. Artificial intelligence (AI) has the potential to harness this vast quantity and range of information and become a powerful tool to support clinical decision making, personalised care, precise prognostics, and enhance patient safety. Current AI approaches in neonatal medicine include tools for disease prediction and risk stratification, neurological diagnostic support and novel image recognition technologies. Key to the integration of AI in neonatal medicine is the understanding of its limitations and a standardised critical appraisal of AI tools. Barriers and challenges to this include the quality of datasets used, performance assessment, and appropriate external validation and clinical impact studies. Improving digital literacy amongst healthcare professionals and cross-disciplinary collaborations are needed to harness the full potential of AI to help take the next significant steps in improving neonatal outcomes for high-risk infants.
14
Tian D, Zeng Z, Sun X, Tong Q, Li H, He H, Gao JH, He Y, Xia M. A deep learning-based multisite neuroimage harmonization framework established with a traveling-subject dataset. Neuroimage 2022; 257:119297. [PMID: 35568346 DOI: 10.1016/j.neuroimage.2022.119297] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 03/31/2022] [Accepted: 05/09/2022] [Indexed: 12/12/2022] Open
Abstract
The accumulation of multisite large-sample MRI datasets collected during large brain research projects in the last decade has provided critical resources for understanding the neurobiological mechanisms underlying cognitive functions and brain disorders. However, the significant site effects observed in imaging data and their derived structural and functional features have prevented the derivation of consistent findings across multiple studies. The development of harmonization methods that can effectively eliminate complex site effects while maintaining biological characteristics in neuroimaging data has become a vital and urgent requirement for multisite imaging studies. Here, we propose a deep learning-based framework to harmonize imaging data obtained from pairs of sites, in which site factors and brain features can be disentangled and encoded. We trained the proposed framework with a publicly available traveling-subject dataset from the Strategic Research Program for Brain Sciences (SRPBS) and harmonized the gray matter volume maps derived from eight source sites to a target site. The proposed framework significantly eliminated intersite differences in gray matter volumes. The embedded encoders successfully captured both the abstract textures of site factors and the concrete brain features. Moreover, the proposed framework exhibited outstanding performance relative to conventional statistical harmonization methods in terms of site effect removal, data distribution homogenization, and intrasubject similarity improvement. Finally, the proposed harmonization network provided flexible expandability, through which new sites could be linked to the target site via an indirect schema without retraining the whole model. Together, the proposed method offers a powerful and interpretable deep learning-based harmonization framework for multisite neuroimaging data that can enhance reliability and reproducibility in multisite studies regarding brain development and brain disorders.
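For orientation, the "conventional statistical harmonization methods" that serve as the comparison point above amount to location-scale adjustments of per-site feature statistics. A minimal sketch of such a baseline (illustrative names, not the paper's deep network, which additionally disentangles and preserves biological features):

```python
import numpy as np

def harmonize_to_target(source_feats, target_feats):
    """Location-scale harmonization: rescale each source-site feature so its
    mean and standard deviation match the target site's statistics.
    Rows are subjects, columns are features (e.g., regional gray matter
    volumes); the epsilon guards against zero-variance features."""
    mu_s, sd_s = source_feats.mean(axis=0), source_feats.std(axis=0)
    mu_t, sd_t = target_feats.mean(axis=0), target_feats.std(axis=0)
    return (source_feats - mu_s) / (sd_s + 1e-12) * sd_t + mu_t
```

The limitation motivating the deep approach is visible here: this adjustment removes site differences in the first two moments but cannot distinguish scanner-induced variance from genuine biological variance, which a traveling-subject dataset makes separable.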
Affiliation(s)
- Dezheng Tian: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Zilong Zeng: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Xiaoyi Sun: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; School of Systems Science, Beijing Normal University, Beijing 100875, China
- Qiqi Tong: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Huanjie Li: School of Biomedical Engineering, Dalian University of Technology, Dalian 116024, China
- Hongjian He: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Jia-Hong Gao: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yong He: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Chinese Institute for Brain Research, Beijing 102206, China
- Mingrui Xia: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China