1
Kim S, Wang SM, Kang DW, Um YH, Han EJ, Park SY, Ha S, Choe YS, Kim HW, Kim REY, Kim D, Lee CU, Lim HK. A Comparative Analysis of Two Automated Quantification Methods for Regional Cerebral Amyloid Retention: PET-Only and PET-and-MRI-Based Methods. Int J Mol Sci 2024; 25:7649. PMID: 39062892; PMCID: PMC11276670; DOI: 10.3390/ijms25147649.
Abstract
Accurate quantification of amyloid positron emission tomography (PET) is essential for early detection of and intervention in Alzheimer's disease (AD), but studies comparing the performance of automated quantification methods remain scarce. This study compared a PET-only method with a PET-and-MRI-based method that uses a pre-trained deep learning segmentation model. A large sample of 1180 participants from the Catholic Aging Brain Imaging (CABI) database was analyzed to calculate regional standardized uptake value ratios (SUVRs) with both methods. Logistic regression models were used to assess discrimination between amyloid-positive and amyloid-negative groups using 10-fold cross-validation and the area under the receiver operating characteristic curve (AUROC). The two methods produced highly correlated SUVRs, but the PET-MRI method, which incorporates MRI data for anatomical accuracy, was superior at predicting amyloid positivity. The parietal, frontal, and cingulate regions contributed most to the prediction. The PET-MRI method with a pre-trained deep learning model thus provides an efficient and precise approach for earlier diagnosis and intervention across the AD continuum.
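The regional SUVR at the heart of both pipelines is simply the mean tracer uptake in a target region divided by the mean uptake in a reference region (the cerebellum is a common choice). A minimal sketch, not the authors' software, with hypothetical voxel intensities:

```python
def mean(values):
    return sum(values) / len(values)

def regional_suvr(region_voxels, reference_voxels):
    """Standardized uptake value ratio: mean regional uptake divided by
    mean uptake in the reference region."""
    return mean(region_voxels) / mean(reference_voxels)

# Hypothetical PET voxel intensities for a frontal ROI and the cerebellum.
frontal = [1.8, 2.0, 2.2, 1.9]
cerebellum = [1.0, 1.1, 0.9, 1.0]
print(round(regional_suvr(frontal, cerebellum), 3))  # ratio near 1.98
```

Both methods compute this same ratio; they differ in how the region masks are obtained (PET-only template regions versus MRI-derived segmentations).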
Affiliation(s)
- Sunghwan Kim
- Department of Psychiatry, College of Medicine, Yeouido St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Sheng-Min Wang
- Department of Psychiatry, College of Medicine, Yeouido St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Dong Woo Kang
- Department of Psychiatry, College of Medicine, Seoul St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Yoo Hyun Um
- Department of Psychiatry, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Eun Ji Han
- Division of Nuclear Medicine, Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Sonya Youngju Park
- Division of Nuclear Medicine, Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Seunggyun Ha
- Division of Nuclear Medicine, Department of Radiology, Seoul St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Yeong Sim Choe
- Research Institute, Neurophet Inc., Seoul 06234, Republic of Korea
- Hye Weon Kim
- Research Institute, Neurophet Inc., Seoul 06234, Republic of Korea
- Regina EY Kim
- Research Institute, Neurophet Inc., Seoul 06234, Republic of Korea
- Donghyeon Kim
- Research Institute, Neurophet Inc., Seoul 06234, Republic of Korea
- Chang Uk Lee
- Department of Psychiatry, College of Medicine, Seoul St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Hyun Kook Lim
- Department of Psychiatry, College of Medicine, Yeouido St. Mary’s Hospital, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- CMC Institute for Basic Medical Science, The Catholic Medical Center of The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
2
Lee W, Lee S, Park Y, Kim GE, Bae JB, Han JW, Kim KW. Construction and validation of a brain magnetic resonance imaging template for normal older Koreans. BMC Neurol 2024; 24:222. PMID: 38943101; PMCID: PMC11212263; DOI: 10.1186/s12883-024-03735-8.
Abstract
BACKGROUND Spatial normalization to a standardized brain template is a crucial step in magnetic resonance imaging (MRI) studies. Templates built from a sufficiently large sample have low brain variability, which improves the accuracy of spatial normalization, and using a population-specific template improves accuracy further because brain morphology varies with ethnicity and age. METHODS We constructed a brain template of cognitively normal Korean elderly (KNE200) using MRI scans of 100 men and 100 women aged over 60 years. To examine the effects of sample size and ethnicity on template accuracy, we compared the deformation after spatial normalization to the KNE200 with that to the KNE96, constructed from 96 cognitively normal elderly Koreans, and with that to a template (OCF) constructed from 434 non-demented older Caucasians. We spatially normalized MRI scans of elderly Koreans and quantified the associated deformation using the magnitude of voxel displacement and voxel-wise volumetric change. RESULTS The KNE200 yielded significantly less displacement and volumetric change in the parahippocampal gyrus, medial and posterior orbital gyri, fusiform gyrus, gyrus rectus, cerebellum and vermis than the KNE96. The KNE200 also yielded far less displacement in the cerebellum, vermis, hippocampus, parahippocampal gyrus and thalamus, and far less volumetric change in the cerebellum, vermis, hippocampus and parahippocampal gyrus, than the OCF. CONCLUSION The KNE200 was more accurate than the KNE96 owing to its larger sample size, and was substantially more accurate than the Caucasian-derived template when applied to elderly Koreans.
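The two deformation measures used here can be illustrated per voxel: displacement is the Euclidean norm of the warp vector, and local volumetric change is captured by the determinant of the deformation's Jacobian. A minimal sketch with hypothetical numbers (the study itself derives these from full 3D deformation fields produced by registration software):

```python
import math

def displacement_magnitude(d):
    """Euclidean norm of one voxel's displacement vector (dx, dy, dz)."""
    return math.sqrt(sum(c * c for c in d))

def jacobian_determinant(j):
    """Determinant of a 3x3 local Jacobian of the deformation field:
    > 1 means local expansion, < 1 local compression of the voxel."""
    (a, b, c), (d, e, f), (g, h, i) = j
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(displacement_magnitude((3.0, 4.0, 0.0)))                    # 5.0
print(jacobian_determinant([[1.2, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1.2
```

A better-matched template needs smaller warps, so both quantities stay closer to 0 displacement and unit Jacobian determinant.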
Grants
- HI09C1379 [A092077] Korean Health Technology R&D Project, Ministry of Health and Welfare, Republic of Korea
- MSIT; 2018-2-00861 Institute for Information and Communications Technology Promotion
Affiliation(s)
- Wheesung Lee
- Department of Brain & Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
- Subin Lee
- Department of Brain & Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
- Yeseung Park
- Department of Brain & Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
- Grace Eun Kim
- Department of Brain & Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
- Jong Bin Bae
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Ji Won Han
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Ki Woong Kim
- Department of Brain & Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Psychiatry, College of Medicine, Seoul National University, Seoul, Republic of Korea
3
Fard AS, Reutens DC, Ramsay SC, Goodman SJ, Ghosh S, Vegh V. Image synthesis of interictal SPECT from MRI and PET using machine learning. Front Neurol 2024; 15:1383773. PMID: 38988603; PMCID: PMC11234346; DOI: 10.3389/fneur.2024.1383773.
Abstract
Background Cross-modality image estimation can be performed with generative adversarial networks (GANs); to date, SPECT image estimation from another medical imaging modality using this technique has not been considered. We evaluated the estimation of SPECT from MRI and PET, and additionally assessed whether cross-modality image registration is necessary for GAN training. Methods We estimated interictal SPECT from PET and MRI given as a single-channel input, and as a multi-channel input, to the GAN. We collected data from 48 individuals with epilepsy and converted them to 3D isotropic images for consistency across modalities. Training and testing data were prepared in native and template spaces, and the Pix2pix GAN framework was adopted. We also evaluated adding the structural similarity index (SSIM) to the loss function in the GAN implementation. Root-mean-square error, SSIM, and peak signal-to-noise ratio were used to assess how well SPECT images could be synthesised. Results High-quality SPECT images could be synthesised in each case. On average, using native-space images yielded a 5.4% improvement in SSIM over images registered to template space. Adding the SSIM to the GAN loss function did not improve the synthetic SPECT images. Using PET in either the single-channel or dual-channel implementation led to the best results, although MRI produced SPECT images of nearly comparable quality. Conclusion Synthesising SPECT from MRI or PET could reduce the number of scans needed for epilepsy patient evaluation and reduce patient exposure to radiation.
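Two of the three synthesis metrics are easy to state directly; a minimal sketch over flattened image intensities with hypothetical values (SSIM requires windowed local statistics and is normally taken from a library such as scikit-image):

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equally sized images (flattened)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to reference."""
    err = rmse(a, b)
    return float("inf") if err == 0 else 20 * math.log10(max_val / err)

ref = [0.2, 0.4, 0.6, 0.8]   # hypothetical reference SPECT intensities
syn = [0.2, 0.5, 0.6, 0.7]   # hypothetical synthetic SPECT intensities
print(round(rmse(ref, syn), 4), round(psnr(ref, syn), 2))
```

RMSE and PSNR compare intensities voxel-by-voxel, which is why the paper adds SSIM to capture structural agreement as well.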
Affiliation(s)
- Azin Shokraei Fard
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- David C. Reutens
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- Royal Brisbane and Women’s Hospital, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Soumen Ghosh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Viktor Vegh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
4
Sanaat A, Boccalini C, Mathoux G, Perani D, Frisoni GB, Haller S, Montandon ML, Rodriguez C, Giannakopoulos P, Garibotto V, Zaidi H. A deep learning model for generating [18F]FDG PET images from early-phase [18F]Florbetapir and [18F]Flutemetamol PET images. Eur J Nucl Med Mol Imaging 2024. PMID: 38861183; DOI: 10.1007/s00259-024-06755-1.
Abstract
INTRODUCTION Amyloid-β (Aβ) plaque deposition is a defining hallmark of Alzheimer's disease (AD), detectable via amyloid-PET imaging. [18F]Fluorodeoxyglucose ([18F]FDG) PET tracks cerebral glucose metabolism, which correlates with synaptic dysfunction and disease progression, and is complementary for AD diagnosis. Dual-phase amyloid-PET acquisition makes it possible to use early-phase amyloid-PET as a biomarker of neurodegeneration, as it has been shown to correlate well with [18F]FDG PET. The aim of this study was to evaluate the added value of synthesizing the latter from the former through deep learning (DL), with the goal of reducing the number of PET scans, the radiation dose, and patient discomfort. METHODS A total of 166 subjects, comprising cognitively unimpaired individuals (N = 72) and subjects with mild cognitive impairment (N = 73) or dementia (N = 21), were included in this study. All underwent T1-weighted MRI, dual-phase amyloid PET with either [18F]Florbetapir ([18F]FBP) or [18F]Flutemetamol ([18F]FMM), and an [18F]FDG PET scan. Two transformer-based DL models (SwinUNETR) were trained separately to synthesize [18F]FDG from early-phase [18F]FBP and [18F]FMM images (eFBP/eFMM). A clinical similarity score (CSS; 1 = no similarity to 3 = similar) was used to compare the imaging information of synthesized [18F]FDG, and of eFBP/eFMM, against actual [18F]FDG. Quantitative evaluations included region-wise correlation and single-subject voxel-wise analyses against a reference [18F]FDG PET healthy-control database. Dice coefficients quantified the whole-brain spatial overlap between hypometabolic ([18F]FDG PET) and hypoperfused (eFBP/eFMM) binary maps at the single-subject level, as well as between [18F]FDG PET and synthetic [18F]FDG PET hypometabolic binary maps.
RESULTS The clinical evaluation showed that, compared with eFBP/eFMM (mean CSS = 1.53), the synthetic [18F]FDG images closely resembled the actual [18F]FDG images (mean CSS = 2.7) in preserving clinically relevant uptake patterns. At the group level, Dice scores improved by around 13% and 5% when using the DL approach for eFBP and eFMM, respectively. The correlation analysis indicated a relatively strong correlation between eFBP/eFMM and [18F]FDG (eFBP: slope = 0.77, R2 = 0.61, P < 0.0001; eFMM: slope = 0.77, R2 = 0.61, P < 0.0001), which improved for synthetic [18F]FDG generated from eFBP (slope = 1.00, R2 = 0.68, P < 0.0001) and from eFMM (slope = 0.93, R2 = 0.72, P < 0.0001). CONCLUSION We proposed a DL model for generating [18F]FDG images from eFBP/eFMM PET images. This method may serve as an alternative to multi-radiotracer scanning in research and clinical settings, allowing the currently validated [18F]FDG PET normal reference databases to be adopted for data analysis.
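The Dice coefficient used for the overlap analyses has a compact definition; a minimal sketch over sets of suprathreshold voxel indices (the data below are hypothetical, not the study's maps):

```python
def dice(a, b):
    """Dice similarity of two binary maps given as sets of voxel indices:
    2|A∩B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    if not a and not b:
        return 1.0  # two empty maps agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

hypometabolic = {10, 11, 12, 13}   # voxels flagged on real [18F]FDG
hypoperfused = {12, 13, 14, 15}    # voxels flagged on early-phase amyloid
print(dice(hypometabolic, hypoperfused))  # 0.5
```

The reported ~13% and ~5% Dice gains mean the synthetic [18F]FDG maps overlapped the real hypometabolic maps more than the raw early-phase maps did.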
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Cecilia Boccalini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Gregory Mathoux
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Daniela Perani
- Vita-Salute San Raffaele University, Nuclear Medicine Unit San Raffaele Hospital, Milan, Italy
- Sven Haller
- CIMC - Centre d'Imagerie Médicale de Cornavin, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marie-Louise Montandon
- Department of Rehabilitation and Geriatrics, Geneva University Hospitals and University of Geneva, Geneva, Switzerland
- Cristelle Rodriguez
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Panteleimon Giannakopoulos
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
5
Walston SL, Tatekawa H, Takita H, Miki Y, Ueda D. Evaluating Biases and Quality Issues in Intermodality Image Translation Studies for Neuroradiology: A Systematic Review. AJNR Am J Neuroradiol 2024; 45:826-832. PMID: 38663993; DOI: 10.3174/ajnr.a8211.
Abstract
BACKGROUND Intermodality image-to-image translation is an artificial intelligence technique for generating images of one modality from another. PURPOSE This review was designed to systematically identify and quantify the biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION The review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Adherence of medically focused articles was compared with that of engineering-focused articles overall with the Mann-Whitney U test and for each criterion with the Fisher exact test. DATA SYNTHESIS Median adherence was 69% for the relevant CLAIM criteria and 38% for PROBAST questions. CLAIM adherence was lower for engineering-focused than for medically focused articles (65% versus 73%, P < .001). Engineering-focused studies adhered better to the model-description criteria, whereas medically focused studies adhered better to the data set and evaluation descriptions. LIMITATIONS The review is limited by study-design and model heterogeneity. CONCLUSIONS Nearly all studies revealed critical issues preventing clinical application; engineering-focused studies showed higher adherence for the technical model description but significantly lower overall adherence than medically focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.
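The between-group comparison rests on the Mann-Whitney U statistic, which simply counts pairwise wins between the two samples (in practice one would use a statistics package such as scipy.stats.mannwhitneyu, which also supplies the p-value). A minimal sketch with hypothetical adherence percentages:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x vs. y: count of pairs (xi, yj) with xi > yj,
    ties contributing 0.5. Converting U to a p-value requires the null
    distribution, which libraries handle."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

medical = [73, 75, 70]       # hypothetical CLAIM adherence, medically focused
engineering = [65, 68, 66]   # hypothetical CLAIM adherence, engineering focused
print(mann_whitney_u(medical, engineering))  # 9.0: every medical score wins
```

A rank-based test suits adherence percentages, which are bounded and unlikely to be normally distributed.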
Affiliation(s)
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroyuki Tatekawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
6
Kang SK, Heo M, Chung JY, Kim D, Shin SA, Choi H, Chung A, Ha JM, Kim H, Lee JS. Clinical Performance Evaluation of an Artificial Intelligence-Powered Amyloid Brain PET Quantification Method. Nucl Med Mol Imaging 2024; 58:246-254. PMID: 38932756; PMCID: PMC11196433; DOI: 10.1007/s13139-024-00861-6.
Abstract
Purpose This study assessed the clinical performance of BTXBrain-Amyloid, artificial intelligence-powered software for quantifying amyloid uptake in brain PET images. Methods 150 amyloid brain PET images were visually assessed by experts and categorized as negative or positive. The standardized uptake value ratio (SUVR) was calculated with cerebellar grey matter as the reference region, and receiver operating characteristic (ROC) and precision-recall (PR) analyses were conducted for BTXBrain-Amyloid. For comparison, the same image processing and analysis were performed with the Statistical Parametric Mapping (SPM) program. In addition, to evaluate spatial normalization (SN) performance, the mutual information (MI) between the MRI template and the spatially normalized PET images was calculated, and an SPM group analysis was conducted. Results Both the BTXBrain and SPM methods discriminated between negative and positive groups, but BTXBrain exhibited a lower SUVR standard deviation (0.06 and 0.21 for negative and positive, respectively) than SPM (0.11 and 0.25). In ROC analysis, BTXBrain had an AUC of 0.979, compared with 0.959 for SPM, while the PR curves gave an AUC of 0.983 for BTXBrain and 0.949 for SPM. At the optimal cut-off, sensitivity and specificity were 0.983 and 0.921 for BTXBrain versus 0.917 and 0.921 for SPM12. The MI evaluation also favored BTXBrain (0.848 vs. 0.823), indicating improved SN, and in the SPM group analysis BTXBrain was more sensitive in detecting basal ganglia differences between the negative and positive groups. Conclusion BTXBrain-Amyloid outperformed SPM in this clinical performance evaluation, demonstrating superior SN and improved detection of deep-brain differences. These results suggest the potential of BTXBrain-Amyloid as a valuable tool for clinical amyloid PET image evaluation.
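The ROC AUC reported here has a useful probabilistic reading: it is the probability that a randomly chosen positive scan receives a higher score (e.g. a composite SUVR) than a randomly chosen negative one. A minimal rank-counting sketch with hypothetical SUVRs (production pipelines would use a library routine such as sklearn.metrics.roc_auc_score):

```python
def auroc(scores, labels):
    """Area under the ROC curve via pairwise comparison: fraction of
    (positive, negative) pairs where the positive scores higher (ties 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

suvr = [1.45, 1.32, 1.20, 0.98, 1.15]  # hypothetical composite SUVRs
status = [1, 1, 0, 0, 1]               # expert visual read: 1 = positive
print(auroc(suvr, status))
```

An AUC of 0.979 versus 0.959 therefore means BTXBrain ranks positive scans above negative ones in a slightly larger fraction of all such pairs.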
Affiliation(s)
- Seung Kwan Kang
- Brightonix Imaging Inc., Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Mina Heo
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Ji Yeon Chung
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Daewoon Kim
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Hongyoon Choi
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
- Ari Chung
- Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Jung-Min Ha
- Department of Nuclear Medicine, College of Medicine, Chosun University and Chosun University Hospital, Gwangju, Korea
- Hoowon Kim
- Department of Neurology, College of Medicine, Chosun University and Chosun University Hospital, 365 Pilmun-Daero, Dong-Gu, Gwangju, South Korea
- Jae Sung Lee
- Brightonix Imaging Inc., Seoul, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
7
Xu K, Kang H. A Review of Machine Learning Approaches for Brain Positron Emission Tomography Data Analysis. Nucl Med Mol Imaging 2024; 58:203-212. PMID: 38932757; PMCID: PMC11196571; DOI: 10.1007/s13139-024-00845-6.
Abstract
Positron emission tomography (PET) imaging has advanced medical diagnostics and research across domains including cardiology, neurology, infection detection, and oncology. The integration of machine learning (ML) algorithms into PET data analysis has further enhanced its capabilities, including disease diagnosis and classification, image segmentation, and quantitative analysis. ML algorithms allow researchers and clinicians to extract valuable insights from large, complex PET datasets, enabling automated pattern recognition, predictive modeling of health outcomes, and more efficient data analysis. This review covers the basics of PET imaging, statistical methods for PET image analysis, and the challenges of PET data analysis. We also discuss how combining PET data with machine learning algorithms improves analysis capabilities, and how this combination is applied across various aspects of PET image research. The review also highlights current trends and future directions in PET imaging, emphasizing the critical, driving role of machine learning and big PET image data analytics in improving diagnostic accuracy and personalizing medical approaches; this integration will shape the future of medical diagnosis and research.
Affiliation(s)
- Ke Xu
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203, USA
- Hakmook Kang
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203, USA
8
Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024; 24:67. PMID: 38504179; PMCID: PMC10953143; DOI: 10.1186/s12880-024-01242-3.
Abstract
BACKGROUND Clinical data warehouses provide access to massive numbers of medical images, but these images are often heterogeneous: they can, for instance, include images acquired with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guaranteeing unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work was to evaluate how image translation can help exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models for converting contrast-enhanced T1-weighted (T1ce) into non-contrast-enhanced (T1nce) brain MRI. The models were trained on 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all models compared. The best-performing models were further validated on a segmentation task, which showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION Deep learning models initially developed with research-quality data could synthesize T1nce from T1ce images of clinical quality, and reliable features could be extracted from the synthetic images, demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
Affiliation(s)
- Simona Bottani
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire
- Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer
- Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
9
Curcuru AN, Yang D, An H, Cuculich PS, Robinson CG, Gach HM. Technical note: Minimizing CIED artifacts on a 0.35 T MRI-Linac using deep learning. J Appl Clin Med Phys 2024; 25:e14304. PMID: 38368615; DOI: 10.1002/acm2.14304.
Abstract
BACKGROUND Artifacts from implantable cardioverter defibrillators (ICDs) are a challenge to magnetic resonance imaging (MRI)-guided radiotherapy (MRgRT). PURPOSE This study tested an unsupervised generative adversarial network to mitigate ICD artifacts in balanced steady-state free precession (bSSFP) cine MRIs and improve image quality and tracking performance for MRgRT. METHODS Fourteen healthy volunteers (Group A) were scanned on a 0.35 T MRI-Linac with and without an MR-conditional ICD taped to their left pectoral to simulate an implanted ICD. bSSFP MRI data from 12 of the volunteers were used to train a CycleGAN model to reduce ICD artifacts. The data from the remaining two volunteers were used for testing. In addition, the dataset was reorganized three times using a leave-one-out scheme. Tracking metrics [Dice similarity coefficient (DSC), target registration error (TRE), and 95th percentile Hausdorff distance (95% HD)] were evaluated for whole-heart contours. Image quality metrics [normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), and multiscale structural similarity (MS-SSIM) scores] were evaluated. The technique was also tested qualitatively on three additional ICD datasets (Group B), including a patient with an implanted ICD. RESULTS For the whole-heart contour with CycleGAN reconstruction: 1) mean DSC rose from 0.910 to 0.935; 2) mean TRE dropped from 4.488 to 2.877 mm; and 3) mean 95% HD dropped from 10.236 to 7.700 mm. For the whole-body slice with CycleGAN reconstruction: 1) mean nRMSE dropped from 0.644 to 0.420; 2) mean MS-SSIM rose from 0.779 to 0.819; and 3) mean PSNR rose from 18.744 to 22.368. The three Group B datasets evaluated qualitatively displayed a reduction in ICD artifacts in the heart. CONCLUSION CycleGAN-generated reconstructions significantly improved both tracking and image quality metrics when used to mitigate artifacts from ICDs.
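The tracking metrics reported above (DSC and 95% HD) reduce to simple set operations on binary contour masks. A toy NumPy sketch of both, not the authors' evaluation code; the brute-force 95% HD below assumes small masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between binary masks (pixel units)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # all pairwise distances
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

# Toy contours: two overlapping squares standing in for heart segmentations.
a = np.zeros((32, 32), bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), bool); b[10:26, 10:26] = True
assert 0.0 < dice(a, b) < 1.0  # partial overlap
assert hd95(a, a) == 0.0       # identical masks have zero distance
```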
Affiliation(s)
- Austen N Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Deshan Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Hongyu An
- Departments of Radiology, Biomedical Engineering and Neurology, Washington University in St. Louis, St. Louis, Missouri, USA
- Phillip S Cuculich
- Departments of Cardiovascular Medicine and Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Clifford G Robinson
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- H Michael Gach
- Departments of Radiation Oncology, Radiology and Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
10
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. [PMID: 37708016 DOI: 10.1109/tbme.2023.3315268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/16/2023]
Abstract
OBJECTIVE Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs) because of the low incidence of NETs and the expensive lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list-mode-simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode-simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module, and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods on real clinical 68Ga-DOTATATE PET images and produces performance that is very competitive with a target model trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce the human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
11
Izadi S, Shiri I, F Uribe C, Geramifar P, Zaidi H, Rahmim A, Hamarneh G. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks. Z Med Phys 2024:S0939-3889(24)00002-3. [PMID: 38302292 DOI: 10.1016/j.zemedi.2024.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 12/24/2023] [Accepted: 01/10/2024] [Indexed: 02/03/2024]
Abstract
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is that tracer uptake varies across patients and anatomical regions. However, existing deep learning-based algorithms apply a fixed model across different subjects and/or anatomical regions during inference, which could result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images in the absence of structural information at inference time. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency within the neighboring slices. This way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation purposes, which is more than one order of magnitude larger than in previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground-truth images corrected with the aid of CT scans. For quantitative assessment, we evaluated our proposed method over 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of -2.11 ± 2.73% for the whole body.
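The quantitative endpoints above (absolute relative error and signed relative error) reduce to simple voxel-wise formulas. A minimal sketch with hypothetical toy activity values (not the study's data):

```python
import numpy as np

def relative_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Signed relative error (%) of predicted vs. reference activity; errors
    of opposite sign can cancel, so this captures systematic bias."""
    return 100.0 * np.mean((pred - truth) / truth)

def abs_relative_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Absolute relative error (%); magnitude of error regardless of sign."""
    return 100.0 * np.mean(np.abs(pred - truth) / truth)

truth = np.array([4.0, 5.0, 10.0])
pred = np.array([3.8, 5.5, 10.0])  # -5%, +10%, 0% voxel-wise errors
assert abs(relative_error(pred, truth) - 5 / 3) < 1e-9   # partial cancellation
assert abs_relative_error(pred, truth) > abs(relative_error(pred, truth))
```

This distinction is why the paper reports both numbers: a small signed error with a larger absolute error indicates low bias but non-negligible voxel-wise deviation.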
Affiliation(s)
- Saeed Izadi
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada.
12
Xu Z, Tang J, Qi C, Yao D, Liu C, Zhan Y, Lukasiewicz T. Cross-domain attention-guided generative data augmentation for medical image analysis with limited data. Comput Biol Med 2024; 168:107744. [PMID: 38006826 DOI: 10.1016/j.compbiomed.2023.107744] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 11/12/2023] [Accepted: 11/20/2023] [Indexed: 11/27/2023]
Abstract
Data augmentation is widely applied to medical image analysis tasks with limited datasets, imbalanced classes, and insufficient annotations. However, traditional augmentation techniques cannot supply extra information, making diagnostic performance unsatisfactory. GAN-based generative methods have thus been proposed to obtain additional useful information and realize more effective data augmentation, but existing generative data augmentation techniques mainly encounter two problems: (i) current generative data augmentation lacks the capability to use cross-domain differential information to extend limited datasets; (ii) existing generative methods cannot provide effective supervised information for medical image segmentation tasks. To solve these problems, we propose an attention-guided cross-domain tumor image generation model (CDA-GAN) with an information enhancement strategy. CDA-GAN can generate diverse samples to expand the scale of datasets, improving the performance of medical image diagnosis and treatment tasks. In particular, we incorporate channel attention into a CycleGAN-based cross-domain generation network that captures inter-domain information and generates positive or negative samples of brain tumors. In addition, we propose a semi-supervised spatial attention strategy to guide the spatial information of features at the pixel level during tumor generation. Furthermore, we add spectral normalization to prevent mode collapse in the discriminator and stabilize the training procedure. Finally, to resolve the inapplicability of the model to segmentation tasks, we further propose an application strategy that uses this data augmentation model to achieve more accurate medical image segmentation with limited data.
Experimental studies on two public brain tumor datasets (BraTS and TCIA) show that the proposed CDA-GAN model greatly outperforms the state-of-the-art generative data augmentation in both practical medical image classification tasks and segmentation tasks; e.g. CDA-GAN is 0.50%, 1.72%, 2.05%, and 0.21% better than the best SOTA baseline in terms of ACC, AUC, Recall, and F1, respectively, in the classification task of BraTS, while its improvements w.r.t. the best SOTA baseline in terms of Dice, Sens, HD95, and mIOU, in the segmentation task of TCIA are 2.50%, 0.90%, 14.96%, and 4.18%, respectively.
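Spectral normalization, used above to stabilize the discriminator, rescales each weight matrix by its largest singular value, typically estimated by power iteration. A NumPy sketch of the idea under that standard formulation (deep learning frameworks fold this into the layer and reuse the iteration vectors across steps):

```python
import numpy as np

def spectral_normalize(w: np.ndarray, n_iter: int = 200, eps: float = 1e-12) -> np.ndarray:
    """Divide a weight matrix by its largest singular value, estimated
    via power iteration on w and w.T."""
    u = np.random.default_rng(0).normal(size=w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v) + eps
        u = w @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ w @ v  # estimated top singular value
    return w / sigma

w = np.random.default_rng(1).normal(size=(8, 16))
w_sn = spectral_normalize(w)
# After normalization the spectral norm is ~1, bounding the layer's Lipschitz
# constant, which is what stabilizes GAN discriminator training.
assert abs(np.linalg.svd(w_sn, compute_uv=False)[0] - 1.0) < 1e-3
```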
Affiliation(s)
- Zhenghua Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Jiaqi Tang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Chang Qi
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China; Institute of Logic and Computation, Vienna University of Technology, Vienna, Austria.
- Dan Yao
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Caihua Liu
- College of Computer Science and Technology, Civil Aviation University of China, Tianjin, China
- Yuefu Zhan
- Department of Radiology, Hainan Women and Children's Medical Center, Haikou, China
- Thomas Lukasiewicz
- Institute of Logic and Computation, Vienna University of Technology, Vienna, Austria; Department of Computer Science, University of Oxford, Oxford, United Kingdom
13
Choi HJ, Seo M, Kim A, Park SH. Generation of Conventional 18F-FDG PET Images from 18F-Florbetaben PET Images Using Generative Adversarial Network: A Preliminary Study Using ADNI Dataset. MEDICINA (KAUNAS, LITHUANIA) 2023; 59:1281. [PMID: 37512092 PMCID: PMC10385186 DOI: 10.3390/medicina59071281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 07/07/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023]
Abstract
Background and Objectives: 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) (PETFDG) images can visualize neuronal injury of the brain in Alzheimer's disease. Early-phase amyloid PET images are reported to be similar to PETFDG images. This study aimed to generate PETFDG images from 18F-florbetaben PET (PETFBB) images using a generative adversarial network (GAN) and compare the generated PETFDG (PETGE-FDG) with real PETFDG (PETRE-FDG) images using the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR). Materials and Methods: Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, 110 participants with both PETFDG and PETFBB images at baseline were included. The paired PETFDG and PETFBB images included six and four subset images, respectively. Each subset image had a 5 min acquisition time. These subsets were randomly sampled and divided into 249 paired PETFDG and PETFBB subset images for the training datasets and 95 paired subset images for the validation datasets during the deep-learning process. The deep learning model used in this study is composed of a GAN with a U-Net. The differences in the SSIM and PSNR values between the PETGE-FDG and PETRE-FDG images in the cycleGAN and pix2pix models were evaluated using the independent Student's t-test. Statistical significance was set at p ≤ 0.05. Results: The participant demographics (age, sex, or diagnosis) showed no statistically significant differences between the training (82 participants) and validation (28 participants) groups. The mean SSIM between the PETGE-FDG and PETRE-FDG images was 0.768 ± 0.135 for the cycleGAN model and 0.745 ± 0.143 for the pix2pix model. The mean PSNR was 32.4 ± 9.5 for the cycleGAN model and 30.7 ± 8.0 for the pix2pix model. The PETGE-FDG images of the cycleGAN model showed a statistically higher mean SSIM than those of the pix2pix model (p < 0.001). The mean PSNR was also higher in the PETGE-FDG images of the cycleGAN model than in those of the pix2pix model (p < 0.001). Conclusions: We generated PETFDG images from PETFBB images using deep learning. The cycleGAN model generated PETGE-FDG images with higher SSIM and PSNR values than the pix2pix model. Image-to-image translation using deep learning may be useful for generating PETFDG images. These may provide additional information for the management of Alzheimer's disease without extra image acquisition and the consequent increase in radiation exposure, inconvenience, or expenses.
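SSIM, used above to compare generated and real PETFDG images, combines luminance, contrast, and structure terms. A simplified single-window sketch using whole-image statistics (library implementations instead average SSIM over a sliding Gaussian window, so values will differ):

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """SSIM computed once from whole-image statistics (simplified)."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx = ((x - mx) ** 2).mean()
    vy = ((y - my) ** 2).mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx * mx + my * my + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
img = rng.uniform(0, 1, (64, 64))
noisy = np.clip(img + rng.normal(0, 0.2, img.shape), 0, 1)
assert abs(global_ssim(img, img) - 1.0) < 1e-9  # identical images score 1
assert global_ssim(img, noisy) < 1.0            # degradation lowers the score
```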
Affiliation(s)
- Hyung Jin Choi
- Department of Nuclear Medicine, Ulsan University Hospital, Ulsan 44033, Republic of Korea
- Minjung Seo
- Department of Nuclear Medicine, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan 44033, Republic of Korea
- Ahro Kim
- Department of Neurology, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan 44033, Republic of Korea
- Seol Hoon Park
- Department of Nuclear Medicine, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan 44033, Republic of Korea
14
Hu J, Mougiakakou S, Xue S, Afshar-Oromieh A, Hautz W, Christe A, Sznitman R, Rominger A, Ebner L, Shi K. Artificial intelligence for reducing the radiation burden of medical imaging for the diagnosis of coronavirus disease. EUROPEAN PHYSICAL JOURNAL PLUS 2023; 138:391. [PMID: 37192839 PMCID: PMC10165296 DOI: 10.1140/epjp/s13360-023-03745-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 01/25/2023] [Indexed: 05/18/2023]
Abstract
Medical imaging has been intensively employed in screening, diagnosis and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted. Current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficient and complementary value of medical imaging was recognized at the beginning of the pandemic when facing unknown infectious diseases and a lack of sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for the theranostics of long-lasting post-COVID-19 syndrome. A critical concern for the application of medical imaging is the increased radiation burden, particularly when medical imaging is used for screening and rapid containment purposes. Emerging artificial intelligence (AI) technology provides the opportunity to reduce the radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; retrospectively identifying its potential during COVID-19 may still have positive implications for future public health.
Affiliation(s)
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Stavroula Mougiakakou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Song Xue
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Wolf Hautz
- Department of University Emergency Center of Inselspital, University of Bern, Freiburgstrasse 15, 3010 Bern, Switzerland
- Andreas Christe
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Lukas Ebner
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
15
Li Z, Fan Q, Bilgic B, Wang G, Wu W, Polimeni JR, Miller KL, Huang SY, Tian Q. Diffusion MRI data analysis assisted by deep learning synthesized anatomical images (DeepAnat). Med Image Anal 2023; 86:102744. [PMID: 36867912 PMCID: PMC10517382 DOI: 10.1016/j.media.2023.102744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 12/25/2022] [Accepted: 01/05/2023] [Indexed: 01/20/2023]
Abstract
Diffusion MRI is a useful neuroimaging tool for non-invasive mapping of human brain microstructure and structural connections. The analysis of diffusion MRI data often requires brain segmentation, including volumetric segmentation and cerebral cortical surfaces, derived from additional high-resolution T1-weighted (T1w) anatomical MRI data, which may be unacquired, corrupted by subject motion or hardware failure, or impossible to accurately co-register to diffusion data that are not corrected for susceptibility-induced geometric distortion. To address these challenges, this study proposes to synthesize high-quality T1w anatomical images directly from diffusion data using convolutional neural networks (CNNs) (entitled "DeepAnat"), including a U-Net and a hybrid generative adversarial network (GAN), and to perform brain segmentation on the synthesized T1w images or assist co-registration using them. Quantitative and systematic evaluations using data from 60 young subjects provided by the Human Connectome Project (HCP) show that the synthesized T1w images and the results of brain segmentation and comprehensive diffusion analysis tasks are highly similar to those obtained from native T1w data. The brain segmentation accuracy is slightly higher for the U-Net than for the GAN. The efficacy of DeepAnat is further validated on a larger dataset of 300 older subjects provided by the UK Biobank. Moreover, the U-Nets trained and validated on the HCP and UK Biobank data are shown to be highly generalizable to diffusion data from the Massachusetts General Hospital Connectome Diffusion Microstructure Dataset (MGH CDMD), acquired with different hardware systems and imaging protocols, and can therefore be used directly without retraining, or with fine-tuning for further improved performance. Finally, using data from 20 subjects from MGH CDMD, it is quantitatively demonstrated that the alignment between native T1w images and diffusion images uncorrected for geometric distortion, when assisted by synthesized T1w images, substantially improves upon direct co-registration of the diffusion and T1w images. In summary, our study demonstrates the benefits and practical feasibility of DeepAnat for assisting various diffusion MRI data analyses and supports its use in neuroscientific applications.
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, China; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Wenchuan Wu
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Jonathan R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Karla L Miller
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Qiyuan Tian
- Department of Biomedical Engineering, Tsinghua University, Beijing, China; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States.
16
Seo SY, Oh JS, Chung J, Kim SY, Kim JS. MR Template-Based Individual Brain PET Volumes-of-Interest Generation Neither Using MR nor Using Spatial Normalization. Nucl Med Mol Imaging 2023; 57:73-85. [PMID: 36998592 PMCID: PMC10043100 DOI: 10.1007/s13139-022-00772-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 07/01/2022] [Accepted: 08/29/2022] [Indexed: 10/10/2022] Open
Abstract
For anatomically precise quantitation of mouse brain PET, spatial normalization (SN) of PET onto an MR template and subsequent template volumes-of-interest (VOIs)-based analysis are commonly used. Although this creates a dependency on the corresponding MR and on the SN process, routine preclinical/clinical PET images cannot always afford a corresponding MR and the relevant VOIs. To resolve this issue, we propose deep learning (DL)-based generation of individual-brain-specific VOIs (i.e., cortex, hippocampus, striatum, thalamus, and cerebellum) directly from PET images, using inverse-spatial-normalization (iSN)-based VOI labels and a deep convolutional neural network (deep CNN) model. Our technique was applied to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F FDG PET scans before and after the administration of human immunoglobulin or antibody-based treatments. To train the CNN, PET images were used as inputs and MR iSN-based target VOIs as labels. Our devised methods achieved decent performance in terms of not only VOI agreement (i.e., Dice similarity coefficient) but also the correlation of mean counts and SUVR, and the CNN-based VOIs were highly concordant with the ground truth (the corresponding MR and MR template-based VOIs). Moreover, the performance metrics were comparable to those of VOIs generated by an MR-based deep CNN. In conclusion, we established a novel quantitative analysis method that generates individual brain space VOIs from MR template-based VOIs for PET image quantification in both an MR-less and SN-less fashion. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-022-00772-4.
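The SUVR quantification referenced above is a ratio of mean uptake in a target VOI to mean uptake in a reference region. A minimal sketch over a toy volume; the cerebellar reference and the specific masks below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def regional_suvr(pet: np.ndarray, voi_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Regional SUVR: mean uptake in a target VOI divided by mean uptake
    in a reference region (here assumed to be the cerebellum)."""
    return float(pet[voi_mask].mean() / pet[ref_mask].mean())

# Toy volume: a cortical VOI with 1.5x the uptake of the cerebellar reference.
pet = np.ones((16, 16, 16))
cortex = np.zeros(pet.shape, bool); cortex[2:6, 2:6, 2:6] = True
cereb = np.zeros(pet.shape, bool); cereb[10:14, 10:14, 10:14] = True
pet[cortex] = 1.5
assert regional_suvr(pet, cortex, cereb) == 1.5
```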
Affiliation(s)
- Seung Yeon Seo
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jungsu S. Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Jinwha Chung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seog-Young Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
17
Isgut M, Gloster L, Choi K, Venugopalan J, Wang MD. Systematic Review of Advanced AI Methods for Improving Healthcare Data Quality in Post COVID-19 Era. IEEE Rev Biomed Eng 2023; 16:53-69. [PMID: 36269930 DOI: 10.1109/rbme.2022.3216531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
At the beginning of the COVID-19 pandemic, there was significant hype about the potential impact of artificial intelligence (AI) tools in combatting COVID-19 through diagnosis, prognosis, or surveillance. However, AI tools have not yet been widely successful. One of the key reasons is that the COVID-19 pandemic demanded faster real-time development of AI-driven clinical and health support tools, including rapid data collection, algorithm development, validation, and deployment, leaving insufficient time for proper data quality control. Learning from the hard lessons of COVID-19, we summarize the important health data quality challenges during the COVID-19 pandemic, such as a lack of data standardization, missing data, tabulation errors, and noise and artifacts. We then conduct a systematic investigation of computational methods that address these issues, including emerging novel advanced AI data quality control methods that achieve better data quality outcomes and, in some cases, simplify or automate the data cleaning process. We hope this article can assist the healthcare community in improving health data quality going forward with novel AI development.
18
Zhang A, Xing L, Zou J, Wu JC. Shifting machine learning for healthcare from development to deployment and from models to data. Nat Biomed Eng 2022; 6:1330-1345. [PMID: 35788685 DOI: 10.1038/s41551-022-00898-y] [Citation(s) in RCA: 58] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 05/03/2022] [Indexed: 01/14/2023]
Abstract
In the past decade, the application of machine learning (ML) to healthcare has helped drive the automation of physician tasks as well as enhancements in clinical capabilities and access to care. This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance.
Collapse
Affiliation(s)
- Angela Zhang
- Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA; Department of Genetics, School of Medicine, Stanford University, Stanford, CA, USA; Greenstone Biosciences, Palo Alto, CA, USA; Department of Computer Science, Stanford University, Stanford, CA, USA.
| | - Lei Xing
- Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, USA
| | - James Zou
- Department of Computer Science, Stanford University, Stanford, CA, USA.,Department of Biomedical Informatics, School of Medicine, Stanford University, Stanford, CA, USA
| | - Joseph C Wu
- Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA. .,Greenstone Biosciences, Palo Alto, CA, USA. .,Departments of Medicine, Division of Cardiovascular Medicine Stanford University, Stanford, CA, USA. .,Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA.
| |
Collapse
|
19
|
You SH, Cho Y, Kim B, Yang KS, Kim BK, Park SE. Synthetic Time of Flight Magnetic Resonance Angiography Generation Model Based on Cycle-Consistent Generative Adversarial Network Using PETRA-MRA in the Patients With Treated Intracranial Aneurysm. J Magn Reson Imaging 2022; 56:1513-1528. [PMID: 35142407 DOI: 10.1002/jmri.28114] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND Pointwise encoding time reduction with radial acquisition (PETRA) magnetic resonance angiography (MRA) is useful for evaluating intracranial aneurysm recurrence, but the problems of severe background noise and low peripheral signal-to-noise ratio (SNR) remain. Deep learning could reduce noise using high- and low-quality images. PURPOSE To develop a cycle-consistent generative adversarial network (cycleGAN)-based deep learning model to generate synthetic time-of-flight MRA (synTOF) images from PETRA. STUDY TYPE Retrospective. POPULATION A total of 377 patients (mean age: 60 ± 11; 293 females) with treated intracranial aneurysms who underwent both PETRA and TOF from October 2017 to January 2021. Data were randomly divided into training (49.9%, 188/377) and validation (50.1%, 189/377) groups. FIELD STRENGTH/SEQUENCE Ultra-short echo time and TOF-MRA on a 3-T MR system. ASSESSMENT For the cycleGAN model, the peak SNR (PSNR) and structural similarity (SSIM) were evaluated. Image quality was compared qualitatively (5-point Likert scale) and quantitatively (SNR). A multireader diagnostic optimality evaluation was performed with 17 radiologists (experience of 1-18 years). STATISTICAL TESTS Generalized estimating equation analysis, Friedman's test, McNemar test, and Spearman's rank correlation. P < 0.05 indicated statistical significance. RESULTS The PSNR and SSIM between synTOF and TOF were 17.51 [16.76; 18.31] dB and 0.71 ± 0.02. The median values of overall image quality, noise, sharpness, and vascular conspicuity were significantly higher for synTOF than for PETRA (4.00 [4.00; 5.00] vs. 4.00 [3.00; 4.00]; 5.00 [4.00; 5.00] vs. 3.00 [2.00; 4.00]; 4.00 [4.00; 4.00] vs. 4.00 [3.00; 4.00]; 3.00 [3.00; 4.00] vs. 3.00 [2.00; 3.00]). The SNRs of the middle cerebral arteries were the highest for synTOF (synTOF vs. TOF vs. PETRA; 63.67 [43.25; 105.00] vs. 52.42 [32.88; 74.67] vs. 21.05 [12.34; 37.88]). In the multireader evaluation, there was no significant difference in diagnostic optimality or preference between synTOF and TOF (19.00 [18.00; 19.00] vs. 20.00 [18.00; 20.00], P = 0.510; 8.00 [6.00; 11.00] vs. 11.00 [9.00, 14.00], P = 1.000). DATA CONCLUSION The cycleGAN-based deep learning model provided synTOF images free from background artifacts. The synTOF could be a versatile alternative to TOF in patients who have undergone PETRA for evaluating treated aneurysms. EVIDENCE LEVEL 4 TECHNICAL EFFICACY: Stage 1.
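The fidelity metrics reported in this entry (PSNR and SSIM) are standard image-similarity measures. As a rough illustration (not the authors' code), they can be computed with NumPy; note that this `global_ssim` uses global image statistics rather than the windowed SSIM typically reported in such studies:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range, k1=0.01, k2=0.03):
    """Simplified SSIM using global (whole-image) statistics."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Windowed SSIM (as in scikit-image) additionally averages this ratio over local patches, which is what published values such as 0.71 ± 0.02 usually refer to.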
Affiliation(s)
- Sung-Hye You
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
- Yongwon Cho
- Biomedical Research Center, Korea University College of Medicine, Korea
- Byungjun Kim
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
- Kyung-Sook Yang
- Department of Biostatistics, Korea University College of Medicine, Seoul, Korea
- Bo Kyu Kim
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
- Sang Eun Park
- Department of Radiology, Anam Hospital, Korea University College of Medicine, Korea
|
20
|
Sun J, Jin S, Shi R, Zuo C, Jiang J. Application and prospect for generative adversarial networks in cross-modality reconstruction of medical images. ZHONG NAN DA XUE XUE BAO. YI XUE BAN = JOURNAL OF CENTRAL SOUTH UNIVERSITY. MEDICAL SCIENCES 2022; 47:1001-1008. [PMID: 36097767 PMCID: PMC10950103 DOI: 10.11817/j.issn.1672-7347.2022.220189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Indexed: 06/15/2023]
Abstract
Cross-modality reconstruction of medical images refers to predicting the image of one modality from another so as to achieve more accurate personalized medicine. Generative adversarial networks (GANs) are the most commonly used deep learning technique in cross-modality reconstruction. A GAN can generate realistic images by learning an implicit distribution that follows the distribution of the real data, and can then rapidly reconstruct the image of another modality. With the sharp increase in clinical demand for multi-modality medical images, this technology has been widely used for cross-modality reconstruction between different medical image modalities, such as magnetic resonance imaging, computed tomography, and positron emission tomography, and can achieve accurate and efficient cross-modality image reconstruction in different parts of the body, such as the brain and heart. Although GANs have achieved some success in cross-modality reconstruction, their stability, generalization ability, and accuracy still need further research and improvement.
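The cycle-consistency constraint underlying many of these cross-modality GANs can be sketched in a few lines. This is a hypothetical NumPy illustration of the cycle loss term only; real models add adversarial losses and use learned neural-network generators rather than the toy mapping functions shown here:

```python
import numpy as np

def l1(a, b):
    """Mean absolute (L1) difference between two images."""
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(x_mri, y_pet, g, f, lam=10.0):
    """CycleGAN-style cycle term (adversarial terms omitted).
    g maps MRI -> PET, f maps PET -> MRI; lam weights the cycle loss."""
    cycle = l1(f(g(x_mri)), x_mri) + l1(g(f(y_pet)), y_pet)
    return lam * cycle
```

With perfectly inverse generators the cycle loss vanishes; any residual round-trip error is penalized proportionally to `lam`.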
Affiliation(s)
- Jie Sun
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444
- Shichen Jin
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444
- Rong Shi
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444
- Chuantao Zuo
- PET Center, Huashan Hospital Affiliated to Fudan University, Shanghai 200040, China. zuochuantao@fudan.edu.cn
- Jiehui Jiang
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444. jiangjiehui@shu.edu.cn
|
21
|
Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis. Med Image Anal 2022; 79:102461. [DOI: 10.1016/j.media.2022.102461] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 02/22/2022] [Accepted: 04/13/2022] [Indexed: 11/18/2022]
|
22
|
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 04/03/2022] [Accepted: 04/22/2022] [Indexed: 11/03/2022]
|
23
|
A Synopsis of Machine and Deep Learning in Medical Physics and Radiology. JOURNAL OF BASIC AND CLINICAL HEALTH SCIENCES 2022. [DOI: 10.30621/jbachs.960154] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Machine learning (ML) and deep learning (DL) technologies introduced in the fields of medical physics, radiology, and oncology have made great strides in the past few years, and many applications have proven efficacious for automated diagnosis and radiotherapy. This paper outlines DL's general concepts and principles, key computational methods, and resources, as well as the implementation of automated models in diagnostic radiology and radiation oncology research. The potential challenges of DL technology and their solutions are also discussed.
|
24
|
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611 DOI: 10.1007/s00259-022-05805-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 04/12/2022] [Indexed: 11/04/2022]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications across medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories. Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The research identified a hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be overcome before they can reach their full potential and gain the medical community's trust in everyday clinical practice.
|
25
|
Seo SY, Kim SJ, Oh JS, Chung J, Kim SY, Oh SJ, Joo S, Kim JS. Unified Deep Learning-Based Mouse Brain MR Segmentation: Template-Based Individual Brain Positron Emission Tomography Volumes-of-Interest Generation Without Spatial Normalization in Mouse Alzheimer Model. Front Aging Neurosci 2022; 14:807903. [PMID: 35309883 PMCID: PMC8931825 DOI: 10.3389/fnagi.2022.807903] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 01/17/2022] [Indexed: 02/03/2023] Open
Abstract
Although skull-stripping and brain region segmentation are essential for precise quantitative analysis of positron emission tomography (PET) of mouse brains, unified deep learning (DL)-based solutions, particularly for spatial normalization (SN), have posed a challenging problem in DL-based image processing. In this study, we propose a DL-based approach to resolve these issues. We generated both skull-stripping masks and individual brain-specific volumes-of-interest (VOIs: cortex, hippocampus, striatum, thalamus, and cerebellum) based on inverse spatial normalization (iSN) and deep convolutional neural network (deep CNN) models. We applied the proposed methods to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans twice, before and after the administration of human immunoglobulin or antibody-based treatments. For training the CNN, manually traced brain masks and iSN-based target VOIs were used as labels. We compared our CNN-based VOIs with conventional (template-based) VOIs in terms of the correlation of standardized uptake value ratios (SUVRs) computed by both methods and two-sample t-tests of SUVR % changes in target VOIs before and after treatment. Our deep CNN-based method successfully generated brain parenchyma masks and target VOIs, showing no significant difference from conventional VOI methods in the SUVR correlation analysis, thus establishing a template-based VOI method that requires no SN.
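The SUVR quantification step that several of these entries rely on (regional mean uptake normalized to a reference region, here the cerebellum) can be illustrated with a minimal NumPy sketch; the helper below is hypothetical and assumes boolean VOI masks already aligned to the PET volume:

```python
import numpy as np

def regional_suvr(pet, voi_masks, reference_mask):
    """Standardized uptake value ratio per VOI: mean uptake in each
    region divided by mean uptake in the reference region (e.g. cerebellum)."""
    ref_mean = pet[reference_mask].mean()
    return {name: pet[mask].mean() / ref_mean for name, mask in voi_masks.items()}
```

Whether the masks come from a template after spatial normalization or from a CNN in native space, the SUVR arithmetic itself is identical, which is why the two pipelines can be compared by correlating their regional SUVRs.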
Affiliation(s)
- Seung Yeon Seo
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Soo-Jong Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Songpa-gu, South Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon-si, South Korea
- Jungsu S. Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Correspondence: Jungsu S. Oh
- Jinwha Chung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Seog-Young Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Seung Jun Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Segyeong Joo
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
- Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, Songpa-gu, South Korea
|
26
|
Platscher M, Zopes J, Federau C. Image translation for medical image generation: Ischemic stroke lesion segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103283] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
27
|
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
|
28
|
Lee JS, Kim KM, Choi Y, Kim HJ. A Brief History of Nuclear Medicine Physics, Instrumentation, and Data Sciences in Korea. Nucl Med Mol Imaging 2021; 55:265-284. [PMID: 34868376 DOI: 10.1007/s13139-021-00721-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 10/14/2021] [Accepted: 10/18/2021] [Indexed: 10/19/2022] Open
Abstract
We review the history of nuclear medicine physics, instrumentation, and data sciences in Korea to commemorate the 60th anniversary of the Korean Society of Nuclear Medicine. In the 1970s and 1980s, the development of SPECT, nuclear stethoscope, and bone densitometry systems, as well as kidney and cardiac image analysis technology, marked the beginning of nuclear medicine physics and engineering in Korea. With the introduction of PET and cyclotron in Korea in 1994, nuclear medicine imaging research was further activated. With the support of large-scale government projects, the development of gamma camera, SPECT, and PET systems was carried out. Exploiting the use of PET scanners in conjunction with cyclotrons, extensive studies on myocardial blood flow quantification and brain image analysis were also actively pursued. In 2005, Korea's first domestic cyclotron succeeded in producing radioactive isotopes, and the cyclotron was provided to six universities and university hospitals, thereby facilitating the nationwide supply of PET radiopharmaceuticals. Since the late 2000s, research on PET/MRI has been actively conducted, and the advanced research results of Korean scientists in the fields of silicon photomultiplier PET and simultaneous PET/MRI have attracted significant attention from the academic community. Currently, Korean researchers are actively involved in endeavors to solve a variety of complex problems in nuclear medicine using artificial intelligence and deep learning technologies.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Kyeong Min Kim
- Department of Isotopic Drug Development, Korea Radioisotope Center for Pharmaceuticals, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Yong Choi
- Department of Electronic Engineering, Sogang University, Seoul, Korea
- Hee-Joung Kim
- Department of Radiological Science, Yonsei University, Wonju, Korea
|
29
|
Qu C, Zou Y, Dai Q, Ma Y, He J, Liu Q, Kuang W, Jia Z, Chen T, Gong Q. Advancing diagnostic performance and clinical applicability of deep learning-driven generative adversarial networks for Alzheimer's disease. PSYCHORADIOLOGY 2021; 1:225-248. [PMID: 38666217 PMCID: PMC10917234 DOI: 10.1093/psyrad/kkab017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/18/2021] [Accepted: 11/25/2021] [Indexed: 02/05/2023]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease that severely affects the activities of daily living in aged individuals and typically needs to be diagnosed at an early stage. Generative adversarial networks (GANs) provide a new deep learning method that shows good performance in image processing, but it remains to be verified whether GANs bring benefit to AD diagnosis. The purpose of this research is to systematically review psychoradiological studies on the application of GANs to the diagnosis of AD, covering both classification of AD state and AD-related image processing, in comparison with other methods. In addition, we evaluated the research methodology of these studies and provide suggestions from the perspective of clinical application. Compared with other methods, GANs achieve higher accuracy in the classification of AD state and better performance in AD-related image processing (e.g. image denoising and segmentation). Most studies used data from public databases but lacked clinical validation, and the process of quantitative assessment and comparison in these studies lacked clinicians' participation, which may limit improvement of the generation quality and generalization ability of GAN models. The application value of GANs in the classification of AD state and in AD-related image processing has been confirmed in the reviewed studies. Improvements toward better GAN architectures are also discussed in this paper. In sum, the present study demonstrates the advancing diagnostic performance and clinical applicability of GANs for AD and suggests that future researchers should consider recruiting clinicians to compare the algorithms with manual methods and to evaluate their clinical effect.
Affiliation(s)
- Changxing Qu
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu 610044, China
- Yinxi Zou
- West China School of Medicine, Sichuan University, Chengdu 610044, China
- Qingyi Dai
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu 610044, China
- Yingqiao Ma
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Jinbo He
- School of Psychology, Central China Normal University, Wuhan 430079, China
- Qihong Liu
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Weihong Kuang
- Department of Psychiatry, West China Hospital of Sichuan University, Chengdu 610065, China
- Zhiyun Jia
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Taolin Chen
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu 610041, Sichuan, P.R. China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, P.R. China
- Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu 610044, China
- Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu 610041, Sichuan, P.R. China
- Functional and Molecular Imaging Key Laboratory of Sichuan Province, Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, P.R. China
|
30
|
Das SR, Lyu X, Duong MT, Xie L, McCollum L, de Flores R, DiCalogero M, Irwin DJ, Dickerson BC, Nasrallah IM, Yushkevich PA, Wolk DA. Tau-Atrophy Variability Reveals Phenotypic Heterogeneity in Alzheimer's Disease. Ann Neurol 2021; 90:751-762. [PMID: 34617306 PMCID: PMC8841129 DOI: 10.1002/ana.26233] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Revised: 09/27/2021] [Accepted: 09/27/2021] [Indexed: 01/02/2023]
Abstract
OBJECTIVE Tau neurofibrillary tangles (T) are the primary driver of downstream neurodegeneration (N) and subsequent cognitive impairment in Alzheimer's disease (AD). However, there is substantial variability in the T-N relationship, manifested as higher or lower atrophy than expected for the level of tau in a given brain region. The goal of this study was to determine whether region-based quantitation of this variability allows for identification of underlying modulatory factors, including polypathology. METHODS Cortical thickness (N) and 18F-flortaucipir SUVR (T) were computed in 104 gray matter regions from a cohort of cognitively impaired, amyloid-positive (A+) individuals. Region-specific residuals from a robust linear fit between SUVR and cortical thickness were computed as a surrogate for T-N mismatch. A summary T-N mismatch metric defined using these residuals was correlated with demographic and imaging-based modulatory factors and used to partition the cohort into data-driven subgroups. RESULTS The summary T-N mismatch metric correlated with underlying factors such as age and burden of white matter hyperintensity lesions. Data-driven subgroups based on clustering of residuals appear to represent distinct biologically relevant phenotypes, with groups showing distinct spatial patterns of higher or lower atrophy than expected. INTERPRETATION These data support the notion that a measure of deviation from a normative relationship between tau burden and neurodegeneration across brain regions in individuals on the AD continuum captures variability due to multiple underlying factors, and can reveal phenotypes which, if validated, may help identify contributors to neurodegeneration in addition to tau, and may ultimately be useful for cohort selection in clinical trials. ANN NEUROL 2021;90:751-762.
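The residual-based T-N mismatch idea can be illustrated as follows. This is a simplified sketch, not the study's code: it uses an ordinary least-squares fit for clarity, whereas the study fit a robust linear model per region; a positive residual means less atrophy (greater thickness) than expected for the observed tau burden:

```python
import numpy as np

def tn_mismatch_residuals(suvr, thickness):
    """Residuals from a linear fit of cortical thickness (N) on tau SUVR (T).
    Positive residual = thicker (less atrophic) than the fit predicts."""
    A = np.column_stack([suvr, np.ones_like(suvr)])   # design matrix: slope + intercept
    coef, *_ = np.linalg.lstsq(A, thickness, rcond=None)
    return thickness - A @ coef
```

Clustering subjects on these per-region residual vectors is what yields the data-driven subgroups described above.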
Affiliation(s)
- Sandhitsu R Das
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Xueying Lyu
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Michael Tran Duong
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Long Xie
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Lauren McCollum
- Department of Medicine, University of Tennessee, Knoxville, TN, USA
- Robin de Flores
- Université de Caen Normandie, INSERM UMRS U1237, Caen, France
- Michael DiCalogero
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- David J Irwin
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Ilya M Nasrallah
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Paul A Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- David A Wolk
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
|
31
|
Wang R, Liu H, Toyonaga T, Shi L, Wu J, Onofrey JA, Tsai YJ, Naganawa M, Ma T, Liu Y, Chen MK, Mecca AP, O’Dell RS, van Dyck CH, Carson RE, Liu C. Generation of synthetic PET images of synaptic density and amyloid from 18 F-FDG images using deep learning. Med Phys 2021; 48:5115-5129. [PMID: 34224153 PMCID: PMC8455448 DOI: 10.1002/mp.15073] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 06/11/2021] [Accepted: 06/12/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Positron emission tomography (PET) imaging with various tracers is increasingly used in Alzheimer's disease (AD) studies. However, access to PET scans using new or less-available tracers with sophisticated synthesis and short half-life isotopes may be very limited. Therefore, it is of great significance and interest in AD research to assess the feasibility of generating synthetic PET images of less-available tracers from the PET image of another common tracer, in particular 18F-FDG. METHODS We implemented advanced deep learning methods using the U-Net model to predict 11C-UCB-J PET images of synaptic vesicle protein 2A (SV2A), a surrogate of synaptic density, from 18F-FDG PET data. Dynamic 18F-FDG and 11C-UCB-J scans were performed in 21 participants with normal cognition (CN) and 33 participants with Alzheimer's disease (AD). The cerebellum was used as the reference region for both tracers. For 11C-UCB-J image prediction, four network models were trained and tested: 1) 18F-FDG SUV ratio (SUVR) to 11C-UCB-J SUVR, 2) 18F-FDG Ki ratio to 11C-UCB-J SUVR, 3) 18F-FDG SUVR to 11C-UCB-J distribution volume ratio (DVR), and 4) 18F-FDG Ki ratio to 11C-UCB-J DVR. The normalized root mean square error (NRMSE), structural similarity index (SSIM), and Pearson's correlation coefficient were calculated to evaluate overall image prediction accuracy. Mean bias in various brain ROIs and correlation plots between predicted and true images were calculated to assess ROI-based prediction accuracy. Following a similar training and evaluation strategy, an 18F-FDG SUVR to 11C-PiB SUVR network was also trained and tested for 11C-PiB static image prediction. RESULTS All four network models produced satisfactory 11C-UCB-J static and parametric images. For 11C-UCB-J SUVR prediction, the mean ROI bias was -0.3% ± 7.4% for the AD group and -0.5% ± 7.3% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 8.1% for the AD group and -1.3% ± 7.0% for the CN group with 18F-FDG Ki ratio as the input. For 11C-UCB-J DVR prediction, the mean ROI bias was -1.3% ± 7.5% for the AD group and -2.0% ± 6.9% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 9.0% for the AD group and -1.7% ± 7.8% for the CN group with 18F-FDG Ki ratio as the input. For 11C-PiB SUVR image prediction, which appears to be a more challenging task, incorporation of additional diagnostic information into the network is needed to keep the bias below 5% for most ROIs. CONCLUSIONS It is feasible to use 3D U-Net-based methods to generate synthetic 11C-UCB-J PET images from 18F-FDG images with reasonable prediction accuracy. It is also possible to predict 11C-PiB SUVR images from 18F-FDG images, though incorporation of additional non-imaging information is needed.
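The NRMSE and ROI percent-bias metrics used in this evaluation can be sketched as follows (an illustrative NumPy snippet, not the authors' code; normalization by the intensity range of the true image is one common NRMSE convention):

```python
import numpy as np

def nrmse(true_img, pred_img):
    """Root-mean-square error normalized by the true image's intensity range."""
    rmse = np.sqrt(np.mean((pred_img - true_img) ** 2))
    return rmse / (true_img.max() - true_img.min())

def roi_percent_bias(true_img, pred_img, roi_mask):
    """Percent bias of mean predicted uptake within one ROI."""
    t = true_img[roi_mask].mean()
    p = pred_img[roi_mask].mean()
    return 100.0 * (p - t) / t
```

Averaging `roi_percent_bias` over regions and subjects yields summary figures like the -0.3% ± 7.4% mean ROI bias reported above.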
Affiliation(s)
- Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
- Hui Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Luyao Shi
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Jing Wu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- John Aaron Onofrey
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Mika Naganawa
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Tianyu Ma
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
- Yaqiang Liu
- Department of Engineering Physics, Tsinghua University, Beijing, China
- Key Laboratory of Particle and Radiation Imaging, Ministry of Education, Tsinghua University, Beijing, China
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Adam P. Mecca
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
- Ryan S. O’Dell
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
- Christopher H. van Dyck
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Alzheimer’s Disease Research Unit, Yale University School of Medicine, New Haven, CT, USA
- Richard E. Carson
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
32
Accurate Transmission-Less Attenuation Correction Method for Amyloid-β Brain PET Using Deep Neural Network. ELECTRONICS 2021. [DOI: 10.3390/electronics10151836] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
The lack of physically measured attenuation maps (μ-maps) for attenuation and scatter correction is an important technical challenge in brain-dedicated stand-alone positron emission tomography (PET) scanners. The accuracy of calculated attenuation correction is limited by the nonuniformity of tissue composition due to pathologic conditions and the complex structure of facial bones. The aim of this study was to develop an accurate transmission-less attenuation correction method for amyloid-β (Aβ) brain PET studies. We investigated the validity of a deep convolutional neural network trained to produce a CT-derived μ-map (μ-CT) from the activity and attenuation maps simultaneously reconstructed with the MLAA (maximum likelihood reconstruction of activity and attenuation) algorithm for Aβ brain PET. The performance of three different U-net architectures (2D, 2.5D, and 3D) was compared. The U-net models generated less noisy and more uniform μ-maps than MLAA μ-maps. Among the three U-net models, the patch-based 3D U-net reduced noise and cross-talk artifacts most effectively. The Dice similarity coefficients between the μ-map generated using the 3D U-net and μ-CT were 0.83 for bone and 0.67 for air segments. All three U-net models showed better voxel-wise correlation of the μ-maps than MLAA, with the patch-based 3D U-net performing best. While the uptake values from MLAA yielded high percentage errors of 20% or more, the 3D U-net kept the percentage error within 5%. The proposed deep learning approach, which requires no transmission data, anatomic image, or atlas/template for PET attenuation correction, remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps from Aβ brain PET.
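The Dice similarity coefficient used above to compare bone and air segments of the generated and CT-derived μ-maps can be computed as below; the threshold values for splitting a μ-map into air and bone classes are hypothetical, chosen only for illustration:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def segment_mu_map(mu_map, air_thresh=0.02, bone_thresh=0.12):
    """Split a μ-map (values in cm^-1) into air and bone masks.
    The thresholds are illustrative, not the study's values."""
    return mu_map < air_thresh, mu_map > bone_thresh
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the reported 0.83 (bone) and 0.67 (air) indicate good but imperfect agreement with the CT-derived segments.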
33
Jiang C, Zhang X, Zhang N, Zhang Q, Zhou C, Yuan J, He Q, Yang Y, Liu X, Zheng H, Fan W, Hu Z, Liang D. Synthesizing PET/MR (T1-weighted) images from non-attenuation-corrected PET images. Phys Med Biol 2021; 66. [PMID: 34098534 DOI: 10.1088/1361-6560/ac08b2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 06/07/2021] [Indexed: 11/12/2022]
Abstract
Positron emission tomography (PET) imaging can be used for early detection, diagnosis, and postoperative patient monitoring of many diseases. Traditional PET imaging requires not only additional computed tomography (CT) or magnetic resonance (MR) imaging to provide anatomical information but also calculation of an attenuation correction (AC) map based on the CT or MR images for accurate quantitative estimation. During a patient's treatment, PET/CT or PET/MR scans are inevitably repeated many times, leading to additional doses of ionizing radiation (CT scans) and additional economic and time costs (MR scans). To reduce these adverse effects while still obtaining high-quality PET/MR images over the course of a patient's treatment, especially when evaluating the effect of postoperative treatment, we propose a new deep-learning-based method that directly obtains synthetic attenuation-corrected PET (sAC PET) and synthetic T1-weighted MR (sMR) images from non-attenuation-corrected PET (NAC PET) images alone. Our model, based on the Wasserstein generative adversarial network, first removes noise and artifacts from the NAC PET images to generate sAC PET images and then generates sMR images from the obtained sAC PET images. To evaluate the performance of this generative model, we tested it on paired PET/MR images from a total of eighty clinical patients. In both qualitative and quantitative analyses, the generated sAC PET and sMR images showed a high degree of similarity to the real AC PET and real MR images. These results indicate that our proposed method can reduce the frequency of additional anatomical imaging scans during PET imaging and has great potential for improving doctors' clinical diagnostic efficiency, reducing patients' costs, and avoiding the radiation risk of CT scanning.
Affiliation(s)
- Changhui Jiang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China; National Innovation Center for Advanced Medical Devices, Shenzhen 518131, People's Republic of China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Qiang He
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, China Academy of Sciences, Shenzhen 518055, People's Republic of China
34
Discovering Digital Tumor Signatures-Using Latent Code Representations to Manipulate and Classify Liver Lesions. Cancers (Basel) 2021; 13:cancers13133108. [PMID: 34206336 PMCID: PMC8269051 DOI: 10.3390/cancers13133108] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 05/30/2021] [Accepted: 06/16/2021] [Indexed: 11/17/2022] Open
Abstract
Simple Summary
We use a generative deep learning paradigm for the identification of digital signatures in radiological imaging data. The model is trained on a small in-house data set and evaluated on publicly available data. Apart from using the learned signatures for the characterization of lesions, in analogy to radiomics features, we also demonstrate that by manipulating them we can create realistic synthetic CT image patches. This generation of synthetic data can be carried out at user-defined spatial locations. Moreover, the discrimination of liver lesions from normal liver tissue can be achieved with high accuracy, sensitivity, and specificity.
Abstract
Modern generative deep learning (DL) architectures allow for unsupervised learning of latent representations that can be exploited in several downstream tasks. Within the field of oncological medical imaging, we term these latent representations “digital tumor signatures” and hypothesize that they can be used, in analogy to radiomics features, to differentiate between lesions and normal liver tissue. Moreover, we conjecture that they can be used for the generation of synthetic data, specifically for the artificial insertion and removal of liver tumor lesions at user-defined spatial locations in CT images. Our approach utilizes an implicit autoencoder, an unsupervised model architecture that combines an autoencoder and two generative adversarial network (GAN)-like components. The model was trained on liver patches from 25 or 57 in-house abdominal CT scans, depending on the experiment, demonstrating that only minimal data is required for synthetic image generation. The model was evaluated on a publicly available data set of 131 scans. We show that a PCA embedding of the latent representation captures the structure of the data, providing the foundation for the targeted insertion and removal of tumor lesions. To assess the quality of the synthetic images, we conducted two experiments with five radiologists. In the first experiment, only one rater and the ensemble-rater were marginally above the chance level in distinguishing real from synthetic data; in the second experiment, no rater was above the chance level. To illustrate that the “digital signatures” can also be used to differentiate lesions from normal tissue, we employed several machine learning methods. The best-performing method, a LinearSVM, obtained 95% (97%) accuracy, 94% (95%) sensitivity, and 97% (99%) specificity, depending on whether all data or only normal-appearing patches were used for training of the implicit autoencoder. Overall, we demonstrate that the proposed unsupervised learning paradigm can be utilized for the removal and insertion of liver lesions at user-defined spatial locations and that the digital signatures can be used to discriminate between lesions and normal liver tissue in abdominal CT scans.
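The PCA embedding and LinearSVM classification steps described above can be sketched with scikit-learn. The 64-dimensional latent vectors below are synthetic stand-ins drawn from two Gaussians; the real "digital signatures" come from the implicit autoencoder, which is not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for latent "digital signatures" (hypothetical data)
lesion = rng.normal(1.0, 1.0, size=(200, 64))
normal = rng.normal(-1.0, 1.0, size=(200, 64))
X = np.vstack([lesion, normal])
y = np.array([1] * 200 + [0] * 200)  # 1 = lesion, 0 = normal tissue

# 2-D PCA embedding, used to inspect the structure of the latent space
embedding = PCA(n_components=2).fit_transform(X)

# LinearSVM separating lesion signatures from normal-tissue signatures
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
accuracy = LinearSVC(dual=False).fit(X_tr, y_tr).score(X_te, y_te)
```

With well-separated clusters like these, a linear decision boundary suffices, which is consistent with the high accuracy the abstract reports for a LinearSVM on the learned signatures.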
35
Abstract
Detecting fluorescence in the second near-infrared window (NIR-II) up to ∼1,700 nm has emerged as a novel in vivo imaging modality with high spatial and temporal resolution through millimeter tissue depths. Imaging in the NIR-IIb window (1,500-1,700 nm) is the most effective one-photon approach to suppressing light scattering and maximizing imaging penetration depth, but relies on nanoparticle probes such as PbS/CdS containing toxic elements. On the other hand, imaging the NIR-I (700-1,000 nm) or NIR-IIa window (1,000-1,300 nm) can be done using biocompatible small-molecule fluorescent probes including US Food and Drug Administration-approved dyes such as indocyanine green (ICG), but has a caveat of suboptimal imaging quality due to light scattering. It is highly desired to achieve the performance of NIR-IIb imaging using molecular probes approved for human use. Here, we trained artificial neural networks to transform a fluorescence image in the shorter-wavelength NIR window of 900-1,300 nm (NIR-I/IIa) to an image resembling an NIR-IIb image. With deep-learning translation, in vivo lymph node imaging with ICG achieved an unprecedented signal-to-background ratio of >100. Using preclinical fluorophores such as IRDye-800, translation of ∼900-nm NIR molecular imaging of PD-L1 or EGFR greatly enhanced tumor-to-normal tissue ratio up to ∼20 from ∼5 and improved tumor margin localization. Further, deep learning greatly improved in vivo noninvasive NIR-II light-sheet microscopy (LSM) in resolution and signal/background. NIR imaging equipped with deep learning could facilitate basic biomedical research and empower clinical diagnostics and imaging-guided surgery in the clinic.
36
Kang SK, Lee JS. Anatomy-guided PET reconstruction using l1 Bowsher prior. Phys Med Biol 2021; 66. [PMID: 33780912 DOI: 10.1088/1361-6560/abf2f7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 03/29/2021] [Indexed: 12/22/2022]
Abstract
Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method based on second-order smoothing priors sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iteratively reweighting scheme to overcome the limitation of the original Bowsher method. In addition, we have derived a closed solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In the simulation and real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. Besides, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving PET image quality based on anatomical side information.
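The core idea of the Bowsher prior, selecting, for each voxel, the neighbours whose anatomical (MRI) values are most similar and penalizing PET differences only over those neighbours, can be shown in a 1-D toy sketch. The neighbourhood, the number of kept neighbours, and the plain summation are all illustrative; the paper's closed-form reconstruction update and reweighting scheme are not reproduced:

```python
def bowsher_penalty(pet, mri, n_keep=2, p=1):
    """1-D sketch of a Bowsher-type prior: for each voxel, keep the n_keep
    neighbours most similar in the MRI image, then penalize PET differences
    over those neighbours (p=1 gives an l1-style penalty, p=2 an l2 one)."""
    total = 0.0
    n = len(pet)
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1, i - 2, i + 2) if 0 <= j < n]
        nbrs.sort(key=lambda j: abs(mri[j] - mri[i]))  # most MRI-similar first
        total += sum(abs(pet[i] - pet[j]) ** p for j in nbrs[:n_keep])
    return total
```

When the PET edge coincides with the MRI edge, the selected neighbours never straddle the boundary and the penalty vanishes; with a structureless anatomical prior, the penalty smooths across the edge, which illustrates why anatomical guidance preserves matched boundaries.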
Affiliation(s)
- Seung Kwan Kang
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
37
Cheng D, Qiu N, Zhao F, Mao Y, Li C. Research on the Modality Transfer Method of Brain Imaging Based on Generative Adversarial Network. Front Neurosci 2021; 15:655019. [PMID: 33790739 PMCID: PMC8005554 DOI: 10.3389/fnins.2021.655019] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Accepted: 02/22/2021] [Indexed: 12/27/2022] Open
Abstract
Brain imaging technology is an important means to study brain diseases. The most commonly used brain imaging technologies are fMRI and EEG. Clinical practice has shown that although fMRI is superior to EEG for observing the anatomical details of some diseases that are difficult to diagnose, its costs are prohibitive; in particular, a growing number of patients with metal implants cannot use this technology. In contrast, EEG technology is easier to implement. Therefore, to break through the limitations of fMRI technology, we propose a brain imaging modality transfer framework, namely BMT-GAN, based on a generative adversarial network. The framework introduces a new non-adversarial loss to reduce the perceptual and style differences between input and output images. It realizes the conversion from EEG modality data to fMRI modality data and provides comprehensive EEG and fMRI reference information for radiologists. Finally, a qualitative and quantitative comparison with existing GAN-based brain imaging modality transfer approaches demonstrates the superiority of our framework.
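Non-adversarial perceptual and style losses of the kind described above are commonly built from feature-map distances and Gram-matrix distances. A minimal sketch follows; in practice the feature maps come from a pretrained network, which is omitted here, and whether BMT-GAN uses exactly these formulations is an assumption:

```python
import numpy as np

def gram(feats):
    """Gram matrix of a (channels, height*width) feature map,
    capturing channel-to-channel correlations ("style")."""
    return feats @ feats.T / feats.shape[1]

def perceptual_loss(f_a, f_b):
    """Mean squared distance between feature maps of two images."""
    return float(np.mean((f_a - f_b) ** 2))

def style_loss(f_a, f_b):
    """Mean squared distance between Gram matrices of two images."""
    return float(np.mean((gram(f_a) - gram(f_b)) ** 2))
```

Both terms are zero for identical feature maps and grow with perceptual or stylistic divergence, so adding them to the GAN objective pushes generated fMRI images toward the reference distribution in feature space rather than only pixel space.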
Affiliation(s)
- Dapeng Cheng
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China; Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
- Nuan Qiu
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Feng Zhao
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China; Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
- Yanyan Mao
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China; Shandong Co-Innovation Center of Future Intelligent Computing, Yantai, China
- Chengnuo Li
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
38
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3009269] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
39
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:137-159. [PMID: 34017931 PMCID: PMC8132932 DOI: 10.1109/trpms.2020.3030611] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, their implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems, such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of deep neural networks and an overview of their application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez
- Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Wei-Jie Chen
- Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705 USA
- Xue Li
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA
- Alan B McMillan
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705 USA, and also with the Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705 USA
40
Translating amyloid PET of different radiotracers by a deep generative model for interchangeability. Neuroimage 2021; 232:117890. [PMID: 33617991 DOI: 10.1016/j.neuroimage.2021.117890] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 12/31/2020] [Accepted: 02/15/2021] [Indexed: 11/24/2022] Open
Abstract
It is challenging to compare amyloid PET images obtained with different radiotracers. Here, we introduce a new approach to improve the interchangeability of amyloid PET acquired with different radiotracers through image-level translation. Deep generative networks were developed using unpaired PET datasets, consisting of 203 [11C]PIB and 850 [18F]florbetapir brain PET images. Using 15 paired PET datasets, the standardized uptake value ratio (SUVR) values obtained from pseudo-PIB or pseudo-florbetapir PET images translated using the generative networks were compared to those obtained from the original images. The generated amyloid PET images showed distribution patterns similar to the original amyloid PET images of the different radiotracers. The SUVR values obtained from the original [18F]florbetapir PET were lower than those obtained from the original [11C]PIB PET, and the translated amyloid PET images reduced this difference. The SUVR values obtained from the pseudo-PIB PET images generated from [18F]florbetapir PET showed good agreement with those of the original [11C]PIB PET (ICC = 0.87 for global SUVR). The SUVR values obtained from the pseudo-florbetapir PET also showed good agreement with those of the original [18F]florbetapir PET (ICC = 0.85 for global SUVR). The ICC values between the original and generated PET images were higher than those between the original [11C]PIB and [18F]florbetapir images (ICC = 0.65 for global SUVR). Our approach provides image-level translation of amyloid PET images obtained using different radiotracers. By enabling translation between different types of amyloid PET, it may facilitate clinical studies that involve variable amyloid PET images, such as long-term clinical follow-up and multicenter trials.
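The intraclass correlation coefficient (ICC) used above to quantify SUVR agreement can be computed from a one-way random-effects ANOVA. The abstract does not state which ICC form was used, so the ICC(1,1) formulation below is an assumption:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_raters) array,
    e.g. global SUVR from the original tracer vs. the translated image."""
    n, k = ratings.shape
    subj_means = ratings.mean(axis=1)
    ms_between = k * np.sum((subj_means - ratings.mean()) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

ICC approaches 1 when between-subject variance dominates within-subject (between-method) disagreement, which is why 0.87 and 0.85 indicate good agreement while 0.65 between the untranslated tracers indicates poorer interchangeability.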
41
Yurt M, Dar SU, Erdem A, Erdem E, Oguz KK, Çukur T. mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis. Med Image Anal 2021; 70:101944. [PMID: 33690024 DOI: 10.1016/j.media.2020.101944] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 12/11/2020] [Accepted: 12/15/2020] [Indexed: 01/28/2023]
Abstract
Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
Affiliation(s)
- Mahmut Yurt
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Salman Uh Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Aykut Erdem
- Department of Computer Engineering, Koç University, İstanbul, TR-34450, Turkey
- Erkut Erdem
- Department of Computer Engineering, Hacettepe University, Ankara, TR-06800, Turkey
- Kader K Oguz
- National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Department of Radiology, Hacettepe University, Ankara, TR-06100, Turkey
- Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent, Ankara, TR-06800, Turkey.
42
Yousefzadeh-Nowshahr E, Winter G, Bohn P, Kneer K, von Arnim CAF, Otto M, Solbach C, Anderl-Straub S, Polivka D, Fissler P, Prasad V, Kletting P, Riepe MW, Higuchi M, Ludolph A, Beer AJ, Glatting G. Comparison of MRI-based and PET-based image pre-processing for quantification of 11C-PBB3 uptake in human brain. Z Med Phys 2021; 31:37-47. [PMID: 33454153 DOI: 10.1016/j.zemedi.2020.12.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Revised: 11/11/2020] [Accepted: 12/03/2020] [Indexed: 11/30/2022]
Abstract
PURPOSE Quantification of tau load using 11C-PBB3-PET has the potential to improve the diagnosis of neurodegenerative diseases. Although MRI-based pre-processing is used as a reference method, not all patients have MRI. The feasibility of PET-based pre-processing for the quantification of the 11C-PBB3 tracer was evaluated and compared with the MRI-based method. MATERIALS AND METHODS Fourteen patients with decreased recent memory were examined with 11C-PBB3-PET and MRI. The PET scans were visually assessed and rated as either PBB3(+) or PBB3(-). The image processing based on the PET-based method was validated against the MRI-based approach. The regional uptakes were quantified using the Mesial-temporal/Temporoparietal/Rest of neocortex (MeTeR) regions. SUVR values were calculated by normalizing to the cerebellar reference region to compare both methods within the patient groups. RESULTS Significant correlations were observed between the SUVRs of the MRI-based and the PET-based methods in the MeTeR regions (rMe=0.91; rTe=0.98; rR=0.96; p<0.0001). However, the Bland-Altman plot showed a significant bias between both methods in the subcortical Me region (bias: -0.041; 95% CI: -0.061 to -0.024; p=0.003). As with the MRI-based method, the 11C-PBB3 uptake obtained with the PET-based method was higher for the PBB3(+) group in each of the cortical regions and for the whole brain than for the PBB3(-) group (PET-basedGlobal: 1.11 vs. 0.96; Cliff's Delta (d)=0.68; p=0.04; MRI-basedGlobal: 1.11 vs. 0.97; d=0.70; p=0.03). To differentiate between positive and negative scans, the best cut-off of 0.99 was estimated from the ROC curve using Youden's index, with good accuracy (AUC: 0.88±0.10; 95% CI: 0.67-1.00) and the same sensitivity (83%) and specificity (88%) for both methods.
CONCLUSION The PET-based pre-processing method developed to quantify the tau burden with 11C-PBB3 provided comparable SUVR values and effect sizes as the MRI-based reference method. Furthermore, both methods have comparable accuracy in discriminating between the PBB3(+) and PBB3(-) groups as assessed by visual rating. Therefore, the presented PET-based method can be used for clinical diagnosis if no MRI image is available.
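Deriving a diagnostic SUVR cut-off from an ROC curve with Youden's index, as done above, can be sketched with scikit-learn. The SUVR values and visual ratings below are illustrative stand-ins, not the study data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical global SUVR values with visual ratings (1 = PBB3-positive)
suvr = np.array([0.90, 0.93, 0.95, 0.96, 0.97, 0.98,
                 1.02, 1.05, 1.08, 1.10, 1.15, 1.20])
rating = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(rating, suvr)
youden_j = tpr - fpr                       # J = sensitivity + specificity - 1
cutoff = thresholds[np.argmax(youden_j)]   # SUVR cut-off maximizing J
auc = roc_auc_score(rating, suvr)
```

With the perfectly separated toy data above, the AUC is 1.0 and the selected cut-off is the smallest positive SUVR; on real, overlapping data the same procedure trades sensitivity against specificity, as reflected in the 83%/88% figures reported above.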
Affiliation(s)
- Elham Yousefzadeh-Nowshahr
- Medical Radiation Physics, Department of Nuclear Medicine, Ulm University, Ulm, Germany; Department of Nuclear Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Gordon Winter
- Department of Nuclear Medicine, Ulm University, Ulm, Germany
- Peter Bohn
- Department of Nuclear Medicine, Inselspital Bern - University of Bern, Bern, Switzerland
- Katharina Kneer
- Department of Nuclear Medicine, Ulm University, Ulm, Germany
- Christine A F von Arnim
- Department of Neurology, Ulm University, Ulm, Germany; Department of Geriatrics, University Medical Center Göttingen, Göttingen, Germany
- Markus Otto
- Department of Neurology, Ulm University, Ulm, Germany
- Dörte Polivka
- Department of Neurology, Ulm University, Ulm, Germany
- Patrick Fissler
- Department of Neurology, Ulm University, Ulm, Germany; Psychiatric Services of Thurgovia (Academic Teaching Hospital of Medical University Salzburg), Münsterlingen, Switzerland
- Vikas Prasad
- Department of Nuclear Medicine, Ulm University, Ulm, Germany
- Peter Kletting
- Medical Radiation Physics, Department of Nuclear Medicine, Ulm University, Ulm, Germany; Department of Nuclear Medicine, Ulm University, Ulm, Germany
- Matthias W Riepe
- Department of Psychiatry and Psychotherapy II, Ulm University, Ulm, Germany
- Makoto Higuchi
- National Institute of Radiological Sciences, Chiba, Japan
- Albert Ludolph
- Department of Neurology, Ulm University, Ulm, Germany; German Center for Neurodegenerative Diseases (DZNE), Ulm, Germany
- Ambros J Beer
- Department of Nuclear Medicine, Ulm University, Ulm, Germany
- Gerhard Glatting
- Medical Radiation Physics, Department of Nuclear Medicine, Ulm University, Ulm, Germany; Department of Nuclear Medicine, Ulm University, Ulm, Germany
| |
43
Peng L, Lin L, Lin Y, Chen YW, Mo Z, Vlasova RM, Kim SH, Evans AC, Dager SR, Estes AM, McKinstry RC, Botteron KN, Gerig G, Schultz RT, Hazlett HC, Piven J, Burrows CA, Grzadzinski RL, Girault JB, Shen MD, Styner MA. Longitudinal Prediction of Infant MR Images With Multi-Contrast Perceptual Adversarial Learning. Front Neurosci 2021; 15:653213. [PMID: 34566556 PMCID: PMC8458966 DOI: 10.3389/fnins.2021.653213] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 08/09/2021] [Indexed: 11/28/2022] Open
Abstract
The infant brain undergoes a remarkable period of neural development that is crucial for the emergence of cognitive and behavioral capacities (Hasegawa et al., 2018). Longitudinal magnetic resonance imaging (MRI) can characterize these developmental trajectories and is critical in neuroimaging studies of early brain development. However, missing data at different time points is an unavoidable occurrence in longitudinal studies owing to participant attrition and scan failure. Compared to dropping incomplete data, data imputation is considered a better solution for such missing data because it preserves all available samples. In this paper, we adapt generative adversarial networks (GANs) to a new application: longitudinal image prediction of structural MRI in the first year of life. In contrast to existing medical image-to-image translation applications of GANs, where inputs and outputs share a very close anatomical structure, our task is more challenging because brain size, shape, and tissue contrast vary significantly between the input data and the predicted data. Several improvements over existing GAN approaches are proposed to address these challenges. To enhance the realism, crispness, and accuracy of the predicted images, we incorporate both a traditional voxel-wise reconstruction loss and a perceptual loss term into the adversarial learning scheme. Because tissue contrast in T1w and T2w MR images changes differently over the first year of life, we incorporate multi-contrast images, leading to our proposed 3D multi-contrast perceptual adversarial network (MPGAN). Extensive evaluations are performed to assess the quality and fidelity of the predicted images, including qualitative and quantitative assessments of image appearance as well as quantitative assessment on two segmentation tasks. Our experimental results show that MPGAN is an effective solution for longitudinal MR image data imputation in the infant brain.
We further apply our predicted/imputed images to two practical tasks, a regression task and a classification task, in order to highlight the enhanced task-related performance following image imputation. The results show that the model performance in both tasks is improved by including the additional imputed data, demonstrating the usability of the predicted images generated from our approach.
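The loss design described in the abstract above (a voxel-wise reconstruction term and a perceptual, feature-space term added to the adversarial objective) can be sketched roughly as follows. The weighting coefficients and the stand-in feature extractor are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def generator_loss(pred, target, feat_fn, d_score, w_rec=100.0, w_perc=10.0):
    """Combined GAN generator loss: adversarial term plus voxel-wise L1
    reconstruction plus a perceptual (feature-space) term.

    feat_fn is a stand-in for a pretrained feature network; the weights
    w_rec and w_perc are illustrative hyperparameters, not the paper's.
    d_score is the discriminator's probability for the generated image."""
    adv = -np.log(d_score + 1e-8)                           # non-saturating adversarial term
    rec = np.mean(np.abs(pred - target))                    # voxel-wise L1 reconstruction
    perc = np.mean((feat_fn(pred) - feat_fn(target)) ** 2)  # perceptual (feature) distance
    return adv + w_rec * rec + w_perc * perc
```

In the paper's setting, `pred` and `target` would be 3D multi-contrast volumes and `feat_fn` the activations of a fixed network; here plain arrays suffice to show how the three terms combine.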
Affiliation(s)
- Liying Peng
- Department of Computer Science, Zhejiang University, Hangzhou, China
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Lanfen Lin
- Department of Computer Science, Zhejiang University, Hangzhou, China
- Yusen Lin
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Yen-wei Chen
- Department of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Zhanhao Mo
- Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun, Jilin, China
- Roza M. Vlasova
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Sun Hyung Kim
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Alan C. Evans
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Stephen R. Dager
- Department of Radiology, University of Washington, Seattle, WA, United States
- Annette M. Estes
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
- Robert C. McKinstry
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, United States
- Kelly N. Botteron
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, United States
- Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Guido Gerig
- Department of Computer Science and Engineering, New York University, New York, NY, United States
- Robert T. Schultz
- Center for Autism Research, Department of Pediatrics, Children's Hospital of Philadelphia, and University of Pennsylvania, Philadelphia, PA, United States
- Heather C. Hazlett
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Joseph Piven
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Catherine A. Burrows
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, United States
- Rebecca L. Grzadzinski
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Jessica B. Girault
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Mark D. Shen
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Carolina Institute for Developmental Disabilities, University of North Carolina School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- UNC Neuroscience Center, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Martin A. Styner
- Department of Psychiatry, UNC School of Medicine, University of North Carolina, Chapel Hill, NC, United States
- Department of Computer Science, University of North Carolina, Chapel Hill, NC, United States
- *Correspondence: Martin A. Styner
44
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
45
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
Machine learning is increasingly used in medicine to advance precision medicine and improve patients' quality of life. Brain disorders are often complex and heterogeneous, and several modalities such as demographic, clinical, imaging, genetic, and environmental data have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
46
Tian Y, Fu S. A descriptive framework for the field of deep learning applications in medical images. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106445] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
47
Zaharchuk G, Davidzon G. Artificial Intelligence for Optimization and Interpretation of PET/CT and PET/MR Images. Semin Nucl Med 2020; 51:134-142. [PMID: 33509370 DOI: 10.1053/j.semnuclmed.2020.10.001] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Artificial intelligence (AI) has recently attracted much attention for its potential use in healthcare applications. The use of AI to improve and extract more information from medical images, given their parallels with natural images and the immense progress in computer vision, has been at the forefront of these advances. This is due to a convergence of factors, including the increasing number of scans performed, the availability of open-source AI tools, and decreases in the cost of the hardware required to implement these technologies. In this article, we review progress in the use of AI toward optimizing PET/CT and PET/MRI studies. These two methods, which combine molecular information with structural and (in the case of MRI) functional imaging, are extremely valuable for a wide range of clinical indications. They are also tremendously data-rich modalities and as such are highly amenable to data-driven technologies such as AI. The first half of the article focuses on methods to improve PET reconstruction and image quality, which have multiple benefits including faster image acquisition and reconstruction and lower or even "zero" radiation dose imaging. It also addresses the value of AI-driven methods for MR-based attenuation correction. The second half addresses how some of these advances can be used to optimize diagnosis from the acquired images, with examples given for whole-body oncology, cardiology, and neurology indications. Overall, the use of AI is likely to markedly improve both the quality and safety of PET/CT and PET/MRI, as well as enhance our ability to interpret scans and follow lesions over time. This will hopefully lead to expanded clinical use cases for these valuable technologies, leading to better patient care.
Affiliation(s)
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA.
- Guido Davidzon
- Division of Nuclear Medicine & Molecular Imaging, Department of Radiology, Stanford University, Stanford, CA
48
Wei W, Poirion E, Bodini B, Tonietto M, Durrleman S, Colliot O, Stankoff B, Ayache N. Predicting PET-derived myelin content from multisequence MRI for individual longitudinal analysis in multiple sclerosis. Neuroimage 2020; 223:117308. [PMID: 32889117 DOI: 10.1016/j.neuroimage.2020.117308] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Revised: 07/20/2020] [Accepted: 08/21/2020] [Indexed: 12/31/2022] Open
Abstract
Multiple sclerosis (MS) is a demyelinating and inflammatory disease of the central nervous system (CNS). The demyelination process can be repaired by the generation of a new sheath of myelin around the axon, a process termed remyelination. In MS patients, the demyelination-remyelination cycles are highly dynamic. Over the years, magnetic resonance imaging (MRI) has been increasingly used in the diagnosis of MS, and it is currently the most useful paraclinical tool for this diagnosis. However, conventional MRI pulse sequences are not specific for pathological mechanisms such as demyelination and remyelination. Recently, positron emission tomography (PET) with the radiotracer [11C]PIB has become a promising tool to measure in-vivo myelin content changes, which is essential to push forward our understanding of the mechanisms involved in the pathology of MS and to monitor individual patients in the context of clinical trials focused on repair therapies. However, PET imaging is invasive owing to the injection of a radioactive tracer. Moreover, it is an expensive imaging test and is not offered in the majority of medical centers in the world. In this work, using multisequence MRI, we thus propose a method to predict the parametric map of [11C]PIB PET, from which we derive myelin content changes in a longitudinal analysis of patients with MS. The method is based on the proposed conditional flexible self-attention GAN (CF-SAGAN), which is specifically adjusted for high-dimensional medical images and able to capture the relationships between spatially separated lesional regions during image synthesis. By jointly applying the sketch-refinement process and the proposed attention regularization that focuses on MS lesions, our approach is shown to outperform state-of-the-art methods qualitatively and quantitatively. Specifically, our method demonstrated superior performance for the prediction of myelin content at the voxel level.
More importantly, our method for predicting myelin content changes in patients with MS shows clinical correlations similar to those of the PET-derived gold standard, indicating its potential for the clinical management of patients with MS.
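The attention regularization described above focuses the synthesis loss on MS lesions. One minimal way to sketch that idea is a lesion-mask-weighted reconstruction loss; the weighting scheme and the hyperparameter below are assumptions for illustration, not the CF-SAGAN formulation:

```python
import numpy as np

def lesion_weighted_l1(pred, target, lesion_mask, lesion_weight=5.0):
    """L1 synthesis loss that up-weights voxels inside an MS lesion mask,
    so errors in lesional regions dominate the objective.

    lesion_weight is an illustrative hyperparameter; the actual paper uses
    an attention mechanism rather than a fixed per-voxel weight."""
    weights = np.where(lesion_mask, lesion_weight, 1.0)  # heavier inside lesions
    return np.sum(weights * np.abs(pred - target)) / np.sum(weights)
```

With a uniform error the weighting is neutral, while errors concentrated inside the lesion mask are penalized more strongly than errors of the same magnitude elsewhere.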
Affiliation(s)
- Wen Wei
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, France; Inria, Aramis Project-Team, Paris, France; Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France.
- Emilie Poirion
- Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France
- Benedetta Bodini
- Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France; APHP, Hôpital Saint Antoine, Neurology Department, Paris, France
- Matteo Tonietto
- Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France
- Stanley Durrleman
- Inria, Aramis Project-Team, Paris, France; Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France
- Olivier Colliot
- Inria, Aramis Project-Team, Paris, France; Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France
- Bruno Stankoff
- Institut du Cerveau, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, F-75013 Paris, France; APHP, Hôpital Saint Antoine, Neurology Department, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, France
49
Oh KT, Lee S, Lee H, Yun M, Yoo SK. Semantic Segmentation of White Matter in FDG-PET Using Generative Adversarial Network. J Digit Imaging 2020; 33:816-825. [PMID: 32043177 PMCID: PMC7522152 DOI: 10.1007/s10278-020-00321-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
In the diagnosis of neurodegenerative disorders, F-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) is used for its ability to detect functional changes at early stages of the disease process. However, anatomical information from another modality (CT or MRI) is still needed to properly interpret and localize the radiotracer uptake because of its low spatial resolution. This lack of structural information limits segmentation and accurate quantification of 18F-FDG PET/CT. Correct segmentation of the brain compartment in 18F-FDG PET/CT would enable quantitative analysis of the 18F-FDG PET/CT scan alone. In this paper, we propose a method to segment white matter in 18F-FDG PET/CT images using a generative adversarial network (GAN). The segmentation result of the GAN model was evaluated using parameters such as Dice, AUC-PR, precision, and recall, and was also compared with other deep learning methods. As a result, the proposed method achieves superior segmentation accuracy and reliability compared with other deep learning methods.
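The evaluation metrics named in this abstract (Dice, precision, and recall) for binary segmentation masks can be computed as in this short sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, precision, and recall for binary segmentation masks.

    pred and truth are arrays of the same shape; nonzero entries count
    as foreground. Edge cases with empty masks return 1.0 by convention."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, precision, recall
```

Dice equals the F1 score of the foreground class, which is why it is commonly reported alongside precision and recall in segmentation papers such as this one.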
Affiliation(s)
- Kyeong Taek Oh
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, South Korea
- Sangwon Lee
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, South Korea
- Haeun Lee
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, South Korea
- Mijin Yun
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, South Korea
- Sun K. Yoo
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, South Korea
50
Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) - A Systematic Review. Acad Radiol 2020; 27:1175-1185. [PMID: 32035758 DOI: 10.1016/j.acra.2019.12.024] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/24/2019] [Accepted: 12/27/2019] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES Generative adversarial networks (GANs) are deep learning models aimed at generating realistic-looking synthetic images. These novel models have made a great impact on the field of computer vision. Our study aims to review the literature on GAN applications in radiology. MATERIALS AND METHODS This systematic review followed the PRISMA guidelines. Electronic databases were searched for studies describing applications of GANs in radiology. We included studies published up to September 2019. RESULTS Data were extracted from 33 studies published between 2017 and 2019. Eighteen studies focused on CT image generation, ten on MRI, three on PET/MRI and PET/CT, one on ultrasound, and one on X-ray. Applications in radiology included image reconstruction and denoising for dose and scan time reduction (fourteen studies), data augmentation (six studies), transfer between modalities (eight studies), and image segmentation (five studies). All studies reported that generated images improved the performance of the developed algorithms. CONCLUSION GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education, and research.