1. Fu Y, Dong S, Huang Y, Niu M, Ni C, Yu L, Shi K, Yao Z, Zhuo C. MPGAN: Multi Pareto Generative Adversarial Network for the denoising and quantitative analysis of low-dose PET images of human brain. Med Image Anal 2024; 98:103306. [PMID: 39163786] [DOI: 10.1016/j.media.2024.103306]
Abstract
Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Usually, full-dose imaging for PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET). This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of the synthesized F-PET images. The Pareto-efficient dynamic discrimination process in DPed adaptively adjusts the weights of the sub-discriminators to improve the discrimination output. We validated the performance of MPGAN on three datasets, two independent and one mixed, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain that meets clinical standards and achieves state-of-the-art performance on commonly used metrics.
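The abstract describes the dynamic Pareto-efficient discrimination process only at a high level. As a loose, hypothetical illustration of the general idea of adaptively reweighting sub-discriminators (not the authors' published algorithm), the sketch below assigns each sub-discriminator a softmax weight derived from its recent loss, so better-performing sub-discriminators contribute more to the aggregate output; all names and the weighting rule are assumptions.

```python
import numpy as np

def dynamic_discriminator_weights(sub_losses, temperature=1.0):
    """Illustrative adaptive weighting of sub-discriminators.

    Sub-discriminators with lower recent loss (better discrimination)
    receive higher weight via a softmax over negative losses. This is a
    hypothetical stand-in for the paper's Pareto-efficient dynamic
    discrimination process, not the published method.
    """
    losses = np.asarray(sub_losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()          # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()              # weights sum to 1

def combined_discriminator_output(sub_outputs, weights):
    """Weighted aggregate of the sub-discriminator outputs."""
    return float(np.dot(weights, sub_outputs))
```

For example, sub-losses of 0.5, 1.0, and 2.0 yield monotonically decreasing weights, so the strongest sub-discriminator dominates the combined score.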
Affiliation(s)
- Yu Fu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China; College of Integrated Circuits, Zhejiang University, Hangzhou, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanyan Huang
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Chao Ni
- Department of Breast Surgery, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Cheng Zhuo
- College of Integrated Circuits, Zhejiang University, Hangzhou, China
2. Maus J, Nikulin P, Hofheinz F, Petr J, Braune A, Kotzerke J, van den Hoff J. Deep learning based bilateral filtering for edge-preserving denoising of respiratory-gated PET. EJNMMI Phys 2024; 11:58. [PMID: 38977533] [PMCID: PMC11231129] [DOI: 10.1186/s40658-024-00661-z]
Abstract
BACKGROUND Residual image noise is substantial in positron emission tomography (PET) and is one of the factors limiting lesion detection, quantification, and overall image quality. Thus, improving noise reduction remains of considerable interest. This is especially true for respiratory-gated PET investigations. The only broadly used approach for noise reduction in PET imaging has been the application of low-pass filters, usually Gaussians, which however leads to loss of spatial resolution and increased partial volume effects, affecting the detectability of small lesions and quantitative data evaluation. The bilateral filter (BF), a locally adaptive image filter, allows image noise to be reduced while preserving well-defined object edges, but manual optimization of the filter parameters for a given PET scan can be tedious and time-consuming, hampering its clinical use. In this work we investigated to what extent a suitable deep learning based approach can resolve this issue by training a network with the target of reproducing the results of manually adjusted, case-specific bilateral filtering. METHODS Altogether, 69 respiratory-gated clinical PET/CT scans with three different tracers ([18F]FDG, [18F]L-DOPA, [68Ga]DOTATATE) were used for the present investigation. Prior to data processing, the gated data sets were split, resulting in a total of 552 single-gate image volumes. For each of these image volumes, four 3D ROIs were delineated: one ROI for image noise assessment and three ROIs for focal uptake (e.g. tumor lesions) measurements at different target/background contrast levels. An automated procedure performed a brute-force search of the two-dimensional BF parameter space for each data set to identify the "optimal" filter parameters, generating user-approved ground-truth input data consisting of pairs of original and optimally BF-filtered images. For reproducing the optimal BF filtering, we employed a modified 3D U-Net CNN incorporating the residual learning principle. The network training and evaluation were performed using a 5-fold cross-validation scheme. The influence of filtering on lesion SUV quantification and image noise level was assessed by calculating absolute and fractional differences between the CNN, manual BF, or original (STD) data sets in the previously defined ROIs. RESULTS The automated procedure for filter parameter determination chose adequate filter parameters for the majority of the data sets, with only 19 patient data sets requiring manual tuning. Evaluation of the focal uptake ROIs revealed that both CNN- and BF-based filtering essentially maintain the focal SUVmax values of the unfiltered images, with low mean ± SD differences of δSUVmax(CNN, STD) = (-3.9 ± 5.2)% and δSUVmax(BF, STD) = (-4.4 ± 5.3)%. Regarding the relative performance of CNN versus BF, both methods lead to very similar SUVmax values in the vast majority of cases, with an overall average difference of δSUVmax(CNN, BF) = (0.5 ± 4.8)%. Evaluation of the noise properties showed that CNN filtering mostly satisfactorily reproduces the noise level and characteristics of BF, with δNoise(CNN, BF) = (5.6 ± 10.5)%. No significant tracer-dependent differences between CNN and BF were observed. CONCLUSIONS Our results show that neural network based denoising can reproduce the results of a case-by-case optimized BF in a fully automated way. Apart from rare cases, it led to images of practically identical quality regarding noise level, edge preservation, and signal recovery. We believe such a network might prove especially useful in the context of improved motion correction of respiratory-gated PET studies, but it could also help to establish BF-equivalent edge-preserving CNN filtering in clinical PET since it obviates time-consuming manual BF parameter tuning.
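The bilateral filter at the heart of this work is a standard, well-documented algorithm: each voxel becomes a weighted average of its neighbourhood, with weights that fall off with both spatial distance (sigma_s) and intensity difference (sigma_r), so averaging does not cross strong edges. A minimal 2D NumPy sketch follows (the paper operates on 3D volumes; the parameter defaults here are arbitrary):

```python
import numpy as np

def bilateral_filter(img, sigma_s=1.5, sigma_r=0.1, radius=2):
    """2D bilateral filter: each output pixel is a weighted mean of its
    neighbourhood, where the weight combines spatial closeness (sigma_s)
    and intensity similarity (sigma_r), preserving well-defined edges."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # Spatial (domain) weights are fixed for the whole image.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights depend on intensity difference to the center.
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a step-edge image, pixels on either side of the edge keep their values because the range weight suppresses contributions from across the intensity jump; this is the edge-preserving behaviour the paper's CNN is trained to reproduce.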
Affiliation(s)
- Jens Maus
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Pavel Nikulin
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Frank Hofheinz
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Jan Petr
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Anja Braune
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
- Jörg Kotzerke
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
- Jörg van den Hoff
- Department of Positron Emission Tomography, Institute of Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01314, Dresden, Germany
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Carl Gustav Carus, Fetscherstraße 74, 01307, Dresden, Germany
3. Dutta K, Laforest R, Luo J, Jha AK, Shoghi KI. Deep learning generation of preclinical positron emission tomography (PET) images from low-count PET with task-based performance assessment. Med Phys 2024; 51:4324-4339. [PMID: 38710222] [DOI: 10.1002/mp.17105]
Abstract
BACKGROUND Preclinical low-count positron emission tomography (LC-PET) imaging offers numerous advantages such as facilitating imaging logistics, enabling longitudinal studies of long- and short-lived isotopes, and increasing scanner throughput. However, LC-PET is characterized by reduced photon-count levels resulting in low signal-to-noise ratio (SNR), segmentation difficulties, and quantification uncertainties. PURPOSE We developed and evaluated a novel deep-learning (DL) architecture, Attention based Residual-Dilated Net (ARD-Net), to generate standard-count PET (SC-PET) images from LC-PET images. The performance of the ARD-Net framework was evaluated for numerous low-count realizations using fidelity-based qualitative metrics, task-based segmentation, and quantitative metrics. METHODS Patient-derived tumor xenograft (PDX) models with tumors implanted in the mammary fat pad were subjected to preclinical [18F]-fluorodeoxyglucose (FDG) PET/CT imaging. SC-PET images were derived from a 10 min static FDG-PET acquisition, 50 min post administration of FDG, and were resampled to generate four distinct LC-PET realizations corresponding to 10%, 5%, 1.6%, and 0.8% of the SC-PET count level. ARD-Net was trained and optimized using 48 preclinical FDG-PET datasets, while 16 datasets were utilized to assess performance. Further, ARD-Net was benchmarked against two leading DL-based methods (Residual UNet, RU-Net; and Dilated Network, D-Net) and two non-DL methods (Non-Local Means, NLM; and Block Matching 3D Filtering, BM3D). Performance was evaluated using traditional fidelity-based image quality metrics such as the Structural Similarity Index Metric (SSIM) and Normalized Root Mean Square Error (NRMSE), human observer-based tumor segmentation performance (Dice score and volume bias), and quantitative analysis of Standardized Uptake Value (SUV) measurements. Additionally, radiomics-derived features were utilized as a measure of quality assurance (QA) in comparison to true SC-PET. Finally, an ensemble performance score (EPS) was developed by integrating fidelity-based and task-based metrics. The Concordance Correlation Coefficient (CCC) was utilized to determine concordance between measures. The non-parametric Friedman test with Bonferroni correction was used to compare the performance of ARD-Net against the benchmarked methods, with significance at adjusted p-value ≤ 0.01. RESULTS ARD-Net-generated SC-PET images exhibited significantly better (p ≤ 0.01 post Bonferroni correction) overall image fidelity scores in terms of SSIM and NRMSE at the majority of photon-count levels compared to the benchmarked DL and non-DL methods. In terms of task-based quantitative accuracy evaluated by SUVMean and SUVPeak, ARD-Net exhibited less than 5% median absolute bias for SUVMean compared to true SC-PET and a lower degree of variability than the benchmarked DL and non-DL methods in generating SC-PET. Additionally, ARD-Net-generated SC-PET images displayed a higher degree of concordance with SC-PET images in terms of radiomics features compared to the non-DL and other DL approaches. Finally, the ensemble score suggested that ARD-Net exhibited significantly superior performance compared to the benchmarked algorithms (p ≤ 0.01 post Bonferroni correction). CONCLUSION ARD-Net provides a robust framework to generate SC-PET from LC-PET images. ARD-Net-generated SC-PET images exhibited superior performance compared with other DL and non-DL approaches in terms of image-fidelity metrics and task-based segmentation metrics, with minimal bias in task-based quantification performance for preclinical PET imaging.
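The abstract states that the LC-PET realizations were obtained by resampling the standard-count acquisition but does not spell out the procedure. A common way to emulate reduced dose from count data is binomial thinning, sketched below; whether the authors used exactly this scheme is an assumption.

```python
import numpy as np

def simulate_low_count(counts, fraction, rng=None):
    """Simulate a low-count PET realization by binomially thinning a
    standard-count image: each recorded count is kept independently with
    probability `fraction`. Thinning a Poisson-distributed image yields
    another Poisson image with its mean scaled by `fraction`, which is
    why this is a common way to emulate reduced dose. The paper's exact
    resampling procedure may differ."""
    rng = np.random.default_rng(rng)
    counts = np.asarray(counts)
    return rng.binomial(counts, fraction)
```

For instance, thinning with `fraction=0.10` emulates the 10% count level used in the study, and repeated draws give independent low-count realizations.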
Affiliation(s)
- Kaushik Dutta
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Jingqin Luo
- Department of Surgery, Public Health Sciences, Washington University in St Louis, St Louis, Missouri, USA
- Abhinav K Jha
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Kooresh I Shoghi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
4. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
5. Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about the risk of radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both will also reduce photon counts and hence degrade image quality. Therefore, it is of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracts features from the PET and CT images in two separate branches and then fuses the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in the CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better reproduction of tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
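The multi-modality structure-promoting loss term is not given in the abstract. As a hypothetical illustration of how a loss can transfer anatomical information from CT into the generated PET, the sketch below penalizes mismatch between the two images' spatial-gradient magnitudes, so the generated PET is encouraged to share edges with the co-registered CT; the exact form used in M3S-Net may differ.

```python
import numpy as np

def spatial_gradients(img):
    """Forward-difference gradients along each image axis
    (last row/column padded so shapes match the input)."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return gx, gy

def structure_loss(pred_pet, ct):
    """Hypothetical structure-promoting term: mean absolute difference
    between the gradient magnitude of the generated PET and that of the
    co-registered CT, encouraging shared anatomical edges. This is an
    assumed illustration, not the paper's published loss."""
    pgx, pgy = spatial_gradients(pred_pet)
    cgx, cgy = spatial_gradients(ct)
    pred_mag = np.sqrt(pgx**2 + pgy**2)
    ct_mag = np.sqrt(cgx**2 + cgy**2)
    return float(np.mean(np.abs(pred_mag - ct_mag)))
```

In training, such a term would be added to the usual voxel-wise reconstruction loss with a weighting hyperparameter.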
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
6. Balaji V, Song TA, Malekzadeh M, Heidari P, Dutta J. Artificial Intelligence for PET and SPECT Image Enhancement. J Nucl Med 2024; 65:4-12. [PMID: 37945384] [PMCID: PMC10755520] [DOI: 10.2967/jnumed.122.265000]
Abstract
Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. 
Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translation potential of these models in the future.
Affiliation(s)
- Vibha Balaji
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Tzu-An Song
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Masoud Malekzadeh
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Pedram Heidari
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Joyita Dutta
- Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
7. Wang Y, Luo Y, Zu C, Zhan B, Jiao Z, Wu X, Zhou J, Shen D, Zhou L. 3D multi-modality Transformer-GAN for high-quality PET reconstruction. Med Image Anal 2024; 91:102983. [PMID: 37926035] [DOI: 10.1016/j.media.2023.102983]
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images, but higher-dose scans also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully exploit the metabolic distributions in LPET and the anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities and design a multimodal feature integration module to effectively integrate the two kinds of features, given the diverse contributions of features at different locations. Then, as CNNs describe local spatial information well but have difficulty modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information from the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distributions between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides the voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail in the reconstructed SPET images. Experiments on a phantom dataset and a clinical dataset validate that our proposed method effectively reconstructs high-quality SPET images and outperforms current state-of-the-art methods in terms of qualitative and quantitative metrics.
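The edge-aware loss is only named in the abstract. A common concrete choice, shown here as an assumed illustration rather than the paper's definition, is an L1 penalty between Sobel edge maps of the reconstructed and real standard-dose images:

```python
import numpy as np

# Sobel kernel for horizontal intensity changes; its transpose
# responds to vertical changes.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def filter2d(img, k):
    """'Same'-size 2D cross-correlation with edge padding
    (kept dependency-free; no SciPy required)."""
    r = k.shape[0] // 2
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * k).sum()
    return out

def edge_aware_loss(recon, target):
    """Sketch of an edge-aware term: L1 distance between the Sobel edge
    responses of the reconstructed and real images, to be added to the
    voxel-level and adversarial losses during training. The paper's
    exact edge operator and weighting are not specified here."""
    ex = filter2d(recon, SOBEL_X) - filter2d(target, SOBEL_X)
    ey = filter2d(recon, SOBEL_X.T) - filter2d(target, SOBEL_X.T)
    return float(np.mean(np.abs(ex)) + np.mean(np.abs(ey)))
```

The loss is zero when the two images have identical edge maps and grows as anatomical boundaries blur or shift, which is the behaviour an edge-aware term is meant to penalize.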
Affiliation(s)
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Yanmei Luo
- School of Computer Science, Sichuan University, Chengdu, China
- Chen Zu
- Department of Risk Controlling Research, JD.COM, China
- Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, China
- Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia
8. Li A, Yang B, Naganawa M, Fontaine K, Toyonaga T, Carson RE, Tang J. Dose reduction in dynamic synaptic vesicle glycoprotein 2A PET imaging using artificial neural networks. Phys Med Biol 2023; 68:245006. [PMID: 37857316] [PMCID: PMC10739622] [DOI: 10.1088/1361-6560/ad0535]
Abstract
Objective. Reducing dose in positron emission tomography (PET) imaging increases noise in the reconstructed dynamic frames, which inevitably results in higher noise and possible bias in the subsequently estimated images of kinetic parameters compared with those estimated in the standard-dose case. We report the development of a spatiotemporal denoising technique for reduced-count dynamic frames that integrates a cascade artificial neural network (ANN) with the highly constrained back-projection (HYPR) scheme to improve low-dose parametric imaging. Approach. We implemented and assessed the proposed method using imaging data acquired with 11C-UCB-J, a PET radioligand that binds to synaptic vesicle glycoprotein 2A (SV2A) in the human brain. The patch-based ANN was trained with a reduced-count frame and its full-count correspondence from one subject and was used in cascade to process dynamic frames of other subjects to further take advantage of its denoising capability. The HYPR strategy was then applied to the spatially ANN-processed image frames to make use of the temporal information from the entire dynamic scan. Main results. In all the testing subjects, including healthy volunteers and Parkinson's disease patients, the proposed method reduced more noise while introducing minimal bias in the dynamic frames and the resulting parametric images, as compared with conventional denoising methods. Significance. Achieving 80% noise reduction with a bias of -2% in dynamic frames, which translates into 75% and 70% noise reduction in the tracer uptake (bias, -2%) and distribution volume (bias, -5%) images, the proposed ANN+HYPR technique demonstrates denoising capability equivalent to an 11-fold dose increase for dynamic SV2A PET imaging with 11C-UCB-J.
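The HYPR scheme referenced here is documented in the literature: a noisy dynamic frame is modeled as a low-noise composite image (e.g., the time-averaged dynamic data) modulated by the ratio of the low-pass-filtered frame to the low-pass-filtered composite. A minimal 2D sketch follows, with a box filter standing in for the usual Gaussian kernel (an assumption for brevity):

```python
import numpy as np

def box_smooth(img, radius=1):
    """Simple 2D box low-pass filter with edge padding."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + size, j:j + size].mean()
    return out

def hypr_lr(frame, composite, radius=1, eps=1e-12):
    """HYPR-LR style denoising sketch: the denoised frame is the
    low-noise composite modulated by the ratio of the low-pass-filtered
    frame to the low-pass-filtered composite, so spatial detail comes
    from the composite while temporal weighting comes from the frame."""
    ratio = box_smooth(frame, radius) / (box_smooth(composite, radius) + eps)
    return composite * ratio
```

If a frame equals the composite, the ratio is one everywhere and the frame is returned unchanged; frames with genuinely different activity keep their temporal weighting while inheriting the composite's lower noise.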
Affiliation(s)
- Andi Li
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
- Bao Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, People's Republic of China
- Mika Naganawa
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Kathryn Fontaine
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Richard E Carson
- Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Jing Tang
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
9. Xue Y, Peng Y, Bi L, Feng D, Kim J. CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083742] [DOI: 10.1109/embc40787.2023.10341112]
Abstract
Positron emission tomography (PET) is the most sensitive molecular imaging modality routinely applied in modern healthcare. The high radioactivity caused by the injected tracer dose is a major concern in PET imaging and limits its clinical applications. However, reducing the dose leads to inadequate image quality for diagnostic practice. Motivated by the need to produce high-quality images from a minimal dose, convolutional neural network (CNN) based methods have been developed to synthesize high-quality PET from its low-dose counterparts. Previous CNN-based studies usually map low-dose PET directly into feature space without considering the different dose reduction levels. In this study, a novel approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with Super Resolution Refinement) is presented. Specifically, a multi-tasking coarse generator, guided by a classification head, allows for a more comprehensive understanding of the noise-level features present in the low-dose data, resulting in improved image synthesis. Moreover, to recover the spatial details of standard PET, an auxiliary super-resolution network, Contextual-Net, is proposed as a second training stage to narrow the gap between the coarse prediction and standard PET. We compared our method to state-of-the-art methods on whole-body PET with different dose reduction factors (DRFs). Experiments demonstrate that our method outperforms the others at all DRFs. Clinical relevance: low-dose PET, PET recovery, GAN, task-driven image synthesis, super resolution.
10
Margail C, Merlin C, Billoux T, Wallaert M, Otman H, Sas N, Molnar I, Guillemin F, Boyer L, Guy L, Tempier M, Levesque S, Revy A, Cachin F, Chanchou M. Imaging quality of an artificial intelligence denoising algorithm: validation in 68Ga PSMA-11 PET for patients with biochemical recurrence of prostate cancer. EJNMMI Res 2023; 13:50. [PMID: 37231229] [DOI: 10.1186/s13550-023-00999-y]
Abstract
BACKGROUND 68Ga-PSMA PET is the leading prostate cancer imaging technique, but the images remain noisy and could be further improved with an artificial intelligence-based denoising algorithm. To address this issue, we analyzed the overall quality of reprocessed images compared to standard reconstructions. We also analyzed the diagnostic performance of the different series and the impact of the algorithm on lesion intensity and background measures. METHODS We retrospectively included 30 patients with biochemical recurrence of prostate cancer who had undergone 68Ga-PSMA-11 PET-CT. We simulated images produced using only a quarter, half, three-quarters, or all of the acquired data, reprocessed using the SubtlePET® denoising algorithm. Three physicians with different levels of experience blindly analyzed every series and assessed them on a 5-level Likert scale. The binary criterion of lesion detectability was compared between series. We also compared lesion SUV, background uptake, and the diagnostic performance of the series (sensitivity, specificity, accuracy). RESULTS VPFX-derived series were classified differently from, but better than, standard reconstructions (p < 0.001) using half the data. Q.Clear series were not classified differently using half the signal. Some series were noisy, but this had no significant effect on lesion detectability (p > 0.05). The SubtlePET® algorithm significantly decreased lesion SUV (p < 0.005), increased liver background (p < 0.005), and had no substantial effect on the diagnostic performance of any reader. CONCLUSION We show that SubtlePET® can be used for 68Ga-PSMA scans with half the signal, yielding image quality similar to Q.Clear series and superior to VPFX series. However, it significantly modifies quantitative measurements and should not be used for comparative examinations if a standard algorithm is applied during follow-up.
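Since the study above tracks how denoising shifts lesion SUV, it may help to recall the body-weight SUV definition (tissue concentration divided by injected activity per unit body weight, assuming tissue density of about 1 g/mL). The sketch below is a generic illustration with hypothetical numbers and our own function names, not the SubtlePET evaluation code.

```python
import numpy as np

def suv_bw(conc_kbq_per_ml, injected_mbq, weight_kg):
    # SUV = tissue concentration / (injected activity / body weight);
    # with density ~1 g/mL, kBq/mL ~ kBq/g, and the units cancel to
    # conc * weight / injected activity.
    return conc_kbq_per_ml * weight_kg / injected_mbq

def relative_suvmax_change(roi_before, roi_after):
    """Relative change in the ROI maximum after post-processing."""
    return (roi_after.max() - roi_before.max()) / roi_before.max()

roi = np.array([[5.0, 12.0], [7.0, 9.0]])   # kBq/mL in a toy lesion ROI
smoothed = roi * 0.9                        # hypothetical denoiser output
print(suv_bw(roi.max(), injected_mbq=150, weight_kg=75))  # SUVmax = 6.0
print(relative_suvmax_change(roi, smoothed))              # about -0.1 (a 10% drop)
```

A systematic negative shift of this quantity, as reported above, is why denoised series should not be mixed with standard reconstructions in longitudinal comparisons.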
Affiliation(s)
- Charles Margail
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Charles Merlin
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Tommy Billoux
  - Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Hosameldin Otman
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Nicolas Sas
  - Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Ioana Molnar
  - Biostatistics, CLCC Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Louis Boyer
  - Radiology, UMR 6602 UCA/CNRS/SIGMA, Hôpital Gabriel-Montpied, TGI-Institut Pascal, Clermont-Ferrand, France
- Laurent Guy
  - Urology, Hôpital Gabriel-Montpied, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Tempier
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Sophie Levesque
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Alban Revy
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Florent Cachin
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Chanchou
  - Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
  - Inserm UMR1240 IMoST, Clermont-Ferrand, France
  - Université Clermont Auvergne, Clermont-Ferrand, France
11
Fu Y, Dong S, Niu M, Xue L, Guo H, Huang Y, Xu Y, Yu T, Shi K, Yang Q, Shi Y, Zhang H, Tian M, Zhuo C. AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images. Med Image Anal 2023; 86:102787. [PMID: 36933386] [DOI: 10.1016/j.media.2023.102787]
Abstract
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as full-dose (F-CT and F-PET) images. In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator in two stages, coarse and fine. In both stages, the generator generates estimated F-CT (F-PET) images as similar to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies the reconstruction needs of clinical standards.
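The zero-sum game between generator and discriminator described above can be written as a pair of complementary binary cross-entropy objectives. The toy NumPy sketch below shows those two losses evaluated on discriminator probabilities; the function names and numbers are ours, not the AIGAN implementation.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy of probabilities p against a 0/1 target."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def discriminator_loss(p_real, p_fake):
    # D wants real full-dose patches scored 1 and generated patches scored 0.
    return bce(p_real, 1.0) + bce(p_fake, 0.0)

def generator_loss(p_fake):
    # G wants D to score its outputs as real (non-saturating form).
    return bce(p_fake, 1.0)

p_real = np.array([0.9, 0.8])   # D's scores on true full-dose patches
p_fake = np.array([0.2, 0.3])   # D's scores on synthesized patches
print(discriminator_loss(p_real, p_fake))  # low: D currently distinguishes well
print(generator_loss(p_fake))              # high: G must improve
```

Training alternates minimization of the two losses; at equilibrium D cannot tell synthesized full-dose images from real ones.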
Affiliation(s)
- Yu Fu
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
  - Binjiang Institute, Zhejiang University, Hangzhou, China
- Shunjie Dong
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Meng Niu
  - Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Le Xue
  - Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanning Guo
  - Institute of Neuroscience and Medicine, Medical Imaging Physics (INM-4), Forschungszentrum Jülich, Jülich, Germany
- Yanyan Huang
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yuanfan Xu
  - Hangzhou Universal Medical Imaging Diagnostic Center, Hangzhou, China
- Tianbai Yu
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Kuangyu Shi
  - Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Qianqian Yang
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yiyu Shi
  - Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
- Hong Zhang
  - Binjiang Institute, Zhejiang University, Hangzhou, China
  - Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Mei Tian
  - Human Phenome Institute, Fudan University, Shanghai, China
- Cheng Zhuo
  - College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
  - Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
12
Flaus A, Deddah T, Reilhac A, Leiris ND, Janier M, Merida I, Grenier T, McGinnity CJ, Hammers A, Lartizien C, Costes N. PET image enhancement using artificial intelligence for better characterization of epilepsy lesions. Front Med (Lausanne) 2022; 9:1042706. [PMID: 36465898] [PMCID: PMC9708713] [DOI: 10.3389/fmed.2022.1042706]
Abstract
INTRODUCTION [18F]fluorodeoxyglucose ([18F]FDG) brain PET is used clinically to detect small areas of decreased uptake associated with epileptogenic lesions, e.g., focal cortical dysplasias (FCD), but its performance is limited by spatial resolution and low contrast. We aimed to develop a deep learning-based PET image enhancement method using simulated PET to improve lesion visualization. METHODS We created 210 numerical brain phantoms (MRI segmented into 9 regions) and assigned 10 different plausible activity values (e.g., GM/WM ratios), resulting in 2100 ground-truth high-quality (GT-HQ) PET phantoms. With a validated Monte Carlo PET simulator, we then created 2100 simulated standard-quality (S-SQ) [18F]FDG scans. We trained a ResNet on 80% of this dataset (10% used for validation) to learn the mapping between S-SQ and GT-HQ PET, outputting a predicted HQ (P-HQ) PET. For the remaining 10%, we assessed peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean squared error (RMSE) against GT-HQ PET. For GM and WM, we computed recovery coefficients (RC) and the coefficient of variation (COV). We also created lesioned GT-HQ phantoms, S-SQ PET, and P-HQ PET with simulated small hypometabolic lesions characteristic of FCDs. We evaluated lesion detectability on S-SQ and P-HQ PET both visually and by measuring the relative lesion activity (RLA; measured activity in the reduced-activity ROI over the standard-activity ROI). Lastly, we applied our previously trained ResNet to 10 clinical epilepsy PETs to predict the corresponding HQ PET and assessed image quality and confidence metrics. RESULTS Compared to S-SQ PET, P-HQ PET improved PSNR, SSIM, and RMSE; significantly improved GM RCs (from 0.29 ± 0.03 to 0.79 ± 0.04) and WM RCs (from 0.49 ± 0.03 to 1 ± 0.05); mean COVs were not statistically different. Visual lesion detection improved from 38 to 75%, with average RLA decreasing from 0.83 ± 0.08 to 0.67 ± 0.14. Visual quality of P-HQ clinical PET improved, as did reader confidence. CONCLUSION P-HQ PET showed improved image quality compared to S-SQ PET across several objective quantitative metrics and increased detectability of simulated lesions. In addition, the model generalized to clinical data. Further evaluation is required to study the generalization of our method and to assess clinical performance in larger cohorts.
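The PSNR, SSIM, and RMSE figures of merit used above can be computed as follows. This is a generic NumPy sketch with our own function names; the SSIM here is a simplified single-window (global) variant rather than the sliding-window form usually reported, so it illustrates the formula rather than reproduces the paper's numbers.

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    e = rmse(x, y)
    return float("inf") if e == 0 else float(20 * np.log10(data_range / e))

def global_ssim(x, y, data_range):
    """Single-window SSIM over the whole image, with the usual
    stabilizing constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cxy + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
gt = rng.random((32, 32))                     # stand-in for GT-HQ PET
noisy = gt + rng.normal(0, 0.05, gt.shape)    # stand-in for S-SQ PET
print(rmse(gt, noisy), psnr(gt, noisy, 1.0), global_ssim(gt, noisy, 1.0))
```

An enhancement network is considered to help when its output raises PSNR and SSIM (and lowers RMSE) relative to the ground truth, as reported above.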
Affiliation(s)
- Anthime Flaus
  - Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France
  - Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France
  - King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
  - Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
  - Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France
  - CERMEP-Life Imaging, Lyon, France
- Anthonin Reilhac
  - Brain Health Imaging Centre, Centre for Addiction and Mental Health (CAMH), Toronto, ON, Canada
- Nicolas De Leiris
  - Department of Nuclear Medicine, CHU Grenoble Alpes, University Grenoble Alpes, Grenoble, France
  - Laboratoire Radiopharmaceutiques Biocliniques, University Grenoble Alpes, INSERM, CHU Grenoble Alpes, Grenoble, France
- Marc Janier
  - Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France
  - Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France
- Thomas Grenier
  - Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Colm J. McGinnity
  - Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Alexander Hammers
  - King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Carole Lartizien
  - Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Nicolas Costes
  - Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France
  - CERMEP-Life Imaging, Lyon, France
13
Liu J, Ren S, Wang R, Mirian N, Tsai YJ, Kulon M, Pucar D, Chen MK, Liu C. Virtual high-count PET image generation using a deep learning method. Med Phys 2022; 49:5830-5840. [PMID: 35880541] [PMCID: PMC9474624] [DOI: 10.1002/mp.15867]
Abstract
PURPOSE Recently, deep learning-based methods have been established to denoise low-count positron emission tomography (PET) images and predict their standard-count counterparts, which could reduce the injected dose and scan time and improve image quality for equivalent lesion detectability and clinical diagnosis. In clinical settings, however, the majority of scans are still acquired with a standard injected dose and standard scan time. In this work, we applied a 3D U-Net to reduce the noise of standard-count PET images and obtain virtual-high-count (VHC) PET images, in order to identify the potential benefits of VHC PET. METHODS The training datasets, with down-sampled standard-count PET images as the network input and high-count images as the desired network output, were derived from 27 whole-body PET datasets acquired with 90-min dynamic scans. The down-sampled standard-count PET images were rebinned to match the noise level of 195 clinical static PET datasets, by matching the normalized standard deviation (NSTD) inside 3D liver regions of interest (ROIs). Cross-validation was performed on the 27 PET datasets. Normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and standard uptake value (SUV) bias of lesions were used to evaluate standard-count and VHC PET images, with the real high-count PET image at 90 min as the gold standard. In addition, the network trained on the 27 dynamic PET datasets was applied to the 195 clinical static datasets to obtain VHC PET images. The NSTD and mean/max SUV of hypermetabolic lesions in standard-count and VHC PET images were evaluated. Three experienced nuclear medicine physicians evaluated the overall image quality of the standard-count and VHC images of 50 patients randomly selected from the 195, ranking them on a 5-point scale. A Wilcoxon signed-rank test was used to compare differences in the grading of standard-count and VHC images. RESULTS The cross-validation results showed that VHC PET images had better quantitative metric scores than the standard-count PET images. The mean/max SUVs of 35 lesions in the standard-count and true high-count PET images showed no statistically significant difference. Similarly, the mean/max SUVs of the VHC and true high-count PET images showed no statistically significant difference. For the 195 clinical datasets, the VHC PET images had a significantly lower NSTD than the standard-count images. The mean/max SUVs of 215 hypermetabolic lesions in the VHC and standard-count images showed no statistically significant difference. In the image quality evaluation by three experienced nuclear medicine physicians, standard-count and VHC images received scores (mean ± standard deviation) of 3.34 ± 0.80 and 4.26 ± 0.72 from Physician 1, 3.02 ± 0.87 and 3.96 ± 0.73 from Physician 2, and 3.74 ± 1.10 and 4.58 ± 0.57 from Physician 3, respectively. The VHC images were consistently ranked higher than the standard-count images, and the Wilcoxon signed-rank test indicated a significant difference in image quality between them. CONCLUSIONS A deep learning method was proposed to convert standard-count images to VHC images. The VHC images had a reduced noise level, showed no significant difference in mean/max SUV from the standard-count images, and improved image quality for better lesion detectability and clinical diagnosis.
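The normalized standard deviation (NSTD) used above as a noise surrogate is, as commonly defined, the ROI standard deviation divided by the ROI mean. A minimal sketch of matching noise levels via NSTD (our own naming; the "liver" ROI here is a toy mask, not the study's segmentation):

```python
import numpy as np

def nstd(image, roi_mask):
    """Normalized standard deviation inside an ROI: std / mean.
    In a uniform organ such as the liver this is a noise surrogate."""
    vals = image[roi_mask]
    return float(vals.std() / vals.mean())

rng = np.random.default_rng(0)
liver = np.zeros((16, 16), dtype=bool)
liver[4:12, 4:12] = True                                   # toy liver ROI

quiet = np.full((16, 16), 2.5) + rng.normal(0, 0.1, (16, 16))  # high-count-like
noisy = np.full((16, 16), 2.5) + rng.normal(0, 0.5, (16, 16))  # low-count-like
print(nstd(quiet, liver), nstd(noisy, liver))  # the noisier image has the larger NSTD
```

Rebinning dynamic data until the liver NSTD matches that of a target clinical protocol, as described above, gives training inputs with a realistic noise level.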
Affiliation(s)
- Juan Liu
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Sijin Ren
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Rui Wang
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
  - Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Niloufarsadat Mirian
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Yu-Jung Tsai
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Michal Kulon
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Darko Pucar
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Ming-Kai Chen
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Chi Liu
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
14
Bonardel G, Dupont A, Decazes P, Queneau M, Modzelewski R, Coulot J, Le Calvez N, Hapdey S. Clinical and phantom validation of a deep learning based denoising algorithm for F-18-FDG PET images from lower detection counting in comparison with the standard acquisition. EJNMMI Phys 2022; 9:36. [PMID: 35543894] [PMCID: PMC9095795] [DOI: 10.1186/s40658-022-00465-z]
Abstract
Background PET/CT image quality is directly influenced by the injected F-18-FDG activity: the higher the injected activity, the less noise in the reconstructed images, but the greater the radiation exposure of staff. A new FDA-cleared software product makes it possible to obtain clinical PET images acquired at 25% of the count statistics typical of US practice. Our aim was to determine the limits of a deep learning based denoising algorithm (SubtlePET) applied to statistically reduced PET raw data from 3 different latest-generation PET scanners, in comparison with the regular acquisition, in phantoms and patients, considering the European guidelines for radiotracer injected activities. Images of low- and high-contrast spheres (SBR = 2 and 5) of the IEC phantom and high-contrast (SBR = 5) micro-spheres of the Jaszczak phantom were acquired on 3 different PET devices. 110 patients with different pathologies were included. The data were acquired in list mode and retrospectively reconstructed with the regular acquisition count statistics (PET100), a 50% reduction in counts (PET50), and a 66% reduction in counts (PET33). These count-reduced images were post-processed with SubtlePET to obtain PET50 + SP and PET33 + SP images. Patient image quality was scored by 2 senior nuclear physicians. Peak signal-to-noise ratio and structural similarity metrics were computed to compare the low-count images to the regular acquisition (PET100). Results SubtlePET reliably denoised the images and maintained the SUVmax values in PET50 + SP. SubtlePET-enhanced images (PET33 + SP) had slightly increased noise compared to PET100 and could lead to a potential loss of information in terms of lesion detectability. Regarding the patient datasets, PET100 and PET50 + SP were qualitatively comparable. The SubtlePET algorithm was able to correctly recover the SUVmax values of the lesions and maintain a noise level equivalent to that of full-time images. Conclusion Based on our results, SubtlePET is suitable in clinical practice for half-time or half-dose acquisitions based on the European recommended injected dose of 3 MBq/kg, without loss of diagnostic confidence. Supplementary Information The online version contains supplementary material available at 10.1186/s40658-022-00465-z.
Affiliation(s)
- Gerald Bonardel
  - Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France
  - Nuclear Medicine, Hôpital Delafontaine, Saint-Denis, France
- Pierre Decazes
  - Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France
  - QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France
- Mathieu Queneau
  - Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France
  - Nuclear Medicine, Hôpital Delafontaine, Saint-Denis, France
- Romain Modzelewski
  - Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France
  - QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France
- Nicolas Le Calvez
  - Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France
  - Nuclear Medicine, Hôpital Delafontaine, Saint-Denis, France
- Sébastien Hapdey
  - Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France
  - QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France
15
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement, and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Department of Data Science and AI, Monash University, Melbourne, Australia
16
de Vries BM, Golla SSV, Zwezerijnen GJC, Hoekstra OS, Jauw YWS, Huisman MC, van Dongen GAMS, Menke-van der Houven van Oordt WC, Zijlstra-Baalbergen JJM, Mesotten L, Boellaard R, Yaqub M. 3D Convolutional Neural Network-Based Denoising of Low-Count Whole-Body 18F-Fluorodeoxyglucose and 89Zr-Rituximab PET Scans. Diagnostics (Basel) 2022; 12:596. [PMID: 35328149] [PMCID: PMC8946936] [DOI: 10.3390/diagnostics12030596]
Abstract
Acquisition time and injected activity for 18F-fluorodeoxyglucose (18F-FDG) PET should ideally be reduced. However, this decreases the signal-to-noise ratio (SNR), which impairs the diagnostic value of these PET scans. In addition, 89Zr-antibody PET is known to have a low SNR. To improve the diagnostic value of these scans, a convolutional neural network (CNN) denoising method is proposed. The aim of this study was therefore to develop CNNs that increase SNR for low-count 18F-FDG and 89Zr-antibody PET. Super-low-count, low-count, and full-count 18F-FDG PET scans from 60 primary lung cancer patients and full-count 89Zr-rituximab PET scans from five patients with non-Hodgkin lymphoma were acquired. CNNs were built to capture the features of the PET scans and to denoise them. Additionally, Gaussian smoothing (GS) and bilateral filtering (BF) were evaluated. The performance of the denoising approaches was assessed based on the tumour recovery coefficient (TRC), the coefficient of variation (COV; level of noise), and a qualitative assessment by two nuclear medicine physicians. The CNNs had a higher TRC and a comparable or lower COV than GS and BF, and were also the preferred method of the two observers for both 18F-FDG and 89Zr-rituximab PET. The CNNs improved the SNR of low-count 18F-FDG and 89Zr-rituximab PET, with clinical performance nearly equal to or better than that of the full-count PET. Additionally, the CNNs outperformed GS and BF.
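The TRC/COV trade-off that motivates comparing CNNs against Gaussian smoothing above can be seen in a toy example: smoothing lowers background noise (COV) but also blurs the tumour, lowering its recovery coefficient (TRC). A generic NumPy sketch with our own function names and a small separable kernel, not the study's code:

```python
import numpy as np

def smooth2d(img, kernel):
    """Separable 2D smoothing: convolve rows, then columns."""
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, "same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, "same")

def trc(image, tumour_mask, true_activity):
    """Tumour recovery coefficient: measured ROI mean / true activity."""
    return float(image[tumour_mask].mean() / true_activity)

def cov(image, bg_mask):
    """Coefficient of variation (noise level) in a background ROI."""
    vals = image[bg_mask]
    return float(vals.std() / vals.mean())

rng = np.random.default_rng(0)
img = np.full((24, 24), 1.0) + rng.normal(0, 0.2, (24, 24))  # noisy background
tumour = np.zeros((24, 24), dtype=bool)
tumour[10:14, 10:14] = True
img[tumour] += 3.0                       # true tumour activity = 4.0

bg = np.zeros_like(tumour)
bg[4:8, 4:20] = True                     # background strip away from the tumour

sm = smooth2d(img, np.array([0.25, 0.5, 0.25]))
print(trc(img, tumour, 4.0), cov(img, bg))   # raw: TRC near 1, high COV
print(trc(sm, tumour, 4.0), cov(sm, bg))     # smoothed: lower COV, but lower TRC
```

A denoiser that beats this baseline must cut the background COV without paying the TRC penalty that smoothing incurs, which is the comparison reported above.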
Affiliation(s)
- Bart M. de Vries
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
  - Correspondence: Tel.: +31-643628806
- Sandeep S. V. Golla
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Gerben J. C. Zwezerijnen
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Otto S. Hoekstra
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Yvonne W. S. Jauw
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
  - Cancer Center Amsterdam, Department of Hematology, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Marc C. Huisman
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Guus A. M. S. van Dongen
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Josée J. M. Zijlstra-Baalbergen
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
  - Cancer Center Amsterdam, Department of Hematology, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Liesbet Mesotten
  - Faculty of Medicine and Life Sciences, Hasselt University, Agoralaan Building D, B-3590 Diepenbeek, Belgium
  - Department of Nuclear Medicine, Ziekenhuis Oost Limburg, Schiepse Bos 6, B-3600 Genk, Belgium
- Ronald Boellaard
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Maqsood Yaqub
  - Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
17
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878] [DOI: 10.1007/s12149-021-01697-2]
Abstract
The initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, together with increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships in multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNN), is increasing rapidly. Applications in brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate the adoption of AI in medical imaging.
Affiliation(s)
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA.
- Donna Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA

18
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. PMID: 35029818. DOI: 10.1007/s12149-021-01710-8.
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. In particular, deep learning techniques such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been extensively used for medical image generation, and image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques to PET image generation. We categorize these studies into three themes: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning, and (3) PET image translation and synthesis with deep learning. We introduce recent studies in each of these three categories. Finally, we discuss the limitations of applying deep learning techniques to PET image generation and future prospects.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan.

19
Luo Y, Zhou L, Zhan B, Fei Y, Zhou J, Wang Y, Shen D. Adaptive rectification based adversarial network with spectrum constraint for high-quality PET image synthesis. Med Image Anal 2021; 77:102335. PMID: 34979432. DOI: 10.1016/j.media.2021.102335.
Abstract
Positron emission tomography (PET) is a typical nuclear imaging technique that can provide crucial functional information for early brain disease diagnosis. Clinically acceptable PET images are generally obtained by injecting a standard-dose radioactive tracer into the human body, while the cumulative radiation exposure inevitably raises concerns about potential health risks. Reducing the tracer dose, however, increases the noise and artifacts of the reconstructed PET image. To acquire high-quality PET images while reducing radiation exposure, this paper presents an adaptive rectification based generative adversarial network with spectrum constraint, named AR-GAN, which uses low-dose PET (LPET) images to synthesize high-quality standard-dose PET (SPET) images. Specifically, considering the differences between SPET images synthesized by a traditional GAN and real SPET images, an adaptive rectification network (AR-Net) is devised to estimate the residual between the preliminarily predicted image and the real SPET image, based on the hypothesis that a more realistic rectified image can be obtained by combining the residual with the preliminarily predicted PET image. Moreover, to address high-frequency distortions in the output image, a spectral regularization term is employed in the training objective to constrain the consistency of the synthesized and real images in the frequency domain, which further preserves high-frequency detail and improves synthesis performance. Validations on both a phantom dataset and a clinical dataset show that the proposed AR-GAN estimates SPET images from LPET images effectively and outperforms other state-of-the-art image synthesis approaches.
Affiliation(s)
- Yanmei Luo
- School of Computer Science, Sichuan University, China
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia
- Bo Zhan
- School of Computer Science, Sichuan University, China
- Yuchen Fei
- School of Computer Science, Sichuan University, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, China; School of Computer Science, Chengdu University of Information Technology, China
- Yan Wang
- School of Computer Science, Sichuan University, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

20
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021; 1:781868. PMID: 37492170. PMCID: PMC10365109. DOI: 10.3389/fradi.2021.781868.
Abstract
Artificial intelligence (AI) is an emerging technology gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized, with respect to their methodological designs and performance in handling volumetric imaging data. This review is intended to help researchers understand how to adapt AI for medical imaging and which advantages can be achieved with its assistance.
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
- Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China

21
Serrano-Sosa M, Van Snellenberg JX, Meng J, Luceno JR, Spuhler K, Weinstein JJ, Abi-Dargham A, Slifstein M, Huang C. Multitask Learning Based Three-Dimensional Striatal Segmentation of MRI: fMRI and PET Objective Assessments. J Magn Reson Imaging 2021; 54:1623-1635. PMID: 33970510. PMCID: PMC9204799. DOI: 10.1002/jmri.27682.
Abstract
BACKGROUND: Recent studies have established a clear topographical and functional organization of projections to and from complex subdivisions of the striatum. Manual segmentation of these functional subdivisions is labor-intensive and time-consuming, and automated methods are not as reliable as manual segmentation.
PURPOSE: To utilize multitask learning (MTL) to segment striatal subregions consisting of pre-commissural putamen (prePU), pre-commissural caudate (preCA), post-commissural putamen (postPU), post-commissural caudate (postCA), and ventral striatum (VST).
STUDY TYPE: Retrospective.
POPULATION: Eighty-seven total datasets from patients with schizophrenia and matched controls.
FIELD STRENGTH/SEQUENCE: 1.5 T and 3.0 T, T1-weighted (SPGR SENSE, 3D BRAVO).
ASSESSMENT: MTL-generated segmentations were compared to the Imperial College London Clinical Imaging Center (CIC) atlas. The Dice similarity coefficient (DSC) was used to compare the automated methods to manual segmentations. Positron emission tomography (PET) imaging: 60 minutes of emission data were acquired using [11C]raclopride; data were reconstructed by filtered back projection (FBP) with computed tomography (CT) used for attenuation correction. Binding potential values (BPND), region of interest (ROI) time series, and whole-brain connectivity from functional magnetic resonance imaging (fMRI) images were compared between manual and both automated segmentations.
STATISTICAL TESTS: Pearson correlation and paired t-test.
RESULTS: MTL-generated segmentations showed excellent spatial agreement with manual segmentation (DSC ≥ 0.72 across all striatal subregions). BPND values from MTL-generated segmentations correlated well with manual segmentations, with R2 ≥ 0.91 in all caudate and putamen subregions and R2 = 0.69 in VST. Mean Pearson correlation coefficients of the fMRI data between MTL-generated and manual segmentations were also high for time series (≥0.86) and whole-brain connectivity (≥0.89) across all subregions.
DATA CONCLUSION: Across both PET and fMRI task-based assessments, results from MTL-generated segmentations corresponded more closely to results from manually drawn ROIs than CIC-generated segmentations did. The proposed MTL approach is therefore a fast and reliable method for three-dimensional striatal subregion segmentation, with results comparable to manually segmented ROIs.
LEVEL OF EVIDENCE: 2. TECHNICAL EFFICACY STAGE: 1.
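The Dice similarity coefficient used in this study to grade spatial agreement between segmentations is defined as DSC = 2|A ∩ B| / (|A| + |B|) for two segmentations A and B. A minimal sketch (the set-of-voxel-indices representation and the function name are illustrative, not taken from the paper):

```python
def dice(a, b):
    """Dice similarity coefficient between two segmentations,
    each given as a set of voxel indices."""
    intersection = len(a & b)
    return 2 * intersection / (len(a) + len(b))

# Two toy "segmentations" sharing half their voxels:
print(dice({1, 2, 3, 4}, {3, 4, 5, 6}))  # 0.5
```

DSC ranges from 0 (no overlap) to 1 (identical segmentations), so the reported DSC ≥ 0.72 indicates substantial voxelwise overlap with the manual reference.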
Affiliation(s)
- Mario Serrano-Sosa
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY
- Jared X. Van Snellenberg
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY
- Department of Psychiatry, Stony Brook Medicine, Stony Brook, NY
- Jiayan Meng
- Department of Psychiatry, Stony Brook Medicine, Stony Brook, NY
- Jacob R. Luceno
- Department of Psychiatry, Stony Brook Medicine, Stony Brook, NY
- Karl Spuhler
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY
- Mark Slifstein
- Department of Psychiatry, Stony Brook Medicine, Stony Brook, NY
- Chuan Huang
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY
- Department of Psychiatry, Stony Brook Medicine, Stony Brook, NY
- Department of Radiology, Stony Brook Medicine, Stony Brook, NY

22
Estimation of Nuclear Medicine Exposure Measures Based on Intelligent Computer Processing. J Healthc Eng 2021; 2021:4102183. PMID: 34616531. PMCID: PMC8490043. DOI: 10.1155/2021/4102183.
Abstract
This paper provides an in-depth discussion and analysis of the estimation of nuclear medicine exposure measurements using computerized intelligent processing. The focus is on energy-extraction algorithms that obtain high energy resolution at the lowest possible ADC sampling rate, thereby reducing the amount of data. The paper examines the direct pulse peak extraction algorithm, polynomial curve fitting algorithm, double exponential function curve fitting algorithm, and pulse area calculation algorithm. Detector output waveforms are obtained with an oscilloscope, and the analysis module is designed in MATLAB. Based on these algorithms, data obtained at six different lower sampling rates are analyzed and compared with the results of the high-sampling-rate direct pulse peak extraction and pulse area calculation algorithms. The correctness of the compartment model was checked and the results found to be realistic and reliable, so they can be used for the analysis of internal exposure data in radiation occupational health management, estimation of internal exposure dose for nuclear emergency groups, and estimation of accidental internal exposure dose. The results of the compartment models of the respiratory and digestive tracts can be used to calculate the distribution and retention patterns of radionuclides and their compounds in the body, to assess the damage of internal radionuclide contamination, and to guide medical treatment.
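Of the energy-extraction methods listed, the pulse area calculation is at its core numerical integration of the sampled detector waveform, with the area serving as a proxy for deposited energy. A minimal sketch using the trapezoidal rule (the function name and the assumption of a uniform sampling interval are illustrative; this is not the paper's MATLAB implementation):

```python
def pulse_area(samples, dt):
    """Approximate the area under a sampled pulse by the
    trapezoidal rule, assuming a uniform sampling interval dt."""
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

# Triangular pulse sampled at 4 points, dt = 1:
print(pulse_area([0.0, 1.0, 1.0, 0.0], 1.0))  # 2.0
```

Lowering the ADC sampling rate enlarges dt and coarsens this integral, which is why the paper compares areas computed at six reduced rates against the high-rate result.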

23
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. PMID: 34537130. PMCID: PMC8457531. DOI: 10.1016/j.cpet.2021.06.005.
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.

24
Onishi Y, Hashimoto F, Ote K, Ohba H, Ota R, Yoshikawa E, Ouchi Y. Anatomical-guided attention enhances unsupervised PET image denoising performance. Med Image Anal 2021; 74:102226. PMID: 34563861. DOI: 10.1016/j.media.2021.102226.
Abstract
Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many pairs of low- and high-quality reference PET images. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is fed to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can considerably reduce PET scan times and tracer doses without adversely affecting patients.
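The peak signal-to-noise ratio values quoted above (in dB) follow the standard definition PSNR = 10 log10(MAX² / MSE), where MAX is the peak intensity of the reference and MSE the mean squared error of the denoised image against it. A minimal sketch on flattened intensity lists (the representation and names are illustrative):

```python
import math

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images,
    flattened to lists of intensities on a common scale."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return 10 * math.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a unit-scale image gives MSE = 0.01, i.e. 20 dB:
print(round(psnr([0.0, 1.0, 0.0, 1.0], [0.1, 0.9, 0.1, 0.9]), 2))  # 20.0
```

Because PSNR is logarithmic, the roughly 1.2 dB gap between MR-GDD (27.92 dB) and Gaussian filtering (26.68 dB) corresponds to about a 25% reduction in mean squared error.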
Affiliation(s)
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan.
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hiroyuki Ohba
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Ryosuke Ota
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Etsuji Yoshikawa
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Yasuomi Ouchi
- Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi-ku, Hamamatsu 431-3192, Japan

25
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. PMID: 34079744. PMCID: PMC8107336. DOI: 10.21037/qims-20-1078.
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging, including nuclear medicine imaging, has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten image acquisition time, reduce the injected tracer dose, and enhance image quality. This work provides an overview of the application of AI to image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China

26
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. DOI: 10.1007/s40336-021-00411-6.