1. Zhou Y, Chen T, Hou J, Xie H, Dvornek NC, Zhou SK, Wilson DL, Duncan JS, Liu C, Zhou B. Cascaded Multi-path Shortcut Diffusion Model for Medical Image Translation. Med Image Anal 2024; 98:103300. [PMID: 39226710] [DOI: 10.1016/j.media.2024.103300]
Abstract
Image-to-image translation is a vital component of medical image processing, with many uses across a wide range of imaging modalities and clinical scenarios. Previous methods include Generative Adversarial Networks (GANs) and Diffusion Models (DMs), which offer realism but suffer from instability and lack uncertainty estimation. Even though GAN and DM methods have individually demonstrated their capability in medical image translation tasks, the potential of combining a GAN and a DM to further improve translation performance and to enable uncertainty estimation remains largely unexplored. In this work, we address these challenges by proposing a Cascade Multi-path Shortcut Diffusion Model (CMDM) for high-quality medical image translation and uncertainty estimation. To reduce the required number of iterations and ensure robust performance, our method first obtains a conditional GAN-generated prior image that is then used for efficient reverse translation with a DM in the subsequent step. Additionally, a multi-path shortcut diffusion strategy is employed to refine translation results and estimate uncertainty. A cascaded pipeline further enhances translation quality, incorporating residual averaging between cascades. We collected three different medical image datasets, with two sub-tasks per dataset, to test the generalizability of our approach. Our experiments showed that CMDM can produce high-quality translations comparable to state-of-the-art methods while providing reasonable uncertainty estimates that correlate well with the translation error.
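A minimal sketch of the multi-path/cascade idea described in this abstract: a conditional-GAN prior is refined along several stochastic shortcut paths, the path average gives the translation, the path spread gives an uncertainty map, and residual averaging is applied between cascades. The functions gan_prior and shortcut_refine are hypothetical stand-ins, not the authors' networks.

```python
# Illustrative sketch only (not the authors' CMDM code).
import numpy as np

rng = np.random.default_rng(0)

def gan_prior(source):
    """Stand-in for a conditional GAN mapping the source image to a prior estimate."""
    return source * 0.9 + 0.05  # placeholder transformation

def shortcut_refine(prior, source, noise_level=0.05):
    """Stand-in for a short reverse-diffusion pass started from the prior image."""
    noisy = prior + noise_level * rng.standard_normal(prior.shape)
    return 0.5 * noisy + 0.5 * source  # placeholder "denoising" step

def cmdm_like_translate(source, n_paths=8, n_cascades=2):
    estimate = gan_prior(source)
    for _ in range(n_cascades):
        paths = np.stack([shortcut_refine(estimate, source) for _ in range(n_paths)])
        refined = paths.mean(axis=0)                       # multi-path average -> translation
        uncertainty = paths.std(axis=0)                    # path spread -> uncertainty map
        estimate = estimate + 0.5 * (refined - estimate)   # residual averaging between cascades
    return estimate, uncertainty

source = rng.random((64, 64)).astype(np.float32)
translated, unc = cmdm_like_translate(source)
print(translated.shape, float(unc.mean()))
```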
Affiliation(s)
- Yinchi Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Tianqi Chen: Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jun Hou: Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Nicha C Dvornek: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- S Kevin Zhou: School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- David L Wilson: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- James S Duncan: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Bo Zhou: Department of Radiology, Northwestern University, Chicago, IL, USA
2. Sharma V, Awate SP. Adversarial EM for variational deep learning: Application to semi-supervised image quality enhancement in low-dose PET and low-dose CT. Med Image Anal 2024; 97:103291. [PMID: 39121545] [DOI: 10.1016/j.media.2024.103291]
Abstract
In positron emission tomography (PET) and X-ray computed tomography (CT), reducing radiation dose can cause significant degradation in image quality. For image quality enhancement in low-dose PET and CT, we propose a novel theoretical adversarial and variational deep neural network (DNN) framework relying on expectation maximization (EM) based learning, termed adversarial EM (AdvEM). AdvEM uses an encoder-decoder architecture with a multiscale latent space, and generalized-Gaussian models enabling datum-specific robust statistical modeling in latent space and image space. Model robustness is further enhanced by including adversarial learning in the training protocol. Unlike typical variational-DNN learning, AdvEM performs latent-space sampling from the posterior distribution using a Metropolis-Hastings scheme. Unlike existing schemes for PET or CT image enhancement, which train using pairs of low-dose images with their corresponding normal-dose versions, we propose a semi-supervised AdvEM (ssAdvEM) framework that enables learning using a small number of normal-dose images. AdvEM and ssAdvEM provide per-pixel uncertainty estimates for their outputs. Empirical analyses on real-world PET and CT data involving many baselines, out-of-distribution data, and ablation studies show the benefits of the proposed framework.
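For readers unfamiliar with the posterior latent-space sampling step mentioned above, the following is a generic random-walk Metropolis-Hastings sampler on a toy two-dimensional log-posterior. The toy density is an assumption for demonstration only and is unrelated to the actual AdvEM model.

```python
# Generic Metropolis-Hastings sampling of a latent variable (illustration only).
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(z):
    # Toy unnormalized log-density: standard-normal prior times a pseudo-likelihood.
    return -0.5 * np.sum(z ** 2) - 0.5 * np.sum((z - 1.0) ** 2)

def metropolis_hastings(z0, n_samples=5000, step=0.3):
    z = np.array(z0, dtype=float)
    logp = log_posterior(z)
    samples = []
    for _ in range(n_samples):
        proposal = z + step * rng.standard_normal(z.shape)  # symmetric random-walk proposal
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:         # accept/reject step
            z, logp = proposal, logp_prop
        samples.append(z.copy())
    return np.array(samples)

samples = metropolis_hastings(np.zeros(2))
print(samples[1000:].mean(axis=0))  # approaches [0.5, 0.5] for this toy posterior
```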
Affiliation(s)
- Vatsala Sharma: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Suyash P Awate: Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
3. Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796] [PMCID: PMC11391655] [DOI: 10.1186/s12880-024-01417-y]
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose Positron Emission Tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included in this review; they utilized various deep learning models, such as generative adversarial networks and U-Net, to synthesize high-quality PET images. The studies involved different datasets, image preprocessing techniques, input data types, and loss functions. The evaluation of the generated PET images was conducted using both quantitative and qualitative methods, including physician evaluations and various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi: Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari: Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi: Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh: Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran; Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
4. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE Trans Radiat Plasma Med Sci 2024; 8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi: Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
5. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Andrew J Reader: School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EH, UK
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
6. Sadia RT, Chen J, Zhang J. CT image denoising methods for image quality improvement and radiation dose reduction. J Appl Clin Med Phys 2024; 25:e14270. [PMID: 38240466] [PMCID: PMC10860577] [DOI: 10.1002/acm2.14270]
Abstract
With the ever-increasing use of computed tomography (CT), concerns about its radiation dose have become a significant public issue. To address the need for radiation dose reduction, CT denoising methods have been widely investigated and applied in low-dose CT images. Numerous noise reduction algorithms have emerged, such as iterative reconstruction and most recently, deep learning (DL)-based approaches. Given the rapid advancements in Artificial Intelligence techniques, we recognize the need for a comprehensive review that emphasizes the most recently developed methods. Hence, we have performed a thorough analysis of existing literature to provide such a review. Beyond directly comparing the performance, we focus on pivotal aspects, including model training, validation, testing, generalizability, vulnerability, and evaluation methods. This review is expected to raise awareness of the various facets involved in CT image denoising and the specific challenges in developing DL-based models.
Affiliation(s)
- Rabeya Tus Sadia: Department of Computer Science, University of Kentucky, Lexington, Kentucky, USA
- Jin Chen: Department of Medicine-Nephrology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Jie Zhang: Department of Radiology, University of Kentucky, Lexington, Kentucky, USA
7. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024: arXiv:2401.00232v2 [preprint]. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi: Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
8. Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective. PET (Positron Emission Tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about the risk of radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both will also reduce photon counts and hence degrade the image quality. Therefore, it is of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracted features from PET and CT images in two separate branches and then fused the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better generated tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The experimental results of quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones, which are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
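A schematic PyTorch sketch of the multi-branch fusion idea described above: PET and CT are encoded in separate branches, fused, and decoded to an enhanced PET image, with an edge-consistency term standing in for the structure-promoting loss. The layer sizes and the exact loss form are assumptions, not the published M3S-Net (the self-attention blocks are omitted here).

```python
# Illustrative two-branch PET/CT fusion with an assumed structure-promoting term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusionNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.pet_branch = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.ct_branch = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, pet, ct):
        feats = torch.cat([self.pet_branch(pet), self.ct_branch(ct)], dim=1)
        return self.fuse(feats) + pet  # residual prediction on the low-dose PET

def image_gradients(x):
    gx = x[..., :, 1:] - x[..., :, :-1]
    gy = x[..., 1:, :] - x[..., :-1, :]
    return gx, gy

def structure_promoting_loss(pred, target, ct, alpha=0.1):
    # L1 fidelity plus an (assumed) edge-consistency term against the CT gradients.
    gx_p, gy_p = image_gradients(pred)
    gx_c, gy_c = image_gradients(ct)
    edge = F.l1_loss(gx_p.abs(), gx_c.abs()) + F.l1_loss(gy_p.abs(), gy_c.abs())
    return F.l1_loss(pred, target) + alpha * edge

net = TwoBranchFusionNet()
pet, ct, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
loss = structure_promoting_loss(net(pet, ct), target, ct)
loss.backward()
print(float(loss))
```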
Affiliation(s)
- Dong Wang: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang: Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
9. Balaji V, Song TA, Malekzadeh M, Heidari P, Dutta J. Artificial Intelligence for PET and SPECT Image Enhancement. J Nucl Med 2024; 65:4-12. [PMID: 37945384] [PMCID: PMC10755520] [DOI: 10.2967/jnumed.122.265000]
Abstract
Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translation potential of these models in the future.
Affiliation(s)
- Vibha Balaji: Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Tzu-An Song: Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Masoud Malekzadeh: Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
- Pedram Heidari: Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Joyita Dutta: Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts
10. Manoj Doss KK, Chen JC. Utilizing deep learning techniques to improve image quality and noise reduction in preclinical low-dose PET images in the sinogram domain. Med Phys 2024; 51:209-223. [PMID: 37966121] [DOI: 10.1002/mp.16830]
Abstract
BACKGROUND Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to the low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) have the capability to enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about the radioactivity distribution in the body. PURPOSE Our objective was to optimize image quality and reduce noise in preclinical PET images by utilizing the sinogram domain as input for DL models, resulting in improved image quality compared to LD-PET images. METHODS A GAN and a CNN model were utilized to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of the DL-generated images was evaluated using the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Additionally, the spatial resolution of the DL input and output was assessed in terms of full width at half maximum (FWHM) and full width at tenth maximum (FWTM). DL outcomes were then compared with conventional denoising algorithms such as non-local means (NLM) and block-matching and 3D filtering (BM3D). RESULTS The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of the DL PET images were significantly more accurate than those of the LD, NLM, and BM3D PET images, and as precise as those of the HD PET images. The low MSE loss further underscored the strong performance of the models. To further improve training, the generator loss (G loss) was increased to a value higher than the discriminator loss (D loss), thereby achieving convergence in the GAN model. CONCLUSIONS The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. There was a noticeable improvement in image quality and noise level in the predicted HD images. Importantly, the DL networks did not compromise the spatial resolution of the images.
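Two of the resolution and contrast measures named above can be computed as follows. The profile and ROI definitions are illustrative assumptions, not those of the study; the helper estimates full width at half or tenth maximum from a 1-D line profile by linear interpolation, and CNR from lesion and background ROIs.

```python
# Illustrative FWHM/FWTM and CNR computations (not the study's exact protocol).
import numpy as np

def full_width(profile, fraction=0.5):
    """Width (in pixels) of a peaked 1-D profile at `fraction` of its maximum."""
    p = np.asarray(profile, dtype=float)
    level = fraction * p.max()
    above = np.where(p >= level)[0]
    left, right = above[0], above[-1]
    # linearly interpolate the crossing positions on both flanks
    l = left - (p[left] - level) / (p[left] - p[left - 1]) if left > 0 else left
    r = right + (p[right] - level) / (p[right] - p[right + 1]) if right + 1 < p.size else right
    return r - l

def cnr(lesion_roi, background_roi):
    lesion_roi, background_roi = np.asarray(lesion_roi), np.asarray(background_roi)
    return (lesion_roi.mean() - background_roi.mean()) / background_roi.std()

i = np.arange(201, dtype=float)
psf = np.exp(-(i - 100.0) ** 2 / (2 * 8.0 ** 2))   # Gaussian line profile, sigma = 8 px
print(round(full_width(psf, 0.5), 1))               # FWHM ~ 2.355 * 8 ~ 18.8 px
print(round(full_width(psf, 0.1), 1))               # FWTM ~ 4.29  * 8 ~ 34.3 px

rng = np.random.default_rng(0)
print(round(cnr(rng.normal(4.0, 0.5, 500), rng.normal(1.0, 0.5, 500)), 1))  # ~6
```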
Affiliation(s)
- Jyh-Cheng Chen: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Medical Imaging and Radiological Sciences, China Medical University, Taichung, Taiwan; School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
11. Li A, Yang B, Naganawa M, Fontaine K, Toyonaga T, Carson RE, Tang J. Dose reduction in dynamic synaptic vesicle glycoprotein 2A PET imaging using artificial neural networks. Phys Med Biol 2023; 68:245006. [PMID: 37857316] [PMCID: PMC10739622] [DOI: 10.1088/1361-6560/ad0535]
Abstract
Objective. Reducing dose in positron emission tomography (PET) imaging increases noise in reconstructed dynamic frames, which inevitably results in higher noise and possible bias in the subsequently estimated kinetic parameter images compared with those estimated in the standard-dose case. We report the development of a spatiotemporal denoising technique for reduced-count dynamic frames that integrates a cascade artificial neural network (ANN) with the highly constrained back-projection (HYPR) scheme to improve low-dose parametric imaging. Approach. We implemented and assessed the proposed method using imaging data acquired with 11C-UCB-J, a PET radioligand bound to synaptic vesicle glycoprotein 2A (SV2A) in the human brain. The patch-based ANN was trained with a reduced-count frame and its full-count correspondence from one subject and was used in cascade to process dynamic frames of other subjects to further take advantage of its denoising capability. The HYPR strategy was then applied to the spatially ANN-processed image frames to make use of the temporal information from the entire dynamic scan. Main results. In all testing subjects, including healthy volunteers and Parkinson's disease patients, the proposed method reduced more noise while introducing minimal bias in dynamic frames and the resulting parametric images, as compared with conventional denoising methods. Significance. Achieving 80% noise reduction with a bias of -2% in dynamic frames, which translates into 75% and 70% noise reduction in the tracer uptake (bias, -2%) and distribution volume (bias, -5%) images, the proposed ANN+HYPR technique demonstrates denoising capability equivalent to an 11-fold dose increase for dynamic SV2A PET imaging with 11C-UCB-J.
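A generic HYPR-LR-style formulation of the temporal constraint mentioned above, applied after spatial denoising (omitted here): each frame is re-weighted by the ratio of its smoothed version to the smoothed composite image, so the composite supplies spatial detail while frame-level noise is suppressed. This is a textbook-style sketch, not the authors' implementation.

```python
# Illustrative HYPR-LR-type processing of a dynamic PET series.
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_lr(frames, sigma=2.0, eps=1e-6):
    """frames: array of shape (T, H, W) of dynamic frames (already spatially denoised)."""
    composite = frames.sum(axis=0)                      # high-count composite over the scan
    smooth_composite = gaussian_filter(composite, sigma)
    out = np.empty_like(frames)
    for t, frame in enumerate(frames):
        weight = gaussian_filter(frame, sigma) / (smooth_composite + eps)
        out[t] = composite * weight
    return out

rng = np.random.default_rng(0)
truth = np.stack([np.exp(-t / 10.0) * np.ones((32, 32)) for t in range(20)])
noisy = (rng.poisson(truth * 50) / 50.0).astype(float)  # synthetic count-limited frames
denoised = hypr_lr(noisy)
print(noisy.std(axis=(1, 2)).mean(), denoised.std(axis=(1, 2)).mean())  # frame noise drops
```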
Affiliation(s)
- Andi Li: Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
- Bao Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, People's Republic of China
- Mika Naganawa: Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Kathryn Fontaine: Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga: Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Richard E Carson: Positron Emission Tomography Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Jing Tang: Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States of America
12. Zhou B, Xie H, Liu Q, Chen X, Guo X, Feng Z, Hou J, Zhou SK, Li B, Rominger A, Shi K, Duncan JS, Liu C. FedFTN: Personalized federated learning with deep feature transformation network for multi-institutional low-count PET denoising. Med Image Anal 2023; 90:102993. [PMID: 37827110] [PMCID: PMC10611438] [DOI: 10.1016/j.media.2023.102993]
Abstract
Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns of patient data. Moreover, low-count PET data at different institutions may have different data distribution, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need of aggregating local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.
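A minimal sketch of the communication pattern described above: only the shared denoising network's weights are averaged across institutions each round, while each site's feature transformation network (FTN) parameters stay local. Plain NumPy arrays stand in for network weights; this is an illustration, not the released FedFTN code.

```python
# Illustrative "shared denoiser + local FTN" federated aggregation.
import numpy as np

def federated_average(site_weights):
    """Average a list of {name: array} weight dictionaries (equal site weighting assumed)."""
    names = site_weights[0].keys()
    return {k: np.mean([w[k] for w in site_weights], axis=0) for k in names}

rng = np.random.default_rng(0)
n_sites, rounds = 3, 5
shared = [{"denoiser.conv1": rng.normal(size=(8, 8))} for _ in range(n_sites)]  # communicated
local_ftn = [{"ftn.scale": rng.normal(size=(8,))} for _ in range(n_sites)]      # never shared

for _ in range(rounds):
    # 1) each site performs local training (stand-in: a small random update)
    for s in range(n_sites):
        shared[s] = {k: v - 0.01 * rng.normal(size=v.shape) for k, v in shared[s].items()}
        local_ftn[s] = {k: v - 0.01 * rng.normal(size=v.shape) for k, v in local_ftn[s].items()}
    # 2) the server aggregates ONLY the shared denoiser weights and broadcasts them back
    global_shared = federated_average(shared)
    shared = [dict(global_shared) for _ in range(n_sites)]

print(np.allclose(shared[0]["denoiser.conv1"], shared[1]["denoiser.conv1"]))  # True: shared part is global
print(np.allclose(local_ftn[0]["ftn.scale"], local_ftn[1]["ftn.scale"]))      # False: FTNs stay personalized
```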
Affiliation(s)
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Zhicheng Feng: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Jun Hou: Department of Computer Science, University of California Irvine, Irvine, CA, USA
- S Kevin Zhou: School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Biao Li: Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Axel Rominger: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
- James S Duncan: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
13. Loft M, Ladefoged CN, Johnbeck CB, Carlsen EA, Oturai P, Langer SW, Knigge U, Andersen FL, Kjaer A. An Investigation of Lesion Detection Accuracy for Artificial Intelligence-Based Denoising of Low-Dose 64Cu-DOTATATE PET Imaging in Patients with Neuroendocrine Neoplasms. J Nucl Med 2023; 64:951-959. [PMID: 37169532] [PMCID: PMC10241012] [DOI: 10.2967/jnumed.122.264826]
Abstract
Frequent somatostatin receptor PET, for example, 64Cu-DOTATATE PET, is part of the diagnostic work-up of patients with neuroendocrine neoplasms (NENs), resulting in high accumulated radiation doses. Scan-related radiation exposure should be minimized in accordance with the as-low-as-reasonably achievable principle, for example, by reducing injected radiotracer activity. Previous investigations found that reducing 64Cu-DOTATATE activity to below 50 MBq results in inadequate image quality and lesion detection. We therefore investigated whether image quality and lesion detection of less than 50 MBq of 64Cu-DOTATATE PET could be restored using artificial intelligence (AI). Methods: We implemented a parameter-transferred Wasserstein generative adversarial network for patients with NENs on simulated low-dose 64Cu-DOTATATE PET images corresponding to 25% (PET25%), or about 48 MBq, of the injected activity of the reference full dose (PET100%), or about 191 MBq, to generate denoised PET images (PETAI). We included 38 patients in the training sets for network optimization. We analyzed PET intensity correlation, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean-square error (MSE) of PETAI/PET100% versus PET25%/PET100%. Two readers assessed Likert scale-defined image quality (1, very poor; 2, poor; 3, moderate; 4, good; 5, excellent) and identified lesion-suspicious foci on PETAI and PET100% in a subset of the patients with no more than 20 lesions per organ (n = 33) to allow comparison of all foci on a 1:1 basis. Detected foci were scored (C1, definite lesion; C0, lesion-suspicious focus) and matched with PET100% as the reference. True-positive (TP), false-positive (FP), and false-negative (FN) lesions were assessed. Results: For PETAI/PET100% versus PET25%/PET100%, PET intensity correlation had a goodness-of-fit value of 0.94 versus 0.81, PSNR was 58.1 versus 53.0, SSIM was 0.908 versus 0.899, and MSE was 2.6 versus 4.7. Likert scale-defined image quality was rated good or excellent in 33 of 33 and 32 of 33 patients on PET100% and PETAI, respectively. Total number of detected lesions was 118 on PET100% and 115 on PETAI. Only 78 PETAI lesions were TP, 40 were FN, and 37 were FP, yielding detection sensitivity (TP/(TP+FN)) and a false discovery rate (FP/(TP+FP)) of 66% (78/118) and 32% (37/115), respectively. In 62% (23/37) of cases, the FP lesion was scored C1, suggesting a definite lesion. Conclusion: PETAI improved visual similarity with PET100% compared with PET25%, and PETAI and PET100% had similar Likert scale-defined image quality. However, lesion detection analysis performed by physicians showed high proportions of FP and FN lesions on PETAI, highlighting the need for clinical validation of AI algorithms.
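The detection figures quoted above follow directly from the reported TP/FP/FN counts; the small helper below reproduces them (sensitivity = TP/(TP+FN), false discovery rate = FP/(TP+FP)).

```python
# Reproducing the lesion-detection summary from the counts reported in the abstract.
def detection_summary(tp, fp, fn):
    sensitivity = tp / (tp + fn)
    false_discovery_rate = fp / (tp + fp)
    return sensitivity, false_discovery_rate

sens, fdr = detection_summary(tp=78, fp=37, fn=40)
print(f"sensitivity = {sens:.0%}, FDR = {fdr:.0%}")  # 66%, 32%
```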
Affiliation(s)
- Mathias Loft: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark; ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Claes N Ladefoged: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Camilla B Johnbeck: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark; ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Esben A Carlsen: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark; ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Peter Oturai: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark; ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Seppo W Langer: ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark; Department of Oncology, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Ulrich Knigge: ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark; Departments of Clinical Endocrinology and Surgical Gastroenterology, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Flemming L Andersen: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Kjaer: Department of Clinical Physiology and Nuclear Medicine & Cluster for Molecular Imaging, Copenhagen University Hospital-Rigshospitalet & Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark; ENETS Neuroendocrine Tumor Center of Excellence, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
14. Xu X, Li Y, Du L, Huang W. Inverse Design of Nanophotonic Devices Using Generative Adversarial Networks with the Sim-NN Model and Self-Attention Mechanism. Micromachines 2023; 14:634. [PMID: 36985041] [PMCID: PMC10056754] [DOI: 10.3390/mi14030634]
Abstract
The inverse design method based on a generative adversarial network (GAN) combined with a simulation neural network (sim-NN) and the self-attention mechanism is proposed in order to improve the efficiency of GAN for designing nanophotonic devices. The sim-NN can guide the model to produce more accurate device designs via the spectrum comparison, whereas the self-attention mechanism can help to extract detailed features of the spectrum by exploring their global interconnections. The nanopatterned power splitter with a 2 μm × 2 μm interference region is designed as an example to obtain the average high transmission (>94%) and low back-reflection (<0.5%) over the broad wavelength range of 1200~1650 nm. As compared to other models, this method can produce larger proportions of high figure-of-merit devices with various desired power-splitting ratios.
15. Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, Guo X, Li X, Zhou SK, Duncan JS, Liu C. Federated Transfer Learning for Low-dose PET Denoising: A Pilot Study with Simulated Heterogeneous Data. IEEE Trans Radiat Plasma Med Sci 2023; 7:284-295. [PMID: 37789946] [PMCID: PMC10544830] [DOI: 10.1109/trpms.2022.3194408]
Abstract
Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstruction. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may use different low-dose protocols, leading to non-identical data distribution. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need of aggregating local data, it is challenging for previous methods to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET is still under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data without compromising data privacy for achieving superior low-dose PET denoising performance for different institutions with different low-dose settings, as compared to previous FL methods.
Affiliation(s)
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Tianshun Miao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Niloufar Mirian: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Zhicheng Feng: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90007, USA
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Xiaoxiao Li: Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
- S Kevin Zhou: School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China, and the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- James S Duncan: Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu: Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
16. Image denoising in the deep learning era. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10305-2]
17. Liu H, Yousefi H, Mirian N, Lin M, Menard D, Gregory M, Aboian M, Boustani A, Chen MK, Saperstein L, Pucar D, Kulon M, Liu C. PET Image Denoising using a Deep-Learning Method for Extremely Obese Patients. IEEE Trans Radiat Plasma Med Sci 2022; 6:766-770. [PMID: 37284026] [PMCID: PMC10241407] [DOI: 10.1109/trpms.2021.3131999]
Abstract
The image quality in clinical PET scans can be severely degraded due to high noise levels in extremely obese patients. Our work aimed to reduce the noise in clinical PET images of extremely obese subjects to the noise level of lean subject images, to ensure consistent imaging quality. The noise level was measured by the normalized standard deviation (NSTD) derived from a liver region of interest. A deep learning-based noise reduction method with a fully 3D patch-based U-Net was used. Two U-Nets, A and B, were trained on datasets with 40% and 10% count levels derived from 100 lean subjects, respectively. The clinical PET images of 10 extremely obese subjects were denoised using the two U-Nets. The results showed that the noise levels of the 40%-count images of lean subjects were consistent with those of the extremely obese subjects. U-Net A effectively reduced the noise in the images of the extremely obese patients while preserving the fine structures. The liver NSTD improved from 0.13±0.04 to 0.08±0.03 after noise reduction (p = 0.01). After denoising, the image noise level of the extremely obese subjects was similar to that of the lean subjects, in terms of liver NSTD (0.08±0.03 vs. 0.08±0.02, p = 0.74). In contrast, U-Net B over-smoothed the images of extremely obese patients, resulting in blurred fine structures. In a pilot reader study comparing images of extremely obese patients with and without U-Net A denoising, the difference was not significant. In conclusion, a U-Net trained on lean-subject datasets with a matched count level can provide promising denoising performance for extremely obese subjects while maintaining image resolution, though further clinical evaluation is needed.
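The noise metric used above, normalized standard deviation (NSTD) over a liver ROI, can be computed as below; normalization by the ROI mean is assumed, since the abstract does not spell out the exact definition.

```python
# Illustrative NSTD computation over a binary liver ROI (definition assumed).
import numpy as np

def nstd(image, roi_mask):
    """Normalized standard deviation of voxel values inside a binary ROI mask."""
    vals = np.asarray(image)[np.asarray(roi_mask, dtype=bool)]
    return vals.std() / vals.mean()

rng = np.random.default_rng(0)
liver = rng.normal(loc=10.0, scale=1.3, size=(40, 40, 40))   # synthetic liver uptake
mask = np.zeros_like(liver, dtype=bool)
mask[10:30, 10:30, 10:30] = True
print(round(nstd(liver, mask), 2))   # ~0.13, the pre-denoising level reported above
```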
Affiliation(s)
- Hui Liu: Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China, on leave from the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Hamed Yousefi: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Niloufar Mirian: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- MingDe Lin: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Visage Imaging, Inc., San Diego, CA, USA
- David Menard: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Matthew Gregory: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Mariam Aboian: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Annemarie Boustani: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Lawrence Saperstein: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Michal Kulon: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
18. Liu J, Ren S, Wang R, Mirian N, Tsai YJ, Kulon M, Pucar D, Chen MK, Liu C. Virtual high-count PET image generation using a deep learning method. Med Phys 2022; 49:5830-5840. [PMID: 35880541] [PMCID: PMC9474624] [DOI: 10.1002/mp.15867]
Abstract
PURPOSE Recently, deep learning-based methods have been established to denoise low-count positron emission tomography (PET) images and predict their standard-count image counterparts, which could achieve reduction of injected dosage and scan time, and improve image quality for equivalent lesion detectability and clinical diagnosis. In clinical settings, the majority of scans are still acquired using a standard injection dose with a standard scan time. In this work, we applied a 3D U-Net network to reduce the noise of standard-count PET images to obtain virtual-high-count (VHC) PET images, in order to identify the potential benefits of the obtained VHC PET images. METHODS The training datasets, including down-sampled standard-count PET images as the network input and high-count images as the desired network output, were derived from 27 whole-body PET datasets, which were acquired using a 90-min dynamic scan. The down-sampled standard-count PET images were rebinned to match the noise level of 195 clinical static PET datasets, by matching the normalized standard deviation (NSTD) inside 3D liver regions of interest (ROIs). Cross-validation was performed on the 27 PET datasets. Normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and standard uptake value (SUV) bias of lesions were used for evaluation of the standard-count and VHC PET images, with the real-high-count PET image of 90 min as the gold standard. In addition, the network trained with the 27 dynamic PET datasets was applied to the 195 clinical static datasets to obtain VHC PET images. The NSTD and mean/max SUV of hypermetabolic lesions in the standard-count and VHC PET images were evaluated. Three experienced nuclear medicine physicians evaluated the overall image quality of 50 randomly selected patients' standard-count and VHC images out of the 195 and conducted 5-score ranking. A Wilcoxon signed-rank test was used to compare differences in the grading of standard-count and VHC images. RESULTS The cross-validation results showed that the VHC PET images had better quantitative metric scores than the standard-count PET images. The mean/max SUVs of 35 lesions in the standard-count and true-high-count PET images did not show a statistically significant difference. Similarly, the mean/max SUVs of the VHC and true-high-count PET images did not show a statistically significant difference. For the 195 clinical datasets, the VHC PET images had a significantly lower NSTD than the standard-count images. The mean/max SUVs of 215 hypermetabolic lesions in the VHC and standard-count images showed no statistically significant difference. In the image quality evaluation by the three experienced nuclear medicine physicians, standard-count images and VHC images received scores with mean and standard deviation of 3.34 ± 0.80 and 4.26 ± 0.72 from Physician 1, 3.02 ± 0.87 and 3.96 ± 0.73 from Physician 2, and 3.74 ± 1.10 and 4.58 ± 0.57 from Physician 3, respectively. The VHC images were consistently ranked higher than the standard-count images. The Wilcoxon signed-rank test also indicated that the image quality evaluations of standard-count and VHC images differed significantly. CONCLUSIONS A DL method was proposed to convert standard-count images to VHC images. The VHC images had a reduced noise level, and no significant difference in mean/max SUV from the standard-count images was observed. VHC images improved image quality for better lesion detectability and clinical diagnosis.
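A sketch of the kind of quantitative comparison described above: NMSE and PSNR against a high-count reference, plus a Wilcoxon signed-rank test on paired reader scores. All values below are synthetic and only illustrate the computations.

```python
# Illustrative evaluation metrics and paired reader-score test (synthetic data).
import numpy as np
from scipy.stats import wilcoxon

def nmse(pred, ref):
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def psnr(pred, ref):
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    mse = np.mean((pred - ref) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # stand-in high-count image
standard_count = ref + 0.05 * rng.standard_normal(ref.shape)
virtual_high_count = ref + 0.02 * rng.standard_normal(ref.shape)
print(nmse(standard_count, ref) > nmse(virtual_high_count, ref))   # True
print(psnr(virtual_high_count, ref) > psnr(standard_count, ref))   # True

# paired 5-point reader scores for the same cases (synthetic)
scores_standard = np.array([3, 4, 3, 2, 4, 3, 3, 4, 2, 3])
scores_vhc      = np.array([4, 5, 4, 3, 4, 4, 4, 5, 3, 4])
print(wilcoxon(scores_standard, scores_vhc))
```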
Affiliation(s)
- Juan Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Sijin Ren: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Rui Wang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA; Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Niloufarsadat Mirian: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Yu-Jung Tsai: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Michal Kulon: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
19. Cui J, Gong K, Guo N, Kim K, Liu H, Li Q. Unsupervised PET Logan parametric image estimation using conditional deep image prior. Med Image Anal 2022; 80:102519. [PMID: 35767910] [DOI: 10.1016/j.media.2022.102519]
Abstract
Recently, deep learning-based denoising methods have been gradually used for PET image denoising and have shown great achievements. Among these methods, one interesting framework is the conditional deep image prior (CDIP), which is an unsupervised method that does not need prior training or a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. The neural network was utilized to represent the Logan slope and intercept images. The patient's computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%, thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%, thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal mean (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%, thalamus of brain PET datasets: 103.59%±16.37%).
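A generic sketch of the Logan reference-tissue graphical analysis underlying the slope/intercept images mentioned above: after a start time t*, the integrated target and reference time-activity curves are linearly related, and the fitted slope approximates the distribution volume ratio. The k2' term is omitted for simplicity (a common approximation) and the curves are synthetic; this is not the authors' CDIP pipeline.

```python
# Illustrative Logan reference-tissue plot fit (synthetic time-activity curves).
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y (starts at 0)."""
    out = np.zeros_like(y, dtype=float)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_ref(ct, cref, t, t_star=20.0):
    """Return (slope, intercept) of the Logan plot; the slope approximates DVR."""
    x = cumtrapz(cref, t) / ct
    y = cumtrapz(ct, t) / ct
    use = t >= t_star
    slope, intercept = np.polyfit(x[use], y[use], 1)
    return slope, intercept

t = np.linspace(0.5, 90, 60)                                   # minutes
cref = 8.0 * np.exp(-t / 30.0)                                 # synthetic reference-region TAC
ct = 2.0 * cref + 0.5 * cumtrapz(cref, t) * np.exp(-t / 60.0)  # synthetic target TAC
dvr, intercept = logan_ref(ct, cref, t)
print(round(dvr, 2), round(intercept, 2))
```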
Affiliation(s)
- Jianan Cui: The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Kuang Gong: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Ning Guo: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Kyungsang Kim: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Huafeng Liu: The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; Jiaxing Key Laboratory of Photonic Sensing and Intelligent Imaging, Jiaxing, Zhejiang 314000, China; Intelligent Optics and Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Zhejiang 314000, China
- Quanzheng Li: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
20
|
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611 DOI: 10.1007/s00259-022-05805-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 04/12/2022] [Indexed: 11/04/2022]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles addressing PET imaging applications such as attenuation correction, denoising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented together with the corresponding research works. CONCLUSION GANs are being rapidly adopted for PET imaging tasks. However, specific limitations must be addressed before they can reach their full potential and gain the medical community's trust in everyday clinical practice.
Collapse
|
21
|
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031 PMCID: PMC9250483 DOI: 10.1007/s00259-022-05746-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 02/25/2022] [Indexed: 12/21/2022]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods that integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement, and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed, and future research directions to address these challenges are presented.
Collapse
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
| | - Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
| | - Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
| |
Collapse
|
22
|
Geng M, Meng X, Yu J, Zhu L, Jin L, Jiang Z, Qiu B, Li H, Kong H, Yuan J, Yang K, Shan H, Han H, Yang Z, Ren Q, Lu Y. Content-Noise Complementary Learning for Medical Image Denoising. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:407-419. [PMID: 34529565 DOI: 10.1109/tmi.2021.3113365] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Medical image denoising is in great demand yet faces great challenges. Owing to its distinctive characteristics, medical image denoising in the image domain requires innovative deep learning strategies. In this study, we propose a simple yet effective strategy, content-noise complementary learning (CNCL), in which two deep learning predictors complementarily learn the content and the noise of the image dataset. A medical image denoising pipeline based on the CNCL strategy is presented and implemented as a generative adversarial network, in which various representative networks (including U-Net, DnCNN, and SRDenseNet) are investigated as the predictors. The performance of the implemented models has been validated on medical imaging datasets including CT, MR, and PET. The results show that the strategy outperforms state-of-the-art denoising algorithms in terms of visual quality and quantitative metrics, and that it generalizes robustly. These findings indicate that this simple yet effective strategy holds promising potential for medical image denoising tasks and could have clinical impact in the future. Code is available at: https://github.com/gengmufeng/CNCL-denoising.
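The following is a minimal sketch of the complementary-learning idea, not the released CNCL implementation linked above: one predictor estimates the image content, the other estimates the noise, and the two estimates are fused, here by simple averaging, which is an assumption made for brevity; the adversarial component and the real network architectures are omitted.

```python
# Sketch: two predictors learn content and noise complementarily, then fuse.
import torch
import torch.nn as nn

def tiny_cnn():
    """Placeholder predictor standing in for U-Net / DnCNN / SRDenseNet."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )

content_net, noise_net = tiny_cnn(), tiny_cnn()
opt = torch.optim.Adam(
    list(content_net.parameters()) + list(noise_net.parameters()), lr=1e-4)
l1 = nn.L1Loss()

noisy = torch.randn(4, 1, 64, 64)   # stand-ins for low-dose inputs
clean = torch.randn(4, 1, 64, 64)   # stand-ins for full-dose targets

content_pred = content_net(noisy)   # direct estimate of the clean image
noise_pred = noise_net(noisy)       # estimate of the noise component
loss = l1(content_pred, clean) + l1(noise_pred, noisy - clean)
loss.backward(); opt.step(); opt.zero_grad()

# Complementary fusion of the two estimates (simple averaging assumed here).
fused = 0.5 * (content_pred + (noisy - noise_pred))
```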
Collapse
|
23
|
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. FRONTIERS IN RADIOLOGY 2021; 1:781868. [PMID: 37492170 PMCID: PMC10365109 DOI: 10.3389/fradi.2021.781868] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/08/2021] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, organized by their methodological designs and their performance in handling volumetric imaging data. We expect this review to help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with the assistance of AI.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
| | - Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
| | - Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| |
Collapse
|
24
|
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505 DOI: 10.1016/j.compmedimag.2021.102010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 10/25/2021] [Accepted: 10/26/2021] [Indexed: 01/24/2023]
Abstract
The amount of radiotracer injected into laboratory animals remains the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count settings. In recent years, however, much of the credit has gone to deep learning (DL) methods, which provide more robust solutions across varying conditions. Although extensively explored in clinical studies, to the best of our knowledge the feasibility of DL-based image denoising in low-count small-animal PET imaging has not been investigated. Therefore, we investigated different DL frameworks for mapping low-dose small-animal PET images to their full-dose equivalents, with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL models was also compared with that of well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics demonstrated the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
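As a point of reference for the classical baselines mentioned above, the snippet below applies Gaussian smoothing, nonlocal means, and a simple Perona-Malik anisotropic diffusion to a 2D slice. It is an illustrative sketch with assumed parameter values and a random stand-in image, not the study's evaluation code.

```python
# Sketch: the three classical denoising baselines on a 2D PET slice.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import denoise_nl_means

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Simple Perona-Malik diffusion on a 2D float image."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u   # north difference
        ds = np.roll(u, 1, axis=0) - u    # south difference
        de = np.roll(u, -1, axis=1) - u   # east difference
        dw = np.roll(u, 1, axis=1) - u    # west difference
        # Edge-aware conduction coefficients suppress diffusion across edges.
        cn, cs, ce, cw = (np.exp(-(d / kappa) ** 2) for d in (dn, ds, de, dw))
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

slice_ld = np.random.rand(128, 128).astype(np.float32)  # stand-in low-dose slice
gauss = gaussian_filter(slice_ld, sigma=1.0)
nlm = denoise_nl_means(slice_ld, patch_size=5, patch_distance=6, h=0.05)
ad = anisotropic_diffusion(slice_ld)
```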
Collapse
Affiliation(s)
- Mahsa Amirrashedi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
| | - Saeed Sarkar
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
| | - Hojjat Mamizadeh
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
| | - Hossein Ghadiri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
| | - Pardis Ghafarian
- Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
| | - Mohammad Reza Ay
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran.
| |
Collapse
|
25
|
Zhou B, Tsai YJ, Chen X, Duncan JS, Liu C. MDPET: A Unified Motion Correction and Denoising Adversarial Network for Low-Dose Gated PET. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3154-3164. [PMID: 33909561 PMCID: PMC8588635 DOI: 10.1109/tmi.2021.3076191] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
In positron emission tomography (PET), gating is commonly used to reduce respiratory motion blurring and to facilitate motion correction methods. In applications where low-dose gated PET is useful, reducing the injected dose increases the noise level in the gated images, which can corrupt motion estimation and subsequent corrections and lead to inferior image quality. To address these issues, we propose MDPET, a unified motion correction and denoising adversarial network for generating motion-compensated, low-noise images from low-dose gated PET data. Specifically, we propose a Temporal Siamese Pyramid Network (TSP-Net) with basic units consisting of (1) a Siamese Pyramid Network (SP-Net) and (2) a recurrent layer for motion estimation across the gates. The denoising network is unified with the motion estimation network to simultaneously correct the motion and predict a motion-compensated, denoised PET reconstruction. Experimental results on human data demonstrate that MDPET can estimate motion accurately directly from low-dose gated images and produce high-quality, motion-compensated, low-noise reconstructions. Comparative studies with previous methods also show that MDPET achieves superior motion estimation and denoising performance. Our code is available at https://github.com/bbbbbbzhou/MDPET.
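A minimal sketch of the general motion-compensation-plus-denoising idea follows. The motion estimator and denoiser are single-layer placeholders standing in for the paper's TSP-Net and denoising network (the released code is linked above), and the displacement-field warping uses a standard grid_sample resampler; shapes and data are illustrative assumptions.

```python
# Sketch: warp gated frames to a reference gate, average, then denoise.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (N,1,H,W); flow: (N,2,H,W) per-pixel displacements (dx, dy)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(n, -1, -1, -1)
    new = base + flow.permute(0, 2, 3, 1)            # displaced sampling positions
    nx = 2.0 * new[..., 0] / (w - 1) - 1.0           # normalize x to [-1, 1]
    ny = 2.0 * new[..., 1] / (h - 1) - 1.0           # normalize y to [-1, 1]
    grid = torch.stack((nx, ny), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

gates = torch.randn(6, 1, 96, 96)                    # stand-ins for 6 gated frames
reference = gates[3:4]
motion_net = torch.nn.Conv2d(2, 2, 3, padding=1)     # placeholder motion estimator
denoiser = torch.nn.Conv2d(1, 1, 3, padding=1)       # placeholder denoiser

warped = [warp(g.unsqueeze(0),
               motion_net(torch.cat([g.unsqueeze(0), reference], dim=1)))
          for g in gates]
compensated = torch.cat(warped).mean(dim=0, keepdim=True)  # motion-compensated average
denoised = denoiser(compensated)
```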
Collapse
|
26
|
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130 PMCID: PMC8457531 DOI: 10.1016/j.cpet.2021.06.005] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Collapse
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
| | - Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
| | - Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.
| | - Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
27
|
Kulathilake KASH, Abdullah NA, Sabri AQM, Lai KW. A review on Deep Learning approaches for low-dose Computed Tomography restoration. COMPLEX INTELL SYST 2021; 9:2713-2745. [PMID: 34777967 PMCID: PMC8164834 DOI: 10.1007/s40747-021-00405-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 05/18/2021] [Indexed: 02/08/2023]
Abstract
Computed tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans with minimal X-ray flux to limit patients' exposure to radiation. However, these low-dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts across the image space. Various restoration methods have therefore been published over the past three decades to produce high-quality CT images from LDCT images. More recently, in contrast to conventional LDCT restoration methods, deep learning (DL)-based LDCT restoration approaches have become increasingly common because they are data-driven, high-performing, and fast to execute. This study aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, no previous review has specifically addressed this topic.
Collapse
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Aznul Qalid Md Sabri
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | - Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| |
Collapse
|
28
|
Lv Y, Xi C. PET image reconstruction with deep progressive learning. Phys Med Biol 2021; 66. [PMID: 33892485 DOI: 10.1088/1361-6560/abfb17] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 04/23/2021] [Indexed: 11/11/2022]
Abstract
Convolutional neural networks (CNNs) have recently achieved state-of-the-art results for positron emission tomography (PET) imaging problems. However, direct learning from an input image to a target image is challenging when the gap between the two images is large. Previous studies have shown that CNNs can reduce image noise but can also degrade contrast recovery for small lesions. In this work, a deep progressive learning (DPL) method for PET image reconstruction is proposed to reduce background noise and improve image contrast. DPL bridges the gap between a low-quality image and a high-quality image through two learning steps. In the iterative reconstruction process, two pre-trained neural networks are introduced to control the image noise and the contrast in turn. A feedback structure is adopted in the network design, which greatly reduces the number of parameters. The training data come from uEXPLORER, the world's first total-body PET scanner, whose PET images show high contrast and very low noise. We conducted extensive phantom and patient studies to test the algorithm for PET image quality improvement. The experimental results show that DPL is promising for reducing noise and improving the contrast of PET images. Moreover, the proposed method is versatile enough to address various imaging and image processing problems.
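The sketch below illustrates, under loose assumptions, how two pre-trained networks could be applied in turn inside an iterative loop, one controlling noise and one restoring contrast. The toy blur forward model and the single-layer placeholder networks are assumptions for illustration and are not the DPL implementation.

```python
# Sketch: alternating two learned priors between data-fidelity update steps.
import torch
import torch.nn.functional as F

def forward_model(x):
    """Toy 'system' operator: a 5x5 box blur standing in for PET projection."""
    k = torch.ones(1, 1, 5, 5) / 25.0
    return F.conv2d(x, k, padding=2)

noise_net = torch.nn.Conv2d(1, 1, 3, padding=1)     # placeholder noise-control net
contrast_net = torch.nn.Conv2d(1, 1, 3, padding=1)  # placeholder contrast-recovery net

measured = torch.rand(1, 1, 64, 64)                 # stand-in measured data
x = measured.clone()                                # initial low-quality image

for it in range(6):
    # Gradient step on || A x - measured ||^2 (box blur is self-adjoint).
    residual = forward_model(x) - measured
    x = x - 0.5 * forward_model(residual)
    # Alternate the two pre-trained networks between iterations.
    with torch.no_grad():
        x = noise_net(x) if it % 2 == 0 else contrast_net(x)
```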
Collapse
Affiliation(s)
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
| | - Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
| |
Collapse
|