1
Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024; 43:2098-2112. [PMID: 38241121] [DOI: 10.1109/tmi.2024.3356189]
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced that serves as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
2
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
3
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392] [DOI: 10.1088/1361-6560/acf091]
Abstract
Objective: Unsupervised learning-based methods have proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and can easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach: We applied a deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input image of the first network is an anatomical image, and the input image of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results: The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep-learning, supervised learning, and unsupervised learning methods. The results showed that the proposed method was superior to the non-deep-learning and unsupervised methods, and was comparable to the supervised method. Significance: A progressive unsupervised learning method was proposed, which can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, Shaanxi, People's Republic of China
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
4
Zhu W, Lee SJ. Similarity-Driven Fine-Tuning Methods for Regularization Parameter Optimization in PET Image Reconstruction. Sensors (Basel) 2023; 23:5783. [PMID: 37447633] [DOI: 10.3390/s23135783]
Abstract
We present an adaptive method for fine-tuning hyperparameters in edge-preserving regularization for PET image reconstruction. For edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function. Although there have been efforts to develop automated methods for tuning the hyperparameters in regularized PET reconstruction, the majority of these methods focus primarily on the smoothing parameter. However, it is challenging to obtain high-quality images without appropriately selecting the control parameters that adjust the edge preservation sensitivity. In this work, we propose a method to precisely tune the hyperparameters, which are initially set to a fixed value for the entire image, either manually or using an automated approach. Our core strategy is to adaptively adjust the control parameter at each pixel, taking into account the degree of patch similarity, calculated from the previous iteration, within the neighborhood of the pixel being updated. This approach allows our new method to integrate with a wide range of existing parameter-tuning techniques for edge-preserving regularization. Experimental results demonstrate that our proposed method effectively enhances the overall reconstruction accuracy across multiple image quality metrics, including peak signal-to-noise ratio, structural similarity, visual information fidelity, mean absolute error, root-mean-square error, and mean percentage error.
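The per-pixel adjustment described in this abstract can be sketched in a few lines of NumPy. This is an illustrative reading of the idea, not the authors' formula: the patch size, the exponential similarity kernel, the neighbourhood, and the `adaptive_control_map`/`delta0` names are all assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def adaptive_control_map(img, delta0=1.0, patch=3, h=0.2):
    """Scale a global edge-control parameter delta0 per pixel.

    Patch similarity to the 8 immediate neighbours is averaged: flat
    regions (high similarity) keep delta near delta0, while pixels near
    edges get a smaller delta so that edges are better preserved.
    The exp kernel, bandwidth h, and the 0.1 floor are illustrative.
    """
    p = patch // 2
    padded = np.pad(img, p + 1, mode="reflect")
    H, W = img.shape
    wins = sliding_window_view(padded, (patch, patch))
    centre = wins[1:1 + H, 1:1 + W]               # patch around each pixel
    sim = np.zeros((H, W))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            nb = wins[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
            d2 = ((centre - nb) ** 2).mean(axis=(-1, -2))
            sim += np.exp(-d2 / h**2)
    sim /= 8.0                                    # mean similarity in (0, 1]
    return delta0 * (0.1 + 0.9 * sim)             # never shrink delta to zero
```

In a regularized reconstruction, this map would be recomputed each iteration from the previous iterate and fed into the edge-preserving penalty in place of a single global control parameter.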
Affiliation(s)
- Wen Zhu
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
- Soo-Jin Lee
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
5
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
6
Yang B, Wang X, Li A, Moody JB, Tang J. Dictionary Learning Constrained Direct Parametric Estimation in Dynamic Myocardial Perfusion PET. IEEE Trans Med Imaging 2021; 40:3485-3497. [PMID: 34125672] [DOI: 10.1109/tmi.2021.3089112]
Abstract
In myocardial perfusion imaging with dynamic positron emission tomography (PET), direct parametric reconstruction from the projection data allows accurate modeling of the Poisson noise in the projection domain to provide a more reliable estimate of the parametric images. In this study, we propose to incorporate a superior denoiser to efficiently suppress the unfavorable noise propagation during the direct reconstruction. The dictionary learning (DL) based sparse representation serves as a regularization term to constrain the intermediate K1 estimation. We rewrite the DL regularizer into a voxel-separable form to facilitate the decoupling of a DL-penalized curve fitting from the reconstruction of dynamic frames. The nonlinear fitting is then solved by a damped Newton method with uniform initialization. Using simulated and patient 82Rb dynamic PET data, we study the performance of the proposed DL direct algorithm and quantitatively compare it with the indirect method with or without post-filtering, the direct reconstruction without regularization, and the quadratic penalty regularized direct algorithm. The DL regularized direct reconstruction achieves improved noise versus bias performance in the reconstructed K1 images as well as superior recovery of a reduced myocardial blood flow defect. The dictionary learned from a 3D self-created hollow sphere image yields results comparable to those using the dictionary learned from the corresponding magnetic resonance image. The uniform initializations converge to K1 estimations similar to those obtained by initializing with the indirect reconstruction. To summarize, we demonstrate the potential of the proposed DL constrained direct parametric reconstruction in improving quantitative dynamic PET imaging.
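The damped Newton step mentioned for the penalized curve fitting can be illustrated on a toy problem. This sketch uses a Gauss-Newton Hessian and simple step-halving on a mono-exponential least-squares fit; the paper's kinetic model, Poisson likelihood, and DL penalty are not reproduced, and all names here are illustrative.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=50):
    """Damped Newton minimisation: take the Newton step and halve it
    until the objective decreases (simple backtracking).  Illustrates
    the damping idea only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(x), g)
        t = 1.0
        while f(x - t * step) > f(x) and t > 1e-6:
            t *= 0.5          # damp the step until it decreases the objective
        x = x - t * step
    return x

# Toy least-squares fit of a mono-exponential washout y(t) = K1*exp(-k2*t),
# a stand-in for the paper's kinetic model; K1 and k2 names are illustrative.
tt = np.linspace(0.1, 5.0, 30)
p_true = np.array([2.0, 0.7])                    # [K1, k2]
y_meas = p_true[0] * np.exp(-p_true[1] * tt)

def resid(p):
    return p[0] * np.exp(-p[1] * tt) - y_meas

def f(p):
    return 0.5 * np.sum(resid(p) ** 2)

def jac(p):
    return np.stack([np.exp(-p[1] * tt),
                     -p[0] * tt * np.exp(-p[1] * tt)], axis=1)

def grad(p):
    return jac(p).T @ resid(p)

def hess(p):                                     # Gauss-Newton approximation
    J = jac(p)
    return J.T @ J + 1e-9 * np.eye(2)

p_fit = damped_newton(f, grad, hess, x0=[1.0, 1.0])   # uniform initialisation
```

The damping guards against overshooting when the quadratic model is poor far from the solution, which is the practical reason such fitting loops use it.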
7
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129] [DOI: 10.1016/j.cpet.2021.06.004]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, there are still some limitations that compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to loss of count rate; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the low signal-to-noise ratio is caused by the scan-time limit (e.g., dynamic scans) and dose concerns. The early PET reconstruction methods are analytical approaches based on an idealized mathematical model.
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
8
Arridge SR, Ehrhardt MJ, Thielemans K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. Philos Trans A Math Phys Eng Sci 2021; 379:20200205. [PMID: 33966461] [DOI: 10.1098/rsta.2020.0205]
Abstract
Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles, probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Affiliation(s)
- Simon R Arridge
- Department of Computer Science, University College London, London, UK
- Matthias J Ehrhardt
- Department of Mathematical Sciences, University of Bath, Bath, UK
- Institute for Mathematical Innovation, University of Bath, Bath, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
9
Abstract
The significant statistical noise and limited spatial resolution of positron emission tomography (PET) data in sinogram space result in the degradation of the quality and accuracy of reconstructed images. Although high-dose radiotracers and long acquisition times improve PET image quality, patients' radiation exposure increases and patients are more likely to move during the PET scan. Recently, various data-driven techniques based on supervised deep neural network learning have made remarkable progress in reducing noise in images. However, these conventional techniques require clean target images that are of limited availability for PET denoising. Therefore, in this study, we utilized the Noise2Noise framework, which requires only noisy image pairs for network training, to reduce the noise in PET images. A trainable wavelet transform was proposed to improve the performance of the network. The proposed network was fed wavelet-decomposed images consisting of low- and high-pass components. The inverse wavelet transforms of the network output produced denoised images. The proposed Noise2Noise filter with wavelet transforms outperforms the original Noise2Noise method in the suppression of artefacts and preservation of abnormal uptakes. The quantitative analysis of the simulated PET uptake confirms the improved performance of the proposed method compared with the original Noise2Noise technique. In the clinical data, 10 s images filtered with Noise2Noise are virtually equivalent to 300 s images filtered with a 6 mm Gaussian filter. The incorporation of wavelet transforms in Noise2Noise network training results in the improvement of the image contrast. In conclusion, the performance of Noise2Noise filtering for PET images was improved by incorporating the trainable wavelet transform in the self-supervised deep learning framework.
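The decomposition into low- and high-pass components that the network consumes can be sketched with a fixed single-level Haar transform. Note the paper makes the wavelet filters trainable; the fixed Haar filter below is purely illustrative of how an image splits into sub-bands and is recovered exactly by the inverse transform.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform: returns the low-pass band (LL)
    and the three high-pass bands (LH, HL, HH).  A fixed stand-in for
    a trainable wavelet transform; assumes even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_decompose."""
    H, W = ll.shape
    a = np.empty((H, 2 * W)); d = np.empty((H, 2 * W))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * H, 2 * W))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img
```

In the Noise2Noise setting described here, the network would see these sub-bands as input channels, the loss would compare two noisy decompositions of the same scan, and the inverse transform of the network output would yield the denoised image.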
10
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744] [PMCID: PMC8107336] [DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), with or without anatomical information [CT or magnetic resonance imaging (MRI)]. This review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
11
Wang X, Zhou L, Wang Y, Jiang H, Ye H. Improved low-dose positron emission tomography image reconstruction using deep learned prior. Phys Med Biol 2021; 66. [PMID: 33882466] [DOI: 10.1088/1361-6560/abfa36]
Abstract
Positron emission tomography (PET) is a promising medical imaging technology that provides non-invasive and quantitative measurement of biochemical processes in the human body. PET image reconstruction is challenging due to the ill-posedness of the inverse problem. With the lower statistics caused by the limited number of detected photons, low-dose PET imaging leads to noisy reconstructed images with substantial quality degradation. Recently, deep neural networks (DNNs) have been widely used in computer vision tasks and have attracted growing interest in medical imaging. In this paper, we proposed a maximum a posteriori (MAP) reconstruction algorithm incorporating a convolutional neural network (CNN) representation in the formation of the prior. Rather than using the CNN in post-processing, we embedded the neural network in the reconstruction framework for image representation. Using simulated data, we first quantitatively evaluated our proposed method in terms of the noise-bias tradeoff, and compared it with the filtered maximum likelihood (ML), conventional MAP, and CNN post-processing methods. In addition to the simulation experiments, the proposed method was further quantitatively validated on acquired patient brain and body data in terms of the tradeoff between noise and contrast. The results demonstrated that the proposed CNN-MAP method improved the noise-bias tradeoff compared with the filtered ML, conventional MAP, and CNN post-processing methods in the simulation study. For the patient study, the CNN-MAP method achieved a better noise-contrast tradeoff than the other three methods. The quantitative enhancements indicate the potential value of the proposed CNN-MAP method in low-dose PET imaging.
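The general idea of forming a MAP prior from a denoised representation of the image can be sketched, under assumptions, with a one-step-late (OSL) MAP-EM loop in which a simple smoother stands in for the CNN. Note the paper embeds the network inside the reconstruction rather than using the OSL shortcut shown here; the function names and parameters below are illustrative.

```python
import numpy as np

def map_em_osl(y, A, beta=0.1, n_iter=100, denoise=None):
    """One-step-late MAP-EM for Poisson data y ~ Poisson(A @ x).

    The prior gradient (x - denoise(x)) pulls the image toward a
    denoised version of itself.  A fixed 3-tap smoother stands in for
    a learned CNN representation; beta balances data fidelity against
    the prior.
    """
    if denoise is None:
        def denoise(x):
            return np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    x = np.ones(A.shape[1])                       # uniform initial image
    sens = A.sum(axis=0)                          # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        grad_prior = x - denoise(x)               # prior gradient at old x
        x = x * (A.T @ ratio) / np.maximum(sens + beta * grad_prior, 1e-12)
    return x
```

With `beta = 0` the loop reduces to plain MLEM; increasing `beta` trades data fidelity for agreement with the denoised image, which is the noise-bias tradeoff the abstract evaluates.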
Affiliation(s)
- Xinhui Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Zhejiang MinFound Intelligent Healthcare Technology Co., Ltd., Hangzhou, People's Republic of China
- Long Zhou
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Zhejiang MinFound Intelligent Healthcare Technology Co., Ltd., Hangzhou, People's Republic of China
- Yaofa Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Haochuan Jiang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Hongwei Ye
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Zhejiang MinFound Intelligent Healthcare Technology Co., Ltd., Hangzhou, People's Republic of China
12
da Costa-Luis CO, Reader AJ. Micro-Networks for Robust MR-Guided Low Count PET Imaging. IEEE Trans Radiat Plasma Med Sci 2021; 5:202-212. [PMID: 33681546] [PMCID: PMC7931458] [DOI: 10.1109/trpms.2020.2986414]
Abstract
Noise suppression is particularly important in low count positron emission tomography (PET) imaging. Post-smoothing (PS) and regularization methods which aim to reduce noise also tend to reduce resolution and introduce bias. Alternatively, anatomical information from another modality such as magnetic resonance (MR) imaging can be used to improve image quality. Convolutional neural networks (CNNs) are particularly well suited to such joint image processing, but usually require large amounts of training data and have mostly been applied outside the field of medical imaging or focus on classification and segmentation, leaving PET image quality improvement relatively understudied. This article proposes the use of a relatively low-complexity CNN (micro-net) as a post-reconstruction MR-guided image processing step to reduce noise and reconstruction artefacts while also improving resolution in low count PET scans. The CNN is designed to be fully 3-D, robust to very limited amounts of training data, and to accept multiple inputs (including competitive denoising methods). Application of the proposed CNN on simulated low (30 M) count data (trained to produce standard (300 M) count reconstructions) results in a 36% lower normalized root mean squared error (NRMSE, calculated over ten realizations against the ground truth) compared to maximum-likelihood expectation maximization (MLEM) used in clinical practice. In contrast, a decrease of only 25% in NRMSE is obtained when an optimized (using knowledge of the ground truth) PS is performed. A 26% NRMSE decrease is obtained with both RM and optimized PS. Similar improvement is also observed for low count real patient datasets. Overfitting to training data is demonstrated to occur as the network size is increased. In an extreme case, a U-net (which produces better predictions for training data) is shown to completely fail on test data due to overfitting to this case of very limited training data. Meanwhile, the resultant images from the proposed CNN (which has low training data requirements) have lower noise, reduced ringing, and partial volume effects, as well as sharper edges and improved resolution compared to conventional MLEM.
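The NRMSE figures quoted above compare reconstructions against a ground truth over multiple realizations. A small helper shows the metric; normalization conventions vary across papers (mean, dynamic range, or root-mean-square of the truth), so the exact convention of the cited work is not assumed here.

```python
import numpy as np

def nrmse(recon, truth, norm="mean"):
    """Normalised root-mean-squared error against a ground-truth image.

    `norm` selects the normalisation: the mean, the dynamic range, or
    the root-mean-square of the truth.  The choice of normalisation is
    an assumption; conventions differ between papers.
    """
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    scale = {"mean": truth.mean(),
             "range": truth.max() - truth.min(),
             "l2": np.sqrt(np.mean(truth ** 2))}[norm]
    return rmse / scale
```

Averaging this metric over several noise realizations, as the abstract describes, separates systematic error from the particular noise draw of a single reconstruction.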
Affiliation(s)
- Casper O. da Costa-Luis
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London SE1 7EH, UK
- Andrew J. Reader
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St Thomas' Hospital, King's College London, London SE1 7EH, UK
13
Gao J, Liu Q, Zhou C, Zhang W, Wan Q, Hu C, Gu Z, Liang D, Liu X, Yang Y, Zheng H, Hu Z, Zhang N. An improved patch-based regularization method for PET image reconstruction. Quant Imaging Med Surg 2021; 11:556-570. [PMID: 33532256] [DOI: 10.21037/qims-20-19]
Abstract
Background: Statistical reconstruction methods based on penalized maximum likelihood (PML) are being increasingly used in positron emission tomography (PET) imaging to reduce noise and improve image quality. Wang and Qi proposed a patch-based edge-preserving penalties algorithm that can be implemented in three simple steps: a maximum-likelihood expectation-maximization (MLEM) image update, an image smoothing step, and a pixel-by-pixel image fusion step. The pixel-by-pixel image fusion step, which fuses the MLEM updated image and the smoothed image, involves a trade-off between preserving the fine structural features of an image and suppressing noise. Particularly when reconstructing images from low-count data, this step cannot preserve fine structural features in detail. To better preserve these features and accelerate the algorithm convergence, we proposed to improve the patch-based regularization reconstruction method. Methods: Our improved method involved adding a total variation (TV) regularization step following the MLEM image update in the patch-based algorithm. A feature refinement (FR) step was then used to extract the lost fine structural features from the residual image between the TV regularized image and the fused image based on patch regularization. These structural features were then added back to the fused image. With the addition of these steps, each iteration of the image should gain more structural information. A brain phantom simulation experiment and a mouse study were conducted to evaluate our proposed improved method. A brain phantom simulation with added noise was used to determine the feasibility of the proposed algorithm and its acceleration of convergence. Data obtained from the mouse study were divided into event count sets to validate the performance of the proposed algorithm when reconstructing images from low-count data. Five criteria were used for quantitative evaluation: signal-to-noise ratio (SNR), covariance (COV), contrast recovery coefficient (CRC), regional relative bias, and relative variance. Results: The bias and variance of the phantom brain image reconstructed using the patch-based method were 0.421 and 5.035, respectively, and this process took 83.637 seconds. The bias and variance of the image reconstructed by the proposed improved method, however, were 0.396 and 4.568, respectively, and this process took 41.851 seconds. This demonstrates that the proposed algorithm accelerated the reconstruction convergence. The CRC of the phantom brain image reconstructed using the patch-based method was iterated 20 times and reached 0.284, compared with the proposed method, which reached 0.446. When using a count of 5,000 K data obtained from the mouse study, both the patch-based method and the proposed method reconstructed images similar to the ground truth image. The intensity of the ground truth image was 88.3, and it was located in the 102nd row and the 116th column. However, when the count was reduced to below 40 K, image quality was significantly reduced with the patch-based method. This effect was not observed with the proposed method. When a count of 40 K was used, the image intensity was 58.79 when iterated 100 times by the patch-based method, and it was located in the 102nd row and the 116th column, while the intensity when iterated 50 times by the proposed method was 63.83. This suggests that the proposed method improves image reconstruction from low-count data. Conclusions: This improved method of PET image reconstruction could potentially improve the quality of PET images faster than other methods and also produce better reconstructions from low-count data.
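The three-step loop attributed to Wang and Qi (MLEM update, smoothing, pixel-by-pixel fusion) can be sketched in 1-D NumPy. This is a reduced illustration: a constant fusion weight `w` replaces the paper's pixel-wise fusion rule, a 3-tap filter replaces patch-based edge-preserving smoothing, and the proposed TV and feature-refinement steps are omitted.

```python
import numpy as np

def mlem_update(x, y, A):
    """One MLEM step for Poisson data y ~ Poisson(A @ x)."""
    ratio = y / np.maximum(A @ x, 1e-12)
    return x * (A.T @ ratio) / np.maximum(A.sum(axis=0), 1e-12)

def smooth(x):
    """Stand-in for the patch-based smoothing step (a simple 3-tap
    filter here; the paper uses patch-based edge-preserving smoothing)."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def patch_regularised_recon(y, A, w=0.7, n_iter=50):
    """Three-step loop from the abstract: (1) MLEM image update,
    (2) smoothing, (3) pixel-by-pixel fusion of the two images."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        x_em = mlem_update(x, y, A)       # step 1: MLEM update
        x_sm = smooth(x_em)               # step 2: smoothing
        x = w * x_em + (1 - w) * x_sm     # step 3: fusion
    return x
```

The proposed improvement in the abstract would slot a TV regularization step after `mlem_update` and add a feature-refinement step that restores fine structure from the residual between the TV image and the fused image.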
Affiliation(s)
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Chenxi Hu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zheng Gu
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
14
Xue H, Zhang Q, Zou S, Zhang W, Zhou C, Tie C, Wan Q, Teng Y, Li Y, Liang D, Liu X, Yang Y, Zheng H, Zhu X, Hu Z. LCPR-Net: low-count PET image reconstruction using the domain transform and cycle-consistent generative adversarial networks. Quant Imaging Med Surg 2021; 11:749-762. [PMID: 33532274 PMCID: PMC7779905 DOI: 10.21037/qims-20-66] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2020] [Accepted: 09/25/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND Reducing the radiotracer dose and scanning time in positron emission tomography (PET) imaging can reduce the cost of the tracer, reduce motion artifacts, and increase the efficiency of the scanner. However, it also causes the reconstructed images to be noisy, so reconstructing high-quality images from low-count (LC) data is very important. We therefore propose a deep learning method called LCPR-Net for directly reconstructing full-count (FC) PET images from the corresponding LC sinogram data. METHODS Based on the framework of a generative adversarial network (GAN), we enforce a cyclic consistency constraint on the least-squares loss to establish a nonlinear end-to-end mapping from LC sinograms to FC images. In this process, we merge a convolutional neural network (CNN) and a residual network for feature extraction and image reconstruction. In addition, a domain transform (DT) operation supplies a priori information to the cycle-consistent GAN (CycleGAN) network, avoiding the need for a large amount of computational resources to learn this transformation. RESULTS The main advantages of this method are as follows. First, the network can take LC sinogram data as input and directly reconstruct an FC PET image, with a reconstruction speed faster than that of model-based iterative reconstruction. Second, reconstruction based on the CycleGAN framework improves the quality of the reconstructed image. CONCLUSIONS Quantitative and qualitative evaluations show that, compared with other state-of-the-art methods, the proposed method is accurate and effective for FC PET image reconstruction.
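A minimal numpy sketch of the loss terms described above (a least-squares adversarial loss plus a cycle-consistency constraint). All tensors here are random stand-ins for the CNN generator/discriminator outputs, and the weight of 10 on the cycle term is a hypothetical choice, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in tensors for network outputs; in LCPR-Net the
# generators are CNN/residual networks mapping LC sinograms to FC images.
x_lc = rng.uniform(size=(8, 8))     # low-count input (after domain transform)
g_x = rng.uniform(size=(8, 8))      # G(x_lc): reconstructed FC image
f_g_x = rng.uniform(size=(8, 8))    # F(G(x_lc)): mapped back to the LC domain
d_g_x = rng.uniform(size=(8, 8))    # discriminator score map on G(x_lc)

# Least-squares GAN loss for the generator (real label = 1)
adv_loss = np.mean((d_g_x - 1.0) ** 2)
# Cycle-consistency constraint: F(G(x)) should recover x (L1 penalty)
cyc_loss = np.mean(np.abs(f_g_x - x_lc))
total_loss = adv_loss + 10.0 * cyc_loss   # lambda = 10 is a hypothetical weight
```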
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Sijuan Zou
- Department of Nuclear Medicine and PET, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Changjun Tie
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yongchang Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiaohua Zhu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
15
Xie N, Gong K, Guo N, Qin Z, Wu Z, Liu H, Li Q. Penalized-likelihood PET Image Reconstruction Using 3D Structural Convolutional Sparse Coding. IEEE Trans Biomed Eng 2020; 69:4-14. [PMID: 33284746 DOI: 10.1109/tbme.2020.3042907] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Positron emission tomography (PET) is widely used for clinical diagnosis. Because PET suffers from low resolution and high noise, numerous efforts have tried to incorporate anatomical priors into PET image reconstruction, especially with the development of hybrid PET/CT and PET/MRI systems. In this work, we propose a cube-based 3D structural convolutional sparse coding (CSC) approach for penalized-likelihood PET image reconstruction, named 3D PET-CSC. The proposed 3D PET-CSC takes advantage of the convolution operation and incorporates anatomical priors without the need for registration or supervised training. Because 3D PET-CSC codes the whole 3D PET image instead of patches, it alleviates the staircase artifacts commonly present in traditional patch-based sparse coding methods. Compared with traditional coding methods in the Fourier domain, the proposed method extends 3D CSC to a straightforward approach based on the pursuit of localized cubes. Moreover, we developed residual-image and ordered-subset mechanisms to further reduce the computational cost and accelerate the convergence of the proposed 3D PET-CSC method. Experiments based on computer simulations and clinical datasets demonstrate the superiority of 3D PET-CSC over other reference methods.
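The convolutional synthesis model underlying CSC can be sketched in 1-D (toy sizes and random filters, for illustration only; 3D PET-CSC operates on localized cubes of the full 3-D volume):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D convolutional sparse coding synthesis model.
n, k, m = 128, 9, 4                         # signal length, filter size, count
filters = rng.standard_normal((m, k))       # dictionary filters d_i
codes = np.zeros((m, n))                    # sparse coefficient maps z_i
codes[rng.integers(m, size=10), rng.integers(n, size=10)] = 1.0

# x = sum_i d_i * z_i: each convolution spans the whole signal, so there is
# no patch aggregation and hence no patch-seam (staircase) artifact.
x = sum(np.convolve(codes[i], filters[i], mode="same") for i in range(m))
```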
16
Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020; 18:1536012119869070. [PMID: 31429375 PMCID: PMC6702769 DOI: 10.1177/1536012119869070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Machine learning (ML) algorithms have found increasing utility in medical imaging, and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential of such techniques to streamline decision support for physicians by providing early and accurate diagnosis and enabling personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Affiliation(s)
- Ian R Duffy
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Amanda J Boyle
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Neil Vasdev
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
17
Zhang W, Gao J, Yang Y, Liang D, Liu X, Zheng H, Hu Z. Image reconstruction for positron emission tomography based on patch-based regularization and dictionary learning. Med Phys 2019; 46:5014-5026. [PMID: 31494950 PMCID: PMC6899708 DOI: 10.1002/mp.13804] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2018] [Revised: 07/18/2019] [Accepted: 08/18/2019] [Indexed: 12/31/2022] Open
Abstract
PURPOSE Positron emission tomography (PET) is an important tool for nuclear medical imaging. It has been widely used in clinical diagnosis, scientific research, and drug testing. PET is a kind of emission computed tomography; its basic imaging principle is to use the positron annihilation radiation generated by radionuclide decay to produce gamma photon images. In practical applications, however, the low gamma photon counting rate, limited acquisition time, inconsistent detector characteristics, and electronic noise mean that measured PET projection data often contain considerable noise, which makes the reconstruction problem ill-conditioned. Therefore, determining how to obtain high-quality reconstructed PET images suitable for clinical applications is a valuable research topic. In this context, this paper presents an image reconstruction algorithm based on patch-based regularization and dictionary learning (DL), called the patch-DL algorithm. Compared with other algorithms, the proposed algorithm retains more image details while suppressing noise. METHODS Expectation-maximization (EM)-like image updating, image smoothing, pixel-by-pixel image fusion, and DL are the four steps of the proposed reconstruction algorithm. We used a two-dimensional (2D) brain phantom to evaluate the proposed algorithm by simulating sinograms that contained random Poisson noise. We also quantitatively compared the patch-DL algorithm with a pixel-based algorithm, a patch-based algorithm, and an adaptive dictionary learning (AD) algorithm. RESULTS Through computer simulations, we demonstrated the advantages of the patch-DL method over the pixel-, patch-, and AD-based methods in terms of the tradeoff between noise suppression and detail retention in reconstructed images.
Quantitative analysis shows that the proposed method performs statistically better [according to the mean absolute error (MAE), correlation coefficient (CORR), and root mean square error (RMSE)] in the considered regions of interest (ROIs) at two simulated count levels. Additionally, to determine whether the differences among these methods are significant, we used one-way analysis of variance (ANOVA) to calculate the corresponding P values: most were P < 0.01, and the remainder were between 0.01 and 0.05. Therefore, our method achieves better quantitative performance than traditional methods. CONCLUSIONS The results show that the proposed algorithm has the potential to improve the quality of PET image reconstruction. Since the proposed algorithm was validated only with simulated 2D data, it still needs to be validated with real three-dimensional data. In the future, we intend to explore GPU parallelization to further improve computational efficiency and shorten the computation time.
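The MAE, CORR, and RMSE criteria used above can be written out directly (standard definitions, not the authors' code):

```python
import numpy as np

def mae(x, ref):
    """Mean absolute error between a reconstruction and a reference."""
    return np.mean(np.abs(x - ref))

def rmse(x, ref):
    """Root mean square error."""
    return np.sqrt(np.mean((x - ref) ** 2))

def corr(x, ref):
    """Pearson correlation coefficient over flattened images."""
    return np.corrcoef(x.ravel(), ref.ravel())[0, 1]

# Tiny illustrative example with made-up values
ref = np.array([1.0, 2.0, 3.0, 4.0])
rec = np.array([1.1, 1.9, 3.2, 3.8])
```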
Affiliation(s)
- Wanhong Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
18

19
Gong K, Guan J, Kim K, Zhang X, Yang J, Seo Y, El Fakhri G, Qi J, Li Q. Iterative PET Image Reconstruction Using Convolutional Neural Network Representation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:675-685. [PMID: 30222554 PMCID: PMC6472985 DOI: 10.1109/tmi.2018.2869871] [Citation(s) in RCA: 112] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely and successfully used in computer vision tasks and have attracted growing interest in medical imaging. In this paper, we train a deep residual convolutional neural network to improve PET image quality using existing inter-patient information. An innovative feature of the proposed method is that the neural network is embedded in the iterative reconstruction framework for image representation, rather than used as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Both simulated data and hybrid real data are used to evaluate the proposed method. Quantification results show that the proposed iterative neural network method outperforms both neural network denoising and conventional penalized maximum likelihood methods.
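The constrained formulation can be sketched with a generic ADMM loop. In this schematic, a quadratic data term stands in for the Poisson log-likelihood and a fixed linear map stands in for the trained network representation, so everything here is a hypothetical illustration of the splitting, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic ADMM for min ||A x - y||^2  s.t.  x = f(alpha).
n = 16
A = rng.uniform(size=(24, n))                 # stand-in "projector"
y = A @ rng.uniform(0.5, 2.0, size=n)         # noiseless toy data
W = rng.uniform(size=(n, n))                  # linear stand-in for the network
f = lambda a: W @ a

x = np.ones(n); alpha = np.ones(n); mu = np.zeros(n); rho = 1.0
for _ in range(50):
    # x-update: penalized least squares pulling x toward f(alpha) - mu
    x = np.linalg.solve(A.T @ A + rho * np.eye(n),
                        A.T @ y + rho * (f(alpha) - mu))
    # alpha-update: fit the "network" output to x + mu
    alpha = np.linalg.lstsq(W, x + mu, rcond=None)[0]
    # dual update on the constraint x = f(alpha)
    mu = mu + x - f(alpha)
```

Because the stand-in network is an invertible linear map, the constraint is satisfied essentially exactly after each alpha-update; with a real nonlinear network, the alpha-update itself requires iterative (gradient-based) fitting.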
Affiliation(s)
- Kuang Gong
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA, and Department of Biomedical Engineering, University of California, Davis, CA 95616 USA
- Jiahui Guan
- Department of Statistics, University of California, Davis, CA 95616 USA
- Kyungsang Kim
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA 95616 USA
- Jaewon Yang
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Youngho Seo
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA 95616 USA
- Quanzheng Li
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
20
Gao J, Zhang Q, Liu Q, Zhang X, Zhang M, Yang Y, Liang D, Liu X, Zheng H, Hu Z. Positron emission tomography image reconstruction using feature extraction. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2019; 27:949-963. [PMID: 31381539 DOI: 10.3233/xst-190527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE To reduce the cost of positron emission tomography (PET) scanning systems, image reconstruction algorithms for low-sampled data have been extensively studied. However, the current method, based on total variation (TV) minimization regularization nested in the maximum likelihood-expectation maximization (MLEM) algorithm, cannot distinguish true structures from noise, resulting in the loss of some fine features in the images. This work therefore aims to recover the fine features lost by the MLEM-TV algorithm on low-sampled data. METHOD A feature refinement (FR) approach previously developed for statistical interior computed tomography (CT) reconstruction is applied to PET imaging to recover fine features. The proposed method starts with a constant initial image, and the FR step is performed after each MLEM-TV iteration to extract the desired structural information lost during TV minimization. A feature descriptor is specifically designed to distinguish structure from noise and artifacts, and a modified steepest descent method is adopted to minimize the objective function. After evaluating the impact of different patch sizes on the outcome, an optimal patch size of 7×7 is selected to balance structure-detection ability and computational efficiency. RESULTS Applied to simulated brain PET imaging using an emission activity phantom, a standard Shepp-Logan phantom, and a mouse dataset, the MLEM-TV-FR algorithm increases the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with the conventional MLEM-TV algorithm, while substantially reducing the number of samples required, which improves computational efficiency.
CONCLUSIONS The presented algorithm achieves image quality superior to that of the MLEM and MLEM-TV approaches in terms of preserving fine structure and suppressing undesired artifacts and noise, indicating its useful potential for low-sampled data in PET imaging.
Affiliation(s)
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Mengxi Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
21
Yang B, Ying L, Tang J. Artificial Neural Network Enhanced Bayesian PET Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1297-1309. [PMID: 29870360 PMCID: PMC6132251 DOI: 10.1109/tmi.2018.2803681] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
In positron emission tomography (PET) image reconstruction, the Bayesian framework with various regularization terms has been used to constrain the radiotracer distribution. Varying the regularizing weight of a maximum a posteriori (MAP) algorithm specifies a lower bound on the tradeoff between variance and spatial resolution measured from the reconstructed images. The purpose of this paper is to build a patch-based image-enhancement scheme that reduces the size of the unachievable region below this bound and thus quantitatively improves Bayesian PET imaging. We cast the proposed enhancement as a regression problem that models a highly nonlinear and spatially varying mapping between reconstructed image patches and an enhanced image patch. An artificial neural network, a multilayer perceptron (MLP) trained with backpropagation, is used to solve this regression problem by learning from examples. Using the BrainWeb phantoms, we simulated brain PET data at different count levels for different subjects with and without lesions. The MLP was trained using image patches reconstructed with a MAP algorithm at different regularization parameters for one normal subject at a certain count level. To evaluate the trained MLP, reconstructed images from other simulations and two patient brain PET imaging datasets were processed. In every test case, we demonstrate that the MLP enhancement technique improves the noise/bias tradeoff compared with MAP reconstruction using different regularizing weights, thus decreasing the size of the unachievable region defined by the MAP algorithm in the variance/resolution plane.
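The patch-wise regression can be sketched with a tiny two-layer MLP. The weights below are random and untrained (purely hypothetical), whereas the paper trains such an MLP by backpropagation on patches from MAP reconstructions at several regularizing weights:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 5                                         # patch side length
n_in = 3 * p * p                              # 3 stacked MAP patches per input
W1 = rng.standard_normal((32, n_in)) * 0.1    # hidden-layer weights (untrained)
b1 = np.zeros(32)
W2 = rng.standard_normal((p * p, 32)) * 0.1   # output-layer weights (untrained)
b2 = np.zeros(p * p)

def enhance(patches):
    """Map stacked MAP-reconstructed patches to one enhanced patch."""
    h = np.maximum(W1 @ patches.ravel() + b1, 0.0)   # hidden layer with ReLU
    return (W2 @ h + b2).reshape(p, p)

# Patches from three MAP reconstructions with different regularizing weights
maps = rng.uniform(size=(3, p, p))
out = enhance(maps)
```

In practice the enhancement is applied patch by patch over the whole image and the overlapping outputs are averaged.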
Affiliation(s)
- Bao Yang
- Department of Electrical and Computer Engineering, Oakland University, Rochester, MI, USA
- Leslie Ying
- Departments of Biomedical Engineering and Electrical Engineering, The State University of New York at Buffalo, Buffalo, NY, USA
22
Gong K, Cheng-Liao J, Wang G, Chen KT, Catana C, Qi J. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:955-965. [PMID: 29610074 PMCID: PMC5933939 DOI: 10.1109/tmi.2017.2776324] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, PET image quality can be improved by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combining the PET and MRI information in kernel learning to address potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method combining PET temporal information and MRI spatial information adaptively based on the structural similarity index performs best in terms of noise reduction and resolution improvement.
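The kernelized representation at the core of this line of work can be sketched as follows. The feature vectors and kernel width are hypothetical, and the paper's adaptive combination of PET temporal and MRI information via structural similarity is not reproduced here; this only shows the basic x = K @ alpha construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-voxel feature vectors (e.g. MRI-derived patch features)
n, d = 20, 3
feats = rng.standard_normal((n, d))

# Gaussian kernel matrix over all voxel pairs, then row-normalized
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2.0 * 1.0**2))
K /= K.sum(axis=1, keepdims=True)

# Kernelized image representation: the reconstruction estimates the
# coefficients alpha, and the image is x = K @ alpha
alpha = rng.uniform(size=n)
x = K @ alpha
```

In a full reconstruction, K would be sparse (built from nearest neighbors) and the forward model would act on K @ alpha rather than on x directly.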