1
Chen X, Zhou B, Xie H, Miao T, Liu H, Holler W, Lin M, Miller EJ, Carson RE, Sinusas AJ, Liu C. DuDoSS: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac SPECT. Med Phys 2023; 50:89-103. [PMID: 36048541] [PMCID: PMC9868054] [DOI: 10.1002/mp.15958]
Abstract
PURPOSE Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, the long scanning procedures and acquisition time might induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles provides a solution that results in a shorter scanning time. However, fewer projection angles might cause lower reconstruction accuracy, higher noise levels, and reconstruction artifacts due to reduced angular sampling. We developed a deep-learning-based approach for high-quality SPECT image reconstruction using sparsely sampled projections. METHODS We proposed a novel deep-learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS utilized the SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections were then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluations in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep-learning-based approaches, including direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested in this study for comparison. The dataset used in this study included a total of 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following the injection of 99mTc-tetrofosmin. RESULTS Our proposed DuDoSS generated synthetic projections and SPECT images more consistent with the ground truth than the other approaches did.
The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, as compared to 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The average voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 1.63% ± 0.72%, as compared to 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The average segment-wise APE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 3.87% ± 3.23%, as compared to 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS Our proposed DuDoSS can generate accurate synthetic full-view projections from sparsely sampled projections for cardiac SPECT. The synthetic projections and reconstructed SPECT images generated by DuDoSS are more consistent with the ground-truth full-view projections and SPECT images than those of the other approaches. DuDoSS can potentially enable fast data acquisition of cardiac SPECT.
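The dual-domain idea described in the abstract can be illustrated with a toy numerical sketch (purely illustrative: the sinogram, the guidance prediction, and all sizes here are made-up stand-ins, not the authors' network or data). Measured angles are kept as-is, and a stand-in for the forward-projected image-domain prediction supplies the missing angles:

```python
import numpy as np

rng = np.random.default_rng(0)

n_angles, n_bins = 60, 32              # full-view sinogram: 60 angles, as in the paper
full = rng.random((n_angles, n_bins))  # stand-in for the ground-truth full-view sinogram
keep = np.arange(0, n_angles, 4)       # sparse sampling: keep every 4th angle

sparse = np.zeros_like(full)
sparse[keep] = full[keep]

# Stand-in for the forward projection of the image-domain prediction;
# here faked as the ground truth plus noise.
guidance = full + 0.05 * rng.standard_normal(full.shape)

# Dual-domain synthesis, conceptually: measured angles are kept as-is,
# missing angles are taken from the guidance sinogram.
synthetic = guidance.copy()
synthetic[keep] = sparse[keep]

nmse = np.sum((synthetic - full) ** 2) / np.sum(full ** 2)
```

In the actual method, the guidance sinogram comes from forward-projecting a network-predicted SPECT image, and the merged sinogram is further refined by a second network in the sinogram domain.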
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Tianshun Miao
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Hui Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Visage Imaging, Inc., San Diego, California, United States, 92130
- Edward J. Miller
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Richard E. Carson
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Albert J. Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
2
Huang Y, Zhu H, Duan X, Hong X, Sun H, Lv W, Lu L, Feng Q. GapFill-Recon Net: A Cascade Network for simultaneously PET Gap Filling and Image Reconstruction. Comput Methods Programs Biomed 2021; 208:106271. [PMID: 34274612] [DOI: 10.1016/j.cmpb.2021.106271]
Abstract
PET image reconstruction from incomplete data, such as when gaps between adjacent detector blocks cause partial loss of projection data, is an important and challenging problem in medical imaging. This work proposes an efficient convolutional neural network (CNN) framework, called GapFill-Recon Net, that jointly reconstructs PET images and their associated sinogram data. GapFill-Recon Net comprises two blocks: the Gap-Filling block first addresses the sinogram gaps, and the Image-Recon block then maps the filled sinogram directly onto the final image. A total of 43,660 pairs of synthetic 2D PET sinograms with gaps and images generated from the MOBY phantom are utilized for network training, testing and validation. Whole-body mouse Monte Carlo (MC) simulated data are also used for evaluation. The experimental results show that the reconstructed image quality of GapFill-Recon Net outperforms filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM) in terms of the structural similarity index metric (SSIM), relative root mean squared error (rRMSE), and peak signal-to-noise ratio (PSNR). Moreover, the reconstruction speed is equivalent to that of FBP and nearly 83 times faster than that of MLEM. In conclusion, compared with traditional reconstruction algorithms, GapFill-Recon Net achieves relatively optimal performance in image quality and reconstruction speed, effectively striking a balance between efficiency and performance.
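A minimal sketch of the fill-then-reconstruct cascade, with simple linear interpolation standing in for the Gap-Filling CNN (the sinogram shape and gap location are invented for illustration):

```python
import numpy as np

# Invented smooth toy sinogram (rows = angles, columns = radial bins).
angles = np.linspace(0, np.pi, 48, endpoint=False)
bins = np.linspace(-1.0, 1.0, 64)
sino = np.exp(-((bins[None, :] - 0.3 * np.cos(angles)[:, None]) ** 2) / 0.05)

gap = slice(30, 34)                # detector-block gap: four missing radial bins
measured = sino.copy()
measured[:, gap] = 0.0

known = np.ones(bins.size, dtype=bool)
known[gap] = False

# Stage 1 ("Gap-Filling" stand-in): row-wise 1-D linear interpolation across
# the gap; the paper uses a CNN here instead.
filled = measured.copy()
for i in range(filled.shape[0]):
    filled[i, ~known] = np.interp(bins[~known], bins[known], filled[i, known])

# Stage 2 ("Image-Recon" stand-in) would map `filled` to an image, e.g. via FBP.
gap_err = np.max(np.abs(filled[:, gap] - sino[:, gap]))
```

The point of the cascade is that stage 2 sees a gap-free sinogram, so the reconstruction block does not have to compensate for missing data itself.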
Affiliation(s)
- Yanchao Huang
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China; Nanfang PET Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong 510515, China
- Huobiao Zhu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Xiaoman Duan
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada
- Xiaotong Hong
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Hao Sun
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Wenbing Lv
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
3
Gao J, Liu Q, Zhou C, Zhang W, Wan Q, Hu C, Gu Z, Liang D, Liu X, Yang Y, Zheng H, Hu Z, Zhang N. An improved patch-based regularization method for PET image reconstruction. Quant Imaging Med Surg 2021; 11:556-570. [PMID: 33532256] [DOI: 10.21037/qims-20-19]
Abstract
Background Statistical reconstruction methods based on penalized maximum likelihood (PML) are being increasingly used in positron emission tomography (PET) imaging to reduce noise and improve image quality. Wang and Qi proposed a patch-based edge-preserving penalty algorithm that can be implemented in three simple steps: a maximum-likelihood expectation-maximization (MLEM) image update, an image smoothing step, and a pixel-by-pixel image fusion step. The pixel-by-pixel image fusion step, which fuses the MLEM updated image and the smoothed image, involves a trade-off between preserving the fine structural features of an image and suppressing noise. Particularly when reconstructing images from low-count data, this step cannot preserve fine structural features in detail. To better preserve these features and accelerate the algorithm convergence, we proposed to improve the patch-based regularization reconstruction method. Methods Our improved method involved adding a total variation (TV) regularization step following the MLEM image update in the patch-based algorithm. A feature refinement (FR) step was then used to extract the lost fine structural features from the residual image between the TV-regularized image and the fused image based on patch regularization. These structural features were then added back to the fused image. With the addition of these steps, each iteration of the image should gain more structural information. A brain phantom simulation experiment and a mouse study were conducted to evaluate our proposed improved method. Brain phantom simulations with added noise were used to determine the feasibility of the proposed algorithm and its acceleration of convergence. Data obtained from the mouse study were divided into event count sets to validate the performance of the proposed algorithm when reconstructing images from low-count data.
Five criteria were used for quantitative evaluation: signal-to-noise ratio (SNR), covariance (COV), contrast recovery coefficient (CRC), regional relative bias, and relative variance. Results The bias and variance of the phantom brain image reconstructed using the patch-based method were 0.421 and 5.035, respectively, and this process took 83.637 seconds. The bias and variance of the image reconstructed by the proposed improved method, however, were 0.396 and 4.568, respectively, and this process took 41.851 seconds. This demonstrates that the proposed algorithm accelerated the reconstruction convergence. After 20 iterations, the CRC of the phantom brain image reconstructed using the patch-based method reached 0.284, compared with 0.446 for the proposed method. When 5,000 K counts of data from the mouse study were used, both the patch-based method and the proposed method reconstructed images similar to the ground truth image. The intensity of the ground truth image at the pixel in the 102nd row and 116th column was 88.3. However, when the count was reduced below 40 K, image quality was significantly degraded with the patch-based method; this effect was not observed with the proposed method. With 40 K counts, the intensity at that pixel was 58.79 after 100 iterations of the patch-based method, versus 63.83 after 50 iterations of the proposed method. This suggests that the proposed method improves image reconstruction from low-count data. Conclusions This improved method of PET image reconstruction could potentially improve the quality of PET images faster than other methods and also produce better reconstructions from low-count data.
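For reference, the MLEM image update mentioned in the Background has the standard closed form (textbook formula, not reproduced from this paper), where y_i are the measured counts, a_{ij} is the system matrix, and x_j^{(k)} is the current image estimate:

```latex
x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \, \frac{y_i}{\sum_l a_{il} \, x_l^{(k)}}
```

The TV regularization and feature refinement steps described above are inserted after this multiplicative update in each iteration.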
Affiliation(s)
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weiguang Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Chenxi Hu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zheng Gu
- Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China; Key Laboratory for Magnetic Resonance and Multimodality Imaging of Guangdong Province, Shenzhen, China
4
Xie N, Gong K, Guo N, Qin Z, Wu Z, Liu H, Li Q. Penalized-likelihood PET Image Reconstruction Using 3D Structural Convolutional Sparse Coding. IEEE Trans Biomed Eng 2020; 69:4-14. [PMID: 33284746] [DOI: 10.1109/tbme.2020.3042907]
Abstract
Positron emission tomography (PET) is widely used for clinical diagnosis. As PET suffers from low resolution and high noise, numerous efforts have been made to incorporate anatomical priors into PET image reconstruction, especially with the development of hybrid PET/CT and PET/MRI systems. In this work, we proposed a cube-based 3D structural convolutional sparse coding (CSC) concept for penalized-likelihood PET image reconstruction, named 3D PET-CSC. The proposed 3D PET-CSC takes advantage of the convolutional operation and manages to incorporate anatomical priors without the need for registration or supervised training. As 3D PET-CSC codes the whole 3D PET image, instead of patches, it alleviates the staircase artifacts commonly present in traditional patch-based sparse coding methods. Compared with traditional coding methods in the Fourier domain, the proposed method extends 3D CSC to a straightforward approach based on the pursuit of localized cubes. Moreover, we developed residual-image and ordered-subset mechanisms to further reduce the computational cost and accelerate the convergence of the proposed 3D PET-CSC method. Experiments based on computer simulations and clinical datasets demonstrate the superiority of 3D PET-CSC compared with other reference methods.
5
Liu CC, Huang HM. Partial-ring PET image restoration using a deep learning based method. Phys Med Biol 2019; 64:225014. [PMID: 31581143] [DOI: 10.1088/1361-6560/ab4aa9]
Abstract
PET scanners with partial-ring geometry have been proposed for various imaging purposes. The incomplete projection data obtained from this design cause undesirable artifacts in the reconstructed images. In this study, we investigated the performance of a deep learning (DL) based method for the recovery of partial-ring PET images. Twenty digital brain phantoms were used in the Monte Carlo simulation toolkit, SimSET, to simulate 15 min full-ring PET scans. Partial-ring PET data were generated from full-ring PET data by removing coincidence events that hit specific detector blocks. A convolutional neural network based on the residual U-Net architecture was trained to predict full-ring data from partial-ring data in either the projection or image domain. The performance of the proposed DL-based method was evaluated by comparison with the PET images reconstructed using the full-ring projection data in terms of the mean squared error (MSE), structural similarity (SSIM) index and recovery coefficient (RC). The MSE results showed the superiority of the image-domain approach, which achieved a reduction of 91.7%, in contrast to 14.3% for the projection-domain approach. Therefore, the image-domain approach was used to study the influence of the number of detector blocks removed. The SSIM results were 0.998, 0.996 and 0.993 for removals of 3, 5 and 7 detector blocks, respectively. The activity of gray and white matter could be fully recovered even with 7 detector blocks removed, while the RCs of two artificially inserted small lesions (3 pixels in diameter) in the testing data were 94%, 89% and 79% for removals of 3, 5, and 7 detector blocks, respectively. Our simulation results suggest that DL has the potential to recover partial-ring PET images.
Affiliation(s)
- Chih-Chieh Liu
- Department of Biomedical Engineering, University of California, Davis, CA 95616, United States of America
6
Zhang W, Gao J, Yang Y, Liang D, Liu X, Zheng H, Hu Z. Image reconstruction for positron emission tomography based on patch-based regularization and dictionary learning. Med Phys 2019; 46:5014-5026. [PMID: 31494950] [PMCID: PMC6899708] [DOI: 10.1002/mp.13804]
Abstract
PURPOSE Positron emission tomography (PET) is an important tool for nuclear medical imaging. It has been widely used in clinical diagnosis, scientific research, and drug testing. PET is a kind of emission computed tomography. Its basic imaging principle is to use the positron annihilation radiation generated by radionuclide decay to generate gamma photon images. However, in practical applications, due to the low gamma photon counting rate, limited acquisition time, inconsistent detector characteristics, and electronic noise, measured PET projection data often contain considerable noise, which makes the PET image reconstruction problem ill-conditioned. Therefore, determining how to obtain high-quality reconstructed PET images suitable for clinical applications is a valuable research topic. In this context, this paper presents an image reconstruction algorithm based on patch-based regularization and dictionary learning (DL) called the patch-DL algorithm. Compared to other algorithms, the proposed algorithm can retain more image details while suppressing noise. METHODS Expectation-maximization (EM)-like image updating, image smoothing, pixel-by-pixel image fusion, and DL are the four steps of the proposed reconstruction algorithm. We used a two-dimensional (2D) brain phantom to evaluate the proposed algorithm by simulating sinograms that contained random Poisson noise. We also quantitatively compared the patch-DL algorithm with a pixel-based algorithm, a patch-based algorithm, and an adaptive dictionary learning (AD) algorithm. RESULTS Through computer simulations, we demonstrated the advantages of the patch-DL method over the pixel-, patch-, and AD-based methods in terms of the tradeoff between noise suppression and detail retention in reconstructed images.
Quantitative analysis shows that the proposed method performs better statistically [according to the mean absolute error (MAE), correlation coefficient (CORR), and root mean square error (RMSE)] in the considered regions of interest (ROIs) at two simulated count levels. Additionally, to analyze whether the results among these methods have significant differences, we used one-way analysis of variance (ANOVA) to calculate the corresponding P values. Most of the P values were less than 0.01, and the remainder were between 0.01 and 0.05. Therefore, our method can achieve a better quantitative performance than those of traditional methods. CONCLUSIONS The results show that the proposed algorithm has the potential to improve the quality of PET image reconstruction. Since the proposed algorithm was validated only with simulated 2D data, it still needs to be further validated with real three-dimensional data. In the future, we intend to explore GPU parallelization technology to further improve the computational efficiency and shorten the computation time.
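The pixel-by-pixel image fusion step named among the four steps above can be sketched in a few lines (a hedged stand-in: the scalar weight `beta` and both images are illustrative; the actual patch-based penalty yields spatially varying weights):

```python
import numpy as np

rng = np.random.default_rng(1)
em_updated = rng.random((8, 8))   # stand-in for the EM-updated image
smoothed = np.full((8, 8), 0.5)   # stand-in for the smoothed image

beta = 0.3  # fusion weight, purely illustrative

# Pixel-by-pixel fusion: each pixel is a convex combination of the EM update
# and the smoothed image. A larger beta suppresses more noise but also
# smooths away more fine structure -- the trade-off the abstract describes.
fused = (1.0 - beta) * em_updated + beta * smoothed
```

The dictionary-learning step then refines `fused` by sparse coding over learned patch atoms.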
Affiliation(s)
- Wanhong Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; College of Electrical and Information Engineering, Hunan University, Changsha, 410082, China
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
7
Gao J, Zhang Q, Liu Q, Zhang X, Zhang M, Yang Y, Liang D, Liu X, Zheng H, Hu Z. Positron emission tomography image reconstruction using feature extraction. J Xray Sci Technol 2019; 27:949-963. [PMID: 31381539] [DOI: 10.3233/xst-190527]
Abstract
PURPOSE To reduce the cost of positron emission tomography (PET) scanning systems, image reconstruction algorithms for low-sampled data have been extensively studied. However, the current method based on total variation (TV) minimization regularization nested in the maximum likelihood-expectation maximization (MLEM) algorithm cannot distinguish true structures from noise, resulting in the loss of some fine features in the images. Thus, this work aims to recover fine features lost in the MLEM-TV algorithm from low-sampled data. METHODS A feature refinement (FR) approach previously developed for statistical interior computed tomography (CT) reconstruction is applied to PET imaging to recover fine features in this study. The proposed method starts with a constant initial image, and the FR step is performed after each MLEM-TV iteration to extract the desired structural information lost during TV minimization. A feature descriptor is specifically designed to distinguish structure from noise and artifacts. A modified steepest descent method is adopted to minimize the objective function. After evaluating the impacts of different patch sizes on the outcome of the presented method, an optimal patch size of 7×7 is selected in this study to balance structure-detection ability and computational efficiency. RESULTS Applying the MLEM-TV-FR algorithm to simulated brain PET imaging with an emission activity phantom, a standard Shepp-Logan phantom, and mouse data increased the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) compared with the conventional MLEM-TV algorithm, while substantially reducing the number of samples used, which improves computational efficiency.
CONCLUSIONS The presented algorithm achieves image quality superior to that of the MLEM and MLEM-TV approaches in terms of the preservation of fine structure and the suppression of undesired artifacts and noise, indicating its potential for low-sampled data in PET imaging.
Affiliation(s)
- Juan Gao
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Mengxi Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
8
Shojaeilangari S, Schmidtlein CR, Rahmim A, Ay MR. Recovery of missing data in partial geometry PET scanners: Compensation in projection space vs image space. Med Phys 2018; 45:5437-5449. [PMID: 30288762] [DOI: 10.1002/mp.13225]
Abstract
PURPOSE Robust and reliable reconstruction of images from noisy and incomplete projection data holds significant potential for the proliferation of cost-effective medical imaging technologies. Since conventional reconstruction techniques can generate severe artifacts in the recovered images, a notable line of research constitutes the development of appropriate algorithms to compensate for missing data and to reduce noise. In the present work, we investigate the effectiveness of state-of-the-art methodologies developed for image inpainting and noise reduction to preserve the quality of reconstructed images from undersampled PET data. We aimed to assess and ascertain whether missing data recovery is best performed in the projection space prior to reconstruction or adjoined with the reconstruction step in image space. METHODS Different strategies for data recovery were investigated using realistic patient-derived phantoms (brain and abdomen) in PET scanners with partial geometry (small and large gap structures). Specifically, gap-filling strategies in projection space were compared with reconstruction-based compensation in image space. The methods used for filling the gap structure in sinogram PET data include partial differential equation based techniques (PDE), total variation (TV) regularization, discrete cosine transform (DCT)-based penalized regression, and dictionary learning based inpainting (DLI). For compensation in image space, compressed sensing based image reconstruction methods were applied. These include the preconditioned alternating projection (PAPA) algorithm with first- and higher-order total variation (HOTV) regularization as well as dictionary learning based compressed sensing (DLCS). We additionally investigated the performance of the methods for recovery of missing data in the presence of a simulated lesion. The impact of different noise levels in the undersampled sinograms on the performance of the approaches was also evaluated.
RESULTS In our first study (brain imaging), DLI outperformed the other methods for the small gap structure in terms of root mean square error (RMSE) and structural similarity (SSIM), though at a relatively high computational cost. For the large gap structure, HOTV-PAPA produced better results. In the second study (abdomen imaging), the best performance again belonged to DLI for the small gap and to HOTV-PAPA for the large gap. In our lesion-simulation experiments on the patient brain phantom data, DLI achieved the best contrast recovery coefficient (CRC) for the small gap simulation, while HOTV-PAPA outperformed the others for the large gap simulation. Our evaluation of the impact of noise indicated that, at low and medium noise levels, DLI still produces favorable results among the inpainting approaches; at high noise levels, however, the performance of PDE4 (a variant of PDE) and DLI is very competitive. CONCLUSIONS Our results showed that estimation of missing data in projection space as a preprocessing step before reconstruction can improve the quality of recovered images, especially for small gap structures. However, when large portions of data are missing, compressed sensing techniques adjoined with the reconstruction step in image space were the best strategy.
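As a concrete illustration of the projection-space strategy, a minimal TV-regularized gap-filling loop might look like the sketch below. This is a toy example under assumed inputs (a 2D sinogram array and a boolean mask of known bins), not the authors' implementation; their DCT-penalized regression and DLI variants substitute different priors in the same role.

```python
import numpy as np

def tv_inpaint(sino, known, n_iter=400, step=0.2, eps=1e-6):
    """Fill missing sinogram bins (known == False) by gradient descent on
    total variation; known bins are held fixed. Illustrative sketch only."""
    x = sino.copy().astype(float)
    x[~known] = sino[known].mean()            # crude initialization of the gap
    for _ in range(n_iter):
        gy, gx = np.gradient(x)               # finite-difference image gradient
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalized gradient is the TV descent direction
        div = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
        x[~known] += step * div[~known]       # update only the missing bins
    return x

# Toy example: a smooth "sinogram" with a 4-bin angular gap.
truth = np.outer(np.sin(np.linspace(0.3, 2.8, 64)),
                 np.sin(np.linspace(0.3, 2.8, 64)))
known = np.ones_like(truth, dtype=bool)
known[:, 30:34] = False                       # simulated detector gap
filled = tv_inpaint(truth, known)
```

The TV flow propagates the values at the gap boundary inward, which is why such priors work best for the small gap structures noted in the results above.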
Affiliation(s)
- Seyedehsamaneh Shojaeilangari
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), P.O. Box 193955746, Tehran, Iran
- C Ross Schmidtlein
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Arman Rahmim
- Department of Radiology, Johns Hopkins University, Baltimore, MD, 21287, USA
- Departments of Radiology and Physics & Astronomy, University of British Columbia, Vancouver, BC, V5Z 1M9, Canada
- Mohammad Reza Ay
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, 1417613151, Tehran, Iran
|
9
|
Tian X, Zeng D, Zhang S, Huang J, Zhang H, He J, Lu L, Xi W, Ma J, Bian Z. Robust low-dose dynamic cerebral perfusion CT image restoration via coupled dictionary learning scheme. J Xray Sci Technol 2016; 24:837-853. [PMID: 27612048 DOI: 10.3233/xst-160593] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Dynamic cerebral perfusion x-ray computed tomography (PCT) imaging has been advocated for quantitatively and qualitatively assessing hemodynamic parameters in the diagnosis of acute stroke and chronic cerebrovascular diseases. However, the associated radiation dose is a significant concern for patients because of the dynamic scan protocol. To address this issue, in this paper we propose an image restoration method that utilizes a coupled dictionary learning (CDL) scheme to yield clinically acceptable PCT images from low-dose data acquisitions. Specifically, in the present CDL scheme, the 2D background information from the average of the baseline time frames of low-dose unenhanced CT images and the 3D enhancement information from normal-dose sequential cerebral PCT images are exploited to train two sets of dictionary atoms, respectively. The two trained dictionaries are then coupled to represent the desired PCT images as a spatio-temporal prior in the objective function. Finally, the low-dose dynamic cerebral PCT images are restored via general dictionary-learning-based image processing. To obtain a robust solution, the objective function is solved using a modified dictionary-learning-based image restoration algorithm. Experimental results on clinical data show that the present method yields more accurate kinetic enhancement details and diagnostic hemodynamic parameter maps than state-of-the-art methods.
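The coupling step can be sketched as follows: a low-dose patch is sparse-coded over one dictionary, and the shared code then synthesizes the restored patch from the coupled dictionary. This toy uses orthogonal matching pursuit and an identical pair of dictionaries for simplicity (in the paper the two dictionaries are trained jointly on low-dose background and normal-dose enhancement data); all names and sizes are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select up to k atoms of D
    (unit-norm columns) and least-squares fit y on them. Toy sketch only."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Coupled use: sparse-code a low-dose patch over D_low, then synthesize the
# restored patch from the coupled normal-dose dictionary D_high with the
# same code.
rng = np.random.default_rng(0)
D_high, _ = np.linalg.qr(rng.standard_normal((32, 32)))  # toy orthonormal atoms
D_low = D_high              # identical here; trained jointly in the paper
y_low = 2.0 * D_low[:, 3] - 1.0 * D_low[:, 10]           # a 2-sparse patch
alpha = omp(D_low, y_low, k=2)
restored = D_high @ alpha
```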
Affiliation(s)
- Xiumei Tian
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Shanli Zhang
- The First Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine, Guangzhou, Guangdong, China
- Jing Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Hua Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Ji He
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Weiwen Xi
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Zhaoying Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
|
10
|
Tang J, Yang B, Wang Y, Ying L. Sparsity-constrained PET image reconstruction with learned dictionaries. Phys Med Biol 2016; 61:6347-68. [DOI: 10.1088/0031-9155/61/17/6347] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
11
|
Wang Y, Ma G, An L, Shi F, Zhang P, Lalush DS, Wu X, Pu Y, Zhou J, Shen D. Semisupervised Tripled Dictionary Learning for Standard-Dose PET Image Prediction Using Low-Dose PET and Multimodal MRI. IEEE Trans Biomed Eng 2016; 64:569-579. [PMID: 27187939 DOI: 10.1109/tbme.2016.2564440] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE To obtain high-quality positron emission tomography (PET) images with a low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from its low-dose PET (L-PET) counterpart and the corresponding magnetic resonance imaging (MRI). METHODS This was achieved by patch-based sparse representation (SR), using training samples with a complete set of MRI, L-PET, and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited; in practice, many samples have incomplete modalities (i.e., one or two missing modalities) and thus cannot be used in the prediction process. In light of this, we develop a semisupervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), taking advantage of the large number of available training samples to further improve prediction performance. RESULTS Validation was performed on a real human brain dataset of 18 subjects, and the results show that our method is superior to SR and other baseline methods. CONCLUSION This paper proposed a new S-PET prediction method that can significantly improve PET image quality with a low-dose injection. SIGNIFICANCE The proposed method is attractive for clinical application since it can reduce the potential radiation risk to patients.
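A minimal sketch of the shared-code idea behind tripled-dictionary prediction: code the observed modalities jointly, then synthesize the missing modality from its own dictionary with the same code. All dictionary sizes are hypothetical, and an ISTA lasso solver stands in for the sparse-coding step; the paper's semisupervised joint training is not reproduced here.

```python
import numpy as np

def ista(D, y, lam=0.001, n_iter=3000):
    """Iterative soft-thresholding for min 0.5*||D a - y||^2 + lam*||a||_1.
    A simple stand-in for the sparse-coding step; sketch only."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - y) / L        # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy tripled dictionaries that share one sparse code (in the paper all
# three are trained jointly on corresponding patches).
rng = np.random.default_rng(1)
D_mri = rng.standard_normal((16, 40))
D_lpet = rng.standard_normal((16, 40))
D_spet = rng.standard_normal((16, 40))

D_joint = np.vstack([D_mri, D_lpet])
D_joint = D_joint / np.linalg.norm(D_joint, axis=0)   # unit-norm joint atoms

alpha_true = np.zeros(40)
alpha_true[5], alpha_true[17] = 1.0, -0.7             # shared 2-sparse code
y_joint = D_joint @ alpha_true                        # observed [MRI; L-PET] patch
alpha = ista(D_joint, y_joint)                        # code the observed modalities
spet_pred = D_spet @ alpha                            # predict the S-PET patch
```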
|