1. Lv L, Zeng GL, Chen G, Ding W, Weng F, Huang Q. The effects of back-projection variants in BPF-like TOF PET reconstruction using CNN filtration - Based on simulated and clinical brain data. Med Phys 2024. [PMID: 38828883 DOI: 10.1002/mp.17191]
Abstract
BACKGROUND The back-projection strategies such as confidence weighting (CW) and most likely annihilation position (MLAP) have been adopted into back-projection-and-filtering-like (BPF-like) deep reconstruction models and have shown great potential for fast and accurate PET reconstruction. Although the two methods degenerate to an identical model at a time resolution of 0 ps, they represent two distinct approaches at the realistic time resolutions of current commercial systems. A systematic and fair assessment of these differences has been lacking. PURPOSE This work aims to analyze the impact of back-projection variants on CNN-based PET image reconstruction, to identify the most effective back-projection model, and ultimately to contribute to accurate PET reconstruction. METHODS Different back-projection strategies (CW and MLAP) and different angular view processing methods (view-summed and view-grouped) were considered, leading to the comparison of four back-projection variants integrated with the same CNN filtration model. Meanwhile, we investigated two strategies of physical effect compensation: either introducing pre-corrected data as the input or adding a channel of attenuation map to the CNN model. After training models separately on Monte-Carlo-simulated BrainWeb phantoms with full dose (events = 3×10^7), we tested them on both simulated phantoms and clinical brain scans at two dosage levels. For performance assessment, peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) were used to evaluate pixel-wise error, the structural similarity index (SSIM) to evaluate structural similarity, and the contrast recovery coefficient (CRC) in manually selected ROIs to compare region recovery. RESULTS Compared to the two MLAP-based histo-image reconstruction models, the two CW-based back-projected image methods produced clearer, sharper, and more detailed images from both simulated and clinical data. For angular view processing, the view-grouped histo-image improved image quality, while the view-grouped cwbp-image showed no advantage except for contrast recovery. Quantitative analysis on simulated data demonstrated that the view-summed cwbp-image model achieved the best PSNR, RMSE, and SSIM, while the 8-view cwbp-image model achieved the best CRC in lesions and the white matter. Additionally, the multi-channel input model, which includes the back-projection image and the attenuation map, proved to be the most efficient and simplest method for compensating for physical effects in brain data. Applying Gaussian blur to the histo-image yielded only limited improvement. All of the above results hold for both the half-dose and full-dose cases. CONCLUSION For brain imaging, evaluation based on the metrics PSNR, RMSE, SSIM, and CRC indicates that the view-summed CW-based back-projection variant is the most effective input for the BPF-like reconstruction model using CNN filtration, which can incorporate the attenuation map through an additional channel to effectively compensate for physical effects.
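The CW-versus-MLAP distinction compared in this abstract can be made concrete for a single coincidence event. The following is an illustrative 1D sketch (the geometry, constants, and function names are ours, not the paper's): MLAP deposits the event at its most likely annihilation position, while CW spreads it along the line of response with a Gaussian TOF confidence kernel; at a time resolution of 0 ps the two coincide.

```python
import numpy as np

C_MM_PER_PS = 0.2998  # speed of light in mm per picosecond

def backproject_event(grid, x1, x2, dt_ps, fwhm_ps, mode="CW"):
    """Deposit one TOF coincidence into a strip of voxels along the LOR.

    grid    : 1D numpy array of voxel values along the LOR (modified in place)
    x1, x2  : LOR endpoints in mm (scalar 1D geometry, for clarity)
    dt_ps   : arrival-time difference t1 - t2 in picoseconds
    fwhm_ps : coincidence time resolution (FWHM) in picoseconds
    mode    : "CW" spreads the event with the Gaussian TOF kernel,
              "MLAP" deposits it at the most likely annihilation position
    """
    xs = np.linspace(x1, x2, grid.size)  # voxel centres along the LOR
    # Most likely annihilation position from the time difference
    x_mlap = 0.5 * (x1 + x2) + 0.5 * C_MM_PER_PS * dt_ps
    if mode == "MLAP":
        idx = int(np.argmin(np.abs(xs - x_mlap)))  # nearest voxel
        grid[idx] += 1.0
    else:  # CW: Gaussian confidence weighting along the LOR
        sigma_mm = 0.5 * C_MM_PER_PS * fwhm_ps / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        w = np.exp(-0.5 * ((xs - x_mlap) / sigma_mm) ** 2)
        grid += w / w.sum()  # unit total weight per event
```

As fwhm_ps approaches 0 the CW kernel collapses onto the MLAP voxel, which is the degeneracy at 0 ps time resolution noted above.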
Affiliation(s)
- Li Lv
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Gengsheng L Zeng
  - Department of Computer Science, Utah Valley University, Orem, USA
- Gaoyu Chen
  - University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, China
  - Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Wenxiang Ding
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Fenghua Weng
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qiu Huang
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  - Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
2. Hashimoto F, Ote K. ReconU-Net: a direct PET image reconstruction using U-Net architecture with back projection-induced skip connection. Phys Med Biol 2024; 69:105022. [PMID: 38640921 DOI: 10.1088/1361-6560/ad40f6]
Abstract
Objective. This study aims to introduce a novel back-projection-induced U-Net-shaped architecture, called ReconU-Net, based on the original U-Net architecture, for deep learning-based direct positron emission tomography (PET) image reconstruction. Additionally, our objective is to visualize the behavior of direct PET image reconstruction by comparing the proposed ReconU-Net architecture with the original U-Net architecture and the existing DeepPET encoder-decoder architecture without skip connections. Approach. The proposed ReconU-Net architecture uniquely integrates the physical model of the back-projection operation into the skip connection. This distinctive feature facilitates the effective transfer of intrinsic spatial information from the input sinogram to the reconstructed image via an embedded physical model. The proposed ReconU-Net was trained using Monte Carlo simulation data from the BrainWeb phantom and tested on both simulated and real Hoffman brain phantom data. Main results. The proposed ReconU-Net method provided better reconstructed images in terms of peak signal-to-noise ratio and contrast recovery coefficient than the original U-Net and DeepPET methods. Further analysis showed that the proposed ReconU-Net architecture is able to transfer features at multiple resolutions, especially non-abstract high-resolution information, through its skip connections. Unlike the U-Net and DeepPET methods, the proposed ReconU-Net successfully reconstructed the real Hoffman brain phantom, despite being trained only on simulated data. Significance. The proposed ReconU-Net can improve the fidelity of direct PET image reconstruction, even with small training datasets, by leveraging the synergistic relationship between data-driven modeling and the physics model of the imaging process.
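The idea of a back-projection-induced skip connection can be sketched with a toy linear model (the system matrix, layer stand-ins, and sizes below are invented for illustration and are not the ReconU-Net implementation): sinogram-domain features are mapped into the image domain by the back-projection operator A^T before being combined with the image-domain path.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a flattened 4x4 image and a sinogram of 8 bins.
# A is a random stand-in for the system (projection) matrix;
# its transpose A.T acts as the back-projection operator.
n_img, n_sino = 16, 8
A = rng.random((n_sino, n_img))

def encoder(sino):
    """Stand-in for learned sinogram-domain feature extraction."""
    return np.tanh(sino)

def decoder(latent_img):
    """Stand-in for learned image-domain refinement (ReLU here)."""
    return np.maximum(latent_img, 0.0)

def reconunet_like(sino):
    # Main path: crude normalised back-projection of the raw sinogram.
    latent = A.T @ sino / n_sino
    # Back-projection-induced skip: back-project *features*, so
    # sinogram-domain information reaches the image domain directly.
    skip = A.T @ encoder(sino)
    return decoder(latent + skip)
```

In the real architecture both paths are deep convolutional stages; the point the sketch preserves is that the skip connection crosses domains through an embedded physical operator rather than a plain identity.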
Affiliation(s)
- Fumio Hashimoto
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
3. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563 PMCID: PMC10902118 DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
  - Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuya Onishi
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Andrew J Reader
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EH, UK
- Taiga Yamaya
  - Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
4. Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023; 96:20230292. [PMID: 37486607 PMCID: PMC10546435 DOI: 10.1259/bjr.20230292]
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total body PET) is helping, there is always a need to improve reconstructed image quality due to time- and count-limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods which rely on supervised learning from reference data, (ii) iterative (unrolled) methods which combine our physics and statistical models with AI learning from data, and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
Affiliation(s)
- Andrew J Reader
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Bolin Pan
  - School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
5. Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm. Phys Med Biol 2023; 68:155009. [PMID: 37406637 DOI: 10.1088/1361-6560/ace49c]
Abstract
Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction approach that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction is currently infeasible because of graphics processing unit (GPU) memory limitations. Consequently, we modify the DIP optimization to block iteration and sequential learning over an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with the maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicated that the proposed method can produce high-quality images without a prior training dataset. Thus, it could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
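The relative difference penalty added to the loss here follows the standard form due to Nuyts et al.; a minimal 1D sketch of that penalty (our own discretisation, not the paper's code) is:

```python
import numpy as np

def rdp_penalty(x, gamma=2.0, eps=1e-12):
    """Relative difference penalty over nearest neighbours of a 1D image.

    Each neighbouring pair (x_j, x_k) contributes
        (x_j - x_k)^2 / (x_j + x_k + gamma * |x_j - x_k| + eps),
    so differences are penalised less strongly in high-activity regions,
    which is what gives the RDP its edge-preserving behaviour.
    """
    d = np.diff(x)            # x_{j+1} - x_j for each neighbouring pair
    s = x[:-1] + x[1:]        # x_j + x_{j+1}
    return float(np.sum(d**2 / (s + gamma * np.abs(d) + eps)))
```

In the reconstruction described above this scalar is added (with a weighting factor) to the data-fidelity term of the loss that the DIP network minimises.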
Affiliation(s)
- Fumio Hashimoto
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
  - Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuya Onishi
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
  - Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Taiga Yamaya
  - Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan
  - National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
6. Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031 PMCID: PMC9250483 DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen
  - Monash Biomedical Imaging, Monash University, Melbourne, Australia
  - Department of Data Science and AI, Monash University, Melbourne, Australia
7. Lv L, Zeng GL, Zan Y, Hong X, Guo M, Chen G, Tao W, Ding W, Huang Q. A back-projection-and-filtering-like (BPF-like) reconstruction method with the deep learning filtration from listmode data in TOF-PET. Med Phys 2022; 49:2531-2544. [PMID: 35122265 PMCID: PMC10080664 DOI: 10.1002/mp.15520]
Abstract
PURPOSE The time-of-flight (TOF) information improves the signal-to-noise ratio (SNR) in positron emission tomography (PET) imaging. Existing analytical algorithms for TOF PET usually follow a filtered back-projection process to reconstruct images from sinogram data. This work aims to develop a back-projection-and-filtering-like (BPF-like) algorithm that rapidly reconstructs the TOF PET image directly from listmode data. METHODS We extended the 2D conventional non-TOF PET projection model to the TOF case, where projection data are represented as line integrals weighted by a one-dimensional TOF kernel along the projection direction. After deriving the central slice theorem and the TOF back-projection of listmode data, we designed a deep learning network with a modified U-Net architecture to perform the spatial filtration (reconstruction filter). The proposed BP-Net method was validated via Monte Carlo simulations of TOF PET listmode data with three different time resolutions for two types of activity phantoms. The network was trained only on the simulated full-dose XCAT dataset and then evaluated on XCAT and Jaszczak data with different time resolutions and dose levels. RESULTS Reconstructed images show that, compared with the conventional BPF algorithm and the MLEM algorithm proposed for TOF PET, the proposed BP-Net method obtains better image quality in terms of peak signal-to-noise ratio, relative mean square error, and structural similarity index; moreover, the reconstruction speed of BP-Net is 1.75 times faster than BPF and 29.05 times faster than MLEM using 15 iterations. The results also indicate that the performance of BP-Net degrades with worse time resolutions and lower tracer doses, but degrades less than BPF or MLEM reconstructions. CONCLUSION In this work, we developed an analytical-like reconstruction in the form of BPF, with the reconstruction filtering operation performed via a deep network. The method runs even faster than the conventional BPF algorithm and provides accurate reconstructions from listmode data in TOF PET, free of rebinning data into a sinogram.
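The TOF projection model described in this abstract, line integrals weighted by a one-dimensional TOF kernel, can be sketched for a single line of response as follows (an illustrative discretisation with made-up sampling, not the paper's implementation):

```python
import numpy as np

def tof_forward_projection(f, xs, tof_centers_mm, sigma_mm):
    """TOF-weighted line integrals along a single LOR.

    f              : activity samples along the LOR
    xs             : sample positions in mm (uniform spacing)
    tof_centers_mm : centre of each TOF bin, expressed as a position in mm
    sigma_mm       : spatial sigma of the Gaussian TOF kernel
    """
    dx = xs[1] - xs[0]
    out = np.empty(len(tof_centers_mm))
    for i, c in enumerate(tof_centers_mm):
        # Normalised Gaussian TOF kernel centred on this bin
        h = np.exp(-0.5 * ((xs - c) / sigma_mm) ** 2)
        h /= sigma_mm * np.sqrt(2.0 * np.pi)
        out[i] = np.sum(f * h) * dx  # kernel-weighted line integral
    return out
```

As sigma_mm grows (worse time resolution) the kernel flattens and the model reverts toward the conventional non-TOF line integral, which is consistent with the degraded but still usable performance the abstract reports at worse time resolutions.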
Affiliation(s)
- Li Lv
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gengsheng L. Zeng
  - Department of Computer Science, Utah Valley University, Orem, UT 84058, USA
- Yunlong Zan
  - Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiang Hong
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Minghao Guo
  - School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gaoyu Chen
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Weijie Tao
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  - Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Wenxiang Ding
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiu Huang
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  - Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China