1. Sanaat A, Amini M, Arabi H, Zaidi H. The quest for multifunctional and dedicated PET instrumentation with irregular geometries. Ann Nucl Med 2024;38:31-70. PMID: 37952197; PMCID: PMC10766666; DOI: 10.1007/s12149-023-01881-6.
Abstract
We review state-of-the-art developments in dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. We then present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review challenges and possible artifacts introduced by image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial-angular-coverage geometries, and how they can be addressed. Finally, we address open issues, including cost/benefit analysis of dedicated PET scanners, how far theoretical conceptual designs are from the market and the clinic, and strategies for reducing fabrication cost without compromising performance.
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, The Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
2. Chen X, Zhou B, Xie H, Miao T, Liu H, Holler W, Lin M, Miller EJ, Carson RE, Sinusas AJ, Liu C. DuDoSS: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac SPECT. Med Phys 2023;50:89-103. PMID: 36048541; PMCID: PMC9868054; DOI: 10.1002/mp.15958.
Abstract
PURPOSE: Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, the long scanning procedure and acquisition time might induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles provides a solution that results in a shorter scanning time. However, fewer projection angles might cause lower reconstruction accuracy, a higher noise level, and reconstruction artifacts due to reduced angular sampling. We developed a deep-learning-based approach for high-quality SPECT image reconstruction using sparsely sampled projections. METHODS: We proposed a novel deep-learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS utilized the SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections were then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluation in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep-learning-based approaches, including direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested for comparison. The dataset included a total of 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following the injection of 99mTc-tetrofosmin. RESULTS: Our proposed DuDoSS generated synthetic projections and SPECT images more consistent with the ground truth than the other approaches. The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, compared with 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The average voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 1.63% ± 0.72%, compared with 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The average segment-wise APE between the AC SPECT images by DuDoSS and the ground-truth AC SPECT images was 3.87% ± 3.23%, compared with 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS: Our proposed DuDoSS can generate accurate synthetic full-view projections from sparsely sampled projections for cardiac SPECT. The synthetic projections and reconstructed SPECT images generated by DuDoSS are more consistent with the ground-truth full-view projections and SPECT images than those of the other approaches. DuDoSS can potentially enable fast data acquisition of cardiac SPECT.
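The voxel-wise NMSE used for the comparisons above can be sketched in a few lines. This is a generic illustration of the metric, not the authors' evaluation code, and the percent scaling is an assumption based on how the results are reported.

```python
def nmse(pred, ref):
    """Voxel-wise normalized mean squared error, in percent:
    100 * sum((pred - ref)^2) / sum(ref^2)."""
    num = sum((p - r) ** 2 for p, r in zip(pred, ref))
    den = sum(r ** 2 for r in ref)
    return 100.0 * num / den
```

Under this definition, an NMSE of 2.08% means the total squared deviation from the ground-truth projections is about 2% of the ground truth's total squared intensity.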
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Tianshun Miao
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Hui Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Visage Imaging, Inc., San Diego, California, United States, 92130
- Edward J. Miller
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Richard E. Carson
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Albert J. Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
- Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, Connecticut, United States, 06511
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States, 06511
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, United States, 06511
3. Huang Y, Zhu H, Duan X, Hong X, Sun H, Lv W, Lu L, Feng Q. GapFill-Recon Net: A Cascade Network for simultaneously PET Gap Filling and Image Reconstruction. Comput Methods Programs Biomed 2021;208:106271. PMID: 34274612; DOI: 10.1016/j.cmpb.2021.106271.
Abstract
PET image reconstruction from incomplete data, such as when gaps between adjacent detector blocks cause partial loss of projection data, is an important and challenging problem in medical imaging. This work proposes an efficient convolutional neural network (CNN) framework, called GapFill-Recon Net, that jointly reconstructs PET images and their associated sinogram data. GapFill-Recon Net comprises two blocks: the Gap-Filling block first addresses the sinogram gaps, and the Image-Recon block then maps the filled sinogram directly onto the final image. A total of 43,660 pairs of synthetic 2D PET sinograms with gaps and images generated from the MOBY phantom are utilized for network training, testing, and validation. Whole-body mouse Monte Carlo (MC) simulated data are also used for evaluation. The experimental results show that GapFill-Recon Net outperforms filtered back-projection (FBP) and maximum-likelihood expectation maximization (MLEM) in reconstructed image quality, as measured by the structural similarity index metric (SSIM), relative root mean squared error (rRMSE), and peak signal-to-noise ratio (PSNR). Moreover, the reconstruction speed is equivalent to that of FBP and nearly 83 times faster than that of MLEM. In conclusion, compared with traditional reconstruction algorithms, GapFill-Recon Net achieves a favorable balance between image quality and reconstruction speed.
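The MLEM baseline that GapFill-Recon Net is compared against can be sketched for a toy system matrix. This is a minimal illustration of the standard multiplicative update, not the implementation benchmarked in the paper.

```python
def mlem(A, y, n_iter=20):
    """Maximum-likelihood expectation maximization for y ≈ A x.

    A: system matrix as a list of rows (one row per sinogram bin),
    y: measured sinogram counts. Returns the image estimate x.
    """
    n_bins, n_pix = len(A), len(A[0])
    x = [1.0] * n_pix  # uniform non-negative initial image
    # sensitivity image: column sums of A
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_pix)]
    for _ in range(n_iter):
        # forward-project the current estimate
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_bins)]
        # ratio of measured to estimated counts per bin
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_bins)]
        # back-project the ratio and apply the multiplicative update
        back = [sum(A[i][j] * ratio[i] for i in range(n_bins)) for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0 for j in range(n_pix)]
    return x
```

The iterative forward/back-projection inside the loop is what makes MLEM so much slower than FBP or a single network forward pass, which is the speed gap the abstract quantifies.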
Affiliation(s)
- Yanchao Huang
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China; Nanfang PET Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong 510515, China
- Huobiao Zhu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Xiaoman Duan
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada
- Xiaotong Hong
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Hao Sun
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Wenbing Lv
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
4. Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021;11:2792-2822. PMID: 34079744; PMCID: PMC8107336; DOI: 10.21037/qims-20-1078.
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging, including nuclear medicine imaging, has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the image acquisition time, reduce the injected tracer dose, and enhance image quality. This work provides an overview of the application of AI to image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
5. Liu CC, Huang HM. Partial-ring PET image restoration using a deep learning based method. Phys Med Biol 2019;64:225014. PMID: 31581143; DOI: 10.1088/1361-6560/ab4aa9.
Abstract
PET scanners with partial-ring geometry have been proposed for various imaging purposes. The incomplete projection data obtained from this design cause undesirable artifacts in the reconstructed images. In this study, we investigated the performance of a deep learning (DL)-based method for the recovery of partial-ring PET images. Twenty digital brain phantoms were used in the Monte Carlo simulation toolkit SimSET to simulate 15-min full-ring PET scans. Partial-ring PET data were generated from the full-ring data by removing coincidence events that hit specific detector blocks. A convolutional neural network based on the residual U-Net architecture was trained to predict full-ring data from partial-ring data in either the projection or the image domain. The performance of the proposed DL-based method was evaluated by comparison with the PET images reconstructed from the full-ring projection data in terms of the mean squared error (MSE), structural similarity (SSIM) index, and recovery coefficient (RC). The MSE results showed the superiority of the image-domain approach, which achieved a reduction of 91.7% compared with 14.3% for the projection-domain approach. The image-domain approach was therefore used to study the influence of the number of removed detector blocks. The SSIM results were 0.998, 0.996, and 0.993 for removal of 3, 5, and 7 detector blocks, respectively. The activity of gray and white matter could be fully recovered even with 7 detector blocks removed, while the RCs of two artificially inserted small lesions (3 pixels in diameter) in the testing data were 94%, 89%, and 79% for removal of 3, 5, and 7 detector blocks, respectively. Our simulation results suggest that DL has the potential to recover partial-ring PET images.
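The recovery coefficient (RC) reported for the inserted lesions is conventionally the measured activity in the lesion region of interest (ROI) relative to the true activity. A minimal sketch, assuming this standard mean-based definition; the paper's exact ROI handling may differ.

```python
def recovery_coefficient(measured_roi, true_roi):
    """Recovery coefficient in percent: mean measured ROI activity
    relative to the mean ground-truth ROI activity."""
    mean_meas = sum(measured_roi) / len(measured_roi)
    mean_true = sum(true_roi) / len(true_roi)
    return 100.0 * mean_meas / mean_true
```

An RC of 94% for a 3-pixel lesion thus indicates that the restored image preserves 94% of the lesion's true mean activity.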
Affiliation(s)
- Chih-Chieh Liu
- Department of Biomedical Engineering, University of California, Davis, CA 95616, United States of America