1. Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203] [DOI: 10.1109/TIP.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer a way to bring the power of deep learning to this application; however, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be efficiently solved by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes a PET activity image update, a gCT image update, and least-squares neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared to other methods.
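The monotone-likelihood guarantee described in this abstract is the hallmark of optimization transfer, where each iteration maximizes a surrogate of the log-likelihood. As a minimal, self-contained illustration (not the paper's algorithm; the system matrix, counts, and iteration count are made-up toy values), classic MLEM exhibits the same property:

```python
import numpy as np

# Toy sketch (our own illustration, not the paper's method): MLEM derived
# via optimization transfer, showing the monotone-likelihood property the
# abstract refers to. A, y, and x below are made-up toy values.
A = np.array([[1.0, 0.5],
              [0.3, 1.0],
              [0.7, 0.7]])          # toy system matrix: 3 bins, 2 voxels
y = np.array([20.0, 15.0, 25.0])    # measured counts
x = np.ones(2)                      # initial activity estimate

def loglik(v):
    ybar = A @ v                                   # expected counts
    return float(np.sum(y * np.log(ybar) - ybar))  # Poisson log-likelihood

sens = A.T @ np.ones(len(y))        # sensitivity image A^T 1
liks = [loglik(x)]
for _ in range(50):
    x = x / sens * (A.T @ (y / (A @ x)))   # MLEM update (surrogate maximizer)
    liks.append(loglik(x))

# The likelihood never decreases, iteration after iteration.
assert all(b2 >= a2 - 1e-9 for a2, b2 in zip(liks, liks[1:]))
```

The same surrogate argument is what lets the paper interleave network learning with the tomographic update while still guaranteeing monotonicity.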
2. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
3. Wang S, Liu B, Xie F, Chai L. An iterative reconstruction algorithm for unsupervised PET image. Phys Med Biol 2024; 69:055025. [PMID: 38346340] [DOI: 10.1088/1361-6560/ad2882]
Abstract
Objective. In recent years, convolutional neural networks (CNNs) have shown great potential in positron emission tomography (PET) image reconstruction. However, most of them rely on many low-quality and high-quality reference PET image pairs for training, which are not always feasible in clinical practice. On the other hand, many works improve the quality of PET image reconstruction by adding explicit regularization or optimizing the network structure, which may lead to complex optimization problems. Approach. In this paper, we develop a novel iterative reconstruction algorithm by integrating the deep image prior (DIP) framework, which needs only the patient's prior information (e.g. MRI) and sinogram data. Specifically, we construct the objective function as a constrained optimization problem and utilize existing PET image reconstruction packages to streamline the calculations. Moreover, to further improve both reconstruction quality and speed, we introduce Nesterov's acceleration and a restart mechanism in each iteration. Main results. 2D experiments on PET data sets from computer simulations and real patients demonstrate that the proposed algorithm outperforms the existing MLEM-GF, KEM and DIPRecon methods. Significance. Unlike traditional CNN methods, the proposed algorithm does not rely on large data sets, but only leverages inter-patient information. Furthermore, we enhance reconstruction performance by optimizing the iterative algorithm. Notably, the proposed method does not require much modification of the basic algorithm, allowing for easy integration into standard implementations.
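The two acceleration ingredients named in this abstract, Nesterov extrapolation and a restart mechanism, can be sketched on a much simpler problem. The following is our own toy illustration, not the paper's algorithm: Q, b, the step size 1/L, and the iteration count are made-up values for a small quadratic objective.

```python
import numpy as np

# Toy sketch: Nesterov acceleration with a restart/monotonicity safeguard,
# applied to f(x) = 0.5 x'Qx - b'x. All values below are made up.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
f = lambda v: 0.5 * v @ Q @ v - b @ v
grad = lambda v: Q @ v - b

L = np.linalg.eigvalsh(Q).max()           # Lipschitz constant of the gradient
x, x_prev, t = np.zeros(2), np.zeros(2), 1.0
for _ in range(300):
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x + (t - 1) / t_next * (x - x_prev)   # momentum extrapolation
    x_acc = z - grad(z) / L                   # accelerated candidate
    x_gd = x - grad(x) / L                    # plain gradient fallback
    x_prev = x
    if f(x_acc) <= f(x_gd):
        x, t = x_acc, t_next                  # keep the accelerated step
    else:
        x, t = x_gd, 1.0                      # restart: drop the momentum

x_star = np.linalg.solve(Q, b)                # exact minimizer, for checking
```

The restart keeps the objective from oscillating, which is the practical reason such mechanisms speed up momentum methods on reconstruction problems.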
Affiliation(s)
- Siqi Wang
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Bing Liu
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Furan Xie
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Li Chai
- College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China
4. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024; arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
5. Gao M, Fessler JA, Chan HP. Model-based deep CNN-regularized reconstruction for digital breast tomosynthesis with a task-based CNN image assessment approach. Phys Med Biol 2023; 68:245024. [PMID: 37988758] [PMCID: PMC10719554] [DOI: 10.1088/1361-6560/ad0eb4]
Abstract
Objective. Digital breast tomosynthesis (DBT) is a quasi-three-dimensional breast imaging modality that improves breast cancer screening and diagnosis because it reduces fibroglandular tissue overlap compared with 2D mammography. However, DBT suffers from noise and blur problems that can lower the detectability of subtle signs of cancers such as microcalcifications (MCs). Our goal is to improve the image quality of DBT in terms of image noise and MC conspicuity. Approach. We proposed a model-based deep convolutional neural network (deep CNN or DCNN) regularized reconstruction (MDR) for DBT. It combined a model-based iterative reconstruction (MBIR) method that models the detector blur and correlated noise of the DBT system and the learning-based DCNN denoiser using the regularization-by-denoising framework. To facilitate task-based image quality assessment, we also proposed two DCNN tools for image evaluation: a noise estimator (CNN-NE) trained to estimate the root-mean-square (RMS) noise of the images, and an MC classifier (CNN-MC) as a DCNN model observer to evaluate the detectability of clustered MCs in human subject DBTs. Main results. We demonstrated the efficacies of CNN-NE and CNN-MC on a set of physical phantom DBTs. The MDR method achieved low RMS noise and the highest detection area under the receiver operating characteristic curve (AUC) rankings evaluated by CNN-NE and CNN-MC among the reconstruction methods studied on an independent test set of human subject DBTs. Significance. The CNN-NE and CNN-MC may serve as a cost-effective surrogate for human observers to provide task-specific metrics for image quality comparisons. The proposed reconstruction method shows the promise of combining physics-based MBIR and learning-based DCNNs for DBT image reconstruction, which may potentially lead to lower dose and higher sensitivity and specificity for MC detection in breast cancer screening and diagnosis.
Affiliation(s)
- Mingjie Gao
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Jeffrey A Fessler
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
6. Zhang F, Wang L, Zhao J, Zhang X. Medical applications of generative adversarial network: a visualization analysis. Acta Radiol 2023; 64:2757-2767. [PMID: 37603577] [DOI: 10.1177/02841851231189035]
Abstract
BACKGROUND Deep learning (DL) is one of the latest approaches to artificial intelligence. As an unsupervised DL method, a generative adversarial network (GAN) can be used to synthesize new data. PURPOSE To explore GAN applications in medicine, point out the significance of their existence for clinical medical research, and provide a visual bibliometric analysis of GAN applications in the medical field using the scientometric software CiteSpace and statistical analysis methods. MATERIAL AND METHODS PubMed, MEDLINE, Web of Science, and Google Scholar were searched to identify studies of GAN in medical applications between 2017 and 2022. Eligibility criteria were the full texts of peer-reviewed journal articles reporting the application of GANs in medicine, published in English between 1 January 2017 and 1 December 2022. The study was performed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. CiteSpace was used to analyze the number of publications, authors, institutions, and keywords of the included articles. RESULTS The applications of GAN in medicine are not limited to medical image processing but extend to wider and more complex fields, and may be applied to clinical medicine. CONCLUSION GAN has been widely applied in the medical field and will be used more deeply and broadly in clinical medicine, especially in privacy protection and medical diagnosis. However, clinical applications of GAN require consideration of ethical and legal issues, and GAN-based applications should be well validated by expert radiologists.
Affiliation(s)
- Fan Zhang
- Department of Radiology, Huaihe Hospital of Henan University, Kaifeng, PR China
- Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, PR China
- Luyao Wang
- School of Computer and Information Engineering, Henan University, Kaifeng, PR China
- Jiayin Zhao
- School of Software, Henan University, Kaifeng, PR China
- Xinhong Zhang
- School of Software, Henan University, Kaifeng, PR China
7. Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Trans Med Imaging 2023; 42:785-796. [PMID: 36288234] [PMCID: PMC10081957] [DOI: 10.1109/TMI.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improving the kernel method would be to add explicit regularization, which, however, leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
8. Xu L, Cui C, Li R, Yang R, Liu R, Meng Q, Wang F. Phantom and clinical evaluation of the effect of a new Bayesian penalized likelihood reconstruction algorithm (HYPER Iterative) on 68Ga-DOTA-NOC PET/CT image quality. EJNMMI Res 2022; 12:73. [PMID: 36504014] [PMCID: PMC9742075] [DOI: 10.1186/s13550-022-00945-4]
Abstract
BACKGROUND The Bayesian penalized likelihood (BPL) algorithm is an effective way to suppress noise in positron emission tomography (PET) image reconstruction by incorporating a smooth penalty, whose strength is controlled by the penalization factor. The aim was to investigate the impact of different penalization factors and acquisition times in a new BPL algorithm, HYPER Iterative, on the quality of 68Ga-DOTA-NOC PET/CT images. A phantom and 25 patients with neuroendocrine neoplasms who underwent 68Ga-DOTA-NOC PET/CT were included. The PET data were acquired in list mode on a digital PET/CT scanner and reconstructed by ordered subset expectation maximization (OSEM) and by the HYPER Iterative algorithm with seven penalization factors between 0.03 and 0.5, for acquisitions of 2 and 3 min per bed position (m/b), all including time-of-flight and point-spread-function recovery. The contrast recovery (CR), background variability (BV), and radioactivity concentration ratio (RCR) of the phantom, the SUVmean and coefficient of variation (CV) of the liver, and the SUVmax of the lesions were measured. Image quality was rated by two radiologists using a five-point Likert scale. RESULTS The CR, BV, and RCR decreased with increasing penalization factor for the four "hot" spheres, and the HYPER Iterative 2 m/b groups with penalization factors of 0.07 to 0.2 had CR equivalent to, and BV superior to, the OSEM 3 m/b group. The liver SUVmean values were approximately equal in all reconstruction groups (range 5.95-5.97), and the liver CVs of the HYPER Iterative 2 m/b and 3 m/b groups with penalization factors of 0.1 to 0.2 were equivalent to those of the OSEM 3 m/b group (p = 0.113-0.711 and p = 0.079-0.287, respectively), while the lesion SUVmax significantly increased by 19-22% and 25%, respectively (all p < 0.001). The highest qualitative score was attained at a penalization factor of 0.2 for the HYPER Iterative 2 m/b group (3.20 ± 0.52) and 3 m/b group (3.70 ± 0.36); those scores were comparable to or greater than that of the OSEM 3 m/b group (3.09 ± 0.36, p = 0.388 and p < 0.001, respectively). CONCLUSIONS The HYPER Iterative algorithm with a penalization factor of 0.2 resulted in higher lesion contrast and lower image noise than OSEM for 68Ga-DOTA-NOC PET/CT, allowing the same image quality to be achieved with less injected radioactivity and a shorter acquisition time.
Affiliation(s)
- Lei Xu
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
- Can Cui
- Department of PET/CT Center, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, The Affiliated Cancer Hospital of Nanjing Medical University, Nanjing 210009, Jiangsu, China
- Rushuai Li
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
- Rui Yang
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
- Rencong Liu
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
- Qingle Meng
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
- Feng Wang
- Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, Jiangsu, China
9. Li S, Wang G. Deep Kernel Representation for Image Reconstruction in PET. IEEE Trans Med Imaging 2022; 41:3029-3038. [PMID: 35584077] [PMCID: PMC9613528] [DOI: 10.1109/TMI.2022.3176002]
Abstract
Image reconstruction for positron emission tomography (PET) is challenging because of the ill-conditioned tomographic problem and low counting statistics. Kernel methods address this challenge by using kernel representation to incorporate image prior information in the forward model of iterative PET image reconstruction. Existing kernel methods construct the kernels commonly using an empirical process, which may lead to unsatisfactory performance. In this paper, we describe the equivalence between the kernel representation and a trainable neural network model. A deep kernel method is then proposed by exploiting a deep neural network to enable automated learning of an improved kernel model and is directly applicable to single subjects in dynamic PET. The training process utilizes available image prior data to form a set of robust kernels in an optimized way rather than empirically. The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform the existing kernel method and neural network method for dynamic PET image reconstruction.
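The kernel representation this abstract builds on writes the image as x = K @ alpha, where K is constructed from prior-image features; the equivalence the authors note is that K @ alpha acts as a linear network layer whose weights are derived from those features. A minimal sketch of that representation (our own illustration; the feature values, kernel width, and coefficients are made up):

```python
import numpy as np

# Toy sketch: a Gaussian kernel matrix K built from one prior feature per
# pixel, row-normalized, then applied to a kernel coefficient image alpha.
feats = np.array([[0.0], [0.1], [1.0], [1.1]])  # made-up prior features
sigma = 0.5                                     # made-up kernel width
d2 = (feats - feats.T) ** 2                     # pairwise squared distances
K = np.exp(-d2 / (2 * sigma ** 2))              # Gaussian kernel matrix
K /= K.sum(axis=1, keepdims=True)               # row-normalize

alpha = np.array([1.0, 1.0, 5.0, 5.0])          # kernel coefficient image
x = K @ alpha                                   # represented image x = K alpha
```

Pixels with similar prior features share intensity through K, which is how the prior regularizes the reconstruction; the deep kernel method replaces this hand-built K with one learned by a network.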
10. Natarajan B, Elakkiya R. Dynamic GAN for high-quality sign language video generation from skeletal poses using generative adversarial networks. Soft Comput 2022. [DOI: 10.1007/s00500-022-07014-x]
11. Cui J, Gong K, Guo N, Kim K, Liu H, Li Q. Unsupervised PET Logan parametric image estimation using conditional deep image prior. Med Image Anal 2022; 80:102519. [PMID: 35767910] [DOI: 10.1016/j.media.2022.102519]
Abstract
Recently, deep learning-based denoising methods have been increasingly used for PET image denoising and have achieved impressive results. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that needs neither prior training nor a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. A neural network was utilized to represent the images of the Logan slope and intercept. The patient's computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method had higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%; thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%; thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal means (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%; thalamus of brain PET datasets: 103.59%±16.37%).
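The ADMM splitting mentioned in this abstract can be illustrated on a much smaller problem (our own toy example, not the paper's model; A, y, and rho are made-up values): nonnegativity-constrained least squares, where the x-update is an unconstrained quadratic solve, the z-update a projection, and u the scaled dual variable.

```python
import numpy as np

# Toy sketch of ADMM for min 0.5||Ax - y||^2 subject to x >= 0,
# via the splitting x = z. All values below are made up.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([-1.0, 2.0, 1.0])
rho = 1.0

x = np.zeros(2); z = np.zeros(2); u = np.zeros(2)
M = np.linalg.inv(A.T @ A + rho * np.eye(2))   # cached x-update matrix
for _ in range(500):
    x = M @ (A.T @ y + rho * (z - u))  # x-update: quadratic subproblem
    z = np.maximum(0.0, x + u)         # z-update: projection onto x >= 0
    u = u + x - z                      # scaled dual update
```

The unconstrained least-squares solution here is [-1, 2]; ADMM instead converges to the constrained solution [0, 1.5]. In the paper the same alternation separates the network-representation subproblem from the data-fit subproblem.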
Affiliation(s)
- Jianan Cui
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Ning Guo
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Kyungsang Kim
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; Jiaxing Key Laboratory of Photonic Sensing and Intelligent Imaging, Jiaxing, Zhejiang 314000, China; Intelligent Optics and Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Zhejiang 314000, China
- Quanzheng Li
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
12. Li T, Zhang M, Qi W, Asma E, Qi J. Deep Learning Based Joint PET Image Reconstruction and Motion Estimation. IEEE Trans Med Imaging 2022; 41:1230-1241. [PMID: 34928789] [PMCID: PMC9064915] [DOI: 10.1109/TMI.2021.3136553]
Abstract
Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory gated data through a joint estimation framework. However, conventional motion estimation methods based on registration of a pair of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation. We propose a joint estimation framework by incorporating a learned image registration network into a regularized PET image reconstruction. The joint estimation was formulated as a constrained optimization problem with moving gated images related to a fixed image via the deep neural network, and was solved by the alternating direction method of multipliers (ADMM) algorithm. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation. Motion compensated reconstructions using pre-calculated deformation fields by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison. Our simulation study shows that the proposed DL-ADMM joint estimation method reduces bias compared to the ungated image without increasing noise and outperforms the competing methods. In the real data study, our proposed method also generated higher lesion contrast and sharper liver boundaries compared to the ungated image and had lower noise than the reference gated image.
13. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs catalysed the research of their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in an experimental environment over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. The identification of relevant publications was performed via approved publication indexing websites and repositories. Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The research identified a hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly employed in PET imaging tasks. However, specific limitations must be eliminated to reach their full potential and gain the medical community's trust in everyday clinical practice.
14. Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
15
Orlhac F, Nioche C, Klyuzhin I, Rahmim A, Buvat I. Radiomics in PET Imaging: A Practical Guide for Newcomers. PET Clin 2021; 16:597-612. [PMID: 34537132 DOI: 10.1016/j.cpet.2021.06.007] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Radiomics has undergone considerable development in recent years. In PET imaging, very promising results concerning the ability of handcrafted features to predict the biological characteristics of lesions and to assess patient prognosis or response to treatment have been reported in the literature. This article presents a checklist for designing a reliable radiomic study, gives an overview of the steps of the pipeline, and outlines approaches for data harmonization. Tips are provided for critical reading of the content of articles. The advantages and limitations of handcrafted radiomics compared with deep-learning approaches for the characterization of PET images are also discussed.
Affiliation(s)
- Fanny Orlhac
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France.
- Christophe Nioche
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France
- Ivan Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada
- Irène Buvat
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France
16
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129 DOI: 10.1016/j.cpet.2021.06.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, there are still some limitations that compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to loss of count rate; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the scan-time limit (eg, in dynamic scans) and dose concerns cause a low signal-to-noise ratio. The early PET reconstruction methods are analytical approaches based on an idealized mathematical model.
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
17
Xie Z, Li T, Zhang X, Qi W, Asma E, Qi J. Anatomically aided PET image reconstruction using deep neural networks. Med Phys 2021; 48:5244-5258. [PMID: 34129690 PMCID: PMC8510002 DOI: 10.1002/mp.15051] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 05/07/2021] [Accepted: 06/02/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction. METHODS We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance. RESULTS Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all the competing methods at a matched lesion contrast.
CONCLUSIONS The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
Affiliation(s)
- Zhaoheng Xie
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Tiantian Li
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Wenyuan Qi
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Evren Asma
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, USA
18
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744 PMCID: PMC8107336 DOI: 10.21037/qims-20-1078] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Accepted: 02/14/2021] [Indexed: 12/12/2022]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has rapidly developed. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. This review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
19
Wang X, Zhou L, Wang Y, Jiang H, Ye H. Improved low-dose positron emission tomography image reconstruction using deep learned prior. Phys Med Biol 2021; 66. [PMID: 33882466 DOI: 10.1088/1361-6560/abfa36] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 04/21/2021] [Indexed: 01/18/2023]
Abstract
Positron emission tomography (PET) is a promising medical imaging technology that provides non-invasive and quantitative measurement of biochemical processes in the human body. PET image reconstruction is challenging due to the ill-posedness of the inverse problem. With the lower counting statistics caused by the limited number of detected photons, low-dose PET imaging leads to noisy reconstructed images with substantial quality degradation. Recently, deep neural networks (DNN) have been widely used in computer vision tasks and have attracted growing interest in medical imaging. In this paper, we propose a maximum a posteriori (MAP) reconstruction algorithm incorporating a convolutional neural network (CNN) representation in the formation of the prior. Rather than using the CNN in post-processing, we embedded the neural network in the reconstruction framework for image representation. Using the simulated data, we first quantitatively evaluated our proposed method in terms of the noise-bias tradeoff, and compared it with the filtered maximum likelihood (ML), the conventional MAP, and the CNN post-processing methods. In addition to the simulation experiments, the proposed method was further quantitatively validated on acquired patient brain and body data with the tradeoff between noise and contrast. The results demonstrated that the proposed CNN-MAP method improved the noise-bias tradeoff compared with the filtered ML, the conventional MAP, and the CNN post-processing methods in the simulation study. For the patient study, the CNN-MAP method achieved a better noise-contrast tradeoff than the other three methods. The quantitative enhancements indicate the potential value of the proposed CNN-MAP method in low-dose PET imaging.
Affiliation(s)
- Xinhui Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
- Long Zhou
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
- Yaofa Wang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Haochuan Jiang
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China
- Hongwei Ye
- MinFound Medical Systems Co., Ltd., Hangzhou, People's Republic of China; Zhejiang MinFound Intelligent Healthcare Technology Co. Ltd., Hangzhou, People's Republic of China
20
Abstract
Total-body PET image reconstruction follows a similar procedure to the image reconstruction process for standard whole-body PET scanners. One unique aspect of total-body imaging is simultaneous coverage of the entire human body, which makes it convenient to perform total-body dynamic PET scans. Therefore, four-dimensional dynamic PET reconstruction and parametric imaging are of great interest in total-body imaging. This article covers some basics of PET image reconstruction and then focuses on three- and four-dimensional PET reconstruction for total-body imaging. Methods for image formation from raw measurements in total-body PET are described. Challenges and opportunities in total-body PET image reconstruction are discussed.
Affiliation(s)
- Jinyi Qi
- Department of Biomedical Engineering, University of California, One Shields Avenue, Davis, CA 95616, USA.
- Samuel Matej
- Department of Radiology, University of Pennsylvania, 3620 Hamilton Walk, John Morgan Building, Room 156A, Philadelphia, PA 19104-6061, USA
- Guobao Wang
- Department of Radiology, University of California Davis Medical Center, Lawrence J. Ellison Ambulatory Care Center Building, Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Xuezhu Zhang
- Department of Biomedical Engineering, University of California, One Shields Avenue, Davis, CA 95616, USA
21
Vijay Kumar J, Harshavardhan A, Bhukya H, Krishna Prasad AV. Advanced Machine Learning-Based Analytics on COVID-19 Data Using Generative Adversarial Networks. MATERIALS TODAY. PROCEEDINGS 2020:S2214-7853(20)37620-3. [PMID: 33078094 PMCID: PMC7556782 DOI: 10.1016/j.matpr.2020.10.053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 10/03/2020] [Indexed: 11/01/2022]
Abstract
The domain of medical diagnosis and predictive analytics is a key research area of enormous scope, whereby diseases of different types can be predicted. Nowadays, there is widespread alarm over the impact and rapid mutation of the COVID-19 virus. The world has been affected by this virus to a huge extent, and no vaccine has been developed so far. India has more than 10,000 patients, with more than 300 deceased. The global human community has around two million (20 lakh) coronavirus patients. The Generative Adversarial Network (GAN) is a contemporary high-performance approach in which advanced neural networks are used for deep analysis of images and multimedia data. In this research work, analytics of key points from medical images of the COVID-19 dataset is presented, from which diagnoses and predictions can be made for patients. GANs are used for the generation, transformation, and presentation of the dataset and key points using advanced deep learning models, which can analyze patterns in medical images including X-ray, CT scan, and many others. With the integration of GANs, the overall predictive analytics can be made more performant than classical multilayer neural networks. In this research manuscript, the work is evaluated on benchmark datasets with advanced scripting so that predictive mining and knowledge discovery can be done effectively and with higher accuracy.
Affiliation(s)
- Hanumanthu Bhukya
- Department of CSE, Kakatiya Institute of Technology & Science, Warangal, Telangana, India
- A V Krishna Prasad
- Department of Computer Science and Engineering, MVSR Engineering College, Hyderabad, India