1. Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Transactions on Image Processing 2024; 33:4075-4089. [PMID: 38941203] [DOI: 10.1109/tip.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer a way to bring the power of deep learning to this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be efficiently solved by using the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes a PET activity image update, a gCT image update, and least-squares neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data and real patient data have demonstrated that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition as compared to other methods.
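To make the alternating structure in this abstract concrete (a tomographic likelihood update followed by an image-domain least-squares fit of a learned representation), the toy Python sketch below runs one ML-EM step and then refits a fixed non-negative linear basis that stands in for the convolutional network; all sizes, names, and the basis itself are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins, n_basis = 64, 96, 8

A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))          # toy system matrix
x_true = np.abs(np.sin(np.linspace(0, 3 * np.pi, n_pix))) + 0.1
y = rng.poisson(A @ x_true)                               # noisy projection data
B = np.maximum(rng.normal(size=(n_pix, n_basis)), 0.0)    # stand-in "network" basis

x = np.ones(n_pix)
for it in range(50):
    # (1) tomographic update: one ML-EM step on the current image estimate
    ratio = y / np.maximum(A @ x, 1e-9)
    x_em = x * (A.T @ ratio) / np.maximum(A.T @ np.ones(n_bins), 1e-9)
    # (2) image-domain least-squares fit of the representation to the EM image
    #     (the paper fits neural-network weights here instead of basis coefficients)
    coeff, *_ = np.linalg.lstsq(B, x_em, rcond=None)
    x = np.maximum(B @ coeff, 1e-6)                        # keep the image non-negative

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```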
2. Pan B, Marsden PK, Reader AJ. Kinetic model-informed deep learning for multiplexed PET image separation. EJNMMI Phys 2024; 11:56. [PMID: 38951271] [PMCID: PMC11555001] [DOI: 10.1186/s40658-024-00660-0]
Abstract
BACKGROUND Multiplexed positron emission tomography (mPET) imaging can measure physiological and pathological information from different tracers simultaneously in a single scan. Separating the multiplexed PET signals within a single PET scan is challenging because each tracer gives rise to indistinguishable 511 keV photon pairs, leaving no unique energy information for differentiating the source of each photon pair. METHODS Recently, many applications of deep learning for mPET image separation have concentrated on purely data-driven methods, e.g., training a neural network to separate mPET images into single-tracer dynamic/static images. These methods use over-parameterized networks with only a very weak inductive prior. In this work, we improve the inductive prior of the deep network by incorporating a general kinetic model based on spectral analysis. The model is incorporated, along with deep networks, into an unrolled image-space version of an iterative fully 4D PET reconstruction algorithm. RESULTS The performance of the proposed method was evaluated on a simulated brain image dataset for dual-tracer [18F]FDG + [11C]MET PET image separation. The results demonstrate that the proposed method can achieve separation performance comparable to that obtained with single-tracer imaging. In addition, the proposed method outperformed the model-based separation methods (the conventional voxel-wise multi-tracer compartment modeling method (v-MTCM) and the image-space dual-tracer version of the fully 4D PET image reconstruction algorithm (IS-F4D)), as well as a purely data-driven separation [using a convolutional encoder-decoder (CED)], with fewer training examples. CONCLUSIONS This work proposes a kinetic model-informed unrolled deep learning method for mPET image separation. In simulation studies, the method proved able to outperform both the conventional v-MTCM method and a purely data-driven CED with less training data.
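The following sketch illustrates only the "general kinetic model based on spectral analysis" ingredient mentioned above, fitting a single-tracer time-activity curve as a non-negative combination of exponential-convolved basis functions; the plasma input, decay rates, and noise level are arbitrary placeholders, and the unrolled network itself is not shown.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 60.0, 121)                  # minutes, toy time grid
dt = t[1] - t[0]
cp = (t / 2.0) * np.exp(-t / 4.0)                # toy plasma input function

betas = np.logspace(-3, 0, 16)                   # candidate spectral decay rates (1/min)
basis = np.stack(
    [np.convolve(cp, np.exp(-b * t))[: t.size] * dt for b in betas], axis=1
)

# Synthesise a noisy single-tracer TAC and recover its spectral coefficients.
true_coeff = np.zeros(betas.size)
true_coeff[[3, 10]] = [0.5, 0.2]
tac = basis @ true_coeff + 0.01 * np.random.default_rng(1).normal(size=t.size)

coeff, _ = nnls(basis, tac)                      # non-negative spectral fit
print("recovered non-zero rates:", betas[coeff > 1e-3])
```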
Affiliation(s)
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
3. Dutta K, Laforest R, Luo J, Jha AK, Shoghi KI. Deep learning generation of preclinical positron emission tomography (PET) images from low-count PET with task-based performance assessment. Med Phys 2024; 51:4324-4339. [PMID: 38710222] [PMCID: PMC11423763] [DOI: 10.1002/mp.17105]
Abstract
BACKGROUND Preclinical low-count positron emission tomography (LC-PET) imaging offers numerous advantages such as facilitating imaging logistics, enabling longitudinal studies of long- and short-lived isotopes, and increasing scanner throughput. However, LC-PET is characterized by reduced photon-count levels resulting in low signal-to-noise ratio (SNR), segmentation difficulties, and quantification uncertainties. PURPOSE We developed and evaluated a novel deep-learning (DL) architecture-Attention based Residual-Dilated Net (ARD-Net)-to generate standard-count PET (SC-PET) images from LC-PET images. The performance of the ARD-Net framework was evaluated for numerous low-count realizations using fidelity-based qualitative metrics, task-based segmentation, and quantitative metrics. METHODS Patient-derived tumor xenograft (PDX) models with tumors implanted in the mammary fat pad were subjected to preclinical [18F]-fluorodeoxyglucose (FDG)-PET/CT imaging. SC-PET images were derived from a 10 min static FDG-PET acquisition, 50 min post administration of FDG, and were resampled to generate four distinct LC-PET realizations corresponding to 10%, 5%, 1.6%, and 0.8% of the SC-PET count level. ARD-Net was trained and optimized using 48 preclinical FDG-PET datasets, while 16 datasets were utilized to assess performance. Further, the performance of ARD-Net was benchmarked against two leading DL-based methods (Residual UNet, RU-Net; and Dilated Network, D-Net) and non-DL methods (Non-Local Means, NLM; and Block Matching 3D Filtering, BM3D). The performance of the framework was evaluated using traditional fidelity-based image quality metrics such as the Structural Similarity Index Metric (SSIM) and Normalized Root Mean Square Error (NRMSE), as well as human observer-based tumor segmentation performance (Dice score and volume bias) and quantitative analysis of Standardized Uptake Value (SUV) measurements. Additionally, radiomics-derived features were utilized as a measure of quality assurance (QA) in comparison to true SC-PET. Finally, an ensemble performance score (EPS) was developed by integrating fidelity-based and task-based metrics. The Concordance Correlation Coefficient (CCC) was utilized to determine concordance between measures. The non-parametric Friedman test with Bonferroni correction was used to compare the performance of ARD-Net against the benchmarked methods, with significance at an adjusted p-value ≤0.01. RESULTS ARD-Net-generated SC-PET images exhibited significantly better (p ≤ 0.01 post Bonferroni correction) overall image fidelity scores in terms of SSIM and NRMSE at the majority of photon-count levels compared to the benchmarked DL and non-DL methods. In terms of task-based quantitative accuracy evaluated by SUVmean and SUVpeak, ARD-Net exhibited less than 5% median absolute bias for SUVmean compared to true SC-PET and a lower degree of variability compared to the benchmarked DL and non-DL methods in generating SC-PET. Additionally, ARD-Net-generated SC-PET images displayed a higher degree of concordance to SC-PET images in terms of radiomics features compared to the non-DL and other DL approaches. Finally, the ensemble score suggested that ARD-Net exhibited significantly superior performance compared to the benchmarked algorithms (p ≤ 0.01 post Bonferroni correction). CONCLUSION ARD-Net provides a robust framework to generate SC-PET from LC-PET images. ARD-Net-generated SC-PET images exhibited superior performance compared to other DL and non-DL approaches in terms of image-fidelity metrics and task-based segmentation metrics, with minimal bias in task-based quantification performance for preclinical PET imaging.
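As a hedged illustration of the fidelity metrics named in this abstract (SSIM and NRMSE), the snippet below compares a noisy stand-in for a generated image against a synthetic reference using scikit-image; the arrays are random placeholders, not data from the study.

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

rng = np.random.default_rng(0)
sc_pet = rng.gamma(shape=2.0, scale=1.0, size=(64, 64)).astype(np.float32)        # reference image
lc_pet = (sc_pet + rng.normal(scale=0.3, size=sc_pet.shape)).astype(np.float32)   # noisy stand-in

ssim = structural_similarity(
    sc_pet, lc_pet, data_range=float(sc_pet.max() - sc_pet.min())
)
nrmse = normalized_root_mse(sc_pet, lc_pet)
print(f"SSIM={ssim:.3f}  NRMSE={nrmse:.3f}")
```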
Affiliation(s)
- Kaushik Dutta
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Jingqin Luo
- Department of Surgery, Public Health Sciences, Washington University in St Louis, St Louis, Missouri, USA
- Abhinav K Jha
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Kooresh I Shoghi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
4. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE Transactions on Radiation and Plasma Medical Sciences 2024; 8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
5. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
6. Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv [preprint] 2024; arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
7. Yazdani E, Geramifar P, Karamzade-Ziarati N, Sadeghi M, Amini P, Rahmim A. Radiomics and Artificial Intelligence in Radiotheranostics: A Review of Applications for Radioligands Targeting Somatostatin Receptors and Prostate-Specific Membrane Antigens. Diagnostics (Basel) 2024; 14:181. [PMID: 38248059] [PMCID: PMC10814892] [DOI: 10.3390/diagnostics14020181]
Abstract
Radiotheranostics refers to the pairing of radioactive imaging biomarkers with radioactive therapeutic compounds that deliver ionizing radiation. Given the introduction of very promising radiopharmaceuticals, the radiotheranostics approach is creating a novel paradigm in personalized, targeted radionuclide therapies (TRTs), also known as radiopharmaceutical therapies (RPTs). Radiotherapeutic pairs targeting somatostatin receptors (SSTR) and prostate-specific membrane antigens (PSMA) are increasingly being used to diagnose and treat patients with metastatic neuroendocrine tumors (NETs) and prostate cancer. In parallel, radiomics and artificial intelligence (AI), as important areas in quantitative image analysis, are paving the way for significantly enhanced workflows in diagnostic and theranostic fields, from data and image processing to clinical decision support, improving patient selection, personalized treatment strategies, response prediction, and prognostication. Furthermore, AI has the potential to be highly effective in patient dosimetry, which involves complex and time-consuming tasks in the RPT workflow. The present work provides a comprehensive overview of radiomics and AI applications in radiotheranostics, focusing on pairs of SSTR- or PSMA-targeting radioligands, describing the fundamental concepts and specific imaging/treatment features. Our review includes ligands radiolabeled with 68Ga, 18F, 177Lu, 64Cu, 90Y, and 225Ac. Specifically, contributions via radiomics and AI towards improved image acquisition, reconstruction, treatment response assessment, segmentation, restaging, lesion classification, and dose prediction and estimation, as well as ongoing developments and future directions, are discussed.
Affiliation(s)
- Elmira Yazdani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Najme Karamzade-Ziarati
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Mahdi Sadeghi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Payam Amini
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1L3, Canada
8. Cha BK, Lee KH, Lee Y, Kim K. Optimization Method to Predict Optimal Noise Reduction Parameters for the Non-Local Means Algorithm Based on the Scintillator Thickness in Radiography. Sensors (Basel) 2023; 23:9803. [PMID: 38139649] [PMCID: PMC10747373] [DOI: 10.3390/s23249803]
Abstract
The resulting image obtained from an X-ray imaging system depends significantly on the characteristics of the detector. In particular, when an X-ray image is acquired with a thin detector, a relatively large amount of noise inevitably occurs. Conversely, when a thick detector is used to reduce noise in X-ray images, blurring increases and the ability to distinguish target areas deteriorates. In this study, we aimed to obtain optimal X-ray image quality by deriving the optimal noise-reduction parameters for the non-local means (NLM) algorithm. Detectors of two thicknesses (96 and 140 μm) were used, and images were acquired based on the IEC 62220-1-1:2015 RQA-5 protocol. The optimal parameters were derived by calculating the edge preservation index and signal-to-noise ratio as a function of the sigma value of the NLM algorithm. As a result, an optimized sigma value of 0.01 was derived for the NLM algorithm, and the algorithm was applied to a relatively thin X-ray detector system to obtain appropriate noise level and spatial resolution data. The no-reference blind/referenceless image spatial quality evaluator score, which assesses overall image quality, was best when using the proposed method. In conclusion, we propose an optimized NLM algorithm based on a new method that can overcome the noise amplification problem in thin X-ray detector systems and is expected to be applied in various photon imaging fields in the future.
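A minimal sketch of the kind of parameter sweep described above is given below, using scikit-image's non-local means denoiser and a PSNR score on a standard test image; the swept values and the choice of PSNR (rather than the paper's edge preservation index) are assumptions made purely for illustration.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.metrics import peak_signal_noise_ratio

clean = img_as_float(data.camera())
noisy = clean + np.random.default_rng(0).normal(scale=0.08, size=clean.shape)

sigma_est = estimate_sigma(noisy)                 # rough noise estimate
for h in (0.4, 0.8, 1.2):                         # candidate smoothing strengths
    denoised = denoise_nl_means(
        noisy, h=h * sigma_est, sigma=sigma_est,
        patch_size=5, patch_distance=6, fast_mode=True
    )
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    print(f"h = {h:.1f} * sigma  ->  PSNR = {psnr:.2f} dB")
```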
Affiliation(s)
- Bo Kyung Cha
- Precision Medical Device Research Center, Korea Electrotechnology Research Institute (KERI), 111 Hanggaul-ro, Sangnok-gu, Ansan-si 15588, Republic of Korea; (B.K.C.); (K.-H.L.)
- Kyeong-Hee Lee
- Precision Medical Device Research Center, Korea Electrotechnology Research Institute (KERI), 111 Hanggaul-ro, Sangnok-gu, Ansan-si 15588, Republic of Korea; (B.K.C.); (K.-H.L.)
- Youngjin Lee
- Department of Radiological Science, Gachon University, 191 Hambangmoe-ro, Yeonsu-gu, Incheon 21936, Republic of Korea
- Kyuseok Kim
- Department of Biomedical Engineering, Eulji University, 553 Sanseong-daero, Sujeong-gu, Seongnam-si 13135, Republic of Korea
9. Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648] [DOI: 10.1016/j.compmedimag.2023.102315]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) play a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images with low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of the structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the BrainWeb phantom, while the real patient dataset was acquired using a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using a 3D-UNET network. The reconstructed images are compared to ground truth images using metrics such as peak signal-to-noise ratio (PSNR), SSIM, and relative root mean square error (rRMSE) to quantitatively evaluate the accuracy of the reconstructed images. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39. These results outperformed the other methods, which had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55. These results also outperformed other state-of-the-art methods, such as OSEM, MAPOSEM, and 3DUNET-MAPEM. The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with other methods demonstrates the superiority of our approach, as it outperforms all other methods on all three metrics. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate the potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation compared to other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
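The sketch below shows one plausible form of the combined SSIM + MSE training loss mentioned in the methods, written in PyTorch; for brevity the SSIM term uses global image statistics rather than local windows, and the weighting alpha is an arbitrary choice, so this is illustrative rather than the authors' exact loss.

```python
import torch

def global_ssim(x: torch.Tensor, y: torch.Tensor, data_range: float = 1.0) -> torch.Tensor:
    # SSIM from global statistics (no sliding window) -- a simplification.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = ((x - mx) ** 2).mean(), ((y - my) ** 2).mean()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.8) -> torch.Tensor:
    mse = torch.mean((pred - target) ** 2)
    return alpha * (1.0 - global_ssim(pred, target)) + (1.0 - alpha) * mse

pred = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
loss = combined_loss(pred, target)
loss.backward()                                   # differentiable, so usable for training
print(float(loss))
```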
Affiliation(s)
- Sanaz Kaviani
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada.
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mersede Mokri
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
- Claire Cohalan
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
- Jean-Francois Carrier
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department of Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada
10. Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023; 96:20230292. [PMID: 37486607] [PMCID: PMC10546435] [DOI: 10.1259/bjr.20230292]
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total-body PET) is helping, there is always a need to improve reconstructed image quality due to the time- and count-limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods which rely on supervised learning from reference data, (ii) iterative (unrolled) methods which combine our physics and statistical models with AI learning from data, and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
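As a small, hedged example of approach (iii) above (benefit without any example training data), the snippet below fits a tiny untrained CNN from a fixed random input to a single noisy image, deep-image-prior style, relying on early stopping; the network size, iteration count, and test image are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
clean = torch.zeros(1, 1, 64, 64)
clean[..., 16:48, 16:48] = 1.0                    # simple synthetic "object"
noisy = clean + 0.3 * torch.randn_like(clean)

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, 64, 64)                     # fixed random network input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(300):                           # early stopping acts as the regulariser
    opt.zero_grad()
    loss = torch.mean((net(z) - noisy) ** 2)
    loss.backward()
    opt.step()

print("MSE to clean image:", float(torch.mean((net(z) - clean) ** 2)))
```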
Affiliation(s)
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
11. Lim H, Dewaraja YK, Fessler JA. SPECT reconstruction with a trained regularizer using CT-side information: Application to 177Lu SPECT imaging. IEEE Transactions on Computational Imaging 2023; 9:846-856. [PMID: 38516350] [PMCID: PMC10956080] [DOI: 10.1109/tci.2023.3318993]
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we used patient studies with 177Lu DOTATATE for training and tested with phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole-body SPECT, and reducing SPECT acquisition times.
Affiliation(s)
- Hongki Lim
- Department of Electronic Engineering, Inha University, Incheon, 22212, South Korea
- Yuni K Dewaraja
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109 USA
- Jeffrey A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
12. Chun IY, Huang Z, Lim H, Fessler JA. Momentum-Net: Fast and Convergent Iterative Neural Network for Inverse Problems. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:4915-4931. [PMID: 32750839] [PMCID: PMC8011286] [DOI: 10.1109/tpami.2020.3012955]
Abstract
Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs and an iterative model-based image reconstruction (MBIR) algorithm, often leading to both good generalization capability and better reconstruction quality than existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and noniterative MBIR modules at each iteration by using majorizers, where each iteration of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the "spectral spread" of majorization matrices. Numerical experiments for light-field photography using a focal stack and sparse-view computational tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
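A toy numpy rendering of the three modules named above (image refining, extrapolation, and a majorized MBIR step) is sketched below for a small least-squares data-fit term; the trained refiner is replaced by a fixed smoothing filter and the weighting between the data term and the refined image is an assumption, so this only mirrors the structure, not the actual Momentum-Net.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
A = rng.normal(size=(3 * n, n)) / np.sqrt(n)             # toy forward model
x_true = np.convolve(rng.normal(size=n), np.ones(9) / 9, mode="same")
y = A @ x_true + 0.05 * rng.normal(size=3 * n)

L = np.linalg.norm(A, 2) ** 2                             # majorizer curvature (Lipschitz constant)
beta = L                                                   # assumed weight pulling toward the refined image

def refine(z):                                             # stand-in for the trained refining CNN
    return np.convolve(z, np.ones(5) / 5, mode="same")

x, x_prev = np.zeros(n), np.zeros(n)
for k in range(1, 101):
    z = refine(x)                                          # (1) image refining module
    v = x + (k - 1) / (k + 2) * (x - x_prev)               # (2) extrapolation (momentum) module
    x_prev = x
    grad = A.T @ (A @ v - y)
    x = (L * (v - grad / L) + beta * z) / (L + beta)       # (3) noniterative majorized MBIR step

print("NRMSE:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```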
13. Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Transactions on Medical Imaging 2023; 42:785-796. [PMID: 36288234] [PMCID: PMC10081957] [DOI: 10.1109/tmi.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach for further improving the kernel method would be adding an explicit regularization, which, however, leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
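For readers unfamiliar with the kernel representation underlying KEM and neural KEM, the sketch below builds a sparse k-nearest-neighbour Gaussian kernel matrix K from per-voxel prior features and forms x = K a; the feature vectors, neighbourhood size, and bandwidth are invented for the example and do not reproduce the paper's construction.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_vox, n_feat, k = 500, 3, 10
features = rng.normal(size=(n_vox, n_feat))          # per-voxel feature vectors from prior images

nbrs = NearestNeighbors(n_neighbors=k).fit(features)
dist, idx = nbrs.kneighbors(features)                # k nearest neighbours of every voxel
sigma = np.median(dist)
w = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
w /= w.sum(axis=1, keepdims=True)                    # row-normalised kernel weights

rows = np.repeat(np.arange(n_vox), k)
K = csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(n_vox, n_vox))

a = rng.uniform(size=n_vox)                          # toy kernel coefficient image
x = K @ a                                            # kernel representation x = K a
print(K.shape, x.shape)
```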
14. Li S, Wang G. Deep Kernel Representation for Image Reconstruction in PET. IEEE Transactions on Medical Imaging 2022; 41:3029-3038. [PMID: 35584077] [PMCID: PMC9613528] [DOI: 10.1109/tmi.2022.3176002]
Abstract
Image reconstruction for positron emission tomography (PET) is challenging because of the ill-conditioned tomographic problem and low counting statistics. Kernel methods address this challenge by using kernel representation to incorporate image prior information in the forward model of iterative PET image reconstruction. Existing kernel methods construct the kernels commonly using an empirical process, which may lead to unsatisfactory performance. In this paper, we describe the equivalence between the kernel representation and a trainable neural network model. A deep kernel method is then proposed by exploiting a deep neural network to enable automated learning of an improved kernel model and is directly applicable to single subjects in dynamic PET. The training process utilizes available image prior data to form a set of robust kernels in an optimized way rather than empirically. The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform the existing kernel method and neural network method for dynamic PET image reconstruction.
15. Zavala-Mondragon LA, Rongen P, Bescos JO, de With PHN, van der Sommen F. Noise Reduction in CT Using Learned Wavelet-Frame Shrinkage Networks. IEEE Transactions on Medical Imaging 2022; 41:2048-2066. [PMID: 35201984] [DOI: 10.1109/tmi.2022.3154011]
Abstract
Encoding-decoding (ED) CNNs have demonstrated state-of-the-art performance for noise reduction over the past years. This has triggered the pursuit of a better understanding of the inner workings of such architectures, which has led to the theory of deep convolutional framelets (TDCF), revealing important links between signal processing and CNNs. Specifically, the TDCF demonstrates that ReLU CNNs induce low-rankness, since these models often do not satisfy the redundancy necessary to achieve perfect reconstruction (PR). In contrast, this paper explores CNNs that do meet the PR conditions. We demonstrate that in this type of CNN, soft shrinkage and PR can be assumed. Furthermore, based on our explorations we propose the learned wavelet-frame shrinkage network (LWFSN) and its residual counterpart, the rLWFSN. The ED path of the (r)LWFSN complies with the PR conditions, while the shrinkage stage is based on the linear expansion of thresholds proposed by Blu and Luisier. In addition, the LWFSN has only a fraction (<1%) of the training parameters of conventional CNNs, very short inference times, and a low memory footprint, while still achieving performance close to state-of-the-art alternatives, such as the tight frame (TF) U-Net and FBPConvNet, in low-dose CT denoising.
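The snippet below illustrates only the classical wavelet-frame soft-shrinkage operation that the LWFSN learns, using PyWavelets with a fixed hand-picked threshold; the wavelet, level, and threshold are arbitrary, and nothing here corresponds to the learned filters of the paper.

```python
import numpy as np
import pywt
from skimage import data, img_as_float

img = img_as_float(data.camera())
noisy = img + np.random.default_rng(0).normal(scale=0.1, size=img.shape)

coeffs = pywt.wavedec2(noisy, wavelet="db2", level=3)      # wavelet decomposition
thr = 0.1                                                  # fixed (not learned) threshold
shrunk = [coeffs[0]] + [
    tuple(pywt.threshold(c, thr, mode="soft") for c in detail) for detail in coeffs[1:]
]
denoised = pywt.waverec2(shrunk, wavelet="db2")[: img.shape[0], : img.shape[1]]

print("residual std before/after:",
      float(np.std(noisy - img)), float(np.std(denoised - img)))
```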
16. Artificial intelligence-based PET image acquisition and reconstruction. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00508-6]
17. Li Z, Long Y, Chun IY. An improved iterative neural network for high-quality image-domain material decomposition in dual-energy CT. Med Phys 2022; 50:2195-2211. [PMID: 35735056] [DOI: 10.1002/mp.15817]
Abstract
PURPOSE Dual-energy computed tomography (DECT) has been widely used in many applications that need material decomposition. Image-domain methods directly decompose material images from high- and low-energy attenuation images, and thus are susceptible to noise and artifacts in the attenuation images. The purpose of this study is to develop an improved iterative neural network (INN) for high-quality image-domain material decomposition in DECT, and to study its properties. METHODS We propose a new INN architecture for DECT material decomposition. The proposed INN architecture uses distinct cross-material convolutional neural network (CNN) refiners in its image refining modules, and uses image decomposition physics in its image reconstruction modules. The distinct cross-material CNN refiners incorporate distinct encoding-decoding filters and a cross-material model that captures correlations between different materials. We study the distinct cross-material CNN refiner with a patch-based reformulation and a tight-frame condition. RESULTS Numerical experiments with an extended cardiac-torso phantom and clinical data show that the proposed INN significantly improves the image quality over several image-domain material decomposition methods, including a conventional model-based image decomposition (MBID) method using an edge-preserving regularizer, a recent MBID method using pre-learned material-wise sparsifying transforms, and a noniterative deep CNN method. Our study with patch-based reformulations reveals that the learned filters of the distinct cross-material CNN refiners can approximately satisfy the tight-frame condition. CONCLUSIONS The proposed INN architecture achieves high-quality material decompositions using iteration-wise refiners that exploit cross-material properties between different material images with distinct encoding-decoding filters. Our tight-frame study implies that the cross-material CNN refiners in the proposed INN architecture are useful for noise suppression and signal restoration.
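As background for the noise amplification that the proposed INN is designed to counter, the toy example below performs direct per-pixel two-material decomposition by inverting an assumed 2x2 mixing matrix applied to synthetic high/low-energy images; the matrix values and noise level are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# assumed mass-attenuation mixing matrix: rows = energies (high, low), columns = materials
M = np.array([[0.20, 0.45],
              [0.17, 0.30]])
water = rng.uniform(0.0, 1.0, (64, 64))              # toy material density images
bone = rng.uniform(0.0, 0.3, (64, 64))

atten = np.einsum("em,mij->eij", M, np.stack([water, bone]))
atten += 0.005 * rng.normal(size=atten.shape)        # measurement noise on the attenuation images

Minv = np.linalg.inv(M)
decomp = np.einsum("me,eij->mij", Minv, atten)       # direct per-pixel inversion
rmse_water = float(np.sqrt(np.mean((decomp[0] - water) ** 2)))
print("water RMSE after direct inversion:", rmse_water)
```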
Affiliation(s)
- Zhipeng Li
- University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yong Long
- University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- Il Yong Chun
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Gyeonggi, 16419, Republic of Korea
18. Xu J, Noo F. Convex optimization algorithms in medical image reconstruction-in the age of AI. Phys Med Biol 2022; 67. [PMID: 34757943] [PMCID: PMC10405576] [DOI: 10.1088/1361-6560/ac3842]
Abstract
The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
Affiliation(s)
- Jingyan Xu
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Frédéric Noo
- Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, United States of America
19. Gong K, Catana C, Qi J, Li Q. Direct Reconstruction of Linear Parametric Images From Dynamic PET Using Nonlocal Deep Image Prior. IEEE Transactions on Medical Imaging 2022; 41:680-689. [PMID: 34652998] [PMCID: PMC8956450] [DOI: 10.1109/tmi.2021.3120913]
Abstract
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels is available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long enough. In this work, we proposed an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, which is readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint, and was also utilized to construct a kernel layer to perform non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1 × 1 × 1 convolution layer. Evaluations based on dynamic datasets of 18F-FDG and 11C-PiB tracers show that the proposed framework can outperform the traditional and the kernel method-based direct reconstruction methods.
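The linearity that lets the Patlak model be embedded as a 1 × 1 × 1 convolution can be seen in the toy fit below, where a synthetic tissue curve is regressed onto the integrated and instantaneous plasma input after an assumed steady-state time; all kinetic values and the input function are made up for the example.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 61)                       # minutes
cp = (t / 2.0) * np.exp(-t / 6.0) + 0.05             # toy plasma input function
int_cp = np.cumsum(cp) * (t[1] - t[0])               # running integral of the input

ki_true, vb_true = 0.03, 0.4
ct = ki_true * int_cp + vb_true * cp                 # toy tissue curve obeying the Patlak model
ct += 0.01 * np.random.default_rng(0).normal(size=t.size)

mask = t >= 20.0                                      # fit only after an assumed steady-state time t*
X = np.stack([int_cp[mask], cp[mask]], axis=1)        # design matrix [integral of Cp, Cp]
ki_est, vb_est = np.linalg.lstsq(X, ct[mask], rcond=None)[0]
print(f"Ki true/est: {ki_true:.3f}/{ki_est:.3f}   Vb true/est: {vb_true:.3f}/{vb_est:.3f}")
```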
20. Lv L, Zeng GL, Zan Y, Hong X, Guo M, Chen G, Tao W, Ding W, Huang Q. A back-projection-and-filtering-like (BPF-like) reconstruction method with the deep learning filtration from listmode data in TOF-PET. Med Phys 2022; 49:2531-2544. [PMID: 35122265] [PMCID: PMC10080664] [DOI: 10.1002/mp.15520]
Abstract
PURPOSE The time-of-flight (TOF) information improves signal-to-noise ratio (SNR) for positron emission tomography (PET) imaging. Existing analytical algorithms for TOF PET usually follow a filtered back-projection process on reconstructing images from the sinogram data. This work aims to develop a back-projection-and-filtering-like (BPF-like) algorithm that reconstructs the TOF PET image directly from listmode data rapidly. METHODS We extended the 2D conventional non-TOF PET projection model to a TOF case, where projection data are represented as line integrals weighted by the one-dimensional TOF kernel along the projection direction. After deriving the central slice theorem and the TOF back-projection of listmode data, we designed a deep learning network with a modified U-net architecture to perform the spatial filtration (reconstruction filter). The proposed BP-Net method was validated via Monte Carlo simulations of TOF PET listmode data with three different time resolutions for two types of activity phantoms. The network was only trained on the simulated full-dose XCAT dataset and then evaluated on XCAT and Jaszczak data with different time resolutions and dose levels. RESULTS Reconstructed images show that when compared with the conventional BPF algorithm and the MLEM algorithm proposed for TOF PET, the proposed BP-Net method obtains better image quality in terms of peak signal-to-noise ratio, relative mean square error, and structure similarity index; besides, the reconstruction speed of the BP-Net is 1.75 times faster than BPF and 29.05 times faster than MLEM using 15 iterations. The results also indicate that the performance of the BP-Net degrades with worse time resolutions and lower tracer doses, but degrades less than BPF or MLEM reconstructions. CONCLUSION In this work, we developed an analytical-like reconstruction in the form of BPF with the reconstruction filtering operation performed via a deep network. The method runs even faster than the conventional BPF algorithm and provides accurate reconstructions from listmode data in TOF-PET, free of rebinning data to a sinogram.
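A rough 2-D sketch of the TOF-weighted listmode back-projection step (the "BP" half of the BPF-like pipeline) is given below, depositing a 1-D Gaussian TOF kernel along each event's line of response; the event distribution, kernel width, and grid are invented, and the learned filtering network is not included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 129
img = np.zeros((n, n))
centre = (n - 1) / 2
tof_sigma = 6.0                                       # TOF kernel width in pixels

for _ in range(20000):                                # synthetic listmode events
    phi = rng.uniform(0.0, np.pi)                     # LOR orientation
    s = rng.normal(0.0, 10.0)                         # radial offset of the LOR
    t_tof = rng.normal(0.0, tof_sigma)                # measured TOF position along the LOR
    d = np.array([np.cos(phi), np.sin(phi)])          # unit vector along the LOR
    p = np.array([-np.sin(phi), np.cos(phi)]) * s     # point on the LOR closest to the centre
    ts = np.linspace(-centre, centre, 2 * n)          # sample positions along the LOR
    w = np.exp(-0.5 * ((ts - t_tof) / tof_sigma) ** 2)  # 1-D Gaussian TOF weighting
    xs = np.clip(np.round(centre + p[0] + ts * d[0]).astype(int), 0, n - 1)
    ys = np.clip(np.round(centre + p[1] + ts * d[1]).astype(int), 0, n - 1)
    np.add.at(img, (ys, xs), w)                       # deposit the weighted kernel into the image

print("TOF back-projection image: min/max =", float(img.min()), float(img.max()))
```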
Affiliation(s)
- Li Lv
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gengsheng L. Zeng
- Department of Computer Science, Utah Valley University, Orem, UT 84058, USA
- Yunlong Zan
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiang Hong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Minghao Guo
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gaoyu Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Weijie Tao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Wenxiang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiu Huang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
21. Corda-D'Incan G, Schnabel JA, Reader AJ. Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction with Iteration-Dependent Targets. IEEE Transactions on Radiation and Plasma Medical Sciences 2022; 6:552-563. [PMID: 35664091] [PMCID: PMC7612803] [DOI: 10.1109/trpms.2021.3101947]
Abstract
We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step by a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used to have a customised regularisation at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need for artificial use of the regularisation step as a leapfrogging acceleration. The results obtained on 2D and 3D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency in the optimisation of the network parameters over different training runs. We also found that using iteration-dependent targets increases the generalisation capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularisation allowed a slight reduction in reconstruction error compared to using a fixed regularisation network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3D fully unrolled FBSEM-Net, not previously feasible, by reducing the memory usage by up to 98% compared to a conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network’s performance compared to conventional training with a full backpropagation through the entire network.
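The memory-saving idea behind sequential training can be sketched as below: each unrolled block is optimised on its own with the previous block's output detached, instead of backpropagating through the full unrolled network; the block architecture, targets, and step counts are placeholders rather than the FBSEM-Net configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_unrolled, n_steps = 5, 50
blocks = nn.ModuleList(
    nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
    for _ in range(n_unrolled)
)
noisy = torch.rand(4, 1, 32, 32)                                   # stand-in reconstructions
targets = [torch.rand(4, 1, 32, 32) for _ in range(n_unrolled)]    # iteration-dependent targets

x = noisy
for k, block in enumerate(blocks):              # train block k alone, with its input detached
    opt = torch.optim.Adam(block.parameters(), lr=1e-3)
    x_in = x.detach()
    for _ in range(n_steps):
        opt.zero_grad()
        out = x_in + block(x_in)                # residual regularisation step
        loss = torch.mean((out - targets[k]) ** 2)
        loss.backward()
        opt.step()
    x = x_in + block(x_in)                      # pass the refined image to the next block

print("final output shape:", tuple(x.shape))
```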
Affiliation(s)
- Guillaume Corda-D'Incan
- School of Biomedical Engineering and Imaging Sciences, Department of Biomedical Engineering, King's College London, St. Thomas' Hospital, London, UK
- Julia A Schnabel
- School of Biomedical Engineering and Imaging Sciences, Department of Biomedical Engineering, King's College London, St. Thomas' Hospital, London, UK
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, Department of Biomedical Engineering, King's College London, St. Thomas' Hospital, London, UK
22. Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744] [PMCID: PMC8107336] [DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
23. Lv Y, Xi C. PET image reconstruction with deep progressive learning. Phys Med Biol 2021; 66. [PMID: 33892485] [DOI: 10.1088/1361-6560/abfb17]
Abstract
Convolutional neural networks (CNNs) have recently achieved state-of-the-art results for positron emission tomography (PET) imaging problems. However, direct learning from an input image to a target image is challenging if the gap between the two images is large. Previous studies have shown that CNNs can reduce image noise, but they can also degrade contrast recovery for small lesions. In this work, a deep progressive learning (DPL) method for PET image reconstruction is proposed to reduce background noise and improve image contrast. DPL bridges the gap between low-quality and high-quality images through two learning steps. In the iterative reconstruction process, two pre-trained neural networks are introduced to control the image noise and contrast in turn. A feedback structure is adopted in the network design, which greatly reduces the number of parameters. The training data come from uEXPLORER, the world's first total-body PET scanner, whose PET images show high contrast and very low image noise. We conducted extensive phantom and patient studies to test the algorithm for PET image quality improvement. The experimental results show that DPL is promising for reducing noise and improving the contrast of PET images. Moreover, the proposed method has sufficient versatility to solve various imaging and image processing problems.
Affiliation(s)
- Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
24. da Costa-Luis CO, Reader AJ. Micro-Networks for Robust MR-Guided Low Count PET Imaging. IEEE Transactions on Radiation and Plasma Medical Sciences 2021; 5:202-212. [PMID: 33681546] [PMCID: PMC7931458] [DOI: 10.1109/trpms.2020.2986414]
Abstract
Noise suppression is particularly important in low-count positron emission tomography (PET) imaging. Post-smoothing (PS) and regularization methods which aim to reduce noise also tend to reduce resolution and introduce bias. Alternatively, anatomical information from another modality such as magnetic resonance (MR) imaging can be used to improve image quality. Convolutional neural networks (CNNs) are particularly well suited to such joint image processing, but usually require large amounts of training data and have mostly been applied outside the field of medical imaging or focus on classification and segmentation, leaving PET image quality improvement relatively understudied. This article proposes the use of a relatively low-complexity CNN (micro-net) as a post-reconstruction MR-guided image processing step to reduce noise and reconstruction artefacts while also improving resolution in low-count PET scans. The CNN is designed to be fully 3-D, robust to very limited amounts of training data, and to accept multiple inputs (including competitive denoising methods). Application of the proposed CNN on simulated low-count (30 M) data (trained to produce standard-count (300 M) reconstructions) results in a 36% lower normalized root mean squared error (NRMSE, calculated over ten realizations against the ground truth) compared to maximum-likelihood expectation maximization (MLEM) used in clinical practice. In contrast, a decrease of only 25% in NRMSE is obtained when an optimized (using knowledge of the ground truth) PS is performed. A 26% NRMSE decrease is obtained with both resolution modelling (RM) and optimized PS. Similar improvement is also observed for low-count real patient datasets. Overfitting to training data is demonstrated to occur as the network size is increased. In an extreme case, a U-net (which produces better predictions for training data) is shown to completely fail on test data due to overfitting to this case of very limited training data. Meanwhile, the resultant images from the proposed CNN (which has low training data requirements) have lower noise, reduced ringing and partial volume effects, as well as sharper edges and improved resolution compared to conventional MLEM.
Affiliation(s)
- Casper O. da Costa-Luis
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St. Thomas’ Hospital, King’s College London, London SE1 7EH, UK
- Andrew J. Reader
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, St. Thomas’ Hospital, King’s College London, London SE1 7EH, UK
25. Mok GSP, Dewaraja YK. Recent advances in voxel-based targeted radionuclide therapy dosimetry. Quant Imaging Med Surg 2021; 11:483-489. [PMID: 33532249] [PMCID: PMC7779928] [DOI: 10.21037/qims-20-1006]
Affiliation(s)
- Greta S. P. Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau, China
- Yuni K. Dewaraja
- Department of Radiology, University of Michigan Medical School, Ann Arbor, MI, USA
26. Reader AJ, Corda G, Mehranian A, Costa-Luis CD, Ellis S, Schnabel JA. Deep Learning for PET Image Reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences 2021. [DOI: 10.1109/trpms.2020.3014786]