1. Lee H. Monte Carlo methods for medical imaging research. Biomed Eng Lett 2024; 14:1195-1205. [PMID: 39465109 PMCID: PMC11502642 DOI: 10.1007/s13534-024-00423-x]
Abstract
In radiation-based medical imaging research, computational modeling methods are used to design and validate imaging systems and post-processing algorithms. Monte Carlo (MC) methods are widely used for this computational modeling because they can model systems accurately and intuitively by sampling interactions between particles and the imaging subject from known probability distributions. This article reviews the physics behind MC methods, their applications in medical imaging, and the MC codes available for medical imaging research. Additionally, potential research areas related to Monte Carlo methods for medical imaging are discussed.
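The core of such simulations is sampling particle histories from known probability distributions. As a concrete illustration (not code from the reviewed article), the minimal Python sketch below samples photon free-path lengths from an exponential distribution and chooses interaction types in proportion to their cross-sections; the coefficient values are assumed for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) attenuation coefficients in water, in 1/cm.
MU_PHOTOELECTRIC = 0.002
MU_COMPTON = 0.148
MU_TOTAL = MU_PHOTOELECTRIC + MU_COMPTON

def sample_free_path(n):
    """Sample photon free-path lengths from the exponential distribution."""
    return rng.exponential(scale=1.0 / MU_TOTAL, size=n)

def sample_interaction(n):
    """Choose interaction type with probability proportional to its cross-section."""
    return rng.choice(["photoelectric", "compton"], size=n,
                      p=[MU_PHOTOELECTRIC / MU_TOTAL, MU_COMPTON / MU_TOTAL])

paths = sample_free_path(100_000)
kinds = sample_interaction(100_000)
print(f"mean free path: {paths.mean():.2f} cm (expected {1.0 / MU_TOTAL:.2f} cm)")
print(f"compton fraction: {(kinds == 'compton').mean():.3f}")
```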
Affiliation(s)
- Hoyeon Lee
- Department of Diagnostic Radiology and Centre of Cancer Medicine, University of Hong Kong, Hong Kong, China
2. Kuang X, Li B, Lyu T, Xue Y, Huang H, Xie Q, Zhu W. PET image reconstruction using weighted nuclear norm maximization and deep learning prior. Phys Med Biol 2024; 69:215023. [PMID: 39374634 DOI: 10.1088/1361-6560/ad841d]
Abstract
The ill-posed positron emission tomography (PET) reconstruction problem usually results in limited resolution and significant noise. Recently, deep neural networks have been incorporated into the PET iterative reconstruction framework to improve image quality. In this paper, we propose a new neural network-based iterative reconstruction method using weighted nuclear norm (WNN) maximization, which aims to recover image details during reconstruction. The novelty of our method is the application of WNN maximization, rather than WNN minimization, in PET image reconstruction. Meanwhile, a neural network is used to control the noise originating from WNN maximization. Our method is evaluated on simulated and clinical datasets. The simulation results show that the proposed approach outperforms state-of-the-art neural network-based iterative methods by achieving the best contrast/noise tradeoff, with a remarkable improvement in lesion contrast recovery. The study on clinical datasets also demonstrates that our method can recover lesions of different sizes while suppressing noise in various low-dose PET image reconstruction tasks. Our code is available at https://github.com/Kuangxd/PETReconstruction.
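The weighted nuclear norm at the heart of this method is simply a weighted sum of the singular values of a matrix of grouped image patches. The sketch below computes it with NumPy (an illustration under that standard definition, not the authors' released code); note how added noise inflates the small singular values and hence the norm.

```python
import numpy as np

def weighted_nuclear_norm(patch_matrix, weights=None):
    """Weighted nuclear norm: sum_i w_i * sigma_i over singular values sigma_i."""
    s = np.linalg.svd(patch_matrix, compute_uv=False)
    if weights is None:
        weights = np.ones_like(s)
    return float(np.sum(weights * s))

# Toy example: a rank-1 "patch group" plus noise.
rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=32), rng.normal(size=16))
noisy = low_rank + 0.1 * rng.normal(size=(32, 16))
print(weighted_nuclear_norm(low_rank), weighted_nuclear_norm(noisy))
```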
Affiliation(s)
- Xiaodong Kuang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Bingxuan Li
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Tianling Lyu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Yitian Xue
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Hailiang Huang
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Qingguo Xie
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Wentao Zhu
- Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
3. Lang Y, Jiang Z, Sun L, Tran P, Mossahebi S, Xiang L, Ren L. Patient-specific deep learning for 3D protoacoustic image reconstruction and dose verification in proton therapy. Med Phys 2024; 51:7425-7438. [PMID: 38980065 PMCID: PMC11479840 DOI: 10.1002/mp.17294]
Abstract
BACKGROUND Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification of proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images. As the model was trained using a limited number of patients' data, its efficacy was limited when applied to individual patients. PURPOSE In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve the reconstruction quality and the accuracy of dose verification for individual patients. METHODS Our method consists of two stages: in the first stage, a group model is trained on a diverse training set containing all patients, where a novel deep learning network is employed to directly reconstruct the initial pressure maps from the radiofrequency (RF) signals; in the second stage, we apply transfer learning on the pre-trained group model, using a patient-specific dataset derived from a novel data augmentation method, to tune it into a patient-specific model. Raw PA signals were simulated based on computed tomography (CT) images and the pressure map derived from the planned dose. The reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), the structural similarity index measure (SSIM), and the gamma index on 10 prostate cancer patients. Significance was evaluated by t-test with a p-value threshold of 0.05 against the results of the group model. RESULTS The patient-specific model achieved an average RMSE of 0.014 (p < 0.05) and an average SSIM of 0.981 (p < 0.05), outperforming the group model. Qualitative results also demonstrated that our patient-specific approach achieved better imaging quality, with more details reconstructed, compared with the group model. Dose verification achieved an average RMSE of 0.011 (p < 0.05) and an average SSIM of 0.995 (p < 0.05). Gamma index evaluation demonstrated a high agreement (97.4% [p < 0.05] and 97.9% [p < 0.05] for 1%/3 mm and 1%/5 mm) between the predicted and ground truth dose maps. Our approach took approximately 6 s to reconstruct PA images for each patient, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. CONCLUSIONS Our method demonstrated the feasibility of achieving 3D high-precision PA-based dose verification using patient-specific deep learning approaches, which can potentially be used to guide treatment, mitigate the impact of range uncertainty, and improve precision. Further studies are needed to validate the clinical impact of the technique.
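The two-stage recipe (group pre-training, then patient-specific fine-tuning) is a standard transfer-learning pattern. Below is a minimal, hypothetical PyTorch sketch of the second stage only: a stand-in network's early layers are frozen and the remainder is fine-tuned on surrogate patient-specific pairs. The architecture, shapes, and data here are placeholders, not the authors' model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pre-trained group reconstruction network.
group_model = nn.Sequential(
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, 5, padding=2),
)

# Freeze the first layer; fine-tune the rest on patient-specific pairs.
for p in group_model[0].parameters():
    p.requires_grad = False

opt = torch.optim.Adam(
    [p for p in group_model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.MSELoss()

rf_signals = torch.randn(8, 1, 256)     # surrogate RF inputs
pressure_maps = torch.randn(8, 1, 256)  # surrogate augmented targets

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(group_model(rf_signals), pressure_maps)
    loss.backward()
    opt.step()
```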
Affiliation(s)
- Yankun Lang
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Leshan Sun
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Phuoc Tran
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Sina Mossahebi
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Liangzhong Xiang
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Lei Ren
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
4. An H, Khan J, Kim S, Choi J, Jung Y. The Adaption of Recent New Concepts in Neural Radiance Fields and Their Role for High-Fidelity Volume Reconstruction in Medical Images. Sensors (Basel) 2024; 24:5923. [PMID: 39338670 PMCID: PMC11436004 DOI: 10.3390/s24185923]
Abstract
Volume reconstruction techniques are gaining increasing interest in medical domains due to their potential to learn complex 3D structural information from sparse 2D images. Recently, neural radiance fields (NeRF), which implicitly model continuous radiance fields with multi-layer perceptrons to enable volume reconstruction of objects at arbitrary resolution, have gained traction in natural-image volume reconstruction. However, the direct application of NeRF to medical volume reconstruction presents unique challenges due to differences in imaging principles, internal structure requirements, and boundary delineation. In this paper, we evaluate different NeRF techniques developed for natural images, including sampling strategies, feature encoding, and the use of complementary features, by applying them to medical images. We evaluate three state-of-the-art NeRF techniques on four medical image datasets of different complexity. Our goal is to identify the strengths, limitations, and future directions for integrating NeRF into the medical domain.
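A building block shared by the evaluated NeRF variants is frequency-based feature encoding of spatial coordinates before the multi-layer perceptron. The sketch below shows the standard NeRF positional encoding (a generic illustration; the number of frequency bands is an arbitrary choice, not taken from the paper).

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style frequency encoding: maps coordinates to sin/cos features."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

pts = np.random.rand(4, 3)               # 3D sample points in [0, 1]^3
print(positional_encoding(pts).shape)     # (4, 3 + 2*6*3) = (4, 39)
```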
Affiliation(s)
- Haill An
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Jawad Khan
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Suhyeon Kim
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Junseo Choi
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Younhyun Jung
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
5. Lv L, Zeng GL, Chen G, Ding W, Weng F, Huang Q. The effects of back-projection variants in BPF-like TOF PET reconstruction using CNN filtration - Based on simulated and clinical brain data. Med Phys 2024; 51:6161-6175. [PMID: 38828883 PMCID: PMC11489027 DOI: 10.1002/mp.17191]
Abstract
BACKGROUND Back-projection strategies such as confidence weighting (CW) and most likely annihilation position (MLAP) have been adopted in back-projection-and-filtering-like (BPF-like) deep reconstruction models and have shown great potential for fast and accurate PET reconstruction. Although the two methods degenerate to an identical model at a time resolution of 0 ps, they represent two distinct approaches at the realistic time resolutions of current commercial systems. A systematic and fair assessment of these differences has been lacking. PURPOSE This work aims to analyze the impact of back-projection variants on CNN-based PET image reconstruction to find the most effective back-projection model, and ultimately contribute to accurate PET reconstruction. METHODS Different back-projection strategies (CW and MLAP) and different angular view processing methods (view-summed and view-grouped) were considered, leading to the comparison of four back-projection variants integrated with the same CNN filtration model. Meanwhile, we investigated two strategies for physical effect compensation: either introducing pre-corrected data as the input or adding a channel with the attenuation map to the CNN model. After training the models separately on Monte-Carlo-simulated BrainWeb phantoms at full dose (3×10^7 events), we tested them on both simulated phantoms and clinical brain scans at two dosage levels. For the performance assessment, peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) were used to evaluate pixel-wise error, the structural similarity index (SSIM) to evaluate structural similarity, and the contrast recovery coefficient (CRC) in manually selected ROIs to compare region recovery. RESULTS Compared to the two MLAP-based histo-image reconstruction models, the two CW-based back-projected image methods produced clearer, sharper, and more detailed images from both simulated and clinical data. For angular view processing, the view-grouped histo-image improved image quality, while the view-grouped cwbp-image showed no advantage except for contrast recovery. Quantitative analysis on simulated data demonstrated that the view-summed cwbp-image model achieved the best PSNR, RMSE, and SSIM, while the 8-view cwbp-image model achieved the best CRC in lesions and the white matter. Additionally, the multi-channel input model including the back-projection image and the attenuation map proved to be the most efficient and simplest method of compensating for physical effects in brain data. Applying Gaussian blur to the histo-image yielded images with limited improvement. All of the above results hold for both the half-dose and the full-dose cases. CONCLUSION For brain imaging, evaluation based on PSNR, RMSE, SSIM, and CRC indicates that the view-summed CW-based back-projection variant is the most effective input for a BPF-like reconstruction model using CNN filtration, which can incorporate the attenuation map through an additional channel to effectively compensate for physical effects.
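The CW/MLAP distinction is easy to see in one dimension. In the toy sketch below (an illustration of the two deposition rules, not the authors' implementation), a coincidence event's time-of-flight estimate either spreads a Gaussian confidence weight along the line of response (CW) or deposits all weight at the most likely annihilation position (MLAP); the 250 ps timing resolution is an assumed value.

```python
import numpy as np

def tof_backproject(event_pos, sigma_cm, lor_coords, mode="cw"):
    """Toy 1D deposition of one coincidence event along its LOR.

    mode="cw":   confidence-weighted Gaussian spread around the TOF estimate.
    mode="mlap": all weight at the most likely annihilation position.
    """
    img = np.zeros_like(lor_coords)
    if mode == "mlap":
        img[np.argmin(np.abs(lor_coords - event_pos))] = 1.0
    else:
        w = np.exp(-0.5 * ((lor_coords - event_pos) / sigma_cm) ** 2)
        img = w / w.sum()
    return img

coords = np.linspace(-20, 20, 401)        # positions along the LOR (cm)
sigma = (3e10 * 250e-12 / 2) / 2.355      # c*dt/2, FWHM -> sigma, in cm
print(tof_backproject(2.5, sigma, coords, "cw").sum())   # normalised to 1
print(tof_backproject(2.5, sigma, coords, "mlap").sum()) # single-bin deposit
```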
Affiliation(s)
- Li Lv
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Gaoyu Chen
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, China
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Wenxiang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Fenghua Weng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qiu Huang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
6. Vashistha R, Vegh V, Moradi H, Hammond A, O’Brien K, Reutens D. Modular GAN: positron emission tomography image reconstruction using two generative adversarial networks. Front Radiol 2024; 4:1466498. [PMID: 39328298 PMCID: PMC11425657 DOI: 10.3389/fradi.2024.1466498]
Abstract
Introduction The reconstruction of PET images involves converting sinograms, which represent the counts of radioactive emissions measured using detector rings encircling the patient, into meaningful images. However, the quality of PET data acquisition is affected by physical factors, photon count statistics, and detector characteristics, which influence the signal-to-noise ratio, resolution, and quantitative accuracy of the resulting images. Correction methods have been developed to mitigate each of these issues separately. Recently, generative adversarial networks (GANs) have shown promise in learning the complex mapping between acquired PET data and reconstructed tomographic images. This study investigates the properties of training images that contribute to GAN performance when non-clinical images are used for training. Additionally, we describe a method to correct common PET imaging artefacts without relying on patient-specific anatomical images. Methods The modular GAN framework includes two GANs. Module 1, resembling the Pix2pix architecture, is trained on non-clinical sinogram-image pairs. Training data are optimised by considering image properties defined by metrics. The second module utilises adaptive instance normalisation and style embedding to enhance the quality of images from Module 1. Additional perceptual and patch-based loss functions are employed in training both modules. The performance of the new framework was compared with that of existing methods (filtered backprojection (FBP) and ordered subset expectation maximisation without (OSEM) and with point spread function (OSEM-PSF)) with respect to correction for attenuation, patient motion, and noise in simulated, NEMA phantom, and human imaging data. Evaluation metrics included structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and relative root mean squared error (rRMSE) for simulated data, and contrast-to-noise ratio (CNR) for NEMA phantom and human data. Results For simulated test data, the performance of the proposed framework was both qualitatively and quantitatively superior to that of FBP and OSEM. In the presence of noise, Module 1 generated images with an SSIM of 0.48 or higher. These images exhibited coarse structures that were subsequently refined by Module 2, yielding images with an SSIM higher than 0.71 (at least 22% higher than OSEM). The proposed method was robust against noise and motion. For NEMA phantoms, it achieved higher CNR values than OSEM. For human images, the CNR in brain regions was significantly higher than that of FBP and OSEM (p < 0.05, paired t-test). The CNR of images reconstructed with OSEM-PSF was similar to that of the proposed method. Conclusion The proposed image reconstruction method can produce PET images with artefact correction.
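Module 2's adaptive instance normalisation (AdaIN) re-styles feature maps by replacing their channel statistics with those of a style source. The sketch below shows the standard AdaIN operation in PyTorch (a generic illustration with random tensors, not code from this framework).

```python
import torch

def adaptive_instance_norm(content, style, eps=1e-5):
    """AdaIN: normalise content features, then apply style's channel statistics."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

content = torch.randn(1, 64, 32, 32)  # features from Module 1's output
style = torch.randn(1, 64, 32, 32)    # features carrying the target style
print(adaptive_instance_norm(content, style).shape)
```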
Affiliation(s)
- Rajat Vashistha
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Viktor Vegh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Hamed Moradi
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, Australia
- Amanda Hammond
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, Australia
- Kieran O’Brien
- Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, Australia
- David Reutens
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
7. Yang J, Afaq A, Sibley R, McMillan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024:10.1007/s10334-024-01199-y. [PMID: 39167304 DOI: 10.1007/s10334-024-01199-y]
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains an area of limited exploration for pediatric whole-body PET/MR and lung-specific DLAC due to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data, but such pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets to address the shortage of clinical data. This summary of technical and clinical challenges and potential solutions may open research opportunities for the community towards the clinical translation of DL solutions.
Affiliation(s)
- Jaewon Yang
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA.
- Asim Afaq
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMillan
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
8. Dutta K, Laforest R, Luo J, Jha AK, Shoghi KI. Deep learning generation of preclinical positron emission tomography (PET) images from low-count PET with task-based performance assessment. Med Phys 2024; 51:4324-4339. [PMID: 38710222 PMCID: PMC11423763 DOI: 10.1002/mp.17105]
Abstract
BACKGROUND Preclinical low-count positron emission tomography (LC-PET) imaging offers numerous advantages, such as facilitating imaging logistics, enabling longitudinal studies of long- and short-lived isotopes, and increasing scanner throughput. However, LC-PET is characterized by reduced photon-count levels, resulting in low signal-to-noise ratio (SNR), segmentation difficulties, and quantification uncertainties. PURPOSE We developed and evaluated a novel deep learning (DL) architecture, Attention-based Residual-Dilated Net (ARD-Net), to generate standard-count PET (SC-PET) images from LC-PET images. The performance of the ARD-Net framework was evaluated for numerous low-count realizations using fidelity-based qualitative metrics, task-based segmentation, and quantitative metrics. METHODS Patient-derived tumor xenograft (PDX) models with tumors implanted in the mammary fat pad were subjected to preclinical [18F]-fluorodeoxyglucose (FDG)-PET/CT imaging. SC-PET images were derived from a 10 min static FDG-PET acquisition, 50 min post administration of FDG, and were resampled to generate four distinct LC-PET realizations corresponding to 10%, 5%, 1.6%, and 0.8% of the SC-PET count level. ARD-Net was trained and optimized using 48 preclinical FDG-PET datasets, while 16 datasets were used to assess performance. Further, ARD-Net was benchmarked against two leading DL-based methods (Residual UNet, RU-Net; and Dilated Network, D-Net) and non-DL methods (Non-Local Means, NLM; and Block Matching 3D Filtering, BM3D). Performance was evaluated using traditional fidelity-based image quality metrics such as the Structural Similarity Index Metric (SSIM) and Normalized Root Mean Square Error (NRMSE), human observer-based tumor segmentation performance (Dice score and volume bias), and quantitative analysis of Standardized Uptake Value (SUV) measurements. Additionally, radiomics-derived features were used as a measure of quality assurance (QA) relative to true SC-PET. Finally, an ensemble performance score (EPS) was developed by integrating fidelity-based and task-based metrics. The Concordance Correlation Coefficient (CCC) was used to determine concordance between measures. The non-parametric Friedman test with Bonferroni correction was used to compare the performance of ARD-Net against the benchmarked methods, with significance at adjusted p-value ≤ 0.01. RESULTS ARD-Net-generated SC-PET images exhibited significantly better (p ≤ 0.01 post Bonferroni correction) overall image fidelity scores in terms of SSIM and NRMSE at the majority of photon-count levels compared to the benchmarked DL and non-DL methods. In terms of task-based quantitative accuracy evaluated by SUVMean and SUVPeak, ARD-Net exhibited less than 5% median absolute bias for SUVMean compared to true SC-PET, and a lower degree of variability than the benchmarked DL and non-DL methods in generating SC-PET. Additionally, ARD-Net-generated SC-PET images displayed a higher degree of concordance with SC-PET images in terms of radiomics features compared to the non-DL and other DL approaches. Finally, the ensemble score suggested that ARD-Net exhibited significantly superior performance compared to the benchmarked algorithms (p ≤ 0.01 post Bonferroni correction). CONCLUSION ARD-Net provides a robust framework to generate SC-PET from LC-PET images. ARD-Net-generated SC-PET images exhibited superior performance compared to other DL and non-DL approaches in terms of image-fidelity metrics and task-based segmentation metrics, with minimal bias in task-based quantification for preclinical PET imaging.
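A common way to obtain multiple LC-PET realizations like those described here is binomial thinning of the standard-count data, which preserves Poisson count statistics at the reduced level. The sketch below demonstrates the general resampling technique on a surrogate random sinogram (an illustration only; the paper's exact resampling procedure is not specified in this abstract).

```python
import numpy as np

rng = np.random.default_rng(7)

def thin_counts(sinogram_counts, keep_fraction):
    """Simulate a low-count acquisition by binomial thinning of measured counts.

    Keeping each recorded event independently with probability `keep_fraction`
    preserves Poisson statistics at the reduced count level.
    """
    return rng.binomial(sinogram_counts.astype(np.int64), keep_fraction)

standard_count = rng.poisson(lam=50.0, size=(180, 128))  # surrogate SC sinogram
for frac in (0.10, 0.05, 0.016, 0.008):                  # the paper's LC levels
    lc = thin_counts(standard_count, frac)
    print(frac, lc.sum() / standard_count.sum())
```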
Affiliation(s)
- Kaushik Dutta
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Jingqin Luo
- Department of Surgery, Public Health Sciences, Washington University in St Louis, St Louis, Missouri, USA
- Abhinav K Jha
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Kooresh I Shoghi
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Missouri, USA
- Imaging Science Program, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St Louis, St Louis, Missouri, USA
9. Arabi H, Manesh AS, Zaidi H. Innovations in dedicated PET instrumentation: from the operating room to specimen imaging. Phys Med Biol 2024; 69:11TR03. [PMID: 38744305 DOI: 10.1088/1361-6560/ad4b92]
Abstract
This review casts a spotlight on intraoperative positron emission tomography (PET) scanners and the distinctive challenges they confront. Specifically, these systems contend with the necessity of partial coverage geometry, essential for ensuring adequate access to the patient. This inherently leans them towards limited-angle PET imaging, bringing along its array of reconstruction and geometrical sensitivity challenges. Compounding this, the need for real-time imaging in navigation systems mandates rapid acquisition and reconstruction times. For these systems, the emphasis is on dependable PET image reconstruction (without significant artefacts) while rapid processing takes precedence over the spatial resolution of the system. In contrast, specimen PET imagers are unburdened by the geometrical sensitivity challenges, thanks to their ability to leverage full coverage PET imaging geometries. For these devices, the focus shifts: high spatial resolution imaging takes precedence over rapid image reconstruction. This review concurrently probes into the technical complexities of both intraoperative and specimen PET imaging, shedding light on their recent designs, inherent challenges, and technological advancements.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Abdollah Saberi Manesh
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, 5000 Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
10. Hashimoto F, Ote K. ReconU-Net: a direct PET image reconstruction using U-Net architecture with back projection-induced skip connection. Phys Med Biol 2024; 69:105022. [PMID: 38640921 DOI: 10.1088/1361-6560/ad40f6]
Abstract
Objective. This study aims to introduce a novel back projection-induced U-Net-shaped architecture, called ReconU-Net, based on the original U-Net architecture, for deep learning-based direct positron emission tomography (PET) image reconstruction. Additionally, our objective is to visualize the behavior of direct PET image reconstruction by comparing the proposed ReconU-Net with the original U-Net architecture and the existing DeepPET encoder-decoder architecture without skip connections. Approach. The proposed ReconU-Net architecture uniquely integrates the physical model of the back projection operation into the skip connection. This distinctive feature facilitates the effective transfer of intrinsic spatial information from the input sinogram to the reconstructed image via an embedded physical model. The proposed ReconU-Net was trained using Monte Carlo simulation data from the BrainWeb phantom and tested on both simulated and real Hoffman brain phantom data. Main results. The proposed ReconU-Net method provided better reconstructed images, in terms of peak signal-to-noise ratio and contrast recovery coefficient, than the original U-Net and DeepPET methods. Further analysis showed that the proposed ReconU-Net architecture can transfer features at multiple resolutions, especially non-abstract high-resolution information, through skip connections. Unlike the U-Net and DeepPET methods, the proposed ReconU-Net successfully reconstructed the real Hoffman brain phantom despite limited training on simulated data. Significance. The proposed ReconU-Net can improve the fidelity of direct PET image reconstruction, even with small training datasets, by leveraging the synergistic relationship between data-driven modeling and the physics model of the imaging process.
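The physical operation embedded in ReconU-Net's skip connection is back projection from sinogram space to image space. The sketch below (an illustration using scikit-image on a standard phantom, not the authors' code) forms exactly this kind of unfiltered back projection, i.e. the image-domain feature a skip connection of this type could carry.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Build a sinogram, then form the unfiltered back projection that a
# ReconU-Net-style skip connection could inject into the image domain.
image = rescale(shepp_logan_phantom(), 0.25)   # small phantom for speed
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)

# filter_name=None gives plain (unfiltered) back projection.
backprojection = iradon(sinogram, theta=theta, filter_name=None)
print(image.shape, sinogram.shape, backprojection.shape)
```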
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
11. Lang Y, Jiang Z, Sun L, Xiang L, Ren L. Hybrid-supervised deep learning for domain transfer 3D protoacoustic image reconstruction. Phys Med Biol 2024; 69:10.1088/1361-6560/ad3327. [PMID: 38471184 PMCID: PMC11076107 DOI: 10.1088/1361-6560/ad3327]
Abstract
Objective. Protoacoustic imaging has shown great promise for providing real-time 3D dose verification of proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which impairs its accuracy for dose verification. In this study, we developed a hybrid-supervised deep learning method for protoacoustic imaging to address the limited-view issue. Approach. We proposed a Recon-Enhance two-stage deep learning method. In the Recon stage, a transformer-based network was developed to reconstruct initial pressure maps from raw acoustic signals. The network is trained in a hybrid-supervised approach: it is first trained with supervision from the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on a data-fidelity constraint. In the Enhance stage, a 3D U-Net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. Main results. On a dataset of 126 prostate cancer patients, the method achieved an average root mean squared error (RMSE) of 0.0292 and an average structural similarity index measure (SSIM) of 0.9618, outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue, with more details reconstructed. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and ground truth dose maps. Notably, the processing time was reduced to 6 s, demonstrating the method's feasibility for online 3D dose verification for prostate proton therapy. Significance. Our study achieved state-of-the-art performance on the challenging task of direct reconstruction from radiofrequency signals, demonstrating the great promise of PA imaging as a highly efficient and accurate tool for in vivo 3D proton dose verification, minimizing the range uncertainties of proton therapy to improve its precision and outcomes.
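The self-supervised fine-tuning step relies on a data-fidelity constraint: the reconstruction, pushed back through the forward model, should reproduce the measured signals. The sketch below optimizes that term alone for a toy linear forward operator; all shapes and the operator itself are hypothetical placeholders, not the paper's acoustic model.

```python
import torch

# Hypothetical linear forward operator A (system matrix) as a fixed tensor.
A = torch.randn(120, 64)          # maps a 64-pixel pressure map to 120 signals
y = torch.randn(120)              # surrogate measured raw acoustic signals
x = torch.zeros(64, requires_grad=True)

opt = torch.optim.Adam([x], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = torch.sum((A @ x - y) ** 2)  # self-supervised data-fidelity term
    loss.backward()
    opt.step()
print(float(loss))
```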
Affiliation(s)
- Yankun Lang
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Baltimore, MD 21201, United States of America
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University, Durham, NC 27710, United States of America
- Leshan Sun
- Department of Biomedical Engineering and Radiology, University of California, Irvine, CA 92617, United States of America
- Liangzhong Xiang
- Department of Biomedical Engineering and Radiology, University of California, Irvine, CA 92617, United States of America
- Lei Ren
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Baltimore, MD 21201, United States of America
12. Lu B, Fu L, Pan Y, Dong Y. SWISTA-Nets: Subband-adaptive wavelet iterative shrinkage thresholding networks for image reconstruction. Comput Med Imaging Graph 2024; 113:102345. [PMID: 38330636 DOI: 10.1016/j.compmedimag.2024.102345]
Abstract
Robust and interpretable image reconstruction is central to imaging applications in clinical practice. Prevalent deep networks, despite their strong ability to extract implicit information from the data manifold, still lack the prior knowledge provided by mathematics and physics, leading to instability, poor structural interpretability, and high computational cost. To address this issue, we propose two prior knowledge-driven networks that combine the good interpretability of mathematical methods with the powerful learnability of deep learning methods. Incorporating different kinds of prior knowledge, we propose subband-adaptive wavelet iterative shrinkage thresholding networks (SWISTA-Nets), where almost every network module is in one-to-one correspondence with a step of the iterative algorithm. By end-to-end training of the proposed SWISTA-Nets, implicit information can be extracted from training data to guide the tuning of key parameters that possess mathematical definitions. The inverse problems associated with two medical imaging modalities, i.e., electromagnetic tomography and X-ray computed tomography, are used to validate the proposed networks. Both visual and quantitative results indicate that the SWISTA-Nets outperform mathematical methods and state-of-the-art prior knowledge-driven networks, with fewer training parameters, interpretable network structures, and good robustness. We expect that our analysis will support further investigation of prior knowledge-driven networks in the field of ill-posed image reconstruction.
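Networks of this family unroll the classic iterative shrinkage thresholding algorithm (ISTA), whose core update alternates a gradient step on the data term with a soft-thresholding (shrinkage) step. The sketch below implements plain ISTA on a small synthetic sparse-recovery problem, as a textbook illustration of the iteration being unrolled, not the SWISTA-Nets code.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm (the 'shrinkage thresholding' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Classic ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.normal(size=60)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))  # recovered support
```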
Affiliation(s)
- Binchun Lu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Lidan Fu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yixuan Pan
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yonggui Dong
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
13. Sluijter TE, Yakar D, Roest C, Tsoumpas C, Kwee TC. Does FDG-PET/CT for incidentally found pulmonary lesions lead to a cascade of more incidental findings? Clin Imaging 2024; 108:110116. [PMID: 38460254 DOI: 10.1016/j.clinimag.2024.110116]
Abstract
OBJECTIVE To determine the frequency, nature, and downstream healthcare costs of new incidental findings on whole-body FDG-PET/CT in patients with a non-FDG-avid pulmonary lesion ≥10 mm that was incidentally found on previous imaging. MATERIALS AND METHODS This retrospective study included a consecutive series of patients who underwent whole-body FDG-PET/CT because of an incidentally found pulmonary lesion ≥10 mm. RESULTS Seventy patients were included, of whom 23 (32.9%) had an incidentally found pulmonary lesion that proved to be non-FDG-avid. In 12 of these 23 cases (52.2%) at least one new incidental finding was discovered on FDG-PET/CT. The total number of new incidental findings was 21, of which 7 turned out to be benign, 1 proved to be malignant (incurable metastasized cancer), and 13 remained of unclear nature. One patient sustained permanent neurologic impairment of the left leg due to iatrogenic nerve damage during laparotomy for an incidental finding that turned out to be benign. The total cost of all additional investigations prompted by the new incidental findings amounted to €9903.17 across the 70 scans, an average of €141.47 per whole-body FDG-PET/CT scan performed for the evaluation of an incidentally found pulmonary lesion. CONCLUSION In this preliminary study, FDG-PET/CT frequently detected new incidental findings in patients in whom it was performed to evaluate an incidentally found pulmonary lesion that turned out to be non-FDG-avid and therefore very likely benign. Whether the detection of these new incidental findings is cost-effective requires further research with larger sample sizes.
Affiliation(s)
- Tim E Sluijter
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine, and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Derya Yakar
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine, and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; Department of Radiology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Christian Roest
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine, and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Charalampos Tsoumpas
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine, and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Thomas C Kwee
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine, and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
14. Dedja M, Mehranian A, Bradley KM, Walker MD, Fielding PA, Wollenweber SD, Johnsen R, McGowan DR. Sequential deep learning image enhancement models improve diagnostic confidence, lesion detectability, and image reconstruction time in PET. EJNMMI Phys 2024; 11:28. [PMID: 38488923 PMCID: PMC10942956 DOI: 10.1186/s40658-024-00632-4]
Abstract
BACKGROUND To investigate the potential benefits of sequential deployment of two deep learning (DL) algorithms, namely DL-Enhancement (DLE) and DL-based time-of-flight (ToF) (DLT). DLE aims to enhance rapidly reconstructed ordered-subset-expectation-maximisation (OSEM) images towards block-sequential-regularised-expectation-maximisation (BSREM) images, whereas DLT aims to improve the quality of BSREM images reconstructed without ToF. As the algorithms differ in purpose, sequential application may allow the benefits of each to be combined. 20 FDG PET-CT scans were performed on a Discovery 710 (D710) and 20 on a Discovery MI (DMI; both GE HealthCare). PET data were reconstructed using five combinations of algorithms: (1) ToF-BSREM, (2) ToF-OSEM + DLE, (3) OSEM + DLE + DLT, (4) ToF-OSEM + DLE + DLT, (5) ToF-BSREM + DLT. To assess image noise, 30 mm-diameter spherical VOIs were drawn in both lung and liver to measure the standard deviation of voxels within the volume. In a blind clinical reading, two experienced readers rated the images on a five-point Likert scale based on lesion detectability, diagnostic confidence, and image quality. RESULTS Applying DLE + DLT reduced noise whilst improving lesion detectability, diagnostic confidence, and image reconstruction time. ToF-OSEM + DLE + DLT reconstructions demonstrated an increase in lesion SUVmax of 28 ± 14% (mean ± standard deviation) and 11 ± 5% for data acquired on the D710 and DMI, respectively. The same reconstruction scored highest in clinical readings for both lesion detectability and diagnostic confidence for the D710. CONCLUSIONS The combination of DLE and DLT increased diagnostic confidence and lesion detectability compared to ToF-BSREM images. As DLE + DLT used input OSEM images, and because DL inferencing was fast, there was a significant decrease in overall reconstruction time. This could have applications to total-body PET.
Affiliation(s)
- Daniel R McGowan
- Oxford University Hospitals, Oxford, UK
- University of Oxford, Oxford, UK
15. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563 PMCID: PMC10902118 DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
16. Wang S, Liu B, Xie F, Chai L. An iterative reconstruction algorithm for unsupervised PET image. Phys Med Biol 2024; 69:055025. [PMID: 38346340 DOI: 10.1088/1361-6560/ad2882]
Abstract
Objective. In recent years, convolutional neural networks (CNNs) have shown great potential in positron emission tomography (PET) image reconstruction. However, most rely on many low-quality and high-quality reference PET image pairs for training, which are not always feasible in clinical practice. On the other hand, many works improve the quality of PET image reconstruction by adding explicit regularization or optimizing the network structure, which may lead to complex optimization problems. Approach. In this paper, we develop a novel iterative reconstruction algorithm by integrating the deep image prior (DIP) framework, which needs only the prior information (e.g. MRI) and sinogram data of patients. Specifically, we construct the objective function as a constrained optimization problem and utilize existing PET image reconstruction packages to streamline calculations. Moreover, to further improve both reconstruction quality and speed, we introduce a Nesterov acceleration step and a restart mechanism in each iteration. Main results. 2D experiments on PET datasets based on computer simulations and real patients demonstrate that our proposed algorithm outperforms the existing MLEM-GF, KEM, and DIPRecon methods. Significance. Unlike traditional CNN methods, the proposed algorithm does not rely on large datasets, but only leverages inter-patient information. Furthermore, we enhance reconstruction performance by optimizing the iterative algorithm. Notably, the proposed method does not require much modification of the basic algorithm, allowing easy integration into standard implementations.
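Nesterov acceleration adds a momentum extrapolation to plain gradient descent, and a restart mechanism resets that momentum when it starts pointing the wrong way. The sketch below shows the generic scheme on a toy quadratic (a standard accelerated-gradient-with-restart illustration, not the constrained PET objective used in the paper).

```python
import numpy as np

def accelerated_gradient(grad, x0, step, n_iter=500):
    """Nesterov-accelerated gradient descent with a simple restart heuristic."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        x_next = z - step * grad(z)
        if grad(x).dot(x_next - x) > 0:               # gradient-based restart test
            t_next = 1.0                              # reset the momentum
        x_prev, x, t = x, x_next, t_next
    return x

A = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b                  # gradient of 0.5*x'Ax - b'x
print(accelerated_gradient(grad, np.zeros(2), step=1.0 / 10.0))  # ~[1.0, -0.2]
```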
Affiliation(s)
- Siqi Wang
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Bing Liu
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Furan Xie
- Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Li Chai
- College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China
17. Nigam S, Gjelaj E, Wang R, Wei GW, Wang P. Machine Learning and Deep Learning Applications in Magnetic Particle Imaging. J Magn Reson Imaging 2024:10.1002/jmri.29294. [PMID: 38358090 PMCID: PMC11324856 DOI: 10.1002/jmri.29294]
Abstract
In recent years, magnetic particle imaging (MPI) has emerged as a promising imaging technique offering high sensitivity and spatial resolution. It originated in the early 2000s as a new approach to overcome the low spatial resolution achieved when using relaxometry to measure the magnetic fields. MPI provides 2D and 3D images with high temporal resolution, no ionizing radiation, and optimal visual contrast due to its lack of background tissue signal. Traditionally, the images have been reconstructed from the induced-voltage signal by system-matrix and X-space-based methods. Because image reconstruction and analysis play an integral role in obtaining precise information from MPI signals, newer artificial intelligence-based methods are continuously being researched and developed. In this work, we summarize and review the significance and employment of machine learning and deep learning models for applications with MPI and the potential they hold for the future. Level of Evidence: 5. Technical Efficacy: Stage 1.
Affiliation(s)
- Saumya Nigam
- Precision Health Program, Michigan State University, East Lansing, Michigan 48824, United States
- Department of Radiology, College of Human Medicine, Michigan State University, East Lansing, Michigan 48824, United States
- Elvira Gjelaj
- Precision Health Program, Michigan State University, East Lansing, Michigan 48824, United States
- Lyman Briggs College, Michigan State University, East Lansing, Michigan 48824, United States
- Rui Wang
- Department of Mathematics, College of Natural Science, Michigan State University, East Lansing, Michigan 48824, United States
- Guo-Wei Wei
- Department of Mathematics, College of Natural Science, Michigan State University, East Lansing, Michigan 48824, United States
- Department of Electrical and Computer Engineering, College of Engineering, Michigan State University, East Lansing, Michigan 48824, United States
- Department of Biochemistry and Molecular Biology, College of Natural Science, Michigan State University, East Lansing, Michigan 48824, United States
- Ping Wang
- Precision Health Program, Michigan State University, East Lansing, Michigan 48824, United States
- Department of Radiology, College of Human Medicine, Michigan State University, East Lansing, Michigan 48824, United States
18. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145 DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
19. Chang H, Kobzarenko V, Mitra D. Inverse radon transform with deep learning: an application in cardiac motion correction. Phys Med Biol 2024; 69:035010. [PMID: 37988757 DOI: 10.1088/1361-6560/ad0eb5]
Abstract
Objective. This paper addresses performing the inverse Radon transform (IRT) with an artificial neural network (ANN), or deep learning, simultaneously with cardiac motion correction (MC). The suggested application domain is cardiac image reconstruction in emission or transmission tomography, where the IRT is relevant. Our main contribution is an ANN architecture that is particularly suitable for this purpose. Approach. We validate our approach with two types of datasets. First, we use an abstract heart-like object to simulate a motion-blurred Radon transform. With the known ground truth in hand, we train our proposed ANN architecture and validate its effectiveness for MC. Second, we use human cardiac gated datasets for training and validation. The gating mechanism bins data over time using electrocardiogram (ECG) signals for cardiac motion correction. Main results. We have shown that trained ANNs can perform motion-corrected image reconstruction directly from a motion-corrupted sinogram. We have compared our model against two other known ANN-based approaches. Significance. Our method paves the way for eliminating any need for hardware gating in medical imaging.
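The kind of motion-corrupted sinogram the network learns to invert can be simulated directly. The sketch below (a toy illustration with scikit-image and SciPy, not the paper's simulation pipeline) forward-projects a phantom that shifts halfway through acquisition and then applies standard filtered back projection, which exhibits the motion artifacts a learned inverse would be trained to remove.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Emulate cardiac-like motion: the object shifts midway through acquisition,
# so half of the sinogram views see a displaced object.
image = shepp_logan_phantom()[::4, ::4]
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sino_static = radon(image, theta=theta[:45])
sino_moved = radon(shift(image, (2, 0)), theta=theta[45:])
sino_corrupted = np.hstack([sino_static, sino_moved])

# Filtered back projection of the corrupted sinogram shows the motion blur.
recon = iradon(sino_corrupted, theta=theta, filter_name="ramp")
print(recon.shape)
```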
Collapse
Affiliation(s)
- Haoran Chang
- Department of Electrical Engineering and Computer Science, Florida Institute of Technology, Melbourne, FL 32901, United States of America
| | - Valerie Kobzarenko
- Department of Electrical Engineering and Computer Science, Florida Institute of Technology, Melbourne, FL 32901, United States of America
| | - Debasis Mitra
- Department of Electrical Engineering and Computer Science, Florida Institute of Technology, Melbourne, FL 32901, United States of America
| |
Collapse
|
20
|
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073 DOI: 10.1088/1361-6560/ad14c5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Accepted: 12/12/2023] [Indexed: 01/05/2024]
Abstract
Objective. PET (Positron Emission Tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about radiation exposure and patient comfort. Reducing the radiotracer dosage or the acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. It is therefore of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracts features from the PET and CT images in two separate branches and then fuses the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in the CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better generated tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
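A minimal PyTorch sketch of the multi-branch idea (toy layer sizes, not the published M3S-Net architecture): PET and CT are encoded in separate branches whose features are concatenated and fused into one output image.

    import torch
    import torch.nn as nn

    class TwoBranchFusion(nn.Module):
        """Toy two-branch network: separate PET and CT encoders feeding
        a shared fusion head that predicts the enhanced PET image."""
        def __init__(self, ch=16):
            super().__init__()
            self.pet = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
            self.ct = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
            self.fuse = nn.Sequential(
                nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 1, 3, padding=1))

        def forward(self, pet_ld, ct):
            feats = torch.cat([self.pet(pet_ld), self.ct(ct)], dim=1)
            return self.fuse(feats)

    net = TwoBranchFusion()
    out = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))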
Collapse
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
| | - Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
| | - Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
| | - Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
| | - Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
| | - Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
| | - Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
| |
Collapse
|
21
|
Manoj Doss KK, Chen JC. Utilizing deep learning techniques to improve image quality and noise reduction in preclinical low-dose PET images in the sinogram domain. Med Phys 2024; 51:209-223. [PMID: 37966121 DOI: 10.1002/mp.16830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 09/28/2023] [Accepted: 10/22/2023] [Indexed: 11/16/2023] Open
Abstract
BACKGROUND Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to the low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) can enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about the radioactivity distribution in the body. PURPOSE Our objective was to optimize image quality and reduce noise in preclinical PET images by using the sinogram domain as input to DL models, yielding improved image quality compared with LD-PET images. METHODS A GAN and a CNN model were used to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of the DL-generated images was assessed with the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Additionally, spatial resolution was calculated for both DL input and output as full width at half maximum (FWHM) and full width at tenth maximum (FWTM). DL outcomes were then compared with conventional denoising algorithms such as non-local means (NLM) and block-matching and 3D filtering (BM3D). RESULTS The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of the DL PET images were significantly more accurate than those of the LD, NLM, and BM3D PET images, and as precise as those of the HD PET images. The MSE loss curves confirmed that the models trained well. To further improve training, the generator loss (G loss) was weighted above the discriminator loss (D loss), thereby achieving convergence of the GAN model. CONCLUSIONS The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. There was a noticeable improvement in image quality and noise in the predicted HD images. Importantly, the DL networks did not fully compromise the spatial resolution of the images.
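The image-quality metrics used here are standard and easy to reproduce; a minimal sketch with scikit-image, where random arrays stand in for the HD reference and a denoised prediction:

    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def quality_report(reference, prediction):
        """SSIM, PSNR (dB), and RMSE against a high-dose reference."""
        rng = reference.max() - reference.min()
        return (structural_similarity(reference, prediction, data_range=rng),
                peak_signal_noise_ratio(reference, prediction, data_range=rng),
                float(np.sqrt(np.mean((reference - prediction) ** 2))))

    hd = np.random.rand(128, 128)                 # stand-in HD image
    pred = hd + 0.05 * np.random.randn(128, 128)  # stand-in DL output
    ssim, psnr, rmse = quality_report(hd, pred)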
Collapse
Affiliation(s)
| | - Jyh-Cheng Chen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Medical Imaging and Radiological Sciences, China Medical University, Taichung, Taiwan
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
| |
Collapse
|
22
|
Brosch-Lenz JF, Delker A, Schmidt F, Tran-Gia J. On the Use of Artificial Intelligence for Dosimetry of Radiopharmaceutical Therapies. Nuklearmedizin 2023; 62:379-388. [PMID: 37827503 DOI: 10.1055/a-2179-6872] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2023]
Abstract
Routine clinical dosimetry alongside radiopharmaceutical therapies is key to future treatment personalization. However, dosimetry is considered complex and time-consuming, with various challenges among the required steps of the dosimetry workflow. The general workflow for image-based dosimetry consists of quantitative imaging, segmentation of organs and tumors, fitting of the time-activity curves, and conversion to absorbed dose. This work reviews the potential and advantages of artificial intelligence for improving the speed and accuracy of every step of the dosimetry workflow.
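One step of that workflow, time-activity-curve fitting and its conversion to time-integrated activity, is small enough to sketch; the mono-exponential model and all numbers below are illustrative assumptions, not values from the review.

    import numpy as np
    from scipy.optimize import curve_fit

    def tac(t, a0, lam):
        """Mono-exponential time-activity curve A(t) = A0 * exp(-lam * t)."""
        return a0 * np.exp(-lam * t)

    t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])    # hours post-injection
    a = np.array([5.1, 4.2, 2.0, 0.9, 0.2])       # MBq in an organ ROI
    (a0, lam), _ = curve_fit(tac, t, a, p0=(5.0, 0.05))

    # Time-integrated activity (MBq*h), the quantity subsequently converted
    # to absorbed dose with organ-level S-values or voxel-level kernels.
    tia = a0 / lam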
Collapse
Affiliation(s)
| | - Astrid Delker
- Department of Nuclear Medicine, LMU University Hospital, Munich, Germany
| | - Fabian Schmidt
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital Tuebingen, Tuebingen, Germany
- Department of Preclinical Imaging and Radiopharmacy, Werner Siemens Imaging Center, Tuebingen, Germany
| | - Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Wuerzburg, Wuerzburg, Germany
| |
Collapse
|
23
|
Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648 DOI: 10.1016/j.compmedimag.2023.102315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 09/26/2023] [Accepted: 11/15/2023] [Indexed: 11/27/2023]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) plays a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. Achieving high-quality images from low-count PET scans requires effective reconstruction models for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function combining the structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired on a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison: OSEM, MAPOSEM, and supervised learning with a 3D-UNET network. The reconstructed images are compared to ground-truth images using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate reconstruction accuracy. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39, whereas the other methods had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55, also outperforming the other state-of-the-art methods (OSEM, MAPOSEM, and 3DUNET-MAPEM). The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in low-count PET image reconstruction. The results demonstrate its potential for clinical use, as the model can produce images with reduced noise and better edge preservation than other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
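The EM update that such unrolled networks are built around is compact; a toy numpy sketch (random system matrix, Poisson data) of the plain MLEM iteration that MAPEM-style methods extend with a penalty or a learned regularizer:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((120, 64))            # toy system matrix (bins x voxels)
    x_true = rng.random(64)
    y = rng.poisson(A @ x_true * 50)     # simulated noisy sinogram

    x = np.ones(64)                      # uniform initial image
    sens = A.T @ np.ones(120)            # sensitivity image
    for _ in range(20):                  # an unrolled network would
        ratio = y / np.maximum(A @ x, 1e-8)   # interleave a CNN step here
        x = x / sens * (A.T @ ratio)     # multiplicative MLEM update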
Collapse
Affiliation(s)
- Sanaz Kaviani
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada.
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Mersede Mokri
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
| | - Claire Cohalan
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
| | - Jean-Francois Carrier
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department de Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada
| |
Collapse
|
24
|
Hellwig D, Hellwig NC, Boehner S, Fuchs T, Fischer R, Schmidt D. Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions. Nuklearmedizin 2023; 62:334-342. [PMID: 37995706 PMCID: PMC10689088 DOI: 10.1055/a-2198-0358] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 10/12/2023] [Indexed: 11/25/2023]
Abstract
Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques such as filtered backprojection and iterative algorithms are powerful but face limitations. PET IR can be seen as an image-to-image translation task. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to provide a mutual understanding for nuclear medicine professionals and AI researchers. We outline the fundamentals of PET imaging as well as the state of the art in AI-based PET IR, with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include the availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potential. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
Collapse
Affiliation(s)
- Dirk Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Nils Constantin Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Steven Boehner
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Timo Fuchs
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Regina Fischer
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Daniel Schmidt
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
| |
Collapse
|
25
|
Liu Z, Wang B, Ye H, Liu H. Prior information-guided reconstruction network for positron emission tomography images. Quant Imaging Med Surg 2023; 13:8230-8246. [PMID: 38106321 PMCID: PMC10722030 DOI: 10.21037/qims-23-579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 10/07/2023] [Indexed: 12/19/2023]
Abstract
Background Deep learning has recently shown great potential in medical image reconstruction tasks. For positron emission tomography (PET), direct reconstruction from raw data to radioactivity images using deep learning without any constraint may produce nonexistent structures. The aim of this study was to develop and test a flexible deep learning-based reconstruction network, guided by any form of prior knowledge, to achieve high-quality and highly reliable reconstruction. Methods We developed a novel prior information-guided reconstruction network (PIGRN) with a dual-channel generator and a 2-scale discriminator based on a conditional generative adversarial network (cGAN). Besides the raw-data channel, the generator has an additional channel for prior information (PI) to guide the training phase. The PI can be reconstructed images obtained via conventional methods, nuclear medicine images from other modalities, attenuation correction maps from time-of-flight PET (TOF-PET) data, or any other physical parameters. In this study, reconstructed images generated by filtered back projection (FBP) were chosen as the input of the additional channel. To improve image quality, a 2-scale discriminator was adopted, which attends to both coarse and fine scales of the reconstructed images. Experiments were carried out on both a simulation dataset and a real Sprague Dawley (SD) rat dataset. Results Two classic deep learning-based reconstruction networks, U-Net and Deep-PET, were compared against our method. On the simulation dataset, our method provided much higher quality PET image reconstruction, with a peak signal-to-noise ratio (PSNR) of 31.8498 and a structural similarity index measure (SSIM) of 0.9754. The study on real SD rat data indicated that the proposed network also has strong generalization ability. Conclusions The flexible PIGRN, based on a cGAN for PET images, combines both raw data and PI. Comparison and generalization experiments on the simulation and SD rat datasets demonstrated that the proposed PIGRN improves image quality and generalizes well.
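The dual-channel input is the key architectural point; a minimal PyTorch sketch (toy layers, not the published PIGRN) in which the raw-data image and an FBP prior image are concatenated before the generator:

    import torch
    import torch.nn as nn

    # Channel 0: image formed from raw data; channel 1: prior information
    # (here an FBP reconstruction, but any co-registered image would do).
    gen = nn.Sequential(
        nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))

    raw_recon = torch.randn(1, 1, 128, 128)
    fbp_prior = torch.randn(1, 1, 128, 128)
    pet = gen(torch.cat([raw_recon, fbp_prior], dim=1))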
Collapse
Affiliation(s)
- Zhiyuan Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Bo Wang
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Huihui Ye
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing, China
- Intelligent Optics & Photonics Research Center, Jiaxing Research Institute Zhejiang University, Jiaxing, China
| | - Huafeng Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing, China
- Intelligent Optics & Photonics Research Center, Jiaxing Research Institute Zhejiang University, Jiaxing, China
| |
Collapse
|
26
|
Jabbarpour A, Ghassel S, Lang J, Leung E, Le Gal G, Klein R, Moulton E. The Past, Present, and Future Role of Artificial Intelligence in Ventilation/Perfusion Scintigraphy: A Systematic Review. Semin Nucl Med 2023; 53:752-765. [PMID: 37080822 DOI: 10.1053/j.semnuclmed.2023.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 03/06/2023] [Accepted: 03/07/2023] [Indexed: 04/22/2023]
Abstract
Ventilation-perfusion (V/Q) lung scans constitute one of the oldest nuclear medicine procedures and remain among the few studies performed in acute and emergency settings. V/Q studies have witnessed long fluctuations in adoption rates in parallel to continuous advances in image processing and computer vision techniques. This review provides an overview of the status of artificial intelligence (AI) in V/Q scintigraphy. To clearly assess the past, current, and future role of AI in V/Q scans, we conducted a systematic Ovid MEDLINE(R) literature search from 1946 to August 5, 2022, in addition to a manual search. The literature was reviewed and summarized in terms of methodologies and results for the various applications of AI to V/Q scans. The PRISMA guidelines were followed. Thirty-one publications fulfilled our search criteria and were grouped into two distinct categories: (1) disease diagnosis/detection (N = 22, 71.0%) and (2) cross-modality image translation into V/Q images (N = 9, 29.0%). Studies on disease diagnosis and detection relied heavily on shallow artificial neural networks for acute pulmonary embolism (PE) diagnosis and were primarily published between the mid-1990s and early 2000s. Recent applications almost exclusively concern image translation tasks from CT to ventilation or perfusion images with modern algorithms, such as convolutional neural networks, and were published between 2019 and 2022. AI research in V/Q scintigraphy for acute PE diagnosis in the mid-1990s to early 2000s yielded promising results but has since been largely neglected, and thus has yet to benefit from today's state-of-the-art machine-learning techniques, such as deep neural networks. Recently, the main application of AI for V/Q has shifted towards generating synthetic ventilation and perfusion images from CT. There is therefore considerable potential to expand and modernize the use of real V/Q studies with state-of-the-art deep learning approaches, especially for workflow optimization and PE detection at both acute and chronic stages. We discuss future challenges and potential directions to compensate for the lag in this domain and enhance the value of this traditional nuclear medicine scan.
Collapse
Affiliation(s)
- Amir Jabbarpour
- Department of Physics, Carleton University, Ottawa, Ontario, Canada
| | - Siraj Ghassel
- Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada
| | - Jochen Lang
- Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada
| | - Eugene Leung
- Division of Nuclear Medicine and Molecular Imaging, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
| | - Grégoire Le Gal
- Division of Hematology, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
| | - Ran Klein
- Department of Physics, Carleton University, Ottawa, Ontario, Canada; Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada; Division of Nuclear Medicine and Molecular Imaging, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada; Department of Nuclear Medicine and Molecular Imaging, The Ottawa Hospital, Ottawa, Ontario, Canada.
| | - Eric Moulton
- Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, Canada; Jubilant DraxImage Inc., Kirkland, Quebec, Canada
| |
Collapse
|
27
|
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058 DOI: 10.1055/a-2157-6670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/08/2023]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision-making through the combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Collapse
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| |
Collapse
|
28
|
Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023; 96:20230292. [PMID: 37486607 PMCID: PMC10546435 DOI: 10.1259/bjr.20230292] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 06/06/2023] [Accepted: 06/20/2023] [Indexed: 07/25/2023] Open
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total-body PET) is helping, there is always a need to improve reconstructed image quality due to the time- and count-limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods which rely on supervised learning from reference data, (ii) iterative (unrolled) methods which combine our physics and statistical models with AI learning from data, and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
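Of the three approaches, (i) is the simplest to sketch: a supervised loop that learns a direct mapping from corrupted inputs to reference images. The toy data and toy network below are assumptions purely to fix ideas.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for _ in range(100):                     # toy supervised training loop
        clean = torch.rand(8, 1, 64, 64)     # stand-in reference images
        noisy = clean + 0.2 * torch.randn_like(clean)
        loss = nn.functional.mse_loss(net(noisy), clean)
        opt.zero_grad(); loss.backward(); opt.step()

Approaches (ii) and (iii) replace the stand-in pairs with physics: (ii) unrolls iterations of a model-based algorithm around such a network, while (iii) (e.g., deep image prior) needs no paired training data at all.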
Collapse
Affiliation(s)
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| |
Collapse
|
29
|
Zhao F, Li D, Luo R, Liu M, Jiang X, Hu J. Self-supervised deep learning for joint 3D low-dose PET/CT image denoising. Comput Biol Med 2023; 165:107391. [PMID: 37717529 DOI: 10.1016/j.compbiomed.2023.107391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 08/08/2023] [Accepted: 08/25/2023] [Indexed: 09/19/2023]
Abstract
Deep learning (DL)-based denoising of low-dose positron emission tomography (LDPET) and low-dose computed tomography (LDCT) has been widely explored. However, previous methods have focused only on single modality denoising, neglecting the possibility of simultaneously denoising LDPET and LDCT using only one neural network, i.e., joint LDPET/LDCT denoising. Moreover, DL-based denoising methods generally require plenty of well-aligned LD-normal-dose (LD-ND) sample pairs, which can be difficult to obtain. To this end, we propose a self-supervised two-stage training framework named MAsk-then-Cycle (MAC), to achieve self-supervised joint LDPET/LDCT denoising. The first stage of MAC is masked autoencoder (MAE)-based pre-training and the second stage is self-supervised denoising training. Specifically, we propose a self-supervised denoising strategy named cycle self-recombination (CSR), which enables denoising without well-aligned sample pairs. Unlike other methods that treat noise as a homogeneous whole, CSR disentangles noise into signal-dependent and independent noises. This is more in line with the actual imaging process and allows for flexible recombination of noises and signals to generate new samples. These new samples contain implicit constraints that can improve the network's denoising ability. Based on these constraints, we design multiple loss functions to enable self-supervised training. Then we design a CSR-based denoising network to achieve joint 3D LDPET/LDCT denoising. Existing self-supervised methods generally lack pixel-level constraints on networks, which can easily lead to additional artifacts. Before denoising training, we perform MAE-based pre-training to indirectly impose pixel-level constraints on networks. Experiments on an LDPET/LDCT dataset demonstrate its superiority over existing methods. Our method is the first self-supervised joint LDPET/LDCT denoising method. It does not require any prior assumptions and is therefore more robust.
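The noise-disentangling idea can be illustrated in a few lines of numpy; the 0.5/1.5 recombination weights below are arbitrary assumptions, chosen only to show how new noisy realizations of the same underlying signal can be composed:

    import numpy as np

    rng = np.random.default_rng(1)
    signal = rng.random((64, 64)) * 100.0

    # Sample signal-dependent (Poisson) and signal-independent (Gaussian)
    # noise separately, so they can be recombined in new proportions.
    poisson_noise = rng.poisson(signal) - signal
    gaussian_noise = rng.normal(0.0, 2.0, signal.shape)
    new_sample = signal + 0.5 * poisson_noise + 1.5 * gaussian_noise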
Collapse
Affiliation(s)
- Feixiang Zhao
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China.
| | - Dongfen Li
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China.
| | - Rui Luo
- Department of Nuclear Medicine, Mianyang Central Hospital, Mianyang, 621000, China.
| | - Mingzhe Liu
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China.
| | - Xin Jiang
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, 325000, China.
| | - Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, China.
| |
Collapse
|
30
|
Gu F, Wu Q. Quantitation of dynamic total-body PET imaging: recent developments and future perspectives. Eur J Nucl Med Mol Imaging 2023; 50:3538-3557. [PMID: 37460750 PMCID: PMC10547641 DOI: 10.1007/s00259-023-06299-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/05/2023] [Indexed: 10/04/2023]
Abstract
BACKGROUND Positron emission tomography (PET) scanning is an important diagnostic imaging technique used in disease diagnosis, therapy planning, treatment monitoring, and medical research. The standardized uptake value (SUV) obtained at a single time frame has been widely employed in clinical practice. Well beyond this simple static measure, more detailed metabolic information can be recovered from dynamic PET scans, followed by recovery of the arterial input function and application of appropriate tracer kinetic models. Many efforts have been devoted to the development of quantitative techniques over the last couple of decades. CHALLENGES The advent of new-generation total-body PET scanners characterized by ultra-high sensitivity and a long axial field of view, i.e., uEXPLORER (United Imaging Healthcare), PennPET Explorer (University of Pennsylvania), and Biograph Vision Quadra (Siemens Healthineers), offers a valuable opportunity to derive kinetics for multiple organs simultaneously. However, some emerging issues also need to be addressed, e.g., the large-scale data size and organ-specific physiology. Direct implementation of classical methods for total-body PET imaging without proper validation may lead to less accurate results. CONCLUSIONS In this contribution, the published dynamic total-body PET datasets are outlined, and several challenges/opportunities for quantitation of such studies are presented. An overview is provided of the basic equation, the calculation of the input function (based on blood sampling, images, population data, or mathematical models), and kinetic analysis encompassing parametric (compartmental model, graphical plot, and spectral analysis) and non-parametric (B-spline and piecewise basis elements) approaches. The discussion focuses on the feasibility, recent developments, and future perspectives of these methodologies in a diverse-tissue environment.
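Among the graphical methods mentioned, the Patlak plot is the simplest to demonstrate. The synthetic input function and tissue curve below are illustrative assumptions, constructed with a ground-truth net influx rate Ki of 0.05/min that the late-time linear fit recovers:

    import numpy as np

    t = np.linspace(1, 60, 60)                        # minutes
    cp = 10.0 * np.exp(-0.1 * t) + 1.0                # toy plasma input
    dt = t[1] - t[0]
    ct = 0.05 * np.cumsum(cp) * dt + 0.3 * cp         # irreversible uptake

    # Patlak: Ct/Cp vs cumulative-integral(Cp)/Cp becomes linear at late
    # times; the slope is the net influx rate constant Ki.
    x = np.cumsum(cp) * dt / cp
    y = ct / cp
    ki, v0 = np.polyfit(x[20:], y[20:], 1)            # fit the linear part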
Collapse
Affiliation(s)
- Fengyun Gu
- School of Mathematics and Physics, North China Electric Power University, 102206, Beijing, China.
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland.
| | - Qi Wu
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland
| |
Collapse
|
31
|
Farag A, Huang J, Kohan A, Mirshahvalad SA, Basso Dias A, Fenchel M, Metser U, Veit-Haibach P. Evaluation of MR anatomically-guided PET reconstruction using a convolutional neural network in PSMA patients. Phys Med Biol 2023; 68:185014. [PMID: 37625418 DOI: 10.1088/1361-6560/acf439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 08/25/2023] [Indexed: 08/27/2023]
Abstract
Background. Recently, approaches have utilized the superior anatomical information provided by magnetic resonance imaging (MRI) to guide the reconstruction of positron emission tomography (PET). One such approach is the Bowsher prior, which has lately been accelerated with a convolutional neural network (CNN) to reconstruct MR-guided PET in the imaging domain in routine clinical imaging. Two differently trained Bowsher-CNN methods (B-CNN0 and B-CNN) have been trained and tested on brain PET/MR images with non-PSMA tracers, but have not yet been evaluated in other anatomical regions. Methods. A NEMA phantom with five of its six spheres filled with the same calibrated concentration of 18F-DCFPyL-PSMA, and thirty-two patients (mean age 64 ± 7 years) with biopsy-confirmed PCa, were used in this study. Reconstruction with either of the two available Bowsher-CNN methods was performed on the conventional MR-based attenuation correction (MRAC) and T1-MR images in the imaging domain. The detectable volume of the spheres and tumors, relative contrast recovery (CR), and background variation (BV) were measured for the MRAC and Bowsher-CNN images, and qualitative assessment was conducted by two experienced readers ranking image sharpness and quality. Results. In the phantom study, the B-CNN produced 12.7% better CR than conventional reconstruction. Small-sphere (<1.8 ml) detectability improved from MRAC to B-CNN by nearly 13%, while measured activity was 8% higher than the ground truth. The signal-to-noise ratio, CR, and BV were significantly improved (p < 0.05) in B-CNN images of the tumor. The qualitative analysis determined that tumor sharpness was excellent in 76% of the PET images reconstructed with the B-CNN method, compared to conventional reconstruction. Conclusions. Applying the MR-guided B-CNN in clinical prostate PET/MR imaging improves some quantitative as well as qualitative imaging measures. The improvements measured in the phantom clearly translate into the clinical application.
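For readers unfamiliar with the Bowsher prior itself: for each voxel it keeps only the B neighbors most similar in the MR image and penalizes PET differences over those. A small numpy sketch of the neighbor selection, assuming a 4-neighborhood and a toy MR image (periodic borders via np.roll, fine for a sketch):

    import numpy as np

    def bowsher_mask(mr, b=2):
        """For each pixel, flag the b of its four neighbors that are most
        similar in the guide (MR) image; only those pairwise differences
        would be penalized in the PET reconstruction prior."""
        shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        diffs = np.stack([np.abs(mr - np.roll(mr, s, axis=(0, 1)))
                          for s in shifts])
        ranks = np.argsort(np.argsort(diffs, axis=0), axis=0)
        return ranks < b          # True for the b most similar neighbors

    mask = bowsher_mask(np.random.rand(32, 32))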
Collapse
Affiliation(s)
- Adam Farag
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | - Jin Huang
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | - Andres Kohan
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | - Seyed Ali Mirshahvalad
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | - Adriano Basso Dias
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | | | - Ur Metser
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| | - Patrick Veit-Haibach
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
| |
Collapse
|
32
|
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm. Phys Med Biol 2023; 68:155009. [PMID: 37406637 DOI: 10.1088/1361-6560/ace49c] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/05/2023] [Indexed: 07/07/2023]
Abstract
Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction method that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction is not currently feasible because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization into block iteration and sequential learning of an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved PET image quality by reducing statistical noise and better preserving the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicate that the proposed method can produce high-quality images without a prior training dataset and could thus be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
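The RDP term added to the loss is itself a short formula; a numpy sketch over 4-neighbor pairs of a 2D image, with gamma as the usual edge-preservation parameter (values and boundary handling are illustrative):

    import numpy as np

    def rdp(x, gamma=2.0, eps=1e-8):
        """Relative difference penalty, summed once over each 4-neighbor
        pair of a non-negative 2D image x."""
        total = 0.0
        for shift, axis in [(1, 0), (1, 1)]:      # down and right neighbors
            xn = np.roll(x, shift, axis=axis)     # wraps at borders (sketch)
            d = x - xn
            total += np.sum(d ** 2 / (x + xn + gamma * np.abs(d) + eps))
        return total

    penalty = rdp(np.random.rand(64, 64) + 0.1)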
Collapse
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| | - Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
| | - Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
| | - Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| | - Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| |
Collapse
|
33
|
Sohn JH, Behr SC, Hernandez PM, Seo Y. Quantitative Assessment of Myocardial Ischemia With Positron Emission Tomography. J Thorac Imaging 2023; 38:247-259. [PMID: 33492046 PMCID: PMC8295411 DOI: 10.1097/rti.0000000000000579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Recent advances in positron emission tomography (PET) technology and reconstruction techniques have now made quantitative assessment using cardiac PET readily available in most cardiac PET imaging centers. Multiple PET myocardial perfusion imaging (MPI) radiopharmaceuticals are available for quantitative examination of myocardial ischemia, each with a distinct convenience and accuracy profile. Important properties of these radiopharmaceuticals (15O-water, 13N-ammonia, 82Rb, 11C-acetate, and 18F-flurpiridaz), including radionuclide half-life, mean positron range in tissue, and the relationship between kinetic parameters and myocardial blood flow (MBF), are presented. Absolute quantification of MBF requires PET MPI to be performed with protocols that allow the generation of dynamic multiframes of reconstructed data. Using a tissue compartment model, the rate constant that governs the rate of PET MPI radiopharmaceutical extraction from the blood plasma to myocardial tissue is calculated. This rate constant (K1) is then converted to MBF using an established extraction formula for each radiopharmaceutical. As most modern PET scanners acquire data only in list mode, techniques for processing list-mode data into dynamic multiframes are also reviewed. Finally, the impact of modern PET technologies such as PET/CT, PET/MR, total-body PET, and machine learning/deep learning on comprehensive and quantitative assessment of myocardial ischemia is briefly described in this review.
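The K1 estimation step described here reduces to fitting a convolution model to the tissue curve. A self-contained sketch with synthetic curves; the input function is a toy assumption, and the tracer-specific extraction-fraction formula that converts K1 to MBF is omitted:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.05, 10.0, 200)              # minutes
    cp = 8.0 * t * np.exp(-1.5 * t)               # toy arterial input function

    def one_tissue(t, k1, k2):
        """One-tissue model: C_T(t) = K1 * exp(-k2*t) convolved with Cp(t)."""
        dt = t[1] - t[0]
        return k1 * np.convolve(cp, np.exp(-k2 * t))[:len(t)] * dt

    ct = one_tissue(t, 0.9, 0.3)                  # noise-free tissue curve
    noisy = ct + 0.01 * np.random.randn(len(t))   # measured ROI curve
    (k1, k2), _ = curve_fit(one_tissue, t, noisy, p0=(0.5, 0.1))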
Collapse
Affiliation(s)
- Jae Ho Sohn
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
| | - Spencer C. Behr
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
| | | | - Youngho Seo
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
- Department of Radiation Oncology, University of California, San Francisco, CA
- UC Berkeley-UCSF Graduate Program in Bioengineering, Berkeley and San Francisco, CA
| |
Collapse
|
34
|
Wang YRJ, Wang P, Adams LC, Sheybani ND, Qu L, Sarrami AH, Theruvath AJ, Gatidis S, Ho T, Zhou Q, Pribnow A, Thakor AS, Rubin D, Daldrup-Link HE. Low-count whole-body PET/MRI restoration: an evaluation of dose reduction spectrum and five state-of-the-art artificial intelligence models. Eur J Nucl Med Mol Imaging 2023; 50:1337-1350. [PMID: 36633614 PMCID: PMC10387227 DOI: 10.1007/s00259-022-06097-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 12/24/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE To provide a holistic and complete comparison of the five most advanced AI models for the augmentation of low-dose 18F-FDG PET data over the entire dose-reduction spectrum. METHODS In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six count levels representing simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) fractions of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - to ensure that the findings are generalizable. A total of 476 original-count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. SwinIR's and U-Net's performances were additionally evaluated separately at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%. CONCLUSION Compared to low-count PET images, which are near-to or nondiagnostic at higher dose-reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard dose is out of scope for current AI techniques.
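Simulated dose levels of this kind are commonly produced by count thinning: keeping each recorded count with probability equal to the dose fraction, which preserves Poisson statistics. A short numpy sketch for the 6.25% level (the stand-in sinogram is an assumption, not the study's data):

    import numpy as np

    rng = np.random.default_rng(2)
    full_dose = rng.poisson(40.0, size=(128, 160))    # stand-in count data
    low_dose = rng.binomial(full_dose, 0.0625)        # 6.25% dose level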
Collapse
Affiliation(s)
- Yan-Ran Joyce Wang
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA.
| | - Pengcheng Wang
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
| | - Lisa Christine Adams
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Natasha Diba Sheybani
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
| | - Liangqiong Qu
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
| | - Amir Hossein Sarrami
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Ashok Joseph Theruvath
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
| | - Tina Ho
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Quan Zhou
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
| | - Allison Pribnow
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Avnesh S Thakor
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
| | - Heike E Daldrup-Link
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA.
| |
Collapse
|
35
|
Li S, Peng L, Li F, Liang Z. Low-dose sinogram restoration enabled by conditional GAN with cross-domain regularization in SPECT imaging. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:9728-9758. [PMID: 37322909 DOI: 10.3934/mbe.2023427] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
In order to generate high-quality single-photon emission computed tomography (SPECT) images under a low-dose acquisition mode, a sinogram denoising method was studied for suppressing random oscillation and enhancing contrast in the projection domain. A conditional generative adversarial network with cross-domain regularization (CGAN-CDR) is proposed for low-dose SPECT sinogram restoration. The generator stepwise extracts multiscale sinusoidal features from a low-dose sinogram, which are then rebuilt into a restored sinogram. Long skip connections are introduced into the generator, so that the low-level features can be better shared and reused, and the spatial and angular sinogram information can be better recovered. A patch discriminator is employed to capture detailed sinusoidal features within sinogram patches; thereby, detailed features in local receptive fields can be effectively characterized. Meanwhile, a cross-domain regularization is developed in both the projection and image domains. Projection-domain regularization directly constrains the generator by penalizing the difference between generated and label sinograms. Image-domain regularization imposes a similarity constraint on the reconstructed images, which can ameliorate the issue of ill-posedness and serves as an indirect constraint on the generator. By adversarial learning, the CGAN-CDR model can achieve high-quality sinogram restoration. Finally, the preconditioned alternating projection algorithm with total variation regularization is adopted for image reconstruction. Extensive numerical experiments show that the proposed model exhibits good performance in low-dose sinogram restoration. From visual analysis, CGAN-CDR performs well in terms of noise and artifact suppression, contrast enhancement, and structure preservation, particularly in low-contrast regions. From quantitative analysis, CGAN-CDR has obtained superior results in both global and local image quality metrics. From robustness analysis, CGAN-CDR can better recover the detailed bone structure of the reconstructed image for a higher-noise sinogram. This work demonstrates the feasibility and effectiveness of CGAN-CDR in low-dose SPECT sinogram restoration. CGAN-CDR can yield significant quality improvement in both projection and image domains, which enables potential applications of the proposed method in real low-dose studies.
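The cross-domain idea, penalizing the generator in both projection and image space, can be sketched generically in PyTorch. The fixed linear "backprojection" below is a loudly labeled stand-in for the paper's actual reconstruction algorithm, used only to keep the image-domain term differentiable:

    import torch
    import torch.nn.functional as F

    def cross_domain_loss(sino_gen, sino_ref, recon_op, lam=0.5):
        """Projection-domain fidelity plus an image-domain term computed
        through a differentiable reconstruction operator."""
        proj = F.l1_loss(sino_gen, sino_ref)
        img = F.l1_loss(recon_op(sino_gen), recon_op(sino_ref))
        return proj + lam * img

    # Stand-in differentiable reconstruction: a fixed linear backprojection.
    A = torch.randn(32 * 32, 45 * 64)
    recon = lambda s: (A @ s.reshape(-1, 45 * 64, 1)).reshape(-1, 1, 32, 32)
    loss = cross_domain_loss(torch.rand(1, 1, 45, 64),
                             torch.rand(1, 1, 45, 64), recon)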
Collapse
Affiliation(s)
- Si Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
| | - Limei Peng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
| | - Fenghuan Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
| | - Zengguo Liang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
| |
Collapse
|
36
|
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112 DOI: 10.1016/j.semcancer.2023.03.005] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 02/28/2023] [Accepted: 03/07/2023] [Indexed: 03/11/2023]
Abstract
Owing to its ability to reveal the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been employed for diagnosis and monitoring in numerous types of malignant disease. However, insufficient image quality, the lack of convincing evaluation tools, and intra- and interobserver variability are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. In this review, we provide an overview of the applications of AI in PET imaging, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to outline possible future developments.
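As a flavor of what radiomics extraction looks like in practice, a minimal sketch of a few first-order features over a segmented ROI; dedicated libraries such as pyradiomics compute hundreds, including shape and texture features, and the volume and segmentation here are random stand-ins:

    import numpy as np

    def first_order_features(volume, mask):
        """Mean, maximum, and intensity entropy of the voxels in an ROI."""
        vox = volume[mask]
        hist, _ = np.histogram(vox, bins=32)
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return {"mean": float(vox.mean()),
                "max": float(vox.max()),
                "entropy": float(entropy)}

    vol = np.random.rand(32, 32, 32)            # stand-in SUV volume
    feats = first_order_features(vol, vol > 0.8)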
Affiliation(s)
- Jiaona Dai, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu, School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen, Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Rong Tian, Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
37
Zhu Y, Lyu Z, Lu W, Liu Y, Ma T. Fast and Accurate Gamma Imaging System Calibration Based on Deep Denoising Networks and Self-Adaptive Data Clustering. Sensors (Basel) 2023; 23:2689. [PMID: 36904898] [PMCID: PMC10007588] [DOI: 10.3390/s23052689]
Abstract
Gamma imagers play a key role in both industrial and medical applications. Modern gamma imagers typically employ iterative reconstruction methods in which the system matrix (SM) is a key component for obtaining high-quality images. An accurate SM can be acquired through an experimental calibration step with a point source scanned across the field of view (FOV), but at the cost of a long calibration time to suppress noise, posing challenges to real-world applications. In this work, we propose a time-efficient SM calibration approach for a 4π-view gamma imager that combines a short-time measured SM with deep-learning-based denoising. The key steps are decomposing the SM into multiple detector response function (DRF) images, categorizing the DRFs into multiple groups with a self-adaptive K-means clustering method to address sensitivity discrepancies, and independently training a separate denoising deep network for each DRF group. We investigate two denoising networks and compare them against a conventional Gaussian filtering method. The results demonstrate that the SM denoised with deep networks faithfully yields imaging performance comparable to that of the long-time measured SM. The SM calibration time is reduced from 1.4 h to 8 min. We conclude that the proposed SM denoising approach is promising and effective for enhancing the productivity of the 4π-view gamma imager, and that it is also generally applicable to other imaging systems that require an experimental calibration step.
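As a rough illustration of the grouping step, the sketch below clusters DRF images by a total-count sensitivity proxy; the silhouette-based choice of K and the sensitivity proxy are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: group DRF images by sensitivity before per-group denoising.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def group_drfs(drf_images, k_range=range(2, 8)):
    # Total counts serve as a simple proxy for detector sensitivity.
    sensitivity = np.array([img.sum() for img in drf_images]).reshape(-1, 1)
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(sensitivity)
        score = silhouette_score(sensitivity, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    # One denoising network would then be trained per group.
    return best_k, best_labels
```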
Affiliation(s)
- Yihang Zhu, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China; Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
- Zhenlei Lyu, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China; Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
- Wenzhuo Lu, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
- Yaqiang Liu, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
- Tianyu Ma, Department of Engineering Physics, Tsinghua University, Beijing 100084, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China; Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
38
Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Trans Med Imaging 2023; 42:785-796. [PMID: 36288234] [PMCID: PMC10081957] [DOI: 10.1109/tmi.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information into the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improving the kernel method would be to add explicit regularization, which, however, leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model with a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step that updates the image from the projection data, and a deep-learning step in the image domain that updates the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulations and real patient data demonstrate that the neural KEM can outperform existing KEM and deep-image-prior methods.
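The alternation described above can be made concrete with a small sketch; here A (system matrix) and K (kernel matrix) are dense arrays for readability, and the deep-learning step is only indicated in a comment. This is an editorial illustration under those assumptions, not the authors' implementation.

```python
# Sketch of the KEM half of one neural-KEM iteration.
import numpy as np

def kem_step(alpha, A, K, y, eps=1e-12):
    # Standard EM update in the kernelized model y ~ Poisson(A K alpha).
    AK = A @ K
    sens = AK.sum(axis=0) + eps        # sensitivity term (A K)^T 1
    ratio = y / (AK @ alpha + eps)     # measured / expected counts
    return alpha * (AK.T @ ratio) / sens

# The image-domain step would then fit a convolutional network f_theta(z)
# to the updated alpha and substitute the network output back, supplying
# the implicit regularization of the deep coefficient prior.
```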
39
O'Briain TB, Uribe C, Sechopoulos I, Michel C, Bazalova-Carter M. Publicly available framework for simulating and experimentally validating clinical PET systems. Med Phys 2023; 50:1549-1559. [PMID: 36215081] [DOI: 10.1002/mp.16032]
Abstract
BACKGROUND Monte Carlo (MC) simulations are a powerful tool for modeling medical imaging systems. However, before simulations can be considered ground truth, they must be validated against experiments. PURPOSE To provide a pipeline that models a clinical positron emission tomography (PET)/CT system using MC simulations, extensively validated against experimental measurements. METHODS A clinical four-ring PET imaging system was modeled using the Geant4 application for tomographic emission (GATE, v. 9.0). To validate the simulations, PET images of a cylindrical phantom, a point source, and an image quality phantom were acquired with the physical system, and the corresponding experimental procedures were simulated. To validate the quantification capabilities and image quality provided by the simulation pipeline, the simulations were compared against the measurements in terms of count rates and sensitivity, as well as image uniformity, resolution, recovery coefficients (RCs), coefficients of variation, contrast, and background variability. RESULTS Compared with the measured data, the number of true detections in the MC simulations was within 5%. The scatter fraction was 30.0% ± 2.2% in the measured scans and 28.8% ± 1.7% in the simulated scans. Analyzing the measured and simulated sinograms, the sensitivities were 8.2 and 7.8 cps/kBq, respectively. The fraction of random coincidences was 19% in the measured data and 25% in the simulation. Within the axial slices, the measured image exhibited a uniformity of 0.015 ± 0.005, whereas the simulated image had a uniformity of 0.029 ± 0.011; in the axial direction, the uniformity was 0.024 ± 0.006 and 0.040 ± 0.015 for the measured and simulated data, respectively. Comparing image resolution, an average percentage difference of 2.9% was found between measurements and simulations. The RCs calculated in both the measured and simulated images were within the EARL ranges, except for the smallest sphere in the simulation. The coefficients of variation for the measured and simulated images were 12% and 13%, respectively. Lastly, the background variability was consistent between measurements and simulations, whereas the average percentage difference in sphere contrasts was 8.8%. CONCLUSION The clinical PET/CT system was modeled and validated to provide a simulation pipeline for the community. The pipeline and the validation procedures have been made available (https://github.com/teaghan/PET_MonteCarlo).
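Two of the compared metrics are simple to state; the ROI-mask formulation below is a simplified stand-in for the NEMA/EARL procedures, not the exact protocol used in the paper.

```python
# Simplified stand-ins for two reported metrics, given a reconstructed
# image (numpy array) and boolean ROI masks of the same shape.
import numpy as np

def recovery_coefficient(image, sphere_mask, true_activity):
    # RC: mean measured activity in the sphere ROI over the known activity.
    return image[sphere_mask].mean() / true_activity

def coefficient_of_variation(image, background_mask):
    # CoV: relative dispersion of activity in the background ROI.
    bg = image[background_mask]
    return bg.std() / bg.mean()
```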
Affiliation(s)
- Teaghan B O'Briain, Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia, Canada
- Carlos Uribe, Functional Imaging Department, BC Cancer, Vancouver, British Columbia, Canada
- Ioannis Sechopoulos, Department of Medical Imaging, Radboud University Medical Centre, Nijmegen, The Netherlands; Technical Medical Centre, University of Twente, Enschede, The Netherlands
40
Li Y, Hu J, Sari H, Xue S, Ma R, Kandarpa S, Visvikis D, Rominger A, Liu H, Shi K. A deep neural network for parametric image reconstruction on a large axial field-of-view PET. Eur J Nucl Med Mol Imaging 2023; 50:701-714. [PMID: 36326869] [DOI: 10.1007/s00259-022-06003-4]
Abstract
PURPOSE PET scanners with a long axial field of view (AFOV), having roughly 20 times higher sensitivity than conventional scanners, provide new opportunities for enhanced parametric imaging but suffer from the dramatically increased volume and complexity of dynamic data. This study reconstructed high-quality direct Patlak Ki images from five-frame sinograms without an input function, using a deep-learning framework based on DeepPET, to explore the potential of artificial intelligence to reduce the acquisition time and the dependence on an input function in parametric imaging. METHODS This study was implemented on a large-AFOV PET/CT scanner (Biograph Vision Quadra), and twenty patients were recruited for 18F-fluorodeoxyglucose (18F-FDG) dynamic scans. During training and testing of the proposed deep-learning framework, the last five-frame (25 min, 40-65 min post-injection) sinograms were used as input, and the Patlak Ki images reconstructed by the vendor's nested EM algorithm were used as ground truth. To evaluate the quality of the predicted Ki images, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were calculated. Meanwhile, a linear regression was applied between predicted and true Ki means on avid malignant lesions and tumor volumes of interest (VOIs). RESULTS In the testing phase, the proposed method achieved an excellent MSE of less than 0.03%, a high SSIM of ~0.98, and a PSNR of ~38 dB. Moreover, there was a high correlation (DeepPET: R² = 0.73; self-attention DeepPET: R² = 0.82) between predicted Ki and traditionally reconstructed Patlak Ki means over eleven lesions. CONCLUSIONS The results show that the deep-learning-based method produced high-quality parametric images from a few frames of projection data without an input function. It has much potential to address the long scan times and input-function dependency that still hamper the clinical translation of dynamic PET.
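For context, the Patlak graphical model underlying the Ki ground truth is the standard textbook relation below; the notation is ours, not drawn from the paper.

```latex
% Standard Patlak model: for times t > t^* after tracer equilibration,
\[
  C_T(t) = K_i \int_0^{t} C_p(\tau)\,\mathrm{d}\tau + V_b\, C_p(t),
\]
% so plotting C_T(t)/C_p(t) against \(\int_0^t C_p(\tau)\,d\tau / C_p(t)\)
% yields a line whose slope is the net influx rate K_i.
```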
Affiliation(s)
- Y Li, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China; College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- J Hu, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- H Sari, Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- S Xue, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- R Ma, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of Engineering Physics, Tsinghua University, Beijing, China
- S Kandarpa, LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- D Visvikis, LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- A Rominger, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- H Liu, College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- K Shi, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
41
Zeng F, Fang J, Muhashi A, Liu H. Direct reconstruction for simultaneous dual-tracer PET imaging based on multi-task learning. EJNMMI Res 2023; 13:7. [PMID: 36719532] [PMCID: PMC9889598] [DOI: 10.1186/s13550-023-00955-w]
Abstract
BACKGROUND Simultaneous dual-tracer positron emission tomography (PET) imaging can observe two molecular targets in a single scan, which is conducive to disease diagnosis and tracking. Since the signals emitted by different tracers are identical, it is crucial to separate each single tracer from the mixed signals. The current study proposes a novel deep-learning-based method to reconstruct single-tracer activity distributions from the dual-tracer sinogram. METHODS We propose the Multi-task CNN, a three-dimensional convolutional neural network (CNN) based on a multi-task learning framework. One common encoder extracts features from the dual-tracer dynamic sinogram, followed by two distinct and parallel decoders that separately reconstruct the single-tracer dynamic images of the two tracers. The model was evaluated by the mean squared error (MSE), multiscale structural similarity (MS-SSIM) index, and peak signal-to-noise ratio (PSNR) on simulated data and real animal data, and compared with a deep-learning-based filtered back-projection method (FBP-CNN). RESULTS In the simulation experiments, the Multi-task CNN reconstructed single-tracer images with lower MSE and higher MS-SSIM and PSNR than FBP-CNN, and it was more robust to changes in individual differences, tracer combinations, and scanning protocols. In the experiment on rats with an orthotopic xenograft glioma model, the Multi-task CNN reconstructions also showed higher quality than the FBP-CNN reconstructions. CONCLUSIONS The proposed Multi-task CNN can effectively reconstruct the dynamic activity images of two single tracers from a dual-tracer dynamic sinogram, showing potential for the direct reconstruction of real simultaneous dual-tracer PET data in the future.
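The shared-encoder/dual-decoder layout can be sketched in a few lines of PyTorch; the layer count and channel width below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a one-encoder, two-decoder network for dual-tracer
# separation; input is a 3D dynamic sinogram tensor of shape (N, 1, D, H, W).
import torch
import torch.nn as nn

class DualTracerNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # One common encoder extracts features from the mixed sinogram.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        # Two parallel decoders reconstruct one tracer each.
        def decoder():
            return nn.Sequential(
                nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv3d(ch, 1, 3, padding=1))
        self.decoder_a, self.decoder_b = decoder(), decoder()

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder_a(z), self.decoder_b(z)
```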
Affiliation(s)
- Fuzhen Zeng, State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Jingwan Fang, State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Amanjule Muhashi, State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Huafeng Liu, State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
42
Zhao W, Fan Y, Wang H, Gemmeke H, van Dongen KWA, Hopp T, Hesser J. Simulation-to-real generalization for deep-learning-based refraction-corrected ultrasound tomography image reconstruction. Phys Med Biol 2023; 68. [PMID: 36577143] [DOI: 10.1088/1361-6560/acaeed]
Abstract
Objective. Image reconstruction in ultrasound computed tomography is computationally expensive with conventional iterative methods. Fully learned direct deep-learning reconstruction promises to speed up image reconstruction significantly. However, for direct reconstruction from measurement data, the lack of real labeled data means that the neural network is usually trained on a simulation dataset, and it then shows poor performance on real data because of the simulation-to-real gap. Approach. To improve the simulation-to-real generalization of neural networks, a series of strategies is developed, including a Fourier-transform-integrated neural network, measurement-domain data augmentation methods, and a self-supervised-learning-based patch-wise preprocessing neural network. Our strategies are evaluated on both the simulation dataset and real measurement datasets from two different prototype machines. Main results. The experimental results show that our deep-learning methods improve the networks' robustness against noise and their generalizability to real measurement data. Significance. Our methods show that it is possible for neural networks to outperform traditional iterative reconstruction algorithms in imaging quality while allowing real-time 2D image reconstruction. This study helps pave the way for applying deep-learning methods to practical ultrasound tomography image reconstruction based on simulation datasets.
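As one concrete example of measurement-domain augmentation, the sketch below perturbs measured waveforms with additive noise and per-receiver gain jitter; these specific perturbations are our assumptions, not the paper's list of methods.

```python
# Hedged sketch of measurement-domain data augmentation for waveforms.
import numpy as np

def augment_measurements(data, rng, noise_level=0.01, gain_jitter=0.05):
    # data: (n_receivers, n_samples) array of measured waveforms.
    # Additive Gaussian noise scaled to the peak amplitude.
    noisy = data + rng.normal(0.0, noise_level * np.abs(data).max(), data.shape)
    # Random per-receiver gain variation.
    gains = 1.0 + rng.uniform(-gain_jitter, gain_jitter, (data.shape[0], 1))
    return noisy * gains

# Usage: augment_measurements(waveforms, np.random.default_rng(0))
```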
Affiliation(s)
- Wenzhao Zhao, Interdisciplinary Center for Scientific Computing (IWR), Central Institute for Computer Engineering (ZITI), Mannheim Institute for Intelligent Systems in Medicine (MIISM), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Yuling Fan, Interdisciplinary Center for Scientific Computing (IWR), Central Institute for Computer Engineering (ZITI), Mannheim Institute for Intelligent Systems in Medicine (MIISM), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
- Hongjian Wang, School of Computer Science and Technology, Donghua University, 2999 North Renmin Road, 201620 Shanghai, People's Republic of China
- Hartmut Gemmeke, Institute for Data Processing and Electronics, Karlsruhe Institute of Technology (KIT), Campus Nord, P.O. Box 3640, D-76021 Karlsruhe, Germany
- Koen W A van Dongen, Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands
- Torsten Hopp, Institute for Data Processing and Electronics, Karlsruhe Institute of Technology (KIT), Campus Nord, P.O. Box 3640, D-76021 Karlsruhe, Germany
- Jürgen Hesser, Interdisciplinary Center for Scientific Computing (IWR), Central Institute for Computer Engineering (ZITI), CZS Heidelberg Center for Model-Based AI, Mannheim Institute for Intelligent Systems in Medicine (MIISM), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim, Germany
43
Hansen TM, Mosegaard K, Holm S, Andersen FL, Fischer BM, Hansen AE. Probabilistic deconvolution of PET images using informed priors. Front Nucl Med 2023; 2:1028928. [PMID: 39381407] [PMCID: PMC11459987] [DOI: 10.3389/fnume.2022.1028928]
Abstract
Purpose We present a probabilistic approach to medical image analysis that requires, and makes use of, explicit prior information provided by a medical expert. Depending on the choice of prior model, the method can be used for image enhancement, analysis, and segmentation. Methods The methodology is based on a probabilistic approach to medical image analysis that allows integration into a posterior probability density of (1) arbitrarily complex prior information (for which realizations can be generated), (2) information about the convolution operator of the imaging system, and (3) information about the noise in the reconstructed image. The method was demonstrated on positron emission tomography (PET) images obtained from a phantom and from a patient with lung cancer. The likelihood model (multivariate log-normal) and the convolution operator were derived from phantom data. Two examples of prior information were used to show the potential of the method. The extended Metropolis-Hastings algorithm, a Markov chain Monte Carlo method, was used to generate realizations of the posterior distribution of the tracer activity concentration. Results A set of realizations from the posterior formed the basis of a quantitative PET image analysis. The mean and variance of activity concentrations were computed, as well as the probability of high tracer uptake and statistics on the size and activity concentration of high-uptake regions. For both phantom and in vivo images, the estimated images of mean activity concentration appeared to have reduced noise levels and a sharper outline of high-activity regions compared with the original PET images. The estimated variance of activity concentration was high at the edges of high-activity regions. Conclusions The methodology provides a probabilistic approach to medical image analysis that explicitly takes medical expert knowledge into account as prior information. These first results indicate the potential of the method to improve the detection of small lesions. The methodology allows a probabilistic measure of the size and activity level of high-uptake regions, with possible long-term perspectives for early detection of cancer, as well as treatment planning and follow-up.
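A bare-bones random-walk Metropolis-Hastings sampler of the kind described is sketched below, with a Gaussian likelihood and a generic log-prior standing in for the paper's multivariate log-normal likelihood and expert-informed prior; all parameters are illustrative.

```python
# Hedged sketch: posterior sampling of a tracer activity image.
import numpy as np

def metropolis_hastings(y, conv, log_prior, x0, n_iter=10000,
                        step=0.05, noise_sigma=0.1, rng=None):
    # y: observed (reconstructed) image; conv: convolution operator of
    # the imaging system; log_prior: callable returning log p(x).
    rng = rng if rng is not None else np.random.default_rng()

    def log_post(x):
        resid = y - conv(x)
        return -0.5 * np.sum(resid ** 2) / noise_sigma ** 2 + log_prior(x)

    x, lp = x0.copy(), log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    # Realizations are used downstream for mean, variance, and
    # uptake-probability maps (burn-in removal omitted for brevity).
    return np.array(samples)
```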
Affiliation(s)
- Klaus Mosegaard, Physics of Ice, Climate and Earth, Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
- Søren Holm, Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Flemming Littrup Andersen, Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Barbara Malene Fischer, Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark; Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Adam Espe Hansen, Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark; Department of Radiology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
44
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830] [PMCID: PMC9703111] [DOI: 10.21037/qims-22-116]
Abstract
BACKGROUND Lowering the dose in positron emission tomography (PET) imaging reduces patients' radiation burden but degrades image quality by increasing noise and reducing imaging detail and quantification accuracy. This paper introduces a method for acquiring high-quality PET images from an ultra-low-dose state to achieve both high-quality images and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporates the advantages of the PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, a structural loss, and a bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data from axial head scans of 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR), and the improvement ratios of these four metrics were used to compare the images produced by bi-c-GAN with those of other methods. RESULTS In four-fold cross-validation, the proposed bi-c-GAN outperformed the three other selected methods (U-net, c-GAN, and multiple-input c-GAN). With 5% low-dose PET, image quality with bi-c-GAN was higher than that of the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In hold-out validation, bi-c-GAN improved image quality compared with U-net and c-GAN in both 2.5% and 10% low-dose PET; for example, the PSNR improvement using bi-c-GAN was at least 4.46% in the 2.5% low-dose PET and up to 14.88% in the 10% low-dose PET. Visual examples also showed the higher quality of the images generated by the proposed method, demonstrating the denoising and enhancement ability of bi-c-GAN. CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
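A hedged sketch of a combined loss in this spirit follows; the gradient-based structural proxy and the weights are our assumptions rather than the paper's exact definitions.

```python
# Sketch of an MAE + structural + bias combined loss for 4D tensors (N,C,H,W).
import torch
import torch.nn.functional as F

def combined_loss(pred, target, w_mae=1.0, w_struct=1.0, w_bias=0.1):
    # Mean absolute error between synthesized and full-dose images.
    mae = F.l1_loss(pred, target)
    # Structural proxy: match finite-difference image gradients.
    dx_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_t = target[..., 1:, :] - target[..., :-1, :]
    dy_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_t = target[..., :, 1:] - target[..., :, :-1]
    struct = F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)
    # Bias term: penalize a global offset in mean activity.
    bias = (pred.mean() - target.mean()).abs()
    return w_mae * mae + w_struct * struct + w_bias * bias
```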
Affiliation(s)
- Hanyu Sun, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongluo Jiang, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianmin Yuan, Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Haining Wang, United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Dong Liang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Fan, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Na Zhang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
45
Li S, Wang G. Deep Kernel Representation for Image Reconstruction in PET. IEEE Trans Med Imaging 2022; 41:3029-3038. [PMID: 35584077] [PMCID: PMC9613528] [DOI: 10.1109/tmi.2022.3176002]
Abstract
Image reconstruction for positron emission tomography (PET) is challenging because of the ill-conditioned tomographic problem and low counting statistics. Kernel methods address this challenge by using a kernel representation to incorporate image prior information into the forward model of iterative PET image reconstruction. Existing kernel methods commonly construct the kernels through an empirical process, which may lead to unsatisfactory performance. In this paper, we describe the equivalence between the kernel representation and a trainable neural network model. A deep kernel method is then proposed that exploits a deep neural network to enable automated learning of an improved kernel model; it is directly applicable to single subjects in dynamic PET. The training process utilizes available image prior data to form a set of robust kernels in an optimized way rather than empirically. Results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform the existing kernel method and a neural network method for dynamic PET image reconstruction.
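For reference, the empirical construction that such a deep kernel method replaces is typically a k-nearest-neighbor Gaussian kernel built from prior feature vectors, as in this sketch (dense matrix for readability; all parameters illustrative).

```python
# Sketch of a conventional KNN Gaussian kernel from prior features.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_kernel(features, k=48, sigma=1.0):
    # features: (n_voxels, n_features) built from prior images.
    nn = NearestNeighbors(n_neighbors=k).fit(features)
    dist, idx = nn.kneighbors(features)
    K = np.zeros((features.shape[0], features.shape[0]))
    for j in range(features.shape[0]):
        K[j, idx[j]] = np.exp(-dist[j] ** 2 / (2 * sigma ** 2))
    K /= K.sum(axis=1, keepdims=True)  # row-normalize
    # The image is represented as x = K @ alpha in the kernelized
    # forward model; sparse storage would be used in practice.
    return K
```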
46
Qu X, Ren C, Yan G, Zheng D, Tang W, Wang S, Lin H, Zhang J, Jiang J. Deep-Learning-Based Ultrasound Sound-Speed Tomography Reconstruction with Tikhonov Pseudo-Inverse Priori. Ultrasound Med Biol 2022; 48:2079-2094. [PMID: 35922265] [PMCID: PMC10448397] [DOI: 10.1016/j.ultrasmedbio.2022.05.033]
Abstract
Ultrasound sound-speed tomography (USST) is a promising technology for breast imaging and breast cancer detection. Its reconstruction is a complex non-linear mapping from the projection data to the sound-speed image (SSI). The traditional reconstruction methods include mainly the ray-based methods and the waveform-based methods. The ray-based methods with linear approximation have low computational cost but low reconstruction quality; the full wave-based methods with the complex non-linear model have high quality but high cost. To achieve both high quality and low cost, we introduced traditional linear approximation as prior knowledge into a deep neural network and treated the complex non-linear mapping of USST reconstruction as a combination of linear mapping and non-linear mapping. In the proposed method, the linear mapping was seamlessly implemented with a fully connected layer and initialized using the Tikhonov pseudo-inverse matrix. The non-linear mapping was implemented using a U-shape Net (U-Net). Furthermore, we proposed the Tikhonov U-shape net (TU-Net), in which the linear mapping was done before the non-linear mapping, and the U-shape Tikhonov net (UT-Net), in which the non-linear mapping was done before the linear mapping. Moreover, we conducted simulations and experiments for evaluation. In the numerical simulation, the root-mean-squared error was 6.49 and 4.29 m/s for the UT-Net and TU-Net, the peak signal-to-noise ratio was 49.01 and 52.90 dB, the structural similarity was 0.9436 and 0.9761 and the reconstruction time was 10.8 and 11.3 ms, respectively. In this study, the SSIs obtained with the proposed methods exhibited high sound-speed accuracy. Both the UT-Net and the TU-Net achieved high quality and low computational cost.
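The Tikhonov initialization of the linear stage can be sketched directly; the forward operator A and the regularization weight below are assumptions introduced for illustration.

```python
# Sketch: initialize a fully connected layer with the Tikhonov
# pseudo-inverse of a linearized forward operator.
import numpy as np
import torch
import torch.nn as nn

def tikhonov_linear_layer(A, lam=1e-2):
    # A: (m, n) linearized forward operator mapping an image (n values)
    # to projection data (m values).
    m, n = A.shape
    pinv = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # (n, m)
    layer = nn.Linear(m, n, bias=False)
    with torch.no_grad():
        layer.weight.copy_(torch.from_numpy(pinv).float())
    # In a TU-Net-style design this layer precedes the U-shaped
    # non-linear stage; in a UT-Net-style design it follows it.
    return layer
```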
Affiliation(s)
- Xiaolei Qu, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Chujian Ren, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Guo Yan, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Dezhi Zheng, Research Institute for Frontier Science, Beihang University, Beijing, China
- Wenzhong Tang, School of Computer Science and Engineering, Beihang University, Beijing, China
- Shuai Wang, Research Institute for Frontier Science, Beihang University, Beijing, China
- Hongxiang Lin, Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Jingya Zhang, School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jue Jiang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
47
Rahman AU, Nemallapudi MV, Chou CY, Lin CH, Lee SC. Direct mapping from PET coincidence data to proton-dose and positron activity using a deep learning approach. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8af5]
Abstract
Objective. Obtaining the intrinsic dose distributions in particle therapy is a challenging problem that needs to be addressed by imaging algorithms that take advantage of secondary particle detectors. In this work, we investigate the utility of deep-learning methods for achieving a direct mapping from detector data to the intrinsic dose distribution. Approach. We performed Monte Carlo simulations using the GATE/Geant4 10.4 simulation toolkit to generate a dataset from a human CT phantom irradiated with high-energy protons and imaged with compact in-beam PET under realistic beam delivery in a single fraction (~2 Gy). We developed a neural network model based on conditional generative adversarial networks to generate dose maps conditioned on coincidence distributions in the detector. Model performance is evaluated by the mean relative error, the absolute dose fraction difference, and the shift in Bragg peak position. Main results. The relative deviations in the dose and range of the distributions predicted by the model from the true values for mono-energetic irradiation between 50 and 122 MeV lie within 1% and 2%, respectively. This was achieved using 10^5 coincidences acquired five minutes after irradiation. The relative deviations in dose and range for spread-out Bragg peak distributions were within 1% and 2.6%, respectively. Significance. An important aspect of this study is the demonstration of a method for direct mapping from detector counts to the dose domain using the low-count data of compact detectors suited for practical implementation in particle therapy. Including additional prior information in the future could further expand the scope of our model and extend its application to other areas of medical imaging.
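One of the evaluation metrics above, the Bragg peak shift, reduces to an argmax comparison under a common simplification; the paper's exact definition may differ.

```python
# Simplified helper: shift in Bragg peak position between a predicted
# and a reference depth-dose curve, in millimetres.
import numpy as np

def bragg_peak_shift(dose_pred, dose_ref, voxel_mm=1.0):
    return (np.argmax(dose_pred) - np.argmax(dose_ref)) * voxel_mm
```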
48
Schwenck J, Kneilling M, Riksen NP, la Fougère C, Mulder DJ, Slart RJHA, Aarntzen EHJG. A role for artificial intelligence in molecular imaging of infection and inflammation. Eur J Hybrid Imaging 2022; 6:17. [PMID: 36045228] [PMCID: PMC9433558] [DOI: 10.1186/s41824-022-00138-1]
Abstract
The detection of occult infections and low-grade inflammation in clinical practice remains challenging and depends heavily on readers' expertise. Although molecular imaging, such as [18F]FDG PET or radiolabeled leukocyte scintigraphy, offers quantitative and reproducible whole-body data on inflammatory responses, its interpretation is limited to visual analysis. This often leads to delayed diagnosis and treatment, as well as untapped areas of potential application. Artificial intelligence (AI) offers innovative approaches for mining this wealth of imaging data and has already led to disruptive breakthroughs in other medical domains. Here, we discuss how AI-based tools can improve the detection sensitivity of molecular imaging in infection and inflammation, and also how AI might push data analysis beyond current applications toward outcome prediction and long-term risk assessment.
49
Ma R, Hu J, Sari H, Xue S, Mingels C, Viscione M, Kandarpa VSS, Li WB, Visvikis D, Qiu R, Rominger A, Li J, Shi K. An encoder-decoder network for direct image reconstruction on sinograms of a long axial field of view PET. Eur J Nucl Med Mol Imaging 2022; 49:4464-4477. [PMID: 35819497] [DOI: 10.1007/s00259-022-05861-2]
Abstract
PURPOSE Deep learning is an emerging reconstruction method for positron emission tomography (PET) that can tackle complex PET corrections in an integrated procedure. This paper optimizes direct PET reconstruction from sinograms on a long axial field-of-view (LAFOV) PET system. METHODS This paper proposes a novel deep-learning architecture to reduce biases during direct reconstruction from sinograms to images. The architecture is based on an encoder-decoder network in which a perceptual loss with pre-trained convolutional layers is used. It is trained and tested on data from 80 patients acquired on a recent Siemens Biograph Vision Quadra LAFOV PET/CT. The patients are randomly split into a training dataset of 60 patients, a validation dataset of 10 patients, and a test dataset of 10 patients. The 3D sinograms are converted into 2D sinogram slices and used as input to the network, and the vendor-reconstructed images are taken as ground truth. Finally, the proposed method is compared with DeepPET, a benchmark deep-learning method for PET reconstruction. RESULTS Compared with DeepPET, the proposed network significantly reduces the normalized root-mean-squared error (NRMSE) from 0.63 to 0.60 (p < 0.01) and increases the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) from 0.93 to 0.95 (p < 0.01) and from 82.02 to 82.36 (p < 0.01), respectively. The reconstruction time is approximately 10 s per patient, a 23-fold reduction compared with the conventional method. The error in mean standardized uptake values (SUVmean) for lesions between the ground truth and the predicted result is reduced from 33.5% to 18.7% (p = 0.03), and the error in maximum SUV (SUVmax) is reduced from 32.7% to 21.8% (p = 0.02). CONCLUSION The results demonstrate the feasibility of using deep learning to reconstruct images with acceptable image quality and a short reconstruction time, without additional CT images for attenuation and scatter corrections, on actual clinical measurements from a LAFOV PET. Despite this progress, AI-based reconstruction does not work appropriately for untrained scenarios because of limited extrapolation capability and cannot yet completely replace conventional reconstruction.
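A perceptual loss with pre-trained convolutional layers is commonly built on VGG features; the sketch below uses VGG16 up to an arbitrary layer, which is a common choice and an assumption here, not necessarily the authors' configuration.

```python
# Hedged sketch of a perceptual loss on pre-trained VGG16 features.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    def __init__(self, n_layers=16):
        super().__init__()
        # Frozen feature extractor; ImageNet normalization is omitted
        # here for brevity but would be applied in practice.
        self.features = vgg16(weights="IMAGENET1K_V1").features[:n_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, target):
        # Replicate single-channel PET slices to 3 channels for VGG.
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return F.mse_loss(self.features(pred3), self.features(target3))
```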
Affiliation(s)
- Ruiyao Ma, Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084, China; Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Neuherberg, Bavaria, Germany
- Jiaxi Hu, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hasan Sari, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Song Xue, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Clemens Mingels, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Marco Viscione, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Wei Bo Li, Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Neuherberg, Bavaria, Germany
- Rui Qiu, Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084, China
- Axel Rominger, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Junli Li, Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084, China
- Kuangyu Shi, Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
50
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. [PMID: 35809090] [PMCID: PMC9606092] [DOI: 10.1007/s00259-022-05891-w]
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging as it will everyday life. In this review, we focus on the potential applications of AI in the field, from both a physical perspective (radiomics, underlying statistics, image reconstruction, and data analysis) and a clinical one (neurology, cardiology, oncology). Challenges in transferring research to clinical practice are discussed, as is the concept of explainable AI. Finally, we focus on the fields where challenges must be addressed to introduce AI into nuclear medicine and molecular imaging in a reliable manner.
Affiliation(s)
- Philippe Lambin, The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands
- Kim Beuschau Mauridsen, Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Roland Hustinx, GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000 Liège, Belgium
- Michael Lassmann, Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
- Christoph Rischpler, Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Kuangyu Shi, Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Jan Pruim, Medical Imaging Center, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands