1
Kuang X, Li B, Lyu T, Xue Y, Huang H, Xie Q, Zhu W. PET image reconstruction using weighted nuclear norm maximization and deep learning prior. Phys Med Biol 2024; 69:215023. [PMID: 39374634] [DOI: 10.1088/1361-6560/ad841d]
Abstract
The ill-posed positron emission tomography (PET) reconstruction problem usually results in limited resolution and significant noise. Recently, deep neural networks have been incorporated into the PET iterative reconstruction framework to improve image quality. In this paper, we propose a new neural network-based iterative reconstruction method using weighted nuclear norm (WNN) maximization, which aims to recover image details during reconstruction. The novelty of our method is the application of WNN maximization, rather than WNN minimization, in PET image reconstruction. Meanwhile, a neural network is used to control the noise originating from WNN maximization. Our method is evaluated on simulated and clinical datasets. The simulation results show that the proposed approach outperforms state-of-the-art neural network-based iterative methods by achieving the best contrast/noise tradeoff, with a remarkable improvement in lesion contrast recovery. The study on clinical datasets also demonstrates that our method can recover lesions of different sizes while suppressing noise in various low-dose PET image reconstruction tasks. Our code is available at https://github.com/Kuangxd/PETReconstruction.
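To make the singular-value mechanics concrete, the sketch below applies a weighted update to the singular values of a patch matrix in NumPy. The shrinkage branch is the classic WNN-minimization proximal step; the amplification branch only illustrates the opposite, detail-enhancing direction that WNN maximization implies. The function name and exact update rule are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def wnn_singular_step(patch_mat, weights, maximize=True):
    """Apply a weighted update to the singular values of a patch matrix.

    Conventional WNN minimization shrinks each singular value by its
    weight; WNN maximization instead moves in the enlarging direction
    to recover detail (illustrative sketch only).
    """
    U, s, Vt = np.linalg.svd(patch_mat, full_matrices=False)
    if maximize:
        s_new = s + weights                    # detail-enhancing direction
    else:
        s_new = np.maximum(s - weights, 0.0)   # classic weighted shrinkage
    return (U * s_new) @ Vt                    # rebuild the patch matrix
```

With zero weights both branches reproduce the input patch, which makes the role of the weights easy to verify in isolation.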
Affiliation(s)
- Xiaodong Kuang: Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Bingxuan Li: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Tianling Lyu: Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Yitian Xue: Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Hailiang Huang: Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
- Qingguo Xie: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
- Wentao Zhu: Center for Frontier Fundamental Studies, Zhejiang Lab, Hangzhou, People's Republic of China
2
Yousefzadeh F, Yazdi M, Entezarmahdi SM, Faghihi R, Ghasempoor S, Shahamiri N, Mehrizi ZA, Haghighatafshar M. SPECT-MPI iterative denoising during the reconstruction process using a two-phase learned convolutional neural network. EJNMMI Phys 2024; 11:82. [PMID: 39378001] [PMCID: PMC11461437] [DOI: 10.1186/s40658-024-00687-3]
Abstract
PURPOSE Image denoising in single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) is a fundamental challenge. Although various image processing techniques have been presented, they may degrade the contrast of denoised images. The idea proposed in this study is to use a deep neural network as the denoising procedure during the iterative reconstruction process, rather than in the post-reconstruction phase. This method could decrease the background coefficient of variation (COV_bkg) of the final reconstructed image, which represents the amount of random noise, while improving the contrast-to-noise ratio (CNR). METHODS In this study, a generative adversarial network is used, whose generator is trained by a two-phase approach. In the first phase, the network is trained on a confined image region around the heart in the transverse view. The second phase improves the network's generalization by tuning the network weights with the full image size as the input. The network was trained and tested on a dataset of 247 patients who underwent two immediately serial high- and low-noise SPECT-MPI scans. RESULTS Quantitative results show that, compared to post-reconstruction low-pass filtering and post-reconstruction deep denoising methods, our proposed method can reduce the COV_bkg of the images by up to 10.28% and 12.52% and enhance the CNR by up to 54.54% and 45.82%, respectively. CONCLUSION The iterative deep denoising method outperforms 2D low-pass Gaussian filtering with an 8.4-mm FWHM and post-reconstruction deep denoising approaches.
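The core idea, denoising inside the reconstruction loop rather than after it, can be sketched generically. Below, a plain NumPy MLEM iteration calls a `denoise` hook on every pass; the hook stands in for the paper's trained GAN generator (here it defaults to the identity), and the function name and system-matrix setup are illustrative assumptions:

```python
import numpy as np

def mlem_with_denoiser(y, A, n_iter=10, denoise=lambda x: x):
    """MLEM with a denoising step inside each iteration (sketch).

    y : measured counts (1-D), A : system matrix (projections x voxels).
    `denoise` stands in for a trained network; identity by default.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        proj = np.where(proj > 0, proj, 1e-12)  # guard against divide-by-zero
        x = x * (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
        x = denoise(x)                          # denoise inside the loop
    return x
```

Swapping the hook for a network forward pass turns each MLEM update into a denoised update, which is the structural difference from post-reconstruction filtering.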
Affiliation(s)
- Farnaz Yousefzadeh: Department of Computer Science and Engineering and IT, School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
- Mehran Yazdi: School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
- Reza Faghihi: Department of Nuclear Engineering, Shiraz University, Shiraz, Iran
- Sadegh Ghasempoor: Department of Nuclear Medicine, Alzahra Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
- Zahra Abuee Mehrizi: Department of Nuclear Medicine, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Mahdi Haghighatafshar: Department of Nuclear Medicine, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
3
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Two-step optimization for accelerating deep image prior-based PET image reconstruction. Radiol Phys Technol 2024; 17:776-781. [PMID: 39096446] [DOI: 10.1007/s12194-024-00831-9]
Abstract
Deep learning, particularly convolutional neural networks (CNNs), has advanced positron emission tomography (PET) image reconstruction. However, it requires extensive, high-quality training datasets. Unsupervised learning methods, such as deep image prior (DIP), have shown promise for PET image reconstruction. Although DIP-based PET image reconstruction methods demonstrate superior performance, they involve highly time-consuming calculations. This study proposed a two-step optimization method to accelerate end-to-end DIP-based PET image reconstruction and improve PET image quality. The proposed two-step method comprised a pre-training step using conditional DIP denoising, followed by an end-to-end reconstruction step with fine-tuning. Evaluations using Monte Carlo simulation data demonstrated that the proposed two-step method significantly reduced the computation time and improved the image quality, thereby rendering it a practical and efficient approach for end-to-end DIP-based PET image reconstruction.
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
4
Vashistha R, Vegh V, Moradi H, Hammond A, O’Brien K, Reutens D. Modular GAN: positron emission tomography image reconstruction using two generative adversarial networks. Front Radiol 2024; 4:1466498. [PMID: 39328298] [PMCID: PMC11425657] [DOI: 10.3389/fradi.2024.1466498]
Abstract
Introduction The reconstruction of PET images involves converting sinograms, which represent the measured counts of radioactive emissions from detector rings encircling the patient, into meaningful images. However, the quality of PET data acquisition is affected by physical factors, photon count statistics and detector characteristics, which determine the signal-to-noise ratio, resolution and quantitative accuracy of the resulting images. To address these influences, correction methods have been developed to mitigate each of these issues separately. Recently, generative adversarial networks (GANs) based on machine learning have shown promise in learning the complex mapping between acquired PET data and reconstructed tomographic images. This study aims to investigate the properties of training images that contribute to GAN performance when non-clinical images are used for training. Additionally, we describe a method to correct common PET imaging artefacts without relying on patient-specific anatomical images. Methods The modular GAN framework includes two GANs. Module 1, resembling the Pix2pix architecture, is trained on non-clinical sinogram-image pairs. Training data are optimised by considering image properties defined by metrics. The second module utilises adaptive instance normalisation and style embedding to enhance the quality of images from Module 1. Additional perceptual and patch-based loss functions are employed in training both modules. The performance of the new framework was compared with that of existing methods (filtered backprojection (FBP), and ordered subset expectation maximisation (OSEM) without and with point spread function (OSEM-PSF)) with respect to correction for attenuation, patient motion and noise in simulated, NEMA phantom and human imaging data. Evaluation metrics included structural similarity (SSIM), peak signal-to-noise ratio (PSNR) and relative root mean squared error (rRMSE) for simulated data, and contrast-to-noise ratio (CNR) for NEMA phantom and human data. Results For simulated test data, the performance of the proposed framework was both qualitatively and quantitatively superior to that of FBP and OSEM. In the presence of noise, Module 1 generated images with an SSIM of 0.48 or higher. These images exhibited coarse structures that were subsequently refined by Module 2, yielding images with an SSIM higher than 0.71 (at least 22% higher than OSEM). The proposed method was robust against noise and motion. For NEMA phantoms, it achieved higher CNR values than OSEM. For human images, the CNR in brain regions was significantly higher than that of FBP and OSEM (p < 0.05, paired t-test). The CNR of images reconstructed with OSEM-PSF was similar to that of the proposed method. Conclusion The proposed image reconstruction method can produce PET images with artefact correction.
Affiliation(s)
- Rajat Vashistha: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Viktor Vegh: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
- Hamed Moradi: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia; Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- Amanda Hammond: Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- Kieran O’Brien: Diagnostic Imaging, Siemens Healthcare Pty Ltd., Melbourne, QLD, Australia
- David Reutens: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, University of Queensland, Brisbane, QLD, Australia
5
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203] [DOI: 10.1109/tip.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation dose on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks may harness the power of deep learning for this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: a PET activity image update; a gCT image update; and least-squares neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared to other methods.
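The optimization transfer (majorize-minimize) principle the algorithm relies on can be shown on a simple convex problem: a quadratic surrogate built from the gradient and a Lipschitz constant majorizes the objective, so minimizing the surrogate at each iteration changes the objective monotonically. The NumPy sketch below uses a generic objective, not the paper's gCT or neural-network updates:

```python
import numpy as np

def mm_quadratic_surrogate(grad, lipschitz, x0, n_iter=50):
    """Majorize-minimize with a quadratic surrogate (sketch).

    At iterate x_k the surrogate
        phi(x; x_k) = f(x_k) + <grad(x_k), x - x_k> + (L/2)||x - x_k||^2
    majorizes f, and its minimizer is a gradient step of size 1/L.
    Iterating therefore decreases f monotonically.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - grad(x) / lipschitz   # exact minimizer of the surrogate
    return x
```

The same separable-surrogate logic, applied to the Poisson log-likelihood instead of a quadratic, is what yields monotone updates in tomographic reconstruction.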
6
Jang SI, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Trans Med Imaging 2024; 43:2036-2049. [PMID: 37995174] [PMCID: PMC11111593] [DOI: 10.1109/tmi.2023.3336237]
Abstract
Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capture long-range information. However, calculating global MSA for 3D images has a high computational cost. In this work, we proposed an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.
7
Hashimoto F, Ote K. ReconU-Net: a direct PET image reconstruction using U-Net architecture with back projection-induced skip connection. Phys Med Biol 2024; 69:105022. [PMID: 38640921] [DOI: 10.1088/1361-6560/ad40f6]
Abstract
Objective. This study aims to introduce a novel back projection-induced U-Net-shaped architecture, called ReconU-Net, based on the original U-Net architecture, for deep learning-based direct positron emission tomography (PET) image reconstruction. Additionally, our objective is to visualize the behavior of direct PET image reconstruction by comparing the proposed ReconU-Net architecture with the original U-Net architecture and the existing DeepPET encoder-decoder architecture without skip connections. Approach. The proposed ReconU-Net architecture uniquely integrates the physical model of the back projection operation into the skip connection. This distinctive feature facilitates the effective transfer of intrinsic spatial information from the input sinogram to the reconstructed image via an embedded physical model. The proposed ReconU-Net was trained using Monte Carlo simulation data from the Brainweb phantom and tested on both simulated and real Hoffman brain phantom data. Main results. The proposed ReconU-Net method provided better reconstructed images, in terms of peak signal-to-noise ratio and contrast recovery coefficient, than the original U-Net and DeepPET methods. Further analysis showed that the proposed ReconU-Net architecture has the ability to transfer features of multiple resolutions, especially non-abstract high-resolution information, through the skip connections. Unlike the U-Net and DeepPET methods, the proposed ReconU-Net successfully reconstructed the real Hoffman brain phantom, despite limited training on simulated data. Significance. The proposed ReconU-Net can improve the fidelity of direct PET image reconstruction, even with small training datasets, by leveraging the synergistic relationship between data-driven modeling and the physics model of the imaging process.
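The distinctive skip connection can be sketched in a few lines: instead of copying encoder features, the skip path back-projects the input sinogram into image space (A^T y) and concatenates the result with the decoder features. A hedged NumPy illustration follows; the flattened-image shapes, normalization, and function name are assumptions, since the real network operates on 2D feature maps inside a U-Net:

```python
import numpy as np

def backprojection_skip(sinogram, system_matrix, decoder_features):
    """Physics-based skip connection (sketch).

    sinogram        : measured data y, shape (n_bins,)
    system_matrix   : A, shape (n_bins, n_pixels)
    decoder_features: shape (n_channels, n_pixels), flattened feature maps
    Returns decoder features with the back-projected image appended as an
    extra channel, injecting spatial information from the physics model.
    """
    bp_image = system_matrix.T @ sinogram            # back projection A^T y
    bp_image = bp_image / (bp_image.max() + 1e-12)   # simple normalization
    return np.concatenate([decoder_features, bp_image[None, :]], axis=0)
```

The appended channel is what carries non-abstract, high-resolution spatial information from the sinogram directly to the decoder.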
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
8
Cheng L, Lyu Z, Liu H, Wu J, Jia C, Wu Y, Ji Y, Jiang N, Ma T, Liu Y. Efficient image reconstruction for a small animal PET system with dual-layer-offset detector design. Med Phys 2024; 51:2772-2787. [PMID: 37921396] [DOI: 10.1002/mp.16814]
Abstract
BACKGROUND A compact PET/SPECT/CT system, Inliview-3000B, has been developed to provide multi-modality information on small animals for biomedical research. Its PET subsystem employs a dual-layer-offset detector design for depth-of-interaction capability and higher detection efficiency, but the irregular design causes some difficulties in calculating the normalization factors and the sensitivity map. In addition, the relatively large (2 mm) crystal cross-section also poses a challenge to high-resolution image reconstruction. PURPOSE We present an efficient image reconstruction method to achieve high imaging performance for the PET subsystem of Inliview-3000B. METHODS List-mode reconstruction with efficient system modeling was used for PET imaging. We adopt an on-the-fly multi-ray tracing method with random crystal sampling to model the solid angle, crystal penetration and object attenuation effects, and modify the system response model during each iteration to improve reconstruction performance and computational efficiency. We estimate crystal efficiency with a novel iterative approach that combines measured cylinder phantom data with simulated line-of-response (LOR)-based factors for normalization correction before reconstruction. Since it is necessary to calculate normalization factors and the sensitivity map, we stack the two crystal layers together and extend the conventional data organization method to index all useful LORs. Simulations and experiments were performed to demonstrate the feasibility and advantages of the proposed method. RESULTS Simulation results showed that the iterative algorithm for crystal efficiency estimation achieves good accuracy. NEMA image quality phantom studies demonstrated the superiority of random sampling, which achieves good imaging performance with much less computation than traditional uniform sampling. In the spatial resolution evaluation based on the mini-Derenzo phantom, 1.1 mm hot rods could be identified with the proposed reconstruction method. Reconstructions of two mice and a rat showed good spatial resolution and a high signal-to-noise ratio, and organs with higher uptake could be recognized well. CONCLUSION The results validated the benefit of introducing randomness into reconstruction and demonstrated its reliability for high-performance imaging. The Inliview-3000B PET subsystem with the proposed image reconstruction can provide rich and detailed information on small animals for preclinical research.
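The random-crystal-sampling idea for on-the-fly system modeling can be illustrated as a small Monte Carlo average: each system-model element is estimated from a few rays whose endpoints are drawn uniformly over the two crystal faces. In this sketch the `trace` callback is a placeholder for whatever per-ray contribution the model computes (e.g. intersection length with the image grid); the function name and 2D geometry are illustrative assumptions:

```python
import numpy as np

def lor_weights_random_sampling(c1_center, c2_center, crystal_size,
                                n_rays, rng, trace):
    """Monte Carlo estimate of an LOR's system-model weight (sketch).

    Endpoints are sampled uniformly over each crystal face; `trace`
    returns one ray's contribution. Averaging over a few random rays
    replaces dense uniform ray sampling at much lower cost.
    """
    total = 0.0
    for _ in range(n_rays):
        p1 = c1_center + rng.uniform(-0.5, 0.5, size=2) * crystal_size
        p2 = c2_center + rng.uniform(-0.5, 0.5, size=2) * crystal_size
        total += trace(p1, p2)
    return total / n_rays
```

Redrawing the endpoints at each iteration, as the paper does, amounts to re-estimating these weights on the fly rather than storing a fixed system matrix.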
Affiliation(s)
- Li Cheng: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Zhenlei Lyu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Hui Liu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Jing Wu: Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Chao Jia: Beijing Novel Medical Equipment Ltd, Beijing, China
- Yuanguang Wu: Beijing Novel Medical Equipment Ltd, Beijing, China
- Yingcai Ji: Beijing Novel Medical Equipment Ltd, Beijing, China
- Tianyu Ma: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Yaqiang Liu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
9
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
- Andrew J Reader: School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba 263-8555, Japan
10
Wang S, Liu B, Xie F, Chai L. An iterative reconstruction algorithm for unsupervised PET image. Phys Med Biol 2024; 69:055025. [PMID: 38346340] [DOI: 10.1088/1361-6560/ad2882]
Abstract
Objective. In recent years, convolutional neural networks (CNNs) have shown great potential in positron emission tomography (PET) image reconstruction. However, most of them rely on many pairs of low- and high-quality reference PET images for training, which are not always available in clinical practice. On the other hand, many works improve the quality of PET image reconstruction by adding explicit regularization or optimizing the network structure, which may lead to complex optimization problems. Approach. In this paper, we develop a novel iterative reconstruction algorithm by integrating the deep image prior (DIP) framework, which needs only the prior information (e.g. MRI) and sinogram data of patients. Specifically, we construct the objective function as a constrained optimization problem and utilize existing PET image reconstruction packages to streamline calculations. Moreover, to further improve both reconstruction quality and speed, we introduce Nesterov's acceleration and a restart mechanism in each iteration. Main results. 2D experiments on PET data sets based on computer simulations and real patients demonstrate that our proposed algorithm can outperform the existing MLEM-GF, KEM and DIPRecon methods. Significance. Unlike traditional CNN methods, the proposed algorithm does not rely on large data sets, but only leverages inter-patient information. Furthermore, we enhance reconstruction performance by optimizing the iterative algorithm. Notably, the proposed method does not require much modification of the basic algorithm, allowing for easy integration into standard implementations.
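Nesterov's acceleration with a restart mechanism follows a standard pattern: take a gradient step from an extrapolated point, update the momentum coefficient, and drop the momentum whenever the objective increases. The generic NumPy sketch below uses plain gradient descent as the inner update; it stands in for, and is not, the paper's constrained DIP update:

```python
import numpy as np

def nesterov_restart(grad, f, x0, step, n_iter=100):
    """Nesterov-accelerated gradient descent with function-value restart (sketch)."""
    x = np.asarray(x0, dtype=float)
    z = x.copy()                      # extrapolated point
    t = 1.0                           # momentum coefficient
    f_prev = f(x)
    for _ in range(n_iter):
        x_new = z - step * grad(z)    # gradient step at the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        f_new = f(x_new)
        if f_new > f_prev:            # restart: discard the momentum
            t_new = 1.0
            z = x_new.copy()
        x, t, f_prev = x_new, t_new, f_new
    return x
```

The restart test is what keeps the accelerated iterates from overshooting, which is the usual failure mode of momentum on ill-conditioned problems.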
Affiliation(s)
- Siqi Wang: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Bing Liu: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Furan Xie: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Li Chai: College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China
11
Kumar N, Krause L, Wondrak T, Eckert S, Eckert K, Gumhold S. Robust Reconstruction of the Void Fraction from Noisy Magnetic Flux Density Using Invertible Neural Networks. Sensors (Basel) 2024; 24:1213. [PMID: 38400371] [PMCID: PMC10893175] [DOI: 10.3390/s24041213]
Abstract
Electrolysis stands as a pivotal method for environmentally sustainable hydrogen production. However, the formation of gas bubbles during the electrolysis process poses significant challenges by impeding the electrochemical reactions, diminishing cell efficiency, and dramatically increasing energy consumption. Furthermore, the inherent difficulty in detecting these bubbles arises from the non-transparency of the wall of electrolysis cells. Additionally, these gas bubbles induce alterations in the conductivity of the electrolyte, leading to corresponding fluctuations in the magnetic flux density outside of the electrolysis cell, which can be measured by externally placed magnetic sensors. By solving the inverse problem of the Biot-Savart Law, we can estimate the conductivity distribution as well as the void fraction within the cell. In this work, we study different approaches to solve the inverse problem including Invertible Neural Networks (INNs) and Tikhonov regularization. Our experiments demonstrate that INNs are much more robust to solving the inverse problem than Tikhonov regularization when the level of noise in the magnetic flux density measurements is not known or changes over space and time.
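For reference, the Tikhonov baseline that the INNs are compared against has a closed-form solution, x = (A^T A + lam*I)^(-1) A^T b, for a linear forward operator A. A minimal NumPy sketch (generic operator; choosing lam is exactly the weak point when the noise level is unknown or varies, which is the paper's motivation for INNs):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    # Normal equations with an added ridge term lam * I
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

Larger lam values damp the solution norm, trading fidelity for noise robustness; an INN avoids committing to one fixed lam.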
Affiliation(s)
- Nishant Kumar: Institute of Software and Multimedia Technology, Technische Universität Dresden, 01187 Dresden, Germany
- Lukas Krause: Institute of Process Engineering and Environmental Technology, Technische Universität Dresden, 01069 Dresden, Germany; Institute of Fluid Dynamics, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
- Thomas Wondrak: Institute of Fluid Dynamics, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
- Sven Eckert: Institute of Fluid Dynamics, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
- Kerstin Eckert: Institute of Process Engineering and Environmental Technology, Technische Universität Dresden, 01069 Dresden, Germany; Institute of Fluid Dynamics, Helmholtz-Zentrum Dresden-Rossendorf, 01328 Dresden, Germany
- Stefan Gumhold: Institute of Software and Multimedia Technology, Technische Universität Dresden, 01187 Dresden, Germany
12
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about the risk of radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. Therefore, it is of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracted features from PET and CT images in two separate branches and then fused the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better-generated tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The experimental results of quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
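The abstract does not spell out the exact form of the structure-promoting term; as a hedged sketch of the general idea, one common choice is to penalize misalignment between the gradient fields of the generated PET image and the anatomical CT. All function names here are illustrative:

```python
import numpy as np

def image_gradients(img):
    """Forward differences, padded so the outputs match the input shape."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return gx, gy

def structure_term(pred_pet, ct):
    """Illustrative structure-promoting term: 1 minus the normalized absolute
    correlation between the gradient fields of the generated PET and the CT.
    It is ~0 when edges align perfectly and approaches 1 when they do not."""
    px, py = image_gradients(pred_pet)
    cx, cy = image_gradients(ct)
    num = np.abs(px * cx + py * cy).sum()
    den = np.sqrt((px**2 + py**2).sum() * (cx**2 + cy**2).sum()) + 1e-12
    return 1.0 - num / den
```

In a training loop such a term would be added to a voxel-wise fidelity loss with a small weight, encouraging the network to respect CT anatomy without copying CT intensities.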
Collapse
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
| | - Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
| | - Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
| | - Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
| | - Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
| | - Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
| | - Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
| |
Collapse
|
13
|
Wang Y, Luo Y, Zu C, Zhan B, Jiao Z, Wu X, Zhou J, Shen D, Zhou L. 3D multi-modality Transformer-GAN for high-quality PET reconstruction. Med Image Anal 2024; 91:102983. [PMID: 37926035 DOI: 10.1016/j.media.2023.102983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 08/06/2023] [Accepted: 09/28/2023] [Indexed: 11/07/2023]
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images but higher-dose scans can also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper, we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully excavate the metabolic distributions in LPET and anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features given the diverse contributions of features at different locations. Then, as CNNs can describe local spatial information well but have difficulty in modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information in the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail information in the reconstructed SPET images. Experiments on the phantom dataset and clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperform current state-of-the-art methods in terms of qualitative and quantitative metrics.
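The edge-aware loss above is described only at a high level; one plausible instantiation (an assumption, not the paper's definition) compares Sobel edge maps of the reconstructed and real SPET images:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def conv2_valid(img, kernel):
    """Naive 'valid' 2D convolution with a 3x3 kernel; fine for a small demo."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()
    return out

def edge_aware_loss(recon, target):
    """Mean L1 distance between Sobel edge maps (horizontal and vertical)."""
    loss = 0.0
    for kernel in (SOBEL_X, SOBEL_X.T):
        loss += np.abs(conv2_valid(recon, kernel) - conv2_valid(target, kernel)).mean()
    return loss
```

Because the comparison happens in gradient space, a constant intensity bias costs nothing, while blurred or displaced edges are penalized, which matches the stated motivation of retaining edge detail.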
Collapse
Affiliation(s)
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
| | - Yanmei Luo
- School of Computer Science, Sichuan University, Chengdu, China
| | - Chen Zu
- Department of Risk Controlling Research, JD.COM, China
| | - Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, China
| | - Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
| | - Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
| | - Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
| | - Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia.
| |
Collapse
|
14
|
Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648 DOI: 10.1016/j.compmedimag.2023.102315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 09/26/2023] [Accepted: 11/15/2023] [Indexed: 11/27/2023]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) play a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images with low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired using a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using a 3D-UNET network. The reconstructed images are compared to ground truth images using metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate the accuracy of the reconstructed images. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39. These results outperformed the other methods, which had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55. These results also outperformed other state-of-the-art methods, such as OSEM, MAPOSEM, and 3DUNET-MAPEM. The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with other methods demonstrates the superiority of our approach, as it outperforms all other methods for all three metrics. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate the potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation compared to other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
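For context on the unrolled MAPEM backbone mentioned above, the classical ML-EM multiplicative update that MAPEM-style methods build on can be sketched as follows. This is a textbook update, not the paper's TrUNET-MAPEM code; in the unrolled network, a learned regularization step would be interleaved between updates of this form:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Classical ML-EM update for y ~ Poisson(A x) with a nonnegative system
    matrix A: x <- x / (A^T 1) * A^T (y / (A x)). Unrolled MAPEM networks
    insert a regularizing (here: absent) step between these updates."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)  # guard against division by zero
        x = x / sens * (A.T @ (y / proj))
    return x
```

The update is multiplicative, so a nonnegative starting image stays nonnegative, which is one reason EM-type algorithms are standard in emission tomography.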
Collapse
Affiliation(s)
- Sanaz Kaviani
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada.
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Mersede Mokri
- Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
| | - Claire Cohalan
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
| | - Jean-Francois Carrier
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department de Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada
| |
Collapse
|
15
|
Hellwig D, Hellwig NC, Boehner S, Fuchs T, Fischer R, Schmidt D. Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions. Nuklearmedizin 2023; 62:334-342. [PMID: 37995706 PMCID: PMC10689088 DOI: 10.1055/a-2198-0358] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 10/12/2023] [Indexed: 11/25/2023]
Abstract
Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques like filtered backprojection and iterative algorithms are powerful but face limitations. PET IR can be seen as an image-to-image translation. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to provide mutual understanding for nuclear medicine professionals and AI researchers. We outline the fundamentals of PET imaging as well as the state of the art in AI-based PET IR with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include the availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potential. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
Collapse
Affiliation(s)
- Dirk Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Nils Constantin Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Steven Boehner
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Timo Fuchs
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Regina Fischer
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
| | - Daniel Schmidt
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
| |
Collapse
|
16
|
Gu F, Wu Q. Quantitation of dynamic total-body PET imaging: recent developments and future perspectives. Eur J Nucl Med Mol Imaging 2023; 50:3538-3557. [PMID: 37460750 PMCID: PMC10547641 DOI: 10.1007/s00259-023-06299-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/05/2023] [Indexed: 10/04/2023]
Abstract
BACKGROUND Positron emission tomography (PET) scanning is an important diagnostic imaging technique used in disease diagnosis, therapy planning, treatment monitoring, and medical research. The standardized uptake value (SUV) obtained at a single time frame has been widely employed in clinical practice. Well beyond this simple static measure, more detailed metabolic information can be recovered from dynamic PET scans, followed by the recovery of the arterial input function and application of appropriate tracer kinetic models. Many efforts have been devoted to the development of quantitative techniques over the last couple of decades. CHALLENGES The advent of new-generation total-body PET scanners characterized by ultra-high sensitivity and a long axial field of view, i.e., uEXPLORER (United Imaging Healthcare), PennPET Explorer (University of Pennsylvania), and Biograph Vision Quadra (Siemens Healthineers), creates valuable opportunities to derive kinetics for multiple organs simultaneously. However, some emerging issues also need to be addressed, e.g., the large-scale data size and organ-specific physiology. The direct implementation of classical methods for total-body PET imaging without proper validation may lead to less accurate results. CONCLUSIONS In this contribution, the published dynamic total-body PET datasets are outlined, and several challenges/opportunities for quantitation of such studies are presented. An overview of the basic equation, calculation of the input function (based on blood sampling, images, population data, or mathematical models), and kinetic analysis encompassing parametric (compartmental model, graphical plot and spectral analysis) and non-parametric (B-spline and piece-wise basis elements) approaches is provided. The discussion mainly focuses on the feasibility, recent developments, and future perspectives of these methodologies for a diverse-tissue environment.
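Among the graphical-plot methods surveyed above, Patlak analysis is the most widely used; a minimal sketch (illustrative function names, noise-free inputs assumed) fits the late-time linear portion of the plot to estimate the influx rate Ki:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    areas = 0.5 * (y[1:] + y[:-1]) * np.diff(t)
    return np.concatenate([[0.0], np.cumsum(areas)])

def patlak(t, c_tissue, c_plasma, n_late):
    """Patlak graphical analysis: for an irreversibly trapped tracer, the plot
    of C_T(t)/C_p(t) against int_0^t C_p ds / C_p(t) becomes linear at late
    times; the slope estimates the influx rate Ki and the intercept the
    initial distribution volume."""
    x = cumtrapz(c_plasma, t) / c_plasma
    y = c_tissue / c_plasma
    slope, intercept = np.polyfit(x[-n_late:], y[-n_late:], 1)
    return slope, intercept
```

For total-body data, a fit of this kind would be repeated voxel-wise (or organ-wise) against a shared input function, which is where the large-scale data size mentioned in the abstract becomes a practical concern.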
Collapse
Affiliation(s)
- Fengyun Gu
- School of Mathematics and Physics, North China Electric Power University, 102206, Beijing, China.
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland.
| | - Qi Wu
- School of Mathematical Sciences, University College Cork, T12XF62, Cork, Ireland
| |
Collapse
|
17
|
Lim H, Dewaraja YK, Fessler JA. SPECT reconstruction with a trained regularizer using CT-side information: Application to 177Lu SPECT imaging. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2023; 9:846-856. [PMID: 38516350 PMCID: PMC10956080 DOI: 10.1109/tci.2023.3318993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/23/2024]
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we used patient studies with 177Lu DOTATATE to train and tested with phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole body SPECT, and reducing SPECT acquisition times.
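The abstract does not give the exact form of the CT-informed regularizer; as an illustrative stand-in only, a simple penalty step between EM updates could pull each voxel toward the mean of its CT-derived segment (`beta` is a hypothetical regularization weight, `seg_mask` a label image such as an nnUNet output):

```python
import numpy as np

def ct_region_penalty_step(x, seg_mask, beta):
    """Illustrative CT-side regularization step: shrink each voxel toward the
    mean of its CT-derived segment. Region means are preserved exactly while
    within-region variance shrinks by a factor of (1 + beta)^2."""
    x = x.copy()
    for label in np.unique(seg_mask):
        region = seg_mask == label
        x[region] = (x[region] + beta * x[region].mean()) / (1.0 + beta)
    return x
```

The appeal of such side information is visible even in this toy form: noise is suppressed within anatomically homogeneous regions without blurring across segment boundaries.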
Collapse
Affiliation(s)
- Hongki Lim
- Department of Electronic Engineering, Inha University, Incheon, 22212, South Korea
| | - Yuni K Dewaraja
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
| | - Jeffrey A Fessler
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109 USA
| |
Collapse
|
18
|
Sohlberg A, Kangasmaa T, Tikkakoski A. Comparison of post reconstruction- and reconstruction-based deep learning denoising methods in cardiac SPECT. Biomed Phys Eng Express 2023; 9:065007. [PMID: 37666231 DOI: 10.1088/2057-1976/acf66c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Accepted: 09/04/2023] [Indexed: 09/06/2023]
Abstract
Objective. The quality of myocardial perfusion SPECT (MPS) images is often hampered by low count statistics. Poor image quality might hinder reporting the studies and, in the worst case, lead to erroneous diagnoses. Deep learning (DL)-based methods can be used to improve the quality of low-count studies. DL can be applied in several different ways, which might affect the outcome. The aim of this study was to investigate the differences between post-reconstruction- and reconstruction-based denoising methods. Approach. A UNET-type network was trained using ordered subsets expectation maximization (OSEM) reconstructed MPS studies acquired with half, quarter and eighth of full activity. The trained network was applied as a post-reconstruction denoiser (OSEM+DL), and it was incorporated into a regularized reconstruction algorithm as a deep learning penalty (DLP). OSEM+DL and DLP were compared against each other and against OSEM images without DL denoising in terms of noise level, myocardium-ventricle contrast, and defect detection performance using the signal-to-noise ratio of a non-prewhitening matched filter (NPWMF-SNR) applied to artificial perfusion defects inserted into defect-free clinical MPS scans. Comparisons were made using half-, quarter- and eighth-activity data. Main results. OSEM+DL provided a lower noise level at all activities than the other methods. DLP's noise level was also always lower than that of matching-activity OSEM. In addition, OSEM+DL and DLP outperformed OSEM in defect detection performance, but contrary to the noise-level ranking, DLP had higher NPWMF-SNR overall than OSEM+DL. The myocardium-ventricle contrast was highest with DLP and lowest with OSEM+DL. Both OSEM+DL and DLP offered better image quality than OSEM, but visually perfusion defects were deeper in OSEM images at low activities. Significance. Both post-reconstruction- and reconstruction-based DL denoising methods have great potential for MPS. The preference between these methods is a trade-off between smoother images and better defect detection performance.
Collapse
Affiliation(s)
- Antti Sohlberg
- Department of Nuclear Medicine, Päijät-Häme Central Hospital, Lahti, Finland
- HERMES Medical Solutions, Stockholm, Sweden
| | - Tuija Kangasmaa
- Department of Clinical Physiology and Nuclear Medicine, Vaasa Central Hospital, Vaasa, Finland
| | - Antti Tikkakoski
- Clinical Physiology and Nuclear Medicine, Tampere University Hospital, Tampere, Finland
| |
Collapse
|
19
|
Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023; 143:102609. [PMID: 37673577 DOI: 10.1016/j.artmed.2023.102609] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 05/17/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Low-dose CT techniques attempt to minimize the radiation exposure of patients by estimating high-resolution normal-dose CT images, reducing the risk of radiation-induced cancer. In recent years, many deep learning methods have been proposed to solve this problem by building a mapping function between low-dose CT images and their high-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which results in large differences in the intensity of the noise observable in CT images. What's more, the noise intensity of low-dose CT images differs significantly across devices from different manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to recover low-dose CT images under uncertain noise levels. In particular, the noise level is classified and the prediction is reused as a prior pattern in the generator networks. Moreover, the discriminator network introduces noise-level determination. Under two dose-reduction strategies, experiments to evaluate the performance of the proposed method are conducted on two datasets, including the simulated clinical AAPM challenge datasets and commercial CT datasets from United Imaging Healthcare (UIH). The experimental results illustrate the effectiveness of our proposed method in terms of noise suppression and structural detail preservation compared with several other deep-learning-based methods. Ablation studies validate the effectiveness of the individual components regarding the afforded performance improvement. Further research for practical clinical applications and other medical modalities is required in future work.
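The abstract describes reusing a predicted noise-level class as a prior pattern in the generator. One simple way to realize such conditioning (an assumption about the mechanism, with illustrative names) is to broadcast the one-hot noise-level code into constant feature maps stacked with the input image:

```python
import numpy as np

def with_noise_level_prior(img, level, n_levels):
    """Stack a one-hot noise-level code, broadcast to constant feature maps,
    on top of the input image so a generator can condition on the predicted
    noise level. Returns an array of shape (1 + n_levels, H, W)."""
    h, w = img.shape
    onehot = np.zeros(n_levels)
    onehot[level] = 1.0
    cond = np.broadcast_to(onehot[:, None, None], (n_levels, h, w))
    return np.concatenate([img[None], cond], axis=0)
```

A generator consuming this stacked input can then apply different denoising strengths per noise class without needing separate networks for each dose level.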
Collapse
Affiliation(s)
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
| | - Yunling Wang
- Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830011, China.
| | - Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Ruodai Wu
- Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
| | - Guotao Quan
- Shanghai United Imaging Healthcare, Shanghai 201807, China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
| |
Collapse
|
20
|
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392 DOI: 10.1088/1361-6560/acf091] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2022] [Accepted: 08/15/2023] [Indexed: 08/17/2023]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied a deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input image of the first network is an anatomical image, and the input image of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep-learning, supervised learning and unsupervised learning methods. The results showed that the proposed method was superior to the non-deep-learning and unsupervised methods, and was comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed, which can improve image noise performance and lesion detectability.
Collapse
Affiliation(s)
- Jinming Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- United Imaging Healthcare, Shanghai, People's Republic of China
| | - Chen Xi
- United Imaging Healthcare, Shanghai, People's Republic of China
| | - Houjiao Dai
- United Imaging Healthcare, Shanghai, People's Republic of China
| | - Jing Wang
- Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Shaanxi, Xi'an, People's Republic of China
| | - Yang Lv
- United Imaging Healthcare, Shanghai, People's Republic of China
| | - Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| |
Collapse
|
21
|
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm. Phys Med Biol 2023; 68:155009. [PMID: 37406637 DOI: 10.1088/1361-6560/ace49c] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/05/2023] [Indexed: 07/07/2023]
Abstract
Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction method that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction cannot currently be performed because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization to a block iteration and sequential learning of an ordered sequence of block sinograms. Furthermore, the relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with the maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with other algorithms, the proposed method improved the PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicated that the proposed method could produce high-quality images without a prior training dataset. Thus, the proposed method could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
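The relative difference penalty (RDP) added to the loss function has a standard form; for a 2D slice with right/down neighbor pairs it can be sketched as follows (`gamma` is the usual edge-preservation parameter):

```python
import numpy as np

def relative_difference_penalty(x, gamma=2.0, eps=1e-12):
    """RDP over right and down neighbor pairs of a nonnegative 2D image:
    sum of (x_j - x_k)^2 / (x_j + x_k + gamma * |x_j - x_k| + eps).
    It is zero for a flat image, and because the denominator grows with the
    local difference, large edges are penalized less harshly than under a
    quadratic penalty."""
    total = 0.0
    for a, b in ((x[:, 1:], x[:, :-1]), (x[1:, :], x[:-1, :])):
        d = a - b
        total += (d ** 2 / (a + b + gamma * np.abs(d) + eps)).sum()
    return total
```

In the paper's setting this scalar would be added (with a weight) to the sinogram data-fidelity term of the DIP loss for each block; the sketch here only evaluates the penalty itself.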
Collapse
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| | - Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
| | - Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
| | - Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| | - Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
| |
Collapse
|
22
|
Sanaat A, Shooli H, Böhringer AS, Sadeghi M, Shiri I, Salimi Y, Ginovart N, Garibotto V, Arabi H, Zaidi H. A cycle-consistent adversarial network for brain PET partial volume correction without prior anatomical information. Eur J Nucl Med Mol Imaging 2023; 50:1881-1896. [PMID: 36808000 PMCID: PMC10199868 DOI: 10.1007/s00259-023-06152-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 02/12/2023] [Indexed: 02/23/2023]
Abstract
PURPOSE Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated owing to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The Iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: - 0.29, + 0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: - 0.26, + 0.24 SUV, mean = - 0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. Our model eliminates the need for accurate registration, segmentation, or PET scanner system response characterization. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
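The cycle-consistency constraint at the heart of the CycleGAN approach above can be made concrete with a toy example. The linear "generators" below are stand-ins for the trained networks (everything here is an invented illustration, not the paper's code):

```python
import numpy as np

# Hedged sketch: CycleGAN trains two mappings, G (non-PVC -> PVC) and
# F (PVC -> non-PVC), so that F(G(x)) ~= x and G(F(y)) ~= y. The L1 cycle
# loss below is the standard CycleGAN formulation.
def cycle_consistency_loss(G, F, x, y):
    # |F(G(x)) - x| + |G(F(y)) - y|, averaged over voxels
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

G = lambda img: 2.0 * img        # toy forward mapping (stand-in generator)
F_exact = lambda img: img / 2.0  # exact inverse -> zero cycle loss
F_bad = lambda img: img          # not an inverse -> positive cycle loss

x = np.linspace(0.0, 1.0, 5)     # toy "non-PVC" intensities
y = G(x)                         # toy "PVC" intensities
print(cycle_consistency_loss(G, F_exact, x, y))
print(cycle_consistency_loss(G, F_bad, x, y))
```

Minimizing this term alongside the adversarial losses is what lets the mapping be learned from unpaired images.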
Collapse
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
| | - Andrew Stephen Böhringer
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Maryam Sadeghi
- Department of Medical Statistics, Informatics and Health Economics, Medical University of Innsbruck, Schoepfstr. 41, Innsbruck, Austria
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Nathalie Ginovart
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Department of Psychiatry, Geneva University, Geneva, Switzerland
- Department of Basic Neuroscience, Geneva University, Geneva, Switzerland
| | - Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Geneva University Neurocenter, University of Geneva, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
| |
Collapse
|
23
|
Geng C, Jiang M, Fang X, Li Y, Jin G, Chen A, Liu F. HFIST-Net: High-throughput fast iterative shrinkage thresholding network for accelerating MR image reconstruction. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 232:107440. [PMID: 36881983 DOI: 10.1016/j.cmpb.2023.107440] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 01/22/2023] [Accepted: 02/19/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVES Compressed sensing (CS) is often used to accelerate magnetic resonance image (MRI) reconstruction from undersampled k-space data. A novel deep unfolded network (DUN)-based method, designed by unfolding a traditional CS-MRI optimization algorithm into a deep network, can provide significantly faster reconstruction than traditional CS-MRI methods while improving image quality. METHODS In this paper, we propose a High-Throughput Fast Iterative Shrinkage Thresholding Network (HFIST-Net) for reconstructing MR images from sparse measurements by combining traditional model-based CS techniques and data-driven deep learning methods. Specifically, the conventional Fast Iterative Shrinkage Thresholding Algorithm (FISTA) is unfolded into a deep network. To break the bottleneck of information transmission, a multi-channel fusion mechanism is proposed to improve the efficiency of information transmission between adjacent network stages. Moreover, a simple yet efficient channel attention block, called the Gaussian context transformer (GCT), is proposed to improve the characterization capabilities of the deep convolutional neural network (CNN); it utilizes Gaussian functions that satisfy preset relationships to achieve context feature excitation. RESULTS T1 and T2 brain MR images from the FastMRI dataset were used to validate the performance of the proposed HFIST-Net. The qualitative and quantitative results showed that our method is superior to the compared state-of-the-art unfolded deep learning networks. CONCLUSIONS The proposed HFIST-Net is capable of reconstructing more accurate MR image details from highly undersampled k-space data while maintaining fast computational speed.
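The classical FISTA iteration that HFIST-Net unfolds into network stages can be sketched on a toy sparse-recovery problem. This is an illustrative NumPy implementation of the textbook algorithm, not the network itself; the problem sizes and variable names are invented:

```python
import numpy as np

# Textbook FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step,
# a soft-thresholding (shrinkage) step, and a momentum extrapolation step.
# Each unrolled iteration becomes one stage of a DUN such as HFIST-Net.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))          # toy "measurement" operator
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]               # sparse ground truth
b = A @ x_true
x_hat = fista(A, b, lam=0.01)
print(np.round(x_hat, 2))
```

In the unfolded network, the fixed shrinkage and step-size parameters above become learnable, and the proximal step is replaced by learned CNN modules.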
Collapse
Affiliation(s)
- Chenghu Geng
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
| | - Mingfeng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China.
| | - Xian Fang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
| | - Yang Li
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
| | - Guangri Jin
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
| | - Aixi Chen
- Department of Physics, Zhejiang Sci-Tech University, Hangzhou 310018, China
| | - Feng Liu
- The School of Information Technology & Electrical Engineering, The University of Queensland, St. Lucia, Brisbane, Queensland 4072, Australia
| |
Collapse
|
24
|
Fang R, Guo R, Zhao M, Yao M. FBP‐CNN: A Direct PET Image Reconstruction Network for Flow Visualization. ADVANCED THEORY AND SIMULATIONS 2023. [DOI: 10.1002/adts.202200604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/28/2023]
|
25
|
Zhu Y, Lyu Z, Lu W, Liu Y, Ma T. Fast and Accurate Gamma Imaging System Calibration Based on Deep Denoising Networks and Self-Adaptive Data Clustering. SENSORS (BASEL, SWITZERLAND) 2023; 23:2689. [PMID: 36904898 PMCID: PMC10007588 DOI: 10.3390/s23052689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 02/18/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
Gamma imagers play a key role in both industrial and medical applications. Modern gamma imagers typically employ iterative reconstruction methods in which the system matrix (SM) is a key component for obtaining high-quality images. An accurate SM can be acquired through an experimental calibration step with a point source across the field of view (FOV), but at the cost of a long calibration time to suppress noise, posing challenges to real-world applications. In this work, we propose a time-efficient SM calibration approach for a 4π-view gamma imager using a short-time measured SM and deep-learning-based denoising. The key steps are decomposing the SM into multiple detector response function (DRF) images, categorizing the DRFs into multiple groups with a self-adaptive K-means clustering method to address sensitivity discrepancy, and independently training a separate denoising deep network for each DRF group. We investigate two denoising networks and compare them against a conventional Gaussian filtering method. The results demonstrate that the SM denoised with the deep networks faithfully yields imaging performance comparable to that of the long-time measured SM. The SM calibration time is reduced from 1.4 h to 8 min. We conclude that the proposed SM denoising approach is promising and effective in enhancing the productivity of the 4π-view gamma imager, and it is also generally applicable to other imaging systems that require an experimental calibration step.
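The DRF grouping step can be illustrated with plain k-means on a scalar sensitivity feature per DRF image. The paper's clustering is self-adaptive in the number of groups; this hypothetical sketch fixes k for brevity, and all names and data are invented:

```python
import numpy as np

# Hedged sketch: cluster detector response function (DRF) images by overall
# sensitivity so that each group can get its own denoising network. Each DRF
# is summarized here by a single scalar feature: its total counts.
def kmeans_1d(features, k, n_iter=50):
    centers = np.linspace(features.min(), features.max(), k)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # assign each DRF to the nearest center, then recompute the centers
        labels = np.argmin(np.abs(features[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean()
    return labels, centers

rng = np.random.default_rng(2)
low = rng.normal(10.0, 1.0, 20)     # toy low-sensitivity DRF totals
high = rng.normal(100.0, 5.0, 20)   # toy high-sensitivity DRF totals
feat = np.concatenate([low, high])
labels, centers = kmeans_1d(feat, k=2)
print(sorted(np.round(centers, 1).tolist()))
```

Grouping by sensitivity before training means each denoiser sees inputs with a comparable noise level, which is the stated motivation for the clustering step.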
Collapse
Affiliation(s)
- Yihang Zhu
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
- Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
| | - Zhenlei Lyu
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
- Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
| | - Wenzhuo Lu
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
| | - Yaqiang Liu
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
| | - Tianyu Ma
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Key Laboratory of Particle & Radiation Imaging, Ministry of Education, Tsinghua University, Beijing 100084, China
- Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
| |
Collapse
|
26
|
Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:785-796. [PMID: 36288234 PMCID: PMC10081957 DOI: 10.1109/tmi.2022.3217543] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improving the kernel method would be adding an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural network. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
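The KEM image-update step referenced above can be sketched under the standard kernelized forward model x = K a, where the EM update is applied to the coefficient image a rather than to x directly. This is a toy NumPy illustration, not the authors' implementation; the projector, kernel, and sizes are invented:

```python
import numpy as np

# Hedged sketch of kernelized EM (KEM): the image is represented as x = K @ a
# with K built from prior images, and the multiplicative EM update acts on the
# coefficient image a through the composite system matrix P @ K.
def kem_step(P, K, y, a):
    PK = P @ K
    proj = np.maximum(PK @ a, 1e-12)               # forward projection
    back = PK.T @ (y / proj)                       # backproject the data ratio
    sens = np.maximum(PK.T @ np.ones(len(y)), 1e-12)  # sensitivity for PK
    return a * back / sens

rng = np.random.default_rng(3)
P = rng.uniform(0.0, 1.0, (12, 6))                 # toy projector
K = np.eye(6) * 0.8 + 0.2 / 6                      # toy kernel, rows sum to 1
a_true = np.array([1.0, 0.2, 2.0, 0.5, 1.5, 0.8])
y = P @ K @ a_true                                 # noiseless toy data
a = np.ones(6)
for _ in range(2000):
    a = kem_step(P, K, y, a)
x = K @ a                                          # reconstructed image
print(np.round(x, 2))
```

The neural KEM of the paper keeps this KEM step and alternates it with a network fitting step that regularizes a implicitly.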
Collapse
|
27
|
Sohlberg A, Kangasmaa T, Constable C, Tikkakoski A. Comparison of deep learning-based denoising methods in cardiac SPECT. EJNMMI Phys 2023; 10:9. [PMID: 36752847 PMCID: PMC9908801 DOI: 10.1186/s40658-023-00531-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 02/01/2023] [Indexed: 02/09/2023] Open
Abstract
BACKGROUND Myocardial perfusion SPECT (MPS) images often suffer from artefacts caused by low-count statistics. Poor-quality images can lead to misinterpretation of perfusion defects. Deep learning (DL)-based methods have been proposed to overcome the noise artefacts. The aim of this study was to investigate the differences among several DL denoising models. METHODS Convolutional neural network (CNN), residual neural network (RES), UNET and conditional generative adversarial network (cGAN) models were generated and trained using ordered subsets expectation maximization (OSEM) reconstructed MPS studies acquired with full, half, three-eighths and quarter acquisition time. All DL methods were compared against each other and also against images without DL-based denoising. Comparisons were made using half- and quarter-time acquisition data. The methods were evaluated in terms of noise level (coefficient of variation of counts, CoV), structural similarity index measure (SSIM) in the myocardium of normal patients, and receiver operating characteristic (ROC) analysis of realistic artificial perfusion defects inserted into normal MPS scans. Total perfusion deficit scores were used as the observer rating for the presence of a perfusion defect. RESULTS All the DL denoising methods tested provided a statistically significantly lower noise level than OSEM without DL-based denoising at the same acquisition time. The CoV of the myocardium counts with the different DL denoising methods was on average 7% (CNN), 8% (RES), 7% (UNET) and 14% (cGAN) lower than with OSEM. All DL methods also outperformed full-time OSEM without DL-based denoising in terms of noise level with both half and quarter acquisition time, but this difference was not statistically significant. cGAN had the lowest CoV of the DL methods at all noise levels. Image quality and polar map uniformity of DL-denoised images were also better than those of reduced-acquisition-time OSEM. SSIM of the reduced-acquisition-time OSEM was overall higher than with the DL methods. The defect detection performance of full-time OSEM, measured as area under the ROC curve (AUC), was on average 0.97. Half-time OSEM, CNN, RES and UNET provided equal or nearly equal AUC. However, with quarter-time data CNN, RES and UNET had an average AUC of 0.93, which was lower than the AUC of full-time OSEM but equal to that of quarter-time OSEM. cGAN did not achieve the defect detection performance of the other DL methods: its average AUC was 0.94 with half-time data and 0.91 with quarter-time data. CONCLUSIONS DL-based denoising effectively improved the noise level, with slightly lower perfusion defect detection performance than full-time reconstruction. cGAN achieved the lowest noise level but, at the same time, the poorest defect detection performance among the studied DL methods.
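The CoV noise metric used throughout the comparison is simply the standard deviation over the mean of the counts in a region of interest. A minimal sketch under that assumed (standard) definition, with invented toy data:

```python
import numpy as np

# Coefficient of variation (CoV) of ROI counts: std / mean. Lower CoV means
# a smoother (less noisy) myocardium region in the reconstruction.
def coefficient_of_variation(roi_counts):
    roi_counts = np.asarray(roi_counts, dtype=float)
    return roi_counts.std() / roi_counts.mean()

rng = np.random.default_rng(4)
noisy = rng.normal(100.0, 20.0, 10000)     # toy high-noise reconstruction
denoised = rng.normal(100.0, 10.0, 10000)  # toy counts after DL denoising
print(round(coefficient_of_variation(noisy), 2),
      round(coefficient_of_variation(denoised), 2))
```

Because CoV is scale-free, it allows noise comparisons across reconstructions with different overall count levels, which is why it suits full- versus reduced-acquisition-time comparisons.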
Collapse
Affiliation(s)
- Antti Sohlberg
- Department of Clinical Physiology and Nuclear Medicine, Päijät-Häme Central Hospital, Lahti, Finland.
- HERMES Medical Solutions, Stockholm, Sweden.
| | - Tuija Kangasmaa
- Department of Clinical Physiology and Nuclear Medicine, Vaasa Central Hospital, Vaasa, Finland
| | | | - Antti Tikkakoski
- Clinical Physiology and Nuclear Medicine, Tampere University Hospital, Tampere, Finland
| |
Collapse
|
28
|
Li Y, Hu J, Sari H, Xue S, Ma R, Kandarpa S, Visvikis D, Rominger A, Liu H, Shi K. A deep neural network for parametric image reconstruction on a large axial field-of-view PET. Eur J Nucl Med Mol Imaging 2023; 50:701-714. [PMID: 36326869 DOI: 10.1007/s00259-022-06003-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 10/09/2022] [Indexed: 11/06/2022]
Abstract
PURPOSE PET scanners with a long axial field of view (AFOV), having ~20 times higher sensitivity than conventional scanners, provide new opportunities for enhanced parametric imaging but suffer from the dramatically increased volume and complexity of dynamic data. This study reconstructed a high-quality direct Patlak Ki image from five-frame sinograms without an input function, using a deep learning framework based on DeepPET, to explore the potential of artificial intelligence to reduce the acquisition time and the dependence on the input function in parametric imaging. METHODS This study was implemented on a large AFOV PET/CT scanner (Biograph Vision Quadra), and twenty patients were recruited with 18F-fluorodeoxyglucose (18F-FDG) dynamic scans. During training and testing of the proposed deep learning framework, the last five-frame (25 min, 40-65 min post-injection) sinograms were set as the input, and the Patlak Ki images reconstructed by a nested EM algorithm on the vendor platform were set as the ground truth. To evaluate the image quality of the predicted Ki images, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were calculated. Meanwhile, a linear regression process was applied between the predicted and true Ki means on avid malignant lesions and tumor volumes of interest (VOIs). RESULTS In the testing phase, the proposed method achieved an excellent MSE of less than 0.03%, and a high SSIM and PSNR of ~0.98 and ~38 dB, respectively. Moreover, there was a high correlation (DeepPET: R² = 0.73, self-attention DeepPET: R² = 0.82) between the predicted Ki and the traditionally reconstructed Patlak Ki means over eleven lesions. CONCLUSIONS The results show that the deep learning-based method produced high-quality parametric images from a small number of frames of projection data without an input function. It has much potential to address the dilemma of the long scan time and the dependency on the input function that still hamper the clinical translation of dynamic PET.
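Conventional Patlak graphical analysis, whose Ki output the network learns to predict directly, fits a line to the ratio of tissue to plasma activity against "Patlak time" (the integral of the input function over its current value). An illustrative NumPy sketch with a synthetic input function; all curves and values here are invented toy data, not from the study:

```python
import numpy as np

# Patlak graphical analysis: after equilibrium, C_t(t) = Ki * int(Cp) + V * Cp(t),
# so Ki is the slope of C_t/Cp versus int(Cp)/Cp.
def patlak_ki(t, cp, ct):
    # cumulative trapezoidal integral of the plasma input function
    icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    x = icp / cp                       # "Patlak time"
    y = ct / cp                        # normalized tissue activity
    slope, intercept = np.polyfit(x, y, 1)
    return slope                       # slope = net influx rate Ki

t = np.linspace(1.0, 60.0, 30)         # minutes post-injection
cp = 10.0 * np.exp(-0.05 * t) + 1.0    # toy plasma input function
icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ki_true, v = 0.05, 0.3
ct = ki_true * icp + v * cp            # tissue curve exactly on the Patlak line
print(round(patlak_ki(t, cp, ct), 4))
```

The conventional pipeline needs the full input function cp(t); the point of the deep learning framework is to regress Ki from a few late frames without it.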
Collapse
Affiliation(s)
- Y Li
- College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
| | - J Hu
- Department of Nuclear Medicine, Inselpital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - H Sari
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
| | - S Xue
- Department of Nuclear Medicine, Inselpital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - R Ma
- Department of Nuclear Medicine, Inselpital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department of Engineering Physics, Tsinghua University, Beijing, China
| | - S Kandarpa
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
| | - D Visvikis
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
| | - A Rominger
- Department of Nuclear Medicine, Inselpital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - H Liu
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China.
| | - K Shi
- Department of Nuclear Medicine, Inselpital, Bern University Hospital, University of Bern, Bern, Switzerland
- Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
| |
Collapse
|
29
|
Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D Medical Image Transformation using CSGAN. Comput Biol Med 2023; 153:106541. [PMID: 36652868 DOI: 10.1016/j.compbiomed.2023.106541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 11/30/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computer vision techniques are a rapidly growing means of transforming medical images for specific medical applications. This paper proposes an end-to-end 3D medical image transformation model using a CSGAN, named 3D-MedTranCSGAN. The 3D-MedTranCSGAN model integrates non-adversarial loss components with cyclic synthesized generative adversarial networks. The proposed model utilizes PatchGAN's discriminator network to penalize the difference between the synthesized image and the original image. The model also computes non-adversarial loss functions such as content, perception, and style transfer losses. 3DCascadeNet, a new generator architecture introduced in the paper, is used to enhance the perceptiveness of the transformed medical image through encoding-decoding pairs. We use the 3D-MedTranCSGAN model for various tasks without task-specific modification: PET to CT image transformation; reconstruction of CT to PET; correction of movement artefacts in MR images; and removing noise in PET images. We found that 3D-MedTranCSGAN outperformed other transformation methods in our experiments. For the first task, the proposed model yields an SSIM of 0.914, a PSNR of 26.12, an MSE of 255.5, a VIF of 0.4862, a UQI of 0.9067, and an LPIPS of 0.2284. For the second task, the model yields 0.9197, 25.7, 257.56, 0.4962, 0.9027, 0.2262. For the third task, the model yields 0.8862, 24.94, 0.4071, 0.6410, 0.2196. For the final task, the model yields 0.9521, 33.67, 33.57, 0.6091, 0.9255, 0.0244. Based on this analysis, the proposed model outperforms the other techniques.
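Two of the metrics reported above, MSE and PSNR, follow standard definitions that can be sketched directly. These are the assumed textbook formulas, not the paper's evaluation code, and the toy images are invented:

```python
import numpy as np

# Standard image-quality metrics: mean squared error and peak signal-to-noise
# ratio (in dB) between a transformed image and its reference.
def mse(a, b):
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(32, 32))                   # toy reference image
noisy = np.clip(ref + rng.normal(0.0, 13.0, ref.shape), 0, 255)  # toy degraded image
print(round(mse(ref, noisy), 1), round(psnr(ref, noisy), 1))
```

Higher PSNR (and SSIM) and lower MSE indicate the synthesized image is closer to the reference, which is how the per-task numbers in the abstract should be read.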
Collapse
Affiliation(s)
- S Poonkodi
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
| | - M Kanchana
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India.
| |
Collapse
|
30
|
Zemplényi A, Tachkov K, Balkanyi L, Németh B, Petykó ZI, Petrova G, Czech M, Dawoud D, Goettsch W, Gutierrez Ibarluzea I, Hren R, Knies S, Lorenzovici L, Maravic Z, Piniazhko O, Savova A, Manova M, Tesar T, Zerovnik S, Kaló Z. Recommendations to overcome barriers to the use of artificial intelligence-driven evidence in health technology assessment. Front Public Health 2023; 11:1088121. [PMID: 37181704 PMCID: PMC10171457 DOI: 10.3389/fpubh.2023.1088121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 04/03/2023] [Indexed: 05/16/2023] Open
Abstract
Background Artificial intelligence (AI) has attracted much attention because of its enormous potential in healthcare, but uptake has been slow. Substantial barriers challenge health technology assessment (HTA) professionals seeking to use AI-generated evidence from large real-world databases (e.g., claims data) for decision-making. As part of the European Commission-funded HTx H2020 (Next Generation Health Technology Assessment) project, we aimed to put forward recommendations to support healthcare decision-makers in integrating AI into HTA processes. The barriers addressed by this paper particularly concern Central and Eastern European (CEE) countries, where the implementation of HTA and access to health databases lag behind Western European countries. Methods We constructed a survey to rank the barriers to using AI for HTA purposes, completed by respondents from CEE jurisdictions with expertise in HTA. Using the results, two members of the HTx consortium from CEE developed recommendations on the most critical barriers. These recommendations were then discussed in a workshop by a wider group of experts, including HTA and reimbursement decision-makers from both CEE and Western European countries, and summarized in a consensus report. Results Recommendations have been developed to address the top 15 barriers in the areas of (1) human-factor-related barriers, focusing on educating HTA doers and users, establishing collaborations and sharing best practice; (2) regulatory and policy-related barriers, proposing increased awareness and political commitment and improved management of sensitive information for AI use; (3) data-related barriers, suggesting enhanced standardization and collaboration with data networks, management of missing and unstructured data, analytical and statistical approaches to address bias, use of quality assessment tools and quality standards, improved reporting, and better conditions for the use of data; and (4) technological barriers, suggesting sustainable development of AI infrastructure. Conclusion In the field of HTA, the great potential of AI to support evidence generation and evaluation has not yet been sufficiently explored and realized. Raising awareness of the intended and unintended consequences of AI-based methods and encouraging political commitment from policymakers are necessary to upgrade the regulatory and infrastructural environment and the knowledge base required to better integrate AI into HTA-based decision-making processes.
Collapse
Affiliation(s)
- Antal Zemplényi
- Center for Health Technology Assessment and Pharmacoeconomics Research, Faculty of Pharmacy, University of Pécs, Pécs, Hungary
- Syreon Research Institute, Budapest, Hungary
- *Correspondence: Antal Zemplényi
| | - Konstantin Tachkov
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
| | - Laszlo Balkanyi
- Medical Informatics R&D Center, Pannon University, Veszprém, Hungary
| | | | | | - Guenka Petrova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
| | - Marcin Czech
- Department of Pharmacoeconomics, Institute of Mother and Child, Warsaw, Poland
| | - Dalia Dawoud
- Science Policy and Research Programme, Science Evidence and Analytics Directorate, National Institute for Health and Care Excellence (NICE), London, United Kingdom
- Cairo University, Faculty of Pharmacy, Cairo, Egypt
| | - Wim Goettsch
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht University, Utrecht, Netherlands
- National Health Care Institute, Diemen, Netherlands
| | | | - Rok Hren
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
| | - Saskia Knies
- National Health Care Institute, Diemen, Netherlands
| | - László Lorenzovici
- Syreon Research Romania, Tirgu Mures, Romania
- G. E. Palade University of Medicine, Pharmacy, Science and Technology, Tirgu Mures, Romania
| | | | - Oresta Piniazhko
- HTA Department of State Expert Centre of the Ministry of Health of Ukraine, Kyiv, Ukraine
| | - Alexandra Savova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- National Council of Prices and Reimbursement of Medicinal Products, Sofia, Bulgaria
| | - Manoela Manova
- Department of Organization and Economics of Pharmacy, Faculty of Pharmacy, Medical University of Sofia, Sofia, Bulgaria
- National Council of Prices and Reimbursement of Medicinal Products, Sofia, Bulgaria
| | - Tomas Tesar
- Department of Organisation and Management of Pharmacy, Faculty of Pharmacy, Comenius University in Bratislava, Bratislava, Slovakia
| | | | - Zoltán Kaló
- Syreon Research Institute, Budapest, Hungary
- Centre for Health Technology Assessment, Semmelweis University, Budapest, Hungary
| |
Collapse
|
31
|
Sundell VM, Mäkelä T, Vitikainen AM, Kaasalainen T. Convolutional neural network -based phantom image scoring for mammography quality control. BMC Med Imaging 2022; 22:216. [PMID: 36476319 PMCID: PMC9727908 DOI: 10.1186/s12880-022-00944-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 11/28/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Visual evaluation of phantom images is an important but time-consuming part of mammography quality control (QC). Consistent scoring of phantom images over a device's lifetime is highly desirable. Recently, convolutional neural networks (CNNs) have been applied to a wide range of image classification problems with high accuracy. The purpose of this study was to automate the mammography QC phantom scoring task by training CNN models to mimic a human reviewer. METHODS Eight CNN variations consisting of three to ten convolutional layers were trained for detecting targets (fibres, microcalcifications and masses) in American College of Radiology (ACR) accreditation phantom images, and the results were compared with human scoring. Regular and artificially degraded/improved QC phantom images from eight mammography devices were visually evaluated by one reviewer. These images were used in training the CNN models. A separate test set consisted of daily QC images from the eight devices and separately acquired images with varying dose levels. These were scored by four reviewers and considered the ground truth for CNN performance testing. RESULTS Although the hyper-parameter search space was limited, an optimal network depth was identified, after which additional layers resulted in decreased accuracy. The highest scoring accuracy (95%) was achieved with the CNN consisting of six convolutional layers. The highest deviation between the CNN and the reviewers was found at the lowest dose levels. No significant difference emerged between the visual reviews and the CNN results except in the case of the smallest masses. CONCLUSION A CNN-based automatic mammography QC phantom scoring system can score phantom images in good agreement with human reviewers, and can therefore be of benefit in mammography QC.
Collapse
Affiliation(s)
- Veli-Matti Sundell
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Teemu Mäkelä
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Anne-Mari Vitikainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Touko Kaasalainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
| |
Collapse
|
32
|
Li S, Wang G. Deep Kernel Representation for Image Reconstruction in PET. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3029-3038. [PMID: 35584077 PMCID: PMC9613528 DOI: 10.1109/tmi.2022.3176002] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Image reconstruction for positron emission tomography (PET) is challenging because of the ill-conditioned tomographic problem and low counting statistics. Kernel methods address this challenge by using a kernel representation to incorporate image prior information into the forward model of iterative PET image reconstruction. Existing kernel methods commonly construct the kernels using an empirical process, which may lead to unsatisfactory performance. In this paper, we describe the equivalence between the kernel representation and a trainable neural network model. A deep kernel method is then proposed that exploits a deep neural network to enable automated learning of an improved kernel model and is directly applicable to single subjects in dynamic PET. The training process utilizes available image prior data to form a set of robust kernels in an optimized way rather than empirically. The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform the existing kernel method and neural network method for dynamic PET image reconstruction.
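As an illustrative aside (not the authors' implementation): the kernel representation writes the image as x = Kα and runs MLEM on the coefficients α rather than on x directly. A minimal numpy sketch with a random toy system matrix and a Gaussian kernel built from made-up prior features (all sizes and names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_bins = 16, 48

# Toy nonnegative system matrix A and ground-truth image.
A = rng.random((n_bins, n_pix))
x_true = rng.random(n_pix) + 0.5
y = rng.poisson(A @ x_true * 50) / 50.0      # noisy toy "sinogram"

# Kernel matrix K built from prior features (here: random prior images).
feats = rng.random((n_pix, 3))
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5)
K /= K.sum(1, keepdims=True)                 # row-normalize

# Kernelized MLEM: image is x = K @ alpha; multiplicative update on alpha.
alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_bins))         # sensitivity term K^T A^T 1
for _ in range(50):
    ybar = A @ (K @ alpha)                   # forward projection of K alpha
    alpha *= (K.T @ (A.T @ (y / np.maximum(ybar, 1e-12)))) / sens
x_hat = K @ alpha
print(np.all(x_hat >= 0))                    # multiplicative updates stay nonnegative
```

The deep kernel method of the paper replaces the fixed Gaussian kernel K by one produced by a trained network; the reconstruction update itself keeps this form.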
Collapse
|
33
|
Hosch R, Weber M, Sraieb M, Flaschel N, Haubold J, Kim MS, Umutlu L, Kleesiek J, Herrmann K, Nensa F, Rischpler C, Koitka S, Seifert R, Kersting D. Artificial intelligence guided enhancement of digital PET: scans as fast as CT? Eur J Nucl Med Mol Imaging 2022; 49:4503-4515. [PMID: 35904589 PMCID: PMC9606065 DOI: 10.1007/s00259-022-05901-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 06/30/2022] [Indexed: 12/03/2022]
Abstract
Purpose Both digital positron emission tomography (PET) detector technologies and artificial intelligence-based image post-reconstruction methods make it possible to reduce the PET acquisition time while maintaining diagnostic quality. The aim of this study was to acquire ultra-low-count fluorodeoxyglucose (FDG) ExtremePET images on a digital PET/computed tomography (CT) scanner at an acquisition time comparable to a CT scan and to generate synthetic full-dose PET images using an artificial neural network. Methods This is a prospective, single-arm, single-center phase I/II imaging study. A total of 587 patients were included. For each patient, a standard and an ultra-low-count FDG PET/CT scan (whole-body acquisition time of about 30 s) were acquired. A modified pix2pixHD deep-learning network was trained employing 387 datasets as the training cohort and 200 as the test cohort. Three models (PET-only and PET/CT with or without group convolution) were compared. Detectability and quantification were evaluated. Results The PET/CT input model with group convolution performed best regarding lesion signal recovery and was selected for detailed evaluation. Synthetic PET images were of high visual image quality; the mean absolute lesion SUVmax (maximum standardized uptake value) difference was 1.5. Patient-based sensitivity and specificity for lesion detection were 79% and 100%, respectively. Undetected lesions had lower tracer uptake and smaller lesion volume. In a matched-pair comparison, the patient-based (lesion-based) detection rate was 89% (78%) for PERCIST (PET response criteria in solid tumors)-measurable and 36% (22%) for non-PERCIST-measurable lesions. Conclusion Lesion detectability and lesion quantification were promising in the context of extremely fast acquisition times. Possible application scenarios might include re-staging of late-stage cancer patients, in whom assessment of total tumor burden can be of higher relevance than detailed evaluation of small and low-uptake lesions.
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05901-x.
Collapse
Affiliation(s)
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Manuel Weber
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Miriam Sraieb
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Moon-Sung Kim
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Ken Herrmann
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Christoph Rischpler
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| | - Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
| | - Robert Seifert
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany; Department of Nuclear Medicine, University Hospital Münster, University of Münster, Albert-Schweitzer-Campus 1, 48149, Münster, Germany
| | - David Kersting
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
| |
Collapse
|
34
|
Manimegalai P, Suresh Kumar R, Valsalan P, Dhanagopal R, Vasanth Raj PT, Christhudass J. 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine. SCANNING 2022; 2022:9640177. [PMID: 35924105 PMCID: PMC9308558 DOI: 10.1155/2022/9640177] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 06/27/2022] [Indexed: 05/15/2023]
Abstract
Though artificial intelligence (AI) has been used in nuclear medicine for more than 50 years, recent progress in deep learning (DL) and machine learning (ML) has driven the development of new AI capabilities in the field. ANNs are used in both deep learning and machine learning in nuclear medicine. If a 3D convolutional neural network (CNN) is used, the inputs may be the actual images being analyzed rather than a set of hand-crafted features. In nuclear medicine, artificial intelligence reimagines and reengineers the field's therapeutic and scientific capabilities. Understanding the concepts of 3D CNNs and U-Net in the context of nuclear medicine allows deeper engagement with clinical and research applications, as well as the ability to troubleshoot problems when they emerge. Business analytics, risk assessment, quality assurance, and basic classifications are all examples of simple ML applications. General nuclear medicine, SPECT, PET, MRI, and CT may benefit from more advanced DL applications for classification, detection, localization, segmentation, quantification, and radiomic feature extraction utilizing 3D CNNs. An ANN may be used to analyze small datasets, alongside traditional statistical methods, as well as larger ones. Until recently, nuclear medicine's clinical and research practices were largely unaffected by the introduction of AI; the advent of 3D CNN and U-Net applications, however, has fundamentally altered the clinical and research landscape. Nuclear medicine professionals must now have at least an elementary understanding of AI principles such as artificial neural networks (ANNs) and convolutional neural networks (CNNs).
Collapse
Affiliation(s)
- P. Manimegalai
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
| | - R. Suresh Kumar
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - Prajoona Valsalan
- Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
| | - R. Dhanagopal
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - P. T. Vasanth Raj
- Center for System Design, Chennai Institute of Technology, Chennai, India
| | - Jerome Christhudass
- Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
| |
Collapse
|
35
|
Ma R, Hu J, Sari H, Xue S, Mingels C, Viscione M, Kandarpa VSS, Li WB, Visvikis D, Qiu R, Rominger A, Li J, Shi K. An encoder-decoder network for direct image reconstruction on sinograms of a long axial field of view PET. Eur J Nucl Med Mol Imaging 2022; 49:4464-4477. [PMID: 35819497 DOI: 10.1007/s00259-022-05861-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Accepted: 06/02/2022] [Indexed: 11/04/2022]
Abstract
PURPOSE Deep learning is an emerging reconstruction method for positron emission tomography (PET), which can tackle complex PET corrections in an integrated procedure. This paper optimizes direct PET reconstruction from sinograms on a long axial field of view (LAFOV) PET. METHODS This paper proposes a novel deep learning architecture to reduce the biases during direct reconstruction from sinograms to images. This architecture is based on an encoder-decoder network, where a perceptual loss is used with pre-trained convolutional layers. It is trained and tested on data of 80 patients acquired on a recent Siemens Biograph Vision Quadra LAFOV PET/CT. The patients are randomly split into a training dataset of 60 patients, a validation dataset of 10 patients, and a test dataset of 10 patients. The 3D sinograms are converted into 2D sinogram slices and used as input to the network. In addition, the vendor-reconstructed images are considered as ground truths. Finally, the proposed method is compared with DeepPET, a benchmark deep learning method for PET reconstruction. RESULTS Compared with DeepPET, the proposed network significantly reduces the normalized root-mean-squared error (NRMSE) from 0.63 to 0.6 (p < 0.01) and increases the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) from 0.93 to 0.95 (p < 0.01) and from 82.02 to 82.36 (p < 0.01), respectively. The reconstruction time is approximately 10 s per patient, about 23 times shorter than the conventional method. The errors of mean standardized uptake values (SUVmean) for lesions between ground truth and the predicted result are reduced from 33.5 to 18.7% (p = 0.03). In addition, the error of SUVmax is reduced from 32.7 to 21.8% (p = 0.02). CONCLUSION The results demonstrate the feasibility of using deep learning to reconstruct images with acceptable image quality and short reconstruction time.
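As an illustrative aside: the NRMSE and PSNR figures quoted above follow standard definitions, though normalization conventions vary between papers. A minimal sketch of one common choice (not necessarily the exact convention used in the cited work):

```python
import numpy as np

def nrmse(ref, img):
    """RMSE normalized by the reference dynamic range.
    (Normalization conventions differ between papers; this is one common choice.)"""
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, img, peak=None):
    """Peak signal-to-noise ratio in dB; higher is better."""
    peak = ref.max() if peak is None else peak
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                       # stand-in "ground truth" image
noisy = ref + rng.normal(0, 0.05, ref.shape)
less_noisy = ref + rng.normal(0, 0.01, ref.shape)

# A better reconstruction has lower NRMSE and higher PSNR.
assert nrmse(ref, less_noisy) < nrmse(ref, noisy)
assert psnr(ref, less_noisy) > psnr(ref, noisy)
```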
It is shown that the proposed method can improve the quality of deep learning-based reconstructed images without additional CT images for attenuation and scatter corrections. This study demonstrated the feasibility of using deep learning to rapidly reconstruct images from actual clinical measurements on LAFOV PET without additional CT images for complex corrections. Despite this progress, AI-based reconstruction generalizes poorly to untrained scenarios because of its limited extrapolation capability and currently cannot completely replace conventional reconstruction.
Collapse
Affiliation(s)
- Ruiyao Ma
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China; Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Bavaria, Neuherberg, Germany
| | - Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Hasan Sari
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
| | - Song Xue
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Marco Viscione
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | | | - Wei Bo Li
- Institute of Radiation Medicine, Helmholtz Zentrum München German Research Center for Environmental Health (GmbH), Bavaria, Neuherberg, Germany
| | | | - Rui Qiu
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China.
| | - Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Junli Li
- Department of Engineering Physics, Tsinghua University, and Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China.
| | - Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| |
Collapse
|
36
|
Toyonaga T, Shao D, Shi L, Zhang J, Revilla EM, Menard D, Ankrah J, Hirata K, Chen MK, Onofrey JA, Lu Y. Deep learning-based attenuation correction for whole-body PET - a multi-tracer study with 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. Eur J Nucl Med Mol Imaging 2022; 49:3086-3097. [PMID: 35277742 PMCID: PMC10725742 DOI: 10.1007/s00259-022-05748-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 02/25/2022] [Indexed: 11/04/2022]
Abstract
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists, and tumor SUV and volume measures were reported, as well as evaluation using conventional image analysis metrics. RESULTS µ-DL yielded high resolution and fine detail recovery of the attenuation map, which was superior in quality to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: - 3.6 ± 4.4% vs. - 1.7 ± 4.5% for 18F-FDG (N = 152), - 4.3 ± 5.1% vs. 0.4 ± 2.8% for 68Ga-DOTATATE (N = 70), and - 7.3 ± 2.9% vs. - 2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., - 8.4 ± 14.5% (OSEMMLAA) vs. - 3.0 ± 15.0% for 18F-FDG, - 14.1 ± 19.7% vs. 1.8 ± 11.6% for 68Ga-DOTATATE, and - 15.9 ± 9.1% vs. - 6.4 ± 6.4% for 18F-Fluciclovine.
CONCLUSIONS The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68Ga-DOTATATE and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation. The proposed method provides quality clinically equivalent to CT-based attenuation correction for the three tracers.
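As an illustrative aside (toy geometry, not the cited framework): once an attenuation map µ is available, whether from CT, MLAA, or a network, the attenuation correction factor of a line of response is the exponential of the line integral of µ along it. A sketch for horizontal lines through a 2D µ-map, with a made-up water-equivalent phantom:

```python
import numpy as np

def acf_horizontal(mu_map, pixel_cm):
    """Attenuation correction factors for horizontal lines of response:
    ACF = exp( line integral of mu ), one value per image row."""
    line_integrals = mu_map.sum(axis=1) * pixel_cm   # cm^-1 times cm
    return np.exp(line_integrals)

# Toy water-equivalent mu-map (mu_water ~ 0.096 cm^-1 at 511 keV).
mu = np.zeros((8, 50))
mu[2:6, 10:40] = 0.096          # a 30-pixel-wide "body" on 4 rows
acf = acf_horizontal(mu, pixel_cm=0.4)

print(acf[0])                   # -> 1.0, no attenuation outside the body
print(acf[3] > 1.0)             # attenuated rows need correction > 1
```

Errors in µ (as with µ-MLAA) propagate multiplicatively into the corrected activity through these factors, which is why the quality of the attenuation map drives the SUV accuracy reported above.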
Collapse
Affiliation(s)
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
| | - Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
| | - Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Enette Mae Revilla
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | | | | | - Kenji Hirata
- Department of Diagnostic Imaging, School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
| | - Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Yale New Haven Hospital, New Haven, CT, USA
| | - John A Onofrey
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Department of Urology, Yale University, New Haven, CT, USA
| | - Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA.
| |
Collapse
|
37
|
Cui J, Gong K, Guo N, Kim K, Liu H, Li Q. Unsupervised PET logan parametric image estimation using conditional deep image prior. Med Image Anal 2022; 80:102519. [PMID: 35767910 DOI: 10.1016/j.media.2022.102519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 06/14/2022] [Accepted: 06/15/2022] [Indexed: 11/18/2022]
Abstract
Recently, deep learning-based denoising methods have gradually been used for PET image denoising and have achieved impressive results. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that does not need prior training or a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. The neural network was utilized to represent the images of Logan slope and intercept. The patient's computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method's results had higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%, thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%, thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal mean (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%, thalamus of brain PET datasets: 103.59%±16.37%).
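As an illustrative aside (not the cited CDIP pipeline): the underlying Logan reference-tissue graphical analysis reduces, after a transformation of the time-activity curves, to fitting a straight line whose slope estimates the distribution volume ratio (DVR). A minimal numpy sketch on an idealized case where the target curve is an exact multiple of the reference curve (the k2' term is omitted for simplicity):

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y (starts at 0)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_ref(ct, cref, t, t_star_idx):
    """Logan reference-tissue plot (k2' term omitted for simplicity):
    fit Y = slope * X + b on late frames; the slope estimates DVR."""
    X = cumtrapz(cref, t) / ct
    Y = cumtrapz(ct, t) / ct
    slope, intercept = np.polyfit(X[t_star_idx:], Y[t_star_idx:], 1)
    return slope, intercept

t = np.linspace(0.1, 60.0, 40)                 # frame times, minutes
cref = t * np.exp(-t / 20.0)                   # toy reference-region TAC
ct = 2.0 * cref                                # target at equilibrium ratio DVR = 2
dvr, _ = logan_ref(ct, cref, t, t_star_idx=20)
print(round(dvr, 6))                           # -> 2.0 for this idealized case
```

In the paper, the slope and intercept images are not fitted voxel-by-voxel like this but are represented by the CDIP network, which regularizes them with anatomical information from CT or MR.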
Collapse
Affiliation(s)
- Jianan Cui
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Ning Guo
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Kyungsang Kim
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
| | - Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; Jiaxing Key Laboratory of Photonic Sensing and Intelligent Imaging, Jiaxing, Zhejiang 314000, China; Intelligent Optics and Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Zhejiang 314000, China.
| | - Quanzheng Li
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA.
| |
Collapse
|
38
|
Ito T, Maeno T, Tsuchikame H, Shishido M, Nishi K, Kojima S, Hayashi T, Suzuki K. Adapting a low-count acquisition of the bone scintigraphy using deep denoising super-resolution convolutional neural network. Phys Med 2022; 100:18-25. [PMID: 35716484 DOI: 10.1016/j.ejmp.2022.06.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 04/21/2022] [Accepted: 06/11/2022] [Indexed: 10/18/2022] Open
Abstract
PURPOSE Deep-layer learning processing may improve contrast imaging with greater precision in low-count acquisition. However, no data on noise reduction using super-resolution processing for deep-layer learning have been reported in nuclear medicine imaging. OBJECTIVES This study was designed to evaluate the adaptability of deep denoising super-resolution convolutional neural networks (DDSRCNN) in nuclear medicine by comparing them with denoising convolutional neural networks (DnCNN), Gaussian processing, and nonlinear diffusion (NLD) processing. METHODS In this study, 156 patients were included. Data were collected using a matrix size of 256 × 256 with a pixel size of 2.46 mm at 0.898 folds, a 15% energy window at the center of the photopeak energy (140 keV), and a total count of 1000 kilocounts (kct). Following the training and validation of two learning models, we created 100 images for each of the 20 test data. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) between each image and the reference image were calculated. RESULTS DDSRCNN showed the highest PSNR values for all total counts. Regarding SSIM, DDSRCNN had significantly higher values than the original and Gaussian-processed images. In DnCNN, false accumulation was observed as the total counts increased. Regarding the PSNR and SSIM transition, the model using 100-500-kct training data was significantly higher than that using 100-kct training data. CONCLUSIONS Edge-preserving noise reduction processing was possible, and adaptability to low-count acquisition was demonstrated using DDSRCNN. Using training data with different noise levels, DDSRCNN could learn the noise components with high accuracy and contrast improvement.
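As an illustrative aside (a classical baseline, not DDSRCNN): plain Gaussian smoothing shows the trade-off the edge-preserving network is designed to avoid: noise in flat regions drops, but the step edge is blurred. A minimal 1D sketch:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()

def smooth(signal, sigma=2.0):
    """Plain Gaussian smoothing: suppresses noise but also blurs edges."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    return np.convolve(signal, k, mode="same")

rng = np.random.default_rng(3)
edge = np.where(np.arange(200) < 100, 0.0, 1.0)    # ideal step edge
noisy = edge + rng.normal(0, 0.1, 200)
smoothed = smooth(noisy)

flat_noise_before = noisy[:80].std()
flat_noise_after = smoothed[:80].std()
edge_slope = smoothed[100] - smoothed[99]          # blurred step transition
print(flat_noise_after < flat_noise_before)        # noise is reduced...
print(edge_slope < 1.0)                            # ...but the edge is no longer sharp
```

An edge-preserving denoiser such as DDSRCNN aims to achieve the first property without the second.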
Collapse
Affiliation(s)
- Toshimune Ito
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo 173-8605, Japan.
| | - Takafumi Maeno
- Department of Radiology, Saiseikai Yokohamashi Tobu Hospital, 3-6-1 Shimosueyoshi, Tsurumi-ku, Yokohama, Kanagawa 230-0012, Japan.
| | - Hirotatsu Tsuchikame
- Department of Radiology, Saiseikai Yokohamashi Tobu Hospital, 3-6-1 Shimosueyoshi, Tsurumi-ku, Yokohama, Kanagawa 230-0012, Japan.
| | - Masaaki Shishido
- Department of Radiology, Saiseikai Yokohamashi Tobu Hospital, 3-6-1 Shimosueyoshi, Tsurumi-ku, Yokohama, Kanagawa 230-0012, Japan.
| | - Kana Nishi
- Department of Radiology, Saiseikai Yokohamashi Tobu Hospital, 3-6-1 Shimosueyoshi, Tsurumi-ku, Yokohama, Kanagawa 230-0012, Japan
| | - Shinya Kojima
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo 173-8605, Japan.
| | - Tatsuya Hayashi
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo 173-8605, Japan.
| | - Kentaro Suzuki
- Department of Radiological Technology, Toranomon Hospital, 2-2-2 Toranomon, Minato-ku, Tokyo 105-8470, Japan; Department of Radiation Oncology, Graduate School of Medicine, Juntendo University, 2-1-1 Hongo, Bunkyo-ku, Tokyo, Japan.
| |
Collapse
|
39
|
Biophysical Model: A Promising Method in the Study of the Mechanism of Propofol: A Narrative Review. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8202869. [PMID: 35619772 PMCID: PMC9129930 DOI: 10.1155/2022/8202869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 04/02/2022] [Accepted: 04/19/2022] [Indexed: 11/17/2022]
Abstract
The physiological and neuroregulatory mechanism of propofol is largely based on very limited knowledge. It is one of the important puzzling issues in anesthesiology and is of great value in both scientific and clinical fields. It is acknowledged that neural networks which are comprised of a number of neural circuits might be involved in the anesthetic mechanism. However, the mechanism of this hypothesis needs to be further elucidated. With the progress of artificial intelligence, it is more likely to solve this problem through using artificial neural networks to perform temporal waveform data analysis and to construct biophysical computational models. This review focuses on current knowledge regarding the anesthetic mechanism of propofol, an intravenous general anesthetic, by constructing biophysical computational models.
Collapse
|
40
|
Bonardel G, Dupont A, Decazes P, Queneau M, Modzelewski R, Coulot J, Le Calvez N, Hapdey S. Clinical and phantom validation of a deep learning based denoising algorithm for F-18-FDG PET images from lower detection counting in comparison with the standard acquisition. EJNMMI Phys 2022; 9:36. [PMID: 35543894 PMCID: PMC9095795 DOI: 10.1186/s40658-022-00465-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 04/20/2022] [Indexed: 11/21/2022] Open
Abstract
Background PET/CT image quality is directly influenced by the F-18-FDG injected activity. The higher the injected activity, the less noise in the reconstructed images, but the greater the staff radiation exposure. New FDA-cleared software has been introduced to obtain clinical PET images acquired at 25% of the count statistics, considering US practices. Our aim is to determine the limits of a deep learning-based denoising algorithm (SubtlePET) applied to statistically reduced PET raw data from 3 different last-generation PET scanners, in comparison with the regular acquisition in phantoms and patients, considering the European guidelines for radiotracer injection activities. Images of low- and high-contrast (SBR = 2 and 5) spheres of the IEC phantom and high-contrast (SBR = 5) micro-spheres of the Jaszczak phantom were acquired on 3 different PET devices. 110 patients with different pathologies were included. The data were acquired in list-mode and retrospectively reconstructed with the regular acquisition count statistics (PET100), a 50% reduction in counts (PET50), and a 66% reduction in counts (PET33). These count-reduced images were post-processed with SubtlePET to obtain PET50 + SP and PET33 + SP images. Patient image quality was scored by 2 senior nuclear physicians. Peak signal-to-noise and structural similarity metrics were computed to compare the low-count images to the regular acquisition (PET100). Results SubtlePET reliably denoised the images and maintained the SUVmax values in PET50 + SP. SubtlePET-enhanced images (PET33 + SP) had slightly increased noise compared to PET100 and could lead to a potential loss of information in terms of lesion detectability. Regarding the patient datasets, PET100 and PET50 + SP were qualitatively comparable. The SubtlePET algorithm was able to correctly recover the SUVmax values of the lesions and maintain a noise level equivalent to that of full-time images.
Conclusion Based on our results, SubtlePET is suitable for clinical practice with half-time or half-dose acquisitions based on the European recommended injected dose of 3 MBq/kg, without loss of diagnostic confidence. Supplementary Information The online version contains supplementary material available at 10.1186/s40658-022-00465-z.
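As an illustrative aside (not SubtlePET's pipeline): reduced-count acquisitions such as PET50 and PET33 can be emulated from full-statistics count data by binomial thinning, since keeping each recorded count independently with probability p turns Poisson(λ) data into Poisson(pλ) data, i.e. a statistically correct p-fraction acquisition. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

# Full-statistics toy "sinogram": Poisson counts around a smooth mean.
mean = 40.0 * np.ones((32, 32))
full = rng.poisson(mean)

def thin(counts, p):
    """Binomial thinning: keep each count with probability p."""
    return rng.binomial(counts, p)

half = thin(full, 0.50)    # emulated "PET50"
third = thin(full, 0.33)   # emulated "PET33"

print(full.mean() > half.mean() > third.mean())
```

This is why list-mode data, as used in the study, is so convenient: the retained events themselves can be subsampled, rather than a surrogate noise model being added to the images.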
Collapse
Affiliation(s)
- Gerald Bonardel
- Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France; Nuclear Medicine, Hopital Delafontaine, Saint-Denis, France
| | - Pierre Decazes
- Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France; QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France
| | - Mathieu Queneau
- Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France; Nuclear Medicine, Hopital Delafontaine, Saint-Denis, France
| | - Romain Modzelewski
- Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France; QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France
| | - Nicolas Le Calvez
- Nuclear Medicine, Centre Cardiologique du Nord, Saint-Denis, France; Nuclear Medicine, Hopital Delafontaine, Saint-Denis, France
| | - Sébastien Hapdey
- Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France; QuantIF-LITIS EA4108, Rouen University Hospital, Rouen, France.
| |
Collapse
|
41
|
Li T, Zhang M, Qi W, Asma E, Qi J. Deep Learning Based Joint PET Image Reconstruction and Motion Estimation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1230-1241. [PMID: 34928789 PMCID: PMC9064915 DOI: 10.1109/tmi.2021.3136553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory gated data through a joint estimation framework. However, conventional motion estimation methods based on registration of a pair of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation. We propose a joint estimation framework by incorporating a learned image registration network into a regularized PET image reconstruction. The joint estimation was formulated as a constrained optimization problem with moving gated images related to a fixed image via the deep neural network. The constrained optimization problem is solved by the alternating direction method of multipliers (ADMM) algorithm. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation. Motion compensated reconstructions using pre-calculated deformation fields by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison. Our simulation study shows that the proposed DL-ADMM joint estimation method reduces bias compared to the ungated image without increasing noise and outperforms the competing methods. In the real data study, our proposed method also generated higher lesion contrast and sharper liver boundaries compared to the ungated image and had lower noise than the reference gated image.
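The abstract formulates joint reconstruction and motion estimation as a constrained optimization solved by ADMM. The toy sketch below uses a standard ADMM for the lasso (an assumption for illustration, not the paper's PET objective) to show the same alternating structure: a quadratic primal update, a proximal update, and a dual ascent step on the constraint.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Toy ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1  s.t.  x = z.

    Illustrates the alternating x-update / z-update / dual-update pattern
    shared by ADMM-based joint estimation schemes.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    Atb = A.T @ b
    # Pre-factor the x-update system (data term + coupling term)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal operator of the l1 term (soft thresholding)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual ascent on the constraint x = z
        u = u + x - z
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = admm_lasso(A, b)
print(np.round(x_hat[[2, 7, 11]], 2))
```

In the paper's setting the two subproblems are far richer (a regularized reconstruction and a network-constrained registration), but the splitting logic is the same.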
Collapse
|
42
|
Adler SS, Seidel J, Choyke PL. Advances in Preclinical PET. Semin Nucl Med 2022; 52:382-402. [PMID: 35307164 PMCID: PMC9038721 DOI: 10.1053/j.semnuclmed.2022.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 12/18/2022]
Abstract
The classical intent of PET imaging is to obtain the most accurate estimate of the amount of positron-emitting radiotracer in the smallest possible volume element located anywhere in the imaging subject at any time using the least amount of radioactivity. Reaching this goal, however, is confounded by an enormous array of interlinked technical issues that limit imaging system performance. As a result, advances in PET, human or animal, are the result of cumulative innovations across each of the component elements of PET, from data acquisition to image analysis. In the report that follows, we trace several of these advances across the imaging process with a focus on small animal PET.
Collapse
Affiliation(s)
- Stephen S Adler
- Frederick National Laboratory for Cancer Research, Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
| | - Jurgen Seidel
- Contractor to Frederick National Laboratory for Cancer Research, Leidos Biomedical Research, Inc., Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
| | - Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, Bethesda, MD.
| |
Collapse
|
43
|
Yang B, Zhou L, Chen L, Lu L, Liu H, Zhu W. Cycle-consistent learning-based hybrid iterative reconstruction for whole-body PET imaging. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5bfb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 03/09/2022] [Indexed: 11/11/2022]
Abstract
Objective. To develop a cycle-consistent learning-based hybrid iterative reconstruction (IR) method that takes only slightly longer than analytic reconstruction, while pursuing the image resolution and tumor quantification achievable by IR for whole-body PET imaging. Approach. We backproject the raw positron emission tomography (PET) data to generate a blurred activity distribution. From the backprojection to the IR label, a reconstruction mapping that approximates the deblurring filters for the point spread function and the physical effects of the PET system is unrolled into a neural network with stacked convolutional layers. By minimizing the cycle-consistent loss, we train the reconstruction and inverse mappings simultaneously. Main results. In the phantom study, the proposed method results in an absolute relative error (RE) of the mean activity of 4.0% ± 0.7% in the largest hot sphere, similar to the RE of the full-count IR and significantly smaller than that obtained by CycleGAN postprocessing. Achieving a noise reduction of 48.1% ± 0.5% relative to the low-count IR, the proposed method demonstrates advantages over the low-count IR and CycleGAN in terms of resolution maintenance, contrast recovery, and noise reduction. In the patient study, the proposed method obtains a noise reduction of 44.6% ± 8.0% for the lung and the liver, while maintaining the regional mean activity in both simulated lesions and real tumors. The run time of the proposed method is only half that of the conventional IR. Significance. The proposed cycle-consistent learning from the backprojection rather than the raw PET data or an IR result enables improved reconstruction accuracy, reduced memory requirements, and fast implementation speeds for clinical whole-body PET imaging.
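The cycle-consistent loss mentioned above couples a reconstruction mapping with its inverse so that composing the two returns the input. A minimal sketch, with plain linear functions standing in for the trained networks (an illustrative assumption, not the paper's architecture):

```python
import numpy as np

def cycle_consistent_loss(F, G, x, y):
    """Cycle-consistency loss for a forward mapping F and inverse mapping G.

    F: backprojection -> reconstruction, G: reconstruction -> backprojection
    (here plain callables standing in for the two trained networks).
    """
    forward_cycle = np.mean(np.abs(G(F(x)) - x))   # x -> F(x) -> G(F(x)) should return x
    backward_cycle = np.mean(np.abs(F(G(y)) - y))  # y -> G(y) -> F(G(y)) should return y
    return forward_cycle + backward_cycle

# Toy check: linear maps that are exact inverses give (near-)zero loss
M = np.array([[2.0, 0.0], [1.0, 1.0]])
Minv = np.linalg.inv(M)
F = lambda v: v @ M.T
G = lambda v: v @ Minv.T
rng = np.random.default_rng(2)
x = rng.normal(size=(8, 2))
y = rng.normal(size=(8, 2))
print(cycle_consistent_loss(F, G, x, y) < 1e-10)  # prints: True
```

Minimizing this loss over both mappings simultaneously is what lets the method train the reconstruction network without paired raw-data/label supervision at inference time.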
Collapse
|
44
|
San José Estépar R. Artificial intelligence in functional imaging of the lung. Br J Radiol 2022; 95:20210527. [PMID: 34890215 PMCID: PMC9153712 DOI: 10.1259/bjr.20210527] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 07/11/2021] [Accepted: 07/28/2021] [Indexed: 12/16/2022] Open
Abstract
Artificial intelligence (AI) is transforming the way we perform advanced imaging. From high-resolution image reconstruction to predicting functional response from clinically acquired data, AI is promising to revolutionize clinical evaluation of lung performance, pushing the boundary in pulmonary functional imaging for patients suffering from respiratory conditions. In this review, we overview the current developments and expound on some of the encouraging new frontiers. We focus on the recent advances in machine learning and deep learning that enable reconstructing images, quantitating, and predicting functional responses of the lung. Finally, we shed light on the potential opportunities and challenges ahead in adopting AI for functional lung imaging in clinical settings.
Collapse
Affiliation(s)
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, United States
| |
Collapse
|
45
|
Xu J, Noo F. Convex optimization algorithms in medical image reconstruction-in the age of AI. Phys Med Biol 2022; 67:10.1088/1361-6560/ac3842. [PMID: 34757943 PMCID: PMC10405576 DOI: 10.1088/1361-6560/ac3842] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Accepted: 11/10/2021] [Indexed: 11/12/2022]
Abstract
The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
Collapse
Affiliation(s)
- Jingyan Xu
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
| | - Frédéric Noo
- Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, United States of America
| |
Collapse
|
46
|
Huang Z, Wu Y, Fu F, Meng N, Gu F, Wu Q, Zhou Y, Yang Y, Liu X, Zheng H, Liang D, Wang M, Hu Z. Parametric image generation with the uEXPLORER total-body PET/CT system through deep learning. Eur J Nucl Med Mol Imaging 2022; 49:2482-2492. [DOI: 10.1007/s00259-022-05731-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 02/13/2022] [Indexed: 11/25/2022]
|
47
|
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031 PMCID: PMC9250483 DOI: 10.1007/s00259-022-05746-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 02/25/2022] [Indexed: 12/21/2022]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Collapse
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
| | - Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
| | - Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
| |
Collapse
|
48
|
Gong K, Catana C, Qi J, Li Q. Direct Reconstruction of Linear Parametric Images From Dynamic PET Using Nonlocal Deep Image Prior. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:680-689. [PMID: 34652998 PMCID: PMC8956450 DOI: 10.1109/tmi.2021.3120913] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels is available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long enough. In this work, we propose an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, which is readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint, and was also utilized to construct a kernel layer to perform non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1×1×1 convolution layer. Evaluations based on dynamic datasets of 18F-FDG and 11C-PiB tracers show that the proposed framework can outperform the traditional and the kernel method-based direct reconstruction methods.
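The Patlak model is linear in its parameters, which is what allows it to be embedded as a 1×1×1 convolution layer. As a standalone illustration of that linearity (synthetic curves fit by ordinary least squares; this is not the paper's network, and the input function is invented), the model C_T(t) = Ki·∫0..t Cp dτ + Vb·Cp(t) can be fit per voxel as:

```python
import numpy as np

def fit_patlak(t, cp, ct):
    """Least-squares fit of the linear Patlak model
    C_T(t) = Ki * integral_0..t Cp dτ + Vb * Cp(t)."""
    # Trapezoidal running integral of the plasma input Cp
    integ_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    X = np.stack([integ_cp, cp], axis=1)   # design matrix: [∫Cp, Cp]
    coef, *_ = np.linalg.lstsq(X, ct, rcond=None)
    return coef  # (Ki, Vb)

# Synthetic input function and tissue curve with known parameters
t = np.linspace(0.0, 60.0, 121)    # minutes
cp = t * np.exp(-t / 10.0)         # toy plasma input function
ki_true, vb_true = 0.05, 0.10
integ = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = ki_true * integ + vb_true * cp
ki_hat, vb_hat = fit_patlak(t, cp, ct)
print(round(ki_hat, 3), round(vb_hat, 3))  # prints: 0.05 0.1
```

Because the model is a fixed linear combination of two known basis curves, applying it to every voxel of a dynamic image is exactly a 1×1×1 convolution over the parameter channels.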
Collapse
|
49
|
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. ELECTRONICS 2022. [DOI: 10.3390/electronics11040586] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent works using deep learning methods to solve the CS problem for image reconstruction, including the medical imaging modalities of computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET). We propose a novel framework to unify traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, toward the image prior and toward data consistency, respectively, and any reconstruction algorithm can be decomposed into these two parts. Though deep learning methods can be divided into several categories, they all satisfy the framework. We describe the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. This also indicates that the key to solving the CS problem and its medical applications is how to depict the image prior. Based on the framework, we analyze current deep learning methods and point out some important directions for future research.
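The proposed decomposition into a data-consistency operator and an image-prior operator can be made concrete with a toy alternating scheme: below, a gradient step on the data term plays the data-consistency role and soft thresholding stands in for the image prior (illustrative assumptions, not the paper's formulation).

```python
import numpy as np

def reconstruct(A, y, prior_op, n_iter=500, step=None):
    """Alternate a data-consistency step with an image-prior step.

    Data consistency: gradient step on ||Ax - y||^2.
    Prior: any operator pushing x toward the image prior
    (here soft thresholding, a sparsity-prior stand-in).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the data term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)  # move toward data consistency
        x = prior_op(x)                   # move toward the image prior
    return x

soft = lambda v, t=0.02: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
rng = np.random.default_rng(3)
A = rng.normal(size=(40, 60)) / np.sqrt(40)   # underdetermined system (CS regime)
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -1.0, 0.5]
y = A @ x_true
x_hat = reconstruct(A, y, soft)
print(np.round(x_hat[[5, 17, 42]], 2))
```

In the unifying framework, a learned method simply replaces `prior_op` (and sometimes the data step) with a trained network, which is why depicting the image prior is the crux of the problem.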
Collapse
|
50
|
Ote K, Hashimoto F. Deep-learning-based fast TOF-PET image reconstruction using direction information. Radiol Phys Technol 2022; 15:72-82. [DOI: 10.1007/s12194-022-00652-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 01/26/2022] [Accepted: 01/27/2022] [Indexed: 10/19/2022]
|