1. Wu J, Jiang X, Zhong L, Zheng W, Li X, Lin J, Li Z. Linear diffusion noise boosted deep image prior for unsupervised sparse-view CT reconstruction. Phys Med Biol 2024; 69:165029. PMID: 39119998; DOI: 10.1088/1361-6560/ad69f7.
Abstract
Objective. Deep learning has markedly enhanced the performance of sparse-view computed tomography reconstruction. However, the dependence of these methods on supervised training using high-quality paired datasets, and the necessity for retraining under varied physical acquisition conditions, constrain their generalizability across new imaging contexts and settings.
Approach. To overcome these limitations, we propose an unsupervised approach grounded in the deep image prior framework. Our approach advances beyond the conventional single noise level input by incorporating multi-level linear diffusion noise, significantly mitigating the risk of overfitting. Furthermore, we embed non-local self-similarity as a deep implicit prior within a self-attention network structure, improving the model's capability to identify and utilize repetitive patterns throughout the image. Additionally, leveraging imaging physics, gradient backpropagation is performed between the image domain and projection data space to optimize network weights.
Main Results. Evaluations with both simulated and clinical cases demonstrate our method's effective zero-shot adaptability across various projection views, highlighting its robustness and flexibility. Additionally, our approach effectively eliminates noise and streak artifacts while significantly restoring intricate image details.
Significance. Our method aims to overcome the limitations in current supervised deep learning-based sparse-view CT reconstruction, offering improved generalizability and adaptability without the need for extensive paired training data.
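The multi-level noise input described above can be sketched in a few lines; the linear mixing schedule below is an illustrative assumption, since the abstract does not give the exact parameterization:

```python
import numpy as np

def linear_diffusion_inputs(x, n_levels=4, rng=None):
    """Build multi-level noisy network inputs instead of DIP's single fixed
    noise vector. A linear schedule mixes the image with Gaussian noise:
        z_t = (1 - t/T) * x + (t/T) * eps,  eps ~ N(0, 1),
    so early levels stay close to x and the last level is pure noise.
    (Illustrative schedule, not necessarily the paper's exact one.)"""
    rng = np.random.default_rng(rng)
    T = n_levels
    return [(1 - t / T) * x + (t / T) * rng.standard_normal(x.shape)
            for t in range(1, T + 1)]

# Each z_t would be fed to the DIP network during optimization,
# which the paper reports mitigates overfitting to noise.
inputs = linear_diffusion_inputs(np.ones((8, 8)), n_levels=4, rng=0)
```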
Affiliation(s)
- Jia Wu
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, People's Republic of China
- Xiaoming Jiang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Lisha Zhong
- School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, People's Republic of China
- Wei Zheng
- Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Xinwei Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Jinzhao Lin
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhangyong Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
2. Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Two-step optimization for accelerating deep image prior-based PET image reconstruction. Radiol Phys Technol 2024. PMID: 39096446; DOI: 10.1007/s12194-024-00831-9.
Abstract
Deep learning, particularly convolutional neural networks (CNNs), has advanced positron emission tomography (PET) image reconstruction. However, it requires extensive, high-quality training datasets. Unsupervised learning methods, such as deep image prior (DIP), have shown promise for PET image reconstruction. Although DIP-based PET image reconstruction methods demonstrate superior performance, they involve highly time-consuming calculations. This study proposed a two-step optimization method to accelerate end-to-end DIP-based PET image reconstruction and improve PET image quality. The proposed two-step method comprised a pre-training step using conditional DIP denoising, followed by an end-to-end reconstruction step with fine-tuning. Evaluations using Monte Carlo simulation data demonstrated that the proposed two-step method significantly reduced the computation time and improved the image quality, thereby rendering it a practical and efficient approach for end-to-end DIP-based PET image reconstruction.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
3. Dong S, Shewarega A, Chapiro J, Cai Z, Hyder F, Coman D, Duncan JS. High-resolution extracellular pH imaging of liver cancer with multiparametric MR using Deep Image Prior. NMR Biomed 2024; 37:e5145. PMID: 38488205; DOI: 10.1002/nbm.5145.
Abstract
Noninvasive extracellular pH (pHe) mapping with Biosensor Imaging of Redundant Deviation in Shifts (BIRDS) using MR spectroscopic imaging (MRSI) has been demonstrated on 3T clinical MR scanners at 8 × 8 × 10 mm³ spatial resolution and applied to study various liver cancer treatments. Although pHe imaging at higher resolution can be achieved by extending the acquisition time, a postprocessing method to increase the resolution is preferable, to minimize the duration spent by the subject in the MR scanner. In this work, we propose to improve the spatial resolution of pHe mapping with BIRDS by incorporating anatomical information in the form of multiparametric MRI and using an unsupervised deep-learning technique, Deep Image Prior (DIP). Specifically, we used high-resolution T1, T2, and diffusion-weighted imaging (DWI) MR images of rabbits with VX2 liver tumors as inputs to a U-Net architecture to provide anatomical information. U-Net parameters were optimized to minimize the difference between the output super-resolution image and the experimentally acquired low-resolution pHe image using the mean-absolute error. In this way, the super-resolution pHe image would be consistent with both anatomical MR images and the low-resolution pHe measurement from the scanner. The method was developed based on data from 49 rabbits implanted with VX2 liver tumors. For evaluation, we also acquired high-resolution pHe images from two rabbits, which were used as ground truth. The results indicate a good match between the spatial characteristics of the super-resolution images and the high-resolution ground truth, supported by the low pixelwise absolute error.
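The data-consistency loss described above can be sketched minimally; the block-average downsampling operator is an assumption standing in for the scanner's actual resolution model:

```python
import numpy as np

def downsample(img, f):
    """Block-average downsampling: a simple stand-in for the forward model
    linking the super-resolution estimate to the low-resolution measurement."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def consistency_mae(sr, lr, f):
    """Mean-absolute error between the downsampled U-Net output and the
    acquired low-resolution pHe map, i.e. the data-consistency term
    minimized during DIP optimization."""
    return np.abs(downsample(sr, f) - lr).mean()

lr_phe = np.full((4, 4), 7.0)   # toy 4x4 low-resolution "pHe" map
sr_est = np.full((8, 8), 7.0)   # an 8x8 estimate consistent with it
loss = consistency_mae(sr_est, lr_phe, 2)
```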
Affiliation(s)
- Siyuan Dong
- Department of Electrical Engineering, Yale University, New Haven, Connecticut, USA
- Annabella Shewarega
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Zhuotong Cai
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Fahmeed Hyder
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- Daniel Coman
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
- James S Duncan
- Department of Electrical Engineering, Yale University, New Haven, Connecticut, USA
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut, USA
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, USA
4. Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. PMID: 38941203; DOI: 10.1109/tip.2024.3418347.
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer a way to bring the power of deep learning to this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently with an optimization transfer strategy based on quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes a PET activity image update, a gCT image update, and least-squares neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared to other methods.
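The optimization-transfer idea, minimizing a quadratic surrogate that majorizes the true objective at each iterate, can be demonstrated on a toy one-dimensional problem; this is not the paper's PET/gCT objective, only the surrogate mechanics:

```python
def mm_minimize(y=3.0, x0=0.0, n_iter=50):
    """Minimize f(x) = |x - y| + (x - 1)^2 by optimization transfer.
    The non-smooth |u| term is majorized by the quadratic surrogate
    u^2 / (2|u_t|) + |u_t| / 2, which touches |u| at the current iterate,
    so each closed-form surrogate minimization cannot increase f."""
    f = lambda x: abs(x - y) + (x - 1) ** 2
    x, history = x0, [f(x0)]
    for _ in range(n_iter):
        w = 1.0 / (2.0 * abs(x - y) + 1e-12)  # surrogate curvature at x_t
        x = (w * y + 1.0) / (w + 1.0)         # argmin of w(x-y)^2 + (x-1)^2
        history.append(f(x))
    return x, history

x_star, history = mm_minimize()
# The monotone decrease of `history` mirrors the monotonicity guarantee
# the paper states for its neural optimization transfer algorithm.
```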
5. Liu Q, Tsai YJ, Gallezot JD, Guo X, Chen MK, Pucar D, Young C, Panin V, Casey M, Miao T, Xie H, Chen X, Zhou B, Carson R, Liu C. Population-based deep image prior for dynamic PET denoising: A data-driven approach to improve parametric quantification. Med Image Anal 2024; 95:103180. PMID: 38657423; DOI: 10.1016/j.media.2024.103180.
Abstract
The high noise level of dynamic Positron Emission Tomography (PET) images degrades the quality of parametric images. In this study, we aim to improve the quality and quantitative accuracy of Ki images by utilizing deep learning techniques to reduce the noise in dynamic PET images. We propose a novel denoising technique, Population-based Deep Image Prior (PDIP), which integrates population-based prior information into the optimization process of Deep Image Prior (DIP). Specifically, the population-based prior image is generated from a supervised denoising model that is trained on a prompts-matched static PET dataset comprising 100 clinical studies. The 3D U-Net architecture is employed for both the supervised model and the following DIP optimization process. We evaluated the efficacy of PDIP for noise reduction in 25%-count and 100%-count dynamic PET images from 23 patients by comparing with two other baseline techniques: the Prompts-matched Supervised model (PS) and a conditional DIP (CDIP) model that employs the mean static PET image as the prior. Both the PS and CDIP models show effective noise reduction but result in smoothing and removal of small lesions. In addition, the utilization of a single static image as the prior in the CDIP model also introduces a similar tracer distribution to the denoised dynamic frames, leading to lower Ki in general as well as incorrect Ki in the descending aorta. By contrast, as the proposed PDIP model utilizes intrinsic image features from the dynamic dataset and a large clinical static dataset, it not only achieves comparable noise reduction as the supervised and CDIP models but also improves lesion Ki predictions.
Affiliation(s)
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Yu-Jung Tsai
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Colin Young
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Michael Casey
- Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
- Tianshun Miao
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Richard Carson
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
6. Lee J, Seo H, Lee W, Park H. Unsupervised motion artifact correction of turbo spin-echo MRI using deep image prior. Magn Reson Med 2024; 92:28-42. PMID: 38282279; DOI: 10.1002/mrm.30026.
Abstract
PURPOSE In MRI, motion artifacts can significantly degrade image quality. Motion artifact correction methods using deep neural networks usually require extensive training on large datasets, making them time-consuming and resource-intensive. In this paper, an unsupervised deep learning-based motion artifact correction method for turbo spin-echo MRI is proposed using the deep image prior framework.
THEORY AND METHODS The proposed approach takes advantage of the high impedance to motion artifacts offered by the neural network parameterization to remove motion artifacts in MR images. The framework consists of parameterization of the MR image, automatic spatial transformation, and a motion simulation model. The proposed method synthesizes motion-corrupted images from the motion-corrected images generated by the convolutional neural network, and an optimization process minimizes the objective function between the synthesized images and the acquired images.
RESULTS In a simulation study of 280 slices from 14 subjects, the proposed method significantly increased the averaged structural similarity index measure, by 0.2737 in individual coil images and by 0.4550 in the root-sum-of-squares images. In addition, an ablation study demonstrated the effectiveness of each proposed component in correcting motion artifacts compared to the corrected images produced by the baseline method. Experiments on a real motion dataset showed the method's clinical potential.
CONCLUSION The proposed method exhibited significant quantitative and qualitative improvements in correcting rigid and in-plane motion artifacts in MR images acquired using a turbo spin-echo sequence.
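The motion simulation component, synthesizing corrupted data from a motion-corrected estimate, can be sketched with the Fourier shift theorem. This is a simplified rigid, in-plane translation model; the paper's full model also handles the TSE acquisition order and more general transforms:

```python
import numpy as np

def simulate_motion(img, shift_rows, corrupted_lines):
    """Synthesize a motion-corrupted image: k-space lines acquired after a
    rigid in-plane translation receive the corresponding linear phase ramp
    (Fourier shift theorem); untouched lines keep their original phase."""
    k = np.fft.fft2(img)
    h = img.shape[0]
    ky = np.fft.fftfreq(h)                         # cycles/pixel along rows
    phase = np.exp(-2j * np.pi * ky * shift_rows)  # ramp encoding a row shift
    k[corrupted_lines, :] *= phase[corrupted_lines, None]
    return np.abs(np.fft.ifft2(k))

rng = np.random.default_rng(0)
img = rng.random((8, 8)) + 1.0                     # toy positive image
corrupted = simulate_motion(img, shift_rows=2, corrupted_lines=np.arange(4, 8))
```

Applying the ramp to every line shifts the whole image; applying it to a subset produces the ghosting-like inconsistency that the DIP optimization must explain.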
Affiliation(s)
- Jongyeon Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Hyunseok Seo
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
7. Wang F, Wang R, Qiu H. Low-dose CT reconstruction using dataset-free learning. PLoS One 2024; 19:e0304738. PMID: 38875181; PMCID: PMC11178168; DOI: 10.1371/journal.pone.0304738.
Abstract
Low-dose computed tomography (LDCT) is an ideal alternative to reduce radiation risk in clinical applications. Although supervised-deep-learning-based reconstruction methods have demonstrated superior performance compared to conventional model-driven reconstruction algorithms, they require collecting massive pairs of low-dose and normal-dose CT images for neural network training, which limits their practical application in LDCT imaging. In this paper, we propose an unsupervised, training-data-free learning reconstruction method for LDCT imaging. The proposed method is a post-processing technique that aims to enhance the initial low-quality reconstruction; it reconstructs the high-quality image through neural network training that minimizes the ℓ1-norm distance between the CT measurements and their corresponding simulated sinogram data, as well as the total variation (TV) value of the reconstructed image. Moreover, the proposed method does not require setting weights for the data fidelity and penalty terms. Experimental results on the AAPM challenge data and LoDoPaB-CT data demonstrate that the proposed method effectively suppresses noise and preserves tiny structures, and also show its rapid convergence and low computational cost. The source code is available at https://github.com/linfengyu77/IRLDCT.
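The dataset-free objective described above combines an ℓ1 sinogram-consistency term with TV regularization; a minimal numpy sketch follows. The toy forward projector is an assumption, not the paper's CT system model, and a fixed weight is used here purely for illustration even though the paper reports avoiding hand-tuned weights:

```python
import numpy as np

def total_variation(x):
    """Anisotropic TV: sum of absolute vertical and horizontal differences."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def objective(x, sinogram, forward_project, weight=0.1):
    """l1-norm distance between simulated and measured sinogram data,
    plus the TV of the reconstructed image."""
    return np.abs(forward_project(x) - sinogram).sum() + weight * total_variation(x)

# Toy forward projector: a single parallel-beam view summing along one axis.
project = lambda x: x.sum(axis=0)
img = np.ones((4, 4))
loss = objective(img, project(img), project)  # flat image, perfect data fit
```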
Affiliation(s)
- Feng Wang
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Renfang Wang
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Hong Qiu
- College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
8. Jafaritadi M, Teuho J, Lehtonen E, Klén R, Saraste A, Levin CS. Deep generative denoising networks enhance quality and accuracy of gated cardiac PET data. Ann Nucl Med 2024. PMID: 38842629; DOI: 10.1007/s12149-024-01945-1.
Abstract
BACKGROUND Cardiac positron emission tomography (PET) can visualize and quantify the molecular and physiological pathways of cardiac function. However, cardiac and respiratory motion can introduce blurring that reduces PET image quality and quantitative accuracy. Dual cardiac- and respiratory-gated PET reconstruction can mitigate motion artifacts but increases noise, as only a subset of data is used for each time frame of the cardiac cycle.
AIM The objective of this study is to create a zero-shot image denoising framework using conditional generative adversarial networks (cGANs) to improve image quality and quantitative accuracy in non-gated and dual-gated cardiac PET images.
METHODS Our study included retrospective list-mode data from 40 patients who underwent an 18F-fluorodeoxyglucose (18F-FDG) cardiac PET study. We initially trained and evaluated a 3D cGAN, known as Pix2Pix, on simulated non-gated low-count PET data paired with corresponding full-count target data, and then deployed the model on an unseen test set acquired on the same PET/CT system, including both non-gated and dual-gated PET data.
RESULTS Quantitative analysis demonstrated that the 3D Pix2Pix network architecture achieved significantly (p < 0.05) enhanced image quality and accuracy in both non-gated and gated cardiac PET images. At 5%, 10%, and 15% preserved count statistics, the model increased peak signal-to-noise ratio (PSNR) by 33.7%, 21.2%, and 15.5%, structural similarity index (SSIM) by 7.1%, 3.3%, and 2.2%, and reduced mean absolute error (MAE) by 61.4%, 54.3%, and 49.7%, respectively. When tested on dual-gated PET data, the model consistently reduced noise, irrespective of cardiac/respiratory motion phase, while maintaining image resolution and accuracy. Significant improvements were observed across all gates, including a 34.7% increase in PSNR, a 7.8% improvement in SSIM, and a 60.3% reduction in MAE.
CONCLUSION The findings of this study indicate that dual-gated cardiac PET images, which often have post-reconstruction artifacts potentially affecting diagnostic performance, can be effectively improved using a generative pre-trained denoising network.
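The PSNR metric reported in this study follows the standard definition, which can be stated in a few lines (the `data_range` default is a common convention, assumed here):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB. `data_range` defaults to the
    reference image's dynamic range."""
    if data_range is None:
        data_range = float(reference.max() - reference.min())
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.array([[0.0, 1.0], [0.0, 1.0]])
noisy = ref + 0.1        # uniform error of 0.1 -> MSE 0.01 -> 20 dB
value = psnr(ref, noisy)
```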
Affiliation(s)
| | - Jarmo Teuho
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
| | - Eero Lehtonen
- Turku PET Center, University of Turku, Turku, Finland
| | - Riku Klén
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
| | - Antti Saraste
- Turku PET Center, University of Turku, Turku, Finland
- Turku PET Center, Turku University Hospital, Turku, Finland
- Heart Center, Turku University Hospital, Turku, Finland
| | - Craig S Levin
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Department of Physics, Stanford University, Stanford, CA, USA.
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Department of Bioengineering, Stanford University, Stanford, CA, USA.
| |
9. Jang SI, Pan T, Li Y, Heidari P, Chen J, Li Q, Gong K. Spach Transformer: Spatial and Channel-Wise Transformer Based on Local and Global Self-Attentions for PET Image Denoising. IEEE Trans Med Imaging 2024; 43:2036-2049. PMID: 37995174; PMCID: PMC11111593; DOI: 10.1109/tmi.2023.3336237.
Abstract
Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach to capturing long-range information. However, calculating global MSA for 3D images has a high computational cost. In this work, we propose an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer framework outperforms state-of-the-art deep learning architectures.
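The spatial-versus-channel trade-off the abstract refers to can be sketched with single-head attention; identity projections are used for brevity, whereas the actual Spach Transformer uses learned multi-head projections:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    """Global spatial self-attention over an (N, C) feature map: the
    N x N attention matrix is quadratic in the number of voxels N,
    which is what makes global MSA expensive for 3D volumes."""
    a = softmax(x @ x.T / np.sqrt(x.shape[1]))     # (N, N)
    return a @ x

def channel_attention(x):
    """Channel-wise self-attention: the C x C attention matrix is
    quadratic only in the channel count C, so it stays cheap even
    when N is large."""
    xt = x.T                                       # (C, N)
    a = softmax(xt @ xt.T / np.sqrt(x.shape[0]))   # (C, C)
    return (a @ xt).T

feats = np.random.default_rng(0).random((16, 3))   # N=16 voxels, C=3 channels
```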
10. Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024; 43:2098-2112. PMID: 38241121; DOI: 10.1109/tmi.2024.3356189.
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced to serve as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
11. Wang L, Wang Q, Wang X, Ma Y, Zhang L, Liu M. Triplet-constrained deep hashing for chest X-ray image retrieval in COVID-19 assessment. Neural Netw 2024; 173:106182. PMID: 38387203; DOI: 10.1016/j.neunet.2024.106182.
Abstract
Radiology images of the chest, such as computed tomography scans and X-rays, have been prominently used in computer-aided COVID-19 analysis. Learning-based radiology image retrieval has attracted increasing attention recently, which generally involves image feature extraction and finding matches in extensive image databases based on query images. Many deep hashing methods have been developed for chest radiology image search due to the high efficiency of retrieval using hash codes. However, they often overlook the complex triple associations between images; that is, images belonging to the same category tend to share similar characteristics and vice versa. To this end, we develop a triplet-constrained deep hashing (TCDH) framework for chest radiology image retrieval to facilitate automated analysis of COVID-19. The TCDH consists of two phases: (a) feature extraction and (b) image retrieval. For feature extraction, we have introduced a triplet constraint and an image reconstruction task to enhance the discriminative ability of learned features, and these features are then converted into binary hash codes to capture semantic information. Specifically, the triplet constraint is designed to pull closer samples within the same category and push apart samples from different categories. Additionally, an auxiliary image reconstruction task is employed during feature extraction to help effectively capture anatomical structures of images. For image retrieval, we utilize the learned hash codes to conduct searches for medical images. Extensive experiments on 30,386 chest X-ray images demonstrate the superiority of the proposed method over several state-of-the-art approaches in automated image search. The code is now available online.
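The triplet constraint and hash binarization described above can be sketched as follows; sign thresholding is a common binarization choice and an assumption here, not necessarily the paper's exact scheme:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward a same-category sample and push it away from
    a different-category sample until the gap exceeds `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def to_hash_code(features):
    """Binarize real-valued features into hash bits by sign thresholding."""
    return (features > 0).astype(np.uint8)

a = np.zeros(4)
well_separated = triplet_loss(a, positive=np.zeros(4), negative=2 * np.ones(4))
violating = triplet_loss(a, positive=np.ones(4), negative=np.ones(4))
```

A zero loss means the negative is already at least `margin` farther from the anchor than the positive; otherwise the residual drives the embedding update.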
Affiliation(s)
- Linmin Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Qianqian Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Xiaochuan Wang
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Yunling Ma
- School of Mathematics Science, Liaocheng University, Liaocheng, Shandong, 252000, China
- Limei Zhang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong, 250101, China
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
12. Cheng L, Lyu Z, Liu H, Wu J, Jia C, Wu Y, Ji Y, Jiang N, Ma T, Liu Y. Efficient image reconstruction for a small animal PET system with dual-layer-offset detector design. Med Phys 2024; 51:2772-2787. PMID: 37921396; DOI: 10.1002/mp.16814.
Abstract
BACKGROUND A compact PET/SPECT/CT system, Inliview-3000B, has been developed to provide multi-modality information on small animals for biomedical research. Its PET subsystem employed a dual-layer-offset detector design for depth-of-interaction capability and higher detection efficiency, but the irregular design caused some difficulties in calculating the normalization factors and the sensitivity map. In addition, the relatively large (2 mm) crystal cross-section also posed a challenge to high-resolution image reconstruction. PURPOSE We present an efficient image reconstruction method to achieve high imaging performance for the PET subsystem of Inliview-3000B. METHODS List-mode reconstruction with efficient system modeling was used for PET imaging. We adopted an on-the-fly multi-ray tracing method with random crystal sampling to model the solid angle, crystal penetration, and object attenuation effects, and modified the system response model during each iteration to improve reconstruction performance and computational efficiency. We estimated crystal efficiency with a novel iterative approach that combines measured cylinder phantom data with simulated line-of-response (LOR)-based factors for normalization correction before reconstruction. Because the normalization factors and the sensitivity map must be calculated, we stacked the two crystal layers together and extended the conventional data organization method to index all useful LORs. Simulations and experiments were performed to demonstrate the feasibility and advantage of the proposed method. RESULTS Simulation results showed that the iterative algorithm for crystal efficiency estimation could achieve good accuracy. NEMA image quality phantom studies demonstrated the superiority of random sampling, which achieves good imaging performance with much less computation than traditional uniform sampling.
In the spatial resolution evaluation based on the mini-Derenzo phantom, 1.1 mm hot rods could be identified with the proposed reconstruction method. Reconstructions of two mice and a rat showed good spatial resolution and a high signal-to-noise ratio, and organs with higher uptake could be recognized well. CONCLUSION The results validated the superiority of introducing randomness into reconstruction and demonstrated its reliability for high-performance imaging. The Inliview-3000B PET subsystem with the proposed image reconstruction can provide rich and detailed information on small animals for preclinical research.
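The random crystal sampling idea in this abstract can be illustrated with a toy Monte Carlo average. Here `f(u, v)` is a hypothetical stand-in for a single ray-trace contribution between points on two crystal faces; the actual Inliview-3000B tracer and geometry are not reproduced.

```python
import numpy as np

def mean_ray_value(f, n_samples, seed=0):
    """Estimate the average single-ray contribution between two crystal
    faces by tracing only randomly sampled point pairs, instead of an
    exhaustive uniform grid (the idea behind random crystal sampling).
    f(u, v) is a hypothetical stand-in for one ray-trace result,
    parameterized by normalized positions u, v on the two faces."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_samples)  # random positions on crystal face A
    v = rng.random(n_samples)  # random positions on crystal face B
    return float(np.mean(f(u, v)))
```

A modest number of random rays per line of response already approximates the exhaustive average well, which is consistent with the abstract's claim of much less computation than uniform sampling.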
Affiliation(s)
- Li Cheng: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Zhenlei Lyu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Hui Liu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Jing Wu: Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Chao Jia: Beijing Novel Medical Equipment Ltd, Beijing, China
- Yuanguang Wu: Beijing Novel Medical Equipment Ltd, Beijing, China
- Yingcai Ji: Beijing Novel Medical Equipment Ltd, Beijing, China
- Tianyu Ma: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China
- Yaqiang Liu: Department of Engineering Physics, Tsinghua University, Beijing, China; Key Laboratory of Particle & Radiation Imaging, Ministry of Education (Tsinghua University), Beijing, China

13
Li Y, Feng J, Xiang J, Li Z, Liang D. AIRPORT: A Data Consistency Constrained Deep Temporal Extrapolation Method To Improve Temporal Resolution In Contrast Enhanced CT Imaging. IEEE Trans Med Imaging 2024; 43:1605-1618. [PMID: 38133967] [DOI: 10.1109/tmi.2023.3344712]
Abstract
Typical tomographic image reconstruction methods require that the imaged object is static and stationary during the time window needed to acquire a minimally complete data set. The violation of this requirement leads to temporal-averaging errors in the reconstructed images. For a fixed gantry rotation speed, reducing these errors requires reconstructing images from data acquired over a narrower angular range, i.e., with a higher temporal resolution. However, image reconstruction with a narrower angular range violates the data sufficiency condition, resulting in severe data-insufficiency-induced errors. The purpose of this work is to decouple the trade-off between these two types of errors in contrast-enhanced computed tomography (CT) imaging. We demonstrated that, using the developed data consistency constrained deep temporal extrapolation method (AIRPORT), the entire time-varying imaged object can be accurately reconstructed with 40 frames-per-second temporal resolution, the time window needed to acquire data for a single projection view using a typical C-arm cone-beam CT system. AIRPORT is applicable to general non-sparse imaging tasks using a single short-scan data acquisition.
14
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan
- Andrew J Reader: School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EH, UK
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-Ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-Ku, Chiba 263-8555, Japan

15
Wang S, Liu B, Xie F, Chai L. An iterative reconstruction algorithm for unsupervised PET image. Phys Med Biol 2024; 69:055025. [PMID: 38346340] [DOI: 10.1088/1361-6560/ad2882]
Abstract
Objective.In recent years, convolutional neural networks (CNNs) have shown great potential in positron emission tomography (PET) image reconstruction. However, most of them rely on many low-quality and high-quality reference PET image pairs for training, which are not always available in clinical practice. On the other hand, many works improve the quality of PET image reconstruction by adding explicit regularization or optimizing the network structure, which may lead to complex optimization problems.Approach.In this paper, we develop a novel iterative reconstruction algorithm by integrating the deep image prior (DIP) framework, which needs only the prior information (e.g. MRI) and sinogram data of patients. Specifically, we construct the objective function as a constrained optimization problem and utilize existing PET image reconstruction packages to streamline calculations. Moreover, to further improve both reconstruction quality and speed, we introduce Nesterov acceleration and a restart mechanism in each iteration.Main results.2D experiments on PET data sets based on computer simulations and real patients demonstrate that the proposed algorithm outperforms the existing MLEM-GF, KEM and DIPRecon methods.Significance.Unlike traditional CNN methods, the proposed algorithm does not rely on large data sets, but leverages only intra-patient information. Furthermore, we enhance reconstruction performance by optimizing the iterative algorithm. Notably, the proposed method does not require much modification of the basic algorithm, allowing for easy integration into standard implementations.
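The Nesterov acceleration with restart mentioned in this abstract can be sketched on a generic smooth objective. This is a minimal illustration of the restart mechanism only, not the authors' full constrained PET reconstruction; the quadratic test objective and step size are illustrative choices.

```python
import numpy as np

def nesterov_restart(grad, f, x0, step, n_iter=300):
    """Nesterov's accelerated gradient with a function-value restart:
    whenever the objective increases, the accumulated momentum is
    dropped (t reset to 1), which is the restart idea the abstract
    describes, applied here to a generic smooth objective."""
    x = np.asarray(x0, float).copy()
    y = x.copy()
    t = 1.0
    f_prev = f(x)
    for _ in range(n_iter):
        x_new = y - step * grad(y)                     # gradient step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)    # momentum extrapolation
        if f(x_new) > f_prev:                          # objective went up: restart
            t_new = 1.0
            y = x_new.copy()
        f_prev, x, t = f(x_new), x_new, t_new
    return x
```

On ill-conditioned problems the restart prevents the oscillations that plain Nesterov momentum can exhibit, typically recovering a fast linear rate.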
Affiliation(s)
- Siqi Wang: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Bing Liu: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Furan Xie: Engineering Research Center of Metallurgical Automation and Measurement Technology, Wuhan University of Science and Technology, Wuhan 430081, People's Republic of China
- Li Chai: College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China

16
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective.PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about the risk of radiation exposure and patient comfort. Reducing the radiotracer dosage and the acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. Therefore, it is of interest to improve the quality of low-dose PET images.Approach.A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracts features from the PET and CT images in two separate branches and then fuses the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images.Main results.We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and more faithful tumor depiction, but also performed better in preserving edges and suppressing noise and artifacts.Significance.The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
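The abstract says the loss combines SSIM and MSE, but the exact weighting and the multi-modality structure-promoting term are not specified. The sketch below is therefore a generic SSIM+MSE loss using a global (unwindowed) SSIM; the weight `alpha` and the constants `c1`, `c2` are hypothetical choices, not taken from the paper.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM; real implementations average over
    local Gaussian windows, this version keeps the formula visible."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx * mx + my * my + c1) * (vx + vy + c2)
    return num / den

def ssim_mse_loss(pred, target, alpha=0.5):
    """Combined loss: alpha * MSE + (1 - alpha) * (1 - SSIM).
    alpha is a hypothetical weighting, not from the paper."""
    mse = np.mean((pred - target) ** 2)
    return alpha * mse + (1 - alpha) * (1 - ssim_global(pred, target))
```

Minimizing `1 - SSIM` rewards structural agreement (means, variances, covariance), which complements the pixel-wise MSE term.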
Affiliation(s)
- Dong Wang: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang: Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu: School of Mathematics/S.T. Yau Center of Southeast University, Southeast University, 210096, People's Republic of China; Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang: Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China

17
Hirata K, Watanabe S, Kitagawa Y, Kudo K. A Review of Hypoxia Imaging Using 18F-Fluoromisonidazole Positron Emission Tomography. Methods Mol Biol 2024; 2755:133-140. [PMID: 38319574] [DOI: 10.1007/978-1-0716-3633-6_9]
Abstract
Tumor hypoxia is an essential factor related to malignancy, prognosis, and resistance to treatment. Positron emission tomography (PET) is a modality that visualizes the distribution of radiopharmaceuticals administered into the body. PET imaging with [18F]fluoromisonidazole ([18F]FMISO) identifies hypoxic tissues. Unlike [18F]fluorodeoxyglucose ([18F]FDG)-PET, fasting is not necessary for [18F]FMISO-PET, but the waiting time from injection to image acquisition needs to be relatively long (e.g., 2-4 h). [18F]FMISO-PET images can be displayed on an ordinary commercial viewer on a personal computer (PC). While visual assessment is fundamental, various quantitative indices such as tumor-to-muscle ratio have also been proposed. Several novel hypoxia tracers have been invented to compensate for the limitations of [18F]FMISO.
Affiliation(s)
- Kenji Hirata: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan; Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Shiro Watanabe: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan; Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo, Japan
- Yoshimasa Kitagawa: Oral Diagnosis and Medicine, Department of Oral Pathobiological Science, Graduate School of Dental Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Kohsuke Kudo: Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Japan; Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan; Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan

18
Zheng Y, Frame E, Caravaca J, Gullberg GT, Vetter K, Seo Y. A generalization of the maximum likelihood expectation maximization (MLEM) method: Masked-MLEM. Phys Med Biol 2023; 68. [PMID: 37918026] [PMCID: PMC10819675] [DOI: 10.1088/1361-6560/ad0900]
Abstract
Objective.In our previous work on image reconstruction for single-layer collimatorless scintigraphy, we developed the min-min weighted robust least squares (WRLS) optimization algorithm to address the challenge of reconstructing images when both the system matrix and the projection data are uncertain. Whereas the WRLS algorithm has been successful in two-dimensional (2D) reconstruction, expanding it to three-dimensional (3D) reconstruction is difficult since the WRLS optimization problem is neither smooth nor strongly-convex. To overcome these difficulties and achieve robust image reconstruction in the presence of system uncertainties and projection noise, we propose a generalized iterative method based on the maximum likelihood expectation maximization (MLEM) algorithm, hereinafter referred to as the Masked-MLEM algorithm.Approach.In the Masked-MLEM algorithm, only selected subsets ('masks') from the system matrix and the projection contribute to the image update to satisfy the constraints imposed by the system uncertainties. We validate the Masked-MLEM algorithm and compare it to the standard MLEM algorithm using experimental data obtained from both collimated and uncollimated imaging instruments, including parallel-hole collimated SPECT, 2D collimatorless scintigraphy, and 3D collimatorless tomography. Additionally, we conduct comprehensive Monte Carlo simulations for 3D collimatorless tomography to further validate the effectiveness of the Masked-MLEM algorithm in handling different levels of system uncertainties.Main results.The Masked-MLEM and standard MLEM reconstructions are similar in cases with negligible system uncertainties, whereas the Masked-MLEM algorithm outperforms the standard MLEM algorithm when the system matrix is an approximation. 
Importantly, the Masked-MLEM algorithm ensures reliable image reconstruction across varying levels of system uncertainties.Significance.With a good choice of system uncertainty and without requiring accurate knowledge of the actual system matrix, the Masked-MLEM algorithm yields more robust image reconstruction than the standard MLEM algorithm, effectively reducing the likelihood of erroneously reconstructing higher activities in regions without radioactive sources.
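The standard MLEM update, and the masking idea described in this abstract, can be sketched in a few lines. The binary per-LOR `mask` below is a simplification of the authors' subset selection, and the tiny system matrix in the usage note is illustrative only, not a real scanner model.

```python
import numpy as np

def masked_mlem(A, y, mask=None, n_iter=200):
    """MLEM with an optional per-LOR binary mask: rows of the system
    matrix A with mask == 0 are excluded from both the sensitivity
    image and the update, so only the selected ('trusted') subset of
    LORs drives the reconstruction; mask=None reduces to standard MLEM.
    Update: x <- x / (A^T m) * A^T (m * y / (A x))."""
    m = np.ones_like(y, dtype=float) if mask is None else mask.astype(float)
    x = np.ones(A.shape[1])
    sens = A.T @ m                        # sensitivity image over kept LORs
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection, guarded
        x = x / np.maximum(sens, 1e-12) * (A.T @ (m * y / proj))
    return x
```

For noise-free data from a well-determined toy system, e.g. `A = [[1,0],[0,1],[1,1]]` with `y = A @ [2,3]`, the iterates converge to the true activities; masking out the third LOR leaves the problem identifiable and the answer unchanged.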
Affiliation(s)
- Yifan Zheng: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA; Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
- Emily Frame: Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA
- Javier Caravaca: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Grant T. Gullberg: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
- Kai Vetter: Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA; Applied Nuclear Physics Group, Lawrence Berkeley National Laboratory, Berkeley, CA 94502, USA
- Youngho Seo: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA; Department of Nuclear Engineering, University of California, Berkeley, CA 94720, USA

19
Hellström M, Löfstedt T, Garpebring A. Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors. Magn Reson Med 2023; 90:2557-2571. [PMID: 37582257] [DOI: 10.1002/mrm.29823]
Abstract
PURPOSE To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on deep image priors. METHODS We extend the concept of denoising with the deep image prior (DIP) to parameter mapping by treating the output of an image-generating network as a parametrization of tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. RESULTS We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. CONCLUSION DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement, since DIP methods do not use network training data. Although time-consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
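As a concrete example of one "signal equation block", variable flip angle T1 mapping uses the standard SPGR model S = M0·sin(α)·(1−E1)/(1−E1·cos(α)) with E1 = exp(−TR/T1). The conventional linearized (DESPOT1-style) fit below recovers T1 without any neural network; it is the kind of baseline estimate the DIP approach then denoises. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr):
    """Linearized variable flip angle (DESPOT1) T1 fit:
    S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1), with E1 = exp(-TR/T1),
    so a straight-line fit of S/sin(a) vs S/tan(a) gives E1 from the
    slope, and T1 = -TR / ln(E1)."""
    a = np.deg2rad(np.asarray(flip_deg, float))
    s = np.asarray(signals, float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, intercept = np.polyfit(x, y, 1)   # [slope, intercept]
    t1 = -tr / np.log(slope)
    m0 = intercept / (1.0 - slope)
    return t1, m0
```

For noiseless SPGR signals the linearization is exact, so the fit returns the ground-truth T1 and M0 up to floating-point precision.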
Affiliation(s)
- Max Hellström: Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tommy Löfstedt: Department of Radiation Sciences, Umeå University, Umeå, Sweden; Department of Computing Science, Umeå University, Umeå, Sweden

20
Kaviani S, Sanaat A, Mokri M, Cohalan C, Carrier JF. Image reconstruction using UNET-transformer network for fast and low-dose PET scans. Comput Med Imaging Graph 2023; 110:102315. [PMID: 38006648] [DOI: 10.1016/j.compmedimag.2023.102315]
Abstract
INTRODUCTION Low-dose and fast PET imaging (low-count PET) play a significant role in enhancing patient safety, healthcare efficiency, and patient comfort during medical imaging procedures. To achieve high-quality images with low-count PET scans, effective reconstruction models are crucial for denoising and enhancing image quality. The main goal of this paper is to develop an effective and accurate deep learning-based method for reconstructing low-count PET images, which is a challenging problem due to the limited amount of available data and the high level of noise in the acquired images. The proposed method aims to improve the quality of reconstructed PET images while preserving important features, such as edges and small details, by combining the strengths of UNET and Transformer networks. MATERIAL AND METHODS The proposed TrUNET-MAPEM model integrates a residual UNET-transformer regularizer into the unrolled maximum a posteriori expectation maximization (MAPEM) algorithm for PET image reconstruction. A loss function based on a combination of structural similarity index (SSIM) and mean squared error (MSE) is utilized to evaluate the accuracy of the reconstructed images. The simulated dataset was generated using the Brainweb phantom, while the real patient dataset was acquired using a Siemens Biograph mMR PET scanner. We also implemented state-of-the-art methods for comparison purposes: OSEM, MAPOSEM, and supervised learning using 3D-UNET network. The reconstructed images are compared to ground truth images using metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and relative root mean square error (rRMSE) to quantitatively evaluate the accuracy of the reconstructed images. RESULTS Our proposed TrUNET-MAPEM approach was evaluated using both simulated and real patient data. For the patient data, our model achieved an average PSNR of 33.72 dB, an average SSIM of 0.955, and an average rRMSE of 0.39. 
These results outperformed other methods, which had average PSNRs of 36.89 dB, 34.12 dB, and 33.52 dB, average SSIMs of 0.944, 0.947, and 0.951, and average rRMSEs of 0.59, 0.49, and 0.42. For the simulated data, our model achieved an average PSNR of 31.23 dB, an average SSIM of 0.95, and an average rRMSE of 0.55. These results also outperformed other state-of-the-art methods, such as OSEM, MAPOSEM, and 3DUNET-MAPEM. The model demonstrates the potential for clinical use by successfully reconstructing smooth images while preserving edges. The comparison with other methods demonstrates the superiority of our approach, as it outperforms all other methods on all three metrics. CONCLUSION The proposed TrUNET-MAPEM model presents a significant advancement in the field of low-count PET image reconstruction. The results demonstrate the potential for clinical use, as the model can produce images with reduced noise levels and better edge preservation than other reconstruction and post-processing algorithms. The proposed approach may have important clinical applications in the early detection and diagnosis of various diseases.
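The evaluation metrics quoted in this abstract are standard and easy to reproduce. The rRMSE normalization shown below is one common convention; the abstract does not state the paper's exact definition, so treat it as an assumption.

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum of the reference image."""
    ref = np.asarray(ref, float)
    mse = np.mean((ref - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def rrmse(ref, img):
    """Relative RMSE: RMS error normalized by the RMS of the reference
    (one common convention; definitions vary between papers)."""
    ref = np.asarray(ref, float)
    err = np.sqrt(np.mean((ref - np.asarray(img, float)) ** 2))
    return err / np.sqrt(np.mean(ref ** 2))
```

For example, a uniform reference of value 2 with a constant error of 0.2 gives a PSNR of 20 dB and an rRMSE of 0.1.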
Affiliation(s)
- Sanaz Kaviani: Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Mersede Mokri: Faculty of Medicine, University of Montreal, Montreal, Canada; University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada
- Claire Cohalan: University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics and Biomedical Engineering, University of Montreal Hospital Centre, Montreal, Canada
- Jean-Francois Carrier: University of Montreal Hospital Research Centre (CRCHUM), Montreal, Canada; Department of Physics, University of Montreal, Montreal, QC, Canada; Department of Radiation Oncology, University of Montreal Hospital Centre (CHUM), Montreal, Canada

21
Hellwig D, Hellwig NC, Boehner S, Fuchs T, Fischer R, Schmidt D. Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions. Nuklearmedizin 2023; 62:334-342. [PMID: 37995706] [PMCID: PMC10689088] [DOI: 10.1055/a-2198-0358]
Abstract
Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques, such as filtered backprojection and iterative algorithms, are powerful but face limitations. PET IR can be seen as an image-to-image translation task. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to build mutual understanding between nuclear medicine professionals and AI researchers. We outline the fundamentals of PET imaging as well as the state of the art in AI-based PET IR, with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include the availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potential. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
Affiliation(s)
- Dirk Hellwig: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany; Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany; Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Nils Constantin Hellwig: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany; Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Steven Boehner: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany; Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany; Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Timo Fuchs: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany; Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany; Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Regina Fischer: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany; Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany; Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Daniel Schmidt: Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany

22
Bollack A, Pemberton HG, Collij LE, Markiewicz P, Cash DM, Farrar G, Barkhof F. Longitudinal amyloid and tau PET imaging in Alzheimer's disease: A systematic review of methodologies and factors affecting quantification. Alzheimers Dement 2023; 19:5232-5252. [PMID: 37303269] [DOI: 10.1002/alz.13158]
Abstract
Deposition of amyloid and tau pathology can be quantified in vivo using positron emission tomography (PET). Accurate longitudinal measurements of accumulation from these images are critical for characterizing the start and spread of the disease. However, these measurements are challenging; precision and accuracy can be affected substantially by various sources of errors and variability. This review, supported by a systematic search of the literature, summarizes the current design and methodologies of longitudinal PET studies. Intrinsic, biological causes of variability of the Alzheimer's disease (AD) protein load over time are then detailed. Technical factors contributing to longitudinal PET measurement uncertainty are highlighted, followed by suggestions for mitigating these factors, including possible techniques that leverage shared information between serial scans. Controlling for intrinsic variability and reducing measurement uncertainty in longitudinal PET pipelines will provide more accurate and precise markers of disease evolution, improve clinical trial design, and aid therapy response monitoring.
Collapse
Affiliation(s)
- Ariane Bollack: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- Hugh G Pemberton: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK; GE Healthcare, Amersham, UK; UCL Queen Square Institute of Neurology, London, UK
- Lyduine E Collij: Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands; Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Pawel Markiewicz: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- David M Cash: UCL Queen Square Institute of Neurology, London, UK; UK Dementia Research Institute at University College London, London, UK
- Frederik Barkhof: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK; UCL Queen Square Institute of Neurology, London, UK; Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands
23
Shinohara H, Hori K, Hashimoto T. Deep learning study on the mechanism of edge artifacts in point spread function reconstruction for numerical brain images. Ann Nucl Med 2023; 37:596-604. [PMID: 37610591 DOI: 10.1007/s12149-023-01862-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/26/2023] [Accepted: 08/07/2023] [Indexed: 08/24/2023]
Abstract
OBJECTIVE Non-blinded image deblurring with deep learning was performed on blurred numerical brain images without point spread function (PSF) reconstruction to obtain images free of edge artifacts (EA). This study uses numerical simulation to investigate the mechanism of EA in PSF reconstruction based on the spatial frequency characteristics of EA-free images. METHODS In 256 × 256 matrix brain images, the signal values of gray matter (GM), white matter, and cerebrospinal fluid were set to 1, 0.25, and 0.05, respectively. We assumed ideal projection data of a two-dimensional (2D) parallel beam with no degradation factors other than detector response blur, so that the EA produced by the PSF reconstruction algorithm from blurred projection data could be characterized precisely. The detector response was assumed to be a shift-invariant, one-dimensional (1D) Gaussian function with 2-5 mm full width at half maximum (FWHM). Images without PSF reconstruction (non-PSF), with PSF reconstruction but no regularization (PSF), and with regularization by the relative difference function (PSF-RD) were generated by ordered subset expectation maximization (OSEM). For non-PSF, image deblurring with a deep image prior (DIP) was applied using a 2D Gaussian function with 2-5 mm FWHM. The 1D object-specific modulation transfer function (1D-OMTF), the ratio of the 1D amplitude spectra of the reconstructed and original images, was used as the index of spatial frequency characteristics. RESULTS When the detector response exceeded 3 mm FWHM, EA was observed in PSF images at GM borders and in narrow GM. No remarkable EA was observed with DIP, and the FWHM estimated from the recovery coefficient for the deblurred non-PSF image at 5 mm FWHM was reduced to 3 mm or less. PSF at 5 mm FWHM showed higher spatial frequency characteristics than DIP up to around 2.2 cycles/cm but fell below it beyond 3 cycles/cm. PSF-RD showed almost the same spatial frequency characteristics as DIP above 3 cycles/cm but was inferior below 3 cycles/cm; overall, PSF-RD had lower spatial resolution than DIP. CONCLUSIONS Unlike DIP, PSF lacks high-frequency components around the Nyquist frequency, generating EA. PSF-RD mitigates EA but simultaneously suppresses the signal, diminishing spatial resolution.
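As an editorial aside, the 1D-OMTF metric used in this study is straightforward to compute. The sketch below is our own illustration, not the authors' code; the ratio direction (reconstructed over original amplitude spectrum) and the toy rectangular profile are assumptions.

```python
import numpy as np

def omtf_1d(original, reconstructed, eps=1e-12):
    """Object-specific MTF: ratio of 1D amplitude spectra
    (reconstructed over original)."""
    return np.abs(np.fft.rfft(reconstructed)) / (np.abs(np.fft.rfft(original)) + eps)

# Example: a rectangular "gray matter" profile blurred by a Gaussian
# detector response with 5-pixel FWHM (no PSF recovery applied).
x = np.zeros(256)
x[100:141] = 1.0                        # 41-pixel object profile
sigma = 5.0 / 2.355                     # FWHM -> Gaussian sigma
t = np.arange(-25, 26)
kernel = np.exp(-t**2 / (2 * sigma**2))
kernel /= kernel.sum()
blurred = np.convolve(x, kernel, mode="same")

mtf = omtf_1d(x, blurred)
# mtf[0] is ~1 (total signal preserved); values fall off with frequency,
# which is the high-frequency loss the abstract associates with EA.
```

Comparing such curves for PSF, PSF-RD, and DIP reconstructions against the original object is exactly the kind of analysis the abstract describes.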
Affiliation(s)
- Hiroyuki Shinohara: Faculty of Health Sciences, Tokyo Metropolitan University, 7-2-10, Higasi-ogu, Arakawa-ku, Tokyo, 116-8551, Japan; Department of Radiology, Showa University Fujigaoka Hospital, 1-30, Fujigaoka, Yokohama-shi, 227-8501, Japan
- Kensuke Hori: Department of Radiological Technology, Faculty of Health Science, Juntendo University, 1-5-32, Yushima, Bunkyo-ku, Tokyo, 113-0034, Japan
- Takeyuki Hashimoto: Department of Radiological Technology, Faculty of Health Science, Kyorin University, 5-4-1 Shimorenjaku, Mitaka-shi, Tokyo, 181-8612, Japan
24
Ye S, Shen L, Islam MT, Xing L. Super-resolution biomedical imaging via reference-free statistical implicit neural representation. Phys Med Biol 2023; 68:10.1088/1361-6560/acfdf1. [PMID: 37757838 PMCID: PMC10615136 DOI: 10.1088/1361-6560/acfdf1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/24/2023] [Accepted: 09/27/2023] [Indexed: 09/29/2023]
Abstract
Objective. Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large amounts of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) image(s), to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron, whose inputs are image spatial coordinates and outputs are corresponding pixel intensities. The trained INR not only constrains functional smoothness but also allows an arbitrary scale in SR imaging. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including computed tomography (CT), magnetic resonance imaging (MRI), fluorescence microscopy, and ultrasound images, across different SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each of the SR imaging tasks to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for numerous biomedical SR applications that lack HR reference images.
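The coordinate-based representation described in this abstract can be illustrated compactly. In the sketch below (our illustration, not the paper's code), a random Fourier feature encoding plus a closed-form linear fit stands in for the full trained MLP; the key property shown is that the fitted continuous function can be resampled on a denser grid for an arbitrary SR scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-resolution "image" to be represented as a continuous function.
lr_img = np.fromfunction(lambda i, j: np.sin(0.7 * i) + np.cos(0.5 * j), (8, 8))

# Random Fourier feature encoding of (y, x) coordinates in [0, 1]^2
# (a common INR input encoding; the exact encoding is our choice).
B = rng.normal(scale=2.0, size=(2, 64))

def encode(coords):
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

def grid(h, w):
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    return np.stack([ys.ravel(), xs.ravel()], axis=1)

# Fit the representation to the observed LR pixels. For brevity we use
# a closed-form linear fit on random features instead of training a
# full MLP as the paper does.
X = encode(grid(8, 8))                          # (64 pixels, 128 features)
w, *_ = np.linalg.lstsq(X, lr_img.ravel(), rcond=None)
train_mse = float(np.mean((X @ w - lr_img.ravel()) ** 2))

# The continuous representation can now be queried at any coordinates,
# here a 2x denser grid (the "arbitrary scale" property).
sr_img = (encode(grid(16, 16)) @ w).reshape(16, 16)
```

Querying `encode(grid(32, 32)) @ w` would likewise give a 4× grid from the same fitted function, with no retraining.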
Affiliation(s)
- Siqi Ye: Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Liyue Shen: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, United States of America
- Md Tauhidul Islam: Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Lei Xing: Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
25
Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023; 96:20230292. [PMID: 37486607 PMCID: PMC10546435 DOI: 10.1259/bjr.20230292] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 03/28/2023] [Revised: 06/06/2023] [Accepted: 06/20/2023] [Indexed: 07/25/2023] Open
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total-body PET) is helping, there is always a need to improve reconstructed image quality due to the time- and count-limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods which rely on supervised learning from reference data, (ii) iterative (unrolled) methods which combine our physics and statistical models with AI learning from data, and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
Affiliation(s)
- Andrew J Reader: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Bolin Pan: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
26
Farag A, Huang J, Kohan A, Mirshahvalad SA, Basso Dias A, Fenchel M, Metser U, Veit-Haibach P. Evaluation of MR anatomically-guided PET reconstruction using a convolutional neural network in PSMA patients. Phys Med Biol 2023; 68:185014. [PMID: 37625418 DOI: 10.1088/1361-6560/acf439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2023] [Accepted: 08/25/2023] [Indexed: 08/27/2023]
Abstract
Background. Recently, approaches have utilized the superior anatomical information provided by magnetic resonance imaging (MRI) to guide the reconstruction of positron emission tomography (PET). One such approach is Bowsher's prior, which has lately been accelerated with a convolutional neural network (CNN) to reconstruct MR-guided PET in the image domain for routine clinical imaging. Two differently trained Bowsher-CNN methods (B-CNN0 and B-CNN) have been trained and tested on brain PET/MR images with non-PSMA tracers but have not yet been evaluated in other anatomical regions. Methods. A NEMA phantom with five of its six spheres filled with the same calibrated concentration of 18F-DCFPyL-PSMA, and thirty-two patients (mean age 64 ± 7 years) with biopsy-confirmed prostate cancer (PCa), were used in this study. Reconstructions with each of the two available Bowsher-CNN methods were performed on the conventional MR-based attenuation correction (MRAC) and T1-MR images in the image domain. Detectable volume of the spheres and tumors, relative contrast recovery (CR), and background variation (BV) were measured for the MRAC and Bowsher-CNN images, and qualitative assessment was conducted by two experienced readers ranking image sharpness and quality. Results. In the phantom study, B-CNN produced 12.7% better CR than conventional reconstruction. Detectability of small sphere volumes (<1.8 ml) improved by nearly 13% from MRAC to B-CNN, while measured activity was 8% higher than the ground truth. The signal-to-noise ratio, CR, and BV were significantly improved (p < 0.05) in B-CNN images of the tumor. The qualitative analysis determined that tumor sharpness was excellent in 76% of the PET images reconstructed with the B-CNN method, compared to conventional reconstruction. Conclusions. Applying the MR-guided B-CNN in clinical prostate PET/MR imaging improves some quantitative as well as qualitative imaging measures, and the improvements measured in the phantom translate clearly into the clinical application.
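For reference, the contrast recovery and background variation figures reported in phantom studies like this one follow NEMA-style definitions, which can be sketched as below. The exact formulas and the ROI statistics are our assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def contrast_recovery(sphere_mean, bkg_mean, true_ratio):
    """Percent contrast recovery for a hot sphere, NEMA-style:
    (measured ratio - 1) / (true ratio - 1) * 100."""
    return (sphere_mean / bkg_mean - 1.0) / (true_ratio - 1.0) * 100.0

def background_variation(bkg_roi_means):
    """Percent background variation: standard deviation of the
    background ROI means over their average."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Example with made-up ROI statistics for a 4:1 sphere-to-background
# activity ratio.
cr = contrast_recovery(sphere_mean=3.1, bkg_mean=1.0, true_ratio=4.0)
bv = background_variation([0.98, 1.02, 1.00, 0.99, 1.01])
```

With these hypothetical numbers, a sphere measuring 3.1 against a unit background under a true 4:1 ratio yields 70% contrast recovery; higher CR at equal or lower BV is the improvement direction reported for B-CNN.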
Affiliation(s)
- Adam Farag: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Jin Huang: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Andres Kohan: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Seyed Ali Mirshahvalad: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Adriano Basso Dias: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Ur Metser: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
- Patrick Veit-Haibach: Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
27
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384 PMCID: PMC10495310 DOI: 10.1186/s40658-023-00569-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 04/14/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to the more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review surveys the literature on attenuation correction approaches in PET-MR, organised by these categories. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
Affiliation(s)
- Georgios Krokos: School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn: School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn: School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden: School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
28
Li J, Xi C, Dai H, Wang J, Lv Y, Zhang P, Zhao J. Enhanced PET imaging using progressive conditional deep image prior. Phys Med Biol 2023; 68:175047. [PMID: 37582392 DOI: 10.1088/1361-6560/acf091] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 11/12/2022] [Accepted: 08/15/2023] [Indexed: 08/17/2023]
Abstract
Objective. Unsupervised learning-based methods have been proven to be an effective way to improve the image quality of positron emission tomography (PET) images when a large dataset is not available. However, when the gap between the input image and the target PET image is large, direct unsupervised learning can be challenging and easily lead to reduced lesion detectability. We aim to develop a new unsupervised learning method to improve lesion detectability in patient studies. Approach. We applied the deep progressive learning strategy to bridge the gap between the input image and the target image. The one-step unsupervised learning is decomposed into two unsupervised learning steps. The input image of the first network is an anatomical image and the input image of the second network is a PET image with a low noise level. The output of the first network is also used as the prior image to generate the target image of the second network by an iterative reconstruction method. Results. The performance of the proposed method was evaluated through phantom and patient studies and compared with non-deep learning, supervised learning and unsupervised learning methods. The results showed that the proposed method was superior to non-deep learning and unsupervised methods, and was comparable to the supervised method. Significance. A progressive unsupervised learning method was proposed, which can improve image noise performance and lesion detectability.
Affiliation(s)
- Jinming Li: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi: United Imaging Healthcare, Shanghai, People's Republic of China
- Houjiao Dai: United Imaging Healthcare, Shanghai, People's Republic of China
- Jing Wang: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, Shaanxi, People's Republic of China
- Yang Lv: United Imaging Healthcare, Shanghai, People's Republic of China
- Puming Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
29
Hashimoto F, Onishi Y, Ote K, Tashima H, Yamaya T. Fully 3D implementation of the end-to-end deep image prior-based PET image reconstruction using block iterative algorithm. Phys Med Biol 2023; 68:155009. [PMID: 37406637 DOI: 10.1088/1361-6560/ace49c] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 03/21/2023] [Accepted: 07/05/2023] [Indexed: 07/07/2023]
Abstract
Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction approach that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into a loss function. Approach. A practical implementation of fully 3D PET image reconstruction has so far been infeasible because of graphics processing unit memory limitations. Consequently, we modify the DIP optimization to a block iteration and sequential learning of an ordered sequence of block sinograms. Furthermore, the relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated our proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with the maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved the PET image quality by reducing statistical noise and better preserved the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicated that the proposed method could produce high-quality images without a prior training dataset. Thus, the proposed method could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
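The core of the end-to-end formulation, optimising the generator's weights against the measured sinogram through a forward-projection model, can be sketched in a few lines of numpy. This toy uses a crude two-view projector (row and column sums) and a tiny dense "network" in place of the paper's 3D CNN, with no block iteration or RDP term; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Ground-truth activity image and a toy 2-view forward projector
# (row sums and column sums stand in for a real system matrix).
x_true = np.zeros((n, n)); x_true[2:6, 3:7] = 1.0
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0          # view 1: row sums
    A[n + i, i::n] = 1.0                   # view 2: column sums
sino = A @ x_true.ravel()                  # "measured" projections

# DIP generator: a small fixed-input network whose parameters are the
# only unknowns. The loss compares *forward-projected* output with the
# measured sinogram; no reference image is ever used.
z = rng.normal(size=32)
W1 = 0.1 * rng.normal(size=(64, 32)); b1 = np.zeros(64)
W2 = 0.1 * rng.normal(size=(n * n, 64)); b2 = np.zeros(n * n)

def forward():
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2, h

out0, _ = forward()
loss_init = float(np.sum((A @ out0 - sino) ** 2))

lr = 1e-4
for _ in range(3000):
    out, h = forward()
    r = A @ out - sino                     # data-consistency residual
    g_out = 2 * A.T @ r                    # gradient w.r.t. the image
    dh = (W2.T @ g_out) * (1 - h ** 2)     # backprop through tanh
    W2 -= lr * np.outer(g_out, h); b2 -= lr * g_out
    W1 -= lr * np.outer(dh, z);    b1 -= lr * dh

out, _ = forward()
loss_final = float(np.sum((A @ out - sino) ** 2))
```

The sinogram loss drops substantially while only network parameters are updated, which is the mechanism the abstract describes; the paper's block iteration simply applies this update over one subset of sinogram planes at a time to fit in GPU memory.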
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-ku, Chiba, 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba, 263-8555, Japan
30
Liao S, Mo Z, Zeng M, Wu J, Gu Y, Li G, Quan G, Lv Y, Liu L, Yang C, Wang X, Huang X, Zhang Y, Cao W, Dong Y, Wei Y, Zhou Q, Xiao Y, Zhan Y, Zhou XS, Shi F, Shen D. Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction. Cell Rep Med 2023; 4:101119. [PMID: 37467726 PMCID: PMC10394257 DOI: 10.1016/j.xcrm.2023.101119] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 10/23/2022] [Revised: 05/16/2023] [Accepted: 06/19/2023] [Indexed: 07/21/2023]
Abstract
Fast and low-dose reconstructions of medical images are highly desired in routine clinical practice. We propose a hybrid deep-learning and iterative reconstruction (hybrid DL-IR) framework and apply it for fast magnetic resonance imaging (MRI), fast positron emission tomography (PET), and low-dose computed tomography (CT) image generation tasks. First, in a retrospective MRI study (6,066 cases), we demonstrate its capability of handling 3- to 10-fold under-sampled MR data, enabling organ-level coverage with only 10- to 100-s scan time; second, a low-dose CT study (142 cases) shows that our framework can successfully alleviate the noise and streak artifacts in scans performed with only 10% radiation dose (0.61 mGy); and last, a fast whole-body PET study (131 cases) allows us to faithfully reconstruct tumor-induced lesions, including small ones (<4 mm), from 2- to 4-fold-accelerated PET acquisition (30-60 s/bp). This study offers a promising avenue for accurate and high-quality image reconstruction with broad clinical value.
Affiliation(s)
- Shu Liao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Zhanhao Mo: Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Mengsu Zeng: Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Jiaojiao Wu: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yuning Gu: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Guobin Li: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Guotao Quan: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yang Lv: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Lin Liu: Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Chun Yang: Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Xinglie Wang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiaoqian Huang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yang Zhang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Wenjing Cao: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yun Dong: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Ying Wei: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Qing Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yongqin Xiao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yiqiang Zhan: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiang Sean Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 200122, China
31
Cui ZX, Jia S, Cao C, Zhu Q, Liu C, Qiu Z, Liu Y, Cheng J, Wang H, Zhu Y, Liang D. K-UNN: k-space interpolation with untrained neural network. Med Image Anal 2023; 88:102877. [PMID: 37399681 DOI: 10.1016/j.media.2023.102877] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 08/11/2022] [Revised: 05/24/2023] [Accepted: 06/22/2023] [Indexed: 07/05/2023]
Abstract
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully sampled training data. However, the existing UNN-based approaches lack the modeling of physical priors, resulting in poor performance in some common scenarios (e.g., partial Fourier (PF) and regular sampling) and a lack of theoretical guarantees for reconstruction accuracy. To bridge this gap, we propose a safeguarded k-space interpolation method for MRI using a specially designed UNN with a tripled architecture driven by three physical priors of the MR images (or k-space data): transform sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees tight accuracy bounds for the interpolated k-space data. Finally, ablation experiments show that the proposed method characterizes the physical priors of MR images well. Additionally, experiments show that the proposed method consistently outperforms traditional parallel imaging methods and existing UNNs, and is even competitive against supervised-trained deep learning methods in PF and regular undersampling reconstruction.
Affiliation(s)
- Zhuo-Xu Cui: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chentao Cao: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilang Qiu: Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Yuanyuan Liu: National Innovation Center for Advanced Medical Devices, Shenzhen, Guangdong, China
- Jing Cheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Pazhou Lab, Guangzhou, Guangdong, China
32
Qayyum A, Ilahi I, Shamshad F, Boussaid F, Bennamoun M, Qadir J. Untrained Neural Network Priors for Inverse Imaging Problems: A Survey. IEEE Trans Pattern Anal Mach Intell 2023; 45:6511-6536. [PMID: 36063506 DOI: 10.1109/tpami.2022.3204527] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Indexed: 06/15/2023]
Abstract
In recent years, advances in machine learning (ML), in particular deep learning (DL), have gained considerable momentum in solving inverse imaging problems, often surpassing the performance of hand-crafted approaches. Traditionally, analytical methods have been used to solve inverse imaging problems such as image restoration, inpainting, and super-resolution. Unlike analytical methods, for which the problem is explicitly defined and the domain knowledge is carefully engineered into the solution, DL models do not benefit from such prior knowledge and instead use large datasets to predict an unknown solution to the inverse problem. Recently, a new paradigm of training deep models using a single image, termed the untrained neural network prior (UNNP), has been proposed to solve a variety of inverse tasks, e.g., restoration and inpainting. Since then, many researchers have proposed various applications and variants of UNNP. In this paper, we present a comprehensive review of such studies and various UNNP applications for different tasks, and highlight open problems that require further research.
33. Rajagopal A, Natsuaki Y, Wangerin K, Hamdi M, An H, Sunderland JJ, Laforest R, Kinahan PE, Larson PEZ, Hope TA. Synthetic PET via Domain Translation of 3-D MRI. IEEE Trans Radiat Plasma Med Sci 2023; 7:333-343. [PMID: 37396797] [PMCID: PMC10311993] [DOI: 10.1109/trpms.2022.3223275]
Abstract
Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development, without the need for acquiring hundreds of patient exams, in this article we demonstrate a deep learning technique to generate synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 18F-FDG-PET/MRI exams to train a 3-D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to generate realistic uptake across a large dynamic range and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET (sPET) time-of-flight (ToF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiologic 18F-FDG uptake, e.g., high uptake localized to the brain and bladder, as well as uptake in liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this sPET data can be used interchangeably with real PET data for the PET quantification task of comparing CTAC and MRAC methods, achieving ≤ 7.6% error in mean-SUV compared to using real data. These results together show that the proposed sPET data pipeline can be reasonably used for development, evaluation, and validation of PET/MRI reconstruction methods.
Collapse
Affiliation(s)
- Abhejit Rajagopal
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Yutaka Natsuaki
- Department of Radiation Oncology, University of New Mexico, Albuquerque, NM 87131 USA
- Mahdjoub Hamdi
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Hongyu An
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, IA 52242 USA
- Richard Laforest
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Paul E Kinahan
- Department of Radiology, University of Washington, Seattle, WA 98195 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
34. Wang YRJ, Wang P, Adams LC, Sheybani ND, Qu L, Sarrami AH, Theruvath AJ, Gatidis S, Ho T, Zhou Q, Pribnow A, Thakor AS, Rubin D, Daldrup-Link HE. Low-count whole-body PET/MRI restoration: an evaluation of dose reduction spectrum and five state-of-the-art artificial intelligence models. Eur J Nucl Med Mol Imaging 2023; 50:1337-1350. [PMID: 36633614] [PMCID: PMC10387227] [DOI: 10.1007/s00259-022-06097-w]
Abstract
PURPOSE To provide a holistic and complete comparison of the five most advanced AI models in the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six groups of count levels representing the simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - to ensure that the findings are generalizable. A total of 476 original count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. Subsequently, the performances of SwinIR and U-Net were evaluated separately at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%.
CONCLUSION Compared to low-count PET images, which are nearly or entirely nondiagnostic at higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard dose remains out of reach for current AI techniques.
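The SSIM scores reported above follow the standard structural-similarity formula. A simplified single-window variant (an assumption — the standard metric averages the same terms over local sliding windows) makes the luminance, contrast, and structure terms explicit:

```python
def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified whole-image SSIM for intensities in [0, 1].

    Single global window; standard SSIM applies the same formula over
    local sliding windows and averages the result.
    """
    n = len(x)
    mx = sum(x) / n                      # mean of x (luminance)
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n   # variance (contrast)
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # structure
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

score_same = ssim_global([0.1, 0.5, 0.9, 0.3], [0.1, 0.5, 0.9, 0.3])  # identical -> 1
score_diff = ssim_global([0.1, 0.5, 0.9, 0.3], [0.2, 0.4, 0.8, 0.4])  # degraded -> < 1
```

The stabilizing constants c1 and c2 use the conventional 0.01 and 0.03 factors for a unit dynamic range.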
Affiliation(s)
- Yan-Ran Joyce Wang
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA.
- Pengcheng Wang
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
- Lisa Christine Adams
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Natasha Diba Sheybani
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Liangqiong Qu
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Amir Hossein Sarrami
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Ashok Joseph Theruvath
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
- Tina Ho
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Quan Zhou
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Allison Pribnow
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Avnesh S Thakor
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
- Heike E Daldrup-Link
- Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA, 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA, 94304, USA
35. Anatomically guided reconstruction improves lesion quantitation and detectability in bone SPECT/CT. Nucl Med Commun 2023; 44:330-337. [PMID: 36804873] [DOI: 10.1097/mnm.0000000000001675]
Abstract
Bone single-photon emission computed tomography (SPECT)/computed tomography (CT) imaging suffers from poor spatial resolution, but the image quality can be improved during SPECT reconstruction by using anatomical information derived from CT imaging. The purpose of this work was to compare two different anatomically guided SPECT reconstruction methods to ordered subsets expectation maximization (OSEM) which is the most commonly used reconstruction method in nuclear medicine. The comparison was done in terms of lesion quantitation and lesion detectability. Anatomically guided Bayesian reconstruction (AMAP) and kernelized ordered subset expectation maximization (KEM) algorithms were implemented and compared against OSEM. Artificial lesions with a wide range of lesion-to-background contrasts were added to normal bone SPECT/CT studies. The quantitative accuracy was assessed by the error in lesion standardized uptake values and lesion detectability by the area under the receiver operating characteristic curve generated by a non-prewhitening matched filter. AMAP and KEM provided significantly better quantitative accuracy than OSEM at all contrast levels. Accuracy was the highest when SPECT lesions were matched to a lesion on CT. Correspondingly, AMAP and KEM also had significantly better lesion detectability than OSEM at all contrast levels and reconstructions with matching CT lesions performed the best. Quantitative differences between AMAP and KEM algorithms were minor. Visually AMAP and KEM images looked similar. Anatomically guided reconstruction improves lesion quantitation and detectability markedly compared to OSEM. Differences between AMAP and KEM algorithms were small and thus probably clinically insignificant.
36. Pal S, Dutta S, Maitra R. Personalized synthetic MR imaging with deep learning enhancements. Magn Reson Med 2023; 89:1634-1643. [PMID: 36420834] [PMCID: PMC10100029] [DOI: 10.1002/mrm.29527]
Abstract
PURPOSE Personalized synthetic MRI (syn-MRI) uses MR images of an individual subject acquired at a few design parameters (echo time, repetition time, flip angle) to obtain underlying parametric (ρ, T1, T2) maps, from which MR images of that individual at other design parameter settings are synthesized. However, classical methods that use least-squares (LS) or maximum likelihood estimators (MLE) are unsatisfactory at higher noise levels because the underlying inverse problem is ill-posed. This article provides a pipeline to enhance the synthesis of such images in three dimensions (3D) using a deep learning (DL) neural network architecture for spatial regularization in a personalized setting where having more than a few training images is impractical. METHODS Our DL enhancements employ a Deep Image Prior (DIP) with a U-net type denoising architecture that includes situations with minimal training data, such as personalized syn-MRI. We provide a general workflow for syn-MRI from three or more training images. Our workflow, called DIPsyn-MRI, uses DIP to enhance training images, then obtains parametric images using LS or MLE before synthesizing images at desired design parameter settings. DIPsyn-MRI is implemented in our publicly available Python package DeepSynMRI available at: https://github.com/StatPal/DeepSynMRI. RESULTS We demonstrate feasibility and improved performance of DIPsyn-MRI on 3D datasets acquired using the Brainweb interface for spin-echo and FLASH imaging sequences, at different noise levels. Our DL enhancements improve syn-MRI in the presence of different intensity nonuniformity levels of the magnetic field, for all but very low noise levels. CONCLUSION This article provides recipes and software to realistically facilitate DL-enhanced personalized syn-MRI.
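The parametric (ρ, T1, T2) maps behind syn-MRI come from fitting the classical spin-echo signal model S(TE, TR) = ρ(1 − e^(−TR/T1))·e^(−TE/T2). At a fixed TR, the T1 and ρ factors cancel in a two-echo ratio, so a log-ratio already isolates T2. The echo times and tissue values below are illustrative, not from the paper:

```python
import math

def spin_echo_signal(rho, t1, t2, te, tr):
    # Classical spin-echo signal model used in syn-MRI parameter fitting.
    return rho * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

def t2_from_two_echoes(s1, s2, te1, te2):
    # At fixed TR: S1/S2 = exp((TE2 - TE1)/T2)  =>  T2 = (TE2 - TE1)/ln(S1/S2)
    return (te2 - te1) / math.log(s1 / s2)

# Illustrative tissue values (milliseconds) and acquisition settings.
rho, t1, t2, tr = 0.9, 1000.0, 80.0, 2000.0
s1 = spin_echo_signal(rho, t1, t2, te=20.0, tr=tr)
s2 = spin_echo_signal(rho, t1, t2, te=100.0, tr=tr)
t2_hat = t2_from_two_echoes(s1, s2, 20.0, 100.0)   # recovers t2 in the noise-free case
```

With noisy acquisitions this closed form becomes unstable, which is exactly the ill-posedness the DIP-based spatial regularization in the paper addresses.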
Affiliation(s)
- Subrata Pal
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Somak Dutta
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Ranjan Maitra
- Department of Statistics, Iowa State University, Ames, Iowa, USA
37. Li S, Gong K, Badawi RD, Kim EJ, Qi J, Wang G. Neural KEM: A Kernel Method With Deep Coefficient Prior for PET Image Reconstruction. IEEE Trans Med Imaging 2023; 42:785-796. [PMID: 36288234] [PMCID: PMC10081957] [DOI: 10.1109/tmi.2022.3217543]
Abstract
Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach for a further improvement of the kernel method would be adding an explicit regularization, which however leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using a deep coefficient prior, which represents the kernel coefficient image in the PET forward model using a convolutional neural-network. To solve the maximum-likelihood neural network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for image update from the projection data and a deep-learning step in the image domain for updating the kernel coefficient image using the neural network. This optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations and real patient data have demonstrated that the neural KEM can outperform existing KEM and deep image prior methods.
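The KEM step that the proposed algorithm alternates with its deep-learning step is the classical kernelized MLEM update on the coefficient image, with the reconstruction represented as x = Kα. A toy numpy version (the system matrix, Gaussian kernel on a 1-D prior feature, and sizes are illustrative assumptions) shows the multiplicative update and its monotone likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

m, n = 8, 6                              # detector bins, image pixels
A = rng.uniform(0.1, 1.0, (m, n))        # toy nonnegative system matrix

feat = np.linspace(0.0, 1.0, n)          # prior feature per pixel
K = np.exp(-(feat[:, None] - feat[None, :]) ** 2 / (2 * 0.1 ** 2))
K /= K.sum(axis=1, keepdims=True)        # row-normalised kernel matrix

P = A @ K                                # forward model on kernel coefficients
alpha_true = rng.uniform(0.5, 2.0, n)
y = P @ alpha_true                       # noise-free measurements for the demo

alpha = np.ones(n)                       # initial coefficient image
sens = P.T @ np.ones(m)                  # sensitivity (denominator) term

def loglik(a):
    yhat = P @ a
    return float(np.sum(y * np.log(yhat) - yhat))   # Poisson log-likelihood

hist = [loglik(alpha)]
for _ in range(50):                      # KEM = MLEM in coefficient space
    alpha *= (P.T @ (y / (P @ alpha))) / sens
    hist.append(loglik(alpha))

x_rec = K @ alpha                        # reconstructed image x = K @ alpha
```

The neural KEM of the paper replaces the fixed coefficient image α with the output of a convolutional network and interleaves this EM step with a network-fitting step; the sketch above covers only the classical EM half.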
38. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766] [DOI: 10.1088/1361-6560/acba74]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised), I2I translation, and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, performance of the algorithm, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, CT- or MRI-only based radiotherapy planning, etc. Only 5 studies validated their models using an independent test set and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
39. Moradi H, Al-Hourani A, Concilia G, Khoshmanesh F, Nezami FR, Needham S, Baratchi S, Khoshmanesh K. Recent developments in modeling, imaging, and monitoring of cardiovascular diseases using machine learning. Biophys Rev 2023; 15:19-33. [PMID: 36909958] [PMCID: PMC9995635] [DOI: 10.1007/s12551-022-01040-7]
Abstract
Cardiovascular diseases are the leading cause of mortality, morbidity, and hospitalization around the world. Recent technological advances have facilitated analyzing, visualizing, and monitoring cardiovascular diseases using emerging computational fluid dynamics, blood flow imaging, and wearable sensing technologies. Yet, computational cost, limited spatiotemporal resolution, and obstacles to thorough data analysis have hindered the utility of such techniques to curb cardiovascular diseases. We herein discuss how leveraging machine learning techniques, and in particular deep learning methods, could overcome these limitations and offer promise for translation. We discuss the remarkable capacity of recently developed machine learning techniques to accelerate flow modeling, to enhance the resolution while reducing the noise and scanning time of current blood flow imaging techniques, and to accurately detect cardiovascular diseases using the plethora of data collected by wearable sensors.
Affiliation(s)
- Hamed Moradi
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Akram Al-Hourani
- School of Engineering, RMIT University, Melbourne, Victoria, Australia
- Farnaz Khoshmanesh
- School of Allied Health, Human Services & Sport, La Trobe University, Melbourne, Victoria, Australia
- Farhad R. Nezami
- Division of Thoracic and Cardiac Surgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Scott Needham
- Leading Technology Group, Melbourne, Victoria, Australia
- Sara Baratchi
- School of Health and Biomedical Sciences, RMIT University, Melbourne, Victoria, Australia
40. Deep 3D reconstruction of synchrotron X-ray computed tomography for intact lungs. Sci Rep 2023; 13:1738. [PMID: 36720962] [PMCID: PMC9889716] [DOI: 10.1038/s41598-023-27627-y]
Abstract
Synchrotron X-rays can be used to obtain highly detailed images of parts of the lung. However, micro-motion artifacts induced by, for example, cardiac motion impede quantitative visualization of the alveoli in the lungs. This paper proposes a method that applies a neural network to synchrotron X-ray computed tomography (CT) data to reconstruct the high-quality 3D structure of alveoli in intact mouse lungs at expiration, without needing ground-truth data. Our approach reconstructs the spatial sequence of CT images by using a deep image prior with interpolated input latent variables, and in this way significantly enhances the images of alveolar structure compared with the prior art. The approach successfully visualizes 3D alveolar units of intact mouse lungs at expiration and enables us to measure the diameter of the alveoli. We believe that our approach will help to accurately visualize other living organs hampered by micro-motion.
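The interpolated input latent variables mentioned above can be produced by a simple linear interpolation between anchor codes along the slice axis (an assumption — the paper's interpolation scheme may differ):

```python
def lerp_latents(z0, z1, num_steps):
    """Return num_steps latent vectors sliding linearly from z0 to z1 inclusive.

    Feeding these codes to a single DIP network ties neighbouring CT slices
    together, so reconstructions vary smoothly along the spatial sequence.
    """
    out = []
    for k in range(num_steps):
        t = k / (num_steps - 1)          # interpolation weight in [0, 1]
        out.append([(1.0 - t) * a + t * b for a, b in zip(z0, z1)])
    return out

codes = lerp_latents([0.0, 2.0], [1.0, 0.0], num_steps=5)
```

The endpoints are the anchor codes themselves; intermediate codes are convex combinations of the two.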
41. Yin L, Guo H, Zhang P, Li Y, Hui H, Du Y, Tian J. System matrix recovery based on deep image prior in magnetic particle imaging. Phys Med Biol 2023; 68. [PMID: 36584394] [DOI: 10.1088/1361-6560/acaf47]
Abstract
Objective. Magnetic particle imaging (MPI) is an emerging tomographic imaging technique with high specificity and temporal-spatial resolution. MPI reconstruction based on the system matrix (SM) is an important research topic in MPI. However, the SM is usually obtained by measuring the response of an MPI scanner at all positions in the field of view. This process is very time-consuming, and the scanner can overheat during long periods of continuous operation, generating thermal noise that degrades MPI imaging performance. Approach. In this study, we propose a deep image prior-based method that prominently decreases the time of SM calibration. It is an unsupervised method that utilizes the neural network structure itself to recover a high-resolution SM from a downsampled SM, without the need to train the network on a large amount of training data. Main results. Experiments on the Open MPI data show that the time of SM calibration can be greatly reduced with only slight degradation of image quality. Significance. This study provides a novel method for obtaining the SM in MPI, which shows the potential to achieve SM recovery at a high downsampling rate. We expect that this study will increase the practicability of MPI in biomedical applications and promote the development of MPI in the future.
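The unsupervised recovery compares the network's high-resolution SM estimate, after downsampling, with the measured low-resolution SM. A 2×2 average-pooling operator and the resulting data-consistency term (both illustrative assumptions — the paper's exact downsampling operator may differ) look like this:

```python
def avg_pool_2x2(img):
    """Downsample a 2-D list-of-lists by 2x2 mean pooling (even dimensions assumed)."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def consistency_loss(hi_res, sm_low):
    # Squared error between the downsampled estimate and the measured rows:
    # the only data term needed, since no ground-truth high-resolution SM exists.
    down = avg_pool_2x2(hi_res)
    return sum((d - m) ** 2
               for dr, mr in zip(down, sm_low)
               for d, m in zip(dr, mr))

hi = [[1.0, 3.0, 5.0, 7.0],      # hypothetical network output (one SM component)
      [1.0, 3.0, 5.0, 7.0],
      [2.0, 2.0, 4.0, 4.0],
      [2.0, 2.0, 4.0, 4.0]]
low = [[2.0, 6.0],               # hypothetical measured low-resolution SM
       [2.0, 4.0]]
loss = consistency_loss(hi, low)  # zero when the estimate is consistent
```

In the DIP setting this loss is minimized over the network weights, with the network architecture supplying the implicit prior that fills in the unmeasured positions.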
Affiliation(s)
- Lin Yin
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, People's Republic of China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Hongbo Guo
- School of Information Sciences and Technology, Northwest University, Xi'an, 710127, People's Republic of China
- Peng Zhang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, People's Republic of China
- Yimeng Li
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, People's Republic of China
- Hui Hui
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, People's Republic of China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Yang Du
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, People's Republic of China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, People's Republic of China; Beijing Key Laboratory of Molecular Imaging, Beijing 100190, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, People's Republic of China
42. Li S, Wang G. Deep Kernel Representation for Image Reconstruction in PET. IEEE Trans Med Imaging 2022; 41:3029-3038. [PMID: 35584077] [PMCID: PMC9613528] [DOI: 10.1109/tmi.2022.3176002]
Abstract
Image reconstruction for positron emission tomography (PET) is challenging because of the ill-conditioned tomographic problem and low counting statistics. Kernel methods address this challenge by using kernel representation to incorporate image prior information in the forward model of iterative PET image reconstruction. Existing kernel methods construct the kernels commonly using an empirical process, which may lead to unsatisfactory performance. In this paper, we describe the equivalence between the kernel representation and a trainable neural network model. A deep kernel method is then proposed by exploiting a deep neural network to enable automated learning of an improved kernel model and is directly applicable to single subjects in dynamic PET. The training process utilizes available image prior data to form a set of robust kernels in an optimized way rather than empirically. The results from computer simulations and a real patient dataset demonstrate that the proposed deep kernel method can outperform the existing kernel method and neural network method for dynamic PET image reconstruction.
43. Fourcade C, Ferrer L, Moreau N, Santini G, Brennan A, Rousseau C, Lacombe M, Fleury V, Colombié M, Jézéquel P, Rubeaux M, Mateus D. Deformable image registration with deep network priors: a study on longitudinal PET images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7e17]
Abstract
Objective. This paper proposes a novel approach for the longitudinal registration of PET imaging acquired for the monitoring of patients with metastatic breast cancer. Unlike with other image analysis tasks, the use of deep learning (DL) has not significantly improved the performance of image registration. With this work, we propose a new registration approach to bridge the performance gap between conventional and DL-based methods: medical image registration method regularized by architecture (MIRRBA). Approach. MIRRBA is a subject-specific deformable registration method which relies on a deep pyramidal architecture to parametrize the deformation field. Diverging from the usual deep-learning paradigms, MIRRBA does not require a learning database, but only the pair of images to be registered, which is used to optimize the network's parameters. We applied MIRRBA to a private dataset of 110 whole-body PET images of patients with metastatic breast cancer. We used different architecture configurations to produce the deformation field and studied the results obtained. We also compared our method to several standard registration approaches: two conventional iterative registration methods (ANTs and Elastix) and two supervised DL-based models (LapIRN and Voxelmorph). Registration accuracy was evaluated using the Dice score, the target registration error, the average Hausdorff distance and the detection rate, while the realism of the registration obtained was evaluated using the determinant of the Jacobian. The ability of the different methods to shrink disappearing lesions was also quantified with the disappearing rate. Main results. MIRRBA significantly improved all metrics when compared to DL-based approaches. The organ and lesion Dice scores of Voxelmorph improved by 6% and 52% respectively, while those of LapIRN increased by 5% and 65%. Compared to conventional approaches, MIRRBA presented comparable results, showing the feasibility of our method. Significance. In this paper, we also demonstrate the regularizing power of deep architectures and present new elements to understand the role of the architecture in DL methods used for registration.
44. Qayyum A, Sultani W, Shamshad F, Tufail R, Qadir J. Single-shot retinal image enhancement using untrained and pretrained neural networks priors integrated with analytical image priors. Comput Biol Med 2022; 148:105879. [PMID: 35863248] [DOI: 10.1016/j.compbiomed.2022.105879]
Abstract
Retinal images acquired using fundus cameras are often visually blurred due to imperfect imaging conditions, refractive medium turbidity, and motion blur. In addition, ocular diseases such as the presence of cataracts also result in blurred retinal images. The presence of blur in retinal fundus images reduces the effectiveness of the diagnosis process of an expert ophthalmologist or a computer-aided detection/diagnosis system. In this paper, we put forward a single-shot deep image prior (DIP)-based approach for retinal image enhancement. Unlike typical deep learning-based approaches, our method does not require any training data. Instead, our DIP-based method can learn the underlying image prior while using a single degraded image. To perform retinal image enhancement, we frame it as a layer decomposition problem and investigate the use of two well-known analytical priors, i.e., dark channel prior (DCP) and bright channel prior (BCP) for atmospheric light estimation. We show that both the untrained neural networks and the pretrained neural networks can be used to generate an enhanced image while using only a single degraded image. The proposed approach is time and memory-efficient, which makes the solution feasible for real-world resource-constrained environments. We evaluate our proposed framework quantitatively on five datasets using three widely used metrics and complement that with a subjective qualitative assessment of the enhancement by two expert ophthalmologists. For instance, our method has achieved significant performance for untrained CDIPs coupled with DCP in terms of average PSNR, SSIM, and BRISQUE values of 40.41, 0.97, and 34.2, respectively, and for untrained CDIPs coupled with BCP, it achieved average PSNR, SSIM, and BRISQUE values of 40.22, 0.98, and 36.38, respectively. Our extensive experimental comparison with several competitive baselines on public and non-public proprietary datasets validates the proposed ideas and framework.
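The dark channel prior (DCP) used here for atmospheric-light estimation takes, per pixel, the minimum over color channels followed by a local minimum filter. A plain-Python sketch (the window size and test image are illustrative) follows:

```python
def dark_channel(img, window=3):
    """Compute the dark channel of img, an H x W x 3 nested list in [0, 1].

    Step 1: per-pixel minimum over the three colour channels.
    Step 2: local minimum filter over a window x window neighbourhood
            (clipped at the image borders).
    """
    h, w = len(img), len(img[0])
    per_pixel_min = [[min(img[r][c]) for c in range(w)] for r in range(h)]
    half = window // 2
    dark = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            patch = [per_pixel_min[rr][cc]
                     for rr in range(max(0, r - half), min(h, r + half + 1))
                     for cc in range(max(0, c - half), min(w, c + half + 1))]
            dark[r][c] = min(patch)
    return dark

img = [[[0.9, 0.8, 0.7], [0.5, 0.6, 0.4]],   # tiny 2x2 RGB test image
       [[0.3, 0.2, 0.1], [1.0, 1.0, 1.0]]]
dc = dark_channel(img, window=3)
```

In haze-removal-style models, the brightest pixels of the dark channel point at the atmospheric light; the bright channel prior (BCP) is the dual construction with maxima in place of minima.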
Affiliation(s)
- Adnan Qayyum
- Information Technology University of the Punjab, Lahore, Pakistan
- Waqas Sultani
- Information Technology University of the Punjab, Lahore, Pakistan
- Fahad Shamshad
- Information Technology University of the Punjab, Lahore, Pakistan
45
Artificial intelligence-based PET image acquisition and reconstruction. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00508-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
46
Hamilton JI. A Self-Supervised Deep Learning Reconstruction for Shortening the Breathhold and Acquisition Window in Cardiac Magnetic Resonance Fingerprinting. Front Cardiovasc Med 2022; 9:928546. [PMID: 35811730 PMCID: PMC9260051 DOI: 10.3389/fcvm.2022.928546] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 06/06/2022] [Indexed: 01/14/2023] Open
Abstract
The aim of this study is to shorten the breathhold and diastolic acquisition window in cardiac magnetic resonance fingerprinting (MRF) for simultaneous T1, T2, and proton spin density (M0) mapping to improve scan efficiency and reduce motion artifacts. To this end, a novel reconstruction was developed that combines low-rank subspace modeling with a deep image prior, termed DIP-MRF. A system of neural networks is used to generate spatial basis images and quantitative tissue property maps, with training performed using only the undersampled k-space measurements from the current scan. This approach avoids difficulties with obtaining in vivo MRF training data, as training is performed de novo for each acquisition. Calculation of the forward model during training is accelerated by using GRAPPA operator gridding to shift spiral k-space data to Cartesian grid points, and by using a neural network to rapidly generate fingerprints in place of a Bloch equation simulation. DIP-MRF was evaluated in simulations and at 1.5 T in a standardized phantom, 18 healthy subjects, and 10 patients with suspected cardiomyopathy. In addition to conventional mapping, two cardiac MRF sequences were acquired, one with a 15-heartbeat (HB) breathhold and 254 ms acquisition window, and one with a 5HB breathhold and 150 ms acquisition window. In simulations, DIP-MRF yielded decreased nRMSE compared to dictionary matching and a sparse and locally low-rank (SLLR-MRF) reconstruction. Strong correlation (R2 > 0.999) with T1 and T2 reference values was observed in the phantom using the 5HB/150 ms scan with DIP-MRF. DIP-MRF provided better suppression of noise and aliasing artifacts in vivo, especially for the 5HB/150 ms scan, and lower intersubject and intrasubject variability compared to dictionary matching and SLLR-MRF. Furthermore, it yielded better agreement between myocardial T1 and T2 from the 15HB/254 ms and 5HB/150 ms MRF scans, with a bias of −9 ms for T1 and 2 ms for T2.
In summary, this study introduces an extension of the deep image prior framework for cardiac MRF tissue property mapping, which does not require pre-training with in vivo scans, and has the potential to reduce motion artifacts by enabling a shortened breathhold and acquisition window.
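The low-rank subspace modeling combined with the deep image prior in DIP-MRF rests on the observation that a dictionary of simulated signal evolutions is well captured by a few temporal basis functions from its SVD. A toy NumPy illustration (the inversion-recovery-style signal model, dimensions, and rank are illustrative choices here; the paper generates fingerprints with a neural network in place of a Bloch simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dictionary of simulated fingerprints: one time course per T1 value.
timesteps, n_entries = 200, 500
t = np.arange(timesteps)[:, None]
T1 = rng.uniform(200, 2000, n_entries)
D = 1 - 2 * np.exp(-t / T1)          # crude inversion-recovery-style signals

# Low-rank subspace: the first r left singular vectors span the dictionary,
# so images need only be reconstructed as r coefficient maps.
U, s, _ = np.linalg.svd(D, full_matrices=False)
r = 5
Ur = U[:, :r]
D_approx = Ur @ (Ur.T @ D)           # project dictionary onto the subspace
rel_err = np.linalg.norm(D - D_approx) / np.linalg.norm(D)
```

Because smooth relaxation curves are highly correlated, a handful of basis functions reproduces the whole dictionary almost exactly, which is what makes the subspace parameterization efficient.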
Affiliation(s)
- Jesse I. Hamilton
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
47
Cui J, Gong K, Guo N, Kim K, Liu H, Li Q. Unsupervised PET logan parametric image estimation using conditional deep image prior. Med Image Anal 2022; 80:102519. [PMID: 35767910 DOI: 10.1016/j.media.2022.102519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 06/14/2022] [Accepted: 06/15/2022] [Indexed: 11/18/2022]
Abstract
Recently, deep learning-based denoising methods have been gradually applied to PET image denoising and have shown impressive results. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that does not need prior training or a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. The neural network was utilized to represent the images of the Logan slope and intercept. The patient's computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%, thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%, thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal mean (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%, thalamus of brain PET datasets: 103.59%±16.37%).
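The Logan reference-tissue model mentioned above reduces, per voxel, to a linear fit of transformed time-activity curves, whose slope approximates the distribution volume ratio (DVR). A minimal sketch on synthetic curves (the sampling times, t* index, and signal model are illustrative; the paper represents the slope and intercept images with a conditional DIP network rather than fitting voxel by voxel):

```python
import numpy as np

def logan_slope_intercept(ct, cref, t, t_star_idx):
    """Logan reference-tissue graphical analysis for one voxel.
    x = cumint(cref)/ct, y = cumint(ct)/ct; fit y = slope*x + b after t*.
    The slope approximates the distribution volume ratio (DVR)."""
    cum_t = np.concatenate([[0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
    cum_r = np.concatenate([[0], np.cumsum(0.5 * (cref[1:] + cref[:-1]) * np.diff(t))])
    x = cum_r[t_star_idx:] / ct[t_star_idx:]
    y = cum_t[t_star_idx:] / ct[t_star_idx:]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# synthetic target curve constructed so the true DVR is exactly 2
t = np.linspace(0, 60, 40)
cref = t * np.exp(-0.05 * t) + 0.1
ct = 2.0 * cref
dvr, intercept = logan_slope_intercept(ct, cref, t, t_star_idx=10)
```

On this idealized input the fit recovers slope 2 and intercept 0; with noisy measured curves the per-voxel fit becomes unstable, which is what motivates regularizing the slope and intercept images with a network prior.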
Affiliation(s)
- Jianan Cui
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
- Ning Guo
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
- Kyungsang Kim
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China; Jiaxing Key Laboratory of Photonic Sensing and Intelligent Imaging, Jiaxing, Zhejiang 314000, China; Intelligent Optics and Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Zhejiang 314000, China.
- Quanzheng Li
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston MA 02114, USA.
48
Deep Learning-Based Denoising in Brain Tumor CHO PET: Comparison with Traditional Approaches. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12105187] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
18F-choline (CHO) PET images remain noisy despite minimal physiological activity in the normal brain, and this study developed a deep learning-based denoising algorithm for brain tumor CHO PET. Thirty-nine presurgical CHO PET/CT datasets were retrospectively collected for patients with pathologically confirmed primary diffuse glioma. Two conventional denoising methods, namely block-matching and 3D filtering (BM3D) and non-local means (NLM), and two deep learning-based approaches, namely Noise2Noise (N2N) and Noise2Void (N2V), were established for image denoising, and the methods were developed without paired data. All algorithms improved the image quality to a certain extent, with N2N demonstrating the best contrast-to-noise ratio (CNR) (4.05 ± 3.45), CNR improvement ratio (13.60% ± 2.05%) and the lowest entropy (1.68 ± 0.17) compared with the other approaches. Little change was identified in traditional tumor PET features, including maximum standardized uptake value (SUVmax), SUVmean and total lesion activity (TLA), while the tumor-to-normal (T/N) ratio increased owing to reduced noise. These results suggest that the N2N algorithm can achieve sufficient denoising performance while preserving the original features of tumors, and may be generalized to abundant brain tumor PET images.
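The reason N2N and N2V can be developed without paired clean data is that, under an L2 loss, predicting an independent noisy copy of a signal has the same minimizer as predicting the clean signal itself, because zero-mean noise averages out. A toy demonstration of that observation with the simplest possible "denoiser", a per-pixel constant (the signal shape and noise level are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))            # unknown clean signal
noisy = clean + rng.normal(0.0, 0.3, size=(5000, 64))    # independent noisy copies

# The L2-optimal per-pixel constant predictor of a noisy target is its mean,
# which converges to the clean signal as more noisy realizations are seen --
# the core observation behind Noise2Noise training.
n2n_estimate = noisy.mean(axis=0)
residual = np.abs(n2n_estimate - clean).max()
```

A real N2N network replaces the constant with a learned mapping from one noisy image to another, but the same averaging argument explains why it converges toward the clean target.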
49
A Compressed Reconstruction Network Combining Deep Image Prior and Autoencoding Priors for Single-Pixel Imaging. PHOTONICS 2022. [DOI: 10.3390/photonics9050343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Single-pixel imaging (SPI) is a promising imaging scheme based on compressive sensing. However, its application in high-resolution and real-time scenarios is a great challenge due to the long sampling and reconstruction times required. The Deep Learning Compressed Network (DLCNet) avoids the lengthy iterative operations required by traditional reconstruction algorithms and can achieve fast, high-quality reconstruction; hence, deep-learning-based SPI has attracted much attention. DLCNets learn prior distributions of real images from massive datasets, while the Deep Image Prior (DIP) uses a neural network's own structural prior to solve inverse problems without requiring a lot of training data. This paper proposes a compressed reconstruction network (DPAP) based on DIP for single-pixel imaging. DPAP is designed with two learning stages, which enables it to focus on statistical information of the image structure at different scales. To obtain prior information from the dataset, the measurement matrix is jointly optimized by a network, and multiple autoencoders are trained as regularization terms added to the loss function. Extensive simulations and practical experiments demonstrate that the proposed network outperforms existing algorithms.
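The single-pixel forward model underlying such networks is a set of inner products between the scene and a sequence of illumination patterns. With enough random patterns, plain least squares already inverts the model; compressive approaches such as DPAP target the much harder undersampled regime using learned priors. A minimal NumPy sketch of the fully determined case (sizes and patterns are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 96                     # 8x8 scene (flattened), 96 patterns

x = np.zeros(n)
x[10:20] = 1.0                    # toy scene

A = rng.integers(0, 2, size=(m, n)).astype(float)  # binary illumination patterns
y = A @ x                                          # single-pixel detector readings

# With m >= n random patterns the system is (almost surely) full rank, so
# least squares recovers the scene; SPI networks aim at m << n instead.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Dropping rows of A below n makes the system underdetermined, which is exactly where a DIP or learned autoencoder prior must supply the missing information.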
50
Ashouri Z, Wang G, Dansereau RM, deKemp RA. Evaluation of Wavelet Kernel-Based PET Image Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2021.3103104] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Affiliation(s)
- Zahra Ashouri
- Cardiac Imaging, Ottawa Heart Institute, Ottawa, ON, Canada
| | - Guobao Wang
- Department of Radiology, University of California at Davis, Davis, CA, USA
| | - Richard M. Dansereau
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada