51
Fu Y, Dong S, Niu M, Xue L, Guo H, Huang Y, Xu Y, Yu T, Shi K, Yang Q, Shi Y, Zhang H, Tian M, Zhuo C. AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images. Med Image Anal 2023; 86:102787. [PMID: 36933386] [DOI: 10.1016/j.media.2023.102787]
Abstract
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but raises concerns about the potential health risks of radiation exposure. The conflict between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as their full-dose counterparts (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: a cascade generator, a dual-scale discriminator and a multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game against the dual-scale discriminator over two stages, coarse and fine; in both, it generates estimated F-CT (F-PET) images as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully exploits inter- and intra-slice structural information, to produce the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and satisfies reconstruction requirements for clinical standards.
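The zero-sum game between generator and discriminator described above follows the standard conditional GAN objective. As a hedged sketch (AIGAN adds attention-encoding and coarse/fine stage-specific loss terms not reproduced here), with \(x_L\) a low-dose input sequence and \(x_F\) the corresponding full-dose image, the two players optimize:

```latex
\min_{G}\max_{D}\; \mathcal{L}(G,D)
= \mathbb{E}_{(x_L,\,x_F)}\!\left[\log D(x_F \mid x_L)\right]
+ \mathbb{E}_{x_L}\!\left[\log\!\left(1 - D(G(x_L) \mid x_L)\right)\right]
```

The discriminator pushes \(D(x_F \mid x_L)\) toward 1 and \(D(G(x_L) \mid x_L)\) toward 0, while the generator does the opposite, which is what drives the estimated images toward the full-dose distribution.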
Affiliation(s)
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Binjiang Institute, Zhejiang University, Hangzhou, China
- Shunjie Dong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Le Xue
- Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanning Guo
- Institute of Neuroscience and Medicine, Medical Imaging Physics (INM-4), Forschungszentrum Jülich, Jülich, Germany
- Yanyan Huang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yuanfan Xu
- Hangzhou Universal Medical Imaging Diagnostic Center, Hangzhou, China
- Tianbai Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Qianqian Yang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
- Hong Zhang
- Binjiang Institute, Zhejiang University, Hangzhou, China; Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Mei Tian
- Human Phenome Institute, Fudan University, Shanghai, China
- Cheng Zhuo
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
52
Dynamic PET images denoising using spectral graph wavelet transform. Med Biol Eng Comput 2023; 61:97-107. [PMID: 36323982] [DOI: 10.1007/s11517-022-02698-7]
Abstract
Positron emission tomography (PET) is a non-invasive molecular imaging method for the quantitative observation of physiological and biochemical changes in living organisms. The quality of the reconstructed PET image is limited by many different physical degradation factors. Various denoising methods, including Gaussian filtering (GF) and non-local means (NLM) filtering, have been proposed to improve image quality. However, image denoising usually blurs edges, whose high-frequency components are filtered out as noise. On the other hand, it is well known that edges in a PET image are important for the detection and recognition of a lesion. Denoising while preserving the edges of PET images remains an important yet challenging problem in PET image processing. In this paper, we propose a novel denoising method with good edge-preserving performance based on the spectral graph wavelet transform (SGWT) for dynamic PET image denoising. We first generate a composite image from the entire time series, then perform the SGWT on the PET images, and finally reconstruct the low-graph-frequency content to obtain the denoised dynamic PET images. Experimental results on simulated and in vivo data show that the proposed approach significantly outperforms the GF, NLM and graph filtering methods. Compared with a deep learning-based method, the proposed method achieves similar denoising performance but does not require large amounts of training data and has low computational complexity.
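The core idea of reconstructing only the low-graph-frequency content can be illustrated with a minimal numpy sketch. This is not the paper's SGWT implementation (which uses wavelet kernels on a graph built from the composite image); it is a hedged toy example that projects a noisy signal on a path graph onto the lowest eigenvectors of the graph Laplacian, the "graph Fourier" analogue of low-pass filtering:

```python
import numpy as np

def path_graph_laplacian(n):
    """Combinatorial Laplacian L = D - A of a simple path graph."""
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    return np.diag(A.sum(axis=1)) - A

def graph_lowpass(signal, keep):
    """Project a graph signal onto its `keep` lowest graph frequencies."""
    L = path_graph_laplacian(len(signal))
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending: low graph frequencies first
    coeffs = eigvecs.T @ signal           # forward graph Fourier transform
    coeffs[keep:] = 0.0                   # discard high-graph-frequency (noise) content
    return eigvecs @ coeffs               # inverse transform

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, np.pi, 64))
noisy = clean + 0.3 * rng.standard_normal(64)
denoised = graph_lowpass(noisy, keep=8)
# The low-pass reconstruction is closer to the clean signal than the noisy input.
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```

On a regular grid this reduces to ordinary smoothing; the advantage of the graph formulation is that the graph (and hence the notion of "frequency") can follow image structure, which is what gives the edge-preserving behavior described above.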
53
Fujioka T, Satoh Y, Imokawa T, Mori M, Yamaga E, Takahashi K, Kubota K, Onishi H, Tateishi U. Proposal to Improve the Image Quality of Short-Acquisition Time-Dedicated Breast Positron Emission Tomography Using the Pix2pix Generative Adversarial Network. Diagnostics (Basel) 2022; 12:3114. [PMID: 36553120] [PMCID: PMC9777139] [DOI: 10.3390/diagnostics12123114]
Abstract
This study aimed to evaluate the ability of the pix2pix generative adversarial network (GAN) to improve the image quality of low-count dedicated breast positron emission tomography (dbPET). Pairs of full- and low-count dbPET images were collected from 49 breasts. An image synthesis model was constructed using the pix2pix GAN for each acquisition time, with training (3776 pairs from 16 breasts) and validation data (1652 pairs from 7 breasts). Test data comprised dbPET images synthesized by our model from 26 breasts with short acquisition times. Two breast radiologists visually compared the overall image quality of the original and synthesized images derived from the short-acquisition-time data (scores of 1-5). Further quantitative evaluation was performed using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the visual evaluation, both readers assigned an average score of >3 to all images. The quantitative evaluation revealed significantly higher SSIM (p < 0.01) and PSNR (p < 0.01) for the 26 s synthetic images, and higher PSNR for the 52 s images (p < 0.01), than for the original images. Our model improved the quality of synthetic images derived from low-count dbPET, with a more pronounced effect on images with lower counts.
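PSNR, used for the quantitative evaluation above (and in several of the entries that follow), has a simple closed form: 10·log10 of the squared data range over the mean squared error. A minimal numpy sketch of the standard definition:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means the synthesized
    image is closer to the reference (full-count) image."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
print(psnr(ref, ref))                  # inf for identical images
print(round(psnr(ref, ref + 0.1), 1))  # uniform 0.1 error on [0,1] range -> 20.0 dB
```

Note that PSNR depends on the assumed data range; for PET images one would normalize intensities (or pass the actual dynamic range) before comparing values across studies.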
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Yoko Satoh
- Yamanashi PET Imaging Clinic, Chuo City 409-3821, Japan
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Tomoki Imokawa
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kanae Takahashi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kazunori Kubota
- Department of Radiology, Dokkyo Medical University Saitama Medical Center, Koshigaya 343-8555, Japan
- Hiroshi Onishi
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
54
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830] [PMCID: PMC9703111] [DOI: 10.21037/qims-22-116]
Abstract
BACKGROUND Lowering the dose for positron emission tomography (PET) imaging reduces patients' radiation burden but degrades image quality by increasing noise and reducing imaging detail and quantification accuracy. This paper introduces a method for acquiring high-quality PET images from an ultra-low-dose state to achieve both high image quality and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporated the advantages of the PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from an ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, structural loss, and bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data from the axial head scans of 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR). The improvement ratios of these four quantitative metrics were used to compare the images produced by bi-c-GAN with those of other methods. RESULTS In the four-fold cross-validation, the proposed bi-c-GAN outperformed the three other selected methods (U-net, c-GAN, and multiple-input c-GAN). With bi-c-GAN, at 5% low-dose PET, image quality was higher than that of the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In the hold-out validation, bi-c-GAN improved image quality compared to U-net and c-GAN at both 2.5% and 10% low-dose PET; for example, the PSNR improvement with bi-c-GAN was at least 4.46% at 2.5% low-dose and up to 14.88% at 10% low-dose. Visual examples also showed the higher quality of images generated by the proposed method, demonstrating the denoising and enhancement ability of bi-c-GAN.
CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
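Of the four metrics above, NMSE and CNR have short, widely used definitions that can be sketched directly (the paper's exact formulas may differ; these are the common forms, and the example values are hypothetical):

```python
import numpy as np

def nmse(reference, test):
    """Normalized mean square error: squared error normalized by the
    reference signal energy (lower is better)."""
    return np.sum((reference - test) ** 2) / np.sum(reference ** 2)

def cnr(lesion, background):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI
    (one common definition; exact forms vary across papers)."""
    return abs(lesion.mean() - background.mean()) / background.std()

ref = np.full((4, 4), 2.0)
degraded = ref + 0.2                         # uniform bias of 0.2
print(round(float(nmse(ref, degraded)), 3))  # 16*(0.2^2) / 16*(2^2) = 0.01

lesion = np.full(8, 3.0)
background = np.array([0.0, 2.0])            # mean 1.0, std 1.0
print(cnr(lesion, background))               # |3 - 1| / 1 = 2.0
```

Because NMSE is normalized by the reference energy, it is comparable across patients with different uptake levels, which is why it appears alongside PSNR rather than plain MSE.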
Affiliation(s)
- Hanyu Sun
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
55
Image denoising in the deep learning era. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10305-2]
56
Liu J, Ren S, Wang R, Mirian N, Tsai YJ, Kulon M, Pucar D, Chen MK, Liu C. Virtual high-count PET image generation using a deep learning method. Med Phys 2022; 49:5830-5840. [PMID: 35880541] [PMCID: PMC9474624] [DOI: 10.1002/mp.15867]
Abstract
PURPOSE Recently, deep learning-based methods have been established to denoise low-count positron emission tomography (PET) images and predict their standard-count counterparts, enabling reduction of injected dose and scan time while maintaining image quality for equivalent lesion detectability and clinical diagnosis. In clinical settings, however, the majority of scans are still acquired using a standard injected dose with a standard scan time. In this work, we applied a 3D U-Net to reduce the noise of standard-count PET images and obtain virtual-high-count (VHC) PET images, in order to identify the potential benefits of the obtained VHC PET images. METHODS The training datasets, with down-sampled standard-count PET images as network input and high-count images as the desired output, were derived from 27 whole-body PET datasets acquired using a 90-min dynamic scan. The down-sampled standard-count PET images were rebinned to match the noise level of 195 clinical static PET datasets, by matching the normalized standard deviation (NSTD) inside 3D liver regions of interest (ROIs). Cross-validation was performed on the 27 PET datasets. Normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and standardized uptake value (SUV) bias of lesions were used for evaluation of the standard-count and VHC PET images, with the real high-count PET image of 90 min as the gold standard. In addition, the network trained with the 27 dynamic PET datasets was applied to the 195 clinical static datasets to obtain VHC PET images. The NSTD and mean/max SUV of hypermetabolic lesions in the standard-count and VHC PET images were evaluated. Three experienced nuclear medicine physicians evaluated the overall image quality of the standard-count and VHC images of 50 patients randomly selected from the 195 and graded them on a 5-point scale. A Wilcoxon signed-rank test was used to compare differences in the grading of the standard-count and VHC images.
RESULTS The cross-validation results showed that the VHC PET images had better quantitative metric scores than the standard-count PET images. The mean/max SUVs of 35 lesions did not show a statistically significant difference between the standard-count and true high-count PET images, nor between the VHC and true high-count PET images. For the 195 clinical datasets, the VHC PET images had a significantly lower NSTD than the standard-count images, and the mean/max SUVs of 215 hypermetabolic lesions showed no statistically significant difference between the VHC and standard-count images. In the image quality evaluation, the standard-count and VHC images received scores (mean ± standard deviation) of 3.34 ± 0.80 and 4.26 ± 0.72 from Physician 1, 3.02 ± 0.87 and 3.96 ± 0.73 from Physician 2, and 3.74 ± 1.10 and 4.58 ± 0.57 from Physician 3, respectively; the VHC images were consistently ranked higher, and the Wilcoxon signed-rank test indicated a significant difference in image quality between the standard-count and VHC images. CONCLUSIONS A deep learning method was proposed to convert standard-count images to VHC images. The VHC images had a reduced noise level, showed no significant difference in mean/max SUV from the standard-count images, and improved image quality for better lesion detectability and clinical diagnosis.
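The noise-matching step above hinges on the NSTD of a liver ROI, i.e. the coefficient of variation of ROI voxel values used as a surrogate noise level. A hedged sketch of that matching logic, with deterministic toy ROIs (the paper's rebinning operates on real count data):

```python
import numpy as np

def nstd(roi_voxels):
    """Normalized standard deviation (std/mean) of liver-ROI voxel
    values, used as a surrogate for image noise level."""
    v = np.asarray(roi_voxels, dtype=float)
    return v.std() / v.mean()

def match_noise(rebinned_frames, target_nstd):
    """Pick the rebinned frame whose ROI noise best matches the target."""
    return min(rebinned_frames, key=lambda f: abs(nstd(f) - target_nstd))

# Hypothetical ROIs with NSTD 0.1, 0.2 and 0.4 respectively.
frames = [np.array([4.5, 5.5]), np.array([4.0, 6.0]), np.array([3.0, 7.0])]
best = match_noise(frames, target_nstd=0.2)
print(nstd(best))  # 0.2
```

Dividing by the mean makes the measure independent of overall uptake, so frames from the dynamic scan can be matched against static clinical scans with different absolute count levels.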
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Sijin Ren
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Niloufarsadat Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Michal Kulon
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
57
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [PMID: 35809971] [DOI: 10.1016/j.artmed.2022.102332]
58
Schramm G. Reconstruction-free positron emission imaging: Fact or fiction? Front Nucl Med 2022; 2:936091. [PMID: 39354988] [PMCID: PMC11440944] [DOI: 10.3389/fnume.2022.936091]
Affiliation(s)
- Georg Schramm
- Division of Nuclear Medicine, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
59
Artificial intelligence-based PET image acquisition and reconstruction. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00508-6]
60
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556]
61
Daveau RS, Law I, Henriksen OM, Hasselbalch SG, Andersen UB, Anderberg L, Højgaard L, Andersen FL, Ladefoged CN. Deep learning based low-activity PET reconstruction of [11C]PiB and [18F]FE-PE2I in neurodegenerative disorders. Neuroimage 2022; 259:119412. [PMID: 35753592] [DOI: 10.1016/j.neuroimage.2022.119412]
Abstract
PURPOSE Positron emission tomography (PET) can support a diagnosis of neurodegenerative disorder by identifying disease-specific pathologies. Our aim was to investigate the feasibility of using activity reduction in clinical [18F]FE-PE2I and [11C]PiB PET/CT scans, simulating a lower injected activity or a shorter scanning time, in combination with AI-assisted denoising. METHODS A total of 162 patients with clinically uncertain Alzheimer's disease underwent amyloid [11C]PiB PET/CT, and 509 patients referred for clinically uncertain Parkinson's disease underwent dopamine transporter (DAT) [18F]FE-PE2I PET/CT. Simulated low-activity data were obtained by randomly sampling 5% of the events from the list-mode file and by extracting a 5% time window in the middle of the scan. A three-dimensional convolutional neural network (CNN) was trained to denoise the resulting PET images for each disease cohort. RESULTS Noise reduction of the low-activity PET images was successful for both cohorts using 5% of the original activity, with improvement in visual quality and in all similarity metrics with respect to the ground-truth images. Clinically relevant metrics extracted from the low-activity images deviated <2% from the ground-truth values, and this deviation was not significantly changed when the metrics were extracted from the denoised images. CONCLUSION The presented models were based on the same network architecture and proved to be a robust tool for denoising brain PET images with two widely different tracer distributions (the delocalized [11C]PiB and the highly localized [18F]FE-PE2I). This broad and robust applicability makes the presented network a good choice for improving the quality of brain images to the level of standard-activity images without degrading clinical metric extraction, allowing reduced dose or scan time in PET/CT to be implemented clinically.
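Randomly keeping 5% of list-mode events, as described above, is statistically equivalent to thinning a Poisson process, so the thinned data behave like a genuine 5%-activity acquisition. A hedged sketch with a hypothetical timestamp array (real list-mode files are vendor-specific binary formats carrying crystal indices, energies, etc.):

```python
import numpy as np

def thin_events(event_times, fraction, rng):
    """Randomly keep `fraction` of detected coincidence events,
    simulating a proportionally lower injected activity."""
    keep = rng.random(len(event_times)) < fraction
    return event_times[keep]

rng = np.random.default_rng(42)
events = rng.uniform(0.0, 600.0, size=1_000_000)  # hypothetical timestamps (s)
low = thin_events(events, 0.05, rng)
print(abs(len(low) / len(events) - 0.05) < 0.001)  # ~5% of events retained
```

The 5% time-window variant mentioned in the abstract would instead select `events` falling inside a contiguous 30 s interval of the 600 s scan, which keeps count statistics but shortens the effective acquisition.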
Affiliation(s)
- Raphaël S Daveau
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Ian Law
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Otto Mølby Henriksen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Ulrik Bjørn Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Lasse Anderberg
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Liselotte Højgaard
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
62
Pan B, Qi N, Meng Q, Wang J, Peng S, Qi C, Gong NJ, Zhao J. Ultra high speed SPECT bone imaging enabled by a deep learning enhancement method: a proof of concept. EJNMMI Phys 2022; 9:43. [PMID: 35698006] [PMCID: PMC9192886] [DOI: 10.1186/s40658-022-00472-0]
Abstract
Background To generate high-quality bone scan SPECT images from SPECT images acquired in only 1/7 of the scan time, using a deep learning-based enhancement method. Materials and methods Normal-dose (925-1110 MBq) clinical technetium-99m methyl diphosphonate (99mTc-MDP) SPECT/CT images and corresponding SPECT/CT images acquired in 1/7 of the scan time, from 20 adult patients with bone disease and a phantom, were collected to develop a lesion-attention weighted U2-Net (Qin et al., Pattern Recognit 106:107404, 2020), which produces high-quality SPECT images from fast SPECT/CT images. The quality of the SPECT images synthesized by different deep learning models was compared using PSNR and SSIM. Clinical evaluation on a 5-point Likert scale (5 = excellent) was performed by two experienced nuclear medicine physicians; average scores and Wilcoxon tests were used to assess the image quality of the 1/7-time SPECT, DL-enhanced SPECT and standard SPECT. SUVmax, SUVmean, SSIM and PSNR of each detectable sphere filled with imaging agent were measured and compared across the different images. Results The U2-Net-based model reached the best PSNR (40.8) and SSIM (0.788) compared with other advanced deep learning methods. The clinical evaluation showed that the quality of the synthesized SPECT images was much higher than that of the fast SPECT images (P < 0.05). Compared to the standard SPECT images, the enhanced images exhibited the same general image quality (P > 0.999), similar detail of 99mTc-MDP distribution (P = 0.125) and the same diagnostic confidence (P = 0.1875). Four, five and six spheres could be distinguished on the 1/7-time SPECT, DL-enhanced SPECT and standard SPECT, respectively. The DL-enhanced phantom image outperformed the 1/7-time SPECT in SUVmax, SUVmean, SSIM and PSNR in the quantitative assessment. Conclusions Our proposed method yields significant image quality improvement in noise level, anatomical detail and SUV accuracy, enabling applications of ultrafast SPECT bone imaging in real clinical settings.
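The sphere measurements above report SUVmax and SUVmean, which (for quantitative SPECT as for PET) are the maximum and mean of voxel activity concentrations normalized by injected activity per unit body weight. A hedged sketch with hypothetical voxel values, assuming the common 1 g/mL tissue-density convention:

```python
import numpy as np

def suv(conc_kbq_per_ml, injected_mbq, weight_kg):
    """Standardized uptake value, assuming tissue density of 1 g/mL:
    SUV = concentration / (injected activity / body weight)."""
    dose_kbq = injected_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return conc_kbq_per_ml / (dose_kbq / weight_g)

# Hypothetical voxel values (kBq/mL) inside one phantom sphere.
sphere = np.array([4.0, 5.0, 6.0])
vals = suv(sphere, injected_mbq=925.0, weight_kg=70.0)
print(round(float(vals.max()), 2), round(float(vals.mean()), 2))  # 0.45 0.38
```

SUVmax uses a single voxel and is therefore the quantity most sensitive to the noise level, which is why it is a natural check that the DL enhancement does not bias uptake quantification.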
Affiliation(s)
- Boyang Pan
- RadioDynamic Healthcare, Shanghai, China
- Na Qi
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Qingyuan Meng
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Siyue Peng
- RadioDynamic Healthcare, Shanghai, China
- Nan-Jie Gong
- Vector Lab for Intelligent Medical Imaging and Neural Engineering, International Innovation Center of Tsinghua University, No. 602 Tongpu Street, Putuo District, Shanghai, China
- Jun Zhao
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
63
Abstract
Purpose To evaluate the clinical feasibility of high-resolution dedicated breast positron emission tomography (dbPET) with a genuinely low dose of 18F-2-fluorodeoxy-d-glucose (18F-FDG) by comparison with images acquired at the full FDG dose. Materials and methods Nine women with no history of breast cancer, previously scanned with dbPET after injection of a clinical 18F-FDG dose (3 MBq/kg), were enrolled. They were injected with 50% of the clinical 18F-FDG dose and scanned with dbPET for 10 min per breast at 60 and 90 min after injection. To investigate the effect of scan start time and acquisition time on image quality, list-mode data were divided into 1, 3, 5, and 7 min (and 10 min for the 50% FDG injection) from the start of acquisition and reconstructed. The reconstructed images were compared visually and quantitatively for contrast between mammary gland and fat (contrast) and for the coefficient of variation (CV) in the mammary gland. Results In the visual evaluation, the contrast between mammary gland and fat in images acquired at the 50% dose for 7 min was comparable to, and even smoother than, that in images acquired at the 100% dose. No visual difference was found between 50%-dose images with scan start times of 60 and 90 min after injection. Quantitative evaluation showed slightly lower contrast in the images at 60 min after the 50% dose, with no difference between acquisition times. There was no difference in CV between conditions; however, smoothness decreased with shorter acquisition times in all conditions. Conclusions The quality of dbPET images at a 50% FDG dose was high enough for clinical application. Although the optimal scan start time for improved lesion-to-background mammary gland contrast could not be determined in this study, it will be clarified in future studies of breast cancer patients.
64
Deep Learning-Based Denoising in Brain Tumor CHO PET: Comparison with Traditional Approaches. Appl Sci (Basel) 2022; 12:5187. [DOI: 10.3390/app12105187]
Abstract
18F-choline (CHO) PET images remain noisy despite minimal physiological activity in the normal brain, and this study developed a deep learning-based denoising algorithm for brain tumor CHO PET. Thirty-nine presurgical CHO PET/CT datasets were retrospectively collected from patients with pathologically confirmed primary diffuse glioma. Two conventional denoising methods, namely block-matching and 3D filtering (BM3D) and non-local means (NLM), and two deep learning-based approaches, namely Noise2Noise (N2N) and Noise2Void (N2V), were established for image denoising; the deep learning methods were developed without paired data. All algorithms improved the image quality to a certain extent, with N2N demonstrating the best contrast-to-noise ratio (CNR) (4.05 ± 3.45), CNR improvement ratio (13.60% ± 2.05%) and the lowest entropy (1.68 ± 0.17) compared with the other approaches. Little change was identified in traditional tumor PET features, including maximum standardized uptake value (SUVmax), SUVmean and total lesion activity (TLA), while the tumor-to-normal (T/N) ratio increased owing to the reduced noise. These results suggest that the N2N algorithm achieves sufficient denoising performance while preserving the original features of tumors, and may generalize to abundant brain tumor PET images.
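The entropy figure reported above is a histogram-based smoothness measure: a denoised image concentrates its grey levels into fewer histogram bins and therefore has lower Shannon entropy. A minimal sketch (the bin count is an assumption, not taken from the paper):

```python
import numpy as np

def image_entropy(image, bins=64):
    """Shannon entropy (bits) of the grey-level histogram;
    lower values indicate a smoother, less noisy image."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is defined as 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 0.5)
noisy = flat + 0.1 * rng.standard_normal((64, 64))
print(image_entropy(flat) < image_entropy(noisy))  # constant image -> lower entropy
```

Entropy alone rewards over-smoothing, which is why the study pairs it with CNR and the tumor SUV features to check that denoising has not erased lesion contrast.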
65
Smith NM, Ford JN, Haghdel A, Glodzik L, Li Y, D'Angelo D, RoyChoudhury A, Wang X, Blennow K, de Leon MJ, Ivanidze J. Statistical Parametric Mapping in Amyloid Positron Emission Tomography. Front Aging Neurosci 2022; 14:849932. [PMID: 35547630] [PMCID: PMC9083453] [DOI: 10.3389/fnagi.2022.849932]
Abstract
Alzheimer's disease (AD), the most common cause of dementia, has limited treatment options. Emerging disease modifying therapies are targeted at clearing amyloid-β (Aβ) aggregates and slowing the rate of amyloid deposition. However, amyloid burden is not routinely evaluated quantitatively for purposes of disease progression and treatment response assessment. Statistical Parametric Mapping (SPM) is a technique comparing single-subject Positron Emission Tomography (PET) to a healthy cohort that may improve quantification of amyloid burden and diagnostic performance. While primarily used in 2-[18F]-fluoro-2-deoxy-D-glucose (FDG)-PET, SPM's utility in amyloid PET for AD diagnosis is less established and uncertainty remains regarding optimal normal database construction. Using commercially available SPM software, we created a database of 34 non-APOE ε4 carriers with normal cognitive testing (MMSE > 25) and negative cerebrospinal fluid (CSF) AD biomarkers. We compared this database to 115 cognitively normal subjects with variable AD risk factors. We hypothesized that SPM based on our database would identify more positive scans in the test cohort than the qualitatively rated [11C]-PiB PET (QR-PiB), that SPM-based interpretation would correlate better with CSF Aβ42 levels than QR-PiB, and that regional z-scores of specific brain regions known to be involved early in AD would be predictive of CSF Aβ42 levels. Fisher's exact test and the kappa coefficient assessed the agreement between SPM, QR-PiB PET, and CSF biomarkers. Logistic regression determined if the regional z-scores predicted CSF Aβ42 levels. An optimal z-score cutoff was calculated using Youden's index. We found SPM identified more positive scans than QR-PiB PET (19.1 vs. 9.6%) and that SPM correlated more closely with CSF Aβ42 levels than QR-PiB PET (kappa 0.13 vs. 0.06) indicating that SPM may have higher sensitivity than standard QR-PiB PET images. 
Regional analysis demonstrated the z-scores of the precuneus, anterior cingulate and posterior cingulate were predictive of CSF Aβ42 levels [OR (95% CI) 2.4 (1.1, 5.1) p = 0.024; 1.8 (1.1, 2.8) p = 0.020; 1.6 (1.1, 2.5) p = 0.026]. This study demonstrates the utility of using SPM with a "true normal" database and suggests that SPM enhances diagnostic performance in AD in the clinical setting through its quantitative approach, which will be increasingly important with future disease-modifying therapies.
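The core computation described above, single-subject regional z-scores against a "true normal" database and an optimal z-score cutoff chosen by Youden's index, can be sketched as follows. This is a minimal illustration, not the commercial SPM software used in the study; the function names and toy data are hypothetical.

```python
import numpy as np

def regional_z_scores(subject_suvr, normal_db):
    """Single-subject regional z-scores against a normal database.

    subject_suvr: dict region -> SUVR value for the test subject
    normal_db: dict region -> 1-D array of SUVR values from the
               'true normal' cohort
    """
    return {
        region: (subject_suvr[region] - normal_db[region].mean())
                / normal_db[region].std(ddof=1)
        for region in subject_suvr
    }

def youden_cutoff(scores, labels):
    """Optimal cutoff maximising Youden's J = sensitivity + specificity - 1.

    scores: 1-D array of z-scores; labels: 1 = biomarker-positive.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    best_j, best_cut = -1.0, None
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

In practice the z-scores would be computed voxel-wise or per atlas region from spatially normalised PET, but the statistical idea is the same.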
Affiliation(s)
- Natasha M. Smith
- Department of Radiology and MD Program, Weill Cornell Medicine, New York City, NY, United States
- Jeremy N. Ford
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Arsalan Haghdel
- Department of Radiology and MD Program, Weill Cornell Medicine, New York City, NY, United States
- Lidia Glodzik
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Yi Li
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Debra D’Angelo
- Department of Population Health Sciences, Weill Cornell Medicine, New York City, NY, United States
- Arindam RoyChoudhury
- Department of Population Health Sciences, Weill Cornell Medicine, New York City, NY, United States
- Xiuyuan Wang
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Kaj Blennow
- Department of Neuroscience and Physiology, University of Gothenburg, Mölndal, Sweden
- Clinical Neurochemistry Laboratory, Sahlgrenska University Hospital, Mölndal, Sweden
- Mony J. de Leon
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Jana Ivanidze
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
66
Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611] [DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in an experimental environment over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles that address PET imaging applications such as attenuation correction, de-noising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly being adopted in PET imaging tasks. However, specific limitations must be addressed before they can reach their full potential and gain the medical community's trust in everyday clinical practice.
67
Ross DE, Seabaugh J, Seabaugh JM, Barcelona J, Seabaugh D, Wright K, Norwind L, King Z, Graham TJ, Baker J, Lewis T. Updated Review of the Evidence Supporting the Medical and Legal Use of NeuroQuant® and NeuroGage® in Patients With Traumatic Brain Injury. Front Hum Neurosci 2022; 16:715807. [PMID: 35463926] [PMCID: PMC9027332] [DOI: 10.3389/fnhum.2022.715807]
Abstract
Over 40 years of research have shown that traumatic brain injury affects brain volume. However, technical and practical limitations made it difficult to detect brain volume abnormalities in patients suffering from chronic effects of mild or moderate traumatic brain injury. This situation improved in 2006 with the FDA clearance of NeuroQuant®, a commercially available, computer-automated software program for measuring MRI brain volume in human subjects. More recent strides were made with the introduction of NeuroGage®, commercially available software that is based on NeuroQuant® and extends its utility in several ways. Studies using these and similar methods have found that most patients with chronic mild or moderate traumatic brain injury have brain volume abnormalities, and several of these studies found, surprisingly, more abnormal enlargement than atrophy. More generally, 102 peer-reviewed studies have supported the reliability and validity of NeuroQuant® and NeuroGage®. Furthermore, this updated version of a previous review addresses whether NeuroQuant® and NeuroGage® meet the Daubert standard for admissibility in court. It concludes that NeuroQuant® and NeuroGage® meet the Daubert standard based on their reliability, validity, and objectivity. Due to the improvements in technology over the years, these brain volumetric techniques are practical and readily available for clinical or forensic use, and thus they are important tools for detecting signs of brain injury.
Affiliation(s)
- David E. Ross
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Department of Psychiatry, Virginia Commonwealth University, Richmond, VA, United States
- John Seabaugh
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Department of Radiology, St. Mary’s Hospital School of Medical Imaging, Richmond, VA, United States
- Jan M. Seabaugh
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Justis Barcelona
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Daniel Seabaugh
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Katherine Wright
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Department of Psychiatry, Virginia Commonwealth University, Richmond, VA, United States
- Lee Norwind
- Karp, Wigodsky, Norwind, Kudel & Gold, P.A., Rockville, MD, United States
- Zachary King
- Karp, Wigodsky, Norwind, Kudel & Gold, P.A., Rockville, MD, United States
- Joseph Baker
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Department of Neuroscience, Christopher Newport University, Newport News, VA, United States
- Tanner Lewis
- Virginia Institute of Neuropsychiatry, Midlothian, VA, United States
- NeuroGage LLC, Midlothian, VA, United States
- Department of Undergraduate Studies, University of Virginia, Charlottesville, VA, United States
68
Zhu G, Chen H, Jiang B, Chen F, Xie Y, Wintermark M. Application of Deep Learning to Ischemic and Hemorrhagic Stroke Computed Tomography and Magnetic Resonance Imaging. Semin Ultrasound CT MR 2022; 43:147-152. [PMID: 35339255] [DOI: 10.1053/j.sult.2022.02.004]
Abstract
Deep learning (DL) algorithms hold great potential in the field of stroke imaging. They have been applied not only on the "downstream" side, such as lesion detection, treatment decision making, and outcome prediction, but also on the "upstream" side for the generation and enhancement of stroke imaging. This paper aims to comprehensively overview the common applications of DL to stroke imaging. In the future, more standardized imaging datasets and more extensive studies are needed to establish and validate the role of DL in stroke imaging.
Affiliation(s)
- Guangming Zhu
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
- Hui Chen
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
- Bin Jiang
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
- Fei Chen
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yuan Xie
- Subtle Medical Inc, Menlo Park, CA
- Max Wintermark
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
69
Monsour R, Dutta M, Mohamed AZ, Borkowski A, Viswanadhan NA. Neuroimaging in the Era of Artificial Intelligence: Current Applications. Fed Pract 2022; 39:S14-S20. [PMID: 35765692] [PMCID: PMC9227741] [DOI: 10.12788/fp.0231]
Abstract
BACKGROUND Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI increases efficiency and reduces errors, making it a valuable resource for physicians. With the increasing amount of data processing and image interpretation required, the ability to use AI to augment and aid the radiologist could improve the quality of patient care. OBSERVATIONS AI can predict patient wait times, which may allow more efficient patient scheduling. Additionally, AI can save time for repeat magnetic resonance neuroimaging and reduce the time spent during imaging. AI has the ability to read computed tomography, magnetic resonance imaging, and positron emission tomography with reduced or without contrast without significant loss in sensitivity for detecting lesions. Neuroimaging does raise important ethical considerations and is subject to bias. It is vital that users understand the practical and ethical considerations of the technology. CONCLUSIONS The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI's use for detecting neurologic conditions holds promise in combatting ever increasing imaging volumes and providing timely diagnoses.
Affiliation(s)
- Robert Monsour
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Mudit Dutta
- University of South Florida Morsani College of Medicine, Tampa, Florida
- Andrew Borkowski
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
- Narayan A. Viswanadhan
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
70
Generative Adversarial Networks in Brain Imaging: A Narrative Review. J Imaging 2022; 8:83. [PMID: 35448210] [PMCID: PMC9028488] [DOI: 10.3390/jimaging8040083]
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology as it demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative Adversarial Networks have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a new approach to deep learning that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, allowing new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression models, and brain decoding. In this narrative review, we will provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
71
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
72
Tian Q, Li Z, Fan Q, Polimeni JR, Bilgic B, Salat DH, Huang SY. SDnDTI: Self-supervised deep learning-based denoising for diffusion tensor MRI. Neuroimage 2022; 253:119033. [PMID: 35240299] [DOI: 10.1016/j.neuroimage.2022.119033]
Abstract
Diffusion tensor magnetic resonance imaging (DTI) is a widely adopted neuroimaging method for the in vivo mapping of brain tissue microstructure and white matter tracts. Nonetheless, the noise in the diffusion-weighted images (DWIs) decreases the accuracy and precision of DTI-derived microstructural parameters and leads to prolonged acquisition time for achieving improved signal-to-noise ratio (SNR). Deep learning-based image denoising using convolutional neural networks (CNNs) has superior performance but often requires additional high-SNR data for supervising the training of CNNs, which reduces the feasibility of supervised learning-based denoising in practice. In this work, we develop a self-supervised deep learning-based method entitled "SDnDTI" for denoising DTI data, which does not require additional high-SNR data for training. Specifically, SDnDTI divides multi-directional DTI data into many subsets of six DWI volumes and transforms the DWIs from each subset to the same set of diffusion-encoding directions through the diffusion tensor model, generating multiple repetitions of DWIs with identical image contrasts but different noise observations. SDnDTI removes noise by first denoising each repetition of DWIs using a deep 3-dimensional CNN, with the higher-SNR average of all repetitions as the training target, following the same approach as standard supervised learning-based denoising methods, and then averaging the CNN-denoised images to achieve higher SNR. The denoising efficacy of SDnDTI is demonstrated in terms of the similarity of the output images and resultant DTI metrics to the ground truth generated using substantially more DWI volumes, on two datasets with different spatial resolutions, b-values and numbers of input DWI volumes provided by the Human Connectome Project (HCP) and the Lifespan HCP in Aging. The SDnDTI results preserve image sharpness and textural details and substantially improve upon those from the raw data.
The results of SDnDTI are comparable to those from supervised learning-based denoising and outperform those from state-of-the-art conventional denoising algorithms including BM4D, AONLM and MPPCA. By leveraging domain knowledge of diffusion MRI physics, SDnDTI makes it easier to use CNN-based denoising methods in practice and has the potential to benefit a wider range of research and clinical applications that require accelerated DTI acquisition and high-quality DTI data for mapping of tissue microstructure, fiber tracts and structural connectivity in the living human brain.
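The subset-wise target construction described above (fit a diffusion tensor to each six-direction subset, synthesize DWIs along a common set of directions, and average the repetitions into a higher-SNR training target) can be sketched for a single voxel as follows. The CNN denoising step is omitted, and all function names are illustrative rather than the authors' code.

```python
import numpy as np

def design_row(g):
    # Row of the DTI design matrix for gradient direction g = (gx, gy, gz)
    gx, gy, gz = g
    return np.array([gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz])

def fit_tensor(signals, s0, bvecs, b):
    # Log-linear least-squares fit of the 6 unique tensor elements
    # [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] from >= 6 DWI measurements.
    A = np.stack([design_row(g) for g in bvecs])
    y = -np.log(np.clip(signals / s0, 1e-6, None)) / b
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return d

def synthesize(d, s0, target_bvecs, b):
    # Predict DWI signals along new directions from the fitted tensor:
    # S(g) = S0 * exp(-b * g^T D g)
    A = np.stack([design_row(g) for g in target_bvecs])
    return s0 * np.exp(-b * (A @ d))

def sdn_targets(subsets, s0, subset_bvecs, target_bvecs, b):
    # Transform each six-direction subset onto common target directions,
    # then average the repetitions to form the higher-SNR training target.
    reps = [synthesize(fit_tensor(s, s0, g, b), s0, target_bvecs, b)
            for s, g in zip(subsets, subset_bvecs)]
    return np.mean(reps, axis=0)
```

In SDnDTI each repetition would additionally be passed through a 3-D CNN trained against this averaged target before the final averaging.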
Affiliation(s)
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States.
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, PR China
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Jonathan R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- David H Salat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
73
Chen K, Adeyeri O, Toueg T, Zeineh M, Mormino E, Khalighi M, Zaharchuk G. Investigating Simultaneity for Deep Learning-Enhanced Actual Ultra-Low-Dose Amyloid PET/MR Imaging. AJNR Am J Neuroradiol 2022; 43:354-360. [PMID: 35086799] [PMCID: PMC8910791] [DOI: 10.3174/ajnr.a7410]
Abstract
BACKGROUND AND PURPOSE Diagnostic-quality amyloid PET images can be created with deep learning using actual ultra-low-dose PET images and simultaneous structural MR imaging. Here, we investigated whether simultaneity is required; if not, MR imaging-assisted ultra-low-dose PET imaging could be performed with separate PET/CT and MR imaging acquisitions. MATERIALS AND METHODS We recruited 48 participants: thirty-two (20 women; mean, 67.7 [SD, 7.9] years of age) were used for pretraining; 328 (SD, 32) MBq of [18F] florbetaben was injected. Sixteen participants (6 women; mean, 71.4 [SD, 8.7] years of age) were scanned in 2 sessions, with 6.5 (SD, 3.8) and 300 (SD, 14) MBq of [18F] florbetaben injected, respectively. Structural MR imaging was acquired simultaneously with PET (90-110 minutes postinjection) on an integrated PET/MR imaging system in the 2 sessions. Multiple U-Net-based deep networks were trained to create diagnostic PET images. For each method, training was done with the ultra-low-dose PET as input, combined with MR imaging from either the ultra-low-dose session (simultaneous) or the standard-dose PET session (nonsimultaneous). Image quality of the enhanced and ultra-low-dose PET images was evaluated using quantitative signal-processing methods, standardized uptake value ratio correlation, and clinical reads. RESULTS Qualitatively, the enhanced images resembled the standard-dose image for both simultaneous and nonsimultaneous conditions. Three quantitative metrics showed significant improvement for all networks and no differences due to simultaneity. Standardized uptake value ratio correlation was high across different image types and network training methods, and 31/32 enhanced image pairs were read similarly. CONCLUSIONS This work suggests that accurate amyloid PET images can be generated using enhanced ultra-low-dose PET and either nonsimultaneous or simultaneous MR imaging, broadening the utility of ultra-low-dose amyloid PET imaging.
Affiliation(s)
- K.T. Chen
- From the Department of Radiology (K.T.C., M.Z., M.K., G.Z.), Stanford University, Stanford, California, and the Department of Biomedical Engineering (K.T.C.), National Taiwan University, Taipei, Taiwan
- O. Adeyeri
- Department of Computer Science (O.A.), Salem State University, Salem, Massachusetts
- T.N. Toueg
- Department of Neurology and Neurological Sciences (T.N.T., E.M.), Stanford University, Stanford, California
- M. Zeineh
- From the Department of Radiology (K.T.C., M.Z., M.K., G.Z.), Stanford University, Stanford, California
- E. Mormino
- Department of Neurology and Neurological Sciences (T.N.T., E.M.), Stanford University, Stanford, California
- M. Khalighi
- From the Department of Radiology (K.T.C., M.Z., M.K., G.Z.), Stanford University, Stanford, California
- G. Zaharchuk
- From the Department of Radiology (K.T.C., M.Z., M.K., G.Z.), Stanford University, Stanford, California
74
Bone and Soft Tissue Tumors. Radiol Clin North Am 2022; 60:339-358. [DOI: 10.1016/j.rcl.2021.11.011]
75
Gong K, Catana C, Qi J, Li Q. Direct Reconstruction of Linear Parametric Images From Dynamic PET Using Nonlocal Deep Image Prior. IEEE Trans Med Imaging 2022; 41:680-689. [PMID: 34652998] [PMCID: PMC8956450] [DOI: 10.1109/tmi.2021.3120913]
Abstract
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels are available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long enough. In this work, we proposed an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, which is readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint and was also utilized to construct a kernel layer to perform non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1×1×1 convolution layer. Evaluations based on dynamic datasets of 18F-FDG and 11C-PiB tracers show that the proposed framework can outperform the traditional and the kernel method-based direct reconstruction methods.
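The linear kinetic model mentioned above, in the Patlak case C(t) = Ki * Int_0^t Cp(tau) dtau + Vb * Cp(t), is what the network encodes as a 1×1×1 convolution over the two basis time courses. A minimal standalone sketch of the same model, fitted per voxel by ordinary least squares on simulated data (hypothetical names, not the authors' implementation), is:

```python
import numpy as np

def patlak_fit(tac, cp, t):
    """Estimate (Ki, Vb) for one voxel from the Patlak model
    C(t) = Ki * integral_0^t Cp + Vb * Cp(t), via linear least squares.

    tac: voxel time-activity curve, cp: plasma input function,
    t: frame times (same length).
    """
    # Running integral of the input function (trapezoidal rule)
    cp_int = np.concatenate(
        [[0.0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
    # Two-column design matrix: [integral of Cp, Cp]
    A = np.stack([cp_int, cp], axis=1)
    (ki, vb), *_ = np.linalg.lstsq(A, tac, rcond=None)
    return ki, vb
```

In the paper this per-voxel regression is replaced by a 1×1×1 convolution layer inside the network, so the kinetic parameters are produced jointly with the reconstruction rather than fitted afterwards.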
76
de Vries BM, Golla SSV, Zwezerijnen GJC, Hoekstra OS, Jauw YWS, Huisman MC, van Dongen GAMS, Menke-van der Houven van Oordt WC, Zijlstra-Baalbergen JJM, Mesotten L, Boellaard R, Yaqub M. 3D Convolutional Neural Network-Based Denoising of Low-Count Whole-Body 18F-Fluorodeoxyglucose and 89Zr-Rituximab PET Scans. Diagnostics (Basel) 2022; 12:596. [PMID: 35328149] [PMCID: PMC8946936] [DOI: 10.3390/diagnostics12030596]
Abstract
Acquisition time and injected activity of 18F-fluorodeoxyglucose (18F-FDG) PET should ideally be reduced. However, this decreases the signal-to-noise ratio (SNR), which impairs the diagnostic value of these PET scans. In addition, 89Zr-antibody PET is known to have a low SNR. To improve the diagnostic value of these scans, a Convolutional Neural Network (CNN) denoising method is proposed. The aim of this study was therefore to develop CNNs to increase the SNR of low-count 18F-FDG and 89Zr-antibody PET. Super-low-count, low-count and full-count 18F-FDG PET scans from 60 primary lung cancer patients and full-count 89Zr-rituximab PET scans from five patients with non-Hodgkin lymphoma were acquired. CNNs were built to capture the features of and to denoise the PET scans. Additionally, Gaussian smoothing (GS) and bilateral filtering (BF) were evaluated. The performance of the denoising approaches was assessed based on the tumour recovery coefficient (TRC), the coefficient of variation (COV; level of noise), and a qualitative assessment by two nuclear medicine physicians. The CNNs yielded a higher TRC and a comparable or lower COV than GS and BF, and were also the method preferred by the two observers for both 18F-FDG and 89Zr-rituximab PET. The CNNs improved the SNR of low-count 18F-FDG and 89Zr-rituximab PET, with clinical performance almost similar to, or better than, that of the full-count PET. Additionally, the CNNs showed better performance than GS and BF.
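The two quantitative metrics used above can be sketched as follows, under the assumption that TRC is taken as mean tumour uptake in the denoised scan relative to the full-count reference and COV as standard deviation over mean in a homogeneous background region; the paper's exact definitions may differ, and the function names are illustrative.

```python
import numpy as np

def tumour_recovery_coefficient(denoised, reference, tumour_mask):
    # TRC: mean tumour uptake in the denoised scan relative to the
    # full-count reference (1.0 = perfect recovery).
    return denoised[tumour_mask].mean() / reference[tumour_mask].mean()

def coefficient_of_variation(img, background_mask):
    # COV: noise level as std/mean over a homogeneous background region.
    vals = img[background_mask]
    return vals.std() / vals.mean()
```

A denoiser that preserves lesions while suppressing noise should keep TRC close to 1 while lowering COV relative to the low-count input.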
Affiliation(s)
- Bart M. de Vries
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Correspondence: Tel.: +31-643628806
- Sandeep S. V. Golla
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Gerben J. C. Zwezerijnen
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Otto S. Hoekstra
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Yvonne W. S. Jauw
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Cancer Center Amsterdam, Department of Hematology, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Marc C. Huisman
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Guus A. M. S. van Dongen
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Josée J. M. Zijlstra-Baalbergen
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Cancer Center Amsterdam, Department of Hematology, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Liesbet Mesotten
- Faculty of Medicine and Life Sciences, Hasselt University, Agoralaan Building D, B-3590 Diepenbeek, Belgium
- Department of Nuclear Medicine, Ziekenhuis Oost Limburg, Schiepse Bos 6, B-3600 Genk, Belgium
- Ronald Boellaard
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
- Maqsood Yaqub
- Cancer Center Amsterdam, Department of Radiology and Nuclear Medicine, Vrije Universiteit Amsterdam, Amsterdam UMC, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands
| |
77
Deng F, Li X, Yang F, Sun H, Yuan J, He Q, Xu W, Yang Y, Liang D, Liu X, Mok GSP, Zheng H, Hu Z. Low-Dose 68Ga-PSMA Prostate PET/MRI Imaging Using Deep Learning Based on MRI Priors. Front Oncol 2022;11:818329. [PMID: 35155207] [PMCID: PMC8825350] [DOI: 10.3389/fonc.2021.818329]
Abstract
Background: 68Ga-prostate-specific membrane antigen (PSMA) PET/MRI has become an effective imaging method for prostate cancer. The purpose of this study was to use deep learning methods to perform low-dose image restoration on PSMA PET/MRI and to evaluate the effect of the synthesis on the images and on the medical diagnosis of patients at risk of prostate cancer. Methods: We reviewed the 68Ga-PSMA PET/MRI data of 41 patients. The low-dose PET (LDPET) images of these patients were restored to full-dose PET (FDPET) images through a deep learning method based on MRI priors. The synthesized images were evaluated according to quantitative scores from nuclear medicine doctors and multiple imaging indicators, such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), normalized mean square error (NMSE), and relative contrast-to-noise ratio (RCNR). Results: The clinical quantitative scores of the FDPET images synthesized from 25%- and 50%-dose images based on MRI priors were 3.84±0.36 and 4.03±0.17, respectively, which were higher than the scores of the target images. Correspondingly, the PSNR, SSIM, NMSE, and RCNR values of the FDPET images synthesized from 50%-dose PET images based on MRI priors were 39.88±3.83, 0.896±0.092, 0.012±0.007, and 0.996±0.080, respectively. Conclusion: According to a combination of quantitative scores from nuclear medicine doctors and evaluations with multiple image indicators, synthesizing FDPET images based on MRI priors from 25%- and 50%-dose PET images did not affect the clinical diagnosis of prostate cancer. Prostate cancer patients can therefore undergo 68Ga-PSMA prostate PET/MRI scans with radiation doses reduced by up to 50% through the use of deep learning methods to synthesize FDPET images.
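For readers reproducing the metrics reported in this abstract, PSNR, NMSE, and a simplified SSIM can be sketched in a few lines of NumPy. This is an illustrative implementation, not the evaluation code used in the study; the SSIM here is a single-window (global-statistics) variant, whereas the commonly reported SSIM averages the same statistic over local patches:

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB, relative to the reference image."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse(ref, img):
    """Normalized mean square error: ||ref - img||^2 / ||ref||^2."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    return np.sum((ref - img) ** 2) / np.sum(ref ** 2)

def ssim_global(ref, img, data_range=None):
    """Single-window SSIM from global means, variances, and covariance."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For example, an image offset from its reference by a constant 0.01 over a unit data range yields a PSNR of 40 dB, matching the scale of the values quoted above.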
Affiliation(s)
- Fuquan Deng: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Computer Department, North China Electric Power University, Baoding, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Xiaoyuan Li: Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Fengjiao Yang: Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Hongwei Sun: United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Jianmin Yuan: Central Research Institute, United Imaging Healthcare Group, Shanghai, China
- Qiang He: Central Research Institute, United Imaging Healthcare Group, Shanghai, China
- Weifeng Xu: Computer Department, North China Electric Power University, Baoding, China
- Yongfeng Yang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Xin Liu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Greta S P Mok: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Avenida da Universidade, Macau SAR, China
- Hairong Zheng: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
78
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications. Electronics 2022. [DOI: 10.3390/electronics11040586]
Abstract
Compressed sensing (CS) and its medical applications are active areas of research. In this paper, we review recent work that uses deep learning methods to solve the CS problem for image reconstruction, including medical imaging with computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). We propose a novel framework that unifies traditional iterative algorithms and deep learning approaches. In short, we define two projection operators, one toward the image prior and one toward data consistency, and any reconstruction algorithm can be decomposed into these two parts. Although deep learning methods fall into several categories, they all fit this framework. We establish the relationships between different deep learning reconstruction methods and connect them to traditional methods through the proposed framework. The framework also indicates that the key to solving the CS problem and its medical applications is how to model the image prior. Based on the framework, we analyze current deep learning methods and point out important directions for future research.
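The two-operator decomposition described in this abstract can be made concrete with a toy sketch: alternate a data-consistency step (a gradient step on the measurement residual) with a prior step (here, soft-thresholding for a sparsity prior). With these particular choices the iteration is the classical ISTA algorithm; in deep unrolled methods, a learned denoiser typically replaces the prior step. This is an illustrative example of the framework, not code from the paper:

```python
import numpy as np

def data_consistency_step(x, A, y, step):
    """Move toward data consistency: gradient step on 0.5 * ||Ax - y||^2."""
    return x - step * (A.T @ (A @ x - y))

def prior_step(x, thresh):
    """Move toward the image prior; soft-thresholding encodes a sparsity prior.
    In unrolled deep learning methods, a learned denoiser plays this role."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def reconstruct(A, y, n_iter=500, lam=0.01):
    """Alternate the two operators; with these choices this is exactly ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prior_step(data_consistency_step(x, A, y, step), lam * step)
    return x
```

On a noiseless 3-sparse signal measured with a 50x100 Gaussian matrix, this recovers both the support and (up to the small lasso shrinkage bias) the coefficient values.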
79
Ote K, Hashimoto F. Deep-learning-based fast TOF-PET image reconstruction using direction information. Radiol Phys Technol 2022;15:72-82. [DOI: 10.1007/s12194-022-00652-8]
80
Yu Z, Rahman MA, Jha AK. Investigating the limited performance of a deep-learning-based SPECT denoising approach: An observer-study-based characterization. Proc SPIE Int Soc Opt Eng 2022;12035:120350D. [PMID: 35847481] [PMCID: PMC9286496] [DOI: 10.1117/12.2613134]
Abstract
Multiple objective assessment of image quality (OAIQ)-based studies have reported that several deep learning (DL)-based denoising methods show limited performance on signal-detection tasks. Our goal was to investigate the reasons for this limited performance. To achieve this goal, we conducted a task-based characterization of a DL-based denoising approach for individual signal properties, in the context of evaluating a DL-based approach for denoising single-photon emission computed tomography (SPECT) images. The training data consisted of signals of different sizes and shapes within a clustered lumpy background, imaged with a 2D parallel-hole-collimator SPECT system. The projections were generated at normal and 20% low-count levels, both of which were reconstructed using an ordered-subsets expectation-maximization (OSEM) algorithm. A convolutional neural network (CNN)-based denoiser was trained to process the low-count images. The performance of this CNN was characterized for five different signal sizes and four different signal-to-background ratios (SBRs) by designing each evaluation as a signal-known-exactly/background-known-statistically (SKE/BKS) signal-detection task. Performance on this task was evaluated using an anthropomorphic channelized Hotelling observer (CHO). As in previous studies, we observed that the DL-based denoising method did not improve performance on the signal-detection task, and the observer-study-based characterization demonstrated that it did not do so for any of the signal types. Overall, these results provide new insights into the performance of the DL-based denoising approach as a function of signal size and contrast. More generally, observer-study-based characterization provides a mechanism to evaluate the sensitivity of a method to specific object properties, and may be explored as an analog of characterizations such as the modulation transfer function for linear systems. Finally, this work underscores the need for objective task-based evaluation of DL-based denoising approaches.
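For orientation, a channelized Hotelling observer of the kind used in such task-based evaluations reduces each image to a few channel outputs and applies a Hotelling (linear discriminant) template in that reduced space. The sketch below uses hypothetical difference-of-Gaussians channels and synthetic white-noise data; it is not the anthropomorphic CHO, channel set, or SPECT simulation of the study:

```python
import numpy as np

def dog_channels(size, n_channels=4, sigma0=1.5, ratio=2.0):
    """Difference-of-Gaussians channel profiles on a size x size grid
    (illustrative parameters, not those of the paper)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * ratio ** j, sigma0 * ratio ** (j + 1)
        c = np.exp(-r2 / (2 * s2 ** 2)) - np.exp(-r2 / (2 * s1 ** 2))
        chans.append(c.ravel() / np.linalg.norm(c))
    return np.stack(chans, axis=1)  # shape: (pixels, n_channels)

def cho_detectability(signal_present, signal_absent, channels):
    """CHO detectability index d' for a binary detection task.
    signal_present / signal_absent: (n_images, pixels) arrays."""
    vs = signal_present @ channels                  # channel outputs, signal present
    vn = signal_absent @ channels                   # channel outputs, signal absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)          # mean channel-output difference
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))         # average intra-class covariance
    w = np.linalg.solve(S, dv)                      # Hotelling template (channel space)
    return float(np.sqrt(w @ dv))                   # d' = sqrt(dv^T S^-1 dv)
```

Applied to synthetic images, a higher-contrast signal yields a larger d', which is exactly the kind of signal-property dependence the abstract characterizes.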
Affiliation(s)
- Zitong Yu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Md Ashequr Rahman: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
- Abhinav K. Jha: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA; Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, MO, USA
81
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022;36:133-143. [PMID: 35029818] [DOI: 10.1007/s12149-021-01710-8]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara: Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki: Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto: Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe: Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura: Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
82
Total-body PET. Nucl Med Mol Imaging 2022. [DOI: 10.1016/b978-0-12-822960-6.00118-6]
83
Fischer NH, Lopes van den Broek SI, Herth MM, Diness F. Radiolabeled albumin through SNAr of cysteines as a potential pretargeting theranostic agent. RSC Adv 2022;12:35032-35036. [DOI: 10.1039/d2ra06406e]
Abstract
Human serum albumin was functionalized with a radionuclide by combining SNAr conjugation at Cys34 with CuAAC and inverse electron-demand Diels–Alder reactions, demonstrating a promising bioconjugation strategy for generating theranostics.
Affiliation(s)
- Niklas H. Fischer: Department of Chemistry, Faculty of Science, University of Copenhagen, Universitetsparken 5, Copenhagen 2100, Denmark; Department of Science and Environment, Roskilde University, Universitetsparken 1, Roskilde 4000, Denmark
- Sara I. Lopes van den Broek: Department of Drug Design and Pharmacology, Faculty of Health and Medical Sciences, University of Copenhagen, Jagtvej 160, Copenhagen 2100, Denmark
- Matthias M. Herth: Department of Drug Design and Pharmacology, Faculty of Health and Medical Sciences, University of Copenhagen, Jagtvej 160, Copenhagen 2100, Denmark; Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, Blegdamsvej 9, Copenhagen 2100, Denmark
- Frederik Diness: Department of Chemistry, Faculty of Science, University of Copenhagen, Universitetsparken 5, Copenhagen 2100, Denmark; Department of Science and Environment, Roskilde University, Universitetsparken 1, Roskilde 4000, Denmark
84
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021;1:781868. [PMID: 37492170] [PMCID: PMC10365109] [DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI) is an emerging technology gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, with potential applications ranging from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based reconstruction methods are emphasized, organized by their methodological designs and their performance in handling volumetric imaging data. We expect this review to help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with its assistance.
Affiliation(s)
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China; Peng Cheng Laboratory, Shenzhen, China
- Guohua Cao: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang: School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi: School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
85
Theruvath AJ, Siedek F, Yerneni K, Muehe AM, Spunt SL, Pribnow A, Moseley M, Lu Y, Zhao Q, Gulaka P, Chaudhari A, Daldrup-Link HE. Validation of Deep Learning-based Augmentation for Reduced 18F-FDG Dose for PET/MRI in Children and Young Adults with Lymphoma. Radiol Artif Intell 2021;3:e200232. [PMID: 34870211] [DOI: 10.1148/ryai.2021200232]
Abstract
Purpose To investigate whether a deep learning convolutional neural network (CNN) could enable low-dose fluorine 18 (18F) fluorodeoxyglucose (FDG) PET/MRI with correct treatment response assessment in children and young adults with lymphoma. Materials and Methods In this secondary analysis of prospectively collected data (ClinicalTrials.gov identifier: NCT01542879), 20 patients with lymphoma (mean age, 16.4 years ± 6.4 [standard deviation]) underwent 18F-FDG PET/MRI between July 2015 and August 2019 at baseline and after induction chemotherapy. Full-dose 18F-FDG PET data (3 MBq/kg) were simulated to lower 18F-FDG doses based on the percentage of coincidence events (representing simulated 75%, 50%, 25%, 12.5%, and 6.25% 18F-FDG dose [hereafter, 75%Sim, 50%Sim, 25%Sim, 12.5%Sim, and 6.25%Sim, respectively]). A U.S. Food and Drug Administration-approved CNN was used to augment the simulated low-dose scans to full-dose scans. For each follow-up scan after induction chemotherapy, the standardized uptake value (SUV) response score was calculated as the maximum SUV (SUVmax) of the tumor normalized to the mean liver SUV; tumor response was classified as adequate or inadequate. Sensitivity and specificity in the detection of correct response status were computed using full-dose PET as the reference standard. Results With decreasing simulated radiotracer doses, tumor SUVmax increased. A dose below 75%Sim of the full dose led to erroneous upstaging of adequate responders to inadequate responders (43% [six of 14 patients] for 75%Sim; 93% [13 of 14 patients] for 50%Sim; and 100% [14 of 14 patients] below 50%Sim; P < .05 for all). CNN-enhanced low-dose PET/MRI scans at 75%Sim and 50%Sim enabled correct response assessments for all patients. Use of the CNN augmentation for assessing adequate and inadequate responses resulted in identical sensitivities (100%) and specificities (100%) between the assessments of 100% full-dose PET, augmented 75%Sim, and augmented 50%Sim images. Conclusion CNN enhancement of PET/MRI scans may enable 50% 18F-FDG dose reduction with correct treatment response assessment in children and young adults with lymphoma. Keywords: Pediatrics, PET/MRI, Computer Applications Detection/Diagnosis, Lymphoma, Tumor Response, Whole-Body Imaging, Technology Assessment. Clinical trial registration no. NCT01542879. Supplemental material is available for this article. © RSNA, 2021.
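As a minimal illustration of the response metric described in this abstract, the SUV response score is simply the tumor SUVmax divided by the mean liver SUV. The abstract does not state the adequate/inadequate cutoff, so it is left as an explicit parameter in this sketch (function names and threshold handling are illustrative):

```python
def suv_response_score(tumor_suvmax: float, liver_suv_mean: float) -> float:
    """SUV response score: tumor SUVmax normalized to the mean liver SUV."""
    if liver_suv_mean <= 0:
        raise ValueError("mean liver SUV must be positive")
    return tumor_suvmax / liver_suv_mean

def classify_response(score: float, cutoff: float) -> str:
    """Binary response call; the study's cutoff is not given in the abstract,
    so the caller must supply one."""
    return "adequate" if score < cutoff else "inadequate"
```

Because the score is a ratio against the liver reference, the dose-dependent inflation of SUVmax reported above directly inflates the score and can flip an adequate responder to inadequate, which is the failure mode the CNN augmentation corrects.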
Affiliation(s)
- Ashok J Theruvath, Florian Siedek, Ketan Yerneni, Anne M Muehe, Sheri L Spunt, Allison Pribnow, Michael Moseley, Ying Lu, Qian Zhao, Praveen Gulaka, Akshay Chaudhari, Heike E Daldrup-Link: Department of Radiology, Molecular Imaging Program at Stanford (A.J.T., F.S., K.Y., A.M.M., M.M., A.C., H.E.D.L.), Department of Pediatrics, Division of Hematology/Oncology, Lucile Packard Children's Hospital (S.L.S., A.P., H.E.D.L.), and Department of Biomedical Data Science (Y.L., Q.Z.), Stanford University, 725 Welch Rd, Stanford, CA 94304; and Subtle Medical, Menlo Park, Calif (P.G.)
86
Cross DJ, Komori S, Minoshima S. Artificial Intelligence for Brain Molecular Imaging. PET Clin 2021;17:57-64. [PMID: 34809870] [DOI: 10.1016/j.cpet.2021.08.001]
Abstract
AI has been applied to brain molecular imaging for over 30 years, and the past two decades have seen explosive progress. AI applications span from operational processes, such as attenuation correction and image generation, to disease diagnosis and prediction. As the sophistication of AI software platforms increases and large imaging data repositories become commonly available, future studies will incorporate more multidimensional datasets and information and may truly reach "superhuman" levels in the field of brain imaging. However, even with a growing level of complexity, these advanced networks will still require human supervision for appropriate application and interpretation in medical practice.
Affiliation(s)
- Donna J Cross: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A71, Salt Lake City, UT 84132-2140, USA
- Seisaku Komori: Future Design Lab, New Concept Design, Global Strategic Challenge Center, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu-City, 434-8601 Japan
- Satoshi Minoshima: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A71, Salt Lake City, UT 84132-2140, USA
87
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021;94:102010. [PMID: 34784505] [DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals remains the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count applications. Over the last few years, however, much of the credit has gone to deep learning (DL) methods, which provide more robust solutions under various conditions. Although extensively explored in clinical studies, to the best of our knowledge, the feasibility of DL-based image denoising has not been examined in low-count small-animal PET imaging. Therefore, herein, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents, with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL model was also compared to that of other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics demonstrated the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
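Two of the classical baselines named in this abstract, Gaussian smoothing and Perona-Malik anisotropic diffusion, can be sketched in a few lines of NumPy (nonlocal means is omitted for brevity; parameter values are illustrative, not those used in the study):

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian filtering with edge-replicated borders."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, step=0.2):
    """Perona-Malik diffusion: the conduction coefficient exp(-(grad/kappa)^2)
    suppresses smoothing across strong edges while averaging flat regions."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # differences to the four neighbors, with replicated borders
        dN = np.vstack([u[:1], u[:-1]]) - u
        dS = np.vstack([u[1:], u[-1:]]) - u
        dW = np.hstack([u[:, :1], u[:, :-1]]) - u
        dE = np.hstack([u[:, 1:], u[:, -1:]]) - u
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dN, dS, dW, dE))
        u = u + step * flux
    return u
```

On a noisy step edge, both filters reduce the mean squared error relative to the clean image, but only the diffusion filter does so without blurring the edge itself, which is the edge-aware behavior the abstract refers to.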
Affiliation(s)
- Mahsa Amirrashedi: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian: Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay: Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
88
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021;245:118697. [PMID: 34742941] [DOI: 10.1016/j.neuroimage.2021.118697]
Abstract
PURPOSE Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patients' comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images to compare the performance of DNN in sinogram space (SS) vs implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM) region-wise standardized uptake value (SUV) bias and statistical analysis for 83 brain regions. RESULTS SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02and 31.12 ± 0.22 for reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for SS and IS implementations, respectively. The joint histogram analysis revealed the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) was achieved by SS compared to IS (R2 = 0.97, MSE= 0.028). The Bland & Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images. 
The voxel-wise t-test analysis revealed voxels with statistically significantly lower values in the LD, IS, and SS images than in the FD images. CONCLUSION The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach had higher image quality and lower bias than images predicted from LD images.
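The PSNR and SSIM figures quoted above are standard fidelity metrics. As a rough illustration (not the authors' code), here is a minimal pure-Python sketch of PSNR and of a single-window, global form of SSIM; practical SSIM implementations average this statistic over local windows:

```python
import math

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0):
    """Global (single-window) SSIM; windowed SSIM averages this locally."""
    n = len(ref)
    mx, my = sum(ref) / n, sum(test) / n
    vx = sum((r - mx) ** 2 for r in ref) / n
    vy = sum((t - my) ** 2 for t in test) / n
    cov = sum((r - mx) * (t - my) for r, t in zip(ref, test)) / n
    c1 = (0.01 * data_range) ** 2  # conventional stabilizing constants
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs give infinite PSNR and SSIM of 1.0, which is why values close to 1.0 (as in the SS results above) indicate near-reference structure.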
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
|
89
|
Zhou B, Tsai YJ, Chen X, Duncan JS, Liu C. MDPET: A Unified Motion Correction and Denoising Adversarial Network for Low-Dose Gated PET. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3154-3164. [PMID: 33909561 PMCID: PMC8588635 DOI: 10.1109/tmi.2021.3076191] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
In positron emission tomography (PET), gating is commonly utilized to reduce respiratory motion blurring and to facilitate motion correction methods. In applications where low-dose gated PET is useful, reducing the injected dose increases noise levels in the gated images, which can corrupt motion estimation and subsequent corrections, leading to inferior image quality. To address these issues, we propose MDPET, a unified motion correction and denoising adversarial network for generating motion-compensated low-noise images from low-dose gated PET data. Specifically, we propose a Temporal Siamese Pyramid Network (TSP-Net) with basic units made up of (1) a Siamese Pyramid Network (SP-Net) and (2) a recurrent layer for motion estimation among the gates. The denoising network is unified with our motion estimation network to simultaneously correct the motion and predict a motion-compensated denoised PET reconstruction. The experimental results on human data demonstrate that MDPET can generate accurate motion estimation directly from low-dose gated images and produce high-quality motion-compensated low-noise reconstructions. Comparative studies with previous methods also show that MDPET achieves superior motion estimation and denoising performance. Our code is available at https://github.com/bbbbbbzhou/MDPET.
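MDPET's motion estimation and denoising are learned networks; purely as a toy illustration of the underlying register-then-average idea (align every gate to a reference gate, then average to suppress noise), here is a sketch assuming 1-D cyclic signals and integer shifts estimated by brute-force correlation:

```python
def estimate_shift(ref, gate):
    """Cyclic integer shift of `gate` that best correlates with `ref`."""
    n = len(ref)
    best, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(ref[i] * gate[(i + s) % n] for i in range(n))
        if score > best_score:
            best, best_score = s, score
    return best

def motion_compensated_average(gates, ref_index=0):
    """Align all gates to gates[ref_index], then average them."""
    ref = gates[ref_index]
    n = len(ref)
    aligned = []
    for g in gates:
        s = estimate_shift(ref, g)
        aligned.append([g[(i + s) % n] for i in range(n)])
    return [sum(col) / len(aligned) for col in zip(*aligned)]
```

Averaging without alignment would smear the peak across positions; aligning first preserves the feature while still reducing independent noise, which is the motivation for joint motion correction and denoising.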
|
90
|
Positron emission tomography in multiple sclerosis - straight to the target. Nat Rev Neurol 2021; 17:663-675. [PMID: 34545219 DOI: 10.1038/s41582-021-00537-1] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/30/2021] [Indexed: 02/08/2023]
Abstract
Following the impressive progress in the treatment of relapsing-remitting multiple sclerosis (MS), the major challenge ahead is the development of treatments to prevent or delay the irreversible accumulation of clinical disability in progressive forms of the disease. The substrate of clinical progression is neuro-axonal degeneration, and a deep understanding of the mechanisms that underlie this process is a precondition for the development of therapies for progressive MS. PET imaging involves the use of radiolabelled compounds that bind to specific cellular and metabolic targets, thereby enabling direct in vivo measurement of several pathological processes. This approach can provide key insights into the clinical relevance of these processes and their chronological sequence during the disease course. In this Review, we focus on the contribution that PET is making to our understanding of extraneuronal and intraneuronal mechanisms that are involved in the pathogenesis of irreversible neuro-axonal damage in MS. We consider the major challenges with the use of PET in MS and the steps necessary to realize clinical benefits of the technique. In addition, we discuss the potential of emerging PET tracers and future applications of existing compounds to facilitate the identification of effective neuroprotective treatments for patients with MS.
|
91
|
Kalpathy-Cramer J, Patel JB, Bridge C, Chang K. Basic Artificial Intelligence Techniques: Evaluation of Artificial Intelligence Performance. Radiol Clin North Am 2021; 59:941-954. [PMID: 34689879 DOI: 10.1016/j.rcl.2021.06.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Jayashree Kalpathy-Cramer
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA.
- Jay B Patel
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA
- Christopher Bridge
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA
- Ken Chang
- Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Boston, MA 02129, USA
|
92
|
Jamadar SD, Zhong S, Carey A, McIntyre R, Ward PGD, Fornito A, Premaratne M, Jon Shah N, O'Brien K, Stäb D, Chen Z, Egan GF. Task-evoked simultaneous FDG-PET and fMRI data for measurement of neural metabolism in the human visual cortex. Sci Data 2021; 8:267. [PMID: 34654823 PMCID: PMC8520012 DOI: 10.1038/s41597-021-01042-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 08/12/2021] [Indexed: 01/21/2023] Open
Abstract
Understanding how the living human brain functions requires sophisticated in vivo neuroimaging technologies to characterise the complexity of neuroanatomy, neural function, and brain metabolism. Fluorodeoxyglucose positron emission tomography (FDG-PET) studies of human brain function have historically been limited in their capacity to measure dynamic neural activity. Simultaneous [18F]-FDG-PET and functional magnetic resonance imaging (fMRI) with FDG infusion protocols enable examination of dynamic changes in cerebral glucose metabolism simultaneously with dynamic changes in blood oxygenation. The Monash vis-fPET-fMRI dataset is a simultaneously acquired FDG-fPET/BOLD-fMRI dataset acquired from n = 10 healthy adults (18-49 yrs) whilst they viewed a flickering checkerboard task. The dataset contains both raw (unprocessed) images and source data organized according to the BIDS specification. The source data includes PET listmode, normalization, sinogram and physiology data. Here, the technical feasibility of using open-source frameworks to reconstruct the PET listmode data is demonstrated. The dataset has significant re-use value for the development of new processing pipelines, signal optimisation methods, and to formulate new hypotheses concerning the relationship between neuronal glucose uptake and cerebral haemodynamics.
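The BIDS organization mentioned above fixes how files are named and nested (subject, session, datatype, then entity-ordered filenames). As an illustrative sketch, assuming the standard `sub`/`ses`/`task` entity ordering and a NIfTI extension (the entity values here are hypothetical, not taken from this dataset):

```python
def bids_pet_path(sub, ses, task, suffix="pet", ext=".nii.gz"):
    """Compose a BIDS-style PET file path: sub-*/ses-*/pet/sub-*_ses-*_task-*_pet.nii.gz."""
    stem = f"sub-{sub}_ses-{ses}_task-{task}_{suffix}"
    return f"sub-{sub}/ses-{ses}/{suffix}/{stem}{ext}"
```

Deterministic paths like this are what make a BIDS dataset re-usable by generic pipelines: tools can locate every subject's data without dataset-specific configuration.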
Affiliation(s)
- Sharna D Jamadar
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia.
- Shenjun Zhong
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; National Imaging Facility, Clayton, Australia
- Alexandra Carey
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Department of Medical Imaging, Monash Health, Clayton, VIC, Australia
- Richard McIntyre
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Department of Medical Imaging, Monash Health, Clayton, VIC, Australia
- Phillip G D Ward
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Alex Fornito
- Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Malin Premaratne
- Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC, Australia
- N Jon Shah
- Institute of Neuroscience and Medicine - 4, Forschungszentrum Jülich, Jülich, Germany
- Kieran O'Brien
- MR Research Collaborations, Siemens Healthcare Pty Ltd, Clayton, Australia
- Daniel Stäb
- MR Research Collaborations, Siemens Healthcare Pty Ltd, Clayton, Australia
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Monash Data Futures Institute, Monash University, Clayton, Australia
- Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, Australia; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
|
93
|
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130 PMCID: PMC8457531 DOI: 10.1016/j.cpet.2021.06.005] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
|
94
|
Onishi Y, Hashimoto F, Ote K, Ohba H, Ota R, Yoshikawa E, Ouchi Y. Anatomical-guided attention enhances unsupervised PET image denoising performance. Med Image Anal 2021; 74:102226. [PMID: 34563861 DOI: 10.1016/j.media.2021.102226] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 08/02/2021] [Accepted: 09/05/2021] [Indexed: 10/20/2022]
Abstract
Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), as compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can reduce PET scan times and PET tracer doses considerably without impacting patients.
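The attention-gate property described above, where the MR guidance only modulates PET features through a [0, 1] mask rather than being copied into the output, can be caricatured elementwise. A hypothetical pure-Python sketch (the weights `w_pet`, `w_mr`, and `bias` are illustrative, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def attention_gate(pet_feats, mr_feats, w_pet=1.0, w_mr=1.0, bias=0.0):
    """Elementwise attention gate: MR guidance produces a [0, 1] mask that
    scales the PET features, so MR-specific intensity patterns cannot leak
    into the output directly; they can only amplify or suppress PET signal."""
    return [x * sigmoid(w_pet * x + w_mr * g + bias)
            for x, g in zip(pet_feats, mr_feats)]
```

Because the output is always the PET feature times a mask in (0, 1), a zero PET voxel stays zero no matter how strong the MR guidance is, which is the safeguard the abstract refers to.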
Affiliation(s)
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan.
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hiroyuki Ohba
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Ryosuke Ota
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Etsuji Yoshikawa
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Yasuomi Ouchi
- Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi-ku, Hamamatsu 431-3192, Japan
|
95
|
Richardson ML, Garwood ER, Lee Y, Li MD, Lo HS, Nagaraju A, Nguyen XV, Probyn L, Rajiah P, Sin J, Wasnik AP, Xu K. Noninterpretive Uses of Artificial Intelligence in Radiology. Acad Radiol 2021; 28:1225-1235. [PMID: 32059956 DOI: 10.1016/j.acra.2020.01.012] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Revised: 01/08/2020] [Accepted: 01/09/2020] [Indexed: 12/12/2022]
Abstract
We deem a computer to exhibit artificial intelligence (AI) when it performs a task that would normally require intelligent action by a human. Much of the recent excitement about AI in the medical literature has revolved around the ability of AI models to recognize anatomy and detect pathology on medical images, sometimes at the level of expert physicians. However, AI can also be used to solve a wide range of noninterpretive problems that are relevant to radiologists and their patients. This review summarizes some of the newer noninterpretive uses of AI in radiology.
Affiliation(s)
- Elisabeth R Garwood
- Department of Radiology, University of Massachusetts, Worcester, Massachusetts
- Yueh Lee
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina
- Matthew D Li
- Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Boston, Massachusetts
- Hao S Lo
- Department of Radiology, University of Washington, Seattle, Washington
- Arun Nagaraju
- Department of Radiology, University of Chicago, Chicago, Illinois
- Xuan V Nguyen
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Linda Probyn
- Department of Radiology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario
- Prabhakar Rajiah
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas
- Jessica Sin
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Ashish P Wasnik
- Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Kali Xu
- Department of Medicine, Santa Clara Valley Medical Center, Santa Clara, California
|
96
|
Abstract
PET/MR imaging is in routine clinical use and is at least as effective as PET/CT for oncologic and neurologic studies, with advantages for certain PET radiopharmaceuticals and applications. In addition, whole-body PET/MR imaging substantially reduces radiation dose compared with PET/CT, which is particularly relevant to the pediatric and young adult populations. For cancer imaging, assessment of hepatic, pelvic, and soft-tissue malignancies may benefit from PET/MR imaging. For neurologic imaging, volumetric brain MR imaging can detect regional volume loss relevant to cognitive impairment and epilepsy. In addition, the single-bed-position acquisition enables dynamic brain PET imaging without extending the total study length, which has the potential to enhance the diagnostic information from PET.
Affiliation(s)
- Farshad Moradi
- Department of Radiology, Stanford University, 300 Pasteur Drive, H2200, Stanford, CA 94305, USA.
- Andrei Iagaru
- Department of Radiology, Stanford University, 300 Pasteur Drive, H2200, Stanford, CA 94305, USA
- Jonathan McConathy
- Department of Radiology, University of Alabama at Birmingham, 619 19th Street South, JT 773, Birmingham, AL 35249, USA
|
97
|
Chaudhari AS, Mittra E, Davidzon GA, Gulaka P, Gandhi H, Brown A, Zhang T, Srinivas S, Gong E, Zaharchuk G, Jadvar H. Low-count whole-body PET with deep learning in a multicenter and externally validated study. NPJ Digit Med 2021; 4:127. [PMID: 34426629 PMCID: PMC8382711 DOI: 10.1038/s41746-021-00497-2] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Accepted: 08/03/2021] [Indexed: 02/08/2023] Open
Abstract
More widespread use of positron emission tomography (PET) imaging is limited by its high cost and radiation dose. Reductions in PET scan time or radiotracer dosage typically degrade diagnostic image quality (DIQ). Deep-learning-based reconstruction may improve DIQ, but such methods have not been clinically evaluated in a realistic multicenter, multivendor environment. In this study, we evaluated the performance and generalizability of a deep-learning-based image-quality enhancement algorithm applied to fourfold reduced-count whole-body PET in a realistic clinical oncologic imaging environment with multiple blinded readers, institutions, and scanner types. We demonstrate that the low-count-enhanced scans were noninferior to the standard scans in DIQ (p < 0.05) and overall diagnostic confidence (p < 0.001) independent of the underlying PET scanner used. Lesion detection for the low-count-enhanced scans had a high patient-level sensitivity of 0.94 (0.83-0.99) and specificity of 0.98 (0.95-0.99). Interscan kappa agreement of 0.85 was comparable to intra-reader (0.88) and pairwise inter-reader agreements (maximum of 0.72). SUV quantification was comparable in the reference regions and lesions (lowest p-value = 0.59) and had high correlation (lowest CCC = 0.94). Thus, we demonstrated that deep learning can be used to restore diagnostic image quality and maintain SUV accuracy for fourfold reduced-count PET scans, with interscan variations in lesion depiction lower than intra- and inter-reader variations. This method generalized to an external validation set of clinical patients from multiple institutions and scanner types. Overall, this method may enable either dose or exam-duration reduction, increasing safety and lowering the cost of PET imaging.
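The concordance correlation coefficient (CCC) used above to quantify SUV agreement can be computed directly from its definition (Lin's CCC, using population variance/covariance); a minimal sketch:

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.
    Equals 1 only for perfect agreement; penalizes both scatter and
    systematic offset between the two measurement sets."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, a constant bias lowers CCC, which is why it is preferred for agreement studies such as SUV quantification against a reference scan.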
Affiliation(s)
- Akshay S Chaudhari
- Department of Radiology, Stanford University, Palo Alto, CA, USA.
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA.
- Subtle Medical, Menlo Park, CA, USA.
- Erik Mittra
- Division of Diagnostic Radiology, Oregon Health & Science University, Portland, OR, USA
- Guido A Davidzon
- Department of Radiology, Stanford University, Palo Alto, CA, USA
- Adam Brown
- Division of Diagnostic Radiology, Oregon Health & Science University, Portland, OR, USA
- Shyam Srinivas
- Department of Radiology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, Palo Alto, CA, USA
- Subtle Medical, Menlo Park, CA, USA
- Hossein Jadvar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
98
|
Sudarshan VP, Upadhyay U, Egan GF, Chen Z, Awate SP. Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Med Image Anal 2021; 73:102187. [PMID: 34348196 DOI: 10.1016/j.media.2021.102187] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 10/20/2022]
Abstract
Radiation exposure in positron emission tomography (PET) imaging limits its usage in the studies of radiation-sensitive populations, e.g., pregnant women, children, and adults that require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) images, to high-quality PET images. However, such DNN methods focus on applications involving test data that match the statistical characteristics of the training data very closely and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models the (i) underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, namely, suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI, and various forms of OOD data in PET-MRI, show the benefits of suDNN over the current state of the art, quantitatively and qualitatively.
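The per-voxel heteroscedastic residual model described above is commonly trained with a Gaussian negative log-likelihood in which the network predicts a log-variance alongside each intensity, so that uncertain voxels contribute a downweighted residual at the price of a log-variance penalty. A generic sketch of that loss (not the paper's exact formulation), up to an additive constant:

```python
import math

def hetero_nll(pred_mean, pred_logvar, target):
    """Heteroscedastic Gaussian NLL, averaged over voxels (constant dropped).
    Large predicted log-variance shrinks the squared-error term but is
    itself penalized, so the optimum tracks the true residual magnitude."""
    total = 0.0
    for mu, logv, y in zip(pred_mean, pred_logvar, target):
        total += 0.5 * (math.exp(-logv) * (y - mu) ** 2 + logv)
    return total / len(target)
```

For a fixed residual, the loss is minimized when the predicted variance matches the squared residual, which is what makes the predicted per-voxel variance a usable uncertainty map, including on out-of-distribution inputs.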
Affiliation(s)
- Viswanath P Sudarshan
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India; IITB-Monash Research Academy, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Uddeshya Upadhyay
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Gary F Egan
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Suyash P Awate
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India.
|
99
|
Malpani R, Petty CW, Bhatt N, Staib LH, Chapiro J. Use of Artificial Intelligence in Non-Oncologic Interventional Radiology: Current State and Future Directions. DIGESTIVE DISEASE INTERVENTIONS 2021; 5:331-337. [PMID: 35005333 PMCID: PMC8740955 DOI: 10.1055/s-0041-1726300] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The future of radiology is disproportionately linked to the applications of artificial intelligence (AI). Recent exponential advancements in AI are already beginning to augment the clinical practice of radiology. Driven by a paucity of review articles in the area, this article aims to discuss applications of AI in non-oncologic interventional radiology (IR) across procedural planning, execution, and follow-up, along with a discussion of the future directions of the field. Applications in vascular imaging, radiomics, touchless software interactions, robotics, natural language processing, post-procedural outcome prediction, device navigation, and image acquisition are included. Familiarity with AI study analysis will help open the current 'black box' of AI research and bridge the gap between the research laboratory and clinical practice.
Affiliation(s)
- Rohil Malpani
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Christopher W. Petty
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Neha Bhatt
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Lawrence H. Staib
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, 330 Cedar Street, New Haven, CT 06520, USA
|
100
|
Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging. Eur J Nucl Med Mol Imaging 2021; 48:2405-2415. [PMID: 33495927 PMCID: PMC8241799 DOI: 10.1007/s00259-020-05167-1] [Citation(s) in RCA: 69] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 12/15/2020] [Indexed: 12/21/2022]
Abstract
PURPOSE There is a tendency to moderate the injected activity and/or reduce acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS Instead of using synthetic LD scans, two separate clinical WB 18F-fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~27 min) and one fast or LD (~3 min) consisting of 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNet) models, denoted CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics including standardized uptake value (SUV) bias was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS CGAN scored 4.92 and 3.88 (out of 5) (adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and -3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of -0.36, +0.47 for CGAN compared with the reference FD images for malignant lesions. CONCLUSION CycleGAN is able to synthesize clinical FD WB PET images from LD images with 1/8th of the standard injected activity or acquisition time. The predicted FD images show almost similar performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
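The Bland-Altman statistics reported above (mean bias with a 95% agreement interval) follow a simple recipe on paired measurements; a minimal sketch using the usual bias ± 1.96·SD limits of agreement on the differences:

```python
def bland_altman(ref, test):
    """Mean bias and 95% limits of agreement (bias ± 1.96 * SD of differences)."""
    diffs = [t - r for r, t in zip(ref, test)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A near-zero bias with tight limits, as reported for CGAN on malignant lesions, indicates that the synthesized FD images track the reference SUVs without systematic offset.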
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- René Nkoulou
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark
|