1. Bahloul MA, Jabeen S, Benoumhani S, Alsaleh HA, Belkhatir Z, Al-Wabil A. Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning. J Appl Clin Med Phys 2024:e14499. PMID: 39325781; DOI: 10.1002/acm2.14499.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft-tissue contrast but lacks the electron density data needed for dose calculation. CT remains the gold standard for radiation therapy planning (RTP) because it provides accurate electron density information, but it exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has become an active field of study in recent years, motivated by cost effectiveness and by the goal of minimizing the side effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypasses the complexities of co-registration, and can improve treatment accuracy by minimizing registration-related errors. In an effort to navigate the quickly developing field of precision medicine, this paper investigates recent advancements in sCT generation techniques, particularly those using machine learning (ML) and deep learning (DL), and highlights their potential to improve the efficiency and accuracy of sCT generation for RTP, thereby improving patient care and reducing healthcare costs. PURPOSE This review provides an overview of the most recent advancements in sCT generation from MRI, with a particular focus on its use within RTP, emphasizing techniques, performance evaluation, clinical applications, future research trends, and open challenges in the field. METHODS A thorough search strategy was employed to conduct a systematic literature review across major scientific databases.
Focusing on the past decade's advancements, this review critically examines approaches introduced from 2013 to 2023 for generating sCT from MRI and provides a comprehensive analysis of their methodologies. The synthesis process involved classifying the identified approaches, contrasting their advantages and disadvantages, and identifying broad trends, while highlighting significant contributions, open challenges, and successes within RTP. RESULTS The review identifies various sCT generation approaches, comprising atlas-based, segmentation-based, multi-modal fusion, hybrid, and ML/DL-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability, and are applied to MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. Each methodology has its own advantages and limitations. Emerging trends include the integration of multiple MRI sequences, such as Dixon, T1-weighted (T1W), and T2-weighted (T2W) images, as well as hybrid approaches for enhanced accuracy. CONCLUSIONS By reviewing 2013-2023 studies on MRI-to-sCT generation, this study aims to support MRI-only workflows that reduce the use of ionizing radiation and improve patient outcomes. The review provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and collaborative efforts to refine methods and address limitations, and it anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
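Image-quality evaluation of sCT against a reference CT is commonly summarized with the mean absolute error (MAE) in Hounsfield units inside the body contour. As a minimal illustrative sketch (toy 1D profiles; all HU values invented):

```python
def mae_hu(sct, ct, mask):
    """Mean absolute error (HU) between synthetic and reference CT,
    restricted to voxels inside the body contour (mask)."""
    diffs = [abs(s - c) for s, c, m in zip(sct, ct, mask) if m]
    return sum(diffs) / len(diffs)

# Toy 1D profiles: air / soft tissue / bone / air (values illustrative)
ct   = [-1000, 40, 50, 300, -1000]
sct  = [-1000, 35, 60, 280, -990]
mask = [0, 1, 1, 1, 0]   # evaluate inside the body contour only

print(mae_hu(sct, ct, mask))  # (5 + 10 + 20) / 3 ≈ 11.7
```

In practice this is computed over full 3D volumes, often stratified by tissue class (soft tissue vs. bone), alongside dosimetric comparison of the resulting treatment plans.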
Affiliation(s)
- Mohamed A Bahloul: College of Engineering, Alfaisal University, Riyadh, Saudi Arabia; Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Saima Jabeen: College of Engineering, Alfaisal University, Riyadh, Saudi Arabia; Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia; AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Sara Benoumhani: College of Engineering, Alfaisal University, Riyadh, Saudi Arabia; AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Zehor Belkhatir: School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Areej Al-Wabil: College of Engineering, Alfaisal University, Riyadh, Saudi Arabia; AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
2. Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. PMID: 39261796; PMCID: PMC11391655; DOI: 10.1186/s12880-024-01417-y.
Abstract
This systematic review evaluated the potential of deep learning algorithms for converting low-dose positron emission tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included in this review. The included studies utilized various deep learning models, such as generative adversarial networks and U-Net, to synthesize high-quality PET images, and involved different datasets, image preprocessing techniques, input data types, and loss functions. The generated PET images were evaluated using both quantitative and qualitative methods, including physician evaluations and comparisons with various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
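A standard quantitative metric in these comparisons is the peak signal-to-noise ratio (PSNR) of the network output against the full-dose reference. A minimal sketch (toy 1D values; real evaluations run over entire 3D volumes):

```python
import math

def psnr(reference, estimate, data_range):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    return 10.0 * math.log10(data_range ** 2 / mse)

full_dose = [10.0, 20.0, 30.0, 40.0]   # hypothetical reference activity values
denoised  = [11.0, 19.0, 31.0, 39.0]   # hypothetical network output
print(round(psnr(full_dose, denoised, data_range=40.0), 2))  # 32.04
```

Higher PSNR means the estimate deviates less from the reference relative to the image's dynamic range; it is usually reported together with SSIM, which is sensitive to structural rather than pointwise differences.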
Affiliation(s)
- Negisa Seyyedi: Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari: Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi: Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh: Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran; Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
3. Champendal M, Ribeiro RST, Müller H, Prior JO, Sá Dos Reis C. Nuclear medicine technologists practice impacted by AI denoising applications in PET/CT images. Radiography (Lond) 2024; 30:1232-1239. PMID: 38917681; DOI: 10.1016/j.radi.2024.06.010.
Abstract
PURPOSE Artificial intelligence (AI) in positron emission tomography/computed tomography (PET/CT) can be used to improve image quality when it is desirable to reduce the injected activity or the acquisition time. Particular attention must be paid to ensuring that users adopt this technological innovation when outcomes can be improved by its use. The aim of this study was to identify the aspects that need to be analysed and discussed when implementing an AI denoising PET/CT algorithm in clinical practice, based on the representations of Nuclear Medicine Technologists (NMT) from Western Switzerland, highlighting the associated barriers and facilitators. METHODS Two focus groups were organised in June and September 2023, involving ten voluntary participants recruited from all types of medical imaging departments, forming a diverse sample of NMT. The interview guide followed the first stage of the revised Ottawa Model of Research Use. A content analysis was performed following the three-stage approach described by Wanlin. The study received ethics clearance. RESULTS Clinical practice, workload, knowledge, and resources were the four themes that the ten NMT participants (aged 31-60), none of whom were familiar with this AI tool, identified as requiring consideration before implementing an AI denoising PET/CT algorithm. The main barriers to implementing the algorithm included workflow challenges, resistance from professionals, and lack of education, while the main facilitators were clear explanations and the availability of support for questions, such as a "local champion". CONCLUSION To implement a denoising algorithm in PET/CT, several aspects of clinical practice, such as procedures, workload, and available resources, need to be considered in order to reduce the barriers to implementation. Participants also emphasised the importance of clear explanations, education, and support for successful implementation.
IMPLICATIONS FOR PRACTICE To facilitate the implementation of AI tools in clinical practice, it is important to identify the barriers and propose strategies that can mitigate them.
Affiliation(s)
- M Champendal: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- R S T Ribeiro: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
- H Müller: Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- J O Prior: Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- C Sá Dos Reis: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
4. Sanaat A, Boccalini C, Mathoux G, Perani D, Frisoni GB, Haller S, Montandon ML, Rodriguez C, Giannakopoulos P, Garibotto V, Zaidi H. A deep learning model for generating [18F]FDG PET images from early-phase [18F]florbetapir and [18F]flutemetamol PET images. Eur J Nucl Med Mol Imaging 2024. PMID: 38861183; DOI: 10.1007/s00259-024-06755-1.
Abstract
INTRODUCTION Amyloid-β (Aβ) plaques are a significant hallmark of Alzheimer's disease (AD), detectable via amyloid PET imaging. Fluorine-18 fluorodeoxyglucose ([18F]FDG) PET tracks cerebral glucose metabolism, which correlates with synaptic dysfunction and disease progression, and is complementary for AD diagnosis. Dual-phase acquisition of amyloid PET makes it possible to use early-phase amyloid PET as a biomarker for neurodegeneration, as it has been shown to correlate well with [18F]FDG PET. The aim of this study was to evaluate the added value of synthesizing the latter from the former through deep learning (DL), with the goal of reducing the number of PET scans, the radiation dose, and patient discomfort. METHODS A total of 166 subjects, including cognitively unimpaired individuals (N = 72) and subjects with mild cognitive impairment (N = 73) or dementia (N = 21), were included in this study. All underwent T1-weighted MRI, dual-phase amyloid PET scans using either Fluorine-18 florbetapir ([18F]FBP) or Fluorine-18 flutemetamol ([18F]FMM), and an [18F]FDG PET scan. Two transformer-based DL models (SwinUNETR) were trained separately to synthesize [18F]FDG images from early-phase [18F]FBP and [18F]FMM images (eFBP/eFMM). A clinical similarity score (1: no similarity, to 3: similar) was assessed to compare the imaging information obtained from synthesized [18F]FDG, as well as from eFBP/eFMM, with that from actual [18F]FDG. Quantitative evaluations included region-wise correlation and single-subject voxel-wise analyses against a reference [18F]FDG PET healthy-control database. Dice coefficients were calculated to quantify the whole-brain spatial overlap between hypometabolic ([18F]FDG PET) and hypoperfused (eFBP/eFMM) binary maps at the single-subject level, as well as between [18F]FDG PET and synthetic [18F]FDG PET hypometabolic binary maps.
RESULTS The clinical evaluation showed that, in comparison to eFBP/eFMM (mean clinical similarity score (CSS) = 1.53), the synthetic [18F]FDG images are very similar to the actual [18F]FDG images (mean CSS = 2.7) in terms of preserving clinically relevant uptake patterns. The single-subject voxel-wise analyses showed that, at the group level, the Dice scores improved by around 13% and 5% when using the DL approach for eFBP and eFMM, respectively. The correlation analysis indicated a relatively strong correlation between eFBP/eFMM and [18F]FDG (eFBP: slope = 0.77, R2 = 0.61, P < 0.0001; eFMM: slope = 0.77, R2 = 0.61, P < 0.0001). This correlation improved for synthetic [18F]FDG generated from eFBP (slope = 1.00, R2 = 0.68, P < 0.0001) and from eFMM (slope = 0.93, R2 = 0.72, P < 0.0001). CONCLUSION We proposed a DL model for generating [18F]FDG images from eFBP/eFMM PET images. This method may serve as an alternative to multi-tracer scanning in research and clinical settings, allowing the currently validated [18F]FDG PET normal reference databases to be adopted for data analysis.
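The Dice coefficient used above measures spatial overlap between binary maps. A minimal sketch on toy binary vectors (real maps are 3D voxel volumes):

```python
def dice(a, b):
    """Dice coefficient between two binary maps: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

hypometabolic = [1, 1, 0, 0, 1, 0]   # toy binary map from actual [18F]FDG PET
synthetic_map = [1, 0, 0, 0, 1, 1]   # toy binary map from synthetic [18F]FDG
print(round(dice(hypometabolic, synthetic_map), 3))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1 means perfect overlap and 0 means none, so a ~13% group-level improvement directly reflects better agreement of the hypometabolic regions with the reference.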
Affiliation(s)
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Cecilia Boccalini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Gregory Mathoux: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Daniela Perani: Vita-Salute San Raffaele University, Nuclear Medicine Unit, San Raffaele Hospital, Milan, Italy
- Sven Haller: CIMC - Centre d'Imagerie Médicale de Cornavin, Geneva, Switzerland; Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marie-Louise Montandon: Department of Rehabilitation and Geriatrics, Geneva University Hospitals and University of Geneva, Geneva, Switzerland
- Cristelle Rodriguez: Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Panteleimon Giannakopoulos: Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland; Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Valentina Garibotto: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland; CIBM Center for Biomedical Imaging, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
5. Azimi MS, Kamali-Asl A, Ay MR, Zeraatkar N, Hosseini MS, Sanaat A, Dadgar H, Arabi H. Deep learning-based partial volume correction in standard and low-dose positron emission tomography-computed tomography imaging. Quant Imaging Med Surg 2024; 14:2146-2164. PMID: 38545051; PMCID: PMC10963814; DOI: 10.21037/qims-23-871.
Abstract
BACKGROUND Positron emission tomography (PET) imaging suffers from partial volume effects arising from its limited intrinsic resolution, giving rise to (I) considerable bias, particularly for structures comparable in size to the point spread function (PSF) of the system, and (II) blurred image edges and blending of textures along borders. We set out to build a deep learning-based framework for predicting partial-volume-corrected full-dose (FD + PVC) images from either standard-dose or low-dose (LD) PET images, without requiring any anatomical data, in order to provide a joint solution for partial volume correction and denoising of LD PET images. METHODS We trained a modified encoder-decoder U-Net with standard-dose or LD PET images as the input and FD + PVC images produced by six different PVC methods as the target. The six PVC approaches were geometric transfer matrix (GTM), multi-target correction (MTC), region-based voxel-wise correction (RBV), iterative Yang (IY), reblurred Van Cittert (RVC), and Richardson-Lucy (RL). The proposed models were evaluated using standard criteria, such as peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity index (SSIM), relative bias, and absolute relative bias. RESULTS Different levels of error were observed across the PVC methods; errors were relatively small for the GTM (SSIM of 0.63 for LD and 0.29 for FD), IY (SSIM of 0.63 for LD and 0.67 for FD), RBV (SSIM of 0.57 for LD and 0.65 for FD), and RVC (SSIM of 0.89 for LD and 0.94 for FD) approaches. However, large quantitative errors were observed for the MTC (RMSE of 2.71 for LD and 2.45 for FD) and RL (RMSE of 5 for LD and 3.27 for FD) approaches. CONCLUSIONS The proposed framework can effectively perform joint denoising and partial volume correction for PET images with either LD or FD input data.
When no magnetic resonance imaging (MRI) images are available, the developed deep learning models could be used for partial volume correction of LD or standard PET-computed tomography (PET-CT) scans as an image quality enhancement technique.
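Of the six PVC targets, Richardson-Lucy (RL) is an iterative deconvolution whose update multiplies the current estimate by the back-projected ratio of measured to re-blurred data. A toy 1D sketch with an assumed symmetric PSF (all values illustrative, not the paper's implementation):

```python
def convolve(signal, psf):
    """'Same'-size convolution with a short symmetric PSF (zero padding)."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(psf):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

def richardson_lucy(blurred, psf, n_iter=50):
    """RL update: x <- x * conv(psf, y / conv(psf, x)); non-negativity is preserved."""
    x = [1.0] * len(blurred)
    for _ in range(n_iter):
        est = convolve(x, psf)
        ratio = [y / max(e, 1e-12) for y, e in zip(blurred, est)]
        corr = convolve(ratio, psf)  # symmetric PSF: correlation == convolution
        x = [xi * c for xi, c in zip(x, corr)]
    return x

psf = [0.25, 0.5, 0.25]            # assumed blur kernel (illustrative)
truth = [0, 0, 10, 0, 0]           # a point source
blurred = convolve(truth, psf)     # [0, 2.5, 5, 2.5, 0]
restored = richardson_lucy(blurred, psf, n_iter=200)
# restored approaches the true spike: edges sharpen back toward [0, 0, 10, 0, 0]
```

Run on real LD data, RL also amplifies noise, which is why the paper observed larger RMSE for RL-based targets and why joint denoising is attractive.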
Affiliation(s)
- Mohammad-Saber Azimi: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran; Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Alireza Kamali-Asl: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Mohammad-Reza Ay: Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran; Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Amirhossein Sanaat: Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habibollah Dadgar: Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Hossein Arabi: Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
6. Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. PMID: 38319563; PMCID: PMC10902118; DOI: 10.1007/s12194-024-00780-3.
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
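The conventional iterative reconstruction this review starts from is typified by MLEM, whose multiplicative update preserves non-negativity and fits Poisson data. A toy sketch on a hypothetical 2-pixel, 2-measurement system (the system matrix and values are invented for illustration):

```python
def mlem(y, A, n_iter=100):
    """Maximum-likelihood EM for y ≈ A @ x, all entries non-negative.
    Update: x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A x)_i,  s_j = sum_i A_ij."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                   # uniform initial image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        fwd = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / max(fwd[i], 1e-12) for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Hypothetical geometry: each measurement mixes the two pixels differently
A = [[0.9, 0.1],
     [0.2, 0.8]]
truth = [4.0, 8.0]
y = [0.9 * 4 + 0.1 * 8, 0.2 * 4 + 0.8 * 8]   # noiseless data [4.4, 7.2]
x = mlem(y, A, n_iter=200)                   # converges toward [4.0, 8.0]
```

The deep learning variants surveyed either post-process such reconstructions, learn the sinogram-to-image mapping directly, or unroll iterations like this one with a neural regularizer inserted between updates.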
Affiliation(s)
- Fumio Hashimoto: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan; Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Yuya Onishi: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Kibo Ote: Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-ku, Hamamatsu 434-8601, Japan
- Hideaki Tashima: National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Andrew J Reader: School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 7EH, UK
- Taiga Yamaya: Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Inage-ku, Chiba 263-8522, Japan; National Institutes for Quantum Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
7. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. PMID: 38052145; DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
- Sergio Uribe: Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton, VIC 3800, Australia
- Guang Yang: Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
- Zhaolin Chen: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia; Monash Biomedical Imaging, Clayton, VIC 3800, Australia
8. Manoj Doss KK, Chen JC. Utilizing deep learning techniques to improve image quality and noise reduction in preclinical low-dose PET images in the sinogram domain. Med Phys 2024; 51:209-223. PMID: 37966121; DOI: 10.1002/mp.16830.
Abstract
BACKGROUND Low-dose positron emission tomography (LD-PET) imaging is commonly employed in preclinical research to minimize radiation exposure to animal subjects. However, LD-PET images often exhibit poor quality and high noise levels due to the low signal-to-noise ratio. Deep learning (DL) techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) can enhance the quality of images derived from noisy or low-quality PET data, which encode critical information about the radioactivity distribution in the body. PURPOSE Our objective was to optimize image quality and reduce noise in preclinical PET images by utilizing the sinogram domain as input for DL models, resulting in improved image quality compared with LD-PET images. METHODS A GAN and a CNN model were utilized to predict high-dose (HD) preclinical PET sinograms from the corresponding LD preclinical PET sinograms. To generate the datasets, experiments were conducted on micro-phantoms, animal subjects (rats), and virtual simulations. The quality of the DL-generated images was assessed with the following quantitative measures: structural similarity index measure (SSIM), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Additionally, the spatial resolution of both DL input and output was quantified as full width at half maximum (FWHM) and full width at tenth maximum (FWTM). DL outcomes were then compared with conventional denoising algorithms such as non-local means (NLM) and block-matching and 3D filtering (BM3D). RESULTS The DL models effectively learned image features and produced high-quality images, as reflected in the quantitative metrics. Notably, the FWHM and FWTM values of the DL PET images were significantly more accurate than those of the LD, NLM, and BM3D PET images, and as precise as those of the HD PET images.
The low MSE loss indicated that the models performed well. To further improve training, the generator loss (G loss) was set higher than the discriminator loss (D loss), thereby achieving convergence in the GAN model. CONCLUSIONS The sinograms generated by the GAN network closely resembled real HD preclinical PET sinograms and were more realistic than the LD sinograms. There was a noticeable improvement in image quality and noise in the predicted HD images. Importantly, the DL networks did not compromise the spatial resolution of the images.
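Paired LD/HD sinograms for this kind of training are often simulated from high-count data by binomial thinning, which statistically mimics a reduced injected activity. A sketch with hypothetical count values (pure Python; real sinograms are 2D count arrays per projection angle):

```python
import random

random.seed(0)

def thin(counts, fraction):
    """Simulate a low-dose sinogram: keep each recorded count
    independently with probability `fraction` (binomial thinning)."""
    return [sum(1 for _ in range(c) if random.random() < fraction)
            for c in counts]

hd_sinogram = [500, 800, 1200, 800, 500]      # hypothetical HD counts per bin
ld_sinogram = thin(hd_sinogram, fraction=0.1)  # roughly 10% dose

# Relative (Poisson) noise per bin scales like 1/sqrt(counts), so the
# low-dose sinogram is roughly sqrt(10) times noisier per bin.
```

Denoising in the sinogram domain, as in this study, acts on these counts before reconstruction, so the reconstruction step itself does not have to cope with the amplified noise.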
Affiliation(s)
- Jyh-Cheng Chen: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Medical Imaging and Radiological Sciences, China Medical University, Taichung, Taiwan; School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
9. Zhao F, Li D, Luo R, Liu M, Jiang X, Hu J. Self-supervised deep learning for joint 3D low-dose PET/CT image denoising. Comput Biol Med 2023; 165:107391. PMID: 37717529; DOI: 10.1016/j.compbiomed.2023.107391.
Abstract
Deep learning (DL)-based denoising of low-dose positron emission tomography (LDPET) and low-dose computed tomography (LDCT) images has been widely explored. However, previous methods have focused only on single-modality denoising, neglecting the possibility of simultaneously denoising LDPET and LDCT with a single neural network, i.e., joint LDPET/LDCT denoising. Moreover, DL-based denoising methods generally require many well-aligned low-dose/normal-dose (LD-ND) sample pairs, which can be difficult to obtain. To this end, we propose a self-supervised two-stage training framework named MAsk-then-Cycle (MAC) to achieve self-supervised joint LDPET/LDCT denoising. The first stage of MAC is masked autoencoder (MAE)-based pre-training, and the second stage is self-supervised denoising training. Specifically, we propose a self-supervised denoising strategy named cycle self-recombination (CSR), which enables denoising without well-aligned sample pairs. Unlike methods that treat noise as a homogeneous whole, CSR disentangles noise into signal-dependent and signal-independent components. This is more in line with the actual imaging process and allows noise and signal to be flexibly recombined into new samples. These new samples contain implicit constraints that improve the network's denoising ability, and we design multiple loss functions based on these constraints to enable self-supervised training. We then design a CSR-based denoising network to achieve joint 3D LDPET/LDCT denoising. Existing self-supervised methods generally lack pixel-level constraints on networks, which can easily lead to additional artifacts, so before denoising training we perform MAE-based pre-training to impose pixel-level constraints indirectly. Experiments on an LDPET/LDCT dataset demonstrate its superiority over existing methods. Our method is the first self-supervised joint LDPET/LDCT denoising method; it does not require any prior assumptions and is therefore more robust.
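In the paper, the disentanglement into signal-dependent and signal-independent noise is learned by the network; as a purely illustrative toy of the underlying noise model and recombination idea (all parameters hypothetical, not the authors' code):

```python
import random

random.seed(1)

def noisy(signal, a=0.5, sigma=2.0):
    """Toy noise model: a signal-dependent term whose std grows with
    sqrt(signal), plus a signal-independent Gaussian term."""
    dep = [random.gauss(0.0, a * s ** 0.5) for s in signal]
    ind = [random.gauss(0.0, sigma) for _ in signal]
    return [s + d + i for s, d, i in zip(signal, dep, ind)], dep, ind

signal = [10.0, 40.0, 90.0]              # toy clean activity values
sample1, dep1, ind1 = noisy(signal)      # first noisy realisation
sample2, dep2, ind2 = noisy(signal)      # second noisy realisation

# CSR-style recombination: pair the signal-dependent noise of one
# realisation with the signal-independent noise of another to create
# a new, statistically plausible training sample.
recombined = [s + d + i for s, d, i in zip(signal, dep1, ind2)]
```

Because the recombined sample follows the same noise statistics as a genuinely acquired one, it can constrain the network without any aligned LD-ND pairs, which is the core of the CSR strategy.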
Affiliation(s)
- Feixiang Zhao
  - State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Dongfen Li
  - State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Rui Luo
  - Department of Nuclear Medicine, Mianyang Central Hospital, Mianyang, 621000, China
- Mingzhe Liu
  - State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610000, China
- Xin Jiang
  - School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, 325000, China
- Junjie Hu
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, China
10
Mushtaq S, Park JA, Kim JY, Lee KC, Kim KI. The role of radiolabeling in BNCT tracers for enhanced dosimetry and treatment planning. Theranostics 2023; 13:5247-5265. [PMID: 37908724] [PMCID: PMC10614688] [DOI: 10.7150/thno.88998]
Abstract
Positron emission tomography (PET) and single photon emission computed tomography (SPECT) are potent technologies for non-invasive imaging of pharmacological and biochemical processes in both preclinical and advanced clinical research settings. In the field of radiation therapy, boron neutron capture therapy (BNCT) stands out because it harnesses biological mechanisms to precisely target tumor cells while preserving the neighboring healthy tissues. To achieve the most favorable therapeutic outcomes, the delivery of boron-enriched tracers to tumors must be selective and efficient, with a substantial concentration of boron atoms meticulously arranged in and around the tumor cells. Although several BNCT tracers have been developed to facilitate the targeted and efficient delivery of boron to tumors, only a few have been labeled with PET or SPECT radionuclides. Such radiolabeling enables comprehensive in vivo examination, encompassing crucial aspects such as pharmacodynamics, pharmacokinetics, tumor selectivity, and accumulation and retention of the tracer within the tumor. This review provides a comprehensive summary of the essential aspects of BNCT tracers, focusing on their radiolabeling with PET or SPECT radioisotopes, which supports more effective and targeted treatment approaches and ultimately enhances the quality of patient care in cancer treatment.
Affiliation(s)
- Sajid Mushtaq
  - Division of Applied RI, Korea Institute of Radiological & Medical Sciences (KIRAMS), Seoul 01812, Republic of Korea
  - Department of Nuclear Engineering, Pakistan Institute of Engineering and Applied Sciences, P. O. Nilore, Islamabad 45650, Pakistan
- Ji Ae Park
  - Division of Applied RI, Korea Institute of Radiological & Medical Sciences (KIRAMS), Seoul 01812, Republic of Korea
- Jung Young Kim
  - Division of Applied RI, Korea Institute of Radiological & Medical Sciences (KIRAMS), Seoul 01812, Republic of Korea
- Kyo Chul Lee
  - Division of Applied RI, Korea Institute of Radiological & Medical Sciences (KIRAMS), Seoul 01812, Republic of Korea
- Kwang Il Kim
  - Division of Applied RI, Korea Institute of Radiological & Medical Sciences (KIRAMS), Seoul 01812, Republic of Korea
11
Chen KT, Tesfay R, Koran MEI, Ouyang J, Shams S, Young CB, Davidzon G, Liang T, Khalighi M, Mormino E, Zaharchuk G. Generative Adversarial Network-Enhanced Ultra-Low-Dose [18F]-PI-2620 τ PET/MRI in Aging and Neurodegenerative Populations. AJNR Am J Neuroradiol 2023; 44:1012-1019. [PMID: 37591771] [PMCID: PMC10494955] [DOI: 10.3174/ajnr.a7961]
Abstract
BACKGROUND AND PURPOSE Given the utility of hybrid τ PET/MR imaging in the screening, diagnosis, and follow-up of individuals with neurodegenerative diseases, we investigated whether deep learning techniques can enhance ultra-low-dose [18F]-PI-2620 τ PET/MR images to produce diagnostic-quality images. MATERIALS AND METHODS Forty-four healthy aging participants and patients with neurodegenerative diseases were recruited for this study, and [18F]-PI-2620 τ PET/MR data were simultaneously acquired. A generative adversarial network was trained to enhance ultra-low-dose τ images, which were reconstructed from a random sampling of 1/20 (approximately 5% of the original count level) of the original full-dose data. MR images were also used as additional input channels. Region-based analyses as well as a reader study were conducted to assess the image quality of the enhanced images compared with their full-dose counterparts. RESULTS The enhanced ultra-low-dose τ images showed apparent noise reduction compared with the ultra-low-dose images. The regional standard uptake value ratios showed that, while there is a general underestimation for both image types, especially in regions with higher uptake, this bias was reduced in the enhanced ultra-low-dose images for the healthy-but-amyloid-positive population (with relatively lower τ uptake). The radiotracer uptake patterns in the enhanced images were read accurately compared with their full-dose counterparts. CONCLUSIONS The clinical readings of deep learning-enhanced ultra-low-dose τ PET images were consistent with those performed with full-dose imaging, suggesting the possibility of reducing the dose and enabling more frequent examinations for dementia monitoring.
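Ultra-low-dose data of the kind used above are commonly simulated by randomly keeping a fraction of the recorded events (here, 1/20 of the full-dose counts). A minimal sketch of such binomial thinning, assuming integer per-bin counts; `thin_counts` is an illustrative name, not the authors' code:

```python
import random

def thin_counts(counts, keep_fraction, seed=0):
    """Binomial thinning: each recorded event survives independently
    with probability keep_fraction, so full-dose Poisson counts become
    statistically consistent low-dose counts."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(n) if rng.random() < keep_fraction)
            for n in counts]
```

For example, thinning a sinogram bin of 1000 counts with `keep_fraction=0.05` yields roughly 50 counts on average, matching a 1/20-dose acquisition in distribution.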
Affiliation(s)
- K T Chen
  - Department of Biomedical Engineering (K.T.C.), National Taiwan University, Taipei, Taiwan
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- R Tesfay
  - Meharry Medical College (R.T.), Nashville, Tennessee
- M E I Koran
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- J Ouyang
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- S Shams
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- C B Young
  - Department of Neurology and Neurological Sciences (C.B.Y., E.M.), Stanford University, Stanford, California
- G Davidzon
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- T Liang
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- M Khalighi
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- E Mormino
  - Department of Neurology and Neurological Sciences (C.B.Y., E.M.), Stanford University, Stanford, California
- G Zaharchuk
  - Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
12
Sanaei B, Faghihi R, Arabi H. Employing Multiple Low-Dose PET Images (at Different Dose Levels) as Prior Knowledge to Predict Standard-Dose PET Images. J Digit Imaging 2023; 36:1588-1596. [PMID: 36988836] [PMCID: PMC10406788] [DOI: 10.1007/s10278-023-00815-y]
Abstract
The existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as input to the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For 6% L-PET imaging, two models were developed: the first was trained using a single input of 6% L-PET images, and the second using three inputs of 6%, 4%, and 2% L-PET images. Similarly, for 4% L-PET imaging, a model was trained using a single input of 4% low-dose data, and a three-channel model was developed that takes 4%, 3%, and 2% L-PET images as input. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE, by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
Affiliation(s)
- Behnoush Sanaei
  - Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Reza Faghihi
  - Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
13
Yu Z, Rahman A, Laforest R, Schindler TH, Gropler RJ, Wahl RL, Siegel BA, Jha AK. Need for objective task-based evaluation of deep learning-based denoising methods: A study in the context of myocardial perfusion SPECT. Med Phys 2023; 50:4122-4137. [PMID: 37010001] [PMCID: PMC10524194] [DOI: 10.1002/mp.16407]
Abstract
BACKGROUND Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application. PURPOSE DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in these tasks. Our objectives were to: (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation; (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks; and (3) demonstrate the utility of virtual imaging trials (VITs) to evaluate DL-based methods. METHODS A VIT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images was conducted. To conduct this evaluation study, we followed the recently published best practices for the evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this patient population at normal and low-dose count levels (20%, 15%, 10%, 5%) were generated using well-validated Monte Carlo-based simulations. The images were reconstructed using a 3-D ordered-subsets expectation maximization-based approach. Next, the low-dose images were denoised using a commonly used convolutional neural network-based approach. 
The impact of DL-based denoising was evaluated using both fidelity-based FoMs and area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images as obtained using a model observer with anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of this study. RESULTS Based on fidelity-based FoMs, denoising using the considered DL-based method led to significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact, often degraded detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images and of the channel operator-extracted feature vectors between the defect-absent and defect-present cases. CONCLUSIONS The results show the discrepancy between the evaluation of DL-based methods with fidelity-based metrics versus the evaluation on clinical tasks. This motivates the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VITs provide a mechanism to conduct such evaluations computationally, in a time and resource-efficient setting, and avoid risks such as radiation dose to the patient. Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post-processing operations on signal-detection tasks.
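The detection-task figure of merit used above, the empirical AUC, can be computed directly from model-observer test statistics via the rank (Mann-Whitney) formulation. A minimal sketch (illustrative only; the study's observer used anthropomorphic channels, which are not reproduced here):

```python
def empirical_auc(absent_scores, present_scores):
    """Empirical AUC: the probability that a randomly chosen
    defect-present score exceeds a randomly chosen defect-absent
    score, with ties counted as one half (Mann-Whitney U / (n*m))."""
    favorable = 0.0
    for a in absent_scores:
        for p in present_scores:
            if p > a:
                favorable += 1.0
            elif p == a:
                favorable += 0.5
    return favorable / (len(absent_scores) * len(present_scores))
```

Comparing this AUC before and after denoising is exactly the kind of task-based check the study argues should accompany fidelity metrics such as RMSE and SSIM.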
Affiliation(s)
- Zitong Yu
  - Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Ashequr Rahman
  - Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard Laforest
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Thomas H. Schindler
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Robert J. Gropler
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard L. Wahl
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Barry A. Siegel
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Abhinav K. Jha
  - Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
14
Sun J, Jiang H, Du Y, Li CY, Wu TH, Liu YH, Yang BH, Mok GSP. Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion SPECT. J Nucl Cardiol 2023; 30:970-985. [PMID: 35982208] [DOI: 10.1007/s12350-022-03045-x]
Abstract
BACKGROUND Low-dose (LD) myocardial perfusion (MP) SPECT suffers from a high noise level, leading to compromised diagnostic accuracy. Here we investigated the denoising performance for MP-SPECT using a conditional generative adversarial network (cGAN) in the projection domain (cGAN-prj) and the reconstruction domain (cGAN-recon). METHODS Sixty-four noisy SPECT projections were simulated for a population of 100 XCAT phantoms with different anatomical variations and 99mTc-sestamibi distributions. Series of LD projections were obtained by scaling the full-dose (FD) count rate to 1/20 to 1/2 of the original. Twenty patients with 99mTc-sestamibi stress SPECT/CT scans were retrospectively analyzed. For each patient, LD SPECT images (7/10 to 1/10 of FD) were generated from the FD list-mode data. All projections were reconstructed with the quantitative OS-EM method. A 3D cGAN was implemented to predict FD images from their corresponding LD images in the projection and reconstruction domains. The denoised projections were reconstructed and analyzed with various quantitative indices, alongside the cGAN-recon, Gaussian-filtered, and Butterworth-filtered images. RESULTS cGAN denoising improves image quality compared to LD and conventional post-reconstruction filtering. cGAN-prj can further reduce the dose level compared to cGAN-recon without compromising image quality. CONCLUSIONS Denoising based on cGAN-prj is superior to cGAN-recon for MP-SPECT.
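For context, the Butterworth post-reconstruction filter used as a conventional baseline above is typically parameterized in nuclear-medicine software as H(f) = 1 / (1 + (f/f_c)^(2n)). A minimal sketch of that response (one common parameterization; the cutoff and order values below are illustrative, not those of the study):

```python
def butterworth_response(freq, cutoff, order=4):
    """Low-pass response used for post-reconstruction SPECT smoothing:
    H(f) = 1 / (1 + (f / cutoff)**(2 * order)).
    Passes low frequencies (H ~ 1) and suppresses high-frequency
    noise (H -> 0), with steepness controlled by the order."""
    return 1.0 / (1.0 + (freq / cutoff) ** (2 * order))
```

The response is 1 at zero frequency and exactly 0.5 at the cutoff, falling off monotonically beyond it; multiplying the image spectrum by H(f) is what "Butterworth-filtered" means in this comparison.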
Affiliation(s)
- Jingzhang Sun
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
- Han Jiang
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
- Yu Du
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
- Chien-Ying Li
  - Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
  - Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Tung-Hsin Wu
  - Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Yi-Hwa Liu
  - Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
  - Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
- Bang-Hung Yang
  - Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
  - Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Greta S P Mok
  - Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau SAR, China
15
Sanaat A, Shooli H, Böhringer AS, Sadeghi M, Shiri I, Salimi Y, Ginovart N, Garibotto V, Arabi H, Zaidi H. A cycle-consistent adversarial network for brain PET partial volume correction without prior anatomical information. Eur J Nucl Med Mol Imaging 2023; 50:1881-1896. [PMID: 36808000] [PMCID: PMC10199868] [DOI: 10.1007/s00259-023-06152-0]
Abstract
PURPOSE Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were included in this study. The iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including the structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland-Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with the reference PVC images for each radiotracer. RESULTS The Bland-Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29, +0.33 SUV, mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26, +0.24 SUV, mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature, for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT, and eliminates the need for accurate registration, segmentation, or characterization of the PET scanner system response. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
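The Bland-Altman quantities reported above are the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 standard deviations of the paired differences. A minimal sketch of that computation (illustrative helper, not the authors' code; requires at least two pairs):

```python
def bland_altman_limits(reference, predicted):
    """Return (bias, lower_limit, upper_limit) for paired measurements:
    bias is the mean of predicted - reference, and the limits of
    agreement are bias +/- 1.96 * SD of the differences (sample SD)."""
    diffs = [p - r for r, p in zip(reference, predicted)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrow limits centered near zero, as reported for 18F-Flutemetamol above, indicate close voxel-wise agreement between the predicted and reference PVC images.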
Affiliation(s)
- Amirhossein Sanaat
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Hossein Shooli
  - Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Andrew Stephen Böhringer
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Maryam Sadeghi
  - Department of Medical Statistics, Informatics and Health Economics, Medical University of Innsbruck, Schoepfstr. 41, Innsbruck, Austria
- Isaac Shiri
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Nathalie Ginovart
  - Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
  - Department of Psychiatry, Geneva University, Geneva, Switzerland
  - Department of Basic Neuroscience, Geneva University, Geneva, Switzerland
- Valentina Garibotto
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
  - Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
  - Geneva University Neurocenter, University of Geneva, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
16
Ghane B, Karimian A, Mostafapour S, Gholamiankhak F, Shojaerazavi S, Arabi H. Quantitative Analysis of Image Quality in Low-Dose Computed Tomography Imaging for COVID-19 Patients. J Med Signals Sens 2023; 13:118-128. [PMID: 37448548] [PMCID: PMC10336910] [DOI: 10.4103/jmss.jmss_173_21]
Abstract
Background Computed tomography (CT) is one of the main tools to diagnose and grade COVID-19 progression. To limit the radiation burden of CT imaging, low-dose protocols are of crucial importance for reducing the population absorbed dose; however, dose reduction introduces considerable noise into CT images. Methods In this light, we set out to simulate four reduced dose levels (60%, 40%, 20%, and 10% of the standard dose) of standard CT imaging using Beer-Lambert's law across 49 patients infected with COVID-19. Then, three denoising filters, namely Gaussian, bilateral, and median, were applied to the different low-dose CT images, the quality of which was assessed prior to and after the application of the various filters via calculation of the peak signal-to-noise ratio (PSNR), root mean square error (RMSE), structural similarity index measure (SSIM), and relative CT-value bias, separately for the lung tissue and the whole body. Results The quantitative evaluation indicated that 10%-dose CT images have inferior quality (RMSE = 322.1 ± 104.0 HU and bias = 11.44% ± 4.49% in the lung) even after the application of the denoising filters. The bilateral filter exhibited superior performance in suppressing noise and recovering the underlying signals in low-dose CT images compared to the other denoising techniques. For 20%-dose CT images in the lung regions, the bilateral filter led to an RMSE and bias of 100.21 ± 16.47 HU and -0.21% ± 1.20%, respectively, compared to the Gaussian filter (RMSE = 103.46 ± 15.70 HU, bias = 1.02% ± 1.68%) and the median filter (RMSE = 129.60 ± 18.09 HU, bias = -6.15% ± 2.24%). Conclusions The 20%-dose CT imaging followed by bilateral filtering offered a reasonable compromise between image quality and patient dose reduction.
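The image-quality metrics used in this comparison (RMSE, PSNR, relative bias) are straightforward to compute. A minimal sketch, assuming flattened voxel lists in HU and a user-chosen `data_range` (these helper names are illustrative, not the authors' code):

```python
import math

def rmse(ref, img):
    """Root mean square error in the image units (HU for CT)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref))

def psnr(ref, img, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * math.log10(data_range / e)

def relative_bias(ref, img):
    """Relative CT-value bias in percent (mean shift of img vs ref)."""
    return 100.0 * (sum(img) - sum(ref)) / sum(ref)
```

Note that RMSE and PSNR penalize any voxel-wise deviation, while the bias term captures systematic HU shifts such as the -6.15% the median filter introduced in the lung.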
Affiliation(s)
- Behrooz Ghane
  - Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Alireza Karimian
  - Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Samaneh Mostafapour
  - Department of Radiology Technology, Faculty of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Faezeh Gholamiankhak
  - Department of Medical Physics, Faculty of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
- Seyedjafar Shojaerazavi
  - Department of Cardiology, Ghaem Hospital, Mashhad University of Medical Sciences, Mashhad, Iran
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
17
Saboury B, Bradshaw T, Boellaard R, Buvat I, Dutta J, Hatt M, Jha AK, Li Q, Liu C, McMeekin H, Morris MA, Scott PJH, Siegel E, Sunderland JJ, Pandit-Taskar N, Wahl RL, Zuehlsdorff S, Rahmim A. Artificial Intelligence in Nuclear Medicine: Opportunities, Challenges, and Responsibilities Toward a Trustworthy Ecosystem. J Nucl Med 2023; 64:188-196. [PMID: 36522184] [PMCID: PMC9902852] [DOI: 10.2967/jnumed.121.263703]
Abstract
Trustworthiness is a core tenet of medicine. The patient-physician relationship is evolving from a dyad to a broader ecosystem of health care. With the emergence of artificial intelligence (AI) in medicine, the elements of trust must be revisited. We envision a road map for the establishment of trustworthy AI ecosystems in nuclear medicine. In this report, AI is contextualized in the history of technologic revolutions. Opportunities for AI applications in nuclear medicine related to diagnosis, therapy, and workflow efficiency, as well as emerging challenges and critical responsibilities, are discussed. Establishing and maintaining leadership in AI require a concerted effort to promote the rational and safe deployment of this innovative technology by engaging patients, nuclear medicine physicians, scientists, technologists, and referring providers, among other stakeholders, while protecting our patients and society. This strategic plan was prepared by the AI task force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Babak Saboury
  - Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Tyler Bradshaw
  - Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard
  - Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Irène Buvat
  - Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France
- Joyita Dutta
  - Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Mathieu Hatt
  - LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- Abhinav K Jha
  - Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Quanzheng Li
  - Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu
  - Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Helena McMeekin
  - Department of Clinical Physics, Barts Health NHS Trust, London, United Kingdom
- Michael A Morris
  - Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott
  - Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Eliot Siegel
  - Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, Baltimore, Maryland
- John J Sunderland
  - Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Neeta Pandit-Taskar
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York
- Richard L Wahl
  - Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Sven Zuehlsdorff
  - Siemens Medical Solutions USA, Inc., Hoffman Estates, Illinois
- Arman Rahmim
  - Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
18
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. [PMID: 36584395] [DOI: 10.1088/1361-6560/acaf49]
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms, and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
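The projection-domain loss term tuned above (number of angles, loss-term weights) can be illustrated with a toy sketch. This is not the authors' implementation: the rotate-and-sum forward projector, the angle count, and the weights below are all assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles_deg):
    """Crude parallel-beam forward projection: rotate the image,
    then sum along one axis to form one sinogram row per angle."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def combined_loss(mu_pred, mu_ref, n_angles=8, w_img=1.0, w_proj=1.0):
    """Image-domain L1 plus a projection-domain L1 over a few angles.
    Weights and angle count are illustrative, not taken from the paper."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    img_term = np.mean(np.abs(mu_pred - mu_ref))
    proj_term = np.mean(np.abs(forward_project(mu_pred, angles)
                               - forward_project(mu_ref, angles)))
    return w_img * img_term + w_proj * proj_term
```

The projection term penalizes errors the way they actually propagate into attenuation correction: a small attenuation-map error integrated along a line of response can produce a large AC error, which a purely image-domain loss does not see.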
Affiliation(s)
- Luyao Shi, Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey, Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
19
Sanaat A, Akhavanalaf A, Shiri I, Salimi Y, Arabi H, Zaidi H. Deep-TOF-PET: Deep learning-guided generation of time-of-flight from non-TOF brain PET images in the image and projection domains. Hum Brain Mapp 2022; 43:5032-5043. [PMID: 36087092] [PMCID: PMC9582376] [DOI: 10.1002/hbm.26068]
Abstract
We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and to decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed, and the performance of both models (IS and SS) was compared with reference TOF and non-TOF data. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. Bland–Altman analysis revealed that the lowest tracer uptake value bias (−0.02%) and minimum variance (95% CI: −0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.
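The SSIM and RMSE figures quoted above follow standard definitions. As an illustrative aside, a minimal NumPy version looks like the following; note this computes a single-window (global-statistics) SSIM, whereas production implementations average the same expression over local sliding windows.

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def global_ssim(x, y, data_range=1.0):
    """Single-window structural similarity using global image statistics.
    The standard stabilizing constants c1, c2 follow the usual convention
    (0.01 and 0.03 times the data range, squared)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images yield SSIM = 1 and RMSE = 0; values degrade as luminance, contrast, or structure diverge.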
Affiliation(s)
- Amirhossein Sanaat, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Azadeh Akhavanalaf, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
20
Nai YH, Loi HY, O'Doherty S, Tan TH, Reilhac A. Comparison of the performances of machine learning and deep learning in improving the quality of low dose lung cancer PET images. Jpn J Radiol 2022; 40:1290-1299. [PMID: 35809210] [DOI: 10.1007/s11604-022-01311-z]
Abstract
PURPOSE To compare the performances of machine learning (ML) and deep learning (DL) in improving the quality of low dose (LD) lung cancer PET images and to determine the minimum counts required. MATERIALS AND METHODS 33 standard dose (SD) PET images were used to simulate LD PET images at seven count levels of 0.25, 0.5, 1, 2, 5, 7.5 and 10 million (M) counts. Image quality transfer (IQT), an ML algorithm that uses decision trees and patch sampling, was compared to two DL networks: HighResNet (HRN) and deep-boosted regression (DBR). Supervised training was performed by training the ML and DL algorithms with matched-pair SD and LD images. Image quality evaluation and clinical lesion detection tasks were performed by three readers. Bias in 53 radiomic features, including mean SUV, was evaluated for all lesions. RESULTS ML- and DL-estimated images showed higher signal and smaller error than LD images, with optimal image quality recovery achieved using LD images down to 5 M counts. True positive rate and false discovery rate were fairly stable beyond 5 M counts for the detection of small and large true lesions. Readers gave average or higher ratings only to images estimated from LD images above 5 M counts, with higher confidence in detecting true lesions. CONCLUSION LD images with a minimum of 5 M counts (8.72 MBq for a 10 min scan or 25 MBq for a 3 min scan) are required for optimal clinical use of ML and DL, with slightly better but more varied performance shown by DL.
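Low-dose data such as the seven count levels above are typically simulated from standard-dose acquisitions by event thinning. A hedged sketch of the idea, using binomial subsampling of binned counts (this is a generic technique, not necessarily this paper's exact pipeline):

```python
import numpy as np

def thin_counts(counts, keep_fraction, seed=0):
    """Binomial thinning: keep each detected event independently with
    probability keep_fraction. Thinning a Poisson(lam) count yields
    Poisson(lam * keep_fraction), so the simulated low-dose data retain
    realistic count statistics."""
    rng = np.random.default_rng(seed)
    return rng.binomial(counts.astype(np.int64), keep_fraction)

# e.g. a 5 M count dataset from a 10 M count acquisition:
# low_dose_sinogram = thin_counts(full_dose_sinogram, 0.5)
```

With list-mode data the same effect is obtained by keeping each recorded event with probability p before binning; thinning binned counts, as here, is statistically equivalent for independent events.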
Affiliation(s)
- Ying-Hwey Nai, Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore 117599, Singapore
- Hoi Yin Loi, Department of Diagnostic Imaging, National University Hospital, Singapore
- Sophie O'Doherty, Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore 117599, Singapore
- Teng Hwee Tan, Department of Radiation Oncology, National University Cancer Institute, Singapore
- Anthonin Reilhac, Clinical Imaging Research Centre, Yong Loo Lin School of Medicine, National University of Singapore, 14 Medical Drive, #B1-01, Singapore 117599, Singapore
21
Sanaat A, Shiri I, Ferdowsi S, Arabi H, Zaidi H. Robust-Deep: A Method for Increasing Brain Imaging Datasets to Improve Deep Learning Models' Performance and Robustness. J Digit Imaging 2022; 35:469-481. [PMID: 35137305] [PMCID: PMC9156620] [DOI: 10.1007/s10278-021-00536-0]
Abstract
A small dataset commonly limits the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images, including full-dose (FD), low-dose (LD, corresponding to only 5% of the events acquired in the FD scan), non-attenuation-corrected (NAC) and CT-based measured attenuation-corrected (MAC) PET images, CT images, and T1 and T2 MR sequences of 35 patients were included. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to produce natural-looking composites using information in the frequency domain of images from two separate patients, together with a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without using the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed. The comparison between the registered small dataset containing 35 patients and the large dataset containing the 350 synthesized plus 35 real datasets demonstrated improvement of the RMSE and SSIM by 29% and 8% for the LD to FD, 40% and 7% for the LD+MRI to FD, 16% and 8% for the NAC to MAC, and 24% and 11% for the MRI to CT mapping tasks, respectively. The qualitative/quantitative analysis demonstrated that the proposed method improved the performance of all four DNN models by producing images of higher quality and lower quantitative bias and variance compared to the reference images.
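Laplacian blending, as described above, mixes the frequency bands of two patients' images under a progressively smoothed mask so that no visible seam appears. A compact sketch, assuming a simple Gaussian band-pass decomposition rather than the exact pyramid scheme the authors used; all parameters (number of levels, smoothing widths) are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_blend(img_a, img_b, mask, levels=4):
    """Blend two images with a mask in the Laplacian (band-pass) domain.
    Each frequency band of A and B is mixed using a correspondingly
    smoothed version of the mask: fine detail switches sharply at the
    mask edge, coarse structure transitions gradually."""
    out = np.zeros_like(img_a, dtype=float)
    prev_a, prev_b = img_a.astype(float), img_b.astype(float)
    for lvl in range(levels):
        sigma = 2.0 ** lvl
        low_a = gaussian_filter(prev_a, sigma)
        low_b = gaussian_filter(prev_b, sigma)
        m = gaussian_filter(mask.astype(float), sigma)
        # blend the band-pass (Laplacian) component of this level
        out += m * (prev_a - low_a) + (1 - m) * (prev_b - low_b)
        prev_a, prev_b = low_a, low_b
    # blend the residual low-pass band with the smoothest mask
    m = gaussian_filter(mask.astype(float), 2.0 ** levels)
    out += m * prev_a + (1 - m) * prev_b
    return out
```

Because the bands telescope back to the original images, a mask of all ones returns image A and all zeros returns image B; intermediate masks give seamless hybrids, which is what lets two patients' scans be combined into a plausible new training sample.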
Affiliation(s)
- Amirhossein Sanaat, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Sohrab Ferdowsi, University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
22
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031] [PMCID: PMC9250483] [DOI: 10.1007/s00259-022-05746-4]
Abstract
Image processing plays a crucial role in maximising the diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature on this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is presented first. We then review methods which integrate deep learning into the image reconstruction framework, either as deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain, Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Gary F Egan, Monash Biomedical Imaging, Monash University, Melbourne, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
- Zhaolin Chen, Monash Biomedical Imaging, Monash University, Melbourne, Australia; Department of Data Science and AI, Monash University, Melbourne, Australia
23
Gong K, Catana C, Qi J, Li Q. Direct Reconstruction of Linear Parametric Images From Dynamic PET Using Nonlocal Deep Image Prior. IEEE Trans Med Imaging 2022; 41:680-689. [PMID: 34652998] [PMCID: PMC8956450] [DOI: 10.1109/tmi.2021.3120913]
Abstract
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms by combining the PET imaging model and tracer kinetics in an integrated framework. Due to the limited counts received, the signal-to-noise ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited. Recently, supervised deep learning methods have been successfully applied to medical image denoising/reconstruction when a large number of high-quality training labels is available. For static PET imaging, high-quality training labels can be acquired by extending the scanning time. However, this is not feasible for dynamic PET imaging, where the scanning time is already long. In this work, we proposed an unsupervised deep learning framework for direct parametric reconstruction from dynamic PET, which was tested on the Patlak model and the relative equilibrium Logan model. The training objective function was based on the PET statistical model. The patient's anatomical prior image, which is readily available from PET/CT or PET/MR scans, was supplied as the network input to provide a manifold constraint, and was also utilized to construct a kernel layer to perform non-local feature denoising. The linear kinetic model was embedded in the network structure as a 1 × 1 × 1 convolution layer. Evaluations based on dynamic datasets of 18F-FDG and 11C-PiB tracers show that the proposed framework can outperform the traditional and the kernel method-based direct reconstruction methods.
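The Patlak model mentioned above is linear in its parameters, which is what allows it to be embedded as a 1 × 1 × 1 convolution: after transforming the time axis, tissue activity divided by the plasma input lies on a straight line whose slope is the net influx rate Ki. A conventional least-squares sketch with synthetic inputs (this is the textbook fit, not the paper's network layer):

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Fit the Patlak plot  ct/cp = Ki * (cumulative integral of cp)/cp + V
    by linear least squares. Returns the net influx rate Ki (slope) and
    the intercept V. t: time points; cp: plasma input; ct: tissue curve."""
    dt = np.diff(t)
    # cumulative trapezoidal integral of the plasma input
    integ = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * dt)])
    x = integ / cp
    y = ct / cp
    A = np.stack([x, np.ones_like(x)], axis=1)
    (ki, v), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(ki), float(v)
```

Because the model is a weighted sum of two known basis curves per voxel, the same computation over an image volume is exactly a two-channel 1 × 1 × 1 convolution, as the abstract states.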
24
Xing Y, Qiao W, Wang T, Wang Y, Li C, Lv Y, Xi C, Liao S, Qian Z, Zhao J. Deep learning-assisted PET imaging achieves fast scan/low-dose examination. EJNMMI Phys 2022; 9:7. [PMID: 35122172] [PMCID: PMC8816983] [DOI: 10.1186/s40658-022-00431-9]
Abstract
PURPOSE This study aimed to investigate the impact of a deep learning (DL)-based denoising method on the image quality and lesion detectability of 18F-FDG positron emission tomography (PET) images. METHODS Fifty-two oncological patients undergoing 18F-FDG PET/CT imaging with an acquisition of 180 s per bed position were retrospectively included. The list-mode data were rebinned into four datasets: 100% (reference), 75%, 50%, and 33.3% of the total counts, then reconstructed with the OSEM algorithm and post-processed with the DL method and a Gaussian filter (GS). Image quality was assessed using a 5-point Likert scale, and FDG-avid lesions were counted to measure lesion detectability. Standardized uptake values (SUVs) in livers and lesions, liver signal-to-noise ratio (SNR), and target-to-background ratio (TBR) values were compared between the methods. Subgroup analyses compared TBRs after categorizing lesions by parameters such as lesion diameter, uptake, or patient habitus. RESULTS The DL method showed superior performance regarding image noise and inferior performance regarding lesion contrast in the qualitative assessment. More than 96.8% of the lesions were successfully identified in DL images. Excellent agreement on SUVs in livers and lesions was found. The DL method significantly improved the liver SNR for count reductions down to 33.3% (p < 0.001). Lesion TBR was not significantly different between DL and reference images of the 75% dataset; furthermore, there was no significant difference for lesions > 10 mm or for lesions in patients with BMI > 25. For the 50% dataset, there was no significant difference between DL and reference images for the TBR of lesions > 15 mm or lesions with higher uptake than the liver. CONCLUSIONS The developed DL method improved both liver SNR and lesion TBR, indicating better image quality and lesion conspicuousness compared to the GS method. Compared with the reference, it showed non-inferior image quality with counts reduced by 25-50% under various conditions.
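The liver SNR and lesion TBR metrics above have simple ROI-based definitions, though conventions vary between studies (e.g., SUVmax versus SUVmean as the target value); the following is one common form, shown purely as an illustration and not necessarily the definition used in this paper:

```python
import numpy as np

def liver_snr(liver_roi):
    """Liver signal-to-noise ratio: mean uptake in the liver ROI
    divided by its standard deviation (noise surrogate)."""
    return float(liver_roi.mean() / liver_roi.std())

def lesion_tbr(lesion_roi, background_roi):
    """Target-to-background ratio: maximum lesion uptake over the
    mean background uptake (one common convention among several)."""
    return float(lesion_roi.max() / background_roi.mean())
```

A denoiser that only smooths will raise liver SNR while lowering lesion TBR, which is exactly the trade-off the qualitative assessment above reports.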
Affiliation(s)
- Yan Xing, Department of Nuclear Medicine, Shanghai General Hospital, Shanghai Jiaotong University, No. 100 Haining Road, Shanghai 200080, People's Republic of China
- Wenli Qiao, Department of Nuclear Medicine, Shanghai General Hospital, Shanghai Jiaotong University, No. 100 Haining Road, Shanghai 200080, People's Republic of China
- Taisong Wang, Department of Nuclear Medicine, Shanghai General Hospital, Shanghai Jiaotong University, No. 100 Haining Road, Shanghai 200080, People's Republic of China
- Ying Wang, United Imaging Healthcare, Shanghai, People's Republic of China
- Chenwei Li, United Imaging Healthcare, Shanghai, People's Republic of China
- Yang Lv, United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi, United Imaging Healthcare, Shanghai, People's Republic of China
- Shu Liao, Shanghai United Imaging Intelligence Co. Ltd, Shanghai, People's Republic of China
- Zheng Qian, United Imaging Healthcare, Shanghai, People's Republic of China
- Jinhua Zhao, Department of Nuclear Medicine, Shanghai General Hospital, Shanghai Jiaotong University, No. 100 Haining Road, Shanghai 200080, People's Republic of China
25
Shiri I, AmirMozafari Sabet K, Arabi H, Pourkeshavarz M, Teimourian B, Ay MR, Zaidi H. Standard SPECT myocardial perfusion estimation from half-time acquisitions using deep convolutional residual neural networks. J Nucl Cardiol 2021; 28:2761-2779. [PMID: 32347527] [DOI: 10.1007/s12350-020-02119-y]
Abstract
INTRODUCTION The purpose of this work was to assess the feasibility of acquisition time reduction in MPI-SPECT imaging using deep learning techniques through two main approaches, namely reduction of the acquisition time per projection and reduction of the number of angular projections. METHODS SPECT imaging was performed using a dedicated dual-head cardiac SPECT camera with a fixed 90° angle. This study included a prospective cohort of 363 patients with various clinical indications (normal, ischemia, and infarct) referred for MPI-SPECT. For each patient, 32 projections of 20 seconds per projection were acquired using a step-and-shoot protocol from the right anterior oblique to the left posterior oblique view. SPECT projection data were reconstructed using the OSEM algorithm (6 iterations, 4 subsets, Butterworth post-reconstruction filter). For each patient, four different datasets were generated, namely full-time (20 seconds per projection) acquisitions (FT), half-time (10 seconds per projection) acquisitions (HT), 32 full projections (FP), and 16 half projections (HP). Image-to-image transformation via a residual network was implemented to predict FT from HT images and FP from HP images in the projection domain. Qualitative and quantitative evaluations of the proposed framework were performed under a tenfold cross-validation scheme using the root mean square error (RMSE), absolute relative error (ARE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics, as well as clinical quantitative parameters. RESULTS The results demonstrated that the predicted FT images had better quality than the predicted FP images. Among the generated images, predicted FT images resulted in the lowest error metrics (RMSE = 6.8 ± 2.7, ARE = 3.1 ± 1.1%) and the highest similarity index and signal-to-noise ratio (SSIM = 0.97 ± 1.1, PSNR = 36.0 ± 1.4). The highest error metrics (RMSE = 32.8 ± 12.8, ARE = 16.2 ± 4.9%) and the lowest similarity and signal-to-noise ratio (SSIM = 0.93 ± 2.6, PSNR = 31.7 ± 2.9) were observed for HT images. The RMSE decreased significantly (P < .05) for predicted FT (6.8 ± 2.7) relative to predicted FP (8.0 ± 3.6). CONCLUSION Reducing the acquisition time per projection significantly increased the error metrics. The deep neural network effectively recovers image quality and reduces bias in quantification metrics. Further research should be undertaken to explore the impact of time reduction in gated MPI-SPECT.
Affiliation(s)
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva 4, Switzerland
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva 4, Switzerland
- Mozhgan Pourkeshavarz, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran; Department of Computer Engineering, Shahid Beheshti University, Tehran, Iran
- Behnoosh Teimourian, Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Reza Ay, Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran; Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
26
Zhang L, Xiao Z, Zhou C, Yuan J, He Q, Yang Y, Liu X, Liang D, Zheng H, Fan W, Zhang X, Hu Z. Spatial adaptive and transformer fusion network (STFNet) for low-count PET blind denoising with MRI. Med Phys 2021; 49:343-356. [PMID: 34796526] [DOI: 10.1002/mp.15368]
Abstract
PURPOSE Positron emission tomography (PET) has been widely used in various clinical applications. PET is a type of emission computed tomography that operates by detecting positron annihilation radiation. With magnetic resonance imaging (MRI) providing anatomical information, joint PET/MRI reduces the radiation exposure risk of patients. Improved hardware and imaging algorithms have been proposed to further decrease the tracer dose or the bed duration, but few methods focus on denoising low-count PET with MRI input. Existing methods are based on fixed conventional convolution and local attention, which do not sufficiently extract and fuse contextual and complementary information from multimodal input, so there is still much room for improvement. We therefore propose a novel deep learning method for low-count PET/MRI denoising called the spatial-adaptive and transformer fusion network (STFNet), which consists of a Siamese encoder with a spatial-adaptive block (SA-block) and a transformer fusion encoder (TFE). METHODS Our proposed STFNet consists of a Siamese encoder with an SA-block, the TFE, and a two-branch decoder. First, in the encoder, we adopt the SA-block in the Siamese encoder. The SA-block comprises deformable convolution with fusion modulation (DCFM) and two convolutional operations, which help the network extract more relevant and long-range contextual features. Second, the pixel-to-pixel TFE helps the network establish local and global relationships between the high-level feature maps of PET and MRI. In the decoder, we design two branches for PET denoising and MRI translation, and predictions are obtained by trainable weighted summation. The proposed algorithm is implemented to predict synthetic standard-dose neck PET images from low-count neck PET images and MRI, and is compared with the existing U-Net and residual U-Net methods with and without MRI input. RESULTS To demonstrate the advantages of our method, we present configuration studies of the TFE, ablation studies, and empirical comparative studies. Quantitative analyses are based on the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and Pearson correlation coefficient (PCC). Qualitative results show comparisons between our proposed method and existing methods. All experimental results and visualizations show that our method achieves state-of-the-art quantitative and qualitative performance. CONCLUSIONS Based on our experiments, STFNet performs better than existing methods in measurement and visualization. However, our proposed method may still be suboptimal because we apply only the L1 loss to train on our dataset, and the dataset includes corrupted PET with different low counts. In the future, we may exploit a generative adversarial network (GAN)-based paradigm in STFNet to further improve visual quality.
Affiliation(s)
- Lipei Zhang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Zizheng Xiao, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Chao Zhou, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianmin Yuan, Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Qiang He, Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Yongfeng Yang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Xin Liu, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Dong Liang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Hairong Zheng, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Wei Fan, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xu Zhang, Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
27
Amirrashedi M, Sarkar S, Mamizadeh H, Ghadiri H, Ghafarian P, Zaidi H, Ay MR. Leveraging deep neural networks to improve numerical and perceptual image quality in low-dose preclinical PET imaging. Comput Med Imaging Graph 2021; 94:102010. [PMID: 34784505] [DOI: 10.1016/j.compmedimag.2021.102010]
Abstract
The amount of radiotracer injected into laboratory animals is still the most daunting challenge facing translational PET studies. Since low-dose imaging is characterized by a higher level of noise, the quality of the reconstructed images leaves much to be desired. As the most ubiquitous techniques in denoising applications, edge-aware denoising filters and reconstruction-based techniques have drawn significant attention in low-count applications. However, in the last few years, much of the credit has gone to deep learning (DL) methods, which provide more robust solutions under various conditions. Although extensively explored in clinical studies, to the best of our knowledge, the feasibility of DL-based image denoising has not been studied in low-count small-animal PET imaging. Therefore, we investigated different DL frameworks to map low-dose small-animal PET images to their full-dose equivalents with quality and visual similarity on a par with those of standard acquisitions. The performance of the DL model was also compared to other well-established filters, including Gaussian smoothing, nonlocal means, and anisotropic diffusion. Visual inspection and quantitative assessment based on quality metrics demonstrated the superior performance of the DL methods in low-count small-animal PET studies, paving the way for a more detailed exploration of DL-assisted algorithms in this domain.
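As a point of reference for the classical baselines mentioned (Gaussian smoothing, nonlocal means, anisotropic diffusion), even an edge-unaware Gaussian filter reduces error on a noisy synthetic phantom, at the cost of blurring edges. The phantom, the Poisson noise model, and the filter width below are illustrative assumptions, not the paper's data or settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

# Synthetic phantom: a moderately hot disc on a warm background,
# with Poisson noise standing in for a low-count acquisition.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
phantom = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2, 110.0, 80.0)
noisy = rng.poisson(phantom).astype(float)     # "low-dose" image
smoothed = gaussian_filter(noisy, sigma=1.0)   # edge-unaware baseline

err_noisy = mse(noisy, phantom)
err_smoothed = mse(smoothed, phantom)
```

Smoothing suppresses noise variance far more than it adds edge bias here, so the filtered error is lower; edge-aware filters and DL methods aim to keep that noise suppression while also preserving the disc boundary.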
Affiliation(s)
- Mahsa Amirrashedi, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Sarkar, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Mamizadeh, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Ghadiri, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Pardis Ghafarian, Chronic Respiratory Diseases Research Center, National Research Institute of Tuberculosis and Lung Diseases (NRITLD), Shahid Beheshti University of Medical Sciences, Tehran, Iran; PET/CT and Cyclotron Center, Masih Daneshvari Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohammad Reza Ay, Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
Collapse
|
28
|
Deep learning-based denoising of low-dose SPECT myocardial perfusion images: quantitative assessment and clinical performance. Eur J Nucl Med Mol Imaging 2021; 49:1508-1522. [PMID: 34778929] [PMCID: PMC8940834] [DOI: 10.1007/s00259-021-05614-7]
Abstract
Purpose This work set out to investigate the feasibility of dose reduction in SPECT myocardial perfusion imaging (MPI) without sacrificing diagnostic accuracy. A deep learning approach was proposed to synthesize full-dose images from the corresponding low-dose images at different dose reduction levels in the projection space. Methods Clinical SPECT-MPI images of 345 patients acquired on a dedicated cardiac SPECT camera in list-mode format were retrospectively employed to predict standard-dose from low-dose images at half-, quarter-, and one-eighth-dose levels. To simulate realistic low-dose projections, 50%, 25%, and 12.5% of the events were randomly selected from the list-mode data by applying binomial subsampling. A generative adversarial network was implemented to predict non-gated standard-dose SPECT images in the projection space at the different dose reduction levels. Well-established metrics, including peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and the structural similarity index metric (SSIM), in addition to Pearson correlation coefficient analysis and clinical parameters derived from Cedars-Sinai software, were used to quantitatively assess the predicted standard-dose images. For clinical evaluation, the quality of the predicted standard-dose images was rated by a nuclear medicine specialist using a seven-point (−3 to +3) grading scheme. Results The highest PSNR (42.49 ± 2.37) and SSIM (0.99 ± 0.01) and the lowest RMSE (1.99 ± 0.63) were achieved at the half-dose level. Pearson correlation coefficients were 0.997 ± 0.001, 0.994 ± 0.003, and 0.987 ± 0.004 for the predicted standard-dose images at half-, quarter-, and one-eighth-dose levels, respectively. Using the standard-dose images as reference, the Bland-Altman plots for the selected Cedars-Sinai parameters exhibited markedly less bias and variance in the predicted standard-dose images than in the low-dose images at all reduced dose levels. Overall, in the clinical assessment performed by a nuclear medicine specialist, 100%, 80%, and 11% of the predicted standard-dose images were clinically acceptable at half-, quarter-, and one-eighth-dose levels, respectively. Conclusion The noise was effectively suppressed by the proposed network, and the predicted standard-dose images were comparable to the reference standard-dose images at the half- and quarter-dose levels. However, recovery of the underlying signal in low-dose images beyond a quarter of the standard dose was not feasible owing to the very poor signal-to-noise ratio, which adversely affects the clinical interpretation of the resulting images. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05614-7.
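The binomial subsampling used above to simulate reduced-dose acquisitions, together with the PSNR/RMSE quality metrics, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: it thins a count map array-wise rather than selecting true list-mode events, and the names `simulate_low_dose`, `rmse`, and `psnr` are chosen here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_dose(counts, fraction):
    """Binomial subsampling: keep each detected event independently
    with probability `fraction`, mimicking a reduced-dose acquisition."""
    return rng.binomial(counts, fraction)

def rmse(ref, img):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)))

def psnr(ref, img):
    """Peak signal-to-noise ratio (dB) relative to the reference maximum."""
    return float(20.0 * np.log10(np.max(ref) / rmse(ref, img)))

# Toy "standard-dose" projection with roughly 100 counts per bin
full = rng.poisson(100.0, size=(64, 64))
half = simulate_low_dose(full, 0.5)   # half-dose realization
half_scaled = half / 0.5              # rescale to match the full-dose mean
```

Because each event survives independently with probability p, rescaling the thinned data by 1/p restores the mean but not the noise level; that residual noise gap is what the denoising network is trained to close.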
29
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021; 245:118697. [PMID: 34742941] [DOI: 10.1016/j.neuroimage.2021.118697]
Abstract
PURPOSE Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patient comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of the FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images in order to compare the performance of the DNN in sinogram space (SS) versus implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and region-wise standardized uptake value (SUV) bias, with statistical analysis over 83 brain regions. RESULTS SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared with 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS compared with IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (−0.4%) and minimum variance (95% CI: −2.6%, +1.9%) were achieved by the SS images. The voxel-wise t-test analysis revealed voxels with statistically significantly lower values in the LD, IS, and SS images compared with the FD images. CONCLUSION The results demonstrated that images reconstructed from the TOF FD sinograms predicted with the SS approach have higher image quality and lower bias than images predicted from LD images.
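Region-wise SUV bias, the quantitative metric reported above over 83 brain regions, reduces to a masked relative error between predicted and reference images. A minimal NumPy sketch; the function name and the toy region are illustrative, not from the paper:

```python
import numpy as np

def suv_bias_percent(reference, predicted, mask):
    """Mean relative SUV bias (%) of `predicted` versus `reference`
    inside a boolean region `mask` (e.g. one of 83 brain regions)."""
    ref = np.asarray(reference, float)[mask]
    pred = np.asarray(predicted, float)[mask]
    return float(np.mean((pred - ref) / ref) * 100.0)

# Toy example: a region where uptake is uniformly overestimated by 5%
ref = np.full((8, 8), 2.0)
pred = ref * 1.05
region = np.zeros((8, 8), dtype=bool)
region[2:6, 2:6] = True
bias = suv_bias_percent(ref, pred, region)   # ≈ 5%
```

Averaging the signed relative error inside the mask (rather than its absolute value) preserves the direction of the bias, which is what a Bland-Altman-style comparison between implementations needs.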
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.

30
Zhuang H, Zhang J, Liao F. A systematic review on application of deep learning in digestive system image processing. The Visual Computer 2021; 39:2207-2222. [PMID: 34744231] [PMCID: PMC8557108] [DOI: 10.1007/s00371-021-02322-z]
Abstract
With the advent of the big data era, the application of artificial intelligence, represented by deep learning, in medicine has become a hot topic. In gastroenterology, deep learning has achieved remarkable results in endoscopy, imaging, and pathology. Artificial intelligence has been applied to benign gastrointestinal tract lesions, early cancer, tumors, inflammatory bowel disease, and diseases of the liver, pancreas, and other organs. Computer-aided diagnosis significantly improves diagnostic accuracy, reduces physicians' workload, and provides supporting evidence for clinical diagnosis and treatment. In the near future, artificial intelligence will have high application value in the field of medicine. This paper summarizes the latest research on artificial intelligence in diagnosing and treating digestive system diseases and discusses its future in this field. We sincerely hope that our work can serve as a stepping stone for gastroenterologists and computer experts in artificial intelligence research and facilitate the application and development of computer-aided image processing technology in gastroenterology.
Affiliation(s)
- Huangming Zhuang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
- Jixiang Zhang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China
- Fei Liao
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei China

31
Peng Z, Ni M, Shan H, Lu Y, Li Y, Zhang Y, Pei X, Chen Z, Xie Q, Wang S, Xu XG. Feasibility evaluation of PET scan-time reduction for diagnosing amyloid-β levels in Alzheimer's disease patients using a deep-learning-based denoising algorithm. Comput Biol Med 2021; 138:104919. [PMID: 34655898] [DOI: 10.1016/j.compbiomed.2021.104919]
Abstract
PURPOSE To shorten positron emission tomography (PET) scanning time in diagnosing amyloid-β levels, thereby increasing throughput in centers treating Alzheimer's disease (AD) patients. METHODS PET datasets were collected for 25 patients injected with the 18F-AV45 radiopharmaceutical. To generate the necessary training data, PET images from both the normal scanning time (20 min) and shortened scanning times (1, 2, 5, and 10 min) were reconstructed for each patient. Building on our earlier work on MCDNet (Monte Carlo Denoising Net) and a Wasserstein-GAN algorithm, we developed a new denoising model, MCDNet-2, to predict normal-scanning-time PET images from a series of shortened-scanning-time PET images. The quality of the predicted PET images was quantitatively evaluated using objective metrics, including normalized root mean square error (NRMSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR). Furthermore, two radiologists performed subjective evaluations, including a qualitative evaluation and a five-point grading evaluation. The denoising performance of the proposed MCDNet-2 was finally compared with those of U-Net, MCDNet, and traditional Gaussian filtering. RESULTS The proposed MCDNet-2 yielded good denoising performance for 5-min PET images. Among the compared methods, MCDNet-2 performed best in the subjective evaluation, while remaining comparable to MCDNet on the objective metrics (NRMSE, PSNR, and SSIM). In the qualitative evaluation of amyloid-β-positive or -negative results, MCDNet-2 achieved a classification accuracy of 100%. CONCLUSIONS The proposed denoising method reduces the PET scan time from the normal 20 min to 5 min while maintaining acceptable image quality for correctly diagnosing amyloid-β levels. These results strongly suggest that deep learning-based methods such as ours can be an attractive solution to the clinical need to improve PET imaging workflow.
Affiliation(s)
- Zhao Peng
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Ming Ni
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
- Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai, 200433, China; Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai, 201210, China
- Yu Lu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Yongzhe Li
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China
- Yifan Zhang
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
- Zhi Chen
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
- Qiang Xie
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China
- Shicun Wang
- Department of Nuclear Medicine, The First Affiliated Hospital of USTC, Division of Life Science and Medicine, University of Science and Technology of China, Hefei, 230001, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China
- X George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, 230026, China; Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, 230026, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230001, China.

32
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130] [PMCID: PMC8457531] [DOI: 10.1016/j.cpet.2021.06.005]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.

33
Onishi Y, Hashimoto F, Ote K, Ohba H, Ota R, Yoshikawa E, Ouchi Y. Anatomical-guided attention enhances unsupervised PET image denoising performance. Med Image Anal 2021; 74:102226. [PMID: 34563861] [DOI: 10.1016/j.media.2021.102226]
Abstract
Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is fed into the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can reduce PET scan times and PET tracer doses considerably without adversely affecting patients.
Affiliation(s)
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan.
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hiroyuki Ohba
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Ryosuke Ota
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Etsuji Yoshikawa
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Yasuomi Ouchi
- Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi-ku, Hamamatsu 431-3192, Japan

34
Pontoriero AD, Nordio G, Easmin R, Giacomel A, Santangelo B, Jahuar S, Bonoldi I, Rogdaki M, Turkheimer F, Howes O, Veronese M. Automated Data Quality Control in FDOPA brain PET Imaging using Deep Learning. Comput Methods Programs Biomed 2021; 208:106239. [PMID: 34289438] [PMCID: PMC8404039] [DOI: 10.1016/j.cmpb.2021.106239]
Abstract
INTRODUCTION With biomedical imaging research increasingly using large datasets, it becomes critical to find operator-free methods to quality-control the collected data and the associated analysis. Attempts to use artificial intelligence (AI) to perform automated quality control (QC) for both single-site and multi-site datasets have been explored in some neuroimaging techniques (e.g., EEG or MRI), although these methods struggle to find replication in other domains. The aim of this study is to test the feasibility of an automated QC pipeline for brain [18F]-FDOPA PET imaging as a biomarker of the dopamine system. METHODS Two different convolutional neural networks (CNNs) were used and combined to assess spatial misalignment to a standard template and the signal-to-noise ratio (SNR) in 200 static [18F]-FDOPA PET images that had been manually quality-controlled from three different PET/CT scanners. These scans were combined with an additional 400 scans in which misalignment (200 scans) and low SNR (200 scans) were simulated. A cross-validation was performed in which 80% of the data were used for training and 20% for validation. Two additional datasets of [18F]-FDOPA PET images (50 and 100 scans, respectively, each with at least 80% good-quality images) were used for out-of-sample validation. RESULTS The CNN performance was excellent in the training dataset (accuracy for motion: 0.86 ± 0.01; accuracy for SNR: 0.69 ± 0.01), leading to 100% accurate QC classification when applied to the two out-of-sample datasets. Reducing the data dimensionality from 3D to 1D affected the generalizability of the CNNs, especially when the classifiers were applied to the out-of-sample data. CONCLUSIONS This feasibility study shows that it is possible to perform automatic QC of [18F]-FDOPA PET imaging with CNNs. The approach has the potential to be extended to other PET tracers in both brain and non-brain applications, but it depends on the availability of the large datasets necessary for algorithm training.
Collapse
Affiliation(s)
- Antonella D Pontoriero
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Giovanna Nordio
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom.
- Rubaida Easmin
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Alessio Giacomel
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Barbara Santangelo
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom; Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Sameer Jahuar
- Department of Psychological Medicine, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Ilaria Bonoldi
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Maria Rogdaki
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Federico Turkheimer
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Oliver Howes
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom; H. Lundbeck UK, Ottiliavej 9 2500 Valby, Denmark; Institute of Clinical Sciences (ICS), Faculty of Medicine, Imperial College London, Du Cane Road, London W12 0NN
- Mattia Veronese
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom; Department of Information Engineering, University of Padua, Padua, Italy

35
Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging. Eur J Nucl Med Mol Imaging 2021; 48:2405-2415. [PMID: 33495927] [PMCID: PMC8241799] [DOI: 10.1007/s00259-020-05167-1]
Abstract
PURPOSE The tendency is to moderate the injected activity and/or reduce acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS Instead of using synthetic LD scans, two separate clinical WB 18F-fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~27 min) and one fast or LD (~3 min) consisting of 1/8th of the standard acquisition time. A modified cycle-consistent generative adversarial network (CycleGAN) and a residual neural network (ResNet), denoted CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS CGAN scored 4.92 and 3.88 (out of 5; adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and −3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of −0.36, +0.47 for CGAN compared with the reference FD images for malignant lesions. CONCLUSION CycleGAN is able to synthesize clinical FD WB PET images from LD images with 1/8th of the standard injected activity or acquisition time. The predicted FD images show nearly equivalent performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
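The Bland-Altman analysis cited above summarizes paired agreement between predicted and reference values by the mean difference (bias) and the 95% limits of agreement. A minimal NumPy sketch under the usual bias ± 1.96·SD convention; the function name and the toy SUV values are illustrative, not from the paper:

```python
import numpy as np

def bland_altman(reference, predicted):
    """Bland-Altman summary: mean paired difference (bias) and the
    95% limits of agreement, bias ± 1.96 × SD of the differences."""
    diff = np.asarray(predicted, float) - np.asarray(reference, float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))        # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Toy lesion SUVs: the prediction runs a constant 0.1 units high
ref_suv = np.array([2.0, 3.5, 5.0, 7.2])
pred_suv = ref_suv + 0.1
bias, (lo, hi) = bland_altman(ref_suv, pred_suv)
```

A small bias with narrow limits, as reported for CGAN, indicates both accurate and consistent SUV recovery; a wide interval would signal lesion-dependent errors even if the mean bias were near zero.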
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- René Nkoulou
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark

36
Chen KT, Toueg TN, Koran MEI, Davidzon G, Zeineh M, Holley D, Gandhi H, Halbert K, Boumis A, Kennedy G, Mormino E, Khalighi M, Zaharchuk G. True ultra-low-dose amyloid PET/MRI enhanced with deep learning for clinical interpretation. Eur J Nucl Med Mol Imaging 2021; 48:2416-2425. [PMID: 33416955] [PMCID: PMC8891344] [DOI: 10.1007/s00259-020-05151-9]
Abstract
PURPOSE While sampled or short-frame realizations have shown the potential power of deep learning to reduce radiation dose for PET images, evidence in true injected ultra-low-dose cases is lacking. Therefore, we evaluated deep learning enhancement using a significantly reduced injected radiotracer protocol for amyloid PET/MRI. METHODS Eighteen participants underwent two separate 18F-florbetaben PET/MRI studies in which an ultra-low dose (6.64 ± 3.57 MBq, 2.2 ± 1.3% of standard) or a standard dose (300 ± 14 MBq) was injected. The PET counts from the standard-dose list-mode data were also undersampled to approximate an ultra-low-dose session. A pre-trained convolutional neural network was fine-tuned using MR images and either the injected or sampled ultra-low-dose PET as inputs. Image quality of the enhanced images was evaluated using three metrics (peak signal-to-noise ratio, structural similarity, and root mean square error), as well as the coefficient of variation (CV) for regional standardized uptake value ratios (SUVRs). Mean cerebral uptake was correlated across image types to assess the validity of the sampled realizations. To judge clinical performance, four trained readers scored image quality on a five-point scale (using 15% non-inferiority limits for the proportion of studies rated 3 or better) and classified cases as amyloid-positive or amyloid-negative. RESULTS The deep learning-enhanced PET images showed marked improvement on all quality metrics compared with the low-dose images, as well as generally similar regional CVs to the standard-dose images. All enhanced images were non-inferior to their standard-dose counterparts. Accuracy for amyloid status was high (97.2% and 91.7% for images enhanced from injected and sampled ultra-low-dose data, respectively), similar to the intra-reader reproducibility of the standard-dose images (98.6%). CONCLUSION Deep learning methods can synthesize diagnostic-quality PET images from ultra-low injected dose simultaneous PET/MRI data, demonstrating the general validity of sampled realizations and the potential to reduce dose significantly for amyloid imaging.
Collapse
Affiliation(s)
- Kevin T. Chen
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Tyler N. Toueg
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
| | | | - Guido Davidzon
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Michael Zeineh
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Dawn Holley
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Harsh Gandhi
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Kim Halbert
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Athanasia Boumis
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Gabriel Kennedy
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
| | - Elizabeth Mormino
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
| | - Mehdi Khalighi
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| | - Greg Zaharchuk
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
| |
Collapse
|
37
|
Sanaat A, Mirsadeghi E, Razeghi B, Ginovart N, Zaidi H. Fast dynamic brain PET imaging using stochastic variational prediction for recurrent frame generation. Med Phys 2021; 48:5059-5071. [PMID: 34174787 PMCID: PMC8518550 DOI: 10.1002/mp.15063] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 05/30/2021] [Accepted: 06/08/2021] [Indexed: 12/03/2022] Open
Abstract
Purpose We assess the performance of a recurrent frame generation algorithm for prediction of late frames from initial frames in dynamic brain PET imaging. Methods Clinical dynamic 18F‐DOPA brain PET/CT studies of 46 subjects with ten folds cross‐validation were retrospectively employed. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25–90 minutes) from the initial 13 frames (0–25 minutes). The quantitative analysis of the predicted dynamic PET frames was performed for the test and validation dataset using established metrics. Results The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time‐varying tracer biodistribution. The Bland‐Altman plots reported the lowest tracer uptake bias (−0.04) for the putamen region and the smallest variance (95% CI: −0.38, +0.14) for the cerebellum. The region‐wise Patlak graphical analysis in the caudate and putamen regions for eight subjects from the test and validation dataset showed that the average bias for Ki and distribution volume was 4.3%, 5.1% and 4.4%, 4.2%, (P‐value <0.05), respectively. Conclusion We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the last 65 minutes time frames from the initial 25 minutes frames, thus enabling significant reduction in scanning time.
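The Patlak graphical analysis used above reduces to a linear fit once the data are transformed: plotting C_tissue/C_plasma against ∫C_plasma dτ / C_plasma over the late frames yields a line whose slope is Ki and whose intercept is the distribution volume. A minimal sketch with a synthetic, noise-free input function (all curves and constants are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Synthetic plasma input function and tissue curve (hypothetical values).
t = np.linspace(1.0, 90.0, 30)                  # frame mid-times, minutes
cp = 10.0 * np.exp(-0.05 * t) + 1.0             # plasma activity concentration
ki_true, v0_true = 0.03, 0.4

# Cumulative integral of cp via the trapezoid rule.
int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = ki_true * int_cp + v0_true * cp            # irreversible-uptake tissue model

# Patlak transform: y = Ki * x + V0 over the late, linear portion.
x = int_cp / cp
y = ct / cp
ki_est, v0_est = np.polyfit(x[10:], y[10:], 1)  # fit the late frames only
```

With noise-free synthetic data the fit recovers the simulated Ki and V0 exactly; with real frames, the late-frame selection and noise model matter.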
Affiliation(s)
- Amirhossein Sanaat, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ehsan Mirsadeghi, Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran
- Behrooz Razeghi, Department of Computer Sciences, University of Geneva, Geneva, Switzerland; School of Engineering and Applied Sciences, Harvard University, Boston, USA
- Nathalie Ginovart, Department of Psychiatry, Geneva University, Geneva, Switzerland; Department of Basic Neurosciences, Geneva University, Geneva, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; University Medical Center, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

38
Lv Y, Xi C. PET image reconstruction with deep progressive learning. Phys Med Biol 2021; 66. [PMID: 33892485 DOI: 10.1088/1361-6560/abfb17] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 04/23/2021] [Indexed: 11/11/2022]
Abstract
Convolutional neural networks (CNNs) have recently achieved state-of-the-art results for positron emission tomography (PET) imaging problems. However, direct learning from an input image to a target image is challenging if the gap between the two images is large. Previous studies have shown that CNNs can reduce image noise, but they can also degrade contrast recovery for small lesions. In this work, a deep progressive learning (DPL) method for PET image reconstruction is proposed to reduce background noise and improve image contrast. DPL bridges the gap between a low-quality image and a high-quality image through two learning steps. In the iterative reconstruction process, two pre-trained neural networks are introduced to control the image noise and contrast in turn. A feedback structure is adopted in the network design, which greatly reduces the number of parameters. The training data come from uEXPLORER, the world's first total-body PET scanner, in which the PET images show high contrast and very low image noise. We conducted extensive phantom and patient studies to test the algorithm for PET image quality improvement. The experimental results show that DPL is promising for reducing noise and improving contrast of PET images. Moreover, the proposed method has sufficient versatility to solve various imaging and image processing problems.
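The two-step "bridging" idea in DPL, first controlling noise and then restoring contrast with separate pre-trained networks inside the iterative loop, can be caricatured with plain functions standing in for the networks. Everything below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Toy stand-ins for the two pre-trained networks: the first suppresses noise,
# the second restores contrast. Real DPL embeds trained CNNs inside the
# iterative reconstruction; this only illustrates the two-step bridging idea.
def denoise_step(img):
    return 0.5 * (img + img.mean())               # crude smoother (hypothetical)

def contrast_step(img):
    return img.mean() + 1.5 * (img - img.mean())  # crude contrast boost (hypothetical)

low_quality = np.array([0.0, 1.0, 0.2, 0.8])
intermediate = denoise_step(low_quality)          # step 1: noise control
enhanced = contrast_step(intermediate)            # step 2: contrast recovery
```

The point of the cascade is that neither step alone has to span the full gap between the low- and high-quality images.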
Affiliation(s)
- Yang Lv, United Imaging Healthcare, Shanghai, People's Republic of China
- Chen Xi, United Imaging Healthcare, Shanghai, People's Republic of China

39
|
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland; Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa, Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA; Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada

40
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
41
Shiri I, Akhavanallaf A, Sanaat A, Salimi Y, Askari D, Mansouri Z, Shayesteh SP, Hasanian M, Rezaei-Kalantari K, Salahshour A, Sandoughdaran S, Abdollahi H, Arabi H, Zaidi H. Ultra-low-dose chest CT imaging of COVID-19 patients using a deep residual neural network. Eur Radiol 2021; 31:1420-1431. [PMID: 32879987 PMCID: PMC7467843 DOI: 10.1007/s00330-020-07225-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 08/13/2020] [Accepted: 08/21/2020] [Indexed: 02/07/2023]
Abstract
OBJECTIVES The current study aimed to design an ultra-low-dose CT examination protocol using a deep learning approach suitable for clinical diagnosis of COVID-19 patients. METHODS In this study, 800, 170, and 171 pairs of ultra-low-dose and full-dose CT images were used as input/output as training, test, and external validation set, respectively, to implement the full-dose prediction technique. A residual convolutional neural network was applied to generate full-dose from ultra-low-dose CT images. The quality of predicted CT images was assessed using root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Scores ranging from 1 to 5 were assigned reflecting subjective assessment of image quality and related COVID-19 features, including ground glass opacities (GGO), crazy paving (CP), consolidation (CS), nodular infiltrates (NI), bronchovascular thickening (BVT), and pleural effusion (PE). RESULTS The radiation dose in terms of CT dose index (CTDIvol) was reduced by up to 89%. The RMSE decreased from 0.16 ± 0.05 to 0.09 ± 0.02 and from 0.16 ± 0.06 to 0.08 ± 0.02 for the predicted compared with ultra-low-dose CT images in the test and external validation set, respectively. The overall scoring assigned by radiologists showed an acceptance rate of 4.72 ± 0.57 out of 5 for reference full-dose CT images, while ultra-low-dose CT images rated 2.78 ± 0.9. The predicted CT images using the deep learning algorithm achieved a score of 4.42 ± 0.8. CONCLUSIONS The results demonstrated that the deep learning algorithm is capable of predicting standard full-dose CT images with acceptable quality for the clinical diagnosis of COVID-19 positive patients with substantial radiation dose reduction.
KEY POINTS
• Ultra-low-dose CT imaging of COVID-19 patients would result in the loss of critical information about lesion types, which could potentially affect clinical diagnosis.
• Deep learning-based prediction of full-dose from ultra-low-dose CT images for the diagnosis of COVID-19 could reduce the radiation dose by up to 89%.
• Deep learning algorithms failed to recover the correct lesion structure/density for a number of patients considered outliers, and as such, further research and development is warranted to address these limitations.
Affiliation(s)
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Dariush Askari, Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Zahra Mansouri, Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sajad P Shayesteh, Department of Physiology, Pharmacology and Medical Physics, Alborz University of Medical Sciences, Karaj, Iran
- Mohammad Hasanian, Department of Radiology, Arak University of Medical Sciences, Arak, Iran
- Kiara Rezaei-Kalantari, Rajaie Cardiovascular, Medical & Research Center, Iran University of Medical Science, Tehran, Iran
- Ali Salahshour, Department of Radiology, Alborz University of Medical Sciences, Karaj, Iran
- Saleh Sandoughdaran, Department of Radiation Oncology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamid Abdollahi, Department of Radiologic Sciences and Medical Physics, Faculty of Allied Medicine, Kerman University of Medical Sciences, Kerman, Iran
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

42
Wang YRJ, Baratto L, Hawk KE, Theruvath AJ, Pribnow A, Thakor AS, Gatidis S, Lu R, Gummidipundi SE, Garcia-Diaz J, Rubin D, Daldrup-Link HE. Artificial intelligence enables whole-body positron emission tomography scans with minimal radiation exposure. Eur J Nucl Med Mol Imaging 2021; 48:2771-2781. [PMID: 33527176 DOI: 10.1007/s00259-021-05197-3] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Accepted: 01/10/2021] [Indexed: 02/03/2023]
Abstract
PURPOSE To generate diagnostic 18F-FDG PET images of pediatric cancer patients from ultra-low-dose 18F-FDG PET input images, using a novel artificial intelligence (AI) algorithm. METHODS We used whole-body 18F-FDG-PET/MRI scans of 33 children and young adults with lymphoma (3-30 years) to develop a convolutional neural network (CNN), which combines inputs from simulated 6.25% ultra-low-dose 18F-FDG PET scans and simultaneously acquired MRI scans to produce a standard-dose 18F-FDG PET scan. The image quality of ultra-low-dose PET scans, AI-augmented PET scans, and clinical standard PET scans was evaluated by traditional metrics in computer vision and by expert radiologists and nuclear medicine physicians, using Wilcoxon signed-rank tests and weighted kappa statistics. RESULTS The peak signal-to-noise ratio and structural similarity index were significantly higher, and the normalized root-mean-square error was significantly lower on the AI-reconstructed PET images compared to simulated 6.25% dose images (p < 0.001). Compared to the ground-truth standard-dose PET, SUVmax values of tumors and reference tissues were significantly higher on the simulated 6.25% ultra-low-dose PET scans as a result of image noise. After the CNN augmentation, the SUVmax values were recovered to values similar to the standard-dose PET. Quantitative measures of the readers' diagnostic confidence demonstrated significantly higher agreement between standard clinical scans and AI-reconstructed PET scans (kappa = 0.942) than 6.25% dose scans (kappa = 0.650). CONCLUSIONS Our CNN model could generate simulated clinical standard 18F-FDG PET images from ultra-low-dose inputs, while maintaining clinically relevant information in terms of diagnostic accuracy and quantitative SUV measurements.
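Reader agreement in this study was summarized with weighted kappa. A quadratically weighted Cohen's kappa for ordinal scores can be sketched in a few lines of NumPy; the function name and toy ratings below are illustrative, and scikit-learn's `cohen_kappa_score` with `weights='quadratic'` offers an equivalent off-the-shelf version:

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_classes):
    """Cohen's kappa with quadratic weights for ordinal ratings in 0..n_classes-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    observed = np.zeros((n_classes, n_classes))
    np.add.at(observed, (r1, r2), 1)              # joint rating histogram
    observed /= n
    expected = np.outer(np.bincount(r1, minlength=n_classes),
                        np.bincount(r2, minlength=n_classes)) / n ** 2
    i, j = np.indices((n_classes, n_classes))
    w = ((i - j) / (n_classes - 1)) ** 2          # quadratic disagreement weights
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Toy 5-point image-quality scores from two readers (hypothetical data).
reader_a = [0, 1, 2, 3, 4, 4, 3, 2, 1, 0]
reader_b = [0, 1, 2, 3, 4, 3, 3, 2, 1, 1]
kappa = quadratic_weighted_kappa(reader_a, reader_b, n_classes=5)
```

Perfect agreement gives kappa = 1; quadratic weighting penalizes large ordinal disagreements more heavily than adjacent-category ones.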
Affiliation(s)
- Yan-Ran Joyce Wang, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Lucia Baratto, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- K Elizabeth Hawk, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Ashok J Theruvath, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Allison Pribnow, Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Avnesh S Thakor, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Sergios Gatidis, Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
- Rong Lu, Quantitative Sciences Unit, School of Medicine, Stanford University, Stanford, CA 94304, USA
- Santosh E Gummidipundi, Quantitative Sciences Unit, School of Medicine, Stanford University, Stanford, CA 94304, USA
- Jordi Garcia-Diaz, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Daniel Rubin, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA; Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Heike E Daldrup-Link, Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA; Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA

43
Arabi H, Zaidi H. Non-local mean denoising using multiple PET reconstructions. Ann Nucl Med 2021; 35:176-186. [PMID: 33244745 PMCID: PMC7895794 DOI: 10.1007/s12149-020-01550-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 11/07/2020] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Non-local mean (NLM) filtering has been broadly used for denoising of natural and medical images. The NLM filter relies on the redundant information, in the form of repeated patterns/textures, in the target image to discriminate the underlying structures/signals from noise. In PET (or SPECT) imaging, the raw data could be reconstructed using different parameters and settings, leading to different representations of the target image, which contain highly similar structures/signals to the target image contaminated with different noise levels (or properties). In this light, multiple-reconstruction NLM filtering (MR-NLM) is proposed, which relies on the redundant information provided by the different reconstructions of the same PET data (referred to as auxiliary images) to conduct the denoising process. METHODS Implementation of the MR-NLM approach involved the use of twelve auxiliary PET images (in addition to the target image) reconstructed using the same iterative reconstruction algorithm with different numbers of iterations and subsets. For each target voxel, the patches of voxels at the same location are extracted from the auxiliary PET images based on which the NLM denoising process is conducted. Through this, the exhaustive search scheme performed in the conventional NLM method to find similar patches of voxels is bypassed. The performance evaluation of the MR-NLM filter was carried out against the conventional NLM, Gaussian and bilateral post-reconstruction approaches using the experimental Jaszczak phantom and 25 whole-body PET/CT clinical studies. RESULTS The signal-to-noise ratio (SNR) in the experimental Jaszczak phantom study improved from 25.1 when using Gaussian filtering to 27.9 and 28.8 when the conventional NLM and MR-NLM methods were applied (p value < 0.05), respectively. 
Conversely, the Gaussian filter led to quantification bias of 35.4%, while NLM and MR-NLM approaches resulted in a bias of 32.0% and 31.1% (p value < 0.05), respectively. The clinical studies further confirm the superior performance of the MR-NLM method, wherein the quantitative bias measured in malignant lesions (hot spots) decreased from - 12.3 ± 2.3% when using the Gaussian filter to - 3.5 ± 1.3% and - 2.2 ± 1.2% when using the NLM and MR-NLM approaches (p value < 0.05), respectively. CONCLUSION The MR-NLM approach exhibited promising performance in terms of noise suppression and signal preservation for PET images, thus translating into higher SNR compared to the conventional NLM approach. Despite the promising performance of the MR-NLM approach, the additional computational burden owing to the requirement of multiple PET reconstruction still needs to be addressed.
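The core of the MR-NLM idea, weighting co-located voxels from differently reconstructed auxiliary images by patch similarity instead of searching the whole image for similar patches, can be sketched for a single voxel. The function and variable names are hypothetical, not from the paper:

```python
import numpy as np

def mr_nlm_voxel(target_patch, aux_patches, aux_centers, h):
    """Denoise one voxel by weighting auxiliary center values by patch similarity.

    target_patch : (p,)  flattened patch around the target voxel in the target image
    aux_patches  : (k, p) co-located patches from k auxiliary reconstructions
    aux_centers  : (k,)  center-voxel values of those auxiliary patches
    h            : filtering strength (bandwidth of the similarity kernel)
    """
    d2 = np.mean((aux_patches - target_patch) ** 2, axis=1)  # patch distances
    w = np.exp(-d2 / h ** 2)                                 # similarity weights
    return np.sum(w * aux_centers) / np.sum(w)

# Toy usage: three auxiliary reconstructions with patches identical to the target,
# so all weights are equal and the output is the mean of the center values.
target = np.zeros(9)
aux = np.zeros((3, 9))
centers = np.array([0.5, 1.0, 1.5])
denoised = mr_nlm_voxel(target, aux, centers, h=0.1)
```

Because the auxiliary patches are extracted at the same location, the exhaustive spatial search of conventional NLM is bypassed, which is the efficiency argument made in the abstract.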
Affiliation(s)
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, 1211 Geneva 4, Switzerland
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, 1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 5000 Odense, Denmark

44
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 102] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarized recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, together with related clinical applications, in representative studies. The challenges identified across the reviewed studies were then summarized and discussed.
Affiliation(s)
- Tonghe Wang, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne, Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang, Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA

45
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
In order to reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities, such as demographic, clinical, imaging, genetic and environmental data, have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
46
Li W, Liu H, Cheng F, Li Y, Li S, Yan J. Artificial intelligence applications for oncological positron emission tomography imaging. Eur J Radiol 2020; 134:109448. [PMID: 33307463 DOI: 10.1016/j.ejrad.2020.109448] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 10/07/2020] [Accepted: 11/26/2020] [Indexed: 12/16/2022]
Abstract
Positron emission tomography (PET), a functional and dynamic molecular imaging technique, is generally used to reveal the biological behavior of tumors. Radiomics allows high-throughput extraction of multiple features from images using artificial intelligence (AI) approaches and is developing rapidly worldwide. With the development of PET radiomics, quantitative and objective features of medical images have been explored to identify reliable biomarkers. This paper will review the current clinical exploration of PET-based classical machine learning and deep learning methods, including disease diagnosis and the prediction of histological subtype, gene mutation status, tumor metastasis, tumor relapse, therapeutic side effects, therapeutic intervention and evaluation of prognosis. The applications of AI in oncology will be mainly discussed. Image-guided biopsy or surgery assisted by PET-based AI will be introduced as well. This paper aims to present the applications and methods of AI for PET imaging, which may offer important details for further clinical studies. Relevant precautions are put forward and future research directions are suggested.
Affiliation(s)
- Wanting Li, Shanxi Medical University, Taiyuan 030009, PR China; Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China
- Haiyan Liu, Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China; Cellular Physiology Key Laboratory of Ministry of Education, Translational Medicine Research Center, Shanxi Medical University, Taiyuan 030001, PR China
- Feng Cheng, Shanxi Medical University, Taiyuan 030009, PR China
- Yanhua Li, Shanxi Medical University, Taiyuan 030009, PR China
- Sijin Li, Shanxi Medical University, Taiyuan 030009, PR China; Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan 030001, PR China; Collaborative Innovation Center for Molecular Imaging, Taiyuan 030001, PR China
- Jiangwei Yan, Shanxi Medical University, Taiyuan 030009, PR China

47
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741 DOI: 10.1016/j.ymeth.2020.10.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/05/2020] [Accepted: 10/07/2020] [Indexed: 12/18/2022] Open
Abstract
State-of-the-art patient management frequently mandates the investigation of both the anatomy and physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information on the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new challenges arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches for extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies that has shown promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar, QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat, Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut, College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer, QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria

48
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. This review sets out to cover briefly the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
49
Whole-body voxel-based internal dosimetry using deep learning. Eur J Nucl Med Mol Imaging 2020; 48:670-682. [PMID: 32875430 PMCID: PMC8036208 DOI: 10.1007/s00259-020-05013-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 08/23/2020] [Indexed: 12/20/2022]
Abstract
Purpose: In the era of precision medicine, patient-specific dose calculation using Monte Carlo (MC) simulations is deemed the gold-standard technique for risk-benefit analysis of radiation hazards and correlation with patient outcome. Hence, we propose a novel method to perform whole-body, personalized, organ-level dosimetry that takes into account the heterogeneity of the activity distribution, the non-uniformity of the surrounding medium and patient-specific anatomy using deep learning algorithms. Methods: We extended the voxel-scale MIRD approach from a single S-value kernel to specific S-value kernels corresponding to patient-specific anatomy, constructing 3D dose maps from hybrid emission/transmission image sets. In this context, we employed a deep neural network (DNN) to predict the distribution of deposited energy, representing specific S-values, from a single source in the center of a 3D kernel composed of human body geometry. The training dataset consisted of density maps obtained from CT images and reference voxel-wise S-values generated using MC simulations. Specific S-value kernels are then inferred from the trained model, and whole-body dose maps are constructed in a manner analogous to the voxel-based MIRD formalism, i.e., by convolving specific voxel S-values with the activity map. The dose map predicted by the DNN was compared with the MC reference and with two MIRD-based methods, single and multiple S-values (SSV and MSV), as well as the OLINDA/EXM software package. Results: The predicted specific voxel S-value kernels exhibited good agreement with the MC-based reference kernels, with a mean relative absolute error (MRAE) of 4.5 ± 1.8%. Bland–Altman analysis showed the lowest dose bias (2.6%) and the smallest variance (CI: −6.6, +1.3) for the DNN. The MRAEs of the estimated absorbed dose for DNN, MSV and SSV with respect to the MC reference were 2.6%, 3% and 49%, respectively. In organ-level dosimetry, the MRAEs between the proposed method and MSV, SSV and OLINDA/EXM were 5.1%, 21.8% and 23.5%, respectively. Conclusion: The proposed DNN-based whole-body internal dosimetry exhibited performance comparable to the direct MC approach while overcoming the limitations of conventional dosimetry techniques in nuclear medicine.
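The voxel-based MIRD formalism this abstract builds on amounts to a 3D convolution of the cumulated-activity map with an S-value kernel. The sketch below illustrates only that convolution step with synthetic toy data (random activity, a simple distance-decay kernel); the kernel values, array sizes and units are illustrative assumptions, not the paper's trained-DNN kernels:

```python
import numpy as np
from scipy.ndimage import convolve

# Toy cumulated-activity map (e.g., MBq·s per voxel) -- synthetic data.
rng = np.random.default_rng(0)
activity = rng.random((32, 32, 32))

# Toy single S-value kernel (e.g., mGy per MBq·s): dose contribution of a
# source voxel to its neighbours, falling off with distance from the centre.
zz, yy, xx = np.indices((5, 5, 5))
r2 = (zz - 2) ** 2 + (yy - 2) ** 2 + (xx - 2) ** 2
s_kernel = 1.0 / (1.0 + r2)

# Voxel-wise MIRD dose map: D = A (*) S, a 3D convolution over all source voxels.
dose_map = convolve(activity, s_kernel, mode="constant", cval=0.0)
```

In the paper's patient-specific extension, a single shift-invariant kernel is replaced by anatomy-dependent kernels predicted by the DNN from CT-derived density maps, so the superposition is no longer a plain convolution.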
50
Shiri I, Arabi H, Geramifar P, Hajianfar G, Ghafarian P, Rahmim A, Ay MR, Zaidi H. Deep-JASC: joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network. Eur J Nucl Med Mol Imaging 2020; 47:2533-2548. [DOI: 10.1007/s00259-020-04852-5] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Accepted: 05/01/2020] [Indexed: 12/22/2022]