1
Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024; 24:67. PMID: 38504179; PMCID: PMC10953143; DOI: 10.1186/s12880-024-01242-3.
Abstract
BACKGROUND Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can, for instance, include images acquired with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can help exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) images into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best-performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION We showed that deep learning models initially developed with research-quality data could synthesize T1nce from T1ce images of clinical quality and that reliable features could be extracted from the synthetic images, thus demonstrating the ability of such methods to help exploit a data set coming from a clinical data warehouse.
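The standard image similarity measures mentioned here can be illustrated with a short sketch. The volumes, shapes, and values below are hypothetical stand-ins, not the study's data; scikit-image is assumed to be available.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real_t1nce = rng.random((64, 64, 64)).astype(np.float32)        # stand-in volume
synthetic_t1nce = (real_t1nce
                   + 0.05 * rng.standard_normal((64, 64, 64)).astype(np.float32))

# Voxel-wise agreement between the synthetic and real non-contrast volumes
mae = float(np.mean(np.abs(real_t1nce - synthetic_t1nce)))
data_range = float(real_t1nce.max() - real_t1nce.min())
psnr = peak_signal_noise_ratio(real_t1nce, synthetic_t1nce, data_range=data_range)
ssim = structural_similarity(real_t1nce, synthetic_t1nce, data_range=data_range)
print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```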
Affiliation(s)
- Simona Bottani
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire
- Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer
- Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
2
Li X, Johnson JM, Strigel RM, Bancroft LCH, Hurley SA, Estakhraji SIZ, Kumar M, Fowler AM, McMillan AB. Attenuation correction and truncation completion for breast PET/MR imaging using deep learning. Phys Med Biol 2024; 69:045031. PMID: 38252969; DOI: 10.1088/1361-6560/ad2126.
Abstract
Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, U-Net with mean absolute error loss (DLMAE), U-Net with mean squared error loss (DLMSE), and U-Net with perceptual loss (DLPerceptual), were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET/MR images as inputs. The DL and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed-rank statistical tests. Main results. sCT images from the DLMAE, DLMSE, and DLPerceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DLMSE and DLPerceptual sCTs compared to the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to SUV analysis. Significance. A 3D U-Net with MSE or perceptual loss model can be implemented into a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
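The SUV percent-error comparison and Wilcoxon signed-rank testing described above can be sketched as follows; the paired SUV arrays are hypothetical stand-ins for lesion measurements, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
suv_ct = rng.uniform(2.0, 8.0, size=23)             # reference (CT-based AC)
suv_dl = suv_ct * (1.0 + rng.normal(0, 0.02, 23))   # DL-sCT reconstruction

# Percent error relative to the CT reference, then a paired non-parametric test
percent_error = 100.0 * (suv_dl - suv_ct) / suv_ct
stat, p_value = wilcoxon(suv_dl, suv_ct)

print(f"mean percent error = {percent_error.mean():+.2f}%  (p = {p_value:.3f})")
```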
Affiliation(s)
- Xue Li
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Jacob M Johnson
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Roberta M Strigel
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Leah C Henze Bancroft
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Samuel A Hurley
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- S Iman Zare Estakhraji
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Manoj Kumar
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- ICTR Graduate Program in Clinical Investigation, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Amy M Fowler
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Alan B McMillan
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
3
Jahangir R, Kamali-Asl A, Arabi H, Zaidi H. Strategies for deep learning-based attenuation and scatter correction of brain 18F-FDG PET images in the image domain. Med Phys 2024; 51:870-880. PMID: 38197492; DOI: 10.1002/mp.16914.
Abstract
BACKGROUND Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has been recently proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE In this study, different input settings were considered in the model training to investigate deep learning-based AC in the image space. METHODS Three different deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (PET segmentation-based AC [SegAC-PET]), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict ground-truth PET images attenuation-corrected with computed tomography images (CTAC-PET). Since a simple two-class AC map can easily be generated from NonAC-PET images, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model regarding the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that used only NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.
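A minimal sketch of how a two-class AC map might be derived from a NonAC-PET volume and stacked into a Double-Channel input; the threshold, attenuation value, and volumes are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

MU_SOFT_TISSUE = 0.096  # linear attenuation coefficient at 511 keV, cm^-1

def two_class_mu_map(nonac_pet: np.ndarray, threshold: float) -> np.ndarray:
    """Label voxels above threshold as soft tissue, the rest as background air."""
    return np.where(nonac_pet > threshold, MU_SOFT_TISSUE, 0.0).astype(np.float32)

rng = np.random.default_rng(2)
nonac_pet = rng.random((96, 96, 96)).astype(np.float32)
mu_map = two_class_mu_map(nonac_pet, threshold=0.3)

# Double-channel input: NonAC-PET plus SegAC-PET (here approximated by masking
# the NonAC image with the body contour, purely for illustration; in the study
# SegAC-PET is a reconstruction using the two-class map).
segac_pet = nonac_pet * (mu_map > 0)
double_channel = np.stack([nonac_pet, segac_pet], axis=0)  # shape (2, D, H, W)
print(double_channel.shape)
```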
Affiliation(s)
- Reza Jahangir
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
4
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. PMID: 37940064; DOI: 10.1016/j.phrs.2023.106984.
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are the most widely applied in this domain. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems. These applications are possible because such algorithms can analyse complex patterns and relationships within imaging data and extract quantitative and objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB2 1TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB2 1TN, UK
5
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. PMID: 35680755; DOI: 10.1007/s12350-022-03007-3.
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
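The indirect/direct taxonomy described above can be summarized schematically; `unet_mu`, `unet_ac`, and `reconstruct` below are hypothetical stand-ins for trained networks and a reconstruction engine, not components from the review.

```python
import numpy as np

def indirect_ac(nac_image: np.ndarray, emission_data, unet_mu, reconstruct):
    """Indirect strategy: predict a synthetic mu-map (or CT), then feed it
    into a conventional AC reconstruction of the emission data."""
    mu_map = unet_mu(nac_image)                 # NAC image -> synthetic mu-map
    return reconstruct(emission_data, mu_map)   # AC reconstruction with mu-map

def direct_ac(nac_image: np.ndarray, unet_ac):
    """Direct strategy: skip the mu-map and map NAC to AC images end to end."""
    return unet_ac(nac_image)
```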
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA
6
Gandia-Ferrero MT, Torres-Espallardo I, Martínez-Sanchis B, Morera-Ballester C, Muñoz E, Sopena-Novales P, González-Pavón G, Martí-Bonmatí L. Objective Image Quality Comparison Between Brain-Dedicated PET and PET/CT Scanners. J Med Syst 2023; 47:88. PMID: 37589893; DOI: 10.1007/s10916-023-01984-7.
Abstract
As part of a clinical validation of a new brain-dedicated PET system (CMB), the image quality of this scanner has been compared to that of a whole-body PET/CT scanner. To that end, Hoffman phantom and patient data were obtained with both devices. Since the CMB does not use a CT for attenuation correction (AC), which is crucial for PET image quality, this study includes the evaluation of CMB PET images using emission-based or CT-based attenuation maps. PET images were compared using 34 image quality metrics. Moreover, a neural network was used to evaluate the degree of agreement between both devices on patient diagnosis prediction. Overall, results showed that CMB images have higher contrast and recovery coefficients but also higher noise than PET/CT images. Although SUVr values presented statistically significant differences in many brain regions, relative differences were low. An asymmetry between the left and right hemispheres, however, was identified. Even so, the variations between the two devices were minor. Finally, there is a greater similarity between PET/CT and CMB CT-based AC PET images than between PET/CT and CMB emission-based AC PET images.
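A regional SUVr of the kind compared above reduces to a ratio of mean uptakes between a target region and a reference region; the masks, reference-region choice, and values in this sketch are illustrative assumptions.

```python
import numpy as np

def suvr(pet: np.ndarray, region: np.ndarray, reference: np.ndarray) -> float:
    """Mean uptake in a target region normalized by a reference region."""
    return float(pet[region].mean() / pet[reference].mean())

rng = np.random.default_rng(3)
pet = rng.uniform(0.5, 3.0, size=(91, 109, 91)).astype(np.float32)

# Hypothetical binary masks standing in for atlas-derived regions
frontal = np.zeros(pet.shape, dtype=bool); frontal[20:40, 60:90, 40:60] = True
cerebellum = np.zeros(pet.shape, dtype=bool); cerebellum[35:55, 20:40, 10:25] = True

print(f"SUVr (frontal / cerebellum) = {suvr(pet, frontal, cerebellum):.2f}")
```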
Affiliation(s)
- Maria Teresa Gandia-Ferrero
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain
- Irene Torres-Espallardo
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Begoña Martínez-Sanchis
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Enrique Muñoz
- Oncovision, Carrer de Jeroni de Montsoriu, 92, València, 46022, Spain
- Pablo Sopena-Novales
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Luis Martí-Bonmatí
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain
- Radiology Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
7
Sohn JH, Behr SC, Hernandez PM, Seo Y. Quantitative Assessment of Myocardial Ischemia With Positron Emission Tomography. J Thorac Imaging 2023; 38:247-259. PMID: 33492046; PMCID: PMC8295411; DOI: 10.1097/rti.0000000000000579.
Abstract
Recent advances in positron emission tomography (PET) technology and reconstruction techniques have now made quantitative assessment using cardiac PET readily available in most cardiac PET imaging centers. Multiple PET myocardial perfusion imaging (MPI) radiopharmaceuticals are available for quantitative examination of myocardial ischemia, each having a distinct convenience and accuracy profile. Important properties of these radiopharmaceuticals (15O-water, 13N-ammonia, 82Rb, 11C-acetate, and 18F-flurpiridaz), including radionuclide half-life, mean positron range in tissue, and the relationship between kinetic parameters and myocardial blood flow (MBF), are presented. Absolute quantification of MBF requires PET MPI to be performed with protocols that allow the generation of dynamic multiframes of reconstructed data. Using a tissue compartment model, the rate constant that governs the rate of PET MPI radiopharmaceutical extraction from the blood plasma to myocardial tissue is calculated. Then, this rate constant (K1) is converted to MBF using an established extraction formula for each radiopharmaceutical. As most modern PET scanners acquire the data only in list mode, techniques for processing the list-mode data into dynamic multiframes are also reviewed. Finally, the impact of modern PET technologies such as PET/CT, PET/MR, total-body PET, and machine learning/deep learning on comprehensive and quantitative assessment of myocardial ischemia is briefly described in this review.
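The K1-to-MBF conversion described above can be made concrete by numerically inverting a generalized Renkin-Crone extraction model, K1 = MBF * (1 - a*exp(-b/MBF)). The parameters a = 0.77 and b = 0.63 correspond to a published 82Rb model and are used here only as an example; each radiopharmaceutical has its own extraction formula.

```python
from math import exp
from scipy.optimize import brentq

def k1_from_mbf(mbf: float, a: float = 0.77, b: float = 0.63) -> float:
    """K1 = MBF * E(MBF), with extraction fraction E = 1 - a*exp(-b/MBF)."""
    return mbf * (1.0 - a * exp(-b / mbf))

def mbf_from_k1(k1: float) -> float:
    """Numerically invert the extraction formula over a plausible MBF range."""
    return brentq(lambda f: k1_from_mbf(f) - k1, 1e-3, 10.0)

k1 = 0.8  # mL/min/g, hypothetical measured uptake rate constant
print(f"MBF = {mbf_from_k1(k1):.2f} mL/min/g")  # ~1.7 mL/min/g for this model
```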
Affiliation(s)
- Jae Ho Sohn
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
- Spencer C. Behr
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
- Youngho Seo
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA
- Department of Radiation Oncology, University of California, San Francisco, CA
- UC Berkeley-UCSF Graduate Program in Bioengineering, Berkeley and San Francisco, CA
8
Prieto Canalejo MA, Palau San Pedro A, Geronazzo R, Minsky DM, Juárez-Orozco LE, Namías M. Synthetic Attenuation Correction Maps for SPECT Imaging Using Deep Learning: A Study on Myocardial Perfusion Imaging. Diagnostics (Basel) 2023; 13:2214. PMID: 37443608; DOI: 10.3390/diagnostics13132214.
Abstract
(1) Background: The CT-based attenuation correction of SPECT images is essential for obtaining accurate quantitative images in cardiovascular imaging. However, there are still many SPECT cameras without associated CT scanners throughout the world, especially in developing countries. Performing additional CT scans implies troublesome planning logistics and larger radiation doses for patients, making it a suboptimal solution. Deep learning (DL) offers a revolutionary way to generate complementary images for individual patients at a large scale. Hence, we aimed to generate linear attenuation coefficient maps from SPECT emission images reconstructed without attenuation correction using deep learning. (2) Methods: A total of 384 SPECT myocardial perfusion studies that used 99mTc-sestamibi were included. A DL model based on a 2D U-Net architecture was trained using information from 312 patients. The quality of the generated synthetic attenuation correction maps (ACMs) and reconstructed emission values was evaluated using three metrics and compared to standard-of-care data using Bland-Altman plots. Finally, a quantitative evaluation of myocardial uptake was performed, followed by a semi-quantitative evaluation of myocardial perfusion. (3) Results: In a test set of 66 patients, the ACM quality metrics were MSSIM = 0.97 ± 0.001 and NMAE = 3.08 ± 1.26 (%), and the reconstructed emission quality metrics were MSSIM = 0.99 ± 0.003 and NMAE = 0.23 ± 0.13 (%). The 95% limits of agreement (LoAs) at the voxel level for reconstructed SPECT images were [-9.04; 9.00]%, and at the segment level, they were [-11; 10]%. The 95% LoAs for the Summed Stress Score values between the reconstructed images were [-2.8, 3.0]. When global perfusion scores were assessed, only 2 out of 66 patients showed changes in perfusion categories. (4) Conclusion: Deep learning can generate accurate attenuation correction maps from non-attenuation-corrected cardiac SPECT images. These high-quality attenuation maps are suitable for attenuation correction in myocardial perfusion SPECT imaging and could obviate the need for additional imaging in standalone SPECT scanners.
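The 95% limits of agreement reported above follow the usual Bland-Altman recipe, sketched below on hypothetical paired segment values.

```python
import numpy as np

rng = np.random.default_rng(4)
reference = rng.uniform(40, 90, size=300)            # standard-of-care values
test = reference + rng.normal(0.0, 4.5, size=300)    # synthetic-ACM values

# Bland-Altman: mean difference (bias) and bias +/- 1.96 * SD of differences
diff = test - reference
bias = diff.mean()
spread = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:+.2f}, 95% LoA = [{bias - spread:.2f}, {bias + spread:.2f}]")
```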
Affiliation(s)
- Ricardo Geronazzo
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
- Daniel Mauricio Minsky
- Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, San Martín B1650LWP, Argentina
- Mauro Namías
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
9
Laurent B, Bousse A, Merlin T, Nekolla S, Visvikis D. PET scatter estimation using deep learning U-Net architecture. Phys Med Biol 2023; 68. PMID: 36240745; DOI: 10.1088/1361-6560/ac9a97.
Abstract
Objective. Positron emission tomography (PET) image reconstruction needs to be corrected for scatter in order to produce quantitatively accurate images. Scatter correction is traditionally achieved by incorporating an estimated scatter sinogram into the forward model during image reconstruction. Existing scatter estimation methods compromise between accuracy and computing time. Nowadays, scatter estimation is routinely performed using single scatter simulation (SSS), which does not accurately model multiple scatter and scatter from outside the field-of-view, leading to reduced qualitative and quantitative accuracy of reconstructed PET images. Monte Carlo (MC) methods, on the other hand, provide high precision but are computationally expensive and time-consuming, even with recent progress in MC acceleration. Approach. In this work we explore the potential of deep learning (DL) for accurate scatter correction in PET imaging, accounting for all scatter coincidences. We propose a network based on a U-Net convolutional neural network architecture with 5 convolutional layers. The network takes as input the emission and computed tomography (CT)-derived attenuation factor (AF) sinograms and returns the estimated scatter sinogram. The network was trained using MC-simulated PET datasets. Multiple anthropomorphic extended cardiac-torso phantoms of two different regions (lung and pelvis) were created, considering three different body sizes and different levels of statistics. In addition, two patient datasets were used to assess the performance of the method in clinical practice. Main results. Our experiments showed that the accuracy of our method, namely DL-based scatter estimation (DLSE), was independent of the anatomical region (lungs or pelvis). They also showed that the DLSE-corrected images were similar to those reconstructed from scatter-free data and more accurate than SSS-corrected images. Significance. The proposed method is able to estimate scatter sinograms from emission and attenuation data. It has shown better accuracy than SSS, while being faster than MC scatter estimation methods.
Affiliation(s)
- Stephan Nekolla
- Department of Nuclear Medicine, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
10
Raymond C, Jurkiewicz MT, Orunmuyi A, Liu L, Dada MO, Ladefoged CN, Teuho J, Anazodo UC. The performance of machine learning approaches for attenuation correction of PET in neuroimaging: A meta-analysis. J Neuroradiol 2023; 50:315-326. PMID: 36738990; DOI: 10.1016/j.neurad.2023.01.157.
Abstract
PURPOSE This systematic review provides a consensus on the clinical feasibility of machine learning (ML) methods for brain PET attenuation correction (AC). The performance of ML-AC was compared to clinical standards. METHODS Two hundred and eighty studies were identified through electronic searches of brain PET studies published between January 1, 2008, and August 1, 2022. Reported outcomes for image quality, tissue classification performance, and regional and global bias were extracted to evaluate ML-AC performance. The methodological quality of included studies and the quality of evidence of analysed outcomes were assessed using QUADAS-2 and GRADE, respectively. RESULTS A total of 19 studies (2371 participants) met the inclusion criteria. Overall, the global bias of ML methods was 0.76 ± 1.2%. For image quality, the relative mean square error (RMSE) was 0.20 ± 0.4, while for tissue classification, the Dice similarity coefficients (DSC) for bone/soft tissue/air were 0.82 ± 0.1 / 0.95 ± 0.03 / 0.85 ± 0.14. CONCLUSIONS In general, ML-AC performance is within acceptable limits for clinical PET imaging. The sparse information on ML-AC robustness and its limited qualitative clinical evaluation may hinder clinical implementation in neuroimaging, especially for PET/MRI or emerging brain PET systems where standard AC approaches are not readily available.
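The Dice similarity coefficient pooled above for tissue classification can be computed as follows; the binary masks are hypothetical stand-ins.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

rng = np.random.default_rng(5)
pred_bone = rng.random((64, 64, 64)) > 0.7   # predicted bone mask
true_bone = rng.random((64, 64, 64)) > 0.7   # reference (CT) bone mask
print(f"DSC = {dice(pred_bone, true_bone):.3f}")
```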
Affiliation(s)
- Confidence Raymond
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
- Michael T Jurkiewicz
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Akintunde Orunmuyi
- Kenyatta University Teaching, Research and Referral Hospital, Nairobi, Kenya
- Linshan Liu
- Lawson Health Research Institute, London, ON, Canada
- Claes N Ladefoged
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Jarmo Teuho
- Turku PET Centre, Turku University, Turku, Finland; Turku University Hospital, Turku, Finland
- Udunna C Anazodo
- Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Montreal Neurological Institute, 3801 Rue University, Montreal, QC H3A 2B4, Canada
11
Wang B, Lu L, Liu H. DeTransUnet: attenuation correction of gated cardiac images without structural information. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac840e.
Abstract
Objective. Myocardial perfusion imaging (MPI) with positron emission tomography (PET) is a non-invasive imaging method of great significance for the diagnosis and prognosis of coronary heart disease. Attenuation correction (AC) of PET images is a necessary step for further quantitative analysis. To avoid using magnetic resonance (MR) or computed tomography (CT) images for AC, this work proposes DeTransUnet to obtain AC PET images directly from non-attenuation-corrected (NAC) PET images. Approach. The proposed DeTransUnet is a 3D structure that combines multi-scale deformable transformer layers with a 3D convolutional neural network (CNN), integrating the long-range dependence of transformers with the suitability of CNNs for image computation. Images corrected for attenuation and scatter using CT are taken as training labels, while the NAC images are reconstructed without AC and scatter correction (SC). Standardized uptake value (SUV) images are calculated for both NAC and AC images to exclude the influence of weight and injected dose. With NAC SUV images as inputs, the DeTransUnet outputs AC SUV images. Main results. The proposed DeTransUnet was evaluated on an MPI gated-PET dataset, and the results were compared with Unet2D and Unet2.5D. Metrics computed over the whole image and the left ventricular myocardium show that the proposed method has advantages over the other deep learning methods. Significance. The proposed DeTransUnet is a novel AC framework that does not require CT or MR images. It can be used as an independent AC method on PET/MR scanners. In addition, when CT images contain defects or cannot be registered with PET images on PET/CT scanners, DeTransUnet is able to repair the defects and remain consistent with the NAC images.
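The SUV normalization mentioned above, which removes the influence of body weight and injected dose, can be sketched as follows; decay correction of the dose to scan time is included, the 18F half-life is used as an example, and all numbers are hypothetical.

```python
import math

def suv(voxel_activity_bq_per_ml: float, injected_dose_bq: float,
        weight_g: float, minutes_post_injection: float,
        half_life_min: float = 109.77) -> float:  # 18F half-life in minutes
    """SUV = activity concentration / (decay-corrected dose / body weight)."""
    decayed_dose = injected_dose_bq * math.exp(
        -math.log(2) * minutes_post_injection / half_life_min)
    return voxel_activity_bq_per_ml / (decayed_dose / weight_g)

# 8 kBq/mL voxel, 300 MBq injected, 75 kg patient, imaged 60 min post-injection
print(f"SUV = {suv(8_000.0, 300e6, 75_000.0, 60.0):.2f}")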
12
Visvikis D, Lambin P, Beuschau Mauridsen K, Hustinx R, Lassmann M, Rischpler C, Shi K, Pruim J. Application of artificial intelligence in nuclear medicine and molecular imaging: a review of current status and future perspectives for clinical translation. Eur J Nucl Med Mol Imaging 2022; 49:4452-4463. PMID: 35809090; PMCID: PMC9606092; DOI: 10.1007/s00259-022-05891-w.
Abstract
Artificial intelligence (AI) will change the face of nuclear medicine and molecular imaging as it will in everyday life. In this review, we focus on the potential applications of AI in the field, both from a physical (radiomics, underlying statistics, image reconstruction and data analysis) and a clinical (neurology, cardiology, oncology) perspective. Challenges for transferability from research to clinical practice are discussed, as is the concept of explainable AI. Finally, we focus on the challenges that need to be addressed to introduce AI into the field of nuclear medicine and molecular imaging in a reliable manner.
Affiliation(s)
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology, Maastricht University Medical Center (MUMC+), Maastricht, The Netherlands
- Kim Beuschau Mauridsen
- Center of Functionally Integrative Neuroscience and MindLab, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Roland Hustinx
- GIGA-CRC in Vivo Imaging, University of Liège, GIGA, Avenue de l'Hôpital 11, 4000, Liege, Belgium
- Michael Lassmann
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Würzburg, Würzburg, Germany
- Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
- Jan Pruim
- Medical Imaging Center, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
13
Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE Trans Radiat Plasma Med Sci 2022; 6:678-689. PMID: 38223528; PMCID: PMC10785227; DOI: 10.1109/trpms.2021.3118325.
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction methods (MRAC) is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and uncertainties due to the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air/bone). We propose using a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT when compared to CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT derived from Dixon MRI were accurate in normal anatomy; however, the metal implant region was estimated to have the attenuation coefficients of air. UpCT-MLAA estimated the attenuation coefficients of metal implants alongside accurate anatomic depiction outside of implant regions.
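Monte Carlo dropout is one common way to obtain the kind of Bayesian pseudo-CT mean and voxel-wise uncertainty described above; the tiny network, channels, and shapes below are stand-ins, not the authors' architecture.

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout3d(p=0.2),                       # kept stochastic at test time
    nn.Conv3d(16, 1, 3, padding=1),
)

mr_input = torch.randn(1, 2, 32, 32, 32)       # e.g., two MR contrasts
net.train()                                     # keep dropout active
with torch.no_grad():
    samples = torch.stack([net(mr_input) for _ in range(20)])

pseudo_ct_mean = samples.mean(dim=0)            # pseudo-CT estimate
pseudo_ct_std = samples.std(dim=0)              # voxel-wise uncertainty map
print(pseudo_ct_mean.shape, pseudo_ct_std.shape)
```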
Affiliation(s)
- Andrew P Leynes
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
- Sangtae Ahn
- Biology and Physics Department, GE Research, Niskayuna, NY 12309 USA
- Sandeep S Kaushik
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Department of Computer Science, Technical University of Munich, 80333 Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger
- MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA
- Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720 USA
14
Sanaat A, Shiri I, Ferdowsi S, Arabi H, Zaidi H. Robust-Deep: A Method for Increasing Brain Imaging Datasets to Improve Deep Learning Models' Performance and Robustness. J Digit Imaging 2022; 35:469-481. PMID: 35137305; PMCID: PMC9156620; DOI: 10.1007/s10278-021-00536-0.
Abstract
A small dataset commonly affects the generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large, realistic, and diverse dataset. Clinical brain PET/CT/MR images of 35 patients were included: full-dose (FD) PET, low-dose (LD) PET corresponding to only 5% of the events acquired in the FD scan, non-attenuation-corrected (NAC) and CT-based measured attenuation correction (MAC) PET images, CT images, and T1 and T2 MR sequences. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to create natural-looking composite images from the frequency-domain information of images from two separate patients, combined through a blending mask. This classical technique from the computer vision and image processing communities is still widely used and, unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without using the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis, was performed. The quantitative comparison between the registered small dataset containing 35 patients and the large dataset containing the 350 synthesized plus 35 real datasets demonstrated improvement of the RMSE and SSIM by 29% and 8% for the LD to FD, 40% and 7% for the LD+MRI to FD, 16% and 8% for the NAC to MAC, and 24% and 11% for the MRI to CT mapping task, respectively. The qualitative/quantitative analysis demonstrated that the proposed model improved the performance of all four DNN models by producing images of higher quality and lower quantitative bias and variance compared to reference images.
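The Laplacian blending step described above is a classical pyramid technique; a compact 2D sketch with OpenCV is given below (the study applies the idea to registered brain volumes), with hypothetical same-sized inputs.

```python
import cv2
import numpy as np

def laplacian_blend(img_a, img_b, mask, levels=4):
    """Blend two images through Laplacian pyramids under a soft mask."""
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):  # Gaussian pyramids of images and mask
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Laplacian pyramids: each level minus the upsampled next-coarser level
    la = [ga[i] - cv2.pyrUp(ga[i + 1]) for i in range(levels)] + [ga[-1]]
    lb = [gb[i] - cv2.pyrUp(gb[i + 1]) for i in range(levels)] + [gb[-1]]
    blended = [m * a + (1 - m) * b for a, b, m in zip(la, lb, gm)]
    out = blended[-1]
    for level in reversed(blended[:-1]):  # collapse the blended pyramid
        out = cv2.pyrUp(out) + level
    return out

a = np.random.default_rng(6).random((256, 256))
b = np.random.default_rng(7).random((256, 256))
mask = np.zeros((256, 256)); mask[:, :128] = 1.0
mask = cv2.GaussianBlur(mask, (51, 51), 0)      # soften the seam
print(laplacian_blend(a, b, mask).shape)
```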
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
15
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. PMID: 35029818; DOI: 10.1007/s12149-021-01710-8.
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation in PET. We categorized the studies on PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning, and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
16
Decuyper M, Maebe J, Van Holen R, Vandenberghe S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 2021; 8:81. PMID: 34897550; PMCID: PMC8665861; DOI: 10.1186/s40658-021-00426-y.
Abstract
The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review on these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.
Affiliation(s)
- Milan Decuyper
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Jens Maebe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Roel Van Holen
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Stefaan Vandenberghe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
17
Hwang D, Kang SK, Kim KY, Choi H, Lee JS. Comparison of deep learning-based emission-only attenuation correction methods for positron emission tomography. Eur J Nucl Med Mol Imaging 2021; 49:1833-1842. PMID: 34882262; DOI: 10.1007/s00259-021-05637-0.
Abstract
PURPOSE This study aims to compare two approaches using only emission PET data and a convolutional neural network (CNN) to correct the attenuation (μ) of the annihilation photons in PET. METHODS One of the approaches uses a CNN to generate μ-maps from the non-attenuation-corrected (NAC) PET images (μ-CNNNAC). In the other method, a CNN is used to improve the accuracy of μ-maps generated using maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction (μ-CNNMLAA). We investigated the improvement in CNN performance obtained by combining the two methods (μ-CNNMLAA+NAC) and the suitability of μ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS The error of the attenuation correction factors estimated using μ-CT and μ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, indicating the validity of the scatter estimation from μ-CNNNAC. However, CNNNAC provided less accurate bone structures in the μ-maps, while the best results in recovering the fine bone structures were obtained by applying CNNMLAA+NAC. Additionally, the μ-values in the lungs were overestimated by CNNNAC. Activity images (λ) corrected for attenuation using μ-CNNMLAA and μ-CNNMLAA+NAC were superior to those corrected using μ-CNNNAC, in terms of their similarity to λ-CT. However, the improvement in the similarity with λ-CT obtained by combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions, λ-CNNNAC = 5.45% ± 7.88%; λ-CNNMLAA = 1.21% ± 5.74%; λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions, λ-CNNNAC = 1.37% ± 5.16%; λ-CNNMLAA = 0.23% ± 3.81%; λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION The use of CNNNAC was feasible for scatter estimation to address the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
Affiliation(s)
- Donghwi Hwang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Kyeong Yun Kim
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
18
Miller RJH, Singh A, Dey D, Slomka P. Artificial Intelligence and Cardiac PET/Computed Tomography Imaging. PET Clin 2021; 17:85-94. PMID: 34809873; DOI: 10.1016/j.cpet.2021.06.011.
Abstract
Artificial intelligence is an important technology, with rapidly expanding applications for cardiac PET. We review the common terminology, including methods for training and testing, which are fundamental to understanding artificial intelligence. Next, we highlight applications to improve image acquisition, reconstruction, and segmentation. Computed tomographic imaging is commonly acquired in conjunction with PET, and various artificial intelligence methods have been applied, including methods to automatically extract anatomic information or generate synthetic attenuation images. Last, we describe methods to automate disease diagnosis or risk stratification. This summary highlights the current and future clinical applications of artificial intelligence to cardiovascular PET imaging.
Affiliation(s)
- Robert J H Miller
- Department of Cardiac Sciences, University of Calgary, GAA08 HRIC, 3230 Hospital Drive NW, Calgary AB, T2N 4Z6, Canada
- Ananya Singh
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
- Damini Dey
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
- Piotr Slomka
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
19
Sanaat A, Shooli H, Ferdowsi S, Shiri I, Arabi H, Zaidi H. DeepTOFSino: A deep learning model for synthesizing full-dose time-of-flight bin sinograms from their corresponding low-dose sinograms. Neuroimage 2021; 245:118697. PMID: 34742941; DOI: 10.1016/j.neuroimage.2021.118697.
Abstract
PURPOSE Reducing the injected activity and/or the scanning time is a desirable goal to minimize radiation exposure and maximize patients' comfort. To achieve this goal, we developed a deep neural network (DNN) model for synthesizing full-dose (FD) time-of-flight (TOF) bin sinograms from their corresponding fast/low-dose (LD) TOF bin sinograms. METHODS Clinical brain PET/CT raw data of 140 normal and abnormal patients were employed to create LD and FD TOF bin sinograms. The LD TOF sinograms were created through 5% undersampling of FD list-mode PET data. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). Residual network (ResNet) algorithms were trained separately to generate FD bins from LD bins. An extra ResNet model was trained to synthesize FD images from LD images to compare the performance of the DNN in sinogram space (SS) vs its implementation in image space (IS). Comprehensive quantitative and statistical analysis was performed to assess the performance of the proposed model using established quantitative metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, and statistical analysis for 83 brain regions. RESULTS SSIM and PSNR values of 0.97 ± 0.01, 0.98 ± 0.01 and 33.70 ± 0.32, 39.36 ± 0.21 were obtained for IS and SS, respectively, compared to 0.86 ± 0.02 and 31.12 ± 0.22 for the reference LD images. The absolute average SUV bias was 0.96 ± 0.95% and 1.40 ± 0.72% for the SS and IS implementations, respectively. The joint histogram analysis revealed that the lowest mean square error (MSE) and highest correlation (R2 = 0.99, MSE = 0.019) were achieved by SS, compared to IS (R2 = 0.97, MSE = 0.028). The Bland-Altman analysis showed that the lowest SUV bias (-0.4%) and minimum variance (95% CI: -2.6%, +1.9%) were achieved by SS images. The voxel-wise t-test analysis revealed the presence of voxels with statistically significantly lower values in LD, IS, and SS images compared to FD images. CONCLUSION The results demonstrated that images reconstructed from the predicted TOF FD sinograms using the SS approach led to higher image quality and lower bias compared to images predicted from LD images.
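The 5% undersampling used to create the LD data can be sketched as a random retention of list-mode events before TOF binning; the event array below is a hypothetical stand-in for decoded list-mode coincidences.

```python
import numpy as np

rng = np.random.default_rng(8)
n_events = 10_000_000
events = rng.integers(0, 400, size=(n_events, 4))   # hypothetical bin indices

keep = rng.random(n_events) < 0.05                   # retain ~5% of the counts
low_dose_events = events[keep]
print(f"kept {low_dose_events.shape[0]} of {n_events} events "
      f"({100 * low_dose_events.shape[0] / n_events:.2f}%)")
```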
Collapse
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Shooli
- Persian Gulf Nuclear Medicine Research Center, Department of Molecular Imaging and Radionuclide Therapy (MIRT), Bushehr Medical University Hospital, Faculty of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
| | - Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
| |
Collapse
|
20
|
McMillan AB, Bradshaw TJ. Artificial Intelligence-Based Data Corrections for Attenuation and Scatter in Positron Emission Tomography and Single-Photon Emission Computed Tomography. PET Clin 2021; 16:543-552. [PMID: 34364816 PMCID: PMC10562009 DOI: 10.1016/j.cpet.2021.06.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Recent developments in artificial intelligence (AI) technology have enabled new methods that can improve attenuation and scatter correction in PET and single-photon emission computed tomography (SPECT). These technologies will enable accurate and quantitative imaging without the need to acquire a computed tomography image, greatly expanding the capability of PET/MR imaging, PET-only, and SPECT-only scanners. The use of AI to aid in scatter correction will improve image reconstruction speed and patient throughput. This article outlines the use of these new tools, surveys contemporary implementations, and discusses their limitations.
Collapse
Affiliation(s)
- Alan B McMillan
- Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA.
| | - Tyler J Bradshaw
- Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA. https://twitter.com/tybradshaw11
| |
Collapse
|
21
|
Chen Y, Goorden MC, Beekman FJ. Convolutional neural network based attenuation correction for 123I-FP-CIT SPECT with focused striatum imaging. Phys Med Biol 2021; 66. [PMID: 34492646 DOI: 10.1088/1361-6560/ac2470] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 09/07/2021] [Indexed: 11/12/2022]
Abstract
SPECT imaging with 123I-FP-CIT is used for the diagnosis of neurodegenerative disorders like Parkinson's disease. Attenuation correction (AC) can be useful for quantitative analysis of 123I-FP-CIT SPECT. Ideally, AC would be performed based on attenuation maps (μ-maps) derived from perfectly registered CT scans. Such μ-maps, however, are often not available, and possible errors in image registration can induce quantitative inaccuracies in AC-corrected SPECT images. Earlier, we showed that a convolutional neural network (CNN) based approach allows estimation of SPECT-aligned μ-maps for full brain perfusion imaging using only emission data. Here we investigate the feasibility of similar CNN methods for axially focused 123I-FP-CIT scans. We tested our approach on a high-resolution multi-pinhole prototype clinical SPECT system in a Monte Carlo simulation study. Three CNNs that estimate μ-maps in a voxel-wise, patch-wise, and image-wise manner were investigated. As the added value of AC on clinical 123I-FP-CIT scans is still debated, the impact of AC was also reported to check in which cases CNN-based AC could be beneficial. AC using the ground truth μ-maps (GT-AC) and CNN-estimated μ-maps (CNN-AC) was compared with the case when no AC was done (No-AC). Results show that the effect of using GT-AC versus CNN-AC or No-AC on striatal shape and symmetry is minimal. Specific binding ratios (SBRs) from localized regions show a deviation from GT-AC ≤ 2.5% for all three CNN-ACs, while No-AC systematically underestimates SBRs by 13.1%. A strong correlation (r ≥ 0.99) was obtained between GT-AC based SBRs and SBRs from CNN-ACs and No-AC. Absolute quantification (in kBq ml⁻¹) shows a deviation from GT-AC within 2.2% for all three CNN-ACs and of 71.7% for No-AC. To conclude, all three CNNs show comparable performance in accurate μ-map estimation and 123I-FP-CIT quantification. A CNN-estimated μ-map can be a promising substitute for a CT-based μ-map.
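For readers unfamiliar with the specific binding ratio used above, the standard definition is SBR = (C_striatum − C_reference) / C_reference, with a non-specific region (commonly the occipital cortex) as reference. A minimal sketch on synthetic data, with hypothetical VOI coordinates:
```python
# Specific binding ratio as used for 123I-FP-CIT quantification:
# SBR = (C_striatum - C_reference) / C_reference, with a non-specific region
# (commonly occipital cortex) as reference. Synthetic volume, hypothetical VOIs.
import numpy as np

rng = np.random.default_rng(1)
spect = rng.random((128, 128, 128)).astype(np.float32)    # stand-in reconstruction

striatum = np.zeros(spect.shape, dtype=bool)
striatum[50:70, 40:60, 60:80] = True                      # hypothetical striatal VOI
reference = np.zeros(spect.shape, dtype=bool)
reference[90:110, 40:60, 60:80] = True                    # hypothetical reference VOI

sbr = (spect[striatum].mean() - spect[reference].mean()) / spect[reference].mean()
print(f"SBR = {sbr:.3f}")
```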
Collapse
Affiliation(s)
- Yuan Chen
- Section Biomedical Imaging, Department of Radiation Science and Technology, Delft University of Technology, Delft, The Netherlands
| | - Marlies C Goorden
- Section Biomedical Imaging, Department of Radiation Science and Technology, Delft University of Technology, Delft, The Netherlands
| | - Freek J Beekman
- Section Biomedical Imaging, Department of Radiation Science and Technology, Delft University of Technology, Delft, The Netherlands.,MILabs B.V., Utrecht, The Netherlands.,Department of Translational Neuroscience, Brain Center Rudolf Magnus, University Medical Center Utrecht, The Netherlands
| |
Collapse
|
22
|
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771 PMCID: PMC9292636 DOI: 10.1002/mrm.29003] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 08/14/2021] [Accepted: 08/21/2021] [Indexed: 12/22/2022]
Abstract
Purpose We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR image sets were included. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both atlas- and segmentation-based methods, resulting in less than 4% SUV bias across 25 patients, compared to the segmentation-based method with up to 20% SUV bias in bony structures and the atlas-based method with 9% bias in the lung. However, in cases of severe truncation and metal artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. Conclusion The deep learning-based method provides a promising outcome for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
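The three-tissue-class segmentation-based approach compared above is simple enough to sketch: each Dixon-derived tissue label is mapped to a fixed 511 keV attenuation coefficient. The label scheme and coefficient values below are illustrative approximations, not those used in the study.
```python
# Three-tissue-class segmentation-based mu-map: each Dixon-derived label is
# assigned a fixed 511 keV attenuation coefficient. Labels and coefficients
# below are illustrative approximations.
import numpy as np

AIR, LUNG, SOFT = 0, 1, 2
MU_511KEV = {AIR: 0.0, LUNG: 0.022, SOFT: 0.096}          # cm^-1, approximate

def mu_map_from_labels(labels: np.ndarray) -> np.ndarray:
    """Map a tissue-label volume to a mu-map."""
    mu = np.zeros(labels.shape, dtype=np.float32)
    for tissue, coeff in MU_511KEV.items():
        mu[labels == tissue] = coeff
    return mu

labels = np.full((32, 32, 32), AIR, dtype=np.int8)
labels[8:24, 8:24, 8:24] = SOFT                           # toy torso
labels[12:20, 12:20, 12:20] = LUNG                        # toy lung inside it
print(float(mu_map_from_labels(labels).max()))            # 0.096
```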
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland.,Geneva University Neurocenter, Geneva University, Geneva, Switzerland.,Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.,Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
23
|
Mostafapour S, Gholamiankhah F, Dadgar H, Arabi H, Zaidi H. Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med 2021; 46:609-615. [PMID: 33661195 DOI: 10.1097/rlu.0000000000003585] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE This study evaluates the feasibility of direct scatter and attenuation correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning. METHODS Whole-body 68Ga-PSMA PET images of 399 subjects were used to train a residual deep learning model, taking PET non-attenuation-corrected images (PET-nonAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference). Forty-six whole-body 68Ga-PSMA PET images were used as an independent validation dataset. For validation, synthetic deep learning-based attenuation-corrected PET images were assessed considering the corresponding PET-CTAC images as reference. The evaluation metrics included the mean absolute error (MAE) of the SUV, peak signal-to-noise ratio, and structural similarity index (SSIM) in the whole body, as well as in different regions of the body, namely, head and neck, chest, and abdomen and pelvis. RESULTS The deep learning-guided direct attenuation and scatter correction produced images of comparable visual quality to PET-CTAC images. It achieved an MAE, relative error (RE%), SSIM, and peak signal-to-noise ratio of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively, within whole-body images of the independent external validation dataset. The largest RE% was observed in the head and neck region (-5.62% ± 11.73%), although this region exhibited the highest value of the SSIM metric (0.982 ± 0.024). The MAE (SUV) and RE% within the different regions of the body were less than 2.0 (SUV) and 6%, respectively, indicating acceptable performance of the deep learning model. CONCLUSIONS This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning with clinically tolerable errors. The technique has the potential of performing attenuation correction on stand-alone PET or PET/MRI systems.
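The residual deep learning model described above learns the correction to be added to the non-attenuation-corrected input rather than predicting the corrected image from scratch. A minimal PyTorch sketch of that idea, with an illustrative toy architecture rather than the study's network:
```python
# Residual learning for image-domain ASC: the network predicts a correction
# that is added back to the non-attenuation-corrected input. Architecture and
# sizes are illustrative, not the study's model.
import torch
import torch.nn as nn

class ResidualASC(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, pet_nonac: torch.Tensor) -> torch.Tensor:
        return pet_nonac + self.body(pet_nonac)           # input + learned correction

model = ResidualASC()
print(model(torch.randn(1, 1, 32, 32, 32)).shape)         # (batch, channel, D, H, W)
```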
Collapse
Affiliation(s)
- Samaneh Mostafapour
- Department of Radiology Technology, Faculty of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad
| | - Faeze Gholamiankhah
- Department of Medical Physics, Faculty of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd
| | - Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4
| | | |
Collapse
|
24
|
Liu S, Zhang B, Liu Y, Han A, Shi H, Guan T, He Y. Unpaired Stain Transfer Using Pathology-Consistent Constrained Generative Adversarial Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1977-1989. [PMID: 33784619 DOI: 10.1109/tmi.2021.3069874] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Pathological examination is the gold standard for the diagnosis of cancer. Common pathological examinations include hematoxylin-eosin (H&E) staining and immunohistochemistry (IHC). In some cases, it is hard to make an accurate diagnosis of cancer by referring only to H&E staining images, whereas IHC examination can provide further evidence for the diagnostic process. Hence, the generation of virtual IHC images from H&E-stained images would be a good solution to the limited accessibility of IHC examination, especially in low-resource regions. However, existing approaches have limitations in microscopic structure preservation and the consistency of pathology properties. In addition, pixel-level paired data are rarely available. In this work, we propose a novel adversarial learning method for effective Ki-67-stained image generation from the corresponding H&E-stained image. Our method takes full advantage of a structural similarity constraint and skip connections to improve the preservation of structural details; a pathology consistency constraint and a pathological representation network are proposed, for the first time, to enforce that the generated and source images hold the same pathological properties in different staining domains. We empirically demonstrate the effectiveness of our approach on two different unpaired histopathological datasets. Extensive experiments indicate that our method surpasses state-of-the-art approaches by a significant margin. In addition, our approach achieves stable and good performance on unbalanced datasets, demonstrating strong robustness. We believe that our method has significant potential in clinical virtual staining and can advance the progress of computer-aided multi-stain histology image analysis.
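The composite objective described above can be sketched as a weighted sum of an adversarial term, a structure-preservation term, and a pathology consistency term computed on features from a representation network. Everything below (the weights, the L1 stand-in for the SSIM-based constraint, the toy feature extractor) is an assumption for illustration:
```python
# Composite generator loss: adversarial + structure preservation + pathology
# consistency. Weights, the L1 stand-in for the SSIM-based constraint, and the
# toy feature extractor are all illustrative assumptions.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, fake_ihc, src_he, feat_net,
                   w_adv=1.0, w_struct=10.0, w_path=5.0):
    # Adversarial term: fool the discriminator on generated IHC images.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Structure term: keep the microscopic structure of the source H&E image.
    struct = F.l1_loss(fake_ihc, src_he)
    # Pathology consistency: same pathological representation in both domains.
    path = F.l1_loss(feat_net(fake_ihc), feat_net(src_he))
    return w_adv * adv + w_struct * struct + w_path * path

feat_net = lambda x: x.mean(dim=1, keepdim=True)          # toy feature extractor
fake, real = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
logits = torch.randn(1, 1, 8, 8)                          # patch-discriminator output
print(generator_loss(logits, fake, real, feat_net).item())
```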
Collapse
|
25
|
Catana C, Laforest R, An H, Boada F, Cao T, Faul D, Jakoby B, Jansen FP, Kemp BJ, Kinahan PE, Larson PEZ, Levine MA, Maniawski P, Mawlawi O, McConathy J, McMillan A, Price JC, Rajagopal A, Sunderland J, Veit-Haibach P, Wangerin KA, Ying C, Hope TA. A Path to Qualification of PET/MR Scanners for Multicenter Brain Imaging Studies: Evaluation of MR-based Attenuation Correction Methods Using a Patient Phantom. J Nucl Med 2021; 63:615-621. [PMID: 34301784 PMCID: PMC8973286 DOI: 10.2967/jnumed.120.261881] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 06/06/2021] [Indexed: 11/25/2022] Open
Abstract
PET/MRI scanners cannot be qualified in the manner adopted for hybrid PET/CT devices. The main hurdle with qualification in PET/MRI is that attenuation correction (AC) cannot be adequately measured in conventional PET phantoms because of the difficulty in converting the MR images of the physical structures (e.g., plastic) into electron density maps. Over the last decade, a plethora of novel MRI-based algorithms has been developed to more accurately derive the attenuation properties of the human head, including the skull. Although promising, none of these techniques has yet emerged as an optimal and universally adopted strategy for AC in PET/MRI. In this work, we propose a path for PET/MRI qualification for multicenter brain imaging studies. Specifically, our solution is to separate the head AC from the other factors that affect PET data quantification and use a patient as a phantom to assess the former. The emission data collected on the integrated PET/MRI scanner to be qualified should be reconstructed using both MRI- and CT-based AC methods, and whole-brain qualitative and quantitative (both voxelwise and regional) analyses should be performed. The MRI-based approach will be considered satisfactory if the PET quantification bias is within the acceptance criteria specified here. We have implemented this approach successfully across 2 PET/MRI scanner manufacturers at 2 sites.
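The quantitative part of the proposed qualification can be pictured as a regional-bias check of the MRI-based reconstruction against the CT-based reference. A minimal sketch with synthetic volumes; the ±5% threshold and the region masks are placeholders, not the acceptance criteria specified in the paper:
```python
# Sketch of a regional-bias qualification check: MRI-based AC reconstruction vs
# CT-based reference. Region masks and the +/-5% acceptance window below are
# placeholders, not the criteria from the paper.
import numpy as np

def regional_bias(pet_mrac, pet_ctac, region_masks):
    """Percent bias of the MR-based reconstruction per labelled region."""
    return {name: 100.0 * (pet_mrac[m].mean() - pet_ctac[m].mean())
                  / pet_ctac[m].mean()
            for name, m in region_masks.items()}

rng = np.random.default_rng(2)
ctac = rng.random((64, 64, 64)) + 1.0                     # reference reconstruction
mrac = ctac * (1.0 + 0.01 * rng.standard_normal(ctac.shape))

masks = {name: np.zeros(ctac.shape, dtype=bool) for name in ("frontal", "cerebellum")}
masks["frontal"][10:30, 10:30, 30:50] = True              # hypothetical regions
masks["cerebellum"][40:60, 20:40, 10:25] = True

biases = regional_bias(mrac, ctac, masks)
print(biases, "PASS" if all(abs(b) <= 5.0 for b in biases.values()) else "FAIL")
```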
Collapse
Affiliation(s)
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, United States
| | - Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine
| | | | - Fernando Boada
- Department of Radiology, Center for Advanced Imaging Innovation and Research, New York University Langone Medical Center
| | - Tuoyu Cao
- Shanghai United Imaging Healthcare Co., Ltd., China
| | | | | | | | | | | | | | | | - Piotr Maniawski
- Philips Healthcare, Advanced Molecular Imaging, United States
| | | | | | - Alan McMillan
- University of Wisconsin School of Medicine and Public Health
| | | | - Abhejit Rajagopal
- Department of Radiology and Biomedical Imaging, University of California, San Francisco
| | | | | | | | - Chunwei Ying
- Department of Biomedical Engineering, Washington University in St. Louis
| | | |
Collapse
|
26
|
Arabi H, Zaidi H. Assessment of deep learning-based PET attenuation correction frameworks in the sinogram domain. Phys Med Biol 2021; 66. [PMID: 34167094 DOI: 10.1088/1361-6560/ac0e79] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 06/24/2021] [Indexed: 02/04/2023]
Abstract
This study set out to investigate various deep learning frameworks for PET attenuation correction in the sinogram domain. Different models for both time-of-flight (TOF) and non-TOF PET emission data were implemented, including direct estimation of the attenuation-corrected (AC) emission sinograms from the non-AC sinograms, estimation of the attenuation correction factors (ACFs) from PET emission data, correction of scattered photons prior to training of the models, and separate training of the models for each segment of the emission sinograms. A segmentation-based 2-class AC map was included as a baseline technique for comparison of the different models, considering PET/CT AC as reference. Fifty clinical TOF PET/CT brain scans were employed for training, whereas 20 were used for evaluation of the models. Quantitative analysis of the resulting PET images was carried out through region-wise standardized uptake value (SUV) bias calculation. The models relying on TOF information significantly outperformed the non-TOF models as well as the segmentation-based AC map, resulting in maximum SUV biases of 6.5%, 9.5%, and 14.0%, respectively. Estimation of ACFs from either TOF or non-TOF PET emission data was very sensitive to prior scatter correction. However, direct estimation of AC sinograms from non-AC sinograms revealed no sensitivity to scatter correction, thus obviating the need for prior scatter estimation. For TOF PET data, though direct prediction of the AC sinograms does not require prior estimation of scattered photons, it requires input/output channels equal to the number of TOF bins, which might be computationally or memory-wise expensive. Prediction of the ACF matrices from TOF emission data is less demanding in terms of memory, as it requires only a single channel for output. Overall, AC in the sinogram domain of TOF PET data exhibited superior performance compared to both non-TOF and segmentation-based methods, at the cost of requiring multiple input/output channels.
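The sinogram-domain models above exploit a simple physical relationship: in PET, each line of response is attenuated by exp(−∫μ dl) regardless of where along the line the annihilation occurred, so the corrected sinogram is the measured one multiplied by the ACFs. A toy parallel-beam demonstration using scikit-image's radon transform; the geometry and pixel size are assumptions:
```python
# Toy demonstration: PET attenuation correction factors (ACFs) are
# exp(line integrals of mu) along each line of response, and the corrected
# sinogram is the measured sinogram multiplied by the ACFs. Parallel-beam
# geometry via scikit-image's radon transform; pixel size is an assumption.
import numpy as np
from skimage.transform import radon

mu = np.zeros((128, 128), dtype=np.float32)
yy, xx = np.mgrid[:128, :128]
mu[(yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2] = 0.096     # soft-tissue disk, cm^-1

angles = np.arange(0.0, 180.0)
pixel_cm = 0.1                                            # assumed pixel size in cm
acf = np.exp(radon(mu, theta=angles, circle=False) * pixel_cm)

emission = np.ones((128, 128), dtype=np.float32)
true_sino = radon(emission, theta=angles, circle=False)
nonac_sino = true_sino / acf                              # what the scanner measures
ac_sino = nonac_sino * acf                                # attenuation-corrected
print(np.allclose(ac_sino, true_sino))
```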
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland.,Geneva Neuroscience Center, Geneva University, CH-1205 Geneva, Switzerland.,Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands.,Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark
| |
Collapse
|
27
|
Murata T, Yokota H, Yamato R, Horikoshi T, Tsuneda M, Kurosawa R, Hashimoto T, Ota J, Sawada K, Iimori T, Masuda Y, Mori Y, Suyari H, Uno T. Development of attenuation correction methods using deep learning in brain-perfusion single-photon emission computed tomography. Med Phys 2021; 48:4177-4190. [PMID: 34061380 DOI: 10.1002/mp.15016] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 05/25/2021] [Accepted: 05/26/2021] [Indexed: 12/22/2022] Open
Abstract
PURPOSE Computed tomography (CT)-based attenuation correction (CTAC) in single-photon emission computed tomography (SPECT) is highly accurate, but it requires hybrid SPECT/CT instruments and additional radiation exposure. To obtain attenuation correction (AC) without the need for additional CT images, a deep learning method that generates pseudo-CT images has previously been reported, but it is limited by cross-modality transformation, which can result in misalignment and modality-specific artifacts. This study aimed to develop a deep learning-based approach using non-attenuation-corrected (NAC) images and CTAC-based images for training to yield AC images in brain-perfusion SPECT. This study also investigated whether the proposed approach is superior to conventional Chang's AC (ChangAC). METHODS In total, 236 patients who underwent brain-perfusion SPECT were randomly divided into two groups: the training group (189 patients; 80%) and the test group (47 patients; 20%). Two models were constructed using an autoencoder (AutoencoderAC) and a U-Net (U-NetAC), respectively. The ChangAC, AutoencoderAC, and U-NetAC approaches were compared with CTAC using qualitative analysis (visual evaluation) and quantitative analysis (normalized mean squared error [NMSE] and the percentage error in each brain region). Statistical analyses were performed using the Wilcoxon signed-rank test and Bland-Altman analysis. RESULTS U-NetAC had the highest visual evaluation score. The NMSE results for U-NetAC were the lowest, followed by AutoencoderAC and ChangAC (P < 0.001). Bland-Altman analysis showed a fixed bias for ChangAC and AutoencoderAC and a proportional bias for ChangAC. ChangAC underestimated counts by 30-40% in all brain regions. AutoencoderAC and U-NetAC produced mean errors of <1% and maximum errors of 3%. CONCLUSION New deep learning-based AC methods, AutoencoderAC and U-NetAC, were developed. Their accuracy was higher than that obtained with ChangAC, and U-NetAC exhibited higher qualitative and quantitative accuracy than AutoencoderAC. We generated highly accurate AC images directly from NAC images without the need for intermediate pseudo-CT images. To verify our models' generalizability, external validation is required.
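Chang's AC, the conventional baseline above, divides each reconstructed voxel by its attenuation factor averaged over all projection angles. A naive ray-marching sketch for a uniform disk, with illustrative units (μ expressed per pixel):
```python
# First-order Chang AC: divide each voxel by the attenuation factor
# exp(-integral of mu) averaged over projection angles, here computed by naive
# ray marching through a uniform disk. Units and sizes are illustrative.
import numpy as np

N, MU, STEP = 64, 0.02, 0.5                               # grid, mu per pixel, step (px)
yy, xx = np.mgrid[:N, :N]
inside = (yy - N / 2) ** 2 + (xx - N / 2) ** 2 < (N / 2 - 4) ** 2
mu_map = np.where(inside, MU, 0.0)

attn = np.zeros((N, N))
angles = np.deg2rad(np.arange(0, 360, 10))
for th in angles:
    dy, dx = np.sin(th) * STEP, np.cos(th) * STEP
    integral = np.zeros((N, N))
    y, x = yy.astype(float), xx.astype(float)
    for _ in range(int(2 * N / STEP)):                    # march each ray off the grid
        y, x = y + dy, x + dx
        valid = (y >= 0) & (y < N) & (x >= 0) & (x < N)
        iy = y.astype(int).clip(0, N - 1)
        ix = x.astype(int).clip(0, N - 1)
        integral += np.where(valid, mu_map[iy, ix] * STEP, 0.0)
    attn += np.exp(-integral)
attn /= len(angles)

recon_no_ac = np.where(inside, 1.0, 0.0) * attn           # toy non-AC reconstruction
chang_corrected = recon_no_ac / attn                      # recovers ~1.0 inside disk
print(round(float(chang_corrected[N // 2, N // 2]), 3))
```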
Collapse
Affiliation(s)
- Taisuke Murata
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Hajime Yokota
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
| | - Ryuhei Yamato
- Graduate School of Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Takuro Horikoshi
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Masato Tsuneda
- Department of Radiation Oncology, MR Linac ART Division, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
| | - Ryuna Kurosawa
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Takuma Hashimoto
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Joji Ota
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Koichi Sawada
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Takashi Iimori
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Yoshitada Masuda
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
| | - Yasukuni Mori
- Graduate School of Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Hiroki Suyari
- Graduate School of Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
| |
Collapse
|
28
|
Jiang C, Zhang X, Zhang N, Zhang Q, Zhou C, Yuan J, He Q, Yang Y, Liu X, Zheng H, Fan W, Hu Z, Liang D. Synthesizing PET/MR (T1-weighted) images from non-attenuation-corrected PET images. Phys Med Biol 2021; 66. [PMID: 34098534 DOI: 10.1088/1361-6560/ac08b2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 06/07/2021] [Indexed: 11/12/2022]
Abstract
Positron emission tomography (PET) imaging can be used for early detection, diagnosis, and postoperative patient monitoring of many diseases. Traditional PET imaging requires not only additional computed tomography (CT) or magnetic resonance (MR) imaging to provide anatomical information but also the calculation of an attenuation correction (AC) map based on CT or MR images for accurate quantitative estimation. During a patient's treatment, PET/CT or PET/MR scans are inevitably repeated many times, leading to additional doses of ionizing radiation (CT scans) and additional economic and time costs (MR scans). To reduce adverse effects while obtaining high-quality PET/MR images in the course of a patient's treatment, especially in the stage of evaluating the effect of postoperative treatment, we propose a new deep learning-based method that can directly obtain synthetic attenuation-corrected PET (sAC PET) and synthetic T1-weighted MR (sMR) images based only on non-attenuation-corrected PET (NAC PET) images. Our model, based on the Wasserstein generative adversarial network, first removes noise and artifacts from the NAC PET images to generate sAC PET images and then generates sMR images from the obtained sAC PET images. To evaluate the performance of this generative model, we evaluated it on paired PET/MR images from a total of eighty clinical patients. Based on qualitative and quantitative analysis, the generated sAC PET and sMR images showed a high degree of similarity to the real AC PET and real MR images. These results indicate that our proposed method can reduce the frequency of additional anatomical imaging scans during PET imaging and has great potential for improving clinical diagnostic efficiency, reducing patients' costs, and reducing the radiation risk associated with CT scanning.
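The Wasserstein GAN objective underlying the model above can be reduced to a few lines: the critic maximizes D(real) − D(fake) while the generator maximizes D(fake). The toy 3D modules below are stand-ins, not the study's architecture (no weight clipping or gradient penalty shown):
```python
# Minimal Wasserstein-GAN objective: critic maximizes D(real) - D(fake);
# generator maximizes D(fake). Toy 3D modules only.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))
generator = nn.Conv3d(1, 1, 3, padding=1)                 # stand-in: NAC PET -> sAC PET

nac_pet = torch.rand(2, 1, 16, 16, 16)
real_ac = torch.rand(2, 1, 16, 16, 16)
fake_ac = generator(nac_pet)

critic_loss = critic(fake_ac.detach()).mean() - critic(real_ac).mean()
generator_loss = -critic(fake_ac).mean()
print(critic_loss.item(), generator_loss.item())
```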
Collapse
Affiliation(s)
- Changhui Jiang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.,National Innovation Center for Advanced Medical Devices, Shenzhen 518131, People's Republic of China
| | - Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
| | - Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
| | - Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
| | - Qiang He
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai 201807, People's Republic of China
| | - Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou 510060, People's Republic of China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| | - Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
| |
Collapse
|
29
|
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. [PMID: 34102630 DOI: 10.1088/1361-6560/ac093b] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 06/08/2021] [Indexed: 11/11/2022]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Collapse
Affiliation(s)
- Andre Z Kyme
- School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, AUSTRALIA
| | - Roger R Fulton
- Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, AUSTRALIA
| |
Collapse
|
30
|
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744 PMCID: PMC8107336 DOI: 10.21037/qims-20-1078] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Accepted: 02/14/2021] [Indexed: 12/12/2022]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI to image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Collapse
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
| | - Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
| |
Collapse
|
31
|
Hashimoto F, Ito M, Ote K, Isobe T, Okada H, Ouchi Y. Deep learning-based attenuation correction for brain PET with various radiotracers. Ann Nucl Med 2021; 35:691-701. [PMID: 33811600 DOI: 10.1007/s12149-021-01611-w] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Accepted: 03/17/2021] [Indexed: 01/24/2023]
Abstract
OBJECTIVES Attenuation correction (AC) is crucial for ensuring the quantitative accuracy of positron emission tomography (PET) imaging. However, obtaining accurate μ-maps from brain-dedicated PET scanners without an AC acquisition mechanism is challenging. To overcome this problem, we developed a deep learning-based PET AC (deep AC) framework to synthesize transmission computed tomography (TCT) images from non-AC (NAC) PET images using a convolutional neural network (CNN) trained on a large dataset of various radiotracers for brain PET imaging. METHODS The proposed framework comprises three steps: (1) NAC PET image generation, (2) synthetic TCT generation using the CNN, and (3) PET image reconstruction. We trained the CNN on a combined, mixed image dataset of six radiotracers to avoid overfitting: [18F]FDG, [18F]BCPP-EF, [11C]Raclopride, [11C]PIB, [11C]DPA-713, and [11C]PBB3. We used 1261 brain NAC PET and TCT images (1091 for training and 70 for testing). We did not include [11C]Methionine subjects in the training dataset but included them in the testing dataset. RESULTS The image quality of the synthetic TCT images obtained using the CNN trained on the mixed dataset of six radiotracers was superior to that obtained using CNNs trained on the split datasets generated from each radiotracer. In the [18F]FDG study, the mean relative PET biases of the emission-segmented AC (ESAC) and deep AC were 8.46 ± 5.24 and -5.69 ± 4.97, respectively. The deep AC PET and TCT AC PET images exhibited excellent correlation for all seven radiotracers (R2 = 0.912-0.982). CONCLUSION These results indicate that our proposed deep AC framework can provide quantitatively superior PET images when the CNN is trained on a mixed dataset of PET tracers rather than on tracer-specific split datasets.
Collapse
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., Hamamatsu, 434-8601, Japan.
| | - Masanori Ito
- Global Strategic Challenge Center, Hamamatsu Photonics K.K., Hamamatsu, 434-8601, Japan.
| | - Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., Hamamatsu, 434-8601, Japan
| | - Takashi Isobe
- Central Research Laboratory, Hamamatsu Photonics K.K., Hamamatsu, 434-8601, Japan
| | - Hiroyuki Okada
- Global Strategic Challenge Center, Hamamatsu Photonics K.K., Hamamatsu, 434-8601, Japan.,Hamamatsu Medical Imaging Center, Hamamatsu Medical Photonics Foundation, Hamamatsu, 434-8601, Japan
| | - Yasuomi Ouchi
- Department of Biofunctional Imaging, Preeminent Medical Photonics Education and Research Center, Hamamatsu University School of Medicine, Hamamatsu, 431-3192, Japan
| |
Collapse
|
32
|
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Collapse
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland; .,Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland.,Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands.,Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
| | - Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA.,Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA.,Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
| |
Collapse
|
33
|
Meikle SR, Sossi V, Roncali E, Cherry SR, Banati R, Mankoff D, Jones T, James M, Sutcliffe J, Ouyang J, Petibon Y, Ma C, El Fakhri G, Surti S, Karp JS, Badawi RD, Yamaya T, Akamatsu G, Schramm G, Rezaei A, Nuyts J, Fulton R, Kyme A, Lois C, Sari H, Price J, Boellaard R, Jeraj R, Bailey DL, Eslick E, Willowson KP, Dutta J. Quantitative PET in the 2020s: a roadmap. Phys Med Biol 2021; 66:06RM01. [PMID: 33339012 PMCID: PMC9358699 DOI: 10.1088/1361-6560/abd4f7] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Positron emission tomography (PET) plays an increasingly important role in research and clinical applications, catalysed by remarkable technical advances and a growing appreciation of the need for reliable, sensitive biomarkers of human function in health and disease. Over the last 30 years, a large amount of the physics and engineering effort in PET has been motivated by the dominant clinical application during that period, oncology. This has led to important developments such as PET/CT, whole-body PET, 3D PET, accelerated statistical image reconstruction, and time-of-flight PET. Despite impressive improvements in image quality as a result of these advances, the emphasis on static, semi-quantitative 'hot spot' imaging for oncologic applications has meant that the capability of PET to quantify biologically relevant parameters based on tracer kinetics has not been fully exploited. More recent advances, such as PET/MR and total-body PET, have opened up the ability to address a vast range of new research questions, from which a future expansion of applications and radiotracers appears highly likely. Many of these new applications and tracers will, at least initially, require quantitative analyses that more fully exploit the exquisite sensitivity of PET and the tracer principle on which it is based. It is also expected that they will require more sophisticated quantitative analysis methods than those that are currently available. At the same time, artificial intelligence is revolutionizing data analysis and impacting the relationship between the statistical quality of the acquired data and the information we can extract from the data. In this roadmap, leaders of the key sub-disciplines of the field identify the challenges and opportunities to be addressed over the next ten years that will enable PET to realise its full quantitative potential, initially in research laboratories and, ultimately, in clinical practice.
Collapse
Affiliation(s)
- Steven R Meikle
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Australia
| | - Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Canada
| | - Emilie Roncali
- Department of Biomedical Engineering, University of California, Davis, United States of America
| | - Simon R Cherry
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Radiology, University of California, Davis, United States of America
| | - Richard Banati
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Australia
- Australian Nuclear Science and Technology Organisation, Sydney, Australia
| | - David Mankoff
- Department of Radiology, University of Pennsylvania, United States of America
| | - Terry Jones
- Department of Radiology, University of California, Davis, United States of America
| | - Michelle James
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), CA, United States of America
- Department of Neurology and Neurological Sciences, Stanford University, CA, United States of America
| | - Julie Sutcliffe
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Internal Medicine, University of California, Davis, CA, United States of America
| | - Jinsong Ouyang
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
| | - Yoann Petibon
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
| | - Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
| | - Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
| | - Suleman Surti
- Department of Radiology, University of Pennsylvania, United States of America
| | - Joel S Karp
- Department of Radiology, University of Pennsylvania, United States of America
| | - Ramsey D Badawi
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Radiology, University of California, Davis, United States of America
| | - Taiga Yamaya
- National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
| | - Go Akamatsu
- National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
| | - Georg Schramm
- Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
| | - Ahmadreza Rezaei
- Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
| | - Johan Nuyts
- Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging, KU Leuven, Belgium
| | - Roger Fulton
- Brain and Mind Centre, The University of Sydney, Australia
- Department of Medical Physics, Westmead Hospital, Sydney, Australia
| | - André Kyme
- Brain and Mind Centre, The University of Sydney, Australia
- School of Biomedical Engineering, Faculty of Engineering and IT, The University of Sydney, Australia
| | - Cristina Lois
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
| | - Hasan Sari
- Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
| | - Julie Price
- Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
| | - Ronald Boellaard
- Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam University Medical Center, location VUMC, Netherlands
| | - Robert Jeraj
- Departments of Medical Physics, Human Oncology and Radiology, University of Wisconsin, United States of America
- Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
| | - Dale L Bailey
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Faculty of Science, The University of Sydney, Australia
| | - Enid Eslick
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
| | - Kathy P Willowson
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Faculty of Science, The University of Sydney, Australia
| | - Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, United States of America
| |
Collapse
|
34
|
Chen Y, Goorden MC, Beekman FJ. Automatic attenuation map estimation from SPECT data only for brain perfusion scans using convolutional neural networks. Phys Med Biol 2021; 66:065006. [PMID: 33571975 DOI: 10.1088/1361-6560/abe557] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
In clinical brain SPECT, correction for photon attenuation in the patient is essential to obtain images that provide quantitative information on the regional activity concentration per unit volume (kBq/ml). This correction generally requires an attenuation map (μ map) denoting the attenuation coefficient at each voxel, which is often derived from a CT or MRI scan. However, such an additional scan is not always available, and the method may suffer from registration errors. Therefore, we propose a SPECT-only strategy for μ map estimation that we apply to a stationary multi-pinhole clinical SPECT system (G-SPECT-I) for 99mTc-HMPAO brain perfusion imaging. The method is based on the use of a convolutional neural network (CNN) and was validated with Monte Carlo simulated scans. Data acquired in list mode were used so that the energy information of both primary and scattered photons could be exploited to obtain as much information about tissue attenuation as possible. Multiple SPECT reconstructions were performed from different energy windows over a large energy range. Locally extracted 4D SPECT patches (three spatial dimensions plus one energy dimension) were used as input for the CNN, which was trained to predict the attenuation coefficient of the corresponding central voxel of the patch. Results show that attenuation correction using the ground truth μ maps (GT-AC) or using the CNN-estimated μ maps (CNN-AC) achieves comparable accuracy. This was confirmed by a visual assessment as well as a quantitative comparison; the mean deviation from the GT-AC when using the CNN-AC is within 1.8% for the standardized uptake values in all brain regions. Therefore, our results indicate that a CNN-based method can be an automatic and accurate tool for SPECT attenuation correction that is independent of attenuation data from other imaging modalities or human interpretations about head contours.
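The 4D input described above stacks reconstructions from multiple energy windows along a fourth axis and samples a local patch around each voxel. A shape-only sketch with illustrative dimensions:
```python
# 4D patch sampling: reconstructions from several energy windows are stacked
# along a fourth axis, and a local (energy x 3D) patch around each voxel is
# the CNN input used to predict that voxel's attenuation coefficient.
# All shapes are illustrative.
import numpy as np

n_energy, D = 9, 48                                       # energy windows, volume size
recons = np.random.rand(n_energy, D, D, D).astype(np.float32)

def extract_patch(recons, center, half=3):
    z, y, x = center
    return recons[:, z - half:z + half + 1,
                     y - half:y + half + 1,
                     x - half:x + half + 1]

print(extract_patch(recons, center=(24, 24, 24)).shape)   # (9, 7, 7, 7)
```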
Collapse
Affiliation(s)
- Yuan Chen
- Section Biomedical Imaging, Department of Radiation Science and Technology, Delft University of Technology, Delft, The Netherlands
| | | | | |
Collapse
|
35
|
Yang J, Sohn JH, Behr SC, Gullberg GT, Seo Y. CT-less Direct Correction of Attenuation and Scatter in the Image Space Using Deep Learning for Whole-Body FDG PET: Potential Benefits and Pitfalls. Radiol Artif Intell 2021; 3:e200137. [PMID: 33937860 PMCID: PMC8043359 DOI: 10.1148/ryai.2020200137] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 11/04/2020] [Accepted: 11/13/2020] [Indexed: 05/14/2023]
Abstract
PURPOSE To demonstrate the feasibility of CT-less attenuation and scatter correction (ASC) in the image space using deep learning for whole-body PET, with a focus on the potential benefits and pitfalls. MATERIALS AND METHODS In this retrospective study, 110 whole-body fluorodeoxyglucose (FDG) PET/CT studies acquired in 107 patients (mean age ± standard deviation, 58 years ± 18; age range, 11-92 years; 72 females) from February 2016 through January 2018 were randomly collected. A total of 37.3% (41 of 110) of the studies showed metastases, with diverse FDG PET findings throughout the whole body. A U-Net-based network was developed for directly transforming noncorrected PET (PETNC) into attenuation- and scatter-corrected PET (PETASC). Deep learning-corrected PET (PETDL) images were quantitatively evaluated using the normalized root mean square error of the standardized uptake value (SUV), the peak signal-to-noise ratio, and the structural similarity index, in addition to a joint histogram for statistical analysis. Qualitative reviews by radiologists revealed the potential benefits and pitfalls of this correction method. RESULTS The normalized root mean square error (0.21 ± 0.05 [mean SUV ± standard deviation]), mean peak signal-to-noise ratio (36.3 ± 3.0), mean structural similarity index (0.98 ± 0.01), and voxelwise correlation (97.62%) of PETDL demonstrated quantitatively high similarity with PETASC. Radiologist reviews confirmed the overall quality of PETDL. The potential benefits of PETDL include a radiation dose reduction on follow-up scans and artifact removal in regions with attenuation- and scatter-correction-based artifacts. The pitfalls involve potential false-negative results due to blurred or missing lesions, or false-positive results due to pseudo-low-uptake patterns. CONCLUSION Deep learning-based direct ASC at whole-body PET is feasible and can potentially be used to overcome the current limitations of CT-based approaches, benefiting patients who are sensitive to radiation from CT.
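The joint-histogram and voxelwise-correlation analysis used above can be reproduced in a few lines of NumPy; note that NRMSE normalization conventions vary, and the range-based one below is only one choice. Synthetic vectors stand in for the PET volumes:
```python
# Joint histogram and voxelwise correlation between deep-learning-corrected
# and reference attenuation/scatter-corrected PET. Synthetic SUV vectors;
# NRMSE is normalized by the reference range here (conventions vary).
import numpy as np

rng = np.random.default_rng(3)
pet_asc = (rng.random(100_000) * 10.0).astype(np.float32) # reference SUVs
pet_dl = pet_asc + 0.2 * rng.standard_normal(pet_asc.shape).astype(np.float32)

hist2d, xedges, yedges = np.histogram2d(pet_asc, pet_dl, bins=128)
r = np.corrcoef(pet_asc, pet_dl)[0, 1]
nrmse = np.sqrt(np.mean((pet_dl - pet_asc) ** 2)) / (pet_asc.max() - pet_asc.min())
print(f"voxelwise r = {r:.4f}, NRMSE = {nrmse:.4f}")
```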
Collapse
|
36
|
Gong K, Yang J, Larson PEZ, Behr SC, Hope TA, Seo Y, Li Q. MR-based Attenuation Correction for Brain PET Using 3D Cycle-Consistent Adversarial Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:185-192. [PMID: 33778235 PMCID: PMC7993643 DOI: 10.1109/trpms.2020.3006844] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Attenuation correction (AC) is important for the quantitative merits of positron emission tomography (PET). However, attenuation coefficients cannot be derived directly from magnetic resonance (MR) images in PET/MR systems. In this work, we aimed to derive continuous AC maps from Dixon MR images without requiring MR and computed tomography (CT) image registration. To achieve this, a 3D generative adversarial network with both discriminative and cycle-consistency losses (Cycle-GAN) was developed. A modified 3D U-net was employed as the structure of the generative networks to generate the pseudo CT/MR images. 3D patch-based discriminative networks were used to distinguish the generated pseudo CT/MR images from the true CT/MR images. To evaluate its performance, datasets from 32 patients were used in the experiment. The Dixon segmentation and atlas methods provided by the vendor, and a convolutional neural network (CNN) method that utilized registered MR and CT images, were employed as the reference methods. Dice coefficients of the pseudo-CT images and the regional quantification in the reconstructed PET images were compared. Results show that the Cycle-GAN framework generates better AC maps than the Dixon segmentation and atlas methods and performs comparably to the CNN method.
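The cycle-consistency loss that lets the network above train without registered MR/CT pairs penalizes the round trip through both generators: mapping an image to the other domain and back must reproduce the original. A minimal PyTorch sketch with toy stand-in generators:
```python
# Cycle-consistency term: mapping MR -> CT -> MR (and CT -> MR -> CT) must
# reproduce the original volume, removing the need for registered pairs.
# The single Conv3d "generators" are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

g_mr2ct = nn.Conv3d(1, 1, 3, padding=1)
g_ct2mr = nn.Conv3d(1, 1, 3, padding=1)

mr = torch.rand(1, 1, 16, 16, 16)
ct = torch.rand(1, 1, 16, 16, 16)

cycle_loss = (F.l1_loss(g_ct2mr(g_mr2ct(mr)), mr) +
              F.l1_loss(g_mr2ct(g_ct2mr(ct)), ct))
print(cycle_loss.item())
```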
Collapse
Affiliation(s)
- Kuang Gong
- Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
| | - Jaewon Yang
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
| | - Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
| | - Spencer C Behr
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
| | - Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
| | - Youngho Seo
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
| | - Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
| |
Collapse
|
37
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
38
|
Tao L, Fisher J, Anaya E, Li X, Levin CS. Pseudo CT Image Synthesis and Bone Segmentation From MR Images Using Adversarial Networks With Residual Blocks for MR-Based Attenuation Correction of Brain PET Data. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2989073] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
39
|
Yang J, Shi L, Wang R, Miller EJ, Sinusas AJ, Liu CJ, Gullberg GT, Seo Y. Direct Attenuation Correction Using Deep Learning for Cardiac SPECT: A Feasibility Study. J Nucl Med 2021; 62:1645-1652. [DOI: 10.2967/jnumed.120.256396] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 02/16/2021] [Indexed: 11/16/2022] Open
|
40
|
Torkaman M, Yang J, Shi L, Wang R, Miller EJ, Sinusas AJ, Liu C, Gullberg GT, Seo Y. Direct Image-Based Attenuation Correction using Conditional Generative Adversarial Network for SPECT Myocardial Perfusion Imaging. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11600. [PMID: 33727759 DOI: 10.1117/12.2580922] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Attenuation correction (AC) is important for accurate interpretation and quantitative analysis of SPECT myocardial perfusion imaging. Dedicated cardiac SPECT systems have invaluable efficacy in the evaluation and risk stratification of patients with known or suspected cardiovascular disease. However, most dedicated cardiac SPECT systems are standalone, not combined with a transmission imaging capability such as computed tomography (CT) for generating attenuation maps for AC. To address this problem, we propose to apply a conditional generative adversarial network (cGAN) for generating attenuation-corrected SPECT images (SPECTGAN) directly from non-corrected SPECT images (SPECTNC) in the image domain, as a one-step process without requiring an additional intermediate step. The proposed network was trained and tested on 100 cardiac SPECT/CT datasets from a GE Discovery NM 570c SPECT/CT, collected retrospectively at Yale New Haven Hospital. The generated images were evaluated quantitatively through the normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), and statistically through joint histograms and error maps. In comparison to the reference CT-based correction (SPECTCTAC), NRMSEs were 0.2258±0.0777 and 0.1410±0.0768 (37.5% reduction of errors); PSNRs 31.7712±2.9965 and 36.3823±3.7424 (14.5% improvement in signal-to-noise ratio); SSIMs 0.9877±0.0075 and 0.9949±0.0043 (0.7% improvement in structural similarity) for SPECTNC and SPECTGAN, respectively. This work demonstrates that conditional adversarial training can achieve accurate CT-less attenuation correction for SPECT MPI that is quantitatively comparable to CTAC. Standalone dedicated cardiac SPECT scanners can benefit from the proposed GAN to reduce attenuation artifacts efficiently.
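For reference, the three similarity metrics reported above can be computed as in the following sketch (NumPy plus scikit-image for SSIM). The normalization convention for NRMSE varies between papers; the reference dynamic range is used here as an assumption rather than the paper's stated choice, and the array names are placeholders.

```python
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(ref, test):
    """RMSE normalized by the reference dynamic range (one common convention)."""
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Stand-in images: a reference CT-corrected slice and a simulated network output.
rng = np.random.default_rng(0)
spect_ctac = rng.random((64, 64)).astype(np.float32)
spect_gan = spect_ctac + 0.01 * rng.standard_normal((64, 64)).astype(np.float32)

ssim = structural_similarity(
    spect_ctac, spect_gan,
    data_range=float(spect_ctac.max() - spect_ctac.min()))
print(f"NRMSE={nrmse(spect_ctac, spect_gan):.4f}  "
      f"PSNR={psnr(spect_ctac, spect_gan):.2f} dB  SSIM={ssim:.4f}")
```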
Collapse
Affiliation(s)
- Mahsa Torkaman
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Jaewon Yang
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
| | - Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Engineering Physics, Tsinghua University, China
| | - Edward J Miller
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Section of Cardiovascular Medicine, Department of Medicine, Yale University, New Haven, CT, USA
| | - Albert J Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Section of Cardiovascular Medicine, Department of Medicine, Yale University, New Haven, CT, USA
| | - Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
| | - Grant T Gullberg
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA; Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
| | - Youngho Seo
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA; Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA; Department of Nuclear Engineering, University of California, Berkeley, Berkeley, CA, USA; Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, USA
| |
Collapse
|
41
|
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 100] [Impact Index Per Article: 33.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based methods for inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, together with related clinical applications, in representative studies. The challenges identified across the reviewed studies are then summarized and discussed.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
| |
Collapse
|
42
|
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030 DOI: 10.1093/bib/bbaa310] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/09/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
To reach precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities, such as demographic, clinical, imaging, genetic and environmental data, have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
Collapse
|
43
|
Abstract
Attenuation correction has been one of the main methodological challenges in the integrated positron emission tomography and magnetic resonance imaging (PET/MRI) field. As standard transmission or computed tomography approaches are not available on integrated PET/MRI scanners, MR-based attenuation correction approaches had to be developed. Aspects to consider for implementing accurate methods include the need to account for attenuation in bone tissue, in normal and pathological lung, and in the MR hardware present in the PET field of view; to reduce the impact of subject motion; to minimize truncation and susceptibility artifacts; and to address issues related to data acquisition and processing on both the PET and MRI sides. The standard MR-based attenuation correction techniques implemented by the PET/MRI equipment manufacturers and their impact on clinical and research PET data interpretation and quantification are first discussed. Next, more advanced methods, including the latest generation of deep learning-based approaches proposed to further minimize attenuation correction-related bias, are described. Finally, a future perspective focused on the developments needed in the field is given.
Collapse
Affiliation(s)
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States of America
| |
Collapse
|
44
|
Zhang YD, Dong Z, Wang SH, Yu X, Yao X, Zhou Q, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez FJ, Gorriz JM. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation. Inf Fusion 2020; 64:149-187. [PMID: 32834795 PMCID: PMC7366126 DOI: 10.1016/j.inffus.2020.07.006] [Citation(s) in RCA: 111] [Impact Index Per Article: 27.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Revised: 07/06/2020] [Accepted: 07/14/2020] [Indexed: 05/13/2023]
Abstract
Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and various other sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) the current medical applications of fusion for specific neurological diseases, (3) the strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) the applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research among engineers, researchers and clinicians will benefit the field of multimodal neuroimaging.
Collapse
Affiliation(s)
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Zhengchao Dong
- Department of Psychiatry, Columbia University, USA
- New York State Psychiatric Institute, New York, NY 10032, USA
| | - Shui-Hua Wang
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough, LE11 3TU, UK
- School of Mathematics and Actuarial Science, University of Leicester, LE1 7RH, UK
| | - Xiang Yu
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Xujing Yao
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Qinghua Zhou
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Hua Hu
- Department of Psychiatry, Columbia University, USA
- Department of Neurology, The Second Affiliated Hospital of Soochow University, China
| | - Min Li
- Department of Psychiatry, Columbia University, USA
- School of Internet of Things, Hohai University, Changzhou, China
| | - Carmen Jiménez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Francisco J Martinez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
| |
Collapse
|
45
|
Xiang H, Lim H, Fessler JA, Dewaraja YK. A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions. Eur J Nucl Med Mol Imaging 2020; 47:2956-2967. [PMID: 32415551 PMCID: PMC7666660 DOI: 10.1007/s00259-020-04840-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Accepted: 04/24/2020] [Indexed: 12/18/2022]
Abstract
PURPOSE A major challenge for accurate quantitative SPECT imaging of some radionuclides is the inadequacy of the simple energy window-based scatter estimation methods widely available on clinical systems. A deep learning approach for SPECT/CT scatter estimation is investigated as an alternative to computationally expensive Monte Carlo (MC) methods for challenging SPECT radionuclides, such as 90Y. METHODS A deep convolutional neural network (DCNN) was trained to separately estimate each scatter projection from the measured 90Y bremsstrahlung SPECT emission projection and CT attenuation projection that form the network inputs. The 13-layer deep architecture consisted of separate paths for the emission and attenuation projections that are concatenated before the final convolution steps. The training labels were MC-generated "true" scatter projections in phantoms (MC is needed only for training), with the mean square difference relative to the model output serving as the loss function. The test data set included a simulated sphere phantom with a lung insert, measurements of a liver phantom, and patients after 90Y radioembolization. OS-EM SPECT reconstructions without scatter correction (NO-SC), with the true scatter (TRUE-SC; available for simulated data only), with the DCNN-estimated scatter (DCNN-SC), and with a previously developed MC scatter model (MC-SC) were compared, including with 90Y PET when available. RESULTS The contrast recovery (CR) vs. noise and lung insert residual error vs. noise curves for images reconstructed with DCNN-SC and MC-SC estimates were similar. At the same noise level of 10% (across multiple realizations), the average sphere CR was 24%, 52%, 55%, and 67% for NO-SC, MC-SC, DCNN-SC, and TRUE-SC, respectively. For the liver phantom, the average CR for liver inserts was 32%, 73%, and 65% for NO-SC, MC-SC, and DCNN-SC, respectively, while the corresponding values for the average contrast-to-noise ratio (visibility index) in low-concentration extra-hepatic inserts were 2, 19, and 61. In patients, there was high concordance between lesion-to-liver uptake ratios for SPECT reconstructed with DCNN-SC (median 4.8, range 0.02-13.8) compared with MC-SC (median 4.0, range 0.13-12.1; CCC = 0.98) and with 90Y PET (median 4.9, range 0.02-11.2; CCC = 0.96), while the concordance with NO-SC was poor (median 2.8, range 0.3-7.2; CCC = 0.59). The trained DCNN took ~40 s (using a single i5 processor on a desktop computer) to generate the scatter estimates for all 128 views in a patient scan, compared to ~80 min for the MC scatter model using 12 processors. CONCLUSIONS For diverse 90Y test data that included patient studies, we demonstrated comparable performance between images reconstructed with deep learning and MC-based scatter estimates using metrics relevant for dosimetry and for safety. This approach, which can be generalized to other radionuclides by changing the training data, is well suited for real-time clinical use because of its high speed, orders of magnitude faster than MC, while maintaining high accuracy.
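As a structural illustration, the following is a minimal PyTorch sketch of the dual-path design described in this abstract: one convolutional path for the emission projection, one for the attenuation projection, concatenated before the final convolutions. Layer counts, widths, and names are placeholders, not the published 13-layer architecture.

```python
import torch
import torch.nn as nn

class ScatterNet(nn.Module):
    """Dual-path CNN: emission and attenuation projections are processed
    separately, then fused by concatenation before the final convolutions."""
    def __init__(self):
        super().__init__()
        def path():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.emission_path = path()      # measured emission projection
        self.attenuation_path = path()   # CT attenuation projection
        self.head = nn.Sequential(       # fusion after concatenation
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, emission, attenuation):
        feats = torch.cat([self.emission_path(emission),
                           self.attenuation_path(attenuation)], dim=1)
        return self.head(feats)          # estimated scatter projection

# Toy usage; the training target would be the MC-generated "true" scatter
# projection, minimized with a mean-squared-error loss as described above.
net = ScatterNet()
em, mu = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
scatter = net(em, mu)
loss = nn.functional.mse_loss(scatter, torch.zeros_like(scatter))  # placeholder label
```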
Collapse
Affiliation(s)
- Haowei Xiang
- Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
| | - Hongki Lim
- Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
| | - Jeffrey A Fessler
- Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, USA
| | - Yuni K Dewaraja
- Department of Radiology, University of Michigan, 1301 Catherine, 2276 Medical Science I/5610, Ann Arbor, MI, 48109, USA.
| |
Collapse
|
46
|
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. The review sets out to briefly cover the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
| |
Collapse
|
47
|
Arabi H, Bortolin K, Ginovart N, Garibotto V, Zaidi H. Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies. Hum Brain Mapp 2020; 41:3667-3679. [PMID: 32436261 PMCID: PMC7416024 DOI: 10.1002/hbm.25039] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 04/15/2020] [Accepted: 05/08/2020] [Indexed: 12/25/2022] Open
Abstract
PET attenuation correction (AC) on systems lacking CT/transmission scanning, such as dedicated brain PET scanners and hybrid PET/MRI, is challenging. Direct AC in image space, wherein PET images corrected for attenuation and scatter are synthesized from non-attenuation-corrected PET (PET-nonAC) images in an end-to-end fashion using deep learning approaches (DLAC), is evaluated for various radiotracers used in molecular neuroimaging studies. One hundred eighty brain PET scans acquired using 18F-FDG, 18F-DOPA, 18F-Flortaucipir (targeting tau pathology), and 18F-Flutemetamol (targeting amyloid pathology) radiotracers (40 training/validation + 5 external test subjects for each radiotracer) were included. The PET data were reconstructed using CT-based AC (CTAC) to generate reference PET-CTAC images and without AC to produce PET-nonAC images. A deep convolutional neural network was trained to generate attenuation-corrected PET images (PET-DLAC) from PET-nonAC. The quantitative accuracy of this approach was investigated separately for each radiotracer, taking the values obtained from PET-CTAC images as reference. A segmented AC map (PET-SegAC) containing soft tissue and background air was also included in the evaluation. Quantitative analysis of the PET images demonstrated superior performance of the DLAC approach compared to the SegAC technique for all tracers. Despite the relatively low quantitative bias observed with the DLAC approach, it appears vulnerable to outliers, resulting in noticeable local pseudo-uptake and false cold regions. Direct AC in image space using deep learning demonstrated quantitatively acceptable performance, with less than 9% absolute SUV bias for the four investigated neuroimaging radiotracers. However, the approach is vulnerable to outliers, which result in large local quantitative bias.
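The training loop implied by this abstract reduces to supervised image-to-image regression. Below is a minimal sketch under that reading; the stand-in generator, the loss choice, and the regional SUV-bias helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Stand-in for the deep convolutional generator (the real model is far deeper).
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(pet_nonac, pet_ctac):
    """One supervised step: map PET-nonAC to the CT-corrected reference."""
    opt.zero_grad()
    pet_dlac = model(pet_nonac)                       # predicted PET-DLAC
    loss = nn.functional.l1_loss(pet_dlac, pet_ctac)  # assumed L1 objective
    loss.backward()
    opt.step()
    return loss.item()

def regional_suv_bias(pet_dlac, pet_ctac, roi_mask):
    """Percent bias of mean ROI uptake relative to the CTAC reference."""
    ref = pet_ctac[roi_mask].mean()
    return float(100.0 * (pet_dlac[roi_mask].mean() - ref) / ref)

# Toy volumes: (batch, channel, depth, height, width).
nonac = torch.rand(1, 1, 16, 16, 16)
ctac = torch.rand(1, 1, 16, 16, 16)
train_step(nonac, ctac)
roi = torch.zeros_like(ctac, dtype=torch.bool); roi[..., 4:12, 4:12, 4:12] = True
print(regional_suv_bias(model(nonac).detach(), ctac, roi))
```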
Collapse
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Karin Bortolin
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Nathalie Ginovart
- Department of Psychiatry, Geneva University, Geneva, Switzerland
- Department of Basic Neurosciences, Geneva University, Geneva, Switzerland
| | - Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva Neuroscience Center, Geneva University, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva Neuroscience Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
48
|
Ladefoged CN, Hansen AE, Henriksen OM, Bruun FJ, Eikenes L, Øen SK, Karlberg A, Højgaard L, Law I, Andersen FL. AI-driven attenuation correction for brain PET/MRI: Clinical evaluation of a dementia cohort and importance of the training group size. Neuroimage 2020; 222:117221. [PMID: 32750498 DOI: 10.1016/j.neuroimage.2020.117221] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 07/15/2020] [Accepted: 07/28/2020] [Indexed: 11/27/2022] Open
Abstract
INTRODUCTION Robust and reliable attenuation correction (AC) is a prerequisite for accurate quantification of activity concentration. In combined PET/MRI, AC is challenged by the lack of bone signal in the MRI from which the AC maps have to be derived. Deep learning-based image-to-image translation networks present themselves as an optimal solution for MRI-derived AC (MR-AC). High robustness and generalizability of these networks are expected to be achieved through large training cohorts. In this study, we implemented an MR-AC method based on deep learning and investigated how training cohort size, transfer learning, and MR input affected robustness, then evaluated the method in a clinical setup, with the overall aim of exploring whether this method could be implemented in clinical routine for PET/MRI examinations. METHODS A total cohort of 1037 adult subjects from the Siemens Biograph mMR with two different software versions (VB20P and VE11P) was used. The software upgrade included updates to all MRI sequences. The impact of training group size was investigated by training a convolutional neural network (CNN) on training groups increasing in size from 10 to 403 subjects. The ability to adapt to changes in the input images between software versions was evaluated using transfer learning from a large cohort to a smaller cohort, varying the training group size from 5 to 91 subjects. The impact of the MRI sequence was evaluated by training three networks based on the Dixon VIBE sequence (DeepDixon), T1-weighted MPRAGE (DeepT1), and ultra-short echo time (UTE) sequence (DeepUTE). Blinded clinical evaluation relative to the reference low-dose CT (CT-AC) was performed for DeepDixon in 104 independent 2-[18F]fluoro-2-deoxy-d-glucose ([18F]FDG) PET patient studies performed for suspected neurodegenerative disorder, using statistical surface projections. RESULTS Robustness increased with training group size: 100 subjects were required to reduce the number of outliers compared to a state-of-the-art segmentation-based method, and a cohort of more than 400 subjects further increased robustness in terms of reduced variation and number of outliers. When using transfer learning to adapt to changes in the MRI input, as few as five subjects were sufficient to minimize outliers; full robustness was achieved at 20 subjects. Comparably robust and accurate results were obtained using all three types of MRI input, with a bias below 1% relative to CT-AC in any brain region. The clinical PET evaluation using DeepDixon showed no clinically relevant differences compared to CT-AC. CONCLUSION Deep learning-based AC requires a large training cohort to achieve accurate and robust performance. Using transfer learning, only five subjects were needed to fine-tune the method to large changes in the input images. No clinically relevant differences were found compared to CT-AC, indicating that clinical implementation of our deep learning-based MR-AC method will be feasible across MRI system types using transfer learning and a limited number of subjects.
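The transfer-learning recipe reported here (pretrain on a large cohort, fine-tune on a handful of subjects scanned after the software upgrade) can be sketched as follows. The model, checkpoint name, layer-freezing choice, and learning rate are hypothetical placeholders; the study does not specify its fine-tuning at this level of detail.

```python
import torch
import torch.nn as nn

# Stand-in CNN for MR-AC (the published networks are far deeper).
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),   # early, generic features
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1))              # output: pseudo-CT AC map

# In practice the checkpoint would come from pretraining on the large
# (e.g., 403-subject) cohort; here we save/reload to keep the sketch runnable.
torch.save(model.state_dict(), "mrac_pretrained.pt")  # hypothetical checkpoint
model.load_state_dict(torch.load("mrac_pretrained.pt"))

# Freeze the earliest layer and fine-tune the rest at a reduced learning rate
# on the few (5-20) subjects acquired after the software upgrade.
for p in model[0].parameters():
    p.requires_grad = False
opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)

def finetune_step(mr, ct_ref):
    """One fine-tuning step against the low-dose CT reference."""
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(mr), ct_ref)  # assumed L1 objective
    loss.backward()
    opt.step()
    return loss.item()
```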
Collapse
Affiliation(s)
- Claes Nøhr Ladefoged
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark.
| | - Adam Espe Hansen
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | - Otto Mølby Henriksen
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | - Frederik Jager Bruun
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | - Live Eikenes
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| | - Silje Kjærnes Øen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| | - Anna Karlberg
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Liselotte Højgaard
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | - Ian Law
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| | - Flemming Littrup Andersen
- Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
| |
Collapse
|
49
|
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777 PMCID: PMC7484241 DOI: 10.1016/j.ejmp.2020.07.028] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/13/2020] [Accepted: 07/21/2020] [Indexed: 02/08/2023] Open
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Collapse
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
50
|
Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020; 18:1536012119869070. [PMID: 31429375 PMCID: PMC6702769 DOI: 10.1177/1536012119869070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Machine learning (ML) algorithms have found increasing utility in the medical imaging field, and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential of such techniques to streamline decision support for physicians, providing early and accurate diagnosis and allowing personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Collapse
Affiliation(s)
- Ian R Duffy
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Amanda J Boyle
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Neil Vasdev
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|