1
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. [PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study: ∙ MR-based treatment planning and synthetic CT generation techniques. ∙ Generation of synthetic CT images based on Cone Beam CT images. ∙ Low-dose CT to high-dose CT generation. ∙ Attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the cited works were analyzed statistically from various aspects, which revealed that DL-based sCTs have achieved considerable popularity, while also showing the potential of this technology. In order to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
2
Li X, Johnson JM, Strigel RM, Bancroft LCH, Hurley SA, Estakhraji SIZ, Kumar M, Fowler AM, McMillan AB. Attenuation correction and truncation completion for breast PET/MR imaging using deep learning. Phys Med Biol 2024;69:045031. [PMID: 38252969; DOI: 10.1088/1361-6560/ad2126]
Abstract
Objective. Simultaneous PET/MR scanners combine the high sensitivity of MR imaging with the functional imaging of PET. However, attenuation correction of breast PET/MR imaging is technically challenging. The purpose of this study is to establish a robust attenuation correction algorithm for breast PET/MR images that relies on deep learning (DL) to recreate the missing portions of the patient's anatomy (truncation completion), as well as to provide bone information for attenuation correction from only the PET data. Approach. Data acquired from 23 female subjects with invasive breast cancer scanned with 18F-fluorodeoxyglucose PET/CT and PET/MR localized to the breast region were used for this study. Three DL models, a U-Net with mean absolute error loss (DLMAE), a U-Net with mean squared error loss (DLMSE), and a U-Net with perceptual loss (DLPerceptual), were trained to predict synthetic CT images (sCT) for PET attenuation correction (AC) given non-attenuation-corrected (NAC) PET images from the PET/MR acquisition as inputs. The DL and Dixon-based sCT reconstructed PET images were compared against those reconstructed from CT images by calculating the percent error of the standardized uptake value (SUV) and conducting Wilcoxon signed rank statistical tests. Main results. sCT images from the DLMAE, DLMSE, and DLPerceptual models were similar in mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation. No significant difference in SUV was found between the PET images reconstructed using the DLMSE and DLPerceptual sCTs compared to the reference CT for AC in all tissue regions. All DL methods performed better than the Dixon-based method according to SUV analysis. Significance. A 3D U-Net with MSE or perceptual loss model can be implemented into a reconstruction workflow, and the derived sCT images allow successful truncation completion and attenuation correction for breast PET/MR images.
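The SUV comparison used in this study reduces to computing per-region SUV percent errors against the CT-based reference and applying a paired non-parametric test. Below is a minimal sketch of that analysis in Python; the array shapes, the synthetic data, and the region mask are illustrative placeholders rather than the study's actual pipeline.

```python
import numpy as np
from scipy.stats import wilcoxon

def suv_percent_error(pet_test, pet_ref, roi_mask):
    """Mean-SUV percent error of a test reconstruction vs. the CT-based reference in one ROI."""
    suv_test = pet_test[roi_mask].mean()
    suv_ref = pet_ref[roi_mask].mean()
    return 100.0 * (suv_test - suv_ref) / suv_ref

# Placeholder data: one percent-error value per subject for a single tissue region.
rng = np.random.default_rng(0)
errors = []
for _ in range(23):
    pet_ct = rng.gamma(2.0, 1.0, (64, 64, 64))               # PET with CT-based AC (reference)
    pet_sct = pet_ct * rng.normal(1.0, 0.02, pet_ct.shape)   # PET with sCT-based AC
    roi = np.zeros(pet_ct.shape, dtype=bool)
    roi[20:40, 20:40, 20:40] = True
    errors.append(suv_percent_error(pet_sct, pet_ct, roi))

stat, p = wilcoxon(errors)   # paired, non-parametric test of the errors against zero
print(f"median error {np.median(errors):+.2f}%, Wilcoxon p = {p:.3f}")
```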
Affiliation(s)
- Xue Li
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Jacob M Johnson
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Roberta M Strigel
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Leah C Henze Bancroft
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Samuel A Hurley
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- S Iman Zare Estakhraji
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Manoj Kumar
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- ICTR Graduate Program in Clinical Investigation, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Amy M Fowler
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
- Alan B McMillan
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, United States of America
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States of America
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States of America
- University of Wisconsin Carbone Cancer Center, Madison, WI, United States of America
3
Wyatt JJ, Kaushik S, Cozzini C, Pearson RA, Petrides G, Wiesinger F, McCallum HM, Maxwell RJ. Evaluating a radiotherapy deep learning synthetic CT algorithm for PET-MR attenuation correction in the pelvis. EJNMMI Phys 2024;11:10. [PMID: 38282050; PMCID: PMC11266329; DOI: 10.1186/s40658-024-00617-3]
Abstract
BACKGROUND Positron emission tomography-magnetic resonance (PET-MR) attenuation correction is challenging because the MR signal does not represent tissue density and conventional MR sequences cannot image bone. A novel zero echo time (ZTE) MR sequence has been previously developed which generates signal from cortical bone with images acquired in 65 s. This has been combined with a deep learning model to generate a synthetic computed tomography (sCT) for MR-only radiotherapy. This study aimed to evaluate this algorithm for PET-MR attenuation correction in the pelvis. METHODS Ten patients being treated with ano-rectal radiotherapy received a [Formula: see text]F-FDG-PET-MR in the radiotherapy position. Attenuation maps were generated from ZTE-based sCT (sCTAC) and the standard vendor-supplied MRAC. The radiotherapy planning CT scan was rigidly registered and cropped to generate a gold standard attenuation map (CTAC). PET images were reconstructed using each attenuation map and compared for standard uptake value (SUV) measurement, automatic thresholded gross tumour volume (GTV) delineation and GTV metabolic parameter measurement. The last was assessed for clinical equivalence to CTAC using two one-sided paired t tests with a significance level corrected for multiple testing of [Formula: see text]. Equivalence margins of [Formula: see text] were used. RESULTS Mean whole-image SUV differences were -0.02% (sCTAC) compared to -3.0% (MRAC), with larger differences in the bone regions (-0.5% to -16.3%). There was no difference in thresholded GTVs, with Dice similarity coefficients [Formula: see text]. However, there were larger differences in GTV metabolic parameters. Mean differences to CTAC in [Formula: see text] were [Formula: see text] (± standard error, sCTAC) and [Formula: see text] (MRAC), and [Formula: see text] (sCTAC) and [Formula: see text] (MRAC) in [Formula: see text]. The sCTAC was statistically equivalent to CTAC within a [Formula: see text] equivalence margin for [Formula: see text] and [Formula: see text] ([Formula: see text] and [Formula: see text]), whereas the MRAC was not ([Formula: see text] and [Formula: see text]). CONCLUSION Attenuation correction using this radiotherapy ZTE-based sCT algorithm was substantially more accurate than current MRAC methods with only a 40 s increase in MR acquisition time. This did not impact tumour delineation but did significantly improve the accuracy of whole-image and tumour SUV measurements, which were clinically equivalent to CTAC. This suggests PET images reconstructed with sCTAC would enable accurate quantitative PET images to be acquired on a PET-MR scanner.
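The clinical-equivalence analysis described above is the two one-sided tests (TOST) procedure applied to paired SUV metrics. Below is a minimal Python sketch of a paired TOST; the equivalence margin, significance level, and placeholder data are illustrative assumptions, since the study's actual margins are not reproduced in this abstract.

```python
import numpy as np
from scipy import stats   # requires SciPy >= 1.6 for the `alternative` keyword

def tost_paired(test_vals, ref_vals, margin, alpha=0.05):
    """Two one-sided paired t-tests: is the mean difference within +/- margin?"""
    d = np.asarray(test_vals, dtype=float) - np.asarray(ref_vals, dtype=float)
    p_lower = stats.ttest_1samp(d, -margin, alternative="greater").pvalue  # H1: mean(d) > -margin
    p_upper = stats.ttest_1samp(d, margin, alternative="less").pvalue      # H1: mean(d) < +margin
    equivalent = (p_lower < alpha) and (p_upper < alpha)
    return p_lower, p_upper, equivalent

# Placeholder data: per-patient SUVmax from sCT-based vs. CT-based reconstructions.
rng = np.random.default_rng(1)
suv_ct = rng.uniform(5.0, 15.0, 10)
suv_sct = suv_ct * rng.normal(1.0, 0.01, 10)
print(tost_paired(suv_sct, suv_ct, margin=0.5))
```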
Affiliation(s)
- Jonathan J Wyatt
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Sandeep Kaushik
- GE Healthcare, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Rachel A Pearson
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- George Petrides
- Nuclear Medicine Department, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Hazel M McCallum
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Ross J Maxwell
- Translation and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
4
Lindemann ME, Gratz M, Grafe H, Jannusch K, Umutlu L, Quick HH. Systematic evaluation of human soft tissue attenuation correction in whole-body PET/MR: Implications from PET/CT for optimization of MR-based AC in patients with normal lung tissue. Med Phys 2024;51:192-208. [PMID: 38060671; DOI: 10.1002/mp.16863]
Abstract
BACKGROUND Attenuation correction (AC) is an important methodical step in positron emission tomography/magnetic resonance imaging (PET/MRI) to correct for attenuated and scattered PET photons. PURPOSE The overall quality of magnetic resonance (MR)-based AC in whole-body PET/MRI was evaluated in direct comparison to computed tomography (CT)-based AC serving as reference. The quantitative impact of isolated tissue classes in the MR-AC was systematically investigated to identify potential optimization needs and strategies. METHODS Data of n = 60 whole-body PET/CT patients with normal lung tissue and without metal implants/prostheses were used to generate six different AC-models based on the CT data for each patient, simulating variations of MR-AC. The original continuous CT-AC (CT-org) is referred to as reference. A pseudo MR-AC (CT-mrac), generated from CT data, with four tissue classes and a bone atlas represents the MR-AC. Relative differences in linear attenuation coefficients (LAC) and standardized uptake values were calculated. From the results, two improvements regarding soft tissue AC and lung AC were proposed and evaluated. RESULTS The overall performance of MR-AC is in good agreement with CT-AC. Lungs, heart, and bone tissue were identified as the regions with most deviation from the CT-AC (myocardium -15%, bone tissue -14%, and lungs ±20%). Using single-valued LACs for AC in the lung only provides limited accuracy. For improved soft tissue AC, splitting the combined soft tissue class into muscles and organs, each with adapted LAC, could reduce the deviations from the CT-AC to < ±1%. For improved lung AC, applying a gradient LAC in the lungs could markedly reduce over- or undercorrections in PET signal compared to CT-AC (±5%). CONCLUSIONS The AC is important to ensure best PET image quality and accurate PET quantification for diagnostics and radiotherapy planning. The optimized segment-based AC proposed in this study, which was evaluated on PET/CT data, inherently reduces quantification bias in normal lung tissue and soft tissue compared to the CT-AC reference.
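Segment-based AC of this kind amounts to assigning a linear attenuation coefficient (LAC) to each tissue label, optionally with a gradient in the lungs. The sketch below illustrates the idea in Python; all LAC values, label codes, and the linear lung gradient are illustrative assumptions, not the calibrated values evaluated in the study.

```python
import numpy as np

# Illustrative 511 keV linear attenuation coefficients in 1/cm (approximate values only).
LAC = {"air": 0.0, "lung": 0.024, "fat": 0.086, "muscle": 0.100, "organ": 0.098, "bone": 0.130}
NAMES = ["air", "lung", "fat", "muscle", "organ", "bone"]   # label codes 0..5

def build_mu_map(labels, lung_axis=0, lung_gradient=0.3):
    """Assign a LAC per tissue label; replace the single lung value with a linear gradient."""
    mu = np.zeros(labels.shape, dtype=np.float32)
    for code, name in enumerate(NAMES):
        mu[labels == code] = LAC[name]
    scale = np.linspace(1.0 - lung_gradient / 2, 1.0 + lung_gradient / 2, labels.shape[lung_axis])
    scale = scale.reshape([-1 if ax == lung_axis else 1 for ax in range(labels.ndim)])
    return np.where(labels == 1, LAC["lung"] * scale, mu).astype(np.float32)

labels = np.random.default_rng(2).integers(0, 6, (32, 64, 64))   # placeholder segmentation
mu_map = build_mu_map(labels)
print(mu_map.min(), mu_map.max())
```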
Affiliation(s)
- Maike E Lindemann
- High-Field and Hybrid MR Imaging, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Marcel Gratz
- High-Field and Hybrid MR Imaging, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Erwin L. Hahn Institute for Magnetic Resonance Imaging, University Duisburg-Essen, Essen, Germany
- Hong Grafe
- Department of Nuclear Medicine, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Kai Jannusch
- Department of Diagnostic and Interventional Radiology, University Hospital Duesseldorf, University Duesseldorf, Duesseldorf, Germany
- Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Harald H Quick
- High-Field and Hybrid MR Imaging, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Erwin L. Hahn Institute for Magnetic Resonance Imaging, University Duisburg-Essen, Essen, Germany
5
Montgomery ME, Andersen FL, d’Este SH, Overbeck N, Cramon PK, Law I, Fischer BM, Ladefoged CN. Attenuation Correction of Long Axial Field-of-View Positron Emission Tomography Using Synthetic Computed Tomography Derived from the Emission Data: Application to Low-Count Studies and Multiple Tracers. Diagnostics (Basel) 2023;13:3661. [PMID: 38132245; PMCID: PMC10742516; DOI: 10.3390/diagnostics13243661]
Abstract
Recent advancements in PET/CT, including the emergence of long axial field-of-view (LAFOV) PET/CT scanners, have increased PET sensitivity substantially. Consequently, there has been a significant reduction in the required tracer activity, shifting the primary source of patient radiation dose exposure to the attenuation correction (AC) CT scan during PET imaging. This study proposes a parameter-transferred conditional generative adversarial network (PT-cGAN) architecture to generate synthetic CT (sCT) images from non-attenuation corrected (NAC) PET images, with separate networks for [18F]FDG and [15O]H2O tracers. The study includes a total of 1018 subjects (n = 972 [18F]FDG, n = 46 [15O]H2O). Testing was performed on the LAFOV scanner for both datasets. Qualitative analysis found no differences in image quality in 30 out of 36 cases in FDG patients, with minor insignificant differences in the remaining 6 cases. Reduced artifacts due to motion between NAC PET and CT were found. For the selected organs, a mean average error of 0.45% was found for the FDG cohort and 3.12% for the H2O cohort. Simulated low-count images were included in testing, which demonstrated good performance down to 45 s scans. These findings show that the AC of total-body PET is feasible across tracers and in low-count studies and might reduce the artifacts due to motion and metal implants.
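The low-count experiments rely on emulating a short acquisition by statistically thinning the counts of a full-length scan. The following Python sketch shows generic binomial thinning of a counts array; the frame durations and placeholder data are assumptions and not the study's actual simulation tooling.

```python
import numpy as np

def simulate_low_count(counts, keep_fraction, seed=0):
    """Binomial thinning: keep each detected count independently with probability keep_fraction."""
    rng = np.random.default_rng(seed)
    return rng.binomial(counts.astype(np.int64), keep_fraction)

full_frame = np.random.default_rng(3).poisson(50.0, size=(128, 128))      # placeholder counts
short_frame = simulate_low_count(full_frame, keep_fraction=45.0 / 300.0)  # e.g. 45 s of an assumed 300 s frame
print(int(full_frame.sum()), int(short_frame.sum()))
```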
Affiliation(s)
- Maria Elkjær Montgomery
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Sabrina Honoré d’Este
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Nanna Overbeck
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Per Karkov Cramon
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Ian Law
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Barbara Malene Fischer
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Lyngby, Denmark
6
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023;30:1859-1878. [PMID: 35680755; DOI: 10.1007/s12350-022-03007-3]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
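The indirect/direct distinction drawn in this review can be made concrete with two toy pipelines: one that predicts a µ-map and feeds it to a reconstruction step, and one that maps NAC images to AC images in a single pass. The PyTorch sketch below uses placeholder networks and a crudely stubbed "reconstruction"; none of it represents a specific published architecture.

```python
import torch
import torch.nn as nn

# Placeholder "networks" standing in for real attenuation-map and direct-AC models.
mu_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
ac_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

def reconstruct_with_ac(nac_img, mu_map):
    """Stub for an AC reconstruction; a real system would rerun e.g. OSEM with the mu-map."""
    return nac_img * torch.exp(mu_map)   # crude first-order attenuation compensation

nac = torch.rand(1, 1, 64, 64)           # non-attenuation-corrected emission slice

# Indirect strategy: emission image -> synthetic mu-map -> AC applied in reconstruction.
mu_pred = mu_net(nac)
ac_indirect = reconstruct_with_ac(nac, mu_pred)

# Direct strategy: emission image -> attenuation-corrected image in a single forward pass.
ac_direct = ac_net(nac)
print(ac_indirect.shape, ac_direct.shape)
```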
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA
7
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023;10:52. [PMID: 37695384; PMCID: PMC10495310; DOI: 10.1186/s40658-023-00569-0]
Abstract
Although it has been thirteen years since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient for each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given an MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that could predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to the more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features. This up-to-date review goes through the literature of attenuation correction approaches in PET-MR after categorising them. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches along with a comparison of the four outlined categories.
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
8
Abrahamsen BS, Knudtsen IS, Eikenes L, Bathen TF, Elschot M. Pelvic PET/MR attenuation correction in the image space using deep learning. Front Oncol 2023;13:1220009. [PMID: 37692851; PMCID: PMC10484800; DOI: 10.3389/fonc.2023.1220009]
Abstract
Introduction The five-class Dixon-based PET/MR attenuation correction (AC) model, which adds bone information to the four-class model by registering major bones from a bone atlas, has been shown to be error-prone. In this study, we introduce a novel method of accounting for bone in pelvic PET/MR AC by directly predicting the errors in the PET image space caused by the lack of bone in four-class Dixon-based attenuation correction. Methods A convolutional neural network was trained to predict the four-class AC error map relative to CT-based attenuation correction. Dixon MR images and the four-class attenuation correction µ-map were used as input to the models. CT and PET/MR examinations for 22 patients ([18F]FDG) were used for training and validation, and 17 patients were used for testing (6 [18F]PSMA-1007 and 11 [68Ga]Ga-PSMA-11). A quantitative analysis of PSMA uptake using voxel- and lesion-based error metrics was used to assess performance. Results In the voxel-based analysis, the proposed model reduced the median root mean squared percentage error from 12.1% and 8.6% for the four- and five-class Dixon-based AC methods, respectively, to 6.2%. The median absolute percentage error in the maximum standardized uptake value (SUVmax) in bone lesions improved from 20.0% and 7.0% for four- and five-class Dixon-based AC methods to 3.8%. Conclusion The proposed method reduces the voxel-based error and SUVmax errors in bone lesions when compared to the four- and five-class Dixon-based AC models.
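The key idea here is to predict, in image space, the PET error caused by the missing bone and add it back onto the four-class reconstruction, then summarise accuracy with a voxel-wise root-mean-squared percentage error. A minimal Python sketch of both steps follows; the synthetic data and the perfect error map are purely illustrative.

```python
import numpy as np

def rms_percent_error(pet_test, pet_ref, body_mask, eps=1e-6):
    """Voxel-wise root-mean-squared percentage error inside the body mask."""
    rel = 100.0 * (pet_test[body_mask] - pet_ref[body_mask]) / (pet_ref[body_mask] + eps)
    return float(np.sqrt(np.mean(rel ** 2)))

rng = np.random.default_rng(4)
pet_ct = rng.gamma(2.0, 1.0, (64, 64, 64))                   # CT-based AC reference
pet_4class = pet_ct * rng.normal(0.92, 0.03, pet_ct.shape)   # four-class AC (bone ignored)
error_pred = pet_ct - pet_4class                             # a perfect CNN prediction, for illustration
pet_corrected = pet_4class + error_pred                      # add the predicted error back in image space
body = pet_ct > 0.5
print(rms_percent_error(pet_4class, pet_ct, body), rms_percent_error(pet_corrected, pet_ct, body))
```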
Affiliation(s)
- Bendik Skarre Abrahamsen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Ingerid Skjei Knudtsen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Live Eikenes
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Tone Frost Bathen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Mattijs Elschot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
9
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023;68. [PMID: 36584395; DOI: 10.1088/1361-6560/acaf49]
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
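The physics-based loss adds a projection-domain term to the usual image-domain term so that attenuation-map errors are penalised along the lines of response they actually affect. A minimal differentiable PyTorch sketch is shown below, using sums along two image axes as a stand-in for the multi-angle projections in the paper; the weighting and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def projection_loss(mu_pred, mu_ct):
    """Compare line integrals ("projections") of the two attenuation maps.
    The paper uses many projection angles; two orthogonal sums keep the sketch short."""
    loss = 0.0
    for dim in (2, 3):                      # (B, C, H, W): sum over rows, then columns
        loss = loss + F.l1_loss(mu_pred.sum(dim=dim), mu_ct.sum(dim=dim))
    return loss

def combined_loss(mu_pred, mu_ct, proj_weight=0.1):
    """Image-domain term plus a projection-domain (physics-motivated) term."""
    return F.mse_loss(mu_pred, mu_ct) + proj_weight * projection_loss(mu_pred, mu_ct)

mu_ct = torch.rand(2, 1, 128, 128)
mu_pred = mu_ct + 0.01 * torch.randn_like(mu_ct)
print(combined_loss(mu_pred, mu_ct).item())
```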
Affiliation(s)
- Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
10
Torkaman M, Yang J, Shi L, Wang R, Miller EJ, Sinusas AJ, Liu C, Gullberg GT, Seo Y. Data Management and Network Architecture Effect on Performance Variability in Direct Attenuation Correction via Deep Learning for Cardiac SPECT: A Feasibility Study. IEEE Trans Radiat Plasma Med Sci 2022;6:755-765. [PMID: 36059429; PMCID: PMC9438341; DOI: 10.1109/trpms.2021.3138372]
Abstract
Attenuation correction (AC) is important for accurate interpretation of SPECT myocardial perfusion imaging (MPI). However, it is challenging to perform AC in dedicated cardiac systems not equipped with a transmission imaging capability. Previously, we demonstrated the feasibility of generating attenuation-corrected SPECT images using a deep learning technique (SPECTDL) directly from non-corrected images (SPECTNC). However, we observed performance variability across patients, which is an important factor for clinical translation of the technique. In this study, we investigate the feasibility of overcoming the performance variability across patients for direct AC in SPECT MPI by developing an advanced network and a data management strategy. To investigate, we compared the accuracy of SPECTDL for the conventional U-Net and Wasserstein cycle GAN (WCycleGAN) networks. To manage the training data, clustering was applied to a representation of the data in a lower-dimensional space, and the training data were chosen based on the similarity of data in this space. Quantitative analysis demonstrated that a DL model with an advanced network improves the global performance of the AC task with limited data. However, the regional results were not improved. The proposed data management strategy demonstrated that clustered training has potential benefits for effective training.
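The data-management idea is to embed each scan in a low-dimensional space, cluster the embeddings, and train on the cases that fall in the same cluster as the target patient. A minimal scikit-learn sketch follows; the embedding dimensionality, cluster count, and selection rule are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
scans = rng.random((60, 32 * 32))                 # placeholder: 60 scans flattened to vectors

pca = PCA(n_components=8, random_state=0).fit(scans)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pca.transform(scans))

test_scan = rng.random((1, 32 * 32))
test_cluster = int(kmeans.predict(pca.transform(test_scan))[0])
train_idx = np.where(kmeans.labels_ == test_cluster)[0]   # train only on similar cases
print(f"cluster {test_cluster}: {train_idx.size} candidate training scans")
```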
Affiliation(s)
- Mahsa Torkaman
- Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Jaewon Yang
- Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Luyao Shi
- Biomedical Engineering Department, Yale University, New Haven, CT, USA
- Rui Wang
- Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Edward J Miller
- Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Albert J Sinusas
- Biomedical Engineering Department, Yale University, New Haven, CT, USA
- Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Chi Liu
- Biomedical Engineering Department, Yale University, New Haven, CT, USA
- Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Grant T Gullberg
- Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Youngho Seo
- Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
11
Ahangari S, Beck Olin A, Kinggård Federspiel M, Jakoby B, Andersen TL, Hansen AE, Fischer BM, Littrup Andersen F. A deep learning-based whole-body solution for PET/MRI attenuation correction. EJNMMI Phys 2022;9:55. [PMID: 35978211; PMCID: PMC9385907; DOI: 10.1186/s40658-022-00486-8]
Abstract
BACKGROUND Deep convolutional neural networks have demonstrated robust and reliable PET attenuation correction (AC) as an alternative to conventional AC methods in integrated PET/MRI systems. However, its whole-body implementation is still challenging due to anatomical variations and the limited MRI field of view. The aim of this study is to investigate a deep learning (DL) method to generate voxel-based synthetic CT (sCT) from Dixon MRI and use it as a whole-body solution for PET AC in a PET/MRI system. MATERIALS AND METHODS Fifteen patients underwent PET/CT followed by PET/MRI with whole-body coverage from skull to feet. We performed MRI truncation correction and employed co-registered MRI and CT images for training and leave-one-out cross-validation. The network was pretrained with region-specific images. The accuracy of the AC maps and reconstructed PET images were assessed by performing a voxel-wise analysis and calculating the quantification error in SUV obtained using DL-based sCT (PETsCT) and a vendor-provided atlas-based method (PETAtlas), with the CT-based reconstruction (PETCT) serving as the reference. In addition, region-specific analysis was performed to compare the performances of the methods in brain, lung, liver, spine, pelvic bone, and aorta. RESULTS Our DL-based method resulted in better estimates of AC maps with a mean absolute error of 62 HU, compared to 109 HU for the atlas-based method. We found an excellent voxel-by-voxel correlation between PETCT and PETsCT (R2 = 0.98). The absolute percentage difference in PET quantification for the entire image was 6.1% for PETsCT and 11.2% for PETAtlas. The regional analysis showed that the average errors and the variability for PETsCT were lower than PETAtlas in all regions. The largest errors were observed in the lung, while the smallest biases were observed in the brain and liver. CONCLUSIONS Experimental results demonstrated that a DL approach for whole-body PET AC in PET/MRI is feasible and allows for more accurate results compared with conventional methods. Further evaluation using a larger training cohort is required for more accurate and robust performance.
Affiliation(s)
- Sahar Ahangari
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Anders Beck Olin
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Thomas Lund Andersen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Adam Espe Hansen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Department of Diagnostic Radiology, Rigshospitalet, Copenhagen, Denmark
- Barbara Malene Fischer
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
12
Sari H, Teimoorisichani M, Mingels C, Alberts I, Panin V, Bharkhada D, Xue S, Prenosil G, Shi K, Conti M, Rominger A. Quantitative evaluation of a deep learning-based framework to generate whole-body attenuation maps using LSO background radiation in long axial FOV PET scanners. Eur J Nucl Med Mol Imaging 2022;49:4490-4502. [PMID: 35852557; PMCID: PMC9606046; DOI: 10.1007/s00259-022-05909-3]
Abstract
Purpose Attenuation correction is a critically important step in data correction in positron emission tomography (PET) image formation. The current standard method involves conversion of Hounsfield units from a computed tomography (CT) image to construct attenuation maps (µ-maps) at 511 keV. In this work, the increased sensitivity of long axial field-of-view (LAFOV) PET scanners was exploited to develop and evaluate a deep learning (DL) and joint reconstruction-based method to generate µ-maps utilizing background radiation from lutetium-based (LSO) scintillators. Methods Data from 18 subjects were used to train convolutional neural networks to enhance initial µ-maps generated using the joint activity and attenuation reconstruction algorithm (MLACF) with transmission data from LSO background radiation acquired before and after the administration of 18F-fluorodeoxyglucose (18F-FDG) (µ-mapMLACF-PRE and µ-mapMLACF-POST, respectively). The deep learning-enhanced µ-maps (µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST) were compared against MLACF-derived and CT-based maps (µ-mapCT). The performance of the method was also evaluated by assessing PET images reconstructed using each µ-map and computing volume-of-interest based standard uptake value measurements and percentage relative mean error (rME) and relative mean absolute error (rMAE) relative to the CT-based method. Results No statistically significant difference was observed in rME values for µ-mapDL-MLACF-PRE and µ-mapDL-MLACF-POST both in fat-based and water-based soft tissue as well as bones, suggesting that the presence of radiopharmaceutical activity in the body had negligible effects on the resulting µ-maps. The rMAE values of µ-mapDL-MLACF-POST were reduced by a factor of 3.3 on average compared to the rMAE of µ-mapMLACF-POST. Similarly, the average rMAE values of PET images reconstructed using µ-mapDL-MLACF-POST (PETDL-MLACF-POST) were 2.6 times smaller than the average rMAE values of PET images reconstructed using µ-mapMLACF-POST. The mean absolute errors in SUV values of PETDL-MLACF-POST compared to PETCT were less than 5% in healthy organs, less than 7% in brain grey matter and 4.3% for all tumours combined. Conclusion We describe a deep learning-based method to accurately generate µ-maps from PET emission data and LSO background radiation, enabling CT-free attenuation and scatter correction in LAFOV PET scanners.
Affiliation(s)
- Hasan Sari
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Ian Alberts
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Song Xue
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- George Prenosil
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
13
Artificial intelligence-based PET image acquisition and reconstruction. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00508-6]
14
Toyonaga T, Shao D, Shi L, Zhang J, Revilla EM, Menard D, Ankrah J, Hirata K, Chen MK, Onofrey JA, Lu Y. Deep learning-based attenuation correction for whole-body PET - a multi-tracer study with 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. Eur J Nucl Med Mol Imaging 2022;49:3086-3097. [PMID: 35277742; PMCID: PMC10725742; DOI: 10.1007/s00259-022-05748-2]
Abstract
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68 Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68 Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists and tumor SUV and volume measures were reported, as well as evaluation using conventional image analysis metrics. RESULTS µ-DL yielded high resolution and fine detail recovery of the attenuation map, which was superior in quality as compared to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold-standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: - 3.6 ± 4.4% vs. - 1.7 ± 4.5% for 18F-FDG (N = 152), - 4.3 ± 5.1% vs. 0.4 ± 2.8% for 68 Ga-DOTATATE (N = 70), and - 7.3 ± 2.9% vs. - 2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., - 8.4 ± 14.5% (OSEMMLAA) vs. - 3.0 ± 15.0% for 18F-FDG, - 14.1 ± 19.7% vs. 1.8 ± 11.6% for 68 Ga-DOTATATE, and - 15.9 ± 9.1% vs. - 6.4 ± 6.4% for 18F-Fluciclovine. CONCLUSIONS The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68 Ga-DOTATATE and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation. The proposed method provides clinically equivalent quality as compared to CT in attenuation correction for the three tracers.
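Tumor SUV and volume measures such as those reported here follow directly from the segmented lesion mask and the voxel size. A minimal Python sketch, with placeholder voxel dimensions and synthetic data:

```python
import numpy as np

def tumor_metrics(pet_suv, lesion_mask, voxel_mm=(2.0, 2.0, 2.0)):
    """SUVmax and volume (mL) of one segmented lesion; lesion_mask is boolean, same shape as pet_suv."""
    suv_max = float(pet_suv[lesion_mask].max())
    volume_ml = float(lesion_mask.sum()) * float(np.prod(voxel_mm)) / 1000.0   # mm^3 -> mL
    return suv_max, volume_ml

def percent_error(measured, reference):
    return 100.0 * (measured - reference) / reference

rng = np.random.default_rng(6)
pet = rng.gamma(2.0, 1.5, (64, 64, 64))
lesion = np.zeros(pet.shape, dtype=bool)
lesion[30:36, 30:36, 30:36] = True
print(tumor_metrics(pet, lesion))
```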
Affiliation(s)
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
- Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Enette Mae Revilla
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kenji Hirata
- Department of Diagnostic Imaging, School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Yale New Haven Hospital, New Haven, CT, USA
- John A Onofrey
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Department of Urology, Yale University, New Haven, CT, USA
- Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
15
Matsuo H, Nishio M, Nogami M, Zeng F, Kurimoto T, Kaushik S, Wiesinger F, Kono AK, Murakami T. Unsupervised-learning-based method for chest MRI-CT transformation using structure constrained unsupervised generative attention networks. Sci Rep 2022;12:11090. [PMID: 35773366; PMCID: PMC9247083; DOI: 10.1038/s41598-022-14677-x]
Abstract
The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information using MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult as it requires the generation of attenuation-correction maps from MRI, which has no direct relationship with the gamma-ray attenuation information. MRI-based bone tissue segmentation is potentially available for attenuation correction in relatively rigid and fixed organs such as the head and pelvis regions. However, this is challenging for the chest region because of respiratory and cardiac motions in the chest, its anatomically complicated structure, and the thin bone cortex. We propose a new method using unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialises in unpaired image translation based on attention maps. We added the modality-independent neighbourhood descriptor (MIND) to the loss of U-GAT-IT to guarantee anatomical consistency in the image transformation between different domains. Our proposed method obtained synthesised computed tomography images of the chest. Experimental results showed that our method outperforms current approaches. The study findings suggest the possibility of synthesising clinically acceptable computed tomography images from chest MRI with minimal changes in anatomical structures without human annotation.
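MIND compares local self-similarity patterns rather than raw intensities, which is what allows an unpaired MR-to-CT translator to change contrast while keeping anatomy fixed. Below is a heavily simplified 2D PyTorch sketch of a MIND-like consistency loss with four neighbour shifts; it is a rough stand-in for the full MIND formulation, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mind_like_descriptor(img, radius=1):
    """img: (B, 1, H, W). Patch SSDs to four neighbours, normalised and exponentiated."""
    patch = 2 * radius + 1
    feats = []
    for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        ssd = F.avg_pool2d((img - shifted) ** 2, patch, stride=1, padding=radius)
        feats.append(ssd)
    d = torch.cat(feats, dim=1)
    v = d.mean(dim=1, keepdim=True) + 1e-6          # rough local variance/noise estimate
    return torch.exp(-d / v)

def mind_consistency_loss(synth_ct, source_mr):
    """Penalise structural (not intensity) differences between the input MR and the synthetic CT."""
    return F.l1_loss(mind_like_descriptor(synth_ct), mind_like_descriptor(source_mr))

mr = torch.rand(1, 1, 96, 96)
fake_ct = 1.0 - mr                                  # identical structure, inverted contrast
print(mind_consistency_loss(fake_ct, mr).item())    # ~0 despite the intensity flip
```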
Affiliation(s)
- Hidetoshi Matsuo
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Mizuho Nishio
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Munenobu Nogami
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Feibi Zeng
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Atsushi K Kono
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Takamichi Murakami
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
16
Liberini V, Laudicella R, Balma M, Nicolotti DG, Buschiazzo A, Grimaldi S, Lorenzon L, Bianchi A, Peano S, Bartolotta TV, Farsad M, Baldari S, Burger IA, Huellner MW, Papaleo A, Deandreis D. Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics. Eur Radiol Exp 2022;6:27. [PMID: 35701671; PMCID: PMC9198151; DOI: 10.1186/s41747-022-00282-0]
Abstract
In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine, therefore, holds great promise for improving the quality of life of PCa patients, through managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients’ risk-stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these “big data” in both the diagnostic and theragnostic field: from technical aspects (such as semi-automatization of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, improving a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict the outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer.
Affiliation(s)
- Virginia Liberini
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Riccardo Laudicella
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy
- Nuclear Medicine Unit, Fondazione Istituto G. Giglio, Ct.da Pietrapollastra Pisciotto, Cefalù, Palermo, Italy
- Michele Balma
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Ambra Buschiazzo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Serena Grimaldi
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy
- Leda Lorenzon
- Medical Physics Department, Central Bolzano Hospital, 39100, Bolzano, Italy
- Andrea Bianchi
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Simona Peano
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Mohsen Farsad
- Nuclear Medicine, Central Hospital Bolzano, 39100, Bolzano, Italy
- Sergio Baldari
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy
- Irene A Burger
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland
- Department of Nuclear Medicine, Kantonsspital Baden, 5004, Baden, Switzerland
- Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland
- Alberto Papaleo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
- Désirée Deandreis
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy
17
van der Kolk BBY, Slotman DJ, Nijholt IM, van Osch JA, Snoeijink TJ, Podlogar M, van Hasselt BAAM, Boelhouwers HJ, van Stralen M, Seevinck PR, Schep NW, Maas M, Boomsma MF. Bone visualization of the cervical spine with deep learning-based synthetic CT compared to conventional CT: a single-center noninferiority study on image quality. Eur J Radiol 2022;154:110414. [DOI: 10.1016/j.ejrad.2022.110414]
18
Anderson TI, Vega B, McKinzie J, Aryana SA, Kovscek AR. 2D-to-3D image translation of complex nanoporous volumes using generative networks. Sci Rep 2021;11:20768. [PMID: 34675247; PMCID: PMC8531351; DOI: 10.1038/s41598-021-00080-5]
Abstract
Image-based characterization offers a powerful approach to studying geological porous media at the nanoscale, and images are critical to understanding reactive transport mechanisms in reservoirs relevant to energy and sustainability technologies such as carbon sequestration, subsurface hydrogen storage, and natural gas recovery. Nanoimaging presents a trade-off, however, between higher-contrast sample-destructive and lower-contrast sample-preserving imaging modalities. Furthermore, high-contrast imaging modalities often acquire only 2D images, while 3D volumes are needed to characterize fully a source rock sample. In this work, we present deep learning image translation models to predict high-contrast focused ion beam-scanning electron microscopy (FIB-SEM) image volumes from transmission X-ray microscopy (TXM) images when only 2D paired training data is available. We introduce a regularization method for improving 3D volume generation from 2D-to-2D deep learning image models and apply this approach to translate 3D TXM volumes to FIB-SEM fidelity. We then segment a predicted FIB-SEM volume into a flow simulation domain and calculate the sample apparent permeability using a lattice Boltzmann method (LBM) technique. Results show that our image translation approach produces simulation domains suitable for flow visualization and allows for accurate characterization of petrophysical properties from non-destructive imaging data.
Collapse
Affiliation(s)
- Timothy I Anderson
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
| | - Bolivia Vega
- Department of Energy Resources Engineering, Stanford University, Stanford, CA, 94305, USA
| | - Jesse McKinzie
- Department of Chemical Engineering, University of Wyoming, Laramie, WY, 82071, USA
| | - Saman A Aryana
- Department of Chemical Engineering, University of Wyoming, Laramie, WY, 82071, USA
| | - Anthony R Kovscek
- Department of Energy Resources Engineering, Stanford University, Stanford, CA, 94305, USA.
| |
Collapse
|
19
|
Bahrami A, Karimian A, Arabi H. Comparison of different deep learning architectures for synthetic CT generation from MR images. Phys Med 2021; 90:99-107. [PMID: 34597891 DOI: 10.1016/j.ejmp.2021.09.006] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 08/12/2021] [Accepted: 09/13/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Among the different available methods for synthetic CT generation from MR images for the task of MR-guided radiation planning, deep learning algorithms have consistently outperformed their conventional counterparts. In this study, we investigated the performance of some of the most popular deep learning architectures, including eCNN, U-Net, GAN, V-Net, and ResNet, for the task of sCT generation. As a baseline, an atlas-based method was implemented, against which the results of the deep learning-based models were compared. METHODS A dataset consisting of 20 co-registered MR-CT pairs of the male pelvis was used to assess the performance of the different sCT generation methods. The mean error (ME), mean absolute error (MAE), Pearson correlation coefficient (PCC), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics were computed between the estimated sCT and the ground truth (reference) CT images. RESULTS Visual inspection revealed that the sCTs produced by eCNN, V-Net, and ResNet, unlike the other methods, were less noisy and closely resembled the ground truth CT image. In the whole pelvis region, the eCNN yielded the lowest MAE (26.03 ± 8.85 HU) and ME (0.82 ± 7.06 HU), and the highest PCC metrics were yielded by the eCNN (0.93 ± 0.05) and ResNet (0.91 ± 0.02) methods. The ResNet model had the highest PSNR of 29.38 ± 1.75 among all models. In terms of the Dice similarity coefficient, the eCNN method showed superior performance in major tissue identification (air, bone, and soft tissue). CONCLUSIONS Overall, the eCNN and ResNet deep learning methods showed acceptable performance with clinically tolerable quantification errors.
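The reported similarity metrics (ME, MAE, PCC, SSIM, PSNR) are standard and can be reproduced with a short sketch; the random HU volumes, body mask, and data range below are illustrative assumptions rather than the study's data.

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def sct_metrics(sct_hu, ct_hu, mask, data_range=2000.0):
    """ME, MAE, PCC, SSIM, and PSNR between a synthetic CT and a reference CT (in HU)."""
    diff = sct_hu[mask] - ct_hu[mask]
    return {
        "ME": float(diff.mean()),               # mean error, HU
        "MAE": float(np.abs(diff).mean()),      # mean absolute error, HU
        "PCC": float(pearsonr(sct_hu[mask], ct_hu[mask])[0]),
        "SSIM": float(structural_similarity(ct_hu, sct_hu, data_range=data_range)),
        "PSNR": float(peak_signal_noise_ratio(ct_hu, sct_hu, data_range=data_range)),
    }

# toy volumes standing in for a co-registered sCT/CT pair
ct = np.random.uniform(-1000, 1000, (32, 64, 64)).astype(np.float32)
sct = ct + np.random.normal(0, 25, ct.shape).astype(np.float32)
body = np.ones(ct.shape, dtype=bool)
print(sct_metrics(sct, ct, body))
```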
Collapse
Affiliation(s)
- Abbas Bahrami
- Faculty of Physics, University of Isfahan, Isfahan, Iran
| | - Alireza Karimian
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran.
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
| |
Collapse
|
20
|
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Collapse
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
| | - Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
21
|
Ahangari S, Hansen NL, Olin AB, Nøttrup TJ, Ryssel H, Berthelsen AK, Löfgren J, Loft A, Vogelius IR, Schnack T, Jakoby B, Kjaer A, Andersen FL, Fischer BM, Hansen AE. Toward PET/MRI as one-stop shop for radiotherapy planning in cervical cancer patients. Acta Oncol 2021; 60:1045-1053. [PMID: 34107847 DOI: 10.1080/0284186x.2021.1936164] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
BACKGROUND Radiotherapy (RT) planning for cervical cancer patients entails the acquisition of both Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Further, molecular imaging by Positron Emission Tomography (PET) could contribute to target volume delineation as well as treatment response monitoring. The objective of this study was to investigate the feasibility of a PET/MRI-only RT planning workflow for patients with cervical cancer. This includes attenuation correction (AC) of MRI hardware and dedicated positioning equipment, as well as evaluating MRI-derived synthetic CT (sCT) of the pelvic region for positioning verification and dose calculation to enable a PET/MRI-only setup. MATERIALS AND METHODS Sixteen patients underwent PET/MRI using a dedicated RT setup after the routine CT (or PET/CT), including eight pilot patients and eight cervical cancer patients who were subsequently referred for RT. Data from 18 patients with gynecological cancer were added for training a deep convolutional neural network to generate sCT from Dixon MRI. The mean absolute difference between the dose distributions calculated on sCT and a reference CT was measured in the RT target volume and organs at risk. PET AC by sCT and a reference CT was compared in the tumor volume. RESULTS All patients completed the examination. An sCT was inferred for each patient in less than 5 s. The dosimetric analysis of the sCT-based dose planning showed a mean absolute error (MAE) of 0.17 ± 0.12 Gy inside the planning target volumes (PTV). PET images reconstructed with sCT and CT showed no significant difference in quantification for all patients. CONCLUSIONS These results suggest that multiparametric PET/MRI can be successfully integrated as a one-stop shop in the RT workflow of patients with cervical cancer.
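A minimal sketch of the two endpoints quoted here, the mean absolute dose difference inside the PTV and the relative PET quantification difference between sCT- and CT-based attenuation correction; the arrays and ROI mask are toy placeholders, not the study's data.

```python
import numpy as np

def dose_mae_in_roi(dose_sct, dose_ct, roi_mask):
    """Mean absolute dose difference (Gy) between sCT- and CT-based plans inside an ROI."""
    return float(np.abs(dose_sct[roi_mask] - dose_ct[roi_mask]).mean())

def pet_relative_difference(pet_sct_ac, pet_ct_ac, roi_mask):
    """Mean relative difference (%) of PET uptake reconstructed with sCT vs. CT attenuation maps."""
    ref = pet_ct_ac[roi_mask]
    return float(100.0 * np.mean((pet_sct_ac[roi_mask] - ref) / ref))

# toy volumes standing in for dose distributions and reconstructed PET images
shape = (40, 64, 64)
dose_ct = np.random.uniform(0, 50, shape)
dose_sct = dose_ct + np.random.normal(0, 0.2, shape)
pet_ct = np.random.uniform(1, 5, shape)
pet_sct = pet_ct * (1 + np.random.normal(0, 0.02, shape))
ptv = np.zeros(shape, dtype=bool); ptv[15:25, 20:40, 20:40] = True
print("dose MAE in PTV [Gy]:", dose_mae_in_roi(dose_sct, dose_ct, ptv))
print("PET rel. diff in PTV [%]:", pet_relative_difference(pet_sct, pet_ct, ptv))
```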
Collapse
Affiliation(s)
- Sahar Ahangari
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Naja Liv Hansen
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Anders Beck Olin
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Trine Jakobi Nøttrup
- Department of Oncology, Section of Radiotherapy, University of Copenhagen, Rigshospitalet, Denmark
| | - Heidi Ryssel
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Anne Kiil Berthelsen
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Johan Löfgren
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Annika Loft
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| | - Ivan Richter Vogelius
- Department of Oncology, Section of Radiotherapy, University of Copenhagen, Rigshospitalet, Denmark
| | - Tine Schnack
- Department of Gynecology, University of Copenhagen, Copenhagen, Denmark
- Department of Gynecology and Obstetrics, Odense University Hospital, Odense, Denmark
| | | | - Andreas Kjaer
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Cluster for Molecular Imaging, University of Copenhagen, Copenhagen, Denmark
| | - Flemming Littrup Andersen
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
| | - Barbara Malene Fischer
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- The PET Centre, School of Biomedical Engineering and Imaging Sciences, Kings College London, St Thomas’ Hospital, London, UK
| | - Adam Espe Hansen
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Department of Diagnostic Radiology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
22
|
Sari H, Reaungamornrat J, Catalano O, Vera-Olmos J, Izquierdo-Garcia D, Morales MA, Torrado-Carvajal A, Ng SCT, Malpica N, Kamen A, Catana C. Evaluation of Deep Learning-based Approaches to Segment Bowel Air Pockets and Generate Pelvis Attenuation Maps from CAIPIRINHA-accelerated Dixon MR Images. J Nucl Med 2021; 63:468-475. [PMID: 34301782 DOI: 10.2967/jnumed.120.261032] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 06/06/2021] [Indexed: 11/16/2022] Open
Abstract
Attenuation correction (AC) remains a challenge in pelvis PET/MR imaging. In addition to segmentation/model-based approaches, deep learning methods have shown promise in synthesizing accurate pelvis attenuation maps (μ-maps). However, these methods often misclassify air pockets in the digestive tract, which can introduce bias in the reconstructed PET images. The aims of this work were to develop deep learning-based methods to automatically segment air pockets and generate pseudo-CT images from CAIPIRINHA-accelerated MR Dixon images. Methods: A convolutional neural network (CNN) was trained to segment air pockets using 3D CAIPIRINHA-accelerated MR Dixon datasets from 35 subjects and was evaluated against semi-automated segmentations. A separate CNN was trained to synthesize pseudo-CT μ-maps from the Dixon images. Its accuracy was evaluated by comparing the deep learning-, model-, and CT-based μ-maps using data from 30 of the subjects. Finally, the impact of the different μ-maps and air pocket segmentation methods on PET quantification was investigated. Results: Air pockets segmented using the CNN agreed well with the semi-automated segmentations, with a mean Dice similarity coefficient of 0.75. The volumetric similarity score between the two segmentations was 0.85 ± 0.14. The mean absolute relative changes (RCs) with respect to the CT-based μ-maps were 2.6% and 5.1% in the whole pelvis for the deep learning and model-based μ-maps, respectively. The average RC between PET images reconstructed with the deep learning and CT-based μ-maps was 2.6%. Conclusion: We presented a deep learning-based method to automatically segment air pockets from CAIPIRINHA-accelerated Dixon images with accuracy comparable to semi-automated segmentations. We also showed that the μ-maps synthesized using a deep learning-based method from CAIPIRINHA-accelerated Dixon images are more accurate than those generated with the model-based approach available on the integrated PET/MRI scanner.
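The two evaluation measures used here, the Dice similarity coefficient between air-pocket segmentations and the mean absolute relative change of the μ-maps with respect to the CT-based reference, reduce to a few lines; the masks and maps in the sketch are illustrative placeholders, not the study's data.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def mean_abs_relative_change(mu_test, mu_ref, body_mask, eps=1e-6):
    """Mean absolute relative change (%) of a test μ-map w.r.t. the CT-based reference."""
    ref = mu_ref[body_mask]
    return float(100.0 * np.mean(np.abs(mu_test[body_mask] - ref) / np.maximum(ref, eps)))

# toy masks standing in for the CNN and semi-automated air-pocket segmentations
cnn_air = np.zeros((32, 64, 64), dtype=bool); cnn_air[10:20, 20:40, 20:40] = True
manual_air = np.zeros_like(cnn_air); manual_air[11:21, 21:41, 21:41] = True
print("Dice:", round(dice(cnn_air, manual_air), 3))
```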
Collapse
Affiliation(s)
- Hasan Sari
- Athinoula A. Martinos Center for Biomedical Imaging, United States
| | | | - Onofrio Catalano
- Athinoula A. Martinos Center for Biomedical Imaging, United States
| | | | | | | | | | | | | | - Ali Kamen
- Siemens Corporate Research, United States
| | - Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, United States
| |
Collapse
|
23
|
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744 PMCID: PMC8107336 DOI: 10.21037/qims-20-1078] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Accepted: 02/14/2021] [Indexed: 12/12/2022]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), with or without anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Collapse
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
| | - Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
| | - Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
| |
Collapse
|
24
|
Hwang D, Kang SK, Kim KY, Choi H, Seo S, Lee JS. Data-driven respiratory phase-matched PET attenuation correction without CT. Phys Med Biol 2021; 66. [PMID: 33910170 DOI: 10.1088/1361-6560/abfc8f] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Accepted: 04/28/2021] [Indexed: 12/20/2022]
Abstract
We propose a deep learning-based data-driven respiratory phase-matched gated-PET attenuation correction (AC) method that does not need a gated CT. The proposed method is a multi-step process that consists of data-driven respiratory gating, gated attenuation map estimation using the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm, and enhancement of the gated attenuation maps using a convolutional neural network (CNN). The gated MLAA attenuation maps enhanced by the CNN allowed for phase-matched AC of gated-PET images. We conducted a non-rigid registration of the gated-PET images to generate motion-free PET images. We trained the CNN by conducting 3D patch-based learning on 80 oncologic whole-body 18F-fluorodeoxyglucose (18F-FDG) PET/CT scans and applied it to seven regional PET/CT scans covering the lower lung and upper liver. We investigated the impact of the proposed respiratory phase-matched AC of PET without utilizing CT on tumor size and standardized uptake value (SUV) assessment, and on PET image quality (%STD). The attenuation-corrected gated and motion-free PET images generated using the proposed method yielded sharper organ boundaries and better noise characteristics than conventional gated and ungated PET images. A banana artifact observed with phase-mismatched CT-based AC was not observed with the proposed approach. With the proposed method, tumor size was reduced by 12.3% and SUV90% was increased by 13.3% in tumors with movements larger than 5 mm. The %STD of liver uptake was reduced by 11.1%. The deep learning-based data-driven respiratory phase-matched AC method improved PET image quality and reduced motion artifacts.
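The image-quality figure of merit quoted here (%STD of liver uptake) is the coefficient of variation within a liver ROI; the PET volume and ROI in the minimal sketch below are toy placeholders, not the study's data.

```python
import numpy as np

def percent_std(pet_image: np.ndarray, roi_mask: np.ndarray) -> float:
    """%STD (coefficient of variation, %) of PET uptake inside an ROI, e.g. the liver."""
    vals = pet_image[roi_mask]
    return float(100.0 * vals.std() / vals.mean())

# toy PET volume and liver ROI
pet = np.random.normal(2.5, 0.3, (40, 64, 64))
liver = np.zeros(pet.shape, dtype=bool); liver[10:20, 10:30, 30:50] = True
print("liver %STD:", round(percent_std(pet, liver), 1))
```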
Collapse
Affiliation(s)
- Donghwi Hwang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Seung Kwan Kang
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Kyeong Yun Kim
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Seongho Seo
- Department of Electronic Engineering, Pai Chai University, Daejeon, Republic of Korea
| | - Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea
| |
Collapse
|
25
|
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3009269] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
26
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
27
|
Tao L, Fisher J, Anaya E, Li X, Levin CS. Pseudo CT Image Synthesis and Bone Segmentation From MR Images Using Adversarial Networks With Residual Blocks for MR-Based Attenuation Correction of Brain PET Data. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.2989073] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
28
|
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
29
|
Abstract
Attenuation correction has been one of the main methodological challenges in the integrated positron emission tomography and magnetic resonance imaging (PET/MRI) field. As standard transmission or computed tomography approaches are not available in integrated PET/MRI scanners, MR-based attenuation correction approaches had to be developed. Aspects that have to be considered for implementing accurate methods include the need to account for attenuation in bone tissue, normal and pathological lung and the MR hardware present in the PET field-of-view, to reduce the impact of subject motion, to minimize truncation and susceptibility artifacts, and to address issues related to the data acquisition and processing both on the PET and MRI sides. The standard MR-based attenuation correction techniques implemented by the PET/MRI equipment manufacturers and their impact on clinical and research PET data interpretation and quantification are first discussed. Next, the more advanced methods, including the latest generation deep learning-based approaches that have been proposed for further minimizing the attenuation correction related bias are described. Finally, a future perspective focused on the needed developments in the field is given.
Collapse
Affiliation(s)
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States of America
| |
Collapse
|
30
|
Wallstén E, Axelsson J, Jonsson J, Karlsson CT, Nyholm T, Larsson A. Improved PET/MRI attenuation correction in the pelvic region using a statistical decomposition method on T2-weighted images. EJNMMI Phys 2020; 7:68. [PMID: 33226495 PMCID: PMC7683750 DOI: 10.1186/s40658-020-00336-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 11/04/2020] [Indexed: 11/29/2022] Open
Abstract
Background Attenuation correction remains a problem for whole-body PET/MRI. The statistical decomposition algorithm (SDA) is a probabilistic atlas-based method that calculates synthetic CTs from T2-weighted MRI scans. In this study, we evaluated the application of SDA for attenuation correction of PET images in the pelvic region. Materials and methods Twelve patients were retrospectively selected from an ongoing prostate cancer research study. The patients had same-day scans of [11C]acetate PET/MRI and CT. The CT images were non-rigidly registered to the PET/MRI geometry, and PET images were reconstructed with attenuation correction employing CT, SDA-generated CT, and the built-in Dixon sequence-based method of the scanner. The PET images reconstructed using CT-based attenuation correction were used as ground truth. Results The mean whole-image PET uptake error was reduced from −5.4% for Dixon-PET to −0.9% for SDA-PET. The prostate standardized uptake value (SUV) quantification error was significantly reduced from −5.6% for Dixon-PET to −2.3% for SDA-PET. Conclusion Attenuation correction with SDA improves quantification of PET/MR images in the pelvic region compared to the Dixon-based method.
Collapse
Affiliation(s)
- Elin Wallstén
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden.
| | - Jan Axelsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | - Joakim Jonsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | | | - Tufve Nyholm
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| | - Anne Larsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
| |
Collapse
|
31
|
Mecheter I, Alic L, Abbod M, Amira A, Ji J. MR Image-Based Attenuation Correction of Brain PET Imaging: Review of Literature on Machine Learning Approaches for Segmentation. J Digit Imaging 2020; 33:1224-1241. [PMID: 32607906 PMCID: PMC7573060 DOI: 10.1007/s10278-020-00361-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
The recently emerging hybrid technology of positron emission tomography/magnetic resonance (PET/MR) imaging has generated a great need for accurate MR image-based PET attenuation correction. MR image segmentation, as a robust and simple method for PET attenuation correction, has been clinically adopted in commercial PET/MR scanners. The general approach in this method is to segment the MR image into different tissue types, each of which is assigned an attenuation constant, as in an X-ray CT image. Machine learning techniques such as clustering, classification, and deep networks are extensively used for brain MR image segmentation. However, only limited work has been reported on using deep learning in brain PET attenuation correction. In addition, there is a lack of clinical evaluation of machine learning methods in this application. The aim of this review is to study the use of machine learning methods for MR image segmentation and its application in attenuation correction for PET brain imaging. Furthermore, challenges and future opportunities in MR image-based PET attenuation correction are discussed.
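The segmentation-based attenuation correction described here amounts to mapping each tissue class to a fixed linear attenuation coefficient at 511 keV; the sketch below uses approximate literature values and an arbitrary label convention, both of which are illustrative assumptions rather than any scanner's implementation.

```python
import numpy as np

# approximate linear attenuation coefficients at 511 keV, in cm^-1 (illustrative values)
MU_511_KEV = {
    0: 0.0,    # air / background
    1: 0.018,  # lung
    2: 0.096,  # soft tissue (approx. water)
    3: 0.13,   # bone
}

def labels_to_mu_map(label_volume: np.ndarray) -> np.ndarray:
    """Convert an MR tissue segmentation (class labels) into a PET attenuation map."""
    mu = np.zeros(label_volume.shape, dtype=np.float32)
    for label, coeff in MU_511_KEV.items():
        mu[label_volume == label] = coeff
    return mu

# toy label volume standing in for a segmented brain MR image
labels = np.random.randint(0, 4, size=(32, 64, 64))
print(labels_to_mu_map(labels).max())  # 0.13
```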
Collapse
Affiliation(s)
- Imene Mecheter
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK.
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar.
| | - Lejla Alic
- Magnetic Detection and Imaging Group, Faculty of Science and Technology, University of Twente, Enschede, Netherlands
| | - Maysam Abbod
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
| | - Abbes Amira
- Institute of Artificial Intelligence, De Montfort University, Leicester, UK
| | - Jim Ji
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar
- Department of Electrical and Computer Engineering, Texas A & M University, College Station, TX, USA
| |
Collapse
|
32
|
Abstract
CLINICAL ISSUE Hybrid imaging enables the precise visualization of cellular metabolism by combining anatomical and metabolic information. Advances in artificial intelligence (AI) offer new methods for processing and evaluating this data. METHODOLOGICAL INNOVATIONS This review summarizes current developments and applications of AI methods in hybrid imaging. Applications in image processing as well as methods for disease-related evaluation are presented and discussed. MATERIALS AND METHODS This article is based on a selective literature search with the search engines PubMed and arXiv. ASSESSMENT Currently, there are only a few AI applications using hybrid imaging data and no applications are established in clinical routine yet. Although the first promising approaches are emerging, they still need to be evaluated prospectively. In the future, AI applications will support radiologists and nuclear medicine radiologists in diagnosis and therapy.
Collapse
Affiliation(s)
- Christian Strack
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
| | - Robert Seifert
- Department of Nuclear Medicine, Medical Faculty, University Hospital Essen, Essen, Germany
| | - Jens Kleesiek
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- German Cancer Consortium (DKTK), Heidelberg, Germany.
| |
Collapse
|
33
|
Panda A, Goenka AH, Hope TA, Veit-Haibach P. PET/Magnetic Resonance Imaging Applications in Abdomen and Pelvis. Magn Reson Imaging Clin N Am 2020; 28:369-380. [DOI: 10.1016/j.mric.2020.03.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
34
|
Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020; 18:1536012119869070. [PMID: 31429375 PMCID: PMC6702769 DOI: 10.1177/1536012119869070] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Machine learning (ML) algorithms have found increasing utility in the medical imaging field and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential for such techniques to streamline decision support for physicians providing early and accurate diagnosis and allowing personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Collapse
Affiliation(s)
- Ian R Duffy
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Amanda J Boyle
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Neil Vasdev
- Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
35
|
Armanious K, Hepp T, Küstner T, Dittmann H, Nikolaou K, La Fougère C, Yang B, Gatidis S. Independent attenuation correction of whole body [18F]FDG-PET using a deep learning approach with Generative Adversarial Networks. EJNMMI Res 2020; 10:53. [PMID: 32449036 PMCID: PMC7246235 DOI: 10.1186/s13550-020-00644-y] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2020] [Accepted: 05/12/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Attenuation correction (AC) of PET data is usually performed using a second imaging modality for the generation of attenuation maps. In certain situations, however, when CT- or MR-derived attenuation maps are corrupted, or when a CT acquisition solely for the purpose of AC is to be avoided, it would be of value to have the possibility of obtaining attenuation maps based only on PET information. The purpose of this study was thus to develop, implement, and evaluate a deep learning-based method for whole-body [18F]FDG-PET AC that is independent of other imaging modalities for acquiring the attenuation map. METHODS The proposed method is investigated on whole-body [18F]FDG-PET data using a Generative Adversarial Network (GAN) deep learning framework. It is trained to generate pseudo-CT images (CTGAN) based on paired training data of non-attenuation-corrected PET data (PETNAC) and corresponding CT data. Generated pseudo-CTs are then used for subsequent PET AC. One hundred data sets of whole-body PETNAC and corresponding CT were used for training. Twenty-five PET/CT examinations were used as test data sets (not included in training). On these test data sets, AC of PET was performed using the acquired CT as well as CTGAN, resulting in the corresponding PET data sets PETAC and PETGAN. CTGAN and PETGAN were evaluated qualitatively by visual inspection and by visual analysis of color-coded difference maps. Quantitative analysis was performed by comparison of organ and lesion SUVs between PETAC and PETGAN. RESULTS Qualitative analysis revealed no major SUV deviations on PETGAN for most anatomic regions; visually detectable deviations were mainly observed along the diaphragm and the lung border. Quantitative analysis revealed mean percent deviations of SUVs on PETGAN of -0.8 ± 8.6% over all organs (range [-30.7%, +27.1%]). Mean lesion SUVs showed a mean deviation of 0.9 ± 9.2% (range [-19.6%, +29.2%]). CONCLUSION Independent AC of whole-body [18F]FDG-PET is feasible using the proposed deep learning approach, yielding satisfactory PET quantification accuracy. Further clinical validation is necessary prior to implementation in routine clinical applications.
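A paired, pix2pix-style GAN of the kind described (generator mapping non-attenuation-corrected PET to pseudo-CT, discriminator judging input/output pairs, adversarial plus L1 loss) can be sketched in a few lines of PyTorch; the tiny networks, loss weight, and tensor shapes below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# minimal stand-ins for the generator (PET_NAC -> pseudo-CT) and a pair discriminator
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1))  # PatchGAN-style output map
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

pet_nac = torch.randn(4, 1, 64, 64)  # toy non-attenuation-corrected PET slices
ct = torch.randn(4, 1, 64, 64)       # toy paired CT slices

# discriminator step: real (PET_NAC, CT) pairs vs. generated (PET_NAC, pseudo-CT) pairs
fake_ct = G(pet_nac).detach()
d_real = D(torch.cat([pet_nac, ct], dim=1))
d_fake = D(torch.cat([pet_nac, fake_ct], dim=1))
loss_d = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# generator step: fool the discriminator while staying close to the paired CT (L1 term)
fake_ct = G(pet_nac)
d_fake = D(torch.cat([pet_nac, fake_ct], dim=1))
loss_g = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_ct, ct)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```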
Collapse
Affiliation(s)
- Karim Armanious
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany
- Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany
| | - Tobias Hepp
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
| | - Thomas Küstner
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany
- Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, St. Thomas' Hospital, London, UK
- Cluster of Excellence iFIT (EXC 2180) "Image Guided and Functionally Instructed Tumor Therapies", University of Tübingen, Tübingen, Germany
| | - Helmut Dittmann
- Department of Radiology, Nuclear Medicine and Clinical Molecular Imaging, University Hospital Tübingen, Tübingen, Germany
| | - Konstantin Nikolaou
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) "Image Guided and Functionally Instructed Tumor Therapies", University of Tübingen, Tübingen, Germany
| | - Christian La Fougère
- Cluster of Excellence iFIT (EXC 2180) "Image Guided and Functionally Instructed Tumor Therapies", University of Tübingen, Tübingen, Germany
- Department of Radiology, Nuclear Medicine and Clinical Molecular Imaging, University Hospital Tübingen, Tübingen, Germany
| | - Bin Yang
- Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany
| | - Sergios Gatidis
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Hoppe-Seyler-Str. 3, 72076, Tübingen, Germany.
- Cluster of Excellence iFIT (EXC 2180) "Image Guided and Functionally Instructed Tumor Therapies", University of Tübingen, Tübingen, Germany.
| |
Collapse
|
36
|
Zaharchuk G. Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning. Eur J Nucl Med Mol Imaging 2019; 46:2700-2707. [PMID: 31254036 PMCID: PMC6881542 DOI: 10.1007/s00259-019-04374-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Accepted: 05/23/2019] [Indexed: 02/08/2023]
Abstract
INTRODUCTION Recently, there have been significant advances in the field of machine learning and artificial intelligence (AI) centered around imaging-based applications such as computer vision. In particular, the tremendous power of deep learning algorithms, primarily based on convolutional neural network strategies, is becoming increasingly apparent and has already had a direct impact on the fields of radiology and nuclear medicine. While most early applications of computer vision to radiological imaging have focused on the classification of images into disease categories, it is also possible to use these methods to improve image quality. Hybrid imaging approaches, such as PET/MRI and PET/CT, are ideal for applying these methods. METHODS This review gives an overview of the application of AI to improve image quality for PET imaging directly and of how the additional use of anatomic information from CT and MRI can lead to further benefits. For PET, these performance gains can be used to shorten imaging scan times, with improvements in patient comfort and fewer motion artifacts, or to push towards lower radiotracer doses. They also open up possibilities for dual-tracer studies, more frequent follow-up examinations, and new imaging indications. How to assess quality and the potential effects of bias in training and testing sets are discussed. CONCLUSION Harnessing the power of these new technologies to extract maximal information from hybrid PET imaging will open up new vistas for both research and clinical applications, with associated benefits in patient care.
Collapse
Affiliation(s)
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, USA.
| |
Collapse
|
37
|
Hope TA, Fayad ZA, Fowler KJ, Holley D, Iagaru A, McMillan AB, Veit-Haibach P, Witte RJ, Zaharchuk G, Catana C. Summary of the First ISMRM-SNMMI Workshop on PET/MRI: Applications and Limitations. J Nucl Med 2019; 60:1340-1346. [PMID: 31123099 PMCID: PMC6785790 DOI: 10.2967/jnumed.119.227231] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2019] [Accepted: 05/21/2019] [Indexed: 12/12/2022] Open
Abstract
Since the introduction of simultaneous PET/MRI in 2011, there have been significant advancements. In this review, we highlight several technical advancements that have been made primarily in attenuation and motion correction and discuss the status of multiple clinical applications using PET/MRI. This review is based on the experience at the first PET/MRI conference cosponsored by the International Society for Magnetic Resonance in Medicine and the Society of Nuclear Medicine and Molecular Imaging.
Collapse
Affiliation(s)
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California
- Department of Radiology, San Francisco VA Medical Center, San Francisco, California
- UCSF Helen Diller Family Comprehensive Cancer Center, University of California San Francisco, San Francisco, California
| | - Zahi A Fayad
- Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York
| | - Kathryn J Fowler
- Department of Radiology, University of California San Diego, San Diego, California
| | - Dawn Holley
- Department of Radiology, Stanford University Medical Center, Stanford, California
| | - Andrei Iagaru
- Department of Radiology, Stanford University Medical Center, Stanford, California
| | - Alan B McMillan
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
| | - Patrick Veit-Haibach
- Joint Department of Medical Imaging, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
| | - Robert J Witte
- Department of Radiology, Mayo Clinic, Rochester, Minnesota; and
| | - Greg Zaharchuk
- Department of Radiology, Stanford University Medical Center, Stanford, California
| | - Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts
| |
Collapse
|