1.
Ohira S, Mochizuki J, Niwa T, Endo K, Minamitani M, Yamashita H, Katano A, Imae T, Nishio T, Koizumi M, Nakagawa K. Variation in Hounsfield unit calculated using dual-energy computed tomography: comparison of dual-layer, dual-source, and fast kilovoltage switching technique. Radiol Phys Technol 2024;17:458-466. PMID: 38700638; PMCID: PMC11128400; DOI: 10.1007/s12194-024-00802-0.
Abstract
The purpose of this study was to investigate the variation in Hounsfield unit (HU) values calculated using dual-energy computed tomography (DECT) scanners. A tissue characterization phantom containing 16 reference materials was scanned three times under varying scanning conditions using three DECT scanners: dual-layer CT (DLCT), dual-source CT (DSCT), and fast kilovoltage switching CT (FKSCT). Single-energy CT (SECT) images (120 or 140 kVp) and virtual monochromatic images at 70 keV (VMI70) and 140 keV (VMI140) were reconstructed, and the HU value of each reference material was measured. The difference in HU values was larger when the phantom was scanned at half dose while wrapped in rubber (strong beam-hardening effect) than at full dose without the rubber (reference condition), and the difference grew as the electron density increased. For SECT, the difference in HU values against the reference condition measured by the DSCT (3.2 ± 5.0 HU) was significantly smaller (p < 0.05) than that measured using DLCT at 120 kVp (22.4 ± 23.8 HU), DLCT at 140 kVp (11.4 ± 12.8 HU), and FKSCT (13.4 ± 14.3 HU). The respective differences in HU values in the VMI70 and VMI140 measured using the DSCT (10.8 ± 17.1 and 3.5 ± 4.1 HU) and FKSCT (11.5 ± 21.8 and 5.5 ± 10.4 HU) were significantly smaller than those measured using the DLCT120 (23.1 ± 27.5 and 12.4 ± 9.4 HU) and DLCT140 (22.3 ± 28.6 and 13.1 ± 11.4 HU). The HU values and the susceptibility to beam-hardening effects varied widely depending on the DECT scanner.
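The core measurement in studies like this one is the mean HU inside a region of interest (ROI) for each phantom insert, and the mean ± SD of the per-insert differences against a reference condition. A minimal NumPy sketch; the HU values below are invented placeholders, not data from the study:

```python
import numpy as np

def roi_mean_hu(image: np.ndarray, mask: np.ndarray) -> float:
    """Mean HU value inside an ROI mask (True = inside the ROI)."""
    return float(image[mask].mean())

def hu_difference_stats(measured, reference):
    """Mean +/- SD of absolute HU differences across the phantom inserts."""
    diff = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    return float(diff.mean()), float(diff.std())

# Hypothetical per-insert HU values for one scan condition vs. the reference
measured  = np.array([-980.0, -95.0, 10.0, 55.0, 240.0, 920.0])
reference = np.array([-995.0, -100.0, 0.0, 50.0, 230.0, 900.0])
mean_diff, sd_diff = hu_difference_stats(measured, reference)
```

The same two helpers would be applied per reconstruction (SECT, VMI70, VMI140) and per scanner to build the comparison tables.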
Affiliation(s)
- Shingo Ohira
- Department of Comprehensive Radiation Oncology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Japan
- Junji Mochizuki
- Department of Radiology, Minamino Cardiovascular Hospital, Tokyo, Japan
- Tatsunori Niwa
- Department of Radiology, Sakakibara Heart Institute, Tokyo, Japan
- Kazuyuki Endo
- Department of Radiologic Technology, Tokai University Hachioji Hospital, Tokyo, Japan
- Masanari Minamitani
- Department of Comprehensive Radiation Oncology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hideomi Yamashita
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Atsuto Katano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Toshikazu Imae
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Teiji Nishio
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Japan
- Masahiko Koizumi
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Japan
- Keiichi Nakagawa
- Department of Comprehensive Radiation Oncology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
2.
Afifah M, Bulthuis MC, Goudschaal KN, Verbeek-Spijkerman JM, Rosario TS, den Boer D, Hinnen KA, Bel A, van Kesteren Z. Virtual unenhanced dual-energy computed tomography for photon radiotherapy: The effect on dose distribution and cone-beam computed tomography based position verification. Phys Imaging Radiat Oncol 2024;29:100545. PMID: 38369991; PMCID: PMC10869258; DOI: 10.1016/j.phro.2024.100545.
Abstract
Background and Purpose Virtual unenhanced (VUE) images from contrast-enhanced dual-energy computed tomography (DECT) eliminate the need for manual suppression of contrast-enhanced structures (CES) or for pre-contrast scans. CT intensity decreases in high-density structures outside the CES after applying the VUE algorithm. This study assessed the impact of VUE on the radiotherapy workflow for gynecological tumors, comparing dose distribution and cone-beam CT-based (CBCT) position verification against contrast-enhanced CT (CECT) images. Materials and Methods A total of 14 gynecological patients with contrast-enhanced CT simulation were included. Two CT images were reconstructed: CECT and VUE. Volumetric modulated arc therapy (VMAT) plans generated on CECT were recalculated on VUE using both the CECT lookup table (LUT) and a dedicated VUE LUT. Gamma analysis assessed 3D dose distributions. CECT and VUE images were retrospectively registered to daily CBCT using the Chamfer matching algorithm. Results Planning target volume (PTV) dose agreement with CECT was within 0.35% for D2%, Dmean, and D98%. Organ-at-risk (OAR) D2% agreed within 0.36%. A dedicated VUE LUT led to smaller dose differences, achieving a 100% gamma pass rate for all subjects. VUE imaging showed translations and rotations similar to CECT, with statistically significant but minor translation differences (<0.02 cm). VUE-based registration outperformed CECT: in 24% of CBCT-CECT registrations, inadequate registration was observed due to contrast-related issues, whereas the corresponding VUE images achieved clinically acceptable registrations. Conclusions VUE imaging in the radiotherapy workflow is feasible, showing comparable dose distributions and improved CBCT registration results compared with CECT. VUE enables automated bone registration, limiting inter-observer variation in the image-guided radiation therapy (IGRT) process.
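The reported PTV metrics (D2%, Dmean, D98%) are dose-volume histogram points: Dx% is the minimum dose received by the hottest x% of a structure's volume, i.e. the (100 − x)th percentile of its voxel doses. A minimal sketch, with a hypothetical dose array standing in for a real recalculated plan:

```python
import numpy as np

def dvh_metrics(dose: np.ndarray) -> dict:
    """Near-maximum (D2%), mean, and near-minimum (D98%) dose of a structure.

    Dx% = minimum dose to the hottest x% of voxels = (100 - x)th percentile.
    """
    d = np.asarray(dose, dtype=float).ravel()
    return {
        "D2%": float(np.percentile(d, 98)),
        "Dmean": float(d.mean()),
        "D98%": float(np.percentile(d, 2)),
    }

# Hypothetical PTV voxel doses (Gy); a real plan would supply a 3D dose grid
ptv_dose = np.linspace(44.0, 48.0, 1001)
metrics = dvh_metrics(ptv_dose)
```

Percentage differences between two recalculations (e.g. CECT LUT vs. VUE LUT) would then be taken metric by metric.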
Affiliation(s)
- Maryam Afifah
- Amsterdam UMC, Location Vrije Universiteit, Department of Radiation Oncology, De Boelelaan 1118, Amsterdam, the Netherlands
- Marloes C. Bulthuis
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
- Karin N. Goudschaal
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
- Jolanda M. Verbeek-Spijkerman
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
- Tezontl S. Rosario
- Amsterdam UMC, Location Vrije Universiteit, Department of Radiation Oncology, De Boelelaan 1118, Amsterdam, the Netherlands
- Duncan den Boer
- Amsterdam UMC, Location Vrije Universiteit, Department of Radiation Oncology, De Boelelaan 1118, Amsterdam, the Netherlands
- Karel A. Hinnen
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
- Arjan Bel
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
- Zdenko van Kesteren
- Amsterdam UMC, Location University of Amsterdam, Department of Radiation Oncology, Meibergdreef 9, Amsterdam, the Netherlands
3.
Azarfar G, Ko SB, Adams SJ, Babyn PS. Applications of deep learning to reduce the need for iodinated contrast media for CT imaging: a systematic review. Int J Comput Assist Radiol Surg 2023;18:1903-1914. PMID: 36947337; DOI: 10.1007/s11548-023-02862-w.
Abstract
PURPOSE The use of iodinated contrast media (ICM) can improve the sensitivity and specificity of computed tomography (CT) for many clinical indications. However, the adverse effects of ICM administration can include renal injury, life-threatening allergic-like reactions, and environmental contamination. Deep learning (DL) models can generate full-dose ICM CT images from non-contrast or low-dose ICM acquisitions, or generate non-contrast CT from full-dose ICM CT. Eliminating the need for both contrast-enhanced and non-enhanced imaging, or reducing the amount of required contrast while maintaining diagnostic capability, may reduce overall patient risk, improve efficiency, and minimize costs. We reviewed the current capabilities of DL to reduce the need for contrast administration in CT. METHODS We conducted a systematic review of articles utilizing DL to reduce the amount of ICM required in CT, searching MEDLINE, Embase, Compendex, Inspec, and Scopus to identify papers published from 2016 to 2022. We classified the articles based on the DL model and the degree of ICM reduction. RESULTS Eighteen papers met the inclusion criteria for analysis. Of these, ten generated synthetic full-dose (100%) ICM CT from real non-contrast CT, while four augmented low-dose to full-dose ICM CT. Three used DL to create synthetic non-contrast CT from real 100% ICM CT, while one paper used DL to translate 100% ICM CT to non-contrast CT and vice versa. DL models commonly used generative adversarial networks trained and tested on paired contrast-enhanced and non-contrast or low-ICM CTs. Image quality metrics such as peak signal-to-noise ratio and structural similarity index were frequently used for comparing synthetic and real CT image quality.
CONCLUSION DL-generated contrast-enhanced or non-contrast CT may assist in diagnosis and radiation therapy planning; however, further work to optimize protocols to reduce or eliminate ICM for specific pathology is still needed, along with a dedicated assessment of the clinical utility of these synthetic images.
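Peak signal-to-noise ratio, one of the metrics the reviewed papers commonly report, compares a synthetic image against its real counterpart on a log scale relative to the data range. A minimal sketch (the image pair below is fabricated for illustration, not taken from any reviewed study):

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a real and a synthetic image."""
    mse = np.mean((reference.astype(float) - candidate.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Hypothetical pair: synthetic image differs by a uniform offset of 5 units
real = np.full((16, 16), 100.0)
synthetic = real + 5.0
score = psnr(real, synthetic, data_range=255.0)
```

Higher values indicate closer agreement; SSIM would additionally weigh local structure rather than pixel-wise error alone.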
Affiliation(s)
- Ghazal Azarfar
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
- Seok-Bum Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
- Scott J Adams
- Department of Radiology, Stanford University, Stanford, CA, USA
- Paul S Babyn
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada
4.
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022;12:1489. PMID: 35741298; PMCID: PMC9222056; DOI: 10.3390/diagnostics12061489.
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become a principal method for precision oncology. This paper summarizes recent deep-learning approaches relevant to precision oncology, reviewing over 150 articles from the last six years. First, we survey the deep-learning approaches categorized by precision oncology task, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Second, we provide an overview of the studies by anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, stomach, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
5.
Lartaud PJ, Dupont C, Hallé D, Schleef A, Dessouky R, Vlachomitrou AS, Rouet JM, Nempont O, Boussel L. A conventional-to-spectral CT image translation augmentation workflow for robust contrast injection-independent organ segmentation. Med Phys 2021;49:1108-1122. PMID: 34689353; DOI: 10.1002/mp.15310.
Abstract
PURPOSE In cardiovascular imaging, the numerous contrast injection protocols used to enhance structures make it difficult to gather training datasets for deep learning applications supporting diverse protocols. Moreover, creating annotations on non-contrast scans is extremely tedious. Recently, spectral CT's virtual non-contrast (VNC) images have been used as data augmentation to train segmentation networks that perform on enhanced and true non-contrast (TNC) scans alike, while improving results on protocols absent from their training dataset. However, spectral data are not widely available, making it difficult to gather specific datasets for each task. As a solution, we present a data augmentation workflow based on a trained image translation network that brings spectral-like augmentation to any conventional CT dataset. METHOD The HU-to-spectral image translation network (HUSpecNet) was first trained to generate VNC from HU images using an unannotated spectral dataset of 1830 patients. It was then tested on a second dataset of 300 spectral CT scans by comparing generated VNC (VNCDL) to their true counterparts. To illustrate and compare our workflow's efficiency with true spectral augmentation, HUSpecNet was applied to a third dataset of 112 spectral scans to generate VNCDL alongside HU and VNC images. Three different 3D networks (U-Net, X-Net, U-Net++) were trained for multi-label heart segmentation, following four augmentation strategies. As baselines, trainings were performed on contrasted images without (HUonly) and with conventional gray-value augmentation (HUaug). Then, the same networks were trained using a proportion of contrasted and VNC/VNCDL images (TrueSpec/GenSpec). Each training strategy applied to each architecture was evaluated using Dice coefficients on a fourth multi-centric, multi-vendor single-energy CT dataset of 121 patients, including different contrast injection protocols and unenhanced scans. The U-Net++ results were further explored with distance metrics on every label. RESULTS Tested on 300 full scans, our HUSpecNet translation network shows a mean absolute error of 6.70 ± 2.83 HU between VNCDL and VNC, while the peak signal-to-noise ratio reaches 43.89 dB. GenSpec and TrueSpec show very close results regardless of the protocol and architecture used: mean Dice coefficients (DSCmean) are equal within a margin of 0.006, ranging from 0.879 to 0.938. Their performance increases significantly on TNC scans (p-values < 0.017 for all architectures) compared with HUonly and HUaug, with DSCmean of 0.448/0.770/0.879/0.885 for HUonly/HUaug/TrueSpec/GenSpec using the U-Net++ architecture. Significant improvements are also noted for all architectures on chest-abdominal-pelvic scans (p-values < 0.007) compared with HUonly, and for pulmonary embolism scans (p-values < 0.039) compared with HUaug. Using U-Net++, DSCmean reaches 0.892/0.901/0.903 for HUonly/TrueSpec/GenSpec on pulmonary embolism scans and 0.872/0.896/0.896 for HUonly/TrueSpec/GenSpec on chest-abdominal-pelvic scans. CONCLUSION Using the proposed workflow, we trained versatile heart segmentation networks on a dataset of conventional enhanced CT scans, providing robust predictions on both enhanced scans with different contrast injection protocols and TNC scans. The performance obtained was not significantly inferior to training the model on a genuine spectral CT dataset, regardless of the architecture implemented. Using a general-purpose conventional-to-spectral CT translation network as data augmentation could therefore reduce data collection and annotation requirements for machine learning-based CT studies while extending their range of application.
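The Dice similarity coefficient used to evaluate the segmentation networks is twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch with toy masks (this does not reproduce the paper's code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary label masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy heart-chamber masks: prediction overlaps 3 of the 4 truth voxels
truth = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
```

For multi-label segmentation, Dice is computed per label and the per-label values are averaged into DSCmean.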
Affiliation(s)
- Pierre-Jean Lartaud
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Philips Research France, Suresnes, France
- Riham Dessouky
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Radiology Department, Faculty of Medicine, Zagazig University, Zagazig, Egypt
- Loïc Boussel
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Hospices Civils de Lyon, Lyon, France
6.
Gu X, Liu Z, Zhou J, Luo H, Che C, Yang Q, Liu L, Yang Y, Liu X, Zheng H, Liang D, Luo D, Hu Z. Contrast-enhanced to noncontrast CT transformation via an adjacency content-transfer-based deep subtraction residual neural network. Phys Med Biol 2021;66. PMID: 34077922; DOI: 10.1088/1361-6560/ac0758.
Abstract
Because cancer patients require frequent follow-up scans, some institutions omit noncontrast CT to reduce overall patient radiation exposure; however, although less desirable to acquire, noncontrast CT can provide additional important information. In this article, we propose a deep subtraction residual network based on adjacency content transfer to reconstruct noncontrast CT from contrast CT while maintaining image quality comparable to that of a CT scan originally acquired without contrast. To address the slight structural dissimilarity of the paired CT images (noncontrast CT and contrast CT) due to involuntary physiological motion, we introduce a contrastive loss network derived from the adjacency content-transfer strategy. We evaluate various similarity metrics (MSE, SSIM, NRMSE, PSNR, MAE) and the fitting curve (HU distribution) of the output mapping to estimate the reconstruction performance of the algorithm. To build the model, we randomly selected a total of 15,405 paired CT images (noncontrast and contrast-enhanced CT) for training and 10,270 paired CT images for testing. The proposed algorithm preserves the robust structures from the contrast-enhanced CT scans and learns the noncontrast attenuation pattern from the noncontrast CT scans. In the evaluation, the deep subtraction residual network achieves better MSE, MAE, NRMSE, and PSNR scores (by 30%) than the baseline models (BEGAN, CycleGAN, Pixel2Pixel) and better reproduces the HU curve of noncontrast CT attenuation. After validation based on an analysis of the experimental results, we can report that the noncontrast CT images reconstructed by our proposed algorithm not only preserve the high-quality structures from the contrast-enhanced CT images, but also mimic the CT attenuation of the originally acquired noncontrast CT images.
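Two of the similarity metrics listed, MAE and NRMSE, can be sketched directly; the HU arrays below are placeholders, not data from the study:

```python
import numpy as np

def mae(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Mean absolute error, e.g. in HU between real and synthetic noncontrast CT."""
    return float(np.mean(np.abs(reference.astype(float) - candidate.astype(float))))

def nrmse(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference dynamic range.

    Normalization conventions vary (range, mean, or Euclidean norm); the
    range-based form is used here as one common choice.
    """
    err = reference.astype(float) - candidate.astype(float)
    rmse = np.sqrt(np.mean(err ** 2))
    return float(rmse / (reference.max() - reference.min()))

# Hypothetical HU profiles from a real and a synthesized noncontrast slice
real_hu = np.array([0.0, 40.0, 80.0, 120.0])
synth_hu = np.array([2.0, 38.0, 84.0, 118.0])
```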
Affiliation(s)
- Xianfan Gu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Zhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Jinjie Zhou
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Honghong Luo
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Canwen Che
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Qian Yang
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Lijian Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Dehong Luo
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
7.
Dual-energy computed tomography image-based volumetric-modulated arc therapy planning for reducing the effect of contrast-enhanced agent on dose distributions. Med Dosim 2021;46:328-334. PMID: 33931321; DOI: 10.1016/j.meddos.2021.03.006.
Abstract
To compare the effect of a contrast-enhanced (CE) agent on volumetric-modulated arc therapy plans based on four types of images generated using a dual-energy computed tomography (DECT) system: virtual monochromatic images (VMIs) captured at 70 and 140 keV (VMI70 and VMI140, respectively), the water density image (WDI), and the virtual non-contrast image (VNC). A tissue characterization phantom and a multi-energy phantom were scanned, and VMI70, VMI140, WDI, and VNC were retrospectively reconstructed. For each image type, a lookup table (LUT) was created. For 13 patients with nasopharyngeal cancer (NPC), non-CE and CE scans were performed, and volumetric-modulated arc therapy plans were generated on the basis of the non-CE VMI70. Subsequently, the doses were recalculated using the four types of DECT images and their corresponding LUTs. The maximum differences in the physical density estimation were 21.3, 5.2, -3.9, and 0.5% for VMI70, VMI140, WDI, and VNC, respectively. Compared with VMI70, the WDI approach significantly reduced (p < 0.05) the dosimetric difference due to the CE agent for the planning target volume (PTV) D50%, whereas the difference was significantly increased for D1%. Except for the PTV D1%, the differences were significantly lower (p < 0.05) in the treatment plans based on VMI140 and VNC than in those based on VMI70. For the VNC, the mean difference was less than 0.2% for all dosimetric parameters for the PTV. For patients with NPC, treatment plans based on the VNC derived from the CE scan showed the best agreement with those based on the non-CE VMI70. Ideally, the effect of the CE agent on dose distribution should not appear in treatment planning procedures.
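The LUT step maps HU (or image value) to physical density by piecewise-linear interpolation between calibration points measured on the phantom. A minimal sketch with illustrative breakpoints; the numbers below are rough textbook-style values, not the calibration from this study:

```python
import numpy as np

def hu_to_density(hu, lut_hu, lut_density) -> np.ndarray:
    """Convert HU to physical density (g/cm^3) via a piecewise-linear LUT.

    In practice the breakpoints come from scanning a tissue characterization
    phantom on the specific reconstruction (VMI70, VMI140, WDI, or VNC),
    so each image type gets its own LUT.
    """
    return np.interp(hu, lut_hu, lut_density)

# Illustrative calibration points: air, lung-like, water, bone-like inserts
lut_hu = [-1000.0, -700.0, 0.0, 1200.0]
lut_density = [0.001, 0.30, 1.00, 1.82]

density = hu_to_density(np.array([-1000.0, 0.0, 600.0]), lut_hu, lut_density)
```

Dose recalculation on a given image then uses the density grid produced by that image's own LUT.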
8.
Solomon J, Lyu P, Marin D, Samei E. Noise and spatial resolution properties of a commercially available deep learning-based CT reconstruction algorithm. Med Phys 2020;47:3961-3971. PMID: 32506661; DOI: 10.1002/mp.14319.
Abstract
PURPOSE To characterize the noise and spatial resolution properties of a commercially available deep learning-based computed tomography (CT) reconstruction algorithm. METHODS Two phantom experiments were performed. The first used a multisized image quality phantom (Mercury v3.0, Duke University) imaged at five radiation dose levels (CTDIvol: 0.9, 1.2, 3.6, 7.0, and 22.3 mGy) with a fixed tube current technique on a commercial CT scanner (GE Revolution CT). Images were reconstructed with conventional (FBP), iterative (GE ASiR-V), and deep learning-based (GE True Fidelity) reconstruction algorithms. Noise power spectrum (NPS), high-contrast (air-polyethylene interface), and intermediate-contrast (water-polyethylene interface) task transfer functions (TTF) were measured for each dose level and phantom size and summarized in terms of the average noise frequency (fav) and the frequency at which the TTF was reduced to 50% (f50%), respectively. The second experiment used a custom phantom with low-contrast rods and lung texture sections for the assessment of low-contrast TTF and noise spatial distribution. The phantom was imaged at five dose levels (CTDIvol: 1.0, 2.1, 3.0, 6.0, and 10.0 mGy) with 20 repeated scans at each dose, and images were reconstructed with the same reconstruction algorithms. Local noise stationarity was assessed by generating spatial noise maps from the ensemble of repeated images and computing a noise inhomogeneity index, η, following AAPM TG233 methods. All measurements were compared among the algorithms. RESULTS Compared with FBP, noise magnitude was reduced on average (± one standard deviation) by 74 ± 6% and 68 ± 4% for ASiR-V (at the "100%" setting) and True Fidelity (at the "High" setting), respectively. The noise texture from ASiR-V had substantially lower noise frequency content, with 55 ± 4% lower NPS fav compared with FBP, while True Fidelity had only marginally different noise frequency content, with 9 ± 5% lower NPS fav compared with FBP. Both ASiR-V and True Fidelity demonstrated locally nonstationary noise in a lung texture background at all radiation dose levels, with higher noise near high-contrast edges of vessels and lower noise in uniform regions. At the 1.0 mGy dose level, η values were 314% and 271% higher for ASiR-V and True Fidelity, respectively, compared with FBP. High-contrast spatial resolution was similar between all algorithms for all dose levels and phantom sizes (<3% difference in TTF f50%). Compared with FBP, low-contrast spatial resolution was lower for ASiR-V and True Fidelity, with a reduction in TTF f50% of up to 42% and 36%, respectively. CONCLUSIONS The deep learning-based CT reconstruction demonstrated a strong noise magnitude reduction compared with FBP while maintaining similar noise texture and high-contrast spatial resolution. However, the algorithm resulted in images with locally nonstationary noise in lung-textured backgrounds and somewhat degraded low-contrast spatial resolution, similar to what has been observed in currently available iterative reconstruction techniques.
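The NPS summary metric fav is the noise-power-weighted mean spatial frequency: fav = Σ f·NPS(f) / Σ NPS(f) over the radially averaged spectrum. A minimal sketch with a made-up 1D NPS (not a measured spectrum from the study):

```python
import numpy as np

def average_noise_frequency(freq, nps) -> float:
    """NPS-weighted mean spatial frequency: f_av = sum(f * NPS) / sum(NPS).

    A lower f_av means the noise power sits at coarser spatial scales,
    which reads as a 'blotchier' noise texture.
    """
    freq = np.asarray(freq, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return float(np.sum(freq * nps) / np.sum(nps))

# Made-up radially averaged 1D NPS samples (arbitrary power units)
freq = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # cycles/mm
nps = np.array([1.0, 3.0, 4.0, 2.0, 1.0])
f_av = average_noise_frequency(freq, nps)
```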
Affiliation(s)
- Justin Solomon
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, NC, 27705, USA
- Peijei Lyu
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Daniele Marin
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Ehsan Samei
- Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, NC, 27705, USA