1. Wong KK, Cummock JS, He Y, Ghosh R, Volpi JJ, Wong STC. Retrospective study of deep learning to reduce noise in non-contrast head CT images. Comput Med Imaging Graph 2021;94:101996. [PMID: 34637998] [DOI: 10.1016/j.compmedimag.2021.101996]
Abstract
PURPOSE Presented herein is a novel CT denoising method that uses a skip residual encoder-decoder framework with group convolutions and a novel loss function to improve subjective and objective image quality for improved disease detection in patients with acute ischemic stroke (AIS). MATERIALS AND METHODS In this retrospective study, confirmed AIS patients with full-dose non-contrast CT (NCCT) head scans were randomly selected from a stroke registry between 2016 and 2020. 325 patients (67 ± 15 years, 176 men) were included. 18 patients, each with 4-7 NCCTs performed within a 5-day timeframe (83 total scans), were used for model training; 307 patients, each with 1-4 NCCTs performed within a 5-day timeframe (380 total scans), were used for hold-out testing. In the training group, a mean CT was created from each patient's co-registered scans to serve as the target for each input CT, and was used to train a rotation-reflection-equivariant U-Net with skip and residual connections and group convolutions (SRED-GCNN) with a custom loss function to remove image noise. Denoising performance was compared quantitatively and visually to the standard block-matching and 3D filtering (BM3D) method and to RED-CNN. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured in manually drawn regions of interest in grey matter (GM), white matter (WM), and deep grey matter (DG). Visual comparison and the impact on spatial resolution were assessed through phantom images. RESULTS SRED-GCNN reduced the original CT image noise significantly better than BM3D, with SNR improvements in GM, WM, and DG of 2.47x, 2.83x, and 2.64x, respectively, and CNR improvements in DG/WM and GM/WM of 2.30x and 2.16x, respectively. Compared to the proposed SRED-GCNN, RED-CNN reduced noise effectively, though its results were visibly blurred. Scans denoised by SRED-GCNN were visually clearer with preserved anatomy. CONCLUSION The proposed SRED-GCNN model significantly reduces image noise and improves signal-to-noise and contrast-to-noise ratios in 380 unseen head NCCT cases.
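By way of illustration, the following is a minimal sketch of how ROI-based SNR and CNR figures of the kind reported above might be computed, assuming the common mean-over-standard-deviation and mean-difference-over-pooled-noise conventions; the exact definitions used by the authors are not reproduced here, and the HU samples below are synthetic placeholders.

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of a region of interest: mean HU over HU standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissue ROIs (e.g. deep grey matter vs. white
    matter): absolute difference of the mean HU values over the pooled noise estimate."""
    pooled_noise = np.sqrt((roi_a.var() + roi_b.var()) / 2.0)
    return abs(roi_a.mean() - roi_b.mean()) / pooled_noise

# Hypothetical HU samples standing in for manually drawn ROIs on a denoised NCCT slice.
rng = np.random.default_rng(0)
gm = rng.normal(38.0, 4.0, size=500)   # grey-matter ROI (synthetic)
wm = rng.normal(28.0, 4.0, size=500)   # white-matter ROI (synthetic)
print(f"SNR(GM) = {snr(gm):.1f}, SNR(WM) = {snr(wm):.1f}, CNR(GM/WM) = {cnr(gm, wm):.1f}")
```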
Affiliation(s)
- Kelvin K Wong
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; The Ting Tsung and Wei Fong Chao Center for BRAIN, Houston Methodist Hospital, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Radiology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA.
- Jonathon S Cummock
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; MD/PhD Program, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1002, Bryan, TX 77807, USA
- Yunjie He
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA
- Rahul Ghosh
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; MD/PhD Program, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1002, Bryan, TX 77807, USA
- John J Volpi
- Department of Neurology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA
- Stephen T C Wong
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; The Ting Tsung and Wei Fong Chao Center for BRAIN, Houston Methodist Hospital, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Radiology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Neuroscience and Experimental Therapeutics, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1005, Bryan, TX 77807, USA.
2. Vinas L, Scholey J, Descovich M, Kearney V, Sudhyadhom A. Improved contrast and noise of megavoltage computed tomography (MVCT) through cycle-consistent generative machine learning. Med Phys 2021;48:676-690. [PMID: 33232526] [PMCID: PMC8743188] [DOI: 10.1002/mp.14616]
Abstract
PURPOSE Megavoltage computed tomography (MVCT) has been implemented on many radiation therapy treatment machines as a tomographic imaging modality that allows for three-dimensional visualization and localization of patient anatomy. Yet MVCT images exhibit lower contrast and greater noise than their kilovoltage CT (kVCT) counterparts. In this work, we sought to mitigate these disadvantages of MVCT images through an image-to-image machine learning transformation between MVCT and kVCT images. We demonstrated that by learning the style of kVCT images, MVCT images can be converted into high-quality synthetic kVCT (skVCT) images with higher contrast and lower noise than the original MVCT. METHODS Kilovoltage CT and MVCT images of 120 head and neck (H&N) cancer patients treated on an Accuray TomoHD system were retrospectively analyzed in this study. A cycle-consistent generative adversarial network (CycleGAN), a variant of the generative adversarial network (GAN), was used to learn Hounsfield unit (HU) transformations from MVCT to kVCT images, creating skVCT images. A formal mathematical proof is given describing the interplay between function sensitivity and input noise and how it applies to the error variance of a high-capacity function trained with noisy input data. Finally, we show that skVCT shares distributional similarity with kVCT for various macro-structures found in the body. RESULTS Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were improved in skVCT images relative to the original MVCT images and were consistent with kVCT images. Specifically, skVCT CNR for muscle-fat, bone-fat, and bone-muscle improved to 14.8 ± 0.4, 122.7 ± 22.6, and 107.9 ± 22.4, compared with 1.6 ± 0.3, 7.6 ± 1.9, and 6.0 ± 1.7, respectively, in the original MVCT images, and was more consistent with kVCT CNR values of 15.2 ± 0.8, 124.9 ± 27.0, and 109.7 ± 26.5, respectively. Noise was significantly reduced in skVCT images, with SNR values improving by roughly an order of magnitude, consistent with kVCT SNR values. Axial slice mean error (S-ME) and mean absolute error (S-MAE) agreement between kVCT and MVCT/skVCT improved, on average, from -16.0 and 109.1 HU to 8.4 and 76.9 HU, respectively. CONCLUSIONS A kVCT-like qualitative aid was generated from input MVCT data through a CycleGAN instance. This qualitative aid, skVCT, was robust toward embedded metallic material, dramatically improved HU alignment relative to MVCT, and appeared perceptually similar to kVCT, with SNR and CNR values equivalent to those of kVCT images.
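By way of illustration, the following is a minimal sketch of the generator-side CycleGAN objective described above (least-squares adversarial terms plus an L1 cycle-consistency penalty) on unpaired MVCT/kVCT batches; the stand-in networks, cycle weight, and tensor shapes are assumptions for illustration and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class TinyGenerator(nn.Module):
    """Stand-in generator mapping one CT domain to the other (e.g. MVCT -> skVCT)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in patch-style discriminator scoring whether a slice looks like its domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G_mv2kv, G_kv2mv = TinyGenerator(), TinyGenerator()     # MVCT -> skVCT and kVCT -> sMVCT
D_kv, D_mv = TinyDiscriminator(), TinyDiscriminator()   # one discriminator per domain
adv_loss, l1_loss = nn.MSELoss(), nn.L1Loss()           # least-squares GAN + L1 cycle loss
lambda_cyc = 10.0                                       # cycle-consistency weight (assumed)

def generator_objective(mvct: torch.Tensor, kvct: torch.Tensor) -> torch.Tensor:
    """Generator-side CycleGAN objective on unpaired MVCT / kVCT batches
    (discriminator updates and identity losses are omitted for brevity)."""
    fake_kv, fake_mv = G_mv2kv(mvct), G_kv2mv(kvct)
    pred_kv, pred_mv = D_kv(fake_kv), D_mv(fake_mv)
    # Adversarial terms: each translated image should fool its domain discriminator.
    loss_adv = adv_loss(pred_kv, torch.ones_like(pred_kv)) + \
               adv_loss(pred_mv, torch.ones_like(pred_mv))
    # Cycle-consistency terms: translating forth and back should recover the input.
    loss_cyc = l1_loss(G_kv2mv(fake_kv), mvct) + l1_loss(G_mv2kv(fake_mv), kvct)
    return loss_adv + lambda_cyc * loss_cyc

mvct_batch = torch.randn(2, 1, 64, 64)   # synthetic MVCT slices
kvct_batch = torch.randn(2, 1, 64, 64)   # synthetic kVCT slices
print(generator_objective(mvct_batch, kvct_batch).item())
```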
Affiliation(s)
- Luciano Vinas
- Department of Physics, University of California Berkeley, Berkeley, California, 94720
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Jessica Scholey
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Martina Descovich
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Vasant Kearney
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
- Atchar Sudhyadhom
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
3. Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020;14:431-449. [PMID: 32728877] [DOI: 10.1007/s11684-020-0761-1]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred over the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer patient burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT, including superior images for real-time intervention and adaptive and personalized RT; AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. It primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA.