1
Peng J, Qiu RLJ, Wynne JF, Chang CW, Pan S, Wang T, Roper J, Liu T, Patel PR, Yu DS, Yang X. CBCT-Based synthetic CT image generation using conditional denoising diffusion probabilistic model. Med Phys 2024; 51:1847-1859. [PMID: 37646491] [DOI: 10.1002/mp.16704]
Abstract
BACKGROUND Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making CBCT an ideal option for adaptive radiotherapy (ART) replanning. However, severe artifacts and inaccurate Hounsfield unit (HU) values prevent its use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE This work aims to develop a conditional diffusion model that performs image translation from the CBCT to the CT distribution to improve CBCT image quality. METHODS The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that uses a time-embedded U-net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample into the target CT distribution, conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. Performance was evaluated on the generated synthetic CT (sCT) samples using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). The proposed method was also compared to four other diffusion model-based sCT generation methods. RESULTS In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the corresponding metrics were 32.56 HU, 27.65 dB, and 0.98 for sCT versus 38.99 HU, 27.00 dB, and 0.98 for CBCT.
Compared to the other four diffusion models and a cycle-consistent generative adversarial network (CycleGAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS The proposed conditional DDPM can generate sCT from CBCT with accurate HU values and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
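The evaluation metrics used across these studies (MAE, PSNR, NCC) follow their standard definitions; a minimal NumPy sketch (the HU data range assumed for PSNR is illustrative, as papers normalize differently):

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error (in HU for CT images)."""
    return np.mean(np.abs(ref.astype(float) - img.astype(float)))

def psnr(ref, img, data_range=2000.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed HU span."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 20.0 * np.log10(data_range / np.sqrt(mse))

def ncc(ref, img):
    """Normalized cross-correlation between two images (1.0 = identical up to affine shift/scale)."""
    a = ref.astype(float) - ref.mean()
    b = img.astype(float) - img.mean()
    return np.sum(a * b) / (np.sqrt(np.sum(a * a)) * np.sqrt(np.sum(b * b)))
```

In practice `data_range` is usually taken as the width of the clinical HU window used for evaluation.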
Affiliation(s)
- Junbo Peng
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Richard L J Qiu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Justin Roper
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
  - Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Pretesh R Patel
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
2
Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854] [DOI: 10.1088/2057-1976/acea27]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy (ART) planning. Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head-and-neck patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) a nonlocal means patch-based method was modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D U-Net with imbricated deep residual units was implemented; (3) DRUnet was integrated into the generator of a cGAN, with a convolutional PatchGAN classifier as the discriminator. The accuracy of sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from pCT to the registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed the patch-based method, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). The DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. Deep learning approaches outperformed MPBM. Specifically, DRUnet could be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
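A simplified, brute-force 2D version of the global gamma analysis used in this study can be sketched as follows (the study used 3D gamma at 2%/2 mm; interpolation between grid points, which clinical tools perform, is omitted here, so this is an illustrative assumption rather than a clinical implementation):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing=1.0, dd=0.02, dta=2.0):
    """Global 2D gamma pass rate in percent (brute force, no interpolation).
    dd: dose-difference criterion as a fraction of the reference maximum.
    dta: distance-to-agreement criterion in mm. spacing: pixel size in mm."""
    dd_abs = dd * ref.max()
    search = int(np.ceil(dta / spacing)) + 1  # search radius in pixels
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dist2 = ((di * spacing) ** 2 + (dj * spacing) ** 2) / dta ** 2
                    dose2 = ((ev[ii, jj] - ref[i, j]) / dd_abs) ** 2
                    best = min(best, dist2 + dose2)
            passed += best <= 1.0  # gamma index <= 1 counts as a pass
            total += 1
    return 100.0 * passed / total
```

Identical dose grids pass everywhere; a uniform dose error far beyond the criterion fails everywhere, since distance search cannot compensate for a global offset.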
Affiliation(s)
- Souha Aouadi
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- S A Yoganathan
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Tarraf Torfeh
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Satheesh Paloor
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Palmira Caparrotti
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Rabih Hammoud
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Noora Al-Hammadi
  - Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
3
Dahiya N, Alam SR, Zhang P, Zhang SY, Li T, Yezzi A, Nadeem S. Multitask 3D CBCT-to-CT translation and organs-at-risk segmentation using physics-based data augmentation. Med Phys 2021; 48:5130-5141. [PMID: 34245012] [DOI: 10.1002/mp.15083]
Abstract
PURPOSE In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are only used for patient setup during radiotherapy. Treatment planning is performed once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OAR) structures. If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment as well as for deriving biomarkers for treatment response. METHODS Using a novel physics-based data augmentation strategy, we synthesize a large dataset of perfectly/inherently registered pCT and synthetic-CBCT pairs for a locally advanced lung cancer patient cohort, which are then used in a multitask three-dimensional (3D) deep learning framework to simultaneously segment and translate real weekly CBCT images to high-quality pCT-like images. RESULTS We compared the synthetic CT and OAR segmentations generated by the model to real pCT and manual OAR segmentations and showed promising results. The real week-1 (baseline) CBCT images, which had an average mean absolute error (MAE) of 162.77 HU compared to pCT images, are translated to synthetic CT images with a drastically improved average MAE of 29.31 HU and an average structural similarity of 92% with the pCT images. The average Dice scores of the 3D OAR segmentations are: lungs 0.96, heart 0.88, spinal cord 0.83, and esophagus 0.66. CONCLUSIONS We demonstrate an approach to translate artifact-ridden CBCT images to high-quality synthetic CT images while simultaneously generating good-quality segmentation masks for different OARs. This approach could allow clinicians to adjust treatment plans using only the routine low-quality CBCT images, potentially improving patient outcomes.
Our code, data, and pre-trained models will be made available via our physics-based data augmentation library, Physics-ArX, at https://github.com/nadeemlab/Physics-ArX.
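The Dice scores reported for the OAR segmentations follow the standard overlap definition; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A score of 1.0 indicates identical masks; values around 0.66 (as reported for the esophagus, a small low-contrast organ) indicate substantially weaker overlap.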
Affiliation(s)
- Navdeep Dahiya
  - Department of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Sadegh R Alam
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Pengpeng Zhang
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Si-Yuan Zhang
  - Department of Radiation Oncology, Peking University Cancer Hospital, Beijing, China
- Tianfang Li
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Anthony Yezzi
  - Department of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Saad Nadeem
  - Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
4
Chen L, Liang X, Shen C, Nguyen D, Jiang S, Wang J. Synthetic CT generation from CBCT images via unsupervised deep learning. Phys Med Biol 2021; 66. [PMID: 34061043] [DOI: 10.1088/1361-6560/ac01b6]
Abstract
Adaptive radiation therapy (ART) is applied to account for anatomical variations observed over the treatment course. Daily or weekly cone-beam computed tomography (CBCT) is commonly used in the clinic for patient positioning, but CBCT's inaccuracy in Hounsfield units (HU) prevents its application to dose calculation and treatment planning. Adaptive re-planning can be performed by deformably registering the planning CT (pCT) to CBCT. However, scattering artifacts and noise in CBCT decrease the accuracy of deformable registration and induce uncertainty in the treatment plan. Hence, generating from CBCT a synthetic CT (sCT) that has the same anatomical structure as CBCT but accurate HU values is desirable for ART. We proposed an unsupervised style-transfer-based approach to generate sCT based on CBCT and pCT. Unsupervised learning was desired because exactly matched CBCT and CT are rarely available, even when they are taken a few minutes apart. In the proposed model, CBCT and pCT are two inputs that provide anatomical structure and accurate HU information, respectively. The training objective function is designed to simultaneously minimize (1) the contextual loss between sCT and CBCT, to maintain the content and structure of CBCT in sCT, and (2) the style loss between sCT and pCT, to achieve pCT-like image quality in sCT. We used CBCT and pCT images of 114 patients to train and validate the designed model, and another 29 independent patient cases to test its effectiveness. We quantitatively compared the resulting sCT with the original CBCT using the deformed same-day pCT as reference. The structural similarity index, peak signal-to-noise ratio, and mean absolute error in HU of the sCT were 0.9723, 33.68 dB, and 28.52 HU, respectively, while those of the CBCT were 0.9182, 29.67 dB, and 49.90 HU, respectively. We have demonstrated the effectiveness of the proposed model in using CBCT and pCT to synthesize CT-quality images.
This model may permit using CBCT for advanced applications such as adaptive treatment planning.
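The style term in such transfer objectives is commonly computed from Gram matrices of network feature maps; a minimal sketch of that component only (an assumption for illustration: the abstract does not specify the feature extractor, and the contextual loss over pretrained features that the method pairs with it is omitted here):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map of shape (channels, height, width),
    normalized by the number of elements."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_sct, feat_pct):
    """Squared Frobenius distance between Gram matrices: small when the
    sCT features share the second-order statistics (the 'style') of pCT."""
    g1, g2 = gram_matrix(feat_sct), gram_matrix(feat_pct)
    return float(np.sum((g1 - g2) ** 2))
```

Because the Gram matrix discards spatial arrangement, this term constrains texture and intensity statistics without forcing the sCT to copy pCT anatomy, which is instead preserved by the content/contextual term against the CBCT.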
Affiliation(s)
- Liyuan Chen
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Xiao Liang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Chenyang Shen
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Dan Nguyen
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Steve Jiang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Jing Wang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
5
Chen L, Liang X, Shen C, Jiang S, Wang J. Synthetic CT generation from CBCT images via deep learning. Med Phys 2020; 47:1115-1125. [PMID: 31853974] [PMCID: PMC7067667] [DOI: 10.1002/mp.13978]
Abstract
PURPOSE Cone-beam computed tomography (CBCT) scanning is used daily or weekly (i.e., on-treatment CBCT) for accurate patient setup in image-guided radiotherapy. However, inaccuracy of CT numbers prevents CBCT from performing advanced tasks such as dose calculation and treatment planning. Motivated by the promising performance of deep learning in medical imaging, we propose a deep U-net-based approach that synthesizes CT-like images with accurate numbers from planning CT, while keeping the same anatomical structure as on-treatment CBCT. METHODS We formulated the CT synthesis problem under a deep learning framework, where a deep U-net architecture was used to take advantage of the anatomical structure of on-treatment CBCT and image intensity information of planning CT. U-net was chosen because it exploits both global and local features in the image spatial domain, matching our task to suppress global scattering artifacts and local artifacts such as noise in CBCT. To train the synthetic CT generation U-net (sCTU-net), we include on-treatment CBCT and initial planning CT of 37 patients (30 for training, seven for validation) as the input. Additional replanning CT images acquired on the same day as CBCT after deformable registration are utilized as the corresponding reference. To demonstrate the effectiveness of the proposed sCTU-net, we use another seven independent patient cases (560 slices) for testing. RESULTS We quantitatively compared the resulting synthetic CT (sCT) with the original CBCT image using deformed same-day pCT images as reference. The averaged accuracy measured by mean absolute error (MAE) between sCT and reference CT (rCT) on testing data is 18.98 HU, while MAE between CBCT and rCT is 44.38 HU. CONCLUSIONS The proposed sCTU-net can synthesize CT-quality images with accurate CT numbers from on-treatment CBCT and planning CT. This potentially enables advanced CBCT applications for adaptive treatment planning.
Affiliation(s)
- Liyuan Chen
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xiao Liang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Chenyang Shen
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Steve Jiang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Jing Wang
  - Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
6
Razi T, Manaf NV, Yadekar M, Razi S, Gheibi S. Correction of Cupping Artifacts in Axial Cone-Beam Computed Tomography Images by Using Image Processing Algorithms. Journal of Advanced Oral Research 2019. [DOI: 10.1177/2320206819870898]
Abstract
Objectives: One of the most important problems of the cone-beam computed tomography (CBCT) imaging technique is the presence of dense objects, such as implants, amalgam fillings, and metal veneers, which result in beam-hardening artifacts. With the increasing application of CBCT images and considering the problems related to cupping artifacts, several algorithms have been presented to reduce these artifacts. The aim was to present an algorithm to eliminate cupping artifacts from axial and other reconstructed CBCT images. Materials and Methods: We used CBCT images acquired with a NewTom VG imaging system (Verona, Italy) at the Dentistry Faculty, Medical Sciences University, Tabriz, Iran; each image has a resolution of 366 × 320 in DICOM format. Fifty images of patients with cupping artifacts were selected. Using a Sobel edge detector and a nonlinear gamma correction coefficient, the density difference between the original axial image and the image resulting from nonlinear gamma correction was calculated at the exact locations of the radiopaque dental materials detected by the Sobel operator. Points at which this density difference fell outside a defined limit were treated as image artifacts and eliminated from the original image by inpainting. Results: The resultant axial images were imported into the NTT viewer V5.6 and used to produce reconstructed cross-sectional and panoramic images without cupping artifacts. Conclusions: Comparison of the acquired images showed that the proposed algorithm is practical and effective for reducing cupping artifacts while preserving the quality of the reconstructed images. The algorithm does not require any additional equipment.
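The detection step described above can be sketched as follows (a simplified illustration, not the authors' implementation: the gamma value and both thresholds are assumed placeholders, and the final inpainting step is omitted):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def artifact_mask(img, gamma=0.5, edge_thresh=1.0, diff_thresh=0.2):
    """Flag pixels where the density shift introduced by nonlinear gamma
    correction exceeds a limit near strong (radiopaque-material) edges."""
    norm = img.astype(float) / max(img.max(), 1e-12)
    corrected = norm ** gamma            # nonlinear gamma correction
    diff = np.abs(corrected - norm)      # density difference per pixel
    edges = sobel_magnitude(norm) > edge_thresh
    return edges & (diff > diff_thresh)  # candidate artifact locations
```

The returned boolean mask would then feed an inpainting routine that replaces the flagged pixels from their surroundings.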
Affiliation(s)
- Tahmineh Razi
  - Department of Oral & Maxillofacial Radiology, Faculty of Dentistry, Dental and Periodontal Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Nader Vahdani Manaf
  - Department of Electronic Engineering, Tabriz Branch, Seraj Higher Education Institute, Tabriz, Iran
- Morteza Yadekar
  - Department of Electronic Engineering, Tabriz Branch, Seraj Higher Education Institute, Tabriz, Iran
- Sedigheh Razi
  - Department of Oral & Maxillofacial Radiology, Faculty of Dentistry, Dental and Periodontal Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Shiva Gheibi
  - Department of Oral & Maxillofacial Radiology, Faculty of Dentistry, Dental and Periodontal Research Center, Tabriz University of Medical Sciences, Tabriz, Iran