1
Wang H, Li Z, Shi D, Yin P, Liang B, Zou J, Tao Q, Ma W, Yin Y, Li Z. Assessing intra- and interfraction motion and its dosimetric impacts on cervical cancer adaptive radiotherapy based on 1.5T MR-Linac. Radiat Oncol 2024;19:176. [PMID: 39696365] [DOI: 10.1186/s13014-024-02569-5]
Abstract
PURPOSE To quantify the intra- and interfraction motion of the target volume and organs at risk (OARs) during adaptive radiotherapy (ART) for uterine cervical cancer (UCC) on a 1.5T MR-Linac, to identify appropriate UCC target volume margins for the adapt-to-shape (ATS) and adapt-to-position (ATP) workflows, and to analyze the dosimetric differences caused by this motion. METHODS Thirty-two UCC patients were included. Magnetic resonance (MR) images were acquired before and after each treatment fraction. The maximum and average shifts of the centroid of the target volume and OARs along the anterior/posterior (A/P; Y axis), cranial/caudal (Cr/C; Z axis), and right/left (R/L; X axis) directions were analyzed from the image contours. Bladder wall deformation in six directions and changes in organ volume were also analyzed, and the motion of the upper, middle, and lower rectum was quantified. The correlation between OAR displacement/deformation and target volume displacement was evaluated. The planning CT dose distribution was mapped onto the MR images to generate a plan based on the new anatomy, and the dosimetric differences caused by motion were analyzed. RESULTS For intrafraction motion, the clinical target volume (CTV) motion along the X, Y, and Z axes was within 5 mm; for interfraction motion, motion along the X axis was within 5 mm, and the maximum displacements along the Y and Z axes were 7.45 and 6.59 mm, respectively. Deformation of the superior and anterior bladder walls was most pronounced, and the largest motion was observed in the upper segment of the rectum. Posterior bladder wall displacement was correlated with rectal and CTV centroid Y-axis displacement (r = 0.63, r = 0.50, P < 0.05). Compared with the interfractional plan, a significant decrease in the planning target volume (PTV) D98 (7.5 Gy, 7.54 Gy) was observed, whereas no significant intrafraction differences were found. CONCLUSION During ART for UCC patients on an MR-Linac, we recommend isotropic PTV margins of 5 mm for the ATS workflow based on intrafraction motion. Based on interfraction motion, we recommend anisotropic PTV margins of 5 mm in the R/L direction, 8 mm in the A/P direction, and 7 mm in the Cr/C direction for the ATP workflow to compensate for dosimetric errors due to motion.
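The motion analysis above boils down to tracking the centroid of each contoured structure between paired MR scans. The following minimal sketch is not the authors' code; it assumes the contours are available as binary masks on a common (z, y, x) grid with known voxel spacing, and simply converts the centroid shift to millimetres.

```python
# Illustrative sketch (not the study's code): centroid motion of a contoured
# structure between two MR scans resampled to a common grid.
# Assumes binary masks as numpy arrays ordered (z, y, x) and voxel spacing in mm.
import numpy as np
from scipy import ndimage

def centroid_shift_mm(mask_pre, mask_post, spacing_zyx=(3.0, 1.0, 1.0)):
    """Return the (Z, Y, X) centroid displacement in mm between two binary masks."""
    c_pre = np.array(ndimage.center_of_mass(mask_pre))    # voxel coordinates
    c_post = np.array(ndimage.center_of_mass(mask_post))
    return (c_post - c_pre) * np.array(spacing_zyx)       # convert voxels -> mm

# Toy example: a 1-voxel shift along Y corresponds to 1 mm with this spacing.
pre = np.zeros((10, 64, 64)); pre[4:6, 30:34, 30:34] = 1
post = np.roll(pre, shift=1, axis=1)
print(centroid_shift_mm(pre, post))   # ~[0., 1., 0.]
```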
Affiliation(s)
- Huadong Wang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenkai Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Chengdu University of Technology, Chengdu, China
- Dengxin Shi: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Peijun Yin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Benzhe Liang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jingmin Zou: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Department of Graduate Science, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Qiuqing Tao: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Southeastern University, Nanjing, China
- Wencheng Ma: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Department of Graduate Science, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Yong Yin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
2
Khamfongkhruea C, Prakarnpilas T, Thongsawad S, Deeharing A, Chanpanya T, Mundee T, Suwanbut P, Nimjaroen K. Supervised deep learning-based synthetic computed tomography from kilovoltage cone-beam computed tomography images for adaptive radiation therapy in head and neck cancer. Radiat Oncol J 2024;42:181-191. [PMID: 39354821] [PMCID: PMC11467487] [DOI: 10.3857/roj.2023.00584]
Abstract
PURPOSE To generate and investigate a supervised deep learning algorithm for creating synthetic computed tomography (sCT) images from kilovoltage cone-beam computed tomography (kV-CBCT) images for adaptive radiation therapy (ART) in head and neck cancer (HNC). MATERIALS AND METHODS A supervised U-Net deep learning model was trained using 3,491 image pairs from planning computed tomography (pCT) and kV-CBCT datasets obtained from 40 HNC patients. The dataset was split into 80% for training and 20% for testing. Evaluation of the sCT images against the pCT images focused on three aspects: Hounsfield unit (HU) accuracy, assessed using the mean absolute error (MAE) and root mean square error (RMSE); image quality, evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) between sCT and pCT images; and dosimetric accuracy, encompassing 3D gamma passing rates for the dose distribution and the percentage dose difference. RESULTS MAE, RMSE, PSNR, and SSIM improved from their initial values of 53.15 ± 40.09, 153.99 ± 79.78, 47.91 ± 4.98 dB, and 0.97 ± 0.02 to 41.47 ± 30.59, 130.39 ± 78.06, 49.93 ± 6.00 dB, and 0.98 ± 0.02, respectively. Regarding dose evaluation, 3D gamma analysis of the dose distribution within the sCT images under 2%/2 mm, 3%/2 mm, and 3%/3 mm criteria yielded passing rates of 92.1% ± 3.8%, 93.8% ± 3.0%, and 96.9% ± 2.0%, respectively. The sCT images exhibited only minor variations in the percentage dose distribution of the investigated target and structure volumes; however, they showed anatomical variations when compared with the pCT images. CONCLUSION These findings highlight the potential of the supervised U-Net deep learning model for generating kV-CBCT-based sCT images for ART in patients with HNC.
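For readers who want to reproduce the kind of image-similarity evaluation reported here, the following is a minimal sketch (not from the paper) of MAE, RMSE, PSNR, and SSIM between an sCT and its reference pCT; the HU clipping range, array handling, and use of scikit-image are assumptions.

```python
# Hedged sketch of the reported image-similarity metrics between a synthetic CT
# and the planning CT. Array shapes and the HU range below are assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_metrics(sct, pct, hu_range=(-1000.0, 2000.0)):
    sct = np.clip(sct.astype(np.float64), *hu_range)
    pct = np.clip(pct.astype(np.float64), *hu_range)
    diff = sct - pct
    mae = np.mean(np.abs(diff))                     # Hounsfield units
    rmse = np.sqrt(np.mean(diff ** 2))              # Hounsfield units
    data_range = hu_range[1] - hu_range[0]
    psnr = peak_signal_noise_ratio(pct, sct, data_range=data_range)  # dB
    ssim = structural_similarity(pct, sct, data_range=data_range)
    return {"MAE": mae, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}

# Toy example with random 2D "slices"; real use would pass co-registered volumes.
rng = np.random.default_rng(0)
pct = rng.uniform(-1000, 2000, size=(128, 128))
print(sct_metrics(pct + rng.normal(0, 20, size=pct.shape), pct))
```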
Affiliation(s)
- Chirasak Khamfongkhruea: Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand; Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Tipaporn Prakarnpilas: Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Sangutid Thongsawad: Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand; Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Aphisara Deeharing: Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Thananya Chanpanya: Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Thunpisit Mundee: Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Pattarakan Suwanbut: Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Kampheang Nimjaroen: Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand; Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
3
Peng K, Zhou D, Sun K, Wang J, Deng J, Gong S. ACSwinNet: A Deep Learning-Based Rigid Registration Method for Head-Neck CT-CBCT Images in Image-Guided Radiotherapy. Sensors (Basel) 2024;24:5447. [PMID: 39205140] [PMCID: PMC11359988] [DOI: 10.3390/s24165447]
Abstract
Accurate rigid registration between head-neck computed tomography (CT) and cone-beam computed tomography (CBCT) images is crucial for correcting setup errors in image-guided radiotherapy (IGRT) for head and neck tumors. However, conventional registration methods that treat the head and neck as a single entity may not achieve the necessary accuracy for the head region, which is particularly radiation-sensitive. We propose ACSwinNet, a deep learning-based method for head-neck CT-CBCT rigid registration that aims to improve registration precision in the head region. Our approach integrates an anatomical constraint encoder with anatomical segmentations of tissues and organs to enhance the accuracy of rigid registration in the head region. We also employ a Swin Transformer-based network for registration in cases with large initial misalignment and a perceptual similarity metric network to address intensity discrepancies and artifacts between the CT and CBCT images. We validate the proposed method on a head-neck CT-CBCT dataset acquired from clinical patients. Compared with the conventional rigid method, our method exhibits a lower target registration error (TRE) for landmarks in the head region (reduced from 2.14 ± 0.45 mm to 1.82 ± 0.39 mm), a higher Dice similarity coefficient (DSC) (increased from 0.743 ± 0.051 to 0.755 ± 0.053), and a higher structural similarity index (increased from 0.854 ± 0.044 to 0.870 ± 0.043). Our method effectively addresses the low registration accuracy in the head region that has limited conventional methods, demonstrating significant potential for improving the accuracy of IGRT for head and neck tumors.
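As a hedged illustration of the two evaluation metrics quoted above (not the ACSwinNet implementation), the sketch below computes the target registration error for paired landmarks under a rigid transform and the Dice similarity coefficient between binary segmentations.

```python
# Minimal sketch of the evaluation metrics: TRE over paired landmarks and DSC
# between binary masks. Landmark coordinates are assumed to be in millimetres.
import numpy as np

def tre_mm(moving_landmarks, fixed_landmarks, rotation, translation):
    """Mean Euclidean distance (mm) after applying a rigid transform x' = R x + t."""
    warped = moving_landmarks @ rotation.T + translation
    return np.mean(np.linalg.norm(warped - fixed_landmarks, axis=1))

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Identity transform leaves landmarks untouched, so TRE is just the residual offset.
pts = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
print(tre_mm(pts, pts + 2.0, np.eye(3), np.zeros(3)))   # 2*sqrt(3) ~ 3.46 mm
```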
Affiliation(s)
- Kuankuan Peng: Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China; Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
- Danyu Zhou: Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Kaiwen Sun: Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Junfeng Wang: Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jianchun Deng: Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China; Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
- Shihua Gong: Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China; Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
4
Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024;171:111313. [PMID: 38237518] [DOI: 10.1016/j.ejrad.2024.111313]
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, Generative Adversarial Networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review aims to provide a comprehensive overview of the use of GANs in spinal imaging and to emphasize their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain are not fully addressed in broader reviews of GANs in general medical imaging; such a review can offer insights into the tailored solutions and innovations that GANs bring to spinal imaging. METHODS An extensive literature search covering 2017 to July 2023 was conducted using the major search engines to identify studies that used GANs in spinal imaging. RESULTS The reported applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time. The generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Lastly, GANs have been applied to CT-to-MRI conversion, with the potential to generate near-MR images from CT without an MRI scan. CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
Affiliation(s)
- Konstantinos Vrettos: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis: Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis: Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
5
Liu X, Yang R, Xiong T, Yang X, Li W, Song L, Zhu J, Wang M, Cai J, Geng L. CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset. Cancers (Basel) 2023;15:5479. [PMID: 38001738] [PMCID: PMC10670900] [DOI: 10.3390/cancers15225479]
Abstract
PURPOSE To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values. MATERIALS AND METHODS A total of 228 cervical cancer patients treated on different linear accelerators were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were utilized to assess the quality of the synthetic CT images generated by our model. RESULTS The MAE between the synthetic CT images generated by our model and the planning CT was 10.93 HU, compared with 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model performed better both qualitatively and quantitatively. CONCLUSIONS Our model could synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved tissue edges well, which is important for downstream tasks in adaptive radiotherapy.
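The abstract describes an encoder-decoder with residual learning and skip connections but gives no implementation details; the toy PyTorch sketch below illustrates that general pattern only. The layer widths, depth, and 2D slice-wise formulation are assumptions, not the authors' architecture.

```python
# Rough sketch, assuming PyTorch: an encoder-decoder with residual blocks and a
# skip connection, in the spirit of (but not identical to) the model described.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))          # residual learning

class TinyCBCT2CT(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), ResBlock(ch))
        self.down = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)
        self.bottleneck = ResBlock(ch * 2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.dec = nn.Sequential(ResBlock(ch), nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        return self.dec(self.up(b) + e)               # decoder reuses encoder features (skip connection)

# A CBCT slice in, a synthetic-CT slice out (shape preserved for even input sizes).
print(TinyCBCT2CT()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```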
Affiliation(s)
- Xi Liu: School of Physics, Beihang University, Beijing 102206, China; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Ruijie Yang: Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Tianyu Xiong: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Xueying Yang: School of Physics, Beihang University, Beijing 102206, China; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Wen Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Liming Song: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Jiarui Zhu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Mingqing Wang: Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Lisheng Geng: School of Physics, Beihang University, Beijing 102206, China; Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing 102206, China; Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing 100191, China
6
Kim K, Lim CY, Shin J, Chung MJ, Jung YG. Enhanced artificial intelligence-based diagnosis using CBCT with internal denoising: Clinical validation for discrimination of fungal ball, sinusitis, and normal cases in the maxillary sinus. Comput Methods Programs Biomed 2023;240:107708. [PMID: 37473588] [DOI: 10.1016/j.cmpb.2023.107708]
Abstract
BACKGROUND AND OBJECTIVE Cone-beam computed tomography (CBCT) provides three-dimensional volumetric imaging of a target with low radiation dose and cost compared with conventional computed tomography, and it is widely used in the detection of paranasal sinus disease. However, it lacks the sensitivity to detect soft tissue lesions owing to reconstruction constraints. Consequently, only physicians with expertise in CBCT reading can distinguish between inherent artifacts or noise and disease, restricting the use of this imaging modality. The development of artificial intelligence (AI)-based computer-aided diagnosis methods for CBCT to overcome the shortage of experienced physicians has attracted substantial attention. However, advanced AI-based diagnosis addressing the intrinsic noise in CBCT has not been devised, discouraging the practical use of AI solutions for CBCT. We introduce an AI-based computer-aided diagnosis method for CBCT that accounts for the intrinsic imaging noise and evaluate its efficacy and implications. METHODS We propose an AI-based computer-aided diagnosis method using CBCT with a denoising module. This module is applied before diagnosis to reconstruct the internal ground-truth full-dose scan corresponding to an input CBCT image and thereby improve diagnostic performance. The proposed method is model agnostic and compatible with various existing and future AI-based denoising or diagnosis models. RESULTS The external validation results for the unified diagnosis of sinus fungal ball, chronic rhinosinusitis, and normal cases show that, compared with a baseline, the proposed method improves the micro-averaged area under the curve, the macro-averaged area under the curve, and the accuracy by 7.4, 5.6, and 9.6 percentage points (from 86.2, 87.0, and 73.4 to 93.6, 92.6, and 83.0%), respectively, while improving human diagnostic accuracy by 11 percentage points (from 71.7 to 83.0%), demonstrating technical differentiation and clinical effectiveness. In addition, the physician's ability to evaluate the AI-derived diagnosis results may be enhanced compared with existing solutions. CONCLUSION This pioneering study on AI-based diagnosis using CBCT indicates that denoising can improve diagnostic performance and reader interpretability in images of the sinonasal area, thereby providing a new approach and direction for radiographic image reconstruction in the development of AI-based diagnostic solutions. Furthermore, we believe that the performance enhancement will expedite the adoption of automated diagnostic solutions using CBCT, especially in locations with a shortage of skilled clinicians and limited access to high-dose scanning.
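The key design point, a model-agnostic denoising step inserted before diagnosis, can be pictured as a simple composition of two interchangeable components. The sketch below is illustrative only; the function names and interfaces are assumptions, not the paper's API.

```python
# Conceptual sketch: denoise first, then run any downstream classifier.
# Placeholder names and the dict-of-probabilities output format are assumptions.
from typing import Callable
import numpy as np

def diagnose_with_denoising(cbct_volume: np.ndarray,
                            denoiser: Callable[[np.ndarray], np.ndarray],
                            classifier: Callable[[np.ndarray], dict]) -> dict:
    """Restore a pseudo full-dose volume first, then run a diagnosis model on it."""
    restored = denoiser(cbct_volume)   # e.g. a trained network mapping CBCT -> full-dose-like
    return classifier(restored)        # e.g. fungal ball / rhinosinusitis / normal

# Placeholder components just to show the plumbing.
identity_denoiser = lambda v: v
dummy_classifier = lambda v: {"normal": 1.0, "fungal_ball": 0.0, "sinusitis": 0.0}
print(diagnose_with_denoising(np.zeros((16, 64, 64)), identity_denoiser, dummy_classifier))
```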
Affiliation(s)
- Kyungsu Kim: Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Chae Yeon Lim: Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
- Joongbo Shin: Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Myung Jin Chung: Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Yong Gi Jung: Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
7
Li Z, Zhang Q, Li H, Kong L, Wang H, Liang B, Chen M, Qin X, Yin Y, Li Z. Using RegGAN to generate synthetic CT images from CBCT images acquired with different linear accelerators. BMC Cancer 2023;23:828. [PMID: 37670252] [PMCID: PMC10478281] [DOI: 10.1186/s12885-023-11274-7]
Abstract
BACKGROUND The goal was to investigate the feasibility of the registration generative adversarial network (RegGAN) model for image conversion in head and neck adaptive radiation therapy and its stability across different cone-beam computed tomography (CBCT) models. METHODS A total of 100 CBCT and CT image sets of patients diagnosed with head and neck tumors were utilized for the training phase, whereas the testing phase involved 40 distinct patients whose images were obtained from four different linear accelerators. The RegGAN model was trained and tested to evaluate its performance. The quality of the generated synthetic CT (sCT) images was compared with that of the planning CT (pCT) images using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Moreover, the radiation therapy plan was applied uniformly to both the sCT and pCT images to analyze the planning target volume (PTV) dose statistics and calculate the dose difference rate, further verifying the model's accuracy. RESULTS The generated sCT images had good image quality, and no significant differences were observed among the different CBCT models. The best conversion was achieved for the Synergy accelerator: the MAE decreased from 231.3 ± 55.48 to 45.63 ± 10.78, the PSNR increased from 19.40 ± 1.46 to 26.75 ± 1.32, and the SSIM increased from 0.82 ± 0.02 to 0.85 ± 0.04. The image quality improvement achieved by RegGAN-based sCT synthesis was evident, and no significant synthesis differences were observed among the different accelerators. CONCLUSION The sCT images generated by the RegGAN model had high image quality, and the model exhibited strong generalization across different accelerators, enabling its outputs to be used as reference images for head and neck adaptive radiation therapy.
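The dose comparison described above amounts to recomputing the same plan on sCT and pCT and comparing PTV dose statistics. The sketch below shows one plausible way to compute a percentage dose difference for a DVH point such as D98; the abstract does not define the exact "dose difference rate", so treat the formula, function names, and dose grids here as assumptions.

```python
# Illustrative sketch (assumed definitions): relative difference of a PTV DVH
# point when the same plan is recomputed on sCT versus pCT dose grids.
import numpy as np

def dvh_point(dose, ptv_mask, percent):
    """Dose received by at least `percent`% of the PTV volume (e.g. D98)."""
    return np.percentile(dose[ptv_mask > 0], 100 - percent)

def dose_difference_rate(dose_sct, dose_pct, ptv_mask, percent=98):
    d_sct = dvh_point(dose_sct, ptv_mask, percent)
    d_pct = dvh_point(dose_pct, ptv_mask, percent)
    return 100.0 * (d_sct - d_pct) / d_pct           # percent difference

# Toy grids: a 1% global scaling of dose shows up as ~1% difference in D98.
rng = np.random.default_rng(1)
mask = np.zeros((32, 32, 32)); mask[8:24, 8:24, 8:24] = 1
dose = 50.0 + rng.normal(0, 0.5, size=mask.shape)
print(dose_difference_rate(dose * 1.01, dose, mask))  # ~1.0
```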
Affiliation(s)
- Zhenkai Li: Chengdu University of Technology, Chengdu, China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Haodong Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Lingke Kong: Manteia Technologies Co., Ltd., Xiamen, China
- Huadong Wang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Benzhe Liang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Mingming Chen: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xiaohang Qin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
8
Uh J, Wang C, Jordan JA, Pirlepesov F, Becksfort JB, Ates O, Krasin MJ, Hua CH. A hybrid method of correcting CBCT for proton range estimation with deep learning and deformable image registration. Phys Med Biol 2023;68. [PMID: 37442128] [PMCID: PMC10846632] [DOI: 10.1088/1361-6560/ace754]
Abstract
Objective. This study aimed to develop a novel method for generating synthetic CT (sCT) from cone-beam CT (CBCT) of the abdomen/pelvis with bowel gas pockets to facilitate estimation of proton ranges. Approach. CBCT, the same-day repeat CT, and the planning CT (pCT) of 81 pediatric patients were used for training (n = 60), validation (n = 6), and testing (n = 15) of the method. The proposed method hybridizes unsupervised deep learning (CycleGAN) and deformable image registration (DIR) of the pCT to the CBCT. The CycleGAN and DIR are applied to generate the geometry-weighted (high spatial-frequency) and intensity-weighted (low spatial-frequency) components of the sCT, respectively, so that each process handles only the component weighted toward its strength. The resultant sCT is further improved in bowel gas regions and other tissues by iteratively feeding the sCT back to correct inaccurate DIR and by increasing the contribution of the deformed pCT in regions of accurate DIR. Main results. The hybrid sCT was more accurate than the deformed pCT and the CycleGAN-only sCT, as indicated by a smaller mean absolute error in CT numbers (28.7 ± 7.1 HU versus 38.8 ± 19.9 HU/53.2 ± 5.5 HU; P ≤ 0.012) and a higher Dice similarity of the internal gas regions (0.722 ± 0.088 versus 0.180 ± 0.098/0.659 ± 0.129; P ≤ 0.002). Accordingly, the hybrid method yielded more accurate proton ranges for the beams intersecting gas pockets (11 fields in 6 patients) than the individual methods (90th percentile error in the 80% distal fall-off, 1.8 ± 0.6 mm versus 6.5 ± 7.8 mm/3.7 ± 1.5 mm; P ≤ 0.013). The gamma passing rates also showed a significant dosimetric advantage for the hybrid method (99.7 ± 0.8% versus 98.4 ± 3.1%/98.3 ± 1.8%; P ≤ 0.007). Significance. The hybrid method significantly improved the accuracy of sCT and shows promise for CBCT-based proton range verification and adaptive replanning of abdominal/pelvic proton therapy even when gas pockets are present in the beam path.
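The core idea of the hybrid, low spatial frequencies from the deformed pCT and high spatial frequencies from the CycleGAN output, can be sketched with a simple Gaussian frequency split. This is a strong simplification of the method (the iterative DIR feedback and gas-region handling are omitted), and the sigma value is an arbitrary assumption.

```python
# Very simplified sketch of the frequency-weighted hybridization idea: take
# low-frequency (intensity) content from the deformed planning CT and
# high-frequency (geometry) content from the CycleGAN synthetic CT.
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_sct(cyclegan_sct, deformed_pct, sigma_vox=3.0):
    low_from_pct = gaussian_filter(deformed_pct.astype(np.float64), sigma_vox)
    high_from_gan = cyclegan_sct.astype(np.float64) - gaussian_filter(
        cyclegan_sct.astype(np.float64), sigma_vox)
    return low_from_pct + high_from_gan

# Shapes are preserved; real inputs would be co-registered HU volumes.
rng = np.random.default_rng(2)
vol = rng.normal(0, 100, size=(16, 64, 64))
print(hybrid_sct(vol, vol).shape)    # (16, 64, 64)
```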
Affiliation(s)
- Jinsoo Uh: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chuang Wang: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jacob A Jordan: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America; College of Medicine, The University of Tennessee Health Science Center, Memphis, TN, United States of America
- Fakhriddin Pirlepesov: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jared B Becksfort: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Ozgur Ates: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Matthew J Krasin: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chia-Ho Hua: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America