1
Rossi M, Belotti G, Mainardi L, Baroni G, Cerveri P. Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools. Comput Assist Surg (Abingdon) 2024; 29:2327981. PMID: 38468391. DOI: 10.1080/24699322.2024.2327981.
Abstract
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for imaging throughout fractionated dose delivery. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensity into Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field of view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points after calibration (53.78% vs 90.26%). Real data confirmed this trend, with slightly lower performance under the same criteria (65.36% vs 87.20%).
These results suggest that generative artificial intelligence brings narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
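The 3%/2 mm gamma criterion quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. As an illustration only (not the authors' implementation), a brute-force 1D global gamma analysis can be sketched as follows; the function name and profile inputs are hypothetical:

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing, dose_tol=0.03, dist_tol=2.0):
    """Simplified 1D global gamma analysis (brute force).

    ref, evl : reference and evaluated dose profiles on the same grid (Gy)
    spacing  : grid spacing in mm
    dose_tol : dose criterion as a fraction of the max reference dose (3%)
    dist_tol : distance-to-agreement criterion in mm (2 mm)
    """
    x = np.arange(len(ref)) * spacing
    dmax = ref.max()
    gammas = []
    for i, d_ref in enumerate(ref):
        # gamma for point i: minimum over all evaluated points of the
        # combined normalized dose-difference / distance metric
        dose_term = ((evl - d_ref) / (dose_tol * dmax)) ** 2
        dist_term = ((x - x[i]) / dist_tol) ** 2
        gammas.append(np.sqrt(dose_term + dist_term).min())
    gammas = np.array(gammas)
    # pass rate: fraction of points with gamma <= 1, as a percentage
    return 100.0 * np.mean(gammas <= 1.0)
```

Clinical gamma tools operate on 2D/3D dose grids with interpolation and low-dose thresholds; this sketch only conveys the shape of the metric.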
Affiliation(s)
- Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
- Gabriele Belotti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Luca Mainardi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
2
Hooshangnejad H, China D, Huang Y, Zbijewski W, Uneri A, McNutt T, Lee J, Ding K. XIOSIS: An X-Ray-Based Intra-Operative Image-Guided Platform for Oncology Smart Material Delivery. IEEE Trans Med Imaging 2024; 43:3176-3187. PMID: 38602853. DOI: 10.1109/tmi.2024.3387830.
Abstract
Image-guided interventional oncology procedures can greatly enhance the outcome of cancer treatment. As an enhancing procedure, oncology smart material delivery can increase the quality, effectiveness, and safety of cancer therapy. However, the effectiveness of such procedures depends heavily on the accuracy of smart material placement. Inaccurate placement of smart materials can lead to adverse side effects and health hazards. Image guidance can considerably improve the safety and robustness of smart material delivery. In this study, we developed a novel generative deep-learning platform that prioritizes clinical practicality and provides informative intra-operative feedback for image-guided smart material delivery. XIOSIS generates a patient-specific 3D volumetric computed tomography (CT) from three intraoperative radiographs (X-ray images) acquired by a mobile C-arm during the operation. As the first of its kind, XIOSIS (i) synthesizes the CT from small field-of-view radiographs; (ii) reconstructs the intra-operative spacer distribution; (iii) is robust; and (iv) is equipped with a novel soft-contrast cost function. To demonstrate the effectiveness of XIOSIS in providing intra-operative image guidance, we applied XIOSIS to the duodenal hydrogel spacer placement procedure. We evaluated XIOSIS performance in an image-guided virtual spacer placement and actual spacer placement in two cadaver specimens. XIOSIS showed clinically acceptable performance, reconstructing the 3D intra-operative hydrogel spacer distribution with an average structural similarity of 0.88 and Dice coefficient of 0.63, and with less than 1 cm difference in spacer location relative to the spinal cord.
3
Hu Y, Cheng M, Wei H, Liang Z. A joint learning framework for multisite CBCT-to-CT translation using a hybrid CNN-transformer synthesizer and a registration network. Front Oncol 2024; 14:1440944. PMID: 39175474. PMCID: PMC11338897. DOI: 10.3389/fonc.2024.1440944.
Abstract
Background: Cone-beam computed tomography (CBCT) is a convenient method for adaptive radiation therapy (ART), but its application is often hindered by its image quality. We aim to develop a unified deep learning model that can consistently enhance the quality of CBCT images across various anatomical sites by generating synthetic CT (sCT) images. Methods: A dataset of paired CBCT and planning CT images from 135 cancer patients, including head and neck, chest, and abdominal tumors, was collected. This dataset, with its rich anatomical diversity and range of scanning parameters, was carefully selected to ensure comprehensive model training. Because registration is imperfect, local structural misalignment within the paired dataset may lead to suboptimal model performance. To address this limitation, we propose SynREG, a supervised learning framework. SynREG integrates a hybrid CNN-transformer architecture designed for generating high-fidelity sCT images and a registration network that corrects local structural misalignment dynamically during training. An independent test set of 23 additional patients was used to evaluate image quality, and the results were compared with those of several benchmark models (pix2pix, cycleGAN and SwinIR). Furthermore, the performance of an autosegmentation application was also assessed. Results: The proposed model disentangled sCT generation from anatomical correction, leading to a more rational optimization process. As a result, the model effectively suppressed noise and artifacts in multisite applications, significantly enhancing CBCT image quality. Specifically, the mean absolute error (MAE) of SynREG was reduced to 16.81 ± 8.42 HU, whereas the structural similarity index (SSIM) increased to 94.34 ± 2.85%, representing improvements over the raw CBCT data, which had an MAE of 26.74 ± 10.11 HU and an SSIM of 89.73 ± 3.46%.
The enhanced image quality was particularly beneficial for organs with low contrast resolution, significantly increasing the accuracy of automatic segmentation in these regions. Notably, for the brainstem, the mean Dice similarity coefficient (DSC) increased from 0.61 to 0.89, and the mean distance to agreement (MDA) decreased from 3.72 mm to 0.98 mm, indicating a substantial improvement in segmentation accuracy and precision. Conclusions: SynREG can effectively alleviate residual anatomical differences between paired datasets and enhance the quality of CBCT images.
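For context, the MAE and SSIM figures reported above are standard image-quality metrics. A minimal sketch of both follows; the function names are ours, and note that published SSIM values are computed with local sliding windows (e.g. 7x7 or 11x11) rather than the single global window used here:

```python
import numpy as np

def mae_hu(ct, sct):
    """Mean absolute error (HU) between a reference CT and a synthetic CT."""
    return np.mean(np.abs(ct.astype(float) - sct.astype(float)))

def global_ssim(ct, sct, data_range=2000.0):
    """Single-window (global) SSIM; real implementations average SSIM over
    local windows. data_range is the assumed HU dynamic range."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ct.mean(), sct.mean()
    var_x, var_y = ct.var(), sct.var()
    cov = ((ct - mu_x) * (sct - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In practice one would use a library routine such as scikit-image's `structural_similarity` rather than this global approximation.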
Affiliation(s)
- Ying Hu
- School of Mathematics and Statistics, Hubei University of Education, Wuhan, Hubei, China
- Bigdata Modeling and Intelligent Computing Research Institute, Hubei University of Education, Wuhan, Hubei, China
- Mengjie Cheng
- Nutrition Department, Renmin Hospital of Wuhan University, Wuhan, China
- Hui Wei
- Department of Radiotherapy, Affiliated Hospital of Hebei Engineering University, Handan, China
- Zhiwen Liang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China
4
Chen W, Zhao W, Chen Z, Liu T, Liu L, Liu J, Yuan Y. Mask-aware transformer with structure invariant loss for CT translation. Med Image Anal 2024; 96:103205. PMID: 38788328. DOI: 10.1016/j.media.2024.103205.
Abstract
Multi-phase enhanced computed tomography (MPECT) translation from plain CT can help clinicians detect liver lesions and spare patients potential allergic reactions to contrast agents during MPECT examination. Existing CT translation methods directly learn an end-to-end mapping from plain CT to MPECT, ignoring crucial clinical domain knowledge. In clinical diagnosis, clinicians subtract the plain CT from the MPECT images to obtain a subtraction image that highlights the contrast-enhanced regions and facilitates liver disease diagnosis; we aim to exploit this domain knowledge for automatic CT translation. To this end, we propose a Mask-Aware Transformer (MAFormer) with structure invariant loss for CT translation, which presents the first effort to exploit this domain knowledge for CT translation. Specifically, the proposed MAFormer introduces a mask estimator to predict the subtraction image from the plain CT image. To integrate the subtraction image into the network, the MAFormer devises a Mask-Aware Transformer based Normalization (MATNorm) as a normalization layer to highlight the contrast-enhanced regions and capture the long-range dependencies among these regions. Moreover, to preserve the biological structure of CT slices, a structure invariant loss is designed to extract structural information and minimize the structural discrepancy between the plain and synthetic CT images, ensuring structural invariance. Extensive experiments have proven the effectiveness of the proposed method and its superiority to state-of-the-art CT translation methods. Source code is to be released.
Affiliation(s)
- Wenting Chen
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China
- Wei Zhao
- Department of Radiology, The Second Xiangya Hospital, Central South University, China; Clinical Research Center for Medical Imaging in Hunan Province, China
- Zhen Chen
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China
- Tianming Liu
- Department of Computer Science, University of Georgia, United States of America
- Li Liu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, China; Clinical Research Center for Medical Imaging in Hunan Province, China
- Yixuan Yuan
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
5
Chen X, Qiu RLJ, Peng J, Shelton JW, Chang CW, Yang X, Kesarwala AH. CBCT-based synthetic CT image generation using a diffusion model for CBCT-guided lung radiotherapy. Med Phys 2024. PMID: 39088750. DOI: 10.1002/mp.17328.
Abstract
BACKGROUND Although cone beam computed tomography (CBCT) has lower resolution compared to planning CTs (pCT), its lower dose, higher high-contrast resolution, and shorter scanning time support its widespread use in clinical applications, especially in ensuring accurate patient positioning during the image-guided radiation therapy (IGRT) process. PURPOSE While CBCT is critical to IGRT, CBCT image quality can be compromised by severe stripe and scattering artifacts. Tumor movement secondary to respiratory motion also decreases CBCT resolution. In order to improve the image quality of CBCT, we propose a Lung Diffusion Model (L-DM) framework. METHODS Our proposed algorithm is based on a conditional diffusion model trained on pCT and deformed CBCT (dCBCT) image pairs to synthesize lung CT images from dCBCT images and benefit CBCT-based radiotherapy. dCBCT images were used as the constraint for the L-DM. The image quality and Hounsfield unit (HU) values of the synthetic CTs (sCT) images generated by the proposed L-DM were compared to three selected mainstream generation models. RESULTS We verified our model in both an institutional lung cancer dataset and a selected public dataset. Our L-DM showed significant improvement in the four metrics of mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity index measure (SSIM). In our institutional dataset, our proposed L-DM decreased the MAE from 101.47 to 37.87 HU and increased the PSNR from 24.97 to 29.89 dB, the NCC from 0.81 to 0.97, and the SSIM from 0.80 to 0.93. In the public dataset, our proposed L-DM decreased the MAE from 173.65 to 58.95 HU, while increasing the PSNR, NCC, and SSIM from 13.07 to 24.05 dB, 0.68 to 0.94, and 0.41 to 0.88, respectively. CONCLUSIONS The proposed L-DM significantly improved sCT image quality compared to the pre-correction CBCT and three mainstream generative models. 
Our model can benefit CBCT-based IGRT and other potential clinical applications as it increases the HU accuracy and decreases the artifacts from input CBCT images.
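For intuition, a conditional diffusion model of this kind starts from Gaussian noise and iteratively denoises it, conditioning every step on the CBCT. A toy NumPy sketch of the reverse (sampling) loop follows, with a placeholder standing in for the trained network; the function names and the linear beta schedule are our assumptions, not details of the L-DM:

```python
import numpy as np

def ddpm_sample(denoiser, cbct, timesteps=50, seed=0):
    """Toy conditional DDPM reverse loop.

    denoiser(x_t, cbct, t) predicts the noise component at step t
    (a stand-in for the trained conditional network)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, timesteps)   # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(cbct.shape)          # x_T ~ N(0, I)
    for t in reversed(range(timesteps)):
        eps = denoiser(x, cbct, t)               # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])  # posterior mean step
        if t > 0:                                 # add noise except at t = 0
            x = x + np.sqrt(betas[t]) * rng.standard_normal(cbct.shape)
    return x
```

The output plays the role of the synthetic CT; in the real model the denoiser is a trained conditional U-Net and the schedule has many more steps.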
Affiliation(s)
- Xiaoqian Chen
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Richard L J Qiu
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Junbo Peng
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Joseph W Shelton
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology, Winship Cancer Institute, Emory University School of Medicine, Atlanta, Georgia, USA
6
Viar-Hernandez D, Molina-Maza JM, Vera-Sánchez JA, Perez-Moreno JM, Mazal A, Rodriguez-Vila B, Malpica N, Torrado-Carvajal A. Enhancing adaptive proton therapy through CBCT images: Synthetic head and neck CT generation based on 3D vision transformers. Med Phys 2024; 51:4922-4935. PMID: 38569141. DOI: 10.1002/mp.17057.
Abstract
BACKGROUND Proton therapy is a form of radiotherapy commonly used to treat various cancers. Due to its high conformality, minor variations in patient anatomy can lead to significant alterations in dose distribution, making adaptation crucial. While cone-beam computed tomography (CBCT) is a well-established technique for adaptive radiation therapy (ART), it cannot be directly used for adaptive proton therapy (APT) treatments because the stopping power ratio (SPR) cannot be estimated from CBCT images. PURPOSE To address this limitation, deep learning methods have been suggested for generating pseudo-CT (pCT) images from CBCT images. Although convolutional neural networks (CNNs) have shown consistent improvement in the pCT literature, further enhancements are still needed to make them suitable for clinical applications. METHODS The authors introduce the 3D vision transformer (ViT) block, studying its performance at various stages of the proposed architectures. Additionally, they conduct a retrospective analysis of a dataset that includes 259 image pairs from 59 patients who underwent treatment for head and neck cancer. The dataset is partitioned into 80% for training, 10% for validation, and 10% for testing purposes. RESULTS The SPR maps obtained from the pCT using the proposed method present an absolute relative error of less than 5% with respect to those computed from the planning CT, thus improving on the results of CBCT. CONCLUSIONS We introduce an enhanced ViT3D architecture for pCT image generation from CBCT images, reducing SPR error to within clinical margins for APT workflows. The new method minimizes bias compared to CT-based SPR estimation and dose calculation, signaling a promising direction for future research in this field. However, further research is needed to assess robustness and generalizability across different medical imaging applications.
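The 5% SPR criterion above can be checked voxel-wise against the planning-CT reference. A minimal sketch (function names, the epsilon guard, and the all-voxel pass condition are our assumptions):

```python
import numpy as np

def spr_abs_rel_error(spr_ref, spr_test, eps=1e-6):
    """Voxel-wise absolute relative error (%) of a test SPR map against
    the reference SPR map computed from the planning CT."""
    return 100.0 * np.abs(spr_test - spr_ref) / np.maximum(np.abs(spr_ref), eps)

def within_clinical_margin(spr_ref, spr_test, margin_pct=5.0):
    """True if every voxel's relative SPR error stays below the margin."""
    return bool(np.all(spr_abs_rel_error(spr_ref, spr_test) < margin_pct))
```

Reported figures are typically summarized (e.g. mean or percentile error over a region of interest) rather than requiring every voxel to pass, so treat the strict all-voxel check as illustrative.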
Affiliation(s)
- David Viar-Hernandez
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Alejandro Mazal
- Centro de Protonterapia Quironsalud, Servicio de física médica, Madrid, Spain
- Borja Rodriguez-Vila
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Norberto Malpica
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Angel Torrado-Carvajal
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
7
Gao Y, Xie H, Chang CW, Peng J, Pan S, Qiu RLJ, Wang T, Ghavidel B, Roper J, Zhou J, Yang X. CT-based synthetic iodine map generation using conditional denoising diffusion probabilistic model. Med Phys 2024. PMID: 38889368. DOI: 10.1002/mp.17258.
Abstract
BACKGROUND Iodine maps, derived from image processing of contrast-enhanced dual-energy computed tomography (DECT) scans, highlight the differences in tissue iodine uptake. They find multiple applications in radiology, including vascular imaging, pulmonary evaluation, kidney assessment, and cancer diagnosis. In radiation oncology, they can contribute to designing more accurate and personalized treatment plans. However, DECT scanners are not commonly available in radiation therapy centers. Additionally, the use of iodine contrast agents is not suitable for all patients, especially those allergic to iodine agents, posing further limitations to the accessibility of this technology. PURPOSE The purpose of this work is to generate synthetic iodine map images from non-contrast single-energy CT (SECT) images using a conditional denoising diffusion probabilistic model (DDPM). METHODS One hundred twenty-six head-and-neck patients' images were retrospectively investigated in this work. Each patient underwent non-contrast SECT and contrast DECT scans. Ground truth iodine maps were generated from contrast DECT scans using the commercial software syngo.via installed in the clinic. A conditional DDPM was implemented in this work to synthesize iodine maps. Three-fold cross-validation was conducted, with each iteration selecting the data from 42 patients as the test dataset and the remainder as the training dataset. Pixel-to-pixel generative adversarial network (GAN) and CycleGAN served as reference methods for evaluating the proposed DDPM method. RESULTS The accuracy of the proposed DDPM was evaluated using three quantitative metrics: mean absolute error (MAE) (1.039 ± 0.345 mg/mL), structural similarity index measure (SSIM) (0.89 ± 0.10), and peak signal-to-noise ratio (PSNR) (25.4 ± 3.5 dB). Compared to the reference methods, the proposed technique showcased superior performance across the evaluated metrics, further validated by paired two-tailed t-tests.
CONCLUSION The proposed conditional DDPM framework has demonstrated the feasibility of generating synthetic iodine map images from non-contrast SECT images. This method presents a potential clinical application: providing accurate iodine contrast maps in instances where only non-contrast SECT is accessible.
Affiliation(s)
- Yuan Gao
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Huiqiao Xie
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Beth Ghavidel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jun Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
8
Wajer R, Wajer A, Kazimierczak N, Wilamowska J, Serafin Z. The Impact of AI on Metal Artifacts in CBCT Oral Cavity Imaging. Diagnostics (Basel) 2024; 14:1280. PMID: 38928694. PMCID: PMC11203150. DOI: 10.3390/diagnostics14121280.
Abstract
OBJECTIVE This study aimed to assess the impact of artificial intelligence (AI)-driven noise reduction algorithms on metal artifacts and image quality parameters in cone-beam computed tomography (CBCT) images of the oral cavity. MATERIALS AND METHODS This retrospective study included 70 patients, 61 of whom were analyzed after excluding those with severe motion artifacts. CBCT scans, performed using a Hyperion X9 PRO 13 × 10 CBCT machine, included images with dental implants, amalgam fillings, orthodontic appliances, root canal fillings, and crowns. Images were processed with the ClariCT.AI deep learning model (DLM) for noise reduction. Objective image quality was assessed using metrics such as the differentiation between voxel values (ΔVVs), the artifact index (AIx), and the contrast-to-noise ratio (CNR). Subjective assessments were performed by two experienced readers, who rated overall image quality and artifact intensity on predefined scales. RESULTS Compared with native images, DLM reconstructions significantly reduced the AIx and increased the CNR (p < 0.001), indicating improved image clarity and artifact reduction. Subjective assessments also favored DLM images, with higher ratings for overall image quality and lower artifact intensity (p < 0.001). However, the ΔVV values were similar between the native and DLM images, indicating that while the DLM reduced noise, it maintained the overall density distribution. Orthodontic appliances produced the most pronounced artifacts, while implants generated the least. CONCLUSIONS AI-based noise reduction using ClariCT.AI significantly enhances CBCT image quality by reducing noise and metal artifacts, thereby improving diagnostic accuracy and treatment planning. Further research with larger, multicenter cohorts is recommended to validate these findings.
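The contrast-to-noise ratio used above has a simple generic form; one common definition follows as a sketch (ROI selection and the exact artifact-index formula vary between studies, and the function name is ours):

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean difference between a signal
    ROI and a background ROI, normalized by the background noise (SD)."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
```

A noise-reduction algorithm that lowers the background SD while preserving the mean HU difference raises this ratio, which is the behavior the study reports for the DLM reconstructions.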
Affiliation(s)
- Róża Wajer
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Justyna Wilamowska
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
9
Glielmo P, Fusco S, Gitto S, Zantonelli G, Albano D, Messina C, Sconfienza LM, Mauri G. Artificial intelligence in interventional radiology: state of the art. Eur Radiol Exp 2024; 8:62. PMID: 38693468. PMCID: PMC11063019. DOI: 10.1186/s41747-024-00452-2.
Abstract
Artificial intelligence (AI) has demonstrated great potential in a wide variety of applications in interventional radiology (IR). Support for decision-making and outcome prediction, new functions and improvements in fluoroscopy, ultrasound, computed tomography, and magnetic resonance imaging, specifically in the field of IR, have all been investigated. Furthermore, AI represents a significant boost for fusion imaging and simulated reality, robotics, touchless software interactions, and virtual biopsy. The procedural nature, heterogeneity, and lack of standardisation slow down the process of adoption of AI in IR. Research in AI is in its early stages, as current literature is based on pilot or proof-of-concept studies. The full range of possibilities is yet to be explored.
Relevance statement: Exploring AI's transformative potential, this article assesses its current applications and challenges in IR, offering insights into decision support and outcome prediction, imaging enhancements, robotics, and touchless interactions, shaping the future of patient care.
Key points:
- AI adoption in IR is more complex compared to diagnostic radiology.
- Current literature about AI in IR is in its early stages.
- AI has the potential to revolutionise every aspect of IR.
Affiliation(s)
- Pierluigi Glielmo
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Stefano Fusco
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giulia Zantonelli
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Via della Commenda, 10, 20122, Milan, Italy
- Carmelo Messina
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Luca Maria Sconfienza
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giovanni Mauri
- Divisione di Radiologia Interventistica, IEO, IRCCS Istituto Europeo di Oncologia, Milan, Italy
10
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
11
Peng J, Qiu RLJ, Wynne JF, Chang CW, Pan S, Wang T, Roper J, Liu T, Patel PR, Yu DS, Yang X. CBCT-based synthetic CT image generation using conditional denoising diffusion probabilistic model. Med Phys 2024; 51:1847-1859. PMID: 37646491; DOI: 10.1002/mp.16704.
Abstract
BACKGROUND Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making CBCT an ideal option for adaptive radiotherapy (ART) replanning. However, the presence of severe artifacts and inaccurate Hounsfield unit (HU) values prevents its use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE This work aims to develop a conditional diffusion model that performs image translation from the CBCT to the CT distribution to improve CBCT image quality. METHODS The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that uses a time-embedded U-Net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample into the target CT distribution, conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics on the generated synthetic CT (sCT) samples. The proposed method was also compared with four other diffusion model-based sCT generation methods. RESULTS In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the metrics were 32.56 HU, 27.65 dB, and 0.98 for sCT versus 38.99 HU, 27.00 dB, and 0.98 for CBCT. Compared to the four other diffusion models and one cycle-consistent generative adversarial network (CycleGAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS The proposed conditional DDPM can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
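The image-similarity metrics used in this and several of the following studies (MAE, PSNR, NCC) are straightforward to compute from paired image arrays. A minimal NumPy sketch, not the authors' code; the HU data range assumed for PSNR is an illustrative value:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images (e.g., sCT vs. reference CT), in HU."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=2000.0):
    """Peak signal-to-noise ratio in dB; data_range is an assumed HU span."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation (Pearson correlation) between two images."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) / (np.sqrt(np.sum(a0 ** 2)) * np.sqrt(np.sum(b0 ** 2))))
```

Lower MAE and higher PSNR/NCC on the same reference CT is the comparison pattern reported throughout these abstracts.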
Affiliation(s)
- Junbo Peng: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Richard L J Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Pretesh R Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
12
Zhuang T, Parsons D, Desai N, Gibbard G, Keilty D, Lin MH, Cai B, Nguyen D, Chiu T, Godley A, Pompos A, Jiang S. Simulation and pre-planning omitted radiotherapy (SPORT): a feasibility study for prostate cancer. Biomed Phys Eng Express 2024; 10:025019. PMID: 38241733; DOI: 10.1088/2057-1976/ad20aa.
Abstract
This study explored the feasibility of on-couch intensity-modulated radiotherapy (IMRT) planning for prostate cancer (PCa) on a cone-beam CT (CBCT)-based online adaptive RT platform without an individualized pre-treatment plan and contours. Ten patients with PCa previously treated with image-guided IMRT (60 Gy/20 fractions) were selected. In contrast to the routine online adaptive RT workflow, a novel approach was employed in which the same preplan optimized on one reference patient was adapted to generate individual on-couch/initial plans for the other nine test patients using the Ethos emulator. Simulation CTs of the test patients were used as simulated online CBCT (sCBCT) for emulation. Quality assessments were conducted on the synthetic CTs (sCT). Dosimetric comparisons were performed between on-couch plans, on-couch plans recomputed on the sCBCT, and individually optimized plans for the test patients. The median of the mean absolute difference between sCT and sCBCT was 74.7 HU (range 69.5-91.5 HU). The average CTV/PTV coverage by the prescription dose was 100.0%/94.7%, and normal tissue constraints were met for all nine test patients in the on-couch plans on sCT. Recalculating the on-couch plans on the sCBCT showed about a 0.7% reduction in PTV coverage and a 0.6% increase in the hotspot, and the dose differences for the OARs were negligible (<0.5 Gy). Hence, initial IMRT plans for new patients can be generated by adapting a reference patient's preplan with online contours, yielding plans of similar quality to the conventional approach of individually optimizing a plan on the simulation CT. Further study is needed to identify selection criteria for the patient anatomy most amenable to this workflow.
Affiliation(s)
- Tingliang Zhuang, David Parsons, Neil Desai, Grant Gibbard, Dana Keilty, Mu-Han Lin, Bin Cai, Dan Nguyen, Tsuicheng Chiu, Andrew Godley, Arnold Pompos, Steve Jiang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
13
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024; 129:133-151. PMID: 37740838; DOI: 10.1007/s11547-023-01708-4.
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategies were "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; and "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy"; only original articles published up to 01.11.2022 were considered. RESULTS A total of 402 studies were retrieved from PubMed and Embase using this search strategy. The analysis was performed on the 84 papers obtained after the complete selection process. The application of radiomics to IGRT was analyzed in 23 papers, while 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics appear to significantly impact IGRT in all phases of the RT workflow, even if the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and to provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini: UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero: Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice: Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, Policlinico Umberto I, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco: Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras: UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy; Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139 Florence, Italy
14
Bogowicz M, Lustermans D, Taasti VT, Hazelaar C, Verhaegen F, Fonseca GP, van Elmpt W. Evaluation of a cone-beam computed tomography system calibrated for accurate radiotherapy dose calculation. Phys Imaging Radiat Oncol 2024; 29:100566. PMID: 38487622; PMCID: PMC10937948; DOI: 10.1016/j.phro.2024.100566.
Abstract
Background and purpose Dose calculation on cone-beam computed tomography (CBCT) images has been less accurate than on computed tomography (CT) images due to lower image quality and discrepancies in CT numbers. With increasing interest in offline and online re-planning, dose calculation accuracy was evaluated for a novel CBCT imager integrated into a ring-gantry treatment machine. Materials and methods The new CBCT system allowed fast image acquisition (5.9 s) by using new hardware, including a large-size flat-panel detector, and incorporated image-processing algorithms with iterative reconstruction techniques, yielding CT numbers accurate enough for dose calculation. In this study, CBCT- and CT-based dose calculations were compared on three anthropomorphic phantoms after a CBCT-to-mass-density calibration was performed. Six plans were created on the CT scans covering various target locations and complexities, followed by CBCT-to-CT registration, copying of contours, and re-calculation of the plans on the CBCT scans. Dose-volume histogram metrics for target volumes and organs at risk (OARs) were evaluated, and global gamma analyses were performed. Results Target coverage differences were consistently below 1.2%, demonstrating the agreement between CT and re-calculated CBCT dose distributions. Differences in Dmean for OARs were below 0.5 Gy for all plans, except for three OARs, which were below 0.8 Gy (<1.1%). All plans had a 3%/1 mm gamma pass rate > 97%. Conclusions This study demonstrated comparable results between dose calculations performed on CBCT and CT acquisitions. The new CBCT system, with enhanced image quality and CT number accuracy, opens possibilities for offline and online re-planning.
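The global gamma analysis referenced in these dosimetric comparisons combines a dose-difference criterion with a distance-to-agreement (DTA) search. The following brute-force 2D sketch is an illustrative toy, not any vendor's implementation; it assumes reference and evaluated dose arrays on a common grid, with the criterion expressed as a fraction of the maximum reference dose and a DTA in millimetres:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing=1.0, dd=0.03, dta=1.0, cutoff=0.1):
    """Global gamma pass rate (%). dd: dose-difference tolerance as a fraction of
    the max reference dose; dta: distance-to-agreement in mm; spacing: grid
    spacing in mm. Points below cutoff * max dose are excluded, as is common
    in global analyses."""
    norm = dd * ref.max()                       # global dose-difference tolerance
    search = int(np.ceil(dta / spacing)) + 1    # half-width of the search window, voxels
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * ref.max():
                continue                        # skip the low-dose region
            total += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    r2 = ((di * spacing) ** 2 + (dj * spacing) ** 2) / dta ** 2
                    d2 = (eval_[ii, jj] - ref[i, j]) ** 2 / norm ** 2
                    best = min(best, r2 + d2)   # gamma^2 for this candidate point
            if best <= 1.0:                     # gamma <= 1 means the point passes
                passed += 1
    return 100.0 * passed / total
```

A "3%/1 mm pass rate > 97%" statement then means that with dd=0.03 and dta=1.0, more than 97% of evaluated points have gamma at or below 1.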
Affiliation(s)
- Didier Lustermans (corresponding author): Postbox 3035, 6202 NA Maastricht, The Netherlands
- Vicki Trier Taasti: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Colien Hazelaar: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Frank Verhaegen: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Gabriel Paiva Fonseca: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Wouter van Elmpt: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
15
Mori S, Hirai R, Sakata Y, Koto M, Ishikawa H. Shortening image registration time using a deep neural network for patient positional verification in radiotherapy. Phys Eng Sci Med 2023; 46:1563-1572. PMID: 37639109; DOI: 10.1007/s13246-023-01320-w.
Abstract
We sought to accelerate 2D/3D image registration computation using image synthesis with a deep neural network (DNN) that generates digitally reconstructed radiograph (DRR) images from X-ray flat-panel detector (FPD) images, and we explored the feasibility of using our DNN in a patient setup verification application. Images of the prostate and of the head and neck (H&N) regions were acquired by two oblique X-ray fluoroscopic units and the treatment planning CT. The DNN was designed to generate DRR images from FPD image data. We evaluated the quality of the synthesized DRR images against the ground-truth DRR images using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Image registration accuracy and computation time were evaluated by comparing the 2D/3D image registration algorithm applied to DRR and FPD image data versus DRR and synthesized DRR images. Mean PSNR values were 23.4 ± 3.7 dB and 24.1 ± 3.9 dB for the pelvic and H&N regions, respectively. Mean SSIM values for both cases were also similar (0.90). Image registration accuracy was degraded by a mean of 0.43 mm and 0.30°, which was clinically acceptable. Computation time was accelerated by a factor of 0.69. Our DNN successfully generated DRR images from FPD image data and improved 2D/3D image registration computation time by up to 37% on average.
Affiliation(s)
- Shinichiro Mori: Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan; Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Inage-ku, Chiba 263-8555, Japan
- Ryusuke Hirai: Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212-8582, Japan
- Yukinobu Sakata: Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212-8582, Japan
- Masashi Koto: QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan
- Hitoshi Ishikawa: QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan
16
Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023; 90:102940. PMID: 37666115; DOI: 10.1016/j.media.2023.102940.
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods allowing training with paired but misaligned data have started to emerge. However, no robust and well-performing methods applicable to a wide range of real-world datasets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and it allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult datasets.
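The deformation-equivariance idea above can be illustrated numerically: a synthesis network G is encouraged to commute with spatial deformations T, i.e. G(T(x)) ≈ T(G(x)). A minimal sketch follows (not the paper's implementation), using an integer translation as a stand-in for T and a toy pointwise mapping as a stand-in for G:

```python
import numpy as np

def translate(img, dy, dx):
    """Toy spatial deformation T: integer translation with zero padding."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h + dy, h), max(dx, 0):min(w + dx, w)] = \
        img[max(-dy, 0):min(h - dy, h), max(-dx, 0):min(w - dx, w)]
    return out

def synthesize(img):
    """Toy stand-in for the synthesis network G: a pointwise intensity mapping
    that sends zero background to zero, so it commutes with zero-padded shifts."""
    return 0.5 * img

def equivariance_loss(img, dy, dx):
    """Mean squared deviation between G(T(x)) and T(G(x)); a deformation-
    equivariant G drives this toward zero for any deformation T."""
    diff = synthesize(translate(img, dy, dx)) - translate(synthesize(img), dy, dx)
    return float(np.mean(diff ** 2))
```

In the paper's setting, T would be a learned registration deformation and G a trained synthesis network; the same commuting penalty is what decouples intensity synthesis from spatial alignment.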
Affiliation(s)
- Joel Honkamaa: Department of Computer Science, Aalto University, Finland
- Umair Khan: Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski: Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen: Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen: Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori: Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland
17
Yang B, Liu Y, Zhu J, Dai J, Men K. Deep learning framework to improve the quality of cone-beam computed tomography for radiotherapy scenarios. Med Phys 2023; 50:7641-7653. PMID: 37345371; DOI: 10.1002/mp.16562.
Abstract
BACKGROUND The application of cone-beam computed tomography (CBCT) in image-guided radiotherapy and adaptive radiotherapy remains limited due to its poor image quality. PURPOSE In this study, we aim to develop a deep learning framework to generate high-quality CBCT images for therapeutic applications. METHODS Synthetic CT (sCT) generation from CBCT was proposed using a transformer-based network with a hybrid loss function. The network was trained and validated on data from 176 patients to produce a general model that can be applied broadly to enhance CBCT images. After the first treatment, each patient can receive paired CBCT/planning CT (pCT) scans, and these data were used to fine-tune the general model for further improvement. For subsequent treatments, a patient-specific, personalized model was then available. In total, 34 patients were examined for general model testing, and another six patients who underwent a rescanned pCT were used for personalized model training and testing. RESULTS The general model decreased the mean absolute error (MAE) from 135 HU to 59 HU compared with the CBCT. The hybrid loss function demonstrated superior performance in CT number correction and noise/artifact reduction. The proposed transformer-based network also showed superior CT number correction compared with a classical convolutional neural network. The personalized model improved on the general model in some details, and the MAE was reduced from 59 HU (general model) to 57 HU (p < 0.05, Wilcoxon signed-rank test). CONCLUSION We established a transformer-based deep learning framework for clinical needs. The model demonstrated potential for continuous improvement through the suggested personalized training strategy, which is compatible with the clinical workflow.
Affiliation(s)
- Bining Yang, Yuxiang Liu, Ji Zhu, Jianrong Dai, Kuo Men: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
18
Liu X, Yang R, Xiong T, Yang X, Li W, Song L, Zhu J, Wang M, Cai J, Geng L. CBCT-to-CT synthesis for cervical cancer adaptive radiotherapy via U-Net-based model hierarchically trained with hybrid dataset. Cancers (Basel) 2023; 15:5479. PMID: 38001738; PMCID: PMC10670900; DOI: 10.3390/cancers15225479.
Abstract
PURPOSE To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values. MATERIALS AND METHODS A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were utilized to assess the quality of the synthetic CT images generated by our model. RESULTS The MAE between synthetic CT images generated by our model and planning CT was 10.93 HU, compared to 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by a convolutional neural network with residual blocks, our model had superior performance in both qualitative and quantitative aspects. CONCLUSIONS Our model could synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved tissue edges well, which is important for downstream tasks in adaptive radiotherapy.
Affiliation(s)
- Xi Liu: School of Physics, Beihang University, Beijing 102206, China; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Ruijie Yang: Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Tianyu Xiong: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Xueying Yang: School of Physics, Beihang University, Beijing 102206, China; Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Wen Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Liming Song: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Jiarui Zhu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Mingqing Wang: Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Lisheng Geng: School of Physics, Beihang University, Beijing 102206, China; Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing 102206, China; Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing 100191, China
19
Pang B, Si H, Liu M, Fu W, Zeng Y, Liu H, Cao T, Chang Y, Quan H, Yang Z. Comparison and evaluation of different deep learning models of synthetic CT generation from CBCT for nasopharynx cancer adaptive proton therapy. Med Phys 2023; 50:6920-6930. PMID: 37800874; DOI: 10.1002/mp.16777.
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) scanning is used for patient setup in image-guided radiotherapy. However, its inaccurate CT numbers limit its applicability in dose calculation and treatment planning. PURPOSE This study compares four deep learning methods for generating synthetic CT (sCT) to determine which method is more appropriate and offers potential for further clinical exploration in adaptive proton therapy for nasopharynx cancer. METHODS CBCTs and deformed planning CTs (dCT) from 75 patients (60/5/10 for training, validation, and testing) were used to compare a cycle-consistent generative adversarial network (CycleGAN), U-Net, U-Net+CycleGAN, and a conditional generative adversarial network (cGAN) for sCT generation. The sCT images generated by each method were evaluated against dCT images using mean absolute error (MAE), structural similarity (SSIM), peak signal-to-noise ratio (PSNR), spatial non-uniformity (SNU), and radial averaging in the frequency domain. In addition, dosimetric accuracy was assessed through gamma analysis, differences in water-equivalent thickness (WET), and dose-volume histogram metrics. RESULTS The cGAN model demonstrated the best performance of the four models across the various indicators. In terms of image quality under the global condition, the average MAE was reduced to 16.39 HU, SSIM increased to 95.24%, and PSNR increased to 28.98. Regarding dosimetric accuracy, the gamma passing rate (2%/2 mm) reached 99.02%, and the WET difference was only 1.28 mm. The D95 of CTV coverage and the Dmax of the spinal cord and brainstem showed no significant differences between dCT and the sCT generated by the cGAN model. CONCLUSIONS The cGAN model appears to be the more suitable approach for generating sCT from CBCT, and the resulting sCT has the potential for application in adaptive proton therapy.
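The CycleGAN variants compared here are trained with a cycle-consistency term alongside the adversarial losses: mapping CBCT to sCT and back should recover the original image. A toy numerical sketch of that term follows; the linear generator functions are illustrative stand-ins, not the paper's networks:

```python
import numpy as np

def g_cbct_to_ct(x):
    """Toy stand-in for the CBCT -> sCT generator (a linear HU remapping)."""
    return 1.25 * x - 40.0

def g_ct_to_cbct(y):
    """Toy stand-in for the reverse CT -> CBCT generator (here, the exact inverse)."""
    return (y + 40.0) / 1.25

def cycle_consistency_loss(x, y):
    """L1 cycle loss: x -> sCT -> reconstructed CBCT should return to x,
    and y -> CBCT -> reconstructed CT should return to y."""
    forward = np.mean(np.abs(g_ct_to_cbct(g_cbct_to_ct(x)) - x))
    backward = np.mean(np.abs(g_cbct_to_ct(g_ct_to_cbct(y)) - y))
    return float(forward + backward)
```

With real networks the two generators are only approximate inverses, so this loss is minimized jointly with the adversarial objectives rather than being exactly zero; a cGAN instead conditions a single generator on the CBCT and needs no cycle term.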
Collapse
Affiliation(s)
- Bo Pang: Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hang Si: Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Muyu Liu: Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Wensheng Fu: Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China; Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yiling Zeng: Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Hongyuan Liu: Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China; Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ting Cao: Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China; Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yu Chang: Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China; Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Quan: Department of Medical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
- Zhiyong Yang: Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Key Laboratory of Precision Radiation Oncology, Wuhan, China; Institute of Radiation Oncology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
20
Tsai P, Tseng YL, Shen B, Ackerman C, Zhai HA, Yu F, Simone CB, Choi JI, Lee NY, Kabarriti R, Lazarev S, Johnson CL, Liu J, Chen CC, Lin H. The Applications and Pitfalls of Cone-Beam Computed Tomography-Based Synthetic Computed Tomography for Adaptive Evaluation in Pencil-Beam Scanning Proton Therapy. Cancers (Basel) 2023; 15:5101. [PMID: 37894469 PMCID: PMC10605451 DOI: 10.3390/cancers15205101] [Received: 09/30/2023] [Revised: 10/18/2023] [Accepted: 10/20/2023] [Indexed: 10/29/2023]
Abstract
PURPOSE This study evaluates the efficacy of cone-beam computed tomography (CBCT)-based synthetic CT (sCT) as a potential alternative to verification CT (vCT) for enhanced treatment monitoring and early adaptation in proton therapy. METHODS Seven common treatment sites were studied. Two sets of sCT were generated per case: direct-deformed (DD) sCT and image-correction (IC) sCT. The image quality and dosimetric impact of the sCT were compared against same-day vCT. RESULTS The sCT agreed with vCT in regions of homogeneous tissue such as the brain and breast; however, notable discrepancies were observed in the thorax and abdomen. Outliers occurred in DD sCT when the anatomy changed and in IC sCT in low-density regions. Target coverage varied by less than 5% in most DD and IC sCT cases when compared to vCT. The Dmax of serial organs-at-risk (OARs) in sCT plans showed greater deviation from vCT than small-volume dose metrics (D0.1cc). Parallel-OAR volumetric and mean doses remained consistent, with average deviations below 1.5%. CONCLUSION The use of sCT enables precise treatment monitoring and prompt early adaptation in proton therapy. Quality assurance of sCT is mandatory in the early stages of clinical implementation.
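The abstract contrasts the single-voxel Dmax with the more robust small-volume metric D0.1cc. A minimal sketch of how such DVH point metrics can be read off a dose grid, assuming a uniform voxel volume; the function and its arguments are illustrative, not the authors' implementation:

```python
import numpy as np

def dvh_metrics(dose, mask, voxel_volume_cc):
    """Return Dmax, Dmean, and D0.1cc for the doses inside a structure mask.

    D0.1cc is the minimum dose received by the hottest 0.1 cc of the
    structure, a 'near-max' metric less sensitive to single hot voxels
    than Dmax.
    """
    d = np.sort(dose[mask])[::-1]                       # structure doses, hottest first
    n_hot = max(1, int(round(0.1 / voxel_volume_cc)))   # voxels making up 0.1 cc
    return {
        "Dmax": float(d[0]),
        "Dmean": float(d.mean()),
        "D0.1cc": float(d[:n_hot].min()),
    }
```

Clinical systems interpolate the cumulative DVH rather than counting whole voxels, but the ordering logic is the same.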
Affiliation(s)
- Pingfang Tsai: New York Proton Center, New York, NY 10035, USA
- Yu-Lun Tseng: Proton Center, Taipei Medical University, Taipei 11031, Taiwan; Department of Radiation Oncology, Taipei Medical University, Taipei 11031, Taiwan
- Brian Shen: New York Proton Center, New York, NY 10035, USA
- Huifang A. Zhai: New York Proton Center, New York, NY 10035, USA
- Francis Yu: New York Proton Center, New York, NY 10035, USA
- Charles B. Simone: New York Proton Center, New York, NY 10035, USA
- J. Isabelle Choi: New York Proton Center, New York, NY 10035, USA
- Nancy Y. Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Rafi Kabarriti: Department of Radiation Oncology, Montefiore Medical Center, Bronx, NY 10467, USA
- Stanislav Lazarev: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Casey L. Johnson: New York Proton Center, New York, NY 10035, USA
- Jiayi Liu: New York Proton Center, New York, NY 10035, USA
- Chin-Cheng Chen: New York Proton Center, New York, NY 10035, USA
- Haibo Lin: New York Proton Center, New York, NY 10035, USA
21
Liu Y, Chen A, Li Y, Lai H, Huang S, Yang X. CT synthesis from CBCT using a sequence-aware contrastive generative network. Comput Med Imaging Graph 2023; 109:102300. [PMID: 37776676 DOI: 10.1016/j.compmedimag.2023.102300] [Received: 02/10/2023] [Revised: 08/21/2023] [Accepted: 08/21/2023] [Indexed: 10/02/2023]
Abstract
Computed tomography (CT) synthesis from cone-beam computed tomography (CBCT) is a key step in adaptive radiotherapy: the synthetic CT is used to recalculate the dose so the radiotherapy plan can be corrected and adjusted in a timely manner. The cycle-consistent adversarial network (CycleGAN) is commonly used in CT synthesis tasks but has two shortcomings: (a) the cycle consistency loss presumes that the conversion between domains is bijective, yet the CBCT-to-CT conversion does not fully satisfy a bijective relationship, and (b) it does not exploit the complementary information among the multiple sets of CBCTs acquired for the same patient. To address these problems, we propose a novel framework named the sequence-aware contrastive generative network (SCGN), which introduces an attention sequence fusion module to improve CBCT quality. In addition, it not only applies contrastive learning to generative adversarial networks (GANs) so that feature extraction attends more closely to the anatomical structure of CBCT, but also uses a new generator to improve the accuracy of anatomical details. Experimental results on our datasets show that our method significantly outperforms existing unsupervised CT synthesis methods.
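The cycle-consistency premise the paper criticizes can be stated in a few lines. A hedged NumPy sketch of the standard L1 cycle loss, with the two generators passed in as plain callables (placeholders for the trained networks, not the paper's model):

```python
import numpy as np

def cycle_consistency_loss(g_ct, g_cbct, cbct_batch, ct_batch):
    """L1 cycle loss as used by CycleGAN-style models.

    g_ct maps CBCT -> synthetic CT; g_cbct maps CT -> synthetic CBCT.
    The loss is small only when each mapping undoes the other, i.e. it
    implicitly assumes the domain conversion is (approximately) bijective,
    which is the premise the paper argues CBCT<->CT does not fully satisfy.
    """
    forward = np.mean(np.abs(g_cbct(g_ct(cbct_batch)) - cbct_batch))
    backward = np.mean(np.abs(g_ct(g_cbct(ct_batch)) - ct_batch))
    return float(forward + backward)
```

With mutually inverse generators the loss is exactly zero; any non-invertible component of the mapping (such as CBCT artifacts with no CT counterpart) shows up as an irreducible residual.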
Affiliation(s)
- Yanxia Liu: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Anni Chen: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Yuhong Li: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Haoyu Lai: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Sijuan Huang: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
- Xin Yang: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
22
Park JC, Song B, Liang X, Lu B, Tan J, Parisi A, Denbeigh J, Yaddanpudi S, Choi B, Kim JS, Furutani KM, Beltran CJ. A high-resolution cone beam computed tomography (HRCBCT) reconstruction framework for CBCT-guided online adaptive therapy. Med Phys 2023; 50:6490-6501. [PMID: 37690458 DOI: 10.1002/mp.16734] [Received: 05/04/2023] [Revised: 08/18/2023] [Accepted: 08/19/2023] [Indexed: 09/12/2023]
Abstract
BACKGROUND Kilovoltage cone-beam computed tomography (CBCT) is a prevalent modality for adaptive radiotherapy (ART) due to its compatibility with linear accelerators and its ability to provide online imaging. However, the widely used Feldkamp-Davis-Kress (FDK) reconstruction algorithm has several limitations, including potential streak aliasing artifacts and elevated noise levels. Iterative reconstruction (IR) techniques, such as total variation (TV) minimization, dictionary-based methods, and prior-information-based methods, have emerged as viable solutions to address these limitations and improve the quality and applicability of CBCT in ART. PURPOSE One of the primary challenges in IR-based techniques is finding the right balance between minimizing image noise and preserving image resolution. To overcome this challenge, we developed a new reconstruction technique called high-resolution CBCT (HRCBCT) that improves image resolution while reducing noise levels. METHODS The HRCBCT reconstruction technique builds upon the conventional IR approach, incorporating three components: a data fidelity term, a resolution preservation term, and a regularization term. The data fidelity term ensures alignment between reconstructed values and measured projection data, the resolution preservation term exploits the high resolution of the initial FDK reconstruction, and the regularization term mitigates noise during the IR process. To enhance convergence and resolution at each iterative stage, we applied iterative filtered backprojection (IFBP) to the data fidelity minimization. RESULTS We evaluated the performance of the proposed HRCBCT algorithm using data from two physical phantoms and one head and neck patient. HRCBCT outperformed four alternative algorithms: FDK, IFBP, compressed-sensing-based iterative reconstruction (CSIR), and prior image constrained compressed sensing (PICCS), in terms of resolution and noise reduction for all data sets. Line profiles across three resolution line pairs revealed that HRCBCT delivered the most distinguishable line pairs of all the algorithms. Similarly, modulation transfer function (MTF) measurements, obtained from the tungsten wire insert of the CatPhan 600 physical phantom, showed a significant improvement of HRCBCT over the traditional algorithms. CONCLUSION The proposed HRCBCT algorithm offers a promising solution for enhancing CBCT image quality in adaptive radiotherapy settings. By addressing the challenges inherent in traditional IR methods, it delivers high-definition CBCT images with improved resolution and reduced noise at each iterative step. Implementing the HRCBCT algorithm could significantly improve the accuracy of treatment planning during online adaptive therapy.
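The three-term objective described (data fidelity, resolution preservation toward the initial FDK image, regularization) can be sketched as gradient descent on a small linear toy problem. A quadratic smoothness term stands in for true TV so the gradient stays simple, and all weights, operators, and step sizes here are hypothetical illustrations, not the paper's values:

```python
import numpy as np

def three_term_ir_update(x, A, b, x_fdk, lam_res=0.1, lam_reg=0.05,
                         step=0.01, iters=200):
    """Toy gradient descent on an objective of the kind described:

        ||A x - b||^2              (data fidelity to projections b)
      + lam_res * ||x - x_fdk||^2  (stay close to the high-resolution FDK image)
      + lam_reg * ||D x||^2        (smoothness regularizer standing in for TV)
    """
    n = len(x)
    D = np.eye(n) - np.eye(n, k=1)   # forward finite-difference operator
    for _ in range(iters):
        grad = (2 * A.T @ (A @ x - b)
                + 2 * lam_res * (x - x_fdk)
                + 2 * lam_reg * D.T @ (D @ x))
        x = x - step * grad
    return x
```

The resolution-preservation weight lam_res pulls the solution toward the sharp FDK estimate while lam_reg suppresses noise; the paper's contribution is, in essence, how to balance these pulls while using IFBP inside the data-fidelity step.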
Affiliation(s)
- Justin C Park: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Bongyong Song: Department of Radiation Oncology, University of California San Diego, San Diego, California, USA
- Xiaoying Liang: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Bo Lu: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Jun Tan: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Alessio Parisi: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Janet Denbeigh: Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Byongsu Choi: Department of Radiation Oncology, Mayo Clinic, Florida, USA; Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
23
Yoganathan S, Aouadi S, Ahmed S, Paloor S, Torfeh T, Al-Hammadi N, Hammoud R. Generating synthetic images from cone beam computed tomography using self-attention residual UNet for head and neck radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100512. [PMID: 38111501 PMCID: PMC10726231 DOI: 10.1016/j.phro.2023.100512] [Received: 07/11/2023] [Revised: 11/09/2023] [Accepted: 11/09/2023] [Indexed: 12/20/2023]
Abstract
Background and purpose: Accurate CT numbers in cone-beam CT (CBCT) are crucial for precise dose calculation in adaptive radiotherapy (ART). This study aimed to generate synthetic CT (sCT) from CBCT using deep learning (DL) models in head and neck (HN) radiotherapy. Materials and methods: A novel DL model, the self-attention residual UNet (ResUNet), was developed for accurate sCT generation. ResUNet incorporates a self-attention mechanism in its long skip connections to enhance information transfer between the encoder and decoder. Data from 93 HN patients, each with planning CT (pCT) and first-day CBCT images, were used. Model performance was evaluated across two DL approaches (non-adversarial and adversarial training) and two model types (2D axial only vs. 2.5D axial, sagittal, and coronal). ResUNet was compared with the traditional UNet through image quality assessment (mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM)) and dose calculation accuracy (DVH deviation and gamma evaluation at 1%/1 mm). Results: Image similarity results for the 2.5D-ResUNet vs. 2.5D-UNet models were MAE: 46±7 HU vs. 51±9 HU, PSNR: 66.6±2.0 dB vs. 65.8±1.8 dB, and SSIM: 0.81±0.04 vs. 0.79±0.05. There were no significant differences in dose calculation accuracy between the DL models: both demonstrated DVH deviations below 0.5% and gamma pass rates (1%/1 mm) exceeding 97%. Conclusions: ResUNet enhanced the CT-number accuracy and image quality of sCT and outperformed UNet in sCT generation from CBCT. This method holds promise for generating precise sCT for HN ART.
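Gamma evaluation at criteria such as 1%/1 mm or 2%/2 mm appears throughout these abstracts. A minimal 1D sketch of the global gamma definition is below; real evaluations work on finely interpolated 2D/3D dose grids, so this is a sketch of the definition only, with hypothetical argument names:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dose_tol, dist_tol):
    """Global 1D gamma analysis (e.g. dose_tol=0.01 for 1%, dist_tol=1.0 mm).

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose difference / dose criterion)^2 + (distance / distance criterion)^2);
    a point passes when gamma <= 1.
    """
    d_crit = dose_tol * dose_ref.max()     # global dose criterion
    passed = 0
    for p, d in zip(positions, dose_ref):
        dd = (dose_eval - d) / d_crit
        dx = (positions - p) / dist_tol
        gamma = np.sqrt(dd ** 2 + dx ** 2).min()
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / len(dose_ref)
```

The pass rate is the percentage of reference points with gamma at or below 1; tightening either tolerance (as in 1%/1 mm vs. 2%/2 mm) makes the test stricter.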
Affiliation(s)
- S.A. Yoganathan: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Souha Aouadi: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Sharib Ahmed: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Satheesh Paloor: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Tarraf Torfeh: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Noora Al-Hammadi: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Rabih Hammoud: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
24
Li Z, Zhang Q, Li H, Kong L, Wang H, Liang B, Chen M, Qin X, Yin Y, Li Z. Using RegGAN to generate synthetic CT images from CBCT images acquired with different linear accelerators. BMC Cancer 2023; 23:828. [PMID: 37670252 PMCID: PMC10478281 DOI: 10.1186/s12885-023-11274-7] [Received: 06/03/2023] [Accepted: 08/08/2023] [Indexed: 09/07/2023]
Abstract
BACKGROUND The goal was to investigate the feasibility of the registration generative adversarial network (RegGAN) model for image conversion in head-and-neck adaptive radiation therapy, and its stability across different cone beam computed tomography (CBCT) systems. METHODS A total of 100 CBCT and CT image sets from patients diagnosed with head and neck tumors were used for training, while testing involved 40 additional patients imaged on four different linear accelerators. The RegGAN model was trained and tested to evaluate its performance. The quality of the generated synthetic CT (sCT) images was compared to that of planning CT (pCT) images using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Moreover, the radiation therapy plan was applied uniformly to both the sCT and pCT images to analyze the planning target volume (PTV) dose statistics and to calculate the dose difference rate, reinforcing the assessment of the model's accuracy. RESULTS The generated sCT images had good image quality, and no significant differences were observed among the different CBCT modes. Conversion performance was best for the Synergy system: the MAE decreased from 231.3 ± 55.48 to 45.63 ± 10.78, the PSNR increased from 19.40 ± 1.46 to 26.75 ± 1.32, and the SSIM increased from 0.82 ± 0.02 to 0.85 ± 0.04. The quality improvement achieved by RegGAN-based sCT synthesis was evident, with no significant synthesis differences among accelerators. CONCLUSION The sCT images generated by the RegGAN model had high image quality, and the model exhibited strong generalization across accelerators, enabling its outputs to be used as reference images for head-and-neck adaptive radiation therapy.
Affiliation(s)
- Zhenkai Li: Chengdu University of Technology, Chengdu, China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Haodong Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Lingke Kong: Manteia Technologies Co., Ltd., Xiamen, China
- Huadong Wang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Benzhe Liang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Mingming Chen: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xiaohang Qin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
25
Shimomura T, Fujiwara D, Inoue Y, Takeya A, Ohta T, Nozawa Y, Imae T, Nawa K, Nakagawa K, Haga A. Virtual cone-beam computed tomography simulator with human phantom library and its application to the elemental material decomposition. Phys Med 2023; 113:102648. [PMID: 37672845 DOI: 10.1016/j.ejmp.2023.102648] [Received: 04/18/2023] [Revised: 06/19/2023] [Accepted: 07/29/2023] [Indexed: 09/08/2023]
Abstract
PURPOSE The purpose of this study is to develop a virtual CBCT simulator with a head and neck (HN) human phantom library and to demonstrate the feasibility of elemental material decomposition (EMD) for quantitative CBCT imaging using this virtual simulator. METHODS A library of 36 HN human phantoms was developed by extending the ICRP 110 adult phantoms based on human age, height, and weight statistics. To create the CBCT database for the library, a virtual CBCT simulator was used that models the direct and scattered X-rays on a flat-panel detector using ray tracing and deep learning (DL) models. Gaussian-distributed noise, characterized on a real CBCT system, was also added to the flat-panel detector signal. The usefulness of the virtual CBCT system was demonstrated by applying the developed DL-based EMD model to cases involving a virtual phantom and a real patient. RESULTS The virtual simulator could generate varied virtual CBCT images from the human phantom library, and EMD prediction was performed successfully using the CBCT database prepared from the proposed virtual system, even for a real patient. CBCT image degradation owing to scattered X-rays and statistical noise affected the prediction accuracy, although these effects were minimal. Furthermore, the elemental distribution could also be predicted from a real CBCT image. CONCLUSIONS This study demonstrated the potential of using computer vision for medical data preparation and analysis, which could have important implications for improving patient outcomes, especially in adaptive radiation therapy.
Affiliation(s)
- Taisei Shimomura: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan; Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Daiyu Fujiwara: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Yuki Inoue: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Atsushi Takeya: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Takeshi Ohta: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Yuki Nozawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Toshikazu Imae: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Kanabu Nawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Keiichi Nakagawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Akihiro Haga: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
26
Chang CW, Nilsson R, Andersson S, Bohannon D, Patel SA, Patel PR, Liu T, Yang X, Zhou J. An optimized framework for cone-beam computed tomography-based online evaluation for proton therapy. Med Phys 2023; 50:5375-5386. [PMID: 37450315 DOI: 10.1002/mp.16625] [Received: 03/09/2023] [Revised: 06/01/2023] [Accepted: 06/21/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND Clinical evidence has demonstrated that proton therapy can achieve tumor control probabilities comparable to conventional photon therapy, with the added benefit of sparing healthy tissues. However, proton therapy is sensitive to inter-fractional anatomy changes. Online pre-fraction evaluation can effectively verify the proton dose before delivery, but guidelines for implementing this workflow are lacking. PURPOSE The purpose of this study is to develop a cone-beam CT (CBCT)-based online evaluation framework for proton therapy that enables knowledge transparency and evaluates the efficiency and accuracy of each essential component. METHODS Twenty-three patients with various lesion sites were included in a retrospective study implementing the proposed CBCT evaluation framework for the clinic. The framework was implemented on the RayStation 11B Research platform. Two synthetic CT (sCT) methods, corrected CBCT (cCBCT) and virtual CT (vCT), were used, with ground-truth images acquired from same-day deformed quality assurance CT (dQACT) for comparison. The evaluation metrics for the framework include time efficiency, dose-difference distributions (gamma passing rates), and water equivalent thickness (WET) distributions. RESULTS The mean online CBCT evaluation times were 1.6 ± 0.3 min and 1.9 ± 0.4 min using cCBCT and vCT, respectively. Dose calculation and deformable image registration dominated the evaluation time, accounting for 33% and 30% of the total, respectively; sCT generation took another 19%. Gamma passing rates were greater than 91% and 97% using 1%/1 mm and 2%/2 mm criteria, respectively. When the appropriate sCT was chosen, the target mean WET difference from the reference was less than 0.5 mm. The choice of sCT method determined the uncertainty of the framework, with cCBCT superior for head-and-neck evaluation and vCT better for lung evaluation. CONCLUSIONS An online CBCT evaluation framework was proposed to identify the optimal sCT algorithm with respect to efficiency and dosimetric accuracy. The framework is extendable to advanced imaging methods and has the potential to support online adaptive radiotherapy to enhance patient benefits. It could be implemented for clinical use in the future.
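Water equivalent thickness, used above as a proton-specific accuracy metric, is the line integral of relative stopping power (RSP) along the beam path. A minimal sketch is below; the HU-to-RSP curve is a toy piecewise-linear illustration, not a clinical calibration, which is precisely where CBCT HU errors would propagate into proton range errors:

```python
import numpy as np

def water_equivalent_thickness(rsp_values, step_mm):
    """WET along a ray: sum of relative stopping power samples times spacing.

    rsp_values are RSP samples along the beam path (water = 1.0) and
    step_mm is the sampling interval in millimetres.
    """
    return float(np.sum(rsp_values) * step_mm)

def hu_to_rsp(hu):
    """Toy piecewise-linear HU -> RSP mapping (illustrative numbers only):
    air (-1000 HU) -> 0.0, water (0 HU) -> 1.0, dense bone (1000 HU) -> 1.5."""
    hu = np.asarray(hu, dtype=float)
    return np.where(hu < 0, 1.0 + hu / 1000.0, 1.0 + hu / 2000.0)
```

A systematic HU bias in the sCT shifts every RSP sample, so the WET difference reported per target is a direct proxy for the proton range error the bias would cause.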
Affiliation(s)
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Duncan Bohannon: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sagar A Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh R Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Mount Sinai Medical Center, New York, New York, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jun Zhou: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
27
Tian D, Sun G, Zheng H, Yu S, Jiang J. CT-CBCT deformable registration using weakly-supervised artifact-suppression transfer learning network. Phys Med Biol 2023; 68:165011. [PMID: 37433303 DOI: 10.1088/1361-6560/ace675] [Received: 04/18/2023] [Accepted: 07/11/2023] [Indexed: 07/13/2023]
Abstract
Objective. Computed tomography-cone-beam computed tomography (CT-CBCT) deformable registration has great potential in adaptive radiotherapy, playing an important role in tumor tracking, secondary planning, accurate irradiation, and the protection of organs at risk. Neural networks have been improving CT-CBCT deformable registration, and almost all registration algorithms based on neural networks rely on the gray values of both CT and CBCT. The gray value is a key factor in the loss function, parameter training, and final efficacy of the registration. Unfortunately, scattering artifacts in CBCT affect the gray values of different pixels inconsistently; direct registration of the original CT-CBCT therefore introduces an artifact superposition loss. Approach. A histogram analysis of the gray values was performed. Based on the gray-value distribution characteristics of different regions in CT and CBCT, the degree of artifact superposition in the region of disinterest was found to be much higher than that in the region of interest, and the former was the main source of the artifact superposition loss. Consequently, a new weakly supervised two-stage transfer-learning network based on artifact suppression was proposed: the first stage is a pre-training network designed to suppress artifacts in the region of disinterest, and the second stage is a convolutional neural network that registers the suppressed CBCT to the CT. Main results. In a comparative test of thoracic CT-CBCT deformable registration, on data collected from the Elekta XVI system, rationality and accuracy after artifact suppression were significantly improved compared with algorithms without artifact suppression. Significance. This study proposed and verified a new deformable registration method with multi-stage neural networks, which effectively suppresses artifacts and further improves registration by incorporating a pre-training technique and an attention mechanism.
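The region-wise histogram analysis described (comparing gray-value distributions of a region in CT and in CBCT to quantify artifact superposition) can be sketched with NumPy; the L1 distance between normalized histograms used here is one reasonable choice of discrepancy measure, not necessarily the one the authors used, and the mask, bin count, and value range are illustrative:

```python
import numpy as np

def region_histogram_shift(ct, cbct, mask, bins=64, value_range=(-1000, 2000)):
    """Gray-value histogram discrepancy of one region between CT and CBCT.

    Returns the L1 distance between the two normalized histograms; a larger
    value suggests heavier artifact superposition in that region, in the
    spirit of the region-of-interest vs. region-of-disinterest comparison.
    """
    h_ct, _ = np.histogram(ct[mask], bins=bins, range=value_range, density=True)
    h_cbct, _ = np.histogram(cbct[mask], bins=bins, range=value_range, density=True)
    bin_width = (value_range[1] - value_range[0]) / bins
    return float(np.abs(h_ct - h_cbct).sum() * bin_width)
```

Running this separately on a region-of-interest mask and a region-of-disinterest mask would reproduce the kind of per-region comparison the paper bases its two-stage design on.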
Affiliation(s)
- Dingshu Tian: University of Science and Technology of China, Hefei 230026, People's Republic of China; Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
- Guangyao Sun: SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China; International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Huaqing Zheng: International Academy of Neutron Science, Qingdao 266199, People's Republic of China; Super Accuracy Science and Technology Co., Ltd, Nanjing 210044, People's Republic of China
- Shengpeng Yu: SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China; International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Jieqiong Jiang: Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
28
Aouadi S, Yoganathan SA, Torfeh T, Paloor S, Caparrotti P, Hammoud R, Al-Hammadi N. Generation of synthetic CT from CBCT using deep learning approaches for head and neck cancer patients. Biomed Phys Eng Express 2023; 9:055020. [PMID: 37489854 DOI: 10.1088/2057-1976/acea27] [Received: 04/16/2023] [Accepted: 07/25/2023] [Indexed: 07/26/2023]
Abstract
Purpose. To create a synthetic CT (sCT) from daily CBCT using either a deep residual U-Net (DRUnet) or a conditional generative adversarial network (cGAN) for adaptive radiotherapy planning (ART). Methods. First-fraction CBCT and planning CT (pCT) were collected from 93 head and neck patients who underwent external beam radiotherapy. The dataset was divided into training, validation, and test sets of 58, 10, and 25 patients, respectively. Three methods were used to generate sCT: (1) a nonlocal-means patch-based method modified to include multiscale patches, defining the multiscale patch-based method (MPBM); (2) an encoder-decoder 2D Unet with imbricated deep residual units; (3) DRUnet integrated as the generator of a cGAN, with a convolutional PatchGAN classifier as the discriminator. The accuracy of sCT was evaluated geometrically using the mean absolute error (MAE). Clinical volumetric modulated arc therapy (VMAT) plans were copied from pCT to the registered CBCT and sCT, and dosimetric analysis was performed by comparing dose-volume histogram (DVH) parameters of planning target volumes (PTVs) and organs at risk (OARs). Furthermore, 3D gamma analysis (2%/2 mm, global) between the dose on the sCT or CBCT and that on the pCT was performed. Results. The average MAE between pCT and CBCT was 180.82 ± 27.37 HU. Overall, all approaches significantly reduced the uncertainties in CBCT. Deep learning approaches outperformed the patch-based method, with MAE = 67.88 ± 8.39 HU (DRUnet) and MAE = 72.52 ± 8.43 HU (cGAN) compared to MAE = 90.69 ± 14.3 HU (MPBM). DVH metric deviations were below 0.55% for PTVs and 1.17% for OARs using DRUnet. The average gamma pass rate was 99.45 ± 1.86% for sCT generated using DRUnet. Conclusion. DL approaches outperformed MPBM. In particular, DRUnet can be used to generate sCT with accurate intensities and a realistic description of patient anatomy, which could be beneficial for CBCT-based ART.
Affiliation(s)
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- S A Yoganathan
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Satheesh Paloor
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Palmira Caparrotti
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, PO Box 3050 Doha, Qatar
29
Zhang X, Jiang Y, Luo C, Li D, Niu T, Yu G. Image-based scatter correction for cone-beam CT using flip swin transformer U-shape network. Med Phys 2023; 50:5002-5019. [PMID: 36734321 DOI: 10.1002/mp.16277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 12/23/2022] [Accepted: 01/23/2023] [Indexed: 02/04/2023] Open
Abstract
BACKGROUND Cone beam computed tomography (CBCT) plays an increasingly important role in image-guided radiation therapy. However, the image quality of CBCT is severely degraded by excessive scatter contamination, especially in the abdominal region, hindering its further application in radiation therapy. PURPOSE To restore low-quality CBCT images contaminated by scatter signals, a scatter correction algorithm combining the advantages of convolutional neural networks (CNNs) and the Swin Transformer is proposed. METHODS In this paper, a scatter correction model for CBCT images, the Flip Swin Transformer U-shape network (FSTUNet), is proposed. In this model, the advantages of CNNs in texture detail and of the Swin Transformer in global correlation are used to accurately extract shallow and deep features, respectively. Instead of using the original Swin Transformer tandem structure, we build the Flip Swin Transformer Block to achieve more powerful inter-window association extraction. The validity and clinical relevance of the method are demonstrated through extensive experiments on a Monte Carlo (MC) simulation dataset and on a frequency-split dataset generated by a validated method. RESULTS Experimental results on the MC simulated dataset show that the root mean square error of images corrected by the method is reduced from over 100 HU to about 7 HU. Both the structural similarity index measure (SSIM) and the universal quality index (UQI) are close to 1. Experimental results on the frequency-split dataset demonstrate that the method not only corrects shading artifacts but also exhibits a high degree of structural consistency. In addition, comparison experiments show that FSTUNet outperforms the UNet, Deep Residual Convolutional Neural Network (DRCNN), DSENet, Pix2pixGAN, and 3DUnet methods in both qualitative and quantitative metrics. CONCLUSIONS Accurately capturing features at different levels is greatly beneficial for reconstructing high-quality scatter-free images. The proposed FSTUNet method is an effective solution to CBCT scatter correction and has the potential to improve the accuracy of CBCT image-guided radiation therapy.
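The universal quality index (UQI) cited here equals 1 only for identical images; the standard Wang-Bovik formula combines correlation, luminance, and contrast terms. A global-computation sketch (sliding-window averaging is common in practice, and this simplified form is not taken from the paper):

```python
import numpy as np

def uqi(x, y):
    """Universal quality index Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2)).
    Undefined for constant images (zero denominator)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

# Identical images score exactly 1; a shading-like offset lowers the score.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(uqi(x, x))  # 1.0
```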
Affiliation(s)
- Xueren Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
- Yangkang Jiang
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Chen Luo
- Shenzhen Bay Laboratory, Shenzhen, China
- School of Automation, Zhejiang Institute of Mechanical & Electrical Engineering, Hangzhou, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
- Tianye Niu
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Shenzhen Bay Laboratory, Shenzhen, China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong, China
30
Uh J, Wang C, Jordan JA, Pirlepesov F, Becksfort JB, Ates O, Krasin MJ, Hua CH. A hybrid method of correcting CBCT for proton range estimation with deep learning and deformable image registration. Phys Med Biol 2023; 68:10.1088/1361-6560/ace754. [PMID: 37442128 PMCID: PMC10846632 DOI: 10.1088/1361-6560/ace754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/13/2023] [Indexed: 07/15/2023]
Abstract
Objective. This study aimed to develop a novel method for generating synthetic CT (sCT) from cone-beam CT (CBCT) of the abdomen/pelvis with bowel gas pockets to facilitate estimation of proton ranges. Approach. CBCT, same-day repeat CT, and planning CT (pCT) of 81 pediatric patients were used for training (n = 60), validation (n = 6), and testing (n = 15) of the method. The proposed method hybridizes unsupervised deep learning (CycleGAN) and deformable image registration (DIR) of the pCT to the CBCT. The CycleGAN and DIR are applied to generate the geometry-weighted (high spatial-frequency) and intensity-weighted (low spatial-frequency) components of the sCT, respectively, so that each process handles only the component weighted toward its strength. The resultant sCT is further improved in bowel gas regions and other tissues by iteratively feeding back the sCT to adjust incorrect DIR and by increasing the contribution of the deformed pCT in regions of accurate DIR. Main results. The hybrid sCT was more accurate than the deformed pCT and the CycleGAN-only sCT, as indicated by a smaller mean absolute error in CT numbers (28.7 ± 7.1 HU versus 38.8 ± 19.9 HU / 53.2 ± 5.5 HU; P ≤ 0.012) and higher Dice similarity of the internal gas regions (0.722 ± 0.088 versus 0.180 ± 0.098 / 0.659 ± 0.129; P ≤ 0.002). Accordingly, the hybrid method resulted in more accurate proton range for beams intersecting gas pockets (11 fields in 6 patients) than the individual methods (90th percentile error in the 80% distal fall-off, 1.8 ± 0.6 mm versus 6.5 ± 7.8 mm / 3.7 ± 1.5 mm; P ≤ 0.013). The gamma passing rates also showed a significant dosimetric advantage for the hybrid method (99.7 ± 0.8% versus 98.4 ± 3.1% / 98.3 ± 1.8%; P ≤ 0.007). Significance. The hybrid method significantly improved the accuracy of sCT and showed promise for CBCT-based proton range verification and adaptive replanning of abdominal/pelvic proton therapy, even when gas pockets are present in the beam path.
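The core idea of the hybrid, combining a geometry-weighted (high spatial-frequency) deep-learning component with an intensity-weighted (low spatial-frequency) deformed-pCT component, can be sketched with a Fourier-domain Gaussian frequency split. This is an illustrative decomposition only; the paper's actual split and weighting scheme may differ:

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Gaussian low-pass via the Fourier transfer function exp(-2*pi^2*sigma^2*f^2)."""
    f = np.fft.fft2(np.asarray(img, dtype=float))
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(f * h))

def hybrid_sct(cyclegan_sct, deformed_pct, sigma=3.0):
    """Low frequencies from the DIR-deformed pCT (reliable intensities),
    high frequencies from the CycleGAN output (reliable geometry)."""
    low = gaussian_lowpass(deformed_pct, sigma)
    high = np.asarray(cyclegan_sct, dtype=float) - gaussian_lowpass(cyclegan_sct, sigma)
    return low + high

# Sanity check: for uniform inputs, the result takes its intensity level
# entirely from the deformed pCT (the CycleGAN part contributes no DC term).
sct = np.full((16, 16), 50.0)
pct = np.full((16, 16), 100.0)
print(np.allclose(hybrid_sct(sct, pct), 100.0))  # True
```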
Affiliation(s)
- Jinsoo Uh
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chuang Wang
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jacob A Jordan
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- College of Medicine, The University of Tennessee Health Science Center, Memphis, TN, United States of America
- Fakhriddin Pirlepesov
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jared B Becksfort
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Ozgur Ates
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Matthew J Krasin
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
31
Park CS, Kang SR, Kim JE, Huh KH, Lee SS, Heo MS, Han JJ, Yi WJ. Validation of bone mineral density measurement using quantitative CBCT image based on deep learning. Sci Rep 2023; 13:11921. [PMID: 37488135 PMCID: PMC10366160 DOI: 10.1038/s41598-023-38943-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Accepted: 07/17/2023] [Indexed: 07/26/2023] Open
Abstract
Bone mineral density (BMD) measurement is a direct method of estimating human bone mass for diagnosing osteoporosis and is performed to objectively evaluate bone quality before implant surgery in dental clinics. The objective of this study was to validate the accuracy and reliability of BMD measurements made using quantitative cone-beam CT (CBCT) images based on deep learning by applying the method to clinical data from actual patients. Datasets containing 7500 pairs of CT and CBCT axial slice images from 30 patients were used to train a previously developed deep-learning model (QCBCT-NET). We selected 36 volumes of interest in the CBCT images for each patient in the bone regions of potential implant sites on the maxilla and mandible. We compared the BMDs shown in the quantitative CBCT (QCBCT) images with those in the conventional CBCT (CAL_CBCT) images at the various bone sites of interest across the entire field of view (FOV) using the performance metrics of MAE, RMSE, MAPE (mean absolute percentage error), R2 (coefficient of determination), and SEE (standard error of estimation). Compared with the ground-truth quantitative CT (QCT) images, the accuracy of the BMD measurements from the QCBCT images showed an RMSE of 83.41 mg/cm3, MAE of 67.94 mg/cm3, and MAPE of 8.32% across all the bone sites of interest, whereas for the CAL_CBCT images those values were 491.15 mg/cm3, 460.52 mg/cm3, and 54.29%, respectively. The linear regression between the QCBCT and QCT images showed a slope of 1.00 and an R2 of 0.85, whereas for the CAL_CBCT images those values were 0.32 and 0.24, respectively. The overall SEE between the QCBCT and QCT images was 81.06 mg/cm3, whereas the SEE for the CAL_CBCT images was 109.32 mg/cm3. The QCBCT images thus showed better accuracy, linearity, and uniformity than the CAL_CBCT images across the entire FOV.
The BMD measurements from the quantitative CBCT images showed high accuracy, linearity, and uniformity regardless of the relative geometric positions of the bone in the potential implant site. When applied to actual patient CBCT images, the CBCT-based quantitative BMD measurement based on deep learning demonstrated high accuracy and reliability across the entire FOV.
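The linearity figures above (slope and R2 of measured versus reference BMD) come from an ordinary least-squares fit. A numpy sketch with illustrative values (the function name and toy data are not from the paper):

```python
import numpy as np

def linearity_metrics(measured, reference):
    """Slope, intercept, and R^2 of a linear fit of measured vs. reference BMD (mg/cm^3)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    slope, intercept = np.polyfit(reference, measured, 1)
    predicted = slope * reference + intercept
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# A perfectly calibrated measurement reproduces slope 1 and R^2 1,
# the behaviour the QCBCT images approach (slope 1.00, R^2 0.85).
slope, intercept, r2 = linearity_metrics([200.0, 400.0, 600.0], [200.0, 400.0, 600.0])
```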
Grants
- Project Number: 1711174552, KMDF_PR_20200901_0147 Korea Medical Device Development Fund Grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety)
- Project Number: 1711174543, KMDF_PR_20200901_0011 Korea Medical Device Development Fund Grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety)
Affiliation(s)
- Chan-Soo Park
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
- Jeong-Joon Han
- Department of Oral and Maxillofacial Surgery, School of Dentistry, Seoul National University, Seoul, South Korea
- Won-Jin Yi
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, South Korea
32
Allen C, Yeo AU, Hardcastle N, Franich RD. Evaluating synthetic computed tomography images for adaptive radiotherapy decision making in head and neck cancer. Phys Imaging Radiat Oncol 2023; 27:100478. [PMID: 37655123 PMCID: PMC10465931 DOI: 10.1016/j.phro.2023.100478] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 07/19/2023] [Accepted: 07/22/2023] [Indexed: 09/02/2023] Open
Abstract
Background and purpose Adaptive radiotherapy (ART) decision-making benefits from dosimetric information to supplement image inspection when assessing the significance of anatomical changes. This study evaluated a dosimetry-based clinical decision workflow for ART utilizing deformable registration of the original planning computed tomography (CT) image to the daily cone beam CT (CBCT), replacing the need for a replan CT for dose estimation. Materials and methods We used 12 retrospective head and neck patient cases having a ground truth - a replan CT (rCT) acquired in response to anatomical changes apparent in the daily CBCT - to evaluate the accuracy of dosimetric assessment conducted on synthetic CTs (sCTs) generated by deforming the original planning CT Hounsfield units to the daily CBCT anatomy. The original plan was applied to the sCT, and the dosimetric accuracy of the sCT was assessed by analyzing plan objectives for targets and organs-at-risk compared to calculations on the ground-truth rCT. Three commercial DIR algorithms were compared. Results For the best-performing algorithms, the majority of dose metrics calculated on the sCTs differed by less than 4 Gy (5.7% of the 70 Gy prescription dose). An uncertainty of ±2.5 Gy (3.6% of the 70 Gy prescription) is recommended as a conservative tolerance when evaluating dose metrics on sCTs for head and neck. Conclusions Synthetic CTs present a valuable addition to the adaptive radiotherapy workflow, and synthetic CT dose estimates can be used effectively alongside the current practice of visually inspecting the overlay of the planning CT and CBCT to assess the significance of anatomical change.
Affiliation(s)
- Caitlin Allen
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- School of Science, RMIT University, Melbourne, Victoria, Australia
- Adam U. Yeo
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- School of Science, RMIT University, Melbourne, Victoria, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- Centre for Medical Radiation Physics, University of Wollongong, NSW, Australia
- Rick D. Franich
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- School of Science, RMIT University, Melbourne, Victoria, Australia
33
Hooshangnejad H, Chen Q, Feng X, Zhang R, Ding K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. Cancers (Basel) 2023; 15:cancers15113061. [PMID: 37297023 DOI: 10.3390/cancers15113061] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 05/22/2023] [Accepted: 05/25/2023] [Indexed: 06/12/2023] Open
Abstract
Major sources of delay in the standard-of-care RT workflow are the need for multiple appointments and separate image acquisition. In this work, we addressed the question of how to expedite the workflow by synthesizing a planning CT from the diagnostic CT. This idea is based on the premise that diagnostic CT could serve for RT planning, but in practice, owing to differences in patient setup and acquisition techniques, a separate planning CT is required. We developed a generative deep learning model, deepPERFECT, trained to capture these differences and generate deformation vector fields that transform diagnostic CT into a preliminary planning CT. We performed detailed analysis from both an image-quality and a dosimetric point of view, and showed that deepPERFECT enables the resulting preliminary plan to be used for early dosimetric assessment and evaluation.
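Applying a deformation vector field (DVF) of the kind deepPERFECT produces means resampling the source image at displaced coordinates. A minimal 2D sketch using nearest-neighbour pull-back sampling (real pipelines typically use trilinear or B-spline interpolation in 3D; this simplified routine is not the paper's implementation):

```python
import numpy as np

def warp_nn(image, dvf):
    """Apply a 2D deformation vector field (dy, dx per pixel, shape (H, W, 2))
    by nearest-neighbour pull-back sampling with edge clamping."""
    ny, nx = image.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    ys = np.clip(np.rint(yy + dvf[..., 0]).astype(int), 0, ny - 1)
    xs = np.clip(np.rint(xx + dvf[..., 1]).astype(int), 0, nx - 1)
    return image[ys, xs]

# A zero deformation field is the identity transform.
img = np.arange(16.0).reshape(4, 4)
assert np.array_equal(warp_nn(img, np.zeros((4, 4, 2))), img)
```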
Affiliation(s)
- Hamed Hooshangnejad
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA 91010, USA
- Xue Feng
- Carina Medical LLC, Lexington, KY 40513, USA
- Rui Zhang
- Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN 55455, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
34
Ran B, Huang B, Liang S, Hou Y. Surgical Instrument Detection Algorithm Based on Improved YOLOv7x. Sensors (Basel) 2023; 23:s23115037. [PMID: 37299761 DOI: 10.3390/s23115037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/19/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
The counting of surgical instruments is an important task to ensure surgical safety and patient health. However, due to the uncertainty of manual operations, there is a risk of missing or miscounting instruments. Applying computer vision technology to the instrument counting process can not only improve efficiency but also reduce medical disputes and promote the development of medical informatization. However, during the counting process, surgical instruments may be densely arranged or obstruct each other, and they may be affected by different lighting environments, all of which can affect the accuracy of instrument recognition. In addition, similar instruments may have only minor differences in appearance and shape, which increases the difficulty of identification. To address these issues, this paper improves the YOLOv7x object detection algorithm and applies it to the surgical instrument detection task. First, the RepLK Block module is introduced into the YOLOv7x backbone network, which increases the effective receptive field and guides the network to learn more shape features. Second, the ODConv structure is introduced into the neck module of the network, which significantly enhances the feature extraction ability of the basic convolution operations of the CNN and captures richer contextual information. At the same time, we created the OSI26 dataset, which contains 452 images and 26 surgical instruments, for model training and evaluation. The experimental results show that our improved algorithm exhibits higher accuracy and robustness in surgical instrument detection tasks, with F1, AP, AP50, and AP75 reaching 94.7%, 91.5%, 99.1%, and 98.2%, respectively, which are 4.6%, 3.1%, 3.6%, and 3.9% higher than the baseline. Compared to other mainstream object detection algorithms, our method has significant advantages. These results demonstrate that our method can more accurately identify surgical instruments, thereby improving surgical safety and patient health.
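The AP50 and AP75 metrics quoted here are average precision computed at intersection-over-union (IoU) thresholds of 0.5 and 0.75 between predicted and ground-truth boxes. The IoU itself, the building block of these metrics, can be sketched as:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 2x2 boxes overlapping in a 1x2 strip: intersection 2, union 6.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.3333333333333333
```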
Affiliation(s)
- Boping Ran
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
- Bo Huang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
- Shunpan Liang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, China
- Yulei Hou
- School of Mechanical Engineering, Yanshan University, Qinhuangdao 066000, China
35
Szmul A, Taylor S, Lim P, Cantwell J, Moreira I, Zhang Y, D’Souza D, Moinuddin S, Gaze MN, Gains J, Veiga C. Deep learning based synthetic CT from cone beam CT generation for abdominal paediatric radiotherapy. Phys Med Biol 2023; 68:105006. [PMID: 36996837 PMCID: PMC10160738 DOI: 10.1088/1361-6560/acc921] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 03/13/2023] [Accepted: 03/30/2023] [Indexed: 04/01/2023]
Abstract
Objective. Adaptive radiotherapy workflows require images with the quality of computed tomography (CT) for re-calculation and re-optimisation of radiation doses. In this work we aim to improve the quality of on-board cone beam CT (CBCT) images for dose calculation using deep learning. Approach. We propose a novel framework for CBCT-to-CT synthesis using cycle-consistent generative adversarial networks (cycleGANs). The framework was tailored for paediatric abdominal patients, a challenging application due to the inter-fractional variability in bowel filling and small patient numbers. We introduced to the networks the concept of global-residuals-only learning and modified the cycleGAN loss function to explicitly promote structural consistency between source and synthetic images. Finally, to compensate for the anatomical variability and address the difficulties in collecting large datasets in the paediatric population, we applied a smart 2D slice selection based on the common field-of-view (abdomen) to our imaging dataset. This acted as a weakly paired data approach that allowed us to take advantage of scans from patients treated for a variety of malignancies (thoracic-abdominal-pelvic) for training purposes. We first optimised the proposed framework and benchmarked its performance on a development dataset. Later, a comprehensive quantitative evaluation was performed on an unseen dataset, which included calculating global image-similarity metrics, segmentation-based measures, and proton therapy-specific metrics. Main results. We found improved performance for our proposed method, compared to a baseline cycleGAN implementation, on image-similarity metrics such as the mean absolute error calculated for a matched virtual CT (55.0 ± 16.6 HU proposed versus 58.9 ± 16.8 HU baseline). There was also a higher level of structural agreement for gastrointestinal gas between source and synthetic images, measured using the Dice similarity coefficient (0.872 ± 0.053 proposed versus 0.846 ± 0.052 baseline). Differences found in water-equivalent thickness metrics were also smaller for our method (3.3 ± 2.4% proposed versus 3.7 ± 2.8% baseline). Significance. Our findings indicate that our innovations to the cycleGAN framework improved the quality and structural consistency of the synthetic CTs generated.
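A structural-consistency term of the kind added to the cycleGAN loss above can take several forms; one plausible sketch penalizes differences in gradient magnitude between the source CBCT and the synthetic CT, which is small when edges (anatomy) agree even under a global intensity remapping. The paper's exact formulation may differ:

```python
import numpy as np

def grad_mag(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    return np.hypot(gy, gx)

def structure_loss(source, synthetic):
    """Mean absolute difference of gradient magnitudes: zero when the two
    images share edge structure, regardless of a constant intensity shift."""
    return float(np.mean(np.abs(grad_mag(source) - grad_mag(synthetic))))

# A pure intensity shift (e.g. an HU remapping offset) preserves structure.
cbct = np.arange(25.0).reshape(5, 5)
print(structure_loss(cbct, cbct + 40.0))  # 0.0
```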
Affiliation(s)
- Adam Szmul
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Sabrina Taylor
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Pei Lim
- Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jessica Cantwell
- Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Isabel Moreira
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Ying Zhang
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Derek D’Souza
- Radiotherapy Physics Services, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Moinuddin
- Radiotherapy, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Mark N. Gaze
- Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Jennifer Gains
- Department of Oncology, University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Catarina Veiga
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
36
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023; 50:2607-2624. [PMID: 36906915 PMCID: PMC10175241 DOI: 10.1002/mp.16351] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 02/03/2023] [Accepted: 02/27/2023] [Indexed: 03/13/2023] Open
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. 
RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in the synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining the improvement in image quality, yielding a 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to a 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
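The DL-Recon combination rule, trusting the GAN synthesis where epistemic uncertainty is low and falling back to the physics-based FBP reconstruction where it is high, can be sketched as a voxel-wise convex blend. The mapping from variance to weight below is illustrative; the paper derives its spatially varying weights differently:

```python
import numpy as np

def dl_recon_blend(synthetic_ct, fbp_ct, epistemic_var, scale=1.0):
    """Voxel-wise blend: weight w -> 1 (use the synthesis) where MC-dropout
    epistemic variance is low, w -> 0 (use FBP) where it is high.
    w = 1 / (1 + var/scale) is an assumed, illustrative mapping."""
    w = 1.0 / (1.0 + np.asarray(epistemic_var, dtype=float) / scale)
    return w * np.asarray(synthetic_ct, dtype=float) + (1.0 - w) * np.asarray(fbp_ct, dtype=float)

# Zero uncertainty returns the synthetic image; very high uncertainty
# approaches the FBP reconstruction.
syn = np.full((4, 4), 30.0)
fbp = np.full((4, 4), 80.0)
print(np.allclose(dl_recon_blend(syn, fbp, np.zeros((4, 4))), 30.0))  # True
```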
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030
| |
Collapse
37
Ebadi N, Li R, Das A, Roy A, Papanikolaou N, Najafirad P. CBCT-guided adaptive radiotherapy using self-supervised sequential domain adaptation with uncertainty estimation. Med Image Anal 2023; 86:102800. [PMID: 37003101] [DOI: 10.1016/j.media.2023.102800]
Abstract
Adaptive radiotherapy (ART) is an advanced technology in modern cancer treatment that incorporates progressive changes in patient anatomy into active plan/dose adaptation during the fractionated treatment. However, the clinical application relies on the accurate segmentation of cancer tumors on low-quality on-board images, which has posed challenges for both manual delineation and deep learning-based models. In this paper, we propose a novel sequence transduction deep neural network with an attention mechanism to learn the shrinkage of the cancer tumor based on patients' weekly cone-beam computed tomography (CBCT). We design a self-supervised domain adaptation (SDA) method to learn and adapt the rich textural and spatial features from pre-treatment high-quality computed tomography (CT) to the CBCT modality in order to address the poor image quality and lack of labels. We also provide uncertainty estimation for sequential segmentation, which aids not only in the risk management of treatment planning but also in the calibration and reliability of the model. Our experimental results based on a clinical non-small cell lung cancer (NSCLC) dataset with sixteen patients and ninety-six longitudinal CBCTs show that our model correctly learns weekly deformation of the tumor over time with an average Dice score of 0.92 on the immediate next step, and is able to predict multiple steps (up to 5 weeks) for future patient treatments with an average Dice score reduction of 0.05. By incorporating the tumor shrinkage predictions into a weekly re-planning strategy, our proposed method demonstrates a significant decrease of up to 35% in the risk of radiation-induced pneumonitis while maintaining a high tumor control probability.
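The voxelwise uncertainty described above is commonly obtained by drawing several stochastic forward passes (e.g. Monte Carlo dropout) and summarizing their spread; whether this paper uses exactly that recipe is not stated in the abstract, so the sketch below is a generic illustration with names of our own choosing:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """prob_samples: (T, H, W) foreground probabilities from T stochastic
    forward passes. Returns the mean prediction and voxelwise binary
    predictive entropy (in nats) as an uncertainty map."""
    p = prob_samples.mean(axis=0)  # mean foreground probability per voxel
    eps = 1e-12                    # guard against log(0)
    h = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return p, h
```

Voxels where the passes disagree get entropy near ln 2, flagging regions (e.g. ambiguous tumor boundaries) where the segmentation should not be trusted for re-planning.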
Affiliation(s)
- Nima Ebadi
- Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.
- Ruiqi Li
- Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America.
- Arun Das
- Department of Electrical and Computer Engineering, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America; Department of Medicine, The University of Pittsburgh, Pittsburgh, PA 15260, United States of America.
- Arkajyoti Roy
- Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.
- Papanikolaou Nikos
- Department of Radiation Oncology, UT Health San Antonio, San Antonio, TX 78229, United States of America.
- Peyman Najafirad
- Department of Computer Science, The University of Texas at San Antonio, San Antonio, TX 78249, United States of America.
38
Cao Z, Gao X, Chang Y, Liu G, Pei Y. Improving synthetic CT accuracy by combining the benefits of multiple normalized preprocesses. J Appl Clin Med Phys 2023:e14004. [PMID: 37092739] [PMCID: PMC10402686] [DOI: 10.1002/acm2.14004]
Abstract
PURPOSE To investigate the effect of different normalization preprocesses in deep learning on the accuracy of different tissues in synthetic computed tomography (sCT) and to combine their advantages to improve the accuracy of all tissues. METHODS The cycle-consistent adversarial network (CycleGAN) model was used to generate sCT images from megavolt cone-beam CT (MVCBCT) images. In this study, 2639 head MVCBCT and CT image pairs from 203 patients were collected as a training set, and 249 image pairs from 29 patients were collected as a test set. We normalized the voxel values in images to 0 to 1 or -1 to 1, using two linear and five nonlinear normalization preprocessing methods to obtain seven data sets and compared the accuracy of different tissues in different sCT obtained from training these data. Finally, to combine the advantages of different normalization preprocessing methods, we obtained sCT_Blur by cropping, stitching, and smoothing (OpenCV's cv2.medianBlur, kernel size 5) each group of sCTs and evaluated its image quality and accuracy of OARs. RESULTS Different normalization preprocesses made sCT more accurate in different tissues. The proposed sCT_Blur took advantage of multiple normalization preprocessing methods, and all tissues are more accurate than the sCT obtained using a single conventional normalization method. Compared with other sCT images, the structural similarity of sCT_Blur versus CT was improved to 0.906 ± 0.019. The mean absolute errors of the CT numbers were reduced to 15.7 ± 4.1 HU, 23.2 ± 7.1 HU, 11.5 ± 4.1 HU, 212.8 ± 104.6 HU, 219.4 ± 35.1 HU, and 268.8 ± 88.8 HU for the oral cavity, parotid, spinal cord, cavity, mandible, and teeth, respectively. CONCLUSION The proposed approach combined the advantages of several normalization preprocessing methods to improve the accuracy of all tissues in sCT images, which is promising for improving the accuracy of dose calculations based on CBCT images in adaptive radiotherapy.
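The abstract above compares linear and nonlinear intensity normalizations and then fuses the resulting sCTs with a median filter (it names OpenCV's cv2.medianBlur, kernel size 5). The numpy-only sketch below is ours: the function names are hypothetical, the voxelwise median across candidates is a simpler stand-in for the paper's tissue-wise cropping and stitching, and the moving-median smoothing stands in for cv2.medianBlur:

```python
import numpy as np

def norm_linear01(img):
    """Linear min-max normalization to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def norm_linear11(img):
    """Linear min-max normalization to [-1, 1]."""
    return 2.0 * norm_linear01(img) - 1.0

def norm_tanh(img):
    """One possible nonlinear normalization: tanh of z-scored intensities."""
    return np.tanh((img - img.mean()) / (img.std() + 1e-8))

def fuse_median(scts, kernel=5):
    """Fuse candidate sCTs from differently normalized models: voxelwise
    median across candidates, then 1D median smoothing along the last axis."""
    fused = np.median(np.stack(scts), axis=0)
    pad = kernel // 2
    padded = np.pad(fused, [(0, 0)] * (fused.ndim - 1) + [(pad, pad)], mode="edge")
    # sliding windows of length `kernel` along the last axis
    windows = np.stack([padded[..., i:i + fused.shape[-1]] for i in range(kernel)])
    return np.median(windows, axis=0)
```

The point of the paper is that each normalization favors different tissues, so combining the per-normalization outputs outperforms any single one.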
Affiliation(s)
- Zheng Cao
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Xiang Gao
- Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Yuanji Pei
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
39
Deng L, Ji Y, Huang S, Yang X, Wang J. Synthetic CT generation from CBCT using double-chain-CycleGAN. Comput Biol Med 2023; 161:106889. [DOI: 10.1016/j.compbiomed.2023.106889]
40
Xie K, Gao L, Xi Q, Zhang H, Zhang S, Zhang F, Sun J, Lin T, Sui J, Ni X. New technique and application of truncated CBCT processing in adaptive radiotherapy for breast cancer. Comput Methods Programs Biomed 2023; 231:107393. [PMID: 36739623] [DOI: 10.1016/j.cmpb.2023.107393]
Abstract
OBJECTIVE A generative adversarial network (TCBCTNet) was proposed to generate synthetic computed tomography (sCT) from truncated low-dose cone-beam computed tomography (CBCT) and planning CT (pCT). The sCT was applied to the dose calculation of radiotherapy for patients with breast cancer. METHODS The low-dose CBCT and pCT images of 80 female thoracic patients were used for training. The CBCT, pCT, and replanning CT (rCT) images of 20 thoracic patients and 20 patients with breast cancer were used for testing. All patients were fixed in the same posture with a vacuum pad. The CBCT images were scanned under the Fast Chest M20 protocol with a 50% reduction in projection frames compared with the standard Chest M20 protocol. Rigid registration was performed between pCT and CBCT, and deformation registration was performed between rCT and CBCT. In the training stage of the TCBCTNet, truncated CBCT images obtained from complete CBCT images by simulation were used. The input of the CBCT→CT generator was truncated CBCT and pCT, and TCBCTNet was applied to patients with breast cancer after training. The accuracy of the sCT was evaluated by anatomy and dosimetry and compared with the generative adversarial network with UNet and ResNet as the generators (named as UnetGAN, ResGAN). RESULTS The three models could improve the image quality of CBCT and reduce the scattering artifacts while preserving the anatomical geometry of CBCT. For the chest test set, TCBCTNet achieved the best mean absolute error (MAE, 21.18±3.76 HU), better than 23.06±3.90 HU in UnetGAN and 22.47±3.57 HU in ResGAN. When applied to patients with breast cancer, TCBCTNet performance decreased, and MAE was 25.34±6.09 HU. Compared with rCT, sCT by TCBCTNet showed consistent dose distribution and subtle absolute dose differences between the target and the organ at risk. The 3D gamma pass rates were 98.98%±0.64% and 99.69%±0.22% at 2 mm/2% and 3 mm/3%, respectively. 
Ablation experiments confirmed that pCT and content loss played important roles in TCBCTNet. CONCLUSIONS High-quality sCT images could be synthesized from truncated low-dose CBCT and pCT by using the proposed TCBCTNet model. In addition, sCT could be used to accurately calculate the dose distribution for patients with breast cancer.
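The gamma pass rates quoted above (2 mm/2% and 3 mm/3%) compare two dose distributions by combining a dose-difference criterion with a distance-to-agreement (DTA) search. The sketch below is a simplified global 3D gamma analysis of our own (brute-force search window, isotropic voxel spacing, global dose normalization); clinical tools implement faster and more careful variants:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dd_pct=3.0, dta_mm=3.0, cutoff_pct=10.0):
    """Simplified global 3D gamma: fraction of voxels (above a low-dose cutoff)
    with gamma <= 1, searching a cubic window of radius `dta_mm` around each voxel."""
    dd = dd_pct / 100.0 * ref.max()        # dose-difference criterion, absolute
    r = int(np.ceil(dta_mm / spacing_mm))  # search radius in voxels
    pad = np.pad(eval_, r, mode="edge")
    gammas2 = np.full(ref.shape, np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                v = np.array([i, j, k]) * spacing_mm
                shifted = pad[r + i:r + i + ref.shape[0],
                              r + j:r + j + ref.shape[1],
                              r + k:r + k + ref.shape[2]]
                g2 = (shifted - ref) ** 2 / dd**2 + (v @ v) / dta_mm**2
                gammas2 = np.minimum(gammas2, g2)
    mask = ref >= cutoff_pct / 100.0 * ref.max()  # ignore the low-dose region
    return 100.0 * np.mean(np.sqrt(gammas2[mask]) <= 1.0)
```

A voxel passes if some nearby evaluated dose point is simultaneously close enough in dose and in space; identical distributions therefore pass at 100%.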
Affiliation(s)
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Qianyi Xi
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Heng Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Sai Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Fan Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China.
41
Joseph J, Biji I, Babu N, Pournami PN, Jayaraj PB, Puzhakkal N, Sabu C, Patel V. Fan beam CT image synthesis from cone beam CT image using nested residual UNet based conditional generative adversarial network. Phys Eng Sci Med 2023; 46:703-717. [PMID: 36943626] [DOI: 10.1007/s13246-023-01244-5]
Abstract
A radiotherapy technique called Image-Guided Radiation Therapy adopts frequent imaging throughout a treatment session. Fan Beam Computed Tomography (FBCT) based planning followed by Cone Beam Computed Tomography (CBCT) based radiation delivery drastically improved treatment accuracy. Further gains in terms of radiation exposure and cost could be achieved if FBCT were replaced with CBCT. This paper proposes a Conditional Generative Adversarial Network (CGAN) for CBCT-to-FBCT synthesis. Specifically, a new architecture called Nested Residual UNet (NR-UNet) is introduced as the generator of the CGAN. A composite loss function, which comprises adversarial loss, Mean Squared Error (MSE), and Gradient Difference Loss (GDL), is used with the generator. The CGAN utilises the inter-slice dependency in the input by taking three consecutive CBCT slices to generate an FBCT slice. The model is trained using Head-and-Neck (H&N) FBCT-CBCT images of 53 cancer patients. The synthetic images exhibited a Peak Signal-to-Noise Ratio of 34.04±0.93 dB, a Structural Similarity Index Measure of 0.9751±0.001, and a Mean Absolute Error of 14.81±4.70 HU. On average, the proposed model yields a Contrast-to-Noise Ratio four times better than that of the input CBCT images. The model also minimised the MSE and alleviated blurriness. Compared to the CBCT-based plan, the synthetic image results in a treatment plan closer to the FBCT-based plan. The three-slice to single-slice translation captures the three-dimensional contextual information in the input. Besides, it avoids the computational complexity associated with a three-dimensional image synthesis model. Furthermore, the results demonstrate that the proposed model is superior to the state-of-the-art methods.
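Of the three loss terms above, GDL is the least standard. A common formulation (assumed here; the paper's exact variant may differ) penalizes the mismatch between absolute finite-difference gradients of the reference and generated images, which discourages the blurring that plain MSE tends to produce:

```python
import numpy as np

def gradient_difference_loss(ref, gen, alpha=2.0):
    """Gradient difference loss between reference and generated 2D images:
    mismatch of absolute finite-difference gradients along each axis."""
    gy_r, gx_r = np.abs(np.diff(ref, axis=0)), np.abs(np.diff(ref, axis=1))
    gy_g, gx_g = np.abs(np.diff(gen, axis=0)), np.abs(np.diff(gen, axis=1))
    return (np.abs(gy_r - gy_g) ** alpha).mean() + (np.abs(gx_r - gx_g) ** alpha).mean()
```

In training this term would be implemented with the framework's differentiable ops and summed with the adversarial and MSE terms using tunable weights.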
Affiliation(s)
- Jiffy Joseph
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India.
- Ivan Biji
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Naveen Babu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P N Pournami
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- P B Jayaraj
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Niyas Puzhakkal
- Department of Medical Physics, MVR Cancer Centre & Research Institute, Poolacode, Calicut, Kerala, 673601, India
- Christy Sabu
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
- Vedkumar Patel
- Computer science and Engineering Department, National Institute of Technology Calicut, Kattangal, Calicut, Kerala, 673601, India
42
Nelissen KJ, Versteijne E, Senan S, Hoffmans D, Slotman BJ, Verbakel WFAR. Evaluation of a workflow for cone-beam CT-guided online adaptive palliative radiotherapy planned using diagnostic CT scans. J Appl Clin Med Phys 2023; 24:e13841. [PMID: 36573256] [PMCID: PMC10018665] [DOI: 10.1002/acm2.13841]
Abstract
PURPOSE Single-visit radiotherapy (RT) is beneficial for patients requiring pain control and can limit interruptions to systemic treatments. However, the requirement for a dedicated planning CT (pCT) scan can result in treatment delays. We developed a workflow involving preplanning on available diagnostic CT (dCT) imaging, followed by online plan adaptation using a cone-beam CT (CBCT) scan prior to RT delivery, in order to account for any changes in anatomy and target position. METHODS Patients previously treated with palliative RT for bone metastases were selected from our hospital database. Patient dCT images were deformed to treatment CBCTs in the Ethos platform (Varian Medical Systems) and a synthetic CT (sCT) generated. Treatment quality was analyzed by comparing V95% coverage of the planning/clinical target volume and different organ-at-risk (OAR) doses between adapted and initial clinical treatment plans. Doses were recalculated on the CBCT and sCT in a separate treatment planning system. Adapted plan doses were measured on-couch using an anthropomorphic phantom with Gafchromic EBT3 dosimetric film and compared to dose calculations. RESULTS All adapted treatment plans met the clinical goals for target and OARs and outperformed the original treatment plans calculated on the (daily) sCT. Differences in V95% target volume coverage between the initial and adapted treatments were <0.2%. Dose recalculations on CBCT and sCT were comparable, and the average gamma pass rate (3%/2 mm) of dosimetric measurements was 98.8%. CONCLUSIONS Online daily adaptive RT using dCTs instead of a dedicated pCT is feasible using the Ethos platform. This workflow has now been implemented clinically.
Affiliation(s)
- Koen J Nelissen
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- Eva Versteijne
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- Suresh Senan
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- Daan Hoffmans
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- Ben J Slotman
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
- Wilko F A R Verbakel
- Department of Radiation Oncology, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, The Netherlands
43
Nelissen KJ, Versteijne E, Senan S, Rijksen B, Admiraal M, Visser J, Barink S, de la Fuente AL, Hoffmans D, Slotman BJ, Verbakel WFAR. Same-day adaptive palliative radiotherapy without prior CT simulation: Early outcomes in the FAST-METS study. Radiother Oncol 2023; 182:109538. [PMID: 36806603] [DOI: 10.1016/j.radonc.2023.109538]
Abstract
BACKGROUND AND PURPOSE Standard palliative radiotherapy workflows involve waiting times or multiple clinic visits. We developed and implemented a rapid palliative workflow using diagnostic imaging (dCT) for pre-planning, with subsequent on-couch target and plan adaptation based on a synthetic computed tomography (CT) obtained from cone-beam CT imaging (CBCT). MATERIALS AND METHODS Patients with painful bone metastases and recent diagnostic imaging were eligible for inclusion in this prospective, ethics-approved study. The workflow consisted of 1) telephone consultation with a radiation oncologist (RO); 2) pre-planning on the dCT using planning templates and mostly intensity-modulated radiotherapy; 3) RO consultation on the day of treatment; 4) CBCT scan with on-couch adaptation of the target and treatment plan; 5) delivery of either the scheduled or the adapted treatment plan. Primary outcomes were dosimetric data and treatment times; the secondary outcome was patient satisfaction. RESULTS 47 patients were enrolled between December 2021 and October 2022. In all treatments, adapted treatment plans were chosen due to significant improvements in target coverage (PTV/CTV V95%, p-value < 0.005) compared to the original treatment plan calculated on daily anatomy. Most patients were satisfied with the workflow. The average treatment time, including consultation and on-couch adaptive treatment, was 85 minutes. On-couch adaptation took on average 30 minutes but was longer in cases where the automated deformable image registration failed to correctly propagate the targets. CONCLUSION A fast treatment workflow for patients referred for painful bone metastases was implemented successfully using online adaptive radiotherapy, without a dedicated CT simulation. Patients were generally satisfied with the palliative radiotherapy workflow.
Affiliation(s)
- Koen J Nelissen
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands.
- Eva Versteijne
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands
- Suresh Senan
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands
- Barbara Rijksen
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands
- Marjan Admiraal
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands
- Jorrit Visser
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands
- Sarah Barink
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands
- Amy L de la Fuente
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands
- Daan Hoffmans
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands
- Ben J Slotman
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands
- Wilko F A R Verbakel
- Amsterdam UMC location Vrije Universiteit Amsterdam, Radiation Oncology, Amsterdam, the Netherlands; Cancer Center Amsterdam, Cancer Treatment and Quality of Life, Amsterdam, the Netherlands
44
Gao L, Xie K, Sun J, Lin T, Sui J, Yang G, Ni X. Streaking artifact reduction for CBCT-based synthetic CT generation in adaptive radiotherapy. Med Phys 2023; 50:879-893. [PMID: 36183234] [DOI: 10.1002/mp.16017]
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) is widely used for daily image guidance in radiation therapy, enhancing the reproducibility of patient setup. However, its application in adaptive radiotherapy (ART) is limited by many imaging artifacts and inaccurate Hounsfield units (HUs). The correction of CBCT images is necessary and of great value for CBCT-based ART. PURPOSE To explore synthetic CT (sCT) generation from CBCT images of thorax and abdomen patients, which usually suffer from serious artifacts due to organ state changes. In this study, a streaking artifact reduction network (SARN) is proposed to reduce artifacts and is combined with cycleGAN to generate high-quality sCT images from CBCT and achieve an accurate dose calculation. METHODS The proposed SARN was trained in a self-supervised manner. Artifact-CT images were generated from planning CT by random deformation and projection replacement, and SARN was trained on paired artifact-CT and CT images. The planning CT and CBCT images of 260 patients with cancer, including 120 thoracic and 140 abdominal CT scans, were used to train and evaluate the neural networks. The CBCT images of another 12 patients in late treatment fractions, which contained large anatomy changes, were also tested with the trained models. The trained models include the commonly used U-Net, cycleGAN, attention-gated cycleGAN (cycAT), and cascade models combining SARN with cycleGAN or cycAT. The generated sCT images were compared in terms of image quality and dose calculation accuracy. RESULTS The sCT images generated by SARN combined with cycleGAN and cycAT showed the best image quality, removed the most artifacts, and retained the normal anatomical structure. The SARN+cycleGAN performed best in streaking artifact removal, with the maximum percent integrity uniformity (PIUm) of 91.0% and the minimum standard deviation (SD) of 35.4 HU for delineated artifact regions among all models.
The mean absolute error (MAE) of CBCT images in the thorax and abdomen were 71.6 and 55.2 HU, respectively, using planning CT images after deformable registration as ground truth. Compared with CBCT, the thoracic and abdominal sCT images generated by each model had significantly improved image quality with smaller MAE (p < 0.05). The SARN+cycAT obtained the minimum MAEs of 42.5 HU in the thorax while SARN+cycleGAN got the minimum MAEs of 32.0 HU in the abdomen. The sCT generated by U-Net had a remarkably lower anatomical structure accuracy compared with the other models. The thoracic and abdominal sCT images generated by SARN+cycleGAN showed optimal dose calculation accuracy with gamma passing rates (2 mm/2%) of 98.2% and 96.9%, respectively. CONCLUSIONS The proposed SARN can reduce serious streaking artifacts in CBCT images. The SARN combined with cycleGAN can generate high-quality sCT images with fewer artifacts, high-accuracy HU values, and accurate anatomical structures, thus providing reliable dose calculation in ART.
Affiliation(s)
- Liugang Gao
- School of Computer Science and Engineering, Southeast University, Nanjing, China; The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
45
Hooshangnejad H, Chen Q, Feng X, Zhang R, Ding K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. arXiv 2023:2301.11085. [PMID: 36748001] [PMCID: PMC9900959]
Abstract
Pancreatic cancer, with more than 60,000 new cases each year, has a 5-year overall survival of less than 10 percent. Radiation therapy (RT) is an effective treatment for locally advanced pancreatic cancer (LAPC). The current clinical RT workflow is lengthy and involves separate image acquisition for diagnostic CT (dCT) and planning CT (pCT). Studies have shown a reduction in mortality rate from expeditious radiotherapy treatment. dCT and pCT are acquired separately because of the differences in the image acquisition setup and patient body. We present deepPERFECT, a deep-learning-based model that adapts the shape of the patient body on dCT to the treatment delivery setup. Our method expedites the treatment course by allowing the design of the initial RT plan before the pCT acquisition. Thus, physicians can evaluate the potential RT prognosis ahead of time, verify the plan on the treatment day-one CT, and apply any online adaptation if needed. We used the data from 25 pancreatic cancer patients. The model was trained on 15 cases and tested on the remaining 10 cases. We evaluated the performance of four different deep-learning architectures for this task. The synthesized CT (sCT) and regions of interest (ROIs) were compared with the ground truth (pCT) using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). We found that the three-dimensional Generative Adversarial Network (GAN) model trained on large patches had the best performance. The average DSC and HD for body contours were 0.93 and 4.6 mm, respectively. We found no statistically significant difference between the synthesized CT plans and the ground truth. We showed that employing deepPERFECT shortens the current lengthy clinical workflow by at least one week and improves the effectiveness of treatment and the quality of life of pancreatic cancer patients.
Affiliation(s)
- Hamed Hooshangnejad
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, USA; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD, USA; Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA, USA
- Xue Feng
- Carina Medical LLC, Lexington, KY, USA
- Rui Zhang
- Department of Surgery, University of Minnesota, Minneapolis, MN, USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine, Baltimore, MD, USA; Carnegie Center of Surgical Innovation, Johns Hopkins School of Medicine, Baltimore, MD, USA
46
Wang H, Liu X, Kong L, Huang Y, Chen H, Ma X, Duan Y, Shao Y, Feng A, Shen Z, Gu H, Kong Q, Xu Z, Zhou Y. Improving CBCT image quality to the CT level using RegGAN in esophageal cancer adaptive radiotherapy. Strahlenther Onkol 2023; 199:485-497. [PMID: 36688953] [PMCID: PMC10133081] [DOI: 10.1007/s00066-022-02039-5]
Abstract
OBJECTIVE This study aimed to improve the image quality and CT Hounsfield unit accuracy of daily cone-beam computed tomography (CBCT) using registration generative adversarial networks (RegGAN) and to apply the resulting synthetic CT (sCT) images to dose calculation in radiotherapy. METHODS The CBCT/planning CT images of 150 esophageal cancer patients undergoing radiotherapy were used for training (120 patients) and testing (30 patients). An unsupervised deep-learning method, a 2.5D RegGAN model with an adaptively trained registration network, was proposed, through which sCT images were generated. The quality of the deep-learning-generated sCT images was quantitatively compared to the reference deformed CT (dCT) images using the mean absolute error (MAE) and root mean square error (RMSE) of Hounsfield units (HU), and the peak signal-to-noise ratio (PSNR). Dose calculation accuracy was further evaluated for esophageal cancer radiotherapy plans, with the same plans calculated on the dCT, CBCT, and sCT images. RESULTS The quality of sCT images produced by RegGAN was significantly improved compared to the original CBCT images. In the testing patients, RegGAN achieved MAE sCT vs. CBCT: 43.7 ± 4.8 vs. 80.1 ± 9.1; RMSE sCT vs. CBCT: 67.2 ± 12.4 vs. 124.2 ± 21.8; and PSNR sCT vs. CBCT: 27.9 ± 5.6 vs. 21.3 ± 4.2. The sCT images generated by the RegGAN model also showed superior dose calculation accuracy, with higher gamma passing rates (93.3 ± 4.4%, 90.4 ± 5.2%, and 84.3 ± 6.6%) than the original CBCT images (89.6 ± 5.7%, 85.7 ± 6.9%, and 72.5 ± 12.5%) under the criteria of 3 mm/3%, 2 mm/2%, and 1 mm/1%, respectively. CONCLUSION The proposed deep-learning RegGAN model seems promising for the efficient generation of high-quality sCT images from stand-alone thoracic CBCT images and thus has the potential to support CBCT-based esophageal cancer adaptive radiotherapy.
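The MAE, RMSE, and PSNR figures quoted above are straightforward pixel-wise comparisons between the synthetic and reference HU volumes. A minimal sketch (our own illustration, not the authors' code; the 4000 HU data range is an assumed value commonly used for CT):

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error between two HU volumes."""
    return float(np.abs(ref - img).mean())

def rmse(ref, img):
    """Root mean square error between two HU volumes."""
    return float(np.sqrt(((ref - img) ** 2).mean()))

def psnr(ref, img, data_range=4000.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed HU span
    (e.g., roughly -1000 HU for air up to ~3000 HU for dense bone)."""
    err = rmse(ref, img)
    return np.inf if err == 0 else 20.0 * np.log10(data_range / err)

# synthetic demo: a reference volume plus ~40 HU of Gaussian noise
rng = np.random.default_rng(0)
ref = rng.uniform(-1000, 3000, size=(64, 64))
img = ref + rng.normal(0.0, 40.0, size=(64, 64))
print(mae(ref, img), rmse(ref, img), psnr(ref, img))
```

Note that such global metrics are usually computed only inside the patient body contour; including the air outside the patient inflates the apparent agreement.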
Affiliation(s)
- Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China; Institute of Modern Physics, Fudan University, Shanghai, China
- Xiao Liu
- Department of Radiotherapy, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Xiurui Ma
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Zhenjiong Shen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Qing Kong
- Institute of Modern Physics, Fudan University, Shanghai, China
- Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yongkang Zhou
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
47
Ryu K, Lee C, Han Y, Pang S, Kim YH, Choi C, Jang I, Han SS. Multi-planar 2.5D U-Net for image quality enhancement of dental cone-beam CT. PLoS One 2023; 18:e0285608. [PMID: 37167217 PMCID: PMC10174510 DOI: 10.1371/journal.pone.0285608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Accepted: 04/26/2023] [Indexed: 05/13/2023] Open
Abstract
Cone-beam computed tomography (CBCT) can provide 3D images of a targeted area at a lower radiation dose than multidetector computed tomography (MDCT; also simply referred to as CT). However, because of the cone-shaped geometry of the X-ray source and the absence of post-patient collimation in CBCT, greater scatter degrades image quality compared with MDCT. CBCT is commonly used in dental clinics, and image artifacts negatively affect the radiology workflow and diagnosis. Previous studies have attempted to eliminate image artifacts and improve image quality; however, the vast majority of that work sacrificed structural details of the image. The current study presents a novel approach that reduces image artifacts while preserving the detail and sharpness of the original CBCT image for precise diagnostic purposes. MDCT images were used as high-quality reference images. Pairs of CBCT and MDCT scans were collected retrospectively at a university hospital and co-registered. A contextual-loss-optimized multi-planar 2.5D U-Net was proposed. Images corrected by this model were evaluated quantitatively and qualitatively by dental clinicians. The quantitative metrics showed superior quality in the output images compared to the original CBCT. In the qualitative evaluation, the generated images received significantly higher scores for artifacts, noise, resolution, and overall image quality. This approach to noise and artifact reduction with sharpness preservation suggests the method's potential for diagnostic CBCT imaging.
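A "multi-planar 2.5D" network, as described above, feeds the model 2D slices augmented with neighboring slices as extra channels, extracted along each of the axial, coronal, and sagittal planes. A minimal sketch of assembling such an input (our own illustration, with hypothetical function and parameter names; the cited work's exact slab size and plane handling may differ):

```python
import numpy as np

def slab_25d(volume, index, axis=0, half=1):
    """Build a 2.5D input: the slice at `index` along `axis` plus `half`
    neighbours on each side, stacked as channels (edge-clamped at the
    volume boundary so the channel count is always 2*half + 1)."""
    n = volume.shape[axis]
    idx = np.clip(np.arange(index - half, index + half + 1), 0, n - 1)
    return np.stack([np.take(volume, i, axis=axis) for i in idx], axis=0)

vol = np.arange(4 * 5 * 6).reshape(4, 5, 6).astype(np.float32)
axial = slab_25d(vol, 0, axis=0)      # clamped at the first slice
coronal = slab_25d(vol, 2, axis=1)    # same volume, different plane
print(axial.shape, coronal.shape)     # (3, 5, 6) (3, 4, 6)
```

The appeal of the 2.5D formulation is that it gives the network some through-plane context at close to the memory cost of a 2D model; a multi-planar variant then runs the same 2D network over all three orientations and fuses the results.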
Affiliation(s)
- Kanghyun Ryu
- Artificial Intelligence and Robotics Institute, Korea Institute of Science and Technology, Seoul, South Korea
- Chena Lee
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, South Korea
- Yoseob Han
- College of Information Technology, Department of Electronic Engineering, IT Convergence Major, Soongsil University, Seoul, South Korea
- Department of Radiology, Harvard Medical School, Boston, MA, United States of America
- Subeen Pang
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, United States of America
- Young Hyun Kim
- Department of R&D Performance Evaluation, Korea Health Industry Development Institute (KHIDI), Cheongju, South Korea
- Chanyeol Choi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, United States of America
- Ikbeom Jang
- Division of Computer Engineering, Hankuk University of Foreign Studies, Yongin, South Korea
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Sang-Sun Han
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, South Korea
48
Anatomical evaluation of deep-learning synthetic computed tomography images generated from male pelvis cone-beam computed tomography. Phys Imaging Radiat Oncol 2023; 25:100416. [PMID: 36969503 PMCID: PMC10037090 DOI: 10.1016/j.phro.2023.100416] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 01/17/2023] [Accepted: 01/18/2023] [Indexed: 01/25/2023] Open
Abstract
Background and purpose To improve cone-beam computed tomography (CBCT), deep-learning (DL) models are being explored to generate synthetic CTs (sCT). Evaluation of sCTs has focused mainly on image quality and CT number accuracy; however, correct representation of the daily anatomy captured by the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans by different paired and unpaired DL models. Materials and methods Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three DL models, Dual-UNet, Single-UNet and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. Image quality was assessed using metrics such as the Mean Absolute Error (MAE). Anatomical correctness between sCT and CBCT was quantified using organs-at-risk volumes and average surface distances (ASD). Results MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions Although Dual-UNet performed best on standard image quality measures such as MAE, the contour-based anatomical comparison with the CBCT showed that Dual-UNet performed worst anatomically. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
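The average surface distance (ASD) used above differs from the Hausdorff distance in that it averages, rather than maximizes, over nearest-surface distances. A self-contained sketch (our own illustration, with hypothetical function names; it extracts surface voxels by checking face-neighbours and assumes the masks do not touch the array border, since `np.roll` wraps around):

```python
import numpy as np

def surface_points(mask):
    """Coordinates of surface voxels: foreground voxels with at least
    one background face-neighbour (no SciPy needed)."""
    m = mask.astype(bool)
    interior = np.ones_like(m)
    for ax in range(m.ndim):
        # a voxel is interior only if both face-neighbours along every
        # axis are also foreground (np.roll wraps, so keep masks off borders)
        interior &= np.roll(m, 1, ax) & np.roll(m, -1, ax)
    return np.argwhere(m & ~interior)

def asd(a, b, spacing=1.0):
    """Average symmetric surface distance between two binary masks."""
    pa = surface_points(a) * spacing
    pb = surface_points(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return float(0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()))

a = np.zeros((20, 20)); a[5:15, 5:15] = 1
b = np.zeros((20, 20)); b[7:17, 7:17] = 1
print(asd(a, b))   # a couple of mm for a 2-voxel shift on a 1 mm grid
```

Because ASD averages over the whole surface, it is far less sensitive to single outlier voxels than the maximum Hausdorff distance, which is one reason it is favored for organ-level anatomical comparisons like the bladder results above.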
49
Hamming VC, Andersson S, Maduro JH, Langendijk JA, Both S, Sijtsema NM. Daily dose evaluation based on corrected CBCTs for breast cancer patients: accuracy of dose and complication risk assessment. Radiat Oncol 2022; 17:205. [PMID: 36510254 PMCID: PMC9746176 DOI: 10.1186/s13014-022-02174-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 11/30/2022] [Indexed: 12/14/2022] Open
Abstract
OBJECTIVES The goal of this study is to validate different CBCT correction methods and select the superior method for dose evaluation in breast cancer patients with large anatomical changes treated with photon irradiation. MATERIALS AND METHODS Seventy-six breast cancer patients treated with a partial-VMAT photon technique (70% conformal, 30% VMAT) were included in this study. All patients showed at least a 5 mm variation (swelling or shrinkage) of the breast on the CBCT compared to the planning CT (pCT) and had a repeat CT (rCT) for dose evaluation acquired within 3 days of this CBCT. The original CBCT was corrected using four methods: (1) HU-override correction (CBCTHU), (2) analytical correction and conversion (CBCTCC), (3) deep-learning (DL) correction (CTDL) and (4) virtual correction (CTv). Image quality evaluation consisted of calculating the mean absolute error (MAE) and mean error (ME) within the whole-breast clinical target volume (CTV) and within the field of view of the CBCT minus 2 cm (CBCT-ROI), with respect to the rCT. The dose was calculated on all image sets using the clinical treatment plan for dose and gamma passing rate analysis. RESULTS The MAE of the CBCT-ROI was below 66 HU for all corrected CBCTs except CBCTHU, which had a MAE of 142 HU. No significant dose differences were observed in the CTV regions for CBCTCC, CTDL and CTv. Only CBCTHU deviated significantly (p < 0.01), resulting in a 1.7% (± 1.1%) average dose deviation. Gamma passing rates were > 95% at 2%/2 mm for all corrected CBCTs. CONCLUSION The analytical correction and conversion, deep-learning correction and virtual correction methods can all be applied for an accurate CBCT correction that can be used for dose evaluation during the course of photon radiotherapy of breast cancer patients.
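The gamma passing rate quoted here (and in several abstracts above) combines a dose-difference tolerance with a distance-to-agreement tolerance. The brute-force sketch below (our own illustration, with hypothetical names; clinical tools such as dedicated gamma-analysis software additionally interpolate the evaluated dose between voxels rather than checking only voxel centers) computes a global gamma on a small grid:

```python
import numpy as np

def gamma_pass_rate(ref, ev, dose_tol=0.02, dist_tol=2.0,
                    spacing=1.0, cutoff=0.1):
    """Global gamma analysis, brute force: the fraction (in %) of
    reference points with gamma <= 1 under a dose_tol (fraction of the
    max reference dose) / dist_tol (mm) criterion. Reference points
    below `cutoff` of the max dose are excluded, as is conventional.
    O(N^2) in the number of voxels, so only suitable for small grids."""
    dmax = ref.max()
    coords = np.indices(ref.shape).reshape(ref.ndim, -1).T * spacing
    # normalized dose difference between each reference point i and
    # every candidate evaluated point j, and normalized distance i->j
    dd = (ev.ravel()[None, :] - ref.ravel()[:, None]) / (dose_tol * dmax)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1) / dist_tol
    gamma = np.sqrt(dd ** 2 + dist ** 2).min(axis=1)
    keep = ref.ravel() >= cutoff * dmax
    return float(100.0 * (gamma[keep] <= 1.0).mean())

ref = np.ones((12, 12)) * 60.0            # flat 60 Gy "plan", 1 mm grid
print(gamma_pass_rate(ref, ref))          # identical dose -> 100.0
print(gamma_pass_rate(ref, ref * 1.05))   # uniform 5% error -> 0.0
```

The uniform 5% error fails everywhere because no spatial search can rescue a dose discrepancy that exceeds the 2% tolerance at every point, which is exactly the behavior a 2%/2 mm criterion is meant to enforce.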
Affiliation(s)
- Vincent C. Hamming
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- John H. Maduro
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Johannes A. Langendijk
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Stefan Both
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Nanna M. Sijtsema
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
50
Cheon W, Jeong S, Jeong JH, Lim YK, Shin D, Lee SB, Lee DY, Lee SU, Suh YG, Moon SH, Kim TH, Kim H. Interobserver Variability Prediction of Primary Gross Tumor in a Patient with Non-Small Cell Lung Cancer. Cancers (Basel) 2022; 14:cancers14235893. [PMID: 36497374 PMCID: PMC9741368 DOI: 10.3390/cancers14235893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 11/25/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022] Open
Abstract
This research addresses the problem of interobserver variability (IOV), in which different oncologists manually delineate varying primary gross tumor volume (pGTV) contours, adding risk to targeted radiation treatments. A method of IOV reduction is therefore urgently needed. Hypothesizing that a radiation oncologist's IOV may shrink with the aid of IOV maps, we propose the IOV prediction network (IOV-Net), a deep-learning model that uses the fuzzy membership function to produce high-quality maps based on computed tomography (CT) images. To test prediction accuracy, a ground-truth pGTV IOV map was created from the manual contour delineations of radiation therapy structures provided by five expert oncologists, and IOV-Net was tasked with producing a map of its own. The mean squared error (prediction vs. ground truth) and its standard deviation were 0.0038 and 0.0005, respectively. To test the clinical feasibility of the method, CT images were divided into two groups, and oncologists from our institution created manual contours with and without IOV map guidance. The Dice similarity coefficient and Jaccard index increased by approximately 6% and 7%, respectively, and the Hausdorff distance decreased by 2.5 mm, indicating a statistically significant IOV reduction (p < 0.05). Hence, IOV-Net and its resultant IOV maps have the potential to improve radiation therapy efficacy worldwide.
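A fuzzy-membership IOV map of the kind described above can be understood, in its simplest form, as a voxel-wise agreement fraction across observers: 1.0 where every observer included the voxel, 0.0 where none did, and intermediate values in the disputed rim. This sketch is our own simplified illustration of that idea, not the ground-truth construction used in the cited paper, and the function name is hypothetical:

```python
import numpy as np

def iov_map(contours):
    """Fuzzy membership map from several observers' binary delineations:
    each voxel's value is the fraction of observers who included it."""
    stack = np.stack([c.astype(float) for c in contours])
    return stack.mean(axis=0)

# three hypothetical observers drawing slightly shifted 8x8 squares
obs = []
for shift in (0, 1, 2):
    m = np.zeros((16, 16))
    m[4 + shift:12 + shift, 4 + shift:12 + shift] = 1
    obs.append(m)
mem = iov_map(obs)
print(mem.max(), mem.min())   # 1.0 0.0  (full agreement inside, none outside)
```

Regions where the membership value sits strictly between 0 and 1 mark exactly the voxels where observers disagree, which is the information an IOV map is meant to surface to the contouring oncologist.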