1
Khamfongkhruea C, Prakarnpilas T, Thongsawad S, Deeharing A, Chanpanya T, Mundee T, Suwanbut P, Nimjaroen K. Supervised deep learning-based synthetic computed tomography from kilovoltage cone-beam computed tomography images for adaptive radiation therapy in head and neck cancer. Radiat Oncol J 2024;42:181-191. PMID: 39354821. DOI: 10.3857/roj.2023.00584.
Abstract
PURPOSE To generate and investigate a supervised deep learning algorithm for creating synthetic computed tomography (sCT) images from kilovoltage cone-beam computed tomography (kV-CBCT) images for adaptive radiation therapy (ART) in head and neck cancer (HNC). MATERIALS AND METHODS This study generated the supervised U-Net deep learning model using 3,491 image pairs from planning computed tomography (pCT) and kV-CBCT datasets obtained from 40 HNC patients. The dataset was split into 80% for training and 20% for testing. The evaluation of the sCT images compared to pCT images focused on three aspects: Hounsfield unit accuracy, assessed using mean absolute error (MAE) and root mean square error (RMSE); image quality, evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) between sCT and pCT images; and dosimetric accuracy, encompassing 3D gamma passing rates for dose distribution and percentage dose difference. RESULTS MAE, RMSE, PSNR, and SSIM improved from their initial values of 53.15 ± 40.09, 153.99 ± 79.78, 47.91 ± 4.98 dB, and 0.97 ± 0.02 to 41.47 ± 30.59, 130.39 ± 78.06, 49.93 ± 6.00 dB, and 0.98 ± 0.02, respectively. Regarding dose evaluation, 3D gamma passing rates for dose distribution within sCT images under 2%/2 mm, 3%/2 mm, and 3%/3 mm criteria were 92.1% ± 3.8%, 93.8% ± 3.0%, and 96.9% ± 2.0%, respectively. The sCT images exhibited minor variations in the percentage dose distribution of the investigated target and structure volumes. However, it is worth noting that the sCT images exhibited anatomical variations when compared to the pCT images. CONCLUSION These findings highlight the potential of the supervised U-Net deep learning model in generating kV-CBCT-based sCT images for ART in patients with HNC.
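The four image metrics reported above are standard and straightforward to reproduce. Below is a minimal NumPy sketch of MAE, RMSE, PSNR, and a simplified SSIM between aligned sCT and pCT volumes in Hounsfield units; the function name and the global single-window SSIM are assumptions of this sketch, as published evaluations typically use a local sliding-window SSIM and may restrict the metrics to a body mask.

```python
import numpy as np

def hu_and_quality_metrics(sct, pct, data_range=None):
    """MAE, RMSE, PSNR, and a simplified global SSIM between an sCT and
    the reference pCT. Inputs are aligned 2D/3D arrays in HU."""
    sct = sct.astype(np.float64)
    pct = pct.astype(np.float64)
    diff = sct - pct
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    if data_range is None:
        data_range = pct.max() - pct.min()
    psnr = 10.0 * np.log10(data_range ** 2 / np.mean(diff ** 2))
    # Simplified global SSIM (one window over the whole volume);
    # clinical studies usually compute a local sliding-window SSIM.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = sct.mean(), pct.mean()
    var_x, var_y = sct.var(), pct.var()
    cov = np.mean((sct - mu_x) * (pct - mu_y))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return mae, rmse, psnr, ssim
```

A uniform +1 HU offset, for example, yields MAE = RMSE = 1 HU with SSIM close to 1, which is why HU accuracy and structural similarity are reported separately.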
Affiliation(s)
- Chirasak Khamfongkhruea
- Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Tipaporn Prakarnpilas
- Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Sangutid Thongsawad
- Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Aphisara Deeharing
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Thananya Chanpanya
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Thunpisit Mundee
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Pattarakan Suwanbut
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
- Kampheang Nimjaroen
- Medical Physics Program, Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Department of Radiation Oncology, Chulabhorn Hospital, Chulabhorn Royal Academy, Bangkok, Thailand
2
Viar-Hernandez D, Molina-Maza JM, Vera-Sánchez JA, Perez-Moreno JM, Mazal A, Rodriguez-Vila B, Malpica N, Torrado-Carvajal A. Enhancing adaptive proton therapy through CBCT images: Synthetic head and neck CT generation based on 3D vision transformers. Med Phys 2024;51:4922-4935. PMID: 38569141. DOI: 10.1002/mp.17057.
Abstract
BACKGROUND Proton therapy is a form of radiotherapy commonly used to treat various cancers. Due to its high conformality, minor variations in patient anatomy can lead to significant alterations in dose distribution, making adaptation crucial. While cone-beam computed tomography (CBCT) is a well-established technique for adaptive radiation therapy (ART), it cannot be directly used for adaptive proton therapy (APT) treatments because the stopping power ratio (SPR) cannot be estimated from CBCT images. PURPOSE To address this limitation, deep learning methods have been suggested for generating pseudo-CT (pCT) images from CBCT images. Although convolutional neural networks (CNNs) have shown consistent improvement in the pCT literature, further enhancements are still needed to make them suitable for clinical applications. METHODS The authors introduce the 3D vision transformer (ViT) block, studying its performance at various stages of the proposed architectures. Additionally, they conduct a retrospective analysis of a dataset that includes 259 image pairs from 59 patients who underwent treatment for head and neck cancer. The dataset is partitioned into 80% for training, 10% for validation, and 10% for testing purposes. RESULTS The SPR maps obtained from the pCT using the proposed method present an absolute relative error of less than 5% with respect to those computed from the planning CT, thus improving on the results obtained directly from CBCT. CONCLUSIONS We introduce an enhanced ViT3D architecture for pCT image generation from CBCT images, reducing SPR error within clinical margins for APT workflows. The new method minimizes bias compared to CT-based SPR estimation and dose calculation, signaling a promising direction for future research in this field. However, further research is needed to assess the robustness and generalizability across different medical imaging applications.
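The acceptance criterion above, an absolute relative SPR error under 5% with respect to the planning CT, reduces to a few lines of array code. This is a hedged sketch, not the authors' implementation: the function name, the body-mask argument, and the use of a mean voxel-wise error are assumptions of this example.

```python
import numpy as np

def spr_relative_error(spr_pct, spr_ct, body_mask, tolerance=0.05):
    """Voxel-wise absolute relative error between the SPR map derived from
    a pseudo-CT and the one derived from the planning CT, evaluated inside
    the patient body mask. Returns the mean error and whether it falls
    within the given tolerance (5% in the criterion quoted above)."""
    # Guard against division by near-zero SPR values (e.g. air voxels).
    err = np.abs(spr_pct - spr_ct) / np.maximum(np.abs(spr_ct), 1e-6)
    mean_err = err[body_mask].mean()
    return mean_err, mean_err < tolerance
```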
Affiliation(s)
- David Viar-Hernandez
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Alejandro Mazal
- Centro de Protonterapia Quironsalud, Servicio de física médica, Madrid, Spain
- Borja Rodriguez-Vila
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Norberto Malpica
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
- Angel Torrado-Carvajal
- Universidad Rey Juan Carlos, Medical Image Analysis and Biometry Laboratory, Madrid, Spain
3
Wen X, Zhao C, Zhao B, Yuan M, Chang J, Liu W, Meng J, Shi L, Yang S, Zeng J, Yang Y. Application of deep learning in radiation therapy for cancer. Cancer Radiother 2024;28:208-217. PMID: 38519291. DOI: 10.1016/j.canrad.2023.07.015.
Abstract
In recent years, with the development of artificial intelligence, deep learning has been gradually applied to clinical treatment and research. It has also found its way into radiotherapy, a crucial method for cancer treatment. This study summarizes the commonly used and latest deep learning algorithms (including transformer and diffusion models), introduces the workflows of different radiotherapy modalities, illustrates the application of different algorithms in different radiotherapy modules, and discusses the limitations and challenges of deep learning in the field of radiotherapy, so as to support the development of automated radiotherapy for cancer.
Affiliation(s)
- X Wen
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- C Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Minhang District, Shanghai, China
- B Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- M Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- J Chang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- W Liu
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Meng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- L Shi
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- S Yang
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- J Zeng
- Cancer Institute of the Affiliated Hospital of Qingdao University and Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China
- Y Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
4
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, which revealed that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. In order to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5
Rusanov B, Hassan GM, Reynolds M, Sabet M, Rowshanfarzad P, Bucknell N, Gill S, Dass J, Ebert M. Transformer CycleGAN with uncertainty estimation for CBCT based synthetic CT in adaptive radiotherapy. Phys Med Biol 2024;69:035014. PMID: 38198726. DOI: 10.1088/1361-6560/ad1cfc.
Abstract
Objective. Clinical implementation of synthetic CT (sCT) from cone-beam CT (CBCT) for adaptive radiotherapy necessitates a high degree of anatomical integrity, Hounsfield unit (HU) accuracy, and image quality. To achieve these goals, a vision transformer and anatomically sensitive loss functions are described. Better quantification of image quality is achieved using the alignment-invariant Fréchet inception distance (FID), and uncertainty estimation for sCT risk prediction is implemented in a scalable plug-and-play manner. Approach. Baseline U-Net, generative adversarial network (GAN), and CycleGAN models were trained to identify shortcomings in each approach. The proposed CycleGAN-Best model was empirically optimized based on a large ablation study and evaluated using classical image quality metrics, FID, gamma index, and a segmentation analysis. Two uncertainty estimation methods, Monte-Carlo Dropout (MCD) and test-time augmentation (TTA), were introduced to model epistemic and aleatoric uncertainty. Main results. FID was correlated to blind observer image quality scores with a correlation coefficient of -0.83, validating the metric as an accurate quantifier of perceived image quality. The FID and mean absolute error (MAE) of CycleGAN-Best were 42.11 ± 5.99 and 25.00 ± 1.97 HU, compared to 63.42 ± 15.45 and 31.80 HU for CycleGAN-Baseline, and 144.32 ± 20.91 and 68.00 ± 5.06 HU for the CBCT, respectively. Gamma 1%/1 mm pass rates were 98.66 ± 0.54% for CycleGAN-Best, compared to 86.72 ± 2.55% for the CBCT. TTA and MCD-based uncertainty maps were well spatially correlated with poor synthesis outputs. Significance. Anatomical accuracy was achieved by suppressing CycleGAN-related artefacts. FID better discriminated image quality, where alignment-based metrics such as MAE erroneously suggest poorer outputs perform better. Uncertainty estimation for sCT was shown to correlate with poor outputs and has clinical relevance toward model risk assessment and quality assurance. The proposed model and accompanying evaluation and risk assessment tools are necessary additions to achieve clinically robust sCT generation models.
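Both uncertainty estimators described above are model-agnostic wrappers around repeated inference, which is what makes them "plug-and-play". The toy NumPy sketch below illustrates the idea, with `predict_sct` standing in for the trained generator (in the paper the CycleGAN's dropout layers are kept active at test time); all names and the Gaussian-noise TTA perturbation are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_sct(cbct, drop_p=0.0):
    """Stand-in for a trained generator: a toy mapping with optional
    dropout on an internal feature map. In practice this would be the
    CycleGAN generator with dropout left enabled at inference."""
    features = np.stack([cbct, cbct ** 2 / 1000.0])   # toy "feature maps"
    if drop_p > 0.0:
        mask = rng.random(features.shape) >= drop_p
        features = features * mask / (1.0 - drop_p)   # inverted dropout
    return features.sum(axis=0)

def mc_dropout_uncertainty(cbct, n_samples=20, drop_p=0.2):
    """Epistemic uncertainty (MCD): repeat stochastic forward passes and
    take the per-voxel standard deviation across samples."""
    samples = np.stack([predict_sct(cbct, drop_p) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

def tta_uncertainty(cbct, n_samples=20, noise_sd=5.0):
    """Aleatoric proxy (TTA): perturb the input (Gaussian noise here;
    flips and shifts are common too) and take the per-voxel spread."""
    samples = np.stack(
        [predict_sct(cbct + rng.normal(0.0, noise_sd, cbct.shape))
         for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

The returned standard-deviation maps play the role of the uncertainty maps that the study found to be spatially correlated with poor synthesis regions.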
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mark Reynolds
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mahsheed Sabet
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Nicholas Bucknell
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Suki Gill
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Joshua Dass
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Australian Centre for Quantitative Imaging, University of Western Australia, Perth, Western Australia, Australia
- School of Medicine and Public Health, University of Wisconsin, Madison WI, United States of America
6
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024;129:133-151. PMID: 37740838. DOI: 10.1007/s11547-023-01708-4.
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has recently changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategy was "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy", and only original articles up to 01.11.2022 were considered. RESULTS A total of 402 studies were obtained using the previously mentioned search strategy on PubMed and Embase. The analysis was performed on a total of 84 papers obtained following the complete selection process. Radiomics application to IGRT was analyzed in 23 papers, while a total of 61 papers were focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics seem to significantly impact IGRT in all phases of the RT workflow, even if the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
- UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
- Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
- Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
- Radiation Oncology, Department of Radiological, Policlinico Umberto I, Rome, Italy
- Oncological and Pathological Sciences, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
- Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
7
Gong Z, Li X, Shi M, Cai G, Chen S, Ye Z, Gan X, Yang R, Wang R, Chen Z. Measuring the binary thickness of buccal bone of anterior maxilla in low-resolution cone-beam computed tomography via a bilinear convolutional neural network. Quant Imaging Med Surg 2023;13:8053-8066. PMID: 38106266. PMCID: PMC10722026. DOI: 10.21037/qims-23-744.
Abstract
Background The thickness of the buccal bone of the anterior maxilla is an important aesthetic-determining factor for dental implants, divided into a thick (≥1 mm) and a thin type (<1 mm). However, as a micro-scale structure evaluated through low-resolution cone-beam computed tomography (CBCT), its thickness measurement is error-prone given the large number of patients and the relative inexperience of primary dentists. Further, the challenges of deep learning-based analysis of the binary thickness of buccal bone include the substantial real-world variance caused by pixel error, the extraction of fine-grained features, and burdensome annotations. Methods This study built a bilinear convolutional neural network (BCNN) with 2 convolutional neural network (CNN) backbones and a bilinear pooling module to predict the binary thickness of buccal bone (thick or thin) of the anterior maxilla in an end-to-end manner. The methods of 5-fold cross-validation and model ensemble were adopted at the training and testing stages. The visualization methods of Gradient-Weighted Class Activation Mapping (Grad-CAM), Guided Grad-CAM, and layer-wise relevance propagation (LRP) were used to reveal the important features on which the model focused. The performance metrics and efficacy were compared between the BCNN, dentists of different clinical experience (i.e., dental student, junior dentist, and senior dentist), and the fusion of BCNN and dentists to investigate the clinical feasibility of the BCNN. Results Based on the dataset of 4,000 CBCT images from 1,000 patients (aged 36.15 ± 13.09 years), the BCNN with a visual geometry group (VGG)16 backbone achieved an accuracy of 0.870 [95% confidence interval (CI): 0.838-0.902] and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.924 (95% CI: 0.896-0.948). Compared with conventional CNNs, the BCNN precisely located the buccal bone wall over irrelevant regions. The BCNN generally outperformed the expert-level dentists. The clinical diagnostic performance of the dentists was improved with the assistance of the BCNN. Conclusions The application of the BCNN to the quantitative analysis of binary buccal bone thickness validated the model's excellent ability of subtle feature extraction and achieved expert-level performance. This work signals the potential of fine-grained image recognition networks for the precise quantitative analysis of micro-scale structures.
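Bilinear pooling, the module that distinguishes a BCNN from a plain CNN, combines the two backbones' feature maps by an outer product averaged over spatial locations, which is what makes the descriptor sensitive to fine-grained texture such as a thin bone wall. A minimal NumPy sketch follows, assuming feature maps already extracted by the two backbones; the signed square root and L2 normalisation follow the standard BCNN recipe, and the function name is this example's own.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two CNN feature maps of shape (C1, H, W) and
    (C2, H, W) over the same image: average the outer product of the two
    channel vectors across all spatial locations, then apply the usual
    signed square root and L2 normalisation. The result is a (C1*C2,)
    fine-grained descriptor fed to the classifier head."""
    c1, h, w = feat_a.shape
    c2 = feat_b.shape[0]
    a = feat_a.reshape(c1, h * w)
    b = feat_b.reshape(c2, h * w)
    pooled = (a @ b.T) / (h * w)                  # (C1, C2) bilinear matrix
    vec = pooled.reshape(-1)
    vec = np.sign(vec) * np.sqrt(np.abs(vec))     # signed square root
    return vec / (np.linalg.norm(vec) + 1e-12)    # L2 normalisation
```

With identical backbones (as in a symmetric BCNN), `feat_a` and `feat_b` come from the same network and the pooled matrix captures channel co-occurrences within one feature extractor.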
Affiliation(s)
- Zhuohong Gong
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Xiaohui Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Mengru Shi
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Gengbin Cai
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Shijie Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Zejun Ye
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Xuejing Gan
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruihan Yang
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
- Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Zetao Chen
- Hospital of Stomatology, Guanghua School of Stomatology, Guangdong Provincial Key Laboratory of Stomatology, Sun Yat-sen University, Guangzhou, China
8
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022;49:6019-6054. PMID: 35789489. PMCID: PMC9543319. DOI: 10.1002/mp.15840.
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection domain methods employed in the CBCT correction literature. We review trends pertaining to publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mark Reynolds
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mahsheed Sabet
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Pejman Rowshan Farzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia