1
Rossi M, Belotti G, Mainardi L, Baroni G, Cerveri P. Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools. Comput Assist Surg (Abingdon) 2024;29:2327981. PMID: 38468391. DOI: 10.1080/24699322.2024.2327981.
Abstract
Radiotherapy commonly utilizes cone-beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for imaging at each treatment fraction. However, limitations such as a narrow field of view (FOV), beam hardening, scattered-radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. Reliable correction techniques are therefore necessary to remove artifacts and remap pixel intensities to Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes the raw CBCT to reduce scatter and remap HU values. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and to produce a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To show the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. Calibration of the simulated CBCT led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity to the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points in replicating the prescribed dose after calibration (53.78% vs 90.26%). Real data confirmed this, with slightly lower performance for the same criteria (65.36% vs 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
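The 3%/2 mm gamma criterion quoted above is a standard dose-comparison metric. The sketch below is a generic, simplified 1D global-gamma computation on made-up dose profiles, not the evaluation tool used in the study:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=2.0):
    """Global gamma pass rate for 1D dose profiles (simplified sketch).

    dd: dose-difference criterion as a fraction of the max reference dose.
    dta_mm: distance-to-agreement criterion in mm.
    """
    x = np.arange(len(dose_ref)) * spacing_mm
    dd_abs = dd * dose_ref.max()  # global normalization
    gammas = []
    for xi, di in zip(x, dose_ref):
        # Gamma at a reference point is the minimum combined
        # dose/distance discrepancy over all evaluated points.
        g2 = ((dose_eval - di) / dd_abs) ** 2 + ((x - xi) / dta_mm) ** 2
        gammas.append(np.sqrt(g2.min()))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# Identical profiles pass everywhere.
ref = np.linspace(0.0, 2.0, 50)
print(gamma_pass_rate(ref, ref.copy(), spacing_mm=1.0))  # 100.0
```

A clinical implementation would interpolate the evaluated dose between grid points and typically restricts the analysis to a dose threshold region; both are omitted here for brevity.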
Affiliation(s)
- Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
- Gabriele Belotti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Luca Mainardi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
2
Hu Y, Zhou H, Cao N, Li C, Hu C. Synthetic CT generation based on CBCT using improved vision transformer CycleGAN. Sci Rep 2024;14:11455. PMID: 38769329. PMCID: PMC11106312. DOI: 10.1038/s41598-024-61492-7.
Abstract
Cone-beam computed tomography (CBCT) is a crucial component of adaptive radiation therapy; however, it frequently suffers from artifacts and noise, significantly constraining its clinical utility. While CycleGAN is a widely employed method for CT image synthesis, it has notable limitations in capturing global features. To tackle these challenges, we introduce a refined unsupervised learning model, the improved vision transformer CycleGAN (IViT-CycleGAN). First, we integrate a U-net framework built upon ViT. Next, we augment the feed-forward neural network by incorporating deep convolutional networks. Finally, we enhance the stability of training by introducing a gradient penalty and adding an extra loss term to the generator loss. Experiments demonstrate from multiple perspectives that the synthetic CT (sCT) generated by our model has significant advantages over other unsupervised learning models, validating the clinical applicability and robustness of our model. In future clinical practice, our model has the potential to assist clinicians in formulating precise radiotherapy plans.
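The gradient-penalty stabilization mentioned in this abstract follows the general WGAN-GP idea: penalize the critic's input-gradient norm on random interpolates between real and generated batches. The sketch below is illustrative only; it uses a toy linear critic (whose input-gradient is known analytically) rather than the paper's network, where the gradient would come from autograd:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic f(x) = x . w; its input-gradient is simply w."""
    return x @ w

def gradient_penalty(real, fake, w, lam=10.0):
    """WGAN-GP-style penalty on interpolates between real and fake batches.

    For this linear critic the input-gradient at every point is w, so the
    penalty reduces to lam * (||w|| - 1)^2 per sample. A real model would
    obtain the per-sample gradient via autograd instead.
    """
    eps = rng.uniform(size=(real.shape[0], 1))
    interp = eps * real + (1.0 - eps) * fake  # random points between batches
    grad = np.tile(w, (interp.shape[0], 1))   # df/dx at each interpolate
    grad_norm = np.linalg.norm(grad, axis=1)
    return lam * np.mean((grad_norm - 1.0) ** 2)

real = rng.normal(size=(8, 4))
fake = rng.normal(size=(8, 4))
w = np.array([1.0, 0.0, 0.0, 0.0])  # unit-norm gradient -> zero penalty
print(gradient_penalty(real, fake, w))  # 0.0
```

The penalty is added to the critic loss during training; the paper's exact weighting and extra generator loss term are not specified at this level of detail.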
Affiliation(s)
- Yuxin Hu
- School of Computer and Software, Hohai University, Nanjing 211100, China
- Han Zhou
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Department of Radiation Oncology, The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing 210013, China
- Ning Cao
- School of Computer and Software, Hohai University, Nanjing 211100, China
- Can Li
- Engineering Research Center of TCM Intelligence Health Service, School of Artificial Intelligence and Information Technology, Nanjing University of Chinese Medicine, Nanjing 210023, China
- Can Hu
- School of Computer and Software, Hohai University, Nanjing 211100, China
3
Orhan K, Kocyigit D, Firincioglulari M, Adisen MZ, Kocyigit S. Quantitative assessment of image artifacts from zygoma implants on CBCT scans using different exposure parameters. Proc Inst Mech Eng H 2023;237:1082-1090. PMID: 37528643. DOI: 10.1177/09544119231190447.
Abstract
This study aimed to quantify artifacts from zygoma implants in cone-beam computed tomography (CBCT) images under different exposure parameters. Two cadaver heads, one with two zygoma implants on each side and the other serving as control, were scanned using 18 different exposure parameter settings. Quantitative analysis evaluated hypodense and hyperdense artifact percentages, calculated as the percentage of the image area occupied by each artifact type. In the qualitative analysis, artifacts were scored as absent (0), moderately present (1), or highly present (2) for hypodense halos, thin hypodense lines, and hyperdense lines. Artifacts were analyzed qualitatively and quantitatively using two-way ANOVA with post-hoc Tukey tests. In the qualitative analyses, zygoma implants showed a significant difference from the control group with regard to hyperdense and hypodense artifacts (p < 0.05). There was also a significant difference between the means according to FOV size (p < 0.05). The effect of voxel size was likewise significant: 400-micron voxels showed the highest hypodense artifact percentage, while 200-micron voxels showed the lowest. In conclusion, hypodense and hyperdense artifacts were significantly higher in the cadaver with zygoma implants than in the control. As FOV and voxel size increase, zygoma implants produce more hypodense artifacts, so smaller FOVs and voxel sizes should be used to prevent poor image quality of adjacent teeth.
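The artifact percentages described above (area fraction of hypodense and hyperdense pixels) can be sketched as a simple thresholding step. The thresholds and array here are hypothetical; the study's actual segmentation protocol is not specified at this level of detail:

```python
import numpy as np

def artifact_percentages(image, low_thr, high_thr):
    """Percent of pixels classified as hypodense (below low_thr) or
    hyperdense (above high_thr), relative to the whole image area.
    Thresholds are illustrative placeholders, not the study's values."""
    total = image.size
    hypo = 100.0 * np.count_nonzero(image < low_thr) / total
    hyper = 100.0 * np.count_nonzero(image > high_thr) / total
    return hypo, hyper

# One dark streak pixel and one bright streak pixel in a 2x2 patch.
patch = np.array([[0.0, 50.0], [200.0, 100.0]])
print(artifact_percentages(patch, low_thr=40.0, high_thr=150.0))  # (25.0, 25.0)
```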
Affiliation(s)
- Kaan Orhan
- Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Ankara University, Ankara, Turkey
- Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, Turkey
- Doruk Kocyigit
- Faculty of Dentistry, Department of Oral and Maxillofacial Surgery, Kirikkale University, Kirikkale, Turkey
- Mujgan Firincioglulari
- Faculty of Dentistry, Department of Dentomaxillofacial Radiology, Cyprus International University, Nicosia, Cyprus
- Mehmet Zahit Adisen
- Faculty of Dentistry, Department of Oral and Maxillofacial Surgery, Kirikkale University, Kirikkale, Turkey
- Seda Kocyigit
- Department of Oral and Maxillofacial Surgery, Ministry of Health Turkey, Uskudar, Istanbul, Turkey
4
Deng L, Zhang Y, Wang J, Huang S, Yang X. Improving performance of medical image alignment through super-resolution. Biomed Eng Lett 2023;13:397-406. PMID: 37519883. PMCID: PMC10382383. DOI: 10.1007/s13534-023-00268-w.
Abstract
Medical image alignment is an important tool for tracking patient conditions, but alignment quality is influenced by the effectiveness of low-dose cone-beam CT (CBCT) imaging and by patient characteristics. To address these two issues, we propose an unsupervised alignment method that incorporates a super-resolution preprocessing step. We constructed the model on a private clinical dataset and validated the enhancement that super-resolution brings to alignment using clinical and public data. Across all three experiments, we demonstrate that higher-resolution data yield better alignment results. To constrain both similarity and structure, a new loss function is proposed: the Pearson correlation coefficient combined with regional mutual information. On all test samples, the proposed loss function outperforms the common loss functions and improves alignment accuracy. Subsequent experiments verified that, combined with the proposed loss function, super-resolution-processed data boost alignment by up to 9.58%. Moreover, this boost is not limited to a single model but is effective across different alignment models. These experiments demonstrate that the unsupervised alignment method with super-resolution preprocessing proposed in this study effectively improves alignment and plays an important role in tracking patient conditions over time.
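The Pearson-correlation half of the combined loss described above can be sketched as follows. This is a generic formulation (the regional mutual information term is omitted), not the authors' implementation:

```python
import numpy as np

def pearson_loss(fixed, moved):
    """Similarity loss: 1 - Pearson correlation between two images.

    Returns ~0 when the images are perfectly linearly correlated and
    approaches 2 when they are anti-correlated; intensity scaling and
    offsets between the images are ignored by design.
    """
    f = fixed.ravel() - fixed.mean()
    m = moved.ravel() - moved.mean()
    r = (f @ m) / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-8)
    return 1.0 - r

img = np.arange(16.0).reshape(4, 4)
# A linear intensity remapping of the same image is a perfect match.
print(round(pearson_loss(img, 2.0 * img + 1.0), 6))  # 0.0
```

In an alignment network this term would be combined (with some weighting, not specified here) with the regional mutual information term and minimized over the deformation parameters.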
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, Heilongjiang, China
- Yuanzhi Zhang
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, Heilongjiang, China
- Jing Wang
- Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou 510520, Guangdong, China
- Sijuan Huang
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, Guangdong, China
- Xin Yang
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, Guangdong, China
5
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022;49:6019-6054. PMID: 35789489. PMCID: PMC9543319. DOI: 10.1002/mp.15840.
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection-domain methods employed in the CBCT correction literature. We review trends in publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep-learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia 6009, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Mark Reynolds
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Mahsheed Sabet
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia 6009, Australia
- Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia 6009, Australia
- Pejman Rowshan Farzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia 6009, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia 6009, Australia