1
Madesta F, Sentker T, Rohling C, Gauer T, Schmitz R, Werner R. Monte Carlo-based simulation of virtual 3- and 4-dimensional cone-beam computed tomography from computed tomography images: An end-to-end framework and a deep learning-based speedup strategy. Phys Imaging Radiat Oncol 2024; 32:100644. PMID: 39381614; PMCID: PMC11458955; DOI: 10.1016/j.phro.2024.100644.
Abstract
Background and purpose: In radiotherapy, precise comparison of fan-beam computed tomography (CT) and cone-beam CT (CBCT) is a commonplace yet intricate task. This paper proposes a publicly available end-to-end pipeline, featuring an intrinsic deep-learning-based speedup technique, for generating virtual 3D and 4D CBCT from CT images.
Materials and methods: Physical properties, derived from CT intensity information, are obtained through automated whole-body segmentation of organs and tissues. Subsequently, Monte Carlo (MC) simulations generate CBCT X-ray projections for a full circular arc around the patient, employing acquisition settings matched to a clinical CBCT scanner (modeled according to Varian TrueBeam specifications). In addition to 3D CBCT reconstruction, a 4D CBCT can be simulated with a fully time-resolved MC simulation by incorporating respiratory correspondence modeling. To address the computational complexity of MC simulations, a deep-learning-based speedup technique is developed and integrated that uses projection data simulated with a reduced number of photon histories to predict a projection matching the image characteristics and signal-to-noise ratio of the reference simulation.
Results: MC simulations with the default parameter setting yield CBCT images in high agreement with ground-truth data acquired by a clinical CBCT scanner. Furthermore, the proposed speedup technique achieves up to a 20-fold speedup while preserving image features and resolution compared to the reference simulation.
Conclusion: The presented MC pipeline and speedup approach provide an openly accessible end-to-end framework for researchers and clinicians to investigate limitations of image-guided radiation therapy workflows built on both (4D) CT and CBCT images.
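The speedup rationale above can be grounded in basic counting statistics: the signal-to-noise ratio (SNR) of a Poisson-noise-limited projection grows with the square root of the number of photon histories. A toy numpy sketch (illustrative only; not the authors' pipeline, and the per-pixel counts are invented) showing why 20-fold fewer histories costs roughly √20 ≈ 4.5× in SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = 200_000  # number of detector pixels in the toy projection

# Invented expected photon counts per pixel for a "reference" simulation and
# a "fast" simulation with 20x fewer photon histories.
n_ref, n_fast = 1000.0, 50.0

ref = rng.poisson(lam=n_ref, size=pixels).astype(float)
fast = rng.poisson(lam=n_fast, size=pixels).astype(float)

snr_ref = ref.mean() / ref.std()     # ~ sqrt(1000)
snr_fast = fast.mean() / fast.std()  # ~ sqrt(50)
ratio = snr_ref / snr_fast           # ~ sqrt(20), about 4.5
```

That SNR gap between the low-count and reference simulations is exactly what the learned speedup model is meant to close.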
Affiliation(s)
- Frederic Madesta
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Thilo Sentker
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Clemens Rohling
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Tobias Gauer
- Department of Radiotherapy and Radiation Oncology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Rüdiger Schmitz
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- René Werner
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
2
Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2024. PMID: 39138806; DOI: 10.1007/s00066-024-02277-9.
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Adrian Thummerer
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- Bavarian Cancer Research Center (BZKF), Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
3
Zhao H, Liang X, Meng B, Dohopolski M, Choi B, Cai B, Lin MH, Bai T, Nguyen D, Jiang S. Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy. Phys Imaging Radiat Oncol 2024; 31:100610. PMID: 39132556; PMCID: PMC11315102; DOI: 10.1016/j.phro.2024.100610.
Abstract
Background and purpose: Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges, and the resulting segmentations often fail to reach clinical acceptability. In particular, current approaches overlook the wealth of information available from the initial planning and prior adaptive fractions that could enhance segmentation precision.
Materials and methods: We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data, followed by fine-tuning on a clinical dataset.
Results: Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory.
Conclusions: Our proposed model surpasses baseline segmentation frameworks by effectively utilizing information from prior fractions, reducing the effort required of clinicians to revise auto-segmentation results. Moreover, it works together with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.
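The Dice similarity coefficient reported above is a simple overlap measure between binary masks; a minimal numpy sketch of the generic definition (illustrative; not the paper's evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1D "masks": overlap of 2 voxels, each mask has 3 voxels -> 2*2/6 ≈ 0.667
a = np.array([1, 1, 1, 0, 0, 0])
b = np.array([0, 1, 1, 1, 0, 0])
```

In practice the same formula is applied per organ to 3D masks, then averaged across structures as in the results quoted above.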
Affiliation(s)
- Hengrui Zhao
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Boyu Meng
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Michael Dohopolski
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Byongsu Choi
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Bin Cai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ti Bai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
4
Wang Z, Cao N, Sun J, Zhang H, Zhang S, Ding J, Xie K, Gao L, Ni X. Uncertainty estimation- and attention-based semi-supervised models for automatically delineate clinical target volume in CBCT images of breast cancer. Radiat Oncol 2024; 19:66. PMID: 38811994; DOI: 10.1186/s13014-024-02455-0.
Abstract
OBJECTIVES: Accurate segmentation of the clinical target volume (CTV) in CBCT images makes it possible to observe changes in the CTV during radiotherapy and lays a foundation for the subsequent implementation of adaptive radiotherapy (ART). However, segmentation is challenging due to the poor quality of CBCT images and the difficulty of obtaining target volumes. An uncertainty estimation- and attention-based semi-supervised model called residual convolutional block attention-uncertainty aware mean teacher (RCBA-UAMT) was proposed to automatically delineate the CTV in cone-beam computed tomography (CBCT) images of breast cancer.
METHODS: A total of 60 patients who underwent radiotherapy after breast-conserving surgery were enrolled in this study, involving 60 planning CTs and 380 CBCTs. RCBA-UAMT was built by integrating residual and attention modules into the backbone network 3D UNet. The attention module adjusts the channel and spatial weights of the extracted image features. The proposed design can train the model and segment CBCT images with a small amount of labeled data (5%, 10%, and 20%) and a large amount of unlabeled data. Four evaluation metrics, namely the Dice similarity coefficient (DSC), Jaccard index, average surface distance (ASD), and 95% Hausdorff distance (95HD), were used to assess the model's segmentation performance quantitatively.
RESULTS: The proposed method achieved an average DSC, Jaccard, 95HD, and ASD of 82%, 70%, 8.93 mm, and 1.49 mm, respectively, for CTV delineation on CBCT images of breast cancer. Compared with the three classical methods of mean teacher, uncertainty-aware mean teacher, and uncertainty rectified pyramid consistency, DSC and Jaccard increased by 7.89-9.33% and 14.75-16.67%, respectively, while 95HD and ASD decreased by 33.16-67.81% and 36.05-75.57%, respectively. Comparative experiments with different proportions of labeled data (5%, 10%, and 20%) showed significant differences in the DSC, Jaccard, and 95HD metrics for 5% versus 10% and 5% versus 20% labeled data, whereas no significant differences were observed for 10% versus 20% across all metrics. Therefore, only 10% labeled data suffices to achieve the experimental objective.
CONCLUSIONS: Using the proposed RCBA-UAMT, the CTV in breast cancer CBCT images can be delineated reliably with a small amount of labeled data. These delineated images can be used to observe changes in the CTV and lay the foundation for the follow-up implementation of ART.
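The four metrics above split into overlap measures (DSC, Jaccard) and surface-distance measures (ASD, 95HD). A generic numpy sketch of the surface-distance pair on point clouds (illustrative; a real evaluation first extracts boundary voxels from the masks, which is omitted here):

```python
import numpy as np

def surface_metrics(pts_a: np.ndarray, pts_b: np.ndarray, q: float = 95.0):
    """Average surface distance (ASD) and q-th percentile Hausdorff distance
    between two point clouds sampled from contour surfaces."""
    # Pairwise Euclidean distances via broadcasting: shape (len(a), len(b)).
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a2b = d.min(axis=1)  # each point of A to its nearest point of B
    b2a = d.min(axis=0)  # and vice versa
    asd = (a2b.mean() + b2a.mean()) / 2.0
    hd_q = max(np.percentile(a2b, q), np.percentile(b2a, q))
    return asd, hd_q

# Two parallel contours 1 mm apart: every point is exactly 1 mm from the
# other surface, so ASD and 95HD are both 1.0 mm.
a = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
b = a + [1.0, 0.0]
asd, hd95 = surface_metrics(a, b)
```

Taking the 95th percentile rather than the maximum is what makes 95HD robust to a few outlier boundary points.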
Affiliation(s)
- Ziyi Wang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Nannan Cao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Jiawei Sun
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Heng Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Sai Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Jiangyi Ding
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Kai Xie
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Liugang Gao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Xinye Ni
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China.
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China.
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China.
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China.
5
Li Z, Gan G, Guo J, Zhan W, Chen L. Accurate object localization facilitates automatic esophagus segmentation in deep learning. Radiat Oncol 2024; 19:55. PMID: 38735947; PMCID: PMC11088757; DOI: 10.1186/s13014-024-02448-z.
Abstract
BACKGROUND: Automatic esophagus segmentation remains a challenging task due to the organ's small size, low contrast, and large shape variation. We aimed to improve the performance of deep-learning esophagus segmentation by applying a strategy of locating the object first and then performing the segmentation task.
METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object-localization network, was employed to locate the center of the esophagus for each slice. Subsequently, 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the object center updated according to the 3D U-net model. The Dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes of delineation performance. The characteristics of the esophageal contours automatically delineated by the 2D U-net and 3D U-net models were summarized, the impact of object-localization accuracy on delineation performance was analyzed, and delineation performance in different segments of the esophagus was also summarized.
RESULTS: The mean Dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively; the corresponding 95% Hausdorff distances were 6.55, 3.57, and 3.76. Compared with the 2D U-net, the 3D U-net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average Dice coefficient improved by 5.5% in cases with a Dice coefficient less than 0.75, but by only 0.3% in cases with a Dice coefficient greater than 0.75. Dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation than for the other regions.
CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object localization can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially in cases with poor delineation results.
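The locate-then-segment strategy reduces, at its core, to cropping a fixed-size patch around the predicted object center before running the segmentation network. A hypothetical numpy sketch of that cropping step (function name and patch size are invented for illustration; the papers' actual patch handling may differ):

```python
import numpy as np

def crop_around_center(image, center, size=8):
    """Crop a size x size patch centered on `center`, clamping the patch so
    it stays fully inside the image near the borders."""
    h, w = image.shape
    half = size // 2
    # Clamp the top-left corner so the full patch fits within the image.
    r0 = min(max(center[0] - half, 0), h - size)
    c0 = min(max(center[1] - half, 0), w - size)
    return image[r0:r0 + size, c0:c0 + size]

img = np.arange(100).reshape(10, 10)
corner = crop_around_center(img, center=(1, 1))   # clamped to the image corner
inner = crop_around_center(img, center=(5, 5))    # fits without clamping
```

Segmenting only this patch lets the network spend its capacity on a small, low-contrast structure instead of the full field of view.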
Affiliation(s)
- Zhibin Li
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Guanghui Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jian Guo
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Long Chen
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China.
6
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep-learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the reviewed methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
7
Li Y, Shao HC, Liang X, Chen L, Li R, Jiang S, Wang J, Zhang Y. Zero-Shot Medical Image Translation via Frequency-Guided Diffusion Models. IEEE Trans Med Imaging 2024; 43:980-993. PMID: 37851552; PMCID: PMC11000254; DOI: 10.1109/tmi.2023.3325703.
Abstract
Recently, the diffusion model has emerged as a superior generative model that can produce high-quality, realistic images. However, for medical image translation, existing diffusion models are deficient in accurately retaining structural information, since the structural details of source-domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion; yet the integrity of anatomical structures is extremely important in medical images. For instance, errors in image translation may distort, shift, or even remove structures and tumors, leading to incorrect diagnoses and inadequate treatment. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution test data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model toward structure-preserving image translation. By design, FGDM allows zero-shot learning: it can be trained solely on data from the target domain and used directly for source-to-target domain translation without any exposure to source-domain data during training. We evaluated it on three cone-beam CT (CBCT)-to-CT translation tasks for different anatomical sites and on a cross-institutional MR imaging translation task. FGDM outperformed the state-of-the-art GAN-based, VAE-based, and diffusion-based methods in the metrics of Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), showing significant advantages in zero-shot medical image translation.
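The frequency-domain filters in FGDM build on ordinary Fourier filtering, where low spatial frequencies carry the coarse anatomy to be preserved. A generic numpy low-pass sketch (illustrative only; this is not the FGDM implementation, and `keep_fraction` is an invented parameter):

```python
import numpy as np

def low_pass(image: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Keep only the lowest spatial frequencies of a 2D image."""
    f = np.fft.fftshift(np.fft.fft2(image))  # DC component moved to the center
    h, w = image.shape
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * keep_fraction))
    rw = max(1, int(w * keep_fraction))
    mask = np.zeros(f.shape, dtype=bool)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = True  # central low-freq box
    return np.real(np.fft.ifft2(np.fft.ifftshift(np.where(mask, f, 0))))

# A constant image is pure DC (zero frequency), so it passes through unchanged.
flat = np.full((32, 32), 5.0)
out = low_pass(flat)
```

In FGDM-style guidance, such filtered components of the source image constrain the reverse diffusion so the coarse anatomy survives translation while high-frequency appearance is resynthesized.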
8
Zhuang T, Parsons D, Desai N, Gibbard G, Keilty D, Lin MH, Cai B, Nguyen D, Chiu T, Godley A, Pompos A, Jiang S. Simulation and pre-planning omitted radiotherapy (SPORT): a feasibility study for prostate cancer. Biomed Phys Eng Express 2024; 10:025019. PMID: 38241733; DOI: 10.1088/2057-1976/ad20aa.
Abstract
This study explored the feasibility of on-couch intensity-modulated radiotherapy (IMRT) planning for prostate cancer (PCa) on a cone-beam CT (CBCT)-based online adaptive RT platform without an individualized pre-treatment plan and contours. Ten patients with PCa previously treated with image-guided IMRT (60 Gy/20 fractions) were selected. In contrast to the routine online adaptive RT workflow, a novel approach was employed in which the same preplan, optimized on one reference patient, was adapted to generate individual on-couch/initial plans for the other nine test patients using the Ethos emulator. Simulation CTs of the test patients were used as simulated online CBCTs (sCBCT) for emulation. Quality assessments were conducted on synthetic CTs (sCT). Dosimetric comparisons were performed between on-couch plans, on-couch plans recomputed on the sCBCT, and individually optimized plans for the test patients. The median mean absolute difference between sCT and sCBCT was 74.7 HU (range 69.5-91.5 HU). The average CTV/PTV coverage by the prescription dose was 100.0%/94.7%, and normal-tissue constraints were met for the nine test patients in on-couch plans on the sCT. Recalculating on-couch plans on the sCBCT showed about a 0.7% reduction in PTV coverage and a 0.6% increase in the hotspot, and the dose difference for the OARs was negligible (<0.5 Gy). Hence, initial IMRT plans for new patients can be generated by adapting a reference patient's preplan with online contours, with quality similar to the conventional approach of an individually optimized plan on the simulation CT. Further study is needed to identify selection criteria for the patient anatomy most amenable to this workflow.
Affiliation(s)
- Tingliang Zhuang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- David Parsons
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Neil Desai
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Grant Gibbard
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Dana Keilty
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Mu-Han Lin
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Bin Cai
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Dan Nguyen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Tsuicheng Chiu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Andrew Godley
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Arnold Pompos
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
- Steve Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, United States of America
9
O'Connell J, Weil MD, Bazalova-Carter M. Non-coplanar lung SABR treatments delivered with a gantry-mounted x-ray tube. Phys Med Biol 2024; 69:025002. PMID: 38035372; DOI: 10.1088/1361-6560/ad111a.
Abstract
Objective: To create two non-coplanar, stereotactic ablative radiotherapy (SABR) lung patient treatment plans compliant with the Radiation Therapy Oncology Group (RTOG) 0813 dosimetric criteria using a simple, isocentric therapy with kilovoltage arcs (SITKA) system designed to provide low-cost external radiotherapy treatments for low- and middle-income countries (LMICs).
Approach: A treatment machine design has been proposed featuring a 320 kVp x-ray tube mounted on a gantry. A deep-learning cone-beam CT (CBCT) to synthetic CT (sCT) method was employed to remove the additional cost of planning CTs. A novel inverse treatment planning approach using GPU backprojection was used to create a highly non-coplanar treatment plan with circular beam shapes generated by an iris collimator. Treatments were planned and simulated using the TOPAS Monte Carlo (MC) code for two lung patients. Dose distributions were compared to 6 MV volumetric modulated arc therapy (VMAT) planned in Eclipse on the same cases for a TrueBeam linac, obeying the RTOG 0813 protocols for lung SABR treatments with a prescribed dose of 50 Gy.
Main results: The low-cost SITKA treatments were compliant with all RTOG 0813 dosimetric criteria. SITKA treatments showed, on average, a 6.7 and 4.9 Gy reduction of the maximum dose in soft-tissue organs at risk (OARs) compared to VMAT for the two patients, respectively, accompanied by a small increase in mean dose of 0.17 and 0.30 Gy in soft-tissue OARs.
Significance: The proposed SITKA system offers a maximally low-cost, effective alternative to conventional radiotherapy systems for lung cancer patients, particularly in low-income countries. The system's non-coplanar, isocentric approach, coupled with deep-learning CBCT-to-sCT conversion and GPU backprojection-based inverse treatment planning, offers lower maximum doses in OARs and conformity comparable to VMAT plans at a fraction of the cost of conventional radiotherapy.
Affiliation(s)
- Michael D Weil
- Sirius Medicine LLC, Half Moon Bay, CA, United States of America

10
de Hond YJM, van Haaren PMA, Verrijssen AE, Tijssen RHN, Hurkmans CW. Inter-observer variability in library plan selection on iterative CBCT and synthetic CT images of cervical cancer patients. J Appl Clin Med Phys 2023; 24:e14170. PMID: 37788333. PMCID: PMC10647946. DOI: 10.1002/acm2.14170.
Abstract
INTRODUCTION In the Library-of-Plans (LoP) approach, correct plan selection is essential for delivering radiotherapy treatment accurately. However, poor image quality of the cone-beam computed tomography (CBCT) may introduce inter-observer variability and thereby hamper accurate plan selection. In this study, we investigated whether new techniques that improve CBCT image quality also improve the consistency and accuracy of LoP selection in cervical cancer patients. MATERIALS AND METHODS CBCT images of 12 patients were used to investigate the inter-observer variability of plan selection based on different CBCT image types. Six observers were asked to individually select a plan based on clinical X-ray Volumetric Imaging (XVI) CBCT, iteratively reconstructed CBCT (iCBCT), and synthetic CT (sCT) images. Selections were performed before and after a consensus meeting with the entire group, in which guidelines were created. All observers also scored the image quality and the plan selection procedure. For plan selection, the Fleiss' kappa (κ) statistical test was used to determine the inter-observer variability within each image type. RESULTS The agreement between observers was significantly higher on sCT than on CBCT. The consensus meeting reduced both selection time and inter-observer variability; the guidelines contributed to the overall improvement in plan selection. Before the meeting, the gold standard was selected in 76% of the cases on XVI CBCT, 74% on iCBCT, and 76% on sCT. After the meeting, the gold standard was selected in 83% of the cases on XVI CBCT, 81% on iCBCT, and 90% on sCT. CONCLUSION The use of sCTs can increase the agreement of plan selection among observers and leads to the gold standard being selected more often. Clear guidelines for plan selection should be implemented in order to benefit from the increased image quality, more accurate selection, and decreased inter-observer variability.
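Fleiss' kappa, the agreement statistic used above, is straightforward to compute from a cases × categories table of rating counts. A minimal NumPy sketch, using hypothetical plan-selection data rather than the study's:

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a (cases x categories) matrix of rating counts.

    counts[i, j] = number of observers who assigned case i to category j;
    every row must sum to the same number of observers n.
    """
    counts = np.asarray(counts, dtype=float)
    n_cases, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-case observed agreement P_i
    p_i = (np.sum(counts**2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_cases * n_raters)
    p_e = np.sum(p_j**2)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 cases, 6 observers, 3 library plans (A/B/C)
ratings = np.array([
    [6, 0, 0],   # unanimous
    [5, 1, 0],
    [3, 3, 0],
    [0, 2, 4],
])
print(round(fleiss_kappa(ratings), 3))  # → 0.356
```

For production use, `statsmodels.stats.inter_rater.fleiss_kappa` implements the same statistic.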
Affiliation(s)
- Yvonne J. M. de Hond
- Department of Radiation Oncology, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
- Rob H. N. Tijssen
- Department of Radiation Oncology, Catharina Hospital Eindhoven, Eindhoven, The Netherlands
- Coen W. Hurkmans
- Department of Radiation Oncology, Catharina Hospital Eindhoven, Eindhoven, The Netherlands

11
Li Z, Zhang Q, Li H, Kong L, Wang H, Liang B, Chen M, Qin X, Yin Y, Li Z. Using RegGAN to generate synthetic CT images from CBCT images acquired with different linear accelerators. BMC Cancer 2023; 23:828. PMID: 37670252. PMCID: PMC10478281. DOI: 10.1186/s12885-023-11274-7.
Abstract
BACKGROUND The goal was to investigate the feasibility of the registration generative adversarial network (RegGAN) model in image conversion for performing adaptive radiation therapy on the head and neck, and its stability across different cone beam computed tomography (CBCT) models. METHODS A total of 100 CBCT and CT images of patients diagnosed with head and neck tumors were utilized for the training phase, whereas the testing phase involved 40 distinct patients obtained from four different linear accelerators. The RegGAN model was trained and tested to evaluate its performance. The generated synthetic CT (sCT) image quality was compared to that of planning CT (pCT) images by employing metrics such as the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Moreover, the same radiation therapy plan was applied to both the sCT and pCT images to analyze the planning target volume (PTV) dose statistics and calculate the dose difference rate, further verifying the model's accuracy. RESULTS The generated sCT images had good image quality, and no significant differences were observed among the different CBCT models. The conversion effect achieved for Synergy was the best: the MAE decreased from 231.3 ± 55.48 to 45.63 ± 10.78; the PSNR increased from 19.40 ± 1.46 to 26.75 ± 1.32; the SSIM increased from 0.82 ± 0.02 to 0.85 ± 0.04. The quality improvement achieved by RegGAN-based sCT synthesis was clear, and no significant sCT synthesis differences were observed among different accelerators. CONCLUSION The sCT images generated by the RegGAN model had high image quality, and the RegGAN model exhibited a strong generalization ability across different accelerators, enabling its outputs to be used as reference images for performing adaptive radiation therapy on the head and neck.
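The HU-error metrics reported here (MAE, RMSE, PSNR) can be reproduced in a few lines of NumPy. This sketch uses random stand-in volumes, not real CT data, and assumes PSNR is computed against the reference image's dynamic range:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error (in HU when inputs are CT volumes)."""
    return np.mean(np.abs(a - b))

def rmse(a, b):
    """Root mean square error."""
    return np.sqrt(np.mean((a - b) ** 2))

def psnr(a, b, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    reference image's dynamic range."""
    if data_range is None:
        data_range = b.max() - b.min()
    return 20.0 * np.log10(data_range / rmse(a, b))

# Toy example on random "CT-like" volumes (hypothetical data)
rng = np.random.default_rng(0)
pct = rng.uniform(-1000, 1500, size=(8, 8, 8))   # stand-in planning CT
sct = pct + rng.normal(0, 20, size=pct.shape)    # stand-in synthetic CT
print(mae(sct, pct), psnr(sct, pct))
```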
Affiliation(s)
- Zhenkai Li
- Chengdu University of Technology, Chengdu, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Haodong Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Lingke Kong
- Manteia Technologies Co., Ltd., Xiamen, China
- Huadong Wang
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Benzhe Liang
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Mingming Chen
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xiaohang Qin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China

12
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023; 50:2607-2624. PMID: 36906915. PMCID: PMC10175241. DOI: 10.1002/mp.16351.
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. 
RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
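The uncertainty-weighted combination at the heart of DL-Recon can be sketched as a voxel-wise blend: where epistemic uncertainty is low the synthetic CT dominates, and where it is high the image falls back to the physics-based FBP reconstruction. The exponential weighting and the `beta` parameter below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def dl_recon_fuse(synth_ct, fbp, epistemic_sd, beta=1.0):
    """Spatially varying fusion in the spirit of DL-Recon (a sketch, not the
    paper's exact weighting). `beta` is a hypothetical parameter controlling
    how quickly confidence decays with the uncertainty estimate."""
    w = np.exp(-beta * epistemic_sd**2)   # confidence weight in [0, 1]
    return w * synth_ct + (1.0 - w) * fbp

# Epistemic uncertainty via MC dropout: std over T stochastic forward
# passes (faked here with random arrays for illustration).
rng = np.random.default_rng(1)
passes = rng.normal(0, 1, size=(10, 4, 4))       # T=10 dropout samples
epistemic_sd = passes.std(axis=0)
fused = dl_recon_fuse(passes.mean(axis=0), np.zeros((4, 4)), epistemic_sd)
```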
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030

13
Cao Z, Gao X, Chang Y, Liu G, Pei Y. Improving synthetic CT accuracy by combining the benefits of multiple normalized preprocesses. J Appl Clin Med Phys 2023:e14004. PMID: 37092739. PMCID: PMC10402686. DOI: 10.1002/acm2.14004.
Abstract
PURPOSE To investigate the effect of different normalization preprocesses in deep learning on the accuracy of different tissues in synthetic computed tomography (sCT) and to combine their advantages to improve the accuracy of all tissues. METHODS The cycle-consistent adversarial network (CycleGAN) model was used to generate sCT images from megavolt cone-beam CT (MVCBCT) images. In this study, 2639 head MVCBCT and CT image pairs from 203 patients were collected as a training set, and 249 image pairs from 29 patients were collected as a test set. We normalized the voxel values in the images to [0, 1] or [-1, 1] using two linear and five nonlinear normalization preprocessing methods, obtaining seven data sets, and compared the accuracy of different tissues in the sCTs obtained from training on these data. Finally, to combine the advantages of the different normalization preprocessing methods, we obtained sCT_Blur by cropping, stitching, and smoothing (OpenCV's cv2.medianBlur, kernel size 5) each group of sCTs, and evaluated its image quality and the accuracy of OARs. RESULTS Different normalization preprocesses made sCT more accurate in different tissues. The proposed sCT_Blur took advantage of multiple normalization preprocessing methods, and all tissues were more accurate than in the sCT obtained using a single conventional normalization method. Compared with other sCT images, the structural similarity of sCT_Blur versus CT was improved to 0.906 ± 0.019. The mean absolute errors of the CT numbers were reduced to 15.7 ± 4.1 HU, 23.2 ± 7.1 HU, 11.5 ± 4.1 HU, 212.8 ± 104.6 HU, 219.4 ± 35.1 HU, and 268.8 ± 88.8 HU for the oral cavity, parotid, spinal cord, cavity, mandible, and teeth, respectively. CONCLUSION The proposed approach combined the advantages of several normalization preprocessing methods to improve the accuracy of all tissues in sCT images, which is promising for improving the accuracy of dose calculations based on CBCT images in adaptive radiotherapy.
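As a rough illustration of how such normalization preprocesses differ (the HU window bounds, and the gamma curve as the nonlinear example, are assumptions for illustration, not the paper's seven methods):

```python
import numpy as np

def norm01(img, lo=-1000.0, hi=3000.0):
    """Linear normalization of HU values to [0, 1] (window bounds are
    illustrative)."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def norm_pm1(img, lo=-1000.0, hi=3000.0):
    """Linear normalization to [-1, 1]."""
    return 2.0 * norm01(img, lo, hi) - 1.0

def norm_gamma(img, gamma=0.5, lo=-1000.0, hi=3000.0):
    """One possible nonlinear preprocess: a gamma curve on the [0, 1]
    range, which expands contrast in the low-intensity (soft-tissue) end."""
    return norm01(img, lo, hi) ** gamma

hu = np.array([-1000.0, 0.0, 1000.0, 3000.0])
print(norm01(hu))    # → [0.   0.25 0.5  1.  ]
print(norm_pm1(hu))  # → [-1.  -0.5  0.   1. ]
```

Different preprocesses distribute the network's effective dynamic range differently across tissues, which is why each one favors accuracy in different structures.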
Affiliation(s)
- Zheng Cao
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Xiang Gao
- Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Gongfa Liu
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Yuanji Pei
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China

14
Liang X, Morgan H, Bai T, Dohopolski M, Nguyen D, Jiang S. Deep learning based direct segmentation assisted by deformable image registration for cone-beam CT based auto-segmentation for adaptive radiotherapy. Phys Med Biol 2023; 68. PMID: 36657169. DOI: 10.1088/1361-6560/acb4d7.
Abstract
Cone-beam CT (CBCT)-based online adaptive radiotherapy calls for accurate auto-segmentation to reduce the time cost for physicians. However, deep learning (DL)-based direct segmentation of CBCT images is a challenging task, mainly due to the poor image quality and lack of well-labelled large training datasets. Deformable image registration (DIR) is often used to propagate the manual contours on the planning CT (pCT) of the same patient to CBCT. In this work, we address these problems with the assistance of DIR. Our method consists of three main components. First, we use deformed pCT contours derived from multiple DIR methods between pCT and CBCT as pseudo labels for initial training of the DL-based direct segmentation model. Second, we use deformed pCT contours from another DIR algorithm as influencer volumes to define the region of interest for DL-based direct segmentation. Third, the initially trained DL model is further fine-tuned using a smaller set of true labels. Nine patients were used for model evaluation. We found that DL-based direct segmentation on CBCT without influencer volumes had much poorer performance compared to DIR-based segmentation. However, adding deformed pCT contours as influencer volumes in the direct segmentation network dramatically improves segmentation performance, reaching the accuracy level of DIR-based segmentation. The DL model with influencer volumes can be further improved through fine-tuning using a smaller set of true labels, achieving a mean Dice similarity coefficient of 0.86, Hausdorff distance at the 95th percentile of 2.34 mm, and average surface distance of 0.56 mm. A DL-based direct CBCT segmentation model can thus be improved to outperform DIR-based segmentation models by using deformed pCT contours as pseudo labels and influencer volumes for initial training, and by using a smaller set of true labels for model fine-tuning.
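The Dice similarity coefficient used for evaluation above can be computed directly from binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: prediction overlaps 3 of 4 ground-truth voxels
gt = np.array([[1, 1], [1, 1]])
pred = np.array([[1, 1], [1, 0]])
print(dice(pred, gt))  # ≈ 0.857
```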
Affiliation(s)
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Howard Morgan
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Ti Bai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Michael Dohopolski
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America

15
Lee D, Yorke E, Zarepisheh M, Nadeem S, Hu YC. RMSim: controlled respiratory motion simulation on static patient scans. Phys Med Biol 2023; 68. PMID: 36652721. DOI: 10.1088/1361-6560/acb484.
Abstract
Objective. This work aims to generate realistic anatomical deformations from static patient scans. Specifically, we present a method to generate these deformations/augmentations via deep-learning-driven respiratory motion simulation that provides the ground truth for validating deformable image registration (DIR) algorithms and driving more accurate deep learning based DIR. Approach. We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images and predicts future breathing phases given a static CT image. The predicted respiratory patterns, represented by time-varying displacement vector fields (DVFs) at different breathing phases, are modulated through auxiliary inputs of 1D breathing traces so that a larger amplitude in the trace results in more significant predicted deformation. Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration patterns. Training loss includes a smoothness loss on the DVF and mean-squared error between the predicted and ground truth phase images. A spatial transformer deforms the static CT with the predicted DVF to generate the predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to train and test RMSim. The trained RMSim was then used to augment a public DIR challenge dataset for training VoxelMorph to show the effectiveness of RMSim-generated deformation augmentation. Main results. We validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients). The structural similarity index measure (SSIM) for predicted breathing phases and ground truth 4D CT images was 0.92 ± 0.04, demonstrating RMSim's potential to generate realistic respiratory motion. Moreover, the landmark registration error on a public DIR dataset was improved from 8.12 ± 5.78 mm to 6.58 ± 6.38 mm using RMSim-augmented training data. Significance. The proposed approach can be used for validating DIR algorithms as well as for patient-specific augmentations to improve deep learning DIR algorithms. The code, pretrained models, and augmented DIR validation datasets will be released at https://github.com/nadeemlab/SeqX2Y.
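The spatial-transformer step, warping the static CT with a predicted DVF, can be sketched with SciPy's grid interpolation (a backward/pull warp, shown in 2D for brevity; RMSim operates on 3D volumes):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(image, dvf):
    """Warp a 2D image with a displacement vector field (the role the
    spatial transformer plays in RMSim, sketched with SciPy interpolation).

    dvf has shape (2, H, W): per-pixel displacements along axes 0 and 1,
    expressed in the input image's coordinates (a pull/backward warp).
    """
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)      # identity sampling grid
    coords = grid + dvf                          # displaced sample points
    return map_coordinates(image, coords, order=1, mode='nearest')

img = np.arange(16, dtype=float).reshape(4, 4)
shift = np.zeros((2, 4, 4))
shift[1] = 1.0                                   # sample one pixel to the right
warped = warp_with_dvf(img, shift)
```

A zero DVF reproduces the input exactly; a constant DVF translates it, which is a quick sanity check before feeding network-predicted fields through the same machinery.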
Affiliation(s)
- Donghoon Lee
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Ellen Yorke
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Masoud Zarepisheh
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Saad Nadeem
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States of America

16
Wang H, Liu X, Kong L, Huang Y, Chen H, Ma X, Duan Y, Shao Y, Feng A, Shen Z, Gu H, Kong Q, Xu Z, Zhou Y. Improving CBCT image quality to the CT level using RegGAN in esophageal cancer adaptive radiotherapy. Strahlenther Onkol 2023; 199:485-497. PMID: 36688953. PMCID: PMC10133081. DOI: 10.1007/s00066-022-02039-5.
Abstract
OBJECTIVE This study aimed to improve the image quality and CT Hounsfield unit accuracy of daily cone-beam computed tomography (CBCT) using registration generative adversarial networks (RegGAN) and apply synthetic CT (sCT) images to dose calculations in radiotherapy. METHODS The CBCT/planning CT images of 150 esophageal cancer patients undergoing radiotherapy were used for training (120 patients) and testing (30 patients). An unsupervised deep-learning method, the 2.5D RegGAN model with an adaptively trained registration network, was proposed, through which sCT images were generated. The quality of the deep-learning-generated sCT images was quantitatively compared to the reference deformed CT (dCT) image using mean absolute error (MAE), root mean square error (RMSE) of Hounsfield units (HU), and peak signal-to-noise ratio (PSNR). The dose calculation accuracy was further evaluated for esophageal cancer radiotherapy plans, and the same plans were calculated on dCT, CBCT, and sCT images. RESULTS The quality of sCT images produced by RegGAN was significantly improved compared to the original CBCT images. For the testing patients, RegGAN achieved an MAE (sCT vs. CBCT) of 43.7 ± 4.8 vs. 80.1 ± 9.1; RMSE of 67.2 ± 12.4 vs. 124.2 ± 21.8; and PSNR of 27.9 ± 5.6 vs. 21.3 ± 4.2. The sCT images generated by the RegGAN model showed superior accuracy in dose calculation, with higher gamma passing rates (93.3 ± 4.4, 90.4 ± 5.2, and 84.3 ± 6.6) compared to the original CBCT images (89.6 ± 5.7, 85.7 ± 6.9, and 72.5 ± 12.5) under the criteria of 3 mm/3%, 2 mm/2%, and 1 mm/1%, respectively. CONCLUSION The proposed deep-learning RegGAN model seems promising for generating high-quality sCT images from stand-alone thoracic CBCT images in an efficient way and thus has the potential to support CBCT-based esophageal cancer adaptive radiotherapy.
Affiliation(s)
- Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Institute of Modern Physics, Fudan University, Shanghai, China
- Xiao Liu
- Department of Radiotherapy, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Xiurui Ma
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Zhenjiong Shen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Qing Kong
- Institute of Modern Physics, Fudan University, Shanghai, China
- Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yongkang Zhou
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China

17
Pai S, Hadzic I, Rao C, Zhovannik I, Dekker A, Traverso A, Asteriadis S, Hortal E. Frequency-Domain-Based Structure Losses for CycleGAN-Based Cone-Beam Computed Tomography Translation. Sensors (Basel) 2023; 23:1089. PMID: 36772129. PMCID: PMC9920313. DOI: 10.3390/s23031089.
Abstract
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community due to its ability to leverage unpaired images effectively. However, a commonly established drawback of the CycleGAN, the introduction of artifacts in generated images, makes it unreliable for medical imaging use cases. In an attempt to address this, we explore the effect of structure losses on the CycleGAN and propose a generalized frequency-based loss that aims at preserving the content in the frequency domain. We apply this loss to the use case of cone-beam computed tomography (CBCT) translation to computed tomography (CT)-like quality. Synthetic CT (sCT) images generated from our methods are compared against the baseline CycleGAN along with other existing structure losses proposed in the literature. Our methods (MAE: 85.5, MSE: 20433, NMSE: 0.026, PSNR: 30.02, SSIM: 0.935) quantitatively and qualitatively improve over the baseline CycleGAN (MAE: 88.8, MSE: 24244, NMSE: 0.03, PSNR: 29.37, SSIM: 0.935) across all investigated metrics and are more robust than existing methods. Furthermore, no artifacts or loss in image quality were observed. Finally, we demonstrated that sCTs generated using our methods have superior performance compared to the original CBCT images on selected downstream tasks.
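A frequency-domain structure loss of the kind proposed can be sketched by comparing magnitude spectra of the real and generated images; the log-magnitude L1 form below is an illustrative assumption, not the paper's exact loss:

```python
import numpy as np

def frequency_l1_loss(real, generated):
    """Illustrative frequency-based structure loss: L1 distance between
    log-magnitude spectra. Comparing magnitudes (not phases) penalizes
    loss of structural frequency content while tolerating small spatial
    misalignments, which mostly perturb phase."""
    mag_real = np.log1p(np.abs(np.fft.fft2(real)))
    mag_gen = np.log1p(np.abs(np.fft.fft2(generated)))
    return np.mean(np.abs(mag_real - mag_gen))

rng = np.random.default_rng(2)
x = rng.normal(size=(32, 32))
print(frequency_l1_loss(x, x))  # → 0.0
```

In a CycleGAN this term would be added, suitably weighted, to the adversarial and cycle-consistency losses (and implemented with a differentiable FFT in the training framework).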
Affiliation(s)
- Suraj Pai
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Ibrahim Hadzic
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Chinmay Rao
- Division of Image Processing, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Ivan Zhovannik
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Andre Dekker
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Alberto Traverso
- GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, 6229 HX Maastricht, The Netherlands
- Stylianos Asteriadis
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands
- Enrique Hortal
- Department of Advanced Computing Sciences, Maastricht University, 6229 EN Maastricht, The Netherlands

18
Jiang Y, Shang F, Peng J, Liang J, Fan Y, Yang Z, Qi Y, Yang Y, Xu T, Jiang R. Automatic Masseter Muscle Accurate Segmentation from CBCT Using Deep Learning-Based Model. J Clin Med 2022; 12:jcm12010055. PMID: 36614860. PMCID: PMC9820952. DOI: 10.3390/jcm12010055.
Abstract
Segmentation of the masseter muscle (MM) on cone-beam computed tomography (CBCT) is challenging due to the lack of sufficient soft-tissue contrast. Moreover, manual segmentation is laborious and time-consuming. The purpose of this study was to propose a deep learning-based automatic approach to accurately segment the MM from CBCT under the refinement of high-quality paired computed tomography (CT). Fifty independent CBCT scans and 42 clinically hard-to-obtain paired CBCT and CT scans were manually annotated by two observers. A 3D U-shape network was carefully designed to segment the MM effectively. Manual annotations on CT were set as the ground truth. Additionally, an extra five CT and five CBCT auto-segmentation results were revised by one oral and maxillofacial anatomy expert to evaluate their clinical suitability. CBCT auto-segmentation results were comparable to the CT counterparts and significantly improved the similarity with the ground truth compared with manual annotations on CBCT. The automatic approach was more than 332 times faster than manual operation, and a manual revision fraction of only 0.52% was required. This automatic model could simultaneously and accurately segment the MM structures on CBCT and CT, which can improve clinical efficiency and efficacy, and provide critical information for personalized treatment and long-term follow-up.
Affiliation(s)
- Yiran Jiang
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
| | - Fangxin Shang
- Intelligent Healthcare Unit, Baidu, Beijing 100081, China
| | - Jiale Peng
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
| | - Jie Liang
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing 100081, China
| | - Yi Fan
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
| | - Zhongpeng Yang
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
| | - Yuhan Qi
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
| | - Yehui Yang
- Intelligent Healthcare Unit, Baidu, Beijing 100081, China
| | - Tianmin Xu
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Correspondence: (T.X.); (R.J.); Tel.: +86-10-8219-5330 (T.X.); +86-10-8129-5737 (R.J.)
| | - Ruoping Jiang
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
- National Clinical Research Center for Oral Diseases, Beijing 100081, China
- National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
- NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Correspondence: (T.X.); (R.J.); Tel.: +86-10-8219-5330 (T.X.); +86-10-8129-5737 (R.J.)
19
Abbani N, Baudier T, Rit S, Franco FD, Okoli F, Jaouen V, Tilquin F, Barateau A, Simon A, de Crevoisier R, Bert J, Sarrut D. Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography. Med Phys 2022; 49:6930-6944. [PMID: 36000762 DOI: 10.1002/mp.15946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 07/28/2022] [Accepted: 08/05/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Segmenting organs in cone-beam CT (CBCT) images would make it possible to adapt radiotherapy to the organ deformations that may occur between treatment fractions. However, this is a difficult task because of the relative lack of contrast in CBCT images, leading to high inter-observer variability. Deformable image registration (DIR) and deep learning-based automatic segmentation approaches have shown interesting results for this task in recent years. However, they are either sensitive to large organ deformations or require training a convolutional neural network (CNN) on a database of delineated CBCT images, which is difficult to build without improved image quality. In this work, we propose an alternative approach: to train a CNN (using the deep learning-based segmentation tool nnU-Net) on a database of artificial CBCT images simulated from planning CT, for which organ contours are easier to obtain. METHODS Pseudo-CBCT (pCBCT) images were simulated from readily available segmented planning CT images using the GATE Monte Carlo simulation toolkit. CT reference delineations were copied onto the pCBCT, resulting in a database of segmented images used to train the neural network. The studied contours were bladder, rectum, and prostate. We trained multiple nnU-Net models using different training data: (1) segmented real CBCT, (2) pCBCT, (3) segmented real CT, tested on pseudo-CT (pCT) generated from CBCT with CycleGAN, and (4) a combination of (2) and (3). The evaluation was performed on different datasets of segmented CBCT or pCT by comparing predicted segmentations with reference ones using the Dice similarity coefficient (DSC) and Hausdorff distance. A qualitative evaluation was also performed to compare DIR-based and nnU-Net-based segmentations. RESULTS Training with pCBCT was found to lead to results comparable to using real CBCT images.
When evaluated on CBCT obtained from the same hospital as the CT images used in the simulation of the pCBCT, the model trained with pCBCT scored mean DSCs of 0.92 ± 0.05, 0.87 ± 0.02, and 0.85 ± 0.04 and mean Hausdorff distances of 4.67 ± 3.01, 3.91 ± 0.98, and 5.00 ± 1.32 for the bladder, rectum, and prostate contours, respectively, while the model trained with real CBCT scored mean DSCs of 0.91 ± 0.06, 0.83 ± 0.07, and 0.81 ± 0.05 and mean Hausdorff distances of 5.62 ± 3.24, 6.43 ± 5.11, and 6.19 ± 1.14 for the same contours. The pCBCT-trained model was also found to outperform models using pCT or a combination of both, except for the prostate contour when tested on a dataset from a different hospital. Moreover, the resulting segmentations demonstrated clinical acceptability: 78% of bladder segmentations, 98% of rectum segmentations, and 93% of prostate segmentations required minor or no corrections, and for 76% of patients, all structures required minor or no corrections. CONCLUSION We propose using simulated CBCT images to train an nnU-Net segmentation model, avoiding the need to gather complex and time-consuming reference delineations on CBCT images.
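The Hausdorff distance quoted above measures the largest contour disagreement: the maximum, over both directions, of the distance from each point of one contour to the nearest point of the other. A minimal numpy sketch (illustrative only, not the study's implementation, which would typically operate on surface points extracted from the segmentation masks):

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (n, d) and b (m, d)."""
    # Pairwise Euclidean distances via broadcasting, shape (n, m)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Larger of the two directed distances: max_a min_b and max_b min_a
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two small 2D contours: identical except one point displaced by 3 units
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 4.0]])
print(hausdorff_distance(a, b))  # 3.0 -- driven entirely by the displaced point
```

Because it takes a maximum, a single outlier point dominates the metric, which is why it is usually reported alongside an overlap measure such as the DSC.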
Affiliation(s)
- Nelly Abbani
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
| | - Thomas Baudier
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
| | - Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
| | - Francesca di Franco
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
| | - Franklin Okoli
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
| | - Vincent Jaouen
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
| | | | - Anaïs Barateau
- Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
| | - Antoine Simon
- Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
| | | | - Julien Bert
- LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
| | - David Sarrut
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
20
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022; 49:6019-6054. [PMID: 35789489 PMCID: PMC9543319 DOI: 10.1002/mp.15840] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 05/21/2022] [Accepted: 06/16/2022] [Indexed: 11/11/2022] Open
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically in DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation as well as projection-domain methods employed in the CBCT correction literature. We review trends pertaining to publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the current state-of-the-art DL methods utilized in radiation oncology.
Affiliation(s)
- Branimir Rusanov
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
| | - Mark Reynolds
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
| | - Mahsheed Sabet
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Jake Kendrick
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Pejman Rowshan Farzad
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
| | - Martin Ebert
- School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia