1
Liu X, Geng LS, Huang D, Cai J, Yang R. Deep learning-based target tracking with X-ray images for radiotherapy: a narrative review. Quant Imaging Med Surg 2024;14:2671-2692. PMID: 38545053; PMCID: PMC10963821; DOI: 10.21037/qims-23-1489.
Abstract
Background and Objective As one of the main treatment modalities, radiotherapy (RT) (also known as radiation therapy) plays an increasingly important role in the treatment of cancer. RT could benefit greatly from the accurate localization of the gross tumor volume and circumambient organs at risk (OARs). Modern linear accelerators (LINACs) are typically equipped with either gantry-mounted or room-mounted X-ray imaging systems, which provide possibilities for marker-less tracking with two-dimensional (2D) kV X-ray images. However, due to organ overlapping and poor soft tissue contrast, it is challenging to track the target directly and precisely with 2D kV X-ray images. With the flourishing development of deep learning in the field of image processing, it is possible to achieve real-time marker-less tracking of targets with 2D kV X-ray images in RT using advanced deep-learning frameworks. This article sought to review the current development of deep learning-based target tracking with 2D kV X-ray images and discuss the existing limitations and potential solutions. Finally, it discusses some common challenges and potential future developments. Methods Manual searches of the Web of Science, PubMed, and Google Scholar were carried out to retrieve English-language articles. The keywords used in the searches included "radiotherapy, radiation therapy, motion tracking, target tracking, motion estimation, motion monitoring, X-ray images, digitally reconstructed radiographs, deep learning, convolutional neural network, and deep neural network". Only articles that met the predetermined eligibility criteria were included in the review. Ultimately, 23 articles published between March 2019 and December 2023 were included in the review. Key Content and Findings In this article, we narratively reviewed deep learning-based target tracking with 2D kV X-ray images in RT. The existing limitations, common challenges, possible solutions, and future directions of deep learning-based target tracking were also discussed. The use of deep learning-based methods has been shown to be feasible in marker-less target tracking and real-time motion management. However, it is still quite challenging to directly locate tumors and OARs in real time with 2D kV X-ray images, and more technical and clinical efforts are needed. Conclusions Deep learning-based target tracking with 2D kV X-ray images is a promising method in motion management during RT. It has the potential to track the target in real time, recognize motion, reduce the extended margin, and better spare the normal tissue. However, it still has many issues that demand prompt attention and further development before it can be put into clinical practice.
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing, China
- David Huang
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing, China
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruijie Yang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing, China
2
Huang L, Kurz C, Freislederer P, Manapov F, Corradini S, Niyazi M, Belka C, Landry G, Riboldi M. Simultaneous object detection and segmentation for patient-specific markerless lung tumor tracking in simulated radiographs with deep learning. Med Phys 2024;51:1957-1973. PMID: 37683107; DOI: 10.1002/mp.16705.
Abstract
BACKGROUND Real-time tumor tracking is one motion management method to address motion-induced uncertainty. To date, fiducial markers are often required to reliably track lung tumors with X-ray imaging, which carries risks of complications and leads to prolonged treatment time. A markerless tracking approach is thus desirable. Deep learning-based approaches have shown promise for markerless tracking, but systematic evaluation and procedures to investigate applicability in individual cases are missing. Moreover, few efforts have been made to provide bounding box prediction and mask segmentation simultaneously, which could allow either rigid or deformable multi-leaf collimator tracking. PURPOSE The purpose of this study was to implement a deep learning-based markerless lung tumor tracking model exploiting patient-specific (PS) training which outputs both a bounding box and a mask segmentation simultaneously. We also aimed to compare the two kinds of predictions and to implement a specific procedure to understand the feasibility of markerless tracking on individual cases. METHODS We first trained a Retina U-Net baseline model on digitally reconstructed radiographs (DRRs) generated from a public dataset containing 875 CT scans and corresponding lung nodule annotations. Afterwards, we used an independent cohort of 97 lung cancer patients to develop a patient-specific refinement procedure. In order to determine the optimal hyperparameters for automatic patient-specific training, we selected 13 patients for validation, for whom the baseline model predicted a bounding box on the planning CT (PCT)-DRR with an intersection over union (IoU) with the ground truth higher than 0.7. The final test set contained the remaining 84 patients with varying PCT-DRR IoU. For each testing patient, the baseline model was refined on the PCT-DRR to generate a patient-specific model, which was then tested on a separate 10-phase 4DCT-DRR to mimic the intrafraction motion during treatment. A template matching algorithm served as the benchmark model. The testing results were evaluated by four metrics: the center of mass (COM) error and the Dice similarity coefficient (DSC) for segmentation masks, and the center of box (COB) error and the DSC for bounding box detections. Performance was compared to the benchmark model, including statistical testing for significance. RESULTS A PCT-DRR IoU value of 0.2 was shown to be the threshold dividing inconsistent (68%) and consistent (100%) success (defined as mean bounding box DSC > 0.6) of PS models on 4DCT-DRRs. Thirty-seven out of the eighty-four testing cases had a PCT-DRR IoU above 0.2. For these 37 cases, the mean COM error was 2.6 mm, the mean segmentation DSC was 0.78, the mean COB error was 2.7 mm, and the mean box DSC was 0.83. Including the validation cases, the model was applicable to 50 out of 97 patients when using the PCT-DRR IoU threshold of 0.2. The inference time per frame was 170 ms. The model outperformed the benchmark model on all metrics, and the comparison was significant (p < 0.001) over the 37 PCT-DRR IoU > 0.2 cases, but not over the undifferentiated 84 testing cases. CONCLUSIONS The implemented patient-specific refinement approach based on a pre-trained baseline model was shown to be applicable to markerless tumor tracking in simulated radiographs for lung cases.
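Patient selection in the abstract above hinges on the bounding-box intersection over union (IoU), and performance is reported as Dice similarity coefficients (DSC) and center-of-mass (COM) errors. The minimal Python sketch below shows how those three metrics can be computed from corner-format boxes and boolean masks; it illustrates the standard definitions only and is an assumption, not code from the paper.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def com_error(mask_pred, mask_gt, pixel_spacing_mm=1.0):
    """Center-of-mass distance (mm) between predicted and ground-truth masks."""
    com_p = np.array(np.nonzero(mask_pred)).mean(axis=1)
    com_g = np.array(np.nonzero(mask_gt)).mean(axis=1)
    return float(np.linalg.norm(com_p - com_g)) * pixel_spacing_mm
```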
Affiliation(s)
- Lili Huang
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, München, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Philipp Freislederer
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Farkhad Manapov
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Stefanie Corradini
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Maximilian Niyazi
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- German Cancer Consortium (DKTK), partner site Munich, a partnership between DKFZ and LMU University Hospital Munich, Germany
- Bavarian Cancer Research Center (BZKF), Munich, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Marco Riboldi
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, München, Germany
3
Jassar H, Tai A, Chen X, Keiper TD, Paulson E, Lathuilière F, Bériault S, Hébert F, Savard L, Cooper DT, Cloake S, Li XA. Real-time motion monitoring using orthogonal cine MRI during MR-guided adaptive radiation therapy for abdominal tumors on 1.5T MR-Linac. Med Phys 2023;50:3103-3116. PMID: 36893292; DOI: 10.1002/mp.16342.
Abstract
BACKGROUND Real-time motion monitoring (RTMM) is necessary for accurate motion management of intrafraction motions during radiation therapy (RT). PURPOSE Building upon a previous study, this work develops and tests an improved RTMM technique based on real-time orthogonal cine magnetic resonance imaging (MRI) acquired during magnetic resonance-guided adaptive RT (MRgART) for abdominal tumors on MR-Linac. METHODS A motion monitoring research package (MMRP) was developed and tested for RTMM based on template rigid registration between beam-on real-time orthogonal cine MRI and pre-beam daily reference 3D-MRI (baseline). The MRI data acquired under free-breathing during the routine MRgART on a 1.5T MR-Linac for 18 patients with abdominal malignancies of 8 liver, 4 adrenal glands (renal fossa), and 6 pancreas cases were used to evaluate the MMRP package. For each patient, a 3D mid-position image derived from an in-house daily 4D-MRI was used to define a target mask or a surrogate sub-region encompassing the target. Additionally, an exploratory case review of an MRI dataset of a healthy volunteer, acquired under both free-breathing and deep inspiration breath-hold (DIBH), was used to test how effectively RTMM using the MMRP can address through-plane motion (TPM). For all cases, the 2D T2/T1-weighted cine MRIs were captured with a temporal resolution of 200 ms interleaved between coronal and sagittal orientations. Manually delineated contours on the cine frames were used as the ground-truth motion. Common visible vessels and segments of target boundaries in proximity to the target were used as anatomical landmarks for reproducible delineations on both the 3D and the cine MRI images. The standard deviation of the error (SDE) between the ground-truth and the measured target motion from the MMRP package was analyzed to evaluate the RTMM accuracy. The maximum target motion (MTM) was measured on the 4D-MRI for all cases during free-breathing. RESULTS The mean (range) centroid motions for the 13 abdominal tumor cases were 7.69 (4.71-11.15), 1.73 (0.81-3.05), and 2.71 (1.45-3.93) mm with an overall accuracy of <2 mm in the superior-inferior (SI), the left-right (LR), and the anterior-posterior (AP) directions, respectively. The mean (range) of the MTM from the 4D-MRI was 7.38 (2-11) mm in the SI direction, smaller than the monitored motion of the centroid, demonstrating the importance of the real-time motion capture. For the remaining patient cases, the ground-truth delineation was challenging under free-breathing due to the target deformation and the large TPM in the AP direction, the implant-induced image artifacts, and/or the suboptimal image plane selection. These cases were evaluated based on visual assessment. For the healthy volunteer, the TPM of the target was significant under free-breathing, which degraded the RTMM accuracy. RTMM accuracy of <2 mm was achieved under DIBH, indicating DIBH is an effective method to address large TPM. CONCLUSIONS We have successfully developed and tested the use of a template-based registration method for accurate RTMM of abdominal targets during MRgART on a 1.5T MR-Linac without using injected contrast agents or radio-opaque implants. DIBH may be used to effectively reduce or eliminate TPM of abdominal targets during RTMM.
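The MMRP described above performs template rigid registration between each cine frame and a sub-region of the daily reference image. The short, translation-only sketch below illustrates that idea with normalized cross-correlation; the use of scikit-image's match_template, the function names, and the in-plane-only tracking are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.feature import match_template  # standard NCC template matching

def track_frame(cine_frame, template, template_corner_ref, pixel_spacing_mm):
    """Return the 2D in-plane target displacement (mm) of one cine frame.

    template_corner_ref: (row, col) of the template's top-left corner in the
    daily reference image, so that a zero shift means no motion.
    """
    ncc = match_template(cine_frame, template)          # correlation map
    best = np.unravel_index(np.argmax(ncc), ncc.shape)  # best-matching corner
    shift_px = np.array(best, dtype=float) - np.array(template_corner_ref)
    return shift_px * pixel_spacing_mm                  # e.g. SI/AP or SI/LR
```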
Affiliation(s)
- Hassan Jassar
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- An Tai
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Xinfeng Chen
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Timothy D Keiper
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin, USA
4
Ahmed AM, Gargett M, Madden L, Mylonas A, Chrystall D, Brown R, Briggs A, Nguyen T, Keall P, Kneebone A, Hruby G, Booth J. Evaluation of deep learning based implanted fiducial markers tracking in pancreatic cancer patients. Biomed Phys Eng Express 2023;9. PMID: 36689758; DOI: 10.1088/2057-1976/acb550.
Abstract
Real-time target position verification during pancreas stereotactic body radiation therapy (SBRT) is important for the detection of unplanned tumour motions. Fast and accurate fiducial marker segmentation is a requirement of real-time marker-based verification. Deep learning (DL) segmentation techniques are ideal because they do not require additional learning imaging or prior marker information (e.g., shape, orientation). In this study, we evaluated three DL frameworks for marker tracking applied to pancreatic cancer patient data. The DL frameworks evaluated were (1) a convolutional neural network (CNN) classifier with sliding window, (2) a pretrained you-only-look-once (YOLO) version-4 architecture, and (3) a hybrid CNN-YOLO. Intrafraction kV images collected during pancreas SBRT treatments were used as training data (44 fractions, 2017 frames). All patients had 1-4 implanted fiducial markers. Each model was evaluated on unseen kV images (42 fractions, 2517 frames). The ground truth was calculated from manual segmentation and triangulation of markers in orthogonal paired kV/MV images. The sensitivity, specificity, and area under the precision-recall curve (AUC) were calculated. In addition, the mean-absolute-error (MAE), root-mean-square-error (RMSE) and standard-error-of-mean (SEM) were calculated for the centroid of the markers predicted by the models, relative to the ground truth. The sensitivity and specificity of the CNN model were 99.41% and 99.69%, respectively. The AUC was 0.9998. The average precision of the YOLO model for different values of recall was 96.49%. The MAE of the three models in the left-right, superior-inferior, and anterior-posterior directions were under 0.88 ± 0.11 mm, and the RMSE were under 1.09 ± 0.12 mm. The detection times per frame on a GPU were 48.3, 22.9, and 17.1 milliseconds for the CNN, YOLO, and CNN-YOLO, respectively. The results demonstrate submillimeter accuracy of marker position predicted by DL models compared to the ground truth. The marker detection time was fast enough to meet the requirements for real-time application.
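The accuracy figures above (MAE, RMSE, and SEM of the predicted marker centroid relative to the triangulated ground truth) follow standard definitions. A short numpy sketch of those per-direction metrics is given below; the (n_frames, 3) array layout is an assumption for illustration, not the authors' code.

```python
import numpy as np

def centroid_error_metrics(pred_mm, truth_mm):
    """pred_mm, truth_mm: (n_frames, 3) arrays of LR/SI/AP positions in mm."""
    err = np.asarray(pred_mm) - np.asarray(truth_mm)
    mae = np.mean(np.abs(err), axis=0)                      # per direction
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    sem = np.std(err, axis=0, ddof=1) / np.sqrt(err.shape[0])
    return mae, rmse, sem
```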
Affiliation(s)
- Abdella M Ahmed
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Australia
- Maegan Gargett
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Australia
- Levi Madden
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Danielle Chrystall
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Institute of Medical Physics, School of Physics, The University of Sydney, NSW, Australia
- Ryan Brown
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Adam Briggs
- Shoalhaven Cancer Care Centre, Shoalhaven District Memorial Hospital, Nowra, NSW, Australia
- Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Paul Keall
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Andrew Kneebone
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Northern Clinical School, Sydney Medical School, University of Sydney, NSW, Australia
- George Hruby
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Northern Clinical School, Sydney Medical School, University of Sydney, NSW, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Institute of Medical Physics, School of Physics, The University of Sydney, NSW, Australia
5
Fan F, Kreher B, Keil H, Maier A, Huang Y. Fiducial marker recovery and detection from severely truncated data in navigation assisted spine surgery. Med Phys 2022;49:2914-2930. PMID: 35305271; DOI: 10.1002/mp.15617.
Abstract
PURPOSE Fiducial markers are commonly used in navigation assisted minimally invasive spine surgery and they help transfer image coordinates into real world coordinates. In practice, these markers might be located outside the field-of-view (FOV) of C-arm cone-beam computed tomography (CBCT) systems used in intraoperative surgeries, due to the limited detector sizes. As a consequence, reconstructed markers in CBCT volumes suffer from artifacts and have distorted shapes, which sets an obstacle for navigation. METHODS In this work, we propose two fiducial marker detection methods: direct detection from distorted markers (direct method) and detection after marker recovery (recovery method). For direct detection from distorted markers in reconstructed volumes, an efficient automatic marker detection method using two neural networks and a conventional circle detection algorithm is proposed. For marker recovery, a task-specific data preparation strategy is proposed to recover markers from severely truncated data. Afterwards, a conventional marker detection algorithm is applied for position detection. The networks in both methods are trained based on simulated data. For the direct method, 6800 images and 10000 images are generated respectively to train the U-Net and ResNet50. For the recovery method, the training set includes 1360 images for FBPConvNet and Pix2pixGAN. The simulated data set with 166 markers and 4 cadaver cases with real fiducials are used for evaluation. RESULTS The two methods are evaluated on simulated data and real cadaver data. The direct method achieves 100% detection rates within 1 mm detection error on simulated data with normal truncation and simulated data with heavier noise, but only detects 94.6% of markers in the extremely severe truncation case. The recovery method detects all the markers successfully in three test data sets and around 95% of markers are detected within 0.5 mm error. For real cadaver data, both methods achieve 100% marker detection rates with mean registration error below 0.2 mm. CONCLUSIONS Our experiments demonstrate that the direct method is capable of detecting distorted markers accurately and the recovery method with the task-specific data preparation strategy has high robustness and generalizability on various data sets. The task-specific data preparation is able to reconstruct structures of interest outside the FOV from severely truncated data better than conventional data preparation.
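The pipeline above ends with a "conventional circle detection algorithm" for position detection. As one plausible, hedged illustration, the sketch below applies the classical Hough circle transform in OpenCV to a reconstructed 2D slice; the preprocessing steps and every parameter value are assumptions chosen for demonstration, not the paper's settings.

```python
import cv2
import numpy as np

def detect_circular_markers(slice_2d, min_radius_px=2, max_radius_px=8):
    """Return (x, y, r) candidates for circular marker cross-sections."""
    # Scale the slice to 8-bit and suppress reconstruction noise
    img = cv2.normalize(slice_2d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.medianBlur(img, 3)
    circles = cv2.HoughCircles(
        img, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
        param1=100, param2=15,                    # illustrative thresholds
        minRadius=min_radius_px, maxRadius=max_radius_px)
    return [] if circles is None else circles[0].tolist()
```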
Affiliation(s)
- Fuxin Fan
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Holger Keil
- Department of Trauma and Orthopedic Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
6
7
Bertholet J, Knopf A, Eiben B, McClelland J, Grimwood A, Harris E, Menten M, Poulsen P, Nguyen DT, Keall P, Oelfke U. Real-time intrafraction motion monitoring in external beam radiotherapy. Phys Med Biol 2019;64:15TR01. PMID: 31226704; PMCID: PMC7655120; DOI: 10.1088/1361-6560/ab2ba8.
Abstract
Radiotherapy (RT) aims to deliver a spatially conformal dose of radiation to tumours while maximizing the dose sparing to healthy tissues. However, the internal patient anatomy is constantly moving due to respiratory, cardiac, gastrointestinal and urinary activity. The long-term goal of the RT community to 'see what we treat, as we treat' and to act on this information instantaneously has resulted in rapid technological innovation. Specialized treatment machines, such as robotic or gimbal-steered linear accelerators (linacs) with in-room imaging suites, have been developed specifically for real-time treatment adaptation. Additional equipment, such as stereoscopic kilovoltage (kV) imaging, ultrasound transducers and electromagnetic transponders, has been developed for intrafraction motion monitoring on conventional linacs. Magnetic resonance imaging (MRI) has been integrated with cobalt treatment units and more recently with linacs. In addition to hardware innovation, software development has played a substantial role in the development of motion monitoring methods based on respiratory motion surrogates and planar kV or megavoltage (MV) imaging that is available on standard-equipped linacs. In this paper, we review and compare the different intrafraction motion monitoring methods proposed in the literature and demonstrated in real time on clinical data as well as their possible future developments. We then discuss general considerations on validation and quality assurance for clinical implementation. Besides photon RT, particle therapy is increasingly used to treat moving targets. However, transferring motion monitoring technologies from linacs to particle beam lines presents substantial challenges. Lessons learned from the implementation of real-time intrafraction monitoring for photon RT will be used as a basis to discuss the implementation of these methods for particle RT.
Affiliation(s)
- Jenny Bertholet
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
- Author to whom any correspondence should be addressed
- Antje Knopf
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, The Netherlands
- Björn Eiben
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing, University College London, London, United Kingdom
- Jamie McClelland
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing, University College London, London, United Kingdom
- Alexander Grimwood
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
- Emma Harris
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
- Martin Menten
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
- Per Poulsen
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Doan Trang Nguyen
- ACRF Image X Institute, University of Sydney, Sydney, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, Australia
- Paul Keall
- ACRF Image X Institute, University of Sydney, Sydney, Australia
- Uwe Oelfke
- Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, United Kingdom
8
Zhao W, Han B, Yang Y, Buyyounouski M, Hancock SL, Bagshaw H, Xing L. Incorporating imaging information from deep neural network layers into image guided radiation therapy (IGRT). Radiother Oncol 2019;140:167-174. PMID: 31302347; DOI: 10.1016/j.radonc.2019.06.027.
Abstract
BACKGROUND AND PURPOSE To investigate a novel markerless prostate localization strategy using a pre-trained deep learning model to interpret routine projection kilovoltage (kV) X-ray images in image-guided radiation therapy (IGRT). MATERIALS AND METHODS We developed a personalized region-based convolutional neural network to localize the prostate treatment target without implanted fiducials. To train the deep neural network (DNN), we used the patient's planning computed tomography (pCT) images with the pre-delineated prostate target to generate a large number of synthetic kV projection X-ray images in the geometry of the onboard imager (OBI) system. The DNN model was evaluated by retrospectively studying 10 patients who underwent prostate IGRT. Three of the ten patients had implanted fiducials, and the fiducials' positions in the OBI images acquired for treatment setup were examined to show the potential of the proposed method for prostate IGRT. Lin's concordance correlation coefficient was calculated to assess the results, along with the difference between the digitally reconstructed radiograph (DRR)-derived and DNN-predicted locations of the prostate. RESULTS Differences between the predicted target positions using the DNN and their actual positions were (mean ± standard deviation) 1.58 ± 0.43 mm, 1.64 ± 0.43 mm, and 1.67 ± 0.36 mm in the anterior-posterior, lateral, and oblique directions, respectively. The prostate position identified on the OBI kV images was also found to be consistent with that derived from the implanted fiducials. CONCLUSIONS Highly accurate, markerless prostate localization based on deep learning is achievable. The proposed method is useful for daily patient positioning and real-time target tracking during prostate radiotherapy.
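Agreement in the study above is assessed with Lin's concordance correlation coefficient. For reference, a minimal numpy implementation of that standard statistic is sketched below; it reflects the textbook formula, not code from the paper.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two 1D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()             # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```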
Affiliation(s)
- Wei Zhao
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Bin Han
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Yong Yang
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Mark Buyyounouski
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Steven L Hancock
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Hilary Bagshaw
- Stanford University, Department of Radiation Oncology, Stanford, USA.
- Lei Xing
- Stanford University, Department of Radiation Oncology, Stanford, USA.
9
Ding Y, Campbell WG, Miften M, Vinogradskiy Y, Goodman KA, Schefter T, Jones BL. Quantifying Allowable Motion to Achieve Safe Dose Escalation in Pancreatic SBRT. Pract Radiat Oncol 2019;9:e432-e442. PMID: 30951868; PMCID: PMC6592725; DOI: 10.1016/j.prro.2019.03.006.
Abstract
PURPOSE Tumor motion plays a key role in the safe delivery of stereotactic body radiation therapy (SBRT) for pancreatic cancer. The purpose of this study was to use tumor motion measured in patients to establish limits on motion magnitude for safe delivery of pancreatic SBRT and to help guide motion-management decisions in potential dose-escalation scenarios. METHODS AND MATERIALS Using 91 sets of pancreatic tumor motion data, we calculated the motion-convolved dose of the gross tumor volume, duodenum, and stomach for 25 patients with pancreatic cancer. We derived simple linear or quadratic models relating motion to changes in dose and used these models to establish the maximum amount of motion allowable while satisfying error thresholds on key dose metrics. In the same way, we studied the effects of dose escalation and tumor volume on allowable motion. RESULTS In our patient cohort, the mean (range) allowable motion for 33, 40, and 50 Gy to the planning target volume was 11.9 (6.3-22.4), 10.4 (5.2-19.1), and 9.0 (4.2-16.0) mm, respectively. The maximum allowable motion decreased as the dose was escalated and was smaller in patients with larger tumors. We found significant differences in allowable motion between the different plans, suggesting a patient-specific approach to motion management is possible. CONCLUSIONS The effects of motion on pancreatic SBRT are highly variable among patients, and there is potential to allow more motion in certain patients, even in dose-escalated scenarios. In our dataset, a conservative limit of 6.3 mm would ensure safe treatment of all patients treated to 33 Gy in 5 fractions.
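The analysis above is built on the motion-convolved dose, i.e., the planned dose blurred by the measured distribution of target positions. A one-dimensional numpy sketch of that operation is given below; the paper works with full 3D dose distributions and patient motion traces, so this simplified profile-level version is an assumption for illustration only.

```python
import numpy as np

def motion_convolved_dose(dose_1d, motion_positions_mm, grid_spacing_mm=1.0):
    """Convolve a 1D dose profile with the empirical motion probability density."""
    # Histogram the measured target positions onto the dose grid spacing
    max_shift = int(np.ceil(np.max(np.abs(motion_positions_mm)) / grid_spacing_mm))
    edges = np.arange(-max_shift - 0.5, max_shift + 1.5) * grid_spacing_mm
    pdf, _ = np.histogram(motion_positions_mm, bins=edges)
    pdf = pdf / pdf.sum()
    # 'same' keeps the blurred profile aligned with the original grid
    return np.convolve(dose_1d, pdf, mode="same")
```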
Affiliation(s)
- Yijun Ding
- Department of Radiation Oncology, University of Colorado, Denver, Colorado
- Warren G Campbell
- Department of Radiation Oncology, University of Colorado, Denver, Colorado
- Moyed Miften
- Department of Radiation Oncology, University of Colorado, Denver, Colorado
- Karyn A Goodman
- Department of Radiation Oncology, University of Colorado, Denver, Colorado
- Tracey Schefter
- Department of Radiation Oncology, University of Colorado, Denver, Colorado
- Bernard L Jones
- Department of Radiation Oncology, University of Colorado, Denver, Colorado.
10
Zhao W, Shen L, Han B, Yang Y, Cheng K, Toesca DAS, Koong AC, Chang DT, Xing L. Markerless Pancreatic Tumor Target Localization Enabled By Deep Learning. Int J Radiat Oncol Biol Phys 2019;105:432-439. PMID: 31201892; DOI: 10.1016/j.ijrobp.2019.05.071.
Abstract
PURPOSE Deep learning is an emerging technique that allows us to capture imaging information beyond the visually recognizable level of a human being. Because of the anatomic characteristics and location, on-board target verification for radiation delivery to pancreatic tumors is a challenging task. Our goal was to use a deep neural network to localize the pancreatic tumor target on kV x-ray images acquired using an on-board imager for image guided radiation therapy. METHODS AND MATERIALS The network is set up in such a way that the input is either a digitally reconstructed radiograph image or a monoscopic x-ray projection image acquired by the on-board imager from a given direction, and the output is the location of the planning target volume in the projection image. To produce a sufficient number of training x-ray images reflecting the vast number of possible clinical scenarios of anatomy distribution, a series of changes were introduced to the planning computed tomography images, including deformation, rotation, and translation, to simulate inter- and intrafractional variations. After model training, the accuracy of the model was evaluated by retrospectively studying patients who underwent pancreatic cancer radiation therapy. Statistical analysis using mean absolute differences (MADs) and Lin's concordance correlation coefficient was used to assess the accuracy of the predicted target positions. RESULTS MADs between the model-predicted and the actual positions were found to be less than 2.60 mm in anteroposterior, lateral, and oblique directions for both axes in the detector plane. For comparison studies with and without fiducials, MADs were less than 2.49 mm. For all cases, Lin's concordance correlation coefficients between the predicted and actual positions were found to be better than 93%, demonstrating the success of the proposed deep learning approach for image guided radiation therapy. CONCLUSIONS We demonstrated that markerless pancreatic tumor target localization is achievable with high accuracy by using a deep learning approach.
Affiliation(s)
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, California
- Liyue Shen
- Department of Radiation Oncology, Stanford University, Stanford, California
- Bin Han
- Department of Radiation Oncology, Stanford University, Stanford, California
- Yong Yang
- Department of Radiation Oncology, Stanford University, Stanford, California
- Kai Cheng
- Department of Radiation Oncology, Stanford University, Stanford, California
- Diego A S Toesca
- Department of Radiation Oncology, Stanford University, Stanford, California
- Albert C Koong
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Daniel T Chang
- Department of Radiation Oncology, Stanford University, Stanford, California
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, California.
11
Ding Y, Barrett HH, Kupinski MA, Vinogradskiy Y, Miften M, Jones BL. Objective assessment of the effects of tumor motion in radiation therapy. Med Phys 2019;46:3311-3323. PMID: 31111961; DOI: 10.1002/mp.13601.
Abstract
PURPOSE Internal organ motion reduces the accuracy and efficacy of radiation therapy. However, there is a lack of tools to objectively (based on a medical or scientific task) assess the dosimetric consequences of motion, especially on an individual basis. We propose to use therapy operating characteristic (TOC) analysis to quantify the effects of motion on treatment efficacy for individual patients. We demonstrate the application of this tool with pancreatic stereotactic body radiation therapy (SBRT) clinical data and explore the origin of motion sensitivity. METHODS The technique is described as follows. (a) Use tumor-motion data measured from patients to calculate the motion-convolved dose of the gross tumor volume (GTV) and the organs at risk (OARs). (b) Calculate tumor control probability (TCP) and normal tissue complication probability (NTCP) from the motion-convolved dose-volume histograms. (c) Construct TOC curves from TCP and NTCP models. (d) Calculate the area under the TOC curve (AUTOC) and use it as a figure of merit for treatment efficacy. We used tumor motion data measured from patients to calculate the relation between AUTOC and motion magnitude for 25 pancreatic SBRT treatment plans. Furthermore, to explore the driving factor of motion sensitivity of a given plan, we compared the dose distribution of motion-sensitive plans and motion-robust plans and studied the dependence of motion sensitivity on motion directions. RESULTS Our technique is able to recognize treatment plans that are sensitive to motion. In the presence of motion, the treatment efficacy of some plans changes from providing high tumor control and low risks of complications to providing no tumor control and high risks of side effects. Several treatment plans experience falloffs in AUTOC at a smaller magnitude of motion than other plans. In our dataset, a potential indicator of a motion-sensitive treatment plan is that the duodenum is in proximity to the tumor in the superior-inferior (SI) direction. CONCLUSIONS The TOC framework can serve as a tool to quantify the effects of internal organ motion in radiation therapy. With pancreatic SBRT clinical data, we applied this tool to study the change in treatment efficacy induced by motion for individual treatment plans. This framework could potentially be used clinically to understand the effects of motion in an individual patient and to design a patient-specific motion management plan. This framework could also be used in research to evaluate different components of the treatment process, such as motion-management techniques, treatment-planning algorithms, and treatment margins.
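The TOC/AUTOC figure of merit above plots TCP against NTCP while a treatment parameter is varied and integrates the area under that curve. The sketch below illustrates the construction with a simple logistic dose-response model and a prescription-dose scaling sweep; the model form and all parameter values are placeholders, not the dose-response models or data used in the paper.

```python
import numpy as np

def logistic(d, d50, gamma50):
    """Logistic dose-response: 0.5 at d50, slope set by gamma50."""
    return 1.0 / (1.0 + (d50 / np.clip(d, 1e-6, None)) ** (4.0 * gamma50))

def autoc(tumor_dose, oar_dose, d50_tcp=60.0, d50_ntcp=45.0, g50=2.0):
    """Area under the TOC curve over a range of dose-scaling factors."""
    scales = np.linspace(0.0, 3.0, 300)
    tcp = logistic(scales * tumor_dose, d50_tcp, g50)
    ntcp = logistic(scales * oar_dose, d50_ntcp, g50)
    order = np.argsort(ntcp)                      # integrate TCP over NTCP
    return float(np.trapz(tcp[order], ntcp[order]))
```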
Affiliation(s)
- Yijun Ding
- College of Optical Sciences, University of Arizona, Tucson, AZ, 85719, USA
- Harrison H Barrett
- College of Optical Sciences, University of Arizona, Tucson, AZ, 85719, USA
- Department of Medical Imaging, University of Arizona, Tucson, AZ, 85719, USA
- Matthew A Kupinski
- College of Optical Sciences, University of Arizona, Tucson, AZ, 85719, USA
- Yevgeniy Vinogradskiy
- Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, CO, 80045, USA
- Moyed Miften
- Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, CO, 80045, USA
- Bernard L Jones
- Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, CO, 80045, USA
12
Mylonas A, Keall PJ, Booth JT, Shieh CC, Eade T, Poulsen PR, Nguyen DT. A deep learning framework for automatic detection of arbitrarily shaped fiducial markers in intrafraction fluoroscopic images. Med Phys 2019;46:2286-2297. PMID: 30929254; DOI: 10.1002/mp.13519.
Abstract
PURPOSE Real-time image-guided adaptive radiation therapy (IGART) requires accurate marker segmentation to resolve three-dimensional (3D) motion based on two-dimensional (2D) fluoroscopic images. Most common marker segmentation methods require prior knowledge of marker properties to construct a template. If marker properties are not known, an additional learning period is required to build the template which exposes the patient to an additional imaging dose. This work investigates a deep learning-based fiducial marker classifier for use in real-time IGART that requires no prior patient-specific data or additional learning periods. The proposed tracking system uses convolutional neural network (CNN) models to segment cylindrical and arbitrarily shaped fiducial markers. METHODS The tracking system uses a tracking window approach to perform sliding window classification of each implanted marker. Three cylindrical marker training datasets were generated from phantom kilovoltage (kV) and patient intrafraction images with increasing levels of megavoltage (MV) scatter. The cylindrical shaped marker CNNs were validated on unseen kV fluoroscopic images from 12 fractions of 10 prostate cancer patients with implanted gold fiducials. For the training and validation of the arbitrarily shaped marker CNNs, cone beam computed tomography (CBCT) projection images from ten fractions of seven lung cancer patients with implanted coiled markers were used. The arbitrarily shaped marker CNNs were trained using three patients and the other four unseen patients were used for validation. The effects of full training using a compact CNN (four layers with learnable weights) and transfer learning using a pretrained CNN (AlexNet, eight layers with learnable weights) were analyzed. Each CNN was evaluated using a Precision-Recall curve (PRC), the area under the PRC plot (AUC), and by the calculation of sensitivity and specificity. The tracking system was assessed using the validation data and the accuracy was quantified by calculating the mean error, root-mean-square error (RMSE) and the 1st and 99th percentiles of the error. RESULTS The fully trained CNN on the dataset with moderate noise levels had a sensitivity of 99.00% and specificity of 98.92%. Transfer learning of AlexNet resulted in a sensitivity and specificity of 99.42% and 98.13%, respectively, for the same datasets. For the arbitrarily shaped marker CNNs, the sensitivity was 98.58% and specificity was 98.97% for the fully trained CNN. The transfer learning CNN had a sensitivity and specificity of 98.49% and 99.56%, respectively. The CNNs were successfully incorporated into a multiple object tracking system for both cylindrical and arbitrarily shaped markers. The cylindrical shaped marker tracking had a mean RMSE of 1.6 ± 0.2 pixels and 1.3 ± 0.4 pixels in the x- and y-directions, respectively. The arbitrarily shaped marker tracking had a mean RMSE of 3.0 ± 0.5 pixels and 2.2 ± 0.4 pixels in the x- and y-directions, respectively. CONCLUSION With deep learning CNNs, high classification performances on unseen patient images were achieved for both cylindrical and arbitrarily shaped markers. Furthermore, the application of CNN models to intrafraction monitoring was demonstrated using a simple tracking system. The results demonstrate that CNN models can be used to track markers without prior knowledge of the marker properties or an additional learning period.
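The tracking-window approach above sweeps a classifier over a small search region around the last known marker position. The generic sketch below shows such a sliding-window scan with a stand-in classify_patch callable; the classifier itself, the window size, and the stride are assumptions for illustration, not the authors' trained CNN or settings.

```python
import numpy as np

def sliding_window_detect(frame, classify_patch, roi, win=32, stride=4):
    """Return the (row, col) centre of the highest-scoring window inside roi.

    roi: (r0, r1, c0, c1) tracking window around the previous marker position.
    classify_patch: callable returning a marker-probability score for a patch.
    """
    r0, r1, c0, c1 = roi
    best_score, best_pos = -np.inf, None
    for r in range(r0, r1 - win + 1, stride):
        for c in range(c0, c1 - win + 1, stride):
            score = classify_patch(frame[r:r + win, c:c + win])
            if score > best_score:
                best_score, best_pos = score, (r + win // 2, c + win // 2)
    return best_pos, best_score
```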
Affiliation(s)
- Adam Mylonas
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
- Paul J Keall
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
- Jeremy T Booth
- Royal North Shore Hospital, Northern Sydney Cancer Centre, St Leonards, NSW, Australia
- Chun-Chien Shieh
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
- Thomas Eade
- Royal North Shore Hospital, Northern Sydney Cancer Centre, St Leonards, NSW, Australia
- Doan Trang Nguyen
- Faculty of Medicine and Health, ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, NSW, Australia
13
14
Kim JH, Nguyen DT, Booth JT, Huang CY, Fuangrod T, Poulsen P, O'Brien R, Caillet V, Eade T, Kneebone A, Keall P. The accuracy and precision of Kilovoltage Intrafraction Monitoring (KIM) six degree-of-freedom prostate motion measurements during patient treatments. Radiother Oncol 2018;126:236-243. PMID: 29471970; DOI: 10.1016/j.radonc.2017.10.030.
Abstract
BACKGROUND AND PURPOSE To perform a quantitative analysis of the accuracy and precision of Kilovoltage Intrafraction Monitoring (KIM) six degree-of-freedom (6DoF) prostate motion measurements during treatments. MATERIAL AND METHODS Real-time 6DoF prostate motion was acquired using KIM for 14 prostate cancer patients (377 fractions). KIM outputs the 6DoF prostate motion, combining 3D translation and 3D rotational motion information relative to its planning position. The corresponding ground-truth target motion was obtained post-treatment based on kV/MV triangulation. The accuracy and precision of the 6DoF KIM motion estimates were calculated as the mean and standard deviation differences compared with the ground-truth. RESULTS The accuracy ± precision of real-time 6DoF KIM-measured prostate motion were 0.2 ± 1.3° for rotations and 0.1 ± 0.5 mm for translations, respectively. The magnitude of KIM-measured motion was well-correlated with the magnitude of ground-truth motion, resulting in Pearson correlation coefficients of ≥0.88 in all DoF. CONCLUSIONS The results demonstrate that KIM is capable of providing the real-time 6DoF prostate target motion during patient treatments with an accuracy ± precision of within 0.2 ± 1.3° and 0.1 ± 0.5 mm for rotation and translation, respectively. As KIM only requires a single X-ray imager, which is available on most modern cancer radiotherapy devices, there is potential for widespread adoption of this technology.
Affiliation(s)
- Jung-Ha Kim
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
- Doan T Nguyen
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
- Jeremy T Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Australia
- School of Physics, The University of Sydney, Australia
- Chen-Yu Huang
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
- Todsaporn Fuangrod
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, Australia
- Per Poulsen
- Department of Oncology, Aarhus University Hospital, Denmark
- Ricky O'Brien
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
- Vincent Caillet
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Australia
- Thomas Eade
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Australia
- Andrew Kneebone
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Australia
- Paul Keall
- Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Australia
15
Pettersson N, Simpson D, Atwood T, Hattangadi-Gluth J, Murphy J, Cerviño L. Automatic patient positioning and gating window settings in respiratory-gated stereotactic body radiation therapy for pancreatic cancer using fluoroscopic imaging. J Appl Clin Med Phys 2018;19:74-82. PMID: 29377561; PMCID: PMC5849837; DOI: 10.1002/acm2.12258.
Abstract
Before treatment delivery of respiratory-gated radiation therapy (RT) in patients with implanted fiducials, both the patient position and the gating window thresholds must be set. In linac-based RT, this is currently done manually and setup accuracy will therefore be dependent on the skill of the user. In this study, we present an automatic method for finding the patient position and the gating window thresholds. Our method uses sequentially acquired anterior-posterior (AP) and lateral fluoroscopic imaging with simultaneous breathing amplitude monitoring and intends to reach 100% gating accuracy while keeping the duty cycle as high as possible. We retrospectively compared clinically used setups to the automatic setups by our method in five pancreatic cancer patients treated with hypofractionated RT. In 15 investigated fractions, the average (±standard deviation) differences between the clinical and automatic setups were -0.4 ± 0.8 mm, -1.0 ± 1.1 mm, and 1.8 ± 1.3 mm in the left-right (LR), the AP, and the superior-inferior (SI) direction, respectively. For the clinical setups, typical interfractional setup variations were 1-2 mm in the LR and AP directions, and 2-3 mm in the SI direction. Using the automatic method, the duty cycle could be improved in six fractions, in four fractions the duty cycle had to be lowered to improve gating accuracy, and in five fractions both duty cycle and gating accuracy could be improved. Our automatic method has the potential to increase accuracy and decrease user dependence of setup for patients with implanted fiducials treated with respiratory-gated RT. After fluoroscopic image acquisition, the patient shifts and gating window thresholds are calculated in 1-2 s. The method gives the user the possibility to evaluate the effect of different patient positions and gating window thresholds on gating accuracy and duty cycle. If deemed necessary, it can be used at any time during treatment delivery.
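The method above chooses gating window thresholds that keep gating accuracy at 100% while maximizing duty cycle. The brute-force numpy sketch below illustrates that trade-off for an amplitude-based gate, given per-frame breathing amplitudes and a per-frame flag (derived from the fluoroscopy) indicating whether the target was within tolerance; the data layout and the exhaustive search are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def choose_gating_window(amplitude, in_tolerance):
    """Return (low, high, duty_cycle) for the widest all-in-tolerance window."""
    amplitude = np.asarray(amplitude, float)
    in_tolerance = np.asarray(in_tolerance, bool)
    candidates = np.sort(np.unique(amplitude))
    best = (None, None, 0.0)
    for lo in candidates:
        for hi in candidates[candidates >= lo]:
            inside = (amplitude >= lo) & (amplitude <= hi)
            # Keep only windows where every gated frame has the target on-track
            if inside.any() and np.all(in_tolerance[inside]):
                duty = inside.mean()
                if duty > best[2]:
                    best = (float(lo), float(hi), float(duty))
    return best
```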
Affiliation(s)
- Niclas Pettersson
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
- Daniel Simpson
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
- Todd Atwood
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
- Jona Hattangadi-Gluth
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
- James Murphy
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
- Laura Cerviño
- Department of Radiation Oncology, University of California San Diego, La Jolla, CA, USA
16
Dutta D, Menon A, Abraham AG, Madhavan R, Nair H, Shalet P, Jishan J, Holla R. Cholangiocarcinoma treatment with Synchrony-based robotic radiosurgery system: Tracking options. J Radiosurg SBRT 2018;5:335-340. PMID: 30538895; PMCID: PMC6255724.
Affiliation(s)
- Debnarayan Dutta
- Department of Radiation Oncology, Amrita Institute of Medical Science, Kochi, Kerala, India
- Anjali Menon
- Department of Radiation Oncology, Amrita Institute of Medical Science, Kochi, Kerala, India
- Aswin George Abraham
- Department of Radiation Oncology, Amrita Institute of Medical Science, Kochi, Kerala, India
- Ram Madhavan
- Department of Radiation Oncology, Amrita Institute of Medical Science, Kochi, Kerala, India
- Haridas Nair
- Department of Radiation Oncology, Amrita Institute of Medical Science, Kochi, Kerala, India
- P.G. Shalet
- Department of Medical Dosimetry, Amrita Institute of Medical Science, Kochi, Kerala, India
- J. Jishan
- Department of Medical Dosimetry, Amrita Institute of Medical Science, Kochi, Kerala, India
- Raghavendra Holla
- Department of Radiation Physics, Amrita Institute of Medical Science, Kochi, Kerala, India
17
Campbell WG, Jones BL, Schefter T, Goodman KA, Miften M. An evaluation of motion mitigation techniques for pancreatic SBRT. Radiother Oncol 2017;124:168-173. PMID: 28571887; DOI: 10.1016/j.radonc.2017.05.013.
Abstract
BACKGROUND AND PURPOSE Ablative radiation therapy can be beneficial for pancreatic cancer, and motion mitigation helps to reduce dose to nearby organs-at-risk. Here, we compared two competing methods of motion mitigation: abdominal compression and respiratory gating. MATERIALS AND METHODS CBCT scans of 19 pancreatic cancer patients receiving stereotactic body radiation therapy were acquired with and without abdominal compression, and 3D target motion was reconstructed from CBCT projection images. Daily target motion without mitigation was compared against motion with compression and with simulated respiratory gating. Gating was free-breathing and based on an external surrogate. Target coverage was also evaluated for each scenario by simulating reduced target margins. RESULTS Without mitigation, average daily target motion in the LR/AP/SI directions was 5.3, 7.3, and 13.9 mm, respectively. With abdominal compression, these values were 5.2, 5.3, and 8.5 mm, and with respiratory gating they were 3.2, 3.9, and 5.5 mm, respectively. Reductions with compression were significant in the AP/SI directions, while reductions with gating were significant in all directions. Respiratory gating also demonstrated better coverage in the reduced margins scenario. CONCLUSION Respiratory gating is the most effective strategy for reducing motion in pancreatic SBRT, and may allow for dose escalation through a reduction in target margin.
Affiliation(s)
- Warren G Campbell
- Department of Radiation Oncology, University of Colorado School of Medicine, USA.
- Bernard L Jones
- Department of Radiation Oncology, University of Colorado School of Medicine, USA
- Tracey Schefter
- Department of Radiation Oncology, University of Colorado School of Medicine, USA
- Karyn A Goodman
- Department of Radiation Oncology, University of Colorado School of Medicine, USA
- Moyed Miften
- Department of Radiation Oncology, University of Colorado School of Medicine, USA