1. Gardner M, Dillon O, Byrne H, Keall P, O'Brien R. Data-driven rapid 4D cone-beam CT reconstruction for new generation linacs. Phys Med Biol 2024;69:18NT02. [PMID: 39241801] [DOI: 10.1088/1361-6560/ad780a]
Abstract
Objective. Newer generation linear accelerators (linacs) allow 20 s cone-beam CT (CBCT) acquisition, which reduces radiation therapy treatment time. However, the current clinical application of these rapid scans is limited to 3D CBCT. In this paper we propose a novel data-driven rapid 4DCBCT reconstruction method for new generation linacs. Approach. The method estimates the magnitude of diaphragm motion from an initial 3D reconstruction. This estimated motion is used to linearly approximate a deformation vector field (DVF) for each respiration phase. These DVFs are then used for motion-compensated Feldkamp-Davis-Kress (MCFDK) reconstructions. The method, named MCFDK Data Driven (MCFDK-DD), was compared to an MCFDK reconstruction using a prior motion model (MCFDK-Prior), a 3D-FDK reconstruction, and a conventional-acquisition (4 min), conventionally reconstructed 4DCBCT (4D-FDK). The data used in this paper were derived from 4DCT volumes of 12 patients from The Cancer Imaging Archive. Image quality was quantified using the RMSE of line plots centred on the tumour, the tissue interface width (TIW), the mean square error (MSE) and the structural similarity index measure (SSIM). Main results. The tumour line plots in the superior-inferior direction showed reduced RMSE for MCFDK-DD compared to the 3D-FDK method, indicating that MCFDK-DD provided a more accurate tumour location. Similarly, the TIW values from the MCFDK-DD reconstructions (median 8.6 mm) were significantly reduced compared to the 3D-FDK reconstructions (median 14.8 mm; p < 0.001). The MCFDK-DD, MCFDK-Prior and 3D-FDK methods had median MSE values of 1.08×10⁻⁶ mm⁻¹, 1.11×10⁻⁶ mm⁻¹ and 1.17×10⁻⁶ mm⁻¹, respectively. The corresponding median SSIM values were 0.93, 0.92 and 0.92, respectively, indicating that MCFDK-DD had good agreement with the conventional 4D-FDK reconstructions. Significance. These results demonstrate the feasibility of creating accurate data-driven 4DCBCT images for rapid scans on new generation linacs. These findings could lead to increased clinical usage of 4D information on newer generation linacs.
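
The central MCFDK-DD step described above, scaling a motion template linearly by the per-phase diaphragm displacement to obtain a DVF for each respiration phase, can be sketched in a few lines. The numpy/scipy snippet below is a minimal illustration that assumes purely superior-inferior motion; the array names, voxel size, and the warp_volume helper are hypothetical and are not the authors' implementation, in which the per-phase DVFs drive a motion-compensated FDK reconstruction rather than a warp of an already-reconstructed volume.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scale_phase_dvfs(unit_dvf_si, phase_amplitudes_mm, voxel_size_mm):
    """Linearly scale a unit superior-inferior (SI) displacement template by the
    diaphragm displacement estimated for each respiratory phase (assumed model)."""
    return [unit_dvf_si * (amp / voxel_size_mm) for amp in phase_amplitudes_mm]

def warp_volume(volume, dvf_si, axis=0):
    """Warp a 3D volume along one axis by a scalar displacement field given in voxels
    (axis 0 is assumed to be the SI direction)."""
    grids = np.meshgrid(*[np.arange(s, dtype=float) for s in volume.shape], indexing="ij")
    grids[axis] = grids[axis] + dvf_si
    return map_coordinates(volume, grids, order=1, mode="nearest")

# toy usage: one reference volume, four respiratory phases
ref = np.random.rand(32, 32, 32)
unit_dvf = np.ones_like(ref)                       # hypothetical unit SI motion pattern
phase_dvfs = scale_phase_dvfs(unit_dvf, phase_amplitudes_mm=[0.0, 3.0, 6.0, 3.0],
                              voxel_size_mm=2.0)
phase_volumes = [warp_volume(ref, dvf) for dvf in phase_dvfs]
```
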
Affiliation(s)
- Mark Gardner: Faculty of Medicine and Health, Image X Institute, University of Sydney, Darlington, New South Wales, Australia
- Owen Dillon: Faculty of Medicine and Health, Image X Institute, University of Sydney, Darlington, New South Wales, Australia
- Hilary Byrne: Faculty of Medicine and Health, Image X Institute, University of Sydney, Darlington, New South Wales, Australia
- Paul Keall: Faculty of Medicine and Health, Image X Institute, University of Sydney, Darlington, New South Wales, Australia
- Ricky O'Brien: Medical Radiations, School of Health and Biomedical Sciences, RMIT University, Melbourne, Victoria 3001, Australia

2. Xie J, Shao HC, Li Y, Zhang Y. Prior frequency guided diffusion model for limited angle (LA)-CBCT reconstruction. Phys Med Biol 2024;69:135008. [PMID: 38870947] [PMCID: PMC11218670] [DOI: 10.1088/1361-6560/ad580d]
Abstract
Objective. Cone-beam computed tomography (CBCT) is widely used in image-guided radiotherapy. Reconstructing CBCTs from limited-angle acquisitions (LA-CBCT) is highly desired for improved imaging efficiency, dose reduction, and better mechanical clearance. LA-CBCT reconstruction, however, suffers from severe under-sampling artifacts, making it a highly ill-posed inverse problem. Diffusion models can generate data/images by reversing a data-noising process through learned data distributions, and can be incorporated as a denoiser/regularizer in LA-CBCT reconstruction. In this study, we developed a diffusion model-based framework, the prior frequency-guided diffusion model (PFGDM), for robust and structure-preserving LA-CBCT reconstruction. Approach. PFGDM uses a conditioned diffusion model as a regularizer for LA-CBCT reconstruction, and the condition is based on high-frequency information extracted from patient-specific prior CT scans, which provides a strong anatomical prior for LA-CBCT reconstruction. Specifically, we developed two variants of PFGDM (PFGDM-A and PFGDM-B) with different conditioning schemes. PFGDM-A applies the high-frequency CT information condition until a pre-optimized iteration step and drops it afterwards, to enable both similar and differing CT/CBCT anatomies to be reconstructed. PFGDM-B, on the other hand, continuously applies the prior CT information condition in every reconstruction step, with a decaying mechanism to gradually phase out the reconstruction guidance from the prior CT scans. The two variants of PFGDM were tested and compared with currently available LA-CBCT reconstruction solutions, via metrics including the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Main results. PFGDM outperformed all traditional and diffusion model-based methods. The mean (s.d.) PSNR/SSIM were 27.97 (3.10)/0.949 (0.027), 26.63 (2.79)/0.937 (0.029), and 23.81 (2.25)/0.896 (0.036) for PFGDM-A, and 28.20 (1.28)/0.954 (0.011), 26.68 (1.04)/0.941 (0.014), and 23.72 (1.19)/0.894 (0.034) for PFGDM-B, based on 120°, 90°, and 30° orthogonal-view scan angles, respectively. In contrast, the PSNR/SSIM was 19.61 (2.47)/0.807 (0.048) at 30° for DiffusionMBIR, a diffusion-based method without prior CT conditioning. Significance. PFGDM reconstructs high-quality LA-CBCTs under very limited gantry angles, allowing faster and more flexible CBCT scans with dose reductions.
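
As a rough illustration of the conditioning idea described above, the sketch below extracts a high-frequency map from a prior CT slice by unsharp masking and defines a per-step conditioning weight: a hard cutoff mimicking the PFGDM-A scheme and a smooth decay mimicking the PFGDM-B scheme. The filter choice, cutoff fraction, and decay constant are assumptions for illustration and do not reproduce the published PFGDM design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_prior(prior_ct, sigma=2.0):
    """High-frequency map from a prior CT via unsharp masking (illustrative choice)."""
    return prior_ct - gaussian_filter(prior_ct, sigma=sigma)

def conditioning_weight(step, total_steps, scheme="B", cutoff_frac=0.5):
    """Weight applied to the prior-CT condition at a given reverse-diffusion step:
    variant A drops it after a cutoff step, variant B decays it smoothly (assumed)."""
    if scheme == "A":
        return 1.0 if step < cutoff_frac * total_steps else 0.0
    return float(np.exp(-5.0 * step / total_steps))

# usage sketch on a toy 2D prior image
prior = np.random.rand(64, 64)
hf_condition = high_frequency_prior(prior)
weights_b = [conditioning_weight(t, total_steps=1000) for t in range(0, 1000, 200)]
```
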
Affiliation(s)
- Jiacheng Xie: The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Hua-Chieh Shao: The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Yunxiang Li: The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- You Zhang: The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America

3. Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model-free spatiotemporal implicit neural representation (PMF-STINR). Phys Med Biol 2024;69:115030. [PMID: 38697195] [PMCID: PMC11133878] [DOI: 10.1088/1361-6560/ad46dc]
Abstract
Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g. breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular x-ray projections. Specifically, PMF-STINR uses spatial implicit neural representations to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy. Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
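
The spatial implicit neural representation at the core of methods like PMF-STINR maps continuous coordinates to attenuation values with a small coordinate-based network. The PyTorch sketch below shows only that coordinate-to-intensity mapping; the layer sizes and the absence of positional encoding are simplifications, and the temporal INR and B-spline motion model described above are omitted entirely.

```python
import torch
import torch.nn as nn

class SpatialINR(nn.Module):
    """Minimal coordinate-based MLP: maps (x, y, z) in [-1, 1]^3 to one attenuation value.
    The width and depth here are illustrative, not the PMF-STINR architecture."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):                     # coords: (N, 3)
        return self.net(coords)

# query the (untrained) representation on a coarse 3D grid
inr = SpatialINR()
axis = torch.linspace(-1.0, 1.0, 16)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1).reshape(-1, 3)
volume = inr(grid).reshape(16, 16, 16)
```
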
Affiliation(s)
- Hua-Chieh Shao: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tinsu Pan: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- You Zhang: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America

4. Zhao X, Du Y, Yue H, Wang R, Zhou S, Wu H, Wang W, Peng Y. Deep learning-based projection synthesis for low-dose cone-beam computed tomography imaging in image-guided radiotherapy. Quant Imaging Med Surg 2024;14:231-250. [PMID: 38223024] [PMCID: PMC10784032] [DOI: 10.21037/qims-23-759]
Abstract
Background The imaging dose of cone-beam computed tomography (CBCT) in image-guided radiotherapy (IGRT) poses adverse effects on patient health. To improve the quality of sparse-view low-dose CBCT images, a projection synthesis convolutional neural network (SynCNN) model is proposed. Methods Included in this retrospective, single-center study were 223 patients diagnosed with brain tumours from Beijing Cancer Hospital. The proposed SynCNN model estimated two pairs of orthogonally direction-separable spatial kernels to synthesize the missing projection in between the input neighboring sparse-view projections via local convolution operations. The SynCNN model was trained on 150 real patients to learn patterns for inter-view projection synthesis. CBCT data from 30 real patients were used to validate the SynCNN, while data from a phantom and 43 real patients were used to test the SynCNN externally. Sparse-view projection datasets with 1/2, 1/4, and 1/8 of the original sampling rate were simulated, and the corresponding full-view projection datasets were restored using the SynCNN model. The tomographic images were then reconstructed with the Feldkamp-Davis-Kress algorithm. The root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) metrics were measured in both the projection and image domains. Five experts were invited to grade the image quality blindly for 40 randomly selected evaluation groups with a four-level rubric, where a score greater than or equal to 2 was considered acceptable image quality. The running time of the SynCNN model was recorded. The SynCNN model was directly compared with three other methods on 1/4 sparse-view reconstructions. Results The phantom and patient studies showed that the missing projections were accurately synthesized. In the image domain, for the phantom study, compared with images reconstructed from sparse-view projections, images with SynCNN synthesis exhibited significantly improved quality, with decreased RMSE and increased PSNR and SSIM values. For the patient study, between the results with and without SynCNN synthesis, the averaged RMSE decreased by 3.4×10⁻⁴, 10.3×10⁻⁴, and 21.7×10⁻⁴, the averaged PSNR increased by 3.4, 6.6, and 9.4 dB, and the averaged SSIM increased by 5.2×10⁻², 18.9×10⁻², and 33.9×10⁻², for the 1/2, 1/4, and 1/8 sparse-view reconstructions, respectively. In the expert subjective evaluation, both the median scores and acceptance rates of the images with SynCNN synthesis were higher than those of images reconstructed from sparse-view projections. It took the model less than 0.01 s to synthesize an inter-view projection. Compared with the three other methods, the SynCNN model obtained the best scores in terms of the three metrics in both domains. Conclusions The proposed SynCNN model effectively improves the quality of sparse-view CBCT images at a low time cost. With the SynCNN model, the CBCT imaging dose in IGRT could potentially be reduced.
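
SynCNN synthesizes a missing projection by convolving the two neighboring projections with estimated direction-separable kernels. The sketch below applies global (per-image) separable kernels as a simplified stand-in; in the published model the kernels are estimated by the network and applied as local convolutions, so the kernel values and blending here are purely illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d

def synthesize_inter_view(proj_prev, proj_next, kv_prev, kh_prev, kv_next, kh_next):
    """Blend two neighboring projections with direction-separable (vertical, horizontal)
    kernels; a global-kernel simplification of SynCNN's locally estimated kernels."""
    def sep_conv(img, kv, kh):
        return convolve1d(convolve1d(img, kv, axis=0, mode="reflect"), kh, axis=1, mode="reflect")
    return sep_conv(proj_prev, kv_prev, kh_prev) + sep_conv(proj_next, kv_next, kh_next)

# usage: these particular kernels roughly reproduce a plain two-view average
p0, p1 = np.random.rand(128, 128), np.random.rand(128, 128)
k = np.array([0.25, 0.5, 0.25])
mid_projection = synthesize_inter_view(p0, p1, 0.5 * k, k, 0.5 * k, k)
```
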
Affiliation(s)
- Xuzhi Zhao: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Yi Du: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Haizhen Yue: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China
- Ruoxi Wang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China
- Shun Zhou: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China
- Hao Wu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing, China; Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Wei Wang: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Yahui Peng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China

5. Knäusl B, Belotti G, Bertholet J, Daartz J, Flampouri S, Hoogeman M, Knopf AC, Lin H, Moerman A, Paganelli C, Rucinski A, Schulte R, Shimizu S, Stützer K, Zhang X, Zhang Y, Czerska K. A review of the clinical introduction of 4D particle therapy research concepts. Phys Imaging Radiat Oncol 2024;29:100535. [PMID: 38298885] [PMCID: PMC10828898] [DOI: 10.1016/j.phro.2024.100535]
Abstract
Background and purpose Many 4D particle therapy research concepts have recently been translated into clinics; however, substantial differences remain depending on the indication and institute-related aspects. This work aims to summarise the current state-of-the-art 4D particle therapy technology and outline a roadmap for future research and developments. Material and methods This review focused on the clinical implementation of 4D approaches for imaging, treatment planning, delivery and evaluation, based on the 2021 and 2022 4D Treatment Workshops for Particle Therapy as well as a review of the most recent surveys, guidelines and scientific papers dedicated to this topic. Results Available technological capabilities for motion surveillance and compensation determined the course of each 4D particle treatment. 4D motion management, delivery techniques and strategies, including imaging, were diverse and depended on many factors. These included aspects of motion amplitude and tumour location, as well as accelerator technology driving the necessity of centre-specific dosimetric validation. Novel methodologies for X-ray based image processing and MRI for real-time tumour tracking and motion management were shown to have a large potential for online and offline adaptation schemes compensating for potential anatomical changes over the treatment course. The latest research developments were dominated by particle imaging, artificial intelligence methods and FLASH, adding another level of complexity but also opportunities in the context of 4D treatments. Conclusion This review showed that the rapid technological advances in radiation oncology, together with the available intrafractional motion management and adaptive strategies, paved the way towards clinical implementation.
Affiliation(s)
- Barbara Knäusl: Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Gabriele Belotti: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Jenny Bertholet: Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Juliane Daartz: Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Mischa Hoogeman: Department of Medical Physics & Informatics, HollandPTC, Delft, The Netherlands; Erasmus MC Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, Rotterdam, The Netherlands
- Antje C Knopf: Institut für Medizintechnik und Medizininformatik, Hochschule für Life Sciences FHNW, Muttenz, Switzerland
- Haibo Lin: New York Proton Center, New York, NY, USA
- Astrid Moerman: Department of Medical Physics & Informatics, HollandPTC, Delft, The Netherlands
- Chiara Paganelli: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Antoni Rucinski: Institute of Nuclear Physics Polish Academy of Sciences, PL-31342 Krakow, Poland
- Reinhard Schulte: Division of Biomedical Engineering Sciences, School of Medicine, Loma Linda University
- Shing Shimizu: Department of Carbon Ion Radiotherapy, Osaka University Graduate School of Medicine, Osaka, Japan
- Kristin Stützer: OncoRay – National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden – Rossendorf, Institute of Radiooncology – OncoRay, Dresden, Germany
- Xiaodong Zhang: Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Ye Zhang: Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Katarzyna Czerska: Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland

6. Dai J, Dong G, Zhang C, He W, Liu L, Wang T, Jiang Y, Zhao W, Zhao X, Xie Y, Liang X. Volumetric tumor tracking from a single cone-beam X-ray projection image enabled by deep learning. Med Image Anal 2024;91:102998. [PMID: 37857066] [DOI: 10.1016/j.media.2023.102998]
Abstract
Radiotherapy serves as a pivotal treatment modality for malignant tumors. However, the accuracy of radiotherapy is significantly compromised due to respiratory-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-anchored, volumetric tumor tracking methodology that employs single-angle X-ray projection images. This process involves aligning the intraoperative two-dimensional (2D) X-ray images with the pre-treatment three-dimensional (3D) planning Computed Tomography (CT) scans, enabling the extraction of the 3D tumor position and segmentation. Prior to therapy, a bespoke patient-specific tumor tracking model is formulated, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During the treatment phase, real-time X-ray images are fed into the trained model, producing the respective 3D tumor positioning. Rigorous validation conducted on actual patient lung data and lung phantoms attests to the high localization precision of our method at lowered radiation doses, thus heralding promising strides towards enhancing the precision of radiotherapy.
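
Methods of this kind hinge on relating a 2D projection to the 3D planning CT. As background only (this is not part of the authors' pipeline), the sketch below generates a toy projection image from a CT volume by ray summation under a parallel-beam assumption; clinical systems use a cone-beam geometry, and the axis and angle conventions here are arbitrary.

```python
import numpy as np
from scipy.ndimage import rotate

def parallel_beam_projection(ct_volume, angle_deg):
    """Toy digitally reconstructed radiograph: rotate the volume about the z-axis and
    sum along the y-axis (parallel-beam simplification of the cone-beam geometry)."""
    rotated = rotate(ct_volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)

ct = np.random.rand(32, 64, 64)                    # (z, y, x) toy attenuation volume
view_ap = parallel_beam_projection(ct, 0.0)        # anterior-posterior style view
view_lat = parallel_beam_projection(ct, 90.0)      # lateral style view
```
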
Affiliation(s)
- Jingjing Dai: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guoya Dong: School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin Key Laboratory of Bioelectricity and Intelligent Health, Tianjin 300130, China
- Chulong Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenfeng He: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lin Liu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tangsheng Wang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuming Jiang: Department of Radiation Oncology, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA
- Wei Zhao: School of Physics, Beihang University, Beijing 100191, China
- Xiang Zhao: Department of Radiology, Tianjin Medical University General Hospital, 300050, China
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China

7. Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model-free spatiotemporal implicit neural representation (PMF-STINR). arXiv 2023; arXiv:2311.10036v2 [preprint]. [PMID: 38013886] [PMCID: PMC10680908]
Abstract
Objective Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few X-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular X-ray projections. Specifically, PMF-STINR uses spatial implicit neural representation to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1s) resolution and sub-millimeter accuracy. Significance PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
Affiliation(s)
- Hua-Chieh Shao: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tielige Mengke: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tinsu Pan: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- You Zhang: The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA

8. Dong Z, Yu S, Szmul A, Wang J, Qi J, Wu H, Li J, Lu Z, Zhang Y. Simulation of a new respiratory phase sorting method for 4D-imaging using optical surface information towards precision radiotherapy. Comput Biol Med 2023;162:107073. [PMID: 37290392] [PMCID: PMC10311359] [DOI: 10.1016/j.compbiomed.2023.107073]
Abstract
BACKGROUND Respiratory signal detection is critical for 4-dimensional (4D) imaging. This study proposes and evaluates a novel phase sorting method using optical surface imaging (OSI), aiming to improve the precision of radiotherapy. METHOD Based on the 4D Extended Cardiac-Torso (XCAT) digital phantom, OSI in point cloud format was generated from the body segmentation, and image projections were simulated using the geometries of a Varian 4D kV cone-beam CT (CBCT) system. Respiratory signals were extracted from the segmented diaphragm image (reference method) and from the OSI, where a Gaussian Mixture Model and Principal Component Analysis (PCA) were used for image registration and dimension reduction, respectively. Breathing frequencies were compared using the Fast Fourier Transform. Consistency of 4DCBCT images reconstructed using the Maximum Likelihood Expectation Maximization algorithm was also evaluated quantitatively, where higher consistency is indicated by a lower root-mean-square error (RMSE), a structural similarity index (SSIM) value closer to 1, and a larger peak signal-to-noise ratio (PSNR). RESULTS High consistency of breathing frequencies was observed between the diaphragm-based (0.232 Hz) and OSI-based (0.251 Hz) signals, with a slight discrepancy of 0.019 Hz. Using the end of expiration (EOE) and end of inspiration (EOI) phases as examples, the mean ± 1 SD values over the 80 transverse, 100 coronal and 120 sagittal planes were 0.967, 0.972, 0.974 (SSIM); 1.657 ± 0.368, 1.464 ± 0.104, 1.479 ± 0.297 (RMSE); and 40.501 ± 1.737, 41.532 ± 1.464, 41.553 ± 1.910 (PSNR) for the EOE; and 0.969, 0.973, 0.973 (SSIM); 1.686 ± 0.278, 1.422 ± 0.089, 1.489 ± 0.238 (RMSE); and 40.535 ± 1.539, 41.605 ± 0.534, 41.401 ± 1.496 (PSNR) for the EOI, respectively. CONCLUSIONS This work proposed and evaluated a novel respiratory phase sorting approach for 4D imaging using optical surface signals, which can potentially be applied to precision radiotherapy. Its potential advantages are that it is non-ionizing, non-invasive, non-contact, and more compatible with various anatomic regions and treatment/imaging systems.
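
A stripped-down version of the signal-extraction step described above, reducing the surface point clouds to a one-dimensional respiratory trace and reading off its dominant frequency, is sketched below with numpy. The centroid-plus-PCA reduction and the sampling rate are assumptions for illustration; the paper registers the point clouds with a Gaussian Mixture Model before applying PCA.

```python
import numpy as np

def breathing_signal_from_surface(surface_frames):
    """Reduce surface point clouds of shape (T, N, 3) to a 1D respiratory signal by
    projecting per-frame centroid motion onto its first principal axis (simplified)."""
    centroids = surface_frames.mean(axis=1)                 # (T, 3)
    centered = centroids - centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

def dominant_frequency(signal, fs_hz):
    """Peak of the FFT magnitude spectrum, ignoring the DC component."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs_hz)
    return freqs[1:][np.argmax(spectrum[1:])]

# toy usage: 0.25 Hz breathing motion sampled at 10 Hz
t = np.arange(0, 60, 0.1)
frames = np.zeros((t.size, 100, 3))
frames[:, :, 2] = 5.0 * np.sin(2 * np.pi * 0.25 * t)[:, None]
print(dominant_frequency(breathing_signal_from_surface(frames), fs_hz=10.0))   # ~0.25 Hz
```
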
Affiliation(s)
- Zhengkun Dong: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China; Institute of Medical Technology, Peking University Health Science Center, Beijing 100191, China
- Shutong Yu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China; Institute of Medical Technology, Peking University Health Science Center, Beijing 100191, China
- Adam Szmul: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Jingyuan Wang: Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Junfeng Qi: Department of Engineering Physics, Tsinghua University, Beijing 100084, China
- Hao Wu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Junyu Li: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Zihong Lu: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Yibao Zhang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital & Institute, Beijing 100142, China

9. Zhang Y, Shao HC, Pan T, Mengke T. Dynamic cone-beam CT reconstruction using spatial and temporal implicit neural representation learning (STINR). Phys Med Biol 2023;68:045005. [PMID: 36638543] [PMCID: PMC10087494] [DOI: 10.1088/1361-6560/acb30d]
Abstract
Objective. Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolutions to enable applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection sample available for each CBCT reconstruction (one projection for one CBCT volume). Approach. We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR mapped the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimized the neuron weightings of the MLPs via acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we also introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal mapping to address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended-cardiac-torso (XCAT) phantom and a patient 4D-CBCT dataset to simulate different lung motion scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity. The XCAT scenarios also contain inter-scan anatomical variations including tumor shrinkage and tumor position change. Main results. STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung target to an average center-of-mass error of 1-2 mm, with corresponding relative errors of reconstructed dynamic CBCTs around 10%. Significance. STINR offers a general framework allowing accurate dynamic CBCT reconstruction for image-guided radiotherapy. It is a one-shot learning method that does not rely on pre-training and is not susceptible to generalizability issues. It also allows natural super-resolution. It can be readily applied to other imaging modalities as well.
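
The PCA-based motion prior mentioned above represents each deformation vector field as a mean field plus a weighted sum of principal components, so the temporal mapping only needs to supply a few scalar weights per time point. A minimal numpy sketch of that composition follows; the field shape, the number of components, and the weights are illustrative.

```python
import numpy as np

def dvf_from_pca_model(mean_dvf, principal_components, weights):
    """Compose a DVF from a PCA motion model: DVF = mean + sum_k w_k * PC_k."""
    dvf = mean_dvf.copy()
    for w, pc in zip(weights, principal_components):
        dvf += w * pc
    return dvf

# toy model with two principal motion components on a (3, z, y, x) field
mean_dvf = np.zeros((3, 16, 16, 16))
pcs = [np.random.randn(3, 16, 16, 16) for _ in range(2)]
dvf_t = dvf_from_pca_model(mean_dvf, pcs, weights=[1.2, -0.4])
```
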
Affiliation(s)
- You Zhang: Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America
- Hua-Chieh Shao: Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America
- Tinsu Pan: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- Tielige Mengke: Advanced Imaging and Informatics in Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America

10. Shao HC, Wang J, Bai T, Chun J, Park JC, Jiang S, Zhang Y. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling. Phys Med Biol 2022;67. [DOI: 10.1088/1361-6560/ac6b7b]
Abstract
Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localizing the tumor from the scarce projections. For liver radiotherapy, this challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissues. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (average localization error <1.2 (±1.2) mm). Significance. The method demonstrates its potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking and real-time plan adaptation. The method can be adapted to other anatomical sites as well.

11. Jiang Z, Zhang Z, Chang Y, Ge Y, Yin FF, Ren L. Enhancement of 4-D cone-beam computed tomography (4D-CBCT) using a dual-encoder convolutional neural network (DeCNN). IEEE Trans Radiat Plasma Med Sci 2022;6:222-230. [PMID: 35386935] [PMCID: PMC8979258] [DOI: 10.1109/trpms.2021.3133510]
Abstract
4D-CBCT is a powerful tool to provide respiration-resolved images for moving-target localization. However, projections in each respiratory phase are intrinsically under-sampled under clinical scanning time and imaging dose constraints. Images reconstructed by compressed sensing (CS)-based methods suffer from blurred edges. Introducing the average-4D-image constraint to the CS-based reconstruction, such as prior-image-constrained CS (PICCS), can improve the edge sharpness of stable structures. However, PICCS can lead to motion artifacts in the moving regions. In this study, we proposed a dual-encoder convolutional neural network (DeCNN) to realize average-image-constrained 4D-CBCT reconstruction. The proposed DeCNN has two parallel encoders to extract features from both the under-sampled target-phase images and the average images. The features are then concatenated and fed into the decoder for high-quality target-phase image reconstruction. The 4D-CBCT reconstructed by the proposed DeCNN from real lung cancer patient data showed (1) qualitatively, clear and accurate edges for both stable and moving structures; (2) quantitatively, low intensity errors, high peak signal-to-noise ratio, and high structural similarity compared to the ground truth images; and (3) superior quality to those reconstructed by several other state-of-the-art methods including back-projection, CS total-variation, PICCS, and the single-encoder CNN. Overall, the proposed DeCNN is effective in exploiting the average-image constraint to improve 4D-CBCT image quality.
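
The dual-encoder layout described above can be sketched in a few lines of PyTorch: two parallel convolutional encoders, feature concatenation, and a decoder. The 2D toy network below uses arbitrary channel counts and depth and omits the multi-scale details of the published DeCNN.

```python
import torch
import torch.nn as nn

class DualEncoderNet(nn.Module):
    """Toy 2D stand-in for a dual-encoder CNN: one encoder for the under-sampled phase
    image, one for the average image; features are concatenated before decoding."""
    def __init__(self, ch=16):
        super().__init__()
        def encoder():
            return nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.enc_phase = encoder()
        self.enc_avg = encoder()
        self.decoder = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, phase_img, avg_img):
        feats = torch.cat([self.enc_phase(phase_img), self.enc_avg(avg_img)], dim=1)
        return self.decoder(feats)

net = DualEncoderNet()
out = net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))   # -> (1, 1, 64, 64)
```
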
Affiliation(s)
- Zhuoran Jiang: Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Zeyu Zhang: Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Yushi Chang: Department of Radiation Oncology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
- Yun Ge: School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing 210046, China
- Fang-Fang Yin: Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, USA; Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, China
- Lei Ren: Department of Radiation Oncology, University of Maryland, Baltimore, MD 21201, USA

12. Peng T, Jiang Z, Chang Y, Ren L. Real-time markerless tracking of lung tumors based on 2-D fluoroscopy imaging using convolutional LSTM. IEEE Trans Radiat Plasma Med Sci 2022;6:189-199. [PMID: 35386934] [PMCID: PMC8979268] [DOI: 10.1109/trpms.2021.3126318]
Abstract
Purpose To investigate the feasibility of tracking targets in 2D fluoroscopic images using a novel deep learning network. Methods Our model is designed to capture the consistent motion of tumors in fluoroscopic images with a neural network. Specifically, the model is trained by generative adversarial methods. The network has a coarse-to-fine architecture, and convolutional LSTM (long short-term memory) modules are introduced to account for the temporal correlation between different frames of the fluoroscopic images. The model was trained and tested on a digital XCAT phantom in two studies. Series of coherent 2D fluoroscopic images representing the full respiration cycle were fed into the model to predict the localized tumor regions. In the first study, testing a broad range of scenarios, phantoms of different scales, tumor positions, sizes, and respiration amplitudes were generated to comprehensively evaluate the accuracy of the model. In the second study, testing a specific sample, phantoms were generated with fixed body and tumor sizes but different respiration amplitudes to investigate the effect of motion amplitude on tracking accuracy. Tracking accuracy was quantitatively evaluated using intersection over union (IOU), tumor area difference, and center-of-mass difference (COMD). Results In the first, comprehensive study, the mean IOU and Dice coefficient reached 0.93 ± 0.04 and 0.96 ± 0.02, the mean tumor area difference was 4.34% ± 4.04%, and the COMD was on average 0.16 cm and 0.07 cm in the SI (superior-inferior) and LR (left-right) directions, respectively. In the second, amplitude study, the mean IOU and Dice coefficient reached 0.98 and 0.99, the mean tumor area difference was 0.17%, and the COMD was on average 0.03 cm and 0.01 cm in the SI and LR directions, respectively. These results demonstrated the robustness of our model against breathing variations. Conclusion Our study showed the feasibility of using deep learning to track targets in x-ray fluoroscopic projection images without the aid of markers. The technique can be valuable for both pre-treatment and during-treatment real-time target verification using fluoroscopic imaging in lung SBRT treatments.
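
The tracking metrics reported above (IOU, Dice, and center-of-mass difference) are straightforward to compute from binary tumor masks; a numpy sketch follows, with an assumed example pixel size for converting the COMD to centimeters.

```python
import numpy as np

def tracking_metrics(pred_mask, true_mask, pixel_size_cm=0.1):
    """IOU, Dice coefficient, and per-axis center-of-mass difference (in cm) between
    a predicted and a reference binary mask; pixel size is an assumed example value."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + true.sum()) if (pred.sum() + true.sum()) else 1.0
    center = lambda m: np.array(np.nonzero(m)).mean(axis=1)
    comd = np.abs(center(pred) - center(true)) * pixel_size_cm
    return iou, dice, comd

a = np.zeros((64, 64)); a[20:40, 20:40] = 1
b = np.zeros((64, 64)); b[22:42, 20:40] = 1
print(tracking_metrics(a, b))
```
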
Affiliation(s)
- Tengya Peng: Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, China
- Zhuoran Jiang: Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA; School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu 210046, China
- Yushi Chang: Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Lei Ren: Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21201, USA; Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA

13. Jiang Z, Zhang Z, Chang Y, Ge Y, Yin FF, Ren L. Prior image-guided cone-beam computed tomography augmentation from under-sampled projections using a convolutional neural network. Quant Imaging Med Surg 2021;11:4767-4780. [PMID: 34888188] [DOI: 10.21037/qims-21-114]
Abstract
Background Acquiring sparse-view cone-beam computed tomography (CBCT) is an effective way to reduce the imaging dose. However, images reconstructed by the conventional filtered back-projection method suffer from severe streak artifacts due to the projection under-sampling. Existing deep learning models have demonstrated feasibilities in restoring volumetric structures from the highly under-sampled images. However, because of the inter-patient variabilities, they failed to restore the patient-specific details with the common restoring pattern learned from the group data. Although patient-specific models have been developed by training models using the intra-patient data and have shown effectiveness in restoring the patient-specific details, the models have to be retrained to be exclusive for each patient. It is highly desirable to develop a generalized model that can utilize the patient-specific information for the under-sampled image augmentation. Methods In this study, we proposed a merging-encoder convolutional neural network (MeCNN) to realize the prior image-guided under-sampled CBCT augmentation. Instead of learning the patient-specific structures, the proposed model learns a generalized pattern of utilizing the patient-specific information in the prior images to facilitate the under-sampled image enhancement. Specifically, the MeCNN consists of a merging-encoder and a decoder. The merging-encoder extracts image features from both the prior CT images and the under-sampled CBCT images, and merges the features at multi-scale levels via deep convolutions. The merged features are then connected to the decoders via shortcuts to yield high-quality CBCT images. The proposed model was tested on both the simulated CBCTs and the clinical CBCTs. The predicted CBCT images were evaluated qualitatively and quantitatively in terms of image quality and tumor localization accuracy. A Mann-Whitney U test was conducted for the statistical analysis. P<0.05 was considered statistically significant. Results The proposed model yields CT-like high-quality CBCT images from only 36 half-fan projections. Compared to other methods, CBCT images augmented by the proposed model have significantly lower intensity errors, significantly higher peak signal-to-noise ratio, and significantly higher structural similarity with respect to the ground truth images. In addition, the proposed method significantly reduced the 3D distance of the CBCT-based tumor localization errors. The CBCT augmentation is also nearly real-time. Conclusions With prior-image guidance, the proposed method is effective in reconstructing high-quality CBCT images from the highly under-sampled projections, considerably reducing the imaging dose and improving the clinical utility of the CBCT.
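
The image-quality comparison described above (intensity errors, PSNR, and SSIM against ground-truth images) can be reproduced in spirit with numpy and scikit-image as sketched below; RMSE stands in for the intensity-error measure, and the data-range convention is an assumption rather than the paper's exact evaluation setup.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality_metrics(test_img, reference_img):
    """RMSE, PSNR, and SSIM of a test image against a reference image."""
    data_range = float(reference_img.max() - reference_img.min())
    rmse = float(np.sqrt(np.mean((test_img - reference_img) ** 2)))
    psnr = peak_signal_noise_ratio(reference_img, test_img, data_range=data_range)
    ssim = structural_similarity(reference_img, test_img, data_range=data_range)
    return rmse, psnr, ssim

reference = np.random.rand(128, 128)
degraded = reference + 0.05 * np.random.randn(128, 128)
print(image_quality_metrics(degraded, reference))
```
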
Affiliation(s)
- Zhuoran Jiang: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Zeyu Zhang: Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yushi Chang: Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yun Ge: School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Fang-Fang Yin: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Medical Physics Graduate Program, Duke University, Durham, NC, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Lei Ren: Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA

14. Shao HC, Huang X, Folkert MR, Wang J, Zhang Y. Automatic liver tumor localization using deep learning-based liver boundary motion estimation and biomechanical modeling (DL-Bio). Med Phys 2021;48:7790-7805. [PMID: 34632589] [DOI: 10.1002/mp.15275]
Abstract
PURPOSE Recently, two-dimensional-to-three-dimensional (2D-3D) deformable registration has been applied to deform liver tumor contours from prior reference images onto estimated cone-beam computed tomography (CBCT) target images to automate on-board tumor localizations. Biomechanical modeling has also been introduced to fine-tune the intra-liver deformation-vector-fields (DVFs) solved by 2D-3D deformable registration, especially at low-contrast regions, using tissue elasticity information and liver boundary DVFs. However, the caudal liver boundary shows low contrast from surrounding tissues in the cone-beam projections, which degrades the accuracy of the intensity-based 2D-3D deformable registration there and results in less accurate boundary conditions for biomechanical modeling. We developed a deep-learning (DL)-based method to optimize the liver boundary DVFs after 2D-3D deformable registration to further improve the accuracy of subsequent biomechanical modeling and liver tumor localization. METHODS The DL-based network was built based on the U-Net architecture. The network was trained in a supervised fashion to learn motion correlation between cranial and caudal liver boundaries to optimize the liver boundary DVFs. Inputs of the network had three channels, and each channel featured the 3D DVFs estimated by the 2D-3D deformable registration along one Cartesian direction (x, y, z). To incorporate patient-specific liver boundary information into the DVFs, the DVFs were masked by a liver boundary ring structure generated from the liver contour of the prior reference image. The network outputs were the optimized DVFs along the liver boundary with higher accuracy. From these optimized DVFs, boundary conditions were extracted for biomechanical modeling to further optimize the solution of intra-liver tumor motion. We evaluated the method using 34 liver cancer patient cases, with 24 for training and 10 for testing. We evaluated and compared the performance of three methods: 2D-3D deformable registration, 2D-3D-Bio (2D-3D deformable registration with biomechanical modeling), and DL-Bio (DL model prediction with biomechanical modeling). The tumor localization errors were quantified through calculating the center-of-mass-errors (COMEs), DICE coefficients, and Hausdorff distance between deformed liver tumor contours and manually segmented "gold-standard" contours. RESULTS The predicted DVFs by the DL model showed improved accuracy at the liver boundary, which translated into more accurate liver tumor localizations through biomechanical modeling. On a total of 90 evaluated images and tumor contours, the average (± sd) liver tumor COMEs of the 2D-3D, 2D-3D-Bio, and DL-Bio techniques were 4.7 ± 1.9 mm, 2.9 ± 1.0 mm, and 1.7 ± 0.4 mm. The corresponding average (± sd) DICE coefficients were 0.60 ± 0.12, 0.71 ± 0.07, and 0.78 ± 0.03; and the average (± sd) Hausdorff distances were 7.0 ± 2.6 mm, 5.4 ± 1.5 mm, and 4.5 ± 1.3 mm, respectively. CONCLUSION DL-Bio solves a general correlation model to improve the accuracy of the DVFs at the liver boundary. With improved boundary conditions, the accuracy of biomechanical modeling can be further increased for accurate intra-liver low-contrast tumor localization.
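
The liver boundary ring structure mentioned above, used to restrict the network's input DVFs to the boundary region, can be approximated with simple morphology; the sketch below builds the ring as a dilated mask minus an eroded mask with an assumed width, which is only a stand-in for the authors' ring definition.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_ring_mask(liver_mask, width_voxels=2):
    """Boundary 'ring': dilated liver mask minus eroded liver mask (width is assumed)."""
    dilated = binary_dilation(liver_mask, iterations=width_voxels)
    eroded = binary_erosion(liver_mask, iterations=width_voxels)
    return dilated & ~eroded

def mask_dvf_to_boundary(dvf, liver_mask, width_voxels=2):
    """Zero a (3, z, y, x) DVF everywhere except on the liver boundary ring."""
    ring = boundary_ring_mask(liver_mask, width_voxels)
    return dvf * ring[None, ...]

liver = np.zeros((32, 32, 32), dtype=bool)
liver[8:24, 8:24, 8:24] = True
dvf = np.random.randn(3, 32, 32, 32)
boundary_dvf = mask_dvf_to_boundary(dvf, liver)
```
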
Affiliation(s)
- Hua-Chieh Shao: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiaokun Huang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Michael R Folkert: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA

15. Zhang Y. An unsupervised 2D-3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation. Phys Med Biol 2021;66. [PMID: 33631734] [DOI: 10.1088/1361-6560/abe9f6]
Abstract
Acquiring CBCTs from a limited scan angle can help to reduce the imaging time, save imaging dose, and allow continuous target localization through arc-based treatments with high temporal resolution. However, insufficient scan angle sampling leads to severe distortions and artifacts in the reconstructed CBCT images, limiting their clinical applicability. 2D-3D deformable registration can map a prior fully-sampled CT/CBCT volume to estimate a new CBCT, based on limited-angle on-board cone-beam projections. The resulting CBCT images estimated by 2D-3D deformable registration can successfully suppress the distortions and artifacts and reflect up-to-date patient anatomy. However, the traditional iterative 2D-3D deformable registration algorithm is very computationally expensive and time-consuming, taking hours to generate a high-quality deformation vector field (DVF) and the corresponding CBCT. In this work, we developed an unsupervised, end-to-end 2D-3D deformable registration framework using convolutional neural networks (2D3D-RegNet) to address the speed bottleneck of the conventional iterative 2D-3D deformable registration algorithm. The 2D3D-RegNet was able to solve the DVFs within 5 seconds for 90 orthogonally-arranged projections covering a combined 90° scan angle, with DVF accuracy superior to 3D-3D deformable registration and on par with the conventional 2D-3D deformable registration algorithm. We also performed a preliminary robustness analysis of 2D3D-RegNet with respect to projection angular sampling frequency variations, as well as scan angle offsets. The synergy of 2D3D-RegNet with biomechanical modeling was also evaluated, demonstrating that 2D3D-RegNet can function as a fast DVF solution core for further DVF refinement.
Affiliation(s)
- You Zhang: Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, United States of America

16. Jiang Z, Yin FF, Ge Y, Ren L. Enhancing digital tomosynthesis (DTS) for lung radiotherapy guidance using patient-specific deep learning model. Phys Med Biol 2021;66:035009. [PMID: 33238249] [DOI: 10.1088/1361-6560/abcde8]
Abstract
Digital tomosynthesis (DTS) has been proposed as a fast, low-dose imaging technique for image-guided radiation therapy (IGRT). However, due to the limited scanning angle, DTS reconstructed by the conventional FDK method suffers from significant distortions and poor plane-to-plane resolution without full volumetric information, which severely limits its capability for image guidance. Although existing deep learning-based methods showed the feasibility of restoring volumetric information in DTS, they ignored inter-patient variabilities by training the model using group patients. Consequently, the restored images still suffered from blurred and inaccurate edges. In this study, we presented a DTS enhancement method based on a patient-specific deep learning model to recover the volumetric information in DTS images. The main idea is to use patient-specific prior knowledge to train the model to learn the patient-specific correlation between DTS and the ground truth volumetric images. To validate the performance of the proposed method, we used both simulated and real on-board projections from lung cancer patient data. The results demonstrated the benefits of the proposed method: (1) qualitatively, DTS enhanced by the proposed method shows CT-like high image quality with accurate and clear edges; (2) quantitatively, the enhanced DTS has low intensity errors and high structural similarity with respect to the ground truth CT images; (3) in the tumor localization study, compared to the ground truth CT-CBCT registration, the enhanced DTS shows 3D localization errors of ≤0.7 mm and ≤1.6 mm for studies using simulated and real projections, respectively; and (4) the DTS enhancement is nearly real-time. Overall, the proposed method is effective and efficient in enhancing DTS to make it a valuable tool for IGRT applications.
Collapse
Affiliation(s)
- Zhuoran Jiang
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, People's Republic of China.,Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America.,Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America.,Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, People's Republic of China
| | - Yun Ge
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, People's Republic of China
| | - Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America.,Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
| |
Collapse
|
17
|
Vergalasova I, Cai J. A modern review of the uncertainties in volumetric imaging of respiratory-induced target motion in lung radiotherapy. Med Phys 2020; 47:e988-e1008. [PMID: 32506452 DOI: 10.1002/mp.14312] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2019] [Revised: 05/15/2020] [Accepted: 05/26/2020] [Indexed: 12/25/2022] Open
Abstract
Radiotherapy has become a critical component of the treatment of all stages and types of lung cancer, oftentimes being the primary gateway to a cure. However, given that radiation can cause harmful side effects depending on how much surrounding healthy tissue is exposed, treatment of the lung can be particularly challenging due to the presence of moving targets. Careful implementation of every step in the radiotherapy process is integral to attaining optimal clinical outcomes. With the advent and now widespread use of stereotactic body radiation therapy (SBRT), where extremely large doses are delivered, accurate and precise dose targeting is especially vital to achieve an optimal risk-to-benefit ratio. This has largely become possible due to the rapid development of image-guided technology. Although imaging is critical to the success of radiotherapy, it can often be plagued with uncertainties due to respiratory-induced target motion. There has been, and continues to be, an immense research effort aimed at acknowledging and addressing these uncertainties to further our ability to precisely target radiation treatment. Thus, the goal of this article is to provide a detailed review of the prevailing uncertainties that remain to be investigated across the different imaging modalities, as well as to highlight the more modern solutions to imaging motion and their role in addressing the current challenges.
Collapse
Affiliation(s)
- Irina Vergalasova
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, NJ, USA
| | - Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
| |
Collapse
|
18
|
Jiang Z, Yin FF, Ge Y, Ren L. A multi-scale framework with unsupervised joint training of convolutional neural networks for pulmonary deformable image registration. Phys Med Biol 2020; 65:015011. [PMID: 31783390 DOI: 10.1088/1361-6560/ab5da0] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
To achieve accurate and fast deformable image registration (DIR) for pulmonary CT, we proposed a Multi-scale DIR framework with unsupervised Joint training of Convolutional Neural Network (MJ-CNN). MJ-CNN contains three models at multi-scale levels for a coarse-to-fine DIR to avoid being trapped in a local minimum. It is trained based on image similarity and deformation vector field (DVF) smoothness, requiring no supervision of ground-truth DVF. The three models are first trained sequentially and separately for their own registration tasks, and then are trained jointly for an end-to-end optimization under the multi-scale framework. In this study, MJ-CNN was trained using public SPARE 4D-CT data. The trained MJ-CNN was then evaluated on public DIR-LAB 4D-CT dataset as well as clinical CT-to-CBCT and CBCT-to-CBCT registration. For 4D-CT inter-phase registration, MJ-CNN achieved comparable accuracy to conventional iteration optimization-based methods, and showed the smallest registration errors compared to recently published deep learning-based DIR methods, demonstrating the efficacy of the proposed multi-scale joint training scheme. Besides, MJ-CNN trained using one dataset (SPARE) could generalize to a different dataset (DIR-LAB) acquired by different scanners and imaging protocols. Furthermore, MJ-CNN trained on 4D-CTs also performed well on CT-to-CBCT and CBCT-to-CBCT registration without any re-training or fine-tuning, demonstrating MJ-CNN's robustness against applications and imaging techniques. MJ-CNN took about 1.4 s for DVF estimation and required no manual-tuning of parameters during the evaluation. MJ-CNN is able to perform accurate DIR for pulmonary CT with nearly real-time speed, making it very applicable for clinical tasks.
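Editor's note: a minimal sketch of the coarse-to-fine composition underlying multi-scale registration, assuming displacements are expressed in voxels of each scale's grid and that `solve_dvf` is a placeholder for a per-scale registration network; this is not the MJ-CNN implementation.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_register(fixed, moving, solve_dvf, scales=(4, 2, 1)):
    """Coarse-to-fine composition over three scale levels.
    fixed, moving: (B, 1, D, H, W) tensors; solve_dvf(fixed, moving, init_dvf) is a
    placeholder for a per-scale registration model returning a (B, 3, d, h, w)
    displacement field in voxels of that scale's grid."""
    dvf = None
    for s in scales:
        f = F.avg_pool3d(fixed, s) if s > 1 else fixed
        m = F.avg_pool3d(moving, s) if s > 1 else moving
        if dvf is not None:
            # Upsample the coarser estimate to the current grid; displacements double in
            # magnitude because consecutive scale factors here differ by a factor of 2.
            dvf = F.interpolate(dvf, size=f.shape[2:], mode="trilinear",
                                align_corners=True) * 2.0
        dvf = solve_dvf(f, m, dvf)
    return dvf
```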
Collapse
Affiliation(s)
- Zhuoran Jiang
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu 210046, People's Republic of China. Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America
Collapse
|
19
|
Zhang Y, Huang X, Wang J. Advanced 4-dimensional cone-beam computed tomography reconstruction by combining motion estimation, motion-compensated reconstruction, biomechanical modeling and deep learning. Vis Comput Ind Biomed Art 2019; 2:23. [PMID: 32190409 PMCID: PMC7055574 DOI: 10.1186/s42492-019-0033-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Accepted: 11/13/2019] [Indexed: 12/25/2022] Open
Abstract
4-Dimensional cone-beam computed tomography (4D-CBCT) offers several key advantages over conventional 3D-CBCT in moving target localization/delineation, structure de-blurring, target motion tracking, treatment dose accumulation and adaptive radiation therapy. However, the use of the 4D-CBCT in current radiation therapy practices has been limited, mostly due to its sub-optimal image quality from limited angular sampling of cone-beam projections. In this study, we summarized the recent developments of 4D-CBCT reconstruction techniques for image quality improvement, and introduced our developments of a new 4D-CBCT reconstruction technique which features simultaneous motion estimation and image reconstruction (SMEIR). Based on the original SMEIR scheme, biomechanical modeling-guided SMEIR (SMEIR-Bio) was introduced to further improve the reconstruction accuracy of fine details in lung 4D-CBCTs. To improve the efficiency of reconstruction, we recently developed a U-net-based deformation-vector-field (DVF) optimization technique to leverage a population-based deep learning scheme to improve the accuracy of intra-lung DVFs (SMEIR-Unet), without explicit biomechanical modeling. Details of each of the SMEIR, SMEIR-Bio and SMEIR-Unet techniques were included in this study, along with the corresponding results comparing the reconstruction accuracy in terms of CBCT images and the DVFs. We also discussed the application prospects of the SMEIR-type techniques in image-guided radiation therapy and adaptive radiation therapy, and presented potential schemes on future developments to achieve faster and more accurate 4D-CBCT imaging.
Collapse
Affiliation(s)
- You Zhang
- Division of Medical Physics and Engineering, Department of Radiation Oncology, UT Southwestern Medical Center, 2280 Inwood Road, Dallas, TX 75390 USA
| | - Xiaokun Huang
- Division of Medical Physics and Engineering, Department of Radiation Oncology, UT Southwestern Medical Center, 2280 Inwood Road, Dallas, TX 75390 USA
| | - Jing Wang
- Division of Medical Physics and Engineering, Department of Radiation Oncology, UT Southwestern Medical Center, 2280 Inwood Road, Dallas, TX 75390 USA
| |
Collapse
|
20
|
Jiang Z, Chen Y, Zhang Y, Ge Y, Yin FF, Ren L. Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2705-2715. [PMID: 31021791 PMCID: PMC6812588 DOI: 10.1109/tmi.2019.2912791] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Edges tend to be over-smoothed in total variation (TV) regularized under-sampled images. In this paper, symmetric residual convolutional neural network (SR-CNN), a deep learning based model, was proposed to enhance the sharpness of edges and detailed anatomical structures in under-sampled cone-beam computed tomography (CBCT). For training, CBCT images were reconstructed using TV-based method from limited projections simulated from the ground truth CT, and were fed into SR-CNN, which was trained to learn a restoring pattern from under-sampled images to the ground truth. For testing, under-sampled CBCT was reconstructed using TV regularization and was then augmented by SR-CNN. Performance of SR-CNN was evaluated using phantom and patient images of various disease sites acquired at different institutions both qualitatively and quantitatively using structure similarity (SSIM) and peak signal-to-noise ratio (PSNR). SR-CNN substantially enhanced image details in the TV-based CBCT across all experiments. In the patient study using real projections, SR-CNN augmented CBCT images reconstructed from as low as 120 half-fan projections to image quality comparable to the reference fully-sampled FDK reconstruction using 900 projections. In the tumor localization study, improvements in the tumor localization accuracy were made by the SR-CNN augmented images compared with the conventional FDK and TV-based images. SR-CNN demonstrated robustness against noise levels and projection number reductions and generalization for various disease sites and datasets from different institutions. Overall, the SR-CNN-based image augmentation technique was efficient and effective in considerably enhancing edges and anatomical structures in under-sampled 3D/4D-CBCT, which can be very valuable for image-guided radiotherapy.
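Editor's note: the residual-learning idea behind this kind of augmentation can be sketched as a small network that predicts a correction added back to the TV-reconstructed input through a global skip connection; the toy architecture below is an illustration, not the published SR-CNN.

```python
import torch
import torch.nn as nn

class ResidualAugmenter(nn.Module):
    """Toy residual CNN: predicts a sharpening correction that is added back to the
    under-sampled TV reconstruction, so the network only has to learn the difference
    from the fully sampled reference rather than regress the full image."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, tv_recon):
        return tv_recon + self.body(tv_recon)   # global skip connection

# usage sketch:
# augmented = ResidualAugmenter()(torch.randn(1, 1, 32, 64, 64))
```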
Collapse
Affiliation(s)
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, China
| | - Yingxuan Chen
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| | - Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
| | - Yun Ge
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, China
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, China
| | - Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| |
Collapse
|
21
|
Chen Y, Yin FF, Jiang Z, Ren L. Daily edge deformation prediction using an unsupervised convolutional neural network model for low dose prior contour based total variation CBCT reconstruction (PCTV-CNN). Biomed Phys Eng Express 2019; 5:065013. [PMID: 32587754 PMCID: PMC7316357 DOI: 10.1088/2057-1976/ab446b] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
PURPOSE Previously we developed a PCTV method to enhance edge sharpness in low-dose CBCT reconstruction. However, the iterative deformable registration method used for deforming edges from the planning CT to the on-board CBCT is time-consuming and user-dependent. This study aims to automate and accelerate PCTV reconstruction by developing an unsupervised CNN model to bypass the conventional deformable registration. METHODS The new method uses an unsupervised CNN model for deformation prediction and PCTV reconstruction. An unsupervised CNN model with a U-net structure was used to predict deformation vector fields (DVFs) to generate on-board contours for PCTV reconstruction. Paired 3D image volumes of the prior CT and on-board CBCT are the inputs, and DVFs are predicted without the need for ground truths. The model was initially trained on brain MRI images and then fine-tuned using our lung SBRT data. The method was evaluated using lung SBRT patient data. In the intra-patient study, the first n-1 days' CBCTs are used for CNN training to predict the nth day's edge information (n = 2, 3, 4, 5), and 45 half-fan projections covering 360° from the nth day's CBCT are used for reconstruction. In the inter-patient study, images from 10 patients, including the CT and first day's CBCT, are used for training. Results from edge-preserving TV (EPTV), PCTV and PCTV-CNN are compared. RESULTS The cross-correlation between the predicted edge map and the ground truth was on average 0.88 for both the intra-patient and inter-patient studies. PCTV-CNN achieved image quality comparable to PCTV while automating the registration process and reducing the registration time from 1-2 min to 1.4 s. CONCLUSION It is feasible to use an unsupervised CNN to predict the daily deformation of on-board edge information for PCTV-based low-dose CBCT reconstruction. PCTV-CNN has great potential to enhance edge sharpness with high efficiency for low-dose CBCT, improving the precision of on-board target localization and adaptive radiotherapy.
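Editor's note: once a DVF is predicted, the prior edge map is resampled onto the on-board grid; a minimal scipy sketch, assuming a pull-back displacement convention in voxel units (the actual convention of the cited method may differ).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_edge_map(prior_edges, dvf):
    """Resample a prior binary edge map onto the on-board CBCT grid.
    prior_edges: (D, H, W) binary array derived from the planning CT.
    dvf: (3, D, H, W) displacement in voxels; the convention assumed here is a
    pull-back field, i.e. on-board voxel x samples the prior at x + dvf[:, x]."""
    grid = np.indices(prior_edges.shape).astype(np.float32)   # identity coordinates
    warped = map_coordinates(prior_edges.astype(np.float32), grid + dvf,
                             order=1, mode="nearest")
    return warped > 0.5                                        # re-binarize after interpolation
```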
Collapse
Affiliation(s)
- Yingxuan Chen
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, United States of America
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, People's Republic of China
| | - Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, United States of America
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, United States of America
| |
Collapse
|
22
|
Zhang Y, Folkert MR, Huang X, Ren L, Meyer J, Tehrani JN, Reynolds R, Wang J. Enhancing liver tumor localization accuracy by prior-knowledge-guided motion modeling and a biomechanical model. Quant Imaging Med Surg 2019; 9:1337-1349. [PMID: 31448218 PMCID: PMC6685812 DOI: 10.21037/qims.2019.07.04] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 07/10/2019] [Indexed: 11/06/2022]
Abstract
BACKGROUND Pre-treatment liver tumor localization remains a challenging task for radiation therapy, mostly due to the limited tumor contrast against normal liver tissues and the respiration-induced liver tumor motion. Recently, we developed a biomechanical modeling-based, deformation-driven cone-beam CT estimation technique (Bio-CBCT), which achieved substantially improved accuracy in low-contrast liver tumor localization. However, the accuracy of Bio-CBCT is still affected by the limited tissue contrast around the caudal liver boundary, which reduces the accuracy of the boundary condition that is fed into the biomechanical modeling process. In this study, we developed a motion modeling and biomechanical modeling-guided CBCT estimation technique (MM-Bio-CBCT) to further improve the liver tumor localization accuracy by incorporating a motion model into the CBCT estimation process. METHODS MM-Bio-CBCT estimates new CBCT images by deforming a prior high-quality CT or CBCT volume. The deformation vector field (DVF) is solved by iteratively matching the digitally reconstructed radiographs (DRRs) of the deformed prior image to the acquired 2D cone-beam projections. Using the same solved DVF, the liver tumor volume contoured on the prior image can be transferred onto the new CBCT image for automatic tumor localization. To maximize the accuracy of the solved DVF, MM-Bio-CBCT employs two strategies for additional DVF optimization: (I) prior-knowledge-guided liver boundary motion modeling, with motion patterns extracted from a prior 4D imaging set such as 4D-CTs/4D-CBCTs, to improve the liver boundary DVF accuracy; and (II) finite-element-analysis-based biomechanical modeling of the liver volume to improve the intra-liver DVF accuracy. We evaluated the accuracy of MM-Bio-CBCT on both digital extended-cardiac-torso (XCAT) phantom images and real liver patient images. The liver tumor localization accuracy of MM-Bio-CBCT was evaluated and compared with that of the purely intensity-driven 2D-3D deformation technique, the 2D-3D deformation technique with motion modeling, and the Bio-CBCT technique. Metrics including the DICE coefficient and the center-of-mass error (COME) were assessed for quantitative evaluation. RESULTS Using 20 limited-view projections for CBCT estimation, the average (± SD) DICE coefficients between the estimated and the 'gold-standard' liver tumors in the XCAT study were 0.57±0.31, 0.78±0.26, 0.83±0.21, and 0.89±0.11 for the 2D-3D deformation, 2D-3D deformation with motion modeling, Bio-CBCT and MM-Bio-CBCT techniques, respectively. Using 20 projections for estimation, the patient study yielded average DICE results of 0.63±0.21, 0.73±0.13, 0.78±0.12, and 0.83±0.09, respectively. MM-Bio-CBCT localized the liver tumor to an average COME of ~2 mm for both the XCAT and the liver patient studies. CONCLUSIONS Compared to Bio-CBCT, MM-Bio-CBCT further improves the accuracy of liver tumor localization. MM-Bio-CBCT can potentially be used for pre-treatment liver tumor localization and intra-treatment liver tumor location verification to achieve substantial radiotherapy margin reduction.
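Editor's note: a minimal sketch of the two reported evaluation metrics, DICE and center-of-mass error (COME), computed on binary tumor masks; the voxel-spacing argument is an assumption so that COME can be reported in mm.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """DICE similarity between two binary tumor masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def center_of_mass_error(mask_a, mask_b, spacing_mm=(1.0, 1.0, 1.0)):
    """COME: Euclidean distance (mm) between the centroids of two binary masks."""
    com_a = np.argwhere(mask_a).mean(axis=0) * np.asarray(spacing_mm)
    com_b = np.argwhere(mask_b).mean(axis=0) * np.asarray(spacing_mm)
    return float(np.linalg.norm(com_a - com_b))
```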
Collapse
Affiliation(s)
- You Zhang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Michael R. Folkert
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Xiaokun Huang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Jeffrey Meyer
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, USA
| | - Joubin Nasehi Tehrani
- Department of Radiation Oncology, University of Virginia Medical Center, Charlottesville, VA, USA
| | - Robert Reynolds
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| |
Collapse
|
23
|
Chen Y, Yin FF, Zhang Y, Zhang Y, Ren L. Low dose cone-beam computed tomography reconstruction via hybrid prior contour based total variation regularization (hybrid-PCTV). Quant Imaging Med Surg 2019; 9:1214-1228. [PMID: 31448208 DOI: 10.21037/qims.2019.06.02] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Background Previously, we developed a prior contour based total variation (PCTV) method that uses edge information derived from prior images for edge enhancement in low-dose cone-beam computed tomography (CBCT) reconstruction. However, the accuracy of edge enhancement in PCTV is affected by deformable registration errors and anatomical changes from the prior to the on-board images. In this study, we develop a hybrid-PCTV method to address this limitation and enhance the robustness and accuracy of the PCTV method. Methods The planning CT is used as the prior image and is deformably registered with the on-board CBCT reconstructed by the edge-preserving TV (EPTV) method. Edges derived from the planning CT are deformed based on the resulting deformation vector fields to generate on-board edges for edge enhancement in PCTV reconstruction. A reference CBCT is reconstructed from the simulated projections of the deformed planning CT. An image similarity map is then calculated between the reference and on-board CBCT using the structural similarity index (SSIM) method to estimate local registration accuracy. The hybrid-PCTV method enhances the edge information based on a weighted edge map that combines edges from both the PCTV and EPTV methods. Higher weighting is given to PCTV edges in regions with high registration accuracy and to EPTV edges in regions with low registration accuracy. The hybrid-PCTV method was evaluated using both digital extended-cardiac-torso (XCAT) phantom and lung patient data. In the XCAT study, breathing amplitude change, tumor shrinkage and a new tumor were simulated from CT to CBCT. In the patient study, both simulated and real projections of lung patients were used for reconstruction. Results were compared with both the EPTV and PCTV methods. Results EPTV led to blurred bony structures due to missing edge information, and PCTV led to blurred tumor edges due to inaccurate edge information caused by errors in the deformable registration. In contrast, hybrid-PCTV enhanced the edges of both bone and tumor. In the XCAT study using 30 half-fan CBCT projections, compared with ground truth, relative errors (REs) were 1.3%, 1.1% and 0.9% and edge cross-correlations were 0.66, 0.68 and 0.71 for EPTV, PCTV and hybrid-PCTV, respectively. Moreover, in the lung patient data, hybrid-PCTV avoided the erroneous edge enhancement of the PCTV method while maintaining enhancement of the correct edges. Conclusions Hybrid-PCTV further improved the robustness and accuracy of PCTV by accounting for uncertainties in deformable registration and anatomical changes between prior and on-board images. The accurate edge enhancement in hybrid-PCTV will be valuable for target localization in radiation therapy.
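Editor's note: the SSIM-gated fusion of prior-derived (PCTV) and image-derived (EPTV) edges can be sketched per slice with scikit-image; the linear blend below is an illustrative weighting under that assumption, not the published formula.

```python
import numpy as np
from skimage.metrics import structural_similarity

def hybrid_edge_map(ref_slice, onboard_slice, pctv_edges, eptv_edges):
    """Blend prior-derived (PCTV) and image-derived (EPTV) edges for one slice.
    Where the deformed-prior reference agrees with the on-board image (high local
    SSIM), trust the prior edges; elsewhere fall back to the EPTV edges."""
    _, ssim_map = structural_similarity(
        ref_slice, onboard_slice,
        data_range=float(onboard_slice.max() - onboard_slice.min()),
        full=True)
    weight = np.clip(ssim_map, 0.0, 1.0)      # local registration-confidence proxy
    return weight * pctv_edges + (1.0 - weight) * eptv_edges
```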
Collapse
Affiliation(s)
- Yingxuan Chen
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA.,Medical Physics Graduate Program, Duke Kunshan University, Kunshan 215316, China
| | - Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - You Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| |
Collapse
|
24
|
Guo M, Chee G, O'Connell D, Dhou S, Fu J, Singhrao K, Ionascu D, Ruan D, Lee P, Low DA, Zhao J, Lewis JH. Reconstruction of a high-quality volumetric image and a respiratory motion model from patient CBCT projections. Med Phys 2019; 46:3627-3639. [PMID: 31087359 DOI: 10.1002/mp.13595] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Revised: 04/10/2019] [Accepted: 05/08/2019] [Indexed: 12/25/2022] Open
Abstract
PURPOSE To develop and evaluate a method of reconstructing a patient- and treatment-day-specific volumetric image and motion model from free-breathing cone-beam projections and respiratory surrogate measurements. This Motion-Compensated Simultaneous Algebraic Reconstruction Technique (MC-SART) generates and uses a motion model derived directly from the cone-beam projections, without requiring prior motion measurements from 4DCT, and can compensate for both inter- and intra-bin deformations. The motion model can be used to generate images at arbitrary breathing points, which can be used for estimating volumetric images during treatment delivery. METHODS MC-SART was formulated as simultaneous image reconstruction and motion model estimation. For image reconstruction, projections were first binned according to external surrogate measurements. Projections in each bin were used to reconstruct a set of volumetric images using MC-SART. The motion model was estimated based on deformable image registration between the reconstructed bins and least squares fitting to the model parameters. The model was then used to compensate for motion in both the projection and backprojection operations in the subsequent image reconstruction iterations. These updated images were in turn used to update the motion model, and the algorithm alternated between the two steps. The final output is a volumetric reference image and a motion model that can be used to generate images at any other time point from surrogate measurements. RESULTS A retrospective patient dataset consisting of eight lung cancer patients was used to evaluate the method. The absolute intensity differences in the lung regions compared to ground truth were 50.8 ± 43.9 HU in peak-exhale phases (reference) and 80.8 ± 74.0 HU in peak-inhale phases (generated). The 50th percentile of the voxel registration error of all voxels in the lung regions with >5 mm amplitude was 1.3 mm. MC-SART was also applied to measured patient cone-beam projections acquired with a linac-mounted CBCT system. Results from these patient data demonstrated the feasibility of MC-SART and showed qualitative image quality improvements compared to other state-of-the-art algorithms. CONCLUSION We have developed a simultaneous image reconstruction and motion model estimation method that uses cone-beam computed tomography (CBCT) projections and respiratory surrogate measurements to reconstruct a high-quality reference image and motion model of a patient in the treatment position. The method provided superior performance in both HU accuracy and positional accuracy compared to other existing methods. The resultant reference image and motion model can be combined with respiratory surrogate measurements to generate volumetric images representing patient anatomy at arbitrary time points.
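Editor's note: the "least squares fitting to model parameters" step can be illustrated by fitting each voxel's displacement as a linear function of the respiratory surrogate; whether the published motion model is actually linear in the surrogate is an assumption made only for this sketch.

```python
import numpy as np

def fit_linear_motion_model(dvfs, surrogate):
    """Fit each voxel's displacement as a linear function of a respiratory surrogate.
    dvfs: (n_bins, 3, D, H, W) displacements of each bin relative to the reference bin.
    surrogate: (n_bins,) surrogate amplitude of each bin.
    Returns (slope, intercept), each (3, D, H, W), so dvf(s) ~ slope * s + intercept."""
    n_bins = dvfs.shape[0]
    design = np.stack([surrogate, np.ones(n_bins)], axis=1)        # (n_bins, 2) design matrix
    targets = dvfs.reshape(n_bins, -1)                             # (n_bins, 3*D*H*W)
    coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)      # (2, 3*D*H*W)
    slope, intercept = coeffs.reshape(2, *dvfs.shape[1:])
    return slope, intercept

def predict_dvf(slope, intercept, surrogate_value):
    """Motion-compensation field for an arbitrary surrogate reading."""
    return slope * surrogate_value + intercept
```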
Collapse
Affiliation(s)
- Minghao Guo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.,Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Geraldine Chee
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Dylan O'Connell
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Salam Dhou
- Department of Computer Science and Engineering, American University of Sharjah, Sharjah, 26666, United Arab Emirates
| | - Jie Fu
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Kamal Singhrao
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Dan Ionascu
- Department of Radiation Oncology, College of Medicine, University of Cincinnati, Cincinnati, OH, 45221, USA
| | - Dan Ruan
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Percy Lee
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Daniel A Low
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| | - Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - John H Lewis
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
| |
Collapse
|
25
|
Zhang Y, Yin FF, Ren L. First clinical retrospective investigation of limited projection CBCT for lung tumor localization in patients receiving SBRT treatment. Phys Med Biol 2019; 64:10NT01. [PMID: 31018195 DOI: 10.1088/1361-6560/ab1c0c] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
To clinically investigate the limited-projection CBCT (LP-CBCT) technology for daily positioning of patients receiving breath-hold lung SBRT radiation treatment and to investigate the feasibility of reconstructing fast 4D-CBCT from 1 min 3D-CBCT scan. Eleven patients who underwent breath-hold lung SBRT radiation treatment were scanned daily with on-board full-projection CBCT (CBCT) using half-fan scan. A subset of the CBCT projections and the prior planning CT were used to estimate the LP-CBCT images using the weighted free-form deformation method. The limited projections are clusteringly sampled within fifteen sub-angles in 360° in order to simulate the fast 1 min scan for 4D-CBCT. The estimated LP-CBCTs were rigidly registered to the planning CT to determine the clinical shifts needed for patient setup corrections, which were compared with shifts determined by the CBCT for evaluation. Both manual and automatic registrations were performed in order to compare the systematic registration errors. Fifty CBCT volumes were obtained from the eleven patients in fifty fractions for this pilot clinical study. For the CBCT images, the mean (±standard deviation) shifts between CBCT and planning CT from manual registration in left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions are 1.1 ± 1.2 mm, 2.1 ± 1.9 mm, 5.2 ± 3.6 mm, respectively. The mean deviation difference between shifts determined by CBCT and LP-CBCT images are 0.3 ± 0.5 mm, 0.5 ± 0.8 mm, 0.4 ± 0.3 mm, in LR, AP, and SI directions, respectively. The mean vector length of CBCT shift for all fractions is 6.1 ± 3.6 mm, and the mean vector length difference between CBCT and LP-CBCT for all fractions studied is 1.0 ± 0.9 mm. The automatic registrations yield similar results as manual registrations. The pilot clinical study shows that LP-CBCT localization offers comparable accuracy to CBCT localization for daily tumor positioning while reducing the projection number to 1/10 for patients receiving breath hold lung radiation treatment. The cluster projection sampling in this study also shows the feasibility of reconstructing fast 4D-CBCT from 1 min 3D-CBCT scan.
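Editor's note: a small sketch of how the reported shift-vector statistics can be computed from per-fraction LR/AP/SI couch shifts; the exact definition of the "vector length difference" in the cited study may differ from the reading used here.

```python
import numpy as np

def shift_vector_stats(shifts_cbct, shifts_lpcbct):
    """shifts_cbct, shifts_lpcbct: (n_fractions, 3) couch shifts (LR, AP, SI) in mm
    from full-projection CBCT and LP-CBCT registration, respectively."""
    mean_length = np.linalg.norm(shifts_cbct, axis=1).mean()                 # mean CBCT shift vector length
    mean_diff = np.linalg.norm(shifts_cbct - shifts_lpcbct, axis=1).mean()   # mean length of the difference vector
    return mean_length, mean_diff
```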
Collapse
Affiliation(s)
- Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America
Collapse
|
26
|
Ding GX, Zhang Y, Ren L. Technical Note: Imaging dose resulting from optimized procedures with limited-angle intrafractional verification system during stereotactic body radiation therapy lung treatment. Med Phys 2019; 46:2709-2715. [PMID: 30937910 DOI: 10.1002/mp.13511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Revised: 01/17/2019] [Accepted: 02/15/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE The limited-angle intrafractional verification (LIVE) system was developed to track tumor movement during stereotactic body radiation therapy (SBRT). However, the four-dimensional (4D) MV/kV imaging procedure results in additional radiation dose to patients. This study aims to quantify the imaging radiation dose from optimized MV/kV image acquisition in the LIVE system and to determine whether it exceeds the American Association of Physicists in Medicine Task Group Report 180 imaging dose threshold. METHODS The TrueBeam™ platform with a fully integrated system for image guidance was studied. Monte Carlo-simulated kV and MV beams were calibrated and then used as incident sources in an EGSnrc Monte Carlo dose calculation in a CT image-based patient model. In the three representative lung SBRT treatments evaluated in this study, tumors were located in the patient's posterior left lung, mid-left lung, and right upper lung. The optimized imaging sequence comprised 2 to 7 arcs, acquired between adjacent three-dimensional (3D)/IMRT beams, with multiple simultaneous kV (125 kVp) and MV (6 MV) image projections in each arc, for different optimization scenarios. The MV imaging fields were generally confined to the treatment target, while kV images were acquired with a normal open field size and a full bow-tie filter. RESULTS In a seven-arc acquisition case (the highest imaging dose scenario), the maximum kV imaging doses to 50% of the tissue volume (D50 from DVHs) for the spinal cord, right lung, heart, left lung, and target were 0.4, 0.4, 0.6, 0.7, and 1.4 cGy, respectively. The corresponding MV imaging doses were 0.1 cGy to the spinal cord, right lung, heart, and left lung, and 11 cGy to the target. In contrast, the maximum radiation dose from two cases treated with two Volumetric-Modulated Arc Therapy (VMAT) fields and two-arc image acquisitions was approximately 30% of that of the seven-arc acquisition. CONCLUSIONS We have evaluated the additional radiation dose resulting from optimized LIVE system MV/kV image acquisitions in two best-case (least imaging dose) and one worst-case (highest imaging dose) lung SBRT treatment scenarios. The results show that these MV/kV imaging doses are comparable to those resulting from current imaging procedures used in Image-Guided Radiation Therapy (IGRT) and are within the dose threshold of 5% of the target dose recommended by the AAPM TG-180 report.
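Editor's note: a minimal sketch of reading a D50-type metric (the dose received by at least 50% of a structure) off a dose grid and structure mask, as used in the DVH analysis above; array layouts and units are assumptions.

```python
import numpy as np

def dose_to_volume_fraction(dose, mask, fraction=0.5):
    """Minimum dose received by the hottest `fraction` of the structure's voxels
    (fraction=0.5 gives the D50 value read off a cumulative DVH).
    dose: 3D dose grid in cGy; mask: binary structure mask of the same shape."""
    structure_doses = np.sort(dose[mask.astype(bool)])[::-1]   # hottest voxels first
    idx = int(np.ceil(fraction * structure_doses.size)) - 1
    return float(structure_doses[idx])
```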
Collapse
Affiliation(s)
- George X Ding
- Department of Radiation Oncology, Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Yawei Zhang
- Department of Radiation Oncology, Duke University, Durham, NC, USA
| | - Lei Ren
- Department of Radiation Oncology, Duke University, Durham, NC, USA.,Medical Physics Graduate Program, Duke University, Durham, NC, USA
| |
Collapse
|
27
|
Kim DS, Lee S, Kim TH, Kang SH, Kim KH, Shin DS, Kim S, Suh TS. A respiratory-guided 4D digital tomosynthesis. Phys Med Biol 2018; 63:245007. [DOI: 10.1088/1361-6560/aaeddb] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
28
|
Zhang Y, Folkert MR, Li B, Huang X, Meyer JJ, Chiu T, Lee P, Tehrani JN, Cai J, Parsons D, Jia X, Wang J. 4D liver tumor localization using cone-beam projections and a biomechanical model. Radiother Oncol 2018; 133:183-192. [PMID: 30448003 DOI: 10.1016/j.radonc.2018.10.040] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Revised: 10/11/2018] [Accepted: 10/14/2018] [Indexed: 10/27/2022]
Abstract
PURPOSE To improve the accuracy of liver tumor localization, this study tests a biomechanical modeling-guided liver cone-beam CT (CBCT) estimation (Bio-CBCT-est) technique, which generates new CBCTs by deforming a prior high-quality CT or CBCT image using deformation vector fields (DVFs). The DVFs can be used to propagate tumor contours from the prior image to new CBCTs for automatic 4D tumor localization. METHODS/MATERIALS To solve the DVFs, the Bio-CBCT-est technique employs an iterative scheme that alternates between intensity-driven 2D-3D deformation and biomechanical modeling-guided DVF regularization and optimization. The 2D-3D deformation step solves DVFs by matching digitally reconstructed radiographs of the 3D deformed prior image to 2D phase-sorted on-board projections according to imaging intensities. This step's accuracy is limited at low-contrast intra-liver regions without sufficient intensity variations. To boost the DVF accuracy in these regions, we use the intensity-driven DVFs solved at higher-contrast liver boundaries to fine-tune the intra-liver DVFs by finite element analysis-based biomechanical modeling. We evaluated Bio-CBCT-est's accuracy with seven liver cancer patient cases. For each patient, we simulated 4D cone-beam projections from 4D-CT images, and used these projections for Bio-CBCT-est based image estimations. After Bio-CBCT-est, the DVF-propagated liver tumor/cyst contours were quantitatively compared with the manual contours on the original 4D-CT 'reference' images, using the DICE similarity index, the center-of-mass-error (COME), the Hausdorff distance (HD) and the voxel-wise cross-correlation (CC) metrics. In addition to simulation, we also performed a preliminary study to qualitatively evaluate the Bio-CBCT-est technique via clinically acquired cone beam projections. A quantitative study using an in-house deformable liver phantom was also performed. RESULTS Using 20 projections for image estimation, the average (±s.d.) DICE index increased from 0.48 ± 0.13 (by 2D-3D deformation) to 0.77 ± 0.08 (by Bio-CBCT-est), the average COME decreased from 7.7 ± 1.5 mm to 2.2 ± 1.2 mm, the average HD decreased from 10.6 ± 2.2 mm to 5.9 ± 2.0 mm, and the average CC increased from -0.004 ± 0.216 to 0.422 ± 0.206. The tumor/cyst trajectory solved by Bio-CBCT-est matched well with that manually obtained from 4D-CT reference images. CONCLUSIONS Bio-CBCT-est substantially improves the accuracy of 4D liver tumor localization via cone-beam projections and a biomechanical model.
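Editor's note: a minimal sketch of the remaining two contour metrics used here, Hausdorff distance (HD) and voxel-wise cross-correlation (CC); computing HD over all mask voxels rather than extracted surfaces, and using Pearson correlation for CC, are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing_mm=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two binary masks,
    computed here over all mask voxels rather than extracted surfaces."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing_mm)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing_mm)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def voxelwise_cc(img_a, img_b, roi_mask):
    """Pearson cross-correlation of two image volumes over an ROI."""
    a = img_a[roi_mask.astype(bool)].ravel()
    b = img_b[roi_mask.astype(bool)].ravel()
    return float(np.corrcoef(a, b)[0, 1])
```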
Collapse
Affiliation(s)
- You Zhang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA.
| | - Michael R Folkert
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Bin Li
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA; Department of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Xiaokun Huang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Jeffrey J Meyer
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, USA
| | - Tsuicheng Chiu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Pam Lee
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Joubin Nasehi Tehrani
- Department of Radiation Oncology, University of Virginia Medical Center, Charlottesville, USA
| | - Jing Cai
- Department of Radiation Oncology, Duke University, Durham, USA
| | - David Parsons
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Xun Jia
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| | - Jing Wang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, USA
| |
Collapse
|
29
|
Harris W, Wang C, Yin FF, Cai J, Ren L. A Novel method to generate on-board 4D MRI using prior 4D MRI and on-board kV projections from a conventional LINAC for target localization in liver SBRT. Med Phys 2018; 45:3238-3245. [PMID: 29799620 DOI: 10.1002/mp.12998] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2017] [Revised: 04/10/2018] [Accepted: 05/21/2018] [Indexed: 12/25/2022] Open
Abstract
PURPOSE On-board MRI can provide superb soft tissue contrast for improving liver SBRT localization. However, the availability of on-board MRI in clinics is extremely limited. On the contrary, on-board kV imaging systems are widely available on radiotherapy machines, but its capability to localize tumors in soft tissue is limited due to its poor soft tissue contrast. This study aims to explore the feasibility of using an on-board kV imaging system and patient prior knowledge to generate on-board four-dimensional (4D)-MRI for target localization in liver SBRT. METHODS Prior 4D MRI volumes were separated into end of expiration (EOE) phase (MRIprior ) and all other phases. MRIprior was used to generate a synthetic CT at EOE phase (sCTprior ). On-board 4D MRI at each respiratory phase was considered a deformation of MRIprior . The deformation field map (DFM) was estimated by matching DRRs of the deformed sCTprior to on-board kV projections using a motion modeling and free-form deformation optimization algorithm. The on-board 4D MRI method was evaluated using both XCAT simulation and real patient data. The accuracy of the estimated on-board 4D MRI was quantitatively evaluated using Volume Percent Difference (VPD), Volume Dice Coefficient (VDC), and Center of Mass Shift (COMS). Effects of scan angle and number of projections were also evaluated. RESULTS In the XCAT study, VPD/VDC/COMS among all XCAT scenarios were 10.16 ± 1.31%/0.95 ± 0.01/0.88 ± 0.15 mm using orthogonal-view 30° scan angles with 102 projections. The on-board 4D MRI method was robust against the various scan angles and projection numbers evaluated. In the patient study, estimated on-board 4D MRI was generated successfully when compared to the "reference on-board 4D MRI" for the liver patient case. CONCLUSIONS A method was developed to generate on-board 4D MRI using prior 4D MRI and on-board limited kV projections. Preliminary results demonstrated the potential for MRI-based image guidance for liver SBRT using only a kV imaging system on a conventional LINAC.
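Editor's note: a toy sketch of solving motion-model coefficients by projection matching, assuming the deformation field is a mean field plus a weighted sum of principal components, and using an axis-sum projection and a generic scipy optimizer as stand-ins for the cone-beam DRR and the cited optimization algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def build_dvf(weights, mean_dvf, components):
    """DVF = mean + sum_i w_i * PC_i; components: (n_pc, 3, D, H, W)."""
    return mean_dvf + np.tensordot(weights, components, axes=(0, 0))

def warp(volume, dvf):
    """Warp a (D, H, W) volume with a (3, D, H, W) voxel-unit displacement field."""
    grid = np.indices(volume.shape).astype(np.float32)
    return map_coordinates(volume, grid + dvf, order=1, mode="nearest")

def solve_weights(prior_vol, mean_dvf, components, kv_projection, axis=1):
    """Find motion-model weights whose deformed prior best reproduces the measured
    projection; the axis-sum 'projection' is a parallel-beam stand-in for the DRR."""
    def objective(w):
        drr = warp(prior_vol, build_dvf(w, mean_dvf, components)).sum(axis=axis)
        return float(np.mean((drr - kv_projection) ** 2))
    result = minimize(objective, x0=np.zeros(components.shape[0]), method="Powell")
    return result.x
```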
Collapse
Affiliation(s)
- Wendy Harris
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA
| | - Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA.,Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA.,Medical Physics Graduate Program, Duke Kunshan University, 8 Duke Avenue, Kunshan, Jiangsu, 215316, China
| | - Jing Cai
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA.,Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA.,Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, 999077, Hong Kong
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA.,Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA
| |
Collapse
|
30
|
Chen T, Zhang M, Jabbour S, Wang H, Barbee D, Das IJ, Yue N. Principal component analysis-based imaging angle determination for 3D motion monitoring using single-slice on-board imaging. Med Phys 2018; 45:2377-2387. [DOI: 10.1002/mp.12904] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 03/13/2018] [Accepted: 03/22/2018] [Indexed: 01/07/2023] Open
Affiliation(s)
- Ting Chen
- Department of Radiation Oncology; Laura and Isaac Perlmutter Cancer Center New York University Langone Health; New York NY 10016 USA
- Department of Radiation Oncology; Rutgers Cancer Institute of New Jersey; New Brunswick NJ 08901 USA
| | - Miao Zhang
- Department of Radiation Oncology; Rutgers Cancer Institute of New Jersey; New Brunswick NJ 08901 USA
- Department of Medical Physics; Memorial Sloan Kettering Cancer Center; New York NY 10065 USA
| | - Salma Jabbour
- Department of Radiation Oncology; Rutgers Cancer Institute of New Jersey; New Brunswick NJ 08901 USA
| | - Hesheng Wang
- Department of Radiation Oncology; Laura and Isaac Perlmutter Cancer Center New York University Langone Health; New York NY 10016 USA
| | - David Barbee
- Department of Radiation Oncology; Laura and Isaac Perlmutter Cancer Center New York University Langone Health; New York NY 10016 USA
| | - Indra J. Das
- Department of Radiation Oncology; Laura and Isaac Perlmutter Cancer Center New York University Langone Health; New York NY 10016 USA
| | - Ning Yue
- Department of Radiation Oncology; Rutgers Cancer Institute of New Jersey; New Brunswick NJ 08901 USA
| |
Collapse
|
31
|
Chen Y, Yin FF, Zhang Y, Zhang Y, Ren L. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study. Phys Med Biol 2018. [PMID: 29537385 DOI: 10.1088/1361-6560/aab68d] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE Compressed sensing reconstruction using total variation (TV) tends to over-smooth edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance edge information in compressed sensing reconstruction for CBCT. METHODS The edge information is extracted from the prior planning CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contours in the prior CT are then mapped to the CBCT and used as the weight map for TV regularization to enhance edge information in the CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. In the quantitative evaluation, relative error was used to calculate the pixel value difference, and edge cross-correlation was defined as the similarity of edge information between the reconstructed images and the ground truth. RESULTS Compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, compared with ground truth, relative errors were 1.5%, 0.7% and 0.3% and edge cross-correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to projection number reduction. Edge enhancement was reduced slightly with noisy projections, but PCTV was still superior to the other methods. PCTV can maintain resolution while reducing noise in the low-mAs CatPhan reconstruction. Low-contrast edges were preserved better with PCTV than with TV and EPTV. CONCLUSION PCTV preserved edge information as well as reduced streak artifacts and noise in low-dose CBCT reconstruction. PCTV is superior to the TV and EPTV methods in edge enhancement, which can potentially improve localization accuracy in radiation therapy.
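Editor's note: a minimal sketch of TV regularization with a spatially varying weight map that relaxes the penalty on prior-derived edges, which is the core idea of using an edge map as the TV weight; the specific weight values and anisotropic form below are illustrative.

```python
import numpy as np

def weighted_tv(image, edge_map, edge_relax=0.1):
    """Anisotropic TV with a per-voxel weight lowered on prior edges, so the
    reconstruction is allowed to keep sharp gradients there.
    image: (D, H, W) volume; edge_map: binary prior-edge mask on the same grid."""
    w = np.where(edge_map.astype(bool), edge_relax, 1.0)
    tv = 0.0
    for ax in range(image.ndim):
        grad = np.abs(np.diff(image, axis=ax))
        hi = [slice(None)] * image.ndim; hi[ax] = slice(1, None)
        lo = [slice(None)] * image.ndim; lo[ax] = slice(None, -1)
        face_weight = np.minimum(w[tuple(hi)], w[tuple(lo)])   # weight on each gradient face
        tv += np.sum(face_weight * grad)
    return tv
```

In an iterative reconstruction this penalty (or its gradient) would be added to the data-fidelity term, so that edges flagged by the prior contours are penalized less than the rest of the volume.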
Collapse
Affiliation(s)
- Yingxuan Chen
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
Collapse
|
32
|
Huang X, Zhang Y, Wang J. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction. Phys Med Biol 2018; 63:045002. [PMID: 29328048 DOI: 10.1088/1361-6560/aaa730] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited numbers of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details whose quality is degraded due to insufficient projection number, which consequently degrades the reconstructed image quality in corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of solved motion in the organ's fine structures regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.
Collapse
Affiliation(s)
- Xiaokun Huang
- Xiaokun Huang and You Zhang contributed equally to the work
Collapse
|
33
|
Real-Time Whole-Brain Radiation Therapy: A Single-Institution Experience. Int J Radiat Oncol Biol Phys 2017; 100:1280-1288. [PMID: 29397212 DOI: 10.1016/j.ijrobp.2017.12.282] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Revised: 12/19/2017] [Accepted: 12/20/2017] [Indexed: 11/21/2022]
Abstract
PURPOSE To demonstrate the feasibility of a real-time whole-brain radiation therapy (WBRT) workflow, taking advantage of contemporary radiation therapy capabilities and seeking to optimize the clinical workflow for WBRT. METHODS AND MATERIALS We developed a method incorporating the linear accelerator's on-board imaging system for patient simulation, used cone-beam computed tomography (CBCT) data for treatment planning, and delivered the first fraction of prescribed therapy, all during the patient's initial appointment. Simulation was performed in the linear accelerator vault. The acquired CBCT data set was used in a scripted treatment planning protocol, providing inversely planned, automated treatment plan generation. The osseous boundaries of the brain were auto-contoured to create a target volume. Two parallel-opposed beams using field-in-field intensity-modulated radiation therapy covered this target to the user-defined inferior level (C1 or C2). The method was commissioned using an anthropomorphic head phantom and verified using 100 clinically treated patients. RESULTS Whole-brain target heterogeneity was within 95%-107% of the prescription dose, and target coverage compared favorably to standard, manually created 3-dimensional plans. For the commissioning CBCT datasets, the secondary monitor unit verification and the independent 3-dimensional dose distribution comparison between computed and delivered doses were within 2% agreement relative to the scripted auto-plans. On average, the time needed to complete the entire process was 35.1 ± 10.3 minutes from CBCT start to last beam delivered. CONCLUSIONS The real-time WBRT workflow using integrated on-site imaging, planning, quality assurance, and delivery was tested and deemed clinically feasible. The design necessitates a synchronized team consisting of a physician, a physicist, a dosimetrist, and therapists. This work serves as a proof of concept of real-time planning and delivery for other treatment sites.
Collapse
|
34
|
Harris W, Yin FF, Wang C, Zhang Y, Cai J, Ren L. Accelerating volumetric cine MRI (VC-MRI) using undersampling for real-time 3D target localization/tracking in radiation therapy: a feasibility study. Phys Med Biol 2017; 63:01NT01. [PMID: 29087963 PMCID: PMC5756137 DOI: 10.1088/1361-6560/aa9746] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
PURPOSE To accelerate volumetric cine MRI (VC-MRI) using undersampled 2D-cine MRI to provide real-time 3D guidance for gating/target tracking in radiotherapy. METHODS 4D-MRI is acquired during patient simulation. One phase of the prior 4D-MRI is selected as the prior images, designated as MRIprior. The on-board VC-MRI at each time-step is considered a deformation of the MRIprior. The deformation field map is represented as a linear combination of the motion components extracted by principal component analysis from the prior 4D-MRI. The weighting coefficients of the motion components are solved by matching the corresponding 2D-slice of the VC-MRI with the on-board undersampled 2D-cine MRI acquired. Undersampled Cartesian and radial k-space acquisition strategies were investigated. The effects of k-space sampling percentage (SP) and distribution, tumor sizes and noise on the VC-MRI estimation were studied. The VC-MRI estimation was evaluated using XCAT simulation of lung cancer patients and data from liver cancer patients. Volume percent difference (VPD) and Center of Mass Shift (COMS) of the tumor volumes and tumor tracking errors were calculated. RESULTS For XCAT, VPD/COMS were 11.93 ± 2.37%/0.90 ± 0.27 mm and 11.53 ± 1.47%/0.85 ± 0.20 mm among all scenarios with Cartesian sampling (SP = 10%) and radial sampling (21 spokes, SP = 5.2%), respectively. When tumor size decreased, higher sampling rate achieved more accurate VC-MRI than lower sampling rate. VC-MRI was robust against noise levels up to SNR = 20. For patient data, the tumor tracking errors in superior-inferior, anterior-posterior and lateral (LAT) directions were 0.46 ± 0.20 mm, 0.56 ± 0.17 mm and 0.23 ± 0.16 mm, respectively, for Cartesian-based sampling with SP = 20% and 0.60 ± 0.19 mm, 0.56 ± 0.22 mm and 0.42 ± 0.15 mm, respectively, for radial-based sampling with SP = 8% (32 spokes). CONCLUSIONS It is feasible to estimate VC-MRI from a single undersampled on-board 2D cine MRI. Phantom and patient studies showed that the temporal resolution of VC-MRI can potentially be improved by 5-10 times using a 2D cine image acquired with 10-20% k-space sampling.
Collapse
Affiliation(s)
- Wendy Harris
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Medical Physics Graduate Program, Duke Kunshan University, 8 Duke Avenue, Kunshan, Jiangsu, 215316, China
| | - Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| | - You Zhang
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| | - Jing Cai
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| | - Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, North Carolina, 27710, USA
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
| |
|
35
|
Zhang Y, Deng X, Yin FF, Ren L. Image acquisition optimization of a limited-angle intrafraction verification (LIVE) system for lung radiotherapy. Med Phys 2017; 45:340-351. [PMID: 29091287 DOI: 10.1002/mp.12647] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2017] [Revised: 09/24/2017] [Accepted: 10/19/2017] [Indexed: 01/18/2023] Open
Abstract
PURPOSE Limited-angle intrafraction verification (LIVE) has been previously developed for four-dimensional (4D) intrafraction target verification either during arc delivery or between three-dimensional (3D)/IMRT beams. Preliminary studies showed that LIVE can accurately estimate the target volume using kV/MV projections acquired over orthogonal-view 30° scan angles. Currently, the LIVE imaging acquisition requires slow gantry rotation and is not clinically optimized. The goal of this study is to optimize the image acquisition parameters of LIVE for different patient respiratory periods and gantry rotation speeds for the effective clinical implementation of the system. METHODS Limited-angle intrafraction verification imaging acquisition was optimized using a digital anthropomorphic phantom (XCAT) with simulated respiratory periods varying from 3 s to 6 s and gantry rotation speeds varying from 1°/s to 6°/s. LIVE scanning time was optimized by minimizing the number of respiratory cycles needed for the four-dimensional scan, and imaging dose was optimized by minimizing the number of kV and MV projections needed for four-dimensional estimation. The estimation accuracy was evaluated by calculating both the center-of-mass-shift (COMS) and three-dimensional volume-percentage-difference (VPD) between the tumor in the estimated images and the ground truth images. The robustness of LIVE was evaluated with varied respiratory patterns, tumor sizes, and tumor locations in the XCAT simulation. A dynamic thoracic phantom (CIRS) was used to further validate the optimized imaging schemes from the XCAT study with changes in respiratory patterns, tumor sizes, and imaging scanning directions. RESULTS Respiratory periods, gantry rotation speeds, the number of respiratory cycles scanned, and the number of kV/MV projections acquired were all positively correlated with the estimation accuracy of LIVE. A faster gantry rotation speed or a longer respiratory period allowed fewer respiratory cycles to be scanned and fewer kV/MV projections to be acquired to estimate the target volume accurately. Regarding scanning time minimization, for patient respiratory periods of 3-4 s, gantry rotation speeds of 1°/s, 2°/s, and 3-6°/s required scanning five, four, and three respiratory cycles, respectively. For patient respiratory periods of 5-6 s, the corresponding numbers of respiratory cycles required in the scan changed to four, three, and two, respectively. Regarding imaging dose minimization, for patient respiratory periods of 3-4 s, gantry rotation speeds of 1°/s, 2-4°/s, and 5-6°/s required acquiring 7, 5, and 4 kV and MV projections, respectively. For patient respiratory periods of 5-6 s, 5 kV and 5 MV projections were sufficient for all gantry rotation speeds. The optimized LIVE system was robust against breathing pattern, tumor size, and tumor location changes. In the CIRS study, the optimized LIVE system achieved an average COMS/VPD of 0.3 ± 0.1 mm/7.7 ± 2.0% for the scanning-time-priority case and 0.2 ± 0.1 mm/6.1 ± 1.2% for the imaging-dose-priority case among all gantry rotation speeds tested. LIVE was also robust against the different scanning directions investigated. CONCLUSION The LIVE system has been preliminarily optimized for different patient respiratory periods and treatment gantry rotation speeds using digital and physical phantoms. The optimized imaging parameters, including the number of respiratory cycles scanned and the number of kV/MV projections acquired, provide guidelines for optimizing the scanning time and imaging dose of the LIVE system for its future evaluation and clinical implementation through patient studies.
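For illustration, the optimized acquisition settings reported above can be encoded as a small lookup. The sketch below uses the scanning-time-priority cycle counts quoted in the abstract; the bucketing of intermediate gantry speeds and the scan-time arithmetic (cycles × breathing period) are added assumptions for illustration only, not taken from the paper.

```python
# Scanning-time-priority settings from the abstract, condensed to representative speeds
# (3 and 6 deg/s both fall in the reported "3-6 deg/s" group).
# Key: (breathing period bucket, gantry speed in deg/s) -> respiratory cycles to scan.
CYCLES_NEEDED = {
    ("3-4 s", 1): 5, ("3-4 s", 2): 4, ("3-4 s", 3): 3, ("3-4 s", 6): 3,
    ("5-6 s", 1): 4, ("5-6 s", 2): 3, ("5-6 s", 3): 2, ("5-6 s", 6): 2,
}

def approx_scan_time(period_s: float, speed_deg_s: float) -> float:
    """Rough scan time: cycles to scan (from the reported lookup) x breathing period."""
    bucket = "3-4 s" if period_s <= 4 else "5-6 s"
    # Map non-tabulated speeds to the nearest lower tabulated speed (assumption).
    speed_key = max(s for (_, s) in CYCLES_NEEDED if s <= speed_deg_s)
    return CYCLES_NEEDED[(bucket, speed_key)] * period_s

print(approx_scan_time(4, 2))   # -> 16.0 s for a 4 s breathing period at 2 deg/s
```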
Affiliation(s)
- Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA
| | - Xinchen Deng
- Medical Physics Graduate Program, Duke Kunshan University, No. 8 Duke Avenue, Kunshan, Jiangsu, 215316, China
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA.,Medical Physics Graduate Program, Duke Kunshan University, No. 8 Duke Avenue, Kunshan, Jiangsu, 215316, China.,Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA
| | - Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, USA.,Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC, 27705, USA
| |
|
36
|
Hazelaar C, Dahele M, Scheib S, Slotman BJ, Verbakel WF. Verifying tumor position during stereotactic body radiation therapy delivery using (limited-arc) cone beam computed tomography imaging. Radiother Oncol 2017; 123:355-362. [DOI: 10.1016/j.radonc.2017.04.022] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2016] [Revised: 04/26/2017] [Accepted: 04/29/2017] [Indexed: 11/16/2022]
|
37
|
Zhang Y, Ma J, Iyengar P, Zhong Y, Wang J. A new CT reconstruction technique using adaptive deformation recovery and intensity correction (ADRIC). Med Phys 2017; 44:2223-2241. [PMID: 28380247 DOI: 10.1002/mp.12259] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2016] [Revised: 03/26/2017] [Accepted: 03/30/2017] [Indexed: 11/06/2022] Open
Abstract
PURPOSE Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. MATERIALS AND METHODS ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique ("ART-dTV"). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the "ground-truth" volume, normalized by the square root of the sum of squared "ground-truth" voxel intensities. In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the "gold-standard" volumes reconstructed from fully sampled projections. RESULTS For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique, and the ADRIC technique were 14.64%, 19.21%, and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04%, and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, and 6.80% by using 20 projections; and 12.14%, 6.91%, and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new "gold-standard" balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. CONCLUSION The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation, and treatment outcome analysis.
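The relative error metric described above maps directly to code. A minimal sketch, assuming the reconstructed and ground-truth volumes are given as NumPy arrays:

```python
import numpy as np

def relative_error(recon: np.ndarray, ground_truth: np.ndarray) -> float:
    """Relative error as described in the abstract: root-sum-of-squares of the
    voxel-wise intensity difference, normalized by the root-sum-of-squares of the
    ground-truth intensities."""
    return float(np.sqrt(np.sum((recon - ground_truth) ** 2))
                 / np.sqrt(np.sum(ground_truth ** 2)))

# Synthetic example: a reconstruction perturbed by a small additive error.
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 64))
rec = gt + 0.07 * rng.random((64, 64, 64))
print(f"relative error = {relative_error(rec, gt):.1%}")   # roughly 7% for this perturbation
```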
Affiliation(s)
- You Zhang
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
| | - Jianhua Ma
- Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
| | - Puneeth Iyengar
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
| | - Yuncheng Zhong
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
| | - Jing Wang
- Department of Radiation Oncology, UT Southwestern Medical Center at Dallas, Dallas, TX, 75390, USA
| |
|
38
|
Zhang Y, Ren L, Vergalasova I, Yin FF. Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization. Technol Cancer Res Treat 2017; 16:866-878. [PMID: 28449625 PMCID: PMC5547009 DOI: 10.1177/1533034617705716] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
Abstract
Background and Purpose: Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. Materials and Methods: The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from 2 major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beams’ eye view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. Results: The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study. For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV–MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. Conclusion: The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.
Affiliation(s)
- You Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Irina Vergalasova
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| |
|
39
|
Harris W, Zhang Y, Yin FF, Ren L. Estimating 4D-CBCT from prior information and extremely limited angle projections using structural PCA and weighted free-form deformation for lung radiotherapy. Med Phys 2017; 44:1089-1104. [PMID: 28079267 DOI: 10.1002/mp.12102] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2016] [Revised: 11/18/2016] [Accepted: 01/04/2017] [Indexed: 12/25/2022] Open
Abstract
PURPOSE To investigate the feasibility of using structural principal component analysis (PCA) motion modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited-angle projections for potential 4D target verification of lung radiotherapy. METHODS A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a global PCA motion model and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural PCA method was developed to build a structural motion model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from the planning 4D-CT was divided into two structures, tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. An XCAT (computerized patient model) phantom with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from the planning 4D-CT to the on-board volume to evaluate the method. The estimation accuracy was evaluated by the volume percent difference (VPD)/center-of-mass-shift (COMS) between lesions in the estimated and "ground-truth" on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated using data from three lung cancer patients. RESULTS The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or few projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47 ± 2.94% and 0.23 ± 0.22 mm for SMM-WFD and 25.23 ± 19.01% and 2.58 ± 2.54 mm for GMM-FD among all eight XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10°, with VPD/COMS of 6.21 ± 5.61% and 0.39 ± 0.49 mm, and more robust against reduction of the projection number down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. The SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung cancer patients compared to the GMM-FD technique. CONCLUSION Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles and a low number of projections to provide fast, low-dose 4D target verification.
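For illustration, the two ingredients named above—a structural motion model that keeps separate PCA weights for the tumor and for the body excluding the tumor, and a weighted data-fidelity term—can be sketched as follows. The array shapes, the voxel-wise masking of the two sub-DVFs, and the per-pixel projection-domain weighting are simplifying assumptions, not the authors' formulation.

```python
import numpy as np

def structural_dvf(mean_dvf, pcs_tumor, w_tumor, pcs_body, w_body, tumor_mask):
    """Combine per-structure PCA motion components into one DVF.
    pcs_* have shape (K, 3, nz, ny, nx); tumor_mask is a boolean (nz, ny, nx) map."""
    dvf_tumor = mean_dvf + np.tensordot(w_tumor, pcs_tumor, axes=1)
    dvf_body = mean_dvf + np.tensordot(w_body, pcs_body, axes=1)
    mask = tumor_mask[None, ...]            # broadcast over the 3 vector components
    return np.where(mask, dvf_tumor, dvf_body)

def weighted_fidelity(measured_projs, simulated_projs, proj_weights):
    """Weighted data-fidelity term: per-pixel weights emphasize the structure of
    clinical interest (e.g. pixels traversing the tumor) in the projection domain."""
    diff = measured_projs - simulated_projs
    return float(np.sum(proj_weights * diff ** 2))

# Toy usage: 2 components per structure on an 8^3 grid.
shape = (3, 8, 8, 8)
pcs_t = np.random.rand(2, *shape); pcs_b = np.random.rand(2, *shape)
mask = np.zeros((8, 8, 8), dtype=bool); mask[3:5, 3:5, 3:5] = True
dvf = structural_dvf(np.zeros(shape), pcs_t, np.array([1.0, 0.5]),
                     pcs_b, np.array([0.2, 0.1]), mask)
print(dvf.shape)    # -> (3, 8, 8, 8)
```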
Affiliation(s)
- Wendy Harris
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
| | - You Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA.,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
| |
|
40
|
Zhang Y, Yin FF, Zhang Y, Ren L. Reducing scan angle using adaptive prior knowledge for a limited-angle intrafraction verification (LIVE) system for conformal arc radiotherapy. Phys Med Biol 2017; 62:3859-3882. [PMID: 28338470 DOI: 10.1088/1361-6560/aa6913] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
The purpose of this study is to develop an adaptive prior knowledge guided image estimation technique to reduce the scan angle needed in the limited-angle intrafraction verification (LIVE) system for 4D-CBCT reconstruction. The LIVE system has been previously developed to reconstruct 4D volumetric images on-the-fly during arc treatment for intrafraction target verification and dose calculation. In this study, we developed an adaptive constrained free-form deformation reconstruction technique in LIVE to further reduce the scanning angle needed to reconstruct the 4D-CBCT images for faster intrafraction verification. This technique uses free-form deformation with energy minimization to deform prior images to estimate 4D-CBCT based on kV-MV projections acquired over an extremely limited angle (orthogonal 3°) during the treatment. Note that the prior images are adaptively updated using the latest CBCT images reconstructed by LIVE during treatment to utilize the continuity of the respiratory motion. The 4D digital extended-cardiac-torso (XCAT) phantom and a CIRS 008A dynamic thoracic phantom were used to evaluate the effectiveness of this technique. The reconstruction accuracy of the technique was evaluated by calculating both the center-of-mass-shift (COMS) and the 3D volume-percentage-difference (VPD) between the tumor in the reconstructed images and in the true on-board images. The performance of the technique was also assessed with varied breathing signals against scanning angle, lesion size, lesion location, projection sampling interval, and scanning direction. In the XCAT study, using orthogonal-view 3° kV and portal MV projections, this technique achieved an average tumor COMS/VPD of 0.4 ± 0.1 mm/5.5 ± 2.2%, 0.6 ± 0.3 mm/7.2 ± 2.8%, 0.5 ± 0.2 mm/7.1 ± 2.6%, and 0.6 ± 0.2 mm/8.3 ± 2.4% for baseline drift, amplitude variation, phase shift, and patient breathing signal variation, respectively. In the CIRS phantom study, this technique achieved an average tumor COMS/VPD of 0.7 ± 0.1 mm/7.5 ± 1.3% for a 3 cm lesion and 0.6 ± 0.2 mm/11.4 ± 1.5% for a 2 cm lesion in the baseline drift case. The average tumor COMS/VPD values were 0.5 ± 0.2 mm/10.8 ± 1.4%, 0.4 ± 0.3 mm/7.3 ± 2.9%, 0.4 ± 0.2 mm/7.4 ± 2.5%, and 0.4 ± 0.2 mm/7.3 ± 2.8% for the four real patient breathing signals, respectively. Results demonstrated that the adaptive prior knowledge guided image estimation technique with the LIVE system is robust against scanning angle, lesion size, lesion location, and scanning direction. It can estimate on-board images accurately with as few as 6 projections over an orthogonal-view 3° angle. In conclusion, the adaptive prior knowledge guided image reconstruction technique accurately estimates 4D-CBCT images using extremely limited scan angles and numbers of projections. This technique greatly improves the efficiency and accuracy of the LIVE system for ultrafast 4D intrafraction verification of lung SBRT treatments.
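The adaptive element described above—refreshing the prior with the latest LIVE reconstruction so that each new orthogonal 3° estimate starts from up-to-date anatomy—can be sketched as a simple update loop. Only the control flow is illustrated; the estimation step itself is represented by a stand-in function, not the authors' constrained free-form deformation solver.

```python
import numpy as np

def estimate_from_projections(prior_4dcbct, kv_mv_projs):
    """Stand-in for the constrained free-form deformation estimation described in
    the abstract; here it simply returns the prior unchanged so the loop runs."""
    return prior_4dcbct

def live_adaptive_loop(planning_4dct, projection_stream):
    """Adaptive prior update: each new limited-angle (orthogonal ~3 deg) estimate is
    reconstructed from the latest available 4D volume rather than the static plan."""
    prior = planning_4dct                     # initial prior from simulation
    estimates = []
    for kv_mv_projs in projection_stream:     # arrives continuously during arc delivery
        current = estimate_from_projections(prior, kv_mv_projs)
        estimates.append(current)
        prior = current                       # adaptive step: latest estimate becomes the prior
    return estimates

# Toy usage with placeholder arrays (10 phases of a 32^3 volume, 3 projection batches).
plan = np.zeros((10, 32, 32, 32))
stream = [np.zeros((2, 64, 64)) for _ in range(3)]   # one orthogonal kV/MV pair per batch
print(len(live_adaptive_loop(plan, stream)))          # -> 3
```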
Affiliation(s)
- Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, United States of America
|
41
|
Dang J, Yin FF, You T, Dai C, Chen D, Wang J. Simultaneous 4D-CBCT reconstruction with sliding motion constraint. Med Phys 2017; 43:5453. [PMID: 27782722 DOI: 10.1118/1.4959998] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Current approaches using deformation vector fields (DVFs) for motion-compensated 4D cone-beam CT (CBCT) reconstruction typically utilize an isotropically smoothed DVF between different respiration phases. Such an isotropically smoothed DVF does not work well if sliding motion exists between neighboring organs. This study investigated an anisotropic motion modeling scheme by extracting organ boundary local motions (e.g., sliding) and incorporating them into 4D-CBCT reconstruction to optimize the motion modeling and reconstruction methods. METHODS Initially, a modified simultaneous algebraic reconstruction technique (mSART) was applied to reconstruct a high-quality reference-phase CBCT using all phase projections. The initial DVFs were precalculated and subsequently updated to achieve the optimized solution. During the DVF update, sliding motion estimation was performed by matching the measured projections to the forward projection of the deformed reference-phase CBCT. In this process, each moving organ boundary was first segmented. The normal vectors of the boundary DVF were then extracted and incorporated for further DVF optimization. The regularization term in the objective function adaptively regularizes the DVF by (1) isotropically smoothing the DVF within each organ; (2) smoothing the DVF at the boundary along the normal direction; and (3) leaving the tangent direction of the boundary DVF unsmoothed (i.e., allowing for sliding motion). A nonlinear conjugate gradient optimizer was used. The algorithm was validated on a digital cubic tube phantom with sliding motion, a nonuniform rotational B-spline-based cardiac-torso (NCAT) phantom, and two anonymized patient data sets. The relative reconstruction error (RE), the motion trajectory's root mean square error (RMSE) together with its maximum error (MaxE), and the Dice coefficient of the lung boundary were calculated to evaluate the algorithm performance. RESULTS For the cubic tube and NCAT phantom tests, the REs were 10.2% and 7.4% with sliding motion compensation, compared to 13.4% and 8.9% without sliding motion modeling. The motion trajectory's RMSE and MaxE for the NCAT phantom tests were 0.5 and 0.8 mm with the sliding motion constraint, compared to 3.5 and 7.3 mm without sliding motion modeling. The Dice coefficients for both the NCAT phantom and the patients showed a consistent trend: the sliding motion constraint achieved better similarity of the segmented lung boundary with the ground truth or patient reference. CONCLUSIONS A sliding motion-compensated 4D-CBCT reconstruction and motion modeling scheme was developed. Both the phantom and patient studies demonstrated improved reconstruction and motion modeling accuracy for the reconstructed 4D-CBCT.
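The anisotropic regularization described in points (1)-(3) above can be illustrated by splitting the boundary DVF into components normal and tangential to the organ surface and penalizing variation of the normal component only, leaving the tangential (sliding) component free. This is a minimal sketch with synthetic boundary normals, not the authors' nonlinear conjugate gradient implementation.

```python
import numpy as np

def decompose_boundary_dvf(dvf_at_boundary, normals):
    """Split boundary DVF vectors (N, 3) into normal and tangential parts using
    unit boundary normals (N, 3)."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    normal_mag = np.sum(dvf_at_boundary * n, axis=1, keepdims=True)
    normal_part = normal_mag * n
    tangential_part = dvf_at_boundary - normal_part
    return normal_part, tangential_part

def sliding_regularizer(normal_part, neighbor_normal_part):
    """Penalize variation of the normal component between neighbouring boundary
    points only; the tangential component is left unsmoothed to allow sliding."""
    return float(np.sum((normal_part - neighbor_normal_part) ** 2))

# Toy usage: two boundary points with mostly tangential (sliding) displacement.
dvf_b = np.array([[0.1, 2.0, 0.0], [0.1, -2.0, 0.0]])
normals = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
n_part, t_part = decompose_boundary_dvf(dvf_b, normals)
print(sliding_regularizer(n_part[:1], n_part[1:]))   # 0.0 here: the normal parts agree
```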
Affiliation(s)
- Jun Dang
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
| | - Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 and Department of Medical Physics, Duke Kunshan University, Kunshan 215316, China
| | - Tao You
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
| | - Chunhua Dai
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
| | - Deyu Chen
- Department of Radiation Oncology, Affiliated Hospital of Jiangsu University, Zhenjiang 212000, China
| | - Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390
| |
|
42
|
Zhang Y, Tehrani JN, Wang J. A Biomechanical Modeling Guided CBCT Estimation Technique. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:641-652. [PMID: 27831866 PMCID: PMC5381525 DOI: 10.1109/tmi.2016.2623745] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.
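The hand-off from intensity-based 2D-3D deformation to FEA described above—reading the solved displacement vectors off the high-contrast structure surface and passing them on as boundary conditions—can be sketched as follows. The one-voxel surface shell and the array layout are assumptions for illustration; the FEA solve itself is outside the scope of the sketch.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_displacements(dvf, structure_mask):
    """Extract the 2D-3D-deformation displacement vectors on the surface voxels of a
    structure, to be applied as boundary conditions in a structure-based FEA model.
    dvf: (3, nz, ny, nx); structure_mask: boolean (nz, ny, nx)."""
    surface = structure_mask & ~binary_erosion(structure_mask)   # one-voxel-thick shell
    idx = np.argwhere(surface)                                   # (N, 3) surface voxel indices
    vectors = dvf[:, surface].T                                  # (N, 3) displacement vectors
    return idx, vectors

# Toy usage: a spherical "structure" inside a 32^3 grid with a constant DVF.
zz, yy, xx = np.indices((32, 32, 32))
mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 8 ** 2
dvf = np.ones((3, 32, 32, 32))
nodes, bc = boundary_displacements(dvf, mask)
print(nodes.shape, bc.shape)   # matching numbers of surface voxels and boundary-condition vectors
```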
|
43
|
Zhang L, Zhang Y, Zhang Y, Harris WB, Yin FF, Cai J, Ren L. Markerless Four-Dimensional-Cone Beam Computed Tomography Projection-Phase Sorting Using Prior Knowledge and Patient Motion Modeling: A Feasibility Study. CANCER TRANSLATIONAL MEDICINE 2017; 3:185-193. [PMID: 30135868 PMCID: PMC6101251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
AIM During cancer radiotherapy treatment, on-board four-dimensional-cone beam computed tomography (4D-CBCT) provides important patient 4D volumetric information for tumor target verification. Reconstruction of 4D-CBCT images requires sorting of acquired projections into different respiratory phases. Traditional phase sorting methods are either based on external surrogates, which might miscorrelate with internal structures; or on 2D internal structures, which require specific organ presence or slow gantry rotations. The aim of this study is to investigate the feasibility of a 3D motion modeling-based method for markerless 4D-CBCT projection-phase sorting. METHODS Patient 4D-CT images acquired during simulation are used as prior images. Principal component analysis (PCA) is used to extract three major respiratory deformation patterns. On-board patient image volume is considered as a deformation of the prior CT at the end-expiration phase. Coefficients of the principal deformation patterns are solved for each on-board projection by matching it with the digitally reconstructed radiograph (DRR) of the deformed prior CT. The primary PCA coefficients are used for the projection-phase sorting. RESULTS PCA coefficients solved in nine digital phantoms (XCATs) showed the same pattern as the breathing motions in both the anteroposterior and superoinferior directions. The mean phase sorting differences were below 2% and percentages of phase difference < 10% were 100% for all the nine XCAT phantoms. Five lung cancer patient results showed mean phase difference ranging from 1.62% to 2.23%. The percentage of projections within 10% phase difference ranged from 98.4% to 100% and those within 5% phase difference ranged from 88.9% to 99.8%. CONCLUSION The study demonstrated the feasibility of using PCA coefficients for 4D-CBCT projection-phase sorting. High sorting accuracy in both digital phantoms and patient cases was achieved. This method provides an accurate and robust tool for automatic 4D-CBCT projection sorting using 3D motion modeling without the need of external surrogate or internal markers.
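Once a primary PCA coefficient has been solved per projection, phase sorting reduces to assigning a phase along that one-dimensional trace. The sketch below uses peak detection and linear phase interpolation between peaks, which is a common convention assumed here for illustration rather than the specific assignment rule used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

def phases_from_primary_coefficient(coeff, min_cycle_samples=10):
    """Assign a respiratory phase in [0, 1) to each projection from the primary PCA
    coefficient trace: peaks mark cycle boundaries and the phase advances linearly
    between consecutive peaks (assumed convention)."""
    peaks, _ = find_peaks(coeff, distance=min_cycle_samples)
    phases = np.zeros_like(coeff, dtype=float)
    for a, b in zip(peaks[:-1], peaks[1:]):
        phases[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
    return phases, peaks

# Toy usage: a noisy sinusoid standing in for the solved per-projection coefficients.
t = np.arange(300)
coeff = np.sin(2 * np.pi * t / 40) + 0.05 * np.random.randn(300)
phases, peaks = phases_from_primary_coefficient(coeff, min_cycle_samples=25)
print(len(peaks), phases[:5])
```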
Affiliation(s)
- Lei Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Yawei Zhang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - You Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA,Department of Radiation Oncology, UT Southwestern Cancer Center, TX, USA
| | - Wendy B. Harris
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA,Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
| | - Jing Cai
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China,Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, USA,Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| |
|
44
|
Nasehi Tehrani J, McEwan A, Wang J. Lung surface deformation prediction from spirometry measurement and chest wall surface motion. Med Phys 2016; 43:5493. [PMID: 27782714 PMCID: PMC5035308 DOI: 10.1118/1.4962479] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Revised: 08/26/2016] [Accepted: 08/29/2016] [Indexed: 11/07/2022] Open
Abstract
PURPOSE The authors have developed and evaluated a method to predict lung surface motion based on spirometry measurements, and chest and abdomen motion at selected locations. METHODS A patient-specific 3D triangular surface mesh of the lung region was obtained at the end expiratory phase by the threshold-based segmentation method. Lung flow volume changes were recorded with a spirometer for each patient. A total of 192 selected points at a regular spacing of 2 × 2 cm matrix points were used to detect chest wall motion over a total area of 32 × 24 cm covering the chest and abdomen surfaces. QR factorization with column pivoting was employed to remove redundant observations of the chest and abdominal areas. To create a statistical model between the lung surface and the corresponding surrogate signals, the authors developed a predictive model based on canonical ridge regression. Two unique weighting vectors were selected for each vertex on the lung surface; they were optimized during the training process using all other 4D-CT phases except for the test inspiration phase. These parameters were employed to predict the vertex locations of a testing data set. RESULTS The position of each lung surface mesh vertex was estimated from the motion at selected positions within the chest wall surface and from spirometry measurements in ten lung cancer patients. The average estimation of the 98th error percentile for the end inspiration phase was less than 1 mm (AP = 0.9 mm, RL = 0.6 mm, and SI = 0.8 mm). The vertices located at the lower region of the lung had a larger estimation error as compared with those within the upper region of the lung. The average landmark motion errors, derived from the biomechanical modeling using real surface deformation vector fields (SDVFs), and the predicted SDVFs were 3.0 and 3.1 mm, respectively. CONCLUSIONS Our newly developed predictive model provides a noninvasive approach to derive lung boundary conditions. The proposed system can be used with personalized biomechanical respiration modeling to derive lung tumor motion during radiation therapy from noninvasive measurements.
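The surrogate-to-surface mapping described above is a regularized linear regression from the selected chest/abdomen points and the spirometry signal to the mesh vertex coordinates, with QR factorization with column pivoting used to discard redundant surrogate channels. The sketch below uses plain ridge regression as a stand-in for the canonical ridge regression in the paper and assumes one motion channel per surface point plus one spirometry channel.

```python
import numpy as np
from scipy.linalg import qr
from sklearn.linear_model import Ridge

def select_surrogates(X, n_keep):
    """Rank surrogate channels (columns of X) by QR with column pivoting and keep
    the n_keep least redundant ones."""
    _, _, piv = qr(X, pivoting=True)     # pivot order roughly reflects decreasing novelty
    return piv[:n_keep]

def fit_surface_model(X_train, Y_train, n_keep=6, alpha=1.0):
    """Fit a ridge regression from surrogate signals (chest/abdomen points +
    spirometry) to stacked lung-surface vertex coordinates."""
    keep = select_surrogates(X_train, n_keep)
    model = Ridge(alpha=alpha).fit(X_train[:, keep], Y_train)
    return model, keep

# Toy usage: 8 training phases, 193 surrogate channels (192 points + spirometry),
# 3000 vertices x 3 coordinates flattened into 9000 outputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 193))
Y = rng.standard_normal((8, 9000))
model, keep = fit_surface_model(X, Y, n_keep=6)
print(model.predict(X[:1, keep]).shape)   # -> (1, 9000) predicted vertex coordinates
```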
Affiliation(s)
- Joubin Nasehi Tehrani
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75235-8808
| | - Alistair McEwan
- School of Electrical and Information Engineering, University of Sydney, New South Wales 2006, Australia
| | - Jing Wang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75235-8808
| |
|
45
|
Zhang Y, Yin FF, Ren L. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections. Med Phys 2016; 42:4783-95. [PMID: 26233206 DOI: 10.1118/1.4926559] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
PURPOSE Lung cancer treatment is susceptible to treatment errors caused by interfractional anatomical and respirational variations of the patient. On-board treatment dose verification is especially critical for the lung stereotactic body radiation therapy due to its high fractional dose. This study investigates the feasibility of using cone-beam (CB)CT images estimated by a motion modeling and free-form deformation (MM-FD) technique for on-board dose verification. METHODS Both digital and physical phantom studies were performed. Various interfractional variations featuring patient motion pattern change, tumor size change, and tumor average position change were simulated from planning CT to on-board images. The doses calculated on the planning CT (planned doses), the on-board CBCT estimated by MM-FD (MM-FD doses), and the on-board CBCT reconstructed by the conventional Feldkamp-Davis-Kress (FDK) algorithm (FDK doses) were compared to the on-board dose calculated on the "gold-standard" on-board images (gold-standard doses). The absolute deviations of minimum dose (ΔDmin), maximum dose (ΔDmax), and mean dose (ΔDmean), and the absolute deviations of prescription dose coverage (ΔV100%) were evaluated for the planning target volume (PTV). In addition, 4D on-board treatment dose accumulations were performed using 4D-CBCT images estimated by MM-FD in the physical phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films. RESULTS Compared with the planned doses and the FDK doses, the MM-FD doses matched much better with the gold-standard doses. For the digital phantom study, the average (± standard deviation) ΔDmin, ΔDmax, ΔDmean, and ΔV100% (values normalized by the prescription dose or the total PTV) between the planned and the gold-standard PTV doses were 32.9% (±28.6%), 3.0% (±2.9%), 3.8% (±4.0%), and 15.4% (±12.4%), respectively. The corresponding values of FDK PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). CONCLUSIONS MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
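The PTV metrics compared above have direct definitions. A minimal sketch, assuming voxelized dose grids and a boolean PTV mask, normalizing the dose deviations by the prescription dose and the coverage deviation by the PTV volume as described in the abstract:

```python
import numpy as np

def ptv_dose_deviations(dose_eval, dose_gold, ptv_mask, prescription):
    """Absolute deviations of PTV min/max/mean dose (normalized by the prescription
    dose) and of prescription-dose coverage V100% (normalized by the PTV volume),
    comparing an evaluated dose grid against the gold-standard on-board dose."""
    d_eval, d_gold = dose_eval[ptv_mask], dose_gold[ptv_mask]

    def ddev(stat):
        return abs(stat(d_eval) - stat(d_gold)) / prescription

    def v100(d):
        return np.count_nonzero(d >= prescription) / d.size

    return {
        "dDmin": ddev(np.min),
        "dDmax": ddev(np.max),
        "dDmean": ddev(np.mean),
        "dV100%": abs(v100(d_eval) - v100(d_gold)),
    }

# Toy usage: 50 Gy prescription, small synthetic dose grids.
rng = np.random.default_rng(1)
gold = 50 + rng.normal(0, 1, (20, 20, 20))
evaluated = gold + rng.normal(0, 0.5, gold.shape)
ptv = np.zeros(gold.shape, dtype=bool); ptv[5:15, 5:15, 5:15] = True
print(ptv_dose_deviations(evaluated, gold, ptv, prescription=50.0))
```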
Affiliation(s)
- You Zhang
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710
| | - Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 and Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710
| | - Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 and Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710
| |
|