1
Zhang Y, Jiang Z, Zhang Y, Ren L. A review on 4D cone-beam CT (4D-CBCT) in radiation therapy: Technical advances and clinical applications. Med Phys 2024. PMID: 38922912. DOI: 10.1002/mp.17269.
Abstract
Cone-beam CT (CBCT) is the most commonly used onboard imaging technique for target localization in radiation therapy. Conventional 3D CBCT acquires x-ray cone-beam projections at multiple angles around the patient to reconstruct 3D images of the patient in the treatment room. However, despite its wide usage, 3D CBCT is limited in imaging disease sites affected by respiratory motions or other dynamic changes within the body, as it lacks time-resolved information. To overcome this limitation, 4D-CBCT was developed to incorporate a time dimension in the imaging to account for the patient's motion during the acquisitions. For example, respiration-correlated 4D-CBCT divides the breathing cycles into different phase bins and reconstructs 3D images for each phase bin, ultimately generating a complete set of 4D images. 4D-CBCT is valuable for localizing tumors in the thoracic and abdominal regions where the localization accuracy is affected by respiratory motions. This is especially important for hypofractionated stereotactic body radiation therapy (SBRT), which delivers much higher fractional doses in fewer fractions than conventional fractionated treatments. Nonetheless, 4D-CBCT does face certain limitations, including long scanning times, high imaging doses, and compromised image quality due to the necessity of acquiring sufficient x-ray projections for each respiratory phase. In order to address these challenges, numerous methods have been developed to achieve fast, low-dose, and high-quality 4D-CBCT. This paper aims to review the technical developments surrounding 4D-CBCT comprehensively. It will explore conventional algorithms and recent deep learning-based approaches, delving into their capabilities and limitations. Additionally, the paper will discuss the potential clinical applications of 4D-CBCT and outline a future roadmap, highlighting areas for further research and development. 
Through this exploration, readers will gain a better understanding of 4D-CBCT's capabilities and its potential to enhance radiation therapy.
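The respiration-correlated phase binning described in this abstract (dividing breathing cycles into phase bins and grouping projections by bin) can be sketched in a few lines. This is a minimal illustration under assumed inputs, a 1-D breathing surrogate sampled once per projection; the function name and the simple peak-detection rule are invented for the example and are not from the review:

```python
import numpy as np

def sort_projections_into_phase_bins(breathing_signal, n_bins=10):
    """Assign each projection (one signal sample each) to a respiratory
    phase bin 0..n_bins-1. Phase is the fractional position within a
    breathing cycle, with cycles delimited by end-inhale peaks."""
    s = np.asarray(breathing_signal, dtype=float)
    # End-inhale peaks: samples larger than both neighbors.
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1]]
    phases = np.zeros(len(s))
    for start, end in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(start, end)
        phases[idx] = (idx - start) / (end - start)   # 0 -> 1 over one cycle
    # Samples outside a complete detected cycle remain in bin 0.
    return np.minimum((phases * n_bins).astype(int), n_bins - 1)

# Synthetic breathing surrogate: ~4 s period, sampled at 10 Hz for 12 s.
t = np.arange(0, 12, 0.1)
signal = np.cos(2 * np.pi * t / 4.0)
bins = sort_projections_into_phase_bins(signal, n_bins=10)
```

Each phase bin then collects its own subset of projections for a separate 3D reconstruction, which is exactly why 4D-CBCT needs more projections (and dose) than 3D CBCT.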
Affiliation(s)
- Yawei Zhang
- Department of Radiation Oncology, University of Florida Health Proton Therapy Institute, Jacksonville, Florida, USA
- Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, Florida, USA
- Zhuoran Jiang
- Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
- You Zhang
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas, USA
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, Maryland, USA
2
Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model-free spatiotemporal implicit neural representation (PMF-STINR). Phys Med Biol 2024; 69:115030. PMID: 38697195. PMCID: PMC11133878. DOI: 10.1088/1361-6560/ad46dc.
Abstract
Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is captured by only one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing).
Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from single x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. In contrast to previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning.
Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal resolution (∼0.1 s) and sub-millimeter accuracy.
Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCT.
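The B-spline motion-model component mentioned in this abstract can be illustrated in isolation. The toy sketch below only shows how a uniform cubic B-spline interpolates a smooth, time-varying displacement from a handful of control-point coefficients; in PMF-STINR those coefficients are produced by learned INRs, which this example does not attempt to reproduce, and all names are invented:

```python
import numpy as np

def cubic_bspline_basis(u):
    """Uniform cubic B-spline basis values for local coordinate u in [0, 1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def displacement_at(t, knot_coeffs, t_max):
    """Evaluate a 1-D temporal B-spline motion trace at time t.

    knot_coeffs: control-point displacements on a uniform knot grid
    (len(knot_coeffs) - 3 spline intervals span [0, t_max])."""
    n = len(knot_coeffs) - 3
    x = np.clip(t / t_max, 0, 1 - 1e-9) * n   # continuous knot coordinate
    i, u = int(x), x - int(x)
    # Each point is a convex combination of 4 neighboring control points.
    return float(cubic_bspline_basis(u) @ knot_coeffs[i:i + 4])
```

Because the four basis functions sum to one at every u (partition of unity), constant control points reproduce a constant displacement, a quick sanity check on the implementation.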
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
3
Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT Imaging using Prior Model-Free Spatiotemporal Implicit Neural Representation (PMF-STINR). arXiv 2023; arXiv:2311.10036v2 [preprint]. PMID: 38013886. PMCID: PMC10680908.
4
Fluoroscopic 3D Image Generation from Patient-Specific PCA Motion Models Derived from 4D-CBCT Patient Datasets: A Feasibility Study. J Imaging 2022; 8(2):17. PMID: 35200720. PMCID: PMC8879782. DOI: 10.3390/jimaging8020017.
Abstract
A method for generating fluoroscopic (time-varying) volumetric images using patient-specific motion models derived from four-dimensional cone-beam CT (4D-CBCT) images was developed. 4D-CBCT images acquired immediately prior to treatment have the potential to accurately represent patient anatomy and respiration during treatment. Fluoroscopic 3D image estimation is performed in two steps: (1) deriving motion models and (2) optimization. To derive the motion models, every phase in a 4D-CBCT set is registered to a reference phase chosen from the same set using deformable image registration (DIR). Principal component analysis (PCA) is used to reduce the dimensionality of the displacement vector fields (DVFs) resulting from DIR into a few vectors representing the organ motion found in the DVFs. The PCA motion models are optimized iteratively by comparing a cone-beam CT (CBCT) projection to a simulated projection computed from both the motion model and a reference 4D-CBCT phase, resulting in a sequence of fluoroscopic 3D images. Patient datasets were used to evaluate the method by comparing the tumor location estimated in the generated images to manually defined ground-truth positions. Experimental results showed that the average tumor mean absolute error (MAE) along the superior-inferior (SI) direction and its 95th percentile were 2.29 and 5.79 mm for patient 1, and 1.89 and 4.82 mm for patient 2. This study demonstrated the feasibility of deriving 4D-CBCT-based PCA motion models that have the potential to account for 3D non-rigid patient motion and to localize tumors and other anatomical structures on the day of treatment.
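The PCA step of the two-stage pipeline above, compressing a set of DVFs into a mean field plus a few principal motion components whose weights are later optimized, can be sketched as follows. The names are hypothetical and a synthetic rank-one motion stands in for real DVFs:

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=3):
    """Build a PCA motion model from phase-to-reference displacement fields.

    dvfs: (n_phases, n_voxels) array, each row a flattened DVF.
    Returns the mean DVF and the top principal motion components."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    # SVD of the centered data; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_dvf(mean, components, weights):
    """A new DVF is the mean field plus a weighted sum of components."""
    return mean + np.asarray(weights) @ components

# Toy example: 10 "phases" of rank-one sinusoidal motion over 50 voxels.
rng = np.random.default_rng(0)
pattern = rng.normal(size=50)
dvfs = np.array([np.sin(2 * np.pi * p / 10) * pattern for p in range(10)])
mean, comps = build_pca_motion_model(dvfs, n_components=1)
# Projecting a phase onto the component recovers its weight, and hence its DVF.
recon = synthesize_dvf(mean, comps, [(dvfs[3] - mean) @ comps[0]])
```

In the paper's optimization stage, such weights are not projected from a known DVF but found iteratively by matching a measured cone-beam projection to one simulated from the warped reference phase.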
5
Cui H, Jiang X, Fang C, Zhu L, Yang Y. Planning CT-guided robust and fast cone-beam CT scatter correction using a local filtration technique. Med Phys 2021; 48:6832-6843. PMID: 34662433. DOI: 10.1002/mp.15299.
Abstract
PURPOSE Cone-beam CT (CBCT) has been widely utilized in image-guided radiotherapy. Planning CT (pCT)-aided CBCT scatter correction could further enhance image quality and extend CBCT application to dose calculation and adaptive planning. Nevertheless, existing pCT-based approaches demand accurate registration between pCT and CBCT, leading to limited imaging performance and increased computational cost when large anatomical discrepancies exist. In this work, we proposed a robust and fast CBCT scatter correction method using a local filtration technique and rigid registration between pCT and CBCT (LF-RR). METHODS First, the pCT was rigidly registered with the CBCT; forward projection was then performed on the registered pCT to create scatter-free projections. The raw scatter signals were obtained by subtracting the scatter-free projections from the measured CBCT projections. Based on frequency and intensity threshold criteria, reliable scatter signals were selected from the raw scatter signals and further filtered for global scatter estimation via the local filtration technique. Finally, the corrected CBCT was reconstructed with the FDK algorithm from the projections generated by subtracting the scatter estimate from the raw CBCT projections. The LF-RR method was evaluated by comparison with another pCT-based scatter correction method based on median and Gaussian filters (MG method). RESULTS The proposed method was first validated on an anthropomorphic pelvis phantom and showed satisfactory scatter removal even when anatomical mismatches were intentionally created on the pCT. Quantitative analysis was further performed on four clinical CBCT images. Compared with the uncorrected CBCT, CBCT corrected by MG with rigid registration (MG-RR), MG with deformable registration (MG-DR), and LF-RR reduced the CT number error from 79 ± 35 to 25 ± 18, 17 ± 13, and 7 ± 3 HU for adipose, and from 115 ± 61 to 36 ± 22, 30 ± 24, and 7 ± 3 HU for muscle, respectively. After correction, the spatial non-uniformity (SNU) of CBCT corrected with MG-RR, MG-DR, and LF-RR was 51 ± 13, 60 ± 21, and 21 ± 9 HU for adipose, and 50 ± 22, 57 ± 41, and 25 ± 6 HU for muscle, respectively. Meanwhile, the contrast-to-noise ratio (CNR) between muscle and adipose was increased by a factor of 2.70, 2.89, and 2.56, respectively. Using the LF-RR method, scatter correction of 656 projections can be finished within 10 s, and the corrected volumetric images (200 slices) can be obtained within 2 min. CONCLUSION We developed a fast and robust pCT-based CBCT scatter correction method that exploits the local filtration technique to improve the accuracy of scatter estimation and is resistant to pCT-to-CBCT registration uncertainties. Both phantom and patient studies showed the superiority of the proposed correction in imaging accuracy and computational efficiency, indicating promising future clinical application.
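The core workflow described in the METHODS, subtract pCT-based scatter-free projections from the measured projections, keep only reliable scatter samples, then low-pass filter the result, can be sketched for a single detector row. The thresholds, window size, and names below are illustrative assumptions, not the paper's actual parameters or filtration scheme:

```python
import numpy as np

def estimate_scatter(measured, scatter_free, win=15, rel_thresh=0.5):
    """Estimate the scatter profile in one projection row.

    measured:     raw CBCT projection intensities (primary + scatter).
    scatter_free: forward projection of the registered planning CT (primary only).
    Raw scatter = measured - scatter_free; samples where raw scatter is
    negative or implausibly large relative to the measurement are masked
    out, and the rest is smoothed, exploiting the fact that scatter is
    dominated by low spatial frequencies."""
    raw = measured - scatter_free
    reliable = (raw >= 0) & (raw <= rel_thresh * measured)
    kernel = np.ones(win) / win
    # Normalized smoothing over the reliable samples only.
    num = np.convolve(np.where(reliable, raw, 0.0), kernel, mode="same")
    den = np.convolve(reliable.astype(float), kernel, mode="same")
    return num / np.maximum(den, 1e-12)

# Toy example: smooth synthetic scatter plus one corrupted detector sample.
x = np.linspace(0, 1, 200)
primary = 1.0 + 0.2 * np.sin(4 * np.pi * x)
scatter = 0.3 * np.exp(-((x - 0.5) ** 2) / 0.1)
measured = primary + scatter
measured[100] += 5.0                      # corrupted sample gets masked out
est = estimate_scatter(measured, primary)
```

The estimated scatter is then subtracted from the raw projections before FDK reconstruction; because the estimate is smoothed and outlier-masked, registration errors that corrupt individual samples have limited effect.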
Affiliation(s)
- Hehe Cui
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Xiao Jiang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Chengyijue Fang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Lei Zhu
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Yidong Yang
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Hefei National Laboratory for Physical Sciences at the Microscale & School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui, China
6
Song Y, Zhang W, Zhang H, Wang Q, Xiao Q, Li Z, Wei X, Lai J, Wang X, Li W, Zhong Q, Gong P, Zhong R, Zhao J. Low-dose cone-beam CT (LD-CBCT) reconstruction for image-guided radiation therapy (IGRT) by three-dimensional dual-dictionary learning. Radiat Oncol 2020; 15:192. PMID: 32787941. PMCID: PMC7425566. DOI: 10.1186/s13014-020-01630-3.
Abstract
BACKGROUND To develop a low-dose cone-beam CT (LD-CBCT) reconstruction method, a simultaneous algebraic reconstruction technique and dual-dictionary learning (SART-DDL) joint algorithm, for image-guided radiation therapy (IGRT), and to evaluate its image quality and clinical applicability. METHODS In this retrospective study, 62 CBCT image sets acquired from February 2018 to July 2018 at West China Hospital were randomly collected from 42 head-and-neck patients (mean [standard deviation] age, 49.7 [11.4] years; 12 female, 30 male). All image sets were retrospectively reconstructed by SART-DDL (yielding D-CBCT image sets) using 18% fewer clinical raw projections. Reconstruction quality was evaluated with quantitative parameters against the SART and total variation minimization (SART-TV) joint reconstruction algorithm using paired t tests. Five-grade subjective evaluations were performed by two oncologists in a blinded manner against clinically used Feldkamp-Davis-Kress algorithm CBCT images (F-CBCT image sets), and the grading results were compared with the paired Wilcoxon rank test. Registration results between D-CBCT and F-CBCT were compared, and D-CBCT image geometry fidelity was tested. RESULTS The mean peak signal-to-noise ratio of D-CBCT was 1.7 dB higher than that of SART-TV reconstructions (P < .001, SART-DDL vs SART-TV, 36.36 ± 0.55 dB vs 34.68 ± 0.28 dB). All D-CBCT images were recognized as clinically acceptable, with no significant difference from F-CBCT in subjective grading (P > .05). In clinical registration, the maximum translational and rotational differences were 1.8 mm and 1.7 degrees, respectively. The horizontal, vertical, and sagittal geometry fidelity of D-CBCT was acceptable. CONCLUSIONS The image quality, geometry fidelity, and clinical applicability of D-CBCT are comparable to those of F-CBCT for head-and-neck patients, with 18% fewer projections, using SART-DDL.
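The SART half of the joint algorithm is a classical normalized-residual update; the dual-dictionary regularization that the paper interleaves with it is omitted here. A minimal sketch on a toy system, with all names invented:

```python
import numpy as np

def sart(A, b, n_iters=50, relax=0.5):
    """Simultaneous algebraic reconstruction technique (SART) update loop.

    A: system matrix (rays x voxels), b: measured line integrals.
    Each iteration back-projects the residual, normalized by per-ray and
    per-voxel weight sums, which keeps the update stable for relax in (0, 2)."""
    x = np.zeros(A.shape[1])
    row_sums = np.maximum(A.sum(axis=1), 1e-12)   # per-ray weight totals
    col_sums = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel weight totals
    for _ in range(n_iters):
        residual = (b - A @ x) / row_sums
        x = x + relax * (A.T @ residual) / col_sums
    return x

# Toy 2x2 "image" probed by 4 rays (the two row sums and two column sums).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
truth = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ truth
rec = sart(A, b, n_iters=200)
```

The toy system is rank-deficient, so `rec` is one of many images consistent with the data; with few projections, this ambiguity is precisely what the learned dictionary prior is meant to resolve.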
Affiliation(s)
- Ying Song
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Weikang Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800 Dongchuan Road, Minhang District, Shanghai, P. R. China
- Hong Zhang
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Qiang Wang
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Qing Xiao
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Zhibing Li
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Xing Wei
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Jialu Lai
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Xuetao Wang
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Wan Li
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Quan Zhong
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Pan Gong
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Renming Zhong
- Department of Radiotherapy, Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu, 610065, P. R. China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800 Dongchuan Road, Minhang District, Shanghai, P. R. China
7
Wei R, Zhou F, Liu B, Bai X, Fu D, Liang B, Wu Q. Real-time tumor localization with single x-ray projection at arbitrary gantry angles using a convolutional neural network (CNN). Phys Med Biol 2020; 65:065012. PMID: 31896093. DOI: 10.1088/1361-6560/ab66e4.
Abstract
For tumor tracking therapy, precise real-time knowledge of the tumor position is essential. A convolutional neural network (CNN)-based technique using a single x-ray projection was recently developed that achieves accurate real-time tumor localization. However, this method was only validated at fixed gantry angles. In this study, an improved technique is developed to handle arbitrary gantry angles for rotational radiotherapy. To capture the highly complex relationship between x-ray projections at arbitrary angles and tumor motion, a special CNN was proposed. In this network, a binary region of interest (ROI) mask is applied to every extracted feature map. This avoids overfitting caused by gantry rotation by directing the network to neglect irrelevant pixels whose intensity variations are unrelated to breathing motion. In addition, an angle-dependent fully connected layer (ADFCL) is used to recover the mapping from extracted feature maps to tumor motion, which varies with gantry angle. The method was tested on images from 15 realistic patients and compared with a variant of the VGG network developed by Oxford University's Visual Geometry Group. The tumors were clearly visible on the x-ray projections for only five patients; for these, the average tumor localization error was under 1.8 mm and 1.0 mm in the superior-inferior and lateral directions, respectively. For the other ten patients, whose tumors were not clearly visible in the x-ray projections, a feature-point localization error was computed to evaluate the proposed method; its mean value was no more than 1.5 mm and 1.0 mm in the two directions for all patients. A tumor localization method based on a novel CNN using a single x-ray projection at arbitrary gantry angles was thus developed and validated for real-time operation, greatly expanding the applicability of the tumor localization framework to rotational therapy.
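The two architectural ideas in this abstract, masking feature maps with a binary ROI and selecting fully connected weights by gantry-angle bin, can be illustrated independently of any deep learning framework. The sketch below uses plain NumPy with invented shapes and names; it shows only the masking and weight-selection logic, not the paper's actual network:

```python
import numpy as np

def apply_roi_mask(feature_maps, roi_mask):
    """Zero out feature-map activations outside a binary region of interest.

    feature_maps: (channels, H, W) activations from a convolutional layer.
    roi_mask:     (H, W) binary mask marking pixels whose intensity variation
                  is driven by breathing motion. Masking directs the network
                  away from pixels that change only because of gantry
                  rotation, reducing angle-induced overfitting."""
    return feature_maps * roi_mask[None, :, :]

def angle_dependent_fc(features, weights_per_bin, angle_deg):
    """Apply the fully connected weights for this view's gantry-angle bin.

    weights_per_bin: one weight matrix per angular bin; the projection-to-
    motion mapping differs between angles, so each bin keeps its own weights."""
    n_bins = len(weights_per_bin)
    b = int((angle_deg % 360) / (360 / n_bins))
    return weights_per_bin[b] @ features

# Tiny example: 2-channel 4x4 feature maps with a central 2x2 ROI,
# and 4 angular bins of 90 degrees each with made-up weight matrices.
fm = np.ones((2, 4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
masked = apply_roi_mask(fm, mask)
W = [np.eye(3) * (i + 1) for i in range(4)]
v = np.array([1.0, 0.0, 0.0])
out = angle_dependent_fc(v, W, 95.0)   # 95 degrees falls in bin 1
```

In a real network the masked activations would feed further convolutional layers, and the per-bin weights would be trained jointly; here both are static to keep the mechanics visible.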
Affiliation(s)
- Ran Wei
- Image Processing Center, Beihang University, Beijing 100191, People's Republic of China