1. Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2024. PMID: 39138806; DOI: 10.1007/s00066-024-02277-9.
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Christopher Kurz: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Adrian Thummerer: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Guillaume Landry: Department of Radiation Oncology, LMU University Hospital, LMU Munich; German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich; Bavarian Cancer Research Center (BZKF), Marchioninistraße 15, 81377 Munich, Bavaria, Germany

2. Zhang H, Chen K, Xu X, You T, Sun W, Dang J. Spatiotemporal correlation enhanced real-time 4D-CBCT imaging using convolutional LSTM networks. Front Oncol 2024; 14:1390398. PMID: 39161388; PMCID: PMC11330803; DOI: 10.3389/fonc.2024.1390398.
Abstract
Purpose: To enhance the accuracy of real-time four-dimensional cone-beam CT (4D-CBCT) imaging by incorporating spatiotemporal correlation from the sequential projection images into the single-projection-based 4D-CBCT estimation process. Methods: We first derived 4D deformation vector fields (DVFs) from patient 4D-CT. Principal component analysis (PCA) was then employed to extract distinctive feature labels for each DVF, focusing on the first three PCA coefficients. To simulate a wide range of respiratory motion, we expanded the motion amplitude and used random sampling to generate approximately 900 sets of PCA labels. These labels were used to produce 900 simulated 4D-DVFs, which in turn deformed the 0%-phase 4D-CT to obtain 900 CBCT volumes with continuous motion amplitudes. Following this, forward projection was performed at one angle to obtain all of the digitally reconstructed radiographs (DRRs). These DRRs and the PCA labels were used as the training dataset. To capture the spatiotemporal correlation in the projections, we propose using a convolutional LSTM (ConvLSTM) network for PCA coefficient estimation. For network testing, when several online CBCT projections (with different motion amplitudes that cover the full respiration range) are acquired and fed into the network, the corresponding 4D-PCA coefficients are obtained, finally leading to a full online 4D-CBCT prediction. A phantom experiment was first performed with the XCAT phantom, followed by a pilot clinical evaluation. Results: Results on the XCAT phantom and the patient data show that the proposed approach outperformed other networks in terms of both visual inspection and quantitative metrics; the evaluation metrics were the mean absolute error (MAE), normalized cross-correlation (NCC), structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), and mean absolute percentage error (MAPE). For the XCAT phantom experiment, ConvLSTM achieved the highest quantification accuracy, with MAPE, PSNR, and RMSE of 0.0459, 64.6742, and 0.0011, respectively. For the pilot clinical experiment, ConvLSTM also achieved the best quantification accuracy, with corresponding values of 0.0934, 63.7294, and 0.0019. Conclusion: Spatiotemporal correlation-based respiratory motion modeling provides a potential solution for accurate real-time 4D-CBCT reconstruction.
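
For illustration, the core idea of regressing the first few PCA motion coefficients from a temporal sequence of projections with a ConvLSTM can be sketched in PyTorch. Everything below (a single ConvLSTM cell, the layer sizes, three coefficients, the toy input) is an assumption for demonstration, not the authors' implementation:

```python
# Illustrative PyTorch sketch (not the authors' code): a single ConvLSTM cell
# consumes a sequence of projections and regresses the first three PCA motion
# coefficients. All shapes and layer sizes are assumptions for demonstration.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution computes all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class PCACoeffRegressor(nn.Module):
    def __init__(self, hid_ch=16, n_coeffs=3):
        super().__init__()
        self.cell = ConvLSTMCell(1, hid_ch)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(hid_ch, n_coeffs))

    def forward(self, seq):                    # seq: (batch, time, 1, H, W)
        b, t, _, h, w = seq.shape
        hx = seq.new_zeros(b, self.cell.hid_ch, h, w)
        cx = torch.zeros_like(hx)
        for s in range(t):                     # recurrence captures temporal correlation
            hx, cx = self.cell(seq[:, s], (hx, cx))
        return self.head(hx)                   # first three PCA coefficients

coeffs = PCACoeffRegressor()(torch.randn(2, 5, 1, 64, 64))  # toy DRR sequence
print(coeffs.shape)                            # torch.Size([2, 3])
```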
Affiliation(s)
- Hua Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Kai Chen: School of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Xiaotong Xu: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Tao You: Department of Radiation Oncology, The Affiliated Hospital of Jiangsu University, Zhenjiang, Jiangsu, China
- Wenzheng Sun: Department of Radiation Oncology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Jun Dang: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China

3. Zhu M, Fu Q, Liu B, Zhang M, Li B, Luo X, Zhou F. RT-SRTS: Angle-agnostic real-time simultaneous 3D reconstruction and tumor segmentation from single X-ray projection. Comput Biol Med 2024; 173:108390. PMID: 38569234; DOI: 10.1016/j.compbiomed.2024.108390.
Abstract
Radiotherapy is one of the primary treatment methods for tumors, but the organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor, and have only been validated for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed that integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the required time threshold for real-time tumor tracking. The efficacies of both AEC and URE were validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.
Affiliation(s)
- Miao Zhu: Image Processing Center, Beihang University, Beijing 100191, PR China
- Qiming Fu: Image Processing Center, Beihang University, Beijing 100191, PR China
- Bo Liu: Image Processing Center, Beihang University, Beijing 100191, PR China
- Mengxi Zhang: Image Processing Center, Beihang University, Beijing 100191, PR China
- Bojian Li: Image Processing Center, Beihang University, Beijing 100191, PR China
- Xiaoyan Luo: Image Processing Center, Beihang University, Beijing 100191, PR China
- Fugen Zhou: Image Processing Center, Beihang University, Beijing 100191, PR China

4. Zhang C, He W, Liu L, Dai J, Salim Ahmad I, Xie Y, Liang X. Volumetric feature points integration with bio-structure-informed guidance for deformable multi-modal CT image registration. Phys Med Biol 2023; 68:245007. PMID: 37844603; DOI: 10.1088/1361-6560/ad03d2.
Abstract
Objective. Medical image registration represents a fundamental challenge in medical image processing. Specifically, CT-CBCT registration has significant implications in the context of image-guided radiation therapy (IGRT). However, traditional iterative methods often require considerable computational time, and deep learning-based methods, especially when dealing with low-contrast organs, are frequently trapped in locally optimal solutions. Approach. To address these limitations, we introduce a registration method based on volumetric feature point integration with bio-structure-informed guidance. A surface point cloud is generated from segmentation labels during the training stage, with both the surface-registered point pairs and the voxel feature point pairs co-guiding the training process, thereby achieving higher registration accuracy. Main results. Our findings have been validated on paired CT-CBCT datasets. In comparison with other deep learning registration methods, our approach improves precision by 6%, reaching state-of-the-art status. Significance. The integration of voxel feature points and bio-structure feature points to guide the training of the medical image registration network has achieved promising results, providing a meaningful direction for further research in medical image registration and IGRT.
Affiliation(s)
- Chulong Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Wenfeng He: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Lin Liu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Jingjing Dai: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Isah Salim Ahmad: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, People's Republic of China

5. Lell M, Kachelrieß M. Computed Tomography 2.0: New Detector Technology, AI, and Other Developments. Invest Radiol 2023; 58:587-601. PMID: 37378467; PMCID: PMC10332658; DOI: 10.1097/rli.0000000000000995.
Abstract
Computed tomography (CT) dramatically improved the capabilities of diagnostic and interventional radiology. Introduced in the early 1970s, this imaging modality is still evolving, although tremendous improvements in scan speed, volume coverage, spatial and soft-tissue resolution, and dose reduction have been achieved. Tube current modulation, automated exposure control, anatomy-based tube voltage (kV) selection, advanced x-ray beam filtration, and iterative image reconstruction techniques have improved image quality and decreased radiation exposure. Cardiac imaging triggered the demand for high temporal resolution, volume acquisition, and high-pitch modes with electrocardiogram synchronization. Plaque imaging in cardiac CT, as well as lung and bone imaging, demands high spatial resolution. Today, we see a transition of photon-counting detectors from experimental and research prototype setups into commercially available systems integrated into patient care. Moreover, with respect to CT technology and CT image formation, artificial intelligence is increasingly used in patient positioning, protocol adjustment, and image reconstruction, but also in image preprocessing and postprocessing. The aim of this article is to give an overview of the technical specifications of currently available whole-body and dedicated CT systems, as well as hardware and software innovations for CT systems in the near future.

6. Kachelrieß M. [Risk-minimizing tube current modulation for computed tomography]. Radiologie (Heidelberg) 2023. PMID: 37306750; DOI: 10.1007/s00117-023-01160-5.
Abstract
AIM/PROBLEM: Every computed tomography (CT) examination is accompanied by radiation exposure. The aim is to reduce this exposure as much as possible, without compromising image quality, by using a tube current modulation technique. STANDARD PROCEDURE: CT tube current modulation (TCM), which has been in use for about two decades, adjusts the tube current to the patient's attenuation (in the angular and z-directions) in a way that minimizes the mAs product (tube current-time product) of the scan without compromising image quality. This mAsTCM, present in all CT devices, achieves a significant dose reduction in anatomical areas with high attenuation differences between the anterior-posterior (a.p.) and lateral directions, particularly the shoulder and pelvis. The radiation risk of individual organs, or of the patient, is not considered in mAsTCM. METHODOLOGICAL INNOVATION: Recently, a TCM method was proposed that directly minimizes the patient's radiation risk by predicting organ dose levels and taking them into account when choosing the tube current. It is shown that this so-called riskTCM is significantly superior to mAsTCM in all body regions. To use riskTCM in clinical routine, only a software adaptation of the CT system would be necessary. CONCLUSIONS: With riskTCM, significant dose reductions can be achieved compared to the standard procedure, typically around 10%-30%. This is especially true in those body regions where the standard procedure shows only moderate advantages over a scan without any tube current modulation at all. It is now up to the CT vendors to take action and implement riskTCM.
Affiliation(s)
- Marc Kachelrieß: Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany

7. Asano S, Oseki K, Takao S, Miyazaki K, Yokokawa K, Matsuura T, Taguchi H, Katoh N, Aoyama H, Umegaki K, Miyamoto N. Technical note: Performance evaluation of volumetric imaging based on motion modeling by principal component analysis. Med Phys 2023; 50:993-999. PMID: 36427355; DOI: 10.1002/mp.16123.
Abstract
PURPOSE: To quantitatively evaluate the achievable performance of volumetric imaging based on lung motion modeling by principal component analysis (PCA). METHODS: In volumetric imaging based on PCA, internal deformation is represented as a linear combination of the eigenvectors derived by PCA of the deformation vector fields evaluated from patient-specific four-dimensional computed tomography (4DCT) datasets. The volumetric image is synthesized by warping the reference CT image with a deformation vector field evaluated using optimal principal component coefficients (PCs). Larger PCs were hypothesized to reproduce deformations larger than those included in the original 4DCT dataset. To evaluate the reproducibility of PCA-reconstructed volumetric images synthesized to be as close to the ground truth as possible, the mean absolute error (MAE), structural similarity index measure (SSIM), and discrepancy of the diaphragm position were evaluated using 22 4DCT datasets from nine patients. RESULTS: Mean MAE and SSIM values for the PCA-reconstructed volumetric images were approximately 80 HU and 0.88, respectively, regardless of the respiratory phase. In most test cases, including data whose motion range exceeded that of the modeling data, the positional error of the diaphragm was less than 5 mm. These results suggest that large deformations not included in the modeling 4DCT dataset can be reproduced. Furthermore, since the first PC correlated with the displacement of the diaphragm position, the first eigenvector was the dominant factor representing respiration-associated deformations. However, the other PCs did not necessarily change with the same trend as the first PC, and no correlation was observed between the coefficients. Hence, randomly allocating or sampling these PCs over expanded ranges may be applicable for generating an augmented dataset with various deformations. CONCLUSIONS: Reasonable image-synthesis accuracy, comparable to that of previous research, was demonstrated using clinical data. These results indicate the potential of PCA-based volumetric imaging for clinical applications.
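
The PCA motion model evaluated here is compact enough to sketch directly: eigenvectors of the DVF ensemble are extracted from a 4DCT, and new deformations are synthesized from scaled principal component coefficients. The NumPy sketch below uses toy array sizes and random data purely as placeholders:

```python
# Minimal NumPy sketch (toy sizes, random data) of the PCA motion model:
# eigenvectors of the DVF ensemble are extracted, and a new deformation is
# synthesized from scaled principal component coefficients.
import numpy as np

n_phases, n_vals = 10, 32 * 32 * 16 * 3         # 3 displacement components/voxel
dvfs = np.random.randn(n_phases, n_vals).astype(np.float32)  # one DVF per phase

mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf

# PCA via SVD; rows of vt are eigenvectors of the DVF covariance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigvecs = vt[:3]                                # keep the first three components
coeffs = centered @ eigvecs.T                   # per-phase PC coefficients

# Scale coefficients beyond the observed range to emulate motion amplitudes
# larger than those contained in the modeling 4DCT.
synth_dvf = mean_dvf + (1.5 * coeffs[3]) @ eigvecs
print(synth_dvf.shape)                          # flattened DVF for warping the reference CT
```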
Affiliation(s)
- Suzuka Asano: Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Keishi Oseki: Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
- Seishin Takao: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Koichi Miyazaki: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Kohei Yokokawa: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Taeko Matsuura: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Hiroshi Taguchi: Department of Radiation Oncology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Norio Katoh: Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Hidefumi Aoyama: Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Kikuo Umegaki: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Naoki Miyamoto: Faculty of Engineering, Hokkaido University; Department of Medical Physics, Hokkaido University Hospital, Sapporo, Hokkaido, Japan

8. Li N, Zhou X, Chen S, Dai J, Wang T, Zhang C, He W, Xie Y, Liang X. Incorporating the synthetic CT image for improving the performance of deformable image registration between planning CT and cone-beam CT. Front Oncol 2023; 13:1127866. PMID: 36910636; PMCID: PMC9993856; DOI: 10.3389/fonc.2023.1127866.
Abstract
Objective: To develop a contrastive learning-based generative (CLG) model for the generation of high-quality synthetic CT (sCT) from low-quality cone-beam CT (CBCT), thereby improving the performance of deformable image registration (DIR). Methods: This study included 100 post-breast-conserving-surgery patients with planning CT (pCT) images, CBCT images, and physician-delineated target contours. sCT images were generated from the CBCT images via the proposed CLG model and used as the fixed images in place of the CBCT images to achieve accurate multi-modality image registration. The resulting deformation vector field was applied to propagate the target contour from the pCT to the CBCT, realizing automatic target segmentation on CBCT images. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations were calculated to evaluate the proposed method. Results: The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. The proposed method outperformed the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), especially for soft-tissue targets such as the tumor bed region. Conclusion: The proposed CLG model can create high-quality sCT from low-quality CBCT and improve the performance of DIR between CBCT and pCT. The target segmentation accuracy is better than that of traditional DIR.
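
For readers implementing comparable evaluations, the three contour metrics quoted above (DSC, HD95, ASD) can be computed on binary masks with SciPy's distance transform. This is a hedged sketch with illustrative masks and voxel spacing, not the study's evaluation code:

```python
# Hedged sketch of the three contour metrics above (DSC, HD95, ASD) on binary
# masks, using SciPy's Euclidean distance transform. Mask shapes and the voxel
# spacing are illustrative assumptions.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    # Surface voxels = mask minus its eroded interior.
    return mask & ~binary_erosion(mask)

def surface_distances(a, b, spacing):
    # Distance from each surface voxel of a to the nearest surface voxel of b.
    dt_to_b = distance_transform_edt(~surface(b), sampling=spacing)
    return dt_to_b[surface(a)]

def dsc(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95_and_asd(a, b, spacing=(1.0, 1.0, 1.0)):
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return np.percentile(d, 95), d.mean()

a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[10:22, 9:21, 8:20] = True
print(dsc(a, b), *hd95_and_asd(a, b, spacing=(1.0, 1.0, 2.0)))
```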
Affiliation(s)
- Na Li: School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong, China; Dongguan Key Laboratory of Medical Electronics and Medical Imaging Equipment, Dongguan, Guangdong, China; Songshan Lake Innovation Center of Medicine & Engineering, Guangdong Medical University, Dongguan, Guangdong, China
- Xuanru Zhou: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China; Department of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Shupeng Chen: Department of Radiation Oncology, Beaumont Health, Royal Oak, MI, United States
- Jingjing Dai: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Tangsheng Wang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Chulong Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Wenfeng He: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China

9. Liu C, Wang Q, Si W, Ni X. NuTracker: a coordinate-based neural network representation of lung motion for intrafraction tumor tracking with various surrogates in radiotherapy. Phys Med Biol 2022; 68. PMID: 36537890; DOI: 10.1088/1361-6560/aca873.
Abstract
Objective. Tracking tumors and surrounding tissues in real time is critical for reducing errors and uncertainties during radiotherapy. Existing methods are either limited by their linear representation or scale poorly with the volume resolution. To address both issues, we propose a novel coordinate-based neural network representation of lung motion to predict the instantaneous 3D volume at arbitrary spatial resolution from various surrogates: patient surface, fiducial marker, and single kV projection. Approach. The proposed model, NuTracker, decomposes the 4DCT into a template volume and dense displacement fields (DDFs), and uses two coordinate neural networks to predict them from spatial coordinates and surrogate states. The predicted template is spatially warped with the predicted DDF to produce the deformed volume for a given surrogate state. The nonlinear coordinate networks enable representing complex motion at infinite resolution. The decomposition allows imposing different regularizations on the spatial and temporal domains. Meta-learning and multi-task learning are used to train NuTracker across patients and tasks, so that commonalities and differences can be exploited. NuTracker was evaluated on seven patients implanted with markers using a leave-one-phase-out procedure. Main results. The 3D marker localization error is 0.66 mm on average and <1 mm at the 95th percentile, which is about a 26% and 32% improvement over the predominant linear methods. Tumor coverage and image quality are improved by 5.7% and 11% in terms of Dice and PSNR. The difference in localization error between surrogates is small and not statistically significant. Cross-population learning and multi-task learning contribute to performance. The model tolerates surrogate drift to a certain extent. Significance. NuTracker can provide accurate estimation of the entire tumor volume based on various surrogates at infinite resolution. There is great potential in applying the coordinate network to other imaging modalities, e.g. 4DCBCT, and other tasks, e.g. 4D dose calculation.
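
The central mechanism, an MLP queried at continuous coordinates so the volume can be sampled at any density, can be illustrated with a minimal sketch. The layer widths, surrogate dimension, and query grid below are assumptions, not the authors' architecture:

```python
# Illustrative sketch (not the authors' code) of a coordinate-based network:
# an MLP maps a spatial coordinate plus a surrogate state to a displacement,
# so the volume can be queried at any sampling density.
import torch
import torch.nn as nn

class CoordDisplacementNet(nn.Module):
    def __init__(self, surrogate_dim=1, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + surrogate_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # (dx, dy, dz) at the queried point
        )

    def forward(self, xyz, surrogate):         # xyz: (n, 3); surrogate: (n, d)
        return self.mlp(torch.cat([xyz, surrogate], dim=-1))

net = CoordDisplacementNet()
# Resolution is a sampling choice, not a property of the model: query any grid.
axes = [torch.linspace(-1, 1, 64)] * 3
grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
ddf = net(grid, torch.full((grid.shape[0], 1), 0.3))  # surrogate amplitude 0.3
print(ddf.shape)                               # (64**3, 3) dense displacement field
```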
Affiliation(s)
- Cong Liu: Radiation Oncology Center, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China; Faculty of Business Information, Shanghai Business School, Shanghai, People's Republic of China
- Qingxin Wang: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, People's Republic of China
- Wen Si: Faculty of Business Information, Shanghai Business School, Shanghai, People's Republic of China; Huashan Hospital, Fudan University, Shanghai, People's Republic of China
- Xinye Ni: Radiation Oncology Center, The Affiliated Changzhou Second People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China

10. Klein L, Liu C, Steidel J, Enzmann L, Knaup M, Sawall S, Maier A, Lell M, Maier J, Kachelrieß M. Patient-specific radiation risk-based tube current modulation for diagnostic CT. Med Phys 2022; 49:4391-4403. PMID: 35421263; DOI: 10.1002/mp.15673.
Abstract
PURPOSE: Modern CT scanners use automatic exposure control (AEC) techniques, such as tube current modulation (TCM), to reduce the dose delivered to patients while maintaining image quality. In contrast to conventional approaches that minimize the tube current-time product of the CT scan, referred to as mAsTCM in the following, we herein propose a new method, referred to as riskTCM, which aims at reducing the radiation risk to the patient by taking into account the specific radiation risk of every dose-sensitive organ. METHODS: Current mAsTCM implementations use the mAs product as a surrogate for patient dose and thus do not account for the varying dose sensitivity of different organs. Our riskTCM framework assumes that a coarse CT reconstruction, an organ segmentation, and an estimation of the dose distribution can be provided in real time, e.g. by applying machine learning techniques. Using this information, riskTCM determines a tube current curve that minimizes a patient risk measure, e.g. the effective dose, while keeping the image quality constant. We retrospectively applied riskTCM to 20 patients covering all relevant anatomical regions and tube voltages from 70 kV to 150 kV. The potential reduction of effective dose at the same image noise was evaluated as a figure of merit and compared to mAsTCM and to a situation with a constant tube current, referred to as noTCM. RESULTS: Anatomical regions like the neck, thorax, abdomen, and pelvis benefit from the proposed riskTCM. On average, reductions of effective dose of about 23% for the thorax, 31% for the abdomen, 24% for the pelvis, and 27% for the neck were observed compared to today's state-of-the-art mAsTCM. For the head, the resulting reduction of effective dose is lower, about 13% on average compared to mAsTCM. CONCLUSIONS: With risk-minimizing tube current modulation, significantly higher reductions of effective dose are possible compared to mAs-minimizing tube current modulation.
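
The principle behind riskTCM can be shown with a toy calculation: choose per-view currents that minimize a risk-weighted organ dose under a fixed noise budget. The dose matrix, organ weights, and the simple noise model (variance proportional to the sum of inverse currents) are all illustrative assumptions, not the paper's models:

```python
# Toy illustration (not the authors' implementation) of risk-minimizing TCM:
# per-view tube currents minimize a risk-weighted organ dose while a simple
# noise budget (variance ~ sum over views of 1/I) is held fixed.
import numpy as np

rng = np.random.default_rng(0)
n_views, n_organs = 180, 5
A = rng.uniform(0.1, 1.0, (n_organs, n_views))  # organ dose per unit tube current
w = rng.uniform(0.5, 2.0, n_organs)             # per-organ risk weights

risk_per_mA = w @ A                 # marginal risk contribution of each view
noise_budget = float(n_views)       # variance of a flat unit-current scan

# Lagrangian solution of: minimize sum(c * I) s.t. sum(1 / I) = noise_budget,
# which gives I proportional to 1 / sqrt(c); then rescale to meet the budget.
current = 1.0 / np.sqrt(risk_per_mA)
current *= (1.0 / current).sum() / noise_budget

flat_risk = risk_per_mA.sum()                   # flat scan at unit current
mod_risk = float(risk_per_mA @ current)         # risk-modulated scan, same noise
print(f"risk reduction at equal noise: {100 * (1 - mod_risk / flat_risk):.1f}%")
```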
Affiliation(s)
- Laura Klein: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Chang Liu: Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Jörg Steidel: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Lucia Enzmann: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Michael Knaup: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Stefan Sawall: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nürnberg, Erlangen, Germany
- Michael Lell: Department of Radiology and Nuclear Medicine, Klinikum Nürnberg, Paracelsus Medical University, Nürnberg
- Joscha Maier: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marc Kachelrieß: Division of X-Ray Imaging and CT, German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany

11. Wang H, Wang N, Xie H, Wang L, Zhou W, Yang D, Cao X, Zhu S, Liang J, Chen X. Two-stage deep learning network-based few-view image reconstruction for parallel-beam projection tomography. Quant Imaging Med Surg 2022; 12:2535-2551. PMID: 35371942; PMCID: PMC8923870; DOI: 10.21037/qims-21-778.
Abstract
BACKGROUND: Projection tomography (PT) is an important and valuable method for fast volumetric imaging with isotropic spatial resolution. Sparse-view or limited-angle reconstruction-based PT can greatly reduce data acquisition time, lower radiation doses, and simplify sample fixation modes. However, few techniques can currently achieve image reconstruction from few-view projection data, which is especially important for in vivo PT in living organisms. METHODS: A two-stage deep learning network (TSDLN)-based framework is proposed for parallel-beam PT reconstruction using few-view projections. The framework is composed of a reconstruction network (R-net) and a correction network (C-net). The R-net is a generative adversarial network (GAN) that completes the image information in a direct back-projection (BP) of a sparse signal, bringing the reconstructed image close to the reconstruction obtained from fully projected data. The C-net is a U-net array that denoises the compensation result to obtain a high-quality reconstructed image. RESULTS: The accuracy and feasibility of the proposed TSDLN-based framework for few-view projection-based reconstruction were first evaluated in simulations using images from the public DeepLesion dataset. The framework exhibited better reconstruction performance than traditional analytic and iterative reconstruction algorithms, especially for sparse-view projection images. For example, with as few as two projections, the TSDLN-based framework reconstructed high-quality images very close to the original, with structural similarities greater than 0.8. By feeding previously acquired optical PT (OPT) data into the TSDLN-based framework trained on computed tomography (CT) data, we further exemplified the framework's transfer capability. The results showed that when the number of projections was reduced to five, the contours and distribution information of the samples in question could still be seen in the reconstructed images. CONCLUSIONS: The simulation and experimental results showed that the TSDLN-based framework has strong reconstruction ability from few-view projection images and great potential for application to in vivo PT.
Affiliation(s)
- Huiyuan Wang: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Nan Wang: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Hui Xie: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Lin Wang: School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China
- Wangting Zhou: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Defu Yang: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Xu Cao: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Shouping Zhu: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Jimin Liang: School of Electronic Engineering, Xidian University, Xi’an, China
- Xueli Chen: Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China; Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China

12. Montoya JC, Zhang C, Li Y, Li K, Chen GH. Reconstruction of three-dimensional tomographic patient models for radiation dose modulation in CT from two scout views using deep learning. Med Phys 2022; 49:901-916. PMID: 34908175; PMCID: PMC9080958; DOI: 10.1002/mp.15414.
Abstract
BACKGROUND: A tomographic patient model is essential for radiation dose modulation in x-ray computed tomography (CT). Currently, two-view scout images (also known as topograms) are used to estimate patient models with relatively uniform attenuation coefficients. These patient models do not account for the detailed anatomical variations of human subjects and thus may limit the accuracy of intraview or organ-specific dose modulations in emerging CT technologies. PURPOSE: The purpose of this work was to show that 3D tomographic patient models can be generated from two-view scout images using deep learning strategies, and that the reconstructed 3D patient models indeed enable accurate prescriptions of fluence-field-modulated or organ-specific dose delivery in subsequent CT scans. METHODS: CT images and the corresponding two-view scout images were retrospectively collected from 4214 individual CT exams. The collected data were curated for the training of a deep neural network architecture termed ScoutCT-NET to generate 3D tomographic attenuation models from two-view scout images. The trained network was validated using a cohort of 55,136 images from 212 individual patients. To evaluate the accuracy of the reconstructed 3D patient models, radiation delivery plans were generated using ScoutCT-NET 3D patient models and compared with plans prescribed on true CT images (gold standard) for both fluence-field-modulated CT and organ-specific CT. Radiation dose distributions were estimated using Monte Carlo simulations and quantitatively evaluated using the gamma analysis method. Modulated dose profiles were compared against state-of-the-art tube current modulation schemes. The impacts of ScoutCT-NET patient-model-based dose modulation schemes on universal-purpose CT acquisitions and organ-specific acquisitions were also compared in terms of overall image appearance, noise magnitude, and noise uniformity. RESULTS: The results demonstrate that (1) the end-to-end trained ScoutCT-NET can be used to generate 3D patient attenuation models and demonstrates empirical generalizability; (2) the 3D patient models can be used to accurately estimate the spatial distribution of radiation dose delivered by standard helical CT prior to the actual acquisition; compared to the gold-standard dose distribution, 95.0% of the voxels in the ScoutCT-NET-based dose maps have acceptable gamma values for 5 mm distance-to-agreement and 10% dose difference; (3) the 3D patient models also enabled accurate prescription of fluence-field-modulated CT to generate a more uniform noise distribution across the patient body compared to tube-current-modulated CT; and (4) ScoutCT-NET 3D patient models enabled accurate prescription of organ-specific CT to boost image quality for a given body region of interest under a given radiation dose constraint. CONCLUSION: 3D tomographic attenuation models generated by ScoutCT-NET from two-view scout images can be used to prescribe fluence-field-modulated or organ-specific CT scans with high accuracy for the overall objective of radiation dose reduction or image quality improvement for a given imaging task.
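
The gamma criterion used above (5 mm distance-to-agreement, 10% dose difference) can be sketched with a brute-force 2D implementation; production gamma tools restrict the search radius and interpolate, so treat this only as an illustration of the metric:

```python
# Simplified global-gamma sketch (brute force, 2D, 1 mm grid assumed) of the
# criterion quoted above: 5 mm distance-to-agreement, 10% dose difference.
import numpy as np

def gamma_pass_rate(ref, ev, spacing=1.0, dta=5.0, dd=0.10):
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    passed = np.zeros_like(ref, dtype=bool)
    dd_norm = dd * ref.max()                  # global dose-difference criterion
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing ** 2
            gamma2 = dist2 / dta ** 2 + (ev - ref[iy, ix]) ** 2 / dd_norm ** 2
            passed[iy, ix] = gamma2.min() <= 1.0   # best match over all points
    return passed.mean()

ref = np.random.rand(32, 32)                  # toy reference dose plane
ev = ref + 0.02 * np.random.randn(32, 32)     # toy evaluated dose plane
print(f"gamma pass rate: {100 * gamma_pass_rate(ref, ev):.1f}%")
```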
Affiliation(s)
- Juan C Montoya: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Chengzhu Zhang: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yinsheng Li: Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Ke Li: Department of Medical Physics and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen: Department of Medical Physics and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA

13. Eulig E, Maier J, Knaup M, Bennett NR, Hörndler K, Wang AS, Kachelrieß M. Deep learning-based reconstruction of interventional tools and devices from four X-ray projections for tomographic interventional guidance. Med Phys 2021; 48:5837-5850. PMID: 34387362; DOI: 10.1002/mp.15160.
Abstract
PURPOSE: Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images using a monoplanar or a biplanar C-arm system. However, the projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires, or coils. In this work, we propose a deep learning-based pipeline for real-time tomographic (four-dimensional [4D]) interventional guidance at conventional dose levels. METHODS: Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp-reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained using simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils, and guide wires. RESULTS: The pipeline is capable of reconstructing interventional tools from only four X-ray projections without the need for a patient prior. At an isotropic voxel size of 100 μm, our methods achieve a precision/recall within a 100 μm environment of the ground truth of 93%/98%, 90%/71%, and 93%/76% for guide wires, stents, and coils, respectively. CONCLUSIONS: A deep learning-based approach for 4D interventional guidance is able to overcome the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
Affiliation(s)
- Elias Eulig: Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Joscha Maier: Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Michael Knaup: Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- N Robert Bennett: Department of Radiology, Stanford University, Stanford, California, USA
- Adam S Wang: Department of Radiology, Stanford University, Stanford, California, USA
- Marc Kachelrieß: Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany

14. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021; 65:596-611. PMID: 34288501; DOI: 10.1111/1754-9485.13285.
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to patients. There is a growing appreciation of intrafraction target motion management in the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes across the full anatomy within the field of view. Motion prediction algorithms can account for the latencies caused by the time needed for the system to localise, process and act.
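
The motion prediction category can be illustrated with the simplest possible example: a linear autoregressive model trained on a sliding window of past target positions to predict the position one system latency ahead. The sampling rate, latency, window length, and sinusoidal trace below are toy assumptions:

```python
# Toy sketch of latency compensation by motion prediction: a linear
# autoregressive model maps a sliding window of past positions to the position
# one system latency ahead.
import numpy as np

fs, latency_s, window = 25, 0.4, 10                # 25 Hz imaging, 400 ms latency
horizon = int(latency_s * fs)                      # predict 10 samples ahead
t = np.arange(0, 60, 1 / fs)
trace = 10 * np.sin(2 * np.pi * t / 4.0)           # 4 s breathing cycle, in mm

# Training pairs: (last `window` samples) -> (position `horizon` steps later).
X = np.stack([trace[i:i + window] for i in range(len(trace) - window - horizon)])
y = trace[window + horizon:]
coef, *_ = np.linalg.lstsq(X[:-1], y[:-1], rcond=None)  # hold out the last pair

pred = X[-1] @ coef                                # prediction 400 ms ahead
print(f"predicted {pred:.2f} mm vs actual {y[-1]:.2f} mm")
```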
Affiliation(s)
- Adam Mylonas: ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Jeremy Booth: Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
- Doan Trang Nguyen: ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney; School of Biomedical Engineering, University of Technology Sydney; Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia

15. Lei Y, Tian Z, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based real-time volumetric imaging for lung stereotactic body radiation therapy: a proof of concept study. Phys Med Biol 2020; 65:235003. PMID: 33080578; DOI: 10.1088/1361-6560/abc303.
Abstract
Due to the inter- and intra-fraction variation of respiratory motion, it is highly desirable to provide real-time volumetric images during the treatment delivery of lung stereotactic body radiation therapy (SBRT) for accurate and active motion management. In this proof-of-concept study, we propose a novel generative adversarial network integrated with perceptual supervision to derive instantaneous volumetric images from a single 2D projection. Our proposed network, named TransNet, consists of three modules: encoding, transformation, and decoding. Rather than using only an image distance loss between the generated 3D images and the ground-truth 3D CT images to supervise the network, a perceptual loss in feature space is integrated into the loss function to force TransNet to yield an accurate lung boundary. Adversarial supervision is also used to improve the realism of the generated 3D images. We conducted a simulation study on 20 patient cases who had received lung SBRT treatment at our institution and undergone 4D-CT simulation, and we evaluated the efficacy and robustness of our method for four different projection angles: 0°, 30°, 60° and 90°. For the 3D CT image set of each breathing phase, we simulated 2D projections at these angles. For each projection angle, a patient's 3D CT images of nine phases and the corresponding 2D projection data were used to train our network for that specific patient, with the remaining phase used for testing. The mean absolute error of the 3D images obtained by our method was 99.3 ± 14.1 HU. The peak signal-to-noise ratio and structural similarity index metric within the tumor region of interest were 15.4 ± 2.5 dB and 0.839 ± 0.090, respectively. The center-of-mass distance between the manual tumor contours on the 3D images obtained by our method and those on the corresponding 3D phase CT images was within 2.6 mm, with a mean value of 1.26 mm averaged over all cases. Our method was also validated in a simulated challenging scenario with increased respiratory motion amplitude and tumor shrinkage, and achieved acceptable results. These experimental results demonstrate the feasibility and efficacy of our 2D-to-3D method for lung cancer patients, providing a potential solution for in-treatment real-time on-board volumetric imaging for tumor tracking and dose delivery verification to ensure the effectiveness of lung SBRT treatment.
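
Perceptual supervision of this kind is typically implemented by comparing feature maps of a fixed pretrained network rather than raw pixels. The sketch below uses torchvision's VGG16 as that feature extractor, which is our assumption for illustration; the authors' feature network and layer choice may differ:

```python
# Minimal sketch of perceptual supervision: distances are taken between fixed
# pretrained feature maps of generated and ground-truth slices rather than raw
# pixels. The VGG16 backbone and layer cut are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)                    # the loss network stays frozen

def perceptual_loss(fake, real):
    # fake/real: (batch, 1, H, W) CT slices; VGG expects 3 input channels.
    fake3, real3 = fake.repeat(1, 3, 1, 1), real.repeat(1, 3, 1, 1)
    return F.l1_loss(features(fake3), features(real3))

loss = perceptual_loss(torch.rand(2, 1, 128, 128), torch.rand(2, 1, 128, 128))
print(loss.item())
```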
Affiliation(s)
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America (co-first author)
- Zhen Tian: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America (co-first author)
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Kristin Higgins: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jeffrey D Bradley: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Walter J Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America

16. Wei R, Liu B, Zhou F, Bai X, Fu D, Liang B, Wu Q. A patient-independent CT intensity matching method using conditional generative adversarial networks (cGAN) for single x-ray projection-based tumor localization. Phys Med Biol 2020; 65:145009. PMID: 32320959; DOI: 10.1088/1361-6560/ab8bf2.
Abstract
A convolutional neural network (CNN)-based tumor localization method using a single x-ray projection was previously developed by us. One finding was that the discrepancy in intensity between a digitally reconstructed radiograph (DRR) of the three-dimensional computed tomography (3D-CT) and the measured x-ray projection has an impact on performance. To address this issue, a patient-dependent intensity matching process for the 3D-CT was performed using 3D cone-beam computed tomography (3D-CBCT) from the same patient, which was sometimes inefficient and could adversely affect the clinical implementation of the framework. To circumvent this, in this work, we propose and validate a patient-independent intensity matching method based on a conditional generative adversarial network (cGAN). A 3D cGAN was trained to approximate the mapping from 3D-CT to 3D-CBCT using previous patient data. By applying the trained network to a new patient, a synthetic 3D-CBCT can be generated without the need to perform an actual CBCT scan on that patient. The DRR of the synthetic 3D-CBCT was subsequently utilized in our CNN-based tumor localization scheme. The method was tested using data from 12 patients with the same imaging parameters. The resulting 3D-CBCT and DRRs were compared with real ones to demonstrate the efficacy of the proposed method. The tumor localization errors were also analyzed. The difference between the synthetic and real 3D-CBCT had a median value of no more than 10 HU for all patients. The relative error between the DRR and the measured x-ray projection was less than 4.8% ± 2.0% for all patients. For the three patients with a visible tumor in the x-ray projections, the average tumor localization errors were below 1.7 and 0.9 mm in the superior-inferior and lateral directions, respectively. A patient-independent CT intensity matching method was developed, based on which accurate tumor localization was achieved. It does not require an actual CBCT scan to be performed before treatment for each patient, making it more efficient in the clinical workflow.
Affiliation(s)
- Ran Wei: Image Processing Center, Beihang University, Beijing 100191, People's Republic of China (these authors contributed equally)

17. A statistical weighted sparse-based local lung motion modelling approach for model-driven lung biopsy. Int J Comput Assist Radiol Surg 2020; 15:1279-1290. PMID: 32347465; DOI: 10.1007/s11548-020-02154-7.
Abstract
PURPOSE: Lung biopsy is currently the most effective procedure for cancer diagnosis. However, respiration-induced location uncertainty presents a challenge to precise lung biopsy. To reduce the medical image requirements for motion modelling, in this study, local lung motion information in the region of interest (ROI) is extracted from whole-chest computed tomography (CT) and CT-fluoroscopy scans to predict the motion of potentially cancerous tissue and important vessels during the model-driven lung biopsy process. METHODS: The motion prior of the ROI was generated via a sparse linear combination of a subset of motion information from a respiratory motion repository, and a weighted sparse-based statistical model was used to preserve the local respiratory motion details. We also employed a motion prior-based registration method to improve the motion estimation accuracy in the ROI and designed adaptive variable coefficients to interactively weigh the relative influence of the prior knowledge and image intensity information during the registration process. RESULTS: The proposed method was applied to ten test subjects for the estimation of the respiratory motion field. The quantitative analysis resulted in a mean target registration error of 1.5 (0.8) mm and an average symmetric surface distance of 1.4 (0.6) mm. CONCLUSIONS: The proposed method shows remarkable advantages over traditional methods in preserving local motion details and reducing the estimation error in the ROI. These results also provide a benchmark for lung respiratory motion modelling in the literature.
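
The sparse-linear-combination step can be illustrated with an off-the-shelf Lasso solver: an observed ROI motion vector is approximated as a sparse mix of repository motion fields. Repository size and vector length are toy assumptions, and the paper's statistical weighting scheme is not reproduced here:

```python
# Hedged sketch of a sparse motion prior: an observed ROI motion vector is
# approximated as a sparse linear combination of repository motion fields via
# scikit-learn's Lasso. Sizes and data are toys.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
repository = rng.standard_normal((200, 30))     # 30 candidate motion fields
true_w = np.zeros(30)
true_w[[3, 17]] = [0.8, -0.5]                   # ground truth uses two fields
observed = repository @ true_w + 0.01 * rng.standard_normal(200)

lasso = Lasso(alpha=0.01).fit(repository, observed)
motion_prior = repository @ lasso.coef_         # sparse-combination prior
print(np.flatnonzero(np.abs(lasso.coef_) > 1e-3))   # selected field indices
```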

18. Wei R, Zhou F, Liu B, Bai X, Fu D, Liang B, Wu Q. Real-time tumor localization with single x-ray projection at arbitrary gantry angles using a convolutional neural network (CNN). Phys Med Biol 2020; 65:065012. PMID: 31896093; DOI: 10.1088/1361-6560/ab66e4.
Abstract
For tumor tracking therapy, precise knowledge of the tumor position in real time is very important. A technique using a single x-ray projection based on a convolutional neural network (CNN) was recently developed that can achieve accurate tumor localization in real time. However, this method was only validated at fixed gantry angles. In this study, an improved technique is developed to handle arbitrary gantry angles for rotational radiotherapy. To model the highly complex relationship between x-ray projections at arbitrary angles and tumor motion, a special CNN is proposed. In this network, a binary region of interest (ROI) mask is applied to every extracted feature map. This avoids the overfitting problem caused by gantry rotation by directing the network to neglect irrelevant pixels whose intensity variation has nothing to do with breathing motion. In addition, an angle-dependent fully connected layer (ADFCL) is utilized to recover the mapping from extracted feature maps to tumor motion, which varies with the gantry angle. The method was tested with images from 15 realistic patients and compared with a variant of the VGG network developed by Oxford University's Visual Geometry Group. The tumors were clearly visible on the x-ray projections for only five patients; for these, the average tumor localization error was under 1.8 mm and 1.0 mm in the superior-inferior and lateral directions, respectively. For the other ten patients, whose tumors were not clearly visible in the x-ray projections, a feature-point localization error was computed to evaluate the proposed method; its mean value was no more than 1.5 mm and 1.0 mm in the two directions for all patients. A tumor localization method for single x-ray projections at arbitrary angles based on a novel CNN was developed and validated in this study for real-time operation, greatly expanding the applicability of the tumor localization framework to rotational therapy.
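
The ROI masking idea is simple to demonstrate: one binary mask is broadcast-multiplied onto every feature map so that pixels unrelated to breathing motion cannot contribute downstream. The shapes and mask geometry below are illustrative:

```python
# Tiny sketch of binary ROI masking on feature maps: one mask is broadcast
# onto every channel so pixels whose intensity changes are unrelated to
# breathing cannot contribute downstream. Shapes are illustrative.
import torch
import torch.nn as nn

class ROIMaskedConv(nn.Module):
    def __init__(self, in_ch, out_ch, roi_mask):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.register_buffer("roi", roi_mask)  # (1, 1, H, W) binary mask

    def forward(self, x):
        # Broadcasting applies the single mask to every channel and sample.
        return torch.relu(self.conv(x)) * self.roi

mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0                  # keep only the central ROI
layer = ROIMaskedConv(1, 8, mask)
out = layer(torch.randn(2, 1, 64, 64))
print(out[..., 0, 0].abs().sum())              # exactly zero outside the ROI
```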
Affiliation(s)
- Ran Wei
- Image Processing Center, Beihang University, Beijing 100191, People's Republic of China
19
Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nat Biomed Eng 2019; 3:880-888. [PMID: 31659306 PMCID: PMC6858583 DOI: 10.1038/s41551-019-0466-4] [Citation(s) in RCA: 116] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 09/19/2019] [Indexed: 12/12/2022]
Abstract
Tomographic imaging via penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here, we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.
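A minimal encoder-decoder sketch of this projection-to-volume mapping is given below: a 2D convolutional encoder compresses the radiograph to a latent code, which is lifted to 3D and expanded by transposed 3D convolutions. All layer sizes are illustrative rather than the published architecture, and such a model would be trained per patient on paired projections and CT volumes (e.g., digitally reconstructed radiographs), in line with the abstract.

```python
import torch
import torch.nn as nn

class Projection2Volume(nn.Module):
    """Illustrative 2D-projection -> 3D-volume network (patient-specific)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(      # input: (B, 1, 128, 128)
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )                                  # -> (B, 128, 16, 16)
        self.decoder = nn.Sequential(      # each layer doubles D, H, W
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)                       # (B, 128, 16, 16)
        z = z.unsqueeze(2).repeat(1, 1, 2, 1, 1)  # lift: (B, 128, 2, 16, 16)
        return self.decoder(z)                    # (B, 1, 16, 128, 128)
```

Trained on projection/volume pairs of a single patient, the network effectively memorizes that patient's anatomy-to-projection relationship, which is what makes recovery from a single view plausible.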
20
O'Connell D, Ruan D, Thomas DH, Dou TH, Lewis JH, Santhanam A, Lee P, Low DA. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT. Phys Med Biol 2018; 63:04NT03. [PMID: 29350191 DOI: 10.1088/1361-6560/aaa90f] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol in a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criterion for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans, where 5 ⩽ N ⩽ 9, were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or 'spread', of the respiratory trajectories of each subset. The minimum phase coverage necessary to achieve a mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criterion. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25-scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5 ± 4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means of sampling the breathing cycle with a limited number of free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while achieving a mean model residual within 0.5 mm.
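Operationally, the protocol reduces to "acquire a scan, update a coverage metric, stop at a threshold". The sketch below uses a simple stand-in spread metric (mean pairwise distance among the sampled surrogate states), since the published phase-coverage metric is only referenced, not reproduced, in the abstract; all names and thresholds are illustrative.

```python
import numpy as np

def phase_coverage(samples):
    """Stand-in 'spread' metric: mean pairwise distance of the sampled
    respiratory states in (amplitude, rate) space. A proxy, not the
    previously published metric used in the paper."""
    pts = np.asarray(samples, dtype=float)
    if len(pts) < 2:
        return 0.0
    diffs = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).mean())

def prospective_protocol(surrogate_stream, threshold, max_scans=9):
    """Trigger free-breathing fast helical scans until the sampled
    breathing states cover the cycle (coverage >= threshold, derived
    retrospectively) or max_scans is reached."""
    acquired = []
    for state in surrogate_stream:      # (amplitude, rate) at trigger time
        acquired.append(state)          # acquire one scan here
        if len(acquired) >= 5 and phase_coverage(acquired) >= threshold:
            break                       # cycle sampled well enough
        if len(acquired) >= max_scans:
            break
    return acquired
```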
Affiliation(s)
- D O'Connell
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
21
Yan H, Tian Z, Shao Y, Jiang SB, Jia X. A new scheme for real-time high-contrast imaging in lung cancer radiotherapy: a proof-of-concept study. Phys Med Biol 2016; 61:2372-88. [PMID: 26943271 PMCID: PMC5590640 DOI: 10.1088/0031-9155/61/6/2372] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Visualization of anatomy in real time is of critical importance for motion management in lung cancer radiotherapy. To achieve real-time, high-contrast in-treatment imaging, we propose a novel scheme based on the measurement of Compton-scattered photons. In our method, a slit x-ray beam along the superior-inferior direction is directed at the patient, intersecting the lung region at a 2D plane that contains most of the tumor motion trajectory. X-ray photons are scattered off this plane primarily through the Compton interaction. An imager with a pinhole or a slat collimator is placed at one side of the plane to capture the scattered photons. The resulting image, after correction for incoming fluence inhomogeneity, x-ray attenuation, scatter-angle variation, and outgoing beam geometry, represents the linear attenuation coefficient of Compton scattering, allowing visualization of the anatomy on this plane. We performed proof-of-principle Monte Carlo simulation studies on both a phantom and a patient. In the phantom case, a small tumor-like structure could be clearly visualized. The contrast resolution, calculated using tumor/lung as foreground/background, was 0.037 for kV fluoroscopy, 0.70 for cone-beam computed tomography (CBCT), and 0.54 for the scatter image. In the patient case, tumor motion could be clearly observed in the scatter images. The imaging dose to the voxels directly exposed by the slit beam was ~0.4 times that of a single CBCT projection. These studies demonstrate the potential feasibility of the proposed imaging scheme for capturing the instantaneous anatomy of a patient on a 2D plane with high image contrast. Clear visualization of the tumor motion may facilitate markerless tumor tracking.
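The stated correction chain amounts to dividing the raw scatter image by precomputed per-pixel factors. Below is a minimal sketch, assuming each correction is available as a 2D map over the imaged plane; the paper's exact correction chain and factor definitions may differ.

```python
import numpy as np

def compton_scatter_image(raw, fluence, atten_in, atten_out, kn_factor, geom):
    """Convert raw pinhole/slat-collimated scatter counts into a map
    proportional to the Compton linear attenuation coefficient on the
    slit-beam plane (illustrative correction chain).

    raw       : detected scatter counts per pixel
    fluence   : incoming slit-beam fluence reaching each point
    atten_in  : attenuation of the incoming beam up to the plane
    atten_out : attenuation from the plane to the detector
    kn_factor : Klein-Nishina scatter-angle weighting per pixel
    geom      : solid-angle / inverse-square geometry factor per pixel
    """
    eps = 1e-12  # guard against division by zero in empty regions
    denom = fluence * atten_in * atten_out * kn_factor * geom
    return raw / np.maximum(denom, eps)
```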
22
Cai W, Dhou S, Cifter F, Myronakis M, Hurwitz MH, Williams CL, Berbeco RI, Seco J, Lewis JH. 4D cone beam CT-based dose assessment for SBRT lung cancer treatment. Phys Med Biol 2015; 61:554-68. [DOI: 10.1088/0031-9155/61/2/554] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]