1
Lang Y, Jiang Z, Sun L, Tran P, Mossahebi S, Xiang L, Ren L. Patient-specific deep learning for 3D protoacoustic image reconstruction and dose verification in proton therapy. Med Phys 2024. [PMID: 38980065 DOI: 10.1002/mp.17294]
Abstract
BACKGROUND Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification in proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images; however, because the model was trained on data from a limited number of patients, its efficacy was limited when applied to individual patients. PURPOSE In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve the reconstruction quality of PA images and the accuracy of dose verification for individual patients. METHODS Our method consists of two stages. In the first stage, a group model is trained on a diverse training set containing all patients, using a novel deep learning network that directly reconstructs the initial pressure maps from the radiofrequency (RF) signals. In the second stage, we apply transfer learning to the pre-trained group model, using a patient-specific dataset derived from a novel data augmentation method, to tune it into a patient-specific model. Raw PA signals were simulated from computed tomography (CT) images and the pressure map derived from the planned dose. The reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), the structural similarity index measure (SSIM), and the gamma index on 10 prostate cancer patients. Significance was evaluated by t-test against the results of the group model, with a p-value threshold of 0.05. RESULTS The patient-specific model achieved an average RMSE of 0.014 (p < 0.05) and an average SSIM of 0.981 (p < 0.05), outperforming the group model. Qualitative results also demonstrated that the patient-specific approach achieved better imaging quality, with more details reconstructed, compared with the group model.
Dose verification achieved an average RMSE of 0.011 (p < 0.05) and an average SSIM of 0.995 (p < 0.05). Gamma index evaluation demonstrated high agreement between the predicted and ground-truth dose maps (97.4% [p < 0.05] at 1%/3 mm and 97.9% [p < 0.05] at 1%/5 mm). Our approach took approximately 6 s to reconstruct the PA images for each patient, demonstrating its feasibility for online 3D dose verification in prostate proton therapy. CONCLUSIONS Our method demonstrated the feasibility of 3D high-precision PA-based dose verification using patient-specific deep learning, which can potentially be used to guide treatment, mitigate the impact of range uncertainty, and improve delivery precision. Further studies are needed to validate the clinical impact of the technique.
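The gamma-index pass rates quoted above (1%/3 mm and 1%/5 mm) combine a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D NumPy sketch of a global gamma analysis may help make the criterion concrete (the function name and the 1D simplification are ours, not the paper's implementation):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dose_tol=0.01, dist_tol=3.0):
    """Simplified 1D global gamma analysis.

    ref, ev  : reference and evaluated dose profiles on the same grid
    spacing  : grid spacing in mm
    dose_tol : dose tolerance as a fraction of the reference maximum
    dist_tol : distance-to-agreement tolerance in mm
    Returns the fraction of evaluated points with gamma <= 1.
    """
    dd = dose_tol * ref.max()
    x = np.arange(ref.size) * spacing
    gammas = np.empty(ev.size)
    for i in range(ev.size):
        # gamma at point i: minimum combined dose/distance mismatch
        # over every reference point
        g = np.sqrt(((ev[i] - ref) / dd) ** 2 + ((x[i] - x) / dist_tol) ** 2)
        gammas[i] = g.min()
    return float((gammas <= 1.0).mean())
```

A dose map identical to the reference passes at 100%; a profile offset by more than the combined tolerances fails almost everywhere.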
Affiliation(s)
- Yankun Lang: Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Zhuoran Jiang: Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Leshan Sun: Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Phuoc Tran: Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Sina Mossahebi: Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Liangzhong Xiang: Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Lei Ren: Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
2
Choi Y, Jang H, Baek J. Chest tomosynthesis deblurring using CNN with deconvolution layer for vertebrae segmentation. Med Phys 2023; 50:7714-7730. [PMID: 37401539 DOI: 10.1002/mp.16576]
Abstract
BACKGROUND When the Feldkamp-Davis-Kress (FDK) algorithm is used to reconstruct tomosynthesis images, the limited scan angle causes severe distortions and artifacts that degrade clinical diagnostic performance. These blurring artifacts are especially harmful in chest tomosynthesis images because precise vertebrae segmentation is crucial for various diagnostic analyses, such as early diagnosis, surgical planning, and injury detection. Moreover, because most spinal pathologies are related to vertebral conditions, developing methods for accurate and objective vertebrae segmentation in medical images is an important and challenging research area. PURPOSE Existing point-spread-function (PSF)-based deblurring methods use the same PSF in all sub-volumes, ignoring the spatially varying property of tomosynthesis images; this increases the PSF estimation error and further degrades deblurring performance. In contrast, the proposed method estimates the PSF more accurately by using sub-CNNs that contain a deconvolution layer for each sub-system, which improves the deblurring performance. METHODS To minimize the effect of the spatially varying property, the proposed deblurring network architecture comprises four modules: (1) a block division module, (2) a partial PSF module, (3) a deblurring block module, and (4) an assembling block module. We compared the proposed DL-based method with the FDK algorithm, total-variation iterative reconstruction with GP-BB (TV-IR), 3D U-Net, FBPConvNet, and a two-phase deblurring method. To investigate the deblurring performance of the proposed method, we evaluated its vertebrae segmentation performance by comparing the pixel accuracy (PA), intersection-over-union (IoU), and F-score values of the reference images with those of the deblurred images. Pixel-based evaluations of the reference and deblurred images were also performed by comparing their root mean squared error (RMSE) and visual information fidelity (VIF) values.
In addition, 2D analysis of the deblurred images was performed using the artifact spread function (ASF) and the full width at half maximum (FWHM) of the ASF curve. RESULTS The proposed method recovered the original structure significantly better, further improving image quality, and yielded the best deblurring performance in terms of vertebrae segmentation and similarity. The IoU, F-score, and VIF values of the chest tomosynthesis images reconstructed using the proposed spatially varying (SV) method were 53.5%, 28.7%, and 63.2% higher, respectively, than those of images reconstructed using the FDK method, and the RMSE value was 80.3% lower. These quantitative results indicate that the proposed method can effectively restore both the vertebrae and the surrounding soft tissue. CONCLUSIONS We proposed a chest tomosynthesis deblurring technique for vertebrae segmentation that accounts for the spatially varying property of tomosynthesis systems. Quantitative evaluations indicated that the vertebrae segmentation performance of the proposed method was better than that of existing deblurring methods.
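The three segmentation scores used in the evaluation (pixel accuracy, IoU, F-score) all reduce to simple counts over binary masks. A small NumPy sketch, as our illustration rather than the paper's evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Pixel accuracy, intersection-over-union, and F-score (Dice)
    for binary segmentation masks of identical shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.count_nonzero(pred & ref)    # true positives
    fp = np.count_nonzero(pred & ~ref)   # false positives
    fn = np.count_nonzero(~pred & ref)   # false negatives
    pa = float(np.mean(pred == ref))     # pixel accuracy
    iou = tp / (tp + fp + fn)
    f_score = 2 * tp / (2 * tp + fp + fn)
    return pa, iou, f_score
```

Note that the F-score here is the Dice coefficient, which is monotonically related to IoU but not equal to it.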
Affiliation(s)
- Yunsu Choi: School of Integrated Technology, Yonsei University, Incheon, South Korea
- Hanjoo Jang: School of Integrated Technology, Yonsei University, Incheon, South Korea
- Jongduk Baek: Department of Artificial Intelligence, College of Computing, Yonsei University, Incheon, South Korea
3
Jiang Z, Wang S, Xu Y, Sun L, Gonzalez G, Chen Y, Wu QJ, Xiang L, Ren L. Radiation-induced acoustic signal denoising using a supervised deep learning framework for imaging and therapy monitoring. Phys Med Biol 2023; 68. [PMID: 37820684 PMCID: PMC11000456 DOI: 10.1088/1361-6560/ad0283]
Abstract
Radiation-induced acoustic (RA) imaging is a promising technique for visualizing the otherwise invisible radiation energy deposition in tissues, enabling new imaging modalities and real-time therapy monitoring. However, RA signals often suffer from poor signal-to-noise ratios (SNRs), requiring hundreds or even thousands of frames to be measured and averaged to achieve satisfactory quality. This repetitive measurement increases the ionizing radiation dose and degrades the temporal resolution of RA imaging, limiting its clinical utility. In this study, we developed a general deep inception convolutional neural network (GDI-CNN) to denoise RA signals and substantially reduce the number of frames needed for averaging. The network employs convolutions with multiple dilations in each inception block, allowing it to encode and decode signal features with varying temporal characteristics. This design generalizes GDI-CNN to denoising acoustic signals from different radiation sources. The performance of the proposed method was evaluated qualitatively and quantitatively using experimental x-ray-induced acoustic, protoacoustic, and electroacoustic data. The results demonstrated the effectiveness of GDI-CNN: it achieved x-ray-induced acoustic image quality comparable to 750-frame-averaged results using only 10-frame-averaged measurements, reducing the imaging dose of x-ray-acoustic computed tomography (XACT) by 98.7%; it achieved proton range accuracy comparable to 1500-frame-averaged results using only 20-frame-averaged measurements, improving the range verification frequency in proton therapy from 0.5 to 37.5 Hz; and it reached electroacoustic image quality comparable to 750-frame-averaged results using only a single-frame signal, increasing the electric field monitoring frequency from 1 to 1000 frames per second.
Compared with lowpass-filter-based denoising, the proposed method demonstrated considerably lower mean squared errors, higher peak SNR, and higher structural similarity with respect to the corresponding high-frame-averaged measurements. The proposed deep learning-based denoising framework is a generalized method for few-frame-averaged acoustic signal denoising, significantly improving the clinical utility of RA imaging for low-dose imaging and real-time therapy monitoring.
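The key architectural idea above is running parallel convolutions with different dilation rates, so one inception block sees several temporal scales at once. A toy 1D dilated convolution in plain NumPy (kernel and dilation rates are illustrative only; the real network learns separate kernels per branch):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1D convolution whose receptive field is widened
    by inserting (dilation - 1) gaps between kernel taps."""
    k = kernel.size
    pad = dilation * (k - 1) // 2
    padded = np.pad(signal, pad)
    out = np.zeros(signal.size)
    for i in range(signal.size):
        for j in range(k):
            out[i] += kernel[j] * padded[i + j * dilation]
    return out

def inception_block(signal, kernel, dilations=(1, 2, 4)):
    """Average parallel dilated branches, mimicking a multi-scale
    inception block in the simplest possible way."""
    return np.mean(
        [dilated_conv1d(signal, kernel, d) for d in dilations], axis=0
    )
```

With a smoothing kernel, the small-dilation branch tracks fast transients while the large-dilation branches capture slower envelope features of the acoustic trace.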
Affiliation(s)
- Zhuoran Jiang (contributed equally): Medical Physics Graduate Program, Duke University, Durham, NC 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- Siqi Wang (contributed equally): Department of Biomedical Engineering, University of California, Irvine, CA 92617, USA
- Yifei Xu: Department of Biomedical Engineering, University of California, Irvine, CA 92617, USA
- Leshan Sun: Department of Biomedical Engineering, University of California, Irvine, CA 92617, USA
- Gilberto Gonzalez: Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Yong Chen: Department of Radiation Oncology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Q Jackie Wu: Medical Physics Graduate Program, Duke University, Durham, NC 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- Liangzhong Xiang: Department of Biomedical Engineering, University of California, Irvine, CA 92617, USA; Department of Radiological Sciences, University of California, Irvine, CA 92697, USA; Beckman Laser Institute & Medical Clinic, University of California, Irvine, CA 92612, USA
- Lei Ren: Department of Radiation Oncology, University of Maryland, Baltimore, MD 21201, USA
4
Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver motion estimation via deep learning-based angle-agnostic X-ray imaging. Med Phys 2023; 50:6649-6662. [PMID: 37922461 PMCID: PMC10629841 DOI: 10.1002/mp.16691]
Abstract
BACKGROUND Real-time liver imaging is challenged by the short imaging time (within hundreds of milliseconds) required to meet the temporal constraint posed by rapid patient breathing, resulting in extreme under-sampling for the desired 3D imaging. Deep learning (DL)-based real-time imaging/motion estimation techniques are emerging as promising solutions, able to use a single X-ray projection to estimate 3D moving liver volumes through solved deformable motion. However, such techniques were mostly developed for a specific, fixed X-ray projection angle, making them impractical for verifying and guiding arc-based radiotherapy with continuous gantry rotation. PURPOSE To enable deformable motion estimation and 3D liver imaging from individual X-ray projections acquired at arbitrary scan angles, and to further improve the accuracy of single-X-ray-driven motion estimation. METHODS We developed a DL-based method, X360, to estimate the deformable motion of the liver boundary using an X-ray projection acquired at an arbitrary (angle-agnostic) gantry angle. X360 incorporates patient-specific prior information from planning 4D-CTs to address the under-sampling issue and adopts a deformation-driven approach that deforms a prior liver surface mesh into new meshes reflecting real-time motion. The liver mesh motion is solved from motion-related image features encoded in the arbitrary-angle X-ray projection, through a sequential combination of rigid and deformable registration modules. To achieve angle agnosticism, a geometry-informed X-ray feature pooling layer was developed that allows X360 to extract angle-dependent image features for motion estimation. As a liver boundary motion solver, X360 was also combined with previously developed DL-based optical surface imaging and biomechanical modeling techniques for intra-liver motion estimation and tumor localization. RESULTS With geometry-aware feature pooling, X360 can solve the liver boundary motion from an arbitrary-angle X-ray projection.
Evaluated on a set of 10 liver patient cases, the mean (±s.d.) 95-percentile Hausdorff distance between the solved liver boundary and the "ground truth" decreased from 10.9 (±4.5) mm (before motion estimation) to 5.5 (±1.9) mm (X360). When X360 was further integrated with surface imaging and biomechanical modeling for liver tumor localization, the mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 9.4 (±5.1) mm to 2.2 (±1.7) mm. CONCLUSION X360 achieves fast and robust liver boundary motion estimation from arbitrary-angle X-ray projections for real-time imaging guidance. Serving as a surface motion solver, X360 can be integrated into a combined framework for accurate, real-time, and marker-less liver tumor localization.
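The 95-percentile Hausdorff distance reported above is a robust surface-agreement measure: rather than the single worst point-to-surface distance, it takes the 95th percentile of the directed nearest-neighbor distances in both directions. A brute-force NumPy sketch for small point clouds (our illustration, not the paper's evaluation pipeline):

```python
import numpy as np

def hausdorff_95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between (N, 3)
    and (M, 3) point sets, in the same units as the inputs."""
    # pairwise Euclidean distance matrix, shape (N, M)
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    d_ab = np.percentile(d.min(axis=1), 95)  # directed a -> b distances
    d_ba = np.percentile(d.min(axis=0), 95)  # directed b -> a distances
    return float(max(d_ab, d_ba))
```

Discarding the top 5% of distances makes the metric insensitive to a few stray mesh vertices, which is why it is preferred over the plain Hausdorff distance for contour comparison.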
Affiliation(s)
- Hua-Chieh Shao, Yunxiang Li, Jing Wang, Steve Jiang, You Zhang: The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory; The Medical Artificial Intelligence and Automation (MAIA) Laboratory; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
5
Cao Y, Kunaprayoon D, Ren L. Interpretable AI-assisted clinical decision making (CDM) for dose prescription in radiosurgery of brain metastases. Radiother Oncol 2023; 187:109842. [PMID: 37543055 PMCID: PMC11195016 DOI: 10.1016/j.radonc.2023.109842]
Abstract
PURPOSE AI models of physicians' clinical decision-making (CDM) can improve the efficiency and accuracy of clinical practice or serve as a surrogate to provide initial consultations to patients seeking second opinions. In this study, we developed an interpretable AI model that predicts dose fractionation for patients receiving radiation therapy for brain metastases, along with an interpretation of its decision-making process. MATERIALS/METHODS Data from 152 patients with brain metastases treated by radiosurgery from 2017 to 2021 were obtained. CT images and target and organ-at-risk (OAR) contours were extracted. Eight non-image clinical parameters were also extracted and digitized: age, number of brain metastases, ECOG performance status, presence of symptoms, sequencing with surgery (pre- or post-operative radiation therapy), de novo vs. re-treatment, primary cancer type, and metastasis to other sites. 3D convolutional neural network (CNN) architectures with encoding paths were built on the CT data and clinical parameters to capture three inputs: (1) tumor size, shape, and location; (2) the spatial relationship between tumors and OARs; and (3) the clinical parameters. The models learn from each input independently and fuse the extracted features at the decision-making level to predict the dose prescription. Models with different numbers of independent paths were developed, combining two (IM-2), three (IM-3), and ten (IM-10) independent paths at the decision-making level. A class activation score and relative weighting were calculated for each input path during model prediction to represent the role of each input in the decision-making process, providing an interpretation of the model prediction. The actual prescription in the record was used as the ground truth for model training.
Model performance was assessed by 19-fold cross-validation, with each fold consisting of randomly selected 128 training, 16 validation, and 8 testing subjects. RESULTS The dose prescriptions of the 152 patient cases, prescribed by 8 physicians, comprised 48 cases with 1 × 24 Gy, 48 cases with 1 × 20-22 Gy, 32 cases with 3 × 9 Gy, and 24 cases with 5 × 6 Gy. IM-2 achieved slightly superior performance to IM-3 and IM-10, with 131 (86%) patients classified correctly and 21 (14%) misclassified. IM-10 provided the most interpretability, with a relative weighting for each input: target (34%), relationship between target and OAR (35%), ECOG (6%), re-treatment (6%), metastasis to other sites (6%), number of brain metastases (3%), symptomatic (3%), pre/post-surgery (3%), primary cancer type (2%), and age (2%), reflecting the importance of each input in decision making. The importance ranking of inputs interpreted from the model also matched closely with a physician's own ranking in the decision process. CONCLUSION Interpretable CNN models were successfully developed to predict dose prescriptions for brain metastases patients treated by radiosurgery from CT images and non-image clinical parameters. The models showed high prediction accuracy while providing an interpretation of the decision process, which was validated by the physician. Such interpretability makes the model more transparent, which is crucial for future clinical adoption in routine practice for CDM assistance.
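The decision-level fusion with per-path relative weighting described above can be caricatured in a few lines: each path produces class scores, a normalized activation per path serves as its weight, and the weighted sum gives the fused prediction. The weighting scheme below (peak class score per path) is a hypothetical stand-in for the paper's class activation score:

```python
import numpy as np

def fuse_decision_paths(path_scores):
    """path_scores: dict mapping path name -> 1D array of class scores
    (e.g. over the four dose-fractionation classes).
    Returns (predicted class index, dict of relative path weights)."""
    names = list(path_scores)
    # hypothetical activation per path: its peak class score
    activations = np.array([path_scores[n].max() for n in names])
    weights = activations / activations.sum()
    fused = np.zeros_like(path_scores[names[0]], dtype=float)
    for w, n in zip(weights, names):
        fused += w * path_scores[n]
    return int(np.argmax(fused)), dict(zip(names, weights))
```

The returned weight dictionary plays the role of the interpretability output: it states how much each input path contributed to the fused prescription decision.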
Affiliation(s)
- Yufeng Cao: Department of Radiation Oncology, University of Maryland, Baltimore, Maryland, USA
- Dan Kunaprayoon: Department of Radiation Oncology, University of Maryland, Baltimore, Maryland, USA
- Lei Ren: Department of Radiation Oncology, University of Maryland, Baltimore, Maryland, USA
6
He X, Cai W, Li F, Fan Q, Zhang P, Cuaron JJ, Cerviño LI, Moran JM, Li X, Li T. Patient specific prior cross attention for kV decomposition in paraspinal motion tracking. Med Phys 2023; 50:5343-5353. [PMID: 37538040 PMCID: PMC11167561 DOI: 10.1002/mp.16644]
Abstract
BACKGROUND X-ray image quality is critical for accurate intrafraction motion tracking in radiation therapy. PURPOSE This study aims to develop a deep learning algorithm that improves kV image contrast by decomposing the image into bony and soft-tissue components. In particular, we designed an a priori attention mechanism in the neural network framework for optimal decomposition. We show that a patient-specific prior cross-attention (PCAT) mechanism can boost the performance of kV image decomposition, and we demonstrate its use in paraspinal SBRT motion tracking with online kV imaging. METHODS Online 2D kV projections were acquired during paraspinal SBRT for patient motion monitoring. The patient-specific prior images were generated by randomly shifting and rotating a spine-only DRR created from the setup CBCT, simulating potential motions. The latent features of the prior images were incorporated into the PCAT using multi-head cross-attention. The neural network learns to selectively amplify the transmission of projection-image features that correlate with features of the prior. The PCAT network structure consists of (1) a dual-branch generator that separates the spine and soft-tissue components of the kV projection image and (2) a dual-function discriminator (DFD) that provides a realness score for the predicted images. For supervision, we used a loss combining mean absolute error, discriminator loss, perceptual loss, total variation, and a mean squared error term for the soft tissues. The proposed PCAT approach was benchmarked against previous work using a ResNet generative adversarial network (ResNetGAN) without prior information. RESULTS The trained PCAT effectively retained and preserved spine structure and texture information while suppressing soft tissues in the kV projection images. The decomposed spine-only x-ray images achieved submillimeter matching accuracy at all beam angles.
The decomposed spine-only x-ray significantly reduced the maximum error to 0.44 mm (<2 pixels), compared with 0.92 mm (∼4 pixels) for ResNetGAN. The PCAT-decomposed spine images also had higher PSNR and SSIM (p < 0.001). CONCLUSION By incorporating patient-specific prior knowledge into the deep learning algorithm, the PCAT selectively learned the important latent features, significantly improving the robustness of kV projection image decomposition and leading to improved motion tracking accuracy in paraspinal SBRT.
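At the heart of the PCAT mechanism is cross-attention: features of the live kV projection act as queries against keys/values derived from the patient-specific prior images. A single-head scaled dot-product version in NumPy may clarify the data flow (the real model uses learned projections and multiple heads, which this sketch omits):

```python
import numpy as np

def cross_attention(queries, context):
    """queries: (Nq, d) features from the kV projection.
    context: (Nc, d) features from the prior (DRR) images.
    Returns (Nq, d): each query re-expressed as an attention-weighted
    mix of prior features."""
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over context
    return attn @ context
```

Because each output row is a convex combination of prior-image features, the mechanism naturally "amplifies the transmission" of projection features that resemble the motion-simulated priors, as described in the abstract.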
Affiliation(s)
- Xiuxiu He, Weixing Cai, Feifei Li, Qiyong Fan, Pengpeng Zhang, Laura I. Cerviño, Jean M. Moran, Xiang Li, Tianfang Li: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- John J. Cuaron: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
7
Jiang Z, Polf JC, Barajas CA, Gobbert MK, Ren L. A feasibility study of enhanced prompt gamma imaging for range verification in proton therapy using deep learning. Phys Med Biol 2023; 68. [PMID: 36848674 PMCID: PMC10173868 DOI: 10.1088/1361-6560/acbf9a]
Abstract
Background and objective. Range uncertainty is a major concern affecting delivery precision in proton therapy. Compton camera (CC)-based prompt-gamma (PG) imaging is a promising technique to provide 3D in vivo range verification. However, conventional back-projected PG images suffer from severe distortions due to the limited view of the CC, significantly limiting their clinical utility. Deep learning has demonstrated effectiveness in enhancing medical images from limited-view measurements. However, unlike other medical images with abundant anatomical structures, the PGs emitted along the path of a proton pencil beam occupy an extremely small portion of the 3D image space, presenting both an attention and an imbalance challenge for deep learning. To solve these issues, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images for accurate proton range verification. Materials and methods. The proposed method consists of two models: first, a localization model is trained to define a region of interest (ROI) in the distorted back-projected PG image that contains the proton pencil beam; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams (energy range: 75-125 MeV; dose levels: 1 × 10^9 protons/beam and 3 × 10^8 protons/beam) delivered at clinical dose rates (20 kMU/min and 180 kMU/min) in a tissue-equivalent phantom using Monte Carlo (MC) simulation. PG detection with a CC was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted back-projection algorithm and then enhanced by the proposed method. Results. The method effectively restored the 3D shape of the PG images, with the proton pencil beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at the higher dose level.
The proposed method is fully automatic, and the enhancement takes only ∼0.26 s. Significance. Overall, this preliminary study demonstrated the feasibility of the proposed method to generate accurate 3D PG images using a deep learning framework, providing a powerful tool for high-precision in vivo range verification in proton therapy.
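The weighted axis-projection loss can be pictured as comparing the predicted and true 3D PG volumes through their projections along each axis, so the sparse beam signal is not swamped by the empty voxels that dominate a plain voxel-wise loss. A hedged NumPy sketch (sum-projections and uniform axis weights are our assumptions; the paper's exact formulation may differ):

```python
import numpy as np

def axis_projection_loss(pred, target, weights=(1.0, 1.0, 1.0)):
    """Weighted MSE between the per-axis sum-projections of two 3D volumes."""
    loss = 0.0
    for axis, w in enumerate(weights):
        # collapse one axis: a sparse beam becomes a dense 2D footprint
        diff = pred.sum(axis=axis) - target.sum(axis=axis)
        loss += w * float(np.mean(diff ** 2))
    return loss
```

Projecting first concentrates the beam's contribution into far fewer pixels, which is one way to address the class-imbalance challenge the abstract describes.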
Affiliation(s)
- Zhuoran Jiang: Medical Physics Graduate Program, Duke University, Durham, NC 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- Jerimy C. Polf: Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Carlos A. Barajas: Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD 21250, USA
- Matthias K. Gobbert: Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD 21250, USA
- Lei Ren: Medical Physics Graduate Program, Duke University, Durham, NC 27705, USA; Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA; Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
8
Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver tumor localization via combined surface imaging and a single x-ray projection. Phys Med Biol 2023; 68:065002. [PMID: 36731143 PMCID: PMC10394117 DOI: 10.1088/1361-6560/acb889]
Abstract
Objective. Real-time imaging, a building block of real-time adaptive radiotherapy, provides instantaneous knowledge of anatomical motion to drive delivery adaptation to improve patient safety and treatment efficacy. The temporal constraint of real-time imaging (<500 milliseconds) significantly limits the imaging signals that can be acquired, rendering volumetric imaging and 3D tumor localization extremely challenging. Real-time liver imaging is particularly difficult, compounded by the low soft tissue contrast within the liver. We proposed a deep learning (DL)-based framework (Surf-X-Bio), to track 3D liver tumor motion in real-time from combined optical surface image and a single on-board x-ray projection.Approach. Surf-X-Bio performs mesh-based deformable registration to track/localize liver tumors volumetrically via three steps. First, a DL model was built to estimate liver boundary motion from an optical surface image, using learnt motion correlations between the respiratory-induced external body surface and liver boundary. Second, the residual liver boundary motion estimation error was further corrected by a graph neural network-based DL model, using information extracted from a single x-ray projection. Finally, a biomechanical modeling-driven DL model was applied to solve the intra-liver motion for tumor localization, using the liver boundary motion derived via prior steps.Main results. Surf-X-Bio demonstrated higher accuracy and better robustness in tumor localization, as compared to surface-image-only and x-ray-only models. By Surf-X-Bio, the mean (±s.d.) 95-percentile Hausdorff distance of the liver boundary from the 'ground-truth' decreased from 9.8 (±4.5) (before motion estimation) to 2.4 (±1.6) mm. The mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 8.3 (±4.8) to 1.9 (±1.6) mm.Significance. Surf-X-Bio can accurately track liver tumors from combined surface imaging and x-ray imaging. 
The fast computational speed (<250 milliseconds per inference) allows it to be applied clinically for real-time motion management and adaptive radiotherapy.
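The two accuracy metrics reported above (95-percentile Hausdorff distance of the liver boundary and center-of-mass localization error of the tumor) are standard and can be sketched in a few lines of NumPy; the point clouds, masks, and voxel sizes below are illustrative, not the study's data:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two surface
    point sets, each an (N, 3) array of coordinates in mm."""
    # Pairwise Euclidean distances between the two point clouds.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # Directed distances: each point to its nearest neighbour in the other set.
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    # Symmetric HD95: 95th percentile over both directions.
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

def com_error(mask_a, mask_b, voxel_size_mm):
    """Center-of-mass localization error (mm) between two binary tumor masks."""
    com_a = np.array(np.nonzero(mask_a)).mean(axis=1)
    com_b = np.array(np.nonzero(mask_b)).mean(axis=1)
    return float(np.linalg.norm((com_a - com_b) * np.asarray(voxel_size_mm)))
```

With 1 mm isotropic voxels, shifting a tumor mask by two voxels along one axis yields a 2 mm center-of-mass error, matching intuition.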
Affiliation(s)
- Hua-Chieh Shao
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Yunxiang Li
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Jing Wang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Steve Jiang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- You Zhang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
9
AI-assisted clinical decision making (CDM) for dose prescription in radiosurgery of brain metastases using three-path three-dimensional CNN. Clin Transl Radiat Oncol 2022; 39:100565. [PMID: 36594076 PMCID: PMC9804100 DOI: 10.1016/j.ctro.2022.100565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Revised: 11/04/2022] [Accepted: 12/14/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose AI models of physicians' clinical decision-making (CDM) can improve the efficiency and accuracy of clinical practice or serve as surrogates to provide initial consultations to patients seeking second opinions. In this study, we developed an AI network to model radiotherapy CDM and used dose prescription as an example to demonstrate its feasibility. Materials/Methods 152 patients with brain metastases treated by radiosurgery from 2017 to 2021 were included. CT images and tumor and organ-at-risk (OAR) contours were exported. Eight relevant clinical parameters were extracted and digitized, including age, number of lesions, performance status (ECOG), presence of symptoms, arrangement with surgery (pre- or post-surgery radiation therapy), re-treatment, primary cancer type, and metastasis to other sites. A 3D convolutional neural network (CNN) architecture was built using three encoding paths with the same kernel and filters to capture different image and contour features. Specifically, one path was built to capture tumor features, including the size and location of the tumor; another path was built to capture the relative spatial relationship between the tumor and OARs; and the third path was built to capture the clinical parameters. The model combines information from the three paths to predict the dose prescription. The actual prescription in the patient record was used as the ground truth for model training. The model performance was assessed by 19-fold cross-validation, with each fold consisting of 128 randomly selected training, 16 validation, and 8 testing subjects. Results The dose prescriptions of the 152 patient cases included 48 cases with 1 × 24 Gy, 48 cases with 1 × 20-22 Gy, 32 cases with 3 × 9 Gy, and 24 cases with 5 × 6 Gy prescribed by 8 physicians. The AI model prescribed correctly for 124 (82%) cases, including 44 (92%) cases with 1 × 24 Gy, 36 (75%) cases with 1 × 20-22 Gy, 25 (78%) cases with 3 × 9 Gy, and 19 (79%) cases with 5 × 6 Gy. 
Analysis of the failed cases pointed to practice variations across individual physicians, which were not accounted for in the model trained on the group data, as a potential cause. Including the clinical parameters improved the overall prediction accuracy by 20%. Conclusion To the best of our knowledge, this is the first study to demonstrate the feasibility of AI in predicting dose prescriptions in radiotherapy CDM. Such CDM models can serve as vital tools to address healthcare disparities by providing preliminary consultations to patients in underdeveloped areas, or as a valuable quality assurance (QA) tool for physicians to cross-check intra- and inter-institution practices.
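The 19-fold cross-validation described above (152 subjects, each fold using 128 training, 16 validation, and 8 testing subjects) can be sketched as follows; the exact randomization protocol is an assumption, since the abstract only states that subjects were randomly selected:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_folds = 152, 19
test_size = n_subjects // n_folds  # 8 testing subjects per fold

# Shuffle once so every subject appears in exactly one test fold.
order = rng.permutation(n_subjects)

folds = []
for k in range(n_folds):
    test = order[k * test_size:(k + 1) * test_size]
    # Remaining 144 subjects are re-shuffled, then split 16/128.
    rest = np.setdiff1d(order, test)
    rng.shuffle(rest)
    val, train = rest[:16], rest[16:]
    folds.append((train, val, test))
```

Each subject is tested exactly once across the 19 folds, so per-class accuracies (e.g. 44/48 for 1 × 24 Gy) can be tallied over the pooled test predictions.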
10
Jiang Z, Sun L, Yao W, Wu QJ, Xiang L, Ren L. 3D in vivo dose verification in prostate proton therapy with deep learning-based proton-acoustic imaging. Phys Med Biol 2022; 67. [PMID: 36206745 PMCID: PMC9647484 DOI: 10.1088/1361-6560/ac9881] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 10/07/2022] [Indexed: 02/10/2023]
Abstract
Dose delivery uncertainty is a major concern in proton therapy, adversely affecting the treatment precision and outcome. Recently, a promising technique, proton-acoustic (PA) imaging, has been developed to provide real-time in vivo 3D dose verification. However, its dosimetry accuracy is limited due to the limited-angle view of the ultrasound transducer. In this study, we developed a deep learning-based method to address the limited-view issue in the PA reconstruction. A deep cascaded convolutional neural network (DC-CNN) was proposed to reconstruct 3D high-quality radiation-induced pressures using PA signals detected by a matrix array, and then derive precise 3D dosimetry from pressures for dose verification in proton therapy. To validate its performance, we collected 81 prostate cancer patients' proton therapy treatment plans. Dose was calculated using the commercial software RayStation and was normalized to the maximum dose. The PA simulation was performed using the open-source k-wave package. A matrix ultrasound array with 64 × 64 sensors and 500 kHz central frequency was simulated near the perineum to acquire radiofrequency (RF) signals during dose delivery. For realistic acoustic simulations, tissue heterogeneity and attenuation were considered, and Gaussian white noise was added to the acquired RF signals. The proposed DC-CNN was trained on 204 samples from 69 patients and tested on 26 samples from 12 other patients. Predicted 3D pressures and dose maps were compared against the ground truth qualitatively and quantitatively using root-mean-squared-error (RMSE), gamma-index (GI), and dice coefficient of isodose lines. Results demonstrated that the proposed method considerably improved the limited-view PA image quality, reconstructing pressures with clear and accurate structures and deriving doses with a high agreement with the ground truth. 
Quantitatively, the pressure accuracy achieved an RMSE of 0.061, and the dose accuracy achieved an RMSE of 0.044, GI (3%/3 mm) of 93.71%, and 90%-isodose line dice of 0.922. The proposed method demonstrates the feasibility of achieving high-quality quantitative 3D dosimetry in PA imaging using a matrix array, which potentially enables the online 3D dose verification for prostate proton therapy.
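One of the reported dose metrics, the dice coefficient of isodose lines (e.g. the 90%-isodose dice of 0.922 above), compares the binary regions enclosed by a given isodose level in the predicted and ground-truth dose maps. A minimal sketch on max-normalized dose maps, with illustrative toy data rather than the study's:

```python
import numpy as np

def isodose_dice(dose_pred, dose_ref, level=0.9):
    """Dice coefficient of the regions receiving at least `level` of the
    reference maximum dose (e.g. level=0.9 for the 90%-isodose line)."""
    threshold = level * dose_ref.max()
    a = dose_pred >= threshold
    b = dose_ref >= threshold
    inter = np.logical_and(a, b).sum()
    # Dice = 2|A∩B| / (|A| + |B|)
    return 2.0 * inter / (a.sum() + b.sum())
```

Identical dose maps give a dice of 1.0; a map whose high-dose region is shifted halfway off the reference region gives 0.5.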
Affiliation(s)
- Zhuoran Jiang
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
- Leshan Sun
- Department of Biomedical Engineering, University of California, Irvine, California 92617, USA
- Weiguang Yao
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, 21201, USA
- Q. Jackie Wu
- Medical Physics Graduate Program, Duke University, Durham, NC, 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, 27710, USA
- Liangzhong Xiang
- Department of Biomedical Engineering, University of California, Irvine, California 92617, USA
- Department of Radiological Sciences, University of California, Irvine, CA 92697, USA
- Beckman Laser Institute & Medical Clinic, University of California, Irvine, Irvine, CA 92612, USA
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, 21201, USA
11
Shao HC, Wang J, Bai T, Chun J, Park JC, Jiang S, Zhang Y. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6b7b] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 04/28/2022] [Indexed: 11/12/2022]
Abstract
Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients’ anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localizing the tumor from the scarce projections. For liver radiotherapy, such a challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissues. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real-time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates its potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking and real-time plan adaptation. The method can be adapted to other anatomical sites as well.
12
Zhang Z, Jiang Z, Zhong H, Lu K, Yin FF, Ren L. Patient‐specific synthetic magnetic resonance imaging generation from cone beam computed tomography for image guidance in liver stereotactic body radiation therapy. PRECISION RADIATION ONCOLOGY 2022; 6:110-118. [PMID: 37064765 PMCID: PMC10103741 DOI: 10.1002/pro6.1163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Objective Despite its prevalence, cone beam computed tomography (CBCT) has poor soft-tissue contrast, making it challenging to localize liver tumors. We propose a patient-specific deep learning model to generate synthetic magnetic resonance imaging (MRI) from CBCT to improve tumor localization. Methods A key innovation is using patient-specific CBCT-MRI image pairs to train a deep learning model to generate synthetic MRI from CBCT. Specifically, patient planning CT was deformably registered to prior MRI, and then used to simulate CBCT with simulated projections and Feldkamp, Davis, and Kress reconstruction. These CBCT-MRI images were augmented using translations and rotations to generate enough patient-specific training data. A U-Net-based deep learning model was developed and trained to generate synthetic MRI from CBCT in the liver, and then tested on a different CBCT dataset. Synthetic MRIs were quantitatively evaluated against ground-truth MRI. Results The synthetic MRI demonstrated superb soft-tissue contrast with clear tumor visualization. On average, the synthetic MRI achieved 28.01, 0.025, and 0.929 for peak signal-to-noise ratio, mean square error, and structural similarity index, respectively, outperforming CBCT images. The model performance was consistent across all three patients tested. Conclusion Our study demonstrated the feasibility of a patient-specific model to generate synthetic MRI from CBCT for liver tumor localization, opening up a potential to democratize MRI guidance in clinics with conventional LINACs.
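The image-quality metrics quoted above (PSNR 28.01, MSE 0.025) follow their standard definitions for intensity-normalized images; a minimal sketch is below (SSIM, the third metric, is more involved and is typically taken from a library such as scikit-image):

```python
import numpy as np

def mse(img, ref):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((img - ref) ** 2))

def psnr(img, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB, for intensities in [0, data_range].
    Undefined (infinite) when the images are identical."""
    return float(10.0 * np.log10(data_range ** 2 / mse(img, ref)))
```

For example, a uniform intensity offset of 0.1 on a [0, 1]-scaled image gives MSE 0.01 and PSNR 20 dB.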
Affiliation(s)
- Zeyu Zhang
- Duke University Medical Center, Durham, North Carolina, USA
- Zhuoran Jiang
- Duke University Medical Center, Durham, North Carolina, USA
- Ke Lu
- Duke University Medical Center, Durham, North Carolina, USA
- Fang-Fang Yin
- Duke University Medical Center, Durham, North Carolina, USA
- Lei Ren
- University of Maryland School of Medicine, Baltimore, Maryland, USA
13
Zhang Z, Huang M, Jiang Z, Chang Y, Lu K, Yin FF, Tran P, Wu D, Beltran C, Ren L. Patient-specific deep learning model to enhance 4D-CBCT image for radiomics analysis. Phys Med Biol 2022; 67. [PMID: 35313293 PMCID: PMC9066277 DOI: 10.1088/1361-6560/ac5f6e] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 03/21/2022] [Indexed: 11/12/2022]
Abstract
Objective. 4D-CBCT provides phase-resolved images valuable for radiomics analysis for outcome prediction throughout treatment courses. However, 4D-CBCT suffers from streak artifacts caused by under-sampling, which severely degrade the accuracy of radiomic features. Previously, we developed group-patient-trained deep learning methods to enhance the 4D-CBCT quality for radiomics analysis, which were not optimized for individual patients. In this study, a patient-specific model was developed to further improve the accuracy of 4D-CBCT based radiomics analysis for individual patients. Approach. This patient-specific model was trained with intra-patient data. Specifically, the patient planning 4D-CT was augmented through image translation, rotation, and deformation to generate 305 CT volumes from 10 volumes to simulate possible patient positions during the onboard image acquisition. 72 projections were simulated from the 4D-CT for each phase and were used to reconstruct 4D-CBCT using the FDK back-projection algorithm. The patient-specific model was trained using these 305 paired sets of patient-specific 4D-CT and 4D-CBCT data to enhance the 4D-CBCT images to match the 4D-CT images as ground truth. For model testing, 4D-CBCTs were simulated from a separate set of 4D-CT scan images acquired from the same patient and were then enhanced by this patient-specific model. Radiomic features were then extracted from the testing 4D-CT, 4D-CBCT, and enhanced 4D-CBCT image sets for comparison. The patient-specific model was tested using 4 lung-SBRT patients’ data and compared with the performance of the group-based model. The impact of model dimensionality, region of interest (ROI) selection, and loss function on the model accuracy was also investigated. Main results. Compared with a group-based model, the patient-specific training model further improved the accuracy of radiomic features, especially for features with large errors in the group-based model. 
For example, the 3D whole-body and ROI loss-based patient-specific model reduces the errors of the first-order median feature by 83.67%, the wavelet LLL feature maximum by 91.98%, and the wavelet HLL skewness feature by 15.0% on average for the four patients tested. In addition, the patient-specific models with different dimensionality (2D versus 3D) or loss functions (L1 versus L1 + VGG + GAN) achieved comparable results for improving the radiomics accuracy. Using whole-body or whole-body+ROI L1 loss for the model achieved better results than using the ROI L1 loss alone as the loss function. Significance. This study demonstrated that the patient-specific model is more effective than the group-based model on improving the accuracy of the 4D-CBCT radiomic features analysis, which could potentially improve the precision for outcome prediction in radiotherapy.
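The augmentation step described in the Approach (translating, rotating, and deforming planning 4D-CT volumes to simulate possible onboard positions) can be sketched with SciPy for the rigid part; the ±5-voxel and ±3° ranges below are illustrative assumptions, and the deformation component is omitted:

```python
import numpy as np
from scipy import ndimage

def augment_volume(vol, rng):
    """One random rigid augmentation of a CT volume: a small translation
    followed by a small in-plane rotation.

    The ±5-voxel shift and ±3° angle ranges are illustrative, not the
    paper's settings.
    """
    shift_vox = rng.uniform(-5, 5, size=3)   # random 3D translation (voxels)
    angle_deg = rng.uniform(-3, 3)           # random in-plane rotation
    out = ndimage.shift(vol, shift_vox, order=1, mode="nearest")
    out = ndimage.rotate(out, angle_deg, axes=(1, 2), reshape=False,
                         order=1, mode="nearest")
    return out
```

Repeating such augmentations over the 10 planning phases would grow the intra-patient training set, in the spirit of the 10-to-305 expansion described above.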
14
Abbasi S, Tavakoli M, Boveiri HR, Mosleh Shirazi MA, Khayami R, Khorasani H, Javidan R, Mehdizadeh A. Medical image registration using unsupervised deep neural network: A scoping literature review. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103444] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
15
Jiang Z, Zhang Z, Chang Y, Ge Y, Yin FF, Ren L. Prior image-guided cone-beam computed tomography augmentation from under-sampled projections using a convolutional neural network. Quant Imaging Med Surg 2021; 11:4767-4780. [PMID: 34888188 DOI: 10.21037/qims-21-114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 07/09/2021] [Indexed: 11/06/2022]
Abstract
Background Acquiring sparse-view cone-beam computed tomography (CBCT) is an effective way to reduce the imaging dose. However, images reconstructed by the conventional filtered back-projection method suffer from severe streak artifacts due to the projection under-sampling. Existing deep learning models have demonstrated feasibility in restoring volumetric structures from highly under-sampled images. However, because of inter-patient variabilities, they fail to restore patient-specific details with the common restoring pattern learned from group data. Although patient-specific models have been developed by training on intra-patient data and have shown effectiveness in restoring patient-specific details, such models have to be retrained for each patient. It is highly desirable to develop a generalized model that can utilize patient-specific information for under-sampled image augmentation. Methods In this study, we proposed a merging-encoder convolutional neural network (MeCNN) to realize prior image-guided under-sampled CBCT augmentation. Instead of learning the patient-specific structures, the proposed model learns a generalized pattern of utilizing the patient-specific information in the prior images to facilitate the under-sampled image enhancement. Specifically, the MeCNN consists of a merging-encoder and a decoder. The merging-encoder extracts image features from both the prior CT images and the under-sampled CBCT images, and merges the features at multi-scale levels via deep convolutions. The merged features are then connected to the decoders via shortcuts to yield high-quality CBCT images. The proposed model was tested on both simulated CBCTs and clinical CBCTs. The predicted CBCT images were evaluated qualitatively and quantitatively in terms of image quality and tumor localization accuracy. The Mann-Whitney U test was conducted for the statistical analysis. 
P<0.05 was considered statistically significant. Results The proposed model yields CT-like high-quality CBCT images from only 36 half-fan projections. Compared to other methods, CBCT images augmented by the proposed model have significantly lower intensity errors, significantly higher peak signal-to-noise ratio, and significantly higher structural similarity with respect to the ground truth images. Moreover, the proposed method significantly reduced the 3D distance of the CBCT-based tumor localization errors. In addition, the CBCT augmentation is nearly real-time. Conclusions With the prior-image guidance, the proposed method is effective in reconstructing high-quality CBCT images from highly under-sampled projections, considerably reducing the imaging dose and improving the clinical utility of the CBCT.
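The statistical test named above is the standard two-sample Mann-Whitney U test; a minimal sketch with SciPy, using hypothetical per-case PSNR values rather than the study's results:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-case PSNR values (dB) for two reconstruction methods.
psnr_proposed = np.array([32.1, 33.4, 31.8, 34.0, 32.9, 33.7])
psnr_baseline = np.array([27.5, 28.2, 26.9, 28.8, 27.1, 28.4])

# Two-sided Mann-Whitney U: does either method tend to give higher PSNR?
stat, p = mannwhitneyu(psnr_proposed, psnr_baseline, alternative="two-sided")
significant = p < 0.05
```

Because the test is rank-based, it makes no normality assumption about the per-case metric distributions, which is why it is a common choice for small image-quality samples.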
Affiliation(s)
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Zeyu Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yushi Chang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA
16
Zhang Z, Huang M, Jiang Z, Chang Y, Torok J, Yin FF, Ren L. 4D radiomics: impact of 4D-CBCT image quality on radiomic analysis. Phys Med Biol 2021; 66:045023. [PMID: 33361574 DOI: 10.1088/1361-6560/abd668] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
PURPOSE To investigate the impact of 4D-CBCT image quality on radiomic analysis and the efficacy of using deep learning based image enhancement to improve the accuracy of radiomic features of 4D-CBCT. MATERIAL AND METHODS In this study, 4D-CT data from 16 lung cancer patients were obtained. Digitally reconstructed radiographs (DRRs) were simulated from the 4D-CT, and then used to reconstruct 4D CBCT using the conventional FDK (Feldkamp et al 1984 J. Opt. Soc. Am. A 1 612-9) algorithm. Different projection numbers (i.e. 72, 120, 144, 180) and projection angle distributions (i.e. evenly distributed and unevenly distributed using angles from real 4D-CBCT scans) were simulated to generate the corresponding 4D-CBCT. A deep learning model (TecoGAN) was trained on 10 patients and validated on 3 patients to enhance the 4D-CBCT image quality to match with the corresponding ground-truth 4D-CT. The remaining 3 patients with different tumor sizes were used for testing. The radiomic features in 6 different categories, including histogram, GLCM, GLRLM, GLSZM, NGTDM, and wavelet, were extracted from the gross tumor volumes of each phase of original 4D-CBCT, enhanced 4D-CBCT, and 4D-CT. The radiomic features in 4D-CT were used as the ground-truth to evaluate the errors of the radiomic features in the original 4D-CBCT and enhanced 4D-CBCT. Errors in the original 4D-CBCT demonstrated the impact of image quality on radiomic features. Comparison between errors in the original 4D-CBCT and enhanced 4D-CBCT demonstrated the efficacy of using deep learning to improve the radiomic feature accuracy. RESULTS 4D-CBCT image quality can substantially affect the accuracy of the radiomic features, and the degree of impact is feature-dependent. The deep learning model was able to enhance the anatomical details and edge information in the 4D-CBCT as well as removing other image artifacts. This enhancement of image quality resulted in reduced errors for most radiomic features. 
The average reductions of radiomic errors for the 3 test patients were 20.0%, 31.4%, 36.7%, 50.0%, 33.6%, and 11.3% for the histogram, GLCM, GLRLM, GLSZM, NGTDM, and wavelet features, respectively. The error reduction was more pronounced for patients with larger tumors. The findings were consistent across different respiratory phases, projection numbers, and angle distributions. CONCLUSIONS The study demonstrated that 4D-CBCT image quality has a significant impact on radiomic analysis. The deep learning-based augmentation technique proved to be an effective approach to enhance 4D-CBCT image quality and improve the accuracy of radiomic analysis.
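The per-feature errors discussed above are relative errors of radiomic features against the 4D-CT ground truth; a minimal sketch with hypothetical feature values:

```python
def feature_errors(feat_test, feat_gt):
    """Percent relative error of each radiomic feature against the
    ground-truth (e.g. 4D-CT) value. Feature names and values below
    are hypothetical placeholders, not the study's data."""
    return {name: abs(feat_test[name] - feat_gt[name]) / abs(feat_gt[name]) * 100.0
            for name in feat_gt}

# Hypothetical feature dictionaries for one tumor volume.
feat_ct = {"histogram_mean": 2.0, "glcm_entropy": 4.0}
feat_cbct = {"histogram_mean": 2.2, "glcm_entropy": 3.0}
errors = feature_errors(feat_cbct, feat_ct)
```

Comparing such error dictionaries before and after enhancement quantifies the per-category reductions the study reports.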
Affiliation(s)
- Zeyu Zhang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
- Mi Huang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, People's Republic of China
- Yushi Chang
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
- Jordan Torok
- Department of Radiation Oncology, University of Pittsburgh Medical Center, 5150 Centre Ave, Pittsburgh, PA 15232, United States of America
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, People's Republic of China
- Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC, 27710, United States of America
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, United States of America