1. Hu Y, Zhou H, Cao N, Li C, Hu C. Synthetic CT generation based on CBCT using improved vision transformer CycleGAN. Sci Rep 2024;14:11455. PMID: 38769329; PMCID: PMC11106312; DOI: 10.1038/s41598-024-61492-7.
Abstract
Cone-beam computed tomography (CBCT) is a crucial component of adaptive radiation therapy; however, it frequently suffers from artifacts and noise, which significantly constrain its clinical utility. While CycleGAN is a widely employed method for CT image synthesis, it has notable limitations in capturing global features. To tackle these challenges, we introduce a refined unsupervised learning model called improved vision transformer CycleGAN (IViT-CycleGAN). First, we integrate a U-net framework that builds upon ViT. Next, we augment the feed-forward neural network by incorporating deep convolutional networks. Finally, we enhance the stability of model training by introducing a gradient penalty and integrating an additional loss term into the generator loss. Experiments demonstrate from multiple perspectives that the synthetic CT (sCT) generated by our model has significant advantages over other unsupervised learning models, validating the clinical applicability and robustness of our approach. In future clinical practice, our model has the potential to assist practitioners in formulating precise radiotherapy plans.
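The abstract's stability additions (gradient penalty, extra generator loss term) are not specified in detail, but the CycleGAN backbone it builds on is standard. A minimal NumPy sketch of the L1 cycle-consistency term; the weight `lam=10.0` is a common CycleGAN default assumed here, not a value from the paper:

```python
import numpy as np

def cycle_consistency_loss(real, reconstructed, lam=10.0):
    """L1 cycle-consistency term: lam * ||F(G(x)) - x||_1.

    `real` is the input image x; `reconstructed` is the round trip
    F(G(x)) through both generators. lam=10 is an assumed default,
    not taken from the paper.
    """
    return lam * np.mean(np.abs(reconstructed - real))

x = np.zeros((2, 2))                    # toy "CBCT" patch
x_roundtrip = np.full((2, 2), 0.1)      # imperfect reconstruction
print(cycle_consistency_loss(x, x_roundtrip))  # 1.0
```

The loss is zero only when the round trip through both generators reproduces the input exactly, which is what pushes the mapping toward (approximate) bijectivity.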
Affiliation(s)
- Yuxin Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Han Zhou
- School of Electronic Science and Engineering, Nanjing University, Nanjing, 210046, China
- Department of Radiation Oncology, The Fourth Affiliated Hospital of Nanjing Medical University, Nanjing, 210013, China
- Ning Cao
- School of Computer and Software, Hohai University, Nanjing, 211100, China
- Can Li
- Engineering Research Center of TCM Intelligence Health Service, School of Artificial Intelligence and Information Technology, Nanjing University of Chinese Medicine, Nanjing, 210023, China
- Can Hu
- School of Computer and Software, Hohai University, Nanjing, 211100, China
2. Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024;69:11TR01. PMID: 38479023; DOI: 10.1088/1361-6560/ad33b5.
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially radiotherapy treatment planning. It is therefore of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and achieved remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but has promoted many annotation-efficient learning paradigms. Among these, transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have become the dominant ways to address this dilemma in multi-organ segmentation. We first review fully supervised methods, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
Affiliation(s)
- Shiman Li
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Yucong Meng
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
3. Liu Y, Chen A, Li Y, Lai H, Huang S, Yang X. CT synthesis from CBCT using a sequence-aware contrastive generative network. Comput Med Imaging Graph 2023;109:102300. PMID: 37776676; DOI: 10.1016/j.compmedimag.2023.102300.
Abstract
Computed tomography (CT) synthesis from cone-beam computed tomography (CBCT) is a key step in adaptive radiotherapy: the synthetic CT is used for dose calculation so that the radiotherapy plan can be corrected and adjusted in a timely manner. The cycle-consistent adversarial network (CycleGAN) is commonly used in CT synthesis tasks but has two defects: (a) the cycle-consistency loss presumes that the conversion between domains is bijective, yet the CBCT-to-CT conversion does not fully satisfy this relationship; and (b) it does not exploit the complementary information among multiple sets of CBCTs from the same patient. To address these problems, we propose a novel framework named the sequence-aware contrastive generative network (SCGN), which introduces an attention sequence fusion module to improve CBCT quality. In addition, it not only applies contrastive learning to the generative adversarial network (GAN) so that feature extraction attends to the anatomical structure of the CBCT, but also uses a new generator to improve the accuracy of anatomical details. Experimental results on our datasets show that our method significantly outperforms existing unsupervised CT synthesis methods.
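The abstract does not spell out the contrastive objective, so purely as an illustration, here is a generic InfoNCE-style patch loss of the kind commonly used in contrastive GAN frameworks. Cosine similarity and the temperature `tau=0.1` are assumptions, not details from the paper:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.1):
    """Generic InfoNCE loss for one query patch embedding.

    The query is pulled toward its positive (the corresponding patch)
    and pushed away from negatives (other patches). Cosine similarity
    and tau=0.1 are illustrative choices, not taken from the paper.
    """
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    logits = np.array([cos(query, positive)] + [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

q = np.array([1.0, 0.0])
loss = info_nce(q, positive=np.array([1.0, 0.0]), negatives=[np.array([0.0, 1.0])])
print(loss)  # close to 0: the positive is a perfect match
```

Minimizing this over patch pairs encourages the generator's features at a location to match the input's features at the same location, which is one way a contrastive term can preserve anatomy.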
Affiliation(s)
- Yanxia Liu
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Anni Chen
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Yuhong Li
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Haoyu Lai
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, China
- Sijuan Huang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
- Xin Yang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Esophageal Cancer Institute, Guangzhou, Guangdong 510060, China
4. Deng L, Zhang Y, Wang J, Huang S, Yang X. Improving performance of medical image alignment through super-resolution. Biomed Eng Lett 2023;13:397-406. PMID: 37519883; PMCID: PMC10382383; DOI: 10.1007/s13534-023-00268-w.
Abstract
Medical image alignment is an important tool for tracking patient conditions, but alignment quality is influenced by the effectiveness of low-dose cone-beam CT (CBCT) imaging and by patient characteristics. To address these two issues, we propose an unsupervised alignment method that incorporates a super-resolution preprocessing step. We constructed the model on a private clinical dataset and validated the enhancement that super-resolution brings to alignment using clinical and public data. Across all three experiments, we demonstrate that higher-resolution data yield better results in the alignment process. To fully constrain similarity and structure, a new loss function is proposed: the Pearson correlation coefficient combined with regional mutual information. In all test samples, the proposed loss function achieves higher scores than the common loss function and improves alignment accuracy. Subsequent experiments verified that, combined with the proposed loss function, super-resolution-processed data boost alignment accuracy by up to 9.58%. Moreover, this boost is not limited to a single model but is effective across different alignment models. These experiments demonstrate that the proposed unsupervised alignment method with super-resolution preprocessing effectively improves alignment and plays an important role in tracking patient conditions over time.
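The loss above combines a Pearson correlation coefficient term with regional mutual information; only the Pearson part is simple enough to sketch here. A NumPy version of a `1 - r` similarity term (the exact weighting against the regional-MI term is not reproduced):

```python
import numpy as np

def pearson_loss(fixed, moved):
    """1 - Pearson correlation between a fixed image and a moved (warped) image.

    r = 1 for perfectly linearly related intensities, so the loss
    approaches 0 when alignment brings intensities into linear agreement.
    """
    a = fixed.ravel() - fixed.mean()
    b = moved.ravel() - moved.mean()
    r = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - r

fixed = np.array([[1.0, 2.0], [3.0, 4.0]])
moved = 2.0 * fixed + 3.0                  # pure linear intensity shift
print(pearson_loss(fixed, moved))          # ~0.0: perfectly correlated
```

Because Pearson correlation is invariant to linear intensity scaling, such a term tolerates the global brightness differences between CBCT and CT that a plain L2 loss would penalize.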
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Yuanzhi Zhang
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Jing Wang
- Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou, 510520, Guangdong, China
- Sijuan Huang
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, Guangdong, China
- Xin Yang
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, Guangdong, China
5. Kang SR, Shin W, Yang S, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Structure-preserving quality improvement of cone beam CT images using contrastive learning. Comput Biol Med 2023;158:106803. PMID: 36989743; DOI: 10.1016/j.compbiomed.2023.106803.
Abstract
Cone-beam CT (CBCT) is widely used in dental clinics but is limited in assessing soft tissue pathology because of its poor contrast resolution and low Hounsfield unit (HU) quantification accuracy. We aimed to increase the image quality and HU accuracy of CBCT while preserving anatomical structures. We generated CT-like images from CBCT images using a patchwise contrastive learning-based GAN model, trained on unpaired CT and CBCT datasets with a novel combination of losses and a feature extractor pretrained on our training dataset. We evaluated the quality of the generated images in terms of Fréchet inception distance (FID), peak signal-to-noise ratio (PSNR), mean absolute error (MAE), and root mean square error (RMSE); structure preservation was assessed by a structure score. The CT-like images generated by our model were significantly superior to those generated by various baseline models in terms of FID, PSNR, MAE, RMSE, and structure score. Our model thus provides the complementary benefits of preserving the anatomical structures of the input CBCT images and improving image quality toward that of CT images.
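FID and the structure score require trained networks, but the pixelwise metrics named in the abstract are straightforward. A NumPy sketch of MAE, RMSE, and PSNR; the `data_range` for PSNR is an assumption (a rough CT HU span), since the paper's exact normalization is not given:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range=4096.0):
    """Peak signal-to-noise ratio in dB; data_range=4096 approximates a CT HU span (assumption)."""
    return float(10.0 * np.log10(data_range ** 2 / np.mean((a - b) ** 2)))

ref = np.zeros((4, 4))
test = np.full((4, 4), 2.0)
print(mae(ref, test), rmse(ref, test))  # 2.0 2.0
```

Lower MAE/RMSE and higher PSNR all indicate the generated image is closer to the reference; PSNR in particular is what the later entries report in dB.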
6. Scholey J, Vinas L, Kearney V, Yom S, Larson PEZ, Descovich M, Sudhyadhom A. Improved accuracy of relative electron density and proton stopping power ratio through CycleGAN machine learning. Phys Med Biol 2022;67. PMID: 35417903; PMCID: PMC9121765; DOI: 10.1088/1361-6560/ac6725.
Abstract
Objective. Kilovoltage computed tomography (kVCT) is the cornerstone of radiotherapy treatment planning for delineating tissues and for dose calculation. For the former, kVCT provides excellent contrast and signal-to-noise ratio. For the latter, kVCT may have greater uncertainty in determining relative electron density (ρe) and proton stopping power ratio (SPR). Conversely, megavoltage CT (MVCT) may result in superior dose calculation accuracy. The purpose of this work was to convert kVCT HU to MVCT HU using deep learning to obtain higher-accuracy ρe and SPR. Approach. Tissue-mimicking phantoms were created to compare kVCT- and MVCT-determined ρe and SPR to physical measurements. Using 100 head-and-neck datasets, an unpaired deep learning model was trained to learn the relationship between kVCTs and MVCTs, creating synthetic MVCTs (sMVCTs). Similarity metrics were calculated between kVCTs, sMVCTs, and MVCTs in 20 test datasets. An anthropomorphic head phantom containing bone-mimicking material with known composition was scanned to provide an independent determination of ρe and SPR accuracy by sMVCT. Main results. In tissue-mimicking bone, ρe errors were 2.20% versus 0.19% and SPR errors were 4.38% versus 0.22%, for kVCT versus MVCT, respectively. Compared to MVCT, in vivo mean difference (MD) values were 11 and 327 HU for kVCT and 2 and 3 HU for sMVCT in soft tissue and bone, respectively. ρe MD decreased from 1.3% to 0.35% in soft tissue and 2.9% to 0.13% in bone, for kVCT and sMVCT, respectively. SPR MD decreased from 1.8% to 0.24% in soft tissue and 6.8% to 0.16% in bone, for kVCT and sMVCT, respectively. Relative to physical measurements, ρe and SPR error in anthropomorphic bone decreased from 7.50% and 7.48% for kVCT to <1% for both MVCT and sMVCT. Significance. Deep learning can be used to map kVCT to sMVCT, suggesting higher-accuracy ρe and SPR is achievable with sMVCT versus kVCT.
7. Deng L, Hu J, Wang J, Huang S, Yang X. Synthetic CT generation based on CBCT using respath-cycleGAN. Med Phys 2022;49:5317-5329. PMID: 35488299; DOI: 10.1002/mp.15684.
Abstract
PURPOSE Cone-beam computed tomography (CBCT) plays an important role in radiotherapy, but the presence of numerous artifacts limits its application. The purpose of this study was to use respath-cycleGAN to synthesize CT (sCT) images similar to planning CT (pCT) from CBCT for future clinical practice. METHODS The method integrates the respath concept into the original cycleGAN, called respath-cycleGAN, to map CBCT to pCT. Thirty patients were used for training and 15 for testing. RESULTS The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial non-uniformity (SNU) were calculated to assess the quality of sCT generated from CBCT. Compared with CBCT images, MAE improved from 197.72 to 140.7, RMSE from 339.17 to 266.51, and PSNR from 22.07 to 24.44, while SSIM increased from 0.948 to 0.964. Both visually and quantitatively, sCT with respath was superior to sCT without respath. We also performed a generalization test of the head-and-neck (H&N) model on a pelvic dataset, where our model was again superior. CONCLUSION We developed a respath-cycleGAN method to synthesize CT of good quality from CBCT. In future clinical practice, this method may be used to develop radiotherapy plans.
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
- Jie Hu
- School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
- Jing Wang
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, Guangdong, 510520, China
- Sijuan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
- Xin Yang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
8. Yang B, Chang Y, Liang Y, Wang Z, Pei X, Xu X, Qiu J. A Comparison Study Between CNN-Based Deformed Planning CT and CycleGAN-Based Synthetic CT Methods for Improving iCBCT Image Quality. Front Oncol 2022;12:896795. PMID: 35707352; PMCID: PMC9189355; DOI: 10.3389/fonc.2022.896795.
Abstract
Purpose The aim of this study was to compare two methods for improving the image quality of the Varian Halcyon cone-beam CT (iCBCT) system: deformed planning CT (dpCT) based on a convolutional neural network (CNN), and synthetic CT (sCT) generation based on the cycle-consistent generative adversarial network (CycleGAN). Methods A total of 190 paired pelvic CT and iCBCT image datasets were included, of which 150 were used for model training and the remaining 40 for testing. For the registration network, we proposed a 3D multi-stage registration network (MSnet) to deform planning CT images to agree with iCBCT images, and the contours from CT images were propagated to the corresponding iCBCT images through a deformation matrix. The overlap between the deformed contours (dpCT) and the fixed contours (iCBCT) was calculated to evaluate registration accuracy. For sCT generation, we trained the 2D CycleGAN using the deformation-registered CT-iCBCT slices and generated sCT from the corresponding iCBCT image data. Physicians then re-delineated contours on the sCT images, which were compared with contours manually delineated on the iCBCT images. The organs compared were the bladder, spinal cord, left femoral head, right femoral head, and bone marrow. The dice similarity coefficient (DSC) was used to evaluate the accuracy of both registration and sCT generation. Results The DSC values of registration and sCT generation were 0.769 and 0.884 for the bladder (p < 0.05), 0.765 and 0.850 for the spinal cord (p < 0.05), 0.918 and 0.923 for the left femoral head (p > 0.05), 0.916 and 0.921 for the right femoral head (p > 0.05), and 0.878 and 0.916 for the bone marrow (p < 0.05), respectively. When the bladder volume differed by more than a factor of two between the planning CT and iCBCT scans, the accuracy of sCT generation was significantly better than that of registration (bladder DSC: 0.859 vs. 0.596, p < 0.05). Conclusion Both registration and sCT generation could improve iCBCT image quality effectively, and sCT generation achieved higher accuracy when the difference between planning CT and iCBCT was large.
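The DSC used above to compare contours can be sketched in a few lines; this is the standard definition, not code from the study:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                      # convention: two empty masks match
    return float(2.0 * np.logical_and(a, b).sum() / denom)

auto = np.array([[1, 1, 0, 0]])        # e.g. a propagated contour
manual = np.array([[1, 0, 0, 0]])      # e.g. a manual delineation
print(dice(auto, manual))              # 2*1/(2+1) ≈ 0.667
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why the sCT values near 0.92 above indicate close agreement with the manual contours.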
Affiliation(s)
- Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Zhiqun Wang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Correspondence: Jie Qiu
9. Liu J, Yan H, Cheng H, Liu J, Sun P, Wang B, Mao R, Du C, Luo S. CBCT-based synthetic CT generation using generative adversarial networks with disentangled representation. Quant Imaging Med Surg 2021;11:4820-4834. PMID: 34888192; DOI: 10.21037/qims-20-1056.
Abstract
Background Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality limits its clinical application. In this study, we developed a deep learning-based approach to translate CBCT images to synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures. Methods A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation, which was employed to extract the anatomical information shared by the CBCT and CT image domains. On-board CBCT and planning CT images of 40 patients were used for network learning, and those of another 12 patients were used for testing. Accuracy was quantitatively evaluated using the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE), and effectiveness was compared against three state-of-the-art CycleGAN-based methods. Results The PSNR, SSIM, MAE, and RMSE between sCT generated by sCTGAN and deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53 ± 14.38 HU) of sCT generated by sCTGAN was lower than that of all three comparison methods (72.40 ± 16.03 HU for CycleGAN, 71.60 ± 15.09 HU for CycleGAN-Unet512, 64.93 ± 14.33 HU for CycleGAN-AG). Conclusions The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of all three comparison CycleGAN-based methods. It provides an effective way to generate high-quality sCT, with wide application in IGRT and adaptive radiotherapy.
Affiliation(s)
- Jiwei Liu
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Hui Yan
- Department of Radiation Oncology, National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hanlin Cheng
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Jianfei Liu
- School of Electrical Engineering and Automation, Anhui University, Hefei, China
- Pengjian Sun
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Boyi Wang
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Ronghu Mao
- Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, China
- Chi Du
- Cancer Center, The Second People's Hospital of Neijiang, Neijiang, China
- Shengquan Luo
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
10. Qiu RLJ, Lei Y, Shelton J, Higgins K, Bradley JD, Curran WJ, Liu T, Kesarwala AH, Yang X. Deep learning-based thoracic CBCT correction with histogram matching. Biomed Phys Eng Express 2021;7. PMID: 34654011; DOI: 10.1088/2057-1976/ac3055.
Abstract
Kilovoltage cone-beam computed tomography (CBCT)-based image-guided radiation therapy (IGRT) is used for daily delivery of radiation therapy, especially for stereotactic body radiation therapy (SBRT), which imposes particularly high demands on setup accuracy. The clinical applications of CBCT are constrained, however, by poor soft tissue contrast, image artifacts, and instability of Hounsfield unit (HU) values. Here, we propose a new deep learning-based method to generate synthetic CTs (sCT) from thoracic CBCTs. A deep learning model that integrates histogram matching (HM) into a cycle-consistent adversarial network (Cycle-GAN) framework, called HM-Cycle-GAN, was trained to learn the mapping between thoracic CBCTs and paired planning CTs. Perceptual supervision was adopted to minimize blurring of tissue interfaces, and an information-maximizing loss was calculated by feeding CBCT into the HM-Cycle-GAN to evaluate image histogram matching between the planning CTs and the sCTs. The proposed algorithm was evaluated using data from 20 SBRT patients who each received 5 fractions and therefore 5 thoracic CBCTs. To reduce the effect of anatomy mismatch, original CBCT images were pre-processed via deformable image registration with the planning CT before being used in model training and result assessment, and planning CTs served as ground truth for the sCTs derived from the corresponding co-registered CBCTs. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) indices were adopted as evaluation metrics, with assessments done using Cycle-GAN as the benchmark. The average MAE, PSNR, and NCC of the sCTs generated by our method were 66.2 HU, 30.3 dB, and 0.95, respectively, over all CBCT fractions, with superior image quality and reduced noise and artifact severity compared to the standard Cycle-GAN. Our method could therefore improve the accuracy of IGRT, and the corrected CBCTs could help improve online adaptive RT by offering better contouring accuracy and dose calculation.
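The histogram-matching component integrated into the HM-Cycle-GAN is not published as code in the abstract; a classic CDF-based intensity match can be sketched as follows (this is the textbook algorithm, not the paper's implementation):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their empirical CDF matches the reference's."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

src = np.array([0.0, 1.0, 2.0, 3.0])            # toy "CBCT" intensities
ref = np.array([10.0, 20.0, 30.0, 40.0])        # toy "planning CT" intensities
print(match_histogram(src, ref))                # [10. 20. 30. 40.]
```

Matching the CBCT intensity distribution to the planning CT's is one way to stabilize HU values before or during adversarial training, which is the role HM plays in the framework described above.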
Affiliation(s)
- Richard L J Qiu
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Yang Lei
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Joseph Shelton
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Kristin Higgins
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Jeffrey D Bradley
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Walter J Curran
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Tian Liu
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Aparna H Kesarwala
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA, United States of America
11. Momin S, Fu Y, Lei Y, Roper J, Bradley JD, Curran WJ, Liu T, Yang X. Knowledge-based radiation treatment planning: A data-driven method survey. J Appl Clin Med Phys 2021;22:16-44. PMID: 34231970; PMCID: PMC8364264; DOI: 10.1002/acm2.13337.
Abstract
This paper surveys the data-driven dose prediction methods investigated for knowledge-based planning (KBP) in the last decade. These methods are classified into two major categories, traditional KBP methods and deep learning (DL) methods, according to how they utilize prior knowledge. Traditional KBP methods include studies that require geometric or anatomical features either to find the best-matched case(s) from a repository of prior treatment plans or to build dose prediction models. DL methods include studies that train neural networks to make dose predictions. A comprehensive review of each category is presented, highlighting key features, methods, and their advancements over the years, with the cited works separated by framework and cancer site within each category. Finally, we briefly discuss the performance of both traditional KBP and DL methods and the future trends of data-driven KBP methods for dose prediction.
Affiliation(s)
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jeffrey D. Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
12. Lin M, Wynne JF, Zhou B, Wang T, Lei Y, Curran WJ, Liu T, Yang X. Artificial intelligence in tumor subregion analysis based on medical imaging: A review. J Appl Clin Med Phys 2021; 22:10-26. [PMID: 34164913 PMCID: PMC8292694 DOI: 10.1002/acm2.13321]
Abstract
Medical imaging is widely used in the diagnosis and treatment of cancer, and artificial intelligence (AI) has achieved tremendous success in medical image analysis. This paper reviews AI-based tumor subregion analysis in medical imaging. We summarize the latest AI-based methods for tumor subregion analysis and their applications. Specifically, we categorize the AI-based methods by training strategy: supervised and unsupervised. A detailed review of each category is presented, highlighting important contributions and achievements. Specific challenges and potential applications of AI in tumor subregion analysis are discussed.
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
13. Zhao J, Chen Z, Wang J, Xia F, Peng J, Hu Y, Hu W, Zhang Z. MV CBCT-Based Synthetic CT Generation Using a Deep Learning Method for Rectal Cancer Adaptive Radiotherapy. Front Oncol 2021; 11:655325. [PMID: 34136391 PMCID: PMC8201514 DOI: 10.3389/fonc.2021.655325]
Abstract
Due to image quality limitations, online megavoltage cone-beam CT (MV CBCT), which represents the real online patient anatomy, cannot be used to perform adaptive radiotherapy (ART). In this study, we used a deep learning method, the cycle-consistent adversarial network (CycleGAN), to improve MV CBCT image quality and Hounsfield unit (HU) accuracy for rectal cancer patients, making the generated synthetic CT (sCT) eligible for ART. Forty rectal cancer patients treated with intensity-modulated radiotherapy (IMRT) were involved in this study. The CT and MV CBCT images of 30 patients were used for model training, and the images of the remaining 10 patients were used for evaluation. The image quality, autosegmentation capability, and dose calculation capability (using an autoplanning technique) of the generated sCT were evaluated. The mean absolute error (MAE) was reduced from 135.84 ± 41.59 HU for the CT-CBCT comparison to 52.99 ± 12.09 HU for the CT-sCT comparison. The structural similarity (SSIM) index for the CT-sCT comparison was 0.81 ± 0.03, a great improvement over the 0.44 ± 0.07 for the CT-CBCT comparison. The autosegmentation performance on sCT for the femoral heads was accurate and required almost no manual modification. For the CTV and bladder, although modification was needed for autocontouring, the Dice similarity coefficient (DSC) indices were high, at 0.93 and 0.94, respectively. For dose evaluation, the sCT-based plan had a much smaller dose deviation from the CT-based plan than the CBCT-based plan did. The proposed method solves a key problem for realizing MV CBCT-based rectal cancer ART: the generated sCT enables ART based on the actual patient anatomy at the treatment position.
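The two image-similarity metrics quoted in this abstract (MAE in HU between image pairs, and the Dice similarity coefficient between contour masks) can be sketched minimally in pure Python; the toy arrays and function names below are illustrative only, not from the paper.

```python
# Toy sketch of two evaluation metrics used above: mean absolute
# error (MAE, in HU) between flattened image pairs, and the Dice
# similarity coefficient (DSC) between binary contour masks.

def mean_absolute_error(img_a, img_b):
    """MAE over flattened voxel lists (e.g. HU values)."""
    assert len(img_a) == len(img_b)
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A∩B| / (|A| + |B|) over binary (0/1) masks."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

ct  = [10, 20, 30, 40]   # toy "ground-truth CT" voxels
sct = [12, 18, 33, 41]   # toy "synthetic CT" voxels
print(mean_absolute_error(ct, sct))   # -> 2.0

gt   = [1, 1, 1, 0, 0]   # toy manual contour mask
auto = [1, 1, 0, 1, 0]   # toy auto-segmented mask
print(dice_coefficient(gt, auto))     # -> 0.666...
```

In practice these would run over full 3D voxel arrays rather than short lists, but the definitions are unchanged.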
Affiliation(s)
- Jun Zhao
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Zhi Chen
- Department of Medical Physics, Shanghai Proton and Heavy Ion Center, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Fan Xia
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Jiayuan Peng
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yiwen Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
14. Edmund JM, Andreasen D, Van Leemput K. Cone beam computed tomography based image guidance and quality assessment of prostate cancer for magnetic resonance imaging-only radiotherapy in the pelvis. Phys Imaging Radiat Oncol 2021; 18:55-60. [PMID: 34258409 PMCID: PMC8254192 DOI: 10.1016/j.phro.2021.05.001]
Abstract
- MRI-only IGRT accuracy is ≤2 mm compared with CT, but significant differences were observed.
- MRI-only CBCT-based IGRT seems feasible, but caution is advised.
- The median absolute error (MeAE) is proposed for independent verification of sCT quality.
- A MeAE around 0.1 in mass density could call for sCT quality inspection.
Background and purpose: Radiotherapy (RT) based on magnetic resonance imaging (MRI) only is currently used clinically in the pelvis. A synthetic computed tomography (sCT) is needed for dose planning. Here, we investigate the accuracy of cone-beam CT (CBCT) based MRI-only image-guided RT (IGRT) and the sCT image quality. Materials and methods: CT, MRI and CBCT scans of ten prostate cancer patients were included. The MRI was converted to an sCT using a multi-atlas approach. The sCT, CT and MR images were auto-matched with the CBCT on the bony anatomy, and paired sCT-CT and sCT-CBCT data were created. CT numbers were converted to relative electron densities (RED) and mass densities (DES) using a standard calibration curve for the CT and sCT. For the CBCT RED/DES conversion, a phantom-based and a paired CT-CBCT population-based calibration curve were used. For the latter, the CBCT numbers were averaged in 100 HU bins and the known RED/DES of the CT were assigned. The paired sCT-CT and sCT-CBCT data were averaged in bins of 10 HU or 0.01 RED/DES, and the median absolute error (MeAE) between the sCT-CT and sCT-CBCT bins was calculated. Wilcoxon rank-sum tests were carried out for the IGRT and MeAE studies. Results: The mean sCT or MR IGRT difference from CT was ≤2 mm, but significant differences were observed. A CBCT HU or phantom-based RED/DES MeAE did not estimate the sCT quality similarly to a CT-based MeAE, but the CBCT population-based RED/DES MeAE did. Conclusions: MRI-only CBCT-based IGRT seems feasible, but caution is advised. A MeAE around 0.1 DES could call for sCT quality inspection.
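The binned median-absolute-error idea described in this abstract (group paired samples into fixed-width bins, average within each bin, then take the median of the absolute per-bin differences) can be loosely sketched as follows. The bin width, toy data, and function name are assumptions for illustration, not the authors' implementation.

```python
# Loose sketch of a binned median-absolute-error (MeAE): pair up
# samples from two images, group them by the reference value's bin,
# average the per-pair differences within each bin, and take the
# median of the absolute per-bin mean differences.
from statistics import median

def binned_meae(ref, test, bin_width=100):
    """MeAE over per-bin mean (test - ref) differences."""
    bins = {}
    for r, t in zip(ref, test):
        bins.setdefault(int(r // bin_width), []).append(t - r)
    per_bin = [abs(sum(d) / len(d)) for d in bins.values()]
    return median(per_bin)

# Toy example: two bins; errors cancel in the second bin.
print(binned_meae([50, 60, 150, 160], [55, 65, 140, 170]))  # -> 2.5
```

Note that averaging within a bin before taking the absolute value lets opposite-signed errors cancel, which is one reason a binned MeAE behaves differently from a voxel-wise MAE.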
Affiliation(s)
- Jens M Edmund
- Radiotherapy Research Unit, Department of Oncology, Gentofte and Herlev Hospital, University of Copenhagen, 2730 Herlev, Denmark; Niels Bohr Institute, University of Copenhagen, 2100 Copenhagen, Denmark
- Daniel Andreasen
- Department of Health Technology, Technical University of Denmark, 2800 Lyngby, Denmark
- Koen Van Leemput
- Department of Health Technology, Technical University of Denmark, 2800 Lyngby, Denmark; Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
15. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, and it is necessary to summarize the current state of development of deep learning in medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
16. Sun H, Fan R, Li C, Lu Z, Xie K, Ni X, Yang J. Imaging Study of Pseudo-CT Synthesized From Cone-Beam CT Based on 3D CycleGAN in Radiotherapy. Front Oncol 2021; 11:603844. [PMID: 33777746 PMCID: PMC7994515 DOI: 10.3389/fonc.2021.603844]
Abstract
Purpose: To propose a synthesis method for pseudo-CT (CTCycleGAN) images based on an improved 3D cycle generative adversarial network (CycleGAN) to overcome the limitations of cone-beam CT (CBCT), which cannot be directly applied to the correction of radiotherapy plans. Methods: An improved U-Net with residual connections and attention gates was used as the generator, and the discriminator was a fully convolutional network (FCN). The imaging quality of the pseudo-CT images was improved by adding a 3D gradient loss function. Fivefold cross-validation was performed to validate our model. Each generated pseudo-CT was compared against the real CT image (ground-truth CT, CTgt) of the same patient based on the mean absolute error (MAE) and structural similarity index (SSIM). The Dice similarity coefficient (DSC) was used to evaluate the segmentation results of pseudo-CT and real CT. 3D CycleGAN performance was compared with 2D CycleGAN based on normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) metrics between the pseudo-CT and CTgt images. The dosimetric accuracy of the pseudo-CT images was evaluated by gamma analysis. Results: The MAE values between CTCycleGAN and the real CT in fivefold cross-validation are 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, and the SSIM values are 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of bladder, cervix, rectum, and bone between CTCycleGAN and real CT images are 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with 2D CycleGAN, the 3D CycleGAN-based pseudo-CT image is closer to the real image, with an NMI value of 0.90 ± 0.01 and a PSNR value of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt is 97.0% (2%/2 mm).
Conclusion: The pseudo-CT images obtained with the improved 3D CycleGAN have more accurate electron density and anatomical structure.
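Of the metrics this abstract reports, PSNR has the simplest closed form, PSNR = 10·log10(peak² / MSE). A minimal sketch on toy data follows; the peak value and sample pixels are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the peak signal-to-noise ratio (PSNR) used above
# to compare a pseudo-CT against the ground-truth CT.
import math

def psnr(img_a, img_b, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE) over flattened pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return 10.0 * math.log10(peak ** 2 / mse)

a = [100, 120, 130]   # toy reference pixels
b = [101, 119, 131]   # toy synthesized pixels (MSE = 1.0)
print(round(psnr(a, b), 2))  # -> 48.13
```

Higher PSNR means the synthesized image is numerically closer to the reference; for CT-like data the peak would be chosen to match the intensity range actually used.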
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Chunying Li
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics With Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics With Changzhou, Changzhou, China
- Zhengda Lu
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics With Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics With Changzhou, Changzhou, China
- Kai Xie
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics With Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics With Changzhou, Changzhou, China
- Xinye Ni
- Department of Radiotherapy, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou, China; The Center of Medical Physics With Nanjing Medical University, Changzhou, China; The Key Laboratory of Medical Physics With Changzhou, Changzhou, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, China
17.
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
18. Liu Y, Lei Y, Fu Y, Wang T, Tang X, Jiang X, Curran WJ, Liu T, Patel P, Yang X. CT-based multi-organ segmentation using a 3D self-attention U-net network for pancreatic radiotherapy. Med Phys 2020; 47:4316-4324. [PMID: 32654153 DOI: 10.1002/mp.14386]
Abstract
PURPOSE: Segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process because manual contouring is labor-intensive and time-consuming. This work aimed to develop a deep learning-based method for rapid and accurate pancreatic multi-organ segmentation that can expedite the treatment planning process. METHODS: We retrospectively investigated one hundred patients with computed tomography (CT) simulation scans and delineated contours. Eight OARs, including the large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, were the target organs to be segmented. The proposed three-dimensional (3D) deep attention U-Net features a deep attention strategy to effectively differentiate multiple organs. Performance of the proposed method was evaluated using six metrics: Dice similarity coefficient (DSC), sensitivity, specificity, 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD) and residual mean square distance (RMSD). RESULTS: The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC, sensitivity, specificity, HD95, MSD and RMSD. For DSC, mean values of 0.91 ± 0.03, 0.89 ± 0.06, 0.86 ± 0.06, 0.95 ± 0.02, 0.95 ± 0.02, 0.96 ± 0.01, 0.87 ± 0.05 and 0.93 ± 0.03 were achieved for the large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord and stomach, respectively. CONCLUSIONS: The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs, and could potentially be used in pancreatic adaptive radiotherapy to increase dose delivery accuracy and minimize gastrointestinal toxicity.
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaojun Jiang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
19. Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777 PMCID: PMC7484241 DOI: 10.1016/j.ejmp.2020.07.028]
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
20. Liu Y, Lei Y, Wang T, Fu Y, Tang X, Curran WJ, Liu T, Patel P, Yang X. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy. Med Phys 2020; 47:2472-2483. [PMID: 32141618 DOI: 10.1002/mp.14121]
Abstract
PURPOSE: The current clinical application of cone-beam CT (CBCT) is limited to patient setup. Imaging artifacts and Hounsfield unit (HU) inaccuracy make CBCT-based adaptive planning presently impractical. In this study, we developed a deep learning-based approach to improve CBCT image quality and HU accuracy for potential extended clinical use in CBCT-guided pancreatic adaptive radiotherapy. METHODS: Thirty patients previously treated with pancreas SBRT were included. The CBCT acquired prior to the first fraction of treatment was registered to the planning CT for training and generation of the synthetic CT (sCT). A self-attention cycle generative adversarial network (cycleGAN) was used to generate the CBCT-based sCT. For the cohort of 30 patients, the CT-based contours and treatment plans were transferred to the first-fraction CBCTs and sCTs for dosimetric comparison. RESULTS: In the abdomen, the mean absolute error (MAE) between CT and sCT was 56.89 ± 13.84 HU, compared with 81.06 ± 15.86 HU between CT and the raw CBCT. No significant differences (P > 0.05) were observed in the PTV and OAR dose-volume histogram (DVH) metrics between the CT- and sCT-based plans, while significant differences (P < 0.05) were found between the CT- and CBCT-based plans. CONCLUSIONS: The image similarity and dosimetric agreement between the CT- and sCT-based plans validated the dose calculation accuracy of the sCT. The CBCT-based sCT approach can potentially increase treatment precision and thus minimize gastrointestinal toxicity.
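The DVH-metric comparison described in this abstract reduces, for a single metric, to computing quantities like V_x (the fraction of a structure's voxels receiving at least x Gy) under each plan and comparing them. A loose sketch follows; the dose values, threshold, and function name are illustrative assumptions, not data from the study.

```python
# Loose sketch of comparing one dose-volume-histogram (DVH) metric
# between two plans: V_x, the fraction of structure voxels receiving
# at least `threshold` Gy, computed from a flat list of voxel doses.

def v_dose(doses, threshold):
    """Fraction of structure voxels receiving >= `threshold` Gy."""
    return sum(1 for d in doses if d >= threshold) / len(doses)

ptv_ct  = [50.1, 49.8, 50.5, 48.9]   # toy doses from a CT-based plan
ptv_sct = [50.0, 49.9, 50.4, 49.1]   # toy doses from an sCT-based plan
v48_ct  = v_dose(ptv_ct, 48.0)
v48_sct = v_dose(ptv_sct, 48.0)
print(abs(v48_ct - v48_sct))  # -> 0.0 for these toy values
```

A study like this one would compute many such metrics per structure across all patients and then test the paired differences for significance, as the abstract's P-values indicate.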
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
21. Lei Y, Wang T, Tian S, Dong X, Jani AB, Schuster D, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI. Phys Med Biol 2020; 65:035013. [PMID: 31851956 DOI: 10.1088/1361-6560/ab63bb]
Abstract
To develop an automated cone-beam computed tomography (CBCT) multi-organ segmentation method for a potential CBCT-guided adaptive radiation therapy workflow. The proposed method combines a deep learning-based image synthesis method, which generates magnetic resonance images (MRIs) with superior soft-tissue contrast from on-board setup CBCT images to aid CBCT segmentation, with a deep attention strategy, which focuses on learning discriminative features for differentiating organ margins. The whole segmentation method consists of three major steps. First, a cycle-consistent adversarial network (CycleGAN) was used to estimate a synthetic MRI (sMRI) from CBCT images. Second, a deep attention network was trained based on the sMRI and its corresponding manual contours. Third, the segmented contours for a query patient were obtained by feeding the patient's CBCT images into the trained sMRI estimation and segmentation model. In our retrospective study, we included 100 prostate cancer patients, each of whom had a CBCT acquired with the prostate, bladder and rectum contoured by physicians with MRI guidance as the ground truth. We trained and tested our model with separate datasets among these patients. The resulting segmentations were compared with the physicians' manual contours. The Dice similarity coefficient and mean surface distance between our segmented and the physicians' manual contours were 0.95 ± 0.02 and 0.44 ± 0.22 mm for the bladder, 0.86 ± 0.06 and 0.73 ± 0.37 mm for the prostate, and 0.91 ± 0.04 and 0.72 ± 0.65 mm for the rectum, respectively. We have proposed a novel CBCT-only pelvic multi-organ segmentation strategy using CBCT-based sMRI and validated its accuracy against manual contours. This technique could provide accurate organ volumes for treatment planning without requiring MR image acquisition, greatly facilitating the routine clinical workflow.
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America (co-first author)
22.
23. Wang T, Lei Y, Tian Z, Dong X, Liu Y, Jiang X, Curran WJ, Liu T, Shu HK, Yang X. Deep learning-based image quality improvement for low-dose computed tomography simulation in radiation therapy. J Med Imaging (Bellingham) 2019; 6:043504. [PMID: 31673567 PMCID: PMC6811730 DOI: 10.1117/1.jmi.6.4.043504]
Abstract
Low-dose computed tomography (CT) is desirable for treatment planning and simulation in radiation therapy. Rescanning and replanning multiple times during the treatment course, at a lower total dose than that of a single conventional full-dose CT simulation, is a crucial step in adaptive radiation therapy. We developed a machine learning-based method to improve the image quality of low-dose CT for radiation therapy treatment simulation, using a residual block concept and a self-attention strategy within a cycle-consistent adversarial network framework. A fully convolutional neural network with residual blocks and attention gates (AGs) was used in the generator to enable end-to-end transformation. We collected CT images from 30 patients treated with frameless brain stereotactic radiosurgery (SRS) for this study. These full-dose images were used to generate projection data, to which noise was added to simulate the low-mAs scanning scenario. Low-dose CT images were reconstructed from the noise-contaminated projection data and were fed into our network along with the original full-dose CT images for training. The performance of our network was evaluated by quantitatively comparing the high-quality CT images generated by our method with the original full-dose images. When mAs is reduced to 0.5% of the original CT scan, the mean square error of the CT images obtained by our method is ∼1.6% with respect to the original full-dose images. The proposed method successfully restored the noise, contrast-to-noise ratio, and nonuniformity level to values close to those of full-dose CT images and outperformed a state-of-the-art iterative reconstruction method. Dosimetric studies show that the average differences of dose-volume histogram metrics are <0.1 Gy (p > 0.05). These quantitative results strongly indicate that low-dose CT images denoised with our method maintain image accuracy and quality and are accurate enough for dose calculation in current CT simulation of brain SRS treatment. We also demonstrate the great potential of low-dose CT in the process of simulation and treatment planning.
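The two image-quality measures named in this abstract (mean square error relative to the full-dose image, and contrast-to-noise ratio) can be sketched with NumPy. This is a minimal illustration on synthetic arrays, not the authors' evaluation code; the exact definitions and data in the paper may differ.

```python
import numpy as np

def relative_mse(denoised, full_dose):
    """Mean square error of the denoised image, relative to the full-dose image energy."""
    return float(np.mean((denoised - full_dose) ** 2) / np.mean(full_dose ** 2))

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio between a region of interest and a background region."""
    roi, bg = image[roi_mask], image[bg_mask]
    return float(abs(roi.mean() - bg.mean()) / bg.std())

# Toy example: a noisy copy of a synthetic "full-dose" image.
rng = np.random.default_rng(0)
full = np.zeros((64, 64))
full[16:48, 16:48] = 100.0                      # bright square on dark background
low = full + rng.normal(0.0, 5.0, full.shape)   # simulated low-dose noise
print(relative_mse(low, full))                  # ≈ 0.01 for this toy setup
print(cnr(low, full > 50, full <= 50))
```

With these toy values the relative MSE lands near 1%, the same order as the ∼1.6% reported above, though the paper's figure is computed on real reconstructions.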
Affiliation(s)
- Tonghe Wang, Yang Lei, Zhen Tian, Xue Dong, Yingzi Liu, Xiaojun Jiang, Walter J. Curran, Tian Liu, Hui-Kuo Shu, Xiaofeng Yang
- Emory University, Winship Cancer Institute, Department of Radiation Oncology, Atlanta, Georgia, United States
|
24
|
Harms J, Lei Y, Wang T, Zhang R, Zhou J, Tang X, Curran WJ, Liu T, Yang X. Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography. Med Phys 2019; 46:3998-4009. [PMID: 31206709 DOI: 10.1002/mp.13656] [Citation(s) in RCA: 136] [Impact Index Per Article: 27.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 06/07/2019] [Accepted: 06/07/2019] [Indexed: 12/15/2022] Open
Abstract
PURPOSE The incorporation of cone-beam computed tomography (CBCT) has allowed for enhanced image-guided radiation therapy. While CBCT allows for daily 3D imaging, its images suffer from severe artifacts, limiting the clinical potential of CBCT. In this work, a deep learning-based method for generating high-quality corrected CBCT (CCBCT) images is proposed. METHODS The proposed method integrates a residual block concept into a cycle-consistent adversarial network (cycle-GAN) framework, called res-cycle GAN, to learn a mapping between CBCT images and paired planning CT images. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which constrains the model by forcing calculation of both a CCBCT and a synthetic CBCT. A fully convolutional neural network with residual blocks is used in the generator to enable end-to-end CBCT-to-CT transformations. The proposed algorithm was evaluated using 24 sets of patient data in the brain and 20 sets of patient data in the pelvis. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) index, and spatial non-uniformity (SNU) were used to quantify the correction accuracy of the proposed algorithm, and the proposed method was compared to both a conventional scatter correction and another machine learning-based CBCT correction method. RESULTS Overall, the MAE, PSNR, NCC, and SNU were 13.0 HU, 37.5 dB, 0.99, and 0.05 in the brain and 16.1 HU, 30.7 dB, 0.98, and 0.09 in the pelvis for the proposed method, representing improvements of 45%, 16%, 1%, and 93% in the brain, and 71%, 38%, 2%, and 65% in the pelvis, over the uncorrected CBCT images. The proposed method showed superior image quality compared to the scatter correction method, reducing noise and artifact severity, and produced images with less noise and fewer artifacts than the comparison machine learning-based method.
CONCLUSIONS The authors have developed a novel deep learning-based method to generate high-quality corrected CBCT images. The proposed method increases onboard CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiation therapy.
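The MAE, PSNR, and NCC indices used above to quantify correction accuracy can be sketched in NumPy as follows. This is a minimal sketch: the `data_range` used for PSNR is an assumed HU window, not a value stated in the abstract.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error (in HU when the inputs are HU maps)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=2000.0):
    """Peak signal-to-noise ratio in dB; data_range is an assumed HU window."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation of two images."""
    az = (a - a.mean()) / a.std()
    bz = (b - b.mean()) / b.std()
    return float(np.mean(az * bz))

# Toy example: a synthetic "planning CT" and a noisier "CBCT".
rng = np.random.default_rng(1)
ct = rng.normal(0.0, 300.0, (64, 64))        # toy HU map
cbct = ct + rng.normal(0.0, 30.0, ct.shape)  # toy CBCT with added noise
print(mae(cbct, ct), psnr(cbct, ct), ncc(cbct, ct))
```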
Affiliation(s)
- Joseph Harms, Yang Lei, Tonghe Wang, Rongxiao Zhang, Jun Zhou, Walter J Curran, Tian Liu, Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
|
25
|
Liu Y, Lei Y, Wang Y, Wang T, Ren L, Lin L, McDonald M, Curran WJ, Liu T, Zhou J, Yang X. MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method. Phys Med Biol 2019; 64:145015. [PMID: 31146267 PMCID: PMC6635951 DOI: 10.1088/1361-6560/ab25bc] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Magnetic resonance imaging (MRI) has been widely used in combination with computed tomography (CT) in radiation therapy because MRI improves the accuracy and reliability of target delineation due to its superior soft-tissue contrast over CT. The MRI-only treatment process is currently an active field of research, since it could eliminate systematic MR-CT co-registration errors, reduce medical cost, avoid diagnostic radiation exposure, and simplify the clinical workflow. The purpose of this work is to validate the application of a deep learning-based method for abdominal synthetic CT (sCT) generation by image evaluation and dosimetric assessment in a commercial proton pencil beam treatment planning system (TPS). This study proposes to integrate dense blocks into a 3D cycle-consistent generative adversarial network (cycle-GAN) framework in an effort to effectively learn the nonlinear mapping between MRI and CT pairs. A cohort of 21 patients with co-registered CT and MR pairs was used to test the deep learning-based sCT image quality by leave-one-out cross-validation. The sCT image quality, dosimetric accuracy, and distal range fidelity were rigorously checked using side-by-side comparison against the corresponding original CT images. The average mean absolute error (MAE) was 72.87±18.16 HU. The relative differences of the statistics of the PTV dose-volume histogram (DVH) metrics between sCT and CT were generally less than 1%. Mean 3D gamma analysis passing rates for the 1 mm/1%, 2 mm/2%, and 3 mm/3% criteria with a 10% dose threshold were 90.76±5.94%, 96.98±2.93%, and 99.37±0.99%, respectively. The median, mean, and standard deviation of the absolute maximum range differences were 0.170 cm, 0.186 cm, and 0.155 cm. The image similarity, dosimetric agreement, and distal range agreement between sCT and original CT suggest the feasibility of further developing an MRI-only workflow for liver proton radiotherapy.
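The 3D gamma analysis reported above can be illustrated with a simplified 1-D version: each reference point passes if some nearby evaluated point agrees within combined distance-to-agreement and dose-difference criteria. The `dta`/`dd`/`threshold` defaults below are illustrative, not the paper's exact settings, and real gamma implementations work on interpolated 3D grids.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dta=2.0, dd=0.02, threshold=0.1):
    """Simplified 1-D global gamma analysis.

    ref, ev   : dose profiles on the same grid
    spacing   : grid spacing in mm
    dta       : distance-to-agreement criterion in mm
    dd        : dose-difference criterion as a fraction of the max reference dose
    threshold : reference points below this fraction of the max dose are ignored
    """
    dmax = ref.max()
    x = np.arange(len(ref)) * spacing
    passed = total = 0
    for i, d_ref in enumerate(ref):
        if d_ref < threshold * dmax:
            continue
        total += 1
        dist = (x - x[i]) / dta                 # normalized spatial offsets
        dose = (ev - d_ref) / (dd * dmax)       # normalized dose differences
        if np.sqrt(dist ** 2 + dose ** 2).min() <= 1.0:
            passed += 1
    return passed / total

# Toy Gaussian dose profile; identical profiles pass everywhere.
x = np.arange(50, dtype=float)
ref = 100.0 * np.exp(-((x - 25.0) / 8.0) ** 2)
print(gamma_pass_rate(ref, ref, spacing=1.0))   # -> 1.0
```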
Affiliation(s)
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
|
26
|
Jeong J, Wang L, Ji B, Lei Y, Ali A, Liu T, Curran WJ, Mao H, Yang X. Machine-learning based classification of glioblastoma using delta-radiomic features derived from dynamic susceptibility contrast enhanced magnetic resonance images: Introduction. Quant Imaging Med Surg 2019; 9:1201-1213. [PMID: 31448207 PMCID: PMC6685811 DOI: 10.21037/qims.2019.07.01] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2019] [Accepted: 06/27/2019] [Indexed: 12/27/2022]
Abstract
BACKGROUND Glioblastoma is the most aggressive brain tumor, with poor prognosis. The purpose of this study is to improve the tissue characterization of these highly heterogeneous tumors using delta-radiomic features of images from dynamic susceptibility contrast-enhanced (DSC) magnetic resonance imaging (MRI). METHODS Twenty-five patients with gliomas histopathologically confirmed as 13 high-grade (HG) and 12 low-grade (LG), who underwent the standard brain tumor MRI protocol including DSC MRI, were included. Tumor regions on all DSC MRI images were registered to and contoured in T2-weighted fluid-attenuated inversion recovery (FLAIR) images. These contours and their contralateral regions of normal tissue were used to extract delta-radiomic features before applying feature selection. The most informative and non-redundant features were selected to train a random forest to differentiate HG and LG gliomas. A leave-one-out cross-validated random forest was then applied to classify these tumors for grading. Finally, a majority-voting method was applied to reduce binarization bias and to combine the results of the various feature lists. RESULTS Analysis of the predictions showed that the reported method consistently predicted the tumor grade of 24 out of 25 patients correctly (accuracy 0.96). The mean prediction accuracy was 0.950±0.091 for HG and 0.850±0.255 for LG, and the area under the receiver operating characteristic curve (AUC) was 0.94. CONCLUSIONS This study shows that delta-radiomic features derived from DSC MRI data can be used to characterize and determine tumor grades. The radiomic features from DSC MRI may be used to elucidate the underlying tumor biology and response to therapy.
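The two combination steps described above, delta-radiomic features (tumor ROI relative to the contralateral normal-tissue ROI) and majority voting across feature lists, can be sketched as follows. Both definitions are assumptions for illustration; the paper's exact feature formulas and voting scheme may differ.

```python
import numpy as np

def delta_feature(tumor_vals, contra_vals):
    """Relative change of a radiomic feature between the tumor ROI and the
    contralateral normal-tissue ROI (one possible 'delta' definition)."""
    t, c = np.mean(tumor_vals), np.mean(contra_vals)
    return float((t - c) / c)

def majority_vote(predictions):
    """Combine binary HG/LG predictions from several feature lists.
    predictions: rows = feature lists, columns = patients."""
    votes = np.asarray(predictions, dtype=float)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy example: three feature lists voting on three patients (1 = HG, 0 = LG).
preds = [[1, 0, 1],
         [1, 1, 0],
         [1, 0, 0]]
print(majority_vote(preds))                    # -> [1 0 0]
print(delta_feature([2.0, 2.0], [1.0, 1.0]))   # -> 1.0
```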
Affiliation(s)
- Jiwoong Jeong, Yang Lei, Arif Ali, Tian Liu, Walter J. Curran, Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Liya Wang, Bing Ji, Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Liya Wang
- Department of Radiology, the People’s Hospital of Longhua, Shenzhen 518109, China
|
27
|
Wang T, Lei Y, Manohar N, Tian S, Jani AB, Shu HK, Higgins K, Dhabaan A, Patel P, Tang X, Liu T, Curran WJ, Yang X. Dosimetric study on learning-based cone-beam CT correction in adaptive radiation therapy. Med Dosim 2019; 44:e71-e79. [PMID: 30948341 DOI: 10.1016/j.meddos.2019.03.001] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Revised: 08/16/2018] [Accepted: 03/04/2019] [Indexed: 11/16/2022]
Abstract
INTRODUCTION Cone-beam CT (CBCT) image quality is important for its quantitative analysis in adaptive radiation therapy. However, due to severe artifacts, CBCT has so far been used primarily for verifying patient setup. We have developed a learning-based image quality improvement method that can provide CBCTs with image quality comparable to that of planning CTs (pCTs). The accuracy of dose calculations based on these CBCTs is unknown. In this study, we aim to investigate the dosimetric accuracy of our corrected CBCT (CCBCT) in brain stereotactic radiosurgery (SRS) and pelvic radiotherapy. MATERIALS AND METHODS We retrospectively investigated a total of 32 treatment plans from 22 patients, each with both original treatment pCTs and CBCTs acquired during treatment setup. The CCBCT and original CBCT (OCBCT) were registered to the pCT to generate CCBCT-based and OCBCT-based treatment plans. The original pCT-based plans served as ground truth. Clinically relevant dose-volume histogram (DVH) metrics were extracted from the ground truth, OCBCT-based, and CCBCT-based plans for comparison. Gamma analysis was also performed to compare the absorbed dose distributions between the pCT-based and OCBCT/CCBCT-based plans of each patient. RESULTS CCBCTs demonstrated better image contrast and more accurate HU ranges when compared side-by-side with OCBCTs. For pelvic radiotherapy plans, the mean dose error in DVH metrics for the planning target volume (PTV), bladder, and rectum was significantly reduced, from 1% to 0.3%, after CBCT correction. The gamma analysis showed that the average pass rate increased from 94.5% before correction to 99.0% after correction. For brain SRS treatment plans, both original and corrected CBCT images were accurate enough for dose calculation, though CCBCT featured higher image quality.
CONCLUSION CCBCTs can provide a level of dose accuracy comparable to that of traditional pCTs for brain and prostate radiotherapy planning, and the correction method proposed here can be useful in CBCT-guided adaptive radiotherapy.
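Extracting DVH metrics of the kind compared above can be sketched with NumPy. Here D95 is taken as the 5th percentile of voxel doses inside the structure (the dose received by at least 95% of the volume); that definition, and the toy dose grid, are assumptions for illustration.

```python
import numpy as np

def dvh_metrics(dose, mask):
    """Simple DVH metrics for one structure.

    Dmean: mean voxel dose inside the structure.
    D95  : dose received by at least 95% of the structure volume,
           taken here as the 5th percentile of voxel doses.
    """
    d = dose[mask]
    return {"Dmean": float(d.mean()), "D95": float(np.percentile(d, 5))}

# Toy example: a linear dose ramp over a structure covering the whole grid.
dose = np.arange(100, dtype=float)   # voxel doses in Gy
mask = np.ones(100, dtype=bool)      # structure mask
print(dvh_metrics(dose, mask))
```

Comparing such per-structure metrics between pCT-based and CBCT-based dose grids is the kind of DVH comparison the study reports, alongside gamma analysis of the full dose distributions.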
Affiliation(s)
- Tonghe Wang, Yang Lei, Nivedh Manohar, Sibo Tian, Ashesh B Jani, Hui-Kuo Shu, Kristin Higgins, Anees Dhabaan, Pretesh Patel, Tian Liu, Walter J Curran, Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiangyang Tang
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
|