1
Rossi M, Belotti G, Mainardi L, Baroni G, Cerveri P. Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools. Comput Assist Surg (Abingdon) 2024; 29:2327981. PMID: 38468391. DOI: 10.1080/24699322.2024.2327981.
Abstract
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for use at each treatment fraction. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensity into Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field-of-view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and to produce a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points after calibration (53.78% vs. 90.26%). Real data confirmed this trend, with slightly lower performance for the same criterion (65.36% vs. 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
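The calibration above rests on the cycle-consistency constraint that defines a cGAN: mapping a CBCT into the CT domain and back should reproduce the input. A minimal numpy sketch of that loss term, where the two generators are hypothetical linear HU remappings rather than the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1 averaged over voxels."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Hypothetical stand-ins for the two generators: a linear HU remap and its inverse.
G = lambda img: 1.1 * img + 40.0      # CBCT -> calibrated-CT domain
F = lambda img: (img - 40.0) / 1.1    # calibrated-CT -> CBCT domain

cbct = np.random.default_rng(0).uniform(-1000.0, 2000.0, size=(64, 64))
print(cycle_consistency_loss(cbct, G, F))  # near 0, since F inverts G exactly
```

During training both generators are learned jointly, so a low cycle loss is what allows HU calibration without voxel-wise paired pCT/CBCT scans.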
Affiliation(s)
- Matteo Rossi
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
  - Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
- Gabriele Belotti
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Luca Mainardi
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Baroni
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
  - Bioengineering Unit, Clinical Department, National Center for Oncological Hadrontherapy (CNAO), Pavia, Italy
- Pietro Cerveri
  - Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
  - Laboratory of Innovation in Sleep Medicine, Istituto Auxologico Italiano, Milan, Italy
2
Hu C, Bian C, Cao N, Zhou H, Guo B. Synthesizing High b-Value Diffusion-Weighted Imaging of Gastric Cancer Using an Improved Vision Transformer CycleGAN. Bioengineering (Basel) 2024; 11:805. PMID: 39199763. PMCID: PMC11351349. DOI: 10.3390/bioengineering11080805.
Abstract
BACKGROUND Diffusion-weighted imaging (DWI), a key component of multiparametric magnetic resonance imaging (mpMRI), plays a pivotal role in the detection, diagnosis, and evaluation of gastric cancer. Despite its potential, DWI is often marred by substantial anatomical distortions and sensitivity artifacts, which can hinder its practical utility. Presently, enhancing DWI image quality necessitates reliance on cutting-edge hardware and extended scanning durations. A rapid technique that optimally balances shortened acquisition time with improved image quality would have substantial clinical relevance. OBJECTIVES This study aims to construct and evaluate an unsupervised learning framework, the attention dual-contrast vision transformer CycleGAN (ADCVCGAN), for enhancing image quality and reducing scanning time in gastric DWI. METHODS The proposed ADCVCGAN framework employs high b-value DWI (b-DWI, b = 1200 s/mm²) as a reference for generating synthetic b-value DWI (s-DWI) from acquired lower b-value DWI (a-DWI, b = 800 s/mm²). Specifically, ADCVCGAN incorporates a CBAM attention module into the CycleGAN generator to enhance feature extraction from the input a-DWI in both the channel and spatial dimensions. Subsequently, a vision transformer module, based on the U-Net framework, is introduced to refine detailed features, aiming to produce s-DWI with image quality comparable to that of b-DWI. Finally, images from the source domain are added as negative samples to the discriminator, encouraging it to steer the generator towards synthesizing images distant from the source domain in the latent space, with the goal of generating more realistic s-DWI.
The image quality of the s-DWI is quantitatively assessed using the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), mean squared error (MSE), weighted peak signal-to-noise ratio (WPSNR), and weighted mean squared error (WMSE). Subjective evaluations of the different DWI images were compared using the Wilcoxon signed-rank test. The reproducibility and consistency of b-ADC and s-ADC, calculated from b-DWI and s-DWI respectively, were assessed using the intraclass correlation coefficient (ICC). p < 0.05 was considered statistically significant. RESULTS The s-DWI generated by ADCVCGAN performed significantly better than a-DWI on the quantitative metrics PSNR, SSIM, FSIM, MSE, WPSNR, and WMSE (p < 0.001), comparable to the optimal level achieved by the latest synthesis algorithms. Subjective scores for lesion visibility, anatomical detail, image distortion, and overall image quality were significantly higher for s-DWI and b-DWI than for a-DWI (p < 0.001), with no significant difference between s-DWI and b-DWI (p > 0.05). The consistency of b-ADC and s-ADC readings was comparable across readers (ICC: b-ADC 0.87-0.90; s-ADC 0.88-0.89). The repeatability of b-ADC and s-ADC readings by the same reader was also comparable (Reader 1 ICC: b-ADC 0.85-0.86, s-ADC 0.85-0.93; Reader 2 ICC: b-ADC 0.86-0.87, s-ADC 0.89-0.92). CONCLUSIONS ADCVCGAN shows excellent promise in generating gastric cancer DWI images: it effectively reduces scanning time, improves image quality, and ensures the authenticity of s-DWI images and their s-ADC values, providing a basis for assisting clinical decision-making.
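The reader agreement above is summarised with the intraclass correlation coefficient. A small sketch of a two-way random-effects, absolute-agreement, single-measure ICC(2,1), a common choice for reader studies (whether the authors used this exact ICC form is an assumption):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    sst = np.sum((r - grand) ** 2)                       # total sum of squares
    ssr = k * np.sum((r.mean(axis=1) - grand) ** 2)      # between subjects
    ssc = n * np.sum((r.mean(axis=0) - grand) ** 2)      # between raters
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))        # residual
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Hypothetical ADC readings: 5 lesions scored by 2 readers.
ratings = np.array([[9, 8], [7, 7], [5, 6], [8, 8], [6, 5]], dtype=float)
print(round(icc_2_1(ratings), 3))
```

Two readers in perfect agreement give ICC = 1; a constant bias between readers pulls ICC(2,1) below 1 because it penalises absolute disagreement, not just rank disagreement.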
Affiliation(s)
- Can Hu
  - School of Computer and Soft, Hohai University, Nanjing 211100, China
- Congchao Bian
  - School of Computer and Soft, Hohai University, Nanjing 211100, China
- Ning Cao
  - School of Computer and Soft, Hohai University, Nanjing 211100, China
- Han Zhou
  - School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Bin Guo
  - College of Computer and Information Engineering, Xinjiang Agricultural University, Urumqi 830052, China
3
Rusanov B, Hassan GM, Reynolds M, Sabet M, Rowshanfarzad P, Bucknell N, Gill S, Dass J, Ebert M. Transformer CycleGAN with uncertainty estimation for CBCT based synthetic CT in adaptive radiotherapy. Phys Med Biol 2024; 69:035014. PMID: 38198726. DOI: 10.1088/1361-6560/ad1cfc.
Abstract
Objective. Clinical implementation of synthetic CT (sCT) from cone-beam CT (CBCT) for adaptive radiotherapy necessitates a high degree of anatomical integrity, Hounsfield unit (HU) accuracy, and image quality. To achieve these goals, a vision transformer and anatomically sensitive loss functions are described. Better quantification of image quality is achieved using the alignment-invariant Fréchet inception distance (FID), and uncertainty estimation for sCT risk prediction is implemented in a scalable plug-and-play manner. Approach. Baseline U-Net, generative adversarial network (GAN), and CycleGAN models were trained to identify shortcomings in each approach. The proposed CycleGAN-Best model was empirically optimized based on a large ablation study and evaluated using classical image quality metrics, FID, the gamma index, and a segmentation analysis. Two uncertainty estimation methods, Monte Carlo Dropout (MCD) and test-time augmentation (TTA), were introduced to model epistemic and aleatoric uncertainty. Main results. FID correlated with blind observer image quality scores (correlation coefficient -0.83), validating the metric as an accurate quantifier of perceived image quality. The FID and mean absolute error (MAE) of CycleGAN-Best were 42.11 ± 5.99 and 25.00 ± 1.97 HU, compared to 63.42 ± 15.45 and 31.80 HU for CycleGAN-Baseline, and 144.32 ± 20.91 and 68.00 ± 5.06 HU for the CBCT, respectively. Gamma 1%/1 mm pass rates were 98.66 ± 0.54% for CycleGAN-Best, compared to 86.72 ± 2.55% for the CBCT. TTA- and MCD-based uncertainty maps were spatially well correlated with poor synthesis outputs. Significance. Anatomical accuracy was achieved by suppressing CycleGAN-related artefacts. FID better discriminated image quality, whereas alignment-based metrics such as MAE erroneously suggest that poorer outputs perform better. Uncertainty estimation for sCT was shown to correlate with poor outputs and has clinical relevance for model risk assessment and quality assurance. The proposed model and accompanying evaluation and risk assessment tools are necessary additions for achieving clinically robust sCT generation models.
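The FID used above is the Fréchet distance between Gaussian fits to deep-feature distributions of two image sets. A self-contained sketch of that distance on pre-extracted feature vectors (in practice the features come from an Inception network; random features are used here only to exercise the formula, and the eigendecomposition-based matrix square root is a simplification):

```python
import numpy as np

def sqrtm_pd(m):
    """Matrix square root via eigendecomposition. Adequate for the
    SPD-like covariance products used here; production code would
    typically use scipy.linalg.sqrtm."""
    w, v = np.linalg.eig(m)
    w = np.sqrt(np.clip(np.real(w), 0.0, None))
    v = np.real(v)
    return (v * w) @ np.linalg.inv(v)

def frechet_distance(feats_a, feats_b):
    """FID core: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)),
    computed on pre-extracted feature vectors (rows = samples)."""
    mu1, mu2 = feats_a.mean(axis=0), feats_b.mean(axis=0)
    c1 = np.cov(feats_a, rowvar=False)
    c2 = np.cov(feats_b, rowvar=False)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(c1 + c2 - 2.0 * sqrtm_pd(c1 @ c2)))

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
print(frechet_distance(a, a))        # ~0 for identical feature sets
print(frechet_distance(a, a + 3.0))  # grows with the squared mean shift
```

Because FID compares distributions rather than aligned voxel pairs, it is insensitive to small registration offsets, which is the property the abstract exploits against MAE.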
Affiliation(s)
- Branimir Rusanov
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
  - Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Ghulam Mubashar Hassan
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mark Reynolds
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
- Mahsheed Sabet
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
  - Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Pejman Rowshanfarzad
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
  - Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
- Nicholas Bucknell
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Suki Gill
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Joshua Dass
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Ebert
  - School of Physics, Mathematics and Computing, University of Western Australia, Perth, Western Australia, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
  - Center for Advanced Technologies in Cancer Research, Perth, Western Australia, Australia
  - Australian Centre for Quantitative Imaging, University of Western Australia, Perth, Western Australia, Australia
  - School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States of America
4
Wynne JF, Lei Y, Pan S, Wang T, Pasha M, Luca K, Roper J, Patel P, Patel SA, Godette K, Jani AB, Yang X. Rapid unpaired CBCT-based synthetic CT for CBCT-guided adaptive radiotherapy. J Appl Clin Med Phys 2023; 24:e14064. PMID: 37345557. PMCID: PMC10562022. DOI: 10.1002/acm2.14064.
Abstract
In this work, we demonstrate a method for rapid synthesis of high-quality CT images from unpaired, low-quality CBCT images, permitting CBCT-based adaptive radiotherapy. We adapt contrastive unpaired translation (CUT) for use with medical images and evaluate the results on an institutional pelvic CT dataset. We compare the method against CycleGAN using mean absolute error, structural similarity index, root mean squared error, and Fréchet inception distance, and show that CUT significantly outperforms CycleGAN while requiring less time and fewer resources. The investigated method improves the feasibility of online adaptive radiotherapy over the present state of the art.
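CUT avoids CycleGAN's second generator by using a patchwise contrastive (InfoNCE) loss: an output patch's feature should match the feature of the same patch in the input more closely than features from other locations. A minimal numpy sketch of that loss on pre-extracted patch features (feature dimension and temperature are illustrative assumptions):

```python
import numpy as np

def patchnce_loss(q, k_pos, k_neg, tau=0.07):
    """InfoNCE over one query patch: pull q toward its positive key,
    push it away from negative keys from other spatial locations."""
    unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, k_pos, k_neg = unit(q), unit(k_pos), unit(k_neg)
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / tau
    logits -= logits.max()                      # numerical stability
    p_pos = np.exp(logits[0]) / np.exp(logits).sum()
    return float(-np.log(p_pos))

d = 4
q = np.eye(d)[0]                                          # query patch feature
aligned = patchnce_loss(q, q, np.eye(d)[1:])              # positive matches query
shifted = patchnce_loss(q, np.eye(d)[1], np.eye(d)[2:])   # positive does not match
print(aligned, shifted)                                   # aligned loss is much smaller
```

One-sided translation with this loss is what lets CUT drop the inverse generator and cycle pass, which is where the reported time and resource savings over CycleGAN come from.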
Affiliation(s)
- Jacob F. Wynne
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Shaoyan Pan
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Mosa Pasha
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kirk Luca
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sagar A. Patel
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Karen Godette
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B. Jani
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
5
Deng L, Zhang Y, Wang J, Huang S, Yang X. Improving performance of medical image alignment through super-resolution. Biomed Eng Lett 2023; 13:397-406. PMID: 37519883. PMCID: PMC10382383. DOI: 10.1007/s13534-023-00268-w.
Abstract
Medical image alignment is an important tool for tracking patient conditions, but alignment quality is influenced by the effectiveness of low-dose cone-beam CT (CBCT) imaging and by patient characteristics. To address these two issues, we propose an unsupervised alignment method that incorporates a super-resolution preprocessing step. We constructed the model on a private clinical dataset and validated the enhancement that super-resolution brings to alignment using clinical and public data. Across all three experiments, we demonstrate that higher-resolution data yield better results in the alignment process. To fully constrain similarity and structure, a new loss function is proposed: the Pearson correlation coefficient combined with regional mutual information. On all test samples, the newly proposed loss function achieves higher scores than the common loss functions and improves alignment accuracy. Subsequent experiments verified that, combined with the newly proposed loss function, super-resolution-processed data boost alignment accuracy by up to 9.58%. Moreover, this boost is not limited to a single model but is effective across different alignment models. These experiments demonstrate that the proposed unsupervised alignment method with super-resolution preprocessing effectively improves alignment and plays an important role in tracking patient conditions over time.
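The proposed loss combines a Pearson correlation term (linear intensity similarity) with regional mutual information (statistical dependence). A compact sketch of both ingredients on whole images (the 'regional' windowing and the combination weight are simplifications; the paper's exact formulation may differ):

```python
import numpy as np

def pearson_term(a, b):
    """1 - r: zero when the two images are perfectly linearly correlated."""
    a, b = a.ravel(), b.ravel()
    return float(1.0 - np.corrcoef(a, b)[0, 1])

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (nats)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def combined_loss(a, b, lam=0.5):
    """Hypothetical combination: low when images are both linearly
    correlated and statistically dependent (lam is an assumed weight)."""
    return pearson_term(a, b) - lam * mutual_information(a, b)

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))
print(combined_loss(img, 2.0 * img + 1.0))   # well-aligned pair: low loss
print(combined_loss(img, rng.permutation(img.ravel()).reshape(64, 64)))  # misaligned: higher
```

The two terms are complementary: Pearson correlation captures global linear agreement, while mutual information still rewards dependence when the intensity relationship is nonlinear.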
Affiliation(s)
- Liwei Deng
  - Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Yuanzhi Zhang
  - Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Jing Wang
  - Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou, 510520, Guangdong, China
- Sijuan Huang
  - Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, Guangdong, China
- Xin Yang
  - Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060, Guangdong, China
6
Cao Z, Gao X, Chang Y, Liu G, Pei Y. Improving synthetic CT accuracy by combining the benefits of multiple normalized preprocesses. J Appl Clin Med Phys 2023:e14004. PMID: 37092739. PMCID: PMC10402686. DOI: 10.1002/acm2.14004.
Abstract
PURPOSE To investigate the effect of different normalization preprocesses in deep learning on the accuracy of different tissues in synthetic computed tomography (sCT), and to combine their advantages to improve the accuracy of all tissues. METHODS The cycle-consistent adversarial network (CycleGAN) model was used to generate sCT images from megavolt cone-beam CT (MVCBCT) images. In this study, 2639 head MVCBCT and CT image pairs from 203 patients were collected as a training set, and 249 image pairs from 29 patients were collected as a test set. We normalized the voxel values in the images to 0 to 1 or -1 to 1 using two linear and five nonlinear normalization preprocessing methods, obtaining seven data sets, and compared the accuracy of different tissues in the sCT obtained from training on each. Finally, to combine the advantages of the different normalization preprocessing methods, we obtained sCT_Blur by cropping, stitching, and smoothing (OpenCV's cv2.medianBlur, kernel size 5) each group of sCTs, and evaluated its image quality and the accuracy of organs at risk (OARs). RESULTS Different normalization preprocesses made sCT more accurate in different tissues. The proposed sCT_Blur took advantage of multiple normalization preprocessing methods, and all tissues were more accurate than in sCT obtained using a single conventional normalization method. Compared with the other sCT images, the structural similarity of sCT_Blur versus CT improved to 0.906 ± 0.019. The mean absolute errors of the CT numbers were reduced to 15.7 ± 4.1 HU, 23.2 ± 7.1 HU, 11.5 ± 4.1 HU, 212.8 ± 104.6 HU, 219.4 ± 35.1 HU, and 268.8 ± 88.8 HU for the oral cavity, parotid, spinal cord, cavity, mandible, and teeth, respectively. CONCLUSION The proposed approach combines the advantages of several normalization preprocessing methods to improve the accuracy of all tissues in sCT images, which is promising for improving the accuracy of dose calculations based on CBCT images in adaptive radiotherapy.
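The comparison above hinges on how MVCBCT/CT voxel values are normalized before training. A small sketch of the kind of linear and nonlinear normalizations being compared (the HU window and the nonlinear forms are illustrative assumptions, not the paper's exact seven methods):

```python
import numpy as np

def normalize(img, mode, hu_min=-1000.0, hu_max=3000.0):
    """Map HU values into a network-friendly range; different modes
    allocate dynamic range to different tissues."""
    x = (np.clip(img, hu_min, hu_max) - hu_min) / (hu_max - hu_min)
    if mode == "linear01":       # linear, [0, 1]
        return x
    if mode == "linear11":       # linear, [-1, 1]
        return 2.0 * x - 1.0
    if mode == "sqrt":           # nonlinear: expands low-HU (soft-tissue) contrast
        return np.sqrt(x)
    if mode == "log":            # nonlinear: compresses the high-HU (bone/teeth) range
        return np.log1p(x) / np.log(2.0)
    raise ValueError(f"unknown mode: {mode}")

hu = np.array([-1000.0, 0.0, 40.0, 1500.0, 3000.0])  # air, water, soft tissue, bone, teeth
for mode in ("linear01", "linear11", "sqrt", "log"):
    print(mode, np.round(normalize(hu, mode), 3))
```

Because each mapping allocates the network's limited output precision differently, one normalization can favour soft tissue while another favours bone, which is the effect the paper exploits by stitching the best regions together.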
Affiliation(s)
- Zheng Cao
  - National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
  - Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Xiang Gao
  - Hematology & Oncology Department, Hefei First People's Hospital, Hefei, China
- Yankui Chang
  - School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Gongfa Liu
  - National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
- Yuanji Pei
  - National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, China
7
Alaka BG, Bentefour EH, Teo BKK, Samuel D. A comparative study of machine-learning approaches in proton radiography using energy-resolved dose function. Phys Med 2023; 106:102525. PMID: 36621081. DOI: 10.1016/j.ejmp.2023.102525.
Abstract
PURPOSE The feasibility of machine learning (ML) techniques, and their performance compared to the conventional χ2-minimization technique, in the context of the proton energy-resolved dose imaging method is presented. MATERIALS AND METHODS Various wedge-like geometries with varying gradients are simulated in GATE to obtain energy-resolved dose functions (ERDFs) from proton beams of different energies. These ERDFs are used to predict the water-equivalent path length (WEPL) using the conventional technique and several ML-based methods. The results are compared to gain an understanding of the performance of ML models in proton radiography. RESULTS The results obtained from the χ2-minimization technique indicate that it is robust and more reliable than the ML-based techniques. The ML-based techniques also did not mitigate the effect of range mixing, but appear to be more affected by it than the χ2-minimization technique. Substantial data reduction was required to make the results of the ML-based methods comparable to those of χ2-minimization; such data reduction might not be possible in a clinical setting. The only advantage of the ML-based techniques is the computational time required to generate a WEPL map, which in our case study is 10-30 times shorter than that of the conventional χ2-minimization technique. CONCLUSIONS The first results from this preliminary study indicate that the ML techniques failed to be on par with the conventional χ2-minimization technique in terms of the achievable accuracy of WEPL predictions and the mitigation of range-mixing effects in the WEPL image. Modern strategies such as GAN-based models may be suitable for such applications.
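The conventional baseline above selects, for each pixel, the candidate WEPL whose simulated ERDF best matches the measured one under a chi-square criterion. A toy sketch of that lookup (the Gaussian-shaped ERDF library, grid, and energy axis are hypothetical stand-ins, not GATE output):

```python
import numpy as np

def wepl_by_chi2(measured, library, wepl_grid):
    """Return the candidate WEPL whose simulated ERDF best matches the
    measured ERDF under a chi-square criterion."""
    chi2 = np.array([np.sum((measured - m) ** 2 / np.maximum(m, 1e-12))
                     for m in library])
    return wepl_grid[int(np.argmin(chi2))]

# Hypothetical library: Gaussian-shaped ERDFs, one per candidate WEPL.
wepl_grid = np.linspace(5.0, 25.0, 81)
energies = np.linspace(0.0, 30.0, 120)
library = np.exp(-0.5 * ((energies[None, :] - wepl_grid[:, None]) / 2.0) ** 2)

measured = library[40].copy()     # a noiseless "measurement" for illustration
print(wepl_by_chi2(measured, library, wepl_grid))  # 15.0, i.e. wepl_grid[40]
```

Each pixel requires a full sweep over the candidate library, which is why a trained regressor can be 10-30 times faster even when it is less accurate.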
Affiliation(s)
- Alaka B G
  - Department of Physics, Central University of Karnataka, Kalaburagi, 585367, Karnataka, India
- El H Bentefour
  - Veritas Medical Solutions Inc., Cassell Rd, Harleysville, PA 19438, USA
- Boon-Keng Kevin Teo
  - Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA, USA
- Deepak Samuel
  - Department of Physics, Central University of Karnataka, Kalaburagi, 585367, Karnataka, India
8
Liu X, Liang X, Deng L, Tan S, Xie Y. Learning low-dose CT degradation from unpaired data with flow-based model. Med Phys 2022; 49:7516-7530. PMID: 35880375. DOI: 10.1002/mp.15886.
Abstract
BACKGROUND There has been growing interest in low-dose computed tomography (LDCT) for reducing X-ray radiation to patients. However, LDCT always suffers from complex noise in reconstructed images. Although deep learning-based methods have shown strong performance in LDCT denoising, most require a large number of paired training images of normal-dose CT (NDCT) and LDCT, which are hard to acquire in the clinic. The lack of paired training data significantly undermines the practicability of supervised deep learning-based methods, so unsupervised or weakly supervised methods are required. PURPOSE We aimed to propose a method that achieves LDCT denoising without training pairs. Specifically, we first trained a neural network in a weakly supervised manner to simulate LDCT images from NDCT images. The simulated training pairs could then be used for supervised deep denoising networks. METHODS We proposed a weakly supervised method to learn the degradation of LDCT from unpaired LDCT and NDCT images. Concretely, LDCT and normal-dose images were fed into one shared flow-based model and projected to the latent space. Then, the degradation between low-dose and normal-dose images was modeled in the latent space. Finally, the model was trained by minimizing the negative log-likelihood loss, with no requirement for paired training data. After training, an NDCT image can be input to the trained flow-based model to generate the corresponding LDCT image. The simulated NDCT-LDCT image pairs can then be used to train supervised denoising neural networks for testing. RESULTS Our method achieved much better performance on LDCT image simulation than the most widely used image-to-image translation method, CycleGAN, according to the radial noise power spectrum. The simulated image pairs could be used for any supervised LDCT denoising neural network.
We validated the effectiveness of our generated image pairs on a classic convolutional neural network, REDCNN, and a novel transformer-based model, TransCT. Our method achieved a mean peak signal-to-noise ratio (PSNR) of 24.43 dB and mean structural similarity (SSIM) of 0.785 on an abdomen CT dataset, and a mean PSNR of 33.88 dB and mean SSIM of 0.797 on a chest CT dataset, outperforming several traditional CT denoising methods, the same network trained on CycleGAN-generated data, and a novel transfer learning method. Our method was also on par with the supervised networks in terms of visual effects. CONCLUSION We proposed a flow-based method to learn LDCT degradation from only unpaired training data. It achieved impressive performance on LDCT synthesis; neural networks can then be trained with the generated paired data for LDCT denoising. The denoising results are better than those of traditional and weakly supervised methods, and comparable to supervised deep learning methods.
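PSNR, one of the metrics reported above, is a simple function of the mean squared error and the image dynamic range. A quick sketch:

```python
import numpy as np

def psnr(ref, img, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(img, dtype=np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

ref = np.zeros((32, 32))
noisy = ref + 1.0                           # every pixel off by exactly 1
print(psnr(ref, noisy, data_range=255.0))   # 20*log10(255) ≈ 48.13 dB
```

For CT, `data_range` is usually taken over the HU window used for evaluation, so reported PSNR values are only comparable when that window is stated.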
Affiliation(s)
- Xuan Liu
  - School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Xiaokun Liang
  - Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Deng
  - Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shan Tan
  - School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Yaoqin Xie
  - Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
9
Rusanov B, Hassan GM, Reynolds M, Sabet M, Kendrick J, Farzad PR, Ebert M. Deep learning methods for enhancing cone-beam CT image quality towards adaptive radiation therapy: A systematic review. Med Phys 2022; 49:6019-6054. PMID: 35789489. PMCID: PMC9543319. DOI: 10.1002/mp.15840.
Abstract
The use of deep learning (DL) to improve cone-beam CT (CBCT) image quality has gained popularity as computational resources and algorithmic sophistication have advanced in tandem. CBCT imaging has the potential to facilitate online adaptive radiation therapy (ART) by utilizing up-to-date patient anatomy to modify treatment parameters before irradiation. Poor CBCT image quality has been an impediment to realizing ART due to the increased scatter conditions inherent to cone-beam acquisitions. Given the recent interest in DL applications in radiation oncology, and specifically DL for CBCT correction, we provide a systematic theoretical and literature review for future stakeholders. The review encompasses DL approaches for synthetic CT generation, as well as projection-domain methods employed in the CBCT correction literature. We review trends in publications from January 2018 to April 2022 and condense their major findings, with emphasis on study design and deep learning techniques. Clinically relevant endpoints relating to image quality and dosimetric accuracy are summarised, highlighting gaps in the literature. Finally, we make recommendations for both clinicians and DL practitioners based on literature trends and the state-of-the-art DL methods currently utilized in radiation oncology.
Affiliation(s)
- Branimir Rusanov
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Ghulam Mubashar Hassan
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mark Reynolds
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
- Mahsheed Sabet
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Jake Kendrick
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Pejman Rowshan Farzad
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
- Martin Ebert
  - School of Physics, Mathematics and Computing, The University of Western Australia, Perth, Western Australia, 6009, Australia
  - Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, Western Australia, 6009, Australia
10
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new techniques of deep learning, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow that popular trend and, notably, radiation oncology is one of those at the forefront, with already a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which scales with their complexity. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore involve two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows as well as vendors' perspectives for the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
11
Dong G, Zhang C, Deng L, Zhu Y, Dai J, Song L, Meng R, Niu T, Liang X, Xie Y. A deep unsupervised learning framework for the 4D CBCT artifact correction. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac55a5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
Objective. Four-dimensional cone-beam computed tomography (4D CBCT) has unique advantages for moving-target localization, tracking, and therapeutic dose accumulation in adaptive radiotherapy. However, the severe fringe artifacts and noise degradation caused by 4D CBCT reconstruction restrict its clinical application. We propose a novel deep unsupervised learning model to generate high-quality 4D CBCT from poor-quality 4D CBCT. Approach. The proposed model uses a contrastive loss function to preserve the anatomical structure in the corrected image. To preserve the relationship between the input and output images, we use a multilayer, patch-based method rather than operating on entire images. Furthermore, we draw negatives from within the input 4D CBCT rather than from the rest of the dataset. Main results. The results showed that the streak and motion artifacts were significantly suppressed. The spatial resolution of the pulmonary vessels and microstructure was also improved. To demonstrate the results in different directions, we provide an animation showing different views of the predicted corrected image in the supplementary material. Significance. The proposed method can be integrated into any 4D CBCT reconstruction method and may be a practical way to enhance the image quality of 4D CBCT.
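The patch-based contrastive objective described in this abstract builds on an InfoNCE-style loss: a feature vector from an output patch (query) is pulled toward the input patch at the same location (positive) and pushed away from other patches of the same input image (the internal negatives the authors mention). A minimal NumPy sketch of that loss, with toy feature vectors; this is illustrative, not the authors' implementation:

```python
import numpy as np

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for one query patch feature.

    query, positive: 1-D feature vectors for matching patch locations.
    negatives: feature vectors drawn from OTHER patches of the same
    input image (internal negatives, as described in the abstract).
    """
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # cross-entropy vs. the positive

# Toy check: a query aligned with its positive yields a lower loss
# than a query whose "positive" is random while a negative matches it.
rng = np.random.default_rng(0)
q = rng.normal(size=16)
good = info_nce(q, q + 0.01 * rng.normal(size=16),
                [rng.normal(size=16) for _ in range(8)])
bad = info_nce(q, rng.normal(size=16),
               [q] + [rng.normal(size=16) for _ in range(7)])
```

Minimizing this loss encourages the corrected patch to stay structurally faithful to the input patch at the same location, which is how the contrastive term preserves anatomy.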
12
Yang B, Chang Y, Liang Y, Wang Z, Pei X, Xu X, Qiu J. A Comparison Study Between CNN-Based Deformed Planning CT and CycleGAN-Based Synthetic CT Methods for Improving iCBCT Image Quality. Front Oncol 2022; 12:896795. [PMID: 35707352 PMCID: PMC9189355 DOI: 10.3389/fonc.2022.896795] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 04/27/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose The aim of this study is to compare two methods for improving the image quality of the Varian Halcyon cone-beam CT (iCBCT) system through the deformed planning CT (dpCT) based on the convolutional neural network (CNN) and the synthetic CT (sCT) generation based on the cycle-consistent generative adversarial network (CycleGAN). Methods A total of 190 paired pelvic CT and iCBCT image datasets were included in the study, of which 150 were used for model training and the remaining 40 for model testing. For the registration network, we proposed a 3D multi-stage registration network (MSnet) to deform planning CT images to agree with iCBCT images, and the contours from CT images were propagated to the corresponding iCBCT images through a deformation matrix. The overlap between the deformed contours (dpCT) and the fixed contours (iCBCT) was calculated to evaluate the registration accuracy. For the sCT generation, we trained the 2D CycleGAN using the deformation-registered CT-iCBCT slices and generated the sCT from the corresponding iCBCT image data. Then, on sCT images, physicians re-delineated the contours, which were compared with contours manually delineated on iCBCT images. The organs for contour comparison included the bladder, spinal cord, femoral head left, femoral head right, and bone marrow. The dice similarity coefficient (DSC) was used to evaluate the accuracy of registration and the accuracy of sCT generation. Results The DSC values of the registration and sCT generation were found to be 0.769 and 0.884 for the bladder (p < 0.05), 0.765 and 0.850 for the spinal cord (p < 0.05), 0.918 and 0.923 for the femoral head left (p > 0.05), 0.916 and 0.921 for the femoral head right (p > 0.05), and 0.878 and 0.916 for the bone marrow (p < 0.05), respectively. When the bladder volume difference between planning CT and iCBCT scans was more than double, the accuracy of sCT generation was significantly better than that of registration (DSC of bladder: 0.859 vs. 0.596, p < 0.05). Conclusion The registration and sCT generation could both improve the iCBCT image quality effectively, and the sCT generation could achieve higher accuracy when the difference between planning CT and iCBCT was large.
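The Dice similarity coefficient reported in this abstract measures the overlap of two contours rasterized as binary masks: DSC = 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical). A minimal sketch on boolean masks; the toy masks are illustrative, not the study's data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both contours empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 masks of 4 voxels each, overlapping in 2 voxels
a = np.zeros((4, 4)); a[0:2, 0:2] = 1   # |A| = 4
b = np.zeros((4, 4)); b[1:3, 0:2] = 1   # |B| = 4, |A ∩ B| = 2
score = dice(a, b)                       # 2*2 / (4+4) = 0.5
```

The same formula applies per organ in 3D; the study's comparison amounts to computing this score between re-delineated and reference contours for each structure.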
Affiliation(s)
- Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Zhiqun Wang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- *Correspondence: Jie Qiu