1
Taciuc IA, Dumitru M, Vrinceanu D, Gherghe M, Manole F, Marinescu A, Serboiu C, Neagos A, Costache A. Applications and challenges of neural networks in otolaryngology (Review). Biomed Rep 2024; 20:92. PMID: 38765859; PMCID: PMC11099604; DOI: 10.3892/br.2024.1781.
Abstract
Artificial intelligence (AI) has become a topic of interest that is frequently debated across research fields. The medical field is no exception, and several questions remain unanswered; when and how daily clinical routines can benefit from AI support are the most frequently asked. The present review describes the types of neural networks (NNs) available for development, discussing their advantages, disadvantages and practical applications. In addition, it summarizes how NNs (combined with various other features) have already been applied in studies in the ear, nose and throat (ENT) research field, from assisting diagnosis to treatment management. Although a definitive answer to these questions remains elusive, understanding the basics and types of applicable NNs may lead to future studies that combine more than one type of NN, an approach that could bypass the current limitations in the accuracy and relevance of AI-generated information. The reviewed studies, the majority of which used convolutional NNs, obtained accuracies varying from 70 to 98%, with a number of studies training the AI on a limited number of cases (<100 patients). The lack of standardization in AI research protocols negatively affects data homogeneity and the transparency of databases.
Affiliation(s)
- Iulian-Alexandru Taciuc: Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Mihai Dumitru: Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
- Daniela Vrinceanu: Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
- Mirela Gherghe: Department of Nuclear Medicine, ‘Carol Davila’ University of Medicine and Pharmacy, 022328 Bucharest, Romania
- Felicia Manole: Department of ENT, Faculty of Medicine, University of Oradea, 410073 Oradea, Romania
- Andreea Marinescu: Department of Radiology and Medical Imaging, ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
- Crenguta Serboiu: Department of Cell Biology, Molecular and Histology, ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
- Adriana Neagos: Department of ENT, ‘George Emil Palade’ University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540142 Mures, Romania
- Adrian Costache: Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
2
Galapon AV, Thummerer A, Langendijk JA, Wagenaar D, Both S. Feasibility of Monte Carlo dropout-based uncertainty maps to evaluate deep learning-based synthetic CTs for adaptive proton therapy. Med Phys 2024; 51:2499-2509. PMID: 37956266; DOI: 10.1002/mp.16838.
Abstract
BACKGROUND Deep learning has shown promising results in generating MRI-based synthetic CTs and enabling accurate proton dose calculations on MRIs. For clinical implementation of synthetic CTs, quality assurance tools that verify their quality and reliability are required but still lacking. PURPOSE This study aims to evaluate the predictive value of uncertainty maps generated with Monte Carlo dropout (MCD) for verifying proton dose calculations on deep-learning-based synthetic CTs (sCTs) derived from MRIs in online adaptive proton therapy. METHODS Two deep-learning models (DCNN and cycleGAN) were trained for CT image synthesis using 101 paired CT-MR images. sCT images were generated with MCD for each model by performing 10 inferences with activated dropout layers. The final sCT was obtained by averaging the inferred sCTs, while the uncertainty map was obtained from the per-voxel HU variance across the 10 sCTs. The resulting uncertainty maps were compared to the observed HU-, range-, WET-, and dose-error maps between the sCT and planning CT (pCT). For range and WET errors, the generated uncertainty maps were projected along the 90-degree angle. To evaluate the dose distribution, a mask based on the 5%-isodose curve was applied to include only voxels along the beam paths. Pearson's correlation coefficients were calculated to determine the correlation between the uncertainty maps and the HU, range, WET, and dose errors. To evaluate the dosimetric accuracy of the synthetic CTs, clinical proton treatment plans were recalculated and compared to those on the pCTs. RESULTS Evaluation of the correlation showed an average of r = 0.92 ± 0.03 and r = 0.92 ± 0.03 for errors between uncertainty-HU, r = 0.66 ± 0.09 and r = 0.62 ± 0.06 between uncertainty-range, r = 0.64 ± 0.06 and r = 0.58 ± 0.07 between uncertainty-WET, and r = 0.65 ± 0.09 and r = 0.67 ± 0.07 between uncertainty and dose difference for the DCNN and cycleGAN models, respectively.
Dosimetric comparison for target volumes showed an average 3%/3 mm gamma pass rate of 99.76 ± 0.43% (DCNN) and 99.10 ± 1.27% (cycleGAN). CONCLUSION The observed correlations between the uncertainty maps and the various error metrics (HU, range, WET, and dose) demonstrate the potential of MCD-based uncertainty maps as a reliable QA tool for evaluating the accuracy of deep learning-based sCTs.
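The MCD procedure described above reduces to a per-voxel mean and variance over repeated stochastic inferences. A minimal numpy sketch, with a toy noise model standing in for the dropout-enabled generator (all names and numbers are illustrative, not from the paper):

```python
import numpy as np

def mcd_mean_and_uncertainty(infer, n_samples=10):
    """Run `infer()` n_samples times with dropout kept active; return the
    voxel-wise mean (the final sCT) and variance (the uncertainty map)."""
    samples = np.stack([infer() for _ in range(n_samples)], axis=0)
    return samples.mean(axis=0), samples.var(axis=0)

# Toy stand-in for a dropout-enabled generator: a fixed HU slice plus
# voxel-wise noise mimicking the variability between stochastic passes.
rng = np.random.default_rng(0)
base_hu = np.full((4, 4), 40.0)  # tiny "CT slice" in HU
infer = lambda: base_hu + rng.normal(0.0, 5.0, size=base_hu.shape)

sct, uncertainty = mcd_mean_and_uncertainty(infer, n_samples=10)
```

Averaging the 10 inferences yields the sCT, and the variance map highlights voxels where the network is least consistent, which is what the study correlates against the HU, range, WET, and dose errors.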
Affiliation(s)
- Arthur Villanueva Galapon: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Adrian Thummerer: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Radiation Oncology, LMU University Hospital, LMU Munich, Germany
- Johannes Albertus Langendijk: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Dirk Wagenaar: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Stefan Both: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
3
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the reported methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all cited works were analyzed from various aspects, revealing that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. Finally, to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
4
Sun H, Yang Z, Zhu J, Li J, Gong J, Chen L, Wang Z, Yin Y, Ren G, Cai J, Zhao L. Pseudo-medical image-guided technology based on 'CBCT-only' mode in esophageal cancer radiotherapy. Comput Methods Programs Biomed 2024; 245:108007. PMID: 38241802; DOI: 10.1016/j.cmpb.2024.108007.
Abstract
Purpose To minimize the various errors introduced by image-guided radiotherapy (IGRT) in esophageal cancer treatment, this study proposes a novel technique based on the 'CBCT-only' mode of pseudo-medical image guidance. Methods The framework of this technology consists of two pseudo-medical image synthesis models, one in the CBCT→CT direction and one in the CT→PET direction. The former utilizes a dual-domain parallel deep learning model called AWM-PNet, which incorporates attention waning mechanisms; it effectively suppresses artifacts in CBCT images in both the sinogram and spatial domains while efficiently capturing important image features and contextual information. The latter leverages tumor location and shape information provided by clinical experts and introduces a PRAM-GAN model based on a prior region aware mechanism to establish a non-linear mapping between the CT and PET image domains, enabling the generation of pseudo-PET images that meet the clinical requirements of radiotherapy. Results The NRMSE and multi-scale SSIM (MS-SSIM) were used to evaluate the test set, with results presented as median values with lower- and upper-quartile ranges. For the AWM-PNet model, the NRMSE and MS-SSIM values were 0.0218 (0.0143, 0.0255) and 0.9325 (0.9141, 0.9410), respectively. The PRAM-GAN model produced NRMSE and MS-SSIM values of 0.0404 (0.0356, 0.0476) and 0.9154 (0.8971, 0.9294), respectively. Statistical analysis revealed significant differences (p < 0.05) between these models and the other compared methods. The dose metrics, including D98%, Dmean, and D2%, validated the accuracy of the HU values in the pseudo-CT images synthesized by the AWM-PNet. Furthermore, the Dice coefficient results confirmed statistically significant differences (p < 0.05) in GTV delineation between the pseudo-PET images synthesized by the PRAM-GAN model and those of the other compared methods.
Conclusion The AWM-PNet and PRAM-GAN models have the capability to generate accurate pseudo-CT and pseudo-PET images, respectively. The pseudo-image-guided technique based on the 'CBCT-only' mode shows promising prospects for application in esophageal cancer radiotherapy.
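The evaluation above reports NRMSE and summarizes each metric as a median with lower and upper quartiles. NRMSE normalizations vary between papers (range, mean, or standard deviation of the reference), so the sketch below uses range normalization as one common assumed choice, together with the median (Q1, Q3) reporting style:

```python
import numpy as np

def nrmse(reference, test):
    """RMSE normalized by the reference dynamic range (one common NRMSE
    convention; the paper's exact normalization is not stated here)."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return rmse / (reference.max() - reference.min())

def median_with_quartiles(values):
    """Summarize a list of per-patient metric values as
    median (lower quartile, upper quartile)."""
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    return med, q1, q3
```

For example, `median_with_quartiles([1, 2, 3, 4, 5])` reports the central tendency and spread without assuming a normal distribution, which is why quartile ranges rather than standard deviations appear in the results.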
Affiliation(s)
- Hongfei Sun: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhi Yang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jie Li: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Gong: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Liting Chen: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhongfei Wang: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yutian Yin: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao: Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
5
Hoffmans-Holtzer N, Magallon-Baro A, de Pree I, Slagter C, Xu J, Thill D, Olofsen-van Acht M, Hoogeman M, Petit S. Evaluating AI-generated CBCT-based synthetic CT images for target delineation in palliative treatments of pelvic bone metastasis at conventional C-arm linacs. Radiother Oncol 2024; 192:110110. PMID: 38272314; DOI: 10.1016/j.radonc.2024.110110.
Abstract
PURPOSE One-table treatments, with treatment imaging, preparation and delivery all occurring at a single treatment couch, could increase patient comfort and throughput for palliative treatments. On regular C-arm linacs, however, cone-beam CT (CBCT) image quality is currently insufficient. Therefore, our goal was to assess the suitability of AI-generated, CBCT-based synthetic CT (sCT) images for target delineation and treatment planning in palliative radiotherapy. MATERIALS AND METHODS CBCTs and planning CT scans of 22 female patients with pelvic bone metastasis were included. For each CBCT, a corresponding sCT image was generated by a deep learning model in ADMIRE 3.38.0. Radiation oncologists delineated 23 target volumes (TV) on the sCTs (TVsCT) and scored their delineation confidence. The delineations were transferred to the planning CTs and manually adjusted if needed to yield gold-standard target volumes (TVclin). TVsCT were geometrically compared to TVclin using the Dice coefficient (DC) and Hausdorff distance (HD). The dosimetric impact of TVsCT inaccuracies was evaluated for VMAT plans with different PTV margins. RESULTS Radiation oncologists scored the sCT quality as sufficient for 13/23 TVsCT (median: DC = 0.9, HD = 11 mm) and insufficient for 10/23 TVsCT (median: DC = 0.7, HD = 34 mm). For the sufficient category, remaining inaccuracies could be compensated by an additional margin of +1 to +4 mm to achieve coverage of V95% > 95% and V95% > 98%, respectively, in 12/13 TVsCT. CONCLUSION The evaluated sCT quality allowed accurate delineation for most targets, and sCTs of insufficient quality could be identified upfront. A moderate PTV margin expansion could address the remaining delineation inaccuracies. These findings therefore support further exploration of CBCT-based one-table treatments on C-arm linacs.
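The geometric comparison above uses the Dice coefficient (volume overlap) and the Hausdorff distance (worst-case contour deviation). A minimal numpy sketch of both, with a brute-force symmetric Hausdorff distance that is adequate for small contour point sets (clinical implementations typically operate on full 3D masks):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks; 1.0 means perfect agreement."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between point sets of shape (N, D) and
    (M, D): the largest distance from any point to the other set."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice is insensitive to a few badly placed contour points, while the Hausdorff distance is dominated by them, which is why the study reports both.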
Affiliation(s)
- Nienke Hoffmans-Holtzer: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Alba Magallon-Baro: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Ilse de Pree: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Cleo Slagter: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Jiaofeng Xu: Elekta Inc., St. Charles office, 1450 Beale St, St. Charles, MO 63303, USA
- Daniel Thill: Elekta Inc., St. Charles office, 1450 Beale St, St. Charles, MO 63303, USA
- Manouk Olofsen-van Acht: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Mischa Hoogeman: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Steven Petit: Erasmus MC - Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
6
Yang B, Liu Y, Zhu J, Dai J, Men K. Deep learning framework to improve the quality of cone-beam computed tomography for radiotherapy scenarios. Med Phys 2023; 50:7641-7653. PMID: 37345371; DOI: 10.1002/mp.16562.
Abstract
BACKGROUND The application of cone-beam computed tomography (CBCT) in image-guided radiotherapy and adaptive radiotherapy remains limited by its poor image quality. PURPOSE In this study, we aim to develop a deep learning framework to generate high-quality CBCT images for therapeutic applications. METHODS Synthetic CT (sCT) generation from CBCT was performed using a transformer-based network with a hybrid loss function. The network was trained and validated on data from 176 patients to produce a general model that can be applied extensively to enhance CBCT images. After the first treatment, each patient can receive paired CBCT/planning CT (pCT) scans, and these data were used to fine-tune the general model for further improvement, making a patient-specific, personalized model available for subsequent treatments. In total, 34 patients were examined for general-model testing, and another six patients who underwent a rescanned pCT were used for personalized-model training and testing. RESULTS The general model decreased the mean absolute error (MAE) from 135 HU to 59 HU compared with the original CBCT. The hybrid loss function demonstrated superior performance in CT number correction and noise/artifact reduction. The proposed transformer-based network also outperformed a classical convolutional neural network in CT number correction. The personalized model improved on the general model in some details, reducing the MAE from 59 HU to 57 HU (p < 0.05, Wilcoxon signed-rank test). CONCLUSION We established a transformer-based deep learning framework for clinical needs. The model demonstrated potential for continuous improvement through the suggested personalized training strategy, which is compatible with the clinical workflow.
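The MAE figures above (e.g. 135 HU down to 59 HU) are per-voxel absolute HU differences between the synthetic CT and the planning CT, usually averaged inside a body mask. A minimal numpy sketch (the optional mask argument is an assumption; the paper's exact evaluation region is not stated here):

```python
import numpy as np

def mae_hu(pct, sct, mask=None):
    """Mean absolute error in HU between a planning CT and a synthetic CT,
    optionally restricted to a boolean body mask."""
    diff = np.abs(pct.astype(np.float64) - sct.astype(np.float64))
    return diff[mask].mean() if mask is not None else diff.mean()
```

Restricting the average to a body mask avoids the large empty-air region diluting the error, which matters when comparing MAE values between studies.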
Affiliation(s)
- Bining Yang: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxiang Liu: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7
Yoganathan S, Aouadi S, Ahmed S, Paloor S, Torfeh T, Al-Hammadi N, Hammoud R. Generating synthetic images from cone beam computed tomography using self-attention residual UNet for head and neck radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100512. PMID: 38111501; PMCID: PMC10726231; DOI: 10.1016/j.phro.2023.100512.
Abstract
Background and purpose Accurate CT numbers in cone-beam CT (CBCT) are crucial for precise dose calculations in adaptive radiotherapy (ART). This study aimed to generate synthetic CT (sCT) from CBCT using deep learning (DL) models in head and neck (HN) radiotherapy. Materials and methods A novel DL model, the self-attention residual UNet (ResUNet), was developed for accurate sCT generation. ResUNet incorporates a self-attention mechanism in its long skip connections to enhance information transfer between the encoder and decoder. Data from 93 HN patients, each with planning CT (pCT) and first-day CBCT images, were used. Model performance was evaluated using two DL approaches (non-adversarial and adversarial training) and two model types (2D axial only vs. 2.5D axial, sagittal, and coronal). ResUNet was compared with the traditional UNet through image quality assessment (mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM)) and dose calculation accuracy (DVH deviation and gamma evaluation (1%/1 mm)). Results Image similarity results for the 2.5D-ResUNet and 2.5D-UNet models were: MAE 46±7 HU vs. 51±9 HU, PSNR 66.6±2.0 dB vs. 65.8±1.8 dB, and SSIM 0.81±0.04 vs. 0.79±0.05. There were no significant differences in dose calculation accuracy between the DL models; both demonstrated DVH deviation below 0.5% and a gamma pass rate (1%/1 mm) exceeding 97%. Conclusions ResUNet enhanced the CT number accuracy and image quality of sCT and outperformed UNet in sCT generation from CBCT. This method holds promise for generating precise sCT for HN ART.
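SSIM, reported above alongside MAE and PSNR, compares luminance, contrast and structure between two images. Published SSIM values are normally computed with a local sliding window (e.g. scikit-image's `structural_similarity`); the single-window global variant below is a simplified sketch of the same formula, useful only for seeing how the terms combine:

```python
import numpy as np

def global_ssim(x, y, data_range):
    """Single-window (global) SSIM. Real evaluations average the same
    formula over local windows; treat this as an approximation."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; the constants c1 and c2 stabilize the ratio when means or variances are near zero, with `data_range` being the dynamic range of the pixel values.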
Affiliation(s)
- S.A. Yoganathan: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Souha Aouadi: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Sharib Ahmed: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Satheesh Paloor: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Tarraf Torfeh: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Noora Al-Hammadi: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
- Rabih Hammoud: Department of Radiation Oncology, National Center for Cancer Care & Research (NCCCR), Hamad Medical Corporation, Doha, Qatar
8
Li Z, Zhang Q, Li H, Kong L, Wang H, Liang B, Chen M, Qin X, Yin Y, Li Z. Using RegGAN to generate synthetic CT images from CBCT images acquired with different linear accelerators. BMC Cancer 2023; 23:828. PMID: 37670252; PMCID: PMC10478281; DOI: 10.1186/s12885-023-11274-7.
Abstract
BACKGROUND The goal was to investigate the feasibility of the registration generative adversarial network (RegGAN) model for image conversion in adaptive radiation therapy of the head and neck, and its stability across cone beam computed tomography (CBCT) images from different linear accelerators. METHODS A total of 100 CBCT and CT images of patients diagnosed with head and neck tumors were utilized for the training phase, whereas the testing phase involved 40 distinct patients whose images were acquired on four different linear accelerators. The RegGAN model was trained and tested to evaluate its performance. The quality of the generated synthetic CT (sCT) images was compared to that of planning CT (pCT) images using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Moreover, the radiation therapy plan was uniformly applied to both the sCT and pCT images to analyze the planning target volume (PTV) dose statistics and calculate the dose difference rate, further verifying the model's accuracy. RESULTS The generated sCT images had good image quality, and no significant differences were observed among the different CBCT modes. The conversion effect achieved for the Synergy accelerator was the best: the MAE decreased from 231.3 ± 55.48 to 45.63 ± 10.78, the PSNR increased from 19.40 ± 1.46 to 26.75 ± 1.32, and the SSIM increased from 0.82 ± 0.02 to 0.85 ± 0.04. The quality improvement achieved by RegGAN-based sCT synthesis was evident, with no significant synthesis differences among the accelerators. CONCLUSION The sCT images generated by the RegGAN model had high image quality, and the model exhibited a strong generalization ability across accelerators, enabling its outputs to be used as reference images for adaptive radiation therapy of the head and neck.
Affiliation(s)
- Zhenkai Li: Chengdu University of Technology, Chengdu, China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Haodong Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Lingke Kong: Manteia Technologies Co., Ltd., Xiamen, China
- Huadong Wang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Benzhe Liang: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Mingming Chen: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xiaohang Qin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
9
Li X, Huang S, Pan Z, Qin P, Wu W, Qi M, Ma J, Kang S, Chen J, Zhou L, Xu Y, Qin G. Deep learning based de-overlapping correction of projections from a flat-panel micro array X-ray source: Simulation study. Phys Med 2023; 111:102607. PMID: 37210964; DOI: 10.1016/j.ejmp.2023.102607.
Abstract
PURPOSE The flat-panel X-ray source is an experimental X-ray emitter whose target application is static computed tomography (CT), which can save imaging space and time. However, the X-ray cone beams emitted by the densely arranged micro X-ray sources overlap, causing serious structural overlap and visual blur in the projection results, which traditional de-overlapping methods can hardly resolve. METHOD We converted the overlapping cone-beam projections to parallel-beam projections through a U-like neural network, with the structural similarity (SSIM) loss as the loss function. In this study, we converted three kinds of overlapping cone-beam projections (Shepp-Logan, line-pairs, and abdominal data) at two overlap levels to the corresponding parallel-beam projections. After training, we tested the model on test-set data not used during training and evaluated the difference between the conversion results and the corresponding parallel beams with three metrics: mean squared error (MSE), peak signal-to-noise ratio (PSNR), and SSIM. In addition, projections from head phantoms were used for a generalization test. RESULT In the Shepp-Logan low-overlap task, we obtained an MSE of 1.624×10⁻⁵, a PSNR of 47.892 dB, and an SSIM of 0.998, the best results of the six experiments. For the most challenging abdominal task, the MSE, PSNR, and SSIM were 1.563×10⁻³, 28.0586 dB, and 0.983, respectively. The model also achieved good results on the more general data. CONCLUSION This study demonstrates the feasibility of an end-to-end U-net for deblurring and de-overlapping in the flat-panel X-ray source domain.
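MSE and PSNR, as used above, are directly related: PSNR is the peak intensity squared over the MSE, on a decibel scale. A minimal numpy sketch (the `data_range` default of 255 assumes 8-bit images; normalized projections would use their own range):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)
```

Because PSNR is logarithmic in MSE, the paper's 47.892 dB vs. 28.0586 dB gap corresponds to roughly a hundredfold difference in MSE, consistent with the reported 1.624×10⁻⁵ vs. 1.563×10⁻³.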
Affiliation(s)
- Xu Li: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Shuang Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Zengxiang Pan: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Peishan Qin: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Wangjiang Wu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Mengke Qi: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Jianhui Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Song Kang: State Key Laboratory of Optoelectronic Materials and Technologies, Guangdong Province Key Laboratory of Display Material and Technology, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510515, China
- Jun Chen: State Key Laboratory of Optoelectronic Materials and Technologies, Guangdong Province Key Laboratory of Display Material and Technology, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510515, China
- Linghong Zhou: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yuan Xu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Genggeng Qin: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
10
Kang SR, Shin W, Yang S, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Structure-preserving quality improvement of cone beam CT images using contrastive learning. Comput Biol Med 2023; 158:106803. PMID: 36989743; DOI: 10.1016/j.compbiomed.2023.106803.
Abstract
Cone-beam CT (CBCT) is widely used in dental clinics but exhibits limitations in assessing soft tissue pathology because of its lack of contrast resolution and low Hounsfield unit (HU) quantification accuracy. We aimed to increase the image quality and HU accuracy of CBCT while preserving anatomical structures. We generated CT-like images from CBCT images using a patchwise contrastive learning-based GAN model. Our model was trained on unpaired CT and CBCT datasets with a novel combination of losses and a feature extractor pretrained on our training dataset. We evaluated the quality of the images generated by our model in terms of Fréchet inception distance (FID), peak signal-to-noise ratio (PSNR), mean absolute error (MAE), and root mean square error (RMSE). Additionally, structure preservation performance was assessed by a structure score. The CT-like images generated by our model were significantly superior to those generated by various baseline models in terms of FID, PSNR, MAE, RMSE, and structure score. We therefore demonstrated that our model provides the complementary benefits of preserving the anatomical structures of the input CBCT images and improving the image quality to approach that of CT images.
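The pixel-level image-quality metrics used in this entry (MAE, RMSE, PSNR) can be sketched in a few lines of numpy; FID, which requires a pretrained Inception network, is omitted, and the toy arrays below are illustrative, not data from the paper:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error, e.g. between generated CT-like and reference CT slices."""
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):
    """Root mean square error."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    return float(10.0 * np.log10(data_range ** 2 / np.mean((a - b) ** 2)))

# Toy 2x2 "slices" standing in for reference and generated images (values in HU).
ref = np.array([[0.0, 100.0], [200.0, 300.0]])
gen = np.array([[10.0, 90.0], [210.0, 290.0]])
print(mae(ref, gen), rmse(ref, gen))  # 10.0 10.0
```

Since every voxel here is off by exactly 10 HU, MAE and RMSE coincide; on real images RMSE penalizes large outlier errors more heavily than MAE.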
Collapse
|
11
|
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer in the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches are focused on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. While maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases, or 75%) and test Set 2 (14 cases, or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographics information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
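The 42/14 split described above can be sketched as a plain random partition of case identifiers; note that the actual HaN-Seg split additionally preserved the age, gender, and annotation-type distributions, which this minimal stdlib sketch (with hypothetical case IDs) does not attempt:

```python
import random

def split_cases(case_ids, train_fraction=0.75, seed=42):
    """Randomly partition case IDs into training and test sets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(case_ids)
    rng.shuffle(ids)
    n_train = round(train_fraction * len(ids))
    return ids[:n_train], ids[n_train:]

# 56 cases -> 42 training (Set 1) and 14 test (Set 2).
train_set, test_set = split_cases([f"case_{i:02d}" for i in range(1, 57)])
print(len(train_set), len(test_set))  # 42 14
```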
Collapse
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| | | | | | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
12
|
Sun J, Xu X, Feng S, Zhang H, Xu L, Jiang H, Sun B, Meng Y, Chen W. Rapid identification of salmonella serovars by using Raman spectroscopy and machine learning algorithm. Talanta 2023; 253:123807. [PMID: 36115103 DOI: 10.1016/j.talanta.2022.123807] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 07/26/2022] [Accepted: 07/29/2022] [Indexed: 12/13/2022]
Abstract
A widespread and escalating public health problem worldwide is foodborne illness, and foodborne Salmonella infection is one of the most common causes of human illness. For the three most pathogenic Salmonella serotypes, Raman spectroscopy was employed to acquire spectral data. As machine learning offers high efficiency and accuracy, we chose the convolutional neural network (CNN), which is suitable for solving multi-classification problems, to perform in-depth mining and analysis of Raman spectral data. To optimize the instrument parameters, we compared three laser wavelengths: 532, 638, and 785 nm. Ultimately, the 532 nm wavelength was chosen as the most effective for detecting Salmonella. A pre-processing step is necessary to remove interference from the background noise of the Raman spectrum. Our study compared the effects of several spectral preprocessing methods, including Savitzky-Golay smoothing (SG), Multivariate Scatter Correction (MSC), Standard Normal Variate (SNV), and Hilbert Transform (HT), on the predictive power of CNN models. Four machine learning evaluation indicators, accuracy (ACC), precision, recall, and F1-score, were used to evaluate model performance under the different preprocessing methods. In the results, SG combined with SNV was found to be the most accurate spectral pre-processing method for predicting Salmonella serotypes using Raman spectroscopy, achieving an accuracy of 98.7% on the training set and over 98.5% on the test set in the CNN model. Pre-processing spectral data with this method yields higher accuracy than the other methods. In conclusion, the results of this study demonstrate that Raman spectroscopy, when used in conjunction with a convolutional neural network model, enables the rapid identification of three Salmonella serotypes at the single-cell level, and that the model has great potential for distinguishing between different serotypes of pathogenic bacteria and closely related bacterial species. This is vital to preventing outbreaks of foodborne illness and the spread of foodborne pathogens.
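The winning SG + SNV pre-processing pipeline can be sketched as below on a synthetic spectrum; for brevity a numpy moving average stands in for the Savitzky-Golay filter (in practice `scipy.signal.savgol_filter` would be used), while the SNV step is implemented as described:

```python
import numpy as np

def smooth(spectrum, window=11):
    """Simple moving-average smoother; a stand-in for the paper's
    Savitzky-Golay (SG) filter (scipy.signal.savgol_filter)."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

def snv(spectrum):
    """Standard Normal Variate: center each spectrum to zero mean
    and scale it to unit standard deviation."""
    return (spectrum - spectrum.mean()) / spectrum.std()

# Synthetic noisy "Raman spectrum" over 200 wavenumber bins.
rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.normal(size=200)
spec = snv(smooth(raw))
print(float(spec.mean()), float(spec.std()))  # ~0 and ~1 after SNV
```

After this step each spectrum has a comparable scale, so the downstream CNN sees intensity patterns rather than per-measurement baseline offsets.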
Collapse
Affiliation(s)
- Jiazheng Sun
- College of Criminal Investigation, People's Public Security University of China, Beijing, 100038, PR China
| | - Xuefang Xu
- State Key Laboratory of Communicable Disease Prevention and Control, Institute for Communicable Disease Prevention and Control, Chinese Center for Disease Control and Prevention, Beijing, 102206, PR China
| | - Songsong Feng
- College of Information and Cyber Security, People's Public Security University of China, Beijing, 100038, PR China
| | - Hanyu Zhang
- School of Criminology, People's Public Security University of China, Beijing, 100038, PR China
| | - Lingfeng Xu
- College of Criminal Investigation, People's Public Security University of China, Beijing, 100038, PR China
| | - Hong Jiang
- College of Criminal Investigation, People's Public Security University of China, Beijing, 100038, PR China.
| | - Baibing Sun
- College of Information and Cyber Security, People's Public Security University of China, Beijing, 100038, PR China
| | - Yuyan Meng
- College of Information and Cyber Security, People's Public Security University of China, Beijing, 100038, PR China
| | - Weizhou Chen
- School of Law, People's Public Security University of China, Beijing, 100038, PR China
| |
Collapse
|
13
|
Wang H, Liu X, Kong L, Huang Y, Chen H, Ma X, Duan Y, Shao Y, Feng A, Shen Z, Gu H, Kong Q, Xu Z, Zhou Y. Improving CBCT image quality to the CT level using RegGAN in esophageal cancer adaptive radiotherapy. Strahlenther Onkol 2023; 199:485-497. [PMID: 36688953 PMCID: PMC10133081 DOI: 10.1007/s00066-022-02039-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 12/04/2022] [Indexed: 01/24/2023]
Abstract
OBJECTIVE This study aimed to improve the image quality and CT Hounsfield unit accuracy of daily cone-beam computed tomography (CBCT) using registration generative adversarial networks (RegGAN) and apply synthetic CT (sCT) images to dose calculations in radiotherapy. METHODS The CBCT/planning CT images of 150 esophageal cancer patients undergoing radiotherapy were used for training (120 patients) and testing (30 patients). An unsupervised deep-learning method, the 2.5D RegGAN model with an adaptively trained registration network, was proposed, through which sCT images were generated. The quality of the deep-learning-generated sCT images was quantitatively compared to the reference deformed CT (dCT) images using the mean absolute error (MAE) and root mean square error (RMSE) of Hounsfield units (HU), and the peak signal-to-noise ratio (PSNR). The dose calculation accuracy was further evaluated for esophageal cancer radiotherapy plans, with the same plans calculated on dCT, CBCT, and sCT images. RESULTS The quality of sCT images produced by RegGAN was significantly improved compared to the original CBCT images. RegGAN achieved image quality in the testing patients with MAE sCT vs. CBCT: 43.7 ± 4.8 vs. 80.1 ± 9.1; RMSE sCT vs. CBCT: 67.2 ± 12.4 vs. 124.2 ± 21.8; and PSNR sCT vs. CBCT: 27.9 ± 5.6 vs. 21.3 ± 4.2. The sCT images generated by the RegGAN model also showed superior dose calculation accuracy, with higher gamma passing rates (93.3 ± 4.4, 90.4 ± 5.2, and 84.3 ± 6.6) compared to the original CBCT images (89.6 ± 5.7, 85.7 ± 6.9, and 72.5 ± 12.5) under the criteria of 3 mm/3%, 2 mm/2%, and 1 mm/1%, respectively. CONCLUSION The proposed deep-learning RegGAN model seems promising for efficient generation of high-quality sCT images from stand-alone thoracic CBCT images and thus has the potential to support CBCT-based esophageal cancer adaptive radiotherapy.
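The 3 mm/3% (etc.) gamma passing rates quoted above compare dose distributions point by point, combining a dose-difference tolerance with a distance-to-agreement tolerance. A deliberately naive 1D global-gamma sketch on synthetic Gaussian dose profiles (not the paper's clinical dose data, which would use a full 3D implementation) illustrates the criterion:

```python
import numpy as np

def gamma_passing_rate(x, dose_ref, dose_eval, dose_tol, dist_tol):
    """Naive global 1D gamma analysis: for each reference point, search all
    evaluation points for the minimum combined dose/distance deviation and
    count the point as passing if that minimum is <= 1."""
    passed = 0
    for xi, di in zip(x, dose_ref):
        dd = (dose_eval - di) / dose_tol  # dose-difference term
        dx = (x - xi) / dist_tol          # distance-to-agreement term
        gamma = np.min(np.sqrt(dd ** 2 + dx ** 2))
        passed += gamma <= 1.0
    return 100.0 * passed / len(x)

x = np.linspace(0, 50, 101)              # positions in mm, 0.5 mm spacing
ref = np.exp(-((x - 25.0) / 10) ** 2)    # reference dose profile (normalized)
ev = np.exp(-((x - 25.5) / 10) ** 2)     # evaluated profile, shifted by 0.5 mm
rate = gamma_passing_rate(x, ref, ev, dose_tol=0.03, dist_tol=3.0)
print(rate)  # 100.0 -- a 0.5 mm shift passes easily under 3 mm/3%
```

Here `dose_tol=0.03` means 3% of the (unit) maximum dose, i.e. a global criterion; a sub-millimeter shift stays well inside the 3 mm distance tolerance, so every point passes.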
Collapse
Affiliation(s)
- Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Institute of Modern Physics, Fudan University, Shanghai, China
| | - Xiao Liu
- Department of Radiotherapy, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
| | | | - Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Xiurui Ma
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
| | - Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Zhenjiong Shen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Qing Kong
- Institute of Modern Physics, Fudan University, Shanghai, China
| | - Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
| | - Yongkang Zhou
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China.
| |
Collapse
|
14
|
de Hond YJ, Kerckhaert CE, van Eijnatten MA, van Haaren PM, Hurkmans CW, Tijssen RH. Anatomical evaluation of deep-learning synthetic computed tomography images generated from male pelvis cone-beam computed tomography. Phys Imaging Radiat Oncol 2023; 25:100416. [PMID: 36969503 PMCID: PMC10037090 DOI: 10.1016/j.phro.2023.100416] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 01/17/2023] [Accepted: 01/18/2023] [Indexed: 01/25/2023] Open
Abstract
Background and purpose To improve cone-beam computed tomography (CBCT), deep-learning (DL) models are being explored to generate synthetic CTs (sCT). The sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy of the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. The image quality was assessed using image metrics, such as the Mean Absolute Error (MAE). The anatomical correctness between sCT and CBCT was quantified using organs-at-risk volumes and average surface distances (ASD). Results MAE was 24 Hounsfield Units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions Although Dual-UNet performed best in standard image quality measures, such as MAE, the contour-based anatomical feature comparison with the CBCT showed that Dual-UNet performed worst on anatomical comparison. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
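The average surface distance (ASD) used above to quantify anatomical agreement can be sketched with a brute-force nearest-neighbour search; the two concentric circles below are synthetic stand-ins for organ contours (real pipelines would extract surface voxels from segmentation masks):

```python
import numpy as np

def average_surface_distance(a, b):
    """Symmetric average surface distance between two contours given as
    (N, 2) arrays of surface points: mean nearest-point distance from a
    to b, averaged with the mean nearest-point distance from b to a."""
    d_ab = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return 0.5 * (d_ab.min(axis=1).mean() + d_ab.min(axis=0).mean())

# Two concentric circles with radii 10 and 11 -> ASD of 1.0 (units, e.g. mm).
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
c1 = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
c2 = np.stack([11 * np.cos(t), 11 * np.sin(t)], axis=1)
print(round(average_surface_distance(c1, c2), 3))  # 1.0
```

Unlike a volume overlap measure, ASD directly reports how far apart the two surfaces are, which is why it exposes anatomical shifts that MAE-style intensity metrics miss.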
Collapse
Affiliation(s)
- Yvonne J.M. de Hond
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
| | - Camiel E.M. Kerckhaert
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
| | | | - Paul M.A. van Haaren
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
| | - Coen W. Hurkmans
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
| | - Rob H.N. Tijssen
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
| |
Collapse
|