1
Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024. PMID: 39105745. DOI: 10.1007/s00066-024-02262-2.
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges to clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
2
Radici L, Piva C, Casanova Borca V, Cante D, Ferrario S, Paolini M, Cabras L, Petrucci E, Franco P, La Porta MR, Pasquino M. Clinical evaluation of a deep learning CBCT auto-segmentation software for prostate adaptive radiation therapy. Clin Transl Radiat Oncol 2024; 47:100796. PMID: 38884004. PMCID: PMC11176659. DOI: 10.1016/j.ctro.2024.100796.
Abstract
Purpose: The aim of the present study is to characterize a deep learning-based auto-segmentation software (DL) for prostate cone beam computed tomography (CBCT) images and to evaluate its applicability in the clinical adaptive radiation therapy routine. Materials and methods: Ten patients, who received exclusive radiation therapy with definitive intent on the prostate gland and seminal vesicles, were selected. Femoral heads, bladder, rectum, prostate, and seminal vesicles were retrospectively contoured by four different expert radiation oncologists on the patients' CBCTs acquired during treatment. Consensus contours (CC) were generated from these data and compared with those created by DL with different algorithms, trained on CBCT (DL-CBCT) or computed tomography (DL-CT). The Dice similarity coefficient (DSC), centre-of-mass (COM) shift, and volume relative variation (VRV) were chosen as comparison metrics. Since no tolerance limit can be defined, results were also compared with the inter-operator variability (IOV) using the same metrics. Results: The best agreement between DL and CC was observed for the femoral heads (DSC of 0.96 for both DL-CBCT and DL-CT). Performance worsened for low-contrast soft-tissue organs: the worst results were found for the seminal vesicles (DSC of 0.70 and 0.59 for DL-CBCT and DL-CT, respectively). The analysis shows that it is appropriate to use the algorithm trained on the specific imaging modality. Furthermore, the statistical analysis showed that, for almost all considered structures, there is no significant difference between DL-CBCT and human operators in terms of IOV. Conclusions: The accuracy of DL-CBCT is in accordance with CC; its use in clinical practice is justified by the comparison with the inter-operator variability.
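The three comparison metrics named in this abstract have compact definitions. A minimal sketch (toy binary masks represented as voxel-index sets, unrelated to the study's data or code):

```python
# Illustrative only: Dice similarity coefficient, centre-of-mass (COM) shift,
# and volume relative variation (VRV) on made-up 3D voxel masks.
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def com(mask):
    """Centre of mass of a voxel set."""
    n = len(mask)
    return tuple(sum(v[i] for v in mask) / n for i in range(3))

def com_shift(a, b):
    """Euclidean distance between the two centres of mass."""
    return math.dist(com(a), com(b))

def vrv(test, ref):
    """Volume relative variation of `test` with respect to `ref`."""
    return (len(test) - len(ref)) / len(ref)

# Toy 'consensus' and 'deep learning' masks: two 4x4x4 cubes offset by 1 voxel
cc = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
dl = {(x, y, z) for x in range(1, 5) for y in range(4) for z in range(4)}

print(dice(cc, dl))       # 0.75
print(com_shift(cc, dl))  # 1.0 (shifted by one voxel along x)
print(vrv(dl, cc))        # 0.0 (equal volumes)
```

Per-organ tolerance is then judged by comparing these numbers against the same metrics computed between human observers, as the abstract describes.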
Affiliation(s)
- Laura Cabras
- Medical Physics Department, ASL TO4 Ivrea, Italy
- Pierfrancesco Franco
- Department of Translational Sciences (DIMET), University of Eastern Piedmont, Novara, Italy
- Department of Radiation Oncology, 'Maggiore della Carità' University Hospital, Novara, Italy
3
Sample CM, Uribe C, Rahmim A, Bénard F, Wu J, Clark H. Heterogeneous PSMA ligand uptake inside parotid glands. Phys Med 2024; 121:103366. PMID: 38657425. DOI: 10.1016/j.ejmp.2024.103366.
Abstract
The purpose of this investigation is to quantify the spatial heterogeneity of prostate-specific membrane antigen (PSMA) positron emission tomography (PET) uptake within parotid glands. We aim to quantify patterns in well-defined regions to facilitate further investigations. Furthermore, we investigate whether uptake is correlated with computed tomography (CT) texture features. METHODS: Parotid glands from [18F]DCFPyL PSMA PET/CT images of 30 prostate cancer patients were analyzed. Uptake patterns were assessed with various segmentation schemes. Spearman's rank correlation coefficient was calculated between PSMA PET uptake and grey-level run-length matrix feature values, using long- and short-run-length emphasis (GLRLML and GLRLMS), in subregions of the parotid gland. RESULTS: PSMA PET uptake was significantly higher (p < 0.001) in lateral/posterior regions of the glands than in anterior/medial regions. Maximum uptake was found in the lateral half of the parotid gland in 50 out of 60 glands. The difference in SUVmean between parotid halves is greatest when parotids are divided by a plane separating the anterior/medial and posterior/lateral halves symmetrically (out of 120 bisections tested). PSMA PET uptake was significantly correlated with CT GLRLML (p < 0.001) and anti-correlated with CT GLRLMS (p < 0.001). CONCLUSION: PSMA PET uptake is heterogeneous within parotid glands, biased towards lateral/posterior regions. Uptake within parotid glands was strongly correlated with CT texture feature maps.
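As an illustrative sketch under simplifying assumptions (runs collected along one direction only, toy grey levels, not the authors' implementation), the long- and short-run emphasis features of a run-length texture analysis can be computed like this:

```python
# Toy run-length features: long runs of equal grey levels indicate smooth
# texture (high long-run emphasis), short runs indicate busy texture.
def runs(row):
    """Yield lengths of runs of consecutive equal grey levels in one row."""
    length = 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            length += 1
        else:
            yield length
            length = 1
    yield length

def run_emphases(image):
    """Return (short-run emphasis, long-run emphasis) over horizontal runs."""
    lengths = [l for row in image for l in runs(row)]
    n = len(lengths)
    sre = sum(1 / l ** 2 for l in lengths) / n
    lre = sum(l ** 2 for l in lengths) / n
    return sre, lre

smooth = [[1, 1, 1, 1], [2, 2, 2, 2]]  # two runs of length 4
noisy = [[1, 2, 1, 2], [2, 1, 2, 1]]   # eight runs of length 1
print(run_emphases(smooth))  # (0.0625, 16.0)
print(run_emphases(noisy))   # (1.0, 1.0)
```

A full grey-level run-length matrix additionally bins runs by grey level and aggregates several directions; this sketch keeps only the run-length dimension.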
Affiliation(s)
- Caleb M Sample
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada; Department of Medical Physics, BC Cancer, Surrey, BC, Canada.
- Carlos Uribe
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada; Department of Functional Imaging, BC Cancer, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada; Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- François Bénard
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada; Department of Functional Imaging, BC Cancer, Vancouver, BC, Canada; Department of Molecular Oncology, BC Cancer, Vancouver, BC, Canada
- Jonn Wu
- Department of Radiation Oncology, BC Cancer, Vancouver, BC, Canada; Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Haley Clark
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada; Department of Medical Physics, BC Cancer, Surrey, BC, Canada; Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
4
Sample C, Rahmim A, Uribe C, Bénard F, Wu J, Fedrigo R, Clark H. Neural blind deconvolution for deblurring and supersampling PSMA PET. Phys Med Biol 2024; 69:085025. PMID: 38513292. DOI: 10.1088/1361-6560/ad36a9.
Abstract
Objective. To simultaneously deblur and supersample prostate-specific membrane antigen (PSMA) positron emission tomography (PET) images using neural blind deconvolution. Approach. Blind deconvolution is a method of estimating the hypothetical 'deblurred' image along with the blur kernel (related to the point spread function) simultaneously. Traditional maximum a posteriori blind deconvolution methods require stringent assumptions and suffer from convergence to a trivial solution. A method of modelling the deblurred image and kernel with independent neural networks, called 'neural blind deconvolution', demonstrated success for deblurring 2D natural images in 2020. In this work, we adapt neural blind deconvolution to deblur PSMA PET images while simultaneously supersampling them to double the original resolution. We compare this methodology with several interpolation methods in terms of resultant blind image quality metrics and test the model's ability to predict accurate kernels by re-running the model after applying artificial 'pseudokernels' to deblurred images. The methodology was tested on a retrospective set of 30 prostate patients as well as on phantom images containing spherical lesions of various volumes. Main results. Neural blind deconvolution led to improvements in image quality over other interpolation methods in terms of blind image quality metrics, recovery coefficients, and visual assessment. Predicted kernels were similar between patients, and the model accurately predicted several artificially applied pseudokernels. Localization of activity in phantom spheres was improved after deblurring, allowing small lesions to be more accurately defined. Significance. The intrinsically low spatial resolution of PSMA PET leads to partial volume effects (PVEs), which negatively impact uptake quantification in small regions. The proposed method can be used to mitigate this issue and can be straightforwardly adapted for other imaging modalities.
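Blind deconvolution, as described above, jointly estimates the underlying image and the blur kernel from the blurred observation alone. A minimal 1D gradient-descent sketch of that joint estimation (plain least squares on made-up signals; illustrative only, not the paper's neural-network approach and without its regularization):

```python
# Toy 1D blind deconvolution: jointly fit an unknown signal x and blur kernel
# k to an observed blurred signal y by gradient descent on ||x * k - y||^2.
def conv(x, k):
    """Full 1D convolution of lists x and k."""
    y = [0.0] * (len(x) + len(k) - 1)
    for i, xi in enumerate(x):
        for j, kj in enumerate(k):
            y[i + j] += xi * kj
    return y

def loss(x, k, y):
    return sum((a - b) ** 2 for a, b in zip(conv(x, k), y))

def step(x, k, y, lr=0.005):
    """One joint gradient-descent step on both the signal and the kernel."""
    r = [a - b for a, b in zip(conv(x, k), y)]
    gx = [2 * sum(k[j] * r[i + j] for j in range(len(k))) for i in range(len(x))]
    gk = [2 * sum(x[i] * r[i + j] for i in range(len(x))) for j in range(len(k))]
    return ([xi - lr * g for xi, g in zip(x, gx)],
            [kj - lr * g for kj, g in zip(k, gk)])

x_true = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 3.0, 0.0]  # made-up 'sharp' signal
k_true = [0.25, 0.5, 0.25]                          # made-up blur kernel
y = conv(x_true, k_true)                            # the 'observed' blur

x = [0.5] * len(x_true)     # flat initial guesses for signal and kernel
k = [1.0 / 3] * len(k_true)
before = loss(x, k, y)
for _ in range(800):
    x, k = step(x, k, y)
print(before, '->', loss(x, k, y))  # the fit error shrinks
```

The problem is ill-posed (many (x, k) pairs explain the same y), which is exactly why the paper constrains both estimates with neural network parameterizations instead of raw gradient descent.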
Affiliation(s)
- Caleb Sample
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Carlos Uribe
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Functional Imaging, BC Cancer, Vancouver, BC, Canada
- François Bénard
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, Canada
- Jonn Wu
- Department of Radiation Oncology, BC Cancer, Vancouver, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Roberto Fedrigo
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Haley Clark
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
5
Sample C, Rahmim A, Benard F, Wu J, Clark H. PSMA PET/CT as a predictive tool for subregional importance estimates in the parotid gland. Biomed Phys Eng Express 2024; 10:025020. PMID: 38271732. DOI: 10.1088/2057-1976/ad229c.
Abstract
Objective. Xerostomia and radiation-induced salivary gland dysfunction remain a common side effect for head-and-neck radiotherapy patients, and attempts have been made to quantify the heterogeneity of the dose response within parotid glands. Prostate-specific membrane antigen (PSMA) ligands have demonstrated high uptake in salivary glands, which has been shown to correlate with gland functionality. Here we compare several models of parotid gland subregional relative importance with PSMA positron emission tomography (PET) uptake. We then develop a predictive model for Clark et al's relative importance estimates using PSMA PET and CT radiomic features and demonstrate a methodology for predicting patient-specific importance deviations from the population. Approach. Intra-parotid gland uptake was compared with four regional importance models using 30 [18F]DCFPyL PSMA PET images. The correlation of uptake and importance was ascertained when numerous non-overlapping subregions were defined, while a paired t-test was used to compare binary region pairs. A radiomics-based predictive model of population importance was developed using a double cross-validation methodology. A model was then devised for supplementing population-level subregional importance estimates for each patient using patient-specific radiomic features. Main results. Anticorrelative relationships were found to exist between PSMA PET uptake and four independent models of subregional parotid gland importance from the literature. Kernel ridge regression with principal component analysis feature selection performed best over test sets (mean absolute error = 0.08), with grey-level co-occurrence matrix (GLCM) features being particularly important. Deblurring PSMA PET images with neural blind deconvolution strengthened correlations and improved model performance. Significance. This study suggests that regions of relatively low PSMA PET uptake in parotid glands may exhibit relatively high dose-sensitivity. We demonstrated the utility of PSMA PET radiomic features for predicting relative importance within subregions of parotid glands. PSMA PET appears to be a promising quantitative imaging modality for analyzing salivary gland functionality.
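Kernel ridge regression, the best-performing model above, has a closed-form fit: solve (K + λI)α = y for the dual weights α, where K is the kernel matrix over training samples. A self-contained sketch with a Gaussian (RBF) kernel on made-up "radiomic feature" vectors (the features, labels, and hyperparameters are illustrative assumptions, not the study's pipeline):

```python
# Illustrative kernel ridge regression from scratch on toy feature vectors.
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(X, y, lam=0.01):
    """Dual weights alpha = (K + lam*I)^-1 y."""
    K = [[rbf(xi, xj) + (lam if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    return solve(K, y)

def krr_predict(X, alpha, x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, X))

# Made-up 2D 'radiomic feature' vectors with 'importance' labels in [0, 1]
X = [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]]
y = [0.2, 0.8, 0.5, 0.9]
alpha = krr_fit(X, y)
print([round(krr_predict(X, alpha, xi), 2) for xi in X])  # close to y
```

The study's pipeline additionally applies PCA for feature selection and a double (nested) cross-validation loop around this fit; those layers are omitted here.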
Affiliation(s)
- Caleb Sample
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada
- François Benard
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, Canada
- Jonn Wu
- Department of Radiation Oncology, BC Cancer, Vancouver, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Haley Clark
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
6
McQuinlan Y, Brouwer CL, Lin Z, Gan Y, Sung Kim J, van Elmpt W, Gooding MJ. An investigation into the risk of population bias in deep learning autocontouring. Radiother Oncol 2023; 186:109747. PMID: 37330053. DOI: 10.1016/j.radonc.2023.109747.
Abstract
BACKGROUND AND PURPOSE: To date, data used in the development of deep learning-based automatic contouring (DLC) algorithms have been largely sourced from single geographic populations. This study aimed to evaluate the risk of population-based bias by determining whether the performance of an autocontouring system is impacted by geographic population. MATERIALS AND METHODS: 80 deidentified head and neck CT scans were collected from four clinics in Europe (n = 2) and Asia (n = 2). A single observer manually delineated 16 organs-at-risk in each. Subsequently, the data were contoured using a DLC solution trained on single-institution (European) data. Autocontours were compared to manual delineations using quantitative measures. A Kruskal-Wallis test was used to test for any difference between populations. Clinical acceptability of automatic and manual contours to observers from each participating institution was assessed using a blinded subjective evaluation. RESULTS: Seven organs showed a significant difference in volume between groups. Four organs showed statistical differences in quantitative similarity measures. The qualitative test showed greater variation in acceptance of contouring between observers than between data from different origins, with greater acceptance by the South Korean observers. CONCLUSION: Much of the statistical difference in quantitative performance could be explained by the difference in organ volume impacting the contour similarity measures and by the small sample size. However, the qualitative assessment suggests that observer perception bias has a greater impact on apparent clinical acceptability than the quantitatively observed differences. This investigation of potential geographic bias should extend to more patients, populations, and anatomical regions in the future.
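The Kruskal-Wallis test used above compares groups by rank rather than raw value. A minimal computation of its H statistic on toy numbers (a real analysis would also apply the tie-correction factor and compare H against a chi-squared distribution, as library implementations do):

```python
# Illustrative Kruskal-Wallis H statistic: rank all observations jointly
# (average ranks for ties), then measure how far each group's mean rank
# deviates from the overall mean rank.
def kruskal_h(groups):
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to tied values
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    sums = [0.0] * len(groups)
    counts = [0] * len(groups)
    for (v, gi), r in zip(data, ranks):
        sums[gi] += r
        counts[gi] += 1
    return (12 / (n * (n + 1))
            * sum(s * s / c for s, c in zip(sums, counts))
            - 3 * (n + 1))

print(kruskal_h([[1, 2, 3], [1, 2, 3]]))  # ~0: identical distributions
print(kruskal_h([[1, 2, 3], [7, 8, 9]]))  # > 0: clearly separated groups
```

Large H means the per-population metric distributions (e.g., per-organ Dice scores from each clinic) are unlikely to share one distribution.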
Affiliation(s)
- Charlotte L Brouwer
- University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands.
- Zhixiong Lin
- Shantou University Medical Centre, Guangdong, China.
- Yong Gan
- Shantou University Medical Centre, Guangdong, China.
- Jin Sung Kim
- Yonsei University Health System, Seoul, Republic of Korea.
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands.
- Mark J Gooding
- Mirada Medical Ltd, Oxford, United Kingdom; Inpictura Ltd, Oxford, United Kingdom.
7
Cui Y, Arimura H, Yoshitake T, Shioyama Y, Yabuuchi H. Deep learning model fusion improves lung tumor segmentation accuracy across variable training-to-test dataset ratios. Phys Eng Sci Med 2023; 46:1271-1285. PMID: 37548886. DOI: 10.1007/s13246-023-01295-8.
Abstract
This study aimed to investigate the robustness of a deep learning (DL) fusion model for low training-to-test ratio (TTR) datasets in the segmentation of gross tumor volumes (GTVs) in three-dimensional planning computed tomography (CT) images for lung cancer stereotactic body radiotherapy (SBRT). A total of 192 patients with lung cancer (solid tumor, 118; part-solid tumor, 53; ground-glass opacity, 21) who underwent SBRT were included in this study. Regions of interest in the GTVs were cropped based on GTV centroids from planning CT images. Three DL models, 3D U-Net, V-Net, and dense V-Net, were trained to segment the GTV regions. Nine fusion models were constructed with logical AND, logical OR, and voting of the outputs of two or three of the DL models. TTR was defined as the ratio of the number of cases in a training dataset to that in a test dataset. The Dice similarity coefficients (DSCs) and Hausdorff distances (HDs) of the 12 models were assessed with TTRs of 1.00 (training data: validation data: test data = 40:20:40), 0.791 (35:20:45), 0.531 (31:10:59), 0.291 (20:10:70), and 0.116 (10:5:85). The voting fusion model achieved the highest DSCs among the 12 models, ranging from 0.829 to 0.798 across all TTRs, whereas the other models showed DSCs of 0.818 to 0.804 for a TTR of 1.00 and 0.788 to 0.742 for a TTR of 0.116; the voting fusion model also achieved an HD of 5.40 ± 3.00 to 6.07 ± 3.26 mm, better than any single DL model. The findings suggest that the proposed voting fusion model is a robust approach for low-TTR datasets in segmenting GTVs in planning CT images for lung cancer SBRT.
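The AND/OR/voting fusion schemes described above reduce to voxel-wise set operations on the models' binary outputs. A hedged sketch on toy voxel sets (not the authors' code; 1D voxel indices for brevity):

```python
# Illustrative fusion of binary segmentation masks represented as sets of
# voxel indices: logical AND, logical OR, and majority vote.
from collections import Counter

def fuse(masks, mode="vote"):
    """Combine binary masks with AND, OR, or majority vote."""
    if mode == "and":
        out = set(masks[0])
        for m in masks[1:]:
            out &= m
        return out
    if mode == "or":
        return set().union(*masks)
    # majority vote: keep voxels segmented by more than half of the models
    counts = Counter(v for m in masks for v in m)
    return {v for v, c in counts.items() if c > len(masks) / 2}

# Toy outputs of three hypothetical models
m1, m2, m3 = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
print(sorted(fuse([m1, m2, m3], "and")))   # [3]
print(sorted(fuse([m1, m2, m3], "or")))    # [1, 2, 3, 4, 5]
print(sorted(fuse([m1, m2, m3], "vote")))  # [2, 3, 4]
```

The example makes the trade-off visible: AND shrinks the contour to the consensus core, OR inflates it to the union, and voting sits between the two, which is consistent with voting proving the most robust choice in the study.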
Affiliation(s)
- Yunhao Cui
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Tadamasa Yoshitake
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshiyuki Shioyama
- Saga International Heavy Ion Cancer Treatment Foundation, 3049 Harakogamachi, Tosu-shi, 841-0071, Saga, Japan
- Hidetake Yabuuchi
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
8
Yu X, He L, Wang Y, Dong Y, Song Y, Yuan Z, Yan Z, Wang W. A deep learning approach for automatic tumor delineation in stereotactic radiotherapy for non-small cell lung cancer using diagnostic PET-CT and planning CT. Front Oncol 2023; 13:1235461. PMID: 37601687. PMCID: PMC10437048. DOI: 10.3389/fonc.2023.1235461.
Abstract
Introduction: Accurate delineation of tumor targets is crucial for stereotactic body radiation therapy (SBRT) for non-small cell lung cancer (NSCLC). This study aims to develop a deep learning-based segmentation approach to accurately and efficiently delineate NSCLC targets using diagnostic PET-CT and SBRT planning CT (pCT). Methods: The diagnostic PET was registered to the pCT using the transform matrix obtained from registering the diagnostic CT to the pCT. We proposed a 3D-UNet-based segmentation method to segment NSCLC tumor targets on dual-modality PET-pCT images. This network contained squeeze-and-excitation and residual blocks in each convolutional block to perform dynamic channel-wise feature recalibration. Furthermore, up-sampling paths were added to supplement low-resolution features to the model and to compute the overall loss function. The Dice similarity coefficient (DSC), precision, recall, and average symmetric surface distance were used to assess the performance of the proposed approach on 86 pairs of diagnostic PET and pCT images. The proposed model using dual-modality images was compared with both the conventional 3D-UNet architecture and single-modality image input. Results: The average DSC of the proposed model with both PET and pCT images was 0.844, compared with 0.795 and 0.827 when using 3D-UNet and nnU-Net, respectively. It also outperformed the same network using either pCT or PET alone, which yielded DSCs of 0.823 and 0.732, respectively. Discussion: Our proposed segmentation approach outperforms the current 3D-UNet network on diagnostic PET and pCT images. The integration of the two image modalities helps improve segmentation accuracy.
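The registration step above reuses the diagnostic-CT-to-pCT transform for the PET image, which works because PET and the diagnostic CT share a frame of reference. With 4x4 homogeneous matrices, that reuse is just a matrix product applied to PET coordinates. A toy sketch with made-up translation and scaling matrices (the transform values are hypothetical, not from the study):

```python
# Illustrative rigid/affine transform reuse with 4x4 homogeneous matrices.
def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """Apply a 4x4 transform to a 3D point (tuple)."""
    h = p + (1.0,)  # homogeneous coordinates
    return tuple(sum(T[i][j] * h[j] for j in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# Hypothetical diagnostic-CT -> pCT registration: scale by 2, then shift.
dct_to_pct = matmul(translation(2.0, 0.0, -1.0), scaling(2.0))

# A PET coordinate (in the diagnostic frame) mapped into pCT space:
print(apply(dct_to_pct, (1.0, 1.0, 1.0)))  # (4.0, 2.0, 1.0)
```

In practice the CT-to-CT registration is computed with a dedicated registration tool and the resulting matrix (plus resampling) is applied to the PET volume; the point-mapping above shows only the coordinate algebra.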
Affiliation(s)
- Xuyao Yu
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Tianjin Medical University, Tianjin, China
- Lian He
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Yuwen Wang
- Department of Radiotherapy, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Yang Dong
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Yongchun Song
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhiyong Yuan
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Ziye Yan
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Wei Wang
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
9
Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023; 107:102241. [PMID: 37201475 DOI: 10.1016/j.compmedimag.2023.102241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 05/03/2023] [Accepted: 05/03/2023] [Indexed: 05/20/2023]
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only reduce workload but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable basis for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
Affiliation(s)
- Shrajan Bhandary
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria.
- Dejan Kuhn
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Zahra Babaiee
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
- Constantinos Zamboglou
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
- Anca-Ligia Grosu
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
10
Ramachandran P, Eswarlal T, Lehman M, Colbert Z. Assessment of Optimizers and their Performance in Autosegmenting Lung Tumors. J Med Phys 2023; 48:129-135. [PMID: 37576091 PMCID: PMC10419743 DOI: 10.4103/jmp.jmp_54_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Revised: 05/06/2023] [Accepted: 05/14/2023] [Indexed: 08/15/2023] Open
Abstract
Purpose Optimizers are widely utilized across various domains to enhance desired outcomes by either maximizing or minimizing objective functions. In deep learning, they minimize the loss function and improve a model's performance. This study aims to evaluate the accuracy of different optimizers employed for autosegmentation of non-small cell lung cancer (NSCLC) target volumes on thoracic computed tomography images utilized in oncology. Materials and Methods The study utilized 112 patients, comprising 92 patients from "The Cancer Imaging Archive" (TCIA) and 20 of our local clinical patients, to evaluate the efficacy of various optimizers. The gross tumor volume was selected as the foreground mask for training and testing the models. Of the 92 TCIA patients, 57 were used for training and validation and the remaining 35 for testing, using nnU-Net. The performance of the final model was further evaluated on the 20 local clinical patient datasets. Six different optimizers, namely AdaDelta, AdaGrad, Adam, NAdam, RMSprop, and stochastic gradient descent (SGD), were investigated. To assess the agreement between the predicted volume and the ground truth, several metrics were utilized, including the Dice similarity coefficient (DSC), Jaccard index, sensitivity, precision, Hausdorff distance (HD), 95th percentile Hausdorff distance (HD95), and average symmetric surface distance (ASSD). Results The DSC values for AdaDelta, AdaGrad, Adam, NAdam, RMSprop, and SGD were 0.75, 0.84, 0.85, 0.84, 0.83, and 0.81, respectively, for the TCIA test data. However, when the model trained on TCIA datasets was applied to the clinical datasets, the DSC, HD, HD95, and ASSD metrics showed a statistically significant decrease in performance compared to the TCIA test datasets, indicating image and/or mask heterogeneity between the data sources.
Conclusion The choice of optimizer in deep learning is a critical factor that can significantly impact the performance of autosegmentation models. However, the behavior of optimizers may vary when applied to new clinical datasets, which can change a model's performance. Selecting the appropriate optimizer for a specific task is therefore essential to ensure optimal performance and generalizability of the model to different datasets.
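The optimizers compared above differ only in their parameter-update rule. The contrast between plain SGD and Adam's adaptive, moment-based update can be illustrated on a toy ill-conditioned quadratic; this is a minimal sketch, not the paper's nnU-Net training setup, and the learning rate, step count, and loss are chosen purely for illustration.

```python
import numpy as np

def minimize(grad_fn, x0, optimizer="sgd", lr=0.1, steps=200,
             beta1=0.9, beta2=0.999, eps=1e-8):
    """Run `steps` iterations of SGD or Adam on a function given its gradient."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean) estimate, used by Adam
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(x)
        if optimizer == "sgd":
            x -= lr * g                      # plain gradient step
        elif optimizer == "adam":
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            m_hat = m / (1 - beta1 ** t)     # bias-corrected moment estimates
            v_hat = v / (1 - beta2 ** t)
            x -= lr * m_hat / (np.sqrt(v_hat) + eps)
        else:
            raise ValueError(f"unknown optimizer: {optimizer}")
    return x

# Ill-conditioned quadratic loss 0.5 * x^T A x (curvatures 10 and 0.1)
A = np.diag([10.0, 0.1])
grad = lambda x: A @ x
x_sgd = minimize(grad, [1.0, 1.0], "sgd", lr=0.05)
x_adam = minimize(grad, [1.0, 1.0], "adam", lr=0.05)
```

With a single learning rate, SGD converges quickly along the stiff direction but barely moves along the flat one, whereas Adam's per-coordinate normalization makes progress in both; this sensitivity to the update rule is one reason a DSC spread across optimizers, as reported above, is plausible.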
Affiliation(s)
- Prabhakar Ramachandran
- Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
- Tamma Eswarlal
- Department of Engineering Mathematics, College of Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
- Margot Lehman
- Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
- Zachery Colbert
- Department of Radiation Oncology, Cancer Services, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
11
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice similarity coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were less frequently used, in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was only given in 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework to decide the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
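The geometric measures tallied most often in such reviews (DSC, Jaccard, sensitivity, precision) reduce to simple overlap counts on binary masks, which helps explain why they are popular yet partly redundant: DSC and Jaccard, for instance, are monotonically related via DSC = 2J/(1 + J). The function below is an illustrative sketch, not taken from any of the reviewed tools, and it omits surface-distance metrics such as HD95.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Overlap-based agreement metrics between two non-empty binary masks
    of equal shape (2D slices or 3D volumes)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()  # voxels labelled by both observers
    return {
        "dsc": 2 * tp / (pred.sum() + gt.sum()),        # Dice similarity coefficient
        "jaccard": tp / np.logical_or(pred, gt).sum(),  # intersection over union
        "sensitivity": tp / gt.sum(),                   # fraction of ground truth found
        "precision": tp / pred.sum(),                   # fraction of prediction correct
    }
```

For two 4x4 squares offset by one pixel, all four metrics follow from a single intersection count of 9 pixels, illustrating why reporting several of them adds names but limited independent information.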
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK.
- D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
- A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
12
Implementation of a Commercial Deep Learning-Based Auto Segmentation Software in Radiotherapy: Evaluation of Effectiveness and Impact on Workflow. LIFE (BASEL, SWITZERLAND) 2022; 12:life12122088. [PMID: 36556455 PMCID: PMC9782080 DOI: 10.3390/life12122088] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 11/30/2022] [Accepted: 12/09/2022] [Indexed: 12/14/2022]
Abstract
Proper delineation of both target volumes and organs at risk is a crucial step in the radiation therapy workflow. This process is normally carried out manually by medical doctors and is therefore time-consuming. To improve efficiency, auto-contouring methods have been proposed. We assessed a specific commercial software package to investigate its impact on the radiotherapy workflow for four disease sites: head and neck, prostate, breast, and rectum. For the present study, we used a commercial deep-learning-based auto-segmentation software, namely Limbus Contour (LC), Version 1.5.0 (Limbus AI Inc., Regina, SK, Canada). The software uses deep convolutional neural network models based on a U-net architecture, specific to each structure. Manual and automatic segmentation were compared on disease-specific organs at risk. Contouring time, geometric performance (volume variation, Dice similarity coefficient (DSC), and center-of-mass shift), and dosimetric impact (DVH differences) were evaluated. With respect to time savings, the maximum advantage was seen in the head and neck setting, with a 65% time reduction. The average DSC was 0.72. The best agreement was found for the lungs. Good results were highlighted for the bladder, heart, and femoral heads. The most relevant dosimetric difference was in the rectal cancer case, where the mean volume covered by the 45 Gy isodose was 10.4 cm3 for manual contouring and 289.4 cm3 for automatic segmentation. Automatic contouring significantly reduced the time required for the procedure, simplifying the workflow and reducing interobserver variability. Its implementation improved the radiation therapy workflow in our department.
13
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
14
Tryggestad E, Anand A, Beltran C, Brooks J, Cimmiyotti J, Grimaldi N, Hodge T, Hunzeker A, Lucido JJ, Laack NN, Momoh R, Moseley DJ, Patel SH, Ridgway A, Seetamsetty S, Shiraishi S, Undahl L, Foote RL. Scalable radiotherapy data curation infrastructure for deep-learning based autosegmentation of organs-at-risk: A case study in head and neck cancer. Front Oncol 2022; 12:936134. [PMID: 36106100 PMCID: PMC9464982 DOI: 10.3389/fonc.2022.936134] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 08/03/2022] [Indexed: 12/02/2022] Open
Abstract
In this era of patient-centered, outcomes-driven and adaptive radiotherapy, deep learning is now being successfully applied to tackle imaging-related workflow bottlenecks such as autosegmentation and dose planning. These applications typically require supervised learning approaches enabled by relatively large, curated radiotherapy datasets which are highly reflective of the contemporary standard of care. However, little has been previously published describing technical infrastructure, recommendations, methods or standards for radiotherapy dataset curation in a holistic fashion. Our radiation oncology department has recently embarked on a large-scale project in partnership with an external partner to develop deep-learning-based tools to assist with our radiotherapy workflow, beginning with autosegmentation of organs-at-risk. This project will require thousands of carefully curated radiotherapy datasets comprising all body sites we routinely treat with radiotherapy. Given such a large project scope, we have approached the need for dataset curation rigorously, with an aim towards building infrastructure that is compatible with efficiency, automation and scalability. Focusing on our first use-case pertaining to head and neck cancer, we describe our developed infrastructure and novel methods applied to radiotherapy dataset curation, inclusive of personnel and workflow organization, dataset selection, expert organ-at-risk segmentation, quality assurance, patient de-identification, data archival and transfer. Over the course of approximately 13 months, our expert multidisciplinary team generated 490 curated head and neck radiotherapy datasets. This task required approximately 6000 human-expert hours in total (not including planning and infrastructure development time). This infrastructure continues to evolve and will support ongoing and future project efforts.
Affiliation(s)
- E. Tryggestad
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- *Correspondence: E. Tryggestad
- A. Anand
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- C. Beltran
- Department of Radiation Oncology, Mayo Clinic Florida, Jacksonville, FL, United States
- J. Brooks
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- J. Cimmiyotti
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- N. Grimaldi
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- T. Hodge
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- A. Hunzeker
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- J. J. Lucido
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- N. N. Laack
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- R. Momoh
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- D. J. Moseley
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- S. H. Patel
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- A. Ridgway
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- S. Seetamsetty
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- S. Shiraishi
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- L. Undahl
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- R. L. Foote
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
15
Mancosu P, Lambri N, Castiglioni I, Dei D, Iori M, Loiacono D, Russo S, Talamonti C, Villaggi E, Scorsetti M, Avanzo M. Applications of artificial intelligence in stereotactic body radiation therapy. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7e18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 07/04/2022] [Indexed: 11/12/2022]
Abstract
This topical review focuses on the applications of artificial intelligence (AI) tools to stereotactic body radiation therapy (SBRT). The high dose per fraction and the limited number of fractions in SBRT require stricter accuracy than standard radiation therapy. The intent of this review is to describe the development of AI tools and to evaluate the possible benefit of integrating them into the radiation oncology workflow for SBRT automation. The selected papers were subdivided into four sections, representative of the whole radiotherapy process: ‘AI in SBRT target and organs at risk contouring’, ‘AI in SBRT planning’, ‘AI during the SBRT delivery’, and ‘AI for outcome prediction after SBRT’. Each section summarises the challenges, as well as the limits and needs for improvement, to achieve better integration of AI tools in the clinical workflow.
16
D’Aviero A, Re A, Catucci F, Piccari D, Votta C, Piro D, Piras A, Di Dio C, Iezzi M, Preziosi F, Menna S, Quaranta F, Boschetti A, Marras M, Miccichè F, Gallus R, Indovina L, Bussu F, Valentini V, Cusumano D, Mattiucci GC. Clinical Validation of a Deep-Learning Segmentation Software in Head and Neck: An Early Analysis in a Developing Radiation Oncology Center. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19159057. [PMID: 35897425 PMCID: PMC9329735 DOI: 10.3390/ijerph19159057] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 07/12/2022] [Accepted: 07/20/2022] [Indexed: 02/01/2023]
Abstract
Background: Organs at risk (OARs) delineation is a crucial step of the radiotherapy (RT) treatment planning workflow. Long contouring times and inter-observer variability are the main issues in manual OAR delineation, particularly in the head and neck (H&N) region. Deep-learning-based auto-segmentation is a promising strategy to improve OAR contouring in radiotherapy departments. A comparison of deep-learning-generated auto-contours (AC) with manual contours (MC) was performed by three expert radiation oncologists from a single center. Methods: Planning computed tomography (CT) scans of patients undergoing RT treatment for H&N cancers were considered. CT scans were processed by the Limbus Contour auto-segmentation software, a commercial deep-learning-based auto-segmentation software, to generate AC. The H&N protocol was used to perform AC, with the structure set consisting of bilateral brachial plexus, brain, brainstem, bilateral cochlea, pharyngeal constrictors, eye globes, bilateral lens, mandible, optic chiasm, bilateral optic nerves, oral cavity, bilateral parotids, spinal cord, bilateral submandibular glands, lips, and thyroid. Manual revision of OARs was performed according to international consensus guidelines. The AC and MC were compared using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance transform (DT). Results: A total of 274 contours obtained by processing CT scans were included in the analysis. The highest DSC values were obtained for the brain (DSC 1.00), the left and right eye globes, and the mandible (DSC 0.98). The structures requiring the most MC editing were the optic chiasm, optic nerves, and cochleae. Conclusions: In this preliminary analysis, deep-learning auto-segmentation seems to provide acceptable H&N OAR delineations. For less accurate organs, AC could be considered a starting point for review and manual adjustment. Our results suggest that AC could become a useful time-saving tool to optimize workload and resources in RT departments.
Affiliation(s)
- Andrea D’Aviero
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Alessia Re
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Francesco Catucci
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Danila Piccari
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Correspondence:
- Claudio Votta
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Domenico Piro
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, 90011 Bagheria, Italy;
- Carmela Di Dio
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Martina Iezzi
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Francesco Preziosi
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Sebastiano Menna
- Medical Physics, Mater Olbia Hospital, 07026 Sassari, Italy; (S.M.); (F.Q.); (D.C.)
- Althea Boschetti
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Marco Marras
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Francesco Miccichè
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Roberto Gallus
- Otolaryngology, Mater Olbia Hospital, 07026 Sassari, Italy;
- Luca Indovina
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Francesco Bussu
- Otolaryngology, Azienda Ospedaliero Universitaria di Sassari, 07100 Sassari, Italy;
- Dipartimento delle Scienze Mediche, Chirurgiche e Sperimentali, Università di Sassari, 07100 Sassari, Italy
- Vincenzo Valentini
- UOC Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Roma, Italy; (F.M.); (L.I.); (V.V.)
- Dipartimento di Scienze Radiologiche ed Ematologiche, Università Cattolica del Sacro Cuore, 00168 Roma, Italy
- Davide Cusumano
- Medical Physics, Mater Olbia Hospital, 07026 Sassari, Italy; (S.M.); (F.Q.); (D.C.)
- Gian Carlo Mattiucci
- Radiation Oncology, Mater Olbia Hospital, 07026 Olbia, Italy; (A.D.); (A.R.); (F.C.); (C.V.); (D.P.); (C.D.D.); (M.I.); (F.P.); (A.B.); (M.M.); (G.C.M.)
- Dipartimento di Scienze Radiologiche ed Ematologiche, Università Cattolica del Sacro Cuore, 00168 Roma, Italy
17
Wang Y, Cai H, Pu Y, Li J, Yang F, Yang C, Chen L, Hu Z. The value of AI in the Diagnosis, Treatment, and Prognosis of Malignant Lung Cancer. FRONTIERS IN RADIOLOGY 2022; 2:810731. [PMID: 37492685 PMCID: PMC10365105 DOI: 10.3389/fradi.2022.810731] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 03/30/2022] [Indexed: 07/27/2023]
Abstract
Malignant tumors are a serious public health threat. Among them, lung cancer, which has the highest fatality rate globally, significantly endangers human health. With the development of artificial intelligence (AI) and its integration with medicine, AI research on malignant lung tumors has become critical. This article reviews the value of computer-aided diagnosis (CAD), deep learning with neural networks, radiomics, molecular biomarkers, and digital pathology for the diagnosis, treatment, and prognosis of malignant lung tumors.
Affiliation(s)
- Yue Wang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Haihua Cai
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongzhu Pu
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Jindan Li
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Fake Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Conghui Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Long Chen
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
18
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify applications of AI in RT, limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow to which the AI approaches were applied. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable by individuals or groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns remain, including the need for harmonization and for overcoming ethical, legal, and skill barriers.