51
De Kerf G, Claessens M, Raouassi F, Mercier C, Stas D, Ost P, Dirix P, Verellen D. A geometry and dose-volume based performance monitoring of artificial intelligence models in radiotherapy treatment planning for prostate cancer. Phys Imaging Radiat Oncol 2023;28:100494. PMID: 37809056. PMCID: PMC10550805. DOI: 10.1016/j.phro.2023.100494.
Abstract
Background and Purpose: Clinical artificial intelligence (AI) implementations lack ground truth when applied to real-world data. This study investigated how combined geometric and dose-volume metrics can be used as performance-monitoring tools to detect clinically relevant candidates for model retraining. Materials and Methods: Fifty patients were analyzed for both AI segmentation and AI planning. For AI segmentation, geometric (Standard Surface Dice 3 mm and Local Surface Dice 3 mm) and dose-volume based parameters were calculated for two organs (bladder and anorectum) to compare the AI output against the clinically corrected structure. A Local Surface Dice was introduced to detect geometric changes in the vicinity of the target volumes, while an absolute dose difference (ADD) evaluation increased the focus on dose-volume related changes. AI-planning performance was evaluated using clinical goal analysis in combination with volume and target-overlap metrics. Results: The Local Surface Dice reported equal or lower values than the Standard Surface Dice (anorectum: 0.93 ± 0.11 vs 0.98 ± 0.04; bladder: 0.97 ± 0.06 vs 0.98 ± 0.04). The ADD metric showed a difference of (0.9 ± 0.8) Gy for the anorectum D1cm³; the bladder D5cm³ reported a difference of (0.7 ± 1.5) Gy. Mandatory clinical goals were fulfilled in 90% of the DLP plans. Conclusions: Combining dose-volume and geometric metrics allowed detection of clinically relevant changes in both auto-segmentation and auto-planning output, and the Local Surface Dice was more sensitive to local changes than the Standard Surface Dice. This monitoring can evaluate AI behavior in clinical practice and allows candidate selection for active learning.
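As a companion note to this entry: a minimal sketch (not the authors' code) of how a tolerance-based surface Dice can be computed for binary masks with NumPy/SciPy. The mask shapes, voxel spacing, and the 3 mm tolerance below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def surface_dice(a, b, tol_mm=3.0, spacing=(1.0, 1.0, 1.0)):
    """Fraction of the two masks' boundary voxels that lie within
    tol_mm of the other mask's boundary (higher is better)."""
    def boundary(mask):
        return mask & ~ndimage.binary_erosion(mask)

    sa, sb = boundary(a), boundary(b)
    # Distance (in mm) from every voxel to the nearest boundary voxel
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    close = (dist_to_b[sa] <= tol_mm).sum() + (dist_to_a[sb] <= tol_mm).sum()
    return close / (sa.sum() + sb.sum())

# Two cubes offset by one voxel: every boundary point is within 3 mm
a = np.zeros((20, 20, 20), dtype=bool); a[5:15, 5:15, 5:15] = True
b = np.zeros((20, 20, 20), dtype=bool); b[6:16, 5:15, 5:15] = True
score = surface_dice(a, b)
```

A "local" variant like the paper's could restrict the same computation to a region around the target volume; the sketch above covers only the standard, whole-surface case.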
Affiliation(s)
- Geert De Kerf
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Michaël Claessens
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Fadoua Raouassi
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Carole Mercier
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Daan Stas
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Piet Ost
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Piet Dirix
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Dirk Verellen
  - Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
  - Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
52
Heilemann G, Buschmann M, Lechner W, Dick V, Eckert F, Heilmann M, Herrmann H, Moll M, Knoth J, Konrad S, Simek IM, Thiele C, Zaharie A, Georg D, Widder J, Trnkova P. Clinical implementation and evaluation of auto-segmentation tools for multi-site contouring in radiotherapy. Phys Imaging Radiat Oncol 2023;28:100515. PMID: 38111502. PMCID: PMC10726238. DOI: 10.1016/j.phro.2023.100515.
Abstract
Background and Purpose: Tools for auto-segmentation in radiotherapy are widely available, but guidelines for clinical implementation are missing. The goal was to develop a workflow for performance evaluation of three commercial auto-segmentation tools in order to select one candidate for clinical implementation. Materials and Methods: One hundred patients across six treatment sites (brain, head-and-neck, thorax, abdomen, and pelvis) were included. Three sets of AI-based contours for organs-at-risk (OARs), generated by three software tools, and manually drawn expert contours were blindly rated for contouring accuracy. The Dice similarity coefficient (DSC), the Hausdorff distance, and a dose/volume evaluation based on recalculation of the original treatment plan were assessed. Statistically significant differences were tested using the Kruskal-Wallis test and the post-hoc Dunn test with Bonferroni correction. Results: The mean DSC scores compared with expert contours for all OARs combined were 0.80 ± 0.10, 0.75 ± 0.10, and 0.74 ± 0.11 for the three software tools. Physicians' ratings identified equivalent or superior performance of some AI-based contours in the head (eye, lens, optic nerve, brain, chiasm), thorax (e.g., heart and lungs), and pelvis and abdomen (e.g., kidney, femoral head) compared with manual contours. For some OARs, the AI models provided results requiring only minor corrections; bowel bag and stomach were not fit for direct use. During the interdisciplinary discussion, the physicians' rating was considered the most relevant criterion. Conclusion: A comprehensive method for evaluation and clinical implementation of commercially available auto-segmentation software was developed. The in-depth analysis yielded clear instructions for clinical use within the radiotherapy department.
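A companion sketch of the statistical workflow this entry describes, on synthetic data: a Kruskal-Wallis test across three tools followed by pairwise post-hoc comparisons with Bonferroni correction. The paper used Dunn's test; here pairwise Mann-Whitney U is used as a common SciPy-only stand-in, and the score distributions (means from the abstract, spread and sample size invented) are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-structure DSC scores for three auto-segmentation tools
tools = {
    "A": rng.normal(0.80, 0.05, 40),
    "B": rng.normal(0.75, 0.05, 40),
    "C": rng.normal(0.74, 0.05, 40),
}

# Omnibus test: do the three tools differ at all?
h_stat, p_value = stats.kruskal(*tools.values())

# Post-hoc pairwise tests with Bonferroni correction
# (stand-in for Dunn's test, which SciPy does not provide directly)
names = list(tools)
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
p_adj = {
    (names[i], names[j]): min(
        1.0,
        stats.mannwhitneyu(tools[names[i]], tools[names[j]]).pvalue * len(pairs),
    )
    for i, j in pairs
}
```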
Affiliation(s)
- Gerd Heilemann, Martin Buschmann, Wolfgang Lechner, Vincent Dick, Franziska Eckert, Martin Heilmann, Harald Herrmann, Matthias Moll, Johannes Knoth, Stefan Konrad, Inga-Malin Simek, Christopher Thiele, Alexandru Zaharie, Dietmar Georg, Joachim Widder, Petra Trnkova
  - Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
53
Vaassen F, Zegers CML, Hofstede D, Wubbels M, Beurskens H, Verheesen L, Canters R, Looney P, Battye M, Gooding MJ, Compter I, Eekers DBP, van Elmpt W. Geometric and dosimetric analysis of CT- and MR-based automatic contouring for the EPTN contouring atlas in neuro-oncology. Phys Med 2023;114:103156. PMID: 37813050. DOI: 10.1016/j.ejmp.2023.103156.
Abstract
PURPOSE: Atlas-based and deep-learning contouring (DLC) are methods for automatic segmentation of organs-at-risk (OARs). The European Particle Therapy Network (EPTN) published a consensus-based atlas for delineation of OARs in neuro-oncology. In this study, geometric and dosimetric evaluation of automatically segmented neuro-oncological OARs was performed using CT- and MR-based models following the EPTN contouring atlas. METHODS: Image and contouring data from 76 neuro-oncological patients were included. Two atlas-based models (CT-atlas and MR-atlas) and one DLC model (MR-DLC) were created. Manual contours on registered CT-MR images were used as ground truth. Results were analyzed in terms of geometric accuracy (volumetric Dice similarity coefficient (vDSC), surface DSC (sDSC), added path length (APL), and mean slice-wise Hausdorff distance (MSHD)) and dosimetric accuracy. A distance-to-tumor analysis was performed to determine to what extent the location of the OAR relative to the planning target volume (PTV) has dosimetric impact, using Wilcoxon rank-sum tests. RESULTS: CT-atlas outperformed MR-atlas for 22/26 OARs. MR-DLC outperformed MR-atlas for all OARs. The highest median (95% CI) vDSC and sDSC were found for the brainstem in MR-DLC: 0.92 (0.88-0.95) and 0.84 (0.77-0.89), respectively, as well as the lowest MSHD: 0.27 (0.22-0.39) cm. Median dose differences (ΔD) were within ±1 Gy for 24/26 (92%) OARs for all three models. Distance-to-tumor showed a significant correlation for ΔDmax,0.03cc parameters when splitting the data into ≤4 cm and >4 cm OAR distance (p < 0.001). CONCLUSION: MR-based DLC and CT-based atlas contouring enable high-quality segmentation. A combination of both CT- and MR-based autocontouring models yields the best quality.
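Among the metrics this entry uses, added path length (APL) is the least standardized; a minimal voxel-based sketch (an assumption, not the authors' implementation) counts the reference-boundary voxels that the automatic contour fails to match, scaled by voxel spacing.

```python
import numpy as np
from scipy import ndimage

def added_path_length(ref, auto, spacing_mm=1.0, tol_mm=0.0):
    """Length of the reference boundary missed by the automatic contour:
    reference-boundary voxels farther than tol_mm from the automatic
    boundary, scaled by the voxel spacing."""
    def boundary(m):
        return m & ~ndimage.binary_erosion(m)

    b_ref, b_auto = boundary(ref), boundary(auto)
    dist_to_auto = ndimage.distance_transform_edt(~b_auto, sampling=spacing_mm)
    return (dist_to_auto[b_ref] > tol_mm).sum() * spacing_mm

ref = np.zeros((12, 12), dtype=bool); ref[3:9, 3:9] = True
same = ref.copy()
shifted = np.zeros((12, 12), dtype=bool); shifted[4:10, 3:9] = True

apl_same = added_path_length(ref, same)      # perfect match: nothing to add
apl_shift = added_path_length(ref, shifted)  # shift leaves unmatched boundary
```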
Affiliation(s)
- Femke Vaassen, Catharina M L Zegers, David Hofstede, Mart Wubbels, Hilde Beurskens, Lindsey Verheesen, Richard Canters, Inge Compter, Daniëlle B P Eekers, Wouter van Elmpt
  - Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
54
Zhao JY, Cao Q, Chen J, Chen W, Du SY, Yu J, Zeng YM, Wang SM, Peng JY, You C, Xu JG, Wang XY. Development and validation of a fully automatic tissue delineation model for brain metastasis using a deep neural network. Quant Imaging Med Surg 2023;13:6724-6734. PMID: 37869331. PMCID: PMC10585546. DOI: 10.21037/qims-22-1216.
Abstract
Background: Stereotactic radiosurgery (SRS) treatment planning requires accurate delineation of brain metastases, a task that can be tedious and time-consuming. Although studies have explored the use of convolutional neural networks (CNNs) in magnetic resonance imaging (MRI) for automatic brain-metastasis delineation, none performed a clinical evaluation, raising concerns about clinical applicability. This study aimed to develop an artificial intelligence (AI) tool for the automatic delineation of single brain metastases that could be integrated into clinical practice. Methods: Data from 426 patients with postcontrast T1-weighted MRIs who underwent SRS between March 2007 and August 2019 were retrospectively collected and divided into training, validation, and testing cohorts of 299, 42, and 85 patients, respectively. Two Gamma Knife (GK) surgeons contoured the brain metastases as the ground truth. A novel 2.5D CNN was developed for single-brain-metastasis delineation. The mean Dice similarity coefficient (DSC) and average surface distance (ASD) were used to assess performance. Results: The mean DSC and ASD values were 88.34% ± 5.00% and 0.35 ± 0.21 mm, respectively, for the contours generated with the AI tool on the testing set. The AI tool's DSC performance depended on metastatic shape, reinforcement shape, and the existence of peritumoral edema (all P values < 0.05). The clinical experts' subjective assessments showed that 415 of 572 slices (72.6%) in the testing cohort were acceptable for clinical use without revision. The average time spent editing an AI-generated contour versus manual contouring was 74 vs. 196 seconds (P < 0.01). Conclusions: The contours delineated with the AI tool for single brain metastases were in close agreement with the ground truth. The developed AI tool can effectively reduce contouring time and aid GK treatment planning of single brain metastases in clinical practice.
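This entry's secondary metric, average surface distance (ASD), can be sketched for binary masks with a distance transform; the symmetric formulation and the toy mask below are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy import ndimage

def average_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric average surface distance (mm) between two binary masks."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)

    sa, sb = surface(a), surface(b)
    # For each surface voxel of one mask, distance to the other surface
    d_a_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    d_b_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (d_a_to_b.sum() + d_b_to_a.sum()) / (sa.sum() + sb.sum())

gt = np.zeros((16, 16, 16), dtype=bool); gt[4:12, 4:12, 4:12] = True
asd_identical = average_surface_distance(gt, gt)  # perfect match: 0 mm
```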
Affiliation(s)
- Jie-Yi Zhao
  - Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Qi Cao
  - Department of Reproductive Medical Center, West China Second University Hospital, Sichuan University, Chengdu, China
- Jing Chen
  - Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Wei Chen
  - Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Si-Yu Du
  - West China School of Medicine, Sichuan University, Chengdu, China
- Jie Yu
  - West China School of Public Health, Sichuan University, Chengdu, China
- Yi-Miao Zeng
  - West China School of Medicine, Sichuan University, Chengdu, China
- Shu-Min Wang
  - West China School of Medicine, Sichuan University, Chengdu, China
- Jing-Yu Peng
  - West China School of Medicine, Sichuan University, Chengdu, China
- Chao You
  - Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Jian-Guo Xu
  - Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Xiao-Yu Wang
  - Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
55
Li J, Song Y, Wu Y, Liang L, Li G, Bai S. Clinical evaluation on automatic segmentation results of convolutional neural networks in rectal cancer radiotherapy. Front Oncol 2023;13:1158315. PMID: 37731629. PMCID: PMC10508953. DOI: 10.3389/fonc.2023.1158315.
Abstract
Purpose: Image segmentation can be time-consuming and lacks consistency between different oncologists, which is essential in conformal radiotherapy techniques. We aimed to evaluate automatic delineation results generated by convolutional neural networks (CNNs) from geometry and dosimetry perspectives and to explore the reliability of these segmentation tools in rectal cancer. Methods: Forty-seven rectal cancer cases treated from February 2018 to April 2019 were randomly collected retrospectively at our cancer center. Oncologists delineated regions of interest (ROIs) on planning CT images as the ground truth, including the clinical target volume (CTV), bladder, small intestine, and femoral heads. The corresponding automatic segmentation results were generated by DeepLabv3+ and ResUNet; Atlas-Based Autosegmentation (ABAS) software was also used for comparison. The geometric evaluation used the volumetric Dice similarity coefficient (DSC) and surface DSC, and critical dose parameters were assessed based on replanning optimized with clinically approved or automatically generated CTVs and organs at risk (OARs), i.e., the Planref and Plantest. The Pearson test was used to explore the correlation between geometric metrics and dose parameters. Results: In the geometric evaluation, DeepLabv3+ performed better on DSC metrics for the CTV (volumetric DSC, mean = 0.96, P < 0.01; surface DSC, mean = 0.78, P < 0.01) and small intestine (volumetric DSC, mean = 0.91, P < 0.01; surface DSC, mean = 0.62, P < 0.01), while ResUNet had advantages in volumetric DSC of the bladder (mean = 0.97, P < 0.05). In the analysis of critical dose parameters between Planref and Plantest, there was a significant difference for target volumes (P < 0.01), and no significant difference was found for the ResUNet-generated small intestine (P > 0.05). In the correlation test, a negative correlation was found between DSC metrics (volumetric and surface DSC) and dosimetric parameters (δD95, HI, CI) for target volumes (P < 0.05), and no significant correlation was found for most tests of OARs (P > 0.05). Conclusions: CNNs show remarkable repeatability and time savings in automatic segmentation, and their accuracy also has potential in clinical practice. Meanwhile, clinical aspects such as dose distribution may need to be considered when comparing the performance of auto-segmentation methods.
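The geometry-vs-dose correlation test in this entry reduces to a Pearson correlation over paired observations; a minimal sketch on invented numbers (the negative relationship is built into the toy data, matching the direction the abstract reports):

```python
from scipy import stats

# Hypothetical paired observations: higher geometric agreement (DSC)
# tends to come with smaller target dose deviations (all values invented)
dsc       = [0.90, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97]
delta_d95 = [1.5, 1.3, 1.1, 0.9, 0.6, 0.4, 0.3, 0.2]  # Gy

r, p = stats.pearsonr(dsc, delta_d95)  # r < 0: negative correlation
```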
Affiliation(s)
- Jing Li
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Ying Song
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
  - Machine Intelligence Laboratory, College of Computer Science, Chengdu, China
- Yongchang Wu
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Lan Liang
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Guangjun Li
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Sen Bai
  - Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
56
Wahid KA, Sahin O, Kundu S, Lin D, Alanis A, Tehami S, Kamel S, Duke S, Sherer MV, Rasmussen M, Korreman S, Fuentes D, Cislo M, Nelms BE, Christodouleas JP, Murphy JD, Mohamed ASR, He R, Naser MA, Gillespie EF, Fuller CD. Determining the role of radiation oncologist demographic factors on segmentation quality: insights from a crowd-sourced challenge using Bayesian estimation. medRxiv 2023:2023.08.30.23294786. PMID: 37693394. PMCID: PMC10491357. DOI: 10.1101/2023.08.30.23294786.
Abstract
BACKGROUND: Medical image auto-segmentation is poised to revolutionize radiotherapy workflows. The quality of auto-segmentation training data, primarily derived from clinician observers, is of utmost importance. However, the factors influencing the quality of these clinician-derived segmentations have yet to be fully understood or quantified. Therefore, the purpose of this study was to determine the role of common observer demographic variables in quantitative segmentation performance. METHODS: Organ-at-risk (OAR) and tumor-volume segmentations provided by radiation oncologist observers from the Contouring Collaborative for Consensus in Radiation Oncology public dataset were utilized. Segmentations were derived from five separate disease sites comprising one patient case each: breast, sarcoma, head and neck (H&N), gynecologic (GYN), and gastrointestinal (GI). Segmentation quality was determined on a structure-by-structure basis by comparing observer segmentations with an expert-derived consensus gold standard, primarily using the Dice similarity coefficient (DSC); surface DSC was investigated as a secondary metric. Metrics were stratified into binary groups based on previously established structure-specific expert-derived interobserver variability (IOV) cutoffs. Generalized linear mixed-effects models using Markov chain Monte Carlo Bayesian estimation were used to investigate the association between demographic variables and binarized segmentation quality for each disease site separately. Variables with a highest density interval excluding zero (loosely analogous to frequentist significance) were considered to substantially impact the outcome measure. RESULTS: After filtering for practicing radiation oncologists, 574, 110, 452, 112, and 48 structure observations remained for the breast, sarcoma, H&N, GYN, and GI cases, respectively. The median percentage of observations that crossed the expert DSC IOV cutoff when stratified by structure type was 55% for OARs and 31% for tumor volumes. Bayesian regression analysis revealed that the tumor category had a substantial negative impact on binarized DSC for the breast (coefficient mean ± standard deviation: -0.97 ± 0.20), sarcoma (-1.04 ± 0.54), H&N (-1.00 ± 0.24), and GI (-2.95 ± 0.98) cases. There were no clear recurring relationships between segmentation quality and demographic variables across the cases, with most variables demonstrating large standard deviations and wide highest density intervals. CONCLUSION: Our study highlights substantial uncertainty surrounding conventionally presumed factors influencing segmentation quality. Future studies should investigate additional demographic variables, more patients and imaging modalities, and alternative metrics of segmentation acceptability.
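The decision rule this entry applies (a highest density interval excluding zero) can be sketched from posterior samples alone; the normal stand-in posterior below reuses the abstract's breast coefficient mean and SD, and that distributional shape is an assumption for illustration.

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the posterior samples
    (highest density interval, valid for a unimodal posterior)."""
    x = np.sort(np.asarray(samples))
    n_keep = int(np.ceil(mass * len(x)))
    widths = x[n_keep - 1:] - x[: len(x) - n_keep + 1]
    lo_idx = int(np.argmin(widths))
    return float(x[lo_idx]), float(x[lo_idx + n_keep - 1])

rng = np.random.default_rng(1)
# Stand-in posterior for the breast tumour-category coefficient
posterior = rng.normal(-0.97, 0.20, 100_000)
lo, hi = hdi(posterior)
excludes_zero = hi < 0 or lo > 0  # "substantial" effect by the paper's rule
```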
Affiliation(s)
- Kareem A. Wahid
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Onur Sahin
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Suprateek Kundu
  - Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Diana Lin
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Anthony Alanis
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Salik Tehami
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Serageldin Kamel
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Simon Duke
  - Department of Radiation Oncology, Cambridge University Hospitals, Cambridge, UK
- Michael V. Sherer
  - Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
- Stine Korreman
  - Department of Oncology, Aarhus University Hospital, Denmark
- David Fuentes
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Michael Cislo
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- John P. Christodouleas
  - Department of Radiation Oncology, The University of Pennsylvania Cancer Center, Philadelphia, PA, USA
  - Elekta, Atlanta, GA, USA
- James D. Murphy
  - Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
- Abdallah S. R. Mohamed
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohammed A. Naser
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D. Fuller
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
57
Zaman FA, Roy TK, Sonka M, Wu X. Patch-wise 3D segmentation quality assessment combining reconstruction and regression networks. J Med Imaging (Bellingham) 2023;10:054002. PMID: 37692093. PMCID: PMC10490907. DOI: 10.1117/1.jmi.10.5.054002.
Abstract
Purpose: General deep-learning (DL)-based semantic segmentation methods with expert-level accuracy may fail in 3D medical image segmentation due to complex tissue structures, the lack of large datasets with ground truth, etc. For expeditious diagnosis, there is a compelling need to predict segmentation quality without ground truth. In some medical imaging applications, maintaining segmentation quality is most crucial in the localized regions where disease is prevalent, rather than just maintaining high average segmentation quality globally. We propose a DL framework to identify regions of segmentation inaccuracy by combining a 3D generative adversarial network (GAN) and a convolutional regression network. Approach: Our approach is based on the learned ability to reconstruct the original images, identifying the regions of location-specific segmentation failure in which the reconstruction does not match the underlying original image. We use a conditional GAN to reconstruct input images masked by the segmentation results. The regression network is trained to predict the patch-wise Dice similarity coefficient (DSC), conditioned on the segmentation results. The method relies directly on the extracted segmentation-related features and does not need ground truth during the inference phase to identify erroneous regions in the computed segmentation. Results: We evaluated the proposed method on two public datasets: Osteoarthritis Initiative 4D (3D + time) knee MRI (knee-MR) and 3D non-small cell lung cancer CT (lung-CT). For the patch-wise DSC prediction, we observed mean absolute errors of 0.01 and 0.04 against the independent standard for the knee-MR and lung-CT data, respectively. Conclusions: This method shows promising results in localizing erroneous segmentation regions, which may aid downstream analysis for disease diagnosis and prognosis prediction.
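The regression target this entry predicts, a patch-wise DSC, can be sketched directly (a 2D toy version, not the paper's 3D pipeline; the patch size, empty-patch convention, and arrays are assumptions):

```python
import numpy as np

def patchwise_dsc(pred, gt, patch=8):
    """Dice score per non-overlapping patch; a patch empty in both masks
    scores 1.0. Maps like this could serve as regression targets."""
    scores = {}
    for i in range(0, pred.shape[0], patch):
        for j in range(0, pred.shape[1], patch):
            p = pred[i:i + patch, j:j + patch].astype(bool)
            g = gt[i:i + patch, j:j + patch].astype(bool)
            denom = p.sum() + g.sum()
            scores[(i, j)] = 1.0 if denom == 0 else 2.0 * (p & g).sum() / denom
    return scores

gt = np.zeros((16, 16), dtype=int); gt[2:10, 2:10] = 1
pred = gt.copy(); pred[8:10, 8:10] = 0  # a small, local segmentation error
scores = patchwise_dsc(pred, gt)
worst = min(scores.values())            # the error surfaces in one patch
```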
Affiliation(s)
- Fahim Ahmed Zaman
  - University of Iowa, Department of Electrical and Computer Engineering, Iowa City, Iowa, United States
- Tarun Kanti Roy
  - University of Iowa, Department of Computer Science, Iowa City, Iowa, United States
- Milan Sonka
  - University of Iowa, Department of Electrical and Computer Engineering, Iowa City, Iowa, United States
- Xiaodong Wu
  - University of Iowa, Department of Electrical and Computer Engineering, Iowa City, Iowa, United States
58
McQuinlan Y, Brouwer CL, Lin Z, Gan Y, Sung Kim J, van Elmpt W, Gooding MJ. An investigation into the risk of population bias in deep learning autocontouring. Radiother Oncol 2023;186:109747. PMID: 37330053. DOI: 10.1016/j.radonc.2023.109747.
Abstract
BACKGROUND AND PURPOSE: To date, data used in the development of deep learning-based automatic contouring (DLC) algorithms have been largely sourced from single geographic populations. This study aimed to evaluate the risk of population-based bias by determining whether the performance of an autocontouring system is impacted by geographic population. MATERIALS AND METHODS: Eighty deidentified head-and-neck CT scans were collected from four clinics in Europe (n = 2) and Asia (n = 2). A single observer manually delineated 16 organs-at-risk in each. Subsequently, the data were contoured using a DLC solution trained on single-institution (European) data. Autocontours were compared with manual delineations using quantitative measures, and a Kruskal-Wallis test was used to test for differences between populations. Clinical acceptability of automatic and manual contours to observers from each participating institution was assessed using a blinded subjective evaluation. RESULTS: Seven organs showed a significant difference in volume between groups. Four organs showed statistical differences in quantitative similarity measures. The qualitative test showed greater variation in acceptance of contouring between observers than between data of different origins, with greater acceptance by the South Korean observers. CONCLUSION: Much of the statistical difference in quantitative performance could be explained by the difference in organ volume impacting the contour similarity measures, together with the small sample size. However, the qualitative assessment suggests that observer perception bias has a greater impact on apparent clinical acceptability than the quantitatively observed differences. This investigation of potential geographic bias should extend to more patients, populations, and anatomical regions in the future.
Affiliation(s)
- Charlotte L Brouwer
- University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands.
- Zhixiong Lin
- Shantou University Medical Centre, Guangdong, China.
- Yong Gan
- Shantou University Medical Centre, Guangdong, China.
- Jin Sung Kim
- Yonsei University Health System, Seoul, Republic of Korea.
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands.
- Mark J Gooding
- Mirada Medical Ltd, Oxford, United Kingdom; Inpictura Ltd, Oxford, United Kingdom.
59.
Zaman FA, Zhang L, Zhang H, Sonka M, Wu X. Segmentation quality assessment by automated detection of erroneous surface regions in medical images. Comput Biol Med 2023; 164:107324. [PMID: 37591161 PMCID: PMC10563140 DOI: 10.1016/j.compbiomed.2023.107324]
Abstract
Despite advances in deep learning-based semantic segmentation methods, which have achieved expert-level accuracy in many computer vision applications, the same general approaches may frequently fail in 3D medical image segmentation due to complex tissue structures, noisy acquisition, disease-related pathologies, and the lack of sufficiently large datasets with associated annotations. For expeditious diagnosis and quantitative image analysis in large-scale clinical trials, there is a compelling need to predict segmentation quality without ground truth. In this paper, we propose a deep learning framework to locate erroneous regions on the boundary surfaces of segmented objects for quality control and assessment of segmentation. A Convolutional Neural Network (CNN) is explored to learn boundary-related image features of multiple objects that can be used to identify location-specific segmentation inaccuracies. The predicted error locations can facilitate efficient user interaction for interactive image segmentation (IIS). We evaluated the proposed method on two datasets: Osteoarthritis Initiative (OAI) 3D knee MRI and 3D calf muscle MRI. Average sensitivity scores of 0.95 and 0.92, and average positive predictive values of 0.78 and 0.91, were achieved for erroneous surface region detection in knee cartilage segmentation and calf muscle segmentation, respectively. Our experiments demonstrated the promising performance of the proposed method for segmentation quality assessment by automated detection of erroneous surface regions in medical images.
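The reported detection metrics are simple ratios over detection counts. A toy sketch; the counts are fabricated, chosen only so that the knee-cartilage values from the abstract (sensitivity 0.95, PPV 0.78) are reproduced:

```python
# Illustrative sketch (not the paper's code): sensitivity and positive
# predictive value for detected erroneous surface regions, from
# true-positive / false-negative / false-positive counts.
def sensitivity(tp, fn):
    # fraction of actual erroneous regions that were detected
    return tp / (tp + fn)

def ppv(tp, fp):
    # fraction of detections that were actually erroneous
    return tp / (tp + fp)

# Made-up counts, chosen to reproduce the reported knee-cartilage values.
tp, fp, fn = 95, 27, 5
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.95
print(f"PPV         = {ppv(tp, fp):.2f}")          # 0.78
```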
Affiliation(s)
- Fahim Ahmed Zaman
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Lichun Zhang
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Honghai Zhang
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Milan Sonka
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Xiaodong Wu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA.
60.
Choi B, Olberg S, Park JC, Kim JS, Shrestha DK, Yaddanapudi S, Furutani KM, Beltran CJ. Technical note: Progressive deep learning: An accelerated training strategy for medical image segmentation. Med Phys 2023; 50:5075-5087. [PMID: 36763566 DOI: 10.1002/mp.16267]
Abstract
BACKGROUND Recent advancements in Deep Learning (DL) methodologies have led to state-of-the-art performance in a wide range of applications, especially in object recognition, classification, and segmentation of medical images. However, training modern DL models requires a large amount of computation and long training times due to the complex nature of network structures and the large number of training datasets involved. Moreover, selecting an optimized hyperparameter configuration for a given DL network is an intensive, repetitive manual process. PURPOSE In this study, we present a novel approach to accelerate the training of DL models for medical image segmentation via the progressive feeding of training datasets based on similarity measures. We term this approach Progressive Deep Learning (PDL). METHODS The two-stage PDL approach was tested on the auto-segmentation task for two imaging modalities: CT and MRI. The training samples were ranked according to inter-sample similarity measures based on Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Quality Image Index (UQI) values. At the start of the training process, a relatively coarse sampling of the higher-ranked training data was used to optimize the hyperparameters of the DL network. The higher-ranked samples were then used in step 1 to yield accelerated loss minimization in early training epochs, and the total dataset was added in step 2 for the remainder of training. RESULTS Our results demonstrate that the PDL approach can reduce the training time by nearly half (∼49%) and can predict segmentations (CT U-net/DenseNet dice coefficient: 0.9506/0.9508, MR U-net/DenseNet dice coefficient: 0.9508/0.9510) with no major statistical difference (Wilcoxon signed-rank test) compared to the conventional DL approach.
The total training times with a fixed cutoff at 0.95 DSC for the CT dataset using DenseNet and U-Net architectures, respectively, were 17 h, 20 min and 4 h, 45 min in the conventional case compared to 8 h, 45 min and 2 h, 20 min with PDL. For the MRI dataset, the total training times using the same architectures were 2 h, 54 min and 52 min in the conventional case and 1 h, 14 min and 25 min with PDL. CONCLUSION The proposed PDL training approach offers the ability to substantially reduce the training time for medical image segmentation while maintaining the performance achieved in the conventional case.
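The ranking idea behind PDL can be illustrated with two of the named similarity measures. A hedged sketch under my own assumptions (function names, toy images, and the choice of PSNR as the ranking key are illustrative, not the authors' code):

```python
# Sketch of the PDL ranking step: score every training image against a
# reference by MSE / PSNR, then feed the most-similar samples first.
import numpy as np

def mse(a, b):
    # Mean Square Error between two images
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB (infinite for identical images)
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64))
# Three toy "training images": varying noise level = varying similarity.
dataset = [np.clip(reference + rng.normal(0, s, (64, 64)), 0, 255)
           for s in (5, 40, 15)]

# Rank samples by PSNR against the reference, most similar first.
order = sorted(range(len(dataset)),
               key=lambda i: psnr(reference, dataset[i]), reverse=True)
print(order)  # least-noisy sample ranked first
```

In the paper's scheme, the top of this ordering would feed the early hyperparameter-tuning and step-1 epochs, with the full dataset joining in step 2.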
Affiliation(s)
- Byongsu Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sven Olberg
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Justin C Park
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Oncosoft Inc., Seoul, South Korea
- Deepak K Shrestha
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Keith M Furutani
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Chris J Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
61.
Wang J, Peng Y. MHL-Net: A Multistage Hierarchical Learning Network for Head and Neck Multiorgan Segmentation. IEEE J Biomed Health Inform 2023; 27:4074-4085. [PMID: 37171918 DOI: 10.1109/jbhi.2023.3275746]
Abstract
Accurate segmentation of head and neck organs at risk is crucial in radiotherapy. However, existing methods suffer from incomplete feature mining, insufficient information utilization, and difficulty in simultaneously improving the segmentation performance of small and large organs. In this paper, a multistage hierarchical learning network is designed to fully extract multidimensional features, combined with anatomical prior information and imaging features, using multistage subnetworks to improve segmentation performance. First, multilevel subnetworks are constructed for primary segmentation, localization, and fine segmentation by dividing organs into two levels, large and small. Each subnetwork has its own learning focus, with feature reuse and information sharing among them, comprehensively improving the segmentation performance of all organs. Second, an anatomical prior probability map and a boundary contour attention mechanism are developed to address the problem of complex anatomical shapes. Prior information and boundary contour features effectively assist in detecting and segmenting special shapes. Finally, a multidimensional combination attention mechanism is proposed to analyze axial, coronal, and sagittal information, capture spatial and channel features, and maximize the use of the structural information and semantic features of 3D medical images. Experimental results on several datasets showed that our method was competitive with state-of-the-art methods and improved the segmentation results for multiscale organs.
62.
Amjad A, Xu J, Thill D, Zhang Y, Ding J, Paulson E, Hall W, Erickson BA, Li XA. Deep learning auto-segmentation on multi-sequence magnetic resonance images for upper abdominal organs. Front Oncol 2023; 13:1209558. [PMID: 37483486 PMCID: PMC10358771 DOI: 10.3389/fonc.2023.1209558]
Abstract
Introduction Multi-sequence multi-parameter MRIs are often used to define targets and/or organs at risk (OAR) in radiation therapy (RT) planning. Deep learning has so far focused on developing auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep learning-based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods Using a previously developed 3DResUnet network, an mS-DLAS model was trained and tested on four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. Performance of all models was quantitatively evaluated using six surface and volumetric accuracy metrics. Results The developed DLAS models were able to generate reasonable contours of 12 upper-abdomen organs within 21 seconds per testing case. The 3D average values of Dice similarity coefficient (DSC), mean distance to agreement (MDA, mm), 95th-percentile Hausdorff distance (HD95%, mm), percent volume difference (PVD), surface DSC (sDSC), and relative added path length (rAPL, mm/cc) over all organs were 0.87, 1.79, 7.43, -8.95, 0.82, and 12.25, respectively, for the mS-DLAS model. Collectively, 71% of the contours auto-segmented by the three models were of relatively high quality. Additionally, the obtained mS-DLAS successfully segmented 9 out of 16 MRI sequences that were not used in model training. Conclusion We have developed an MRI-based mS-DLAS model for auto-segmenting upper abdominal organs on MRI. Multi-sequence segmentation is desirable in routine clinical RT practice for accurate organ and target delineation, particularly for abdominal tumors. Our work will act as a stepping stone toward fast and accurate segmentation on multi-contrast MRI and pave the way for MR-only guided radiation therapy.
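Two of the volumetric metrics reported above have short closed forms. An illustrative NumPy sketch on toy binary masks (my own minimal implementation, not the authors' evaluation code):

```python
# Minimal illustration of two volumetric metrics: Dice similarity
# coefficient (DSC) and percent volume difference (PVD), on toy 3D masks.
import numpy as np

def dice(a, b):
    # DSC = 2|A ∩ B| / (|A| + |B|); 1.0 for identical masks
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def pvd(auto, ref):
    # signed volume difference of the auto contour vs. the reference, in %
    return 100.0 * (auto.sum() - ref.sum()) / ref.sum()

ref = np.zeros((20, 20, 20), dtype=bool)
ref[5:15, 5:15, 5:15] = True        # 10x10x10 reference "organ"
auto = np.zeros_like(ref)
auto[5:15, 5:15, 4:14] = True       # same volume, shifted by one voxel

print(f"DSC = {dice(auto, ref):.2f}, PVD = {pvd(auto, ref):+.1f}%")
```

The shifted mask shows why both numbers are needed: its PVD is zero (identical volume) even though the DSC drops to 0.90 from the misalignment.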
Affiliation(s)
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Dan Thill
- Elekta Inc., ST. Charles, MO, United States
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Jie Ding
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- William Hall
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Beth A. Erickson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- X. Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
63.
Song Y, Hu J, Wang Q, Yu C, Su J, Chen L, Jiang X, Chen B, Zhang L, Yu Q, Li P, Wang F, Bai S, Luo Y, Yi Z. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin Transl Radiat Oncol 2023; 41:100635. [PMID: 37251619 PMCID: PMC10213188 DOI: 10.1016/j.ctro.2023.100635]
Abstract
Background To comprehensively investigate the behaviors of oncologists with different working experiences and institute group styles in deep learning-based organs-at-risk (OAR) contouring. Methods A deep learning-based contouring system (DLCS) was modeled from 188 CT datasets of patients with nasopharyngeal carcinoma (NPC) in institute A. Three institute oncology groups, A, B, and C, were included; each contained a beginner and an expert. For each of the 28 OARs, two trials were performed for ten test cases: manual contouring first and post-DLCS editing later. Contouring performance and group consistency were quantified by volumetric and surface Dice coefficients. A volume-based and a surface-based oncologist satisfaction rate (VOSR and SOSR) were defined to evaluate the oncologists' acceptance of DLCS. Results Based on DLCS, experience inconsistency was eliminated. Intra-institute consistency was eliminated for group C but still existed for groups A and B. Group C benefited most from DLCS, with the highest number of improved OARs (8 for volumetric Dice and 10 for surface Dice), followed by group B. Beginners obtained more improved OARs than experts (7 vs. 4 in volumetric Dice and 5 vs. 4 in surface Dice). VOSR and SOSR varied across institute groups, but the rates of beginners were all significantly higher than those of experts for OARs with experience-group significance. A remarkable positive linear relationship was found between VOSR and post-DLCS-editing volumetric Dice, with a coefficient of 0.78. Conclusions The DLCS was effective for various institutes, and beginners benefited more than experts.
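The reported linear relationship between satisfaction rate and post-editing Dice is a Pearson correlation. A toy sketch with fabricated paired values (not study data):

```python
# Illustrative Pearson correlation between a per-OAR acceptance rate and
# the corresponding post-edit volumetric Dice. Values are made up.
import numpy as np

post_edit_dice = np.array([0.80, 0.85, 0.88, 0.90, 0.93, 0.96])
acceptance_rate = np.array([0.55, 0.62, 0.70, 0.68, 0.81, 0.90])

r = np.corrcoef(post_edit_dice, acceptance_rate)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly positive for these toy values
```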
Affiliation(s)
- Ying Song
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Qiang Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Chengrong Yu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Jiachong Su
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Lin Chen
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Xiaorui Jiang
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Bo Chen
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Lei Zhang
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Qian Yu
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Ping Li
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Feng Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Sen Bai
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Yong Luo
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
64.
Kesävuori R, Kaseva T, Salli E, Raivio P, Savolainen S, Kangasniemi M. Deep learning-aided extraction of outer aortic surface from CT angiography scans of patients with Stanford type B aortic dissection. Eur Radiol Exp 2023; 7:35. [PMID: 37380806 DOI: 10.1186/s41747-023-00342-z]
Abstract
BACKGROUND Guidelines recommend that aortic dimension measurements in aortic dissection should include the aortic wall. This study aimed to evaluate two-dimensional (2D)- and three-dimensional (3D)-based deep learning approaches for extraction of outer aortic surface in computed tomography angiography (CTA) scans of Stanford type B aortic dissection (TBAD) patients and assess the speed of different whole aorta (WA) segmentation approaches. METHODS A total of 240 patients diagnosed with TBAD between January 2007 and December 2019 were retrospectively reviewed for this study; 206 CTA scans from 206 patients with acute, subacute, or chronic TBAD acquired with various scanners in multiple different hospital units were included. Ground truth (GT) WAs for 80 scans were segmented by a radiologist using open-source software. The remaining 126 GT WAs were generated via a semi-automatic segmentation process in which an ensemble of 3D convolutional neural networks (CNNs) aided the radiologist. Using 136 scans for training, 30 for validation, and 40 for testing, 2D and 3D CNNs were trained to automatically segment WA. Main evaluation metrics for outer surface extraction and segmentation accuracy were normalized surface Dice (NSD) and Dice coefficient score (DCS), respectively. RESULTS 2D CNN outperformed 3D CNN in NSD score (0.92 versus 0.90, p = 0.009), and both CNNs had equal DCS (0.96 versus 0.96, p = 0.110). Manual and semi-automatic segmentation times of one CTA scan were approximately 1 and 0.5 h, respectively. CONCLUSIONS Both CNNs segmented WA with high DCS, but based on NSD, better accuracy may be required before clinical application. CNN-based semi-automatic segmentation methods can expedite the generation of GTs. RELEVANCE STATEMENT Deep learning can speed up the creation of ground truth segmentations. CNNs can extract the outer aortic surface in patients with type B aortic dissection.
KEY POINTS • 2D and 3D convolutional neural networks (CNNs) can extract the outer aortic surface accurately. • Equal Dice coefficient score (0.96) was reached with 2D and 3D CNNs. • Deep learning can expedite the creation of ground truth segmentations.
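The normalized surface Dice used as the main extraction metric can be sketched in a simplified 2D form with SciPy distance transforms: the fraction of each mask's boundary lying within a tolerance of the other mask's boundary. This is my own minimal version of the metric, not the authors' implementation:

```python
# Simplified 2D normalized surface Dice (NSD) at tolerance tau, built
# from boundary extraction and Euclidean distance transforms.
import numpy as np
from scipy import ndimage

def boundary(mask):
    # boundary = mask voxels removed by one erosion step
    return mask & ~ndimage.binary_erosion(mask)

def surface_dice(a, b, tau=1.0):
    sa, sb = boundary(a), boundary(b)
    # distance from every pixel to the nearest boundary pixel of the other mask
    dist_to_b = ndimage.distance_transform_edt(~sb)
    dist_to_a = ndimage.distance_transform_edt(~sa)
    within = (dist_to_b[sa] <= tau).sum() + (dist_to_a[sb] <= tau).sum()
    return within / (sa.sum() + sb.sum())

a = np.zeros((30, 30), dtype=bool); a[5:25, 5:25] = True
b = np.zeros((30, 30), dtype=bool); b[6:26, 5:25] = True  # 1-pixel shift

print(f"NSD(tau=1) = {surface_dice(a, b, tau=1.0):.2f}")
```

With a one-pixel shift and a one-pixel tolerance every boundary point is matched, which is exactly why NSD can separate boundary accuracy from the volumetric overlap that DCS measures.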
Affiliation(s)
- Risto Kesävuori
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland.
- Tuomas Kaseva
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Peter Raivio
- Department of Cardiac Surgery, Heart and Lung Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
65.
Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023; 13:946. [PMID: 37373935 DOI: 10.3390/jpm13060946]
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs at risk (OARs), making radiotherapy (RT) planning, optimization, and delivery challenging. In this review, we provide a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on the OAR segmentation process. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool to automate the RT workflow for the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups, including clinicians and computer scientists.
Affiliation(s)
- Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
- Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
66.
Bakx N, van der Sangen M, Theuws J, Bluemink H, Hurkmans C. Comparison of the output of a deep learning segmentation model for locoregional breast cancer radiotherapy trained on 2 different datasets. Tech Innov Patient Support Radiat Oncol 2023; 26:100209. [PMID: 37213441 PMCID: PMC10199413 DOI: 10.1016/j.tipsro.2023.100209]
Abstract
Introduction The development of deep learning (DL) models for auto-segmentation is increasing, and more models are becoming commercially available. Commercial models are mostly trained on external data. To study the effect of using a model trained on external data compared to the same model trained on in-house collected data, the performance of these two DL models was evaluated. Methods The evaluation was performed using in-house collected data of 30 breast cancer patients. Quantitative analysis was performed using the Dice similarity coefficient (DSC), surface DSC (sDSC) and 95th percentile of Hausdorff Distance (95% HD). These values were compared with previously reported inter-observer variations (IOV). Results For a number of structures, statistically significant differences were found between the two models. For organs at risk, mean DSC values ranged from 0.63 to 0.98 and 0.71 to 0.96 for the in-house and external model, respectively. For target volumes, mean DSC values of 0.57 to 0.94 and 0.33 to 0.92 were found. The difference in 95% HD values between the two models ranged from 0.08 to 3.23 mm, except for CTVn4 with 9.95 mm. For the external model, both DSC and 95% HD fall outside the IOV range for CTVn4; for the in-house model, this is the case for the thyroid DSC. Conclusions Statistically significant differences were found between the two models, though mostly within published inter-observer variations, showing the clinical usefulness of both models. Our findings could encourage discussion and revision of existing guidelines, to further decrease inter-observer as well as inter-institute variability.
Affiliation(s)
- Nienke Bakx
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Jacqueline Theuws
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Hanneke Bluemink
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Coen Hurkmans
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Technical University Eindhoven, Faculties of Physics and Electrical Engineering, 5600MB Eindhoven, the Netherlands
67.
Busch F, Xu L, Sushko D, Weidlich M, Truhn D, Müller-Franzes G, Heimer MM, Niehues SM, Makowski MR, Hinsche M, Vahldiek JL, Aerts HJ, Adams LC, Bressem KK. Dual center validation of deep learning for automated multi-label segmentation of thoracic anatomy in bedside chest radiographs. Comput Methods Programs Biomed 2023; 234:107505. [PMID: 37003043 DOI: 10.1016/j.cmpb.2023.107505]
Abstract
BACKGROUND AND OBJECTIVES Bedside chest radiographs (CXRs) are challenging to interpret but important for monitoring cardiothoracic disease and invasive therapy devices in critical care and emergency medicine. Taking surrounding anatomy into account is likely to improve the diagnostic accuracy of artificial intelligence and bring its performance closer to that of a radiologist. Therefore, we aimed to develop a deep convolutional neural network for efficient automatic anatomy segmentation of bedside CXRs. METHODS To improve the efficiency of the segmentation process, we introduced a "human-in-the-loop" segmentation workflow with an active learning approach, looking at five major anatomical structures in the chest (heart, lungs, mediastinum, trachea, and clavicles). This allowed us to decrease the time needed for segmentation by 32% and select the most complex cases to utilize human expert annotators efficiently. After annotation of 2,000 CXRs from different Level 1 medical centers at Charité - University Hospital Berlin, there was no relevant improvement in model performance, and the annotation process was stopped. A 5-layer U-ResNet was trained for 150 epochs using a combined soft Dice similarity coefficient (DSC) and cross-entropy as a loss function. DSC, Jaccard index (JI), Hausdorff distance (HD) in mm, and average symmetric surface distance (ASSD) in mm were used to assess model performance. External validation was performed using an independent external test dataset from Aachen University Hospital (n = 20). RESULTS The final training, validation, and testing dataset consisted of 1900/50/50 segmentation masks for each anatomical structure. Our model achieved a mean DSC/JI/HD/ASSD of 0.93/0.88/32.1/5.8 for the lung, 0.92/0.86/21.65/4.85 for the mediastinum, 0.91/0.84/11.83/1.35 for the clavicles, 0.9/0.85/9.6/2.19 for the trachea, and 0.88/0.8/31.74/8.73 for the heart. Validation using the external dataset showed an overall robust performance of our algorithm. 
CONCLUSIONS Using an efficient computer-aided segmentation method with active learning, our anatomy-based model achieves comparable performance to state-of-the-art approaches. Instead of only segmenting the non-overlapping portions of the organs, as previous studies did, a closer approximation to actual anatomy is achieved by segmenting along the natural anatomical borders. This novel anatomy approach could be useful for developing pathology models for accurate and quantifiable diagnosis.
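The combined soft Dice plus cross-entropy loss named in the methods can be sketched in plain NumPy on foreground probabilities. The equal weighting and smoothing terms below are my own illustrative assumptions, not the paper's exact configuration:

```python
# Sketch of a combined soft Dice + binary cross-entropy loss on predicted
# foreground probabilities (1D toy example; assumed weights and epsilons).
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # 1 - soft Dice; eps stabilizes the empty-mask case
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    # binary cross-entropy with clipping to avoid log(0)
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def combined_loss(pred, target, w_dice=0.5, w_bce=0.5):
    return w_dice * soft_dice_loss(pred, target) + w_bce * bce_loss(pred, target)

target = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])   # confident, mostly correct
bad = np.array([0.2, 0.3, 0.9, 0.8])    # confidently wrong
print(combined_loss(good, target) < combined_loss(bad, target))  # True
```

Pairing an overlap term with cross-entropy is a common way to keep gradients informative on the heavily imbalanced foreground/background ratios typical of chest-radiograph masks.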
Affiliation(s)
- Felix Busch
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Department of Anesthesiology, Division of Operative Intensive Care Medicine, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany.
| | - Lina Xu
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Dmitry Sushko
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Matthias Weidlich
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
| | - Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
| | - Maurice M Heimer
- Department of Radiology, Ludwig-Maximilians-University of Munich, Munich, Germany
| | - Stefan M Niehues
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Marcus R Makowski
- Department of Radiology, Technical University of Munich, Munich, Germany
| | - Markus Hinsche
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Janis L Vahldiek
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
| | - Hugo Jwl Aerts
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Departments of Radiation Oncology and Radiology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
| | - Lisa C Adams
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
| | - Keno K Bressem
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
68
|
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:cancers15092573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023] Open
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Herewith is a summary of recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging that addresses the benefits and challenges of the resultant opportunities with examples. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, the vigorous assessment of quantitative CT and MR imaging data accuracy, and reliability for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in radiology and oncology fields. Herein, we will illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast modality images, auto-segmentation, and image reconstruction with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Collapse
Affiliation(s)
- Ramesh Paudyal
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Akash D Shah
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Oguz Akin
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Amaresha Shridhar Konar
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Vaios Hatzoglou
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Usman Mahmood
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Nancy Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Richard J Wong
- Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | | | | | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| | - Amita Shukla-Dave
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
| |
Collapse
|
69
|
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160 PMCID: PMC10115982 DOI: 10.3389/fonc.2023.1137803] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 03/24/2023] [Indexed: 04/09/2023] Open
Abstract
Introduction Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures), and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. Methods Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours from the MDAs and the DL model, were compared to the GS for that patient. Results Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for RO revision of the DL contours. Total time was reduced by 76% (95% confidence interval: 65%-88%) and RO-revision time by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to the RO-revised MDA contours, including volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance.
32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. Conclusion DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
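The 95% Hausdorff distance used in this trial is typically computed from surface points of the two contours; a brute-force illustrative sketch on point clouds (not the study's implementation, which would operate on mesh or voxel surfaces):

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two
    surface point clouds of shapes (N, D) and (M, D), brute force."""
    # Pairwise Euclidean distances between all points of A and B.
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # each point of B to its nearest point of A
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier surface points, which is why it is preferred over the plain Hausdorff distance in contour QA.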
Collapse
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
| | - Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
| | - Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
| | - Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
| | | | - Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
| | - Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
| | - Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
| | - Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | - Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | | | - Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Yojan Patel
- Google Health, Mountain View, CA, United States
| | - Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | | | - Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
| |
Collapse
|
70
|
de Vries L, Emmer BJ, Majoie CBLM, Marquering HA, Gavves E. PerfU-Net: Baseline infarct estimation from CT perfusion source data for acute ischemic stroke. Med Image Anal 2023; 85:102749. [PMID: 36731276 DOI: 10.1016/j.media.2023.102749] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 11/08/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
CT perfusion imaging is commonly used for infarct core quantification in acute ischemic stroke patients. The outcomes and perfusion maps of CT perfusion software, however, show many discrepancies between vendors. We aim to perform infarct core segmentation directly from CT perfusion source data using machine learning, excluding the need to use the perfusion maps from standard CT perfusion software. To this end, we present a symmetry-aware spatio-temporal segmentation model that encodes the micro-perfusion dynamics in the brain, while decoding a static segmentation map for infarct core assessment. Our proposed spatio-temporal PerfU-Net employs an attention module on the skip-connections to match the dimensions of the encoder and decoder. We train and evaluate the method on 94 and 62 scans, respectively, using the Ischemic Stroke Lesion Segmentation (ISLES) 2018 challenge data. We achieve state-of-the-art results compared to methods that only use CT perfusion source imaging with a Dice score of 0.46. We are almost on par with methods that use perfusion maps from third party software, whilst it is known that there is a large variation in these perfusion maps from various vendors. Moreover, we achieve improved performance compared to simple perfusion map analysis, which is used in clinical practice.
Collapse
Affiliation(s)
- Lucas de Vries
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC, Department of Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; University of Amsterdam, Informatics Institute, Science Park 900, Amsterdam, 1098 XH, The Netherlands.
| | - Bart J Emmer
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
| | - Charles B L M Majoie
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
| | - Henk A Marquering
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC, Department of Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
| | - Efstratios Gavves
- University of Amsterdam, Informatics Institute, Science Park 900, Amsterdam, 1098 XH, The Netherlands
| |
Collapse
|
71
|
Naser MA, Wahid KA, Ahmed S, Salama V, Dede C, Edwards BW, Lin R, McDonald B, Salzillo TC, He R, Ding Y, Abdelaal MA, Thill D, O'Connell N, Willcut V, Christodouleas JP, Lai SY, Fuller CD, Mohamed ASR. Quality assurance assessment of intra-acquisition diffusion-weighted and T2-weighted magnetic resonance imaging registration and contour propagation for head and neck cancer radiotherapy. Med Phys 2023; 50:2089-2099. [PMID: 36519973 PMCID: PMC10121748 DOI: 10.1002/mp.16128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 11/10/2022] [Accepted: 11/13/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND/PURPOSE Adequate image registration of anatomical and functional magnetic resonance imaging (MRI) scans is necessary for MR-guided head and neck cancer (HNC) adaptive radiotherapy planning. Despite the quantitative capabilities of diffusion-weighted imaging (DWI) MRI for treatment plan adaptation, geometric distortion remains a considerable limitation. Therefore, we systematically investigated various deformable image registration (DIR) methods to co-register DWI and T2-weighted (T2W) images. MATERIALS/METHODS We compared three commercial (ADMIRE, Velocity, Raystation) and three open-source (Elastix with default settings [Elastix Default], Elastix with parameter set 23 [Elastix 23], Demons) post-acquisition DIR methods applied to T2W and DWI MRI images acquired during the same imaging session in twenty immobilized HNC patients. In addition, we used the non-registered images (None) as a control comparator. Ground-truth segmentations of radiotherapy structures (tumour and organs at risk) were generated by a physician expert on both image sequences. For each registration approach, structures were propagated from T2W to DWI images. These propagated structures were then compared with ground-truth DWI structures using the Dice similarity coefficient and mean surface distance. RESULTS 19 left submandibular glands, 18 right submandibular glands, 20 left parotid glands, 20 right parotid glands, 20 spinal cords, and 12 tumours were delineated. Most DIR methods took <30 s to execute per case, with the exception of Elastix 23, which took ∼458 s per case. ADMIRE and Elastix 23 demonstrated improved performance over None for all metrics and structures (Bonferroni-corrected p < 0.05), while the other methods did not. Moreover, ADMIRE and Elastix 23 significantly improved performance in individual and pooled analysis compared to all other methods.
CONCLUSIONS The ADMIRE DIR method offers improved geometric performance with reasonable execution time so should be favoured for registering T2W and DWI images acquired during the same scan session in HNC patients. These results are important to ensure the appropriate selection of registration strategies for MR-guided radiotherapy.
Collapse
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Vivian Salama
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Benjamin W Edwards
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Travis C Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Yao Ding
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Moamen Abobakr Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | | | | | | | | | - Stephen Y Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| |
Collapse
|
72
|
Wahid KA, Lin D, Sahin O, Cislo M, Nelms BE, He R, Naser MA, Duke S, Sherer MV, Christodouleas JP, Mohamed ASR, Murphy JD, Fuller CD, Gillespie EF. Large scale crowdsourced radiotherapy segmentations across a variety of cancer anatomic sites. Sci Data 2023; 10:161. [PMID: 36949088 PMCID: PMC10033824 DOI: 10.1038/s41597-023-02062-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 03/10/2023] [Indexed: 03/24/2023] Open
Abstract
Clinician generated segmentation of tumor and healthy tissue regions of interest (ROIs) on medical images is crucial for radiotherapy. However, interobserver segmentation variability has long been considered a significant detriment to the implementation of high-quality and consistent radiotherapy dose delivery. This has prompted the increasing development of automated segmentation approaches. However, extant segmentation datasets typically only provide segmentations generated by a limited number of annotators with varying, and often unspecified, levels of expertise. In this data descriptor, numerous clinician annotators manually generated segmentations for ROIs on computed tomography images across a variety of cancer sites (breast, sarcoma, head and neck, gynecologic, gastrointestinal; one patient per cancer site) for the Contouring Collaborative for Consensus in Radiation Oncology challenge. In total, over 200 annotators (experts and non-experts) contributed using a standardized annotation platform (ProKnow). Subsequently, we converted Digital Imaging and Communications in Medicine data into Neuroimaging Informatics Technology Initiative format with standardized nomenclature for ease of use. In addition, we generated consensus segmentations for experts and non-experts using the Simultaneous Truth and Performance Level Estimation method. These standardized, structured, and easily accessible data are a valuable resource for systematically studying variability in segmentation applications.
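The STAPLE method named above estimates rater reliabilities with an EM algorithm; the simplest baseline it improves on is per-voxel majority voting across annotators, which can be sketched in a few lines (illustrative only, not the descriptor's pipeline):

```python
import numpy as np

def majority_vote(masks: np.ndarray) -> np.ndarray:
    """Consensus of R binary rater masks stacked as (R, H, W):
    a voxel is foreground when at least half of the raters marked it.
    (STAPLE additionally weights raters by estimated sensitivity and
    specificity via EM; this is only the unweighted baseline.)"""
    return (masks.mean(axis=0) >= 0.5).astype(np.uint8)
```

With many annotators of mixed expertise, as in this challenge, rater weighting matters: majority voting treats an expert and a novice contour identically, which is precisely the limitation STAPLE addresses.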
Collapse
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Diana Lin
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Onur Sahin
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Michael Cislo
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | | | - Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Mohammed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Simon Duke
- Department of Radiation Oncology, Cambridge University Hospitals, Cambridge, UK
| | - Michael V Sherer
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
| | - John P Christodouleas
- Department of Radiation Oncology, The University of Pennsylvania Cancer Center, Philadelphia, PA, USA
- Elekta, Atlanta, GA, USA
| | - Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - James D Murphy
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
| | - Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA.
| | - Erin F Gillespie
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Fred Hutchinson Cancer Center, Seattle, WA, USA.
| |
Collapse
|
73
|
Zhong Y, Guo Y, Fang Y, Wu Z, Wang J, Hu W. Geometric and dosimetric evaluation of deep learning based auto-segmentation for clinical target volume on breast cancer. J Appl Clin Med Phys 2023:e13951. [PMID: 36920901 DOI: 10.1002/acm2.13951] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 02/09/2023] [Accepted: 02/12/2023] [Indexed: 03/16/2023] Open
Abstract
BACKGROUND Recently, target auto-segmentation techniques based on deep learning (DL) have shown promising results. However, inaccurate target delineation will directly affect the treatment planning dose distribution and the outcome of subsequent radiotherapy. Evaluation based on geometric metrics alone may not be sufficient to assess target delineation accuracy. The purpose of this paper is to validate the performance of automatic segmentation with dosimetric metrics and to construct new geometric evaluation metrics to comprehensively understand the dose-response relationship from the perspective of clinical application. MATERIALS AND METHODS A DL-based target segmentation model was developed using 186 manually delineated modified radical mastectomy breast cancer cases. The resulting DL model was used to generate alternative target contours in a new set of 48 patients. The Auto-plan was reoptimized to ensure the same optimization parameters as the reference Manual-plan. To assess the dosimetric impact of target auto-segmentation, not only common geometric metrics but also new spatial parameters with distance and relative volume (R_V) to target were used. Correlations between segmentation evaluation metrics and dosimetric changes were assessed using Spearman's correlation. RESULTS Only strong (|R2| > 0.6, p < 0.01) or moderate (|R2| > 0.4, p < 0.01) Pearson correlation was established between the traditional geometric metrics and three dosimetric evaluation indices for the target (conformity index, homogeneity index, and mean dose). For organs at risk (OARs), an inferior or no significant relationship was found between geometric parameters and dosimetric differences. Furthermore, we found that the OAR dose distribution was affected by the boundary error of target segmentation rather than by distance and R_V to target. CONCLUSIONS Current geometric metrics could reflect a certain degree of the dose effect of target variation.
To identify target contour variations that do lead to OAR dosimetry changes, clinically oriented metrics that more accurately reflect how segmentation quality affects dosimetry should be constructed.
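Spearman's correlation, used above to relate segmentation metrics to dosimetric changes, is simply the Pearson correlation of the ranks; a minimal sketch without tie handling (illustrative, not the study's statistics code):

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks.
    This sketch assumes no tied values (ties need midranks)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))  # rank of each element
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])
```

Because it works on ranks, Spearman's rho captures monotone but non-linear relationships between a geometric metric and a dose index, which is why it is a common choice for this kind of analysis.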
Collapse
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Ying Guo
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Zhiqiang Wu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| |
Collapse
|
74
|
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer of the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches are focused on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. Maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases, or 75%) and test Set 2 (14 cases, or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames correspond to the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN.
Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Collapse
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| | | | | | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
75
|
Huiskes M, Astreinidou E, Kong W, Breedveld S, Heijmen B, Rasch C. Dosimetric impact of adaptive proton therapy in head and neck cancer - A review. Clin Transl Radiat Oncol 2023; 39:100598. [PMID: 36860581 PMCID: PMC9969246 DOI: 10.1016/j.ctro.2023.100598] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 02/10/2023] [Accepted: 02/12/2023] [Indexed: 02/18/2023] Open
Abstract
Background Intensity Modulated Proton Therapy (IMPT) in head and neck cancer (HNC) is susceptible to anatomical changes and patient set-up inaccuracies during the radiotherapy course, which can cause discrepancies between the planned and delivered dose. These discrepancies can be counteracted by adaptive replanning strategies. This article reviews the observed dosimetric impact of adaptive proton therapy (APT) and the timing of plan adaptation in IMPT in HNC. Methods A literature search of articles published in PubMed/MEDLINE, EMBASE and Web of Science from January 2010 to March 2022 was performed. Among a total of 59 records assessed for eligibility, ten articles were included in this review. Results Included studies reported target coverage deterioration in IMPT plans during the RT course, which was recovered with the application of an APT approach. All APT plans showed, on average, improved target coverage for the high- and low-dose targets compared to the accumulated dose of the planned plans. Dose improvements of up to 2.5 Gy (3.5%) and up to 4.0 Gy (7.1%) in the D98 of the high- and low-dose targets were observed with APT. Doses to the organs at risk (OARs) remained equal or decreased slightly after APT was applied. In the included studies, APT was largely performed once, which resulted in the largest target coverage improvement, although additional adaptations improved the target coverage further. No data indicate the most appropriate timing for APT. Conclusion APT during IMPT for HNC patients improves target coverage. The largest improvement in target coverage was found with a single adaptive intervention; a second or more frequent APT application improved the target coverage further. Doses to the OARs remained equal or decreased slightly after applying APT. The optimal timing for APT is yet to be determined.
Affiliation(s)
- Merle Huiskes: Department of Radiation Oncology, Leiden University Medical Center, Leiden, the Netherlands
- Eleftheria Astreinidou: Department of Radiation Oncology, Leiden University Medical Center, Leiden, the Netherlands
- Wens Kong: Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Sebastiaan Breedveld: Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Ben Heijmen: Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, the Netherlands
- Coen Rasch: Department of Radiation Oncology, Leiden University Medical Center, Leiden, the Netherlands; HollandPTC, Delft, the Netherlands
76
Sahlsten J, Jaskari J, Wahid KA, Ahmed S, Glerean E, He R, Kann BH, Mäkitie A, Fuller CD, Naser MA, Kaski K. Application of simultaneous uncertainty quantification for image segmentation with probabilistic deep learning: Performance benchmarking of oropharyngeal cancer target delineation as a use-case. medRxiv: The Preprint Server for Health Sciences 2023:2023.02.20.23286188. [PMID: 36865296 PMCID: PMC9980236 DOI: 10.1101/2023.02.20.23286188] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/26/2023]
Abstract
Background Oropharyngeal cancer (OPC) is a widespread disease, with radiotherapy being a core treatment modality. Manual segmentation of the primary gross tumor volume (GTVp) is currently employed for OPC radiotherapy planning but is subject to significant interobserver variability. Deep learning (DL) approaches have shown promise in automating GTVp segmentation, but comparative (auto)confidence metrics of these models' predictions have not been well explored. Quantifying instance-specific DL model uncertainty is crucial to improving clinician trust and facilitating broad clinical implementation. Therefore, in this study, probabilistic DL models for GTVp auto-segmentation were developed using large-scale PET/CT datasets, and various uncertainty auto-estimation methods were systematically investigated and benchmarked. Methods We utilized the publicly available 2021 HECKTOR Challenge training dataset with 224 co-registered PET/CT scans of OPC patients with corresponding GTVp segmentations as a development set. A separate set of 67 co-registered PET/CT scans of OPC patients with corresponding GTVp segmentations was used for external validation. Two approximate Bayesian deep learning methods, the MC Dropout Ensemble and the Deep Ensemble, both with five submodels, were evaluated for GTVp segmentation and uncertainty performance. Segmentation performance was evaluated using the volumetric Dice similarity coefficient (DSC), mean surface distance (MSD), and 95th-percentile Hausdorff distance (95HD). Uncertainty was evaluated using four measures from the literature: coefficient of variation (CV), structure expected entropy, structure predictive entropy, and structure mutual information, as well as our novel Dice-risk measure.
The utility of uncertainty information was evaluated by the accuracy of uncertainty-based segmentation performance prediction, using the Accuracy vs Uncertainty (AvU) metric, and by examining the linear correlation between uncertainty estimates and DSC. In addition, batch-based and instance-based referral processes were examined, in which patients with high uncertainty were rejected from the set. In the batch referral process, the area under the referral curve with DSC (R-DSC AUC) was used for evaluation, whereas in the instance referral process, the DSC at various uncertainty thresholds was examined. Results Both models behaved similarly in terms of segmentation performance and uncertainty estimation. Specifically, the MC Dropout Ensemble had 0.776 DSC, 1.703 mm MSD, and 5.385 mm 95HD; the Deep Ensemble had 0.767 DSC, 1.717 mm MSD, and 5.477 mm 95HD. The uncertainty measure with the highest DSC correlation was structure predictive entropy, with correlation coefficients of 0.699 and 0.692 for the MC Dropout Ensemble and the Deep Ensemble, respectively. The highest AvU value was 0.866 for both models. The best-performing uncertainty measure for both models was the CV, with R-DSC AUC of 0.783 and 0.782 for the MC Dropout Ensemble and Deep Ensemble, respectively. When patients were referred using uncertainty thresholds derived at 0.85 validation DSC for all uncertainty measures, the DSC improved over the full dataset by 4.7% and 5.0% on average, while referring 21.8% and 22.0% of patients, for the MC Dropout Ensemble and Deep Ensemble, respectively. Conclusion We found that many of the investigated methods provide overall similar but distinct utility in terms of predicting segmentation quality and referral performance. These findings are a critical first step toward more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
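Several of the uncertainty measures named above (predictive entropy, expected entropy, mutual information, coefficient of variation) can be computed directly from an ensemble's per-voxel foreground probabilities. A minimal, hedged sketch in plain Python — function names are illustrative and not taken from the paper's code:

```python
import math

def binary_entropy(p, eps=1e-12):
    """Entropy of a Bernoulli(p) prediction, in nats."""
    p = min(max(p, eps), 1 - eps)
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def voxel_uncertainties(member_probs):
    """member_probs: foreground probabilities for one voxel,
    one entry per ensemble submodel."""
    mean_p = sum(member_probs) / len(member_probs)
    predictive = binary_entropy(mean_p)                      # entropy of the mean
    expected = sum(binary_entropy(p) for p in member_probs) / len(member_probs)
    mutual_info = predictive - expected                      # disagreement term
    return predictive, expected, mutual_info

def volume_cv(member_volumes):
    """Coefficient of variation of the segmented volume across submodels."""
    mean_v = sum(member_volumes) / len(member_volumes)
    var = sum((v - mean_v) ** 2 for v in member_volumes) / len(member_volumes)
    return math.sqrt(var) / mean_v

# Five submodels that disagree strongly on one voxel: high predictive
# entropy and positive mutual information flag it as uncertain.
pred, exp_h, mi = voxel_uncertainties([0.9, 0.1, 0.9, 0.1, 0.5])
print(round(pred, 3), round(mi, 3))
```

Structure-level scores as used in the paper would aggregate these voxel quantities over a predicted structure; that aggregation step is omitted here.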
Affiliation(s)
- Jaakko Sahlsten: Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Joel Jaskari: Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kareem A Wahid: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Enrico Glerean: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Renjie He: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Benjamin H Kann: Artificial Intelligence in Medicine Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Antti Mäkitie: Department of Otorhinolaryngology, Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Clifton D Fuller: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Mohamed A Naser: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kimmo Kaski: Department of Computer Science, Aalto University School of Science, Espoo, Finland
77
De Biase A, Sijtsema NM, van Dijk LV, Langendijk JA, van Ooijen PMA. Deep learning aided oropharyngeal cancer segmentation with adaptive thresholding for predicted tumor probability in FDG PET and CT images. Phys Med Biol 2023; 68. [PMID: 36749988 DOI: 10.1088/1361-6560/acb9cf] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 02/07/2023] [Indexed: 02/09/2023]
Abstract
Objective. Tumor segmentation is a fundamental step in radiotherapy treatment planning. To define an accurate segmentation of the primary tumor (GTVp) of oropharyngeal cancer (OPC) patients, each image volume is explored slice by slice from different orientations on different image modalities. However, the fixed boundary of manual segmentation neglects the spatial uncertainty known to occur in tumor delineation. This study proposes a novel deep learning-based method that generates probability maps capturing the model uncertainty in the segmentation task. Approach. We included 138 OPC patients treated with (chemo)radiation in our institute. Sequences of 3 consecutive 2D slices of concatenated FDG-PET/CT images and GTVp contours were used as input. Our framework exploits inter- and intra-slice context using attention mechanisms and bi-directional long short-term memory (Bi-LSTM). Each slice resulted in three predictions that were averaged. A 3-fold cross-validation was performed on sequences extracted from the axial, sagittal, and coronal planes. 3D volumes were reconstructed, and single- and multi-view ensembling were performed to obtain final results. The output is a tumor probability map determined by averaging multiple predictions. Main Results. Model performance was assessed on 25 patients at different probability thresholds. Predictions were closest to the GTVp at a threshold of 0.9 (mean surface DSC of 0.81, median HD95 of 3.906 mm). Significance. The promising results of the proposed method show that it is possible to offer probability maps to radiation oncologists to guide them in a slice-by-slice adaptive GTVp segmentation.
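The paper's output is a per-voxel tumor probability map built by averaging predictions from multiple views, which a clinician can then threshold. A hedged sketch of just that averaging-and-thresholding step (illustrative only; the actual model is an attention-based Bi-LSTM network, not reproduced here):

```python
def ensemble_probability(view_maps):
    """Average per-voxel probabilities over views (e.g. axial,
    sagittal, coronal). Each map is a flat list of equal length."""
    n = len(view_maps)
    return [sum(vals) / n for vals in zip(*view_maps)]

def threshold_mask(prob_map, threshold=0.9):
    """Binarize a probability map; the paper found 0.9 closest to the GTVp."""
    return [1 if p >= threshold else 0 for p in prob_map]

axial    = [0.95, 0.80, 0.20, 0.05]
sagittal = [0.97, 0.92, 0.30, 0.10]
coronal  = [0.99, 0.95, 0.10, 0.00]
probs = ensemble_probability([axial, sagittal, coronal])
print(threshold_mask(probs))  # -> [1, 0, 0, 0]
```

Lowering the threshold grows the mask toward lower-confidence voxels, which is what makes the map usable for adaptive, slice-by-slice review.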
Affiliation(s)
- Alessia De Biase: Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Nanna M Sijtsema: Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Lisanne V van Dijk: Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- Johannes A Langendijk: Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands
- Peter M A van Ooijen: Department of Radiation Oncology, University Medical Center Groningen, Groningen, 9700RB, The Netherlands; Data Science Center in Health (DASH), University Medical Center Groningen, Groningen, 9700RB, The Netherlands
78
Zhao Q, Wang G, Lei W, Fu H, Qu Y, Lu J, Zhang S, Zhang S. Segmentation of multiple Organs-at-Risk associated with brain tumors based on coarse-to-fine stratified networks. Med Phys 2023. [PMID: 36762594 DOI: 10.1002/mp.16247] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 12/10/2022] [Accepted: 12/27/2022] [Indexed: 02/11/2023] Open
Abstract
BACKGROUND Delineation of Organs-at-Risk (OARs) is an important step in radiotherapy treatment planning. As manual delineation is time-consuming, labor-intensive and affected by inter- and intra-observer variability, a robust and efficient automatic segmentation algorithm is highly desirable for improving the efficiency and repeatability of OAR delineation. PURPOSE Automatic segmentation of OARs in medical images is challenged by low contrast and the various shapes and imbalanced sizes of different organs. We aim to overcome these challenges and develop a high-performance method for automatic segmentation of 10 OARs required in radiotherapy planning for brain tumors. METHODS A novel two-stage segmentation framework is proposed, in which a coarse and simultaneous localization of all target organs is obtained in the first stage and a fine segmentation is achieved for each organ in the second stage. To deal with organs of various sizes and shapes, a stratified segmentation strategy is proposed: a High- and Low-Resolution Residual Network (HLRNet), consisting of a multiresolution branch and a high-resolution branch, segments medium-sized organs, while a High-Resolution Residual Network (HRRNet) segments small organs. In addition, a label fusion strategy is proposed to better handle symmetric pairs of organs such as the left and right cochleas and lacrimal glands. RESULTS Our method was validated on the dataset of the MICCAI ABCs 2020 challenge for OAR segmentation. It obtained an average Dice of 75.8% for 10 OARs and significantly outperformed several state-of-the-art models, including nnU-Net (71.6%) and FocusNet (72.4%). The proposed HLRNet and HRRNet improved segmentation accuracy for medium-sized and small organs, respectively. The label fusion strategy led to higher accuracy for symmetric pairs of organs.
CONCLUSIONS Our proposed method is effective for segmentation of the OARs of brain tumors, with better performance than existing methods, especially for medium-sized and small organs. It has the potential to improve the efficiency of radiotherapy planning with high segmentation accuracy.
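The two-stage idea in this paper — locate every organ coarsely, then segment each one on a tight crop — follows a general crop-and-refine pattern. A hedged, minimal sketch of that pattern (this is not the authors' HLRNet/HRRNet code):

```python
def bounding_box(mask, margin=1):
    """Bounding box (row0, row1, col0, col1), inclusive, of the nonzero
    region of a 2D mask, expanded by a safety margin."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        raise ValueError("empty coarse mask")
    r0, r1 = max(rows[0] - margin, 0), min(rows[-1] + margin, len(mask) - 1)
    c0, c1 = max(cols[0] - margin, 0), min(cols[-1] + margin, len(mask[0]) - 1)
    return r0, r1, c0, c1

def crop(image, box):
    """Cut the patch that a fine-stage network would then segment."""
    r0, r1, c0, c1 = box
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

# Stage 1 produced a rough organ mask; stage 2 runs only on the patch.
coarse = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
box = bounding_box(coarse, margin=0)
print(box)  # -> (1, 1, 1, 2)
```

Cropping keeps small organs from being swamped by background voxels, which is the motivation the abstract gives for the stratified second stage.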
Affiliation(s)
- Qianfei Zhao: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Guotai Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
- Wenhui Lei: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hao Fu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yijie Qu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jiangshan Lu: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shichuan Zhang: Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
- Shaoting Zhang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
79
Baroudi H, Brock KK, Cao W, Chen X, Chung C, Court LE, El Basha MD, Farhat M, Gay S, Gronberg MP, Gupta AC, Hernandez S, Huang K, Jaffray DA, Lim R, Marquez B, Nealon K, Netherton TJ, Nguyen CM, Reber B, Rhee DJ, Salazar RM, Shanker MD, Sjogreen C, Woodland M, Yang J, Yu C, Zhao Y. Automated Contouring and Planning in Radiation Therapy: What Is 'Clinically Acceptable'? Diagnostics (Basel) 2023; 13:diagnostics13040667. [PMID: 36832155 PMCID: PMC9955359 DOI: 10.3390/diagnostics13040667] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 01/21/2023] [Accepted: 01/30/2023] [Indexed: 02/12/2023] Open
Abstract
Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess the clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, each with its own advantages and limitations. The approach chosen may depend on the goal of the study as well as on the available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining the clinical acceptability of new autocontouring and planning tools.
Affiliation(s)
- Hana Baroudi: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kristy K. Brock: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wenhua Cao: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xinru Chen: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Caroline Chung: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Laurence E. Court (correspondence): Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mohammad D. El Basha: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Maguy Farhat: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Skylar Gay: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Mary P. Gronberg: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Aashish Chandra Gupta: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA; Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Soleil Hernandez: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kai Huang: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- David A. Jaffray: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rebecca Lim: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Barbara Marquez: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Kelly Nealon: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Tucker J. Netherton: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Callistus M. Nguyen: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandon Reber: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA; Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Dong Joo Rhee: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ramon M. Salazar: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Mihir D. Shanker: The University of Queensland, Saint Lucia 4072, Australia; The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carlos Sjogreen: Department of Physics, University of Houston, Houston, TX 77004, USA
- McKell Woodland: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Computer Science, Rice University, Houston, TX 77005, USA
- Jinzhong Yang: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Cenji Yu: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Zhao: Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
80
Ali S, Jha D, Ghatwary N, Realdon S, Cannizzaro R, Salem OE, Lamarque D, Daul C, Riegler MA, Anonsen KV, Petlund A, Halvorsen P, Rittscher J, de Lange T, East JE. A multi-centre polyp detection and segmentation dataset for generalisability assessment. Sci Data 2023; 10:75. [PMID: 36746950 PMCID: PMC9902556 DOI: 10.1038/s41597-023-01981-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 01/23/2023] [Indexed: 02/08/2023] Open
Abstract
Polyps in the colon are widely known cancer precursors identified by colonoscopy. Whilst most polyps are benign, their number, size and surface structure are linked to the risk of colon cancer. Several methods have been developed to automate polyp detection and segmentation. However, the main issue is that they are not tested rigorously on a large, multicentre, purpose-built dataset, one reason being the lack of a comprehensive public dataset. As a result, the developed methods may not generalise to different population datasets. To this end, we have curated a dataset from six unique centres incorporating more than 300 patients. The dataset includes both single-frame and sequence data with 3762 annotated polyp labels with precise delineation of polyp boundaries, verified by six senior gastroenterologists. To our knowledge, this is the most comprehensive detection and pixel-level segmentation dataset (referred to as PolypGen) curated by a team of computational scientists and expert gastroenterologists. The paper provides insight into data construction and annotation strategies, quality assurance, and technical validation.
Affiliation(s)
- Sharib Ali: School of Computing, University of Leeds, LS2 9JT, Leeds, United Kingdom; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, OX3 7DQ, Oxford, United Kingdom; Oxford National Institute for Health Research Biomedical Research Centre, OX4 2PG, Oxford, United Kingdom
- Debesh Jha: SimulaMet, Pilestredet 52, 0167, Oslo, Norway; Department of Computer Science, UiT The Arctic University of Norway, Hansine Hansens veg 18, 9019, Tromsø, Norway; Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Noha Ghatwary: Computer Engineering Department, Arab Academy for Science and Technology, Smart Village, Giza, Egypt
- Stefano Realdon: Oncological Gastroenterology, Centro di Riferimento Oncologico di Aviano (CRO), IRCCS, 2, 33081, Aviano, PN, Italy
- Renato Cannizzaro: Oncological Gastroenterology, Centro di Riferimento Oncologico di Aviano (CRO), IRCCS, 2, 33081, Aviano, PN, Italy; Department of Medical, Surgical and Health Sciences, University of Trieste, 34127, Trieste, Italy
- Osama E Salem: Faculty of Medicine, University of Alexandria, 21131, Alexandria, Egypt
- Dominique Lamarque: Université de Versailles St-Quentin en Yvelines, Hôpital Ambroise Paré, 9 Av. Charles de Gaulle, 92100, Boulogne-Billancourt, France
- Christian Daul: CRAN UMR 7039, Université de Lorraine and CNRS, F-54010, Vandœuvre-Lès-Nancy, France
- Michael A Riegler: SimulaMet, Pilestredet 52, 0167, Oslo, Norway; Department of Computer Science, UiT The Arctic University of Norway, Hansine Hansens veg 18, 9019, Tromsø, Norway
- Kim V Anonsen: Oslo University Hospital Ullevål, Kirkeveien 166, 0450, Oslo, Norway
- Pål Halvorsen: SimulaMet, Pilestredet 52, 0167, Oslo, Norway; Oslo Metropolitan University, Pilestredet 46, 0167, Oslo, Norway
- Jens Rittscher: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, OX3 7DQ, Oxford, United Kingdom; Oxford National Institute for Health Research Biomedical Research Centre, OX4 2PG, Oxford, United Kingdom
- Thomas de Lange: Augere Medical, Nedre Vaskegang 6, 0186, Oslo, Norway; Medical Department, Sahlgrenska University Hospital-Mölndal, Blå stråket 5, 413 45, Göteborg, Sweden; Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, 41345, Göteborg, Sweden
- James E East: Oxford National Institute for Health Research Biomedical Research Centre, OX4 2PG, Oxford, United Kingdom; Translational Gastroenterology Unit, Experimental Medicine Div., John Radcliffe Hospital, University of Oxford, OX3 9DU, Oxford, United Kingdom
81
Lin D, Wahid KA, Nelms BE, He R, Naser MA, Duke S, Sherer MV, Christodouleas JP, Mohamed ASR, Cislo M, Murphy JD, Fuller CD, Gillespie EF. E pluribus unum: prospective acceptability benchmarking from the Contouring Collaborative for Consensus in Radiation Oncology crowdsourced initiative for multiobserver segmentation. J Med Imaging (Bellingham) 2023; 10:S11903. [PMID: 36761036 PMCID: PMC9907021 DOI: 10.1117/1.jmi.10.s1.s11903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 01/02/2023] [Indexed: 02/11/2023] Open
Abstract
Purpose Contouring Collaborative for Consensus in Radiation Oncology (C3RO) is a crowdsourced challenge engaging radiation oncologists across various expertise levels in segmentation. An obstacle to artificial intelligence (AI) development is the paucity of multiexpert datasets; consequently, we sought to characterize whether aggregate segmentations generated from multiple nonexperts could meet or exceed recognized expert agreement. Approach Participants who contoured ≥1 region of interest (ROI) for the breast, sarcoma, head and neck (H&N), gynecologic (GYN), or gastrointestinal (GI) cases were identified as nonexperts or recognized experts. Cohort-specific ROIs were combined into single simultaneous truth and performance level estimation (STAPLE) consensus segmentations. STAPLE nonexpert ROIs were evaluated against STAPLE expert contours using the Dice similarity coefficient (DSC). The expert interobserver DSC (IODSC_expert) was calculated as an acceptability threshold between STAPLE nonexpert and STAPLE expert. To determine the number of nonexperts required to match the IODSC_expert for each ROI, a single consensus contour was generated using variable numbers of nonexperts and then compared to the IODSC_expert. Results For all cases, the DSC values for STAPLE nonexpert versus STAPLE expert were higher than the comparator expert IODSC_expert for most ROIs. The minimum number of nonexpert segmentations needed for a consensus ROI to achieve the IODSC_expert acceptability criterion ranged between 2 and 4 for breast, 3 and 5 for sarcoma, 3 and 5 for H&N, 3 and 5 for GYN, and 3 for GI. Conclusions Multiple nonexpert-generated consensus ROIs met or exceeded expert-derived acceptability thresholds. Five nonexperts could potentially generate consensus segmentations for most ROIs with performance approximating experts, suggesting nonexpert segmentations are feasible, cost-effective AI inputs.
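The study combines multiple observers' contours into a consensus and scores it with the Dice similarity coefficient. As a hedged sketch, simple per-voxel majority voting stands in below for STAPLE (which is an EM-based estimator and considerably more involved); the DSC function matches the standard definition:

```python
def majority_vote(masks):
    """Consensus of binary masks by per-voxel majority vote.
    (The paper uses STAPLE; majority voting is a simpler stand-in
    shown here only for illustration.)"""
    n = len(masks)
    return [1 if sum(v) * 2 > n else 0 for v in zip(*masks)]

def dice(a, b):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2 * inter / total

raters = [[1, 1, 0, 0],
          [1, 1, 1, 0],
          [1, 0, 0, 0]]
consensus = majority_vote(raters)
expert    = [1, 1, 0, 0]
print(consensus, dice(consensus, expert))  # -> [1, 1, 0, 0] 1.0
```

Sweeping the number of raters fed into the consensus, and checking when its DSC against the expert consensus crosses the expert interobserver DSC, mirrors the study's acceptability analysis.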
Affiliation(s)
- Diana Lin: Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States
- Kareem A. Wahid: The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Renjie He: The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Mohammed A. Naser: The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Simon Duke: Cambridge University Hospitals, Department of Radiation Oncology, Cambridge, United Kingdom
- Michael V. Sherer: University of California San Diego, Department of Radiation Medicine and Applied Sciences, La Jolla, California, United States
- John P. Christodouleas: The University of Pennsylvania Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States; Elekta AB, Stockholm, Sweden
- Abdallah S. R. Mohamed: The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Michael Cislo: Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States
- James D. Murphy: University of California San Diego, Department of Radiation Medicine and Applied Sciences, La Jolla, California, United States
- Clifton D. Fuller: The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Erin F. Gillespie: Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States; University of Washington Fred Hutchinson Cancer Center, Department of Radiation Oncology, Seattle, Washington, United States
82
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric used and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently: in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Editing time was considered in only 11 (9.4%) papers. A single manual contour was used as the ground-truth comparator in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework for deciding the most appropriate metrics.
This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
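Of the geometric measures tallied above, the Dice Similarity Coefficient is by far the most common; on binary masks it reduces to a few lines. A minimal NumPy sketch (toy masks, not taken from any of the reviewed papers):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: a 4-voxel auto-contour vs a 6-voxel manual contour
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:3] = True           # 4 voxels
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:4] = True         # 6 voxels, 4 of them overlapping
score = dice(auto, manual)      # 2*4 / (4+6) = 0.8
```

As the review notes, this purely geometric score says nothing about dosimetric or clinical impact; it is only the overlap term.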
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK.
| | - D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| | - B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
| | - K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
| | - A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| |
|
83
|
Peng Y, Liu Y, Shen G, Chen Z, Chen M, Miao J, Zhao C, Deng J, Qi Z, Deng X. Improved accuracy of auto-segmentation of organs at risk in radiotherapy planning for nasopharyngeal carcinoma based on fully convolutional neural network deep learning. Oral Oncol 2023; 136:106261. [PMID: 36446186 DOI: 10.1016/j.oraloncology.2022.106261] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 11/13/2022] [Accepted: 11/19/2022] [Indexed: 11/27/2022]
Abstract
OBJECTIVE We examined a modified encoder-decoder architecture-based fully convolutional neural network, OrganNet, for simultaneous auto-segmentation of 24 organs at risk (OARs) in the head and neck, followed by validation tests and evaluation of clinical application. MATERIALS AND METHODS Computed tomography (CT) images from 310 radiotherapy plans were used as the experimental data set, of which 260 and 50 were used as the training and test sets, respectively. An improved U-Net architecture was established by introducing a batch normalization layer, a residual squeeze-and-excitation layer, and a unique organ-specific loss function for deep learning training. The performance of the trained network model was evaluated by comparison with both the manual delineations and the STAPLE consensus contour of 10 physicians from different centers. RESULTS Our model achieved good segmentation of all 24 OARs in nasopharyngeal cancer radiotherapy planning CT images, with an average Dice similarity coefficient of 83.75%. Specifically, the mean Dice coefficients were 84.97%-95.00% in large-volume organs (brainstem, spinal cord, left/right parotid glands, left/right temporal lobes, and left/right mandible) and 55.46%-91.56% in small-volume organs (pituitary, lens, optic nerve, and optic chiasm). Using the STAPLE contours as the reference standard, OrganNet achieved Dice coefficients comparable to or better than those of manual delineation. CONCLUSION The established OrganNet enables simultaneous automatic segmentation of multiple targets on CT images for head and neck radiotherapy plans and effectively improves the accuracy of U-Net-based segmentation of OARs, especially small-volume organs.
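The STAPLE consensus used above as a reference standard is an expectation-maximization algorithm that weights each rater by estimated sensitivity and specificity. A plain per-voxel majority vote is a much-simplified stand-in that conveys the idea of a multi-rater consensus contour (the 3-rater toy data are illustrative only):

```python
import numpy as np

def majority_vote(masks):
    """Per-voxel strict-majority consensus across raters' binary masks.

    Simplified stand-in for STAPLE: STAPLE proper iteratively estimates
    per-rater sensitivity/specificity via EM instead of counting votes.
    """
    stack = np.stack([np.asarray(m).astype(bool) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)  # > half the raters agree

# Three hypothetical raters contouring the same 3-voxel row
r1 = np.array([[1, 1, 0]])
r2 = np.array([[1, 0, 0]])
r3 = np.array([[1, 1, 1]])
consensus = majority_vote([r1, r2, r3])
# voxel 0: 3/3 votes, voxel 1: 2/3 votes, voxel 2: 1/3 votes
```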
Affiliation(s)
- Yinglin Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
| | - Yimei Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Guanzhu Shen
- Department of Radiation Oncology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Zijie Chen
- Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
| | - Meining Chen
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Jingjing Miao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Chong Zhao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Jincheng Deng
- Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
| | - Zhenyu Qi
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China.
| | - Xiaowu Deng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China.
| |
|
84
|
Weissmann T, Huang Y, Fischer S, Roesch J, Mansoorian S, Ayala Gaona H, Gostian AO, Hecht M, Lettmaier S, Deloch L, Frey B, Gaipl US, Distel LV, Maier A, Iro H, Semrau S, Bert C, Fietkau R, Putz F. Deep learning for automatic head and neck lymph node level delineation provides expert-level accuracy. Front Oncol 2023; 13:1115258. [PMID: 36874135 PMCID: PMC9978473 DOI: 10.3389/fonc.2023.1115258] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Accepted: 01/30/2023] [Indexed: 02/18/2023] Open
Abstract
Background Deep learning-based head and neck lymph node level (HN_LNL) autodelineation is of high relevance to radiotherapy research and clinical treatment planning but is still underinvestigated in the academic literature. In particular, there is no publicly available open-source solution for large-scale autosegmentation of HN_LNL in the research setting. Methods An expert-delineated cohort of 35 planning CTs was used for training of an nnU-Net 3D-fullres/2D-ensemble model for autosegmentation of 20 different HN_LNL. A second cohort acquired at the same institution later in time served as the test set (n = 20). In a completely blinded evaluation, 3 clinical experts rated the quality of deep learning autosegmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared to the average deep learning autosegmentation accuracy on the original and recontoured sets of expert segmentations. A postprocessing step to adjust the craniocaudal boundaries of level autosegmentations to the CT slice plane was introduced, and the effect of autocontour consistency with CT slice plane orientation on geometric accuracy and expert rating was investigated. Results Blinded expert ratings for deep learning segmentations and expert-created contours were not significantly different. Deep learning segmentations with slice plane adjustment were rated numerically higher (mean, 81.0 vs. 79.6, p = 0.185) and deep learning segmentations without slice plane adjustment were rated numerically lower (77.2 vs. 79.6, p = 0.167) than manually drawn contours. In a head-to-head comparison, deep learning segmentations with CT slice plane adjustment were rated significantly better than deep learning contours without slice plane adjustment (81.0 vs. 77.2, p = 0.004). Geometric accuracy of deep learning segmentations was not different from intraobserver variability (mean Dice per level, 0.76 vs. 0.77, p = 0.307). The clinical significance of contour consistency with CT slice plane orientation was not captured by geometric accuracy metrics (volumetric Dice, 0.78 vs. 0.78, p = 0.703). Conclusions We show that an nnU-Net 3D-fullres/2D-ensemble model can be used for highly accurate autodelineation of HN_LNL using only a limited training dataset, making it ideally suited for large-scale standardized autodelineation of HN_LNL in the research setting. Geometric accuracy metrics are only an imperfect surrogate for blinded expert rating.
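One way the craniocaudal slice-plane adjustment evaluated above could be realized is by trimming thin partial end slices of a level mask so that its superior/inferior boundaries fall on full CT slice planes. This is an assumption-laden illustration of the idea, not the paper's actual postprocessing:

```python
import numpy as np

def trim_partial_end_slices(mask, frac=0.5):
    """Hypothetical craniocaudal cleanup for a 3D boolean mask (z, y, x):
    drop leading/trailing axial slices whose in-plane area is below
    frac * median occupied-slice area, leaving interior slices untouched.
    Sketch only; the paper's slice-plane adjustment may differ.
    """
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1)
    occupied = np.nonzero(areas)[0]
    if occupied.size == 0:
        return mask
    keep = areas >= frac * np.median(areas[occupied])
    out = mask.copy()
    for z in occupied:            # trim thin slices at the inferior end
        if keep[z]:
            break
        out[z] = False
    for z in occupied[::-1]:      # trim thin slices at the superior end
        if keep[z]:
            break
        out[z] = False
    return out

# Toy mask: thin partial slices at both craniocaudal ends
mask = np.zeros((5, 2, 2), dtype=bool)
mask[0, 0, 0] = True   # area 1 (partial)
mask[1] = True         # area 4 (full)
mask[2] = True         # area 4 (full)
mask[3, 0, 0] = True   # area 1 (partial)
trimmed = trim_partial_end_slices(mask)
```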
Affiliation(s)
- Thomas Weissmann
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Stefan Fischer
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Johannes Roesch
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Sina Mansoorian
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Horacio Ayala Gaona
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Antoniu-Oreste Gostian
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Markus Hecht
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Sebastian Lettmaier
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Lisa Deloch
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
| | - Benjamin Frey
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
| | - Udo S Gaipl
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
| | - Luitpold Valentin Distel
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Heinrich Iro
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany; Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Sabine Semrau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Florian Putz
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| |
|
85
|
Groendahl AR, Huynh BN, Tomic O, Søvik Å, Dale E, Malinen E, Skogmo HK, Futsaether CM. Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning. Front Vet Sci 2023; 10:1143986. [PMID: 37026102 PMCID: PMC10070749 DOI: 10.3389/fvets.2023.1143986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 03/01/2023] [Indexed: 04/08/2023] Open
Abstract
Background Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) cross-species transfer learning, where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated with a four-fold cross-validation strategy in which each fold was used as a validation set and test set once in independent model runs. Results CNN models trained from scratch on canine data or with transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. Conclusion Deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
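The four-fold scheme described above, in which each fold serves as the validation set and as the test set once across independent runs, can be sketched in a few lines of plain Python (the exact fold assignment used in the study may differ):

```python
def rotating_folds(n_items, n_folds=4):
    """Cross-validation plan where each fold is the test set in one run
    and the validation set in another, with the rest used for training.
    Sketch of the rotation idea; patient indices here are illustrative.
    """
    folds = [list(range(i, n_items, n_folds)) for i in range(n_folds)]
    runs = []
    for k in range(n_folds):
        test = folds[k]
        val = folds[(k + 1) % n_folds]  # next fold validates this run
        train = [i for i in range(n_items)
                 if i not in test and i not in val]
        runs.append({"train": train, "val": val, "test": test})
    return runs

runs = rotating_folds(8)  # e.g. 8 patients split into 4 rotating folds
```

Every index lands in exactly one test set across the four runs, so test metrics cover the whole cohort without leakage into that run's training data.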
Affiliation(s)
- Aurora Rosvoll Groendahl
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
| | - Bao Ngoc Huynh
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
| | - Oliver Tomic
- Faculty of Science and Technology, Department of Data Science, Norwegian University of Life Sciences, Ås, Norway
| | - Åste Søvik
- Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
| | - Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
| | - Eirik Malinen
- Department of Physics, University of Oslo, Oslo, Norway
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
| | - Hege Kippenes Skogmo
- Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
| | - Cecilia Marie Futsaether
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- *Correspondence: Cecilia Marie Futsaether
| |
|
86
|
Fathi Kazerooni A, Arif S, Madhogarhia R, Khalili N, Haldar D, Bagheri S, Familiar AM, Anderson H, Haldar S, Tu W, Chul Kim M, Viswanathan K, Muller S, Prados M, Kline C, Vidal L, Aboian M, Storm PB, Resnick AC, Ware JB, Vossough A, Davatzikos C, Nabavizadeh A. Automated tumor segmentation and brain tissue extraction from multiparametric MRI of pediatric brain tumors: A multi-institutional study. Neurooncol Adv 2023; 5:vdad027. [PMID: 37051331 PMCID: PMC10084501 DOI: 10.1093/noajnl/vdad027] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023] Open
Abstract
Background Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning, and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans. Methods Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients ( n = 215 internal and n = 29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to identify the brain tissue and four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training ( n = 151), validation ( n = 43), and withheld internal test ( n = 21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts. Results The Dice similarity score (median ± SD) was 0.91 ± 0.10/0.88 ± 0.16 for the whole tumor, 0.73 ± 0.27/0.84 ± 0.29 for ET, 0.79 ± 0.19/0.74 ± 0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98 ± 0.02 for brain tissue in the internal/external test sets, respectively. Conclusions Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurements.
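The composite regions evaluated above (whole tumor; union of NET, CC, and ED) are simple unions over a subregion label map. A NumPy sketch with an assumed label convention (1=ET, 2=NET, 3=CC, 4=ED is illustrative, not necessarily the study's encoding):

```python
import numpy as np

# Assumed label convention for illustration: 0=background,
# 1=ET, 2=NET, 3=CC, 4=ED
def whole_tumor(labelmap):
    return labelmap > 0

def non_enhancing_union(labelmap):
    # union of NET, CC and ED, as evaluated in the study
    return np.isin(labelmap, [2, 3, 4])

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * (a & b).sum() / s

# Toy 1D "volumes" standing in for 3D label maps
pred = np.array([0, 1, 2, 3, 4, 0])
ref  = np.array([0, 1, 2, 0, 4, 4])
wt_dice = dice(whole_tumor(pred), whole_tumor(ref))          # 0.75
ne_dice = dice(non_enhancing_union(pred), non_enhancing_union(ref))
```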
Affiliation(s)
- Anahita Fathi Kazerooni
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Sherjeel Arif
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Rachel Madhogarhia
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Nastaran Khalili
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Debanjan Haldar
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Institute of Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Sina Bagheri
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Ariana M Familiar
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Hannah Anderson
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Shuvanjan Haldar
- Department of Biomedical Engineering, Rutgers University, New Brunswick, NJ, USA
| | - Wenxin Tu
- College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA
| | - Meen Chul Kim
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Karthik Viswanathan
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Sabine Muller
- Department of Neurology and Pediatrics, University of California San Francisco, San Francisco, CA, USA
| | - Michael Prados
- Department of Neurological Surgery and Pediatrics, University of California San Francisco, San Francisco, CA, USA
| | - Cassie Kline
- Division of Oncology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Lorenna Vidal
- Division of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Mariam Aboian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
| | - Phillip B Storm
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Adam C Resnick
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Jeffrey B Ware
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Arastoo Vossough
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Division of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
| | - Christos Davatzikos
- AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Ali Nabavizadeh
- Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| |
|
87
|
Jin D, Guo D, Ge J, Ye X, Lu L. Towards automated organs at risk and target volumes contouring: Defining precision radiation therapy in the modern era. Journal of the National Cancer Center 2022; 2:306-313. [PMID: 39036546 PMCID: PMC11256697 DOI: 10.1016/j.jncc.2022.09.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 09/06/2022] [Accepted: 09/27/2022] [Indexed: 12/05/2022] Open
Abstract
Precision radiotherapy is a critical and indispensable cancer treatment means in the modern clinical workflow, with the goal of achieving "quality-up and cost-down" in patient care. The challenge of this therapy lies in developing computerized clinical-assistant solutions with precision, automation, and reproducibility built in, so that it can be delivered at scale. In this work, we provide a comprehensive, though necessarily incomplete, survey of and discussion on the recent progress in utilizing advanced deep learning, semantic organ parsing, multimodal imaging fusion, neural architecture search, and medical image analysis techniques to address four cornerstone problems or sub-problems required by all precision radiotherapy workflows, namely, organs at risk (OARs) segmentation, gross tumor volume (GTV) segmentation, metastasized lymph node (LN) detection, and clinical target volume (CTV) segmentation. Without loss of generality, we mainly focus on esophageal and head-and-neck cancers as examples, but the methods can be extrapolated to other types of cancers. High-precision, automated, and highly reproducible OAR/GTV/LN/CTV auto-delineation techniques have demonstrated their effectiveness in reducing inter-practitioner variability and time cost, permitting rapid treatment planning and adaptive replanning for the benefit of patients. Through the presentation of the achievements and limitations of these techniques in this review, we hope to encourage more collective multidisciplinary precision radiotherapy workflows to emerge.
Affiliation(s)
- Dakai Jin
- DAMO Academy, Alibaba Group, New York, United States
| | - Dazhou Guo
- DAMO Academy, Alibaba Group, New York, United States
| | - Jia Ge
- Department of Radiation Oncology, The First Affiliated Hospital of Zhejiang University, Hangzhou, China
| | - Xianghua Ye
- Department of Radiation Oncology, The First Affiliated Hospital of Zhejiang University, Hangzhou, China
| | - Le Lu
- DAMO Academy, Alibaba Group, New York, United States
| |
|
88
|
Deep learning-based two-step organs at risk auto-segmentation model for brachytherapy planning in parotid gland carcinoma. J Contemp Brachytherapy 2022; 14:527-535. [PMID: 36819465 PMCID: PMC9924151 DOI: 10.5114/jcb.2022.123972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 12/02/2022] [Indexed: 01/18/2023] Open
Abstract
Purpose Delineation of organs at risk (OARs) represents a crucial step both for tailored delivery of radiation doses and for prevention of radiation-induced toxicity in brachytherapy. Given the lack of studies on auto-segmentation methods in head and neck cancers, our study proposed a deep learning-based two-step approach for auto-segmentation of organs at risk in parotid carcinoma brachytherapy. Material and methods Computed tomography images of 200 patients with parotid gland carcinoma were used to train and evaluate our in-house developed two-step 3D nnU-Net-based model for OARs auto-segmentation. OARs during brachytherapy were defined as the auricula, condyle process, skin, mastoid process, external auditory canal, and mandibular ramus. Auto-segmentation results were compared to those of manual segmentation by expert oncologists. Accuracy was quantitatively evaluated in terms of the Dice similarity coefficient (DSC), Jaccard index, 95th-percentile Hausdorff distance (95HD), and precision and recall. Qualitative evaluation of the auto-segmentation results was also performed. Results The mean DSC values of the OARs were 0.88, 0.91, 0.75, 0.89, 0.74, and 0.93, respectively, indicating close resemblance of the auto-segmentation results to the manual contours. In addition, auto-segmentation could be completed within a minute, compared with manual segmentation, which required over 20 minutes. All generated results were deemed clinically acceptable. Conclusions Our proposed deep learning-based two-step OARs auto-segmentation model demonstrated high efficiency and good agreement with the gold-standard manual contours. This novel approach thus has the potential to expedite brachytherapy treatment planning for parotid gland cancers while allowing more accurate radiation delivery to minimize toxicity.
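The 95th-percentile Hausdorff distance (95HD) used above softens the classic Hausdorff maximum, which is dominated by single outlier points, by taking the 95th percentile of the symmetric surface-to-surface distances. A brute-force sketch for small point sets (toy coordinates, illustrative only):

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile Hausdorff distance between two point sets
    (e.g. surface voxel coordinates). Brute-force pairwise distances;
    fine for small sets, too slow for dense 3D surfaces.
    """
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # each point in B to its nearest point in A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Two parallel 3-point "surfaces", one voxel apart
surface_a = [(0, 0), (0, 1), (0, 2)]
surface_b = [(1, 0), (1, 1), (1, 2)]
distance = hd95(surface_a, surface_b)  # every nearest distance is 1.0
```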
|
89
|
Cubero L, Castelli J, Simon A, de Crevoisier R, Acosta O, Pascau J. Deep Learning-Based Segmentation of Head and Neck Organs-at-Risk with Clinical Partially Labeled Data. Entropy (Basel, Switzerland) 2022; 24:e24111661. [PMID: 36421515 PMCID: PMC9689629 DOI: 10.3390/e24111661] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/28/2022] [Accepted: 11/09/2022] [Indexed: 06/06/2023]
Abstract
Radiotherapy is one of the main treatments for localized head and neck (HN) cancer. To design a personalized treatment with reduced radio-induced toxicity, accurate delineation of organs at risk (OARs) is a crucial step. Manual delineation is time-consuming and labor-intensive, as well as observer-dependent. Deep learning (DL) based segmentation has proven to overcome some of these limitations, but requires large databases of homogeneously contoured image sets for robust training. However, these are not easily obtained from standard clinical protocols, as the OARs delineated may vary depending on the patient's tumor site and specific treatment plan. This results in incomplete or partially labeled data. This paper presents a solution for training a robust DL-based automated segmentation tool by exploiting a clinical partially labeled dataset. We propose a two-step workflow for OAR segmentation: first, we developed longitudinal OAR-specific 3D segmentation models for pseudo-contour generation, completing the missing contours for some patients; with all OARs available, we then trained a multi-class 3D convolutional neural network (nnU-Net) for final OAR segmentation. Results obtained on 44 independent datasets showed superior performance of the proposed methodology for the segmentation of fifteen OARs, with an average Dice similarity coefficient and surface Dice similarity coefficient of 80.59% and 88.74%, respectively. We demonstrated that the model can be straightforwardly integrated into the clinical workflow for standard and adaptive radiotherapy.
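The first step of the workflow described above, completing partially labeled cases with pseudo-contours so every case carries all OARs before multi-class training, can be sketched as follows (the `predict(case, oar)` interface stands in for an OAR-specific segmentation model and is hypothetical):

```python
def complete_labels(partial, predict):
    """Fill missing OAR contours with model-generated pseudo-contours.

    `partial` maps case id -> {oar name: contour}; `predict(case, oar)`
    is a stand-in for an OAR-specific model. Manual contours are kept;
    only absent OARs are filled in. Sketch of the idea, not the
    authors' implementation.
    """
    all_oars = sorted({oar for labels in partial.values() for oar in labels})
    completed = {}
    for case, labels in partial.items():
        filled = dict(labels)
        for oar in all_oars:
            if oar not in filled:
                filled[oar] = predict(case, oar)  # pseudo-contour
        completed[case] = filled
    return completed

partial = {"pt1": {"parotid_l": "manual"},
           "pt2": {"parotid_l": "manual", "mandible": "manual"}}
completed = complete_labels(partial, lambda case, oar: "pseudo")
```

With the gaps filled, every case presents the same label set, which is what the final multi-class network needs for training.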
Affiliation(s)
- Lucía Cubero
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
| | - Joël Castelli
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
| | - Antoine Simon
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
| | - Renaud de Crevoisier
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
| | - Oscar Acosta
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
| | - Javier Pascau
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
| |
|
90
|
Wahid KA, Xu J, El-Habashy D, Khamis Y, Abobakr M, McDonald B, O'Connell N, Thill D, Ahmed S, Sharafi CS, Preston K, Salzillo TC, Mohamed ASR, He R, Cho N, Christodouleas J, Fuller CD, Naser MA. Deep-learning-based generation of synthetic 6-minute MRI from 2-minute MRI for use in head and neck cancer radiotherapy. Front Oncol 2022; 12:975902. [PMID: 36425548 PMCID: PMC9679225 DOI: 10.3389/fonc.2022.975902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 10/21/2022] [Indexed: 11/10/2022] Open
Abstract
Background Quick magnetic resonance imaging (MRI) scans with low contrast-to-noise ratio are typically acquired for daily MRI-guided radiotherapy setup. However, for patients with head and neck (HN) cancer, these images are often insufficient for discriminating target volumes and organs at risk (OARs). In this study, we investigated a deep learning (DL) approach to generate high-quality synthetic images from low-quality images. Methods We used 108 unique HN image sets of paired 2-minute T2-weighted scans (2mMRI) and 6-minute T2-weighted scans (6mMRI). Ninety image sets (~20,000 slices) were used to train a 2-dimensional generative adversarial DL model that utilized 2mMRI as input and 6mMRI as output. Eighteen image sets were used to test model performance. Similarity metrics, including the mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), were calculated between normalized synthetic 6mMRI and ground-truth 6mMRI for all test cases. In addition, a previously trained OAR DL auto-segmentation model was used to segment the right parotid gland, left parotid gland, and mandible on all test case images. Dice similarity coefficients (DSC) were calculated between 2mMRI and either ground-truth 6mMRI or synthetic 6mMRI for each OAR; two one-sided t-tests were applied between the ground-truth and synthetic 6mMRI to determine equivalence. Finally, a visual Turing test using paired ground-truth and synthetic 6mMRI was performed by three clinician observers; the percentage of images that were correctly identified was compared to random chance using proportion equivalence tests. Results The median similarity metrics across the whole images were 0.19, 0.93, and 33.14 for MSE, SSIM, and PSNR, respectively. The median DSCs comparing ground-truth vs. synthetic 6mMRI auto-segmented OARs were 0.86 vs. 0.85, 0.84 vs. 0.84, and 0.82 vs. 0.85 for the right parotid gland, left parotid gland, and mandible, respectively (equivalence p<0.05 for all OARs). The percentage of images correctly identified was equivalent to chance (p<0.05 for all observers). Conclusions Using 2mMRI inputs, we demonstrate that DL-generated synthetic 6mMRI outputs have high similarity to ground-truth 6mMRI, but further improvements can be made. Our study facilitates the clinical incorporation of synthetic MRI in MRI-guided radiotherapy.
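The MSE and PSNR similarity metrics used to compare synthetic and ground-truth images follow standard definitions; the sketch below is illustrative only (not the study's pipeline), with intensities assumed normalized to [0, 1]:

```python
import math


def mse(img_x, img_y):
    """Mean squared error between two equally sized images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(img_x, img_y)) / len(img_x)


def psnr(img_x, img_y, max_val=1.0):
    """Peak signal-to-noise ratio in dB for intensities in [0, max_val]."""
    err = mse(img_x, img_y)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / err)


# Toy 4-pixel "images": lower MSE / higher PSNR means greater similarity
ground_truth = [0.0, 0.5, 1.0, 0.25]
synthetic = [0.1, 0.5, 0.9, 0.25]
print(round(mse(ground_truth, synthetic), 4))   # 0.005
print(round(psnr(ground_truth, synthetic), 2))  # 23.01
```

SSIM additionally compares local luminance, contrast, and structure statistics and is usually taken from an imaging library rather than written by hand.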
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Dina El-Habashy
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Clinical Oncology and Nuclear Medicine, Menoufia University, Shebin Elkom, Egypt
- Yomna Khamis
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Clinical Oncology and Nuclear Medicine, Faculty of Medicine, Alexandria University, Alexandria, Egypt
- Moamen Abobakr
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Christina Setareh Sharafi
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kathryn Preston
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Travis C. Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Abdallah S. R. Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States

91
Shi F, Hu W, Wu J, Han M, Wang J, Zhang W, Zhou Q, Zhou J, Wei Y, Shao Y, Chen Y, Yu Y, Cao X, Zhan Y, Zhou XS, Gao Y, Shen D. Deep learning empowered volume delineation of whole-body organs-at-risk for accelerated radiotherapy. Nat Commun 2022; 13:6566. [PMID: 36323677 PMCID: PMC9630370 DOI: 10.1038/s41467-022-34257-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Received: 04/21/2022] [Accepted: 10/19/2022] [Indexed: 11/05/2022]
Abstract
In radiotherapy for cancer patients, an indispensable process is to delineate organs-at-risk (OARs) and tumors. However, it is the most time-consuming step, as manual delineation is always required from radiation oncologists. Herein, we propose a lightweight deep learning framework for radiotherapy treatment planning (RTP), named RTP-Net, to promote an automatic, rapid, and precise initialization of whole-body OARs and tumors. Briefly, the framework implements a cascade coarse-to-fine segmentation, with an adaptive module for both small and large organs, and attention mechanisms for organs and boundaries. Our experiments show three merits: 1) extensive evaluation on 67 delineation tasks on a large-scale dataset of 28,581 cases; 2) comparable or superior accuracy, with an average Dice of 0.95; 3) near real-time delineation (<2 s) in most tasks. This framework could be utilized to accelerate the contouring process in the All-in-One radiotherapy scheme, and thus greatly shorten the turnaround time of patients.
Affiliation(s)
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Miaofei Han
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jingjie Zhou
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yanbo Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yue Yu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaohuan Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yaozong Gao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China

92
Steybe D, Poxleitner P, Metzger MC, Brandenburg LS, Schmelzeisen R, Bamberg F, Tran PH, Kellner E, Reisert M, Russe MF. Automated segmentation of head CT scans for computer-assisted craniomaxillofacial surgery applying a hierarchical patch-based stack of convolutional neural networks. Int J Comput Assist Radiol Surg 2022; 17:2093-2101. [PMID: 35665881 PMCID: PMC9515026 DOI: 10.1007/s11548-022-02673-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 12/30/2021] [Accepted: 05/03/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Computer-assisted techniques play an important role in craniomaxillofacial surgery. As segmentation of three-dimensional medical imaging represents a cornerstone for these procedures, the present study aimed to investigate a deep learning approach for automated segmentation of head CT scans. METHODS The deep learning approach of this study was based on the patchwork toolbox, using a multiscale stack of 3D convolutional neural networks. The images were split into nested patches using a fixed 3D matrix size with decreasing physical size in a pyramid format of four scale depths. Manual segmentation of 18 craniomaxillofacial structures was performed in 20 CT scans, of which 15 were used for the training of the deep learning network and five were used for validation of the results of automated segmentation. Segmentation accuracy was evaluated by Dice similarity coefficient (DSC), surface DSC, 95% Hausdorff distance (95HD) and average symmetric surface distance (ASSD). RESULTS Mean DSC was 0.81 ± 0.13 (range: 0.61 [mental foramen] to 0.98 [mandible]). Mean surface DSC was 0.94 ± 0.06 (range: 0.87 [mental foramen] to 0.99 [mandible]), with values > 0.9 for all structures but the mental foramen. Mean 95HD was 1.93 ± 2.05 mm (range: 1.00 mm [mandible] to 4.12 mm [maxillary sinus]) and mean ASSD was 0.42 ± 0.44 mm (range: 0.09 mm [mandible] to 1.19 mm [mental foramen]), with values < 1 mm for all structures but the mental foramen. CONCLUSION In this study, high accuracy of automated segmentation of a variety of craniomaxillofacial structures could be demonstrated, suggesting this approach is suitable for incorporation into a computer-assisted craniomaxillofacial surgery workflow. The small amount of training data required and the flexibility of an open source-based network architecture enable a broad variety of clinical and research applications.
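The 95% Hausdorff distance and average symmetric surface distance used above are standard surface metrics. The sketch below illustrates the idea on small 2D point sets, with a simplified nearest-index percentile rule; it is not the study's implementation:

```python
def directed_distances(points_a, points_b):
    """For each point in A, Euclidean distance to the nearest point in B."""
    return [min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in points_b)
            for ax, ay in points_a]


def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two contours,
    a robust variant of the Hausdorff metric (simplified percentile rule)."""
    dists = sorted(directed_distances(points_a, points_b) +
                   directed_distances(points_b, points_a))
    idx = max(0, int(round(0.95 * (len(dists) - 1))))
    return dists[idx]


def assd(points_a, points_b):
    """Average symmetric surface distance."""
    dists = (directed_distances(points_a, points_b) +
             directed_distances(points_b, points_a))
    return sum(dists) / len(dists)


# Two "contours": a unit square and the same square shifted by 0.5 in x
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.5, 0), (0.5, 1), (1.5, 0), (1.5, 1)]
print(assd(square, shifted))  # 0.5
```

In practice these metrics are computed on 3D surface voxels with anisotropic spacing, usually via a dedicated library rather than brute-force pairwise distances.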
Affiliation(s)
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
- Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
- Berta-Ottenstein-Programme for Clinician Scientists, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
- Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
- Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106 Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Elias Kellner
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany

93
Krishnamurthy R, Mummudi N, Goda JS, Chopra S, Heijmen B, Swamidas J. Using Artificial Intelligence for Optimization of the Processes and Resource Utilization in Radiotherapy. JCO Glob Oncol 2022; 8:e2100393. [PMID: 36395438 PMCID: PMC10166445 DOI: 10.1200/go.21.00393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/18/2022]
Abstract
The radiotherapy (RT) process from planning to treatment delivery is a multistep, complex operation involving numerous levels of human-machine interaction and requiring high precision. These steps are labor-intensive and time-consuming and require meticulous coordination between professionals with diverse expertise. We reviewed and summarized the current status and prospects of artificial intelligence and machine learning relevant to the various steps in the RT treatment planning and delivery workflow, specifically in low- and middle-income countries (LMICs). We also searched the PubMed database using the search terms (Artificial Intelligence OR Machine Learning OR Deep Learning OR Automation OR knowledge-based planning AND Radiotherapy) AND (list of Low- and Middle-Income Countries as defined by the World Bank at the time of writing this review). The search yielded a total of 90 results, of which those with first authors from LMICs were chosen. The reference lists of retrieved articles were also reviewed to search for more studies. No language restrictions were imposed. A total of 20 research items with unique study objectives, conducted with the aim of enhancing RT processes, were examined in detail. Artificial intelligence and machine learning can improve the overall efficiency of RT processes by reducing human intervention, aiding decision making, and efficiently executing lengthy, repetitive tasks. This improvement could permit the radiation oncologist to redistribute resources and focus on responsibilities such as patient counseling, education, and research, especially in resource-constrained LMICs.
Affiliation(s)
- Revathy Krishnamurthy
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Naveen Mummudi
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Jayant Sastri Goda
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Supriya Chopra
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Ben Heijmen
- Division of Medical Physics, Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Jamema Swamidas
- Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India

94
Tappeiner E, Welk M, Schubert R. Tackling the class imbalance problem of deep learning-based head and neck organ segmentation. Int J Comput Assist Radiol Surg 2022; 17:2103-2111. [PMID: 35578086 PMCID: PMC9515025 DOI: 10.1007/s11548-022-02649-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 01/05/2022] [Accepted: 04/20/2022] [Indexed: 12/03/2022]
Abstract
PURPOSE The segmentation of organs at risk (OAR) is a required precondition for cancer treatment with image-guided radiation therapy. The automation of the segmentation task is therefore of high clinical relevance. Deep learning (DL)-based medical image segmentation is currently the most successful approach, but suffers from the over-presence of the background class and the anatomically given organ size difference, which is most severe in the head and neck (HAN) area. METHODS To tackle the HAN area-specific class imbalance problem, we first optimize the patch size of the currently best performing general-purpose segmentation framework, the nnU-Net, based on the introduced class imbalance measurement, and second introduce the class adaptive Dice loss to further compensate for the highly imbalanced setting. RESULTS Both the patch size and the loss function are parameters with direct influence on the class imbalance, and their optimization leads to a 3% increase in the Dice score and a 22% reduction in the 95% Hausdorff distance compared to the baseline, finally reaching [Formula: see text] and [Formula: see text] mm for the segmentation of seven HAN organs using a single and simple neural network. CONCLUSION The patch size optimization and the class adaptive Dice loss are both easily integrable in current DL-based segmentation approaches and make it possible to increase performance on class-imbalanced segmentation tasks.
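The exact class adaptive Dice loss is defined in the paper itself; as a hedged illustration of the general idea, a generic class-weighted soft Dice loss (the weighting scheme here is an assumption, not the authors' formulation) can be sketched as:

```python
def weighted_soft_dice_loss(probs, targets, class_weights, eps=1e-6):
    """Class-weighted soft Dice loss over a multi-class segmentation.

    probs:   per-class predicted probabilities, probs[c][i] in [0, 1]
    targets: per-class one-hot ground truth, targets[c][i] in {0, 1}
    class_weights: per-class weights; up-weighting rare organs counteracts
                   the dominance of background and large structures.
    """
    total, weight_sum = 0.0, 0.0
    for c, w in enumerate(class_weights):
        inter = sum(p * t for p, t in zip(probs[c], targets[c]))
        denom = sum(probs[c]) + sum(targets[c])
        dice = (2.0 * inter + eps) / (denom + eps)  # soft Dice for class c
        total += w * (1.0 - dice)
        weight_sum += w
    return total / weight_sum


# Two classes: a dominant background and a small organ, weighted 1:4
probs = [[0.9, 0.8, 0.1, 0.2], [0.1, 0.2, 0.9, 0.8]]
targets = [[1, 1, 0, 0], [0, 0, 1, 1]]
loss = weighted_soft_dice_loss(probs, targets, class_weights=[1.0, 4.0])
print(round(loss, 4))  # 0.15
```

In a real training loop the same computation would run on GPU tensors inside the framework's loss interface; the toy lists only show the arithmetic.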
Affiliation(s)
- Elias Tappeiner
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria
- Martin Welk
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria
- Rainer Schubert
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria

95
Kihara S, Koike Y, Takegawa H, Anetai Y, Nakamura S, Tanigawa N, Koizumi M. Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment. Med Dosim 2022; 48:20-24. [PMID: 36273950 DOI: 10.1016/j.meddos.2022.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 10/14/2021] [Revised: 02/07/2022] [Accepted: 09/17/2022] [Indexed: 02/04/2023]
Abstract
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV from computed tomography (CT) and gross tumor volume (GTV) input and compared it with CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into a training set (250) and a test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 vs 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time of a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
Affiliation(s)
- Sayaka Kihara
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka 573-1010, Japan
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka 573-1010, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka 573-1010, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka 573-1010, Japan
- Masahiko Koizumi
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka 565-0871, Japan

96
Ye X, Guo D, Ge J, Yan S, Xin Y, Song Y, Yan Y, Huang BS, Hung TM, Zhu Z, Peng L, Ren Y, Liu R, Zhang G, Mao M, Chen X, Lu Z, Li W, Chen Y, Huang L, Xiao J, Harrison AP, Lu L, Lin CY, Jin D, Ho TY. Comprehensive and clinically accurate head and neck cancer organs-at-risk delineation on a multi-institutional study. Nat Commun 2022; 13:6137. [PMID: 36253346 PMCID: PMC9576793 DOI: 10.1038/s41467-022-33178-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 11/01/2021] [Accepted: 09/07/2022] [Indexed: 12/24/2022]
Abstract
Accurate organ-at-risk (OAR) segmentation is critical to reduce radiotherapy complications. Consensus guidelines recommend delineating over 40 OARs in the head-and-neck (H&N). However, prohibitive labor costs cause most institutions to delineate a substantially smaller subset of OARs, neglecting the dose distributions of other OARs. Here, we present an automated and highly effective stratified OAR segmentation (SOARS) system using deep learning that precisely delineates a comprehensive set of 42 H&N OARs. We train SOARS using 176 patients from an internal institution and independently evaluate it on 1327 external patients across six different institutions. It consistently outperforms other state-of-the-art methods by at least 3-5% in Dice score for each institutional evaluation (up to 36% relative distance error reduction). Crucially, multi-user studies demonstrate that 98% of SOARS predictions need only minor or no revisions to achieve clinical acceptance (reducing workloads by 90%). Moreover, segmentation and dosimetric accuracy are within or smaller than the inter-user variation.
Affiliation(s)
- Xianghua Ye
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Dazhou Guo
- DAMO Academy, Alibaba Group, New York, NY, USA
- Jia Ge
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Senxiang Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yi Xin
- Ping An Technology, Shenzhen, China
- Yuchen Song
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yongheng Yan
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Bing-shen Huang
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Tsung-Min Hung
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Zhuotun Zhu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Ling Peng
- Department of Respiratory Disease, Zhejiang Provincial People's Hospital, Hangzhou, Zhejiang, China
- Yanping Ren
- Department of Radiation Oncology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Rui Liu
- Department of Radiation Oncology, The First Affiliated Hospital, Xi'an Jiaotong University, Xi'an, China
- Gong Zhang
- Department of Radiation Oncology, People's Hospital of Shanxi Province, Shanxi, China
- Mengyuan Mao
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiaohua Chen
- Department of Radiation Oncology, The First Hospital of Lanzhou University, Lanzhou, Gansu, China
- Zhongjie Lu
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Wenxiang Li
- Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yuzhen Chen
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Le Lu
- DAMO Academy, Alibaba Group, New York, NY, USA
- Chien-Yu Lin
- Department of Radiation Oncology, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC
- Particle Physics and Beam Delivery Core Laboratory, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan, ROC
- Dakai Jin
- DAMO Academy, Alibaba Group, New York, NY, USA
- Tsung-Ying Ho
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, Linkou, Taiwan, ROC

97
Walker Z, Bartley G, Hague C, Kelly D, Navarro C, Rogers J, South C, Temple S, Whitehurst P, Chuter R. Evaluating the Effectiveness of Deep Learning Contouring across Multiple Radiotherapy Centres. Phys Imaging Radiat Oncol 2022; 24:121-128. [PMID: 36405563 PMCID: PMC9668733 DOI: 10.1016/j.phro.2022.11.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Received: 01/25/2022] [Revised: 11/01/2022] [Accepted: 11/02/2022] [Indexed: 11/09/2022]
Abstract
Background and purpose Deep learning contouring (DLC) has the potential to decrease contouring time and the variability of organ contours. This work evaluates the effectiveness of DLC for prostate and head and neck across four radiotherapy centres using a commercial system. Materials and methods Computed tomography scans of 123 prostate and 310 head and neck patients were evaluated. Generic DLC models were used, with the exception of one head and neck model. Contouring time using the centres' existing clinical methods and contour editing time after DLC were compared. Timing was evaluated using paired and non-paired studies. Commercial software or in-house scripts assessed dice similarity coefficient (DSC) and distance to agreement (DTA). One centre assessed head and neck inter-observer variability. Results The mean contouring time saved for prostate structures using DLC compared to the existing clinical method was 5.9 ± 3.5 min. The best agreement was shown for the femoral heads (median DSC 0.92 ± 0.03, median DTA 1.5 ± 0.3 mm) and the worst for the rectum (median DSC 0.68 ± 0.04, median DTA 4.6 ± 0.6 mm). The mean contouring time saved for head and neck structures using DLC was 16.2 ± 8.6 min. For one centre there was no DLC time-saving compared to an atlas-based method. DLC contours reduced inter-observer variability compared to manual contours for the brainstem, left parotid gland and left submandibular gland. Conclusions Generic prostate and head and neck DLC models can provide time-savings, which can be assessed with paired or non-paired studies to fit within the clinical workload. The potential to reduce inter-observer variability was also shown.
Affiliation(s)
- Zoe Walker
- Medical Physics, University Hospitals Coventry and Warwickshire NHS Trust, Clifford Bridge Road, Coventry CV2 2DX, UK
- Gary Bartley
- Medical Physics, University Hospitals Coventry and Warwickshire NHS Trust, Clifford Bridge Road, Coventry CV2 2DX, UK
- Christina Hague
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Wilmslow Road, Manchester M20 4BX, UK
- Daniel Kelly
- Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Clatterbridge Road, Bebington, Wirral CH63 4JY, UK
- Clara Navarro
- Department of Medical Physics, Royal Surrey County Hospital NHS Foundation Trust, Egerton Road, Guildford, Surrey GU2 7XX, UK
- Jane Rogers
- Medical Physics, University Hospitals Coventry and Warwickshire NHS Trust, Clifford Bridge Road, Coventry CV2 2DX, UK
- Christopher South
- Department of Medical Physics, Royal Surrey County Hospital NHS Foundation Trust, Egerton Road, Guildford, Surrey GU2 7XX, UK
- Simon Temple
- Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Clatterbridge Road, Bebington, Wirral CH63 4JY, UK
- Philip Whitehurst
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Wilmslow Road, Manchester M20 4BX, UK
- Robert Chuter
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Wilmslow Road, Manchester M20 4BX, UK
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Wilmslow Road, Manchester M20 4BX, UK

98
Sharkey MJ, Taylor JC, Alabed S, Dwivedi K, Karunasaagarar K, Johns CS, Rajaram S, Garg P, Alkhanfar D, Metherall P, O'Regan DP, van der Geest RJ, Condliffe R, Kiely DG, Mamalakis M, Swift AJ. Fully automatic cardiac four chamber and great vessel segmentation on CT pulmonary angiography using deep learning. Front Cardiovasc Med 2022; 9:983859. [PMID: 36225963 PMCID: PMC9549370 DOI: 10.3389/fcvm.2022.983859] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 07/01/2022] [Accepted: 09/02/2022] [Indexed: 11/13/2022]
Abstract
Introduction Computed tomography pulmonary angiography (CTPA) is an essential test in the work-up of suspected pulmonary vascular disease, including pulmonary hypertension and pulmonary embolism. Cardiac and great vessel assessments on CTPA are based on visual assessment and manual measurements, which are known to have poor reproducibility. The primary aim of this study was to develop an automated whole heart segmentation (four chamber and great vessels) model for CTPA. Methods A nine-structure semantic segmentation model of the heart and great vessels was developed using 200 patients (80/20/100 training/validation/internal testing), with testing in 20 external patients. Ground truth segmentations were performed by consultant cardiothoracic radiologists. Failure analysis was conducted in 1,333 patients with mixed pulmonary vascular disease. Segmentation was achieved using deep learning via a convolutional neural network. Volumetric imaging biomarkers were correlated with invasive haemodynamics in the test cohort. Results Dice similarity coefficients (DSC) for segmented structures were in the range 0.58-0.93 for both the internal and external test cohorts. The left and right ventricle myocardium segmentations had lower DSC of 0.83 and 0.58 respectively, while all other structures had DSC >0.89 in the internal test cohort and >0.87 in the external test cohort. Interobserver comparison found that the left and right ventricle myocardium segmentations showed the most variation between observers: mean DSC (range) of 0.795 (0.785-0.801) and 0.520 (0.482-0.542) respectively. Right ventricle myocardial volume had a strong correlation with mean pulmonary artery pressure (Spearman's correlation coefficient = 0.7). The volumes of cardiac structures segmented by deep learning correlated with invasive haemodynamics as strongly as, or more strongly than, manually segmented volumes.
The model demonstrated good generalisability to different vendors and hospitals with similar performance in the external test cohort. The failure rates in mixed pulmonary vascular disease were low (<3.9%) indicating good generalisability of the model to different diseases. Conclusion Fully automated segmentation of the four cardiac chambers and great vessels has been achieved in CTPA with high accuracy and low rates of failure. DL volumetric biomarkers can potentially improve CTPA cardiac assessment and invasive haemodynamic prediction.
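The per-structure overlap figures reported above are Dice similarity coefficients. As a minimal illustrative sketch (not the authors' code), the DSC between two binary segmentation masks can be computed directly from its definition, 2|A∩B| / (|A| + |B|):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks: treat as a perfect match by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D "segmentations": a ground-truth square vs. a prediction shifted one row
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True            # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True             # 16 voxels, 12 of them overlapping truth
print(dice_coefficient(truth, pred))  # 2*12 / (16+16) = 0.75
```

The same formula applies voxelwise in 3D; in practice, surface-based variants (as in boundary-sensitive metrics) are often reported alongside volumetric DSC.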
Affiliation(s)
- Michael J. Sharkey: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Jonathan C. Taylor: 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Samer Alabed: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Krit Dwivedi: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
- Kavitasagary Karunasaagarar: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Christopher S. Johns: Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Smitha Rajaram: Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Pankaj Garg: Norwich Medical School, University of East Anglia, Norwich, United Kingdom
- Dheyaa Alkhanfar: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Peter Metherall: 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Declan P. O'Regan: MRC London Institute of Medical Sciences, Imperial College London, London, United Kingdom
- Robin Condliffe: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Sheffield Pulmonary Vascular Disease Unit, Sheffield Teaching Hospitals NHS Trust, Sheffield, United Kingdom
- David G. Kiely: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom; Sheffield Pulmonary Vascular Disease Unit, Sheffield Teaching Hospitals NHS Trust, Sheffield, United Kingdom
- Michail Mamalakis: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom; Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Andrew J. Swift: Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
99
Ma J, Zhang Y, Gu S, An X, Wang Z, Ge C, Wang C, Zhang F, Wang Y, Xu Y, Gou S, Thaler F, Payer C, Štern D, Henderson EGA, McSweeney DM, Green A, Jackson P, McIntosh L, Nguyen QC, Qayyum A, Conze PH, Huang Z, Zhou Z, Fan DP, Xiong H, Dong G, Zhu Q, He J, Yang X. Fast and Low-GPU-memory abdomen CT organ segmentation: The FLARE challenge. Med Image Anal 2022; 82:102616. [PMID: 36179380 DOI: 10.1016/j.media.2022.102616] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 06/26/2022] [Accepted: 09/02/2022] [Indexed: 11/27/2022]
Abstract
Automatic segmentation of abdominal organs in CT scans plays an important role in clinical practice. However, most existing benchmarks and datasets focus only on segmentation accuracy; model efficiency and accuracy on test cases from different medical centers have not been evaluated. To comprehensively benchmark abdominal organ segmentation methods, we organized the first Fast and Low GPU memory Abdominal oRgan sEgmentation (FLARE) challenge, in which segmentation methods were encouraged to simultaneously achieve high accuracy on test cases from different medical centers, fast inference speed, and low GPU memory consumption. The winning method surpassed the existing state of the art, achieving 19× faster inference and 60% lower GPU memory consumption with comparable accuracy. We provide a summary of the top methods, make their code and Docker containers publicly available, and give practical suggestions on building accurate and efficient abdominal organ segmentation models. The FLARE challenge remains open for future submissions through a live platform for benchmarking further methodology developments at https://flare.grand-challenge.org/.
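FLARE scored methods jointly on accuracy, inference speed, and memory use. As a hedged, CPU-only sketch of that kind of efficiency measurement (the challenge itself profiled GPU memory; `profile_inference` and the toy thresholding "model" below are illustrative stand-ins, not challenge code):

```python
import time
import tracemalloc

def profile_inference(infer, volume):
    """Time one inference call and record peak Python heap use.

    A CPU stand-in for FLARE-style runtime/memory metrics; `infer` is
    any callable that maps an input volume to a segmentation.
    """
    tracemalloc.start()
    t0 = time.perf_counter()
    result = infer(volume)
    elapsed = time.perf_counter() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak_bytes

# Hypothetical "segmentation": threshold a toy volume (list of slices)
toy_volume = [[0.1, 0.9, 0.4], [0.8, 0.2, 0.7]]
segment = lambda vol: [[v > 0.5 for v in row] for row in vol]
masks, seconds, peak = profile_inference(segment, toy_volume)
print(masks, f"{seconds:.6f}s", f"{peak} bytes peak")
```

On a GPU framework one would instead read the framework's own memory counters and synchronize the device before stopping the timer, but the accuracy/time/memory trade-off being benchmarked is the same.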
Affiliation(s)
- Jun Ma: Department of Mathematics, Nanjing University of Science and Technology, 210094, Nanjing, China
- Yao Zhang: Institute of Computing Technology, Chinese Academy of Sciences and the University of Chinese Academy of Sciences, 100019, Beijing, China
- Song Gu: Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Ltd., 211113, Nanjing, China
- Xingle An: Infervision Technology Co. Ltd., 100020, Beijing, China
- Zhihe Wang: Shenzhen Haichuang Medical Co., Ltd., 518049, Shenzhen, China
- Cheng Ge: Institute of Bioinformatics and Medical Engineering, Jiangsu University of Technology, 213001, Changzhou, China
- Congcong Wang: School of Computer Science and Engineering, Tianjin University of Technology, 300384, Tianjin, China; Engineering Research Center of Learning-Based Intelligent System, Ministry of Education, 300384, Tianjin, China
- Fan Zhang: Radiological Algorithm, Fosun Aitrox Information Technology Co., Ltd., 200033, Shanghai, China
- Yu Wang: Radiological Algorithm, Fosun Aitrox Information Technology Co., Ltd., 200033, Shanghai, China
- Yinan Xu: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, 710071, Shaanxi, China
- Shuiping Gou: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, 710071, Shaanxi, China
- Franz Thaler: Gottfried Schatz Research Center: Biophysics, Medical University of Graz, 8010, Graz, Austria; Institute of Computer Graphics and Vision, Graz University of Technology, 8010, Graz, Austria
- Christian Payer: Institute of Computer Graphics and Vision, Graz University of Technology, 8010, Graz, Austria
- Darko Štern: Gottfried Schatz Research Center: Biophysics, Medical University of Graz, 8010, Graz, Austria
- Edward G A Henderson: Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Dónal M McSweeney: Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Andrew Green: Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Price Jackson: Peter MacCallum Cancer Centre, 3000, Melbourne, Australia
- Quoc-Cuong Nguyen: University of Information Technology, VNU-HCM, 700000, Ho Chi Minh City, Viet Nam
- Abdul Qayyum: Brest National School of Engineering, UMR CNRS 6285 LabSTICC, 29280, Brest, France
- Ziyan Huang: Institute of Medical Robotics, Shanghai Jiao Tong University, 200240, Shanghai, China
- Ziqi Zhou: Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, 518000, Shenzhen, China
- Deng-Ping Fan: College of Computer Science, Nankai University, 300071, Tianjin, China; Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Huan Xiong: Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Harbin Institute of Technology, 150001, Harbin, China
- Guoqiang Dong: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China; Department of Interventional Radiology, The Second Affiliated Hospital of Bengbu Medical College, 233017, Bengbu, China
- Qiongjie Zhu: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China; Department of Radiology, Shidong Hospital, 200438, Shanghai, China
- Jian He: Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China
- Xiaoping Yang: Department of Mathematics, Nanjing University, 210093, Nanjing, China
100
Tryggestad E, Anand A, Beltran C, Brooks J, Cimmiyotti J, Grimaldi N, Hodge T, Hunzeker A, Lucido JJ, Laack NN, Momoh R, Moseley DJ, Patel SH, Ridgway A, Seetamsetty S, Shiraishi S, Undahl L, Foote RL. Scalable radiotherapy data curation infrastructure for deep-learning based autosegmentation of organs-at-risk: A case study in head and neck cancer. Front Oncol 2022; 12:936134. [PMID: 36106100 PMCID: PMC9464982 DOI: 10.3389/fonc.2022.936134] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 08/03/2022] [Indexed: 12/02/2022] Open
Abstract
In this era of patient-centered, outcomes-driven and adaptive radiotherapy, deep learning is now being successfully applied to tackle imaging-related workflow bottlenecks such as autosegmentation and dose planning. These applications typically require supervised learning approaches enabled by relatively large, curated radiotherapy datasets that are highly reflective of the contemporary standard of care. However, little has previously been published describing the technical infrastructure, recommendations, methods, or standards for radiotherapy dataset curation in a holistic fashion. Our radiation oncology department has recently embarked on a large-scale project, in partnership with an external partner, to develop deep-learning-based tools to assist with our radiotherapy workflow, beginning with autosegmentation of organs-at-risk. This project will require thousands of carefully curated radiotherapy datasets spanning all body sites we routinely treat with radiotherapy. Given the project's scope, we have approached dataset curation rigorously, with an aim towards building infrastructure that supports efficiency, automation and scalability. Focusing on our first use-case, head and neck cancer, we describe our infrastructure and novel methods for radiotherapy dataset curation, covering personnel and workflow organization, dataset selection, expert organ-at-risk segmentation, quality assurance, patient de-identification, and data archival and transfer. Over approximately 13 months, our expert multidisciplinary team generated 490 curated head and neck radiotherapy datasets, a task that required approximately 6,000 human-expert hours in total (not including planning and infrastructure development time). This infrastructure continues to evolve and will support ongoing and future project efforts.
Affiliation(s)
- E. Tryggestad (correspondence): Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- A. Anand: Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- C. Beltran: Department of Radiation Oncology, Mayo Clinic Florida, Jacksonville, FL, United States
- J. Brooks: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- J. Cimmiyotti: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- N. Grimaldi: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- T. Hodge: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- A. Hunzeker: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- J. J. Lucido: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- N. N. Laack: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- R. Momoh: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- D. J. Moseley: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- S. H. Patel: Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- A. Ridgway: Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
- S. Seetamsetty: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- S. Shiraishi: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- L. Undahl: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- R. L. Foote: Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States