1
Alzahrani NM, Henry AM, Clark AK, Al-Qaisieh BM, Murray LJ, Nix MG. Dosimetric impact of contour editing on CT and MRI deep-learning autosegmentation for brain OARs. J Appl Clin Med Phys 2024; 25:e14345. [PMID: 38664894] [DOI: 10.1002/acm2.14345]
Abstract
PURPOSE To establish the clinical applicability of deep-learning organ-at-risk autocontouring models (DL-AC) for brain radiotherapy. The dosimetric impact of contour editing, prior to model training, on performance was evaluated for both CT and MRI-based models. The correlation between geometric and dosimetric measures was also investigated to establish whether dosimetric assessment is required for clinical validation. METHODS CT and MRI-based deep learning autosegmentation models were trained using edited and unedited clinical contours. Autosegmentations were dosimetrically compared to gold-standard contours for a test cohort. D1%, D5%, D50%, and maximum dose were used as clinically relevant dosimetric measures. The statistical significance of dosimetric differences between the gold-standard contours and autocontours was established using paired Student's t-tests. Clinically significant cases were identified via dosimetric headroom to the OAR tolerance. Pearson's correlations were used to investigate the relationship between geometric measures and absolute percentage dose changes for each autosegmentation model. RESULTS Except for the right orbit when delineated using MRI models, statistical analysis revealed no superior model in terms of dosimetric accuracy among the CT DL-AC models or among the MRI DL-AC models for any investigated brain OAR. The number of patients in whom the clinical significance threshold was exceeded was higher for the optic chiasm D1% than for other OARs, for all autosegmentation models. A weak correlation was consistently observed between the outcomes of the dosimetric and geometric evaluations. CONCLUSIONS Editing contours before training the DL-AC models had no significant impact on dosimetry. The geometric test metrics were inadequate for estimating the impact of contour inaccuracies on dose. Accordingly, dosimetric analysis is needed to evaluate the clinical applicability of DL-AC models in the brain.
Affiliation(s)
- Nouf M Alzahrani
- Department of Diagnostic Radiology, King Abdulaziz University, Jeddah, Saudi Arabia
- School of Medicine, University of Leeds, Leeds, UK
- Department of Medical Physics and Engineering, St James's University Hospital, Leeds, UK
- Ann M Henry
- School of Medicine, University of Leeds, Leeds, UK
- Department of Clinical Oncology, St James's University Hospital, Leeds, UK
- Anna K Clark
- Department of Medical Physics and Engineering, St James's University Hospital, Leeds, UK
- Bashar M Al-Qaisieh
- Department of Medical Physics and Engineering, St James's University Hospital, Leeds, UK
- Louise J Murray
- School of Medicine, University of Leeds, Leeds, UK
- Department of Clinical Oncology, St James's University Hospital, Leeds, UK
- Michael G Nix
- Department of Medical Physics and Engineering, St James's University Hospital, Leeds, UK
2
Yang B, Liu Y, Zhu J, Lu N, Dai J, Men K. Pretreatment information-aided automatic segmentation for online magnetic resonance imaging-guided prostate radiotherapy. Med Phys 2024; 51:922-932. [PMID: 37449545] [DOI: 10.1002/mp.16608]
Abstract
BACKGROUND It is necessary to contour regions of interest (ROIs) for online magnetic resonance imaging (MRI)-guided adaptive radiotherapy (MRIgART). These updated contours are used for online replanning to obtain maximum dosimetric benefit. Contouring can be accomplished using deformable image registration (DIR) and deep learning (DL)-based autosegmentation methods. However, these methods may require considerable manual editing and thus prolong treatment time. PURPOSE The present study aimed to improve autosegmentation performance by integrating patients' pretreatment information into a DL-based segmentation algorithm, with the goal of improving the efficiency of the current MRIgART process. METHODS Forty patients with prostate cancer were enrolled retrospectively. The online adaptive MR images, patient-specific planning computed tomography (CT), and contours on CT were used for segmentation. Deformable registration of the planning CT and MR images was performed first to obtain a deformed CT and corresponding contours. A novel DL network that can integrate this patient-specific information (deformed CT and corresponding contours) into the segmentation of MR images was designed. We performed four-fold cross-validation for the DL models. The proposed method was compared with the DIR and DL methods on prostate cancer segmentation. The ROIs included the clinical target volume (CTV), bladder, rectum, left femoral head, and right femoral head. Dosimetric parameters of the automatically generated ROIs were evaluated using a clinical treatment planning system. RESULTS The proposed method enhanced the segmentation accuracy of conventional procedures. Its mean Dice similarity coefficient over the five ROIs (93.5%) was higher than that of both DIR (87.5%) and DL (87.2%). The numbers of patients (n = 40) that required major editing using DIR, DL, and our method were 12, 18, and 7 (CTV); 17, 4, and 1 (bladder); 8, 11, and 5 (rectum); 2, 4, and 1 (left femoral head); and 3, 7, and 1 (right femoral head), respectively. The Spearman rank correlation coefficient of dosimetric parameters between the proposed method and the ground truth was 0.972 ± 0.040, higher than that of DIR (0.897 ± 0.098) and DL (0.871 ± 0.134). CONCLUSION This study proposed a novel method that integrates patient-specific pretreatment information into a DL-based segmentation algorithm. It outperformed the baseline methods, thereby improving efficiency and segmentation accuracy in adaptive radiotherapy.
Affiliation(s)
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ningning Lu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
3
Morton Colbert Z, Arrington D, Foote M, Gårding J, Fay D, Huo M, Pinkham M, Ramachandran P. Repurposing traditional U-Net predictions for sparse SAM prompting in medical image segmentation. Biomed Phys Eng Express 2024; 10:025004. [PMID: 38118182] [DOI: 10.1088/2057-1976/ad17a7]
Abstract
Objective: Automated medical image segmentation (MIS) using deep learning has traditionally relied on models built and trained from scratch, or at least fine-tuned on a target dataset. The Segment Anything Model (SAM) by Meta challenges this paradigm by providing zero-shot generalisation capabilities. This study aims to develop and compare methods for refining traditional U-Net segmentations by repurposing them for automated SAM prompting. Approach: A 2D U-Net with an EfficientNet-B4 encoder was trained using 4-fold cross-validation on an in-house brain metastases dataset. Segmentation predictions from each validation set were used for automatic sparse prompt generation via a bounding-box prompting method (BBPM) and novel implementations of the point prompting method (PPM). The PPMs frequently produced poor slice predictions (PSPs) that required identification and substitution. A slice was identified as a PSP if it (1) contained multiple predicted regions per lesion or (2) possessed outlier foreground pixel counts relative to the patient's other slices. Each PSP was substituted with a corresponding initial U-Net or SAM BBPM prediction. The patients' mean volumetric Dice similarity coefficient (DSC) was used to evaluate and compare the methods' performances. Main results: Relative to the initial U-Net segmentations, the BBPM improved mean patient DSC by 3.93 ± 1.48% to 0.847 ± 0.008 DSC. PSPs constituted 20.01-21.63% of the PPMs' predictions, and without substitution performance dropped by 82.94 ± 3.17% to 0.139 ± 0.023 DSC. Pairing the two PSP identification techniques yielded a sensitivity to PSPs of 92.95 ± 1.20%. By combining this approach with BBPM prediction substitution, the PPMs achieved segmentation accuracies on par with the BBPM, improving mean patient DSC by up to 4.17 ± 1.40% and reaching 0.849 ± 0.007 DSC. Significance: The proposed PSP identification and substitution techniques bridge the gap between PPM and BBPM performance for MIS. Additionally, the uniformity observed in our experiments' results demonstrates the robustness of SAM to variations in prompting style. These findings can assist in the design of both automatically and manually prompted pipelines.
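The two PSP-identification rules above (multiple predicted regions per lesion on a slice; outlier foreground pixel counts relative to the patient's other slices) lend themselves to a short sketch. The paper does not specify its outlier criterion, so the IQR fence and all function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage


def flag_poor_slices(pred: np.ndarray, iqr_k: float = 1.5) -> list:
    """Flag poor slice predictions (PSPs) in a binary 3D mask (slices, H, W).

    A slice is flagged if it (1) contains more than one predicted connected
    region, or (2) has an outlier foreground pixel count relative to the
    patient's other nonempty slices (hypothetical IQR fence, not the
    paper's rule).
    """
    counts = pred.reshape(pred.shape[0], -1).sum(axis=1)
    nonempty = counts[counts > 0]
    if nonempty.size == 0:
        return []
    q1, q3 = np.percentile(nonempty, [25, 75])
    lo, hi = q1 - iqr_k * (q3 - q1), q3 + iqr_k * (q3 - q1)
    flagged = []
    for z in range(pred.shape[0]):
        if counts[z] == 0:
            continue  # empty slices carry no prediction to assess
        _, n_regions = ndimage.label(pred[z])
        if n_regions > 1 or not (lo <= counts[z] <= hi):
            flagged.append(z)
    return flagged


# Toy volume: five consistent 5x5 lesion slices, then one bad slice with
# two disjoint regions and a smaller total area.
vol = np.zeros((6, 20, 20), dtype=int)
vol[:5, 5:10, 5:10] = 1
vol[5, 2:5, 2:5] = 1
vol[5, 12:15, 12:15] = 1
print(flag_poor_slices(vol))  # → [5]
```

Each flagged slice would then be substituted with the corresponding initial U-Net or SAM BBPM prediction, as the abstract describes.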
Affiliation(s)
- Dominik Fay
- Elekta Instrument AB, Sweden
- KTH Royal Institute of Technology, Sweden
- Michael Huo
- Princess Alexandra Hospital, Brisbane, Australia
- Mark Pinkham
- Princess Alexandra Hospital, Brisbane, Australia
4
Kehayias CE, Yan Y, Bontempi D, Quirk S, Bitterman DS, Bredfeldt JS, Aerts HJWL, Mak RH, Guthier CV. Prospective deployment of an automated implementation solution for artificial intelligence translation to clinical radiation oncology. Front Oncol 2024; 13:1305511. [PMID: 38239639] [PMCID: PMC10794768] [DOI: 10.3389/fonc.2023.1305511]
Abstract
Introduction Artificial intelligence (AI)-based technologies embody countless solutions in radiation oncology, yet translation of AI-assisted software tools to actual clinical environments remains largely unrealized. We present the Deep Learning On-Demand Assistant (DL-ODA), a fully automated, end-to-end clinical platform that enables AI interventions for any disease site and features an automated model-training pipeline, auto-segmentations, and QA reporting. Materials and methods We developed, tested, and prospectively deployed the DL-ODA system at a large university-affiliated hospital center. Medical professionals activate the DL-ODA via two pathways: (1) On-Demand, used for immediate AI decision support for a patient-specific treatment plan, and (2) Ambient, in which QA is provided for all daily radiotherapy (RT) plans by comparing DL segmentations with manual delineations and calculating the dosimetric impact. To demonstrate the implementation of a new anatomy segmentation, we used the model-training pipeline to generate a breast segmentation model based on a large clinical dataset. Additionally, the contour QA functionality of existing models was assessed using a retrospective cohort of 3,399 lung and 885 spine RT cases. Ambient QA was performed for various disease sites, including spine RT and heart for dosimetric sparing. Results Successful training of the breast model was completed in less than a day and resulted in clinically viable whole-breast contours. For the retrospective analysis, we evaluated manual-versus-AI similarity for the ten most common structures. The DL-ODA detected high similarities in heart, lung, liver, and kidney delineations but lower similarities for the esophagus, trachea, stomach, and small bowel, due largely to incomplete manual contouring. The deployed Ambient QAs for the heart and spine sites have prospectively processed over 2,500 cases over 9 months and 230 cases over 5 months, respectively, automatically alerting RT personnel. Discussion The DL-ODA's capabilities in providing universal AI interventions were demonstrated for On-Demand contour QA, DL segmentations, and automated model training, confirming successful integration of the system into a large academic radiotherapy department. The novelty of deploying the DL-ODA as a multi-modal, fully automated, end-to-end AI clinical implementation solution marks a significant step towards a generalizable framework that leverages AI to improve the efficiency and reliability of RT systems.
Affiliation(s)
- Christopher E. Kehayias
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Yujie Yan
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Dennis Bontempi
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Sarah Quirk
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Danielle S. Bitterman
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Jeremy S. Bredfeldt
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Hugo J. W. L. Aerts
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Raymond H. Mak
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Christian V. Guthier
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
5
McDonald BA, Cardenas CE, O'Connell N, Ahmed S, Naser MA, Wahid KA, Xu J, Thill D, Zuhour RJ, Mesko S, Augustyn A, Buszek SM, Grant S, Chapman BV, Bagley AF, He R, Mohamed ASR, Christodouleas J, Brock KK, Fuller CD. Investigation of autosegmentation techniques on T2-weighted MRI for off-line dose reconstruction in MR-linac workflow for head and neck cancers. Med Phys 2024; 51:278-291. [PMID: 37475466] [PMCID: PMC10799175] [DOI: 10.1002/mp.16582]
Abstract
BACKGROUND In order to accurately accumulate delivered dose for head and neck cancer patients treated with the Adapt-to-Position workflow on the 1.5T magnetic resonance imaging (MRI)-linear accelerator (MR-linac), the low-resolution T2-weighted MRIs used for daily setup must be segmented to enable reconstruction of the delivered dose at each fraction. PURPOSE In this pilot study, we evaluate various autosegmentation methods for head and neck organs at risk (OARs) on on-board setup MRIs from the MR-linac for off-line reconstruction of delivered dose. METHODS Seven OARs (parotid glands, submandibular glands, mandible, spinal cord, and brainstem) were contoured on 43 images by seven observers each. Ground-truth contours were generated using a simultaneous truth and performance level estimation (STAPLE) algorithm. Twenty total autosegmentation methods were evaluated in ADMIRE: 1-9) atlas-based autosegmentation using a population atlas library (PAL) of 5/10/15 patients with STAPLE, patch fusion (PF), or random forest (RF) label fusion; 10-19) autosegmentation using images from a patient's 1-4 prior fractions (individualized patient prior [IPP]) with STAPLE/PF/RF; 20) deep learning (DL) (3D ResUNet trained on 43 ground-truth structure sets plus 45 contoured by one observer). Execution time was measured for each method. Autosegmented structures were compared to ground-truth structures using the Dice similarity coefficient (DSC), mean surface distance (MSD), Hausdorff distance (HD), and Jaccard index (JI). For each metric and OAR, performance was compared to the inter-observer variability using Dunn's test with control. Methods were compared pairwise using the Steel-Dwass test for each metric pooled across all OARs. Further dosimetric analysis was performed on three high-performing autosegmentation methods (DL, IPP with RF and 4 fractions [IPP_RF_4], and IPP with 1 fraction [IPP_1]) and one low-performing method (PAL with STAPLE and 5 atlases [PAL_ST_5]). For five patients, delivered doses from clinical plans were recalculated on setup images with ground-truth and autosegmented structure sets. Differences in maximum and mean dose to each structure between the ground-truth and autosegmented structures were calculated and correlated with the geometric metrics. RESULTS The DL and IPP methods performed best overall, all significantly outperforming the inter-observer variability and with no significant difference between methods in pairwise comparison. The PAL methods performed worst overall; most were not significantly different from the inter-observer variability or from each other. DL was the fastest method (33 s per case) and the PAL methods the slowest (3.7-13.8 min per case). Execution time increased with the number of prior fractions/atlases for IPP and PAL. For DL, IPP_1, and IPP_RF_4, the majority (95%) of dose differences were within ±250 cGy of ground truth, but outlier differences of up to 785 cGy occurred. Dose differences were much higher for PAL_ST_5, with outlier differences of up to 1920 cGy. Dose differences showed weak but significant correlations with all geometric metrics (R2 between 0.030 and 0.314). CONCLUSIONS The autosegmentation methods offering the best combination of performance and execution time are DL and IPP_1. Dose reconstruction on on-board T2-weighted MRIs is feasible with autosegmented structures, with minimal dosimetric variation from ground truth, but contours should be visually inspected prior to dose reconstruction in an end-to-end dose accumulation workflow.
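For reference, the overlap metrics used throughout this comparison (Dice similarity coefficient and Jaccard index) reduce to simple set operations on binary masks. A minimal sketch, with function names of my own choosing (surface metrics such as MSD and HD require distance-transform machinery and are omitted):

```python
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index: |A∩B| / |A∪B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0


# Two partially overlapping 6x6 squares on a 10x10 grid (overlap 4x4 = 16 px)
gt = np.zeros((10, 10), dtype=int)
gt[2:8, 2:8] = 1
auto = np.zeros((10, 10), dtype=int)
auto[4:10, 4:10] = 1
print(round(dice(gt, auto), 3), round(jaccard(gt, auto), 3))  # → 0.444 0.286
```

Note that JI and DSC are monotonically related (JI = DSC / (2 − DSC)), so the two metrics rank methods identically; the distance-based metrics (MSD, HD) capture complementary boundary information.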
Affiliation(s)
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, Alabama, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Raed J Zuhour
- Department of Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas, USA
- Shane Mesko
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alexander Augustyn
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Samantha M Buszek
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephen Grant
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Bhavana V Chapman
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Alexander F Bagley
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kristy K Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
6
Weissmann T, Mansoorian S, May MS, Lettmaier S, Höfler D, Deloch L, Speer S, Balk M, Frey B, Gaipl US, Bert C, Distel LV, Walter F, Belka C, Semrau S, Iro H, Fietkau R, Huang Y, Putz F. Deep Learning and Registration-Based Mapping for Analyzing the Distribution of Nodal Metastases in Head and Neck Cancer Cohorts: Informing Optimal Radiotherapy Target Volume Design. Cancers (Basel) 2023; 15:4620. [PMID: 37760588] [PMCID: PMC10526893] [DOI: 10.3390/cancers15184620]
Abstract
We introduce a deep-learning-based and a registration-based method for automatically analyzing the spatial distribution of lymph node metastases (LNs) in head and neck (H/N) cancer cohorts to inform radiotherapy (RT) target volume design. The two methods were evaluated in a cohort of 193 H/N patients/planning CTs with a total of 449 LNs. In the deep learning method, a previously developed nnU-Net 3D/2D ensemble model is used to autosegment 20 H/N levels, with each LN subsequently being algorithmically assigned to the closest-level autosegmentation. In the nonrigid-registration-based mapping method, LNs are mapped into a calculated template CT representing the cohort-average patient anatomy, and kernel density estimation is employed to estimate the underlying average 3D LN probability distribution, allowing for analysis and visualization without prespecified level definitions. Multireader assessment by three radiation oncologists with majority voting was used to evaluate the deep learning method and obtain the ground-truth distribution. For the mapping technique, the proportion of LNs predicted by the 3D probability distribution for each level was calculated and compared to the deep learning and ground-truth distributions. As determined by the multireader review with majority voting, the deep learning method correctly assigned all 449 LNs to their respective levels. Level 2 showed the highest LN involvement (59.0%). The level involvement predicted by the mapping technique was consistent with the ground-truth distribution (p = 0.915 for the difference). Application of the proposed methods to multicenter cohorts with selected H/N tumor subtypes for informing optimal RT target volume design is promising.
Affiliation(s)
- Thomas Weissmann
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
- Sina Mansoorian
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
- Department of Radiation Oncology, University Hospital, Ludwig Maximilian University of Munich, 81377 Munich, Germany
- Matthias Stefan May
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Sebastian Lettmaier
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Daniel Höfler
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Lisa Deloch
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Translational Radiobiology, Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Stefan Speer
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Matthias Balk
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Benjamin Frey
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Translational Radiobiology, Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Udo S. Gaipl
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Translational Radiobiology, Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Christoph Bert
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Luitpold Valentin Distel
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Franziska Walter
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
- Department of Radiation Oncology, University Hospital, Ludwig Maximilian University of Munich, 81377 Munich, Germany
- Claus Belka
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
- Department of Radiation Oncology, University Hospital, Ludwig Maximilian University of Munich, 81377 Munich, Germany
- Sabine Semrau
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Heinrich Iro
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
- Yixing Huang
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Florian Putz
- Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany; (T.W.); (S.L.); (D.H.); (L.D.); (S.S.); (B.F.); (U.S.G.); (C.B.); (L.V.D.); (S.S.); (R.F.)
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054 Erlangen, Germany; (M.S.M.); (M.B.); (H.I.)
- Bavarian Cancer Research Center (BZKF), 81377 Munich, Germany; (S.M.); (F.W.); (C.B.)
| |
7
Ham DW, Choi YS, Yoo Y, Park SM, Song KS. Measurement of interspinous motion in dynamic cervical radiographs using a deep learning-based segmentation model. J Neurosurg Spine 2023; 39:329-334. [PMID: 37327141] [DOI: 10.3171/2023.5.spine23293]
Abstract
OBJECTIVE Interspinous motion (ISM) is a representative method for evaluating functional fusion status following anterior cervical discectomy and fusion (ACDF) surgery, but the difficulty of measuring it and the potential for errors in the clinical setting remain concerns. The aim of this study was to investigate the feasibility of a deep learning-based segmentation model for measuring ISM in patients who underwent ACDF surgery. METHODS This study is a retrospective analysis of flexion-extension dynamic cervical radiographs from a single institution and a validation of a convolutional neural network (CNN)-based artificial intelligence (AI) algorithm for measuring ISM. Data from 150 lateral cervical radiographs of normal adults were used to train the AI algorithm. A total of 106 pairs of dynamic flexion-extension radiographs from patients who underwent ACDF at a single institution were analyzed and validated for measuring ISM. To evaluate agreement between human experts and the AI algorithm, the authors assessed interrater reliability using the intraclass correlation coefficient and root mean square error (RMSE) and performed a Bland-Altman plot analysis. The 106 pairs of radiographs from ACDF patients were processed by the AI algorithm, which had been trained to autosegment the spinous process using the 150 normal-population radiographs. The algorithm automatically segmented each spinous process and converted it to a binary large object (BLOB) image. The rightmost coordinate value of each spinous process was extracted from the BLOB image, and the pixel distance between the upper and lower spinous process coordinates was calculated. The AI-measured ISM was obtained by multiplying this pixel distance by the pixel-spacing value in the DICOM tag of each radiograph. RESULTS The AI algorithm showed favorable performance for detecting spinous processes, with an accuracy of 99.2% on the test set radiographs.
The interrater reliability between the human experts and the AI algorithm for ISM was 0.88 (95% CI 0.83-0.91), and the RMSE was 0.68. In the Bland-Altman plot analysis, the 95% limits of interrater differences ranged from 0.11 to 1.36 mm, and a few observations fell outside the 95% limits. The mean difference between observers was 0.02 ± 0.68 mm. CONCLUSIONS This novel CNN-based autosegmentation algorithm for measuring ISM in dynamic cervical radiographs showed strong agreement with expert human raters and could help clinicians evaluate segmental motion following ACDF surgery in clinical settings.
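The measurement pipeline described in this abstract (rightmost spinous-process coordinate from each binary mask, pixel distance scaled by the DICOM pixel-spacing value) can be sketched as below. The function names, mask layout, and the Euclidean-distance convention are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def interspinous_motion(upper_mask, lower_mask, pixel_spacing_mm):
    """Sketch of the described ISM measurement: locate the rightmost
    pixel of each autosegmented spinous-process mask, compute the pixel
    distance between the two points, and convert to millimetres using
    the DICOM pixel-spacing value."""
    def rightmost(mask):
        ys, xs = np.nonzero(mask)       # coordinates of mask pixels
        i = np.argmax(xs)               # index of the rightmost column
        return np.array([ys[i], xs[i]])
    p_upper = rightmost(upper_mask)
    p_lower = rightmost(lower_mask)
    # Euclidean pixel distance (the paper does not state the convention)
    pixel_dist = np.linalg.norm(p_upper - p_lower)
    return pixel_dist * pixel_spacing_mm
```

In a real pipeline the `pixel_spacing_mm` value would be read from the radiograph's DICOM Pixel Spacing tag rather than passed in by hand.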
Affiliation(s)
- Dae-Woong Ham
- Department of Orthopedic Surgery, Chung-Ang University Hospital, College of Medicine, Chung-Ang University, Dongjak-gu, Seoul
- Yang-Seon Choi
- Department of Orthopedic Surgery, Chung-Ang University Hospital, College of Medicine, Chung-Ang University, Dongjak-gu, Seoul
- Yisack Yoo
- Department of Orthopedic Surgery, Chung-Ang University Hospital, College of Medicine, Chung-Ang University, Dongjak-gu, Seoul
- Sang-Min Park
- Department of Orthopedic Surgery, Seoul National University College of Medicine and Seoul National University Bundang Hospital, Gyeonggi-do, Seoul, South Korea
- Kwang-Sup Song
- Department of Orthopedic Surgery, Chung-Ang University Hospital, College of Medicine, Chung-Ang University, Dongjak-gu, Seoul
8
Alzahrani N, Henry A, Clark A, Murray L, Nix M, Al-Qaisieh B. Geometric evaluations of CT and MRI based deep learning segmentation for brain OARs in radiotherapy. Phys Med Biol 2023; 68:175035. [PMID: 37579753] [DOI: 10.1088/1361-6560/acf023]
Abstract
Objective. Deep-learning auto-contouring (DL-AC) promises standardisation of organ-at-risk (OAR) contouring, enhancing quality and improving efficiency in radiotherapy. No commercial models exist for OAR contouring based on brain magnetic resonance imaging (MRI). We trained and evaluated computed tomography (CT) and MRI OAR autosegmentation models in RayStation. To ascertain clinical usability, we investigated the geometric impact of contour editing before training on model quality. Approach. Retrospective glioma cases were randomly selected for training (n = 32, 47) and validation (n = 9, 10) for MRI and CT, respectively. Clinical contours were edited to international consensus (gold standard) based on MRI and CT. MRI models were trained (i) using the original clinical contours based on planning CT and rigidly registered T1-weighted gadolinium-enhanced MRI (MRIu); (ii) as (i), further edited based on CT anatomy to meet international consensus guidelines (MRIeCT); and (iii) as (i), further edited based on MRI anatomy (MRIeMRI). CT models were trained using (iv) the original clinical contours (CTu) and (v) clinical contours edited based on CT anatomy (CTeCT). Auto-contours were geometrically compared to gold-standard validation contours (CTeCT or MRIeMRI) using the Dice similarity coefficient, sensitivity, and mean distance to agreement. Models' performances were compared using paired Student's t-testing. Main results. The edited autosegmentation models successfully generated more segmentations than the unedited models. Paired t-testing showed that editing the pituitary, orbits, optic nerves, lenses, and optic chiasm on MRI before training significantly improved at least one geometry metric. MRI-based DL-AC performed worse than CT-based in delineating the lacrimal gland, whereas the CT-based performed worse in delineating the optic chiasm. No significant differences were found between CTeCT and CTu except for the optic chiasm. Significance. T1w-MRI DL-AC could segment all brain OARs except the lacrimal glands, which cannot be easily visualized on T1w-MRI. Editing contours on MRI before model training improved geometric performance. MRI DL-AC in RT may improve consistency, quality and efficiency but requires careful editing of training contours.
Affiliation(s)
- Nouf Alzahrani
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Ann Henry
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Anna Clark
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Louise Murray
- University of Leeds, School of Medicine, Leeds, United Kingdom
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Michael Nix
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
- Bashar Al-Qaisieh
- St James's University Hospital, Department of Medical Physics and Engineering, Leeds Cancer Centre, Leeds, United Kingdom
9
Lin D, Wahid KA, Nelms BE, He R, Naser MA, Duke S, Sherer MV, Christodouleas JP, Mohamed ASR, Cislo M, Murphy JD, Fuller CD, Gillespie EF. E pluribus unum: prospective acceptability benchmarking from the Contouring Collaborative for Consensus in Radiation Oncology crowdsourced initiative for multiobserver segmentation. J Med Imaging (Bellingham) 2023; 10:S11903. [PMID: 36761036] [PMCID: PMC9907021] [DOI: 10.1117/1.jmi.10.s1.s11903]
Abstract
Purpose Contouring Collaborative for Consensus in Radiation Oncology (C3RO) is a crowdsourced challenge engaging radiation oncologists across various expertise levels in segmentation. An obstacle to artificial intelligence (AI) development is the paucity of multiexpert datasets; consequently, we sought to characterize whether aggregate segmentations generated from multiple nonexperts could meet or exceed recognized expert agreement. Approach Participants who contoured ≥ 1 region of interest (ROI) for the breast, sarcoma, head and neck (H&N), gynecologic (GYN), or gastrointestinal (GI) cases were identified as nonexperts or recognized experts. Cohort-specific ROIs were combined into single simultaneous truth and performance level estimation (STAPLE) consensus segmentations. STAPLE_nonexpert ROIs were evaluated against STAPLE_expert contours using the Dice similarity coefficient (DSC). The expert interobserver DSC (IODSC_expert) was calculated as an acceptability threshold between STAPLE_nonexpert and STAPLE_expert. To determine the number of nonexperts required to match IODSC_expert for each ROI, a single consensus contour was generated using variable numbers of nonexperts and then compared to IODSC_expert. Results For all cases, the DSC values for STAPLE_nonexpert versus STAPLE_expert were higher than the comparator IODSC_expert for most ROIs. The minimum number of nonexpert segmentations needed for a consensus ROI to meet the IODSC_expert acceptability criterion ranged between 2 and 4 for breast, 3 and 5 for sarcoma, 3 and 5 for H&N, 3 and 5 for GYN, and 3 for GI. Conclusions Multiple nonexpert-generated consensus ROIs met or exceeded expert-derived acceptability thresholds. Five nonexperts could potentially generate consensus segmentations for most ROIs with performance approximating that of experts, suggesting nonexpert segmentations as feasible, cost-effective AI inputs.
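The acceptability test in this abstract reduces to computing a Dice similarity coefficient between the nonexpert and expert consensus masks and comparing it to the expert interobserver DSC. A minimal sketch, with hypothetical mask inputs rather than the C3RO data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def meets_expert_threshold(staple_nonexpert, staple_expert, iodsc_expert):
    """The nonexpert consensus is deemed acceptable when its DSC against
    the expert consensus reaches the expert interobserver DSC."""
    return dice(staple_nonexpert, staple_expert) >= iodsc_expert
```

Generating the STAPLE consensus itself requires an EM-based fusion (e.g. as implemented in SimpleITK); only the downstream acceptability check is sketched here.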
Affiliation(s)
- Diana Lin
- Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States
- Kareem A. Wahid
- The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Renjie He
- The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Mohammed A. Naser
- The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Simon Duke
- Cambridge University Hospitals, Department of Radiation Oncology, Cambridge, United Kingdom
- Michael V. Sherer
- University of California San Diego, Department of Radiation Medicine and Applied Sciences, La Jolla, California, United States
- John P. Christodouleas
- The University of Pennsylvania Cancer Center, Department of Radiation Oncology, Philadelphia, Pennsylvania, United States
- Elekta AB, Stockholm, Sweden
- Abdallah S. R. Mohamed
- The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Michael Cislo
- Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States
- James D. Murphy
- University of California San Diego, Department of Radiation Medicine and Applied Sciences, La Jolla, California, United States
- Clifton D. Fuller
- The University of Texas MD Anderson Cancer Center, Department of Radiation Oncology, Houston, Texas, United States
- Erin F. Gillespie
- Memorial Sloan Kettering Cancer Center, Department of Radiation Oncology, New York, New York, United States
- University of Washington Fred Hutchinson Cancer Center, Department of Radiation Oncology, Seattle, Washington, United States
10
Weissmann T, Huang Y, Fischer S, Roesch J, Mansoorian S, Ayala Gaona H, Gostian AO, Hecht M, Lettmaier S, Deloch L, Frey B, Gaipl US, Distel LV, Maier A, Iro H, Semrau S, Bert C, Fietkau R, Putz F. Deep learning for automatic head and neck lymph node level delineation provides expert-level accuracy. Front Oncol 2023; 13:1115258. [PMID: 36874135] [PMCID: PMC9978473] [DOI: 10.3389/fonc.2023.1115258]
Abstract
Background Deep learning-based head and neck lymph node level (HN_LNL) autodelineation is of high relevance to radiotherapy research and clinical treatment planning but still underinvestigated in academic literature. In particular, there is no publicly available open-source solution for large-scale autosegmentation of HN_LNL in the research setting. Methods An expert-delineated cohort of 35 planning CTs was used for training of an nnU-net 3D-fullres/2D-ensemble model for autosegmentation of 20 different HN_LNL. A second cohort acquired at the same institution later in time served as the test set (n = 20). In a completely blinded evaluation, 3 clinical experts rated the quality of deep learning autosegmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared to the average deep learning autosegmentation accuracy on the original and recontoured set of expert segmentations. A postprocessing step to adjust craniocaudal boundaries of level autosegmentations to the CT slice plane was introduced and the effect of autocontour consistency with CT slice plane orientation on geometric accuracy and expert rating was investigated. Results Blinded expert ratings for deep learning segmentations and expert-created contours were not significantly different. Deep learning segmentations with slice plane adjustment were rated numerically higher (mean, 81.0 vs. 79.6, p = 0.185) and deep learning segmentations without slice plane adjustment were rated numerically lower (77.2 vs. 79.6, p = 0.167) than manually drawn contours. In a head-to-head comparison, deep learning segmentations with CT slice plane adjustment were rated significantly better than deep learning contours without slice plane adjustment (81.0 vs. 77.2, p = 0.004). Geometric accuracy of deep learning segmentations was not different from intraobserver variability (mean Dice per level, 0.76 vs. 0.77, p = 0.307). 
The clinical significance of contour consistency with CT slice plane orientation was not captured by geometric accuracy metrics (volumetric Dice, 0.78 vs. 0.78, p = 0.703). Conclusions We show that an nnU-net 3D-fullres/2D-ensemble model can achieve highly accurate autodelineation of HN_LNL using only a limited training dataset, making it ideally suited for large-scale standardized autodelineation of HN_LNL in the research setting. Geometric accuracy metrics are only an imperfect surrogate for blinded expert rating.
Affiliation(s)
- Thomas Weissmann
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Stefan Fischer
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Johannes Roesch
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Sina Mansoorian
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Horacio Ayala Gaona
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Antoniu-Oreste Gostian
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Markus Hecht
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Sebastian Lettmaier
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Lisa Deloch
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
- Benjamin Frey
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
- Udo S Gaipl
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Translational Radiobiology, Department of Radiation Oncology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Universitätsklinikum Erlangen, Erlangen, Germany
- Luitpold Valentin Distel
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Heinrich Iro
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Department of Otolaryngology, Head and Neck Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sabine Semrau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Florian Putz
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
11
Sahlsten J, Wahid KA, Glerean E, Jaskari J, Naser MA, He R, Kann BH, Mäkitie A, Fuller CD, Kaski K. Segmentation stability of human head and neck cancer medical images for radiotherapy applications under de-identification conditions: Benchmarking data sharing and artificial intelligence use-cases. Front Oncol 2023; 13:1120392. [PMID: 36925936] [PMCID: PMC10011442] [DOI: 10.3389/fonc.2023.1120392]
Abstract
Background Demand for head and neck cancer (HNC) radiotherapy data in algorithmic development has prompted increased image dataset sharing. Medical images must comply with data protection requirements so that re-use is enabled without disclosing patient identifiers. Defacing, i.e., the removal of facial features from images, is often considered a reasonable compromise between data protection and re-usability for neuroimaging data. While defacing tools have been developed by the neuroimaging community, their acceptability for radiotherapy applications has not been explored. Therefore, this study systematically investigated the impact of available defacing algorithms on HNC organs at risk (OARs). Methods A publicly available dataset of magnetic resonance imaging scans for 55 HNC patients with eight segmented OARs (bilateral submandibular glands, parotid glands, level II neck lymph nodes, and level III neck lymph nodes) was utilized. Eight publicly available defacing algorithms were investigated: afni_refacer, DeepDefacer, defacer, fsl_deface, mask_face, mri_deface, pydeface, and quickshear. Using a subset of scans where defacing succeeded (N = 29), a 5-fold cross-validation 3D U-net-based OAR auto-segmentation model was utilized to perform two main experiments: (1) comparing original and defaced data for training when evaluated on original data; (2) using original data for training and comparing the model evaluation on original and defaced data. Models were primarily assessed using the Dice similarity coefficient (DSC). Results Most defacing methods were unable to produce any usable images for evaluation, while mask_face, fsl_deface, and pydeface were unable to remove the face for 29%, 18%, and 24% of subjects, respectively.
When using the original data for evaluation, the composite OAR DSC was statistically higher (p ≤ 0.05) for the model trained with the original data with a DSC of 0.760 compared to the mask_face, fsl_deface, and pydeface models with DSCs of 0.742, 0.736, and 0.449, respectively. Moreover, the model trained with original data had decreased performance (p ≤ 0.05) when evaluated on the defaced data with DSCs of 0.673, 0.693, and 0.406 for mask_face, fsl_deface, and pydeface, respectively. Conclusion Defacing algorithms may have a significant impact on HNC OAR auto-segmentation model training and testing. This work highlights the need for further development of HNC-specific image anonymization methods.
Affiliation(s)
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Benjamin H. Kann
- Artificial Intelligence in Medicine Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Antti Mäkitie
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Correspondence: Clifton D. Fuller; Kimmo Kaski
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
12
Yoo SK, Kim TH, Chun J, Choi BS, Kim H, Yang S, Yoon HI, Kim JS. Deep-Learning-Based Automatic Detection and Segmentation of Brain Metastases with Small Volume for Stereotactic Ablative Radiotherapy. Cancers (Basel) 2022; 14:2555. [PMID: 35626158] [DOI: 10.3390/cancers14102555]
Abstract
Simple Summary With advances in radiotherapy (RT) techniques and the more frequent use of stereotactic ablative radiotherapy (SABR), precise segmentation of all brain metastases (BM), including small-volume BM, is essential to choose an appropriate treatment modality. However, detecting and manually delineating small-volume BM often results in missed delineations and requires a great amount of labor. To address this issue, we present a useful deep learning (DL) model for the detection and segmentation of BM with contrast-enhanced magnetic resonance images. Specifically, we applied effective training techniques to detect and segment BM of less than 0.04 cc, which is relatively small compared to previous studies. The results demonstrated that the proposed methods provide considerable benefit for the detection and segmentation of BM, even small-volume BM, for SABR. Abstract Recently, several efforts have been made to develop deep learning (DL) algorithms for automatic detection and segmentation of brain metastases (BM). In this study, we developed an advanced DL model for BM detection and segmentation, especially for small-volume BM. From the institutional cancer registry, contrast-enhanced magnetic resonance images of 65 patients and 603 BM were collected to train and evaluate our DL model. Of the 65 patients, 12 patients with 58 BM were assigned to the test set for performance evaluation. Ground truth for BM was established by one radiation oncologist, who manually delineated the BM, and cross-checked by another. Unlike previous studies, our study dealt with relatively small BM, so the area occupied by the BM in the high-resolution images was small. We applied training techniques such as the overlapping patch technique and 2.5-dimensional (2.5D) training to the well-known U-Net architecture to learn smaller BM better. As the DL architecture, a 2D U-Net was utilized with 2.5D training, and effective preprocessing, including the 2.5D overlapping patch technique, was applied for better efficacy and accuracy. The sensitivity and average false positive rate, measured as detection performance, were 97% and 1.25 per patient, respectively. The Dice coefficient with dilation and the 95% Hausdorff distance, measured as segmentation performance, were 75% and 2.057 mm, respectively. Our DL model can detect and segment small-volume BM with good performance, providing considerable benefit for clinicians through automatic detection and segmentation of BM for stereotactic ablative radiotherapy.
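The 2.5D overlapping-patch idea mentioned in this abstract, where each 2D network input is an in-plane patch stacked with its neighbouring slices as channels and adjacent patches overlap, can be sketched as below. Patch size, stride, and slice context are placeholder values, not the study's settings:

```python
import numpy as np

def overlapping_25d_patches(volume, patch=64, stride=32, context=1):
    """Sketch of 2.5D overlapping patch extraction: each sample is an
    in-plane (patch x patch) crop taken from one slice together with
    `context` slices above and below, stacked along the channel axis.
    Patches overlap by (patch - stride) pixels in each direction."""
    z, h, w = volume.shape
    samples = []
    for k in range(context, z - context):
        # (2*context + 1, h, w) stack centred on slice k
        stack = volume[k - context:k + context + 1]
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                samples.append(stack[:, y:y + patch, x:x + patch])
    return np.stack(samples)  # (N, 2*context + 1, patch, patch)
```

A 2D U-Net then treats the neighbouring slices as extra input channels, which gives it limited through-plane context without the memory cost of a full 3D network.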
13
Chang Y, Wang Z, Peng Z, Zhou J, Pi Y, Xu XG, Pei X. Clinical application and improvement of a CNN-based autosegmentation model for clinical target volumes in cervical cancer radiotherapy. J Appl Clin Med Phys 2021; 22:115-125. [PMID: 34643320] [PMCID: PMC8598149] [DOI: 10.1002/acm2.13440]
Abstract
OBJECTIVE Clinical target volume (CTV) autosegmentation for cervical cancer is desirable for radiation therapy, but data heterogeneity and interobserver variability (IOV) limit the clinical adaptability of such methods. An adaptive method is proposed to improve the adaptability of CNN-based autosegmentation of CTV contours in cervical cancer. METHODS This study included 400 cervical cancer treatment planning cases with CTVs delineated by radiation oncologists from three hospitals. The datasets were divided into five subdatasets (80 cases each). The cases in datasets 1, 2, and 3 were delineated by physicians A, B, and C, respectively; the cases in datasets 4 and 5 were delineated by multiple physicians. Dataset 1 was divided into training (50 cases), validation (10 cases), and testing (20 cases) cohorts, which were used to construct the pretrained model. Datasets 2-5 were regarded as host datasets to evaluate the accuracy of the pretrained model. In the adaptive process, the pretrained model was fine-tuned, and improvements were measured as training cases selected from the host datasets were gradually added. The accuracy of the autosegmentation model on each host dataset was evaluated using the corresponding test cases. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD_95) were used to evaluate the accuracy. RESULTS Before and after the adaptive improvements, the average DSC values on the host datasets were 0.818 versus 0.882, 0.763 versus 0.810, 0.727 versus 0.772, and 0.679 versus 0.789, improvements of 7.82%, 6.16%, 6.19%, and 16.05%, respectively. The average HD_95 values were 11.143 mm versus 6.853 mm, 22.402 mm versus 14.076 mm, 28.145 mm versus 16.437 mm, and 33.034 mm versus 16.441 mm, improvements of 37.94%, 37.17%, 41.60%, and 50.23%, respectively. CONCLUSION The proposed method improved the adaptability of the CNN-based autosegmentation model when applied to host datasets.
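The two metrics used here are standard for contour evaluation: DSC measures volumetric overlap, while HD_95 takes the 95th percentile of surface distances between the two contours, making it robust to single outlier points. A minimal HD_95 sketch over contour point sets (conventions for pooling the two directions and for percentile interpolation vary between toolkits, so treat this as one plausible definition, not the authors' implementation):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two contours given as
    (N, d) arrays of surface points: compute each point's distance to
    its nearest neighbour in the other set, pool both directions, and
    take the 95th percentile of the pooled distances."""
    # Pairwise distance matrix, shape (len(points_a), len(points_b))
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # nearest distance from each point of A to B
    b_to_a = d.min(axis=0)   # nearest distance from each point of B to A
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)
```

For large surfaces, a KD-tree nearest-neighbour query (e.g. `scipy.spatial.cKDTree`) replaces the quadratic distance matrix; the percentile step is unchanged.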
Affiliation(s)
- Yankui Chang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Zhi Wang
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Radiation Oncology Department, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Zhao Peng
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Jieping Zhou
- Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Yifei Pi
- Radiation Oncology Department, First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- X George Xu
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Radiation Oncology Department, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Xi Pei
- Institute of Nuclear Medical Physics, University of Science and Technology of China, Hefei, China
- Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
14
Kiljunen T, Akram S, Niemelä J, Löyttyniemi E, Seppälä J, Heikkilä J, Vuolukka K, Kääriäinen OS, Heikkilä VP, Lehtiö K, Nikkinen J, Gershkevitsh E, Borkvel A, Adamson M, Zolotuhhin D, Kolk K, Pang EPP, Tuan JKL, Master Z, Chua MLK, Joensuu T, Kononen J, Myllykangas M, Riener M, Mokka M, Keyriläinen J. A Deep Learning-Based Automated CT Segmentation of Prostate Cancer Anatomy for Radiation Therapy Planning-A Retrospective Multicenter Study. Diagnostics (Basel) 2020; 10:E959. [PMID: 33212793] [PMCID: PMC7697786] [DOI: 10.3390/diagnostics10110959] [Citations in RCA: 18]
Abstract
A commercial deep learning (DL)-based automated segmentation tool (AST) for computed tomography (CT) was evaluated for accuracy and efficiency gain in prostate cancer patients. Thirty patients from six clinics were reviewed with manual (MC), automated (AC), and automated and edited (AEC) contouring methods. In the AEC group, the generated contours (prostate, seminal vesicles, bladder, rectum, femoral heads and penile bulb) were edited, whereas the MC group started from empty datasets. In one clinic, lymph node CTV delineations were evaluated for interobserver variability. Compared to MC, the mean time saved using the AST was 12 min for the whole data set (46%) and 12 min for the lymph node CTV (60%). The delineation consistency between MC and AEC groups according to the Dice similarity coefficient (DSC) improved from 0.78 to 0.94 for the whole data set and from 0.76 to 0.91 for the lymph nodes. The mean DSCs between MC and AC for all six clinics were 0.82 for prostate, 0.72 for seminal vesicles, 0.93 for bladder, 0.84 for rectum, 0.69 for femoral heads and 0.51 for penile bulb. This study shows that using a general DL-based AST for CT images saves time and improves consistency.
Affiliation(s)
- Timo Kiljunen
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland
- Saad Akram
- MVision Ai, c/o Terkko Health hub, Haartmaninkatu 4, FI-00290 Helsinki, Finland
- Jarkko Niemelä
- MVision Ai, c/o Terkko Health hub, Haartmaninkatu 4, FI-00290 Helsinki, Finland
- Eliisa Löyttyniemi
- Department of Biostatistics, University of Turku, Kiinamyllynkatu 10, FI-20014 Turku, Finland
- Jan Seppälä
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland
- Janne Heikkilä
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland
- Kristiina Vuolukka
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland
- Okko-Sakari Kääriäinen
- Kuopio University Hospital, Center of Oncology, Kelkkailijantie 7, FI-70210 Kuopio, Finland
- Vesa-Pekka Heikkilä
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland
- University of Oulu, Research Unit of Medical Imaging, Physics and Technology, Aapistie 5 A, FI-90220 Oulu, Finland
- Kaisa Lehtiö
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland
- Juha Nikkinen
- Oulu University Hospital, Department of Oncology and Radiotherapy, Kajaanintie 50, FI-90220 Oulu, Finland
- University of Oulu, Research Unit of Medical Imaging, Physics and Technology, Aapistie 5 A, FI-90220 Oulu, Finland
- Eduard Gershkevitsh
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia
- Anni Borkvel
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia
- Merve Adamson
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia
- Daniil Zolotuhhin
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia
- Kati Kolk
- North Estonia Medical Centre, J. Sütiste tee 19, 13419 Tallinn, Estonia
- Eric Pei Ping Pang
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore
- Jeffrey Kit Loong Tuan
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore
- Oncology Academic Programme, Duke-NUS Medical School, Singapore 169857, Singapore
- Zubin Master
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore
- Melvin Lee Kiang Chua
- National Cancer Centre Singapore, Division of Radiation Oncology, 11 Hospital Crescent, Singapore 169610, Singapore
- Oncology Academic Programme, Duke-NUS Medical School, Singapore 169857, Singapore
- National Cancer Centre Singapore, Division of Medical Sciences, Singapore 169610, Singapore
- Timo Joensuu
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland
- Juha Kononen
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland
- Mikko Myllykangas
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland
- Maigo Riener
- Docrates Cancer Center, Saukonpaadenranta 2, FI-00180 Helsinki, Finland
- Miia Mokka
- Turku University Hospital, Department of Oncology and Radiotherapy, Hämeentie 11, FI-20521 Turku, Finland
- Jani Keyriläinen
- Turku University Hospital, Department of Oncology and Radiotherapy, Hämeentie 11, FI-20521 Turku, Finland
- Turku University Hospital, Department of Medical Physics, Hämeentie 11, FI-20521 Turku, Finland
15
Cao M, Stiehl B, Yu VY, Sheng K, Kishan AU, Chin RK, Yang Y, Ruan D. Analysis of Geometric Performance and Dosimetric Impact of Using Automatic Contour Segmentation for Radiotherapy Planning. Front Oncol 2020; 10:1762. [PMID: 33102206] [PMCID: PMC7546883] [DOI: 10.3389/fonc.2020.01762] [Citations in RCA: 12]
Abstract
Purpose: To analyze geometric discrepancy and dosimetric impact in using contours generated by auto-segmentation (AS) against manually segmented (MS) clinical contours. Methods: A 48-subject prostate atlas was created and another 15 patients were used for testing. Contours were generated using a commercial atlas-based segmentation tool and compared to their clinical MS counterparts. The geometric correlation was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Dosimetric relevance was evaluated for a subset of patients by assessing the DVH differences derived by optimizing plan dose using the AS and MS contours, respectively, and evaluating with respect to each. A paired t-test was employed for statistical comparison. The discrepancy in plan quality with respect to clinical dosimetric endpoints was evaluated. The analysis was repeated for head/neck (HN) with a 31-subject atlas and 15 test cases. Results: Dice agreement between AS and MS differed significantly across structures: from 0.92 (left) and 0.91 (right) for the femoral heads to 0.38 for the seminal vesicles in the prostate cohort, and from 0.98 for the brain to 0.36 for the chiasm in the HN group. Despite the geometric disagreement, the paired t-tests showed no statistical evidence of systematic differences in dosimetric plan quality between the AS and MS approaches for the prostate cohort. In HN cases, statistically significant differences in dosimetric endpoints were observed in structures with small volumes or elongated shapes such as the cord (p = 0.01) and esophagus (p = 0.04). The largest absolute dose difference of 11 Gy was seen in the mean pharynx dose. Conclusion: Varying AS performance among structures suggests a differential approach of using AS on a subset of structures and focusing MS on the rest. The discrepancy between geometric and dosimetric-endpoint-driven evaluation also indicates the clinical utility of AS contours in optimization and plan quality evaluation despite suboptimal geometric accuracy.
Affiliation(s)
- Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Bradley Stiehl
- Physics & Biology in Medicine Graduate Program, University of California, Los Angeles, Los Angeles, CA, United States
- Victoria Y Yu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
- Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Amar U Kishan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
16
Chan JW, Kearney V, Haaf S, Wu S, Bogdanov M, Reddick M, Dixit N, Sudhyadhom A, Chen J, Yom SS, Solberg TD. A convolutional neural network algorithm for automatic segmentation of head and neck organs at risk using deep lifelong learning. Med Phys 2019; 46:2204-2213. [PMID: 30887523] [DOI: 10.1002/mp.13495] [Citations in RCA: 38]
Abstract
PURPOSE This study proposes a lifelong learning-based convolutional neural network (LL-CNN) algorithm as a superior alternative to single-task learning approaches for automatic segmentation of head and neck organs at risk (OARs). METHODS AND MATERIALS The LL-CNN was trained on twelve head and neck OARs simultaneously using a multitask learning framework. Once the weights of the shared network were established, the final multitask convolutional layer was replaced by a single-task convolutional layer. The single-task transfer learning network was trained on each OAR separately with early stoppage. The accuracy of LL-CNN was assessed based on Dice score and root-mean-square error (RMSE) compared to manually delineated contours set as the gold standard. LL-CNN was compared with 2D-UNet, 3D-UNet, a single-task CNN (ST-CNN), and a pure multitask CNN (MT-CNN). Training, validation, and testing followed Kaggle competition rules, where 160 patients were used for training, 20 were used for internal validation, and 20 in a separate test set were used to report final prediction accuracies. RESULTS On average, contours generated with LL-CNN had higher Dice coefficients and lower RMSE than 2D-UNet, 3D-UNet, ST-CNN, and MT-CNN. LL-CNN required ~72 h to train using a distributed learning framework on 2 Nvidia 1080Ti graphics processing units. LL-CNN required 20 s to predict all 12 OARs, which was approximately as fast as the fastest alternative methods, with the exception of MT-CNN. CONCLUSIONS This study demonstrated that for head and neck organs at risk, LL-CNN achieves a prediction accuracy superior to all alternative algorithms.
Affiliation(s)
- Jason W Chan
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Vasant Kearney
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Samuel Haaf
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Susan Wu
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Madeleine Bogdanov
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Mariah Reddick
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Nayha Dixit
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Atchar Sudhyadhom
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Josephine Chen
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Sue S Yom
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Timothy D Solberg
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
17
Shelley LEA, Sutcliffe MPF, Harrison K, Scaife JE, Parker MA, Romanchikova M, Thomas SJ, Jena R, Burnet NG. Autosegmentation of the rectum on megavoltage image guidance scans. Biomed Phys Eng Express 2019; 5:025006. [PMID: 31057946] [PMCID: PMC6466640] [DOI: 10.1088/2057-1976/aaf1db] [Citations in RCA: 2]
Abstract
Autosegmentation of image guidance (IG) scans is crucial for streamlining and optimising delivered dose calculation in radiotherapy. By accounting for interfraction motion, daily delivered dose can be accumulated and incorporated into automated systems for adaptive radiotherapy. Autosegmentation of IG scans is challenging due to poorer image quality than typical planning kilovoltage computed tomography (kVCT) systems, and the resulting reduction of soft tissue contrast in regions such as the pelvis makes organ boundaries less distinguishable. Current autosegmentation solutions generally involve propagation of planning contours to the IG scan by deformable image registration (DIR). Here, we present a novel approach for primary autosegmentation of the rectum on megavoltage IG scans acquired during prostate radiotherapy, based on the Chan-Vese algorithm. Pre-processing steps such as Hounsfield unit/intensity scaling, identifying search regions, dealing with air, and handling the prostate, are detailed. Post-processing features include identification of implausible contours (nominally those affected by muscle or air), 3D self-checking, smoothing, and interpolation. In cases where the algorithm struggles, the best estimate on a given slice may revert to the propagated kVCT rectal contour. Algorithm parameters were optimised systematically for a training cohort of 26 scans, and tested on a validation cohort of 30 scans, from 10 patients. Manual intervention was not required. Comparing Chan-Vese autocontours with contours manually segmented by an experienced clinical oncologist achieved a mean Dice Similarity Coefficient of 0.78 (SE < 0.011). This was comparable with DIR methods for kVCT and CBCT published in the literature. The autosegmentation system was developed within the VoxTox Research Programme for accumulation of delivered dose to the rectum in prostate radiotherapy, but may have applicability to further anatomical sites and imaging modalities.
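The Chan-Vese algorithm underlying this approach evolves a contour so that the image is well approximated by two constant intensities, one inside and one outside the contour. A much-simplified, pure-Python sketch of one such region-competition update — omitting the curvature/length penalty and all of the pre- and post-processing steps the abstract describes; the name `chan_vese_step` and the nested-list image representation are illustrative:

```python
def chan_vese_step(image, mask):
    """One piecewise-constant update in the spirit of Chan-Vese: compute the mean
    intensity inside and outside the current mask, then relabel every pixel to
    whichever region mean it is closer to (no smoothness/length term)."""
    inside = [image[i][j] for i, row in enumerate(mask) for j, m in enumerate(row) if m]
    outside = [image[i][j] for i, row in enumerate(mask) for j, m in enumerate(row) if not m]
    c_in = sum(inside) / len(inside)     # mean intensity inside the contour
    c_out = sum(outside) / len(outside)  # mean intensity outside the contour
    return [[abs(px - c_in) < abs(px - c_out) for px in row] for row in image]
```

Iterating this update from a rough seed lets a bright region "capture" all similarly bright pixels; the full algorithm adds a contour-length regularizer so the result stays smooth, which is what makes it robust on low-contrast megavoltage images.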
Affiliation(s)
- L E A Shelley
- University of Cambridge, Department of Engineering, Cambridge, United Kingdom
- Addenbrooke's Hospital, Department of Medical Physics and Clinical Engineering, Cambridge, United Kingdom
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- M P F Sutcliffe
- University of Cambridge, Department of Engineering, Cambridge, United Kingdom
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- K Harrison
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- University of Cambridge, Cavendish Laboratory, Cambridge, United Kingdom
- J E Scaife
- Gloucestershire Oncology Centre, Cheltenham General Hospital, Cheltenham, United Kingdom
- M A Parker
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- University of Cambridge, Cavendish Laboratory, Cambridge, United Kingdom
- M Romanchikova
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- National Physical Laboratory, Teddington, United Kingdom
- S J Thomas
- Addenbrooke's Hospital, Department of Medical Physics and Clinical Engineering, Cambridge, United Kingdom
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- R Jena
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- Addenbrooke's Hospital, Oncology Centre, Cambridge, United Kingdom
- N G Burnet
- Cambridge University Hospitals NHS Foundation Trust, Cancer Research UK VoxTox Research Group, Cambridge, United Kingdom
- University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
18
Wang J, Lu J, Qin G, Shen L, Sun Y, Ying H, Zhang Z, Hu W. Technical Note: A deep learning-based autosegmentation of rectal tumors in MR images. Med Phys 2018; 45:2560-2564. [PMID: 29663417] [DOI: 10.1002/mp.12918] [Citations in RCA: 60]
Abstract
PURPOSE Manual contouring of gross tumor volumes (GTV) is a crucial and time-consuming process in rectal cancer radiotherapy. This study aims to develop a simple deep learning-based autosegmentation algorithm to segment rectal tumors on T2-weighted MR images. MATERIALS AND METHODS MRI scans (3T, T2-weighted) of 93 patients with locally advanced (cT3-4 and/or cN1-2) rectal cancer treated with neoadjuvant chemoradiotherapy followed by surgery were enrolled in this study. A network similar to 2D U-Net was established as the training model. The model was trained in two phases, tumor recognition and tumor segmentation, to increase efficiency. An opening (erosion then dilation) process was implemented to smooth contours after segmentation. Data were randomly separated into training (90%) and validation (10%) datasets for a 10-fold cross-validation. Additionally, 20 patients were double contoured for performance evaluation. Four indices were calculated to evaluate the similarity of automated and manual segmentation: Hausdorff distance (HD), average surface distance (ASD), Dice index (DSC), and Jaccard index (JSC). RESULTS The DSC, JSC, HD, and ASD (mean ± SD) were 0.74 ± 0.14, 0.60 ± 0.16, 20.44 ± 13.35 mm, and 3.25 ± 1.69 mm for the validation dataset; the corresponding indices were 0.71 ± 0.13, 0.57 ± 0.15, 14.91 ± 7.62 mm, and 2.67 ± 1.46 mm between two human radiation oncologists. No significant difference was observed between automated and manual segmentation for DSC (P = 0.42), JSC (P = 0.35), HD (P = 0.079), or ASD (P = 0.16). However, a significant difference was found for HD (P = 0.0027) without the opening process. CONCLUSION This study showed that a simple deep learning neural network can segment rectal tumors on T2-weighted MR images with results comparable to those of a human.
Affiliation(s)
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiayu Lu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Gan Qin
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Lijun Shen
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yiqun Sun
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Hongmei Ying
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
19
Mitchell RA, Wai P, Colgan R, Kirby AM, Donovan EM. Improving the efficiency of breast radiotherapy treatment planning using a semi-automated approach. J Appl Clin Med Phys 2017; 18:18-24. [PMID: 28291912] [PMCID: PMC5689888] [DOI: 10.1002/acm2.12006] [Citations in RCA: 8]
Abstract
OBJECTIVES To reduce treatment planning times while maintaining plan quality through the introduction of semi-automated planning techniques for breast radiotherapy. METHODS Automatic critical structure delineation was examined using the Smart Probabilistic Image Contouring Engine (SPICE) commercial autosegmentation software (Philips Radiation Oncology Systems, Fitchburg, WI) for a cohort of ten patients. Semiautomated planning was investigated by employing scripting in the treatment planning system to automate segment creation for breast step-and-shoot planning and create objectives for segment weight optimization; considerations were made for three different multileaf collimator (MLC) configurations. Forty patients were retrospectively planned using the script and a planning time comparison performed. RESULTS The SPICE heart and lung outlines agreed closely with clinician-defined outlines (median Dice Similarity Coefficient > 0.9); median difference in mean heart dose was 0.0 cGy (range -10.8 to 5.4 cGy). Scripted treatment plans demonstrated equivalence with their clinical counterparts. No statistically significant differences were found for target parameters. Minimal ipsilateral lung dose increases were also observed. Statistically significant (P < 0.01) time reductions were achievable for MLCi and Agility MLC (Elekta Ltd, Crawley, UK) plans (median 4.9 and 5.9 min, respectively). CONCLUSIONS The use of commercial autosegmentation software enables breast plan adjustment based on doses to organs at risk. Semi-automated techniques for breast radiotherapy planning offer modest reductions in planning times. However, in the context of a typical department's breast radiotherapy workload, minor savings per plan translate into greater efficiencies overall.
Affiliation(s)
- Robert A Mitchell
- Joint Department of Physics, The Royal Marsden NHS Foundation Trust/Institute of Cancer Research, Sutton, Surrey, UK
- Philip Wai
- Joint Department of Physics, The Royal Marsden NHS Foundation Trust/Institute of Cancer Research, Sutton, Surrey, UK
- Ruth Colgan
- Joint Department of Physics, The Royal Marsden NHS Foundation Trust/Institute of Cancer Research, Sutton, Surrey, UK
- Anna M Kirby
- Department of Radiotherapy, The Royal Marsden NHS Foundation Trust, Sutton, Surrey, UK
- Ellen M Donovan
- Joint Department of Physics, The Royal Marsden NHS Foundation Trust/Institute of Cancer Research, Sutton, Surrey, UK