1
Vossough A, Khalili N, Familiar AM, Gandhi D, Viswanathan K, Tu W, Haldar D, Bagheri S, Anderson H, Haldar S, Storm PB, Resnick A, Ware JB, Nabavizadeh A, Fathi Kazerooni A. Training and Comparison of nnU-Net and DeepMedic Methods for Autosegmentation of Pediatric Brain Tumors. AJNR Am J Neuroradiol 2024; 45:1081-1089. [PMID: 38724204] [DOI: 10.3174/ajnr.a8293]
Abstract
BACKGROUND AND PURPOSE Tumor segmentation is essential for surgical and treatment planning and for response assessment and monitoring in pediatric brain tumors, the leading cause of cancer-related death among children. However, manual segmentation is time-consuming and has high interoperator variability, underscoring the need for more efficient methods. We trained and compared 2 deep-learning-based 3D segmentation models, DeepMedic and nnU-Net, on pediatric-specific, multi-institutional brain tumor data based on multiparametric MR images. MATERIALS AND METHODS Multiparametric preoperative MR imaging scans of 339 pediatric patients (n = 293 internal and n = 46 external cohorts) with a variety of tumor subtypes were preprocessed and manually segmented into 4 tumor subregions, ie, enhancing tumor, nonenhancing tumor, cystic components, and peritumoral edema. After training, the performance of the 2 models on internal and external test sets was evaluated with reference to ground truth manual segmentations. Additionally, concordance was assessed by comparing the volume of the subregions as a percentage of the whole tumor between model predictions and ground truth segmentations, using the Pearson or Spearman correlation coefficients and the Bland-Altman method. RESULTS For the nnU-Net internal test set, the mean Dice score was 0.9 (SD, 0.07) (median, 0.94) for whole tumor; 0.77 (SD, 0.29) for enhancing tumor; 0.66 (SD, 0.32) for nonenhancing tumor; 0.71 (SD, 0.33) for cystic components; and 0.71 (SD, 0.40) for peritumoral edema. For DeepMedic, the mean Dice scores were 0.82 (SD, 0.16) for whole tumor; 0.66 (SD, 0.32) for enhancing tumor; 0.48 (SD, 0.27) for nonenhancing tumor; 0.48 (SD, 0.36) for cystic components; and 0.19 (SD, 0.33) for peritumoral edema. Dice scores were significantly higher for nnU-Net (P ≤ .01).
Correlation coefficients for tumor subregion percentage volumes were also higher for nnU-Net than for DeepMedic (0.98 versus 0.91 for enhancing tumor, 0.97 versus 0.75 for nonenhancing tumor, 0.98 versus 0.80 for cystic components, and 0.95 versus 0.33 for peritumoral edema in the internal test set). Bland-Altman plots likewise showed better agreement for nnU-Net than for DeepMedic. External validation of the trained nnU-Net model on the multi-institutional Brain Tumor Segmentation Challenge in Pediatrics (BraTS-PEDs) 2023 data set revealed high generalization capability in the segmentation of whole tumor, tumor core (a combination of enhancing tumor, nonenhancing tumor, and cystic components), and enhancing tumor, with mean Dice scores of 0.87 (SD, 0.13) (median, 0.91), 0.83 (SD, 0.18) (median, 0.89), and 0.48 (SD, 0.38) (median, 0.58), respectively. CONCLUSIONS The pediatric-specific data-trained nnU-Net model is superior to DeepMedic for whole tumor and subregion segmentation of pediatric brain tumors.
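The Dice score used throughout this comparison can be computed directly from binary segmentation masks. A minimal Python/NumPy sketch (toy masks, not the study's pipeline):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping "tumor" masks on a 4x4 slice
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels
print(dice_score(a, b))  # 2*4/(4+6) = 0.8
```

In practice each tumor subregion (enhancing, nonenhancing, cystic, edema) is scored separately as its own binary mask.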
Affiliation(s)
Arastoo Vossough
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Department of Radiology (A.V., S.B., J.B.W., A.N.), University of Pennsylvania, Philadelphia, Pennsylvania
- Department of Radiology (A.V.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Nastaran Khalili
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Ariana M Familiar
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Deep Gandhi
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Karthik Viswanathan
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Wenxin Tu
- College of Arts and Sciences (W.T.), University of Pennsylvania, Philadelphia, Pennsylvania
Debanjan Haldar
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Sina Bagheri
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Department of Radiology (A.V., S.B., J.B.W., A.N.), University of Pennsylvania, Philadelphia, Pennsylvania
Hannah Anderson
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Shuvanjan Haldar
- School of Engineering (S.H.), Rutgers University, New Brunswick, New Jersey
Phillip B Storm
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Department of Neurosurgery (P.B.S., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Adam Resnick
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
Jeffrey B Ware
- Department of Radiology (A.V., S.B., J.B.W., A.N.), University of Pennsylvania, Philadelphia, Pennsylvania
Ali Nabavizadeh
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Department of Radiology (A.V., S.B., J.B.W., A.N.), University of Pennsylvania, Philadelphia, Pennsylvania
Anahita Fathi Kazerooni
- From the Center for Data Driven Discovery in Biomedicine (A.V., N.K., A.M.F., D.G., K.V., D.H., S.B., H.A., P.B.S., A.R., A.N., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Department of Neurosurgery (P.B.S., A.F.K.), Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Center for AI & Data Science for Integrated Diagnostics (A.F.K.), University of Pennsylvania, Philadelphia, Pennsylvania
- Center for Biomedical Image Computing and Analytics (A.F.K.), University of Pennsylvania, Philadelphia, Pennsylvania
2
Wong YM, Koh CWY, Lew KS, Chua CGA, Yeap PL, Zhang ET, Ong ALK, Tuan JKL, Ng BF, Lew WS, Lee JCL, Tan HQ. Deformable anthropomorphic pelvis phantom for dose accumulation verification. Phys Med Biol 2024; 69:12NT01. [PMID: 38821109] [DOI: 10.1088/1361-6560/ad52e4]
Abstract
Objective. The validation of deformable image registration (DIR) for contour propagation is often done using contour-based metrics. Meanwhile, dose accumulation requires evaluation of voxel mapping accuracy, which might not be accurately represented by contour-based metrics. By fabricating a deformable anthropomorphic pelvis phantom, we aimed to (1) quantify the voxel mapping accuracy for various deformation scenarios, in high- and low-contrast regions, and (2) identify any correlation between the dice similarity coefficient (DSC), a commonly used contour-based metric, and the voxel mapping accuracy for each organ. Approach. Four organs, i.e. pelvic bone, prostate, bladder and rectum (PBR), were 3D printed using PLA and a Polyjet digital material, and assembled. The latter three were implanted with glass-bead and CT markers within or on their surfaces. Four deformation scenarios were simulated by varying the bladder and rectum volumes. For each scenario, nine DIRs with different parameters were performed on RayStation v10B. The voxel mapping accuracy was quantified as the discrepancy between true and mapped marker positions, termed the target registration error (TRE). A Pearson correlation test was done between the DSC and mean TRE for each organ. Main results. For the first time, we fabricated a deformable phantom purely from 3D printing, which successfully reproduced realistic anatomical deformations. Overall, the voxel mapping accuracy dropped with increasing deformation magnitude, but improved when more organs were used to guide the DIR or when the registration region was limited. DSC was found to be a good indicator of voxel mapping accuracy for prostate and rectum, but a comparatively poorer one for bladder. DSC > 0.85/0.90 was established as the threshold for mean TRE ⩽ 0.3 cm for rectum/prostate. For bladder, extra metrics in addition to DSC should be considered. Significance.
This work presented a 3D printed phantom, which enabled quantification of voxel mapping accuracy and evaluation of correlation between DSC and voxel mapping accuracy.
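The TRE defined here is simply the Euclidean distance between true and DIR-mapped marker positions, which can then be correlated against per-organ DSC. A minimal sketch (all marker coordinates and DSC values below are invented for illustration):

```python
import numpy as np

def target_registration_error(true_pts: np.ndarray, mapped_pts: np.ndarray) -> np.ndarray:
    """Per-marker Euclidean distance (cm) between true and DIR-mapped positions."""
    return np.linalg.norm(true_pts - mapped_pts, axis=1)

# Invented marker positions (cm) for one deformation scenario
true_pts   = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
mapped_pts = np.array([[1.1, 2.0, 3.0], [4.0, 5.2, 6.0]])
tre = target_registration_error(true_pts, mapped_pts)  # [0.1, 0.2]
print(tre.mean())  # mean TRE for this organ/scenario

# The study then Pearson-correlated per-organ DSC against mean TRE
dsc      = np.array([0.92, 0.88, 0.85, 0.80])  # invented per-scenario DSC
mean_tre = np.array([0.15, 0.22, 0.28, 0.35])  # invented per-scenario mean TRE (cm)
r = np.corrcoef(dsc, mean_tre)[0, 1]  # strongly negative: higher DSC, lower TRE
```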
Affiliation(s)
Yun Ming Wong
- Division of Physics and Applied Physics, Nanyang Technological University, Singapore, Singapore
Calvin Wei Yang Koh
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Ai3 Lab, National Cancer Centre Singapore, Singapore, Singapore
Kah Seng Lew
- Division of Physics and Applied Physics, Nanyang Technological University, Singapore, Singapore
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
Clifford Ghee Ann Chua
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Ai3 Lab, National Cancer Centre Singapore, Singapore, Singapore
Ping Lin Yeap
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Department of Oncology, University of Cambridge, Cambridge, United Kingdom
Ee Teng Zhang
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Singapore Centre for 3D Printing, Nanyang Technological University, Singapore, Singapore
Ashley Li Kuan Ong
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Ai3 Lab, National Cancer Centre Singapore, Singapore, Singapore
Jeffrey Kit Loong Tuan
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Ai3 Lab, National Cancer Centre Singapore, Singapore, Singapore
Bing Feng Ng
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
Wen Siang Lew
- Division of Physics and Applied Physics, Nanyang Technological University, Singapore, Singapore
James Cheow Lei Lee
- Division of Physics and Applied Physics, Nanyang Technological University, Singapore, Singapore
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
Hong Qi Tan
- Division of Physics and Applied Physics, Nanyang Technological University, Singapore, Singapore
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore, Singapore
- Oncology Academic Clinical Programme, Duke-NUS Medical School, Singapore, Singapore
- Ai3 Lab, National Cancer Centre Singapore, Singapore, Singapore
3
Chelliah A, Wood DA, Canas LS, Shuaib H, Currie S, Fatania K, Frood R, Rowland-Hill C, Thust S, Wastling SJ, Tenant S, McBain C, Foweraker K, Williams M, Wang Q, Roman A, Dragos C, MacDonald M, Lau YH, Linares CA, Bassiouny A, Luis A, Young T, Brock J, Chandy E, Beaumont E, Lam TC, Welsh L, Lewis J, Mathew R, Kerfoot E, Brown R, Beasley D, Glendenning J, Brazil L, Swampillai A, Ashkan K, Ourselin S, Modat M, Booth TC. Glioblastoma and radiotherapy: A multicenter AI study for Survival Predictions from MRI (GRASP study). Neuro Oncol 2024; 26:1138-1151. [PMID: 38285679] [PMCID: PMC11145448] [DOI: 10.1093/neuonc/noae017]
Abstract
BACKGROUND The aim was to predict survival of glioblastoma at 8 months after radiotherapy (a period allowing for completion of a typical course of adjuvant temozolomide), by applying deep learning to the first brain MRI after radiotherapy completion. METHODS Retrospective and prospective data were collected from 206 consecutive glioblastoma, isocitrate dehydrogenase-wildtype patients diagnosed between March 2014 and February 2022 across 11 UK centers. Models were trained on 158 retrospective patients from 3 centers. Holdout test sets were retrospective (n = 19; internal validation) and prospective (n = 29; external validation from 8 distinct centers). Neural network branches for T2-weighted and contrast-enhanced T1-weighted inputs were concatenated to predict survival. A nonimaging branch (demographics/MGMT/treatment data) was also combined with the imaging model. We investigated the influence of individual MR sequences, nonimaging features, and weighted dense blocks pretrained for abnormality detection. RESULTS The imaging model outperformed the nonimaging model in all test sets (area under the receiver operating characteristic curve [AUC]; P = .038) and performed similarly to a combined imaging/nonimaging model (P > .05). Imaging, nonimaging, and combined models applied to amalgamated test sets gave AUCs of 0.93, 0.79, and 0.91, respectively. Initializing the imaging model with pretrained weights from tens of thousands of brain MRIs improved performance considerably (amalgamated test set AUC without pretraining, 0.64; P = .003). CONCLUSIONS A deep learning model using MRI images after radiotherapy reliably and accurately determined survival of glioblastoma. The model serves as a prognostic biomarker identifying patients who will not survive beyond a typical course of adjuvant temozolomide, thereby stratifying patients into those who might require early second-line or clinical trial treatment.
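The AUC figures quoted can be understood via the rank-based estimator: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch with invented scores and labels (not study data; real pipelines would use a library routine such as scikit-learn's `roc_auc_score`):

```python
import numpy as np

def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1, 1])            # 1 = survived beyond 8 months (invented)
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9])  # model-predicted risk scores (invented)
print(auc(labels, scores))  # 5/6 ≈ 0.833
```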
Affiliation(s)
Alysha Chelliah
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
David A Wood
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Liane S Canas
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Haris Shuaib
- Guy’s and St. Thomas’ NHS Foundation Trust, London, UK
- Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, UK
Stefanie Thust
- University College London Hospitals NHS Foundation Trust, London, UK
- Institute of Neurology, University College London, London, UK
- Nottingham University Hospitals NHS Trust, Nottingham, UK
- Precision Imaging Beacon, School of Medicine, University of Nottingham, Nottingham, UK
Stephen J Wastling
- University College London Hospitals NHS Foundation Trust, London, UK
- Institute of Neurology, University College London, London, UK
Sean Tenant
- The Christie NHS Foundation Trust, Withington, Manchester, UK
Matthew Williams
- Radiotherapy Department, Imperial College Healthcare NHS Trust, London, UK
- Institute for Global Health Improvement, Imperial College London, London, UK
Qiquan Wang
- Radiotherapy Department, Imperial College Healthcare NHS Trust, London, UK
- Institute for Global Health Improvement, Imperial College London, London, UK
Andrei Roman
- Guy’s and St. Thomas’ NHS Foundation Trust, London, UK
- Oncology Institute Prof. Dr. Ion Chiricuta, Cluj-Napoca, Romania
Yue Hui Lau
- King’s College Hospital NHS Foundation Trust, London, UK
Ahmed Bassiouny
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- Department of Radiology, Mansoura University, Mansoura, Egypt
Aysha Luis
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- King’s College Hospital NHS Foundation Trust, London, UK
Thomas Young
- Guy’s and St. Thomas’ NHS Foundation Trust, London, UK
Juliet Brock
- Brighton and Sussex University Hospitals NHS Trust, England, UK
Edward Chandy
- Brighton and Sussex University Hospitals NHS Trust, England, UK
Erica Beaumont
- Lancashire Teaching Hospitals NHS Foundation Trust, England, UK
Tai-Chung Lam
- Lancashire Teaching Hospitals NHS Foundation Trust, England, UK
Liam Welsh
- The Royal Marsden NHS Foundation Trust, London, UK
Joanne Lewis
- Newcastle upon Tyne Hospitals NHS Foundation Trust, England, UK
Ryan Mathew
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- School of Medicine, University of Leeds, Leeds, UK
Eric Kerfoot
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Richard Brown
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Daniel Beasley
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- Guy’s and St. Thomas’ NHS Foundation Trust, London, UK
Lucy Brazil
- Guy’s and St. Thomas’ NHS Foundation Trust, London, UK
Keyoumars Ashkan
- Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, UK
- King’s College Hospital NHS Foundation Trust, London, UK
Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
Thomas C Booth
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- King’s College Hospital NHS Foundation Trust, London, UK
4
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715] [PMCID: PMC11023777] [DOI: 10.1016/j.ijrobp.2023.10.033]
Abstract
Deep learning neural networks (DLNNs), a branch of artificial intelligence (AI), have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based models for auto-segmentation have shown high accuracy in early studies in research settings and controlled environments (single institution). Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as a stand-alone tool that provides a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Standardization of contouring nomenclature and guidelines has been a main task undertaken by NRG Oncology. In clinical trials, AI auto-segmentation holds the potential to reduce interobserver variation, nomenclature noncompliance, and contouring guideline deviations. Meanwhile, trial reviewers could use AI tools to verify contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of these tools, NRG Oncology has formed a working group to evaluate commercial AI auto-segmentation tools. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made for the implementation of these commercial AI models, along with precautions regarding their challenges and limitations.
Affiliation(s)
Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
5
Barry N, Koh ES, Ebert MA, Moore A, Francis RJ, Rowshanfarzad P, Hassan GM, Ng SP, Back M, Chua B, Pinkham MB, Pullar A, Phillips C, Sia J, Gorayski P, Le H, Gill S, Croker J, Bucknell N, Bettington C, Syed F, Jung K, Chang J, Bece A, Clark C, Wada M, Cook O, Whitehead A, Rossi A, Grose A, Scott AM. [18]F-fluoroethyl-l-tyrosine positron emission tomography for radiotherapy target delineation: Results from a Radiation Oncology credentialing program. Phys Imaging Radiat Oncol 2024; 30:100568. [PMID: 38585372] [PMCID: PMC10998205] [DOI: 10.1016/j.phro.2024.100568]
Abstract
Background and purpose The [18]F-fluoroethyl-l-tyrosine (FET) PET in Glioblastoma (FIG) study is an Australian prospective, multi-centre trial evaluating FET PET for newly diagnosed glioblastoma management. The Radiation Oncology credentialing program aimed to assess the feasibility of radiation oncologist (RO) derivation of standard-of-care target volumes (TVMR) and hybrid target volumes (TVMR+FET) incorporating pre-defined FET PET biological tumour volumes (BTVs). Materials and methods Central review and analysis of TVMR and TVMR+FET was undertaken across three benchmarking cases. BTVs were pre-defined by a sole nuclear medicine expert. Intraclass correlation coefficient (ICC) confidence intervals (CIs) evaluated volume agreement. RO contour spatial and boundary agreement were evaluated (Dice similarity coefficient [DSC], Jaccard index [JAC], overlap volume [OV], Hausdorff distance [HD] and mean absolute surface distance [MASD]). Dose plan generation (one case per site) was assessed. Results Data from 19 ROs across 10 trial sites (54 initial submissions, 8 resubmissions requested, 4 conditional passes) were assessed, with an initial pass rate of 77.8%; all resubmissions passed. TVMR+FET were significantly larger than TVMR (p < 0.001) for all cases. RO gross tumour volume (GTV) agreement was moderate-to-excellent for GTVMR (ICC = 0.910; 95% CI, 0.708-0.997) and good-to-excellent for GTVMR+FET (ICC = 0.965; 95% CI, 0.871-0.999). GTVMR+FET showed greater spatial overlap and boundary agreement compared with GTVMR. For the clinical target volume (CTV), CTVMR+FET showed lower average boundary agreement versus CTVMR (MASD: 1.73 mm vs. 1.61 mm, p = 0.042). All sites passed the planning exercise. Conclusions The credentialing program demonstrated the feasibility of successfully credentialing 19 ROs across 10 sites, increasing national expertise in TVMR+FET delineation.
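The boundary metrics used here (HD, MASD) compare contour surface point sets: HD is the worst-case nearest-neighbour distance, MASD the average. A simplified sketch on invented 2D point sets (real implementations operate on 3D surfaces or voxel boundaries, e.g. via SimpleITK filters):

```python
import numpy as np

def _dists(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """For each point in a, distance to the nearest point in b."""
    return np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1), axis=1)

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance: worst-case nearest-neighbour distance."""
    return max(_dists(a, b).max(), _dists(b, a).max())

def masd(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute surface distance: average of both directed mean distances."""
    return 0.5 * (_dists(a, b).mean() + _dists(b, a).mean())

# Two invented contour point sets (mm)
c1 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
c2 = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
print(hausdorff(c1, c2), masd(c1, c2))
```

A single outlying boundary point dominates HD but barely moves MASD, which is why the two metrics are reported together.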
Affiliation(s)
Nathaniel Barry
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, WA, Australia
Eng-Siew Koh
- South Western Sydney Clinical School, University of New South Wales, Australia
Martin A. Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA, Australia
- Australian Centre for Quantitative Imaging, Medical School, University of Western Australia, Crawley, WA, Australia
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, WA, Australia
Alisha Moore
- Trans Tasman Radiation Oncology Group (TROG) Cancer Research, Newcastle, NSW, Australia
Roslyn J. Francis
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Nedlands, WA, Australia
- Australian Centre for Quantitative Imaging, Medical School, University of Western Australia, Crawley, WA, Australia
Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
- Centre for Advanced Technologies in Cancer Research (CATCR), Perth, WA, Australia
Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia
Sweet P. Ng
- Department of Radiation Oncology, Austin Health, Heidelberg, VIC, Australia
Michael Back
- Department of Radiation Oncology, Royal North Shore Hospital, Sydney, NSW, Australia
Benjamin Chua
- Department of Radiation Oncology, Royal Brisbane Womens Hospital, Brisbane, QLD, Australia
Mark B. Pinkham
- Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane, QLD, Australia
Andrew Pullar
- Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane, QLD, Australia
Claire Phillips
- Department of Radiation Oncology, Peter MacCallum Cancer Centre, VIC, Australia
Joseph Sia
- Department of Radiation Oncology, Peter MacCallum Cancer Centre, VIC, Australia
Peter Gorayski
- Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, Australia
Hien Le
- Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, Australia
Suki Gill
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA, Australia
Jeremy Croker
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA, Australia
Nicholas Bucknell
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA, Australia
Catherine Bettington
- Department of Radiation Oncology, Royal Brisbane Womens Hospital, Brisbane, QLD, Australia
Farhan Syed
- Department of Radiation Oncology, The Canberra Hospital, Canberra, ACT, Australia
Kylie Jung
- Department of Radiation Oncology, The Canberra Hospital, Canberra, ACT, Australia
Joe Chang
- South Western Sydney Clinical School, University of New South Wales, Australia
Andrej Bece
- Department of Radiation Oncology, St George Hospital, Kogarah, NSW, Australia
Catherine Clark
- Department of Radiation Oncology, St George Hospital, Kogarah, NSW, Australia
Mori Wada
- Department of Radiation Oncology, Austin Health, Heidelberg, VIC, Australia
Olivia Cook
- Trans Tasman Radiation Oncology Group (TROG) Cancer Research, Newcastle, NSW, Australia
Angela Whitehead
- Trans Tasman Radiation Oncology Group (TROG) Cancer Research, Newcastle, NSW, Australia
Alana Rossi
- Trans Tasman Radiation Oncology Group (TROG) Cancer Research, Newcastle, NSW, Australia
Andrew Grose
- Trans Tasman Radiation Oncology Group (TROG) Cancer Research, Newcastle, NSW, Australia
Andrew M. Scott
- Department of Molecular Imaging and Therapy, Austin Health, and University of Melbourne, Melbourne, VIC, Australia
- Olivia Newton-John Cancer Research Institute, and School of Cancer Medicine, La Trobe University, Melbourne, VIC, Australia
6
Rossi E, Emin S, Gubanski M, Gagliardi G, Hedman M, Villegas F. Contouring practices and artefact management within a synthetic CT-based radiotherapy workflow for the central nervous system. Radiat Oncol 2024; 19:27. [PMID: 38424642] [PMCID: PMC11320867] [DOI: 10.1186/s13014-024-02422-9]
Abstract
BACKGROUND The incorporation of magnetic resonance (MR) imaging in radiotherapy (RT) workflows improves contouring precision, yet it introduces geometrical uncertainties when registered with computed tomography (CT) scans. Synthetic CT (sCT) images could minimize these uncertainties and streamline the RT workflow. This study aims to compare the contouring capabilities of sCT images with those of a conventional CT-based/MR-assisted RT workflow, with an emphasis on managing artefacts caused by surgical fixation devices (SFDs). METHODS The study comprised a commissioning cohort of 100 patients with cranial tumors treated using a conventional CT-based/MR-assisted RT workflow and a validation cohort of 30 patients with grade IV glioblastomas treated using an MR-only workflow. A CE-marked artificial-intelligence-based sCT product was used. Delineation accuracy was compared using the dice similarity coefficient (DSC) and average Hausdorff distance (AHD). Artefacts within the commissioning cohort were visually inspected and classified, and an estimate of their thickness was derived using the Hausdorff distance (HD). For the validation cohort, boolean operators were used to extract artefact volumes adjacent to the target, which were contrasted with the planning target volume. RESULTS The combination of a high DSC (0.94) and a low AHD (0.04 mm) indicates equal target delineation capacity between sCT images and conventional CT scans. However, the results for organs-at-risk delineation were less consistent, likely because of voxel size differences between sCT images and CT scans and the absence of standardized delineation routines. Artefacts observed in sCT images appeared as enhancements of cranial bone and, when close to the target, could affect its definition. Therefore, in the validation cohort the clinical target volume (CTV) was expanded towards the bone by 3.5 mm, as estimated by HD analysis. Subsequent analysis of cone-beam CT scans showed that the CTV adjustment was enough to provide acceptable target coverage. CONCLUSION The tested sCT product performed on par with conventional CT in terms of contouring capability. Additionally, this study provides both the first comprehensive classification of metal artefacts on an sCT product and a novel method to assess the clinical impact of artefacts caused by SFDs on target delineation. This methodology encourages similar analysis for other sCT products.
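The boolean artefact-volume analysis described above (extracting artefact voxels adjacent to the target and contrasting them with the planning target volume) reduces to mask algebra on a common voxel grid. A hedged NumPy sketch with invented masks and spacing:

```python
import numpy as np

# Invented binary masks on a common 10x10x10 voxel grid (1 x 1 x 1 mm spacing)
voxel_volume_cc = 0.001  # 1 mm^3 expressed in cm^3
artefact = np.zeros((10, 10, 10), dtype=bool); artefact[4:6, 4:8, 4:8] = True
ptv      = np.zeros((10, 10, 10), dtype=bool); ptv[3:7, 3:7, 3:7] = True

# Boolean intersection: artefact voxels lying inside the PTV
overlap = artefact & ptv
overlap_cc = overlap.sum() * voxel_volume_cc
fraction_of_ptv = overlap.sum() / ptv.sum()
print(overlap_cc, fraction_of_ptv)
```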
Affiliation(s)
- Elia Rossi
  - Department of Radiation Oncology, Karolinska University Hospital, Solna, Sweden
- Sevgi Emin
  - Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Michael Gubanski
  - Department of Radiation Oncology, Karolinska University Hospital, Solna, Sweden
  - Department of Oncology-Pathology, Karolinska Institutet, Solna, Sweden
- Giovanna Gagliardi
  - Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
  - Department of Oncology-Pathology, Karolinska Institutet, Solna, Sweden
- Mattias Hedman
  - Department of Radiation Oncology, Karolinska University Hospital, Solna, Sweden
  - Department of Oncology-Pathology, Karolinska Institutet, Solna, Sweden
- Fernanda Villegas
  - Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
  - Department of Oncology-Pathology, Karolinska Institutet, Solna, Sweden
7
Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. [PMID: 37822196 DOI: 10.1016/j.diii.2023.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/02/2023] [Revised: 07/28/2023] [Accepted: 08/01/2023] [Indexed: 10/13/2023]
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with various levels of experience independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference.
Variability is minimal at three readers, or three readers represent a tipping point in the evolution of variability, for both pairwise metrics and metrics computed with respect to a reference. Accordingly, three readers may represent an optimal number for establishing reference standards for artificial intelligence applications.
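Of the two consensus methods above, STAPLE is an iterative expectation-maximization estimate (available, for example, in segmentation toolkits), while majority voting can be sketched in a few lines. A toy example with three readers' binary masks (illustrative only):

```python
import numpy as np

def majority_vote(masks):
    """Consensus mask: a voxel is foreground if more than half the readers marked it."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Three toy reader delineations of the same structure
r1 = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
r2 = np.array([[1, 1, 1], [1, 0, 0]], dtype=bool)
r3 = np.array([[0, 1, 1], [1, 1, 0]], dtype=bool)
consensus = majority_vote([r1, r2, r3])
print(consensus.astype(int))
# [[1 1 1]
#  [1 1 0]]
```

With an odd number of readers the vote is never tied, which is one practical reason small odd panels (such as the three readers recommended above) are convenient.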
Affiliation(s)
- Sébastien Molière
  - Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France
- Dimitri Hamzaoui
  - Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger
  - Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne
  - Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera
  - Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane
  - Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier
  - Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint
  - Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai
  - Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache
  - Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette
  - Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna
  - Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
8
Archawametheekul K, Puttanawarut C, Suphaphong S, Jiarpinitnun C, Sakulsingharoj S, Stansook N, Khachonkham S. Investigating Image Registration Accuracy and Contour Propagation for Adaptive Radiotherapy Purposes in Line with the Task Group No. 132 Recommendation. J Med Phys 2024; 49:64-72. [PMID: 38828076 PMCID: PMC11141753 DOI: 10.4103/jmp.jmp_168_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/07/2023] [Revised: 02/12/2024] [Accepted: 02/12/2024] [Indexed: 06/05/2024] Open
Abstract
Purpose Image registration is a crucial component of the adaptive radiotherapy workflow. This study investigates the accuracy of the deformable image registration (DIR) and contour propagation features of SmartAdapt, an application in the Eclipse treatment planning system (TPS) version 16.1. Materials and Methods The registration accuracy was validated using the Task Group No. 132 (TG-132) virtual phantom, which features contour evaluation and landmark analysis based on the quantitative criteria recommended in the American Association of Physicists in Medicine TG-132 report. The target registration error, Dice similarity coefficient (DSC), and center-of-mass displacement were used as quantitative validation metrics. The performance of the contour propagation feature was evaluated using clinical datasets (head and neck, pelvis, and chest) and an additional four-dimensional computed tomography (CT) dataset from TG-132. The primary planning CT and the second CT images were appropriately registered and deformed. The DSC was used to quantify the volume overlap between the deformed contours and the radiation oncologist (RO)-drawn contours. The clinical value of the DIR-generated structures was reviewed and scored by an experienced RO as a qualitative assessment. Results The registration accuracy fell within the specified tolerances. SmartAdapt propagated contours reasonably well for the chest and head-and-neck regions, with DSC values of 0.80 for organs at risk. Misregistration was frequently observed in the pelvic region, a low-contrast region. However, 78% of structures required no modification or only minor modification, demonstrating good agreement between the contour comparison and the qualitative analysis. Conclusions SmartAdapt has adequate efficiency for image registration and contour propagation for adaptive purposes across various anatomical sites.
However, there should be concern about its performance in regions with low contrast and small volumes.
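Of the TG-132 metrics listed above, the center-of-mass displacement is the simplest to compute for a binary structure before and after registration. A NumPy sketch on toy masks (illustrative; real use would scale indices by voxel spacing in mm):

```python
import numpy as np

def center_of_mass(mask: np.ndarray) -> np.ndarray:
    """Mean voxel index of all foreground voxels in a binary mask."""
    return np.argwhere(mask).mean(axis=0)

def com_displacement(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between the centers of mass of two binary masks."""
    return float(np.linalg.norm(center_of_mass(a) - center_of_mass(b)))

fixed = np.zeros((10, 10), dtype=bool)
fixed[2:5, 2:5] = True        # center of mass at (3, 3)
deformed = np.zeros((10, 10), dtype=bool)
deformed[5:8, 2:5] = True     # same structure shifted 3 rows: COM at (6, 3)
print(com_displacement(fixed, deformed))  # 3.0
```

Target registration error follows the same distance computation applied to matched landmark points rather than mask centroids.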
Affiliation(s)
- Kamonchanok Archawametheekul
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Chanon Puttanawarut
  - Chakri Naruebodindra Medical Institute, Mahidol University, Samut Prakan, Thailand
  - Department of Clinical Epidemiology and Biostatistics, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Sithiphong Suphaphong
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Chuleeporn Jiarpinitnun
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Siwaporn Sakulsingharoj
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Nauljun Stansook
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Suphalak Khachonkham
  - Division of Radiation Oncology, Department of Diagnostic and Therapeutic Radiology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
9
Pemberton HG, Wu J, Kommers I, Müller DMJ, Hu Y, Goodkin O, Vos SB, Bisdas S, Robe PA, Ardon H, Bello L, Rossi M, Sciortino T, Nibali MC, Berger MS, Hervey-Jumper SL, Bouwknegt W, Van den Brink WA, Furtner J, Han SJ, Idema AJS, Kiesel B, Widhalm G, Kloet A, Wagemakers M, Zwinderman AH, Krieg SM, Mandonnet E, Prados F, de Witt Hamer P, Barkhof F, Eijgelaar RS. Multi-class glioma segmentation on real-world data with missing MRI sequences: comparison of three deep learning algorithms. Sci Rep 2023; 13:18911. [PMID: 37919354 PMCID: PMC10622563 DOI: 10.1038/s41598-023-44794-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/03/2023] [Accepted: 10/12/2023] [Indexed: 11/04/2023] Open
Abstract
This study tests the generalisability of three Brain Tumor Segmentation (BraTS) challenge models using a multi-center dataset of varying image quality and incomplete MRI datasets. In this retrospective study, DeepMedic, no-new-Unet (nn-Unet), and NVIDIA-net (nv-Net) were trained and tested using manual segmentations from preoperative MRI of glioblastoma (GBM) and low-grade gliomas (LGG) from the BraTS 2021 dataset (1251 in total), in addition to 275 GBM and 205 LGG acquired clinically across 12 hospitals worldwide. Data were split into 80% training, 5% validation, and 15% internal test data. An additional external test set of 158 GBM and 69 LGG was used to assess generalisability to other hospitals' data. All models' median Dice similarity coefficients (DSC) for both test sets were within, or higher than, previously reported human inter-rater agreement (range 0.74-0.85). For both test sets, nn-Unet achieved the highest DSC (internal = 0.86, external = 0.93) and the lowest Hausdorff distances (10.07 mm and 13.87 mm, respectively) for all tumor classes (p < 0.001). With sparsified training, missing MRI sequences did not significantly affect performance. nn-Unet achieves accurate segmentations in clinical settings even in the presence of incomplete MRI datasets. This facilitates future clinical adoption of automated glioma segmentation, which could help inform treatment planning and glioma monitoring.
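The sparsified training mentioned above makes a network tolerant of missing sequences by zeroing whole input channels during training. A hypothetical minimal version of that idea (the function name and exact dropout scheme are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sparsify_channels(x: np.ndarray, rng, p_drop: float = 0.5):
    """Randomly zero whole input channels (MRI sequences) during training,
    so the network learns to segment with modalities missing at test time."""
    n_channels = x.shape[0]
    drop = rng.random(n_channels) < p_drop
    if drop.all():                       # always keep at least one sequence
        drop[rng.integers(n_channels)] = False
    out = x.copy()
    out[drop] = 0.0
    return out, ~drop                    # sparsified volume, mask of kept channels

rng = np.random.default_rng(42)
vol = np.ones((4, 2, 2))                 # 4 modalities: T1, T1-Ce, T2, FLAIR (toy volume)
sparse, kept = sparsify_channels(vol, rng)
print(int(kept.sum()), "of 4 sequences kept")
```

At inference, a scan with a missing sequence is then fed in with that channel zeroed, matching the patterns the network saw in training.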
Affiliation(s)
- Hugh G Pemberton
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Jiaming Wu
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
- Ivar Kommers
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Domenique M J Müller
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Yipeng Hu
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
- Olivia Goodkin
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Sjoerd B Vos
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Sotirios Bisdas
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Pierre A Robe
  - Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Hilko Ardon
  - Department of Neurosurgery, St. Elisabeth Hospital, Tilburg, The Netherlands
- Lorenzo Bello
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Marco Rossi
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Tommaso Sciortino
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Marco Conti Nibali
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Mitchel S Berger
  - Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Shawn L Hervey-Jumper
  - Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Wim Bouwknegt
  - Department of Neurosurgery, Medical Center Slotervaart, Amsterdam, The Netherlands
- Julia Furtner
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Vienna, Austria
- Seunggu J Han
  - Department of Neurological Surgery, Stanford University, Stanford, USA
- Albert J S Idema
  - Department of Neurosurgery, Northwest Clinics, Alkmaar, The Netherlands
- Barbara Kiesel
  - Department of Neurosurgery, Medical University Vienna, Vienna, Austria
- Georg Widhalm
  - Department of Neurosurgery, Medical University Vienna, Vienna, Austria
- Alfred Kloet
  - Department of Neurosurgery, Medical Center Haaglanden, The Hague, The Netherlands
- Michiel Wagemakers
  - Department of Neurosurgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Aeilko H Zwinderman
  - Department of Clinical Epidemiology and Biostatistics, Academic Medical Center, Amsterdam, The Netherlands
- Sandro M Krieg
  - TUM-Neuroimaging Center, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
  - Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Ferran Prados
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Department of Neuroinflammation, Faculty of Brain Sciences, Queen Square MS Centre, UCL Institute of Neurology, University College London, London, UK
  - e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain
- Philip de Witt Hamer
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Frederik Barkhof
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
  - Radiology & Nuclear Medicine, VU University Medical Center, Amsterdam, the Netherlands
- Roelant S Eijgelaar
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
10
Boyd A, Ye Z, Prabhu S, Tjong MC, Zha Y, Zapaishchykova A, Vajapeyam S, Hayat H, Chopra R, Liu KX, Nabavidazeh A, Resnick A, Mueller S, Haas-Kogan D, Aerts HJ, Poussaint T, Kann BH. Expert-level pediatric brain tumor segmentation in a limited data scenario with stepwise transfer learning. medRxiv 2023:2023.06.29.23292048. [PMID: 37425854 PMCID: PMC10327271 DOI: 10.1101/2023.06.29.23292048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 07/11/2023]
Abstract
Purpose Artificial intelligence (AI)-automated tumor delineation for pediatric gliomas would enable real-time volumetric evaluation to support diagnosis, treatment response assessment, and clinical decision-making. Auto-segmentation algorithms for pediatric tumors are rare due to limited data availability, and existing algorithms have yet to demonstrate clinical translation. Methods We leveraged two datasets from a national brain tumor consortium (n=184) and a pediatric cancer center (n=100) to develop, externally validate, and clinically benchmark deep learning neural networks for pediatric low-grade glioma (pLGG) segmentation using a novel in-domain, stepwise transfer learning approach. The best model [via Dice similarity coefficient (DSC)] was externally validated and subjected to randomized, blinded evaluation by three expert clinicians, who assessed the clinical acceptability of expert- and AI-generated segmentations via 10-point Likert scales and Turing tests. Results The best AI model used in-domain, stepwise transfer learning and outperformed the baseline model (median DSC: 0.877 [IQR 0.715-0.914] vs. 0.812 [IQR 0.559-0.888]; p<0.05). On external testing (n=60), the AI model yielded accuracy comparable to inter-expert agreement (median DSC: 0.834 [IQR 0.726-0.901] vs. 0.861 [IQR 0.795-0.905], p=0.13). On clinical benchmarking (n=100 scans, 300 segmentations from 3 experts), the experts rated the AI model higher on average than the other experts (median Likert rating: 9 [IQR 7-9] vs. 7 [IQR 7-9], p<0.05 for each). Additionally, the AI segmentations had significantly higher (p<0.05) overall acceptability than the experts' segmentations on average (80.2% vs. 65.4%). Experts correctly identified segmentations as AI-generated in an average of 26.0% of cases. Conclusions Stepwise transfer learning enabled expert-level, automated pediatric brain tumor auto-segmentation and volumetric measurement with a high level of clinical acceptability.
This approach may enable the development and translation of AI imaging segmentation algorithms in limited-data scenarios.
Affiliation(s)
- Aidan Boyd
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Zezhong Ye
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Sanjay Prabhu
  - Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Michael C. Tjong
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Yining Zha
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Anna Zapaishchykova
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Sridhar Vajapeyam
  - Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Hasaan Hayat
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Rishi Chopra
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Kevin X. Liu
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Ali Nabavidazeh
  - Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Adam Resnick
  - Center for Data-Driven Discovery in Biomedicine (D3b), Children’s Hospital of Philadelphia, Philadelphia, PA
  - Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA
- Sabine Mueller
  - Department of Neurology, University of California San Francisco, San Francisco, California
  - Department of Pediatrics, University of California San Francisco, San Francisco, California
  - Department of Neurological Surgery, University of California San Francisco, San Francisco, California
- Daphne Haas-Kogan
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Hugo J.W.L. Aerts
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
  - Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
  - Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Tina Poussaint
  - Department of Radiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA
- Benjamin H. Kann
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA
  - Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
11
Chen Y, Yang Z, Zhao J, Adamson J, Sheng Y, Yin FF, Wang C. A radiomics-incorporated deep ensemble learning model for multi-parametric MRI-based glioma segmentation. Phys Med Biol 2023; 68:185025. [PMID: 37586382 DOI: 10.1088/1361-6560/acf10d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/05/2023] [Accepted: 08/16/2023] [Indexed: 08/18/2023]
Abstract
Objective. To develop a deep ensemble learning (DEL) model with radiomics spatial encoding execution for improved glioma segmentation accuracy using multi-parametric magnetic resonance imaging (mp-MRI). Approach. This model was developed using 369 glioma patients with a four-modality mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. In each modality volume, a 3D sliding kernel was implemented across the brain to capture image heterogeneity: 56 radiomic features were extracted within the kernel, resulting in a fourth-order tensor. Each radiomic feature can then be encoded as a 3D image volume, namely a radiomic feature map (RFM). For each patient, all RFMs extracted from all four modalities were processed using principal component analysis for dimension reduction, and the first four principal components (PCs) were selected. Next, a DEL model comprising four U-Net sub-models was trained for the segmentation of a region of interest: each sub-model utilizes the mp-MRI and one of the four PCs as a five-channel input for 2D execution. Last, the four softmax probability results given by the DEL model were superimposed and binarized using Otsu's method to produce the final segmentation. Three DEL models were trained to segment the enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The segmentation results given by the proposed ensemble were compared to the mp-MRI-only U-Net results. Main Results. All three radiomics-incorporated DEL models were successfully implemented: compared to the mp-MRI-only U-Net results, the Dice coefficients of ET (0.777 → 0.817), TC (0.742 → 0.757), and WT (0.823 → 0.854) demonstrated improvement. The accuracy, sensitivity, and specificity results demonstrated similar patterns. Significance. The adopted radiomics spatial encoding execution enriches the image heterogeneity information, leading to the successful demonstration of the proposed DEL model, which offers a new tool for mp-MRI-based medical image segmentation.
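The dimension-reduction step described above (many radiomic feature maps compressed to a few principal-component maps) can be sketched with a plain SVD-based PCA over voxels. This is a toy illustration of the idea, not the authors' pipeline; the array sizes are assumptions:

```python
import numpy as np

def first_pcs(feature_maps: np.ndarray, n_components: int = 4) -> np.ndarray:
    """Reduce a stack of radiomic feature maps (F, X, Y, Z) to the first
    n_components principal-component maps, treating voxels as observations."""
    f = feature_maps.reshape(feature_maps.shape[0], -1)   # (F, n_voxels)
    f = f - f.mean(axis=1, keepdims=True)                 # center each feature
    u, s, vt = np.linalg.svd(f, full_matrices=False)      # u: feature-space directions
    pcs = u[:, :n_components].T @ f                       # project onto top directions
    return pcs.reshape((n_components,) + feature_maps.shape[1:])

rng = np.random.default_rng(0)
maps = rng.normal(size=(56, 6, 6, 6))    # 56 features over a toy 6x6x6 volume
pc_maps = first_pcs(maps, n_components=4)
print(pc_maps.shape)  # (4, 6, 6, 6)
```

Each output volume is itself a 3D map and can therefore be stacked with the four MRI modalities as an extra input channel, which is the role the PCs play in the ensemble described above.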
Affiliation(s)
- Yang Chen
  - Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, People's Republic of China
- Zhenyu Yang
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
- Jingtong Zhao
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
- Justus Adamson
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
- Yang Sheng
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
- Fang-Fang Yin
  - Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, People's Republic of China
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
- Chunhao Wang
  - Department of Radiation Oncology, Duke University, Durham, NC, 27710, United States of America
12
Poel R, Kamath AJ, Willmann J, Andratschke N, Ermiş E, Aebersold DM, Manser P, Reyes M. Deep-Learning-Based Dose Predictor for Glioblastoma-Assessing the Sensitivity and Robustness for Dose Awareness in Contouring. Cancers (Basel) 2023; 15:4226. [PMID: 37686501 PMCID: PMC10486555 DOI: 10.3390/cancers15174226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/03/2023] [Revised: 08/16/2023] [Accepted: 08/21/2023] [Indexed: 09/10/2023] Open
Abstract
External beam radiation therapy requires a sophisticated and laborious planning procedure. To improve the efficiency and quality of this procedure, machine-learning models that predict dose distributions have been introduced. The most recent dose prediction models are based on deep-learning architectures called 3D U-Nets, which give good approximations of the dose in 3D almost instantly. Our purpose was to train such a 3D dose prediction model for glioblastoma VMAT treatment and test its robustness and sensitivity for the purpose of quality assurance of automatic contouring. From a cohort of 125 glioblastoma (GBM) patients, VMAT plans were created according to a clinical protocol. The initial model was trained on a cascaded 3D U-Net. A total of 60 cases were used for training, 15 for validation and 20 for testing. The prediction model was tested for sensitivity to dose changes when subjected to realistic contour variations. Additionally, the model was tested for robustness by exposing it to a worst-case test set containing out-of-distribution cases. The initially trained prediction model had a dose score of 0.94 Gy and a mean DVH (dose-volume histogram) score over all structures of 1.95 Gy. In terms of sensitivity, the model was able to predict the dose changes that occurred due to the contour variations with a mean error of 1.38 Gy. We obtained a 3D VMAT dose prediction model for GBM with limited data, providing good sensitivity to realistic contour variations. We tested and improved the model's robustness by targeted updates to the training set, making it a useful technique for introducing dose awareness in the contouring evaluation and quality assurance process.
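The DVH scores above compare predicted and planned dose-volume histograms; a cumulative DVH itself reduces to a threshold count over the voxels of a structure. A NumPy toy example (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bins) -> np.ndarray:
    """Cumulative DVH: fraction of the structure receiving at least each dose level."""
    d = dose[mask.astype(bool)]
    return np.array([(d >= b).mean() for b in bins])

dose = np.array([[10.0, 20.0], [30.0, 40.0]])   # toy dose grid in Gy
ptv = np.ones((2, 2), dtype=bool)               # structure covering all 4 voxels
dvh = cumulative_dvh(dose, ptv, bins=[0, 15, 25, 35, 45])
print(dvh.tolist())  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

A scalar DVH "score" can then be formed by averaging the absolute differences between two such curves over structures and dose levels.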
Affiliation(s)
- Robert Poel
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
- Amith J. Kamath
  - ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
- Jonas Willmann
  - Department of Radiation Oncology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland
- Nicolaus Andratschke
  - Department of Radiation Oncology, University Hospital Zurich, University of Zurich, CH-8091 Zurich, Switzerland
- Ekin Ermiş
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Daniel M. Aebersold
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Peter Manser
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
  - Division of Medical Radiation Physics, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
- Mauricio Reyes
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, CH-3010 Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, CH-3010 Bern, Switzerland
13
Smolders A, Choulilitsa E, Czerska K, Bizzocchi N, Krcek R, Lomax A, Weber DC, Albertini F. Dosimetric comparison of autocontouring techniques for online adaptive proton therapy. Phys Med Biol 2023; 68:175006. [PMID: 37385266 DOI: 10.1088/1361-6560/ace307] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 02/21/2023] [Accepted: 06/29/2023] [Indexed: 07/01/2023]
Abstract
Objective. Anatomical and daily set-up uncertainties impede high-precision delivery of proton therapy. With online adaptation, the daily plan is reoptimized on an image taken shortly before the treatment, reducing these uncertainties and, hence, allowing a more accurate delivery. This reoptimization requires target and organs-at-risk (OAR) contours on the daily image, which need to be delineated automatically since manual contouring is too slow. Whereas multiple methods for autocontouring exist, none of them are fully accurate, which affects the daily dose. This work aims to quantify the magnitude of this dosimetric effect for four contouring techniques. Approach. Plans reoptimized on automatic contours are compared with plans reoptimized on manual contours. The methods include rigid and deformable registration (DIR), deep-learning based segmentation and patient-specific segmentation. Main results. It was found that independently of the contouring method, the dosimetric influence of using automatic OAR contours is small (<5% of the prescribed dose in most cases), with DIR yielding the best results. Contrarily, the dosimetric effect of using the automatic target contour was larger (>5% of the prescribed dose in most cases), indicating that manual verification of that contour remains necessary. However, when compared to non-adaptive therapy, the dose differences caused by automatically contouring the target were small and target coverage was improved, especially for DIR. Significance. The results show that manual adjustment of OARs is rarely necessary and that several autocontouring techniques are directly usable. Contrarily, manual adjustment of the target is important. This allows prioritizing tasks during time-critical online adaptive proton therapy and therefore supports its further clinical implementation.
Affiliation(s)
- A Smolders
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- E Choulilitsa
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- K Czerska
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- N Bizzocchi
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- R Krcek
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- A Lomax
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- D C Weber
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Radiation Oncology, University Hospital Zurich, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- F Albertini
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland

14
Yang Z, Hu Z, Ji H, Lafata K, Vaios E, Floyd S, Yin FF, Wang C. A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation. Med Phys 2023; 50:4825-4838. [PMID: 36840621 PMCID: PMC10440249 DOI: 10.1002/mp.16286] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 01/26/2023] [Accepted: 01/30/2023] [Indexed: 02/26/2023] Open
Abstract
PURPOSE To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns. 
CONCLUSION The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
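A Neural ODE replaces discrete layers with a continuous state governed by dy/dt = f(t, y), which is what lets the intermediate dynamics be visualized. The following minimal sketch integrates such a system with forward Euler, using a stand-in linear vector field in place of a trained network; all names and the toy dynamics are illustrative assumptions, not the paper's model:

```python
def neural_ode_trajectory(y0, f, t0=0.0, t1=1.0, steps=100):
    """Integrate dy/dt = f(t, y) with forward Euler and record the whole
    path. In a Neural ODE, f would be a trained neural network; here it is
    any callable, so the state can be inspected at every step."""
    h = (t1 - t0) / steps
    t, y = t0, list(y0)
    path = [list(y)]
    for _ in range(steps):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
        path.append(list(y))
    return path

# Toy "network": linear decay toward zero, dy/dt = -y.
path = neural_ode_trajectory([1.0, -2.0], lambda t, y: [-yi for yi in y])
```

For this linear field the endpoint approaches y(0)·exp(-1); plotting `path` is the 2-dimensional analogue of visualizing how the images evolve under the learned dynamics.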
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, NC, 27710
- Zongsheng Hu
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China, 215316
- Hangjie Ji
- Department of Mathematics, North Carolina State University, Raleigh, NC, 27695
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, NC, 27710
- Department of Radiology, Duke University, Durham, NC, 27710
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27710
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, NC, 27710
- Scott Floyd
- Department of Radiation Oncology, Duke University, Durham, NC, 27710
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, NC, 27710
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China, 215316
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, NC, 27710

15
Iporre-Rivas A, Saur D, Rohr K, Scheuermann G, Gillmann C. Stroke-GFCN: ischemic stroke lesion prediction with a fully convolutional graph network. J Med Imaging (Bellingham) 2023; 10:044502. [PMID: 37465592 PMCID: PMC10350625 DOI: 10.1117/1.jmi.10.4.044502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 06/13/2023] [Accepted: 06/20/2023] [Indexed: 07/20/2023] Open
Abstract
Purpose The interpretation of image data plays a critical role during acute brain stroke diagnosis, and promptly defining the requirement of a surgical intervention will drastically impact the patient's outcome. However, determining stroke lesions purely from images can be a daunting task. Many studies have proposed automatic segmentation methods for brain stroke lesions from medical images in different modalities, though results thus far do not meet the requirements for clinical reliability. We investigate the segmentation of brain stroke lesions using a geometric deep learning model that takes advantage of the intrinsic interconnected diffusion features in a set of multi-modal inputs consisting of computed tomography (CT) perfusion parameters. Approach We propose a geometric deep learning model for the segmentation of ischemic stroke brain lesions that employs spline convolutions and unpooling/pooling operators on graphs to extract graph-structured features in a fully convolutional network architecture. In addition, we seek to understand the underlying principles governing the different components of our model. Accordingly, we structure the experiments in two parts: an evaluation of different architecture hyperparameters and a comparison with state-of-the-art methods. Results The ablation study shows that deeper layers obtain a higher Dice coefficient score (DCS) of up to 0.3654. Comparing different pooling and unpooling methods shows that the best performing unpooling method is the proportional approach, yet it often smooths the segmentation border. Unpooling achieves segmentation results more adapted to the lesion boundary, corroborated by systematically lower values of the Hausdorff distance. The model performs at the level of state-of-the-art models without optimized training methods, such as augmentation or patches, with a DCS of 0.4553 ± 0.0031.
Conclusions We proposed and evaluated an end-to-end trainable fully convolutional graph network architecture that uses spline convolutional layers and graph-based operations to predict acute ischemic stroke lesions from CT perfusion parameters. Our results demonstrate the feasibility of using geometric deep learning for segmentation problems, and our model performs better than the other models evaluated, with improvements in the DCS metric ranging from 8.61% to 69.05% compared with models trained under the same conditions. We also compare different pooling and unpooling operations with respect to their segmentation results, and show that the model can produce segmentation outputs that adapt to irregular segmentation boundaries when using simple heuristic unpooling operations.
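The Hausdorff distance used above to assess how well segmentations adapt to the lesion boundary can be computed directly for small boundary point sets. This is a generic sketch on 2D points, not the paper's implementation:

```python
import math

def directed_distances(a_pts, b_pts):
    """For each point in a_pts, the distance to its nearest point in b_pts."""
    return [min(math.dist(p, q) for q in b_pts) for p in a_pts]

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two boundary point sets:
    the largest nearest-neighbor distance in either direction."""
    return max(max(directed_distances(a_pts, b_pts)),
               max(directed_distances(b_pts, a_pts)))
```

Because it takes the maximum over all points, a single outlying contour point dominates the metric, which is why papers often also report a percentile variant (e.g. the 95th-percentile Hausdorff distance).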
Affiliation(s)
- Ariel Iporre-Rivas
- Leipzig University, Institute for Computer Science, Faculty of Mathematics and Computer Science, Signal and Image Processing Group, Leipzig, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- ScaDS.AI, Leipzig, Germany
- Dorothee Saur
- Leipzig University, Department of Neurology, Leipzig, Germany
- Karl Rohr
- Heidelberg University, BioQuant Center, IPMB and DKFZ, Biomedical Computer Vision Group, Heidelberg, Germany
- Gerik Scheuermann
- Leipzig University, Institute for Computer Science, Faculty of Mathematics and Computer Science, Signal and Image Processing Group, Leipzig, Germany
- Christina Gillmann
- Leipzig University, Institute for Computer Science, Faculty of Mathematics and Computer Science, Signal and Image Processing Group, Leipzig, Germany
- ScaDS.AI, Leipzig, Germany

16
Walther E, Griffin L, Randall E, Sandmeyer L, Osinchuk S, Sukut S, Hansen K, Keyerleber M, Lawrence J, Parker S, Mayer M. Contouring in the optic plane improves the accuracy of computed tomography-based segmentation of the optic pathway. Vet Radiol Ultrasound 2023. [PMID: 37335283 DOI: 10.1111/vru.13261] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 05/08/2023] [Accepted: 05/08/2023] [Indexed: 06/21/2023] Open
Abstract
Canine optic pathway structures are often contoured on CT images, despite the difficulty of visualizing the optic pathway with CT using standard planes. The purpose of this prospective, analytical, diagnostic accuracy study was to examine the accuracy of optic pathway contouring by veterinary radiation oncologists (ROs) before and after training on optic plane contouring. Optic pathway contours used as the gold standard for comparison were created based on expert consensus from registered CT and MRI for eight dogs. Twenty-one ROs contoured the optic pathway on CT using their preferred method, and again following atlas and video training demonstrating contouring on the optic plane. The Dice similarity coefficient (DSC) was used to assess contour accuracy. A multilevel mixed model with random effects to account for repeated measures was used to examine DSC differences. The median DSC (5th and 95th percentile) before and after training was 0.31 (0.06, 0.48) and 0.41 (0.18, 0.53), respectively. The mean DSC was significantly higher after training compared with before training (mean difference = 0.10; 95% CI, 0.08-0.12; P < 0.001) across all observers and patients. DSC values were comparable to those reported (0.4-0.5) for segmentation of the optic chiasm and nerves in human patients. Contour accuracy improved after training but remained low, potentially due to the small optic pathway volumes. When registered CT-MRI images are not available, our study supports routine addition of an optic plane with specific window settings to improve segmentation accuracy in mesaticephalic dogs ≥11 kg.
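The Dice similarity coefficient (DSC) used as the accuracy metric here, and throughout the studies in this list, is twice the overlap divided by the summed size of the two segmentations. A minimal sketch on binary voxel masks (returning 1.0 when both masks are empty is a convention chosen for this sketch):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are same-length sequences of 0/1 voxel labels;
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

For example, two masks that each label two voxels but agree on only one score 0.5, which helps calibrate the 0.31-0.41 median values reported above for the small optic pathway structures.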
Affiliation(s)
- Eric Walther
- Department of Small Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Lynn Griffin
- Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins, Colorado, USA
- Elissa Randall
- Department of Environmental and Radiological Health Sciences, Colorado State University, Fort Collins, Colorado, USA
- Lynne Sandmeyer
- Department of Small Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Stephanie Osinchuk
- Department of Small Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Sally Sukut
- Department of Small Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Katherine Hansen
- Surgical and Radiological Sciences, Davis Veterinary Medicine, University of California, Davis, California, USA
- Michele Keyerleber
- Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA
- Jessica Lawrence
- Department of Veterinary Clinical Sciences, College of Veterinary Medicine, University of Minnesota, St. Paul, Minnesota, USA
- Sarah Parker
- Department of Large Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Monique Mayer
- Department of Small Animal Clinical Sciences, Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada

17
Zhang Y, Chen C, Huang W, Teng Y, Shu X, Zhao F, Xu J, Zhang L. Preoperative volume of the optic chiasm is an easily obtained predictor for visual recovery of pituitary adenoma patients following endoscopic endonasal transsphenoidal surgery: a cohort study. Int J Surg 2023; 109:896-904. [PMID: 36999782 PMCID: PMC10389445 DOI: 10.1097/js9.0000000000000357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 03/13/2023] [Indexed: 04/01/2023]
Abstract
BACKGROUND Predicting the postoperative visual outcome of pituitary adenoma patients is important but remains challenging. This study aimed to identify a novel prognostic predictor which can be automatically obtained from routine MRI using a deep learning approach. MATERIALS AND METHODS A total of 220 pituitary adenoma patients were prospectively enrolled and stratified into the recovery and nonrecovery groups according to the visual outcome at 6 months after endoscopic endonasal transsphenoidal surgery. The optic chiasm was manually segmented on preoperative coronal T2WI, and its morphometric parameters were measured, including suprasellar extension distance, chiasmal thickness, and chiasmal volume. Univariate and multivariate analyses were conducted on clinical and morphometric parameters to identify predictors for visual recovery. Additionally, a deep learning model for automated segmentation and volumetric measurement of the optic chiasm was developed with the nnU-Net architecture and evaluated on a multicenter data set covering 1026 pituitary adenoma patients from four institutions. RESULTS Larger preoperative chiasmal volume was significantly associated with better visual outcomes (P = 0.001). Multivariate logistic regression suggested it could serve as an independent predictor for visual recovery (odds ratio = 2.838, P < 0.001). The auto-segmentation model showed good performance and generalizability in the internal (Dice = 0.813) and three independent external test sets (Dice = 0.786, 0.818, and 0.808, respectively). Moreover, the model achieved accurate volumetric evaluation of the optic chiasm with an intraclass correlation coefficient of more than 0.83 in both internal and external test sets. CONCLUSION The preoperative volume of the optic chiasm could be utilized as a prognostic predictor for visual recovery of pituitary adenoma patients after surgery.
Moreover, the proposed deep learning-based model allowed for automated segmentation and volumetric measurement of the optic chiasm on routine MRI.
Affiliation(s)
- Yang Zhang
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Chaoyue Chen
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Wei Huang
- College of Computer Science, Sichuan University
- Yuen Teng
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Xin Shu
- College of Computer Science, Sichuan University
- Fumin Zhao
- Department of Radiology, West China Second University Hospital, Sichuan University
- Jianguo Xu
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Lei Zhang
- College of Computer Science, Sichuan University

18
Lin CY, Chou LS, Wu YH, Kuo JS, Mehta MP, Shiau AC, Liang JA, Hsu SM, Wang TH. Developing an AI-assisted planning pipeline for hippocampal avoidance whole brain radiotherapy. Radiother Oncol 2023; 181:109528. [PMID: 36773828 DOI: 10.1016/j.radonc.2023.109528] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 01/04/2023] [Accepted: 02/03/2023] [Indexed: 02/12/2023]
Abstract
BACKGROUND AND PURPOSE Hippocampal avoidance whole brain radiotherapy (HA-WBRT) is effective for controlling disease and preserving neuro-cognitive function for brain metastases. However, contouring and planning of HA-WBRT is complex and time-consuming. We designed and evaluated a pipeline using deep learning tools for a fully automated treatment planning workflow to generate HA-WBRT radiotherapy plans. MATERIALS AND METHODS We retrospectively collected 50 adult patients who received HA-WBRT. Following RTOG-0933 clinical trial protocol guidelines, all organs-at-risk (OARs) and the clinical target volume (CTV) were contoured by experienced radiation oncologists. A deep-learning segmentation model was designed and trained. Next, we developed a volumetric-modulated arc therapy (VMAT) auto-planning algorithm for 30 Gy in 10 fractions. Automated segmentations were evaluated using the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95% HD). Auto-plans were evaluated by the percentage of the planning target volume (PTV) receiving 30 Gy (V30Gy), the conformity index (CI) and homogeneity index (HI) of the PTV, the minimum dose (D100%) and maximum dose (Dmax) for the hippocampus, and Dmax for the lens, eyes, optic nerve, brain stem, and chiasm. RESULTS We developed a deep-learning segmentation model and an auto-planning script. For the 10 cases in the independent test set, the overall average DSC of the contours was greater than 0.8 and the 95% HD was less than 7 mm. All auto-plans met the RTOG-0933 criteria. Automated HA-WBRT plan creation took about 10 minutes. CONCLUSIONS An artificial intelligence (AI)-assisted pipeline using deep learning tools can rapidly and accurately generate clinically acceptable HA-WBRT plans with minimal manual intervention and increase the efficiency of this treatment for brain metastases.
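The plan-quality indices mentioned here (CI, HI) have several conventions in the literature. The sketch below uses the RTOG-style conformity index and the ICRU Report 83 homogeneity index; the paper may use different definitions, and the numbers are illustrative:

```python
def conformity_index(prescription_isodose_volume, ptv_volume):
    """RTOG-style conformity index: the volume enclosed by the
    prescription isodose divided by the PTV volume (ideal = 1.0;
    other conformity definitions exist)."""
    return prescription_isodose_volume / ptv_volume

def homogeneity_index(d2, d98, d50):
    """ICRU-83 homogeneity index: (D2% - D98%) / D50%, where Dx% is
    the dose received by x% of the PTV; 0 means perfectly homogeneous."""
    return (d2 - d98) / d50

ci = conformity_index(105.0, 100.0)          # 5% spill outside a well-covered PTV
hi = homogeneity_index(31.5, 28.5, 30.0)     # 30 Gy prescription, ±1.5 Gy spread
```

A CI slightly above 1 with a small HI is the usual target for whole-brain plans of this type.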
Affiliation(s)
- Chih-Yuan Lin
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lin-Shan Chou
- Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan
- Yuan-Hung Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- John S Kuo
- Neuroscience and Brain Disease Center, China Medical University, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan; Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Minesh P Mehta
- Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, Florida, USA; Florida International University, Miami, Florida, USA
- An-Cheng Shiau
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan; Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan
- Ji-An Liang
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; Department of Medicine, China Medical University, Taichung, Taiwan
- Shih-Ming Hsu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ti-Hao Wang
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan

19
Fully automated clinical target volume segmentation for glioblastoma radiotherapy using a deep convolutional neural network. Pol J Radiol 2023; 88:e31-e40. [PMID: 36819221 PMCID: PMC9907163 DOI: 10.5114/pjr.2023.124434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 09/12/2022] [Indexed: 02/09/2023] Open
Abstract
Purpose Target volume delineation is a crucial step prior to radiotherapy planning for glioblastoma. This step is performed manually, which is time-consuming and prone to intra- and inter-rater variability. Therefore, the purpose of this study is to evaluate a deep convolutional neural network (CNN) model for automatic segmentation of the clinical target volume (CTV) in glioblastoma patients. Material and methods In this study, the modified Segmentation-Net (SegNet) model with deep supervision and a residual-based skip connection mechanism was trained on 259 glioblastoma patients from the Multimodal Brain Tumour Image Segmentation Benchmark (BraTS) 2019 Challenge dataset for segmentation of the gross tumour volume (GTV). Then, the pre-trained CNN model was fine-tuned with an independent clinical dataset (n = 37) to perform the CTV segmentation. In the process of fine-tuning, to generate the CTV segmentation mask, both CT and MRI scans were simultaneously used as input data. The performance of the CNN model in terms of segmentation accuracy was evaluated on an independent clinical test dataset (n = 15) using the Dice Similarity Coefficient (DSC) and Hausdorff distance. The impact of auto-segmented CTV definition on dosimetry was also analysed. Results The proposed model achieved segmentation results with a DSC of 89.60 ± 3.56% and a Hausdorff distance of 1.49 ± 0.65 mm. A statistically significant difference was found for the Dmin and Dmax of the CTV between manually and automatically planned doses. Conclusions The results of our study suggest that our CNN-based auto-contouring system can be used for segmentation of CTVs to facilitate the brain tumour radiotherapy workflow.
20
Kazerooni AF, Arif S, Madhogarhia R, Khalili N, Haldar D, Bagheri S, Familiar AM, Anderson H, Haldar S, Tu W, Kim MC, Viswanathan K, Muller S, Prados M, Kline C, Vidal L, Aboian M, Storm PB, Resnick AC, Ware JB, Vossough A, Davatzikos C, Nabavizadeh A. Automated Tumor Segmentation and Brain Tissue Extraction from Multiparametric MRI of Pediatric Brain Tumors: A Multi-Institutional Study. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.01.02.22284037. [PMID: 36711966 PMCID: PMC9882407 DOI: 10.1101/2023.01.02.22284037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Background Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning, and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans. Methods Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n=215 internal and n=29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to identify the brain tissue and four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n=151), validation (n=43), and withheld internal test (n=21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts. Results Dice similarity score (median±SD) was 0.91±0.10/0.88±0.16 for the whole tumor, 0.73±0.27/0.84±0.29 for ET, 0.79±0.19/0.74±0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98±0.02 for brain tissue in both internal/external test sets. Conclusions Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurements. Key Points We proposed automated tumor segmentation and brain extraction on pediatric MRI. The volumetric measurements using our models agree with ground truth segmentations.
Importance of the Study Response assessment in pediatric brain tumors (PBTs) is currently based on bidirectional or 2D measurements, which underestimate the size of non-spherical and complex PBTs in children compared to volumetric or 3D methods. There is a need for automated methods that reduce manual burden and intra- and inter-rater variability when segmenting tumor subregions and assessing volumetric changes. Most currently available automated segmentation tools were developed on adult brain tumors and therefore do not generalize well to PBTs, which have different radiological appearances. To address this, we propose a deep learning (DL) auto-segmentation method that shows promising results in PBTs, collected from a publicly available large-scale imaging dataset (Children's Brain Tumor Network; CBTN) that comprises multi-parametric MRI scans of multiple PBT types acquired across multiple institutions on different scanners and protocols. Complementary to tumor segmentation, we propose an automated DL model for brain tissue extraction.
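The claim that bidirectional (2D) measurements can miss volumetric change is easy to make concrete with an ellipsoid toy example: a tumor that shrinks only along the slice-normal axis halves its volume while its 2D product on the largest slice is unchanged. All numbers and names below are illustrative:

```python
import math

def ellipsoid_volume(a, b, c):
    """True volume of an ellipsoid with semi-axes a, b, c (same units)."""
    return 4.0 / 3.0 * math.pi * a * b * c

def bidimensional_product(a, b):
    """2D-style measurement: product of the two largest perpendicular
    diameters on a single slice (here, the slice spanned by a and b)."""
    return (2 * a) * (2 * b)

# Shrinkage only along the slice-normal axis c: 3 cm -> 1.5 cm.
v_before = ellipsoid_volume(2, 2, 3)
v_after = ellipsoid_volume(2, 2, 1.5)
p_before = bidimensional_product(2, 2)
p_after = bidimensional_product(2, 2)   # the measured slice is unchanged
```

The volume halves while the bidimensional product stays at 16 cm², which is the kind of discrepancy the authors cite as motivation for volumetric (3D) assessment.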
21
Ginn JS, Gay HA, Hilliard J, Shah J, Mistry N, Möhler C, Hugo GD, Hao Y. A clinical and time savings evaluation of a deep learning automatic contouring algorithm. Med Dosim 2022; 48:55-60. [PMID: 36550000 DOI: 10.1016/j.meddos.2022.11.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 10/27/2022] [Accepted: 11/22/2022] [Indexed: 12/24/2022]
Abstract
Automatic contouring algorithms may streamline clinical workflows by reducing normal organ-at-risk (OAR) contouring time. Here we report the first comprehensive quantitative and qualitative evaluation, along with a time-savings assessment, for a prototype deep learning segmentation algorithm from Siemens Healthineers. The accuracy of contours generated by the prototype was evaluated quantitatively using the Sorensen-Dice coefficient (Dice), Jaccard index (JC), and Hausdorff distance (Haus). Normal pelvic and head and neck OAR contours were evaluated retrospectively, comparing the automatic and manual clinical contours in 100 patient cases. Contouring performance outliers were investigated. To quantify the time savings, a certified medical dosimetrist manually contoured de novo and, separately, edited the generated OARs for 10 head and neck and 10 pelvic patients. The automatic, edited, and manually generated contours were visually evaluated and scored by a practicing radiation oncologist on a scale of 1-4, where a higher score indicated better performance. The quantitative comparison revealed high (> 0.8) Dice and JC performance for relatively large organs such as the lungs, brain, femurs, and kidneys. Smaller elongated structures that had relatively low Dice and JC values tended to have low Hausdorff distances. Poor performing outlier cases revealed common anatomical inconsistencies, including overestimation of the bladder and incorrect superior-inferior truncation of the spinal cord and femur contours. In all cases, editing contours was faster than manual contouring, with an average time saving of 43.4% or 11.8 minutes per patient. The physician scored 240 structures, with > 95% of structures receiving a score of 3 or 4. Of the structures reviewed, only 11 needed major revision or to be redone entirely. Our results indicate the evaluated auto-contouring solution has the potential to reduce clinical contouring time.
The algorithm's performance is promising, but human review and some editing is required prior to clinical use.
Affiliation(s)
- John S Ginn
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Hiram A Gay
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Jessica Hilliard
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Geoffrey D Hugo
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Yao Hao
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA

22
Dalakleidi KV, Papadelli M, Kapolos I, Papadimitriou K. Applying Image-Based Food-Recognition Systems on Dietary Assessment: A Systematic Review. Adv Nutr 2022; 13:2590-2619. [PMID: 35803496 PMCID: PMC9776640 DOI: 10.1093/advances/nmac078] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 06/06/2022] [Accepted: 07/06/2022] [Indexed: 01/29/2023] Open
Abstract
Dietary assessment can be crucial for the overall well-being of humans and, at least in some instances, for the prevention and management of chronic, life-threatening diseases. Recall and manual record-keeping methods for food-intake monitoring are available, but often inaccurate when applied for a long period of time. On the other hand, automatic record-keeping approaches that adopt mobile cameras and computer vision methods seem to simplify the process and can improve current human-centric diet-monitoring methods. Here we present an extended critical literature overview of image-based food-recognition systems (IBFRS) combining a camera of the user's mobile device with computer vision methods and publicly available food datasets (PAFDs). In brief, such systems consist of several phases, such as the segmentation of the food items on the plate, the classification of the food items in a specific food category, and the estimation phase of volume, calories, or nutrients of each food item. A total of 159 studies were screened in this systematic review of IBFRS. A detailed overview of the methods adopted in each of the 78 included studies of this systematic review of IBFRS is provided along with their performance on PAFDs. Studies that included IBFRS without presenting their performance in at least 1 of the above-mentioned phases were excluded. Among the included studies, 45 (58%) studies adopted deep learning methods and especially convolutional neural networks (CNNs) in at least 1 phase of the IBFRS with input PAFDs. Among the implemented techniques, CNNs outperform all other approaches on the PAFDs with a large volume of data, since the richness of these datasets provides adequate training resources for such algorithms. We also present evidence for the benefits of application of IBFRS in professional dietetic practice. Furthermore, challenges related to the IBFRS presented here are also thoroughly discussed along with future directions.
Affiliation(s)
- Kalliopi V Dalakleidi
- Department of Food Science and Technology, University of the Peloponnese, Kalamata, Greece
- Marina Papadelli
- Department of Food Science and Technology, University of the Peloponnese, Kalamata, Greece
- Ioannis Kapolos
- Department of Food Science and Technology, University of the Peloponnese, Kalamata, Greece
- Konstantinos Papadimitriou
- Laboratory of Food Quality Control and Hygiene, Department of Food Science and Human Nutrition, Agricultural University of Athens, Athens, Greece
23
Li X, Wei Y, Hu Q, Wang C, Yang J. Learning to segment subcortical structures from noisy annotations with a novel uncertainty-reliability aware learning framework. Comput Biol Med 2022; 151:106326. [PMID: 36442274] [DOI: 10.1016/j.compbiomed.2022.106326]
Abstract
Accurate segmentation of subcortical structures is an important task in quantitative brain image analysis. Convolutional neural networks (CNNs) have achieved remarkable results in medical image segmentation. However, because acquiring high-quality annotations of brain subcortical structures is difficult, learning segmentation networks from noisy annotations is unavoidable. A common practice is to select only images or pixels with reliable annotations for training, which may not make full use of the information in the training samples and thus limits the performance of the learned segmentation model. To address this problem, we propose a novel robust learning method, denoted uncertainty-reliability awareness learning (URAL), which can make sufficient use of all training pixels. At each training iteration, the proposed method first selects training pixels with reliable annotations from the set of pixels with uncertain network predictions, by utilizing a small clean validation set following a meta-learning paradigm. Meanwhile, we propose the online prototypical soft label correction (PSLC) method to estimate pseudo-labels for label-unreliable pixels. The segmentation loss of label-reliable pixels and the semi-supervised segmentation loss of label-unreliable pixels are then used to calibrate the total segmentation loss. Finally, we propose a category-wise contrastive regularization to learn compact feature representations of all uncertain training pixels. Comprehensive experiments were performed on two publicly available brain MRI datasets. The proposed method achieves the best Dice scores and MHD values on both datasets compared to several recent state-of-the-art methods under all label noise settings. Our code is available at https://github.com/neulxlx/URAL.
Affiliation(s)
- Xiang Li
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China.
- Ying Wei
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China; Information Technology R&D Innovation Center of Peking University, Shaoxing, China; Changsha Hisense Intelligent System Research Institute Co., Ltd., China.
- Qian Hu
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China.
- Chuyuan Wang
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China.
- Jingjing Yang
- College of Information Science and Engineering, Northeastern University, Shenyang, 110819, China.
24
Babajide R, Lembrikova K, Ziemba J, Ding J, Li Y, Fermin AS, Fan Y, Tasian GE. Automated Machine Learning Segmentation and Measurement of Urinary Stones on CT Scan. Urology 2022; 169:41-46. [PMID: 35908740] [PMCID: PMC9936246] [DOI: 10.1016/j.urology.2022.07.029]
Abstract
OBJECTIVES To evaluate the performance of an engineered machine learning algorithm to identify kidney stones and measure stone characteristics without the need for human input. METHODS We performed a cross-sectional study of 94 children and adults who had kidney stones identified on non-contrast CT. A previously developed deep learning algorithm was trained to segment renal anatomy and kidney stones and to measure stone features. The performance and speed of the algorithm to measure renal anatomy and kidney stone features were compared to the current gold standard of human measurement performed by 3 independent reviewers. RESULTS The algorithm was 100% sensitive and 100% specific in detecting individual kidney stones. The mean stone volume segmented by the algorithm was smaller than that of human reviewers and had moderate overlap (Dice score: 0.66). There was substantial variation between human reviewers in total segmented stone volume (Jaccard score: 0.17) and volume of the single largest stone (Jaccard score: 0.33). Stone segmentations performed by the machine learning algorithm more precisely approximated stone borders than those performed by human reviewers on qualitative assessment. CONCLUSION An engineered machine learning algorithm can identify and characterize stones more accurately and reliably than humans, which has the potential to improve the precision and efficiency of assessing kidney stone burden.
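The overlap between algorithm and human stone segmentations above is reported as Dice and Jaccard scores. Both can be computed from binary masks in a few lines; this is a generic sketch, not the authors' code, and the toy masks are invented for illustration:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Two partially overlapping toy "stone" masks (16 voxels each, 4 shared)
m1 = np.zeros((10, 10), dtype=bool); m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), dtype=bool); m2[4:8, 4:8] = True
# Dice = 2*4/(16+16) = 0.25; Jaccard = 4/(16+16-4) ≈ 0.143
```

Note the fixed relation J = D / (2 - D), which is why a Dice of 0.66 corresponds to a lower Jaccard on the same pair of masks.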
Affiliation(s)
- Rilwan Babajide
- University of Chicago Pritzker School of Medicine, Chicago, IL
- Justin Ziemba
- University of Pennsylvania Perelman School of Medicine, Philadelphia, PA; Department of Surgery, Division of Urology, Hospital of the University of Pennsylvania, Philadelphia, PA
- James Ding
- University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
- Yuemeng Li
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA; The Center for Biomedical Image Computing and Analytics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
- Antoine Selman Fermin
- Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA
- Yong Fan
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA; The Center for Biomedical Image Computing and Analytics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
- Gregory E Tasian
- Department of Surgery, Division of Pediatric Urology, The Children's Hospital of Philadelphia, Philadelphia, PA; Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA.
25
Pati S, Baid U, Edwards B, Sheller MJ, Foley P, Reina GA, Thakur S, Sako C, Bilello M, Davatzikos C, Martin J, Shah P, Menze B, Bakas S. The federated tumor segmentation (FeTS) tool: an open-source solution to further solid tumor research. Phys Med Biol 2022; 67:10.1088/1361-6560/ac9449. [PMID: 36137534] [PMCID: PMC9592188] [DOI: 10.1088/1361-6560/ac9449]
Abstract
Objective. De-centralized data analysis is becoming an increasingly preferred option in the healthcare domain, as it alleviates the need for sharing primary patient data across collaborating institutions. This highlights the need for consistent harmonized data curation, pre-processing, and identification of regions of interest based on uniform criteria. Approach. Towards this end, this manuscript describes the Federated Tumor Segmentation (FeTS) tool, in terms of software architecture and functionality. Main results. The primary aim of the FeTS tool is to facilitate this harmonized processing and the generation of gold standard reference labels for tumor sub-compartments on brain magnetic resonance imaging, and further to enable federated training of a tumor sub-compartment delineation model across numerous sites distributed across the globe, without the need to share patient data. Significance. Building upon existing open-source tools such as the Insight Toolkit and Qt, the FeTS tool is designed to enable training deep learning models targeting tumor delineation in either centralized or federated settings. The target audience of the FeTS tool is primarily the computational researcher interested in developing federated learning models and in joining a global federation towards this effort. The tool is open sourced at https://github.com/FETS-AI/Front-End.
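Federated training of the kind FeTS enables typically aggregates locally trained model parameters without moving patient data; the canonical aggregation step is a case-count-weighted average (FedAvg). A minimal numpy sketch of that idea, assuming FedAvg-style aggregation; this is not the FeTS implementation, and the two-site example values are invented:

```python
import numpy as np

def fedavg(site_weights, site_sizes):
    """Federated averaging: weighted mean of per-site model parameters.

    site_weights: one list of np.ndarray parameters per site
    site_sizes:   number of local training cases at each site
    """
    total = float(sum(site_sizes))
    n_params = len(site_weights[0])
    merged = []
    for p in range(n_params):
        acc = np.zeros_like(site_weights[0][p], dtype=float)
        for w, n in zip(site_weights, site_sizes):
            acc += (n / total) * w[p]   # larger sites contribute more
        merged.append(acc)
    return merged

# Two sites holding one scalar "parameter" each, with 30 and 10 local cases:
merged = fedavg([[np.array(1.0)], [np.array(5.0)]], [30, 10])
# weighted mean = (30*1 + 10*5)/40 = 2.0
```

In a real federation this averaging runs once per round, between rounds of local training at each site; only the parameter arrays travel, never the images.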
Affiliation(s)
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
- Ujjwal Baid
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Siddhesh Thakur
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michel Bilello
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Bjoern Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
26
Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095] [PMCID: PMC9339697] [DOI: 10.1167/tvst.11.7.19]
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and to use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. In longitudinal follow-up, a >2.21% change in the visible retina area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change relative to the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance with ground truth in identifying vascular leakage but was able to aid in identifying changes in vascular FA leakage over time. Translational Relevance This is a proof-of-concept study showing that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians more objectively compare vascular leakage between FAs.
Affiliation(s)
- LeAnne H Young
- National Eye Institute, Bethesda, MD, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Jongwoo Kim
- National Library of Medicine, Bethesda, MD, USA
- Henry Lin
- National Eye Institute, Bethesda, MD, USA
- Sumit Sharma
- Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- H Nida Sen
- National Eye Institute, Bethesda, MD, USA
27
Li X, Wei Y, Wang C, Hu Q, Liu C. Contextual-wise discriminative feature extraction and robust network learning for subcortical structure segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-022-03848-y]
28
Wu L, Hu S, Liu C. MR brain segmentation based on DE-ResUnet combining texture features and background knowledge. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103541]
29
Wang J, Chen Z, Yang C, Qu B, Ma L, Fan W, Zhou Q, Zheng Q, Xu S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front Oncol 2022; 12:833816. [PMID: 35433460] [PMCID: PMC9008357] [DOI: 10.3389/fonc.2022.833816]
Abstract
Purpose The purpose of this study was to evaluate and explore the differences between atlas-based and deep learning (DL)-based auto-segmentation schemes for organs at risk (OARs) in nasopharyngeal carcinoma cases, to provide valuable help for clinical practice. Methods 120 nasopharyngeal carcinoma cases were entered into the MIM Maestro (atlas) database and used to train a DL-based model (AccuContour®), and another 20 nasopharyngeal carcinoma cases were randomly selected from outside the atlas database. Experienced physicians contoured 14 OARs from the 20 patients based on published consensus guidelines, and these were defined as the reference volumes (Vref). Meanwhile, these OARs were auto-contoured using an atlas-based model, a pre-built DL-based model, and an on-site trained DL-based model; the resulting volumes were named Vatlas, VDL-pre-built, and VDL-trained, respectively. The similarities between Vatlas, VDL-pre-built, VDL-trained, and Vref were assessed using the Dice similarity coefficient (DSC), Jaccard coefficient (JAC), maximum Hausdorff distance (HDmax), and deviation of centroid (DC) methods. A one-way ANOVA test was carried out to show the differences between each pair. Results The results of the three methods were almost similar for the brainstem and eyes. For the inner ears and temporomandibular joints, the pre-built DL-based model performed worst, as did atlas-based auto-segmentation for the lens. For segmentation of the optic nerves, the trained DL-based model showed the best performance (p < 0.05). For contouring of the oral cavity, the DSC value of VDL-pre-built was the smallest and that of VDL-trained the largest (p < 0.05). For the parotid glands, the DSC of Vatlas was the smallest (about 0.80), with VDL-pre-built and VDL-trained slightly larger (about 0.82).
Apart from the oral cavity, parotid glands, and brainstem, the maximum Hausdorff distances of the other organs were below 0.5 cm with the trained DL-based segmentation model, which also behaved well in the contouring of all organs, with a maximum average centroid deviation of no more than 0.3 cm. Conclusion Trained DL-based segmentation performs significantly better than atlas-based segmentation for nasopharyngeal carcinoma, especially for OARs with small volumes. Although some delineation results still need further modification, auto-segmentation methods improve work efficiency and provide a degree of help for clinical work.
Affiliation(s)
- Jinyuan Wang
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Baolin Qu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Lin Ma
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Wenjun Fan
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Qichao Zhou
- Manteia Technologies Co., Ltd., Xiamen, China
- Qingzeng Zheng
- Department of Radiation Oncology, Beijing Geriatric Hospital, Beijing, China
- Shouping Xu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
30
Kawahara D, Tsuneda M, Ozawa S, Okamoto H, Nakamura M, Nishio T, Saito A, Nagata Y. Stepwise deep neural network (stepwise-net) for head and neck auto-segmentation on CT images. Comput Biol Med 2022; 143:105295. [PMID: 35168082] [DOI: 10.1016/j.compbiomed.2022.105295]
Abstract
OBJECTIVE The current study aims to propose an auto-segmentation model for CT images of head and neck cancer using a stepwise deep neural network (stepwise-net). MATERIAL AND METHODS Six normal tissue structures in the head and neck region of 3D CT images (brainstem, optic nerve, and left and right parotid and submandibular glands) were segmented with deep learning. In addition to a conventional convolutional neural network (CNN) based on U-net, a stepwise network (stepwise-net) based on 3D FCN was developed. The stepwise-net consists of two networks: the first identifies the target region for segmentation on low-resolution images; the target region is then cropped and used as the input for the second network, which predicts the segmentation. These were compared with a clinically used atlas-based segmentation. RESULTS The DSCs of the stepwise-net were significantly higher than those of the atlas-based method for all organ-at-risk structures. Similarly, the JSCs of the stepwise-net were significantly higher, and the Hausdorff distance (HD) significantly smaller, than those of the atlas-based method for all organ-at-risk structures. Compared with the conventional U-net, the stepwise-net had a higher DSC and JSC and a smaller HD. CONCLUSIONS The stepwise-net is superior to conventional U-net-based and atlas-based segmentation, and the proposed model is a potentially valuable method for improving the efficiency of head and neck radiotherapy treatment planning.
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan.
- Masato Tsuneda
- Department of Radiation Oncology, MR Linac ART Division, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
- Shuichi Ozawa
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
- Hiroyuki Okamoto
- Department of Medical Physics, National Cancer Center Hospital, Tokyo, 104-0045, Japan
- Mitsuhiro Nakamura
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Teiji Nishio
- Medical Physics Laboratory, Division of Health Science, Graduate School of Medicine, Osaka University, Osaka, 565-0871, Japan
- Akito Saito
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Yasushi Nagata
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
31
Yang Y, Huang R, Lv G, Hu Z, Shan G, Zhang J, Bai X, Liu P, Li H, Chen M. Automatic segmentation of the clinical target volume and organs at risk for rectal cancer radiotherapy using structure-contextual representations based on 3D high-resolution network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103362]
32
Peng J, Kim DD, Patel JB, Zeng X, Huang J, Chang K, Xun X, Zhang C, Sollee J, Wu J, Dalal DJ, Feng X, Zhou H, Zhu C, Zou B, Jin K, Wen PY, Boxerman JL, Warren KE, Poussaint TY, States LJ, Kalpathy-Cramer J, Yang L, Huang RY, Bai HX. Deep learning-based automatic tumor burden assessment of pediatric high-grade gliomas, medulloblastomas, and other leptomeningeal seeding tumors. Neuro Oncol 2022; 24:289-299. [PMID: 34174070] [PMCID: PMC8804897] [DOI: 10.1093/neuonc/noab151]
Abstract
BACKGROUND Longitudinal measurement of tumor burden with magnetic resonance imaging (MRI) is an essential component of response assessment in pediatric brain tumors. We developed a fully automated pipeline for the segmentation of tumors in pediatric high-grade gliomas, medulloblastomas, and leptomeningeal seeding tumors. We further developed an algorithm for automatic 2D and volumetric size measurement of tumors. METHODS The preoperative and postoperative cohorts were randomly split into training and testing sets in a 4:1 ratio. A 3D U-Net neural network was trained to automatically segment the tumor on T1 contrast-enhanced and T2/FLAIR images. The product of the maximum bidimensional diameters according to the RAPNO (Response Assessment in Pediatric Neuro-Oncology) criteria (AutoRAPNO) was determined. Performance was compared to that of 2 expert human raters who performed assessments independently. Volumetric measurements of predicted and expert segmentations were computationally derived and compared. RESULTS A total of 794 preoperative MRIs from 794 patients and 1003 postoperative MRIs from 122 patients were included. There was excellent agreement of volumes between preoperative and postoperative predicted and manual segmentations, with intraclass correlation coefficients (ICCs) of 0.912 and 0.960 for the 2 preoperative and 0.947 and 0.896 for the 2 postoperative models. There was high agreement between AutoRAPNO scores on predicted segmentations and manually calculated scores based on manual segmentations (Rater 2 ICC = 0.909; Rater 3 ICC = 0.851). Lastly, the performance of AutoRAPNO was superior in repeatability to that of human raters for MRIs with multiple lesions. CONCLUSIONS Our automated deep learning pipeline demonstrates potential utility for response assessment in pediatric brain tumors. The tool should be further validated in prospective studies.
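The RAPNO-style 2D measurement that AutoRAPNO automates is the product of maximal perpendicular diameters on a tumor slice. A crude per-slice sketch of that idea from a binary mask: the longest diameter is the maximum pairwise voxel distance, and the perpendicular diameter is approximated here by the lesion's extent projected orthogonally to it. This is an illustrative approximation, not the authors' algorithm, and the toy lesion is invented:

```python
import numpy as np

def bidim_product(mask2d: np.ndarray) -> float:
    """Approximate bidimensional product for one axial slice:
    (longest in-plane diameter) x (extent perpendicular to it)."""
    pts = np.argwhere(mask2d).astype(float)
    if len(pts) < 2:
        return 0.0
    # Longest diameter: maximum pairwise distance between lesion voxels
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    major = float(np.sqrt(d2[i, j]))
    # Perpendicular diameter, approximated as the lesion's extent along the
    # direction orthogonal to the longest diameter (overestimates for
    # concave lesions, where RAPNO measures within the tumor)
    u = (pts[j] - pts[i]) / major
    v = np.array([-u[1], u[0]])
    proj = pts @ v
    minor = float(proj.max() - proj.min())
    return major * minor

# Toy lesion: an 11 x 5 voxel rectangle; the slice-wise maximum of this
# product over all slices would approximate the 2D tumor burden score
lesion = np.zeros((16, 16), dtype=bool)
lesion[2:13, 4:9] = True
product = bidim_product(lesion)
```

A full pipeline would evaluate this on every axial slice of the predicted segmentation and take the maximum, then compare scores across visits as the abstract describes.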
Affiliation(s)
- Jian Peng
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Daniel D Kim
- Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Jay B Patel
- Department of Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Xiaowei Zeng
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Jiaer Huang
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Ken Chang
- Department of Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Xinping Xun
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Chen Zhang
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- John Sollee
- Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Jing Wu
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Deepa J Dalal
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Xue Feng
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia, USA
- Hao Zhou
- Department of Neurology, Xiangya Hospital of Central South University, Changsha, Hunan, China
- Chengzhang Zhu
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Ke Jin
- Department of Radiology, Hunan Children’s Hospital, Changsha, Hunan, China
- Patrick Y Wen
- Center for Neuro-Oncology, Dana Farber Cancer Institute, Boston, Massachusetts, USA
- Jerrold L Boxerman
- Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Katherine E Warren
- Department of Pediatrics, Dana Farber Cancer Institute, Boston, Massachusetts, USA
- Tina Y Poussaint
- Department of Radiology, Boston Children’s Hospital, Boston, Massachusetts, USA
- Lisa J States
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Jayashree Kalpathy-Cramer
- Department of Radiology, Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Li Yang
- Department of Neurology, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Raymond Y Huang
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Harrison X Bai
- Department of Diagnostic Imaging, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island, USA
33
Lappas G, Staut N, Lieuwes NG, Biemans R, Wolfs CJ, van Hoof SJ, Dubois LJ, Verhaegen F. Inter-observer variability of organ contouring for preclinical studies with cone beam Computed Tomography imaging. Phys Imaging Radiat Oncol 2022; 21:11-17. [PMID: 35111981] [PMCID: PMC8790504] [DOI: 10.1016/j.phro.2022.01.002]
Abstract
Background and purpose In preclinical radiation studies, there is great interest in quantifying the radiation response of healthy tissues. Manual contouring has a significant impact on treatment planning because of the variation introduced by human interpretation, which results in inconsistencies when assessing normal tissue volumes. Evaluating these discrepancies can provide a better understanding of the limitations of the current preclinical radiation workflow. In the present work, interobserver variability (IOV) in manual contouring of rodent normal tissues on cone-beam computed tomography was evaluated in the head and thorax regions. Materials and methods Two animal technicians performed manual (assisted) contouring of normal tissues located within the thorax and head regions of rodents, 20 cases per body site. Mean surface distance (MSD), displacement of the center of mass (ΔCoM), Dice similarity coefficient (DSC), and the 95th percentile Hausdorff distance (HD95) were calculated between the contours of the two observers to evaluate the IOV. Results For the thorax organs, the right lung had the lowest IOV (ΔCoM: 0.08 ± 0.04 mm, DSC: 0.96 ± 0.01, MSD: 0.07 ± 0.01 mm, HD95: 0.20 ± 0.03 mm), while the spinal cord had the highest IOV (ΔCoM: 0.5 ± 0.3 mm, DSC: 0.81 ± 0.05, MSD: 0.14 ± 0.03 mm, HD95: 0.8 ± 0.2 mm). Regarding head organs, the right eye demonstrated the lowest IOV (ΔCoM: 0.12 ± 0.08 mm, DSC: 0.93 ± 0.02, MSD: 0.15 ± 0.04 mm, HD95: 0.29 ± 0.07 mm), while the complete brain had the highest IOV (ΔCoM: 0.2 ± 0.1 mm, DSC: 0.94 ± 0.02, MSD: 0.3 ± 0.1 mm, HD95: 0.5 ± 0.1 mm). Conclusions Our findings reveal small IOV, within the sub-mm range, for thorax and head normal tissues in rodents. The set of contours can serve as a basis for developing an automated delineation method for, e.g., treatment planning.
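The surface-based IOV metrics used here (MSD, HD95) can be computed from distance transforms of the mask boundaries. A generic SciPy sketch, not the authors' code; it assumes isotropic voxel spacing unless one is passed, and the toy masks are invented:

```python
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=1.0):
    """Distances from each boundary voxel of mask a to the boundary of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)   # 1-voxel-thick boundary of a
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance of every voxel to the nearest boundary voxel of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def msd(a, b, spacing=1.0):
    """Mean symmetric surface distance between two observers' contours."""
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return d.mean()

def hd95(a, b, spacing=1.0):
    """Symmetric 95th percentile Hausdorff distance."""
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return np.percentile(d, 95)

# Example: identical contours give zero for both metrics;
# a one-voxel shift gives an HD95 of 1 voxel at unit spacing
m = np.zeros((12, 12), dtype=bool); m[3:9, 3:9] = True
shifted = np.roll(m, 1, axis=0)
```

The `spacing` argument is how anatomical units (mm) would enter: passing the scan's voxel size to `distance_transform_edt` makes the distances physical rather than voxel counts.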
Collapse
Affiliation(s)
- Georgios Lappas
- Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Nick Staut
- Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Rianne Biemans
- SmART Scientific Solutions BV, Maastricht, the Netherlands
- Cecile J.A. Wolfs
- Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Stefan J. van Hoof
- The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
- The M-Lab, Department of Precision Medicine, GROW – School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
- Corresponding author at: Department of Radiation Oncology (MAASTRO), GROW – School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands.
34
Yoganathan SA, Zhang R. Segmentation of Organs and Tumor within Brain Magnetic Resonance Images Using K-Nearest Neighbor Classification. J Med Phys 2022; 47:40-49. [PMID: 35548028 PMCID: PMC9084578 DOI: 10.4103/jmp.jmp_87_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 10/24/2021] [Accepted: 12/11/2021] [Indexed: 11/29/2022] Open
Abstract
PURPOSE To fully exploit the benefits of magnetic resonance imaging (MRI) for radiotherapy, it is desirable to develop segmentation methods that delineate patients' MRI images quickly and accurately. The purpose of this work is to develop a semi-automatic method to segment organs and tumor within the brain on standard T1- and T2-weighted MRI images. METHODS AND MATERIALS Twelve brain cancer patients were retrospectively included in this study, and a simple rigid registration was used to align all images to the same spatial coordinates. Regions of interest were created for organ and tumor segmentation. The K-nearest neighbor (KNN) classification algorithm was used to capture the knowledge of previous segmentations using 15 image features (T1 and T2 image intensity, 4 Gabor-filtered images, 6 image gradients, and 3 Cartesian coordinates), and the trained models were used to predict organ and tumor contours. Dice similarity coefficient (DSC), normalized surface dice, sensitivity, specificity, and Hausdorff distance were used to evaluate the performance of the segmentations. RESULTS Our semi-automatic segmentations matched the ground truth closely. The mean DSC value was between 0.49 (optic chiasm) and 0.89 (right eye) for organ segmentation and was 0.87 for tumor segmentation. The overall performance of our method is comparable or superior to previous work, and the accuracy of our semi-automatic segmentation is generally better for large-volume objects. CONCLUSION The proposed KNN method can accurately segment organs and tumor using standard brain MRI images, provides fast and accurate image processing and planning tools, and paves the way for clinical implementation of MRI-guided and adaptive radiotherapy.
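The KNN voxel-classification approach described here can be sketched with scikit-learn: every voxel becomes a feature vector, a classifier is fit on labelled voxels, and labels are predicted voxel-wise on a new scan. The sketch below uses a reduced 6-feature version of the paper's 15 features (two intensities, one gradient magnitude, three coordinates); the function names and feature subset are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def voxel_features(t1, t2):
    """Per-voxel features: T1/T2 intensity, T1 gradient magnitude,
    and Cartesian coordinates (a reduced version of the paper's 15 features)."""
    gx, gy, gz = np.gradient(t1.astype(float))
    grad = np.sqrt(gx**2 + gy**2 + gz**2)
    ii, jj, kk = np.meshgrid(*map(np.arange, t1.shape), indexing="ij")
    feats = np.stack([t1, t2, grad, ii, jj, kk], axis=-1)
    return feats.reshape(-1, feats.shape[-1]).astype(float)

def knn_segment(train_t1, train_t2, train_labels, test_t1, test_t2, k=5):
    """Fit KNN on labelled training voxels, predict a label map for a test scan."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(voxel_features(train_t1, train_t2), train_labels.ravel())
    pred = clf.predict(voxel_features(test_t1, test_t2))
    return pred.reshape(test_t1.shape)
```

In practice the feature channels would be normalized and, as in the paper, trained per region of interest rather than on the full volume.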
Affiliation(s)
- S. A. Yoganathan
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA
- Rui Zhang
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA
- Department of Radiation Oncology, Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana, USA
- Address for correspondence: Dr. Rui Zhang, Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA. E-mail:
35
Lorenzen EL, Kallehauge JF, Byskov CS, Dahlrot RH, Haslund CA, Guldberg TL, Lassen-Ramshad Y, Lukacova S, Muhic A, Witt Nyström P, Haldbo-Classen L, Bahij I, Larsen L, Weber B, Hansen CR. A national study on the inter-observer variability in the delineation of organs at risk in the brain. Acta Oncol 2021; 60:1548-1554. [PMID: 34629014 DOI: 10.1080/0284186x.2021.1975813] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND The Danish Neuro Oncology Group (DNOG) has established national consensus guidelines for the delineation of organ at risk (OAR) structures based on published literature. This study was conducted to finalise these guidelines and evaluate the inter-observer variability of the OAR structures delineated by expert observers. MATERIAL AND METHODS The DNOG delineation guidelines were formed by participants from all Danish centres that treat brain tumours with radiotherapy. In a two-day workshop, the guidelines were discussed and finalised based on a pilot study. Following this, the ten participants contoured the following OARs on T1-weighted gadolinium-enhanced MRI from 13 patients with brain tumours: optic tracts, optic nerves, chiasm, spinal cord, brainstem, pituitary gland and hippocampus. The metrics used for comparison included the Dice similarity coefficient (Dice) and mean surface distance (MSD). RESULTS A total of 968 contours were delineated across the 13 patients, with on average eight (range six to nine) individual contour sets per patient. Good agreement was found across all structures, with a median MSD below 1 mm for most structures; the chiasm performed best with a median MSD of 0.45 mm. The Dice was, as expected, highly volume dependent: the brainstem (the largest structure) had the highest median Dice of 0.89, whereas smaller volumes such as the chiasm had a Dice of 0.71. CONCLUSION Except for the caudal definition of the spinal cord, the variance observed in the contours of brain OARs was generally low and consistent. Surface mapping revealed sub-regions of higher variance for some organs. The data set is being prepared as a validation data set for auto-segmentation algorithms for use within the Danish Comprehensive Cancer Centre – Radiotherapy and potential collaborators.
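The volume dependence of the Dice score noted in these results can be illustrated numerically: an identical one-voxel delineation shift penalizes a small structure (chiasm-sized) far more than a large one (brainstem-sized). A hypothetical sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def shifted_cube(size, shift, grid=64):
    """Cube of the given edge length, optionally shifted one voxel along x."""
    m = np.zeros((grid, grid, grid), dtype=bool)
    m[shift:shift + size, 0:size, 0:size] = True
    return m

# the same 1-voxel contouring error, applied to structures of different size
small = dice(shifted_cube(4, 0), shifted_cube(4, 1))    # small OAR, e.g. chiasm
large = dice(shifted_cube(32, 0), shifted_cube(32, 1))  # large OAR, e.g. brainstem
print(f"small structure Dice: {small:.3f}, large structure Dice: {large:.3f}")
```

For the 4-voxel cube the Dice drops to 0.75, while the 32-voxel cube stays near 0.97, which is why surface-distance metrics such as MSD complement Dice for small organs.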
Affiliation(s)
- Jesper Folsted Kallehauge
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Camilla Skinnerup Byskov
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Rikke Hedegaard Dahlrot
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Odense University Hospital, Odense, Denmark
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Slávka Lukacova
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Aida Muhic
- Department of Oncology, Rigshospitalet, Copenhagen, Denmark
- Petra Witt Nyström
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Ihsan Bahij
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Lone Larsen
- Department of Oncology, Aalborg University Hospital, Aalborg, Denmark
- Britta Weber
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Christian Rønn Hansen
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
36
Williams S, Layard Horsfall H, Funnell JP, Hanrahan JG, Khan DZ, Muirhead W, Stoyanov D, Marcus HJ. Artificial Intelligence in Brain Tumour Surgery-An Emerging Paradigm. Cancers (Basel) 2021; 13:cancers13195010. [PMID: 34638495 PMCID: PMC8508169 DOI: 10.3390/cancers13195010] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/02/2021] [Accepted: 10/03/2021] [Indexed: 01/01/2023] Open
Abstract
Artificial intelligence (AI) platforms have the potential to cause a paradigm shift in brain tumour surgery. Brain tumour surgery augmented with AI can result in safer and more effective treatment. In this review article, we explore the current and future role of AI in patients undergoing brain tumour surgery, including aiding diagnosis, optimising the surgical plan, providing support during the operation, and better predicting prognosis. Finally, we discuss barriers to successful clinical implementation and the associated ethical concerns, and provide our perspective on how the field could be advanced.
Affiliation(s)
- Simon Williams
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Correspondence:
- Hugo Layard Horsfall
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Jonathan P. Funnell
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- John G. Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danyal Z. Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- William Muirhead
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Hani J. Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
37
MSGSE-Net: Multi-scale guided squeeze-and-excitation network for subcortical brain structure segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.018] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
38
Poel R, Rüfenacht E, Hermann E, Scheib S, Manser P, Aebersold DM, Reyes M. The predictive value of segmentation metrics on dosimetry in organs at risk of the brain. Med Image Anal 2021; 73:102161. [PMID: 34293536 DOI: 10.1016/j.media.2021.102161] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 06/29/2021] [Accepted: 07/02/2021] [Indexed: 12/31/2022]
Abstract
BACKGROUND Fully automatic medical image segmentation has been a long pursuit in radiotherapy (RT). Recent developments involving deep learning show promising results yielding consistent and time-efficient contours. In order to train and validate these systems, several geometry-based metrics, such as the Dice Similarity Coefficient (DSC), Hausdorff distance, and other related metrics, are currently the standard in automated medical image segmentation challenges. However, the relevance of these metrics in RT is questionable. The quality of automated segmentation results needs to reflect clinically relevant treatment outcomes, such as dosimetry and related tumor control and toxicity. In this study, we present results investigating the correlation between popular geometric segmentation metrics and dose parameters for organs at risk (OARs) in brain tumor patients, and investigate properties that might be predictive for dose changes in brain radiotherapy. METHODS A retrospective database of glioblastoma multiforme patients was stratified for planning difficulty, from which 12 cases were selected, and reference sets of OARs and radiation targets were defined. To assess the relation between segmentation quality (as measured by standard segmentation assessment metrics) and the quality of RT plans, clinically realistic yet alternative contours for each OAR of the selected cases were obtained through three methods: (i) manual contours by two additional human raters; (ii) realistic manual manipulations of the reference contours; and (iii) deep learning-based segmentation. On the reference structure set, a reference plan was generated and re-optimized for each corresponding alternative contour set. The correlation between segmentation metrics and dosimetric changes was obtained and analyzed for each OAR, by means of the mean dose and the maximum dose to 1% of the volume (Dmax 1%).
Furthermore, we conducted specific experiments to investigate the dosimetric effect of alternative OAR contours with respect to proximity to the target, size, particular shape, and relative location to the target. RESULTS We found a low correlation between the DSC of the alternative OAR contours and dosimetric changes. The Pearson correlation coefficient between the mean OAR dose effect and the Dice was -0.11; for Dmax 1%, the correlation was -0.13. Similarly low correlations were found for 22 other segmentation metrics. The organ-based analysis showed a better correlation for the larger OARs (i.e., brainstem and eyes) than for the smaller OARs (i.e., optic nerves and chiasm). Furthermore, we found that proximity to the target does not make contour variations more susceptible to the dose effect. However, the direction of the contour variation with respect to the relative location of the target seems to have a strong correlation with the dose effect. CONCLUSIONS This study shows a low correlation between segmentation metrics and dosimetric changes for OARs in brain tumor patients. The results suggest that the current metrics for image segmentation in RT, as well as deep learning systems employing such metrics, need to be revisited towards clinically oriented metrics that better reflect how segmentation quality affects dose distribution and related tumor control and toxicity.
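The headline numbers here (e.g., r = -0.11 between Dice and the mean-dose effect) are ordinary Pearson correlations over paired per-contour observations. A minimal sketch with made-up, illustrative values (not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

# hypothetical paired observations: one entry per alternative OAR contour
dsc = np.array([0.95, 0.88, 0.91, 0.70, 0.82, 0.76, 0.93, 0.65])
mean_dose_change = np.array([0.1, 0.4, -0.2, 0.3, -0.5, 0.2, 0.0, 0.6])  # Gy

# Pearson r near zero would mirror the paper's finding that geometric
# overlap is a poor predictor of the dosimetric effect
r, p = pearsonr(dsc, mean_dose_change)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")
```

With real data one would compute this per OAR and per dose parameter (mean dose, Dmax 1%), exactly as tabulated in the study.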
Affiliation(s)
- Robert Poel
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland; ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Elias Rüfenacht
- ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Evelyn Hermann
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland; Radiotherapy Department, Riviera-Chablais Hospital, Rennaz, Switzerland
- Stefan Scheib
- Varian Medical Systems Imaging Laboratory GmbH, Switzerland
- Peter Manser
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Daniel M Aebersold
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
39
Sipos D, László Z, Tóth Z, Kovács P, Tollár J, Gulybán A, Lakosi F, Repa I, Kovács A. Additional Value of 18F-FDOPA Amino Acid Analog Radiotracer to Irradiation Planning Process of Patients With Glioblastoma Multiforme. Front Oncol 2021; 11:699360. [PMID: 34295825 PMCID: PMC8290215 DOI: 10.3389/fonc.2021.699360] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 06/11/2021] [Indexed: 01/25/2023] Open
Abstract
PURPOSE To investigate the added value of 6-[18F]-fluoro-L-3,4-dihydroxyphenylalanine (FDOPA) PET to radiotherapy planning in glioblastoma multiforme (GBM). METHODS From September 2017 to December 2020, 17 patients with GBM received external beam radiotherapy up to 60 Gy with concurrent and adjuvant temozolomide. Target volume delineation followed the European guideline: a clinical target volume (CTV) with a 2-cm safety margin around the gross tumor volume (GTV), defined as the contrast-enhancing lesion plus resection cavity on MRI. All patients had FDOPA hybrid PET/MRI followed by PET/CT before radiotherapy planning. PET segmentation followed international recommendations: T/N 1.7 (BTV1.7) and T/N 2.0 (BTV2.0) SUV thresholds were used for biological target volume (BTV) delineation. For GTV-BTV agreement, the 95th percentile Hausdorff distance (HD95%) from GTV to the BTVs was calculated; additionally, the BTV portions outside of the GTV and their coverage by the 95% isodose contours were determined. In case of recurrence, the latest MR images were co-registered to the planning CT to evaluate the recurrence location relative to the BTVs and 95% isodose contours. RESULTS Average (range) GTV, BTV1.7, and BTV2.0 were 46.58 (6-182.5), 68.68 (9.6-204.1), and 42.89 (3.8-147.6) cm3, respectively. HD95% from GTV was 15.5 mm (7.9-30.7 mm) for BTV1.7 and 10.5 mm (4.3-21.4 mm) for BTV2.0. Based on volumetric assessment, 58.8% (28-100%) of BTV1.7 and 45.7% (14-100%) of BTV2.0 lay outside of the standard GTV, yet all BTVs were encompassed by the 95% dose. All recurrences were confirmed by follow-up imaging and occurred within the PTV, with an additional out-of-field recurrence in a single case, which was not DOPA-positive at the beginning of treatment. Good correlation was found between the mean and median values of PET/CT- and PET/MRI-segmented volumes relative to the corresponding brain-accumulated enhancement (r = 0.75; r = 0.72).
CONCLUSION 18F-FDOPA PET resulted in substantially larger tumor volumes compared with MRI; however, its added value is unclear, as the vast majority of recurrences occurred within the prescribed dose level. Using the PET/CT signal proved feasible in the absence of direct PET/MR segmentation capabilities in the treatment planning system. The added value of 18F-FDOPA may be better exploited in the context of integrated dose escalation.
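The T/N-threshold BTV segmentation described above reduces to a simple comparison of each voxel's SUV against a scaled mean SUV of a normal-brain reference region. A minimal sketch on synthetic data (function name and values are illustrative, not the authors' pipeline):

```python
import numpy as np

def btv_mask(suv, normal_roi, tn_ratio):
    """Biological target volume from a tumor-to-normal SUV threshold:
    voxels whose SUV is at least tn_ratio times the mean SUV of a
    normal-brain reference ROI."""
    threshold = tn_ratio * suv[normal_roi].mean()
    return suv >= threshold

# synthetic example: background SUV ~1, lesion SUV ~3
suv = np.ones((32, 32, 32))
suv[10:20, 10:20, 10:20] = 3.0
normal = np.zeros(suv.shape, dtype=bool)
normal[:5, :5, :5] = True  # contralateral "normal brain" reference region

btv17 = btv_mask(suv, normal, 1.7)  # T/N 1.7 -> BTV1.7
btv20 = btv_mask(suv, normal, 2.0)  # T/N 2.0 -> BTV2.0
# the stricter threshold can only include fewer (or equal) voxels
assert btv20.sum() <= btv17.sum()
```

On clinical data the two thresholds diverge, which is why BTV1.7 in the study was on average larger than BTV2.0 (68.68 vs 42.89 cm3).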
Affiliation(s)
- David Sipos
- Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, “Moritz Kaposi” Teaching Hospital, Kaposvár, Hungary
- Doctoral School of Health Sciences, University of Pécs, Pécs, Hungary
- Department of Medical Imaging, Faculty of Health Sciences, University of Pécs, Pécs, Hungary
- Zoltan László
- Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, “Moritz Kaposi” Teaching Hospital, Kaposvár, Hungary
- Zoltan Tóth
- Doctoral School of Health Sciences, University of Pécs, Pécs, Hungary
- MEDICOPUS Healthcare Provider and Public Nonprofit Ltd., Somogy County Moritz Kaposi Teaching Hospital, Kaposvár, Hungary
- Peter Kovács
- Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, “Moritz Kaposi” Teaching Hospital, Kaposvár, Hungary
- Department of Medical Imaging, Faculty of Health Sciences, University of Pécs, Pécs, Hungary
- Jozsef Tollár
- Department of Medical Imaging, Faculty of Health Sciences, University of Pécs, Pécs, Hungary
- Department of Neurology, Somogy County Moritz Kaposi Teaching Hospital, Kaposvár, Hungary
- Akos Gulybán
- Medical Physics Department, Institut Jules Bordet, Bruxelles, Belgium
- Ferenc Lakosi
- Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, “Moritz Kaposi” Teaching Hospital, Kaposvár, Hungary
- Department of Medical Imaging, Faculty of Health Sciences, University of Pécs, Pécs, Hungary
- Imre Repa
- Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, “Moritz Kaposi” Teaching Hospital, Kaposvár, Hungary
- Doctoral School of Health Sciences, University of Pécs, Pécs, Hungary
- Arpad Kovács
- Doctoral School of Health Sciences, University of Pécs, Pécs, Hungary
- Department of Medical Imaging, Faculty of Health Sciences, University of Pécs, Pécs, Hungary
- Department of Oncoradiology, Faculty of Medicine, University of Debrecen, Debrecen, Hungary
40
van der Veen J, Gulyban A, Willems S, Maes F, Nuyts S. Interobserver variability in organ at risk delineation in head and neck cancer. Radiat Oncol 2021; 16:120. [PMID: 34183040 PMCID: PMC8240214 DOI: 10.1186/s13014-020-01677-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 09/24/2020] [Indexed: 11/25/2022] Open
Abstract
Background In radiotherapy inaccuracy in organ at risk (OAR) delineation can impact treatment plan optimisation and treatment plan evaluation. Brouwer et al. showed significant interobserver variability (IOV) in OAR delineation in head and neck cancer (HNC) and published international consensus guidelines (ICG) for OAR delineation in 2015. The aim of our study was to evaluate IOV in the presence of these guidelines. Methods HNC radiation oncologists (RO) from each Belgian radiotherapy centre were invited to complete a survey and submit contours for 5 HNC cases. Reference contours (OARref) were obtained by a clinically validated artificial intelligence-tool trained using ICG. Dice similarity coefficients (DSC), mean surface distance (MSD) and 95% Hausdorff distances (HD95) were used for comparison. Results Fourteen of twenty-two RO (64%) completed the survey and submitted delineations. Thirteen (93%) confirmed the use of delineation guidelines, of which six (43%) used the ICG. The OARs whose delineations agreed best with the OARref were mandible [median DSC 0.9, range (0.8–0.9); median MSD 1.1 mm, range (0.8–8.3), median HD95 3.4 mm, range (1.5–38.7)], brainstem [median DSC 0.9 (0.6–0.9); median MSD 1.5 mm (1.1–4.0), median HD95 4.0 mm (2.3–15.0)], submandibular glands [median DSC 0.8 (0.5–0.9); median MSD 1.2 mm (0.9–2.5), median HD95 3.1 mm (1.8–12.2)] and parotids [median DSC 0.9 (0.6–0.9); median MSD 1.9 mm (1.2–4.2), median HD95 5.1 mm (3.1–19.2)]. Oral cavity, cochleas, PCMs, supraglottic larynx and glottic area showed more variation. RO who used the consensus guidelines showed significantly less IOV (p = 0.008). Conclusions Although ICG for delineation of OARs in HNC exist, they are only implemented by about half of RO participating in this study, which partly explains the delineation variability. 
However, this study highlights that guidelines alone do not suffice to eliminate IOV and that more effort is needed to achieve further treatment standardisation, for example with artificial intelligence.
Supplementary information Supplementary information accompanies this paper at 10.1186/s13014-020-01677-2.
Affiliation(s)
- J van der Veen
- Department of Oncology, Radiation-Oncology, KU Leuven, University Hospitals Leuven, 3000 Leuven, Belgium
- A Gulyban
- Department of Medical Physics, Jules Bordet Institute, Brussels, Belgium
- S Willems
- Department ESAT, Processing Speech and Images (PSI), Medical Imaging Research Center, KU Leuven, University Hospitals Leuven, 3000 Leuven, Belgium
- F Maes
- Department ESAT, Processing Speech and Images (PSI), Medical Imaging Research Center, KU Leuven, University Hospitals Leuven, 3000 Leuven, Belgium
- S Nuyts
- Department of Oncology, Radiation-Oncology, KU Leuven, University Hospitals Leuven, 3000 Leuven, Belgium
41
Zhong Y, Yang Y, Fang Y, Wang J, Hu W. A Preliminary Experience of Implementing Deep-Learning Based Auto-Segmentation in Head and Neck Cancer: A Study on Real-World Clinical Cases. Front Oncol 2021; 11:638197. [PMID: 34026615 PMCID: PMC8132944 DOI: 10.3389/fonc.2021.638197] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Accepted: 04/15/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose While artificial intelligence has shown great promise in organs-at-risk (OAR) auto-segmentation for head and neck cancer (HNC) radiotherapy, reaching the level of clinical acceptance for this technology in real-world routine practice remains a challenge. The purpose of this study was to validate a U-net-based fully convolutional neural network (CNN) for the automatic delineation of OARs in HNC, focusing on clinical implementation and evaluation. Methods In the first phase, the CNN was trained on CT images of 364 clinical HNC patients with contours annotated by different oncologists in routine clinical cases. Delineation accuracy was quantified using the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD). To assess efficiency, the time required to edit the auto-contours to a clinically acceptable standard was evaluated by a questionnaire. For subjective evaluation, expert oncologists (more than 10 years' experience) were randomly presented with automated delineations or manual contours of 15 OARs for 30 patient cases. In the second phase, the network was retrained with an additional 300 patients, whose contours were generated by the pre-trained CNN and edited by oncologists until clinically acceptable. Results Based on DSC, the CNN performed best for the spinal cord, brainstem, temporal lobe, eyes, optic nerve, parotid glands and larynx (DSC > 0.7). Retraining the architecture yielded higher conformity of the OAR delineations, with the largest DSC improvement for the oral cavity (0.53 to 0.93). Compared with manual delineation, auto-contouring significantly shortened the contouring time from hours to minutes. In the subjective evaluation, two observers showed a clear preference for the automatic OAR contours, even at relatively low DSC values. Most of the automated OAR segmentations reached the clinical acceptance level compared with manual delineations.
Conclusions After retraining, the CNN developed for automated OAR delineation in HNC proved more robust, efficient and consistent in clinical practice. Deep learning-based auto-segmentation shows great potential to alleviate the labor-intensive contouring of OARs for radiotherapy treatment planning.
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yanju Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
42
Vickress J, Rangel Baltazar MA, Afsharpour H. Evaluation of Varian's SmartAdapt for clinical use in radiation therapy for patients with thoracic lesions. J Appl Clin Med Phys 2021; 22:150-156. [PMID: 33570225 PMCID: PMC7984488 DOI: 10.1002/acm2.13194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 05/21/2020] [Accepted: 01/05/2021] [Indexed: 11/25/2022] Open
Abstract
INTRODUCTION Deformable image registration (DIR) is a required tool in any adaptive radiotherapy program, helping to account for anatomical changes that occur during a multifraction treatment. SmartAdapt is a DIR tool from Varian incorporated within the Eclipse treatment planning system that can be used for contour propagation and transfer of PET, MRI, or computed tomography (CT) data. The purpose of this work is to evaluate the registration and contour propagation accuracy of SmartAdapt for thoracic CT studies using the guidelines from AAPM TG 132. METHODS To evaluate the registration accuracy of SmartAdapt, the mean target registration error (TRE) was measured for ten landmarked 4DCT studies from https://www.dir-labs.com/, each including 300 landmark pairs matching the inspiration and expiration phase images. To further characterize the registration accuracy, the magnitude of deformation for each 4DCT was measured and compared against the mean TRE for each study. Contour propagation accuracy was evaluated using 22 randomly selected lung cancer cases from our center in which there was either a replan or the patient was treated for a new lesion within the lung. Contours evaluated included the right and left lungs, esophagus, spinal canal, heart, and GTV, and the results were quantified using the Dice similarity coefficient. RESULTS The mean TRE over all ten cases was 1.89 mm; the maximum per-case mean TRE was 3.8 mm, from case #8, which also had the most landmark pairs with displacements >2 cm. For contour propagation accuracy, the Dice coefficients for the left lung, right lung, heart, esophagus, and spinal canal were 0.93, 0.94, 0.90, 0.61, and 0.82, respectively. CONCLUSION The results from our study demonstrate that for thoracic images, SmartAdapt will in most cases be accurate to below 2 mm in registration error unless there is deformation greater than 2 cm.
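The mean TRE quoted above is simply the average Euclidean distance between corresponding landmark pairs after registration. A minimal sketch with made-up landmark coordinates (illustrative only):

```python
import numpy as np

def mean_tre(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding landmarks, scaled by voxel spacing."""
    d = (np.asarray(fixed_pts, float) - np.asarray(warped_pts, float)) * spacing
    return np.linalg.norm(d, axis=1).mean()

# two landmark pairs, each displaced by a (3, 4, 0)-voxel offset
fixed = [(10, 10, 10), (20, 30, 15)]
warped = [(13, 14, 10), (23, 34, 15)]
print(mean_tre(fixed, warped))  # -> 5.0 at unit spacing
```

For a DIR-Lab-style evaluation, `fixed_pts` would be the expert landmarks on the inspiration phase and `warped_pts` the expiration-phase landmarks mapped through the deformation field.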
Affiliation(s)
- Jason Vickress
- Trillium Health Partners / The Credit Valley Hospital, Mississauga, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Hossein Afsharpour
- Trillium Health Partners / The Credit Valley Hospital, Mississauga, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
43
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra-/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has seen increased interest in the past decade, with emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft-tissue structures; image database - several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric, and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
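The volumetric Dice coefficient that the review treats as the standard overlap metric is straightforward to compute on binary masks; a minimal sketch (toy 2D masks for brevity; real OAR masks are 3D):

```python
import numpy as np

def dice(a, b):
    """Volumetric Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A|+|B|), with 1.0 for two empty masks by convention."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True  # 36 voxels
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True  # 24 voxels, all inside a
print(dice(a, b))  # 2*24/(36+24) = 0.8
```

As the review notes, this overlap score should be paired with a distance metric, since a high Dice can hide boundary errors that matter dosimetrically.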
44. Gau K, Schmidt CSM, Urbach H, Zentner J, Schulze-Bonhage A, Kaller CP, Foit NA. Accuracy and practical aspects of semi- and fully automatic segmentation methods for resected brain areas. Neuroradiology 2020; 62:1637-1648. [PMID: 32691076] [PMCID: PMC7666677] [DOI: 10.1007/s00234-020-02481-1]
Abstract
Purpose Precise segmentation of brain lesions is essential for neurological research. Specifically, resection volume estimates can aid in the assessment of residual postoperative tissue, e.g., following surgery for glioma. Furthermore, behavioral lesion-symptom mapping in epilepsy relies on accurate delineation of surgical lesions. We sought to determine whether semi- and fully automatic segmentation methods can be applied to resected brain areas and which approach provides the most accurate and cost-efficient results. Methods We compared a semi-automatic (ITK-SNAP) with a fully automatic (lesion_GNB) method for segmentation of resected brain areas in terms of accuracy, with manual segmentation serving as reference. Additionally, we evaluated processing times of all three methods. We used T1-weighted MRI data of epilepsy patients (n = 27; 11 male; mean age 39 years, range 16–69) who underwent temporal lobe resections (17 left-sided). Results The semi-automatic approach yielded superior accuracy (p < 0.001), with a median Dice similarity coefficient (mDSC) of 0.78 and a median average Hausdorff distance (maHD) of 0.44, compared with the fully automatic approach (mDSC 0.58, maHD 1.32). There was no significant difference between the median percent volume difference of the two approaches (p > 0.05). Manual segmentation required more human input (30.41 min/subject), thereby incurring significantly higher costs than the semi-automatic (3.27 min/subject) or fully automatic approaches (labor and cost approaching zero). Conclusion Semi-automatic segmentation offers the most accurate results in resected brain areas with a moderate amount of human input, thus representing a viable alternative to manual segmentation, especially for studies with large patient cohorts.
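The average Hausdorff distance used in this study averages nearest-neighbour distances in both directions, rather than taking the maximum as the classic Hausdorff distance does, which makes it less sensitive to single outlier points. A brute-force sketch on 2D point sets (names and toy points are illustrative):

```python
import numpy as np

def avg_hausdorff(pts_a, pts_b):
    """Symmetric average Hausdorff distance between two point sets:
    the mean nearest-neighbour distance in each direction, averaged."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    # full pairwise distance matrix (fine for small sets; use a KD-tree
    # for real surface meshes)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = [[0.0, 0.0], [1.0, 0.0]]
b = [[0.0, 1.0], [1.0, 1.0]]
print(avg_hausdorff(a, b))  # every point is 1.0 from its nearest neighbour
```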
45. Opposits G, Aranyi C, Glavák C, Cselik Z, Trón L, Sipos D, Hadjiev J, Berényi E, Repa I, Emri M, Kovács Á. OAR sparing 3D radiotherapy planning supported by fMRI brain mapping investigations. Med Dosim 2020; 45:e1-e8. [PMID: 32505630] [DOI: 10.1016/j.meddos.2020.04.003]
Abstract
The human brain as an organ has numerous functions; some of them can be visualized by functional imaging techniques (e.g., functional MRI [fMRI] or positron emission tomography). Localization of the appropriate activity clusters requires sophisticated instrumentation and a complex measuring protocol. As inclusion of the activation pattern in modern self-tailored 3D-based radiotherapy has notable advantages, this method is applied frequently. Unfortunately, no standardized method has yet been published for the integration of fMRI data into the planning process, and a detailed description of the individual applications is usually missing. Thirteen patients with brain tumors receiving fMRI-based RT planning were enrolled in this study. The delivered dose maps were exported from the treatment planning system and processed for further statistical analysis. Two parameters were introduced, the Hausdorff distance (HD), measuring geometric distance, and the Dice similarity coefficient (DSC), measuring volumetric overlap, to characterize similarity and/or dissimilarity of the fMRI-corrected and uncorrected dose matrices calculated by 3D planning. Statistical analysis of bootstrapped HD and DSC data was performed to determine confidence intervals for these parameters. The calculated confidence intervals for HD and DSC were (5.04, 7.09) and (0.79, 0.86), respectively, for the 40 Gy dose volumes, and (5.2, 7.85) and (0.74, 0.83), respectively, for the 60 Gy dose volumes. These data indicate that in the case of HD < 5.04 and/or DSC > 0.86, the 40 Gy dose volumes obtained with and without the fMRI activation pattern do not show a significant difference (5% significance level). The same conditions for the 60 Gy dose volumes were HD < 5.2 and/or DSC > 0.83. At the same time, with HD > 7.09 and/or DSC < 0.79 for 40 Gy, and HD > 7.85 and/or DSC < 0.74 for 60 Gy, the impact of fMRI utilization in RT planning is excessive. The fMRI activation clusters can be used in daily RT planning routine to spare activation clusters as critical areas in the brain and avoid their high-dose irradiation. The parameters HD (as distance) and DSC (as overlap) can be used to characterize the difference and similarity between radiotherapy planning target volumes and to indicate whether the fMRI-derived activation patterns and consequent fMRI-corrected planning volumes are reliable.
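The bootstrap confidence intervals for HD and DSC can be reproduced in spirit with a simple percentile bootstrap of the sample mean; the function name and the DSC values below are illustrative, not the study's data:

```python
import numpy as np

def bootstrap_ci(samples, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `samples`:
    resample with replacement, collect resample means, take quantiles."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, float)
    means = np.array([rng.choice(samples, samples.size, replace=True).mean()
                      for _ in range(n_boot)])
    return (float(np.quantile(means, alpha / 2)),
            float(np.quantile(means, 1 - alpha / 2)))

dsc_values = [0.81, 0.84, 0.79, 0.86, 0.82, 0.80, 0.85]  # illustrative only
lo, hi = bootstrap_ci(dsc_values)
print(lo, hi)
```

The same machinery applies to the per-patient HD values; only the input vector changes.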
46. Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. [PMID: 32501772] [PMCID: PMC7520881] [DOI: 10.1146/annurev-bioeng-062117-121105]
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multiparametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic, quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) these models' integration with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
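The biophysical growth models this review covers are typically reaction-diffusion PDEs; the Fisher-Kolmogorov equation is a common choice in the glioma-modeling literature, with a diffusion term for invasion and a logistic term for proliferation. A minimal 1D forward-Euler sketch (grid, boundary condition, and parameters are illustrative, not from the review):

```python
import numpy as np

def fisher_kpp_step(c, dt, dx, D, rho):
    """One explicit Euler step of the 1D Fisher-Kolmogorov model
    dc/dt = D * d2c/dx2 + rho * c * (1 - c),
    where c is normalized tumor cell density in [0, 1]."""
    # second difference with periodic boundaries
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    return c + dt * (D * lap + rho * c * (1 - c))

c = np.zeros(100)
c[50] = 0.1  # small seed of tumor cell density
for _ in range(200):  # dt*D/dx^2 = 0.01 satisfies the stability limit
    c = fisher_kpp_step(c, dt=0.01, dx=1.0, D=1.0, rho=1.0)
print(c.sum(), c.max())
```

Calibrating D and rho per patient from mpMRI is exactly the inverse problem the review discusses.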
47. Ermiş E, Jungo A, Poel R, Blatti-Moreno M, Meier R, Knecht U, Aebersold DM, Fix MK, Manser P, Reyes M, Herrmann E. Fully automated brain resection cavity delineation for radiation target volume definition in glioblastoma patients using deep learning. Radiat Oncol 2020; 15:100. [PMID: 32375839] [PMCID: PMC7204033] [DOI: 10.1186/s13014-020-01553-z]
Abstract
Background Automated brain tumor segmentation methods are computational algorithms that yield tumor delineation from, in this case, multimodal magnetic resonance imaging (MRI). We present an automated segmentation method and its results for the resection cavity (RC) in glioblastoma multiforme (GBM) patients using deep learning (DL) technologies. Methods Postoperative T1-weighted (with and without contrast), T2-weighted, and fluid-attenuated inversion recovery MRI studies of 30 GBM patients were included. Three radiation oncologists manually delineated the RC to obtain a reference segmentation. We developed a DL cavity segmentation method, which uses all four MRI sequences and the reference segmentation to learn to perform RC delineations. We evaluated the segmentation method in terms of Dice coefficient (DC) and estimated volume measurements. Results Median DCs of the three radiation oncologists were 0.85 (interquartile range [IQR]: 0.08), 0.84 (IQR: 0.07), and 0.86 (IQR: 0.07). The results of the automatic segmentation compared with the three raters were 0.83 (IQR: 0.14), 0.81 (IQR: 0.12), and 0.81 (IQR: 0.13), which was significantly lower than the DC among raters (chi-square = 11.63, p = 0.04). We did not detect a statistically significant difference in the measured RC volumes between the raters and the automated method (Kruskal-Wallis test: chi-square = 1.46, p = 0.69). The main sources of error were signal inhomogeneity and similar intensity patterns between cavity and brain tissues. Conclusions The proposed DL approach yields promising results for automated RC segmentation in this proof-of-concept study. Compared with human experts, the DCs are still subpar.
48. Fung NTC, Hung WM, Sze CK, Lee MCH, Ng WT. Automatic segmentation for adaptive planning in nasopharyngeal carcinoma IMRT: Time, geometrical, and dosimetric analysis. Med Dosim 2020; 45:60-65. [DOI: 10.1016/j.meddos.2019.06.002]
49. Vaassen F, Hazelaar C, Vaniqui A, Gooding M, van der Heyden B, Canters R, van Elmpt W. Evaluation of measures for assessing time-saving of automatic organ-at-risk segmentation in radiotherapy. Phys Imaging Radiat Oncol 2019; 13:1-6. [PMID: 33458300] [PMCID: PMC7807544] [DOI: 10.1016/j.phro.2019.12.001]
Abstract
Highlights
- Automatic delineation software shows promising results in terms of time-saving.
- Standard geometry measures do not have a high correlation with delineation time.
- New evaluation measures were introduced: added path length (APL) and surface DSC.
- APL showed the highest correlation with time recordings, making it the most representative measure of clinical usefulness.
Background and purpose In radiotherapy, automatic organ-at-risk segmentation algorithms allow faster delineation times, but clinically relevant contour evaluation remains challenging. Commonly used measures to assess automatic contours, such as the volumetric Dice similarity coefficient (DSC) or Hausdorff distance, have been shown to be good measures of geometric similarity, but do not always correlate with the clinical applicability of the contours or the time needed to adjust them. This study aimed to evaluate the correlation of new and commonly used evaluation measures with time-saving during contouring. Materials and methods Twenty lung cancer patients were used to compare user adjustments after atlas-based and deep-learning contouring with manual contouring. The absolute time (s) needed to adjust the auto-contour relative to manual contouring was recorded, and from this the relative time-saving (%) was calculated. New evaluation measures (surface DSC and added path length, APL) and conventional evaluation measures (volumetric DSC and Hausdorff distance) were correlated with time recordings and time-savings, quantified with the Pearson correlation coefficient, R. Results The highest correlation (R = 0.87) was found between APL and absolute adaptation time. Lower correlations were found for APL with relative time-saving (R = −0.38), and for surface DSC with absolute adaptation time (R = −0.69) and relative time-saving (R = 0.57). Volumetric DSC and Hausdorff distance also showed lower correlation coefficients for absolute adaptation time (R = −0.32 and 0.64, respectively) and relative time-saving (R = 0.44 and −0.64, respectively). Conclusion Surface DSC and APL are better indicators of contour adaptation time and time-saving when using auto-segmentation, and provide more clinically relevant quantitative measures of automatically generated contour quality than commonly used geometry-based measures.
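Added path length, as studied here, counts the length of reference contour that the automatic contour does not already provide, i.e., the part an observer would have to redraw. A simplified pixel-based sketch without any tolerance margin (function names and toy masks are illustrative, not the authors' implementation):

```python
import numpy as np

def boundary(mask):
    """4-connected boundary of a 2D binary mask (mask minus its erosion)."""
    m = np.asarray(mask, bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:]) & m
    return m & ~interior

def added_path_length(auto, reference):
    """Simplified APL: reference-boundary pixels that do not coincide
    with the auto-contour boundary (no tolerance margin applied)."""
    return int((boundary(reference) & ~boundary(auto)).sum())

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True
ref = np.zeros((8, 8), bool); ref[2:6, 2:7] = True  # one column wider
print(added_path_length(auto, ref))  # 4: the extra column of boundary pixels
```

Scaling the count by pixel spacing gives a length in mm, which is what correlates with adaptation time in the study.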
50. Chang K, Beers AL, Bai HX, Brown JM, Ly KI, Li X, Senders JT, Kavouridis VK, Boaro A, Su C, Bi WL, Rapalino O, Liao W, Shen Q, Zhou H, Xiao B, Wang Y, Zhang PJ, Pinho MC, Wen PY, Batchelor TT, Boxerman JL, Arnaout O, Rosen BR, Gerstner ER, Yang L, Huang RY, Kalpathy-Cramer J. Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement. Neuro Oncol 2019; 21:1412-1422. [PMID: 31190077] [PMCID: PMC6827825] [DOI: 10.1093/neuonc/noz106]
Abstract
BACKGROUND Longitudinal measurement of glioma burden with MRI is the basis for treatment response assessment. In this study, we developed a deep learning algorithm that automatically segments abnormal fluid attenuated inversion recovery (FLAIR) hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bidimensional diameters according to the Response Assessment in Neuro-Oncology (RANO) criteria (AutoRANO). METHODS Two cohorts of patients were used for this study. One consisted of 843 preoperative MRIs from 843 patients with low- or high-grade gliomas from 4 institutions and the second consisted of 713 longitudinal postoperative MRI visits from 54 patients with newly diagnosed glioblastomas (each with 2 pretreatment "baseline" MRIs) from 1 institution. RESULTS The automatically generated FLAIR hyperintensity volume, contrast-enhancing tumor volume, and AutoRANO were highly repeatable for the double-baseline visits, with an intraclass correlation coefficient (ICC) of 0.986, 0.991, and 0.977, respectively, on the cohort of postoperative GBM patients. Furthermore, there was high agreement between manually and automatically measured tumor volumes, with ICC values of 0.915, 0.924, and 0.965 for preoperative FLAIR hyperintensity, postoperative FLAIR hyperintensity, and postoperative contrast-enhancing tumor volumes, respectively. Lastly, the ICCs for comparing manually and automatically derived longitudinal changes in tumor burden were 0.917, 0.966, and 0.850 for FLAIR hyperintensity volume, contrast-enhancing tumor volume, and RANO measures, respectively. CONCLUSIONS Our automated algorithm demonstrates potential utility for evaluating tumor burden in complex posttreatment settings, although further validation in multicenter clinical trials will be needed prior to widespread implementation.
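The bidimensional measurement automated here is the longest in-plane diameter multiplied by the longest extent perpendicular to it. A simplified single-slice sketch on tumor boundary points (hypothetical helper; actual RANO measurement has additional eligibility rules, e.g., minimum lesion size, that are omitted):

```python
import numpy as np

def bidimensional_product(points):
    """Simplified RANO-style product on one axial slice: the longest
    diameter times the perpendicular extent, from boundary points in mm."""
    pts = np.asarray(points, float)
    # longest diameter: maximum pairwise distance (brute force)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    i, j = np.unravel_index(dist.argmax(), dist.shape)
    major = dist[i, j]
    axis = (pts[j] - pts[i]) / major          # unit vector of major axis
    perp = np.array([-axis[1], axis[0]])      # perpendicular unit vector
    proj = pts @ perp                         # extent along perpendicular
    return major * (proj.max() - proj.min())

# boundary points of an axis-aligned 10 x 20 mm rectangle
rect = [[0, 0], [0, 20], [10, 0], [10, 20], [0, 10], [10, 10], [5, 0], [5, 20]]
print(bidimensional_product(rect))  # diagonal (~22.4 mm) x ~17.9 mm ≈ 400.0
```

In the paper this measurement is derived automatically from the segmented contrast-enhancing tumor mask, slice by slice, taking the slice with the largest product.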