1
Li C, Mao Y, Liang S, Li J, Wang Y, Guo Y. Deep causal learning for pancreatic cancer segmentation in CT sequences. Neural Netw 2024; 175:106294. [PMID: 38657562] [DOI: 10.1016/j.neunet.2024.106294]
Abstract
Segmenting the irregular pancreas and the inconspicuous tumor simultaneously is an essential but challenging step in diagnosing pancreatic cancer. Current deep-learning (DL) methods usually segment the pancreas or tumor independently using mixed image features, which are disrupted by complex, low-contrast background tissues. Here, we propose a deep causal learning framework named CausegNet for pancreas and tumor co-segmentation in 3D CT sequences. Specifically, a causality-aware module and a counterfactual loss are employed to enhance the DL network's comprehension of the anatomical causal relationship between the foreground elements (pancreas and tumor) and the background. By integrating causality into CausegNet, the network focuses solely on extracting intrinsic foreground causal features while effectively learning the potential causality between the pancreas and the tumor. Then, based on the extracted causal features, CausegNet applies counterfactual inference to significantly reduce background interference and sequentially search for the pancreas and tumor in the foreground. Consequently, our approach can handle the deformable pancreas and obscure tumors, yielding superior co-segmentation performance on both public and real clinical datasets, with the highest pancreas/tumor Dice coefficients of 86.67%/84.28%. Visualized features and anti-noise experiments further demonstrate the causal interpretability and stability of our method. Furthermore, our approach improves the accuracy and sensitivity of a downstream pancreatic cancer risk assessment task by 12.50% and 50.00%, respectively, compared to experienced clinicians, indicating promising clinical applications.
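The pancreas/tumor Dice coefficients quoted above measure volumetric overlap between a predicted mask and a reference mask. A minimal sketch of the metric on toy 3D masks (the array shapes and values below are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 3D masks standing in for a pancreas label on a CT volume.
ref = np.zeros((4, 8, 8), dtype=bool)
ref[1:3, 2:6, 2:6] = True
pred = np.zeros_like(ref)
pred[1:3, 2:6, 3:6] = True  # slightly shifted prediction

print(round(dice_coefficient(pred, ref), 4))  # → 0.8571
```

A perfect prediction gives 1.0; the shifted toy prediction above loses one voxel column and scores about 0.86.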
Affiliation(s)
- Chengkang Li
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yishen Mao
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Shuyu Liang
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Ji Li
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yuanyuan Wang
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yi Guo
- School of Information Science and Technology of Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
2
Bakx N, Van der Sangen M, Theuws J, Bluemink J, Hurkmans C. Comparison of the use of a clinically implemented deep learning segmentation model with the simulated study setting for breast cancer patients receiving radiotherapy. Acta Oncol 2024; 63:477-481. [PMID: 38899395] [DOI: 10.2340/1651-226x.2024.34986]
Abstract
BACKGROUND Deep learning (DL) models for auto-segmentation in radiotherapy have been studied extensively in retrospective and pilot settings, but those settings might not reflect clinical practice. This study compares the use of a clinically implemented, in-house trained DL segmentation model for breast cancer with a previously performed pilot study to assess possible differences in performance or acceptability. MATERIAL AND METHODS Sixty patients receiving whole-breast radiotherapy, with or without an indication for locoregional radiotherapy, were included. Structures were qualitatively scored by radiotherapy technologists and radiation oncologists. Quantitative evaluation was performed using the Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff distance (95%HD), and the surface DSC (sDSC), and the time needed for generating, checking, and correcting structures was measured. RESULTS Ninety-three percent of all contours in the clinic were scored as clinically acceptable or usable as a starting point, comparable to the 92% achieved in the pilot study. Compared to the pilot study, no significant changes in time reduction were achieved for organs at risk (OARs). For target volumes, significantly more time was needed than in the pilot study for patients whose targets included lymph node levels 1-4, although the time reduction was still 33% compared to manual segmentation. Almost all contours had better DSC and 95%HD than interobserver variations; only CTVn4 scored worse on both metrics, and the thyroid had a higher 95%HD value. INTERPRETATION Use of the DL model in clinical practice is comparable to the pilot study, showing high acceptability rates and time reduction.
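The 95%HD metric used above is the 95th percentile of the symmetric surface distances between two contours; it is less sensitive to single outlier points than the plain Hausdorff distance. A sketch assuming contours sampled as 2D points in millimeters (the grids below are toy data, not the study's contours):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    d = cdist(points_a, points_b)   # all pairwise Euclidean distances
    a_to_b = d.min(axis=1)          # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)          # and vice versa
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))

# Toy contours: two 5x5 point grids offset by 1 mm along x.
a = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
b = a + np.array([1.0, 0.0])
print(hd95(a, b))  # → 1.0
```

In practice the point sets would be the surface voxels of the auto- and manual contours, scaled by the CT voxel spacing.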
Affiliation(s)
- Nienke Bakx
- Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Jacqueline Theuws
- Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Johanna Bluemink
- Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Coen Hurkmans
- Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands; Technical University Eindhoven, Departments of Applied Physics and Electrical Engineering, Eindhoven, The Netherlands
3
Zeverino M, Piccolo C, Marguet M, Jeanneret-Sozzi W, Bourhis J, Bochud F, Moeckli R. Sensitivity of automated and manual treatment planning approaches to contouring variation in early-breast cancer treatment. Phys Med 2024; 123:103402. [PMID: 38875932] [DOI: 10.1016/j.ejmp.2024.103402]
Abstract
PURPOSE One of the advantages of integrating automated processes into treatment planning is the reduction of manual planning variability. This study aims to assess whether a deep-learning-based auto-planning solution can also reduce the impact of contouring variation on the planned dose for early-breast-cancer treatment. METHODS Auto- and manual plans were optimized for 20 patients using both auto- and manual OARs, including both lungs, the right breast, the heart, and the left anterior descending (LAD) artery. Differences in terms of recalculated dose (ΔDrcM, ΔDrcA) and reoptimized dose (ΔDroM, ΔDroA) for manual (M) and auto (A) plans were evaluated on the manual structures. The correlation between several geometric similarity measures and the dose differences was also explored (Spearman's test). RESULTS Auto-contours were slightly smaller than manual contours for the right breast and heart, and more than twice as large for the LAD. Recalculated dose differences were negligible for both planning approaches except for the heart (ΔDrcM = -0.4 Gy, ΔDrcA = -0.3 Gy) and right breast (ΔDrcM = -1.2 Gy, ΔDrcA = -1.3 Gy) maximum dose. Reoptimized dose differences were equivalent to recalculated ones for both lungs and the LAD, while they were significantly smaller for the heart (ΔDroM = -0.2 Gy, ΔDroA = -0.2 Gy) and right breast (ΔDroM = -0.3 Gy, ΔDroA = -0.9 Gy) maximum dose. Twenty-one correlations were found for ΔDrcM,A (M = 8, A = 13), which reduced to four for ΔDroM,A (M = 3, A = 1). CONCLUSIONS The sensitivity of auto-planning to contouring variation was not relevant compared to manual planning, regardless of the method used to calculate the dose differences. Nonetheless, the method employed to define the dose differences strongly affected the correlation analysis, which was markedly reduced when the dose was reoptimized, regardless of the planning approach.
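Spearman's test, as applied here, checks for a monotonic relationship between a geometric similarity measure and the resulting dose difference. A sketch with hypothetical per-patient values (not the study's data): contour similarity as DSC against an absolute dose difference in Gy.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values: geometric similarity between auto- and
# manual contours (DSC) and the resulting absolute dose difference in Gy.
dsc = np.array([0.95, 0.91, 0.88, 0.84, 0.80, 0.76, 0.70, 0.65])
dose_diff = np.array([0.1, 0.2, 0.2, 0.4, 0.5, 0.9, 1.1, 1.4])

rho, p = spearmanr(dsc, dose_diff)
print(round(rho, 2))  # strong negative monotonic association in this toy data
```

A rho near -1 would mean that lower contour agreement goes hand in hand with larger dose deviations; the study found such correlations largely vanished once plans were reoptimized.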
Affiliation(s)
- Michele Zeverino
- Institute of Radiation Physics, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Consiglia Piccolo
- Institute of Radiation Physics, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Maud Marguet
- Institute of Radiation Physics, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Wendy Jeanneret-Sozzi
- Radiation Oncology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Jean Bourhis
- Radiation Oncology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Francois Bochud
- Institute of Radiation Physics, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Raphaël Moeckli
- Institute of Radiation Physics, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
4
Sahlsten J, Jaskari J, Wahid KA, Ahmed S, Glerean E, He R, Kann BH, Mäkitie A, Fuller CD, Naser MA, Kaski K. Application of simultaneous uncertainty quantification and segmentation for oropharyngeal cancer use-case with Bayesian deep learning. Commun Med 2024; 4:110. [PMID: 38851837] [PMCID: PMC11162474] [DOI: 10.1038/s43856-024-00528-5]
Abstract
BACKGROUND Radiotherapy is a core treatment modality for oropharyngeal cancer (OPC), where the primary gross tumor volume (GTVp) is manually segmented with high interobserver variability. This calls for reliable and trustworthy automated tools in the clinician workflow; therefore, accurate uncertainty quantification and its downstream utilization are critical. METHODS Here we propose uncertainty-aware deep learning for OPC GTVp segmentation and illustrate the utility of uncertainty in multiple applications. We examine two Bayesian deep learning (BDL) models and eight uncertainty measures, and utilize a large multi-institute dataset of 292 PET/CT scans to systematically analyze our approach. RESULTS We show that our uncertainty-based approach accurately predicts the quality of the deep learning segmentation in 86.6% of cases, identifies low-performance cases for semi-automated correction, and visualizes regions of the scans where the segmentations are likely to fail. CONCLUSIONS Our BDL-based analysis provides a first step toward more widespread implementation of uncertainty quantification in OPC GTVp segmentation.
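One widely used uncertainty measure for Bayesian segmentation models is the voxelwise entropy of the mean foreground probability over several stochastic forward passes; whether it is among the eight measures examined here is not stated, so treat the following as an illustrative sketch:

```python
import numpy as np

def predictive_entropy(mc_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Voxelwise binary entropy of the mean foreground probability.

    mc_probs: shape (T, ...) with foreground probabilities from T MC samples.
    """
    p = mc_probs.mean(axis=0)
    return -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))

# Three stochastic forward passes over four voxels: the model agrees on the
# first and last voxels and disagrees in the middle two.
samples = np.array([
    [0.99, 0.80, 0.30, 0.01],
    [0.98, 0.20, 0.70, 0.02],
    [0.99, 0.50, 0.50, 0.01],
])
h = predictive_entropy(samples)
print(round(float(h[0]), 2), round(float(h[1]), 2))  # → 0.1 1.0
```

High-entropy voxels (near 1 bit) mark regions where the segmentation is likely to fail, which is exactly the kind of map the authors use for visualization and semi-automated correction.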
Affiliation(s)
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Benjamin H Kann
- Artificial Intelligence in Medicine Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Antti Mäkitie
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Research Program in Systems Oncology, University of Helsinki, Helsinki, Finland
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
5
Takeya A, Watanabe K, Haga A. Fine structural human phantom in dentistry and instance tooth segmentation. Sci Rep 2024; 14:12630. [PMID: 38824210] [PMCID: PMC11144222] [DOI: 10.1038/s41598-024-63319-x]
Abstract
In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. A deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate agreement with manual contouring when the model is applied to clinical CBCT data: the Dice similarity coefficient exceeded 0.87, indicating robust performance even though only virtual imaging was used for training. These results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision to enhance precision and efficiency in dental imaging processes.
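Varying noise intensity across generated training volumes can be sketched schematically. The study's virtual CBCT system models the X-ray spectrum and projection effects; the toy example below assumes simple additive Gaussian noise purely to illustrate how one knob produces many training variants:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_projection_noise(volume: np.ndarray, noise_sd: float) -> np.ndarray:
    """Toy stand-in for varying the noise intensity of a virtual CBCT scan."""
    return volume + rng.normal(0.0, noise_sd, size=volume.shape)

phantom = np.full((8, 8, 8), 1000.0)  # uniform toy phantom (HU-like values)
variants = [add_projection_noise(phantom, sd) for sd in (5.0, 20.0, 80.0)]
spreads = [float(v.std()) for v in variants]
print(spreads[0] < spreads[1] < spreads[2])  # → True
```

Sweeping several such parameters combinatorially (densities, sizes, spectrum, noise, cutoff) is what yields a dataset on the order of 170,000 variants from a single phantom.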
Affiliation(s)
- Atsushi Takeya
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Keiichiro Watanabe
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Akihiro Haga
- Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
6
Wahid KA, Sahin O, Kundu S, Lin D, Alanis A, Tehami S, Kamel S, Duke S, Sherer MV, Rasmussen M, Korreman S, Fuentes D, Cislo M, Nelms BE, Christodouleas JP, Murphy JD, Mohamed ASR, He R, Naser MA, Gillespie EF, Fuller CD. Associations Between Radiation Oncologist Demographic Factors and Segmentation Similarity Benchmarks: Insights From a Crowd-Sourced Challenge Using Bayesian Estimation. JCO Clin Cancer Inform 2024; 8:e2300174. [PMID: 38870441] [DOI: 10.1200/cci.23.00174]
Abstract
PURPOSE The quality of radiotherapy auto-segmentation training data, primarily derived from clinician observers, is of utmost importance. However, the factors influencing the quality of clinician-derived segmentations are poorly understood; our study aims to quantify these factors. METHODS Organ at risk (OAR) and tumor-related segmentations provided by radiation oncologists from the Contouring Collaborative for Consensus in Radiation Oncology data set were used. Segmentations were derived from five disease sites: breast, sarcoma, head and neck (H&N), gynecologic (GYN), and GI. Segmentation quality was determined on a structure-by-structure basis by comparing the observer segmentations with an expert-derived consensus, which served as a reference standard benchmark. The Dice similarity coefficient (DSC) was primarily used as a metric for the comparisons. DSC was stratified into binary groups on the basis of structure-specific expert-derived interobserver variability (IOV) cutoffs. Generalized linear mixed-effects models using Bayesian estimation were used to investigate the association between demographic variables and the binarized DSC for each disease site. Variables with a highest density interval excluding zero were considered to substantially affect the outcome measure. RESULTS Five hundred seventy-four, 110, 452, 112, and 48 segmentations were used for the breast, sarcoma, H&N, GYN, and GI cases, respectively. The median percentage of segmentations that crossed the expert DSC IOV cutoff, stratified by structure type, was 55% for OARs and 31% for tumors. Regression analysis revealed that the structure being tumor-related had a substantial negative impact on binarized DSC for the breast, sarcoma, H&N, and GI cases. There were no recurring relationships between segmentation quality and demographic variables across the cases, with most variables demonstrating large standard deviations. CONCLUSION Our study highlights substantial uncertainty surrounding conventionally presumed factors influencing segmentation quality relative to benchmarks.
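The binarization step described above can be sketched as follows, with hypothetical structure names and IOV cutoffs (the study's expert-derived cutoffs are not reproduced here):

```python
# Hypothetical structure-specific interobserver-variability (IOV) DSC cutoffs.
iov_cutoff = {"breast_ctv": 0.85, "heart": 0.90, "gtv": 0.70}

# (structure, observer DSC vs. expert consensus) pairs, also hypothetical.
observations = [
    ("breast_ctv", 0.88),
    ("heart", 0.86),
    ("gtv", 0.74),
    ("gtv", 0.62),
]

# Binarize each observer DSC against its structure's cutoff, as done before
# fitting the mixed-effects models: 1 = crossed the IOV benchmark.
binarized = [int(dsc >= iov_cutoff[s]) for s, dsc in observations]
print(binarized)  # → [1, 0, 1, 0]
```

The binary outcome then serves as the response variable in the Bayesian generalized linear mixed-effects regressions against demographic covariates.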
Affiliation(s)
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Onur Sahin
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Suprateek Kundu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Diana Lin
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY
- Anthony Alanis
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Salik Tehami
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Serageldin Kamel
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Simon Duke
- Department of Radiation Oncology, Cambridge University Hospitals, Cambridge, United Kingdom
- Michael V Sherer
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA
- Mathis Rasmussen
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Stine Korreman
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- David Fuentes
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Michael Cislo
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY
- John P Christodouleas
- Department of Radiation Oncology, The University of Pennsylvania Cancer Center, Philadelphia, PA
- Elekta, Atlanta, GA
- James D Murphy
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Mohammed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Erin F Gillespie
- Department of Radiation Oncology, University of Washington Fred Hutchinson Cancer Center, Seattle, WA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
Collapse
|
7
|
Alrashdi I. Fog-based deep learning framework for real-time pandemic screening in smart cities from multi-site tomographies. BMC Med Imaging 2024; 24:123. [PMID: 38797827] [PMCID: PMC11129417] [DOI: 10.1186/s12880-024-01302-8]
Abstract
The rapid proliferation of pandemic diseases has imposed many concerns on international health infrastructure. To combat pandemic diseases in smart cities, Artificial Intelligence of Things (AIoT) technology, based on the integration of artificial intelligence (AI) with the Internet of Things (IoT), is commonly used to promote efficient control and diagnosis during an outbreak, thereby minimizing possible losses. However, the presence of multi-source institutional data remains one of the major challenges hindering the practical use of AIoT solutions for pandemic disease diagnosis. This paper presents a novel framework that utilizes multi-site data fusion to boost the accuracy of pandemic disease diagnosis. In particular, we focus on a case study of COVID-19 lesion segmentation, a crucial task for understanding disease progression and optimizing treatment strategies. We propose a novel multi-decoder segmentation network for efficient segmentation of infections from cross-domain CT scans in smart cities. The multi-decoder segmentation network leverages data from heterogeneous domains and utilizes strong learning representations to accurately segment infections. Performance evaluation of the network was conducted on three publicly accessible datasets, demonstrating robust results with an average Dice score of 89.9% and an average surface Dice of 86.87%. To address the scalability and latency issues associated with centralized cloud systems, fog computing (FC) emerges as a viable solution: FC brings resources closer to the operator, offering low-latency and energy-efficient data management and processing. In this context, we propose an FC technique called PANDFOG to deploy the multi-decoder segmentation network on edge nodes for practical and clinical applications of automated COVID-19 pneumonia analysis. The results of this study highlight the efficacy of the multi-decoder segmentation network in accurately segmenting infections from cross-domain CT scans. Moreover, the proposed PANDFOG system demonstrates practical deployment of the network on edge nodes, providing real-time access to COVID-19 segmentation findings for improved patient monitoring and clinical decision-making.
Affiliation(s)
- Ibrahim Alrashdi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, 72388, Sakaka, Aljouf, Saudi Arabia
8
Nielsen CP, Lorenzen EL, Jensen K, Eriksen JG, Johansen J, Gyldenkerne N, Zukauskaite R, Kjellgren M, Maare C, Lønkvist CK, Nowicka-Matus K, Szejniuk WM, Farhadi M, Ujmajuridze Z, Marienhagen K, Johansen TS, Friborg J, Overgaard J, Hansen CR. Interobserver variation in organs at risk contouring in head and neck cancer according to the DAHANCA guidelines. Radiother Oncol 2024; 197:110337. [PMID: 38772479] [DOI: 10.1016/j.radonc.2024.110337]
Affiliation(s)
- Camilla Panduro Nielsen
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Ebbe L Lorenzen
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Kenneth Jensen
- Danish Centre for Particle Therapy, Aarhus University Hospital, Denmark
- Jesper Grau Eriksen
- Department of Oncology, Aarhus University Hospital, Denmark; Department of Experimental Clinical Oncology, Aarhus University Hospital, Denmark
- Jørgen Johansen
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Danish Centre for Particle Therapy, Aarhus University Hospital, Denmark; Department of Oncology, Odense University Hospital, Denmark
- Ruta Zukauskaite
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Oncology, Odense University Hospital, Denmark
- Martin Kjellgren
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark
- Christian Maare
- Department of Oncology, Copenhagen University Hospital Herlev, Denmark
- Kinga Nowicka-Matus
- Department of Oncology & Clinical Cancer Research Center, Aalborg University Hospital, Denmark
- Weronika Maria Szejniuk
- Danish Centre for Particle Therapy, Aarhus University Hospital, Denmark; Department of Oncology & Clinical Cancer Research Center, Aalborg University Hospital, Denmark; Department of Clinical Medicine, Aalborg University, Denmark
- Mohammad Farhadi
- Department of Oncology, Zealand University Hospital Næstved, Denmark
- Zaza Ujmajuridze
- Department of Oncology, Zealand University Hospital Næstved, Denmark
- Tanja Stagaard Johansen
- Danish Centre for Particle Therapy, Aarhus University Hospital, Denmark; Department of Oncology, Rigshospitalet, Denmark
- Jens Overgaard
- Department of Experimental Clinical Oncology, Aarhus University Hospital, Denmark
- Christian Rønn Hansen
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Danish Centre for Particle Therapy, Aarhus University Hospital, Denmark
9
Koitka S, Baldini G, Kroll L, van Landeghem N, Pollok OB, Haubold J, Pelka O, Kim M, Kleesiek J, Nensa F, Hosch R. SAROS: A dataset for whole-body region and organ segmentation in CT imaging. Sci Data 2024; 11:483. [PMID: 38729970] [PMCID: PMC11087485] [DOI: 10.1038/s41597-024-03337-6]
Abstract
The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body-region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body-part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of the proposals were conducted in a continuous quality-control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. For reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.
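The sparse-annotation and fold scheme can be sketched in a few lines. The round-robin fold assignment below is an assumption made purely for illustration (SAROS ships its own pre-defined folds and test set):

```python
# Sketch: only every fifth axial slice of each CT is annotated,
# and cases are assigned to five cross-validation folds.
num_slices = 42
annotated = list(range(0, num_slices, 5))  # axial slice indices to annotate
print(len(annotated), annotated[:3])       # → 9 [0, 5, 10]

case_ids = [f"case_{i:03d}" for i in range(12)]      # hypothetical case IDs
folds = {i: case_ids[i::5] for i in range(5)}        # round-robin assignment
print(sorted(len(v) for v in folds.values()))        # → [2, 2, 2, 3, 3]
```

Annotating only every fifth slice is what keeps 900 CTs tractable while still yielding over 20,000 annotated slices.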
Affiliation(s)
- Sven Koitka
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Giulia Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Lennard Kroll
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Natalie van Landeghem
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Olivia B Pollok
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Johannes Haubold
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Obioma Pelka
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Data Integration Center, Central IT Department, University Hospital Essen, Essen, Germany
- Moon Kim
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- René Hosch
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
10
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715] [PMCID: PMC11023777] [DOI: 10.1016/j.ijrobp.2023.10.033]
Abstract
Deep learning neural networks (DLNNs) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based auto-segmentation models have shown high accuracy in early studies conducted in research settings and controlled, single-institution environments. Vendor-provided commercial AI models are made available as part of an integrated treatment planning system (TPS) or as stand-alone tools with a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the manual contouring workload and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Standardization of contouring nomenclature and guidelines has been a main task undertaken by NRG Oncology. For clinical trial participants, AI auto-segmentation holds the potential to reduce interobserver variations, nomenclature non-compliance, and contouring guideline deviations, while trial reviewers could use AI tools to verify the contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of these tools, NRG Oncology has formed a working group to assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, along with precautions concerning the recognized challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
11
Tada DK, Teng P, Vyapari K, Banola A, Foster G, Diaz E, Kim GHJ, Goldin JG, Abtin F, McNitt-Gray M, Brown MS. Quantifying lung fissure integrity using a three-dimensional patch-based convolutional neural network on CT images for emphysema treatment planning. J Med Imaging (Bellingham) 2024; 11:034502. [PMID: 38817711] [PMCID: PMC11135203] [DOI: 10.1117/1.jmi.11.3.034502]
Abstract
Purpose Evaluation of lung fissure integrity is required to determine whether emphysema patients have complete fissures and are candidates for endobronchial valve (EBV) therapy. We propose a deep learning (DL) approach to segment fissures using a three-dimensional patch-based convolutional neural network (CNN) and to quantitatively assess fissure integrity on CT in subjects with severe emphysema. Approach From an anonymized image database of patients with severe emphysema, 129 CT scans were used. Lung lobe segmentations were performed to identify lobar regions, and the boundaries among these regions were used to construct approximate interlobar regions of interest (ROIs). The interlobar ROIs were annotated by expert image analysts to identify voxels where the fissure was present and to create a reference ROI that excluded non-fissure voxels (where the fissure is incomplete). A CNN configured by nnU-Net was trained using 86 CT scans and their corresponding reference ROIs to segment the ROIs of the left oblique fissure (LOF), right oblique fissure (ROF), and right horizontal fissure (RHF). For an independent test set of 43 cases, fissure integrity was quantified by mapping the segmented fissure ROI along the interlobar ROI. A fissure integrity score (FIS) was then calculated as the percentage of labeled fissure voxels divided by the total voxels in the interlobar ROI. The predicted FIS (p-FIS) was quantified from the CNN output, and statistical analyses were performed comparing the p-FIS and reference FIS (r-FIS). Results The mean (±SD) absolute percent error between r-FIS and p-FIS for the test set was 4.0% (±4.1%), 6.0% (±9.3%), and 12.2% (±12.5%) for the LOF, ROF, and RHF, respectively. Conclusions A DL approach was developed to segment lung fissures on CT images and accurately quantify the FIS. It has potential to assist in identifying emphysema patients who would benefit from EBV treatment.
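The FIS described above (labeled fissure voxels as a percentage of all interlobar-ROI voxels) reduces to a simple binary-mask computation; a minimal sketch assuming NumPy boolean masks, not the authors' implementation:

```python
import numpy as np

def fissure_integrity_score(fissure_roi: np.ndarray, interlobar_roi: np.ndarray) -> float:
    """FIS: percentage of interlobar-ROI voxels that carry a fissure label."""
    interlobar = interlobar_roi.astype(bool)
    # Count only fissure voxels that lie inside the interlobar ROI.
    fissure = fissure_roi.astype(bool) & interlobar
    total = int(interlobar.sum())
    return 100.0 * int(fissure.sum()) / total if total else 0.0
```

For example, a fissure mask covering 2 of 8 interlobar voxels yields a FIS of 25%.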
Affiliation(s)
- Dallas K. Tada
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Pangyu Teng
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Kalyani Vyapari
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Ashley Banola
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- George Foster
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Esteban Diaz
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Grace Hyun J. Kim
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Jonathan G. Goldin
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Fereidoun Abtin
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Michael McNitt-Gray
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
- Matthew S. Brown
- The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
12
Strasberg HR, Jackson GP, Bakken SR, Boxwala A, Richardson JE, Morrow JD. Perspectives on the role of industry in informatics research and authorship. J Am Med Inform Assoc 2024; 31:1206-1210. [PMID: 38531679] [PMCID: PMC11031207] [DOI: 10.1093/jamia/ocae063]
Abstract
OBJECTIVES Advances in informatics research come from academic, nonprofit, and for-profit industry organizations, and from academic-industry partnerships. While scientific studies of commercial products may offer critical lessons for the field, manuscripts authored by industry scientists are sometimes categorically rejected. We review historical context, community perceptions, and guidelines on informatics authorship. PROCESS We convened an expert panel at the American Medical Informatics Association 2022 Annual Symposium to explore the role of industry in informatics research and authorship with community input. The panel summarized session themes and prepared recommendations. CONCLUSIONS Authorship for informatics research, regardless of affiliation, should be determined by International Committee of Medical Journal Editors uniform requirements for authorship. All authors meeting criteria should be included, and categorical rejection based on author affiliation is unethical. Informatics research should be evaluated based on its scientific rigor; all sources of bias and conflicts of interest should be addressed through disclosure and, when possible, methodological mitigation.
Affiliation(s)
- Howard R Strasberg
- Clinical Effectiveness, Wolters Kluwer Health, Waltham, MA 02451, United States
- Gretchen Purcell Jackson
- Intuitive Surgical, Sunnyvale, CA 94086, United States
- Department of Pediatric Surgery, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Suzanne R Bakken
- School of Nursing, Department of Biomedical Informatics, and Data Science Institute, Columbia University, New York, NY 10032, United States
- Aziz Boxwala
- Elimu Informatics, La Jolla, CA 92037, United States
- Joshua E Richardson
- Center for Informatics, RTI International, Berkeley, CA 94704, United States
- Jon D Morrow
- Department of Obstetrics and Gynecology, New York University School of Medicine, New York, NY 10016, United States
13
Mody P, Huiskes M, Chaves-de-Plaza NF, Onderwater A, Lamsma R, Hildebrandt K, Hoekstra N, Astreinidou E, Staring M, Dankers F. Large-scale dose evaluation of deep learning organ contours in head-and-neck radiotherapy by leveraging existing plans. Phys Imaging Radiat Oncol 2024; 30:100572. [PMID: 38633281] [PMCID: PMC11021837] [DOI: 10.1016/j.phro.2024.100572]
Abstract
Background and purpose Retrospective dose evaluation for organ-at-risk auto-contours has previously used small cohorts due to the additional manual effort required for treatment planning on auto-contours. We aimed to do this at large scale by (a) proposing and assessing an automated plan optimization workflow that reuses existing clinical plan parameters and (b) using it for head-and-neck auto-contour dose evaluation. Materials and methods Our automated workflow emulated our clinic's treatment planning protocol and reused existing clinical plan optimization parameters. This workflow recreated the original clinical plan (P_OG) with manual contours (P_MC) and evaluated the dose effect (P_OG - P_MC) on 70 photon and 30 proton plans of head-and-neck patients. As a use case, the same workflow (and parameters) created a plan using auto-contours (P_AC) of eight head-and-neck organs-at-risk from a commercial tool and evaluated their dose effect (P_MC - P_AC). Results For plan recreation (P_OG - P_MC), our workflow had a median impact of 1.0% and 1.5% across dose metrics, for photon and proton plans respectively. Computer time for automated planning was 25% (photon) and 42% (proton) of manual planning time. For auto-contour evaluation (P_MC - P_AC), we observed an impact of 2.0% and 2.6% for photon and proton radiotherapy. All evaluations had a median ΔNTCP (normal tissue complication probability) of less than 0.3%. Conclusions The plan replication capability of our automated program provides a blueprint for other clinics to perform auto-contour dose evaluation with large patient cohorts. Finally, despite geometric differences, auto-contours had a minimal median dose impact, inspiring confidence in their utility and facilitating their clinical adoption.
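The reported per-modality impact is a median relative difference across paired dose metrics; a hedged sketch of that comparison using hypothetical metric arrays (not the authors' code):

```python
import numpy as np

def median_relative_impact(ref_metrics, test_metrics) -> float:
    """Median absolute relative difference (%) across paired dose metrics.

    Assumes each entry is the same dose metric evaluated on two plans
    (e.g., reference vs. recreated) and that reference values are nonzero.
    """
    ref = np.asarray(ref_metrics, dtype=float)
    test = np.asarray(test_metrics, dtype=float)
    rel = 100.0 * np.abs(test - ref) / np.abs(ref)
    return float(np.median(rel))
```

For example, reference metrics [10, 20, 40] Gy against [11, 20, 38] Gy give per-metric impacts of 10%, 0%, and 5%, so the median impact is 5%.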
Affiliation(s)
- Prerak Mody
- Division of Image Processing (LKEB), Department of Radiology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- HollandPTC consortium – Erasmus Medical Center, Rotterdam, Holland Proton Therapy Centre, Delft, Leiden University Medical Center (LUMC), Leiden and Delft University of Technology, Delft, The Netherlands
- Merle Huiskes
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Nicolas F. Chaves-de-Plaza
- HollandPTC consortium – Erasmus Medical Center, Rotterdam, Holland Proton Therapy Centre, Delft, Leiden University Medical Center (LUMC), Leiden and Delft University of Technology, Delft, The Netherlands
- Computer Graphics and Visualization Group, EEMCS, TU Delft, Delft 2628 CD, The Netherlands
- Alice Onderwater
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Rense Lamsma
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Klaus Hildebrandt
- Computer Graphics and Visualization Group, EEMCS, TU Delft, Delft 2628 CD, The Netherlands
- Nienke Hoekstra
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Eleftheria Astreinidou
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Marius Staring
- Division of Image Processing (LKEB), Department of Radiology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
- Frank Dankers
- Department of Radiation Oncology, Leiden University Medical Center, Leiden 2333 ZA, The Netherlands
14
Huang Y, Yang J, Sun Q, Yuan Y, Li H, Hou Y. Multi-residual 2D network integrating spatial correlation for whole heart segmentation. Comput Biol Med 2024; 172:108261. [PMID: 38508056] [DOI: 10.1016/j.compbiomed.2024.108261]
Abstract
Whole heart segmentation (WHS) has significant clinical value for cardiac anatomy, modeling, and analysis of cardiac function. This study aims to address WHS accuracy on cardiac CT images, as well as the fast inference speed and low graphics processing unit (GPU) memory consumption required by practical clinical applications. Thus, we propose a multi-residual two-dimensional (2D) network integrating spatial correlation for WHS. The network performs slice-by-slice segmentation on three-dimensional cardiac CT images in a 2D encoder-decoder manner. In the network, a convolutional long short-term memory skip connection module is designed to perform spatial correlation feature extraction on the feature maps at different resolutions extracted by the sub-modules of the pre-trained ResNet-based encoder. Moreover, a decoder based on the multi-residual module is designed to analyze the extracted features from the perspectives of multi-scale and channel attention, thereby accurately delineating the various substructures of the heart. The proposed method is verified on a dataset of the multi-modality WHS challenge, an in-house WHS dataset, and a dataset of the abdominal organ segmentation challenge. The Dice coefficient, Jaccard index, average symmetric surface distance, Hausdorff distance, inference time, and maximum GPU memory for WHS are 0.914, 0.843, 1.066 mm, 15.778 mm, 9.535 s, and 1905 MB, respectively. The proposed network has high accuracy, fast inference speed, minimal GPU memory consumption, strong robustness, and good generalization. It can be deployed in practical clinical applications for WHS and can be effectively extended to other multi-organ segmentation tasks. The source code is publicly available at https://github.com/nancy1984yan/MultiResNet-SC.
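The Dice and Jaccard overlap scores reported here are derivable from the same three voxel counts, and are related by J = D / (2 - D); a small illustrative helper (not from the paper's repository):

```python
def overlap_metrics(n_a: int, n_b: int, n_overlap: int) -> tuple:
    """Dice and Jaccard from voxel counts of two masks and their intersection.

    n_a, n_b: voxel counts of the two segmentation masks;
    n_overlap: voxels common to both. Assumes n_a + n_b > 0.
    """
    dice = 2.0 * n_overlap / (n_a + n_b)
    jaccard = n_overlap / (n_a + n_b - n_overlap)  # union = n_a + n_b - n_overlap
    return dice, jaccard
```

With n_a = n_b = 4 and n_overlap = 2, this gives Dice 0.5 and Jaccard 1/3, consistent with J = D / (2 - D).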
Affiliation(s)
- Yan Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, Liaoning, China
- Qi Sun
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yuliang Yuan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Honghe Li
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
15
Welch ML, Kim S, Hope AJ, Huang SH, Lu Z, Marsilla J, Kazmierski M, Rey-McIntyre K, Patel T, O'Sullivan B, Waldron J, Bratman S, Haibe-Kains B, Tadic T. RADCURE: An open-source head and neck cancer CT dataset for clinical radiation therapy insights. Med Phys 2024; 51:3101-3109. [PMID: 38362943] [DOI: 10.1002/mp.16972]
Abstract
PURPOSE This manuscript presents RADCURE, one of the most extensive publicly accessible head and neck cancer (HNC) imaging datasets. Initially collected for clinical radiation therapy (RT) treatment planning, the dataset has been retrospectively reconstructed for use in imaging research. ACQUISITION AND VALIDATION METHODS RADCURE encompasses data from 3346 patients, featuring computed tomography (CT) RT simulation images with corresponding target and organ-at-risk contours. The CT scans were collected on systems from three different manufacturers. Standard clinical imaging protocols were followed, and contours were manually generated and reviewed at weekly RT quality assurance rounds. RADCURE imaging and structure-set data were extracted from our institution's radiation treatment planning and oncology information systems using a custom-built data mining and processing system. Furthermore, images were linked to our clinical anthology of outcomes data for each patient, including demographic, clinical, and treatment information based on the 7th edition TNM staging system (Tumor-Node-Metastasis Classification of Malignant Tumors). The median patient age is 63, and the final dataset is 80% male. Half of the cohort was diagnosed with oropharyngeal cancer, while laryngeal, nasopharyngeal, and hypopharyngeal cancers account for 25%, 12%, and 5% of cases, respectively. The median follow-up is five years, with 60% of the cohort surviving to last follow-up. DATA FORMAT AND USAGE NOTES The dataset provides images and contours in DICOM CT and RT-STRUCT formats, respectively. We have standardized the nomenclature for individual contours, such as the gross primary tumor, gross nodal volumes, and 19 organs-at-risk, to enhance the RT-STRUCT files' utility. Accompanying demographic, clinical, and treatment data are supplied in comma-separated values (CSV) format. This comprehensive dataset is publicly accessible via The Cancer Imaging Archive. POTENTIAL APPLICATIONS RADCURE's combination of imaging, clinical, demographic, and treatment data makes it a valuable resource for a broad spectrum of radiomics and image analysis research. Researchers can use the dataset to advance routine clinical procedures with machine learning or artificial intelligence, to identify new non-invasive biomarkers, or to build prognostic models.
Affiliation(s)
- Mattea L Welch
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Cancer Digital Intelligence Program, Toronto, ON, Canada
- Sejin Kim
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Cancer Digital Intelligence Program, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Andrew J Hope
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Shao Hui Huang
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Zhibin Lu
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Joseph Marsilla
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Michal Kazmierski
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Katrina Rey-McIntyre
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Tirth Patel
- Cancer Digital Intelligence Program, Toronto, ON, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- TECHNA Institute, University Health Network, Toronto, ON, Canada
- Brian O'Sullivan
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- John Waldron
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Scott Bratman
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Benjamin Haibe-Kains
- Princess Margaret Cancer Centre, Toronto, ON, Canada
- Cancer Digital Intelligence Program, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- TECHNA Institute, University Health Network, Toronto, ON, Canada
- Tony Tadic
- Cancer Digital Intelligence Program, Toronto, ON, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
16
Podobnik G, Ibragimov B, Peterlin P, Strojan P, Vrtovec T. vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images. Med Phys 2024; 51:2175-2186. [PMID: 38230752] [DOI: 10.1002/mp.16924]
Abstract
BACKGROUND Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95th-percentile Hausdorff distance (HD95). RESULTS The mean (±standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were found only for specific OARs. The performed MR-to-CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect contouring performance. However, the results indicate that an OAR that is difficult to contour is difficult regardless of whether it is contoured on the CT or MR image, and that observer experience may be an important factor for OARs deemed difficult to contour. Several of the differences in the resulting variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
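The 95th-percentile Hausdorff distance used as an agreement measure here is standard; a compact sketch on contour point sets, assuming point clouds small enough that a dense pairwise distance matrix is acceptable:

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two point sets.

    points_a, points_b: (N, d) and (M, d) arrays of contour/surface points.
    """
    # Dense (N, M) matrix of pairwise Euclidean distances.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbor distance for each point of A
    b_to_a = d.min(axis=0)  # and for each point of B
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))
```

Taking the 95th percentile rather than the maximum makes the measure robust to a few outlier points on either contour.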
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
17
Koo J, Caudell J, Latifi K, Moros EG, Feygelman V. Essentially unedited deep-learning-based OARs are suitable for rigorous oropharyngeal and laryngeal cancer treatment planning. J Appl Clin Med Phys 2024; 25:e14202. [PMID: 37942993] [DOI: 10.1002/acm2.14202]
Abstract
Quality of organ-at-risk (OAR) autosegmentation is often judged by concordance metrics against a human-generated gold standard. However, the ultimate goal is the ability to use unedited autosegmented OARs in treatment planning while maintaining plan quality. We tested this approach with head and neck (HN) OARs generated by a prototype deep-learning (DL) model on patients previously treated for oropharyngeal and laryngeal cancer. Forty patients were selected, with all structures delineated by an experienced physician. For each patient, a set of 13 OARs was generated by the DL model. Each patient was re-planned based on the original targets and the unedited DL-produced OARs. The new dose distributions were then applied back to the manually delineated structures. Target coverage was evaluated with the inhomogeneity index (II) and the relative volume of regret. For the OARs, the Dice similarity coefficient (DSC) of areas under the DVH curves, individual DVH objectives, and a composite continuous plan quality metric (PQM) were compared. Nearly identical primary target coverage was achieved for the original and re-generated plans, with the same II and relative volume of regret values. The average DSC of the areas under the corresponding pairs of DVH curves was 0.97 ± 0.06. The number of critical DVH points that met the clinical objectives with the dose optimized on autosegmented structures but failed when evaluated on the manual ones was 5 of 896 (0.6%). The average OAR PQM score with the re-planned dose distributions was essentially the same whether evaluated on the autosegmented or manual OARs. Thus, rigorous HN treatment planning is possible with OARs segmented by a prototype DL algorithm with minimal, if any, manual editing.
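The "DSC of areas under the DVH curves" can be read as a Dice-style ratio of the overlapping area (the area under the pointwise minimum of the two curves) to the sum of both areas; a hypothetical sketch of that reading, since the authors' exact definition may differ:

```python
import numpy as np

def _trapezoid(y, x) -> float:
    """Trapezoidal area under curve y sampled at points x."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dvh_area_dice(dose, vol_a, vol_b) -> float:
    """Dice-style agreement of areas under two DVH curves (one plausible reading).

    dose: shared dose axis; vol_a, vol_b: volume values of the two DVHs.
    """
    overlap = _trapezoid(np.minimum(vol_a, vol_b), dose)
    return 2.0 * overlap / (_trapezoid(vol_a, dose) + _trapezoid(vol_b, dose))
```

Identical curves score exactly 1.0, and the score decreases as the two DVHs diverge.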
Affiliation(s)
- Jihye Koo
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Department of Physics, University of South Florida, Tampa, Florida, USA
- Jimmy Caudell
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Kujtim Latifi
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Eduardo G Moros
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Vladimir Feygelman
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
18
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, et al. Understanding metric-related pitfalls in image analysis validation. ArXiv 2024: arXiv:2302.01790v4. [PMID: 36945687] [PMCID: PMC10029046]
Abstract
Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
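One pitfall of the kind catalogued in this work is metric sensitivity to structure size; a toy demonstration (not from the paper) that an identical one-pixel boundary error yields very different Dice scores for small versus large structures:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient of two binary masks."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def shift_down(mask: np.ndarray) -> np.ndarray:
    """Simulate a 1-pixel segmentation shift along the first axis."""
    return np.roll(mask, 1, axis=0)

small = np.zeros((32, 32), bool); small[10:12, 10:12] = True  # 2x2 structure
large = np.zeros((32, 32), bool); large[4:24, 4:24] = True    # 20x20 structure

print(dice(small, shift_down(small)))  # 0.5  -- tiny structure penalized heavily
print(dice(large, shift_down(large)))  # 0.95 -- same error, much milder penalty
```

The same boundary error costs the 2x2 structure half its Dice score but the 20x20 structure only 5%, which is why size-aware metric choice matters when validating segmentations of small organs or lesions.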
Affiliation(s)
- Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany and Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Minu D. Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany and National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany and National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- A. Emre Kavur
- HI Applied Computer Vision Lab, Division of Medical Image Computing; German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany
- Carole H. Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK and School of Biomedical Engineering and Imaging Science, King’s College London, London, UK
- Laura Acion
- Instituto de Cálculo, CONICET – Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King’s College London, London, UK and Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel
- Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montreal, Canada
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, IU Health Information and Translational Sciences Building, Indianapolis, USA and Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Richards Medical Research Laboratories FL7, Philadelphia, PA, USA
- Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel and European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Matthew B. Blaschko
- Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Leuven, Belgium
- Florian Buettner
- German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Germany, German Cancer Research Center (DKFZ) Heidelberg, Germany, Goethe University Frankfurt, Department of Medicine, Germany, Goethe University Frankfurt, Department of Informatics, Germany, and Frankfurt Cancer Institute, Germany
- M. Jorge Cardoso
- School of Biomedical Engineering and Imaging Science, King’s College London, London, UK
- Veronika Cheplygina
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Dortmund, Germany
- Evangelia Christodoulou
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Beth A. Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, USA
- Gary S. Collins
- Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer
- Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Universitaria, Ciudad Autónoma de Buenos Aires, Argentina
- Adrian Galdran
- Universitat Pompeu Fabra, Barcelona, Spain and University of Adelaide, Adelaide, Australia
- Bram van Ginneken
- Fraunhofer MEVIS, Bremen, Germany and Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Ben Glocker
- Department of Computing, Imperial College London, London, UK
- Patrick Godau
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany, Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany, and National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- Robert Haase
- Now with: Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig University, Leipzig, Germany, DFG Cluster of Excellence “Physics of Life”, Technische Universität (TU) Dresden, Dresden, Germany, and Center for Systems Biology, Dresden, Germany
- Daniel A. Hashimoto
- Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA and General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M. Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Canada, Department of Medical Biophysics, University of Toronto, Toronto, Canada, Department of Computer Science, University of Toronto, Toronto, Canada, and Vector Institute for Artificial Intelligence, Toronto, Canada
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Fabian Isensee
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing and HI Applied Computer Vision Lab, Germany
- Pierre Jannin
- Laboratoire Traitement du Signal et de l’Image – UMR_S 1099, Université de Rennes 1, Rennes, France and INSERM, Paris Cedex, France
- Charles E. Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany and University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK and Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Hannes Kenngott
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Jens Kleesiek
- Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Klaus Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing and HI Helmholtz Imaging, Germany and Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L. Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada and Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G.M. Moons
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, The Netherlands
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland and Medical Faculty, University of Geneva, Geneva, Switzerland
- Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Nasir Rajpoot
- Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland and Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A. Riegler
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway and UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez
- Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany and Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I. Sánchez
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- Abdel A. Taha
- Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland and Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster
- Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium and Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands
- Gaël Varoquaux
- Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R. Yaniv
- National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD, USA
- Paul F. Jäger
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group and HI Helmholtz Imaging, Germany
- Lena Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany, Faculty of Mathematics and Computer Science and Medical Faculty, Heidelberg University, Heidelberg, Germany, and National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
19
Walter A, Hoegen-Saßmannshausen P, Stanic G, Rodrigues JP, Adeberg S, Jäkel O, Frank M, Giske K. Segmentation of 71 Anatomical Structures Necessary for the Evaluation of Guideline-Conforming Clinical Target Volumes in Head and Neck Cancers. Cancers (Basel) 2024; 16:415. [PMID: 38254904 PMCID: PMC11154560 DOI: 10.3390/cancers16020415] [Received: 11/28/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024]
Abstract
The delineation of clinical target volumes (CTVs) for radiation therapy is time-consuming, requires intensive training, and shows high inter-observer variability. Supervised deep-learning methods depend heavily on consistent training data; thus, state-of-the-art research focuses on making CTV labels more homogeneous and strictly bounding them to current standards. International consensus expert guidelines standardize CTV delineation by conditioning the extension of the clinical target volume on the surrounding anatomical structures. Training strategies that directly follow the construction rules given in the expert guidelines, and means of quantifying the conformance of manually drawn contours to the guidelines, are still missing. Seventy-one anatomical structures that are relevant to CTV delineation in head and neck cancer patients, according to the expert guidelines, were segmented on 104 computed tomography scans to assess the possibility of automating their segmentation with state-of-the-art deep-learning methods. All 71 anatomical structures were subdivided into three subsets of non-overlapping structures, and a 3D nnU-Net model with five-fold cross-validation was trained for each subset to automatically segment the structures on planning computed tomography scans. We report the Dice coefficient, Hausdorff distance, and surface Dice for 71 + 5 anatomical structures, for most of which no previous segmentation accuracies have been reported. For those structures for which prediction values have been reported, our segmentation accuracy matched or exceeded the reported values. The predictions from our models were always better than those of the TotalSegmentator. The surface Dice (sDICE) with a 2 mm margin was larger than 80% for almost all structures. Individual structures with decreased segmentation accuracy are analyzed and discussed with respect to their impact on CTV delineation following the expert guidelines. No deviation is expected to affect the rule-based automation of CTV delineation.
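The overlap and distance metrics reported in this abstract can be illustrated on toy masks. The sketch below computes the voxel Dice coefficient and a brute-force symmetric Hausdorff distance between two synthetic binary masks (surface Dice with a tolerance margin requires surface extraction and is omitted here); all masks and values are invented for illustration:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    s = a.sum() + b.sum()
    return 2.0 * inter / s if s else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two binary masks (pixel units)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

ref = np.zeros((20, 20), dtype=bool); ref[5:15, 5:15] = True   # 10x10 square
hyp = np.zeros((20, 20), dtype=bool); hyp[6:16, 5:15] = True   # shifted down by 1 pixel

print(dice(ref, hyp))       # 0.9: overlap 90, 2*90/(100+100)
print(hausdorff(ref, hyp))  # 1.0: worst boundary mismatch is one pixel
```

For real CT masks the brute-force pairwise distance matrix is impractically large; production implementations use distance transforms instead.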
Affiliation(s)
- Alexandra Walter
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Karlsruhe Institute of Technology (KIT), Scientific Computing Center, Zirkel 2, 76131 Karlsruhe, Germany
- Philipp Hoegen-Saßmannshausen
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, 69120 Heidelberg, Germany
- Goran Stanic
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Faculty of Physics and Astronomy, University of Heidelberg, 69120 Heidelberg, Germany
- Joao Pedro Rodrigues
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Sebastian Adeberg
- Department of Radiotherapy and Radiation Oncology, Marburg University Hospital, 35043 Marburg, Germany
- Marburg Ion-Beam Therapy Center (MIT), 35043 Marburg, Germany
- Universitäres Centrum für Tumorerkrankungen (UCT), 35033 Marburg, Germany
- Oliver Jäkel
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Heidelberg Ion-Beam Therapy Center (HIT), 69120 Heidelberg, Germany
- Martin Frank
- Karlsruhe Institute of Technology (KIT), Scientific Computing Center, Zirkel 2, 76131 Karlsruhe, Germany
- Kristina Giske
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
20
Kehayias CE, Yan Y, Bontempi D, Quirk S, Bitterman DS, Bredfeldt JS, Aerts HJWL, Mak RH, Guthier CV. Prospective deployment of an automated implementation solution for artificial intelligence translation to clinical radiation oncology. Front Oncol 2024; 13:1305511. [PMID: 38239639 PMCID: PMC10794768 DOI: 10.3389/fonc.2023.1305511] [Received: 10/01/2023] [Accepted: 12/11/2023]
Abstract
Introduction: Artificial intelligence (AI)-based technologies embody countless solutions in radiation oncology, yet translation of AI-assisted software tools to actual clinical environments remains unrealized. We present the Deep Learning On-Demand Assistant (DL-ODA), a fully automated, end-to-end clinical platform that enables AI interventions for any disease site, featuring an automated model-training pipeline, auto-segmentations, and QA reporting.
Materials and methods: We developed, tested, and prospectively deployed the DL-ODA system at a large university-affiliated hospital center. Medical professionals activate the DL-ODA via two pathways: (1) On-Demand, used for immediate AI decision support for a patient-specific treatment plan, and (2) Ambient, in which QA is provided for all daily radiotherapy (RT) plans by comparing DL segmentations with manual delineations and calculating the dosimetric impact. To demonstrate the implementation of a new anatomy segmentation, we used the model-training pipeline to generate a breast segmentation model based on a large clinical dataset. Additionally, the contour QA functionality of existing models was assessed using a retrospective cohort of 3,399 lung and 885 spine RT cases. Ambient QA was performed for various disease sites, including spine RT and heart for dosimetric sparing.
Results: Successful training of the breast model was completed in less than a day and resulted in clinically viable whole-breast contours. For the retrospective analysis, we evaluated manual-versus-AI similarity for the ten most common structures. The DL-ODA detected high similarity in heart, lung, liver, and kidney delineations but lower similarity for esophagus, trachea, stomach, and small bowel, due largely to incomplete manual contouring. The deployed Ambient QAs for the heart and spine sites have prospectively processed over 2,500 cases over 9 months and 230 cases over 5 months, respectively, automatically alerting the RT personnel.
Discussion: The DL-ODA's capabilities in providing universal AI interventions were demonstrated for On-Demand contour QA, DL segmentations, and automated model training, confirming successful integration of the system into a large academic radiotherapy department. Deploying the DL-ODA as a multi-modal, fully automated, end-to-end AI clinical implementation solution marks a significant step towards a generalizable framework that leverages AI to improve the efficiency and reliability of RT systems.
Affiliation(s)
- Christopher E. Kehayias
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Yujie Yan
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Dennis Bontempi
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Sarah Quirk
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Danielle S. Bitterman
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Jeremy S. Bredfeldt
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Hugo J. W. L. Aerts
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Raymond H. Mak
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Christian V. Guthier
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
21
Maroongroge S, Mohamed ASR, Nguyen C, Guma De la Vega J, Frank SJ, Garden AS, Gunn BG, Lee A, Mayo L, Moreno A, Morrison WH, Phan J, Spiotto MT, Court LE, Fuller CD, Rosenthal DI, Netherton TJ. Clinical acceptability of automatically generated lymph node levels and structures of deglutition and mastication for head and neck radiation therapy. Phys Imaging Radiat Oncol 2024; 29:100540. [PMID: 38356692 PMCID: PMC10864833 DOI: 10.1016/j.phro.2024.100540] [Received: 08/30/2023] [Revised: 01/22/2024] [Accepted: 01/24/2024]
Abstract
Background and Purpose: Auto-contouring of complex anatomy in computed tomography (CT) scans is a highly anticipated solution to many problems in radiotherapy. In this study, artificial intelligence (AI)-based auto-contouring models were clinically validated for lymph node levels and structures of swallowing and chewing in the head and neck.
Materials and Methods: CT scans of 145 head and neck radiotherapy patients were retrospectively curated. One cohort (n = 47) was used to analyze seven lymph node levels and the other (n = 98) to analyze 17 swallowing and chewing structures. Separate nnU-Net models were trained and validated using the separate cohorts. For the lymph node levels, preference and clinical acceptability of AI vs human contours were scored. For the swallowing and chewing structures, clinical acceptability was scored. Quantitative analyses of the test sets were performed for AI vs human contours for all structures using overlap and distance metrics.
Results: Median Dice Similarity Coefficient ranged from 0.77 to 0.89 for lymph node levels and 0.86 to 0.96 for chewing and swallowing structures. The AI contours were superior to or equally preferred to the manual contours at rates ranging from 75% to 91%; there was no significant difference in clinical acceptability between manual and AI contours for nodal levels I-V. Across all AI-generated lymph node level contours, 92% were rated as usable with stylistic or no edits. Of the 340 contours in the chewing and swallowing cohort, 4% required minor edits.
Conclusions: An accurate approach was developed to auto-contour lymph node levels and chewing and swallowing structures on CT images for patients with intact nodal anatomy. Only a small portion of test-set auto-contours required minor edits.
Affiliation(s)
- Sean Maroongroge
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Abdallah S.R. Mohamed
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Callistus Nguyen
- Department of Radiation Physics, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Jean Guma De la Vega
- Department of Radiation Physics, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Steven J. Frank
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Adam S. Garden
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Brandon G. Gunn
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Anna Lee
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Lauren Mayo
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Amy Moreno
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- William H. Morrison
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Jack Phan
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Michael T. Spiotto
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Laurence E. Court
- Department of Radiation Physics, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Clifton D. Fuller
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- David I. Rosenthal
- Department of Radiation Oncology, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
- Tucker J. Netherton
- Department of Radiation Physics, Division of Radiation Oncology, University of Texas MD Anderson Cancer Center, United States
22
Fernandes MG, Bussink J, Wijsman R, Stam B, Monshouwer R. Estimating how contouring differences affect normal tissue complication probability modelling. Phys Imaging Radiat Oncol 2024; 29:100533. [PMID: 38292649 PMCID: PMC10825684 DOI: 10.1016/j.phro.2024.100533] [Received: 09/11/2023] [Revised: 11/15/2023] [Accepted: 12/30/2023]
Abstract
Background and purpose: Normal tissue complication probability (NTCP) models are developed from large retrospective datasets in which automatic contouring is often used to contour the organs at risk. This study proposes a methodology to estimate how discrepancies between two sets of contours are reflected in NTCP model performance. We apply this methodology to heart contours within a dataset of non-small cell lung cancer (NSCLC) patients.
Materials and methods: One of the contour sets is designated the ground truth, and a dosimetric parameter derived from it is used to simulate outcomes via a predefined NTCP relationship. For each simulated outcome, the selected dosimetric parameters associated with each contour set are individually used to fit a toxicity model, and their performance is compared. Our dataset comprised 605 stage IIA-IIIB NSCLC patients. Manual, deep learning, and atlas-based heart contours were available.
Results: How contour differences were reflected in NTCP model performance depended on the slope of the predefined model, the dosimetric parameter utilized, and the size of the cohort. The impact of contour differences on NTCP model performance increased with steeper NTCP curves. In our dataset, parameters on the low range of the dose-volume histogram were more robust to contour differences.
Conclusions: Our methodology can be used to estimate whether a given contouring model is fit for NTCP model development. For the heart in comparable datasets, the average Dice should be at least as high as that between our manual and deep learning contours for shallow NTCP relationships (88.5 ± 4.5%) and higher for steep relationships.
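The simulation methodology described here can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the cohort size (605) matches the study, but the dose range, logistic slope, and contour-perturbation noise are invented, and discrimination is scored by the AUC of the raw dosimetric parameter rather than a fitted toxicity model (AUC is invariant to monotone model fitting):

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(score, y):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    order = np.argsort(score)
    ranks = np.empty(len(score), dtype=float)
    ranks[order] = np.arange(1, len(score) + 1)
    pos = y == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0))

n = 605                                   # cohort size as in the study
mhd_true = rng.uniform(0, 40, n)          # "ground-truth" dosimetric parameter (Gy)
# Predefined NTCP relationship: logistic in the dosimetric parameter.
p = 1.0 / (1.0 + np.exp(-0.15 * (mhd_true - 20.0)))
outcome = (rng.random(n) < p).astype(int)  # simulated toxicity outcomes

# The second contour set yields a perturbed version of the same parameter.
mhd_alt = mhd_true + rng.normal(0.0, 5.0, n)

a_true = auc(mhd_true, outcome)
a_alt = auc(mhd_alt, outcome)
print(a_true, a_alt)  # discrimination typically drops for the perturbed contours
```

Repeating the simulation over many outcome draws and steeper or shallower slopes reproduces the paper's qualitative finding that steep NTCP curves amplify the cost of contour differences.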
Affiliation(s)
- Johan Bussink
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Robin Wijsman
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
- Barbara Stam
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- René Monshouwer
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
23
Chen Y, Pahlavian SH, Jacobs P, Neupane T, Forghani-Arani F, Castillo E, Castillo R, Vinogradskiy Y. Systematic Evaluation of the Impact of Lung Segmentation Methods on 4-Dimensional Computed Tomography Ventilation Imaging Using a Large Patient Database. Int J Radiat Oncol Biol Phys 2024; 118:242-252. [PMID: 37607642 PMCID: PMC10842520 DOI: 10.1016/j.ijrobp.2023.08.017] [Received: 02/10/2023] [Revised: 08/04/2023] [Accepted: 08/08/2023]
Abstract
PURPOSE: A novel form of lung functional imaging has been developed for functional avoidance radiation therapy that uses 4-dimensional computed tomography (4DCT) data and image processing techniques to calculate lung ventilation (4DCT-ventilation). Lung segmentation is a common step to define a region of interest for 4DCT-ventilation generation. The purpose of this study was to quantitatively evaluate the sensitivity of 4DCT-ventilation imaging to different lung segmentation methods.
METHODS AND MATERIALS: The 4DCT data of 350 patients from 2 institutions were used. Lung contours were generated using 3 methods: (1) reference segmentations that removed airways and pulmonary vasculature manually (Lung-Manual), (2) standard lung contours used for planning (Lung-RadOnc), and (3) artificial intelligence (AI)-based contours that removed the airways and pulmonary vasculature (Lung-AI). The AI model was based on a residual 3-dimensional U-Net and was trained using the Lung-Manual contours of 279 patients. We compared the Lung-RadOnc or Lung-AI with Lung-Manual contours across the entire 4DCT-ventilation functional avoidance process, including lung segmentation (surface Dice similarity coefficient [surface DSC]), 4DCT-ventilation generation (correlation), and a subanalysis of 10 patients on a dosimetric endpoint (percentage of high-functional lung volume receiving ≥20 Gy, fV20[%]).
RESULTS: Surface DSC comparing Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI contours was 0.40 ± 0.06 and 0.86 ± 0.04, respectively. The correlation between 4DCT-ventilation images generated with Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI was 0.48 ± 0.14 and 0.85 ± 0.14, respectively. The difference in fV20[%] between 4DCT-ventilation generated with Lung-Manual/Lung-RadOnc and Lung-Manual/Lung-AI was 2.5% ± 4.1% and 0.3% ± 0.5%, respectively.
CONCLUSIONS: Our work showed that using standard planning lung contours can result in significantly variable 4DCT-ventilation images. The study demonstrated that AI-based segmentations generate lung contours and 4DCT-ventilation images that are similar to those generated using manual methods. The significance of the study is that it characterizes the lung segmentation sensitivity of the 4DCT-ventilation process and develops methods that can facilitate the integration of this novel imaging in busy clinics.
Affiliation(s)
- Yingxuan Chen
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Taindra Neupane
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania
- Edward Castillo
- Department of Biomedical Engineering, University of Texas at Austin, Austin, Texas
- Richard Castillo
- Department of Radiation Oncology, Emory University, Atlanta, Georgia
- Yevgeniy Vinogradskiy
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania.
24
Paudyal R, Jiang J, Han J, Diplas BH, Riaz N, Hatzoglou V, Lee N, Deasy JO, Veeraraghavan H, Shukla-Dave A. Auto-segmentation of neck nodal metastases using self-distilled masked image transformer on longitudinal MR images. BJR ARTIFICIAL INTELLIGENCE 2024; 1:ubae004. [PMID: 38476956 PMCID: PMC10928808 DOI: 10.1093/bjrai/ubae004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 01/23/2024] [Accepted: 01/24/2024] [Indexed: 03/14/2024]
Abstract
Objectives Auto-segmentation promises greater speed and lower inter-reader variability than manual segmentation in radiation oncology clinical practice. This study aimed to implement and evaluate the accuracy of the auto-segmentation algorithm, "Masked Image modeling using vision Transformers (SMIT)," for neck nodal metastases on longitudinal T2-weighted (T2w) MR images in oropharyngeal squamous cell carcinoma (OPSCC) patients. Methods This prospective clinical trial included 123 human papillomavirus-positive (HPV+) OPSCC patients who received concurrent chemoradiotherapy. T2w MR images were acquired at 3 T at pre-treatment (pre-Tx, week 0) and intra-Tx weeks 1-3. Manual delineations of metastatic neck nodes from the 123 OPSCC patients were used for the SMIT auto-segmentation, and total tumor volumes were calculated. Standard statistical analyses compared contour volumes from SMIT vs manual segmentation (Wilcoxon signed-rank test [WSRT]), and Spearman's rank correlation coefficients (ρ) were computed. Segmentation accuracy was evaluated on the test data set using the Dice similarity coefficient (DSC). P-values <0.05 were considered significant. Results There was no significant difference between manually and SMIT-delineated tumor volumes at pre-Tx (8.68 ± 7.15 vs 8.38 ± 7.01 cm3, P = 0.26 [WSRT]), and the Bland-Altman method established the limits of agreement as -1.71 to 2.31 cm3, with a mean difference of 0.30 cm3. SMIT and manually delineated tumor volume estimates were highly correlated (ρ = 0.84-0.96, P < 0.001). The mean DSC values were 0.86, 0.85, 0.77, and 0.79 at pre-Tx and intra-Tx weeks 1-3, respectively. Conclusions The SMIT algorithm provides sufficient segmentation accuracy for oncological applications in HPV+ OPSCC. Advances in knowledge This is the first evaluation of auto-segmentation with SMIT using longitudinal T2w MRI in HPV+ OPSCC.
Affiliation(s)
- Ramesh Paudyal
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- James Han
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Bill H Diplas
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nadeem Riaz
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Vaios Hatzoglou
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nancy Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Joseph O Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Amita Shukla-Dave
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
25
Li B, Zhang J, Wang Q, Li H, Wang Q. Three-dimensional spine reconstruction from biplane radiographs using convolutional neural networks. Med Eng Phys 2024; 123:104088. [PMID: 38365341 DOI: 10.1016/j.medengphy.2023.104088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2023] [Revised: 12/04/2023] [Accepted: 12/10/2023] [Indexed: 02/18/2024]
Abstract
PURPOSE The purpose of this study was to develop and evaluate a deep learning network for three-dimensional reconstruction of the spine from biplanar radiographs. METHODS The proposed approach focused on extracting similar features and multiscale features of bone tissue in biplanar radiographs. Bone tissue features were reconstructed for feature representation across dimensions to generate three-dimensional volumes. The number of feature mappings was gradually reduced in the reconstruction to transform the high-dimensional features into the three-dimensional image domain. We produced and made public eight datasets to train and test the proposed network. Two evaluation metrics were proposed and combined with four classical evaluation metrics to measure the performance of the method. RESULTS In comparative experiments, the reconstruction results of this method achieved a Hausdorff distance of 1.85 mm, a surface overlap of 0.2 mm, a volume overlap of 0.9664, and an offset distance of only 0.21 mm from the vertebral body centroid. The results of this study indicate that the proposed method is reliable.
Affiliation(s)
- Bo Li
- Department of Electronic Engineering, Yunnan University, Kunming, China
- Junhua Zhang
- Department of Electronic Engineering, Yunnan University, Kunming, China.
- Qian Wang
- Department of Electronic Engineering, Yunnan University, Kunming, China
- Hongjian Li
- The First People's Hospital of Yunnan Province, China
- Qiyang Wang
- The First People's Hospital of Yunnan Province, China
26
Gay SS, Cardenas CE, Nguyen C, Netherton TJ, Yu C, Zhao Y, Skett S, Patel T, Adjogatse D, Guerrero Urbano T, Naidoo K, Beadle BM, Yang J, Aggarwal A, Court LE. Fully-automated, CT-only GTV contouring for palliative head and neck radiotherapy. Sci Rep 2023; 13:21797. [PMID: 38066074 PMCID: PMC10709623 DOI: 10.1038/s41598-023-48944-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 12/01/2023] [Indexed: 12/18/2023] Open
Abstract
Planning for palliative radiotherapy is performed without the advantage of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were trained separately: (1) heuristic-cropped, non-contrast images with a single GTV channel, (2) cropping around a manually-placed point in the tumor center for non-contrast images with a single GTV channel, (3) contrast-enhanced images with a single GTV channel, (4) contrast-enhanced images with separate primary and nodal GTV channels, and (5) contrast-enhanced images along with synthetic MR images with separate primary and nodal GTV channels. Median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and 95th percentile Hausdorff distance from 14.7 to 19.7 mm across the five approaches. Only surface Dice exhibited a statistically significant difference across these five approaches using a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
Affiliation(s)
- Skylar S Gay
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA.
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA.
- Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
- Callistus Nguyen
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Tucker J Netherton
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Cenji Yu
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Yao Zhao
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Jinzhong Yang
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Laurence E Court
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
27
Kovacs B, Netzer N, Baumgartner M, Schrader A, Isensee F, Weißer C, Wolf I, Görtz M, Jaeger PF, Schütz V, Floca R, Gnirs R, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D, Maier-Hein KH. Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer. Sci Rep 2023; 13:19805. [PMID: 37957250 PMCID: PMC10643562 DOI: 10.1038/s41598-023-46747-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Accepted: 11/04/2023] [Indexed: 11/15/2023] Open
Abstract
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts. They further increase the complexity of the task for radiologists interpreting the images. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration. Furthermore, alternative strategies to handle multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates their direct effect on the diagnostic accuracy of PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentations can not only compensate for a complete lack of registration but, when used in conjunction with registration, can also improve the overall performance on an independent test set.
Affiliation(s)
- Balint Kovacs
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany.
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany.
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany.
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Michael Baumgartner
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Adrian Schrader
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Ivo Wolf
- Mannheim University of Applied Sciences, Mannheim, Germany
- Magdalena Görtz
- Junior Clinical Cooperation Unit 'Multiparametric Methods for Early Detection of Prostate Cancer', German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Paul F Jaeger
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Victoria Schütz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Ralf Floca
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
28
Davey A, Pan S, Bryce-Atkinson A, Mandeville H, Janssens GO, Kelly SM, Hol M, Tang V, Davies LSC, SIOP-Europe Radiation Oncology Working Group, Aznar M. The need for consensus on delineation and dose constraints of dentofacial structures in paediatric radiotherapy: Outcomes of a SIOP Europe survey. Clin Transl Radiat Oncol 2023; 43:100681. [PMID: 37790584 PMCID: PMC10543782 DOI: 10.1016/j.ctro.2023.100681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Accepted: 09/21/2023] [Indexed: 10/05/2023] Open
Abstract
Background and purpose Children receiving radiotherapy for head-and-neck tumours often experience severe dentofacial side effects. Despite this, recommendations for contouring and dose constraints to dentofacial structures are lacking in clinical practice. We report on a survey aiming to understand current practice in contouring and dose assessment for dentofacial structures. Methods A digital survey was distributed to European Society for Paediatric Oncology members of the Radiation Oncology Working Group, and member-affiliated centres in Europe, Australia, and New Zealand. The questions focused on clinical practice and aimed to establish areas for future development. Results Results from 52 paediatric radiotherapy centres across 27 countries are reported. Only 29/52 centres routinely delineated some dentofacial structures, with the most common being the mandible (25 centres), temporo-mandibular joint (22), dentition (13), orbit (10) and maxillary bone (eight). For most bones contoured, an 'As Low As Reasonably Achievable' dose objective was implemented. Only four centres reported age-adapted dose constraints. The largest barrier to clinical implementation of dose constraints was firstly, the lack of contouring guidance (49/52, 94%) and secondly, that delineation is time-consuming (33/52, 63%). Most respondents who routinely contour dentofacial structures (25/27, 90%) agreed a contouring atlas would aid delineation. Conclusion Routine delineation of dentofacial structures is infrequent in paediatric radiotherapy. Based on survey findings, we aim to 1) define a consensus contouring atlas for dentofacial structures, 2) develop auto-contouring solutions for dentofacial structures to aid clinical implementation, and 3) carry out treatment planning studies to investigate the importance of delineation of these structures for planning optimisation.
Affiliation(s)
- Angela Davey
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Shermaine Pan
- Department of Proton Therapy, The Christie NHS Foundation Trust, Manchester, UK
- Abigail Bryce-Atkinson
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Geert O. Janssens
- Princess Maxima Center for Paediatric Oncology, Utrecht, The Netherlands
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Sarah M. Kelly
- European Society for Paediatric Oncology (SIOP Europe), Clos Chapelle-aux-Champs 30, Brussels, Belgium
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, Avenue E. Mounier 83, Brussels, Belgium
- Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Marinka Hol
- Princess Maxima Center for Paediatric Oncology, Utrecht, The Netherlands
- Department of Otorhinolaryngology, University Medical Center Utrecht, Utrecht, The Netherlands
- Vivian Tang
- Paediatric Radiology, Royal Manchester Children’s Hospital, Manchester, UK
- Marianne Aznar
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
29
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. [PMID: 37473825 DOI: 10.1016/j.semcancer.2023.07.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 07/11/2023] [Accepted: 07/15/2023] [Indexed: 07/22/2023]
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs continues to remain subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China.
30
De Kerf G, Claessens M, Raouassi F, Mercier C, Stas D, Ost P, Dirix P, Verellen D. A geometry and dose-volume based performance monitoring of artificial intelligence models in radiotherapy treatment planning for prostate cancer. Phys Imaging Radiat Oncol 2023; 28:100494. [PMID: 37809056 PMCID: PMC10550805 DOI: 10.1016/j.phro.2023.100494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 09/20/2023] [Accepted: 09/20/2023] [Indexed: 10/10/2023] Open
Abstract
Background and Purpose Clinical Artificial Intelligence (AI) implementations lack ground-truth when applied on real-world data. This study investigated how combined geometrical and dose-volume metrics can be used as performance monitoring tools to detect clinically relevant candidates for model retraining. Materials and Methods Fifty patients were analyzed for both AI-segmentation and planning. For AI-segmentation, geometrical (Standard Surface Dice 3 mm and Local Surface Dice 3 mm) and dose-volume based parameters were calculated for two organs (bladder and anorectum) to compare AI output against the clinically corrected structure. A Local Surface Dice was introduced to detect geometrical changes in the vicinity of the target volumes, while an Absolute Dose Difference (ADD) evaluation increased focus on dose-volume related changes. AI-planning performance was evaluated using clinical goal analysis in combination with volume and target overlap metrics. Results The Local Surface Dice reported equal or lower values compared to the Standard Surface Dice (anorectum: (0.93 ± 0.11) vs (0.98 ± 0.04); bladder: (0.97 ± 0.06) vs (0.98 ± 0.04)). The ADD metric showed a difference of (0.9 ± 0.8) Gy for the anorectum D1cm³. The bladder D5cm³ reported a difference of (0.7 ± 1.5) Gy. Mandatory clinical goals were fulfilled in 90% of the DLP plans. Conclusions Combining dose-volume and geometrical metrics allowed detection of clinically relevant changes, applied to both auto-segmentation and auto-planning output, and the Local Surface Dice was more sensitive to local changes compared to the Standard Surface Dice. This monitoring is able to evaluate AI behavior in clinical practice and allows candidate selection for active learning.
Affiliation(s)
- Geert De Kerf
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Michaël Claessens
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Fadoua Raouassi
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Carole Mercier
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Daan Stas
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Piet Ost
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Piet Dirix
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
- Dirk Verellen
- Department of Radiation Oncology, Iridium Netwerk, Wilrijk (Antwerp), Belgium
- Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Antwerp, Belgium
31
Heilemann G, Buschmann M, Lechner W, Dick V, Eckert F, Heilmann M, Herrmann H, Moll M, Knoth J, Konrad S, Simek IM, Thiele C, Zaharie A, Georg D, Widder J, Trnkova P. Clinical Implementation and Evaluation of Auto-Segmentation Tools for Multi-Site Contouring in Radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100515. [PMID: 38111502 PMCID: PMC10726238 DOI: 10.1016/j.phro.2023.100515] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 11/09/2023] [Accepted: 11/09/2023] [Indexed: 12/20/2023] Open
Abstract
Background and purpose Tools for auto-segmentation in radiotherapy are widely available, but guidelines for clinical implementation are missing. The goal was to develop a workflow for performance evaluation of three commercial auto-segmentation tools to select one candidate for clinical implementation. Materials and Methods One hundred patients with six treatment sites (brain, head-and-neck, thorax, abdomen, and pelvis) were included. Three sets of AI-based contours for organs-at-risk (OAR) generated by three software tools and manually drawn expert contours were blindly rated for contouring accuracy. The Dice similarity coefficient (DSC), the Hausdorff distance, and a dose/volume evaluation based on the recalculation of the original treatment plan were assessed. Statistically significant differences were tested using the Kruskal-Wallis test and the post-hoc Dunn test with Bonferroni correction. Results The mean DSC scores compared to expert contours for all OARs combined were 0.80 ± 0.10, 0.75 ± 0.10, and 0.74 ± 0.11 for the three software tools. Physicians' rating identified equivalent or superior performance of some AI-based contours in head (eye, lens, optic nerve, brain, chiasm), thorax (e.g., heart and lungs), and pelvis and abdomen (e.g., kidney, femoral head) compared to manual contours. For some OARs, the AI models provided results requiring only minor corrections. Bowel-bag and stomach were not fit for direct use. During the interdisciplinary discussion, the physicians' rating was considered the most relevant. Conclusion A comprehensive method for evaluation and clinical implementation of commercially available auto-segmentation software was developed. The in-depth analysis yielded clear instructions for clinical use within the radiotherapy department.
Affiliation(s)
- Gerd Heilemann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Martin Buschmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Wolfgang Lechner
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Vincent Dick
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Franziska Eckert
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Martin Heilmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Harald Herrmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Matthias Moll
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Johannes Knoth
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Stefan Konrad
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Inga-Malin Simek
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Christopher Thiele
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Alexandru Zaharie
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Dietmar Georg
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Joachim Widder
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
- Petra Trnkova
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
Collapse
|
32
|
Vaassen F, Zegers CML, Hofstede D, Wubbels M, Beurskens H, Verheesen L, Canters R, Looney P, Battye M, Gooding MJ, Compter I, Eekers DBP, van Elmpt W. Geometric and dosimetric analysis of CT- and MR-based automatic contouring for the EPTN contouring atlas in neuro-oncology. Phys Med 2023; 114:103156. [PMID: 37813050] [DOI: 10.1016/j.ejmp.2023.103156]
Abstract
PURPOSE Atlas-based and deep-learning contouring (DLC) are methods for automatic segmentation of organs-at-risk (OARs). The European Particle Therapy Network (EPTN) published a consensus-based atlas for delineation of OARs in neuro-oncology. In this study, geometric and dosimetric evaluation of automatically segmented neuro-oncological OARs was performed using CT- and MR-based models following the EPTN contouring atlas. METHODS Image and contouring data from 76 neuro-oncological patients were included. Two atlas-based models (CT-atlas and MR-atlas) and one DLC model (MR-DLC) were created. Manual contours on registered CT-MR images were used as ground truth. Results were analyzed in terms of geometric (volumetric Dice similarity coefficient (vDSC), surface DSC (sDSC), added path length (APL), and mean slice-wise Hausdorff distance (MSHD)) and dosimetric accuracy. A distance-to-tumor analysis was performed to assess to what extent the location of the OAR relative to the planning target volume (PTV) has dosimetric impact, using Wilcoxon rank-sum tests. RESULTS CT-atlas outperformed MR-atlas for 22/26 OARs. MR-DLC outperformed MR-atlas for all OARs. The highest median (95% CI) vDSC and sDSC were found for the brainstem in MR-DLC: 0.92 (0.88-0.95) and 0.84 (0.77-0.89) respectively, as well as the lowest MSHD: 0.27 (0.22-0.39) cm. Median dose differences (ΔD) were within ±1 Gy for 24/26 (92%) OARs for all three models. Distance-to-tumor showed a significant correlation for ΔDmax,0.03cc parameters when splitting the data into ≤4 cm and >4 cm OAR-distance (p < 0.001). CONCLUSION MR-based DLC and CT-based atlas contouring enable high-quality segmentation. It was shown that a combination of both CT- and MR-autocontouring models results in the best quality.
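The surface DSC reported above can be sketched as follows. This is one illustrative formulation (boundary voxels via 6-neighbour erosion, a plain distance matrix, a tolerance in voxels), not the exact definition used by the authors:

```python
import numpy as np

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of boundary voxels: foreground voxels with at
    least one 6-neighbour outside the mask (erosion via np.roll)."""
    m = mask.astype(bool)
    core = m.copy()
    for ax in range(m.ndim):
        for shift in (1, -1):
            core &= np.roll(m, shift, axis=ax)
    return np.argwhere(m & ~core)

def surface_dsc(a: np.ndarray, b: np.ndarray, tol: float = 1.0) -> float:
    """Fraction of boundary points lying within `tol` voxels of the
    other contour's boundary (published variants differ in details)."""
    sa, sb = surface_points(a), surface_points(b)
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)
    close = (d.min(axis=1) <= tol).sum() + (d.min(axis=0) <= tol).sum()
    return close / (len(sa) + len(sb))

# Two 8-voxel cubes shifted by one voxel: every boundary point lies
# within one voxel of the other boundary, so the 1-voxel surface DSC is 1
a = np.zeros((10, 10, 10), bool); a[1:9, 1:9, 1:9] = True
b = np.zeros((10, 10, 10), bool); b[2:10, 1:9, 1:9] = True
print(surface_dsc(a, b, tol=1.0))  # 1.0
```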
Affiliation(s)
- Femke Vaassen, Catharina M L Zegers, David Hofstede, Mart Wubbels, Hilde Beurskens, Lindsey Verheesen, Richard Canters, Inge Compter, Daniëlle B P Eekers, Wouter van Elmpt: Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands

33
Zhao JY, Cao Q, Chen J, Chen W, Du SY, Yu J, Zeng YM, Wang SM, Peng JY, You C, Xu JG, Wang XY. Development and validation of a fully automatic tissue delineation model for brain metastasis using a deep neural network. Quant Imaging Med Surg 2023; 13:6724-6734. [PMID: 37869331] [PMCID: PMC10585546] [DOI: 10.21037/qims-22-1216]
Abstract
Background Stereotactic radiosurgery (SRS) treatment planning requires accurate delineation of brain metastases, a task that can be tedious and time-consuming. Although studies have explored the use of convolutional neural networks (CNNs) in magnetic resonance imaging (MRI) for automatic brain metastases delineation, none of these studies have performed clinical evaluation, raising concerns about clinical applicability. This study aimed to develop an artificial intelligence (AI) tool for the automatic delineation of single brain metastasis that could be integrated into clinical practice. Methods Data from 426 patients with postcontrast T1-weighted MRIs who underwent SRS between March 2007 and August 2019 were retrospectively collected and divided into training, validation, and testing cohorts of 299, 42, and 85 patients, respectively. Two Gamma Knife (GK) surgeons contoured the brain metastases as the ground truth. A novel 2.5D CNN network was developed for single brain metastasis delineation. The mean Dice similarity coefficient (DSC) and average surface distance (ASD) were used to assess the performance of this method. Results The mean DSC and ASD values were 88.34%±5.00% and 0.35±0.21 mm, respectively, for the contours generated with the AI tool based on the testing set. The DSC measure of the AI tool's performance was dependent on metastatic shape, reinforcement shape, and the existence of peritumoral edema (all P values <0.05). The clinical experts' subjective assessments showed that 415 out of 572 slices (72.6%) in the testing cohort were acceptable for clinical usage without revision. The average time spent editing an AI-generated contour compared with time spent with manual contouring was 74 vs. 196 seconds, respectively (P<0.01). Conclusions The contours delineated with the AI tool for single brain metastasis were in close agreement with the ground truth. 
The developed AI tool can effectively reduce contouring time and aid in GK treatment planning of single brain metastasis in clinical practice.
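The average surface distance (ASD) used above admits a compact sketch. This is one common symmetric formulation on surface point sets, with a hypothetical isotropic voxel spacing, not the study's implementation:

```python
import numpy as np

def average_surface_distance(sa, sb, spacing=1.0):
    """Symmetric average surface distance between two surface point
    sets: mean nearest-neighbour distance in both directions, averaged.
    `spacing` scales voxel units to mm for isotropic voxels."""
    d = np.linalg.norm((sa[:, None, :] - sb[None, :, :]) * spacing, axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two parallel contours one voxel apart, hypothetical 0.5 mm spacing
sa = np.array([[0, 0], [0, 1], [0, 2]])
sb = np.array([[1, 0], [1, 1], [1, 2]])
print(average_surface_distance(sa, sb, spacing=0.5))  # 0.5 (mm)
```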
Affiliation(s)
- Jie-Yi Zhao, Jing Chen, Chao You, Jian-Guo Xu, Xiao-Yu Wang: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Qi Cao: Reproductive Medical Center, West China Second University Hospital, Sichuan University, Chengdu, China
- Wei Chen: Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Si-Yu Du, Yi-Miao Zeng, Shu-Min Wang, Jing-Yu Peng: West China School of Medicine, Sichuan University, Chengdu, China
- Jie Yu: West China School of Public Health, Sichuan University, Chengdu, China

34
Li J, Song Y, Wu Y, Liang L, Li G, Bai S. Clinical evaluation on automatic segmentation results of convolutional neural networks in rectal cancer radiotherapy. Front Oncol 2023; 13:1158315. [PMID: 37731629] [PMCID: PMC10508953] [DOI: 10.3389/fonc.2023.1158315]
Abstract
Purpose Image segmentation, which is essential in conformal radiotherapy techniques, can be time-consuming and lacks consistency between different oncologists. We aimed to evaluate automatic delineation results generated by convolutional neural networks (CNNs) from geometry and dosimetry perspectives and explore the reliability of these segmentation tools in rectal cancer. Methods Forty-seven rectal cancer cases treated from February 2018 to April 2019 were retrospectively collected in our cancer center. The oncologists delineated regions of interest (ROIs) on planning CT images as the ground truth, including the clinical target volume (CTV), bladder, small intestine, and femoral heads. The corresponding automatic segmentation results were generated by DeepLabv3+ and ResUNet, and we also used Atlas-Based Autosegmentation (ABAS) software for comparison. The geometric evaluation was carried out using the volumetric Dice similarity coefficient (DSC) and surface DSC, and critical dose parameters were assessed based on replanning optimized by clinically approved or automatically generated CTVs and organs at risk (OARs), i.e., the Planref and Plantest. The Pearson test was used to explore the correlation between geometric metrics and dose parameters. Results In the geometric evaluation, DeepLabv3+ performed better in DSC metrics for the CTV (volumetric DSC, mean = 0.96, P < 0.01; surface DSC, mean = 0.78, P < 0.01) and small intestine (volumetric DSC, mean = 0.91, P < 0.01; surface DSC, mean = 0.62, P < 0.01), while ResUNet had an advantage in the volumetric DSC of the bladder (mean = 0.97, P < 0.05). In the analysis of critical dose parameters between Planref and Plantest, there was a significant difference for target volumes (P < 0.01), and no significant difference was found for the ResUNet-generated small intestine (P > 0.05). In the correlation test, a negative correlation was found between DSC metrics (volumetric and surface DSC) and dosimetric parameters (δD95, δD95, HI, CI) for target volumes (P < 0.05), and no significant correlation was found for most tests of OARs (P > 0.05). Conclusions CNNs show remarkable repeatability and time savings in automatic segmentation, and their accuracy also has certain potential in clinical practice. Meanwhile, clinical aspects, such as dose distribution, may need to be considered when comparing the performance of auto-segmentation methods.
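The correlation analysis described above can be illustrated with numpy's correlation coefficient. The per-patient DSC and D95-deviation values below are invented for illustration only:

```python
import numpy as np

# Hypothetical per-patient values: geometric agreement of the CTV
# (volumetric DSC) and the resulting deviation in target D95 (Gy).
dsc   = np.array([0.96, 0.94, 0.91, 0.89, 0.85])
d_d95 = np.array([0.2, 0.5, 0.9, 1.4, 2.1])

r = np.corrcoef(dsc, d_d95)[0, 1]
print(r)  # strongly negative: worse overlap, larger dose deviation
```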
Affiliation(s)
- Jing Li, Yongchang Wu, Lan Liang, Guangjun Li, Sen Bai: Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Ying Song: Radiotherapy Physics & Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, China; Machine Intelligence Laboratory, College of Computer Science, Chengdu, China

35
Wahid KA, Sahin O, Kundu S, Lin D, Alanis A, Tehami S, Kamel S, Duke S, Sherer MV, Rasmussen M, Korreman S, Fuentes D, Cislo M, Nelms BE, Christodouleas JP, Murphy JD, Mohamed ASR, He R, Naser MA, Gillespie EF, Fuller CD. Determining the role of radiation oncologist demographic factors on segmentation quality: insights from a crowd-sourced challenge using Bayesian estimation. medRxiv 2023:2023.08.30.23294786. [PMID: 37693394] [PMCID: PMC10491357] [DOI: 10.1101/2023.08.30.23294786]
Abstract
BACKGROUND Medical image auto-segmentation is poised to revolutionize radiotherapy workflows. The quality of auto-segmentation training data, primarily derived from clinician observers, is of utmost importance. However, the factors influencing the quality of these clinician-derived segmentations have yet to be fully understood or quantified. Therefore, the purpose of this study was to determine the role of common observer demographic variables on quantitative segmentation performance. METHODS Organ at risk (OAR) and tumor volume segmentations provided by radiation oncologist observers from the Contouring Collaborative for Consensus in Radiation Oncology public dataset were utilized for this study. Segmentations were derived from five separate disease sites comprising one patient case each: breast, sarcoma, head and neck (H&N), gynecologic (GYN), and gastrointestinal (GI). Segmentation quality was determined on a structure-by-structure basis by comparing the observer segmentations with an expert-derived consensus gold standard, primarily using the Dice Similarity Coefficient (DSC); surface DSC was investigated as a secondary metric. Metrics were stratified into binary groups based on previously established structure-specific expert-derived interobserver variability (IOV) cutoffs. Generalized linear mixed-effects models using Markov chain Monte Carlo Bayesian estimation were used to investigate the association between demographic variables and the binarized segmentation quality for each disease site separately. Variables with a highest density interval excluding zero (loosely analogous to frequentist significance) were considered to substantially impact the outcome measure. RESULTS After filtering by practicing radiation oncologists, 574, 110, 452, 112, and 48 structure observations remained for the breast, sarcoma, H&N, GYN, and GI cases, respectively.
The median percentage of observations that crossed the expert DSC IOV cutoff when stratified by structure type was 55% and 31% for OARs and tumor volumes, respectively. Bayesian regression analysis revealed tumor category had a substantial negative impact on binarized DSC for the breast (coefficient mean ± standard deviation: -0.97 ± 0.20), sarcoma (-1.04 ± 0.54), H&N (-1.00 ± 0.24), and GI (-2.95 ± 0.98) cases. There were no clear recurring relationships between segmentation quality and demographic variables across the cases, with most variables demonstrating large standard deviations and wide highest density intervals. CONCLUSION Our study highlights substantial uncertainty surrounding conventionally presumed factors influencing segmentation quality. Future studies should investigate additional demographic variables, more patients and imaging modalities, and alternative metrics of segmentation acceptability.
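The binarization step described above (comparing each observation's DSC against a structure-specific IOV cutoff) can be sketched with hypothetical cutoffs and scores:

```python
# Hypothetical structure-specific interobserver-variability (IOV) DSC
# cutoffs and observer scores: an observation "passes" when its DSC
# against the consensus gold standard meets the cutoff.
iov_cutoff = {"parotid": 0.70, "ctv": 0.80, "gtv": 0.60}
observations = [
    ("parotid", 0.74), ("parotid", 0.65),
    ("ctv", 0.85), ("gtv", 0.55), ("gtv", 0.71),
]

binarized = [(s, dsc >= iov_cutoff[s]) for s, dsc in observations]
pass_rate = sum(ok for _, ok in binarized) / len(binarized)
print(pass_rate)  # 0.6
```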
Affiliation(s)
- Kareem A. Wahid, Onur Sahin, Anthony Alanis, Salik Tehami, Serageldin Kamel, Abdallah S. R. Mohamed, Renjie He, Mohammed A. Naser, Clifton D. Fuller: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A. Wahid, David Fuentes: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Suprateek Kundu: Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Diana Lin, Michael Cislo: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY
- Simon Duke: Department of Radiation Oncology, Cambridge University Hospitals, Cambridge, UK
- Michael V. Sherer, James D. Murphy: Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA, USA
- Stine Korreman: Department of Oncology, Aarhus University Hospital, Denmark
- John P. Christodouleas: Department of Radiation Oncology, The University of Pennsylvania Cancer Center, Philadelphia, PA, USA; Elekta, Atlanta, GA, USA

36
Zaman FA, Roy TK, Sonka M, Wu X. Patch-wise 3D segmentation quality assessment combining reconstruction and regression networks. J Med Imaging (Bellingham) 2023; 10:054002. [PMID: 37692093] [PMCID: PMC10490907] [DOI: 10.1117/1.jmi.10.5.054002]
Abstract
Purpose General deep-learning (DL)-based semantic segmentation methods with expert-level accuracy may fail in 3D medical image segmentation due to complex tissue structures, the lack of large datasets with ground truth, etc. For expeditious diagnosis, there is a compelling need to predict segmentation quality without ground truth. In some medical imaging applications, maintaining segmentation quality is crucial in the localized regions where disease is prevalent, rather than just globally maintaining high average segmentation quality. We propose a DL framework to identify regions of segmentation inaccuracy by combining a 3D generative adversarial network (GAN) and a convolutional regression network. Approach Our approach is methodologically based on the learned ability to reconstruct the original images, identifying regions of location-specific segmentation failure in which the reconstruction does not match the underlying original image. We use a conditional GAN to reconstruct input images masked by the segmentation results. The regression network is trained to predict the patch-wise Dice similarity coefficient (DSC), conditioned on the segmentation results. The method relies directly on the extracted segmentation-related features and does not need ground truth during the inference phase to identify erroneous regions in the computed segmentation. Results We evaluated the proposed method on two public datasets: Osteoarthritis Initiative 4D (3D + time) knee MRI (knee-MR) and 3D non-small cell lung cancer CT (lung-CT). For the patch-wise DSC prediction, we observed mean absolute errors of 0.01 and 0.04 with respect to the independent standard for the knee-MR and lung-CT data, respectively. Conclusions This method shows promising results in localizing erroneous segmentation regions, which may aid the downstream analysis of disease diagnosis and prognosis prediction.
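The patch-wise DSC that the regression network above learns to predict can be computed directly whenever ground truth is available. A minimal sketch on non-overlapping cubic patches (toy masks, hypothetical patch size; the paper's patch layout may differ):

```python
import numpy as np

def patchwise_dsc(seg, gt, patch=4):
    """Dice computed independently on non-overlapping cubic patches,
    giving a map of local segmentation quality (empty patches -> 1.0)."""
    grid = tuple(s // patch for s in seg.shape)
    out = np.ones(grid)
    for idx in np.ndindex(*grid):
        sl = tuple(slice(i * patch, (i + 1) * patch) for i in idx)
        s, g = seg[sl], gt[sl]
        denom = s.sum() + g.sum()
        if denom:
            out[idx] = 2.0 * np.logical_and(s, g).sum() / denom
    return out

# Toy case: agreement in one corner, a missed region in the other
seg = np.zeros((8, 8, 8), bool); seg[:4, :4, :4] = True
gt = seg.copy(); gt[4:, 4:, 4:] = True
qmap = patchwise_dsc(seg, gt, patch=4)
print(qmap[0, 0, 0], qmap[1, 1, 1])  # 1.0 0.0
```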
Affiliation(s)
- Fahim Ahmed Zaman, Milan Sonka, Xiaodong Wu: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, United States
- Tarun Kanti Roy: Department of Computer Science, University of Iowa, Iowa City, Iowa, United States

37
McQuinlan Y, Brouwer CL, Lin Z, Gan Y, Sung Kim J, van Elmpt W, Gooding MJ. An investigation into the risk of population bias in deep learning autocontouring. Radiother Oncol 2023; 186:109747. [PMID: 37330053] [DOI: 10.1016/j.radonc.2023.109747]
Abstract
BACKGROUND AND PURPOSE To date, data used in the development of Deep Learning-based automatic contouring (DLC) algorithms have been largely sourced from single geographic populations. This study aimed to evaluate the risk of population-based bias by determining whether the performance of an autocontouring system is impacted by geographic population. MATERIALS AND METHODS Eighty deidentified head and neck CT scans were collected from four clinics in Europe (n = 2) and Asia (n = 2). A single observer manually delineated 16 organs-at-risk in each. Subsequently, the data were contoured using a DLC solution trained using single-institution (European) data. Autocontours were compared to manual delineations using quantitative measures. A Kruskal-Wallis test was used to test for any difference between populations. Clinical acceptability of automatic and manual contours to observers from each participating institution was assessed using a blinded subjective evaluation. RESULTS Seven organs showed a significant difference in volume between groups. Four organs showed statistical differences in quantitative similarity measures. The qualitative test showed greater variation in acceptance of contouring between observers than between data from different origins, with greater acceptance by the South Korean observers. CONCLUSION Much of the statistical difference in quantitative performance could be explained by the difference in organ volume impacting the contour similarity measures and the small sample size. However, the qualitative assessment suggests that observer perception bias has a greater impact on apparent clinical acceptability than quantitatively observed differences. This investigation of potential geographic bias should extend to more patients, populations, and anatomical regions in the future.
Affiliation(s)
- Charlotte L Brouwer: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands
- Zhixiong Lin, Yong Gan: Shantou University Medical Centre, Guangdong, China
- Jin Sung Kim: Yonsei University Health System, Seoul, Republic of Korea
- Wouter van Elmpt: Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Mark J Gooding: Mirada Medical Ltd, Oxford, United Kingdom; Inpictura Ltd, Oxford, United Kingdom

38
Zaman FA, Zhang L, Zhang H, Sonka M, Wu X. Segmentation quality assessment by automated detection of erroneous surface regions in medical images. Comput Biol Med 2023; 164:107324. [PMID: 37591161] [PMCID: PMC10563140] [DOI: 10.1016/j.compbiomed.2023.107324]
Abstract
Despite the advancement of deep learning-based semantic segmentation methods, which have achieved the accuracy levels of field experts in many computer vision applications, the same general approaches may frequently fail in 3D medical image segmentation due to complex tissue structures, noisy acquisition, and disease-related pathologies, as well as the lack of sufficiently large datasets with associated annotations. For expeditious diagnosis and quantitative image analysis in large-scale clinical trials, there is a compelling need to predict segmentation quality without ground truth. In this paper, we propose a deep learning framework to locate erroneous regions on the boundary surfaces of segmented objects for quality control and assessment of segmentation. A convolutional neural network (CNN) is explored to learn boundary-related image features of multiple objects that can be used to identify location-specific inaccurate segmentation. The predicted error locations can facilitate efficient user interaction for interactive image segmentation (IIS). We evaluated the proposed method on two datasets: Osteoarthritis Initiative (OAI) 3D knee MRI and 3D calf muscle MRI. Average sensitivity scores of 0.95 and 0.92 and average positive predictive values of 0.78 and 0.91 were achieved, respectively, for erroneous surface region detection in knee cartilage segmentation and calf muscle segmentation. Our experiments demonstrated the promising performance of the proposed method for segmentation quality assessment by automated detection of erroneous surface regions in medical images.
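The sensitivity and positive predictive value reported above follow from simple confusion counts. The counts below are invented to reproduce the knee-cartilage figures for illustration only:

```python
def sensitivity_ppv(tp: int, fp: int, fn: int):
    """Sensitivity (recall) and positive predictive value (precision)."""
    return tp / (tp + fn), tp / (tp + fp)

# Invented counts: 95 correctly flagged erroneous regions,
# 27 false alarms, 5 missed erroneous regions
sens, ppv = sensitivity_ppv(95, 27, 5)
print(round(sens, 2), round(ppv, 2))  # 0.95 0.78
```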
Affiliation(s)
- Fahim Ahmed Zaman, Lichun Zhang, Honghai Zhang, Milan Sonka, Xiaodong Wu: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA

39
Choi B, Olberg S, Park JC, Kim JS, Shrestha DK, Yaddanapudi S, Furutani KM, Beltran CJ. Technical note: Progressive deep learning: An accelerated training strategy for medical image segmentation. Med Phys 2023; 50:5075-5087. [PMID: 36763566] [DOI: 10.1002/mp.16267]
Abstract
BACKGROUND Recent advancements in Deep Learning (DL) methodologies have led to state-of-the-art performance in a wide range of applications, especially in object recognition, classification, and segmentation of medical images. However, training modern DL models requires a large amount of computation and long training times due to the complex nature of network structures and the large number of training datasets involved. Moreover, selecting an optimized hyperparameter configuration for a given DL network is an intensive, repetitive manual process. PURPOSE In this study, we present a novel approach to accelerate the training time of DL models via the progressive feeding of training datasets based on similarity measures for medical image segmentation. We term this approach Progressive Deep Learning (PDL). METHODS The two-stage PDL approach was tested on the auto-segmentation task for two imaging modalities: CT and MRI. The training datasets were ranked according to similarity between samples, measured by Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Quality Image Index (UQI) values. At the start of the training process, a relatively coarse sampling of higher-ranked training datasets was used to optimize the hyperparameters of the DL network. The higher-ranked samples were then used in step 1 to yield accelerated loss minimization in early training epochs, and the total dataset was added in step 2 for the remainder of training. RESULTS Our results demonstrate that the PDL approach can reduce the training time by nearly half (∼49%) and can predict segmentations (CT U-net/DenseNet dice coefficient: 0.9506/0.9508, MR U-net/DenseNet dice coefficient: 0.9508/0.9510) without major statistical difference (Wilcoxon signed-rank test) compared to the conventional DL approach.
The total training times with a fixed cutoff at 0.95 DSC for the CT dataset using DenseNet and U-Net architectures, respectively, were 17 h, 20 min and 4 h, 45 min in the conventional case compared to 8 h, 45 min and 2 h, 20 min with PDL. For the MRI dataset, the total training times using the same architectures were 2 h, 54 min and 52 min in the conventional case and 1 h, 14 min and 25 min with PDL. CONCLUSION The proposed PDL training approach offers the ability to substantially reduce the training time for medical image segmentation while maintaining the performance achieved in the conventional case.
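The similarity ranking that drives PDL's progressive feeding can be sketched with two of the named measures (MSE and PSNR); the reference image and candidates below are synthetic stand-ins for training slices:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=1.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Rank synthetic candidate images by similarity to a reference; the
# most similar samples would seed the first (coarse) training stage.
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
candidates = [ref + rng.normal(0, s, ref.shape) for s in (0.01, 0.1, 0.3)]
order = sorted(range(len(candidates)), key=lambda i: mse(ref, candidates[i]))
print(order)  # [0, 1, 2]: least-perturbed sample ranks first
```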
Collapse
Affiliation(s)
- Byongsu Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sven Olberg
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Justin C Park
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Oncosoft Inc., Seoul, South Korea
- Deepak K Shrestha
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Keith M Furutani
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Chris J Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
40
Wang J, Peng Y. MHL-Net: A Multistage Hierarchical Learning Network for Head and Neck Multiorgan Segmentation. IEEE J Biomed Health Inform 2023; 27:4074-4085. [PMID: 37171918 DOI: 10.1109/jbhi.2023.3275746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Accurate segmentation of head and neck organs at risk is crucial in radiotherapy. However, existing methods suffer from incomplete feature mining, insufficient information utilization, and difficulty in simultaneously improving the segmentation of both small and large organs. In this paper, a multistage hierarchical learning network is designed to fully extract multidimensional features, combining anatomical prior information with imaging features and using multistage subnetworks to improve segmentation performance. First, multilevel subnetworks are constructed for primary segmentation, localization, and fine segmentation by dividing organs into two levels: large and small. Each subnetwork has its own learning focus while reusing features and sharing information with the others, which comprehensively improves the segmentation performance of all organs. Second, an anatomical prior probability map and a boundary contour attention mechanism are developed to address the problem of complex anatomical shapes. Prior information and boundary contour features effectively assist in detecting and segmenting these shapes. Finally, a multidimensional combination attention mechanism is proposed to analyze axial, coronal, and sagittal information, capture spatial and channel features, and maximize the use of the structural information and semantic features of 3D medical images. Experimental results on several datasets showed that our method was competitive with state-of-the-art methods and improved the segmentation results for multiscale organs.
41
Amjad A, Xu J, Thill D, Zhang Y, Ding J, Paulson E, Hall W, Erickson BA, Li XA. Deep learning auto-segmentation on multi-sequence magnetic resonance images for upper abdominal organs. Front Oncol 2023; 13:1209558. [PMID: 37483486 PMCID: PMC10358771 DOI: 10.3389/fonc.2023.1209558] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 06/19/2023] [Indexed: 07/25/2023] Open
Abstract
Introduction Multi-sequence, multi-parameter MRIs are often used to define targets and/or organs at risk (OAR) in radiation therapy (RT) planning. Deep learning has so far focused on developing auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep-learning-based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods Using a previously developed 3DResUnet network, a mS-DLAS model was trained and tested on four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. Performance of all models was quantitatively evaluated using six surface and volumetric accuracy metrics. Results The developed DLAS models were able to generate reasonable contours of 12 upper-abdomen organs within 21 seconds per testing case. For the mS-DLAS model, the 3D average values of the Dice similarity coefficient (DSC), mean distance to agreement (MDA, mm), 95th percentile Hausdorff distance (HD95%, mm), percent volume difference (PVD), surface DSC (sDSC), and relative added path length (rAPL, mm/cc) over all organs were 0.87, 1.79, 7.43, -8.95, 0.82, and 12.25, respectively. Collectively, 71% of the contours auto-segmented by the three models were of relatively high quality. Additionally, the obtained mS-DLAS successfully segmented 9 out of 16 MRI sequences that were not used in model training. Conclusion We have developed an MRI-based mS-DLAS model for auto-segmentation of upper abdominal organs on MRI. Multi-sequence segmentation is desirable in routine clinical practice of RT for accurate organ and target delineation, particularly for abdominal tumors.
Our work will act as a stepping stone toward fast and accurate segmentation on multi-contrast MRI and pave the way for MR-only guided radiation therapy.
Affiliation(s)
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Dan Thill
- Elekta Inc., St. Charles, MO, United States
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Jie Ding
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- William Hall
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Beth A. Erickson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- X. Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
42
Song Y, Hu J, Wang Q, Yu C, Su J, Chen L, Jiang X, Chen B, Zhang L, Yu Q, Li P, Wang F, Bai S, Luo Y, Yi Z. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin Transl Radiat Oncol 2023; 41:100635. [PMID: 37251619 PMCID: PMC10213188 DOI: 10.1016/j.ctro.2023.100635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 04/26/2023] [Accepted: 05/01/2023] [Indexed: 05/31/2023] Open
Abstract
Background To comprehensively investigate the behaviors of oncologists with different working experiences and institute group styles in deep learning-based organs-at-risk (OAR) contouring. Methods A deep learning-based contouring system (DLCS) was modeled from 188 CT datasets of patients with nasopharyngeal carcinoma (NPC) in institute A. Three institute oncology groups, A, B, and C, were included; each contained a beginner and an expert. For each of the 28 OARs, two trials were performed on ten test cases: manual contouring first and post-DLCS editing later. Contouring performance and group consistency were quantified by volumetric and surface Dice coefficients. A volume-based and a surface-based oncologist satisfaction rate (VOSR and SOSR) were defined to evaluate the oncologists' acceptance of DLCS. Results With DLCS, experience-related inconsistency was eliminated. Intra-institute inconsistency was eliminated for group C but persisted for groups A and B. Group C benefited most from DLCS, with the highest number of improved OARs (8 for volumetric Dice and 10 for surface Dice), followed by group B. Beginners obtained more improved OARs than experts (7 vs. 4 in volumetric Dice and 5 vs. 4 in surface Dice). VOSR and SOSR varied across institute groups, but the rates of beginners were all significantly higher than those of experts for OARs with significant experience-group differences. A remarkable positive linear relationship was found between VOSR and post-DLCS-edition volumetric Dice, with a coefficient of 0.78. Conclusions The DLCS was effective across institutes, and beginners benefited more than experts.
Affiliation(s)
- Ying Song
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Qiang Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Chengrong Yu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Jiachong Su
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Lin Chen
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Xiaorui Jiang
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Bo Chen
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Lei Zhang
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Qian Yu
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Ping Li
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Feng Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Sen Bai
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Yong Luo
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
43
Kesävuori R, Kaseva T, Salli E, Raivio P, Savolainen S, Kangasniemi M. Deep learning-aided extraction of outer aortic surface from CT angiography scans of patients with Stanford type B aortic dissection. Eur Radiol Exp 2023; 7:35. [PMID: 37380806 DOI: 10.1186/s41747-023-00342-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 04/01/2023] [Indexed: 06/30/2023] Open
Abstract
BACKGROUND Guidelines recommend that aortic dimension measurements in aortic dissection should include the aortic wall. This study aimed to evaluate two-dimensional (2D)- and three-dimensional (3D)-based deep learning approaches for extraction of the outer aortic surface in computed tomography angiography (CTA) scans of Stanford type B aortic dissection (TBAD) patients and to assess the speed of different whole aorta (WA) segmentation approaches. METHODS A total of 240 patients diagnosed with TBAD between January 2007 and December 2019 were retrospectively reviewed for this study; 206 CTA scans from 206 patients with acute, subacute, or chronic TBAD, acquired with various scanners in multiple hospital units, were included. Ground truth (GT) WAs for 80 scans were segmented by a radiologist using open-source software. The remaining 126 GT WAs were generated via a semi-automatic segmentation process in which an ensemble of 3D convolutional neural networks (CNNs) aided the radiologist. Using 136 scans for training, 30 for validation, and 40 for testing, 2D and 3D CNNs were trained to automatically segment the WA. The main evaluation metrics for outer surface extraction and segmentation accuracy were the normalized surface Dice (NSD) and the Dice coefficient score (DCS), respectively. RESULTS The 2D CNN outperformed the 3D CNN in NSD score (0.92 versus 0.90, p = 0.009), and both CNNs had equal DCS (0.96 versus 0.96, p = 0.110). Manual and semi-automatic segmentation times for one CTA scan were approximately 1 and 0.5 h, respectively. CONCLUSIONS Both CNNs segmented the WA with high DCS, but based on NSD, better accuracy may be required before clinical application. CNN-based semi-automatic segmentation methods can expedite the generation of GTs. RELEVANCE STATEMENT Deep learning can speed up the creation of ground truth segmentations. CNNs can extract the outer aortic surface in patients with type B aortic dissection.
KEY POINTS • 2D and 3D convolutional neural networks (CNNs) can extract the outer aortic surface accurately. • Equal Dice coefficient score (0.96) was reached with 2D and 3D CNNs. • Deep learning can expedite the creation of ground truth segmentations.
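Since this study (like most entries in this list) reports the Dice coefficient score as its volume-overlap metric, a minimal sketch of that metric may be useful. This is the generic set-based formulation, 2·|A∩B| / (|A|+|B|), not code from the paper, and the voxel coordinates below are invented for illustration.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of voxel
    coordinates: 2 * |A intersect B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    overlap = len(mask_a & mask_b)
    return 2.0 * overlap / (len(mask_a) + len(mask_b))

ground_truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
prediction = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice_coefficient(ground_truth, prediction)  # 3 shared voxels -> 0.75
```

The surface-based NSD used alongside DCS differs in that it compares boundary points within a tolerance rather than full volumes, which is why the two metrics can disagree, as they do for the 2D and 3D CNNs here.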
Affiliation(s)
- Risto Kesävuori
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Tuomas Kaseva
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Peter Raivio
- Department of Cardiac Surgery, Heart and Lung Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
44
Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023; 13:946. [PMID: 37373935 DOI: 10.3390/jpm13060946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 06/01/2023] [Accepted: 06/01/2023] [Indexed: 06/29/2023] Open
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs at risk (OARs), inducing challenging radiotherapy (RT) planning, optimization, and delivery. In this review, we provided a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories, representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on the OARs segmentation process. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool to automate the RT workflow for the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups, including clinicians and computer scientists.
Affiliation(s)
- Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
- Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
45
Bakx N, van der Sangen M, Theuws J, Bluemink H, Hurkmans C. Comparison of the output of a deep learning segmentation model for locoregional breast cancer radiotherapy trained on 2 different datasets. Tech Innov Patient Support Radiat Oncol 2023; 26:100209. [PMID: 37213441 PMCID: PMC10199413 DOI: 10.1016/j.tipsro.2023.100209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 04/06/2023] [Accepted: 05/09/2023] [Indexed: 05/23/2023] Open
Abstract
Introduction The development of deep learning (DL) models for auto-segmentation is increasing, and more models are becoming commercially available. Commercial models are mostly trained on external data. To study the effect of using a model trained on external data, compared to the same model trained on in-house collected data, the performance of these two DL models was evaluated. Methods The evaluation was performed using in-house collected data of 30 breast cancer patients. Quantitative analysis was performed using the Dice similarity coefficient (DSC), surface DSC (sDSC), and the 95th percentile of Hausdorff distance (95% HD). These values were compared with previously reported inter-observer variations (IOV). Results For a number of structures, statistically significant differences were found between the two models. For organs at risk, mean DSC values ranged from 0.63 to 0.98 and from 0.71 to 0.96 for the in-house and external model, respectively. For target volumes, mean DSC values of 0.57 to 0.94 and 0.33 to 0.92 were found. The difference in 95% HD values between the two models ranged from 0.08 to 3.23 mm, except for CTVn4 with 9.95 mm. For the external model, both DSC and 95% HD are outside the range of IOV for CTVn4, whereas for the in-house model this is the case only for the thyroid DSC. Conclusions Statistically significant differences were found between both models, mostly within published inter-observer variations, showing the clinical usefulness of both models. Our findings could encourage discussion and revision of existing guidelines to further decrease inter-observer as well as inter-institute variability.
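The 95% HD reported here clips the worst 5% of surface distances so that a single outlier point does not dominate the comparison. Below is a small illustrative implementation on 2D point sets; it is our own sketch (not the study's code) and uses a naive O(n·m) nearest-neighbour search with a nearest-rank percentile, where production code would use a k-d tree.

```python
import math

def hd95(points_a, points_b, q=0.95):
    """Symmetric qth-percentile Hausdorff distance between two contours,
    each given as a list of (x, y) points."""
    def directed(src, dst):
        # sorted distances from each point in src to its nearest point in dst
        return sorted(min(math.dist(p, r) for r in dst) for p in src)

    def percentile(vals, q):
        # nearest-rank percentile of an ascending list
        idx = min(len(vals) - 1, math.ceil(q * len(vals)) - 1)
        return vals[idx]

    return max(percentile(directed(points_a, points_b), q),
               percentile(directed(points_b, points_a), q))

contour_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
contour_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
distance = hd95(contour_a, contour_b)  # every point is exactly 1.0 away -> 1.0
```

This is why 95% HD complements DSC in studies like this one: DSC summarizes bulk overlap, while 95% HD exposes localized boundary deviations such as the 9.95 mm CTVn4 discrepancy.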
Affiliation(s)
- Nienke Bakx
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Jacqueline Theuws
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Hanneke Bluemink
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Coen Hurkmans
- Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands
- Technical University Eindhoven, Faculties of Physics and Electrical Engineering, 5600MB Eindhoven, the Netherlands
- Corresponding author at: Catharina Hospital, Department of Radiation Oncology, 5602ZA Eindhoven, the Netherlands.
46
Busch F, Xu L, Sushko D, Weidlich M, Truhn D, Müller-Franzes G, Heimer MM, Niehues SM, Makowski MR, Hinsche M, Vahldiek JL, Aerts HJ, Adams LC, Bressem KK. Dual center validation of deep learning for automated multi-label segmentation of thoracic anatomy in bedside chest radiographs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 234:107505. [PMID: 37003043 DOI: 10.1016/j.cmpb.2023.107505] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 02/17/2023] [Accepted: 03/21/2023] [Indexed: 06/19/2023]
Abstract
BACKGROUND AND OBJECTIVES Bedside chest radiographs (CXRs) are challenging to interpret but important for monitoring cardiothoracic disease and invasive therapy devices in critical care and emergency medicine. Taking surrounding anatomy into account is likely to improve the diagnostic accuracy of artificial intelligence and bring its performance closer to that of a radiologist. Therefore, we aimed to develop a deep convolutional neural network for efficient automatic anatomy segmentation of bedside CXRs. METHODS To improve the efficiency of the segmentation process, we introduced a "human-in-the-loop" segmentation workflow with an active learning approach, looking at five major anatomical structures in the chest (heart, lungs, mediastinum, trachea, and clavicles). This allowed us to decrease the time needed for segmentation by 32% and select the most complex cases to utilize human expert annotators efficiently. After annotation of 2,000 CXRs from different Level 1 medical centers at Charité - University Hospital Berlin, there was no relevant improvement in model performance, and the annotation process was stopped. A 5-layer U-ResNet was trained for 150 epochs using a combined soft Dice similarity coefficient (DSC) and cross-entropy as a loss function. DSC, Jaccard index (JI), Hausdorff distance (HD) in mm, and average symmetric surface distance (ASSD) in mm were used to assess model performance. External validation was performed using an independent external test dataset from Aachen University Hospital (n = 20). RESULTS The final training, validation, and testing dataset consisted of 1900/50/50 segmentation masks for each anatomical structure. Our model achieved a mean DSC/JI/HD/ASSD of 0.93/0.88/32.1/5.8 for the lung, 0.92/0.86/21.65/4.85 for the mediastinum, 0.91/0.84/11.83/1.35 for the clavicles, 0.9/0.85/9.6/2.19 for the trachea, and 0.88/0.8/31.74/8.73 for the heart. Validation using the external dataset showed an overall robust performance of our algorithm. 
CONCLUSIONS Using an efficient computer-aided segmentation method with active learning, our anatomy-based model achieves comparable performance to state-of-the-art approaches. Instead of only segmenting the non-overlapping portions of the organs, as previous studies did, a closer approximation to actual anatomy is achieved by segmenting along the natural anatomical borders. This novel anatomy approach could be useful for developing pathology models for accurate and quantifiable diagnosis.
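This abstract reports both the Dice similarity coefficient (DSC) and the Jaccard index (JI) per structure. For any single mask pair the two are deterministic transforms of each other, JI = DSC / (2 − DSC), so paired values can be sanity-checked. The conversion sketch below is ours, not the authors' code; note that study-level *means* of DSC and JI need not transform exactly, since the relation holds per mask pair, not for averages.

```python
def jaccard_from_dice(dsc):
    """Jaccard index implied by a Dice coefficient for the same mask pair:
    JI = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

def dice_from_jaccard(ji):
    """Inverse transform: DSC = 2 * JI / (1 + JI)."""
    return 2.0 * ji / (1.0 + ji)

# e.g. the reported lung pair DSC = 0.93, JI = 0.88: a per-pair DSC of 0.93
# implies JI of about 0.87, close to the reported mean up to averaging effects
implied_ji = jaccard_from_dice(0.93)
```
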
Affiliation(s)
- Felix Busch
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Department of Anesthesiology, Division of Operative Intensive Care Medicine, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany.
- Lina Xu
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Dmitry Sushko
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Matthias Weidlich
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Maurice M Heimer
- Department of Radiology, Ludwig-Maximilians-University of Munich, Munich, Germany
- Stefan M Niehues
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Marcus R Makowski
- Department of Radiology, Technical University of Munich, Munich, Germany
- Markus Hinsche
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Janis L Vahldiek
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Hugo Jwl Aerts
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Departments of Radiation Oncology and Radiology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
- Lisa C Adams
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Keno K Bressem
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
47
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:cancers15092573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023] Open
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Here we summarize recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging, addressing the benefits and challenges of the resulting opportunities with examples. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, the rigorous assessment of quantitative CT and MR imaging data accuracy, and reliability for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in radiology and oncology fields. Herein, we illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast-modality images, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Affiliation(s)
- Ramesh Paudyal
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Akash D Shah
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Oguz Akin
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amaresha Shridhar Konar
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Vaios Hatzoglou
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Usman Mahmood
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Nancy Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard J Wong
- Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amita Shukla-Dave
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
48
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160 PMCID: PMC10115982 DOI: 10.3389/fonc.2023.1137803] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 03/24/2023] [Indexed: 04/09/2023] Open
Abstract
Introduction: Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL)-based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data.
Methods: Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours from the MDAs and the DL model, were compared to the GS for that patient.
Results: Mean time for initial MDA contouring was 2.3 hours (range, 1.6-3.8 hours) and RO revision took 1.1 hours (range, 0.4-4.4 hours), compared to 0.7 hours (range, 0.1-2.0 hours) for RO revision of the DL contours. Total time was reduced by 76% (95% CI: 65%-88%) and RO-revision time by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours than for RO-revised MDA contours, including the volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs.
Conclusion: DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
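Several entries in this list report segmentation quality as a Dice similarity coefficient (the VDSC above, and the Dice scores in the other abstracts). As a minimal illustrative sketch of that metric for binary masks (not code from any cited paper; the helper name and toy arrays are assumptions for the example):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 1D "masks": 3 overlapping voxels, 4 predicted and 4 true
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(round(dice_coefficient(pred, truth), 2))  # 0.75
```

The same formula applies voxel-wise to 3D masks, which is how the volumetric scores in these studies are typically computed.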
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Correspondence: J. John Lucido
- Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel
- Google Health, Mountain View, CA, United States
- Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
49
de Vries L, Emmer BJ, Majoie CBLM, Marquering HA, Gavves E. PerfU-Net: Baseline infarct estimation from CT perfusion source data for acute ischemic stroke. Med Image Anal 2023; 85:102749. [PMID: 36731276 DOI: 10.1016/j.media.2023.102749] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 11/08/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
CT perfusion imaging is commonly used for infarct core quantification in acute ischemic stroke patients. The outcomes and perfusion maps of CT perfusion software, however, show many discrepancies between vendors. We aim to perform infarct core segmentation directly from CT perfusion source data using machine learning, eliminating the need for the perfusion maps produced by standard CT perfusion software. To this end, we present a symmetry-aware spatio-temporal segmentation model that encodes the micro-perfusion dynamics in the brain while decoding a static segmentation map for infarct core assessment. Our proposed spatio-temporal PerfU-Net employs an attention module on the skip connections to match the dimensions of the encoder and decoder. We train and evaluate the method on 94 and 62 scans, respectively, using the Ischemic Stroke Lesion Segmentation (ISLES) 2018 challenge data. We achieve state-of-the-art results compared to methods that use only CT perfusion source imaging, with a Dice score of 0.46, and are almost on par with methods that use perfusion maps from third-party software, even though these maps are known to vary widely between vendors. Moreover, we achieve improved performance compared to the simple perfusion map analysis used in clinical practice.
Affiliation(s)
- Lucas de Vries
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC, Department of Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; University of Amsterdam, Informatics Institute, Science Park 900, Amsterdam, 1098 XH, The Netherlands
- Bart J Emmer
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
- Charles B L M Majoie
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
- Henk A Marquering
- Amsterdam UMC, Department of Radiology and Nuclear Medicine, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands; Amsterdam UMC, Department of Biomedical Engineering and Physics, Meibergdreef 9, Amsterdam, 1105 AZ, The Netherlands
- Efstratios Gavves
- University of Amsterdam, Informatics Institute, Science Park 900, Amsterdam, 1098 XH, The Netherlands
50
Naser MA, Wahid KA, Ahmed S, Salama V, Dede C, Edwards BW, Lin R, McDonald B, Salzillo TC, He R, Ding Y, Abdelaal MA, Thill D, O'Connell N, Willcut V, Christodouleas JP, Lai SY, Fuller CD, Mohamed ASR. Quality assurance assessment of intra-acquisition diffusion-weighted and T2-weighted magnetic resonance imaging registration and contour propagation for head and neck cancer radiotherapy. Med Phys 2023; 50:2089-2099. [PMID: 36519973 PMCID: PMC10121748 DOI: 10.1002/mp.16128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 11/10/2022] [Accepted: 11/13/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND/PURPOSE Adequate image registration of anatomical and functional magnetic resonance imaging (MRI) scans is necessary for MR-guided head and neck cancer (HNC) adaptive radiotherapy planning. Despite the quantitative capabilities of diffusion-weighted imaging (DWI) MRI for treatment plan adaptation, geometric distortion remains a considerable limitation. Therefore, we systematically investigated various deformable image registration (DIR) methods to co-register DWI and T2-weighted (T2W) images. MATERIALS/METHODS We compared three commercial (ADMIRE, Velocity, Raystation) and three open-source (Elastix with default settings [Elastix Default], Elastix with parameter set 23 [Elastix 23], Demons) post-acquisition DIR methods applied to T2W and DWI MRI images acquired during the same imaging session in twenty immobilized HNC patients. In addition, we used the non-registered images (None) as a control comparator. Ground-truth segmentations of radiotherapy structures (tumour and organs at risk) were generated by a physician expert on both image sequences. For each registration approach, structures were propagated from T2W to DWI images. These propagated structures were then compared with ground-truth DWI structures using the Dice similarity coefficient and mean surface distance. RESULTS 19 left submandibular glands, 18 right submandibular glands, 20 left parotid glands, 20 right parotid glands, 20 spinal cords, and 12 tumours were delineated. Most DIR methods took <30 s to execute per case, with the exception of Elastix 23 which took ∼458 s to execute per case. ADMIRE and Elastix 23 demonstrated improved performance over None for all metrics and structures (Bonferroni-corrected p < 0.05), while the other methods did not. Moreover, ADMIRE and Elastix 23 significantly improved performance in individual and pooled analysis compared to all other methods. 
CONCLUSIONS The ADMIRE DIR method offers improved geometric performance with reasonable execution time and should therefore be favoured for registering T2W and DWI images acquired during the same scan session in HNC patients. These results are important for ensuring the appropriate selection of registration strategies for MR-guided radiotherapy.
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Vivian Salama
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Benjamin W Edwards
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Travis C Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Yao Ding
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Moamen Abobakr Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephen Y Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA