1. Tian S, Liu Y, Mao X, Xu X, He S, Jia L, Zhang W, Peng P, Wang J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Sci 2024;115:3415-3425. PMID: 39119927; PMCID: PMC11447882; DOI: 10.1111/cas.16304.
Abstract
Accurate segmentation of glioblastomas (GBMs) is crucial for a precise radiotherapy plan. However, the traditional manual segmentation process is labor-intensive and relies heavily on the experience of radiation oncologists. In this retrospective study, a novel auto-segmentation method is proposed to address these problems. To assess the method's applicability across diverse scenarios, it was developed and evaluated on a cohort of 148 eligible patients drawn from four multicenter datasets, with retrospectively collected noncontrast CT, multisequence MRI scans, and corresponding medical records. All patients had histologically confirmed high-grade glioma (HGG). A deep learning-based method (PKMI-Net) for automatically segmenting the gross tumor volume (GTV) and clinical target volumes (CTV1 and CTV2) of GBMs was proposed by leveraging prior knowledge from multimodal imaging. PKMI-Net segmented the GTV, CTV1, and CTV2 with high accuracy in an 11-patient test set, achieving Dice similarity coefficients (DSC) of 0.94, 0.95, and 0.92; 95% Hausdorff distances (HD95) of 2.07, 1.18, and 3.95 mm; average surface distances (ASD) of 0.69, 0.39, and 1.17 mm; and relative volume differences (RVD) of 5.50%, 9.68%, and 3.97%, respectively. Moreover, the vast majority of the GTV, CTV1, and CTV2 contours produced by PKMI-Net were clinically acceptable and required no revision. In the multicenter evaluation, PKMI-Net exhibited consistent and robust generalizability across the various datasets, demonstrating its effectiveness in automatically segmenting GBMs. The proposed method, using prior knowledge from multimodal imaging, can improve the contouring accuracy of GBMs, with the potential to improve the quality and efficiency of GBM radiotherapy.
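For readers who want to reproduce the reported overlap and volume agreement measures, below is a minimal sketch of the DSC and RVD computed from boolean masks with NumPy; the array shapes and values are illustrative, not data from the study.

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def relative_volume_difference(pred: np.ndarray, ref: np.ndarray) -> float:
    """RVD as a percentage of the reference volume."""
    return 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

# Illustrative 3D masks (True = voxel inside the structure), e.g. a predicted vs. reference GTV
ref = np.zeros((64, 64, 64), dtype=bool)
ref[20:40, 20:40, 20:40] = True
pred = np.zeros_like(ref)
pred[21:41, 20:40, 20:40] = True

print(f"DSC = {dice(pred, ref):.3f}, RVD = {relative_volume_difference(pred, ref):.2f}%")
```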
Affiliation(s)
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Yinglong Liu
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Xinhui Mao
- Radiotherapy Center, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
- Xin Xu
- Department of Radiation Oncology, The Second Affiliated Hospital of Shandong First Medical University, Tai'an, China
- Shumeng He
- Intelligent Radiation Treatment Laboratory, United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Lecheng Jia
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Peng Peng
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Junjie Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
2. Sahu M, Xiao Y, Porras JL, Amanian A, Jain A, Thamboo A, Taylor RH, Creighton FX, Ishii M. A Label-Efficient Framework for Automated Sinonasal CT Segmentation in Image-Guided Surgery. Otolaryngol Head Neck Surg 2024;171:1217-1225. PMID: 38922721; DOI: 10.1002/ohn.868.
Abstract
OBJECTIVE Segmentation, the partitioning of patient imaging into multiple labeled segments, has several potential clinical benefits, but performing it manually is tedious and resource intensive. Automated deep learning (DL)-based segmentation methods can streamline the process. The objective of this study was to evaluate a label-efficient DL pipeline that requires only a small number of annotated scans for semantic segmentation of sinonasal structures in CT scans. STUDY DESIGN Retrospective cohort study. SETTING Academic institution. METHODS Forty CT scans were used in this study, including 16 scans in which the nasal septum (NS), inferior turbinate (IT), maxillary sinus (MS), and optic nerve (ON) were manually annotated using open-source software. A label-efficient DL framework was trained jointly on the few manually labeled scans and the remaining unlabeled scans. Quantitative analysis was then performed to determine the number of annotated scans needed to achieve submillimeter average surface distances (ASDs). RESULTS Our findings reveal that merely four labeled scans are necessary to achieve median submillimeter ASDs for the large sinonasal structures (NS, 0.96 mm; IT, 0.74 mm; MS, 0.43 mm), whereas eight scans are required for the smaller ON (0.80 mm). CONCLUSION We have evaluated a label-efficient pipeline for segmentation of sinonasal structures. Empirical results demonstrate that automated DL methods can achieve submillimeter accuracy using a small number of labeled CT scans. Our pipeline has the potential to improve preoperative planning workflows, robotic- and image-guided navigation systems, computer-assisted diagnosis, and the construction of statistical shape models to quantify population variation. LEVEL OF EVIDENCE N/A.
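The ASD used as the main endpoint here is a surface-distance metric; a minimal sketch of a symmetric ASD computed from boolean masks with SciPy distance transforms follows (mask contents and voxel spacing are illustrative, not from the study).

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boolean map of boundary voxels (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def average_surface_distance(pred, ref, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric ASD in mm between two boolean masks."""
    ps, rs = surface(pred), surface(ref)
    # Distance (mm) from every voxel to the nearest boundary voxel of the other mask
    d_to_ref = ndimage.distance_transform_edt(~rs, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~ps, sampling=spacing)
    return (d_to_ref[ps].sum() + d_to_pred[rs].sum()) / (ps.sum() + rs.sum())

ref = np.zeros((40, 40, 40), dtype=bool)
ref[10:30, 10:30, 10:30] = True
pred = np.roll(ref, 1, axis=0)  # shift by one voxel to create a small surface error
print(f"ASD = {average_surface_distance(pred, ref, spacing=(0.5, 0.5, 0.5)):.3f} mm")
```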
Affiliation(s)
- Manish Sahu
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Yuliang Xiao
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Jose L Porras
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Ameen Amanian
- Division of Otolaryngology, Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada
- Aseem Jain
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Andrew Thamboo
- Division of Otolaryngology, Department of Surgery, University of British Columbia, Vancouver, British Columbia, Canada
- Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X Creighton
- Department of Otolaryngology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Masaru Ishii
- Department of Otolaryngology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
3. Choi B, Beltran CJ, Yoo SK, Kwon NH, Kim JS, Park JC. The InterVision Framework: An Enhanced Fine-Tuning Deep Learning Strategy for Auto-Segmentation in Head and Neck. J Pers Med 2024;14:979. PMID: 39338233; PMCID: PMC11432789; DOI: 10.3390/jpm14090979.
Abstract
Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under dynamic anatomical conditions. However, recontouring and time constraints hinder the implementation of real-time ART workflows. Various auto-segmentation methods, including deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS), have been developed to address these challenges. Despite their potential, clinical implementation of DLS methods remains difficult because large, high-quality datasets are needed to ensure model generalizability. This study introduces the InterVision framework for segmentation, which interpolates between existing images to create intermediate visuals that capture specific patient characteristics. The InterVision model is trained in two steps: (1) generating a general model using the dataset, and (2) tuning the general model using the dataset generated by the InterVision framework. The framework generates intermediate images between existing patient image slices using deformable vectors, effectively capturing unique patient characteristics. By creating a more comprehensive dataset that reflects these individual characteristics, the InterVision model produces more accurate contours than general models. Models were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95%) for 18 structures in 20 test patients. The Dice score was 0.81 ± 0.05 for the general model, 0.82 ± 0.04 for the general fine-tuned model, and 0.85 ± 0.03 for the InterVision model; the Hausdorff distance (mm) was 3.06 ± 1.13, 2.81 ± 0.77, and 2.52 ± 0.50, respectively. The InterVision model thus showed the best performance of the three. The InterVision framework is a versatile approach adaptable to various tasks where prior information is accessible, such as ART settings, and is particularly valuable for accurately predicting complex organs and targets that pose challenges for traditional deep learning algorithms.
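The paper's exact InterVision pipeline is not reproduced here, but the core idea of generating intermediate images by scaling a deformable vector field can be sketched as follows; the 2D setting, the placeholder field `dvf`, and the function names are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def warp_slice(img: np.ndarray, dvf: np.ndarray, alpha: float) -> np.ndarray:
    """Warp a 2D slice by a scaled deformable vector field dvf of shape (2, H, W)."""
    h, w = img.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([gy + alpha * dvf[0], gx + alpha * dvf[1]])
    return ndimage.map_coordinates(img, coords, order=1)

# With a DVF that registers slice_a to slice_b, alpha = 0.5 yields a plausible
# intermediate slice; sweeping alpha in (0, 1) generates the augmentation set.
slice_a = np.random.rand(128, 128)
dvf = np.zeros((2, 128, 128))  # placeholder; in practice from deformable registration
intermediate = warp_slice(slice_a, dvf, alpha=0.5)
```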
Affiliation(s)
- Byongsu Choi
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA
- Sang Kyun Yoo
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Na Hye Kwon
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Jin Sung Kim
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- OncoSoft Inc., Seoul 03776, Republic of Korea
- Justin Chunjoo Park
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA
4. Podobnik G, Ibragimov B, Tappeiner E, Lee C, Kim JS, Mesbah Z, Modzelewski R, Ma Y, Yang F, Rudecki M, Wodziński M, Peterlin P, Strojan P, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge. Radiother Oncol 2024;198:110410. PMID: 38917883; DOI: 10.1016/j.radonc.2024.110410.
Abstract
BACKGROUND AND PURPOSE To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. MATERIALS AND METHODS The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. RESULTS While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams utilized architectures based on the U-Net, with the winning team leveraging rigid MR-to-CT registration combined with network entry-level concatenation of both modalities. CONCLUSION This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
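A pairwise Wilcoxon signed-rank ranking of the kind described above can be sketched as follows; the team names, simulated scores, and win-counting rule are illustrative assumptions, not the challenge's official ranking code.

```python
import numpy as np
from itertools import combinations
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Per-case DSC for each submitted method over the 14 withheld test cases (simulated)
scores = {"team_A": rng.uniform(0.70, 0.80, 14),
          "team_B": rng.uniform(0.65, 0.75, 14),
          "team_C": rng.uniform(0.60, 0.70, 14)}

wins = dict.fromkeys(scores, 0)
for a, b in combinations(scores, 2):
    _, p = wilcoxon(scores[a], scores[b])   # paired test on the same test cases
    if p < 0.05:                            # count significant pairwise wins
        wins[a if np.median(scores[a] - scores[b]) > 0 else b] += 1

print(sorted(wins, key=wins.get, reverse=True))  # rank by number of significant wins
```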
Affiliation(s)
- Gašper Podobnik
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia
- Bulat Ibragimov
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia; University of Copenhagen, Department of Computer Science, Universitetsparken 1, Copenhagen 2100, Denmark
- Elias Tappeiner
- UMIT Tirol - Private University for Health Sciences and Health Technology, Eduard-Wallnöfer-Zentrum 1, Hall in Tirol 6060, Austria
- Chanwoong Lee
- Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Yonsei Cancer Center, Department of Radiation Oncology, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
- Jin Sung Kim
- Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Yonsei Cancer Center, Department of Radiation Oncology, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Oncosoft Inc, 37 Myeongmul-gil, Seodaemun-gu, Seoul 03722, South Korea
- Zacharia Mesbah
- Henri Becquerel Cancer Center, 1 Rue d'Amiens, Rouen 76000, France; Siemens Healthineers, 6 Rue du Général Audran, CS20146, Courbevoie 92412, France
- Romain Modzelewski
- Henri Becquerel Cancer Center, 1 Rue d'Amiens, Rouen 76000, France; Litis UR 4108, 684 Av. de l'Université, Saint-Étienne-du-Rouvray 76800, France
- Yihao Ma
- Guizhou Medical University, School of Biology & Engineering, 9FW8+2P3, Ankang Avenue, Gui'an New Area, Guiyang, Guizhou Province 561113, China
- Fan Yang
- Guizhou Medical University, School of Biology & Engineering, 9FW8+2P3, Ankang Avenue, Gui'an New Area, Guiyang, Guizhou Province 561113, China
- Mikołaj Rudecki
- AGH University of Kraków, Department of Measurement and Electronics, Mickiewicza 30, Kraków 30-059, Poland
- Marek Wodziński
- AGH University of Kraków, Department of Measurement and Electronics, Mickiewicza 30, Kraków 30-059, Poland; University of Applied Sciences Western Switzerland, Information Systems Institute, Rue de la Plaine 2, Sierre 3960, Switzerland
- Primož Peterlin
- Institute of Oncology, Ljubljana, Zaloška cesta 2, Ljubljana 1000, Slovenia
- Primož Strojan
- Institute of Oncology, Ljubljana, Zaloška cesta 2, Ljubljana 1000, Slovenia
- Tomaž Vrtovec
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia
5. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024; Epub ahead of print. PMID: 39105745; DOI: 10.1007/s00066-024-02262-2.
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
6. Marruecos Querol J, Jurado-Bruggeman D, Lopez-Vidal A, Mesía Nin R, Rubió-Casadevall J, Buxó M, Eraso Urien A. Contouring aid tools in radiotherapy. Smoothing: the false friend. Clin Transl Oncol 2024;26:1956-1967. PMID: 38493446; DOI: 10.1007/s12094-024-03420-9.
Abstract
OBJECTIVE Contouring accuracy is critical in modern radiotherapy, and several tools are available to assist clinicians in this task. This study aims to evaluate the performance of the smoothing tool in the ARIA system for obtaining more consistent volumes. METHODS Eleven geometric shapes were delineated in ARIA v15.6 (sphere, cube, square prism, six-pointed star prism, arrow prism, and cylinder, plus the respective volumes at 45° of axis deviation (_45)), each with a side or diameter of 1, 3, 5, 7, and 10 cm. Post-processing smoothing was applied to these first-generated volumes in different options (2D-ALL vs 3D) and grades (1, 3, 5, 10, 15, and 20). The volumetric transformations were analyzed by comparing volume changes, the center of mass, and the DICE similarity coefficient index. We then studied how smoothing affected two different volumes in a head and neck cancer patient: a single rounded node and the volume delineating the cervical nodal areas. RESULTS No differences were found between 2D-ALL and 3D smoothing. Only minimal deviations of the center of mass were found (range 0 to 0.45 cm). Volumes and the DICE index decreased as the degree of smoothing increased, with some discrepancies, especially in figures with clefts and spikes, which behave differently. In the clinical case, smoothing should be applied only once throughout the target delineation process, preferably to the largest volume (PTV), to minimize errors. CONCLUSION Smoothing is a good tool to reduce artifacts due to the manual delineation of radiotherapy volumes, but the resulting volumes must always be carefully reviewed.
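ARIA's smoothing algorithm is proprietary, so the sketch below uses morphological opening and closing as a stand-in to illustrate the reported behavior: volume and Dice agreement drop as the smoothing grade grows, most visibly for shapes with spikes. The shape and grades are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def smooth(mask: np.ndarray, grade: int) -> np.ndarray:
    """Morphological opening then closing as a stand-in smoothing operation."""
    st = ndimage.generate_binary_structure(3, 1)
    opened = ndimage.binary_opening(mask, st, iterations=grade)
    return ndimage.binary_closing(opened, st, iterations=grade)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

shape = np.zeros((60, 60, 60), dtype=bool)
shape[20:40, 20:40, 20:40] = True
shape[29:31, 10:50, 29:31] = True   # a thin spike, the feature most affected by smoothing

for grade in (1, 3, 5):
    s = smooth(shape, grade)
    print(f"grade {grade}: volume ratio {s.sum() / shape.sum():.3f}, DSC {dice(s, shape):.3f}")
```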
Affiliation(s)
- Jordi Marruecos Querol
- Radiation Oncology Department, Catalan Institute of Oncology, Girona, Spain
- Research Group in Radiation Oncology and Medical Physics of Girona, Girona Biomedical Research Institute (IDIBGI), Girona, Spain
- Department of Radiation Oncology, ICO, Girona, Spain
- Diego Jurado-Bruggeman
- Research Group in Radiation Oncology and Medical Physics of Girona, Girona Biomedical Research Institute (IDIBGI), Girona, Spain
- Medical Physics and Radiation Protection Department, Catalan Institute of Oncology, Girona, Spain
- Anna Lopez-Vidal
- Medical Oncology Department, Catalan Institute of Oncology, Girona, Spain
- Ricard Mesía Nin
- Medical Oncology Department, Catalan Institute of Oncology, B-ARGO Group, IGTP, Badalona, Spain
- Maria Buxó
- Girona Biomedical Research Institute (IDIBGI), Girona, Spain
- Aranzazu Eraso Urien
- Radiation Oncology Department, Catalan Institute of Oncology, Girona, Spain
- Research Group in Radiation Oncology and Medical Physics of Girona, Girona Biomedical Research Institute (IDIBGI), Girona, Spain
7. Rasmussen ME, Akbarov K, Titovich E, Nijkamp JA, Van Elmpt W, Primdahl H, Lassen P, Cacicedo J, Cordero-Mendez L, Uddin AFMK, Mohamed A, Prajogi B, Brohet KE, Nyongesa C, Lomidze D, Prasiko G, Ferraris G, Mahmood H, Stojkovski I, Isayev I, Mohamad I, Shirley L, Kochbati L, Eftodiev L, Piatkevich M, Bonilla Jara MM, Spahiu O, Aralbayev R, Zakirova R, Subramaniam S, Kibudde S, Tsegmed U, Korreman SS, Eriksen JG. Potential of E-Learning Interventions and Artificial Intelligence-Assisted Contouring Skills in Radiotherapy: The ELAISA Study. JCO Glob Oncol 2024;10:e2400173. PMID: 39236283; PMCID: PMC11404336; DOI: 10.1200/go.24.00173.
Abstract
PURPOSE Most research on artificial intelligence-based auto-contouring as a template (AI-assisted contouring) for organs-at-risk (OARs) stems from high-income countries. The effect and safety are, however, likely to depend on local factors. This study aimed to investigate the effects of AI-assisted contouring and teaching on contouring time and contour quality among radiation oncologists (ROs) working in low- and middle-income countries (LMICs). MATERIALS AND METHODS Ninety-seven ROs were randomly assigned to either manual or AI-assisted contouring of eight OARs for two head-and-neck cancer cases, with an in-between teaching session on contouring guidelines, thereby quantifying the effects of teaching (yes/no) and AI-assisted contouring (yes/no). ROs then completed short-term and long-term follow-up cases, all using AI assistance. Contour quality was quantified by the Dice Similarity Coefficient (DSC) between the ROs' contours and expert consensus contours. Groups were compared using absolute differences in medians with 95% CIs. RESULTS AI-assisted contouring without previous teaching increased absolute DSC for the optic nerve (by 0.05 [0.01; 0.10]), oral cavity (0.10 [0.06; 0.13]), parotid (0.07 [0.05; 0.12]), spinal cord (0.04 [0.01; 0.06]), and mandible (0.02 [0.01; 0.03]). Contouring time decreased for the brain stem (-1.41 [-2.44; -0.25]), mandible (-6.60 [-8.09; -3.35]), optic nerve (-0.19 [-0.47; -0.02]), parotid (-1.80 [-2.66; -0.32]), and thyroid (-1.03 [-2.18; -0.05]). Without AI-assisted contouring, teaching increased DSC for the oral cavity (0.05 [0.01; 0.09]) and thyroid (0.04 [0.02; 0.07]), and contouring time increased for the mandible (2.36 [-0.51; 5.14]), oral cavity (1.42 [-0.08; 4.14]), and thyroid (1.60 [-0.04; 2.22]). CONCLUSION The study suggested that AI-assisted contouring is safe and beneficial to ROs working in LMICs. Prospective clinical trials on AI-assisted contouring should, however, be conducted upon clinical implementation to confirm the effects.
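Comparing groups by an absolute difference in medians with a 95% CI can be illustrated with a percentile bootstrap; the data below are simulated, and the bootstrap is an assumed stand-in for the paper's exact CI procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_diff_ci(x: np.ndarray, y: np.ndarray, n_boot: int = 10_000):
    """Difference in medians with a percentile bootstrap 95% CI."""
    boot = [np.median(rng.choice(x, x.size)) - np.median(rng.choice(y, y.size))
            for _ in range(n_boot)]
    return np.median(x) - np.median(y), np.percentile(boot, [2.5, 97.5])

# Illustrative per-RO DSC for one OAR, with and without AI-assisted contouring
dsc_ai = rng.normal(0.85, 0.05, 48)
dsc_manual = rng.normal(0.80, 0.06, 49)
est, (lo, hi) = median_diff_ci(dsc_ai, dsc_manual)
print(f"median difference {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```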
Affiliation(s)
- Wouter Van Elmpt
- MAASTRO clinic, Maastricht University Medical Centre, Maastricht, the Netherlands
- Hanne Primdahl
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Pernille Lassen
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jon Cacicedo
- Department of Radiation Oncology, Cruces University Hospital, Bilbao, Spain
- A F M Kamal Uddin
- Labaid Cancer Hospital and Super Speciality Centre, Dhaka, Bangladesh
- Ahmed Mohamed
- National Cancer Institute, University of Gezira, Wad Madani, Sudan
- Ben Prajogi
- Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Darejan Lomidze
- Tbilisi State Medical University and Ingorokva High Medical Technology University Clinic, Tbilisi, Georgia
- Igor Stojkovski
- University Clinic of Radiotherapy and Oncology, Skopje, Macedonia
- Isa Isayev
- National Center of Oncology, Baku, Azerbaijan
- Leivon Shirley
- Christian Institute of Health Science and Research, Dimapur, India
8. Geng J, Sui X, Du R, Feng J, Wang R, Wang M, Yao K, Chen Q, Bai L, Wang S, Li Y, Wu H, Hu X, Du Y. Localized fine-tuning and clinical evaluation of deep-learning based auto-segmentation (DLAS) model for clinical target volume (CTV) and organs-at-risk (OAR) in rectal cancer radiotherapy. Radiat Oncol 2024;19:87. PMID: 38956690; PMCID: PMC11221028; DOI: 10.1186/s13014-024-02463-0.
Abstract
BACKGROUND AND PURPOSE Various deep learning auto-segmentation (DLAS) models have been proposed, some of which have been commercialized. However, performance degradation is a notable issue when pretrained models are deployed in the clinic. This study aims to enhance the precision of a popular commercial DLAS product in rectal cancer radiotherapy by localized fine-tuning, addressing challenges in practicality and generalizability in real-world clinical settings. MATERIALS AND METHODS A total of 120 Stage II/III mid-low rectal cancer patients were retrospectively enrolled and divided into three datasets: training (n = 60), external validation (ExVal, n = 30), and generalizability evaluation (GenEva, n = 30). The patients in the training and ExVal datasets were scanned on the same CT simulator, while those in GenEva were scanned on a different CT simulator. The commercial DLAS software first underwent localized fine-tuning (LFT) for the clinical target volume (CTV) and organs-at-risk (OAR) using the training data, and was then validated on ExVal and GenEva. Performance evaluation involved comparing the LFT model and the vendor-provided pretrained model (VPM) against ground truth contours, using metrics including the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (95HD), sensitivity, and specificity. RESULTS LFT significantly improved CTV delineation accuracy (p < 0.05), outperforming the VPM in target volume, DSC, 95HD, and specificity. Both models exhibited adequate accuracy for the bladder and femoral heads, and LFT demonstrated significant enhancement in segmenting the more complex small intestine. We did not identify performance degradation when the LFT and VPM models were applied to the GenEva dataset. CONCLUSIONS The necessity and potential benefits of LFT of DLAS towards institution-specific model adaptation are underscored. The commercial DLAS software exhibits superior accuracy once localized fine-tuned and is highly robust to imaging equipment changes.
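Sensitivity and specificity on voxel masks, two of the metrics used above, follow directly from the confusion counts; a minimal sketch with illustrative masks follows (the masks are not data from the study).

```python
import numpy as np

def sensitivity_specificity(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Voxel-wise sensitivity and specificity of a predicted mask against ground truth."""
    tp = np.logical_and(pred, ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    return tp / (tp + fn), tn / (tn + fp)

ref = np.zeros((32, 32, 32), dtype=bool)
ref[8:24, 8:24, 8:24] = True
pred = np.zeros_like(ref)
pred[9:25, 8:24, 8:24] = True
sens, spec = sensitivity_specificity(pred, ref)
print(f"sensitivity {sens:.3f}, specificity {spec:.3f}")
```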
Affiliation(s)
- Jianhao Geng
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Xin Sui
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Rongxu Du
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Jialin Feng
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Ruoxi Wang
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Meijiao Wang
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Kaining Yao
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Qi Chen
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Lu Bai
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Shaobin Wang
- Research and Development Department, MedMind Technology Co., Ltd, Beijing, 100083, China
- Yongheng Li
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Hao Wu
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
- Xiangmin Hu
- Beijing Key Lab of Nanophotonics and Ultrafine Optoelectronic Systems, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yi Du
- Department of Radiation Oncology, Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
9. Xue X, Liang D, Wang K, Gao J, Ding J, Zhou F, Xu J, Liu H, Sun Q, Jiang P, Tao L, Shi W, Cheng J. A deep learning-based 3D Prompt-nnUnet model for automatic segmentation in brachytherapy of postoperative endometrial carcinoma. J Appl Clin Med Phys 2024;25:e14371. PMID: 38682540; PMCID: PMC11244685; DOI: 10.1002/acm2.14371.
Abstract
PURPOSE To create and evaluate a three-dimensional (3D) Prompt-nnUnet module that combines a prompt-based model with 3D nnUnet for rapid and consistent autosegmentation of the high-risk clinical target volume (HR CTV) and organs at risk (OAR) in high-dose-rate brachytherapy (HDR BT) for patients with postoperative endometrial carcinoma (EC). METHODS AND MATERIALS Over two experimental batches, a total of 321 computed tomography (CT) scans were obtained for HR CTV segmentation from 321 patients with EC, and 125 CT scans for OAR segmentation from 125 patients. The training/validation/test splits were 257/32/32 and 87/13/25 for HR CTV and OARs, respectively. The deep learning networks 3D Prompt-nnUnet and 3D nnUnet were compared for HR CTV and OAR segmentation. Three-fold cross validation and several quantitative metrics were employed, including the Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of Hausdorff distance (HD95%), and intersection over union (IoU). RESULTS The Prompt-nnUnet included two prompt forms, Predict-Prompt (PP) and Label-Prompt (LP), with the LP performing most similarly to the experienced radiation oncologist and outperforming the less experienced ones. During the testing phase, the mean DSC values for the LP were 0.96 ± 0.02, 0.91 ± 0.02, and 0.83 ± 0.07 for HR CTV, rectum, and urethra, respectively. The mean HD values (mm) were 2.73 ± 0.95, 8.18 ± 4.84, and 2.11 ± 0.50; the mean HD95% values (mm) were 1.66 ± 1.11, 3.07 ± 0.94, and 1.35 ± 0.55; and the mean IoUs were 0.92 ± 0.04, 0.84 ± 0.03, and 0.71 ± 0.09, respectively. A delineation time of < 2.35 s per structure was observed with the new model, saving clinician time. CONCLUSION The Prompt-nnUnet architecture, particularly the LP, was highly consistent with ground truth (GT) in HR CTV and OAR autosegmentation, reducing interobserver variability and shortening treatment time.
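HD95% and IoU, as used above, can be sketched from mask surfaces and overlaps as follows; the voxel spacing and masks are illustrative, and the plain HD is the same computation with the maximum in place of the 95th percentile.

```python
import numpy as np
from scipy import ndimage

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    return np.logical_and(pred, ref).sum() / np.logical_or(pred, ref).sum()

def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (mm) between mask surfaces."""
    surf = lambda m: m & ~ndimage.binary_erosion(m)
    ps, rs = surf(pred), surf(ref)
    d_ref = ndimage.distance_transform_edt(~rs, sampling=spacing)[ps]
    d_pred = ndimage.distance_transform_edt(~ps, sampling=spacing)[rs]
    return max(np.percentile(d_ref, 95), np.percentile(d_pred, 95))

ref = np.zeros((40, 40, 40), dtype=bool)
ref[10:30, 10:30, 10:30] = True
pred = np.roll(ref, 2, axis=1)  # shift to create a controlled surface error
print(f"IoU = {iou(pred, ref):.3f}, HD95 = {hd95(pred, ref):.2f} mm")
```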
Affiliation(s)
- Xian Xue
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Dazhu Liang
- Digital Health China Technologies Co., Ltd., Beijing, China
- Kaiyue Wang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Jianwei Gao
- Digital Health China Technologies Co., Ltd., Beijing, China
- Jingjing Ding
- Department of Radiotherapy, Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Fugen Zhou
- Department of Aerospace Information Engineering, Beihang University, Beijing, China
- Juan Xu
- Digital Health China Technologies Co., Ltd., Beijing, China
- Hefeng Liu
- Digital Health China Technologies Co., Ltd., Beijing, China
- Quanfu Sun
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Ping Jiang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Laiyuan Tao
- Digital Health China Technologies Co., Ltd., Beijing, China
- Wenzhao Shi
- Digital Health China Technologies Co., Ltd., Beijing, China
- Jinsheng Cheng
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
10. Clark B, Hardcastle N, Johnston LA, Korte J. Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets. Med Phys 2024;51:4767-4777. PMID: 38376454; DOI: 10.1002/mp.16997.
Abstract
BACKGROUND Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have been small historically (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the dice similarity coefficient (DSC), surface DSC, 95th percentile Hausdorff distance, mean surface distance (MSD), and added path length. RESULTS For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
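The pretrain-then-fine-tune recipe evaluated here can be sketched in PyTorch; the tiny network, commented-out weight file, and dummy data loader below are placeholders for illustration, not the study's two-stage 3D U-Net or its datasets.

```python
import torch
from torch import nn

class TinySegNet(nn.Module):
    """Toy stand-in for a 3D segmentation network."""
    def __init__(self, n_classes: int = 18):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv3d(16, n_classes, 1)

    def forward(self, x):
        return self.head(self.encoder(x))

model = TinySegNet()
# Step 1: pretrain on the large public dataset (loop omitted), then reload those weights:
# model.load_state_dict(torch.load("public_pretrained.pt"))

# Step 2: fine-tune on institutional data, typically with a reduced learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
institutional_loader = [(torch.randn(2, 1, 32, 32, 32),
                         torch.randint(0, 18, (2, 32, 32, 32)))]  # dummy (CT patch, labels) batch
for images, labels in institutional_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```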
Affiliation(s)
- Brett Clark
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
- Nicholas Hardcastle
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, Australia
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Australia
- Leigh A Johnston
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Melbourne Brain Centre Imaging Unit, University of Melbourne, Melbourne, Australia
- Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- James Korte
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
11. Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024;23:52. PMID: 38851691; PMCID: PMC11162022; DOI: 10.1186/s12938-024-01238-8.
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
12. Salimi Y, Mansouri Z, Hajianfar G, Sanaat A, Shiri I, Zaidi H. Fully automated explainable abdominal CT contrast media phase classification using organ segmentation and machine learning. Med Phys 2024;51:4095-4104. PMID: 38629779; DOI: 10.1002/mp.17076.
Abstract
BACKGROUND Contrast-enhanced computed tomography (CECT) provides much more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. Contrast media injection phase information is usually missing in public datasets and is not standardized in the clinic, even within the same region and language, which is a barrier to effective use of available CECT images in clinical research. PURPOSE The aim of this study was to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms. METHODS A total of 2509 CT images, split into four subsets of non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) phases after contrast media injection, were collected from two CT scanners. Masks of seven organs, including the liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta, along with body contour masks, were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features, including the average, standard deviation, and 10th, 50th, and 90th percentiles, extracted from the above-mentioned masks, were fed to machine learning models after feature selection and reduction to classify the CT images into one of the four classes. A 10-fold data split strategy was followed. The performance of our methodology was evaluated in terms of classification accuracy metrics. RESULTS The best performance was achieved by Boruta feature selection and the RF model, with an average area under the curve of more than 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest classification performance was observed for class #2 (0.9888), which is still an excellent result. In the 10-fold strategy, only 33 of 2509 cases (∼1.4%) were misclassified, and performance was consistent over all folds. CONCLUSIONS We developed a fast, accurate, reliable, and explainable methodology to classify contrast media phases, which may be useful for data curation and annotation in big online datasets or local datasets with non-standard or missing series descriptions. Our model, combining deep learning and machine learning steps, may help exploit available datasets more effectively.
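The feature-extraction and classification stage can be sketched as follows; the design matrix is random stand-in data, and the sketch omits the Boruta feature-selection step for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def first_order_features(image: np.ndarray, mask: np.ndarray) -> list:
    """Five first-order statistics of intensities inside one organ mask."""
    vals = image[mask]
    return [vals.mean(), vals.std(), *np.percentile(vals, [10, 50, 90])]

rng = np.random.default_rng(0)
# Illustrative design matrix: 8 masks (7 organs + body contour) x 5 features per scan
X = rng.normal(size=(200, 8 * 5))
y = rng.integers(0, 4, size=200)   # 0 non-contrast, 1 arterial, 2 venous, 3 delayed
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```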
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
13. Temple SWP, Rowbottom CG. Gross failure rates and failure modes for a commercial AI-based auto-segmentation algorithm in head and neck cancer patients. J Appl Clin Med Phys 2024;25:e14273. PMID: 38263866; PMCID: PMC11163497; DOI: 10.1002/acm2.14273.
Abstract
PURPOSE Artificial intelligence (AI)-based commercial software can be used to automatically delineate organs at risk (OAR), with potential for efficiency savings in the radiotherapy treatment planning pathway and reduction of inter- and intra-observer variability. There has been little research investigating gross failure rates and failure modes of such systems. METHOD Fifty head and neck (H&N) patient data sets with "gold standard" contours were compared to AI-generated contours to produce expected mean and standard deviation values of the Dice Similarity Coefficient (DSC) for four common H&N OARs (brainstem, mandible, left and right parotid). An AI-based commercial system was then applied to 500 H&N patients. AI-generated contours were compared to manual contours, outlined by an expert human, and a gross failure was defined as a DSC more than three standard deviations below the expected mean. Failures were inspected to assess the reason for failure of the AI-based system, with failures relating to suboptimal manual contouring censored. True failures were classified into four sub-types (setup position, anatomy, image artefacts, and unknown). RESULTS There were 24 true failures of the AI-based commercial software, a gross failure rate of 1.2%. Fifteen failures were due to patient anatomy, four to dental image artefacts, three to patient position, and two were unknown. True failure rates by OAR were 0.4% (brainstem), 2.2% (mandible), 1.4% (left parotid), and 0.8% (right parotid). CONCLUSION True failures of the AI-based system were predominantly associated with a non-standard element within the CT scan. It is likely that these non-standard elements caused the gross failures, suggesting that the patient datasets used to train the AI model did not contain sufficient heterogeneity. Regardless of the reasons for failure, the true failure rate for the AI-based system in the H&N region for the OARs investigated was low (∼1%).
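The gross-failure criterion (a DSC more than three standard deviations below the expected mean for that OAR) is straightforward to operationalize; the expected values below are illustrative placeholders, not the study's.

```python
import numpy as np

# Expected DSC (mean, SD) per OAR from a gold-standard comparison set
# (numbers are illustrative, not those of the study)
expected = {"brainstem": (0.88, 0.03), "mandible": (0.92, 0.02)}

def is_gross_failure(oar: str, dsc: float) -> bool:
    """Flag a contour whose DSC falls more than 3 SD below the expected mean."""
    mean, sd = expected[oar]
    return dsc < mean - 3 * sd

print(is_gross_failure("mandible", 0.83))  # True: 0.83 < 0.92 - 0.06
```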
Affiliation(s)
- Simon W. P. Temple
- Medical Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, UK
- Carl G. Rowbottom
- Medical Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, UK
- Department of Physics, University of Liverpool, Liverpool, UK
Collapse
|
14
|
Han X, Chen Z, Lin G, Lv W, Zheng C, Lu W, Sun Y, Lu L. Semi-supervised model based on implicit neural representation and mutual learning (SIMN) for multi-center nasopharyngeal carcinoma segmentation on MRI. Comput Biol Med 2024; 175:108368. [PMID: 38663351 DOI: 10.1016/j.compbiomed.2024.108368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Revised: 03/06/2024] [Accepted: 03/24/2024] [Indexed: 05/15/2024]
Abstract
BACKGROUND The problem of using deep learning to obtain accurate gross tumor volume (GTV) and metastatic lymph node (MLN) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with limited labeling remains unsolved. METHOD We collected MRI images of 918 patients from three hospitals to develop and validate models, and proposed SIMN, a semi-supervised framework for fine delineation of multi-center NPC boundaries that integrates uncertainty-based implicit neural representations. The framework uses deep mutual learning between a CNN and a Transformer, incorporating dynamic thresholds; domain-adaptive algorithms are additionally employed to enhance performance. RESULTS SIMN predictions have a high overlap ratio with the ground truth. With 20% of cases labeled, the average DSC for GTV and MLN is 0.7981 and 0.7804, respectively, in the internal test cohorts; 0.7217 and 0.7581, respectively, in the external test cohort Wu Zhou Red Cross Hospital; and 0.7004 and 0.7692, respectively, in the external test cohort First People's Hospital of Foshan. No significant differences are found in DSC, HD95, ASD, and recall for patients with different clinical categories. Moreover, SIMN outperformed existing classical semi-supervised methods. CONCLUSIONS SIMN achieved highly accurate GTV and MLN segmentation for NPC on multi-center MRI images under semi-supervised learning (SSL) and transfers easily to other centers without fine-tuning, suggesting its potential as a generalized delineation solution for heterogeneous MRI images with limited labels in clinical deployment.
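The deep mutual learning with dynamic thresholds described above can be sketched as follows. This is a generic illustration under assumed names (mutual_learning_step, logits_cnn, logits_trans), not the authors' SIMN implementation.

```python
import torch
import torch.nn.functional as F

def mutual_learning_step(logits_cnn, logits_trans, epoch, max_epoch,
                         t0=0.6, t1=0.9):
    """One unlabeled-data step of deep mutual learning with a dynamic
    confidence threshold (a sketch, not the authors' SIMN code).

    Each network is supervised by the peer network's high-confidence
    pseudo-labels; the threshold ramps from t0 to t1 during training."""
    thr = t0 + (t1 - t0) * epoch / max_epoch           # dynamic threshold
    p_cnn, p_trans = torch.sigmoid(logits_cnn), torch.sigmoid(logits_trans)

    # Confidence masks derived from the peer network's predictions.
    mask_from_trans = (torch.maximum(p_trans, 1 - p_trans) > thr).float()
    mask_from_cnn = (torch.maximum(p_cnn, 1 - p_cnn) > thr).float()

    loss_cnn = (F.binary_cross_entropy(p_cnn, (p_trans > 0.5).float(),
                reduction="none") * mask_from_trans).mean()
    loss_trans = (F.binary_cross_entropy(p_trans, (p_cnn > 0.5).float(),
                  reduction="none") * mask_from_cnn).mean()
    return loss_cnn + loss_trans
```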
Collapse
Affiliation(s)
- Xu Han
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
| | - Zihang Chen
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China
| | - Guoyu Lin
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China
| | - Wenbing Lv
- School of Information Science and Engineering, Yunnan University, Kunming, 650504, China
| | - Chundan Zheng
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
| | - Wantong Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China
| | - Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060, China.
| | - Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Pazhou Lab, Guangzhou, 510515, China.
| |
Collapse
|
15
|
Wang CK, Wang TW, Yang YX, Wu YT. Deep Learning for Nasopharyngeal Carcinoma Segmentation in Magnetic Resonance Imaging: A Systematic Review and Meta-Analysis. Bioengineering (Basel) 2024; 11:504. [PMID: 38790370 PMCID: PMC11118180 DOI: 10.3390/bioengineering11050504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2024] [Revised: 05/11/2024] [Accepted: 05/15/2024] [Indexed: 05/26/2024] Open
Abstract
Nasopharyngeal carcinoma (NPC) is a significant health challenge that is particularly prevalent in Southeast Asia and North Africa. MRI is the preferred diagnostic tool for NPC due to its superior soft tissue contrast. Accurate segmentation of NPC in MRI is crucial for effective treatment planning and prognosis. We conducted a search across PubMed, Embase, and Web of Science from inception up to 20 March 2024, adhering to the PRISMA 2020 guidelines. Eligibility criteria focused on studies utilizing deep learning (DL) for NPC segmentation in adults via MRI. Data extraction and meta-analysis were conducted to evaluate the performance of DL models, primarily measured by Dice scores. We assessed methodological quality using the CLAIM and QUADAS-2 tools, and statistical analysis was performed using random-effects models. The analysis incorporated 17 studies and demonstrated a pooled Dice score of 78% (95% confidence interval: 74% to 83%), indicating moderate to high segmentation accuracy. Significant heterogeneity and publication bias were observed among the included studies. Our findings reveal that DL models, particularly convolutional neural networks, offer moderately accurate NPC segmentation in MRI. This advancement holds potential for enhancing NPC management, and further research is needed toward integration into clinical practice.
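Random-effects pooling of per-study Dice scores, as used in such meta-analyses, can be sketched with the DerSimonian-Laird estimator as below; the input numbers are placeholders and the review's exact model may differ.

```python
import numpy as np

def random_effects_pool(means, ses):
    """DerSimonian-Laird random-effects pooling of per-study mean Dice
    scores (a generic sketch; the review's exact model may differ).

    means: per-study mean Dice; ses: their standard errors."""
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses**2                              # fixed-effect weights
    mu_fe = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fe) ** 2)          # Cochran's Q
    df = len(means) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                  # random-effects weights
    mu = np.sum(w_re * means) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)   # pooled Dice, 95% CI

# Placeholder per-study values, not the review's extracted data.
pooled, ci = random_effects_pool([0.74, 0.81, 0.79], [0.02, 0.03, 0.025])
print(f"pooled Dice = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```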
Collapse
Affiliation(s)
- Chih-Keng Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; (C.-K.W.)
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
| | - Ting-Wei Wang
- School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan; (C.-K.W.)
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
| | - Ya-Xuan Yang
- Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
| | - Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
| |
Collapse
|
16
|
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715 PMCID: PMC11023777 DOI: 10.1016/j.ijrobp.2023.10.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 09/16/2023] [Accepted: 10/14/2023] [Indexed: 11/19/2023]
Abstract
Deep learning neural networks (DLNN) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based auto-segmentation models have shown high accuracy in early studies conducted in research settings and controlled, single-institution environments. Vendor-provided commercial AI models are available either as part of an integrated treatment planning system (TPS) or as stand-alone tools with streamlined workflows that interact with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the manual contouring workload and shortening the duration of treatment planning. However, challenges arise when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Standardization of contouring nomenclature and guidelines has been a main task undertaken by NRG Oncology. For clinical trial participants, AI auto-segmentation holds the potential to reduce interobserver variation, nomenclature non-compliance, and contouring guideline deviations; meanwhile, trial reviewers could use AI tools to verify contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments to address these challenges. General recommendations are made regarding the implementation of these commercial AI models, together with precautions concerning their challenges and limitations.
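One concrete form such compliance verification could take is an automated nomenclature check. The sketch below assumes a small illustrative set of TG-263-style standard names rather than any official list.

```python
# A minimal sketch of automated nomenclature compliance checking of the kind
# envisioned for trial submissions. The allowed names below are a tiny
# illustrative subset of TG-263-style names, not an official list.

TG263_STYLE_NAMES = {"Brainstem", "Parotid_L", "Parotid_R", "Mandible",
                     "SpinalCord", "GTVp", "CTV_High"}

def check_nomenclature(structure_names):
    """Return the structure names that do not match the standard set."""
    return sorted(set(structure_names) - TG263_STYLE_NAMES)

print(check_nomenclature(["Brainstem", "lt parotid", "GTVp"]))
# -> ['lt parotid']: flagged for renaming before trial submission
```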
Collapse
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
| | - Quan Chen
- City of Hope Comprehensive Cancer Center Duarte, CA
| | - Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
| | | | | | | | - Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
| | - Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
| | - Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
| | | | - Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
| | | | - X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
| |
Collapse
|
17
|
Shan G, Yu S, Lai Z, Xuan Z, Zhang J, Wang B, Ge Y. A Review of Artificial Intelligence Application for Radiotherapy. Dose Response 2024; 22:15593258241263687. [PMID: 38912333 PMCID: PMC11193352 DOI: 10.1177/15593258241263687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Accepted: 05/03/2024] [Indexed: 06/25/2024] Open
Abstract
Background and Purpose Artificial intelligence (AI) refers to techniques that attempt to reason like humans and mimic human behaviors. It has been considered as an alternative for many human-dependent steps in radiotherapy (RT), since human participation is a principal source of uncertainty in RT. The aim of this work is to provide a systematic summary of the current literature on AI applications for RT and to clarify their role in RT practice from a clinical viewpoint. Materials and Methods A systematic literature search of PubMed and Google Scholar was performed to identify original articles involving AI applications in RT from inception to 2022. Studies were included if they reported original data and explored clinical applications of AI in RT. Results The selected studies were categorized into three aspects of RT: organ and lesion segmentation, treatment planning, and quality assurance. For each aspect, this review discusses how AI tools can be incorporated into the RT workflow. Conclusions Our study revealed that AI is a potential alternative for the human-dependent steps in the complex process of RT.
Collapse
Affiliation(s)
- Guoping Shan
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Zhejiang Cancer Hospital, Hangzhou, China
| | - Shunfei Yu
- Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
| | - Zhongjun Lai
- Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
| | - Zhiqiang Xuan
- Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
| | - Jie Zhang
- Zhejiang Cancer Hospital, Hangzhou, China
| | | | - Yun Ge
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
| |
Collapse
|
18
|
Podobnik G, Ibragimov B, Peterlin P, Strojan P, Vrtovec T. vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images. Med Phys 2024; 51:2175-2186. [PMID: 38230752 DOI: 10.1002/mp.16924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 10/31/2023] [Accepted: 12/13/2023] [Indexed: 01/18/2024] Open
Abstract
BACKGROUND Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were then evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95th-percentile Hausdorff distance (HD95). RESULTS The mean (±standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were only found for specific OARs. The performed MR to CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect the contouring performance. However, the results indicate that an OAR is difficult to contour regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the differences in the resulting variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
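The two agreement measures used above, DC and HD95, can be computed from binary masks as in the following sketch; boolean NumPy arrays and a voxel spacing in mm are assumed, and implementations differ in surface-extraction details.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b, spacing):
    """95th-percentile symmetric Hausdorff distance (mm) between two boolean
    masks. A common formulation; implementations differ in detail."""
    surf_a = a & ~binary_erosion(a)      # surface voxels of each mask
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(d, 95)
```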
Collapse
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | | | | | - Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
19
|
Koo J, Caudell J, Latifi K, Moros EG, Feygelman V. Essentially unedited deep-learning-based OARs are suitable for rigorous oropharyngeal and laryngeal cancer treatment planning. J Appl Clin Med Phys 2024; 25:e14202. [PMID: 37942993 DOI: 10.1002/acm2.14202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 10/19/2023] [Accepted: 10/25/2023] [Indexed: 11/10/2023] Open
Abstract
Quality of organ at risk (OAR) autosegmentation is often judged by concordance metrics against the human-generated gold standard. However, the ultimate goal is the ability to use unedited autosegmented OARs in treatment planning while maintaining plan quality. We tested this approach with head and neck (HN) OARs generated by a prototype deep-learning (DL) model on patients previously treated for oropharyngeal and laryngeal cancer. Forty patients were selected, with all structures delineated by an experienced physician. For each patient, a set of 13 OARs was generated by the DL model. Each patient was re-planned based on the original targets and unedited DL-produced OARs. The new dose distributions were then applied back to the manually delineated structures. The target coverage was evaluated with the inhomogeneity index (II) and the relative volume of regret. For the OARs, the Dice similarity coefficient (DSC) of the areas under the DVH curves, individual DVH objectives, and a composite continuous plan quality metric (PQM) were compared. Nearly identical primary target coverage was achieved for the original and re-generated plans, with the same II and relative volume of regret values. The average DSC of the areas under the corresponding pairs of DVH curves was 0.97 ± 0.06. The number of critical DVH points that met the clinical objectives with the dose optimized on autosegmented structures but failed when evaluated on the manual ones was 5 of 896 (0.6%). The average OAR PQM score with the re-planned dose distributions was essentially the same when evaluated either on the autosegmented or manual OARs. Thus, rigorous HN treatment planning is possible with OARs segmented by a prototype DL algorithm with minimal, if any, manual editing.
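The DVH-based concordance used above can be read as a Dice-style overlap of the areas under two cumulative DVH curves. The sketch below implements one plausible form of that metric on a common dose grid, not necessarily the authors' exact definition.

```python
import numpy as np

def dvh_area_agreement(dose_axis, v1, v2):
    """Dice-style agreement between the areas under two cumulative DVH
    curves: twice the shared area over the sum of both areas. One plausible
    reading of the study's metric, not necessarily the exact definition.

    dose_axis: common dose grid (Gy); v1, v2: volume fractions on it."""
    shared = np.trapz(np.minimum(v1, v2), dose_axis)
    total = np.trapz(v1, dose_axis) + np.trapz(v2, dose_axis)
    return 2.0 * shared / total

# Two nearly identical hypothetical DVHs should score close to 1.0.
d = np.linspace(0, 70, 141)
v_manual = np.clip(1 - d / 60, 0, 1)
v_auto = np.clip(1 - d / 62, 0, 1)
print(round(dvh_area_agreement(d, v_manual, v_auto), 3))
```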
Collapse
Affiliation(s)
- Jihye Koo
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Department of Physics, University of South Florida, Tampa, Florida, USA
| | - Jimmy Caudell
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Kujtim Latifi
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Eduardo G Moros
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Vladimir Feygelman
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| |
Collapse
|
20
|
Zhang HW, Huang DL, Wang YR, Zhong HS, Pang HW. CT radiomics based on different machine learning models for classifying gross tumor volume and normal liver tissue in hepatocellular carcinoma. Cancer Imaging 2024; 24:20. [PMID: 38279133 PMCID: PMC10811872 DOI: 10.1186/s40644-024-00652-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Accepted: 12/29/2023] [Indexed: 01/28/2024] Open
Abstract
BACKGROUND & AIMS The present study utilized extracted computed tomography radiomics features to classify the gross tumor volume (GTV) and normal liver tissue in hepatocellular carcinoma using mainstream machine learning methods, aiming to establish an automatic classification model. METHODS We recruited 104 pathologically confirmed hepatocellular carcinoma patients for this study. GTV and normal liver tissue samples were manually segmented into regions of interest and randomly divided into five-fold cross-validation groups. Dimensionality reduction was performed using LASSO regression. Radiomics models were constructed via logistic regression, support vector machine (SVM), random forest, XGBoost, and AdaBoost algorithms. The diagnostic efficacy, discrimination, and calibration of the algorithms were verified using area under the receiver operating characteristic curve (AUC) analyses and calibration plot comparison. RESULTS Seven screened radiomics features excelled at distinguishing the gross tumor area. The XGBoost machine learning algorithm had the best discrimination and comprehensive diagnostic performance, with an AUC of 0.9975 [95% confidence interval (CI): 0.9973-0.9978] and a mean Matthews correlation coefficient (MCC) of 0.9369. SVM had the second-best discrimination and diagnostic performance, with an AUC of 0.9846 (95% CI: 0.9835-0.9857), a mean MCC of 0.9105, and better calibration. All other algorithms showed an excellent ability to distinguish between the gross tumor area and normal liver tissue (mean AUC of 0.9825, 0.9861, 0.9727, and 0.9644 for the AdaBoost, random forest, logistic regression, and naive Bayes algorithms, respectively). CONCLUSION CT radiomics based on machine learning algorithms can accurately classify GTV and normal liver tissue, while the XGBoost and SVM algorithms served as the best complementary algorithms.
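The overall pipeline, LASSO selection followed by a cross-validated classifier, can be sketched as below. scikit-learn's gradient boosting stands in for XGBoost to keep the example dependency-free, and the feature matrix and labels are random placeholders rather than the study's radiomics data.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 208 ROIs x 100 radiomics features, with the first three
# features carrying the class signal (GTV vs. normal liver).
rng = np.random.default_rng(0)
X = rng.normal(size=(208, 100))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=208) > 0).astype(int)

model = Pipeline([
    ("scale", StandardScaler()),
    ("lasso_select", SelectFromModel(Lasso(alpha=0.05))),  # LASSO selection
    ("clf", GradientBoostingClassifier()),                  # XGBoost stand-in
])
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC over 5 folds: {auc.mean():.3f}")
```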
Collapse
Affiliation(s)
- Huai-Wen Zhang
- Department of Radiotherapy, The Second Affiliated Hospital of Nanchang Medical College, Jiangxi Clinical Research Center for Cancer, Jiangxi Cancer Hospital, 330029, Nanchang, China
- Department of Oncology, The Third People's Hospital of Jingdezhen, The Third People's Hospital of Jingdezhen affiliated to Nanchang Medical College, 333000, Jingdezhen, China
| | - De-Long Huang
- School of Clinical Medicine, Southwest Medical University, 646000, Luzhou, China
| | - Yi-Ren Wang
- School of Nursing, Southwest Medical University, 646000, Luzhou, China
| | - Hao-Shu Zhong
- Department of Hematology, Huashan Hospital, Fudan University, 200040, Shanghai, China.
| | - Hao-Wen Pang
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, 646000, Luzhou, China.
| |
Collapse
|
21
|
Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024; 65:1-9. [PMID: 37996085 PMCID: PMC10803173 DOI: 10.1093/jrr/rrad090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Revised: 09/25/2023] [Accepted: 10/16/2023] [Indexed: 11/25/2023]
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Collapse
Affiliation(s)
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Takeshi Kamomae
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Shohei Fujita
- Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
| | - Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
| | - Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
| | - Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
| | - Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| | - Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
| | - Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| |
Collapse
|
22
|
Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540 PMCID: PMC10725576 DOI: 10.1088/1361-6560/ad0d8a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 10/30/2023] [Accepted: 11/15/2023] [Indexed: 11/19/2023]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Collapse
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
| | - Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
| | - Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
| | | | - Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
| | - Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
| | - Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
| | - Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
| | - Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
| | - Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
| |
Collapse
|
23
|
Gay SS, Cardenas CE, Nguyen C, Netherton TJ, Yu C, Zhao Y, Skett S, Patel T, Adjogatse D, Guerrero Urbano T, Naidoo K, Beadle BM, Yang J, Aggarwal A, Court LE. Fully-automated, CT-only GTV contouring for palliative head and neck radiotherapy. Sci Rep 2023; 13:21797. [PMID: 38066074 PMCID: PMC10709623 DOI: 10.1038/s41598-023-48944-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 12/01/2023] [Indexed: 12/18/2023] Open
Abstract
Planning for palliative radiotherapy is performed without the advantage of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were trained separately: (1) heuristic-cropped, non-contrast images with a single GTV channel; (2) non-contrast images cropped around a manually placed point at the tumor center, with a single GTV channel; (3) contrast-enhanced images with a single GTV channel; (4) contrast-enhanced images with separate primary and nodal GTV channels; and (5) contrast-enhanced images along with synthetic MR images, with separate primary and nodal GTV channels. The median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and the 95th-percentile Hausdorff distance from 14.7 to 19.7 mm across the five approaches. Only surface Dice exhibited a statistically significant difference across these five approaches using a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
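Surface Dice, the only metric that separated the five approaches, rewards boundary agreement within a distance tolerance. A common distance-transform formulation is sketched below, assuming boolean masks and mm spacing; the paper's implementation may differ in detail.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def surface_dice(a, b, spacing, tol_mm=2.0):
    """Surface Dice at tolerance tol_mm: the fraction of both surfaces lying
    within tol_mm of the other surface (a common formulation; the study's
    implementation may differ in detail)."""
    surf_a = a & ~binary_erosion(a)      # surface voxels of each mask
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    close_a = (dist_to_b[surf_a] <= tol_mm).sum()   # a-surface near b
    close_b = (dist_to_a[surf_b] <= tol_mm).sum()   # b-surface near a
    return (close_a + close_b) / (surf_a.sum() + surf_b.sum())
```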
Collapse
Affiliation(s)
- Skylar S Gay
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA.
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA.
| | - Carlos E Cardenas
- Department of Radiation Oncology, The University of Alabama at Birmingham, Birmingham, AL, USA
| | - Callistus Nguyen
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
| | - Tucker J Netherton
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
| | - Cenji Yu
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
| | - Yao Zhao
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
| | | | | | | | | | | | | | - Jinzhong Yang
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
| | | | - Laurence E Court
- Unit 1472, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
| |
Collapse
|
24
|
Liao W, Luo X, He Y, Dong Y, Li C, Li K, Zhang S, Zhang S, Wang G, Xiao J. Comprehensive Evaluation of a Deep Learning Model for Automatic Organs-at-Risk Segmentation on Heterogeneous Computed Tomography Images for Abdominal Radiation Therapy. Int J Radiat Oncol Biol Phys 2023; 117:994-1006. [PMID: 37244625 DOI: 10.1016/j.ijrobp.2023.05.034] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 03/13/2023] [Accepted: 05/18/2023] [Indexed: 05/29/2023]
Abstract
PURPOSE Our purpose was to develop a deep learning model (AbsegNet) that produces accurate contours of 16 organs at risk (OARs) for abdominal malignancies as an essential part of fully automated radiation treatment planning. METHODS AND MATERIALS Three data sets with 544 computed tomography scans were retrospectively collected. Data set 1 was split into 300 training cases and 128 test cases (cohort 1) for AbsegNet. Data set 2, comprising cohort 2 (n = 24) and cohort 3 (n = 20), was used to validate AbsegNet externally. Data set 3, comprising cohort 4 (n = 40) and cohort 5 (n = 32), was used to clinically assess the accuracy of AbsegNet-generated contours. Each cohort was from a different center. The Dice similarity coefficient and 95th-percentile Hausdorff distance were calculated to evaluate the delineation quality for each OAR. Clinical accuracy evaluation was classified into four levels: no revision, minor revisions (0% < volumetric revision degree [VRD] ≤ 10%), moderate revisions (10% < VRD < 20%), and major revisions (VRD ≥ 20%). RESULTS For all OARs, AbsegNet achieved a mean Dice similarity coefficient of 86.73%, 85.65%, and 88.04% in cohorts 1, 2, and 3, respectively, and a mean 95th-percentile Hausdorff distance of 8.92, 10.18, and 12.40 mm, respectively. AbsegNet outperformed SwinUNETR, DeepLabV3+, Attention-UNet, UNet, and 3D-UNet. When experts evaluated contours from cohorts 4 and 5, four OARs (liver, kidney_L, kidney_R, and spleen) of all patients were scored as needing no revision, and over 87.5% of patients with contours of the stomach, esophagus, adrenals, or rectum were considered to need no or only minor revisions. Only 15.0% of patients with colon and small bowel contours required major revisions. CONCLUSIONS We propose a novel deep-learning model to delineate OARs on diverse data sets. Most contours produced by AbsegNet are accurate and robust and are, therefore, clinically applicable and helpful to facilitate radiation therapy workflow.
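The four-level clinical accuracy scale maps directly to a VRD threshold function, as in this small sketch; boundaries follow the abstract, with the moderate band taken as strictly between 10% and 20%.

```python
def revision_level(vrd_percent: float) -> str:
    """Map a volumetric revision degree (VRD, %) to the four clinical
    levels used in the study (boundaries as stated in the abstract)."""
    if vrd_percent == 0:
        return "no revision"
    if vrd_percent <= 10:
        return "minor revisions"
    if vrd_percent < 20:
        return "moderate revisions"
    return "major revisions"

print(revision_level(8.5))   # minor revisions
print(revision_level(25.0))  # major revisions
```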
Collapse
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Ye Dong
- Department of NanFang PET Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Churong Li
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Kang Li
- West China Biomedical Big Data Center
| | - Shichuan Zhang
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Jianghong Xiao
- Radiotherapy Physics & Technology Center, Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China.
| |
Collapse
|
25
|
Ma L, Yang Y, Ma J, Mao L, Li X, Feng L, Abulimiti M, Xiang X, Fu F, Tan Y, Zhang W, Li YX, Jin J, Li N. Correlation between AI-based CT organ features and normal lung dose in adjuvant radiotherapy following breast-conserving surgery: a multicenter prospective study. BMC Cancer 2023; 23:1085. [PMID: 37946125 PMCID: PMC10636953 DOI: 10.1186/s12885-023-11554-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 10/20/2023] [Indexed: 11/12/2023] Open
Abstract
BACKGROUND Radiation pneumonitis (RP) is a common side effect of adjuvant radiotherapy in breast cancer, and the irradiation dose to normal lung is related to RP. We aimed to propose an organ feature based on deep learning (DL) and to evaluate the correlation between normal lung dose and this organ feature. METHODS Patients with pathology-confirmed invasive breast cancer treated with adjuvant radiotherapy following breast-conserving surgery in four centers were included. From 2019 to 2020, a total of 230 patients from four nationwide centers in China were screened, of whom 208 were enrolled for DL modeling, and 22 patients from another three centers formed the external testing cohort. A subset of the internal testing cohort (n = 42) formed the internal correlation testing cohort for correlation analysis. The outline of the ipsilateral breast was marked with a lead wire before scanning. A DL model based on the High-Resolution Net was then developed to detect the lead wire marker in each slice of the CT images automatically, and an in-house model was applied to segment the ipsilateral lung region. The mean and standard deviation of the distance error, the average precision (AP), and the average recall (AR) were used to measure the performance of the lead wire marker detection model. Based on these DL model results, we proposed an organ feature, and the Pearson correlation coefficient was calculated between the proposed organ feature and the ipsilateral lung volume receiving 20 Gray (Gy) or more (V20). RESULTS For the lead wire marker detection model, the distance error (mean ± standard deviation), AP (5 mm), and AR (5 mm) reached 3.415 ± 4.529 mm, 0.860, and 0.883 in the internal testing cohort and 4.189 ± 8.390 mm, 0.848, and 0.830 in the external testing cohort. The proposed organ feature calculated from the detected marker correlated with ipsilateral lung V20 (Pearson correlation coefficient, 0.542 with p < 0.001 in the internal correlation testing cohort and 0.554 with p = 0.008 in the external testing cohort). CONCLUSIONS The proposed artificial intelligence-based CT organ feature correlated with normal lung dose in adjuvant radiotherapy following breast-conserving surgery in patients with invasive breast cancer. TRIAL REGISTRATION NCT05609058 (08/11/2022).
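The correlation analysis itself is a standard Pearson test between the organ feature and lung V20; a sketch with placeholder per-patient values follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-patient values, not the study's data.
organ_feature = np.array([0.31, 0.42, 0.27, 0.55, 0.48, 0.36])  # hypothetical
lung_v20 = np.array([9.8, 12.4, 8.9, 15.2, 13.9, 11.1])         # % volume

r, p = pearsonr(organ_feature, lung_v20)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```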
Collapse
Affiliation(s)
- Li Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Yongjing Yang
- Department of Radiation Oncology, Jilin Cancer Hospital, Changchun, Jilin, 130012, China
| | - Jiabao Ma
- Department of Radiation Oncology, Sichuan Cancer Hospital & Research Institute, No. 55, the 4Th Section, Renmin South Road, Chengdu, 610041, China
| | - Li Mao
- AI Lab, Deepwise Healthcare, Beijing, 100080, People's Republic of China
| | - Xiuli Li
- AI Lab, Deepwise Healthcare, Beijing, 100080, People's Republic of China
| | - Lingling Feng
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Muyasha Abulimiti
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Xiaoyong Xiang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Fangmeng Fu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Yutong Tan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Wenjue Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China
| | - Ye-Xiong Li
- Department of Radiation Oncology, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Jing Jin
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Department of Radiation Oncology, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China.
| | - Ning Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Department of Radiation Oncology, Cancer Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China.
- Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Jinzhong, China.
| |
Collapse
|
26
|
Duan J, Bernard ME, Rong Y, Castle JR, Feng X, Johnson JD, Chen Q. Contour subregion error detection methodology using deep learning auto-segmentation. Med Phys 2023; 50:6673-6683. [PMID: 37793103 DOI: 10.1002/mp.16768] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 07/26/2023] [Accepted: 09/17/2023] [Indexed: 10/06/2023] Open
Abstract
BACKGROUND Inaccurate manual organ delineation is one of the high-risk failure modes in radiation treatment. Numerous automated contour quality assurance (QA) systems have been developed to assess contour acceptability; however, manual inspection of flagged cases is a time-consuming and challenging process and can lead users to overlook the exact error location. PURPOSE Our aim is to develop and validate a contour QA system that can effectively detect and visualize subregional contour errors, both qualitatively and quantitatively. METHODS/MATERIALS A novel contour subregion error detection (CSED) system was developed using subregional surface-distance discrepancies between manual and deep learning auto-segmentation (DLAS) contours. A validation study was conducted using a public head and neck dataset containing 339 cases and evaluated according to knowledge-based pass criteria derived from a clinical training dataset of 60 cases. A blind qualitative evaluation was conducted, comparing the results from the CSED system with manual labels. Subsequently, the CSED-flagged cases were re-examined by a radiation oncologist. RESULTS The CSED system could visualize diverse types of subregional contour errors, both qualitatively and quantitatively. In the validation dataset, the CSED system yielded true positive rates (TPR) of 0.814, 0.800, and 0.771; false positive rates (FPR) of 0.310, 0.267, and 0.298; and accuracies of 0.735, 0.759, and 0.730 for the brainstem and the left and right parotid contours, respectively. The CSED-assisted manual review caught 13 brainstem, 19 left parotid, and 21 right parotid contour errors missed by conventional human review. The TPR/FPR/accuracy of the CSED-assisted manual review improved to 0.836/0.253/0.784, 0.831/0.171/0.830, and 0.808/0.193/0.807 for each structure, respectively. Further, CSED-assisted review reduced review time by 75%, to 24.81 ± 12.84, 26.75 ± 10.41, and 28.71 ± 13.72 s per structure, respectively. CONCLUSIONS The CSED system enables qualitative and quantitative detection, localization, and visualization of manual segmentation subregional errors, utilizing DLAS contours as references. The use of this system has been shown to help reduce the risk of high-risk failure modes resulting from inaccurate organ segmentation.
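In spirit, CSED localizes errors by comparing surface distances subregion by subregion. The sketch below illustrates the idea with octant subregions and per-octant knowledge-based thresholds; it is an illustration of the approach, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_subregion_errors(manual_pts, dlas_pts, thresholds):
    """Sketch of subregional error detection in the spirit of CSED (not the
    authors' code): split the manual surface into octants about its
    centroid, measure each point's distance to the DLAS surface, and flag
    octants whose mean discrepancy exceeds a knowledge-based threshold.

    manual_pts, dlas_pts: (N, 3) surface point arrays in mm;
    thresholds: eight per-octant distance thresholds (mm)."""
    tree = cKDTree(dlas_pts)
    dist, _ = tree.query(manual_pts)            # manual -> DLAS distances
    centered = manual_pts - manual_pts.mean(axis=0)
    octant = ((centered[:, 0] > 0).astype(int)  # octant index 0..7
              + 2 * (centered[:, 1] > 0)
              + 4 * (centered[:, 2] > 0))
    flagged = []
    for k in range(8):
        sel = octant == k
        if sel.any() and dist[sel].mean() > thresholds[k]:
            flagged.append(k)
    return flagged                              # octants needing review
```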
Collapse
Affiliation(s)
- Jingwei Duan
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
| | - Mark E Bernard
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
| | - Yi Rong
- Department of Radiation Oncology, Mayo Clinic, Phoenix, Arizona, USA
| | - James R Castle
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
| | - Xue Feng
- Carina Medical LLC, Lexington, Kentucky, USA
| | - Jeremiah D Johnson
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
| | - Quan Chen
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
- Department of Radiation Oncology, City of Hope Comprehensive Cancer Center, Duarte, California, USA
| |
Collapse
|
27
|
Tatsugami F, Nakaura T, Yanagawa M, Fujita S, Kamagata K, Ito R, Kawamura M, Fushimi Y, Ueda D, Matsui Y, Yamada A, Fujima N, Fujioka T, Nozaki T, Tsuboyama T, Hirata K, Naganawa S. Recent advances in artificial intelligence for cardiac CT: Enhancing diagnosis and prognosis prediction. Diagn Interv Imaging 2023; 104:521-528. [PMID: 37407346 DOI: 10.1016/j.diii.2023.06.011] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 06/20/2023] [Indexed: 07/07/2023]
Abstract
Recent advances in artificial intelligence (AI) for cardiac computed tomography (CT) have shown great potential in enhancing diagnosis and prognosis prediction in patients with cardiovascular disease. Deep learning, a type of machine learning, has revolutionized radiology by enabling automatic feature extraction and learning from large datasets, particularly in image-based applications. Thus, AI-driven techniques have enabled faster analysis of cardiac CT examinations than is possible for human readers, while maintaining reproducibility. However, further research and validation are required to fully assess the diagnostic performance, radiation dose-reduction capabilities, and clinical correctness of these AI-driven techniques in cardiac CT. This review article presents recent advances of AI in the field of cardiac CT, including deep-learning-based image reconstruction, coronary artery motion correction, automatic calcium scoring, automatic epicardial fat measurement, coronary artery stenosis diagnosis, fractional flow reserve prediction, and prognosis prediction; analyzes the current limitations of these techniques; and discusses future challenges.
Collapse
Affiliation(s)
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan.
| | - Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo Chuo-ku, Kumamoto, 860-8556, Japan
| | - Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
| | - Shohei Fujita
- Departmen of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyoku, Kyoto, 606-8507, Japan
| | - Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
| | - Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
| | - Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
| | - Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital N15, W5, Kita-Ku, Sapporo 060-8638, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
| | - Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-0016, Japan
| | - Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15 Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| |
Collapse
|
28
|
Luan S, Wei C, Ding Y, Xue X, Wei W, Yu X, Wang X, Ma C, Zhu B. PCG-net: feature adaptive deep learning for automated head and neck organs-at-risk segmentation. Front Oncol 2023; 13:1177788. [PMID: 37927463 PMCID: PMC10623055 DOI: 10.3389/fonc.2023.1177788] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 10/03/2023] [Indexed: 11/07/2023] Open
Abstract
Introduction Radiation therapy is a common treatment option for head and neck cancer (HNC), where accurate segmentation of head and neck (HN) organs-at-risk (OARs) is critical for effective treatment planning. Manual labeling of HN OARs is time-consuming and subjective, so deep learning segmentation methods have been widely used. However, HN OAR segmentation remains challenging because of small structures such as the optic chiasm and optic nerve. Methods To address this challenge, we propose a parallel network architecture called PCG-Net, which incorporates both a convolutional neural network (CNN) and a Gate-Axial-Transformer (GAT) to effectively capture local information and global context. Additionally, we employ a cascade graph module (CGM) to enhance feature fusion through message-passing functions and information-aggregation strategies. We conducted extensive experiments to evaluate the effectiveness of PCG-Net and its robustness on three different downstream tasks. Results The results show that PCG-Net outperforms other methods and improves the accuracy of HN OAR segmentation, which can potentially improve treatment planning for HNC patients. Discussion In summary, the PCG-Net model effectively establishes the dependency between local information and global context and employs the CGM to enhance feature fusion for accurate segmentation of HN OARs. The results demonstrate the superiority of PCG-Net over other methods, making it a promising approach for HNC treatment planning.
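The parallel local/global design can be illustrated with a small PyTorch block: a convolutional branch plus axial attention along the height and width axes, fused by a 1x1 convolution. This is a schematic in the spirit of PCG-Net, not the published architecture.

```python
import torch
import torch.nn as nn

class ParallelCNNAxialBlock(nn.Module):
    """Illustrative parallel block in the spirit of PCG-Net (not the
    authors' implementation): a convolutional branch for local detail and
    an axial-attention branch for global context, fused by a 1x1 conv."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn_h = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_w = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.conv_branch(x)
        # Axial attention: attend along H (per column), then along W (per row).
        t = x.permute(0, 3, 2, 1).reshape(b * w, h, c)    # (B*W, H, C)
        t, _ = self.attn_h(t, t, t)
        t = t.reshape(b, w, h, c).permute(0, 2, 1, 3).reshape(b * h, w, c)
        t, _ = self.attn_w(t, t, t)
        glob = t.reshape(b, h, w, c).permute(0, 3, 1, 2)  # back to (B, C, H, W)
        return self.fuse(torch.cat([local, glob], dim=1))

x = torch.randn(1, 32, 64, 64)
print(ParallelCNNAxialBlock(32)(x).shape)    # torch.Size([1, 32, 64, 64])
```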
Collapse
Affiliation(s)
- Shunyao Luan
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
| | - Changchao Wei
- Key Laboratory of Artificial Micro and Nano-structures of Ministry of Education, Center for Theoretical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
| | - Yi Ding
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Xudong Xue
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Wei Wei
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Xiao Yu
- Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
| | - Chi Ma
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
| | - Benpeng Zhu
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
29
|
Heilemann G, Buschmann M, Lechner W, Dick V, Eckert F, Heilmann M, Herrmann H, Moll M, Knoth J, Konrad S, Simek IM, Thiele C, Zaharie A, Georg D, Widder J, Trnkova P. Clinical Implementation and Evaluation of Auto-Segmentation Tools for Multi-Site Contouring in Radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100515. [PMID: 38111502 PMCID: PMC10726238 DOI: 10.1016/j.phro.2023.100515] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 11/09/2023] [Accepted: 11/09/2023] [Indexed: 12/20/2023] Open
Abstract
Background and purpose Tools for auto-segmentation in radiotherapy are widely available, but guidelines for clinical implementation are missing. The goal was to develop a workflow for performance evaluation of three commercial auto-segmentation tools to select one candidate for clinical implementation. Materials and Methods One hundred patients across six treatment sites, including brain, head-and-neck, thorax, abdomen, and pelvis, were included. Three sets of AI-based contours for organs-at-risk (OAR) generated by three software tools and manually drawn expert contours were blindly rated for contouring accuracy. The Dice similarity coefficient (DSC), the Hausdorff distance, and a dose/volume evaluation based on the recalculation of the original treatment plan were assessed. Statistically significant differences were tested using the Kruskal-Wallis test and the post hoc Dunn test with Bonferroni correction. Results The mean DSC scores compared to expert contours for all OARs combined were 0.80 ± 0.10, 0.75 ± 0.10, and 0.74 ± 0.11 for the three software tools. Physicians' ratings identified equivalent or superior performance of some AI-based contours in the head (eye, lens, optic nerve, brain, chiasm), thorax (e.g., heart and lungs), and pelvis and abdomen (e.g., kidney, femoral head) compared to manual contours. For some OARs, the AI models provided results requiring only minor corrections. Bowel-bag and stomach were not fit for direct use. During the interdisciplinary discussion, the physicians' rating was considered the most relevant criterion. Conclusion A comprehensive method for evaluation and clinical implementation of commercially available auto-segmentation software was developed. The in-depth analysis yielded clear instructions for clinical use within the radiotherapy department.
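The statistical recipe described here (an omnibus Kruskal-Wallis test followed by Bonferroni-corrected pairwise post hoc comparisons) is straightforward to reproduce. Below is a minimal SciPy sketch on synthetic DSC scores; it substitutes pairwise Mann-Whitney U tests for Dunn's test, which would otherwise require an extra package such as scikit-posthocs.

```python
import numpy as np
from scipy import stats

# Synthetic per-structure DSC scores for three hypothetical tools,
# loosely matching the means reported in the abstract.
rng = np.random.default_rng(0)
dsc_a = rng.normal(0.80, 0.10, 50).clip(0, 1)
dsc_b = rng.normal(0.75, 0.10, 50).clip(0, 1)
dsc_c = rng.normal(0.74, 0.11, 50).clip(0, 1)

# Omnibus test for any difference between the three tools.
h_stat, p_omnibus = stats.kruskal(dsc_a, dsc_b, dsc_c)
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_omnibus:.4f}")

# Bonferroni-corrected pairwise post hoc tests (Mann-Whitney U here;
# the study used Dunn's test).
pairs = [("A", "B", dsc_a, dsc_b), ("A", "C", dsc_a, dsc_c), ("B", "C", dsc_b, dsc_c)]
for n1, n2, g1, g2 in pairs:
    _, p = stats.mannwhitneyu(g1, g2, alternative="two-sided")
    print(f"{n1} vs {n2}: adjusted p={min(p * len(pairs), 1.0):.4f}")
```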
Collapse
Affiliation(s)
- Gerd Heilemann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Martin Buschmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Wolfgang Lechner
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Vincent Dick
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Franziska Eckert
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Martin Heilmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Harald Herrmann
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Matthias Moll
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Johannes Knoth
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Stefan Konrad
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Inga-Malin Simek
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Christopher Thiele
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Alexandru Zaharie
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Dietmar Georg
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Joachim Widder
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| | - Petra Trnkova
- Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria
| |
Collapse
|
30
|
Vaassen F, Zegers CML, Hofstede D, Wubbels M, Beurskens H, Verheesen L, Canters R, Looney P, Battye M, Gooding MJ, Compter I, Eekers DBP, van Elmpt W. Geometric and dosimetric analysis of CT- and MR-based automatic contouring for the EPTN contouring atlas in neuro-oncology. Phys Med 2023; 114:103156. [PMID: 37813050 DOI: 10.1016/j.ejmp.2023.103156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 09/21/2023] [Accepted: 09/26/2023] [Indexed: 10/11/2023] Open
Abstract
PURPOSE Atlas-based and deep-learning contouring (DLC) are methods for automatic segmentation of organs-at-risk (OARs). The European Particle Therapy Network (EPTN) published a consensus-based atlas for delineation of OARs in neuro-oncology. In this study, geometric and dosimetric evaluation of automatically segmented neuro-oncological OARs was performed using CT- and MR-based models following the EPTN contouring atlas. METHODS Image and contouring data from 76 neuro-oncological patients were included. Two atlas-based models (CT-atlas and MR-atlas) and one DLC model (MR-DLC) were created. Manual contours on registered CT-MR images were used as ground truth. Results were analyzed in terms of geometric accuracy (volumetric Dice similarity coefficient (vDSC), surface DSC (sDSC), added path length (APL), and mean slice-wise Hausdorff distance (MSHD)) and dosimetric accuracy. Distance-to-tumor analysis was performed to analyze to what extent the location of the OAR relative to the planning target volume (PTV) has dosimetric impact, using Wilcoxon rank-sum tests. RESULTS CT-atlas outperformed MR-atlas for 22/26 OARs. MR-DLC outperformed MR-atlas for all OARs. The highest median (95% CI) vDSC and sDSC were found for the brainstem in MR-DLC: 0.92 (0.88-0.95) and 0.84 (0.77-0.89), respectively, as well as the lowest MSHD: 0.27 (0.22-0.39) cm. Median dose differences (ΔD) were within ±1 Gy for 24/26 (92%) OARs for all three models. Distance-to-tumor showed a significant correlation for the ΔDmax,0.03cc parameters when splitting the data into ≤4 cm and >4 cm OAR distances (p < 0.001). CONCLUSION MR-based DLC and CT-based atlas-contouring enable high-quality segmentation. It was shown that a combination of both CT- and MR-based autocontouring models results in the best quality.
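For readers who want to reproduce the geometric part of such an evaluation, the following is a minimal NumPy/SciPy sketch of a volumetric DSC and a percentile Hausdorff distance computed via distance transforms. The function names are ours, and the voxel spacing must be supplied to obtain physical units.

```python
import numpy as np
from scipy import ndimage

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    a, b = a.astype(bool), b.astype(bool)
    # Distance from any voxel to the nearest foreground voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    # Surface voxels: foreground voxels with a background neighbour.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    dists = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(dists, 95))

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a);            b[10:22, 9:21, 8:20] = True
print(volumetric_dice(a, b), hausdorff95(a, b, spacing=(1.0, 1.0, 3.0)))
```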
Collapse
Affiliation(s)
- Femke Vaassen
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands.
| | - Catharina M L Zegers
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - David Hofstede
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Mart Wubbels
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Hilde Beurskens
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Lindsey Verheesen
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Richard Canters
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | | | | | | | - Inge Compter
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Daniëlle B P Eekers
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| | - Wouter van Elmpt
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
| |
Collapse
|
31
|
Fiandra C, Rosati S, Arcadipane F, Dinapoli N, Fato M, Franco P, Gallio E, Scaffidi Gennarino D, Silvetti P, Zara S, Ricardi U, Balestra G. Active bone marrow segmentation based on computed tomography imaging in anal cancer patients: A machine-learning-based proof of concept. Phys Med 2023; 113:102657. [PMID: 37567068 DOI: 10.1016/j.ejmp.2023.102657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Revised: 06/30/2023] [Accepted: 08/05/2023] [Indexed: 08/13/2023] Open
Abstract
PURPOSE Different methods are available to identify haematopoietically active bone marrow (ActBM). However, their use can be challenging for routine radiotherapy treatments, since they require specific equipment and dedicated time. A machine learning (ML) approach, based on radiomic features as inputs to three different classifiers, was applied to computed tomography (CT) images to identify haematopoietically active bone marrow in anal cancer patients. METHODS A total of 40 patients were assigned to the construction set (training set + test set). Fluorine-18-Fluorodeoxyglucose Positron Emission Tomography (18FDG-PET) images were used to detect the active part of the pelvic bone marrow (ActPBM) and stored as ground truth for three subregions: iliac, lower pelvis, and lumbosacral bone marrow (ActIBM, ActLPBM, ActLSBM). Three parameters were used for the correspondence analyses between 18FDG-PET and the ML classifiers: Dice index, Precision, and Recall. RESULTS For the 40-patient cohort, median values [min; max] of the Dice index were 0.69 [0.20; 0.84], 0.76 [0.25; 0.89], and 0.36 [0.15; 0.67] for ActIBM, ActLSBM, and ActLPBM, respectively. The median Precision/Recall (P/R) ratio for the ActLPBM structure was 0.59 [0.20; 1.84] (over-segmentation), while for the other two subregions the median P/R ratios were 1.249 [0.43; 4.15] for ActIBM and 1.093 [0.24; 1.91] for ActLSBM (under-segmentation). CONCLUSION A satisfactory degree of overlap compared to 18FDG-PET was found for 2 out of the 3 subregions within the pelvic bones. Further optimization and generalization of the process is required before clinical implementation.
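The correspondence parameters used here (Dice, Precision, Recall, and the P/R ratio read as an over-/under-segmentation indicator) reduce to counts of true and false positives between two binary masks. A small NumPy sketch, with our own function name:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Voxel-wise agreement between a predicted and a ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    # P/R < 1 suggests over-segmentation, P/R > 1 under-segmentation.
    return {"dice": dice, "precision": precision,
            "recall": recall, "pr_ratio": precision / recall}

pred = np.random.rand(64, 64, 64) > 0.5
truth = np.random.rand(64, 64, 64) > 0.5
print(overlap_metrics(pred, truth))
```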
Collapse
Affiliation(s)
- C Fiandra
- Department of Oncology, University of Turin, Turin, Italy.
| | - S Rosati
- Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
| | - F Arcadipane
- Department of Oncology, University of Turin, Turin, Italy
| | - N Dinapoli
- UOC Radioterapia Oncologica, Dipartimento Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - M Fato
- Department of Informatics, Bioengineering, Robotics and System Engineering (DIBRIS), University of Genova, Genova, Italy
| | - P Franco
- Department of Oncology, University of Turin, Turin, Italy
| | - E Gallio
- Medical Physics Unit, A.O.U. Città della Salute e della Scienza, Turin, Italy
| | - D Scaffidi Gennarino
- Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
| | - P Silvetti
- Department of Oncology, University of Turin, Turin, Italy
| | - S Zara
- Tecnologie Avanzate, Torino, Italy
| | - U Ricardi
- Department of Oncology, University of Turin, Turin, Italy
| | - G Balestra
- Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
| |
Collapse
|
32
|
Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, Ferentinos K, Strouthos I, Zamboglou C, Karagiannis E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol 2023; 13:1213068. [PMID: 37601695 PMCID: PMC10436522 DOI: 10.3389/fonc.2023.1213068] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 07/17/2023] [Indexed: 08/22/2023] Open
Abstract
Purpose/objectives Auto-segmentation with artificial intelligence (AI) offers an opportunity to reduce inter- and intra-observer variability in contouring, to improve the quality of contours, as well as to reduce the time taken to conduct this manual task. In this work we benchmark the AI auto-segmentation contours produced by five commercial vendors against a common dataset. Methods and materials The organ at risk (OAR) contours generated by five commercial AI auto-segmentation solutions (Mirada (Mir), MVision (MV), Radformation (Rad), RayStation (Ray) and TheraPanacea (Ther)) were compared to manually-drawn expert contours from 20 breast, 20 head and neck, 20 lung and 20 prostate patients. Comparisons were made using geometric similarity metrics including volumetric and surface Dice similarity coefficient (vDSC and sDSC), Hausdorff distance (HD) and Added Path Length (APL). To assess the time saved, the time taken to manually draw the expert contours, as well as the time to correct the AI contours, were recorded. Results There were differences in the number of CT contours offered by each AI auto-segmentation solution at the time of the study (Mir 99; MV 143; Rad 83; Ray 67; Ther 86), with all offering contours of some lymph node levels as well as OARs. Averaged across all structures, the median vDSCs were good for all systems and compared favorably with existing literature: Mir 0.82; MV 0.88; Rad 0.86; Ray 0.87; Ther 0.88. All systems offer substantial time savings, ranging between 14-20 mins for breast, 74-93 mins for head and neck, 20-26 mins for lung, and 35-42 mins for prostate. The time saved, averaged across all structures, was similar for all systems: Mir 39.8 mins; MV 43.6 mins; Rad 36.6 mins; Ray 43.2 mins; Ther 45.2 mins. Conclusions All five commercial AI auto-segmentation solutions evaluated in this work offer high-quality contours in significantly reduced time compared to manual contouring, and could be used to render the radiotherapy workflow more efficient and standardized.
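Of the metrics listed, Added Path Length is the least standardized; one common pixel-based reading is "the part of the reference contour that the auto-contour fails to reproduce, i.e. what an editor would still have to draw". A simplified 2D sketch under that assumption (the exact definition used by the study may differ):

```python
import numpy as np
from scipy import ndimage

def added_path_length(auto: np.ndarray, ref: np.ndarray, pixel_mm: float = 1.0) -> float:
    """Approximate APL on a 2D slice: length of the reference contour
    not already covered by the automatic contour."""
    def contour(mask: np.ndarray) -> np.ndarray:
        m = mask.astype(bool)
        return m & ~ndimage.binary_erosion(m)   # boundary pixels only
    added = contour(ref) & ~contour(auto)
    return added.sum() * pixel_mm

ref = np.zeros((64, 64), bool);  ref[20:40, 20:40] = True
auto = np.zeros_like(ref);       auto[22:40, 20:40] = True
print(added_path_length(auto, ref, pixel_mm=0.98))
```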
Collapse
Affiliation(s)
- Paul J. Doolan
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
| | | | - Yiannis Roussakis
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
| | - Agnes Leczynski
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Mary Peratikou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Melka Benjamin
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
| | - Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| | - Iosif Strouthos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| | - Constantinos Zamboglou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Department of Radiation Oncology, Medical Center – University of Freiburg, Freiburg, Germany
| | - Efstratios Karagiannis
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| |
Collapse
|
33
|
Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023; 13:946. [PMID: 37373935 DOI: 10.3390/jpm13060946] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 06/01/2023] [Accepted: 06/01/2023] [Indexed: 06/29/2023] Open
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs at risk (OARs), making radiotherapy (RT) planning, optimization, and delivery challenging. In this review, we provide a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on the OAR segmentation process. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool to automate the RT workflow for the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups, including clinicians and computer scientists.
Collapse
Affiliation(s)
- Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
| | - Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
| | - Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| | - Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
| |
Collapse
|
34
|
Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. [PMID: 37091160 PMCID: PMC10115982 DOI: 10.3389/fonc.2023.1137803] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 03/24/2023] [Indexed: 04/09/2023] Open
Abstract
Introduction Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL) based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. Methods Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, while an additional 20 CTs were held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours produced by the MDAs and the DL model, were compared to the GS for that patient. Results Mean time for initial MDA contouring was 2.3 hours (range 1.6-3.8 hours) and RO revision took 1.1 hours (range 0.4-4.4 hours), compared to 0.7 hours (range 0.1-2.0 hours) for the RO revisions to DL contours. Total time was reduced by 76% (95% confidence interval: 65%-88%) and RO-revision time was reduced by 35% (95% CI: -39%-91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to the RO-revised MDA contours, including the volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. Conclusion DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
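The equivalence testing mentioned in the abstract is typically done with two one-sided t-tests (TOST). A minimal SciPy sketch for paired per-patient metrics follows; the equivalence margin is an assumption the analyst must set, and the function name and data below are ours.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin: float) -> float:
    """Two one-sided t-tests (TOST) for paired samples.
    Rejecting both one-sided nulls (returned p < alpha) supports
    |mean(x - y)| < margin, i.e. equivalence within the margin."""
    d = np.asarray(x, float) - np.asarray(y, float)
    se = d.std(ddof=1) / np.sqrt(d.size)
    t_lower = (d.mean() + margin) / se          # H0: mean diff <= -margin
    t_upper = (d.mean() - margin) / se          # H0: mean diff >= +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, df=d.size - 1)
    p_upper = stats.t.cdf(t_upper, df=d.size - 1)
    return max(p_lower, p_upper)

# Hypothetical per-patient DSC for RO-revised DL vs RO-revised MDA contours.
rng = np.random.default_rng(0)
dl = rng.normal(0.86, 0.04, 20)
mda = dl - rng.normal(0.00, 0.02, 20)
print(tost_paired(dl, mda, margin=0.05))  # small p -> equivalent within 0.05 DSC
```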
Collapse
Affiliation(s)
- J. John Lucido
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Todd A. DeWees
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
| | - Todd R. Leavitt
- Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
| | - Aman Anand
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
| | - Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
| | | | - Justine R. Buroker
- Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
| | - Robert L. Foote
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Olivia R. Foss
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
| | - Angela M. Gleason
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
| | - Teresa L. Hodge
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | - Ashley E. Hunzeker
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Nadia N. Laack
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Tamra K. Lenz
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | | | - Douglas J. Moseley
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Lisa M. Undahl
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | - Yojan Patel
- Google Health, Mountain View, CA, United States
| | - Erik J. Tryggestad
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
| | | | | | - Samir H. Patel
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
| |
Collapse
|
35
|
Shanbhag NM, Sulaiman Bin Sumaida A, Saleh M. Achieving Exceptional Cochlea Delineation in Radiotherapy Scans: The Impact of Optimal Window Width and Level Settings. Cureus 2023; 15:e37741. [PMID: 37091485 PMCID: PMC10115744 DOI: 10.7759/cureus.37741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/17/2023] [Indexed: 04/25/2023] Open
Abstract
Introduction Radiation therapy (RT) aims to maximize the dose to the target volume while minimizing the dose to organs at risk (OAR), which is crucial for optimal treatment outcomes and minimal side effects. The complex anatomy of the head and neck regions, including the cochlea, presents challenges in radiotherapy. Accurate delineation of the cochlea is essential to prevent toxicities such as sensorineural hearing loss. Educational interventions, including seminars, atlases, and multidisciplinary discussions, can improve accuracy and interobserver agreement in contouring. This study seeks to provide radiation oncology practitioners with the necessary window width and window level settings in computed tomography (CT) scans to accurately and precisely delineate the cochlea, using a pre- and post-learning phase approach to assess the change in accuracy. Methods and materials The study used the ProKnow Contouring Accuracy Program (ProKnow, LLC, Florida, United States), which employs the StructSure method and the Dice coefficient to assess the precision of a user's contour compared to an expert contour. The StructSure method offers superior sensitivity and accuracy, while the Dice coefficient is a more rudimentary and less sensitive approach. Two datasets of CT scans, one each for the left and right cochlea, were used. The author delineated the cochlea before and after applying the proposed technique for window width and window level, comparing the results with those of the expert and the general population. The study included a step-by-step method for cochlea delineation using window width and window level settings. Data analysis was performed using IBM SPSS Statistics for Windows, Version 26.0 (Released 2019; IBM Corp., Armonk, New York, United States). Results The implementation of the proposed step-by-step method for adjusting window width and window level led to significant improvements in contouring accuracy and delineation quality in radiation therapy planning. Comparing pre- and post-intervention scenarios, the author's StructSure scores increased (right cochlea: 88.81 to 99.15; left cochlea: 88.45 to 99.85), as did the Dice coefficient scores (right cochlea: 0.62 to 0.80; left cochlea: 0.73 to 0.86). The author consistently demonstrated higher contouring accuracy and greater similarity to expert contours compared to the group's mean scores both before and after the intervention. These results suggest that the proposed method enhances the precision of cochlea delineation in radiotherapy planning. Conclusion In conclusion, this study demonstrated that a step-by-step instructional approach for adjusting window width and window level significantly improved cochlea delineation accuracy in radiotherapy contouring. The findings hold potential clinical implications for reducing radiation-related side effects and improving patient outcomes. This study supports the integration of the instructional technique into radiation oncology training and encourages further exploration of advanced image processing and artificial intelligence applications in radiotherapy contouring.
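Window width and level are simply an affine remapping of Hounsfield units to display grey levels, which is why they have such a direct effect on how conspicuous a small structure like the cochlea is. A minimal NumPy sketch of the mapping; the example width/level values are illustrative, not the settings recommended by the study:

```python
import numpy as np

def apply_window(hu: np.ndarray, width: float, level: float) -> np.ndarray:
    """Map Hounsfield units to 8-bit display values for a given
    window width/level, as used when reviewing CT for contouring."""
    lo, hi = level - width / 2.0, level + width / 2.0
    windowed = np.clip(hu, lo, hi)
    return ((windowed - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: a bone-type window on a random HU slice; values are illustrative.
display = apply_window(np.random.randint(-1000, 2000, (512, 512)),
                       width=2800, level=600)
print(display.min(), display.max())
```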
Collapse
Affiliation(s)
- Nandan M Shanbhag
- Department of Oncology/Palliative Care, Tawam Hospital, Al Ain, ARE
- Department of Oncology/Radiation Oncology, Tawam Hospital, Al Ain, ARE
| | | | | |
Collapse
|
36
|
Martín-Noguerol T, Oñate Miranda M, Amrhein TJ, Paulano-Godino F, Xiberta P, Vilanova JC, Luna A. The role of Artificial intelligence in the assessment of the spine and spinal cord. Eur J Radiol 2023; 161:110726. [PMID: 36758280 DOI: 10.1016/j.ejrad.2023.110726] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Revised: 01/13/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023]
Abstract
Artificial intelligence (AI) application development is underway in all areas of radiology, with many promising tools focused on the spine and spinal cord. In the past decade, multiple spine AI algorithms have been created based on radiographs, computed tomography, and magnetic resonance imaging. These algorithms have wide-ranging purposes, including automatic labeling of vertebral levels, automated description of disc degenerative changes, detection and classification of spine trauma, identification of osseous lesions, and the assessment of cord pathology. The overarching goals of these algorithms include improved patient throughput, a reduced radiologist workload burden, and improved diagnostic accuracy. There are several prerequisite tasks required to achieve these goals, such as automatic image segmentation and facilitating image acquisition and postprocessing. In this narrative review, we discuss some of the important imaging AI solutions that have been developed for the assessment of the spine and spinal cord. We focus on their practical applications and briefly discuss some key requirements for the successful integration of these tools into practice. The potential impact of AI in the imaging assessment of the spine and cord is vast and promises to provide broad-reaching improvements for clinicians, radiologists, and patients alike.
Collapse
Affiliation(s)
| | - Marta Oñate Miranda
- Department of Radiology, Centre Hospitalier Universitaire de Sherbrooke, Sherbrooke, Quebec, Canada.
| | - Timothy J Amrhein
- Department of Radiology, Duke University Medical Center, Durham, USA.
| | | | - Pau Xiberta
- Graphics and Imaging Laboratory (GILAB), University of Girona, 17003 Girona, Spain.
| | - Joan C Vilanova
- Department of Radiology. Clinica Girona, Diagnostic Imaging Institute (IDI), University of Girona, 17002 Girona, Spain.
| | - Antonio Luna
- MRI unit, Radiology department. HT medica, Carmelo Torres n°2, 23007 Jaén, Spain.
| |
Collapse
|
37
|
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer in the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches are focused on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. While maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases or 75%) and test Set 2 (14 cases or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames correspond to the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographics information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge was launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
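Since the dataset ships as NRRD files, it can be read with the pynrrd package. A short sketch; the file paths below are hypothetical placeholders and should be replaced with the naming scheme given in the dataset documentation:

```python
import nrrd  # pip install pynrrd

# Hypothetical paths; the real filenames follow the dataset's AAPM-style
# nomenclature described in its documentation.
image, image_header = nrrd.read("case_01/case_01_IMG_CT.nrrd")
oar, oar_header = nrrd.read("case_01/case_01_OAR_Parotid_L.nrrd")

print(image.shape, image_header.get("space directions"))  # array + geometry
print("labelled voxels:", (oar > 0).sum())
```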
Collapse
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| | | | | | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
38
|
Luo X, Liao W, He Y, Tang F, Wu M, Shen Y, Huang H, Song T, Li K, Zhang S, Zhang S, Wang G. Deep learning-based accurate delineation of primary gross tumor volume of nasopharyngeal carcinoma on heterogeneous magnetic resonance imaging: A large-scale and multi-center study. Radiother Oncol 2023; 180:109480. [PMID: 36657723 DOI: 10.1016/j.radonc.2023.109480] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 01/07/2023] [Accepted: 01/08/2023] [Indexed: 01/18/2023]
Abstract
BACKGROUND AND PURPOSE The problem of obtaining accurate primary gross tumor volume (GTVp) segmentation for nasopharyngeal carcinoma (NPC) on heterogeneous magnetic resonance imaging (MRI) images with deep learning remains unsolved. Herein, we report a new deep-learning method that can accurately delineate the GTVp for NPC on multi-center MRI scans. MATERIAL AND METHODS We collected 1057 patients with MRI images from five hospitals and randomly selected 600 patients from three hospitals to constitute a mixed training cohort for model development. The remaining patients were used as internal (n = 259) and external (n = 198) testing cohorts for model evaluation. An augmentation-invariant strategy was proposed to delineate the GTVp from multi-center MRI images, which encourages networks to produce similar predictions for inputs with different augmentations in order to learn invariant anatomical structure features. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), average surface distance (ASD), and relative absolute volume difference (RAVD) were used to measure segmentation performance. RESULTS The model-generated predictions had a high overlap ratio with the ground truth. For the internal testing cohort, the average DSC, HD95, ASD, and RAVD were 0.88, 4.99 mm, 1.03 mm, and 0.13, respectively. For the external testing cohort, the average DSC, HD95, ASD, and RAVD were 0.88, 3.97 mm, 0.97 mm, and 0.10, respectively. No significant differences were found in DSC, HD95, and ASD for patients with different T categories, MRI thicknesses, or in-plane spacings. Moreover, the proposed augmentation-invariant strategy outperformed the widely used nnUNet, which uses conventional data augmentation approaches. CONCLUSION Our proposed method showed highly accurate GTVp segmentation for NPC on multi-center MRI images, suggesting that it has the potential to act as a generalized delineation solution for heterogeneous MRI images.
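The core of an augmentation-invariant objective is a consistency term that penalizes disagreement between predictions for two augmented views of the same scan. A toy PyTorch sketch follows; it assumes intensity-only augmentations (so the two views stay voxel-aligned) and is not the authors' exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model, image, augment_a, augment_b):
    """Penalize divergence between predictions for two augmented views."""
    logits_a = model(augment_a(image))
    logits_b = model(augment_b(image))
    return F.mse_loss(torch.softmax(logits_a, dim=1),
                      torch.softmax(logits_b, dim=1))

# Tiny demo: a one-layer stand-in "segmenter" and two intensity augmentations.
model = nn.Conv3d(1, 2, kernel_size=3, padding=1)
img = torch.randn(1, 1, 8, 32, 32)
loss = consistency_loss(model, img,
                        lambda x: x * 1.1,                         # brightness
                        lambda x: x + 0.05 * torch.randn_like(x))  # noise
loss.backward()
print(float(loss))
```

In practice this term would be added to a supervised segmentation loss; with spatial augmentations the predictions would first have to be warped back into a common frame.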
Collapse
Affiliation(s)
- Xiangde Luo
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China
| | - Wenjun Liao
- University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China.
| | - Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui 23000, China
| | - Fan Tang
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
| | - Mengwan Wu
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
| | - Yuanyuan Shen
- Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
| | - Hui Huang
- Cancer Center, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Tao Song
- SenseTime Research, Shanghai 200233, China
| | - Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
| | - Shichuan Zhang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiation Oncology, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, China
| | - Shaoting Zhang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China
| | - Guotai Wang
- University of Electronic Science and Technology of China, Chengdu 611731, China; Shanghai AI Laboratory, Shanghai 200030, China.
| |
Collapse
|
39
|
Zhao Q, Wang G, Lei W, Fu H, Qu Y, Lu J, Zhang S, Zhang S. Segmentation of multiple Organs-at-Risk associated with brain tumors based on coarse-to-fine stratified networks. Med Phys 2023. [PMID: 36762594 DOI: 10.1002/mp.16247] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 12/10/2022] [Accepted: 12/27/2022] [Indexed: 02/11/2023] Open
Abstract
BACKGROUND Delineation of Organs-at-Risk (OARs) is an important step in radiotherapy treatment planning. As manual delineation is time-consuming, labor-intensive, and affected by inter- and intra-observer variability, a robust and efficient automatic segmentation algorithm is highly desirable for improving the efficiency and repeatability of OAR delineation. PURPOSE Automatic segmentation of OARs in medical images is challenged by low contrast, the varied shapes, and the imbalanced sizes of different organs. We aim to overcome these challenges and develop a high-performance method for automatic segmentation of the 10 OARs required in radiotherapy planning for brain tumors. METHODS A novel two-stage segmentation framework is proposed, where a coarse and simultaneous localization of all the target organs is obtained in the first stage, and a fine segmentation is then achieved for each organ in the second stage. To deal with organs of various sizes and shapes, a stratified segmentation strategy is proposed, where a High- and Low-Resolution Residual Network (HLRNet), consisting of a multiresolution branch and a high-resolution branch, is introduced to segment medium-sized organs, and a High-Resolution Residual Network (HRRNet) is used to segment small organs. In addition, a label fusion strategy is proposed to better deal with symmetric pairs of organs like the left and right cochleas and lacrimal glands. RESULTS Our method was validated on the dataset of the MICCAI ABCs 2020 challenge for OAR segmentation. It obtained an average Dice of 75.8% for 10 OARs, and significantly outperformed several state-of-the-art models including nnU-Net (71.6%) and FocusNet (72.4%). Our proposed HLRNet and HRRNet improved the segmentation accuracy for medium-sized and small organs, respectively. The label fusion strategy led to higher accuracy for symmetric pairs of organs. CONCLUSIONS Our proposed method is effective for the segmentation of OARs of brain tumors, with better performance than existing methods, especially on medium-sized and small organs. It has potential for improving the efficiency of radiotherapy planning with high segmentation accuracy.
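The two-stage idea (coarse localization, then fine segmentation on a cropped region) can be summarized in a few lines: the first-stage mask defines a bounding box, which is expanded by a margin and used to crop the input for the second stage. A NumPy sketch with our own function name:

```python
import numpy as np

def crop_from_coarse(image: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Crop the image around the stage-1 localization, with a safety
    margin, so the stage-2 network sees the organ at full resolution."""
    coords = np.argwhere(coarse_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, image.shape)
    region = tuple(slice(l, h) for l, h in zip(lo, hi))
    return image[region], region   # keep `region` to paste results back

vol = np.random.rand(64, 64, 64)
coarse = np.zeros(vol.shape, bool); coarse[20:30, 25:35, 30:40] = True
patch, region = crop_from_coarse(vol, coarse)
print(patch.shape)  # (26, 26, 26) with the default 8-voxel margin
```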
Collapse
Affiliation(s)
- Qianfei Zhao
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Wenhui Lei
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Hao Fu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yijie Qu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Jiangshan Lu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| |
Collapse
|
40
|
Groendahl AR, Huynh BN, Tomic O, Søvik Å, Dale E, Malinen E, Skogmo HK, Futsaether CM. Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning. Front Vet Sci 2023; 10:1143986. [PMID: 37026102 PMCID: PMC10070749 DOI: 10.3389/fvets.2023.1143986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 03/01/2023] [Indexed: 04/08/2023] Open
Abstract
Background Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs. Results CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. Conclusion In conclusion, deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
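The rotating cross-validation design described here, where each fold serves once as the test set and once as the validation set across independent runs, can be sketched with scikit-learn's KFold. The patient count and fold rotation below are illustrative, not the study's exact split:

```python
import numpy as np
from sklearn.model_selection import KFold

# Illustrative patient IDs; each fold is the test set in one run and the
# validation set in another, with the remaining folds used for training.
patients = np.arange(36)
folds = [test for _, test in
         KFold(n_splits=4, shuffle=True, random_state=0).split(patients)]

for run in range(4):
    test_fold = folds[run]
    val_fold = folds[(run + 1) % 4]   # a different fold for validation
    train_ids = np.setdiff1d(patients, np.concatenate([test_fold, val_fold]))
    print(f"run {run}: train={len(train_ids)}, val={len(val_fold)}, test={len(test_fold)}")
```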
Collapse
Affiliation(s)
- Aurora Rosvoll Groendahl
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
| | - Bao Ngoc Huynh
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
| | - Oliver Tomic
- Faculty of Science and Technology, Department of Data Science, Norwegian University of Life Sciences, Ås, Norway
| | - Åste Søvik
- Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
| | - Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
| | - Eirik Malinen
- Department of Physics, University of Oslo, Oslo, Norway
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
| | - Hege Kippenes Skogmo
- Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
| | - Cecilia Marie Futsaether
- Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- *Correspondence: Cecilia Marie Futsaether
| |
Collapse
|
41
|
Li Y, Gao X, Tang X, Lin S, Pang H. Research on automatic classification technology of kidney tumor and normal kidney tissue based on computed tomography radiomics. Front Oncol 2023; 13:1013085. [PMID: 36910615 PMCID: PMC9998940 DOI: 10.3389/fonc.2023.1013085] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Accepted: 02/13/2023] [Indexed: 03/14/2023] Open
Abstract
Purpose By using a radiomics-based approach, multiple radiomics features can be extracted from regions of interest in computed tomography (CT) images, which may be applied to automatically classify kidney tumors and normal kidney tissues. The study proposes a method based on CT radiomics that uses the extracted radiomics features to automatically classify kidney tumors and normal kidney tissues and to establish an automatic classification model. Methods CT data were retrieved from the 2019 Kidney and Kidney Tumor Segmentation Challenge (KiTS19) in The Cancer Imaging Archive (TCIA) open-access database. Arterial-phase contrast-enhanced CT images from 210 cases were used to establish the automatic classification model. These CT images were randomly divided into training (168 cases) and test (42 cases) sets. Furthermore, the radiomics features of the gross tumor volume (GTV) and normal kidney tissues in the training set were extracted and screened, and a binary logistic regression model was established. For the test set, the radiomics features and the cutoff value of P were kept consistent with the training set. Results Three radiomics features were selected to establish the binary logistic regression model. The accuracy (ACC), sensitivity (SENS), specificity (SPEC), area under the curve (AUC), and Youden index of the training and test sets based on the CT radiomics classification model were all higher than 0.85. Conclusion The automatic classification model of kidney tumors and normal kidney tissues based on CT radiomics exhibited good classification ability. Kidney tumors could be distinguished from normal kidney tissues. This study may complement automated tumor delineation techniques and warrants further research.
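A binary logistic regression on a handful of selected radiomics features, evaluated with ACC, SENS, SPEC, AUC, and the Youden index (sensitivity + specificity − 1), can be sketched with scikit-learn. The feature matrix below is random stand-in data, not the KiTS19 radiomics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

# Stand-in design matrix: rows are ROIs (tumor vs normal kidney),
# columns are three selected radiomics features.
rng = np.random.default_rng(1)
X = rng.normal(size=(210, 3))
y = rng.integers(0, 2, size=210)

clf = LogisticRegression().fit(X[:168], y[:168])   # 168-case training set
prob = clf.predict_proba(X[168:])[:, 1]            # 42-case test set
pred = (prob >= 0.5).astype(int)                   # fixed cutoff on P

tn, fp, fn, tp = confusion_matrix(y[168:], pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"ACC={(tp + tn) / (tp + tn + fp + fn):.2f}  SENS={sens:.2f}  "
      f"SPEC={spec:.2f}  AUC={roc_auc_score(y[168:], prob):.2f}  "
      f"Youden={sens + spec - 1:.2f}")
```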
Collapse
Affiliation(s)
- Yunfei Li
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Xinrui Gao
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Xuemei Tang
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Sheng Lin
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Haowen Pang
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| |
Collapse
|
42
|
Roper J, Lin M, Rong Y. Extensive upfront validation and testing are needed prior to the clinical implementation of AI-based auto-segmentation tools. J Appl Clin Med Phys 2022; 24:e13873. [PMID: 36545883 PMCID: PMC9859989 DOI: 10.1002/acm2.13873] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 11/30/2022] [Accepted: 12/01/2022] [Indexed: 12/24/2022] Open
Affiliation(s)
- Justin Roper
- Department of Radiation OncologyWinship Cancer Institute of Emory UniversityAtlantaGeorgiaUSA
| | - Mu‐Han Lin
- Department of Radiation OncologyUniversity of Texas Southwestern Medical CenterDallasTexasUSA
| | - Yi Rong
- Department of Radiation OncologyMayo Clinic HospitalsPhoenixArizonaUSA
| |
Collapse
|
43
|
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. [PMID: 36543901 PMCID: PMC9772373 DOI: 10.1186/s13244-022-01340-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 11/27/2022] [Indexed: 12/24/2022] Open
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of the published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted, covering the literature up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBASE databases. Risk of bias and applicability were assessed based on Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remaining articles, insufficient detail about database constitution and the segmentation protocol (inclusion criteria, MRI acquisition, ground truth) introduced sources of bias. Eighteen different types of terminology for prostate zone segmentation were found, while 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and highlighted biases that precluded quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. There is not yet a consensus on quality criteria for database constitution and zonal segmentation methodology.
Collapse
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France.
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France.
| | - Sarah Montagne
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| | - Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Raphaële Renard-Penna
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| |
Collapse
|
44
|
Costea M, Zlate A, Durand M, Baudier T, Grégoire V, Sarrut D, Biston MC. Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system. Radiother Oncol 2022; 177:61-70. [PMID: 36328093 DOI: 10.1016/j.radonc.2022.10.029] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 10/21/2022] [Accepted: 10/23/2022] [Indexed: 11/06/2022]
Abstract
BACKGROUND AND PURPOSE To investigate the performance of head-and-neck (HN) organ-at-risk (OAR) automatic segmentation (AS) using four atlas-based (ABAS) and two deep learning (DL) solutions. MATERIAL AND METHODS All patients underwent iodine contrast-enhanced planning CT. Fourteen OARs were manually delineated. The DL.1 and DL.2 solutions were trained with 63 monocentric patients and with more than 1000 multicentric patients, respectively. Ten patients were selected for the atlas library and 15 patients with varied anatomies for testing. The evaluation was based on geometric indices (Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95%)), on the time needed for manual corrections, and on clinical dosimetric endpoints obtained using automated treatment planning. RESULTS Both DSC and HD95% indicated that the DL algorithms generally outperformed the ABAS algorithms for automatic segmentation of HN OARs. However, the hybrid-ABAS (ABAS.3) algorithm sometimes agreed more closely with the reference contours than either DL solution. Compared with DL.2 and ABAS.3, DL.1 contours were the fastest to correct. For all 3 solutions, the differences between dose distributions obtained using AS contours and AS plus manually corrected contours were not statistically significant. Large dose differences could be observed when OAR contours were at short distances from the targets, although this was not systematic. CONCLUSION DL methods generally showed higher delineation accuracy than ABAS methods for AS of HN OARs. Most ABAS contours conformed closely to the reference but were more time consuming than the DL algorithms, especially when considering both computing time and the time spent on manual corrections.
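As a concrete reference for the geometric indices used in this comparison, the sketch below computes the Dice coefficient and HD95% between two binary masks with NumPy/SciPy. It is a minimal illustration assuming non-empty boolean masks and a known voxel spacing in mm, not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    # Dice similarity coefficient between two non-empty boolean masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    # 95th-percentile Hausdorff distance (in mm) between mask surfaces.
    surf = lambda m: m & ~binary_erosion(m)
    sa, sb = surf(a), surf(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    return max(np.percentile(d_to_b[sa], 95), np.percentile(d_to_a[sb], 95))
```

Reporting both indices is common practice because DSC measures volumetric overlap while HD95% exposes boundary outliers that overlap alone can hide.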
Collapse
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | | | - Morgane Durand
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France
| | - Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | | | - David Sarrut
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | - Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France.
| |
Collapse
|
45
|
Wahid KA, Xu J, El-Habashy D, Khamis Y, Abobakr M, McDonald B, O'Connell N, Thill D, Ahmed S, Sharafi CS, Preston K, Salzillo TC, Mohamed ASR, He R, Cho N, Christodouleas J, Fuller CD, Naser MA. Deep-learning-based generation of synthetic 6-minute MRI from 2-minute MRI for use in head and neck cancer radiotherapy. Front Oncol 2022; 12:975902. [PMID: 36425548 PMCID: PMC9679225 DOI: 10.3389/fonc.2022.975902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 10/21/2022] [Indexed: 11/10/2022] Open
Abstract
Background Quick magnetic resonance imaging (MRI) scans with a low contrast-to-noise ratio are typically acquired for daily MRI-guided radiotherapy setup. However, for patients with head and neck (HN) cancer, these images are often insufficient for discriminating target volumes and organs at risk (OARs). In this study, we investigated a deep learning (DL) approach to generate high-quality synthetic images from low-quality images. Methods We used 108 unique HN image sets of paired 2-minute T2-weighted scans (2mMRI) and 6-minute T2-weighted scans (6mMRI). Ninety image sets (~20,000 slices) were used to train a 2-dimensional generative adversarial DL model that used 2mMRI as input and 6mMRI as output. Eighteen image sets were used to test model performance. Similarity metrics, including the mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), were calculated between normalized synthetic 6mMRI and ground-truth 6mMRI for all test cases. In addition, a previously trained OAR DL auto-segmentation model was used to segment the right parotid gland, left parotid gland, and mandible on all test case images. Dice similarity coefficients (DSC) were calculated between 2mMRI and either ground-truth 6mMRI or synthetic 6mMRI for each OAR; two one-sided t-tests were applied between the ground-truth and synthetic 6mMRI to determine equivalence. Finally, a visual Turing test using paired ground-truth and synthetic 6mMRI was performed by three clinician observers; the percentage of images correctly identified was compared with random chance using proportion equivalence tests. Results The median similarity metrics across whole images were 0.19, 0.93, and 33.14 for MSE, SSIM, and PSNR, respectively. The median DSCs comparing ground-truth with synthetic 6mMRI auto-segmented OARs were 0.86 vs. 0.85, 0.84 vs. 0.84, and 0.82 vs. 0.85 for the right parotid gland, left parotid gland, and mandible, respectively (equivalence p<0.05 for all OARs). The percentage of images correctly identified was equivalent to chance (p<0.05 for all observers). Conclusions Using 2mMRI inputs, we demonstrate that DL-generated synthetic 6mMRI outputs are highly similar to ground-truth 6mMRI, although further improvements can be made. Our study facilitates the clinical incorporation of synthetic MRI in MRI-guided radiotherapy.
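For readers who want to reproduce the reported similarity metrics, a minimal sketch using scikit-image is shown below. The min-max normalization and the data_range choice are assumptions made for illustration; the study's exact preprocessing is not restated here.

```python
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def image_similarity(synthetic, ground_truth):
    # Min-max normalize both volumes to [0, 1] (an assumed preprocessing step).
    syn = (synthetic - synthetic.min()) / (np.ptp(synthetic) + 1e-8)
    gt = (ground_truth - ground_truth.min()) / (np.ptp(ground_truth) + 1e-8)
    return {
        "MSE": mean_squared_error(gt, syn),
        "SSIM": structural_similarity(gt, syn, data_range=1.0),  # works on 3D arrays
        "PSNR": peak_signal_noise_ratio(gt, syn, data_range=1.0),
    }
```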
Collapse
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | | | - Dina El-Habashy
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Clinical Oncology and Nuclear Medicine, Menoufia University, Shebin Elkom, Egypt
| | - Yomna Khamis
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Clinical Oncology and Nuclear Medicine, Faculty of Medicine, Alexandria University, Alexandria, Egypt
| | - Moamen Abobakr
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | | | | | - Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Christina Setareh Sharafi
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Kathryn Preston
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Travis C. Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Abdallah S. R. Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | | | | | - Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Mohamed A. Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| |
Collapse
|
46
|
Steybe D, Poxleitner P, Metzger MC, Brandenburg LS, Schmelzeisen R, Bamberg F, Tran PH, Kellner E, Reisert M, Russe MF. Automated segmentation of head CT scans for computer-assisted craniomaxillofacial surgery applying a hierarchical patch-based stack of convolutional neural networks. Int J Comput Assist Radiol Surg 2022; 17:2093-2101. [PMID: 35665881 PMCID: PMC9515026 DOI: 10.1007/s11548-022-02673-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 05/03/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE Computer-assisted techniques play an important role in craniomaxillofacial surgery. As segmentation of three-dimensional medical imaging is a cornerstone of these procedures, the present study aimed to investigate a deep learning approach for automated segmentation of head CT scans. METHODS The deep learning approach of this study was based on the patchwork toolbox, using a multiscale stack of 3D convolutional neural networks. The images were split into nested patches of fixed 3D matrix size but decreasing physical size, in a pyramid format of four scale depths. Manual segmentation of 18 craniomaxillofacial structures was performed in 20 CT scans, of which 15 were used to train the deep learning network and five to validate the results of automated segmentation. Segmentation accuracy was evaluated by Dice similarity coefficient (DSC), surface DSC, 95% Hausdorff distance (95HD), and average symmetric surface distance (ASSD). RESULTS Mean DSC was 0.81 ± 0.13 (range: 0.61 [mental foramen] to 0.98 [mandible]). Mean surface DSC was 0.94 ± 0.06 (range: 0.87 [mental foramen] to 0.99 [mandible]), with values > 0.9 for all structures except the mental foramen. Mean 95HD was 1.93 ± 2.05 mm (range: 1.00 mm [mandible] to 4.12 mm [maxillary sinus]), and mean ASSD was 0.42 ± 0.44 mm (range: 0.09 mm [mandible] to 1.19 mm [mental foramen]), with values < 1 mm for all structures except the mental foramen. CONCLUSION This study demonstrated high accuracy of automated segmentation for a variety of craniomaxillofacial structures, suggesting the approach is suitable for incorporation into a computer-assisted craniomaxillofacial surgery workflow. The small amount of training data required and the flexibility of an open-source network architecture enable a broad variety of clinical and research applications.
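A toy sketch of the nested patch sampling described in the abstract (a fixed voxel matrix whose physical extent is halved at each of four scale depths) is given below. The function and parameter names are illustrative and do not reproduce the patchwork toolbox implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def nested_patches(volume, center, matrix=32, depths=4):
    # Sample nested patches around `center`: every depth keeps the same
    # voxel matrix while the physical extent is halved from one depth
    # to the next (coarse context first, fine detail last).
    patches = []
    extent = matrix * 2 ** (depths - 1)  # coarsest extent in voxels
    for _ in range(depths):
        half = extent // 2
        sl = tuple(slice(max(c - half, 0), c + half) for c in center)
        crop = volume[sl]
        # Resample the crop back to the fixed matrix size.
        patches.append(zoom(crop, [matrix / s for s in crop.shape], order=1))
        extent //= 2
    return patches  # list of (matrix, matrix, matrix) arrays, coarse to fine
```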
Collapse
Affiliation(s)
- David Steybe
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany.
| | - Philipp Poxleitner
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Berta-Ottenstein-Programme for Clinician Scientists, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Marc Christian Metzger
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
| | - Leonard Simon Brandenburg
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
| | - Rainer Schmelzeisen
- Department of Oral and Maxillofacial Surgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
| | - Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Phuong Hien Tran
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Elias Kellner
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Marco Reisert
- Department of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Maximilian Frederik Russe
- Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| |
Collapse
|
47
|
Tappeiner E, Welk M, Schubert R. Tackling the class imbalance problem of deep learning-based head and neck organ segmentation. Int J Comput Assist Radiol Surg 2022; 17:2103-2111. [PMID: 35578086 PMCID: PMC9515025 DOI: 10.1007/s11548-022-02649-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 04/20/2022] [Indexed: 12/03/2022]
Abstract
PURPOSE The segmentation of organs at risk (OAR) is a required precondition for cancer treatment with image-guided radiation therapy, so automating the segmentation task is of high clinical relevance. Deep learning (DL)-based medical image segmentation is currently the most successful approach, but it suffers from the over-representation of the background class and from anatomically determined differences in organ size, which are most severe in the head and neck (HAN) area. METHODS To tackle the HAN-specific class imbalance problem, we first optimize the patch size of the currently best performing general-purpose segmentation framework, the nnU-Net, based on an introduced class-imbalance measure, and second introduce the class adaptive Dice loss to further compensate for the highly imbalanced setting. RESULTS Both the patch size and the loss function are parameters with a direct influence on the class imbalance, and their optimization leads to a 3% increase in the Dice score and a 22% reduction in the 95% Hausdorff distance compared with the baseline, finally reaching [Formula: see text] and [Formula: see text] mm for the segmentation of seven HAN organs using a single, simple neural network. CONCLUSION The patch size optimization and the class adaptive Dice loss are both easy to integrate into current DL-based segmentation approaches and make it possible to increase performance on class-imbalanced segmentation tasks.
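To make the idea of a class adaptive Dice loss concrete, a PyTorch sketch is shown below; per-class Dice terms are weighted before averaging so that small organs contribute more to the loss. The inverse-volume weighting is an assumption for illustration and may differ from the weighting scheme used in the paper.

```python
import torch
import torch.nn.functional as F

def class_adaptive_dice_loss(logits, target, eps=1e-5):
    # logits: (N, C, D, H, W) raw scores; target: (N, D, H, W) integer labels.
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=probs.shape[1])
    onehot = onehot.movedim(-1, 1).float()          # -> (N, C, D, H, W)
    dims = tuple(range(2, probs.ndim))              # spatial dimensions
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = (2.0 * inter + eps) / (denom + eps)      # per-class Dice, (N, C)
    # Inverse-volume class weights (illustrative): small organs weigh more.
    weights = 1.0 / (onehot.sum(dims) + 1.0)
    weights = weights / weights.sum(dim=1, keepdim=True)
    return 1.0 - (weights * dice).sum(dim=1).mean()
```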
Collapse
Affiliation(s)
- Elias Tappeiner
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria
| | - Martin Welk
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria
| | - Rainer Schubert
- Department for Biomedical Computer Science and Mechatronics, UMIT - Private University for Health Sciences, Medical Informatics and Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Tyrol, Austria
| |
Collapse
|
48
|
Artificial intelligence and machine learning in cancer imaging. Commun Med 2022; 2:133. [PMID: 36310650 PMCID: PMC9613681 DOI: 10.1038/s43856-022-00199-0] [Citation(s) in RCA: 73] [Impact Index Per Article: 36.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.
Collapse
|
49
|
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
50
|
Tryggestad E, Anand A, Beltran C, Brooks J, Cimmiyotti J, Grimaldi N, Hodge T, Hunzeker A, Lucido JJ, Laack NN, Momoh R, Moseley DJ, Patel SH, Ridgway A, Seetamsetty S, Shiraishi S, Undahl L, Foote RL. Scalable radiotherapy data curation infrastructure for deep-learning based autosegmentation of organs-at-risk: A case study in head and neck cancer. Front Oncol 2022; 12:936134. [PMID: 36106100 PMCID: PMC9464982 DOI: 10.3389/fonc.2022.936134] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 08/03/2022] [Indexed: 12/02/2022] Open
Abstract
In this era of patient-centered, outcomes-driven and adaptive radiotherapy, deep learning is now being successfully applied to tackle imaging-related workflow bottlenecks such as autosegmentation and dose planning. These applications typically require supervised learning approaches enabled by relatively large, curated radiotherapy datasets that are highly reflective of the contemporary standard of care. However, little has previously been published describing technical infrastructure, recommendations, methods or standards for radiotherapy dataset curation in a holistic fashion. Our radiation oncology department has recently embarked on a large-scale project with an external partner to develop deep-learning-based tools to assist with our radiotherapy workflow, beginning with autosegmentation of organs-at-risk. This project will require thousands of carefully curated radiotherapy datasets comprising all body sites we routinely treat with radiotherapy. Given such a large project scope, we have approached the need for dataset curation rigorously, with an aim towards building infrastructure that is compatible with efficiency, automation and scalability. Focusing on our first use case, head and neck cancer, we describe our infrastructure and the novel methods applied to radiotherapy dataset curation, including personnel and workflow organization, dataset selection, expert organ-at-risk segmentation, quality assurance, patient de-identification, data archival and transfer. Over the course of approximately 13 months, our expert multidisciplinary team generated 490 curated head and neck radiotherapy datasets. This task required approximately 6000 human-expert hours in total (not including planning and infrastructure development time). This infrastructure continues to evolve and will support ongoing and future project efforts.
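One curation step listed in the abstract, patient de-identification, can be sketched with pydicom as below. The tag list is a hypothetical minimum for demonstration; a clinical protocol (e.g., the DICOM PS3.15 confidentiality profiles) covers far more attributes, and the authors' actual tooling is not published here.

```python
import pydicom

# Illustrative subset of identifying attributes; a real de-identification
# protocol removes or replaces many more (see DICOM PS3.15 Annex E).
PHI_KEYWORDS = ["PatientName", "PatientID", "PatientBirthDate",
                "OtherPatientIDs", "InstitutionName", "ReferringPhysicianName"]

def deidentify(path_in, path_out, pseudo_id="ANON0001"):
    ds = pydicom.dcmread(path_in)
    for keyword in PHI_KEYWORDS:
        if keyword in ds:
            delattr(ds, keyword)
    ds.PatientName = pseudo_id   # re-key to a project pseudonym
    ds.PatientID = pseudo_id
    ds.remove_private_tags()     # vendor private tags may also carry PHI
    ds.save_as(path_out)
```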
Collapse
Affiliation(s)
- E. Tryggestad
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
- *Correspondence: E. Tryggestad
| | - A. Anand
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
| | - C. Beltran
- Department of Radiation Oncology, Mayo Clinic Florida, Jacksonville, FL, United States
| | - J. Brooks
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - J. Cimmiyotti
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - N. Grimaldi
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - T. Hodge
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - A. Hunzeker
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - J. J. Lucido
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - N. N. Laack
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - R. Momoh
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - D. J. Moseley
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - S. H. Patel
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
| | - A. Ridgway
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, United States
| | - S. Seetamsetty
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - S. Shiraishi
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - L. Undahl
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| | - R. L. Foote
- Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, MN, United States
| |
Collapse
|