1. Bouamrane A, Derdour M, Bennour A, Elfadil Eisa TA, Emara AHM, Al-Sarem M, Kurdi NA. Toward Robust Lung Cancer Diagnosis: Integrating Multiple CT Datasets, Curriculum Learning, and Explainable AI. Diagnostics (Basel) 2024; 15:1. [PMID: 39795530] [PMCID: PMC11720071] [DOI: 10.3390/diagnostics15010001]
Abstract
Background and Objectives: Computer-aided diagnostic systems have achieved remarkable success in the medical field, particularly in diagnosing malignant tumors, and have done so at a rapid pace. However, the generalizability of their results remains a challenge for researchers and decreases the credibility of these models, a point of criticism raised by physicians and specialists, especially given the sensitivity of the field. This study proposes a novel deep learning-based model to enhance the quality, understandability, and generalizability of lung cancer diagnosis. Methods: The proposed approach uses five computed tomography (CT) datasets to ensure diversity and heterogeneity. Moreover, the mixup augmentation technique was adopted to encourage reliance on salient characteristics by combining the features and labels of CT scans across datasets, reducing their biases and subjectivity and thus improving the model's generalization ability and robustness. Curriculum learning was used to train the model, starting with simple examples so that it could learn complicated ones more quickly. Results: The proposed approach achieved promising results, with an accuracy of 99.38%; precision, specificity, and area under the curve (AUC) of 100%; sensitivity of 98.76%; and F1-score of 99.37%. Additionally, it scored a 0% false positive rate and only a 1.23% false negative rate. An external dataset was used to further validate the proposed method's effectiveness; on it, the approach achieved optimal results of 100% in all metrics, with 0% false positive and false negative rates. Finally, explainable artificial intelligence (XAI) using Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to better understand the model. Conclusions: This research proposes a robust and interpretable model for lung cancer diagnostics with improved generalizability and validity. Incorporating mixup and curriculum training supported by several datasets underlines its promise for deployment as a diagnostic tool in clinical practice.
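For context on the mixup step described above: mixup trains on convex combinations of pairs of samples and their labels, x~ = λ·x_i + (1 − λ)·x_j and y~ = λ·y_i + (1 − λ)·y_j, with λ drawn from a Beta(α, α) distribution. A minimal PyTorch sketch under common assumptions (a batch of CT slices with one-hot labels; the function name and the α value are illustrative, not taken from the paper):

```python
import torch

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Mix each sample with a randomly chosen partner from the same batch.

    x: (N, C, H, W) CT slices; y: (N, num_classes) one-hot labels.
    A single lambda ~ Beta(alpha, alpha) is drawn per batch for simplicity.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed
```

With a small α, most draws of λ land near 0 or 1, so mixed samples stay close to one of the two originals, which keeps the regularization gentle while still blending label information across datasets.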
Affiliation(s)
- Amira Bouamrane
- LIAOA Laboratory, University of Oum El-Bouaghi-Larbi Benmhidi, Oum El-Bouaghi 04000, Algeria
- Makhlouf Derdour
- LIAOA Laboratory, University of Oum El-Bouaghi-Larbi Benmhidi, Oum El-Bouaghi 04000, Algeria
- Akram Bennour
- LAMIS Laboratory, Echahid Cheikh Larbi Tebessi University, Tebessa 12002, Algeria
- Abdel-Hamid M. Emara
- Department of Computers and Systems Engineering, Faculty of Engineering, Al-Azhar University, Cairo 11884, Egypt
- Mohammed Al-Sarem
- Department of Information Technology, Aylol University College, Yarim 547, Yemen
- Neesrin Ali Kurdi
- College of Computer Science and Engineering, Taibah University, Medina 41477, Saudi Arabia
2. Carles M, Kuhn D, Fechter T, Baltas D, Mix M, Nestle U, Grosu AL, Martí-Bonmatí L, Radicioni G, Gkika E. Development and evaluation of two open-source nnU-Net models for automatic segmentation of lung tumors on PET and CT images with and without respiratory motion compensation. Eur Radiol 2024; 34:6701-6711. [PMID: 38662100] [PMCID: PMC11399280] [DOI: 10.1007/s00330-024-10751-2]
Abstract
OBJECTIVES In lung cancer, one of the main limitations for the optimal integration of the biological and anatomical information derived from Positron Emission Tomography (PET) and Computed Tomography (CT) is the time and expertise required for the evaluation of the different respiratory phases. In this study, we present two open-source models able to automatically segment lung tumors on PET and CT, with and without motion compensation. MATERIALS AND METHODS This study involved time-bin gated (4D) and non-gated (3D) PET/CT images from two prospective lung cancer cohorts (Trials 108237 and 108472) and one retrospective cohort. For model construction, the ground truth (GT) was defined by consensus of two experts, and the nnU-Net with 5-fold cross-validation was applied to 560 4D-images for PET and 100 3D-images for CT. The test sets included 270 4D-images and 19 3D-images for PET and 80 4D-images and 27 3D-images for CT, recruited at 10 different centres. RESULTS In the performance evaluation with the multicentre test sets, the Dice Similarity Coefficients (DSC) obtained for our PET model were DSC(4D-PET) = 0.74 ± 0.06, a 19% improvement relative to the inter-expert DSC, and DSC(3D-PET) = 0.82 ± 0.11. The performance for CT was DSC(4D-CT) = 0.61 ± 0.28 and DSC(3D-CT) = 0.63 ± 0.34, improvements of 4% and 15% relative to the inter-expert DSC. CONCLUSIONS The performance evaluation demonstrated that the automatic segmentation models have the potential to achieve accuracy comparable to manual segmentation and thus hold promise for clinical application. The resulting models can be freely downloaded and employed to support the integration of 3D- or 4D-PET/CT and to facilitate the evaluation of its impact on lung cancer clinical practice. CLINICAL RELEVANCE STATEMENT We provide two open-source nnU-Net models for the automatic segmentation of lung tumors on PET/CT to facilitate the optimal integration of biological and anatomical information in clinical practice. The models perform better than the variability observed in manual segmentations by different experts for images with and without motion compensation, allowing clinical practice to take advantage of the more accurate and robust 4D quantification. KEY POINTS Lung tumor segmentation on PET/CT imaging is limited by respiratory motion, and manual delineation is time-consuming and suffers from inter- and intra-observer variability. Our segmentation models had superior performance compared to the manual segmentations by different experts. Automating PET image segmentation allows for easier clinical implementation of biological information.
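For reference, the Dice similarity coefficient used throughout this entry is DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal NumPy sketch for binary volumes (the function name is illustrative):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks, e.g. 3D tumor volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```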
Affiliation(s)
- Montserrat Carles
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infrastructures (ICTS), Valencia, Spain
- Dejan Kuhn
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Fechter
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dimos Baltas
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Michael Mix
- Department of Nuclear Medicine, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Ursula Nestle
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Department of Radiation Oncology, Kliniken Maria Hilf GmbH Moenchengladbach, Moenchengladbach, Germany
- Anca L Grosu
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Luis Martí-Bonmatí
- La Fe Health Research Institute, Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infrastructures (ICTS), Valencia, Spain
- Gianluca Radicioni
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
- Eleni Gkika
- German Cancer Consortium (DKTK), Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Radiation Oncology, Faculty of Medicine, University Medical Center Freiburg, Freiburg, Germany
3. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024. [PMID: 39105745] [DOI: 10.1007/s00066-024-02262-2]
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on the delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and the literature, we demonstrate improved efficiency and consistency, as well as time savings, in different clinical scenarios. Despite challenges in clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
| | - Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
| | - Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
| | - Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
| | - Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
| | - Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
| | - Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, London, UK
| | - Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, London, UK
| | - Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
| | - Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
| |
Collapse
|
4. Skett S, Patel T, Duprez D, Gupta S, Netherton T, Trauernicht C, Aldridge S, Eaton D, Cardenas C, Court LE, Smith D, Aggarwal A. Autocontouring of primary lung lesions and nodal disease for radiotherapy based only on computed tomography images. Phys Imaging Radiat Oncol 2024; 31:100637. [PMID: 39297080] [PMCID: PMC11408859] [DOI: 10.1016/j.phro.2024.100637]
Abstract
Background and purpose In many clinics, positron-emission tomography is unavailable and clinician time is extremely limited. Here we describe a deep-learning model for autocontouring gross disease for patients undergoing palliative radiotherapy for primary lung lesions and/or hilar/mediastinal nodal disease, based only on computed tomography (CT) images. Materials and methods An autocontouring model (nnU-Net) was trained to contour gross disease in 379 cases (352 training, 27 test); 11 further test cases from an external centre were also included. Anchor-point-based post-processing was applied to remove extraneous autocontoured regions. The autocontours were evaluated quantitatively in terms of volume similarity (Dice similarity coefficient [DSC], surface Dice coefficient, 95th-percentile Hausdorff distance [HD95], and mean surface distance), and scored for usability by two consultant oncologists. The magnitude of the treatment margin needed to account for geometric discrepancies was also assessed. Results The anchor-point process successfully removed all erroneous regions from the autocontoured disease, and identified two cases to be excluded from further analysis due to 'missed' disease. The average DSC and HD95 were 0.8 ± 0.1 and 10.5 ± 7.3 mm, respectively. A 10-mm uniform margin applied to the autocontoured region was found to yield "full coverage" (sensitivity > 0.99) of the clinical contour for 64% of cases. Ninety-seven percent of evaluated autocontours were scored by both clinicians as requiring no or minor edits. Conclusions Our autocontouring model was shown to produce clinically usable disease outlines, based on CT alone, for approximately two-thirds of patients undergoing lung radiotherapy. Further work is necessary to improve this before clinical implementation.
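The 95th-percentile Hausdorff distance (HD95) quoted above summarizes surface disagreement between two contours while discounting the worst 5% of surface points. One common way to compute it is sketched below, assuming non-empty binary masks on a known voxel grid (an illustrative implementation, not necessarily the one the authors used):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile Hausdorff distance (mm) between non-empty binary masks.

    Surfaces are the voxels removed by one erosion step; each mask's
    Euclidean distance transform gives distances to the other surface.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred & ~binary_erosion(pred)
    gt_surf = gt & ~binary_erosion(gt)
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d = np.concatenate([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(np.percentile(d, 95))
```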
Affiliation(s)
- Stephen Skett
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- Tina Patel
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- Didier Duprez
- Stellenbosch University Faculty of Medicine and Health Sciences, Tygerberg Hospital, Cape Town, South Africa
- Sunnia Gupta
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- Tucker Netherton
- The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Christoph Trauernicht
- Stellenbosch University Faculty of Medicine and Health Sciences, Tygerberg Hospital, Cape Town, South Africa
- Sarah Aldridge
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- David Eaton
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- Carlos Cardenas
- University of Alabama at Birmingham Hazelrig-Salter Radiation Oncology Center, Birmingham, AL, United States
- Laurence E Court
- The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Daniel Smith
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
- Ajay Aggarwal
- Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom
5. Almeida ND, Shekher R, Pepin A, Schrand TV, Goulenko V, Singh AK, Fung-Kee-Fung S. Artificial Intelligence Potential Impact on Resident Physician Education in Radiation Oncology. Adv Radiat Oncol 2024; 9:101505. [PMID: 38799112] [PMCID: PMC11127091] [DOI: 10.1016/j.adro.2024.101505]
Affiliation(s)
- Neil D. Almeida
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
- Rohil Shekher
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
- Abigail Pepin
- Department of Radiation Oncology, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- Tyler V. Schrand
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
- Department of Chemistry, Bowling Green State University, Bowling Green, Ohio
- Victor Goulenko
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
- Anurag K. Singh
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
- Simon Fung-Kee-Fung
- Department of Radiation Medicine, Roswell Park Comprehensive Cancer Center, Buffalo, New York
6. Wei X, Yi J, Zhang C, Wang M, Wang R, Xu W, Zhao M, Zhao M, Yang T, Wei W, Jin S, Gao H. Enhancement of the Tumor Suppression Effect of High-dose Radiation by Low-dose Pre-radiation Through Inhibition of DNA Damage Repair and Increased Pyroptosis. Dose Response 2024; 22:15593258241245804. [PMID: 38617388] [PMCID: PMC11010768] [DOI: 10.1177/15593258241245804]
Abstract
Radiation therapy is a critical and effective treatment for cancer. However, not all cells are destroyed by radiation, owing to tumor cell radioresistance. In the current study, we investigated the effect of low-dose radiation (LDR) on the tumor-suppressive effect of high-dose radiation (HDR) and its mechanism, from the perspective of tumor cell death mode and DNA damage repair, aiming to provide a foundation for improving the efficacy of clinical tumor radiotherapy. We found that LDR pre-irradiation strengthened HDR-induced inhibition of A549 cell proliferation, HDR-induced apoptosis, and G2-phase cell cycle arrest under co-culture conditions. RNA sequencing showed that the differentially expressed genes after irradiation included pyroptosis-related and DNA damage repair-related genes. By examining pyroptosis-related proteins, we found that LDR could enhance HDR-induced pyroptosis. Furthermore, under co-culture conditions, LDR pre-irradiation enhanced HDR-induced DNA damage and further suppressed the DNA damage repair process, eventually leading to cell death. Lastly, we established a tumor-bearing mouse model and further demonstrated that local LDR pre-irradiation could enhance the cancer-suppressive effect of HDR. In summary, our study showed that LDR pre-irradiation enhances the tumor-killing function of HDR when cancer cells and immune cells coexist.
Affiliation(s)
- Xinfeng Wei
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Junxuan Yi
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Citong Zhang
- Department of Oral Comprehensive Therapy, School of Stomatology, Jilin University, Changchun, China
- Mingwei Wang
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Rui Wang
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Weiqiang Xu
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Mingqi Zhao
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Mengdie Zhao
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Teng Yang
- Department of Orthopedics, The First Hospital of Jilin University, Changchun, China
- Wei Wei
- Department of Radiotherapy, Chinese PLA General Hospital, Beijing, China
- Shunzi Jin
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Hui Gao
- NHC Key Laboratory of Radiobiology, School of Public Health, Jilin University, Changchun, China
- Department of Orthopedics, The First Hospital of Jilin University, Changchun, China
7. Liu X, Geng LS, Huang D, Cai J, Yang R. Deep learning-based target tracking with X-ray images for radiotherapy: a narrative review. Quant Imaging Med Surg 2024; 14:2671-2692. [PMID: 38545053] [PMCID: PMC10963821] [DOI: 10.21037/qims-23-1489]
Abstract
Background and Objective As one of the main treatment modalities, radiotherapy (RT, also known as radiation therapy) plays an increasingly important role in the treatment of cancer. RT could benefit greatly from accurate localization of the gross tumor volume and surrounding organs at risk (OARs). Modern linear accelerators (LINACs) are typically equipped with either gantry-mounted or room-mounted X-ray imaging systems, which make marker-less tracking with two-dimensional (2D) kV X-ray images possible. However, due to organ overlap and poor soft-tissue contrast, it is challenging to track the target directly and precisely with 2D kV X-ray images. With the flourishing development of deep learning in image processing, real-time marker-less tracking of targets with 2D kV X-ray images in RT is becoming achievable with advanced deep-learning frameworks. This article reviews the current development of deep learning-based target tracking with 2D kV X-ray images and discusses the existing limitations and potential solutions. Finally, it discusses some common challenges and potential future developments. Methods Manual searches of Web of Science, PubMed, and Google Scholar were carried out to retrieve English-language articles. The keywords used in the searches included "radiotherapy, radiation therapy, motion tracking, target tracking, motion estimation, motion monitoring, X-ray images, digitally reconstructed radiographs, deep learning, convolutional neural network, and deep neural network". Only articles that met the predetermined eligibility criteria were included. Ultimately, 23 articles published between March 2019 and December 2023 were included in the review. Key Content and Findings In this article, we narratively reviewed deep learning-based target tracking with 2D kV X-ray images in RT. The existing limitations, common challenges, possible solutions, and future directions of deep learning-based target tracking were also discussed. Deep learning-based methods have been shown to be feasible for marker-less target tracking and real-time motion management. However, it is still quite challenging to directly locate tumors and OARs in real time with 2D kV X-ray images, and more technical and clinical efforts are needed. Conclusions Deep learning-based target tracking with 2D kV X-ray images is a promising method for motion management during RT. It has the potential to track the target in real time, recognize motion, reduce the extended margin, and better spare normal tissue. However, many issues demand prompt attention and further development before it can be put into clinical practice.
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing, China
- David Huang
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing, China
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruijie Yang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing, China
8. Kunkyab T, Bahrami Z, Zhang H, Liu Z, Hyde D. A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell lung cancer in computed tomography images. J Appl Clin Med Phys 2024; 25:e14297. [PMID: 38373289] [DOI: 10.1002/acm2.14297]
Abstract
PURPOSE Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge: with the self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS Under this framework, multi-resolution input images are processed by multi-depth backbones to retain the benefits of both high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn long-range dependencies on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer attends to a small set of key positions identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset containing 563 training images and 113 test images. Our novel deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC in the clinical workflow. This type of framework may also facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS Our deep learning framework, based on CNNs and transformers, performs auto-segmentation efficiently and could potentially assist the clinical radiotherapy workflow.
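The idea of restricting attention to a small set of key positions can be illustrated with a simplified top-k variant (the paper's deformable transformer instead samples keys at learned offsets; the class below and all of its names are illustrative assumptions, not the Co-ReTr implementation):

```python
import torch
import torch.nn as nn

class SparseKeyAttention(nn.Module):
    """Attention over the top-k most salient positions of a token sequence.

    A lightweight scorer ranks positions; queries then attend only to the
    k selected keys, reducing cost versus full self-attention on large
    flattened 3D feature maps.
    """
    def __init__(self, dim: int, k: int = 64, heads: int = 4):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # saliency score per position
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.k = k

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (N, L, dim), with L >= k
        scores = self.score(tokens).squeeze(-1)             # (N, L)
        idx = scores.topk(self.k, dim=1).indices            # (N, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        keys = torch.gather(tokens, 1, idx)                 # (N, k, dim)
        out, _ = self.attn(tokens, keys, keys)              # all L positions query k keys
        return out
```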
Affiliation(s)
- Tenzin Kunkyab
- Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Zhila Bahrami
- School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Heqing Zhang
- School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Zheng Liu
- School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Derek Hyde
- Department of Medical Physics, BC Cancer - Kelowna, Kelowna, Canada
9. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024; 65:1-9. [PMID: 37996085] [PMCID: PMC10803173] [DOI: 10.1093/jrr/rrad090]
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Affiliation(s)
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Takeshi Kamomae
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
10. Liu X, Yang R, Xiong T, Yang X, Li W, Song L, Zhu J, Wang M, Cai J, Geng L. CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset. Cancers (Basel) 2023; 15:5479. [PMID: 38001738] [PMCID: PMC10670900] [DOI: 10.3390/cancers15225479]
Abstract
PURPOSE To develop a deep learning framework based on a hybrid dataset to enhance the quality of CBCT images and obtain accurate HU values. MATERIALS AND METHODS A total of 228 cervical cancer patients treated on different LINACs were enrolled. We developed an encoder-decoder architecture with residual learning and skip connections. The model was hierarchically trained and validated on 5279 paired CBCT/planning CT images and tested on 1302 paired images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were used to assess the quality of the synthetic CT images generated by our model. RESULTS The MAE between synthetic CT images generated by our model and planning CT was 10.93 HU, compared to 50.02 HU for the CBCT images. The PSNR increased from 27.79 dB to 33.91 dB, and the SSIM increased from 0.76 to 0.90. Compared with synthetic CT images generated by convolutional neural networks with residual blocks, our model had superior performance in both qualitative and quantitative aspects. CONCLUSIONS Our model can synthesize CT images with enhanced image quality and accurate HU values. The synthetic CT images preserved tissue edges well, which is important for downstream tasks in adaptive radiotherapy.
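The three image-quality metrics reported above can be computed directly; a minimal sketch assuming co-registered arrays in Hounsfield units (the function name is illustrative; PSNR and SSIM come from scikit-image):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(synthetic_ct: np.ndarray, planning_ct: np.ndarray) -> dict:
    """MAE (HU), PSNR (dB) and SSIM of a synthetic CT against the planning CT."""
    data_range = float(planning_ct.max() - planning_ct.min())
    return {
        "MAE_HU": float(np.abs(synthetic_ct - planning_ct).mean()),
        "PSNR_dB": peak_signal_noise_ratio(planning_ct, synthetic_ct,
                                           data_range=data_range),
        "SSIM": structural_similarity(planning_ct, synthetic_ct,
                                      data_range=data_range),
    }
```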
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing 102206, China
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Ruijie Yang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Tianyu Xiong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Xueying Yang
- School of Physics, Beihang University, Beijing 102206, China
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Mingqing Wang
- Department of Radiation Oncology, Cancer Center, Peking University Third Hospital, Beijing 100191, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China
- Lisheng Geng
- School of Physics, Beihang University, Beijing 102206, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing 102206, China
- Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing 100191, China
11. O'Shea R, Manickavasagar T, Horst C, Hughes D, Cusack J, Tsoka S, Cook G, Goh V. Weakly supervised segmentation models as explainable radiological classifiers for lung tumour detection on CT images. Insights Imaging 2023; 14:195. [PMID: 37980637] [PMCID: PMC10657919] [DOI: 10.1186/s13244-023-01542-2]
Abstract
PURPOSE Interpretability is essential for reliable convolutional neural network (CNN) image classifiers in radiological applications. We describe a weakly supervised segmentation model that learns to delineate the target object, trained with only image-level labels ("image contains object" or "image does not contain object"), presenting a different approach towards explainable object detectors for radiological imaging tasks. METHODS A weakly supervised Unet architecture (WSUnet) was trained to learn lung tumour segmentation from image-level labelled data. WSUnet generates voxel probability maps with a Unet and then constructs an image-level prediction by global max-pooling, thereby facilitating image-level training. WSUnet's voxel-level predictions were compared to traditional model interpretation techniques (class activation mapping, integrated gradients and occlusion sensitivity) in CT data from three institutions (training/validation: n = 412; testing: n = 142). Methods were compared using voxel-level discrimination metrics, and clinical value was assessed with a clinician preference survey on data from external institutions. RESULTS Despite the absence of voxel-level labels in training, WSUnet's voxel-level predictions localised tumours precisely in both validation (precision: 0.77, 95% CI: [0.76-0.80]; Dice: 0.43, 95% CI: [0.39-0.46]) and external testing (precision: 0.78, 95% CI: [0.76-0.81]; Dice: 0.33, 95% CI: [0.32-0.35]). WSUnet's voxel-level discrimination outperformed the best comparator in validation (area under the precision-recall curve (AUPR): 0.55, 95% CI: [0.49-0.56] vs. 0.23, 95% CI: [0.21-0.25]) and testing (AUPR: 0.40, 95% CI: [0.38-0.41] vs. 0.36, 95% CI: [0.34-0.37]). Clinicians preferred WSUnet predictions in most instances (clinician preference rate: 0.72, 95% CI: [0.68-0.77]). CONCLUSION Weakly supervised segmentation is a viable approach by which explainable object detection models may be developed for medical imaging. CRITICAL RELEVANCE STATEMENT WSUnet learns to segment images at voxel level, training only with image-level labels. A Unet backbone first generates a voxel-level probability map and then extracts the maximum voxel prediction as the image-level prediction. Thus, training uses only image-level annotations, reducing human workload. WSUnet's voxel-level predictions provide a causally verifiable explanation for its image-level prediction, improving interpretability. KEY POINTS • Explainability and interpretability are essential for reliable medical image classifiers. • This study applies weakly supervised segmentation to generate explainable image classifiers. • The weakly supervised Unet inherently explains its image-level predictions at voxel level.
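The core mechanism described above, per-voxel logits reduced to a single image-level prediction by global max-pooling, can be sketched in a few lines of PyTorch (class and variable names are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class WeaklySupervisedWrapper(nn.Module):
    """Wrap a voxel-wise segmentation backbone for image-level training.

    The backbone returns per-voxel logits; global max-pooling reduces them
    to one image-level logit, so binary cross-entropy on "contains tumour"
    labels indirectly supervises the voxel map.
    """
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # e.g. a U-Net with one output channel

    def forward(self, x: torch.Tensor):
        voxel_logits = self.backbone(x)                           # (N, 1, D, H, W)
        image_logits = voxel_logits.flatten(1).max(dim=1).values  # (N,)
        return voxel_logits, image_logits

# Training uses only image-level labels, e.g.:
# loss = nn.BCEWithLogitsLoss()(image_logits, labels.float())
```

Because the image-level logit is exactly the maximum voxel logit, the voxel map offers a causally verifiable explanation of the classification, which is the interpretability argument made above.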
Affiliation(s)
- Robert O'Shea
- Department of Cancer Imaging, King's College London, London, UK
- Carolyn Horst
- Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Daniel Hughes
- Department of Cancer Imaging, King's College London, London, UK
- James Cusack
- Department of Radiology, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Sophia Tsoka
- Department of Natural and Mathematical Sciences, King's College London, London, UK
- Gary Cook
- King's College London & Guy's and St Thomas' PET Centre, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Vicky Goh
- Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London, UK
12. Isaksson LJ, Summers P, Mastroleo F, Marvaso G, Corrao G, Vincini MG, Zaffaroni M, Ceci F, Petralia G, Orecchia R, Jereczek-Fossa BA. Automatic Segmentation with Deep Learning in Radiotherapy. Cancers (Basel) 2023; 15:4389. [PMID: 37686665] [PMCID: PMC10486603] [DOI: 10.3390/cancers15174389]
Abstract
This review provides a formal overview of current automatic segmentation studies that use deep learning in radiotherapy. It covers 807 published papers and includes multiple cancer sites, image types (CT/MRI/PET), and segmentation methods. We collect key statistics about the papers to uncover commonalities, trends, and methods, and identify areas where more research might be needed. Moreover, we analyzed the corpus by posing explicit questions aimed at providing high-quality and actionable insights, including: "What should researchers think about when starting a segmentation study?", "How can research practices in medical image segmentation be improved?", "What is missing from the current corpus?", and more. This allowed us to provide practical guidelines on how to conduct a good segmentation study in today's competitive environment that will be useful for future research within the field, regardless of the specific radiotherapeutic subfield. To aid in our analysis, we used the large language model ChatGPT to condense information.
Affiliation(s)
- Lars Johannes Isaksson
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Paul Summers
- Division of Radiology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Federico Mastroleo
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Translational Medicine, University of Piemonte Orientale (UPO), 20188 Novara, Italy
- Giulia Marvaso
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giulia Corrao
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Maria Giulia Vincini
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Mattia Zaffaroni
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Francesco Ceci
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Division of Nuclear Medicine, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Giuseppe Petralia
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
- Precision Imaging and Research Unit, Department of Medical Imaging and Radiation Sciences, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Roberto Orecchia
- Scientific Directorate, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, 20141 Milan, Italy
13. Ribeiro MF, Marschner S, Kawula M, Rabe M, Corradini S, Belka C, Riboldi M, Landry G, Kurz C. Deep learning based automatic segmentation of organs-at-risk for 0.35 T MRgRT of lung tumors. Radiat Oncol 2023; 18:135. [PMID: 37574549] [PMCID: PMC10424424] [DOI: 10.1186/s13014-023-02330-4]
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging guided radiotherapy (MRgRT) offers treatment plan adaptation to the anatomy of the day. In the current MRgRT workflow, this requires the time-consuming and repetitive task of manual delineation of organs-at-risk (OARs), which is also prone to inter- and intra-observer variability. Therefore, deep learning autosegmentation (DLAS) is becoming increasingly attractive. No investigation of its application to OARs in thoracic magnetic resonance images (MRIs) from MRgRT had been done so far. This study aimed to fill this gap. MATERIALS AND METHODS 122 planning MRIs from patients treated at a 0.35 T MR-Linac were retrospectively collected. Using an 80/19/23 (training/validation/test) split, individual 3D U-Nets were trained for segmentation of the left lung, right lung, heart, aorta, spinal canal and esophagus. These were compared to the clinically used contours based on the Dice similarity coefficient (DSC) and Hausdorff distance (HD). They were also graded on their clinical usability by a radiation oncologist. RESULTS Median DSC was 0.96, 0.96, 0.94, 0.90, 0.88 and 0.78 for left lung, right lung, heart, aorta, spinal canal and esophagus, respectively. Median 95th-percentile values of the HD were 3.9, 5.3, 5.8, 3.0, 2.6 and 3.5 mm, respectively. The physician preferred the network-generated contours over the clinical contours, deeming 85 out of 129 to require no correction, 25 immediately usable for treatment planning, 15 to require minor corrections and 4 to require major corrections. CONCLUSIONS We trained 3D U-Nets on clinical MRI planning data, and they produced accurate delineations in the thoracic region. DLAS contours were preferred over the clinical contours.
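As a reference for the architecture named above, a compact two-level 3D U-Net can be sketched as follows (a clinical model would use more levels and channels; all names and hyperparameters here are illustrative, not the authors' configuration):

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Two-level 3D U-Net: one downsampling step, a skip connection, one upsampling step."""
    def __init__(self, cin: int = 1, cout: int = 1, width: int = 16):
        super().__init__()
        self.enc1 = conv_block(cin, width)
        self.pool = nn.MaxPool3d(2)
        self.enc2 = conv_block(width, width * 2)
        self.up = nn.ConvTranspose3d(width * 2, width, 2, stride=2)
        self.dec = conv_block(width * 2, width)
        self.head = nn.Conv3d(width, cout, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s1 = self.enc1(x)              # full-resolution features
        s2 = self.enc2(self.pool(s1))  # half-resolution features
        up = self.up(s2)               # back to full resolution
        return self.head(self.dec(torch.cat([up, s1], dim=1)))  # per-voxel logits
```

Training one such binary network per organ, as described above, keeps each model simple at the cost of running several inferences per patient.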
Affiliation(s)
- Marvin F Ribeiro
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Sebastian Marschner
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Maria Kawula
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Moritz Rabe
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Stefanie Corradini
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Bavarian Cancer Research Center (BZKF), Munich, Germany
- Marco Riboldi
- Department of Medical Physics, Ludwig-Maximilians-Universität München, Garching, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
14. Yang T, Zhu G, Cai L, Yeo JH, Mao Y, Yang J. A benchmark study of convolutional neural networks in fully automatic segmentation of aortic root. Front Bioeng Biotechnol 2023; 11:1171868. [PMID: 37397959] [PMCID: PMC10311214] [DOI: 10.3389/fbioe.2023.1171868]
Abstract
Recent clinical studies have suggested that introducing 3D patient-specific aortic root models into the pre-operative assessment procedure of transcatheter aortic valve replacement (TAVR) would reduce the incidence rate of peri-operative complications. Traditional manual segmentation is labor-intensive and inefficient, and cannot meet the clinical demand of processing large data volumes. Recent developments in machine learning have provided a viable way to segment medical images accurately and efficiently for automatic generation of 3D patient-specific models. This study quantitatively evaluated the automatic segmentation quality and efficiency of four popular segmentation-dedicated three-dimensional (3D) convolutional neural network (CNN) architectures: 3D UNet, VNet, 3D Res-UNet and SegResNet. All the CNNs were implemented on the PyTorch platform, and low-dose CTA image sets of 98 anonymized patients were retrospectively selected from the database for training and testing of the CNNs. The results showed that despite all four 3D CNNs having similar recall, Dice similarity coefficient (DSC), and Jaccard index on the segmentation of the aortic root, the Hausdorff distance (HD) of the segmentation results from 3D Res-UNet is 8.56 ± 2.28, which is only 9.8% higher than that of VNet, but 25.5% and 86.4% lower than that of 3D UNet and SegResNet, respectively. In addition, 3D Res-UNet and VNet also performed better in the 3D deviation location-of-interest analysis focusing on the aortic valve and the bottom of the aortic root. Although 3D Res-UNet and VNet are evenly matched on classical segmentation quality metrics and the 3D deviation location-of-interest analysis, 3D Res-UNet is the most efficient CNN architecture, with an average segmentation time of 0.10 ± 0.04 s, which is 91.2%, 95.3% and 64.3% faster than 3D UNet, VNet and SegResNet, respectively. The results from this study suggest that 3D Res-UNet is a suitable candidate for accurate and fast automatic aortic root segmentation for pre-operative assessment of TAVR.
Collapse
Affiliation(s)
- Tingting Yang
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
| | - Guangyu Zhu
- School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an, China
| | - Li Cai
- School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an, China
| | - Joon Hock Yeo
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
| | - Yu Mao
- Department of Cardiac Surgery, Xijing Hospital, The Fourth Military Medical University, Xi’an, China
| | - Jian Yang
- Department of Cardiac Surgery, Xijing Hospital, The Fourth Military Medical University, Xi’an, China
| |
Collapse
|
15
|
Bourbonne V, Laville A, Wagneur N, Ghannam Y, Larnaudie A. Excitement and Concerns of Young Radiation Oncologists over Automatic Segmentation: A French Perspective. Cancers (Basel) 2023; 15:cancers15072040. [PMID: 37046704 PMCID: PMC10093734 DOI: 10.3390/cancers15072040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 03/21/2023] [Accepted: 03/24/2023] [Indexed: 04/01/2023] Open
Abstract
Introduction: Segmentation of organs at risk (OARs) and target volumes requires time and precision but consists of highly repetitive tasks. Radiation oncology has seen tremendous technological advances in recent years, the latest being brought by artificial intelligence (AI). Despite the advantages AI brings to segmentation, academics have raised concerns about its impact on the training of young radiation oncologists. A survey was therefore conducted among young French radiation oncologists (ROs) by the SFjRO (Société Française des jeunes Radiothérapeutes Oncologues). Methodology: The SFjRO organizes regular webinars focusing on one anatomical localization, discussing either segmentation or dosimetry. Completion of the survey was mandatory for registration to a dosimetry webinar dedicated to head and neck (H&N) cancers. The survey was generated in accordance with the CHERRIES guidelines. Quantitative data (e.g., time savings and correction needs) were not measured directly but chosen from predefined response options. Results: 117 young ROs from 35 different, mostly academic, centers participated. Most centers were either already equipped with such solutions or planning to be equipped within the next two years. AI segmentation software was considered most useful for H&N cases. While participants experienced a significant time gain using AI-proposed delineations for the definition of OARs, with almost 35% of participants saving 50–100% of the segmentation time, the time gained for target volumes was significantly lower, with only 8.6% experiencing a 50–100% gain. Contours still needed to be thoroughly checked, especially target volumes for some participants, and edited. The majority of participants suggested that these tools be integrated into training so that future radiation oncologists do not neglect the importance of radioanatomy. Fully aware of this risk, up to one-third even suggested that AI tools be reserved for senior physicians only. Conclusions: We believe this survey on automatic segmentation is the first to focus on the perception of young radiation oncologists. Software developers should focus on enhancing the quality of proposed segmentations, while young radiation oncologists should become more acquainted with these tools.
Collapse
Affiliation(s)
- Vincent Bourbonne
- Radiation Oncology Department, University Hospital Brest, 2 Avenue Foch, 29200 Brest, France
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Correspondence: ; Tel.: +33-298223398; Fax: +33-98223087
| | - Adrien Laville
- Radiation Oncology Department, University Hospital Amiens-Picardie, 30 Avenue de la Croix Jourdain, 80054 Amiens, France
| | - Nicolas Wagneur
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Institut de Cancérologie de l’Ouest, Centre Paul Papin, 15 Rue André Bocquel, 49055 Angers, France
| | - Youssef Ghannam
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Institut de Cancérologie de l’Ouest, Centre Paul Papin, 15 Rue André Bocquel, 49055 Angers, France
| | - Audrey Larnaudie
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Centre François Baclesse, 3 Avenue du Général Harris, 14000 Caen, France
| |
Collapse
|
16
|
Chen K, Wang M, Song Z. Multi-task learning-based histologic subtype classification of non-small cell lung cancer. LA RADIOLOGIA MEDICA 2023; 128:537-543. [PMID: 36976403 DOI: 10.1007/s11547-023-01621-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 03/15/2023] [Indexed: 03/29/2023]
Abstract
PURPOSE In clinical applications, accurate histologic subtype classification of lung cancer is important for determining appropriate treatment plans. The purpose of this paper is to evaluate the role of multi-task learning in the classification of adenocarcinoma and squamous cell carcinoma. MATERIAL AND METHODS In this paper, we propose a novel multi-task learning model for histologic subtype classification of non-small cell lung cancer based on computed tomography (CT) images. The model consists of a histologic subtype classification branch and a staging branch, which share part of the feature extraction layers and are trained simultaneously. By optimizing the two tasks jointly, our model achieves high accuracy in histologic subtype classification of non-small cell lung cancer without relying on physicians' precise labeling of tumor areas. In this study, 402 cases from The Cancer Imaging Archive (TCIA) were used in total, split into a training set (n = 258), an internal test set (n = 66), and an external test set (n = 78). RESULTS Compared with the radiomics method and single-task networks, our multi-task model reached an AUC of 0.843 and 0.732 on the internal and external test sets, respectively. In addition, the multi-task network achieved higher accuracy and specificity than the single-task network. CONCLUSION Compared with radiomics methods and single-task networks, our multi-task learning model improves the accuracy of histologic subtype classification of non-small cell lung cancer by sharing network layers; it no longer relies on physicians' precise labeling of lesion regions and can further reduce their manual workload.
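The described model shares feature-extraction layers between a subtype-classification branch and a staging branch trained jointly. A minimal PyTorch sketch of that shared-encoder/two-head pattern; the layer sizes, class counts, and unweighted joint loss are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared CNN encoder feeding two classification heads."""
    def __init__(self, n_subtypes: int = 2, n_stages: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.subtype_head = nn.Linear(32, n_subtypes)    # ADC vs SCC
        self.stage_head = nn.Linear(32, n_stages)        # auxiliary staging task

    def forward(self, x):
        z = self.encoder(x)
        return self.subtype_head(z), self.stage_head(z)

# joint training step: both losses backpropagate through the shared encoder
model = MultiTaskNet()
x = torch.randn(4, 1, 128, 128)                          # toy CT patches
y_sub = torch.randint(0, 2, (4,))
y_stage = torch.randint(0, 4, (4,))
logits_sub, logits_stage = model(x)
loss = nn.functional.cross_entropy(logits_sub, y_sub) + \
       nn.functional.cross_entropy(logits_stage, y_stage)
loss.backward()
```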
Collapse
Affiliation(s)
- Kun Chen
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, 200032, China
| | - Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China.
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, 200032, China.
| | - Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China.
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, 200032, China.
| |
Collapse
|
17
|
Zhang T, Wang K, Cui H, Jin Q, Cheng P, Nakaguchi T, Li C, Ning Z, Wang L, Xuan P. Topological structure and global features enhanced graph reasoning model for non-small cell lung cancer segmentation from CT. Phys Med Biol 2023; 68. [PMID: 36625358 DOI: 10.1088/1361-6560/acabff] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 12/15/2022] [Indexed: 01/11/2023]
Abstract
Objective. Accurate and automated segmentation of lung tumors from computed tomography (CT) images is critical yet challenging. Lung tumors vary in size and location and have indistinct boundaries adjacent to normal tissues. Approach. We propose a new segmentation model that integrates the topological structure and global features of image region nodes to address these challenges. First, we construct a weighted graph of image region nodes. The graph topology reflects the complex spatial relationships among these nodes, and each node has its own attributes. Second, we propose a node-wise topological feature learning module based on a new graph convolutional autoencoder (GCA). Meanwhile, a graph node information supplementation (GNIS) module is established by integrating the specific features of each node, extracted by a convolutional neural network (CNN), into each encoding layer of the GCA. Afterwards, we construct a global feature extraction model based on a multi-layer perceptron (MLP) to encode the features learnt from all image region nodes, which provide crucial complementary information for tumor segmentation. Main results. Ablation results on the public lung tumor segmentation dataset demonstrate the contributions of our major technical innovations. Compared with other segmentation methods, our model improves segmentation performance and generalizes across different 3D image segmentation backbones. It achieved a Dice of 0.7827, IoU of 0.6981, and HD of 32.1743 mm on the public 2018 Medical Segmentation Decathlon challenge dataset, and a Dice of 0.7004, IoU of 0.5704, and HD of 64.4661 mm on a lung tumor dataset from Shandong Cancer Hospital. Significance. The novel model improves automated lung tumor segmentation, especially for challenging and complex cases, using the topological structure and global features of image region nodes. It has great potential for application to other CT segmentation tasks.
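The node-wise learning described above runs graph convolutions over a weighted graph of image region nodes. A minimal PyTorch sketch of one symmetric-normalized graph convolution layer of the kind such a graph convolutional autoencoder could stack (dimensions and the toy adjacency are our assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a = adj + torch.eye(adj.size(0))           # add self-loops
        d = a.sum(dim=1).pow(-0.5)                 # D^-1/2 from node degrees
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.lin(a_norm @ h))   # propagate, project, activate

# toy example: 5 region nodes, 8-dim attributes, symmetric weighted adjacency
h = torch.randn(5, 8)
adj = torch.rand(5, 5)
adj = (adj + adj.t()) / 2
out = GraphConv(8, 16)(h, adj)                    # -> (5, 16) node features
```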
Collapse
Affiliation(s)
- Tiangang Zhang
- School of Computer Science and Technology, Heilongjiang University, Harbin, People's Republic of China
- School of Mathematical Science, Heilongjiang University, Harbin, People's Republic of China
| | - Kai Wang
- School of Computer Science and Technology, Heilongjiang University, Harbin, People's Republic of China
| | - Hui Cui
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
| | - Qiangguo Jin
- School of Software, Northwestern Polytechnical University, Xi'an, People's Republic of China
| | - Peng Cheng
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Australia
| | - Toshiya Nakaguchi
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
| | | | - Zhiyu Ning
- Sydney Polytechnic Institute, Sydney, Australia
| | - Linlin Wang
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, People's Republic of China
| | - Ping Xuan
- Department of Computer Science, School of Engineering, Shantou University, Shantou, People's Republic of China
| |
Collapse
|
18
|
Hallinan JTPD, Zhu L, Zhang W, Ge S, Muhamat Nor FE, Ong HY, Eide SE, Cheng AJL, Kuah T, Lim DSW, Low XZ, Yeong KY, AlMuhaish MI, Alsooreti A, Kumarakulasinghe NB, Teo EC, Yap QV, Chan YH, Lin S, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A. Deep learning assessment compared to radiologist reporting for metastatic spinal cord compression on CT. Front Oncol 2023; 13:1151073. [PMID: 37213273 PMCID: PMC10193838 DOI: 10.3389/fonc.2023.1151073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 03/16/2023] [Indexed: 05/23/2023] Open
Abstract
Introduction Metastatic spinal cord compression (MSCC) is a disastrous complication of advanced malignancy. A deep learning (DL) algorithm for MSCC classification on CT could expedite timely diagnosis. In this study, we externally test a DL algorithm for MSCC classification on CT and compare it with radiologist assessment. Methods CT scans and corresponding MRI from patients with suspected MSCC were collected retrospectively from September 2007 to September 2020. Exclusion criteria were scans with instrumentation, no intravenous contrast, motion artefacts, and non-thoracic coverage. The internal CT dataset was split 84% for training/validation and 16% for testing. An external test set was also utilised. Internal training/validation sets were labelled by radiologists with spine imaging specialization (6 and 11 years post-board certification) and were used to further develop a DL algorithm for MSCC classification. The spine imaging specialist (11 years of expertise) labelled the test sets (reference standard). To evaluate DL algorithm performance, internal and external test data were independently reviewed by four radiologists: two spine specialists (Rad1 and Rad2, 7 and 5 years post-board certification, respectively) and two oncological imaging specialists (Rad3 and Rad4, 3 and 5 years post-board certification, respectively). DL model performance was also compared against the CT report issued by the radiologist in a real clinical setting. Inter-rater agreement (Gwet's kappa) and sensitivity/specificity/AUCs were calculated. Results Overall, 420 CT scans were evaluated (225 patients, mean age = 60 ± 11.9 [SD] years); 354 (84%) CTs were used for training/validation and 66 (16%) for internal testing. The DL algorithm showed high inter-rater agreement for three-class MSCC grading, with kappas of 0.872 (p < 0.001) and 0.844 (p < 0.001) on internal and external testing, respectively. On internal testing, the DL algorithm's inter-rater agreement (κ = 0.872) was superior to Rad2 (κ = 0.795) and Rad3 (κ = 0.724) (both p < 0.001). The DL algorithm kappa of 0.844 on external testing was superior to Rad3 (κ = 0.721) (p < 0.001). CT report classification of high-grade MSCC disease was poor, with only slight inter-rater agreement (κ = 0.027) and low sensitivity (44.0%), relative to the DL algorithm with almost-perfect inter-rater agreement (κ = 0.813) and high sensitivity (94.0%) (p < 0.001). Conclusion The deep learning algorithm for metastatic spinal cord compression on CT showed superior performance to the CT report issued by experienced radiologists and could aid earlier diagnosis.
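Agreement here is scored with Gwet's kappa (AC1), which stays stable under skewed category prevalence where Cohen's kappa can collapse. A small sketch of the two-rater AC1 computation, where pa is the observed agreement and pe = (1/(K-1)) Σk πk(1-πk), with πk the mean prevalence of category k (labels below are toy data, not the study's):

```python
import numpy as np

def gwet_ac1(r1, r2, categories) -> float:
    """Gwet's AC1 agreement coefficient for two raters, K categories."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n, k = len(r1), len(categories)
    pa = np.mean(r1 == r2)                               # observed agreement
    pi = np.array([((r1 == c).sum() + (r2 == c).sum()) / (2 * n)
                   for c in categories])                 # mean prevalence per class
    pe = (pi * (1 - pi)).sum() / (k - 1)                 # chance agreement
    return (pa - pe) / (1 - pe)

# three-class MSCC grading (normal / low / high) by two readers, toy labels
print(gwet_ac1([0, 1, 2, 1, 0, 2], [0, 1, 2, 2, 0, 2], categories=[0, 1, 2]))
```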
Collapse
Affiliation(s)
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- *Correspondence: James Thomas Patrick Decourcy Hallinan,
| | - Lei Zhu
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Wenqiao Zhang
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Shuliang Ge
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Faimee Erwan Muhamat Nor
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Han Yang Ong
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Sterling Ellis Eide
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Amanda J. L. Cheng
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Tricia Kuah
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Desmond Shi Wei Lim
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Kuan Yuen Yeong
- Department of Radiology, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Mona I. AlMuhaish
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Radiology, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
| | - Ahmed Mohamed Alsooreti
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Imaging, Salmaniya Medical Complex, Manama, Bahrain
| | | | - Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Qai Ven Yap
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Yiong Huak Chan
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Shuxun Lin
- Division of Spine Surgery, Department of Orthopaedic Surgery, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, Singapore, Singapore
| | - Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
Collapse
|
19
|
Ferrante M, Rinaldi L, Botta F, Hu X, Dolp A, Minotti M, De Piano F, Funicelli G, Volpe S, Bellerba F, De Marco P, Raimondi S, Rizzo S, Shi K, Cremonesi M, Jereczek-Fossa BA, Spaggiari L, De Marinis F, Orecchia R, Origgi D. Application of nnU-Net for Automatic Segmentation of Lung Lesions on CT Images and Its Implication for Radiomic Models. J Clin Med 2022; 11:7334. [PMID: 36555950 PMCID: PMC9784875 DOI: 10.3390/jcm11247334] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 12/05/2022] [Accepted: 12/07/2022] [Indexed: 12/14/2022] Open
Abstract
Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow. Manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the Dice similarity coefficient (DSC). Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DSC = 0.78 ± 0.12) was achieved by averaging the 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, with both hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models' accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
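The best contours came from averaging the 2D and 3D network outputs and applying customised post-processing. A rough sketch of that ensembling step, assuming per-voxel foreground probability maps; keeping only the largest connected component is one plausible post-processing rule, not necessarily the authors':

```python
import numpy as np
from scipy import ndimage

def ensemble_and_postprocess(prob2d: np.ndarray, prob3d: np.ndarray,
                             thr: float = 0.5) -> np.ndarray:
    """Average two models' probability maps, threshold, and keep the
    largest connected component as the final tumour mask."""
    prob = (prob2d + prob3d) / 2.0           # simple model averaging
    mask = prob > thr
    labels, n = ndimage.label(mask)          # connected-component labelling
    if n == 0:
        return mask                          # nothing segmented
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # largest component only

# toy probability volumes standing in for 2D- and 3D-nnU-Net outputs
p2d = np.random.rand(32, 32, 32)
p3d = np.random.rand(32, 32, 32)
seg = ensemble_and_postprocess(p2d, p3d)
```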
Collapse
Affiliation(s)
- Matteo Ferrante
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Lisa Rinaldi
- Radiation Research Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Francesca Botta
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Xiaobin Hu
- Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany
| | - Andreas Dolp
- Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany
| | - Marta Minotti
- Division of Radiology, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Francesca De Piano
- Division of Radiology, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Gianluigi Funicelli
- Division of Radiology, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Stefania Volpe
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
| | - Federica Bellerba
- Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Paolo De Marco
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Sara Raimondi
- Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Stefania Rizzo
- Clinica di Radiologia EOC, Istituto Imaging della Svizzera Italiana (IIMSI), via Tesserete 46, 6900 Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana (USI), via G. Buffi 13, 6900 Lugano, Switzerland
| | - Kuangyu Shi
- Chair for Computer-Aided Medical Procedures, Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany
- Department of Nuclear Medicine, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
| | - Marta Cremonesi
- Radiation Research Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Barbara A. Jereczek-Fossa
- Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
| | - Lorenzo Spaggiari
- Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Division of Thoracic Surgery, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Filippo De Marinis
- Division of Thoracic Oncology, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Roberto Orecchia
- Division of Radiology, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Scientific Direction, IEO, European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| | - Daniela Origgi
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
| |
Collapse
|
20
|
Yang C, Qin LH, Xie YE, Liao JY. Deep learning in CT image segmentation of cervical cancer: a systematic review and meta-analysis. Radiat Oncol 2022; 17:175. [PMID: 36344989 PMCID: PMC9641941 DOI: 10.1186/s13014-022-02148-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 10/16/2022] [Indexed: 11/09/2022] Open
Abstract
Background This paper conducts a systematic review and meta-analysis of deep learning (DL) models for cervical cancer CT image segmentation. Methods Relevant studies were systematically searched in PubMed, Embase, The Cochrane Library, and Web of Science. The literature on DL models for cervical cancer CT image segmentation was included, and a meta-analysis was performed on the Dice similarity coefficient (DSC) of the segmentation results of the included DL models. We also performed subgroup analyses according to sample size, type of segmentation (i.e., two-dimensional and three-dimensional), and three organs at risk (i.e., bladder, rectum, and femur). This study was registered in PROSPERO prior to initiation (CRD42022307071). Results A total of 1893 articles were retrieved and 14 articles were included in the meta-analysis. The pooled DSC scores for the clinical target volume (CTV), bladder, rectum, and femoral head were 0.86 (95% CI 0.84 to 0.87), 0.91 (95% CI 0.89 to 0.93), 0.83 (95% CI 0.79 to 0.88), and 0.92 (95% CI 0.91 to 0.94), respectively. For CTV segmentation, the DSC score for two-dimensional (2D) models was 0.87 (95% CI 0.85 to 0.90), while that for three-dimensional (3D) models was 0.85 (95% CI 0.82 to 0.87). As for the effect of sample size on segmentation performance, whether the studies were dichotomized at 100 cases or at 150 cases, the results showed no difference (P > 0.05). Four papers reported segmentation times, ranging from 15 s to 2 min. Conclusion DL models have good accuracy in automatic segmentation of CT images of cervical cancer with little time consumed and have good prospects for future radiotherapy applications, but they still need public high-quality databases and large-scale research verification. Supplementary Information The online version contains supplementary material available at 10.1186/s13014-022-02148-6.
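Pooling per-study DSC values with confidence intervals like those above is typically done with a random-effects model. A generic DerSimonian-Laird sketch (the review's actual meta-analytic software and settings are not stated here, and the study values below are toy numbers):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with a 95% CI (DerSimonian-Laird)."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                            # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()                     # heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - (w ** 2).sum() / w.sum()))       # between-study variance
    w_star = 1.0 / (v + tau2)                              # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# toy per-study bladder DSC means and variances
print(dersimonian_laird([0.92, 0.89, 0.93], [0.0004, 0.0009, 0.0002]))
```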
Collapse
|
21
|
Artificial intelligence and machine learning in cancer imaging. COMMUNICATIONS MEDICINE 2022; 2:133. [PMID: 36310650 PMCID: PMC9613681 DOI: 10.1038/s43856-022-00199-0] [Citation(s) in RCA: 79] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.
Collapse
|
22
|
Sun S, Ren L, Miao Z, Hua L, Wang D, Deng J, Chen J, Liu N, Gong Y. Application of MRI-Based Radiomics in Preoperative Prediction of NF2 Alteration in Intracranial Meningiomas. Front Oncol 2022; 12:879528. [PMID: 36267986 PMCID: PMC9578175 DOI: 10.3389/fonc.2022.879528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2022] [Accepted: 06/06/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose: This study aimed to investigate the feasibility of predicting NF2 mutation status based on MR radiomic analysis in patients with intracranial meningioma. Methods: This retrospective study included 105 patients with meningiomas: 60 NF2-mutant and 45 wild-type samples. Radiomic features were extracted from magnetic resonance imaging scans, including T1-weighted, T2-weighted, and contrast-enhanced T1-weighted images. Student's t-test and LASSO regression were performed to select the radiomic features. All patients were randomly divided into training and validation cohorts in a 7:3 ratio. Five machine-learning models (RF, SVM, LR, KNN, and XGBoost) were trained to predict NF2 mutational status. Receiver operating characteristic curve and precision-recall analyses were used to evaluate model performance. Student's t-tests were then used to compare the posterior probabilities of NF2 mut/loss prediction for patients with different NF2 statuses. Results: Nine features had nonzero coefficients in the LASSO regression model. No significant differences were observed in the clinical features. Nine features showed significant differences between patients with different NF2 statuses. Among all machine learning algorithms, SVM showed the best performance: the area under the curve and accuracy of the predictive model were both 0.85, and the F1-score of the precision-recall curve was 0.80. Model risk was assessed by plotting calibration curves. The p-value for the Hosmer-Lemeshow goodness-of-fit test was 0.411 (p > 0.05), indicating that the difference between the obtained model and a perfect model was statistically insignificant. The AUC of our model in external validation was 0.83. Conclusion: A combination of radiomic analysis and machine learning showed potential clinical utility in the prediction of preoperative NF2 status. These findings could aid in developing customized neurosurgery plans and meningioma management strategies before postoperative pathology.
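The pipeline above selects features by their nonzero LASSO coefficients and then trains classifiers such as an SVM, evaluated by AUC. A scikit-learn sketch of that two-step pattern on synthetic stand-in data (feature counts, the alpha value, and the empty-selection guard are our assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 200))              # stand-in radiomic feature matrix
y = rng.integers(0, 2, size=105)             # NF2-mutant (1) vs wild-type (0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# step 1: LASSO keeps the features with nonzero coefficients
scaler = StandardScaler().fit(X_tr)
lasso = Lasso(alpha=0.05).fit(scaler.transform(X_tr), y_tr)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:                           # guard for this toy data
    keep = np.arange(X.shape[1])

# step 2: SVM on the selected features, evaluated by AUC
clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
clf.fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
print(f"selected {keep.size} features, AUC = {auc:.2f}")
```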
Collapse
Affiliation(s)
- Shuchen Sun
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Leihao Ren
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Zong Miao
- Department of Neurosurgery, Changhai Hospital, Naval Medical University (Second Military Medical University), Shanghai, China
| | - Lingyang Hua
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Daijun Wang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Jiaojiao Deng
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Jiawei Chen
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
| | - Ning Liu
- Department of Neurosurgery, Changhai Hospital, Naval Medical University (Second Military Medical University), Shanghai, China
| | - Ye Gong
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Institute of Neurosurgery, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
- Department of Critical Care Medicine, Huashan Hospital, Fudan University, Shanghai, China
- *Correspondence: Ye Gong,
| |
Collapse
|
23
|
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:cancers14153581. [PMID: 35892839 PMCID: PMC9332287 DOI: 10.3390/cancers14153581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 07/08/2022] [Accepted: 07/20/2022] [Indexed: 02/04/2023] Open
Abstract
Objective: This study aimed to investigate the segmentation accuracy for organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework. Methods: We used non-contrast-enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContour™ segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC outcomes were higher than 0.80 for all OARs except the esophagus. AccuContour™-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but its DSC was statistically significantly better than that of AccuContour™-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast-enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
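The framework feeds denoised CT slices to the segmentation network to raise the contrast between structures of interest and noise. The abstract does not name the denoising algorithm, so the sketch below uses non-local means from scikit-image purely as a generic stand-in:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_ct_slice(img_hu: np.ndarray) -> np.ndarray:
    """Non-local-means denoising of one CT slice (a generic choice;
    the study's actual denoising method is not specified here)."""
    img = img_hu.astype(np.float32)
    sigma = float(estimate_sigma(img))            # estimate the noise level
    return denoise_nl_means(img, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

# toy noisy slice standing in for a non-contrast planning CT (HU values)
noisy = np.random.normal(40.0, 20.0, (128, 128)).astype(np.float32)
clean = denoise_ct_slice(noisy)                   # would then feed segmentation
```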
Collapse
Affiliation(s)
- Jung Ho Im
- CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea;
| | - Ik Jae Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea; (I.J.L.); (J.S.)
| | - Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea; (Y.C.); (J.S.H.)
| | - Jiwon Sung
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea; (I.J.L.); (J.S.)
| | - Jin Sook Ha
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea; (Y.C.); (J.S.H.)
| | - Ho Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea; (I.J.L.); (J.S.)
- Correspondence: ; Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
| |
Collapse
|
24
|
Deep Learning Model for Grading Metastatic Epidural Spinal Cord Compression on Staging CT. Cancers (Basel) 2022; 14:cancers14133219. [PMID: 35804990 PMCID: PMC9264856 DOI: 10.3390/cancers14133219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 02/02/2023] Open
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a disastrous complication of advanced malignancy. Deep learning (DL) models for automatic MESCC classification on staging CT were developed to aid earlier diagnosis. Methods: This retrospective study included 444 CT staging studies from 185 patients with suspected MESCC who underwent MRI spine studies within 60 days of the CT studies. The DL model training/validation dataset consisted of 316/358 (88%) and the test set of 42/358 (12%) CT studies. Training/validation and test datasets were labeled in consensus by two subspecialized radiologists (6 and 11 years of experience) using the MRI studies as the reference standard. Test sets were labeled by the developed DL models and four radiologists (2–7 years of experience) for comparison. Results: DL models showed almost-perfect interobserver agreement for classification of CT spine images into normal, low-grade, and high-grade MESCC, with kappas ranging from 0.873 to 0.911 (p < 0.001). The DL models (lowest κ = 0.873, 95% CI 0.858–0.887) also showed superior interobserver agreement compared to two of the four radiologists for three-class classification, including a specialist (κ = 0.820, 95% CI 0.803–0.837) and a general radiologist (κ = 0.726, 95% CI 0.706–0.747), both p < 0.001. Conclusion: DL models for MESCC classification on staging CT showed interobserver agreement comparable or superior to that of radiologists and could be used to aid earlier diagnosis.
Collapse
|
25
|
Hallinan JTPD, Zhu L, Zhang W, Lim DSW, Baskar S, Low XZ, Yeong KY, Teo EC, Kumarakulasinghe NB, Yap QV, Chan YH, Lin S, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A. Deep Learning Model for Classifying Metastatic Epidural Spinal Cord Compression on MRI. Front Oncol 2022; 12:849447. [PMID: 35600347 PMCID: PMC9114468 DOI: 10.3389/fonc.2022.849447] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/18/2022] [Indexed: 11/13/2022] Open
Abstract
Background Metastatic epidural spinal cord compression (MESCC) is a devastating complication of advanced cancer. A deep learning (DL) model for automated MESCC classification on MRI could aid earlier diagnosis and referral. Purpose To develop a DL model for automated classification of MESCC on MRI. Materials and Methods Patients with known MESCC diagnosed on MRI between September 2007 and September 2017 were eligible. MRI studies with instrumentation, suboptimal image quality, and non-thoracic regions were excluded. Axial T2-weighted images were utilized. The internal dataset split was 82% and 18% for training/validation and test sets, respectively. External testing was also performed. Internal training/validation data were labeled using the Bilsky MESCC classification by a musculoskeletal radiologist (10-year experience) and a neuroradiologist (5-year experience). These labels were used to train a DL model utilizing a prototypical convolutional neural network. Internal and external test sets were labeled by the musculoskeletal radiologist as the reference standard. For assessment of DL model performance and interobserver variability, test sets were labeled independently by the neuroradiologist (5-year experience), a spine surgeon (5-year experience), and a radiation oncologist (11-year experience). Inter-rater agreement (Gwet’s kappa) and sensitivity/specificity were calculated. Results Overall, 215 MRI spine studies were analyzed [164 patients, mean age = 62 ± 12(SD)] with 177 (82%) for training/validation and 38 (18%) for internal testing. For internal testing, the DL model and specialists all showed almost perfect agreement (kappas = 0.92–0.98, p < 0.001) for dichotomous Bilsky classification (low versus high grade) compared to the reference standard. Similar performance was seen for external testing on a set of 32 MRI spines with the DL model and specialists all showing almost perfect agreement (kappas = 0.94–0.95, p < 0.001) compared to the reference standard. Conclusion A DL model showed comparable agreement to a subspecialist radiologist and clinical specialists for the classification of malignant epidural spinal cord compression and could optimize earlier diagnosis and surgical referral.
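The classifier is built on a prototypical network, which represents each class by the mean embedding of its support examples and classifies queries by distance to those prototypes. A minimal PyTorch sketch of that inference step (the embedding size and labels are toy assumptions, not the paper's setup):

```python
import torch

def prototypical_predict(support: torch.Tensor, support_labels: torch.Tensor,
                         query: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Core of a prototypical network: prototypes are mean support
    embeddings; queries get soft class scores by negative distance."""
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])
    dists = torch.cdist(query, protos)      # Euclidean distance to each prototype
    return (-dists).softmax(dim=1)          # nearer prototype -> higher probability

support = torch.randn(20, 64)               # embeddings from a CNN backbone
labels = torch.tensor([0] * 10 + [1] * 10)  # low- vs high-grade Bilsky (toy)
probs = prototypical_predict(support, labels, torch.randn(5, 64), n_classes=2)
```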
Collapse
Affiliation(s)
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Lei Zhu
- NUS Graduate School, Integrative Sciences and Engineering Programme, National University of Singapore, Singapore, Singapore
| | - Wenqiao Zhang
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Desmond Shi Wei Lim
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Sangeetha Baskar
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Kuan Yuen Yeong
- Department of Radiology, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | | | - Qai Ven Yap
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Yiong Huak Chan
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Shuxun Lin
- Division of Spine Surgery, Department of Orthopaedic Surgery, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Balamurugan A Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, Singapore, Singapore
| | - Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
Collapse
|
26
|
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the fields of AI application in RT over the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a task not manageable by individuals or small groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns remain, including the need for harmonization and for overcoming ethical, legal, and skill barriers.
Collapse
|
27
|
Liu Y, Chen Z, Wang J, Wang X, Qu B, Ma L, Zhao W, Zhang G, Xu S. Dose Prediction Using a Three-Dimensional Convolutional Neural Network for Nasopharyngeal Carcinoma With Tomotherapy. Front Oncol 2021; 11:752007. [PMID: 34858825 PMCID: PMC8631763 DOI: 10.3389/fonc.2021.752007] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 10/21/2021] [Indexed: 01/14/2023] Open
Abstract
Purpose This study focused on predicting 3D dose distributions at high precision and developed prediction methods for nasopharyngeal carcinoma (NPC) patients treated with Tomotherapy, based on the patient-specific gap between organs at risk (OARs) and planning target volumes (PTVs). Methods A convolutional neural network (CNN) is trained using CT and contour masks as input and dose distributions as output. The CNN is based on the "3D Dense-U-Net", which combines the U-Net and the Dense-Net. To evaluate the model, we retrospectively used 124 NPC patients treated with Tomotherapy, of whom 96 and 28 were randomly split for model training and testing, respectively. We performed comparison studies using different training matrix shapes and dimensions for the CNN models, i.e., 128 × 128 × 48 (Model I), 128 × 128 × 16 (Model II), and a 2D Dense U-Net (Model III). The performance of these models was quantitatively evaluated using clinically relevant metrics and statistical analysis. Results We found that a greater height of the training patch yields a better model outcome. Errors were calculated by comparing the predicted dose with the ground truth. The mean deviations from the mean and maximum doses of PTVs and OARs were 2.42% and 2.93%. The error for the maximum dose of the right optic nerves in Model I was 4.87 ± 6.88%, compared with 7.9 ± 6.8% in Model II (p = 0.08) and 13.85 ± 10.97% in Model III (p < 0.01); Model I performed best. The gamma passing rate of PTV60 for the 3%/3 mm criterion was 83.6 ± 5.2% in Model I, compared with 75.9 ± 5.5% in Model II (p < 0.001) and 77.2 ± 7.3% in Model III (p < 0.01); Model I again gave the best outcome. The prediction error of D95 for PTV60 was 0.64 ± 0.68% in Model I, compared with 2.04 ± 1.38% in Model II (p < 0.01) and 1.05 ± 0.96% in Model III (p = 0.01); Model I was again the best. Conclusions Training dose prediction models with deep-learning techniques informed by clinical logic is valuable. Increasing the height (Y direction) of the training patch can improve dose prediction accuracy for small OARs and the whole body. Our dose prediction network provides clinically acceptable results and a training strategy for dose prediction models, and should be helpful for building automatic Tomotherapy planning.
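Plan quality above is reported via gamma passing rates at the 3%/3 mm criterion, which blends a per-voxel dose-difference test with a distance-to-agreement test. A deliberately simplified global-gamma sketch (brute-force neighbourhood search with wrap-around edges; clinical tools interpolate far more carefully, so treat this only as an illustration of the metric):

```python
import numpy as np
from itertools import product

def gamma_pass_rate(ref, ev, spacing_mm=(2.0, 2.0, 2.0),
                    dd=0.03, dta_mm=3.0, search=2):
    """Simplified global 3%/3 mm gamma: for each voxel, take the minimum
    over nearby shifts of sqrt((dose diff / 3%)^2 + (distance / 3 mm)^2)."""
    norm = dd * ref.max()                             # global dose criterion
    best = np.full(ref.shape, np.inf)
    for dx, dy, dz in product(range(-search, search + 1), repeat=3):
        shifted = np.roll(ev, (dx, dy, dz), axis=(0, 1, 2))  # wraps at edges
        dist2 = sum((d * s) ** 2 for d, s in zip((dx, dy, dz), spacing_mm))
        g2 = ((shifted - ref) / norm) ** 2 + dist2 / dta_mm ** 2
        best = np.minimum(best, g2)
    return float(np.mean(np.sqrt(best) <= 1.0))       # fraction of passing voxels

ref = np.random.rand(24, 24, 24) * 60.0               # toy dose grids (Gy)
ev = ref + np.random.normal(0.0, 0.5, ref.shape)      # "predicted" dose
print(gamma_pass_rate(ref, ev))
```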
Collapse
Affiliation(s)
- Yaoying Liu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- School of Physics, Beihang University, Beijing, China
| | | | - Jinyuan Wang
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
| | - Xiaoshen Wang
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
| | - Baolin Qu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
| | - Lin Ma
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
| | - Wei Zhao
- School of Physics, Beihang University, Beijing, China
| | - Gaolong Zhang
- School of Physics, Beihang University, Beijing, China
| | - Shouping Xu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
| |
Collapse
|