1
Borges MG, Gruenwaldt J, Barsanelli DM, Ishikawa KE, Stuart SR. Automatic segmentation of cardiac structures can change the way we evaluate dose limits for radiotherapy in the left breast. J Med Imaging Radiat Sci 2024; 56:101844. [PMID: 39740303 DOI: 10.1016/j.jmir.2024.101844]
Abstract
PURPOSE Radiotherapy is a crucial part of breast cancer treatment. Precision in dose assessment is essential to minimize side effects. Traditionally, anatomical structures are delineated manually, a time-consuming process subject to variability. Automatic segmentation, including methods based on multiple atlases and deep learning, offers a promising alternative. For radiotherapy treatment of the left breast, the RTOG 1005 protocol highlights the importance of cardiac delineation and the need to minimize cardiac exposure to radiation. Our study aims to evaluate dose distribution in auto-segmented substructures and establish models to correlate them with dose in the cardiac area. METHODS AND MATERIALS Anatomical structures were auto-segmented using TotalSegmentator and Limbus AI. The relationship between the volume of the cardiac area and that of the organs at risk was assessed using log-linear regressions. RESULTS The mean dose distribution was considerable for the LAD (left anterior descending coronary artery), heart, and left ventricle. The volumetric distribution of organs at risk was evaluated for specific RTOG 1005 isodoses, and we highlight the greater variability in the absolute volumetric evaluation. Log-linear regression models are presented to estimate dose constraint parameters, with a greater number of highly correlated comparisons for the absolute dose-volume assessment. CONCLUSIONS Dose-volume assessment protocols in patients with left breast cancer often neglect cardiac substructures. However, automatic tools can overcome these technical difficulties. In this study, we correlated the dose in the cardiac area with the doses in specific substructures and suggested limits for planning evaluation. Our data also indicate that statistical models could be applied to assess substructures for which an automatic segmentation tool is not available, and show a benefit in reporting absolute dose-volume thresholds for future cause-effect assessments.
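A note on the statistical model: the log-linear regressions mentioned above lend themselves to a compact illustration. The sketch below fits such a model in Python, assuming paired mean-dose values (in Gy) extracted from dose-volume histograms; the numbers, variable names, and the choice of whole-heart mean dose as the predictor are illustrative assumptions rather than data or models from the study.

```python
# Sketch: log-linear regression relating a whole-heart dose metric to a cardiac
# substructure dose metric (here, LAD mean dose), in the spirit of the models
# described in the abstract. All dose values are hypothetical placeholders.
import numpy as np

heart_mean_dose = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1, 2.5, 3.8])      # Gy (hypothetical)
lad_mean_dose = np.array([6.5, 11.2, 5.1, 14.8, 9.0, 10.1, 7.3, 13.0])    # Gy (hypothetical)

# Log-linear model: log(y) = a + b * x, i.e. y = exp(a) * exp(b * x)
b, a = np.polyfit(heart_mean_dose, np.log(lad_mean_dose), deg=1)

def predict_lad_dose(heart_dose_gy: float) -> float:
    """Estimate the LAD mean dose (Gy) from the whole-heart mean dose (Gy)."""
    return float(np.exp(a + b * heart_dose_gy))

print(f"log(LAD mean dose) = {a:.3f} + {b:.3f} * heart mean dose")
print(f"Predicted LAD mean dose at 3 Gy heart mean dose: {predict_lad_dose(3.0):.1f} Gy")
```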
Affiliation(s)
- Murilo Guimarães Borges
- Department of Medical Physics, Centre for Biomedical Engineering (CEB), University of Campinas, Rua Alexander Fleming, 163, Cidade Universitária, 13083-881 Campinas, SP, Brazil; Hospital da Mulher Prof. Dr. José Aristodemo Pinotti (CAISM), University of Campinas, R. Alexander Fleming, 101, Cidade Universitária, 13083-881 Campinas, SP, Brazil.
- Joyce Gruenwaldt
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil; Department of Radiotherapy, Hospital das Clínicas, University of Campinas, R. Vital Brasil, 251, Cidade Universitária, 13083-888 Campinas, SP, Brazil
- Danilo Matheus Barsanelli
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil
- Karina Emy Ishikawa
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil
- Silvia Radwanski Stuart
- Department of Radiotherapy, Instituto Brasileiro de Controle do Câncer (IBCC), Avenida Alcântara Machado, 2576, Mooca, 03102-002 São Paulo, SP, Brazil; Department of Radiotherapy, Instituto de Radiologia do Hospital das Clínicas - HCFMUSP (InRad), Hospital das Clínicas, University of São Paulo, Rua Doutor Ovídio Pires de Campos, 75, Portaria 1, Cerqueira César, 05403-010 São Paulo, SP, Brazil
2
Reinders FC, Savenije MH, de Ridder M, Maspero M, Doornaert PA, Terhaard CH, Raaijmakers CP, Zakeri K, Lee NY, Aliotta E, Rangnekar A, Veeraraghavan H, Philippens ME. Automatic segmentation for magnetic resonance imaging guided individual elective lymph node irradiation in head and neck cancer patients. Phys Imaging Radiat Oncol 2024; 32:100655. [PMID: 39502445 PMCID: PMC11536060 DOI: 10.1016/j.phro.2024.100655]
Abstract
Background and purpose In head and neck squamous cell carcinoma (HNSCC) patients, the radiation dose to nearby organs at risk can be reduced by restricting elective neck irradiation from lymph node levels to individual lymph nodes. However, manual delineation of every individual lymph node is time-consuming and error-prone. Therefore, automatic magnetic resonance imaging (MRI) segmentation of individual lymph nodes was developed and tested using a convolutional neural network (CNN). Materials and methods In 50 HNSCC patients (UMC-Utrecht), individual lymph nodes located in lymph node levels Ib-II-III-IV-V were manually segmented on MRI by consensus of two experts, yielding ground-truth segmentations. A 3D CNN (nnU-Net) was trained on 40 patients and tested on 10. Evaluation metrics were the Dice Similarity Coefficient (DSC), recall, precision, and F1-score. The segmentations of the CNN were compared with the segmentations of two observers. Transfer learning with 20 additional patients was used to re-train and test the CNN in another medical center. Results nnU-Net produced automatic segmentations of elective lymph nodes with median DSC: 0.72, recall: 0.76, precision: 0.78, and F1-score: 0.78. The CNN had higher recall than both observers (p = 0.002). No difference in evaluation scores between the networks of the two medical centers was found after re-training with 5 or 10 patients. Conclusion nnU-Net was able to automatically segment individual lymph nodes on MRI. The detection rate of lymph nodes using nnU-Net was higher than that of manual segmentation. Re-training nnU-Net was required to successfully transfer the network to the other medical center.
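For readers unfamiliar with the reported metrics, the sketch below shows how voxel-wise Dice, precision, recall, and F1 can be computed from a ground-truth and a predicted binary mask; the masks are synthetic placeholders, and note that voxel-wise Dice and F1 coincide by definition, whereas the F1-score in the study presumably refers to per-node detection rather than voxel overlap.

```python
# Sketch: voxel-wise overlap metrics of the kind reported in the abstract,
# computed from two binary masks. The arrays are synthetic stand-ins for a
# ground-truth and a predicted lymph node segmentation.
import numpy as np

def overlap_metrics(gt: np.ndarray, pred: np.ndarray, eps: float = 1e-8) -> dict:
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()
    fp = np.logical_and(~gt, pred).sum()
    fn = np.logical_and(gt, ~pred).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return {"dice": dice, "precision": precision, "recall": recall, "f1": f1}

gt = np.zeros((64, 64, 64), dtype=bool)
gt[20:40, 20:40, 20:40] = True              # synthetic "lymph node"
pred = np.roll(gt, shift=3, axis=0)         # slightly shifted prediction

print(overlap_metrics(gt, pred))
```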
Affiliation(s)
- Mark H.F. Savenije
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Computational Imaging Group for MR Therapy and Diagnostics, Cancer and Imaging Division, University Medical Center Utrecht, Utrecht, the Netherlands
- Mischa de Ridder
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Matteo Maspero
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Computational Imaging Group for MR Therapy and Diagnostics, Cancer and Imaging Division, University Medical Center Utrecht, Utrecht, the Netherlands
- Chris H.J. Terhaard
- Department of Radiotherapy, University Medical Centre Utrecht, the Netherlands
- Kaveh Zakeri
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Nancy Y. Lee
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Eric Aliotta
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Aneesh Rangnekar
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
- Harini Veeraraghavan
- Department of Radiotherapy, Memorial Sloan Kettering Cancer Centre, New York, United States
3
Lagedamon V, Leni PE, Gschwind R. Deep learning applied to dose prediction in external radiation therapy: A narrative review. Cancer Radiother 2024; 28:402-414. [PMID: 39138047 DOI: 10.1016/j.canrad.2024.03.005]
Abstract
Over the last decades, the use of artificial intelligence, machine learning and deep learning in medical fields has skyrocketed. Although machine learning and deep learning models are best known for their results in segmentation, motion management and post-treatment outcome tasks, they have also been investigated as fast dose calculation and quality assurance tools since 2000. The main motivation for this increasing research and interest in artificial intelligence, machine learning and deep learning is the enhancement of treatment workflows, specifically the accuracy and turnaround time of dosimetry and quality assurance, which remain important and time-consuming aspects of clinical patient management. Since 2014, the evolution of models and architectures for dose calculation has been driven by innovations and growing interest in information research, with pronounced improvements in architecture design. The use of knowledge-based approaches to patient-specific methods has also considerably improved the accuracy of dose predictions. This paper covers the known deep learning architectures and models applied to external radiotherapy, with a description of each architecture, followed by a discussion of the performance and future of deep learning predictive models in external radiotherapy.
Affiliation(s)
- V Lagedamon
- Laboratoire chronoenvironnement, UMR 6249, université de Franche-Comté, CNRS, 4, place Tharradin, 25200 Montbéliard, France
- P-E Leni
- Laboratoire chronoenvironnement, UMR 6249, université de Franche-Comté, CNRS, 4, place Tharradin, 25200 Montbéliard, France.
- R Gschwind
- Laboratoire chronoenvironnement, UMR 6249, université de Franche-Comté, CNRS, 4, place Tharradin, 25200 Montbéliard, France
4
Russo L, Charles-Davies D, Bottazzi S, Sala E, Boldrini L. Radiomics for clinical decision support in radiation oncology. Clin Oncol (R Coll Radiol) 2024; 36:e269-e281. [PMID: 38548581 DOI: 10.1016/j.clon.2024.03.003]
Abstract
Radiomics is a promising tool for the development of quantitative biomarkers to support clinical decision-making. It has been shown to improve the prediction of response to treatment and outcome in different settings, particularly in the field of radiation oncology by optimising the dose delivery solutions and reducing the rate of radiation-induced side effects, leading to a fully personalised approach. Despite the promising results offered by radiomics at each of these stages, standardised methodologies, reproducibility and interpretability of results are still lacking, limiting the potential clinical impact of these tools. In this review, we briefly describe the principles of radiomics and the most relevant applications of radiomics at each stage of cancer management in the framework of radiation oncology. Furthermore, the integration of radiomics into clinical decision support systems is analysed, defining the challenges and offering possible solutions for translating radiomics into a clinically applicable tool.
Affiliation(s)
- L Russo
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Dipartimento di Scienze Radiologiche ed Ematologiche. Università Cattolica Del Sacro Cuore, Rome, Italy.
- D Charles-Davies
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- S Bottazzi
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- E Sala
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Dipartimento di Scienze Radiologiche ed Ematologiche. Università Cattolica Del Sacro Cuore, Rome, Italy
- L Boldrini
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
5
Tan HS, Wang K, Mcbeth R. Exploring UMAP in hybrid models of entropy-based and representativeness sampling for active learning in biomedical segmentation. Comput Biol Med 2024; 176:108605. [PMID: 38772054 DOI: 10.1016/j.compbiomed.2024.108605]
Abstract
In this work, we study various hybrid models of entropy-based and representativeness sampling techniques in the context of active learning for medical segmentation, in particular examining the role of UMAP (Uniform Manifold Approximation and Projection) as a technique for capturing representativeness. Although UMAP has been shown to be viable as a general-purpose dimension-reduction method in diverse areas, its role in deep learning-based medical segmentation has not yet been extensively explored. Using the cardiac and prostate datasets in the Medical Segmentation Decathlon for validation, we found that a novel hybrid Entropy-UMAP sampling technique achieved a statistically significant Dice score advantage over the random baseline (3.2% for cardiac, 4.5% for prostate) and attained the highest Dice coefficient among the spectrum of 10 distinct active learning methodologies we examined. This provides preliminary evidence of an interesting synergy between entropy-based and UMAP methods when the former precedes the latter in a hybrid model of active learning.
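A rough sketch of the "uncertainty first, representativeness second" query step described above is given below, assuming the umap-learn and scikit-learn packages. The feature vectors and predicted probabilities are random placeholders standing in for quantities that would come from the segmentation network, so this illustrates the sampling logic only and is not the authors' implementation.

```python
# Sketch of a hybrid Entropy-UMAP query step for active learning:
# 1) rank unlabeled candidates by predictive entropy, 2) embed the most
# uncertain ones with UMAP and pick one representative per k-means cluster.
import numpy as np
import umap                               # pip install umap-learn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_unlabeled, n_features, n_query = 500, 128, 10

features = rng.normal(size=(n_unlabeled, n_features))        # placeholder image embeddings
probs = rng.uniform(0.01, 0.99, size=(n_unlabeled, 4096))    # placeholder voxel probabilities

# Step 1: mean voxel-wise binary entropy per candidate image.
entropy = -(probs * np.log(probs) + (1 - probs) * np.log(1 - probs)).mean(axis=1)
pool = np.argsort(entropy)[-5 * n_query:]                     # keep the most uncertain candidates

# Step 2: UMAP embedding of the uncertain pool, then one pick per cluster.
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(features[pool])
km = KMeans(n_clusters=n_query, n_init=10, random_state=0).fit(embedding)

query = []
for c in range(n_query):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(embedding[members] - km.cluster_centers_[c], axis=1)
    query.append(int(pool[members[np.argmin(dists)]]))        # candidate closest to the centre

print("indices to annotate next:", sorted(query))
```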
Affiliation(s)
- Hai Siong Tan
- University of Pennsylvania, Perelman School of Medicine, Department of Radiation Oncology, Philadelphia, USA.
- Rafe Mcbeth
- University of Pennsylvania, Perelman School of Medicine, Department of Radiation Oncology, Philadelphia, USA
6
Lastrucci A, Wandael Y, Ricci R, Maccioni G, Giansanti D. The Integration of Deep Learning in Radiotherapy: Exploring Challenges, Opportunities, and Future Directions through an Umbrella Review. Diagnostics (Basel) 2024; 14:939. [PMID: 38732351 PMCID: PMC11083654 DOI: 10.3390/diagnostics14090939]
Abstract
This study investigates, through a narrative review, the transformative impact of deep learning (DL) in the field of radiotherapy, particularly in light of the accelerated developments prompted by the COVID-19 pandemic. The proposed approach was based on an umbrella review following a standard narrative checklist and a qualification process. The selection process identified 19 systematic review studies. Through an analysis of current research, the study highlights the revolutionary potential of DL algorithms in optimizing treatment planning, image analysis, and patient outcome prediction in radiotherapy. It underscores the necessity of further exploration into specific research areas to unlock the full capabilities of DL technology. Moreover, the study emphasizes the intricate interplay between digital radiology and radiotherapy, revealing how advancements in one field can significantly influence the other. This interdependence is crucial for addressing complex challenges and advancing the integration of cutting-edge technologies into clinical practice. Collaborative efforts among researchers, clinicians, and regulatory bodies are deemed essential to effectively navigate the evolving landscape of DL in radiotherapy. By fostering interdisciplinary collaborations and conducting thorough investigations, stakeholders can fully leverage the transformative power of DL to enhance patient care and refine therapeutic strategies. Ultimately, this promises to usher in a new era of personalized and optimized radiotherapy treatment for improved patient outcomes.
Affiliation(s)
- Andrea Lastrucci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy; (A.L.); (Y.W.); (R.R.)
- Yannick Wandael
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy; (A.L.); (Y.W.); (R.R.)
- Renzo Ricci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy; (A.L.); (Y.W.); (R.R.)
7
Zhang L, Liu Z, Zhang L, Wu Z, Yu X, Holmes J, Feng H, Dai H, Li X, Li Q, Wong WW, Vora SA, Zhu D, Liu T, Liu W. Technical Note: Generalizable and Promptable Artificial Intelligence Model to Augment Clinical Delineation in Radiation Oncology. Med Phys 2024; 51:2187-2199. [PMID: 38319676 PMCID: PMC10939804 DOI: 10.1002/mp.16965]
Abstract
BACKGROUND Efficient and accurate delineation of organs at risk (OARs) is a critical procedure for treatment planning and dose evaluation. Deep learning-based auto-segmentation of OARs has shown promising results and is increasingly being used in radiation therapy. However, existing deep learning-based auto-segmentation approaches face two challenges in clinical practice: generalizability and human-AI interaction. A generalizable and promptable auto-segmentation model, which segments OARs of multiple disease sites simultaneously and supports on-the-fly human-AI interaction, can significantly enhance the efficiency of radiation therapy treatment planning. PURPOSE Meta's segment anything model (SAM) was proposed as a generalizable and promptable model for next-generation natural image segmentation. We further evaluated the performance of SAM in radiotherapy segmentation. METHODS Computed tomography (CT) images of clinical cases from four disease sites at our institute were collected: prostate, lung, gastrointestinal, and head & neck. For each case, we selected the OARs important in radiotherapy treatment planning. We then compared both the Dice coefficients and Jaccard indices derived from three distinct methods: manual delineation (ground truth), automatic segmentation using SAM's 'segment anything' mode, and automatic segmentation using SAM's 'box prompt' mode that implements manual interaction via live prompts during segmentation. RESULTS Our results indicate that SAM's segment anything mode can achieve clinically acceptable segmentation results in most OARs with Dice scores higher than 0.7. SAM's box prompt mode further improves Dice scores by 0.1∼0.5. Similar results were observed for Jaccard indices. The results show that SAM performs better for prostate and lung, but worse for gastrointestinal and head & neck. When considering the size of organs and the distinctiveness of their boundaries, SAM shows better performance for large organs with distinct boundaries, such as lung and liver, and worse for smaller organs with less distinct boundaries, like parotid and cochlea. CONCLUSIONS Our results demonstrate SAM's robust generalizability with consistent accuracy in automatic segmentation for radiotherapy. Furthermore, the advanced box-prompt method enables the users to augment auto-segmentation interactively and dynamically, leading to patient-specific auto-segmentation in radiation therapy. SAM's generalizability across different disease sites and different modalities makes it feasible to develop a generic auto-segmentation model in radiotherapy.
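As an illustration of the interactive mode compared above, the sketch below runs SAM with a box prompt using the open-source segment-anything package and scores the result against a manual mask with a Dice coefficient; the checkpoint path, the CT-slice-to-RGB conversion, and the example box coordinates are placeholder assumptions, not details taken from the study.

```python
# Sketch: SAM "box prompt" segmentation of a single (pre-windowed) CT slice,
# assuming the segment-anything package and a downloaded ViT-H checkpoint.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # hypothetical path
predictor = SamPredictor(sam)

ct_slice_rgb = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in for a windowed CT slice
predictor.set_image(ct_slice_rgb)

box = np.array([150, 200, 320, 380])                      # [x0, y0, x1, y1] drawn around the organ
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
pred_mask = masks[0]                                      # boolean H x W mask

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

manual_mask = np.zeros_like(pred_mask, dtype=bool)        # stand-in for the manual contour
print("Dice vs. manual delineation:", dice(pred_mask, manual_mask))
```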
Affiliation(s)
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Zhengliang Liu
- School of Computing, University of Georgia, Athens, GA 30602, USA
- Lu Zhang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
- Zihao Wu
- School of Computing, University of Georgia, Athens, GA 30602, USA
- Xiaowei Yu
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Hongying Feng
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Haixing Dai
- School of Computing, University of Georgia, Athens, GA 30602, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02115, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02115, USA
- William W. Wong
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Sujay A. Vora
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
- Dajiang Zhu
- School of Computing, University of Georgia, Athens, GA 30602, USA
- Tianming Liu
- School of Computing, University of Georgia, Athens, GA 30602, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
8
Tatsugami F, Nakaura T, Yanagawa M, Fujita S, Kamagata K, Ito R, Kawamura M, Fushimi Y, Ueda D, Matsui Y, Yamada A, Fujima N, Fujioka T, Nozaki T, Tsuboyama T, Hirata K, Naganawa S. Recent advances in artificial intelligence for cardiac CT: Enhancing diagnosis and prognosis prediction. Diagn Interv Imaging 2023; 104:521-528. [PMID: 37407346 DOI: 10.1016/j.diii.2023.06.011]
Abstract
Recent advances in artificial intelligence (AI) for cardiac computed tomography (CT) have shown great potential in enhancing diagnosis and prognosis prediction in patients with cardiovascular disease. Deep learning, a type of machine learning, has revolutionized radiology by enabling automatic feature extraction and learning from large datasets, particularly in image-based applications. AI-driven techniques thus enable faster analysis of cardiac CT examinations than human readers while maintaining reproducibility. However, further research and validation are required to fully assess the diagnostic performance, radiation dose-reduction capabilities, and clinical correctness of these AI-driven techniques in cardiac CT. This review article presents recent advances of AI in the field of cardiac CT, including deep-learning-based image reconstruction, coronary artery motion correction, automatic calcium scoring, automatic epicardial fat measurement, coronary artery stenosis diagnosis, fractional flow reserve prediction, and prognosis prediction, and then analyzes the current limitations of these techniques and discusses future challenges.
Affiliation(s)
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan.
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo Chuo-ku, Kumamoto, 860-8556, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Shohei Fujita
- Departmen of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyoku, Kyoto, 606-8507, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital N15, W5, Kita-Ku, Sapporo 060-8638, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-Ku, Tokyo, 160-0016, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita City, Osaka, 565-0871, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15 Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8648, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
9
Rodríguez Outeiral R, González PJ, Schaake EE, van der Heide UA, Simões R. Deep learning for segmentation of the cervical cancer gross tumor volume on magnetic resonance imaging for brachytherapy. Radiat Oncol 2023; 18:91. [PMID: 37248490 DOI: 10.1186/s13014-023-02283-8]
Abstract
BACKGROUND Segmentation of the Gross Tumor Volume (GTV) is a crucial step in the brachytherapy (BT) treatment planning workflow. Currently, radiation oncologists segment the GTV manually, which is time-consuming. The time pressure is particularly critical for BT because during the segmentation process the patient waits immobilized in bed with the applicator in place. Automatic segmentation algorithms can potentially reduce both the clinical workload and the patient burden. Although deep learning-based automatic segmentation algorithms have been extensively developed for organs at risk, automatic segmentation of targets is less common. The aim of this study was to automatically segment the cervical cancer GTV on BT MRI images using a state-of-the-art automatic segmentation framework and assess its performance. METHODS A cohort of 195 cervical cancer patients treated between August 2012 and December 2021 was retrospectively collected. A total of 524 separate BT fractions were included, and the axial T2-weighted (T2w) MRI sequence was used for this project. The 3D nnU-Net was used as the automatic segmentation framework. The automatic segmentations were compared with the manual segmentations used in clinical practice using the Sørensen-Dice coefficient (Dice), 95th percentile Hausdorff distance (95th HD) and mean surface distance (MSD). The dosimetric impact was defined as the difference in D98 (ΔD98) and D90 (ΔD90) between the manual segmentations and the automatic segmentations, evaluated using the clinical dose distribution. The performance of the network was also compared separately depending on FIGO stage and on GTV volume. RESULTS The network achieved a median Dice of 0.73 (interquartile range (IQR) = 0.50-0.80), median 95th HD of 6.8 mm (IQR = 4.2-12.5 mm) and median MSD of 1.4 mm (IQR = 0.90-2.8 mm). The median ΔD90 and ΔD98 were 0.18 Gy (IQR = -1.38 to 1.19 Gy) and 0.20 Gy (IQR = -1.10 to 0.95 Gy), respectively. No significant differences in geometric or dosimetric performance were observed between tumors with different FIGO stages; however, significantly better Dice and dosimetric performance was found for larger tumors. CONCLUSIONS The nnU-Net framework achieved state-of-the-art performance in the segmentation of the cervical cancer GTV on BT MRI images. Reasonable median performance was achieved geometrically and dosimetrically, but with high variability among patients.
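The geometric metrics used above can be computed from binary masks with distance transforms, as in the sketch below; the masks and voxel spacing are illustrative, and this generic formulation is not necessarily the study's exact implementation.

```python
# Sketch: 95th percentile Hausdorff distance and mean surface distance between
# two binary masks, using SciPy distance transforms on the mask surfaces.
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)           # boundary voxels only

def surface_distances(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Symmetric set of distances (in mm) between the surfaces of masks a and b."""
    dist_to_b = ndimage.distance_transform_edt(~surface(b), sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surface(a), sampling=spacing)
    return np.concatenate([dist_to_b[surface(a)], dist_to_a[surface(b)]])

gt = np.zeros((80, 80, 80), dtype=bool)
gt[30:50, 30:50, 30:50] = True                            # synthetic GTV
pred = np.roll(gt, shift=2, axis=1)                       # slightly shifted prediction

d = surface_distances(gt, pred, spacing=(2.0, 0.5, 0.5))  # slice / in-plane mm (illustrative)
print(f"95th percentile HD: {np.percentile(d, 95):.2f} mm")
print(f"Mean surface distance: {d.mean():.2f} mm")
```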
Affiliation(s)
- Roque Rodríguez Outeiral
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands
- Patrick J González
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands
- Eva E Schaake
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands
- Uulke A van der Heide
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands
- Rita Simões
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066, Amsterdam, CX, The Netherlands.
10
Bourbonne V, Laville A, Wagneur N, Ghannam Y, Larnaudie A. Excitement and Concerns of Young Radiation Oncologists over Automatic Segmentation: A French Perspective. Cancers (Basel) 2023; 15:2040. [PMID: 37046704 PMCID: PMC10093734 DOI: 10.3390/cancers15072040]
Abstract
Introduction: Segmentation of organs at risk (OARs) and target volumes requires time and precision but is a highly repetitive task. Radiation oncology has seen tremendous technological advances in recent years, the latest brought by artificial intelligence (AI). Despite the advantages AI brings to segmentation, some concerns were raised by academics regarding the impact on young radiation oncologists’ training. A survey was thus conducted on young French radiation oncologists (ROs) by the SFjRO (Société Française des jeunes Radiothérapeutes Oncologues). Methodology: The SFjRO organizes regular webinars focusing on an anatomical localization, discussing either segmentation or dosimetry. Completion of the survey was mandatory for registration to a dosimetry webinar dedicated to head and neck (H & N) cancers. The survey was generated in accordance with the CHERRIES guidelines. Quantitative data (e.g., time savings and correction needs) were not measured directly but selected from predefined answer options. Results: 117 young ROs from 35 different and mostly academic centers participated. Most centers were either already equipped with such solutions or planning to be equipped in the next two years. AI segmentation software was found most useful for H & N cases. For the definition of OARs, participants experienced a significant time gain using AI-proposed delineations, with almost 35% of participants saving 50–100% of the segmentation time; the time gained for target volumes was significantly lower, with only 8.6% experiencing a 50–100% gain. Contours still needed to be thoroughly checked and edited, especially target volumes for some respondents. The majority of participants suggested that these tools should be integrated into training so that future radiation oncologists do not neglect the importance of radioanatomy. Fully aware of this risk, up to one-third of them even suggested that AI tools should be reserved for senior physicians only. Conclusions: We believe this survey on automatic segmentation to be the first to focus on the perception of young radiation oncologists. Software developers should focus on enhancing the quality of proposed segmentations, while young radiation oncologists should become more acquainted with these tools.
Affiliation(s)
- Vincent Bourbonne
- Radiation Oncology Department, University Hospital Brest, 2 Avenue Foch, 29200 Brest, France
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Correspondence: ; Tel.: +33-298223398; Fax: +33-98223087
- Adrien Laville
- Radiation Oncology Department, University Hospital Amiens-Picardie, 30 Avenue de la Croix Jourdain, 80054 Amiens, France
- Nicolas Wagneur
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Institut de Cancérologie de l’Ouest, Centre Paul Papin, 15 Rue André Bocquel, 49055 Angers, France
- Youssef Ghannam
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Institut de Cancérologie de l’Ouest, Centre Paul Papin, 15 Rue André Bocquel, 49055 Angers, France
- Audrey Larnaudie
- Société Française des Jeunes Radiothérapeutes Oncologues, 47 Rue de la Colonie, 75013 Paris, France
- Radiation Oncology Department, Centre François Baclesse, 3 Avenue du Général Harris, 14000 Caen, France
11
Shao J, Huang X, Gao T, Cao J, Wang Y, Zhang Q, Lou L, Ye J. Deep learning-based image analysis of eyelid morphology in thyroid-associated ophthalmopathy. Quant Imaging Med Surg 2023; 13:1592-1604. [PMID: 36915314 PMCID: PMC10006102 DOI: 10.21037/qims-22-551]
Abstract
Background We aimed to propose a deep learning-based approach to automatically measure eyelid morphology in patients with thyroid-associated ophthalmopathy (TAO). Methods This prospective study consecutively included 74 eyes of patients with TAO and 74 eyes of healthy volunteers visiting the ophthalmology department of a tertiary hospital. Patients diagnosed with TAO and age- and gender-matched healthy volunteers met the eligibility criteria for recruitment. Facial images were taken under the same light conditions. Comprehensive eyelid morphological parameters, such as palpebral fissure (PF) length, margin reflex distance (MRD), eyelid retraction distance, eyelid length, scleral area, and mid-pupil lid distance (MPLD), were automatically calculated using our deep learning-based analysis system. MRD1 and MRD2 were manually measured. Bland-Altman plots and intraclass correlation coefficients (ICCs) were used to assess the agreement between automatic and manual measurements of MRDs. The asymmetry of the eyelid contour was analyzed using the temporal:nasal ratio of the MPLD. All eyelid features were compared between TAO eyes and control eyes using the independent-samples t-test. Results A strong agreement between automatic and manual measurements was found. Biases of MRDs in TAO eyes and control eyes ranged from -0.01 mm [95% limits of agreement (LoA): -0.64 to 0.63 mm] to 0.09 mm (LoA: -0.46 to 0.63 mm). ICCs ranged from 0.932 to 0.980 (P<0.001). Eyelid features were significantly different between TAO eyes and control eyes, including MRD1 (4.82±1.59 vs. 2.99±0.81 mm; P<0.001), MRD2 (5.89±1.16 vs. 5.47±0.73 mm; P=0.009), upper eyelid length (UEL) (27.73±4.49 vs. 25.42±4.35 mm; P=0.002), lower eyelid length (LEL) (31.51±4.59 vs. 26.34±4.72 mm; P<0.001), and total scleral area (SATOTAL) (96.14±34.38 vs. 56.91±14.97 mm2; P<0.001). The MPLDs at all angles showed significant differences between the two groups of eyes (P=0.008 at temporal 180°; P<0.001 at other angles). The greatest temporal-nasal asymmetry appeared at 75° from the midline in TAO eyes. Conclusions Our proposed system allows automatic, comprehensive, and objective measurement of eyelid morphology using only facial images, which has potential application prospects in TAO. Future work with a large sample of patients containing different TAO subsets is warranted.
Affiliation(s)
- Ji Shao
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, China
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Tao Gao
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, China
- Jing Cao
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Lixia Lou
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, China
- Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, School of Medicine, Hangzhou, China
12
Ke J, Lv Y, Ma F, Du Y, Xiong S, Wang J, Wang J. Deep learning-based approach for the automatic segmentation of adult and pediatric temporal bone computed tomography images. Quant Imaging Med Surg 2023; 13:1577-1591. [PMID: 36915310 PMCID: PMC10006112 DOI: 10.21037/qims-22-658]
Abstract
Background Automatic segmentation of temporal bone computed tomography (CT) images is fundamental to image-guided otologic surgery and the intelligent analysis of CT images in the field of otology. This study was conducted to test a convolutional neural network (CNN) model that can automatically segment almost all temporal bone anatomical structures in adult and pediatric CT images. Methods A dataset comprising 80 annotated CT volumes was collected, of which 40 samples were obtained from adults and 40 from children. Of these, 60 annotated CT volumes (30 from adults and 30 from children) were used to train the model. The remaining 20 annotated CT volumes were employed to determine the model's generalizability for automatic segmentation. Finally, the Dice coefficient (DC) and average symmetric surface distance (ASSD) were utilized as metrics to evaluate the performance of the CNN model. Two-independent-samples t-tests were used to compare the test set results of adults and children. Results In the adult test set, the mean DC values of all the structures ranged from 0.714 to 0.912, and the ASSD values were less than 0.24 mm for 11 structures. In the pediatric test set, the mean DC values of all the structures ranged from 0.658 to 0.915, and the ASSD values were less than 0.18 mm for 11 structures. There was no statistically significant difference between the adult and pediatric test sets for most temporal bone structures. Conclusions Our CNN model shows excellent automatic segmentation performance and good generalizability for both adult and pediatric temporal bone CT images, which can help to advance otologist education, intelligent imaging diagnosis, surgery simulation, application of augmented reality, and preoperative planning for image-guided otologic surgery.
Affiliation(s)
- Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China.,North China Research Institute of Electro-optics, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Shan Xiong
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China.,Department of Otorhinolaryngology, First Affiliated Hospital, Nanjing Medical University, Nanjing, China
13
Khalal DM, Azizi H, Maalej N. Automatic segmentation of kidneys in computed tomography images using U-Net. Cancer Radiother 2023; 27:109-114. [PMID: 36739197 DOI: 10.1016/j.canrad.2022.08.004]
Abstract
PURPOSE Accurate segmentation of target volumes and organs at risk from computed tomography (CT) images is essential for treatment planning in radiation therapy. The segmentation task is often done manually, making it time-consuming. Moreover, it is biased by the clinician's experience and subject to inter-observer variability. Therefore, and owing to the development of artificial intelligence tools and particularly deep learning (DL) algorithms, automatic segmentation has been proposed as an alternative. The purpose of this work was to use a DL-based method to segment the kidneys on CT images for radiotherapy treatment planning. MATERIALS AND METHODS In this contribution, we used the CT scans of 20 patients. Segmentation of the kidneys was performed using the U-Net model. The Dice similarity coefficient (DSC), the Matthews correlation coefficient (MCC), the Hausdorff distance (HD), the sensitivity and the specificity were used to quantitatively evaluate this delineation. RESULTS The model was able to segment the kidneys with good accuracy, and the values obtained for these metrics are presented. Our results were also compared with those obtained recently by other authors. CONCLUSION Fully automated DL-based segmentation of CT images has the potential to improve both the speed and the accuracy of organ contouring for radiotherapy.
Affiliation(s)
- D M Khalal
- Laboratory of dosing, analysis and characterization in high resolution, Department of Physics, Faculty of Sciences, Ferhat Abbas Sétif 1 University, El Baz campus 19137, Sétif, Algeria.
- H Azizi
- Laboratory of dosing, analysis and characterization in high resolution, Department of Physics, Faculty of Sciences, Ferhat Abbas Sétif 1 University, El Baz campus 19137, Sétif, Algeria
- N Maalej
- Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
14
Luo Y, Zhang J, Yang Y, Rao Y, Chen X, Shi T, Xu S, Jia R, Gao X. Deep learning-based fully automated differential diagnosis of eyelid basal cell and sebaceous carcinoma using whole slide images. Quant Imaging Med Surg 2022; 12:4166-4175. [PMID: 35919066 PMCID: PMC9338367 DOI: 10.21037/qims-22-98]
Abstract
Background The differential diagnosis of eyelid basal cell carcinoma (BCC) and sebaceous carcinoma (SC) is highly dependent on the pathologist’s experience. Herein, we proposed a fully automated differential diagnostic method that uses deep learning (DL) to accurately classify eyelid BCC and SC based on whole slide images (WSIs). Methods We used 116 haematoxylin and eosin (H&E)-stained sections from 116 eyelid BCC patients and 180 H&E-stained sections from 129 eyelid SC patients treated at the Shanghai Ninth People’s Hospital from 2017 to 2019. The method comprises two stages: patch prediction by a DenseNet-161-based DL model, and WSI differentiation by an integration module based on an average-probability strategy. Its differential performance was assessed by carcinoma differentiation accuracy and F1 score. We compared the classification performance of the method with that of three pathologists, two junior and one senior. To validate the auxiliary value of the method, we compared the pathologists’ BCC and SC classification with and without the assistance of our proposed method. Results Our proposed method achieved an accuracy of 0.983, significantly higher than that of the three pathologists (0.644 and 0.729 for the two junior pathologists and 0.831 for the senior pathologist). With the method’s assistance, the pathologists’ accuracy increased significantly (P<0.05), by 28.8% and 15.2%, respectively, for the two junior pathologists and by 11.8% for the senior pathologist. Conclusions Our proposed method accurately classifies eyelid BCC and SC and effectively improves the diagnostic accuracy of pathologists. It may therefore facilitate the development of appropriate and timely therapeutic plans.
Affiliation(s)
- Yingxiu Luo
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.,Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Jiayi Zhang
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.,School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Yidi Yang
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.,Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Yamin Rao
- Department of Pathology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xingyu Chen
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.,Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Tianlei Shi
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.,School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China
- Shiqiong Xu
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.,Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Renbing Jia
- Department of Ophthalmology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.,Shanghai Key Laboratory of Orbital Diseases and Ocular Oncology, Shanghai, China
- Xin Gao
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.,School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, China.,Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, China
15
Wang X, Fan Y, Zhang N, Li J, Duan Y, Yang B. Performance of Machine Learning for Tissue Outcome Prediction in Acute Ischemic Stroke: A Systematic Review and Meta-Analysis. Front Neurol 2022; 13:910259. [PMID: 35873778 PMCID: PMC9305175 DOI: 10.3389/fneur.2022.910259]
Abstract
Machine learning (ML) has been proposed for lesion segmentation in acute ischemic stroke (AIS). This study aimed to provide a systematic review and meta-analysis of the overall performance of current ML algorithms for final infarct prediction from baseline imaging. We performed a comprehensive literature search for eligible studies developing ML models for core infarcted tissue estimation on admission CT or MRI in AIS patients. Eleven studies meeting the inclusion criteria were included in the quantitative analysis. Study characteristics, model methodology, and predictive performance of the included studies were extracted. A meta-analysis was conducted on the Dice similarity coefficient (DSC) score using a random-effects model to assess the overall predictive performance. Study heterogeneity was assessed by Cochran's Q and Higgins' I2 tests. The pooled DSC score of the included ML models was 0.50 (95% CI 0.39–0.61), with high heterogeneity observed across studies (I2 96.5%, p < 0.001). Sensitivity analyses using the one-study-removed method showed that the adjusted overall DSC score ranged from 0.47 to 0.52. Subgroup analyses indicated that the deep learning (DL)-based models outperformed the conventional ML classifiers, with the best performance observed for DL algorithms combined with CT data. Despite the presence of heterogeneity, current ML-based approaches for final infarct prediction showed moderate but promising performance. Before they can be well integrated into the clinical stroke workflow, future investigations should train ML models on large-scale, multi-vendor data, validate them on external cohorts, and adopt formalized reporting standards to improve model accuracy and robustness.
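The pooled estimate quoted above can be reproduced in form (not in value) with a standard DerSimonian-Laird random-effects model; the sketch below uses hypothetical per-study DSC means and standard errors purely to illustrate the pooling, Cochran's Q, and I2 calculations.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study-level Dice scores.
# The per-study means and standard errors are hypothetical placeholders.
import numpy as np

dsc = np.array([0.38, 0.45, 0.52, 0.61, 0.48, 0.55, 0.42, 0.66, 0.50, 0.57, 0.36])
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.05, 0.03, 0.02, 0.04, 0.03])

w = 1.0 / se**2                                    # fixed-effect (inverse-variance) weights
y_fixed = np.sum(w * dsc) / np.sum(w)
Q = np.sum(w * (dsc - y_fixed) ** 2)               # Cochran's Q
df = len(dsc) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                      # between-study variance

w_star = 1.0 / (se**2 + tau2)                      # random-effects weights
pooled = np.sum(w_star * dsc) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
i2 = max(0.0, (Q - df) / Q) * 100.0                # Higgins' I2 (%)

lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled DSC = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.1f}%")
```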
Affiliation(s)
- Xinrui Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Yiming Fan
- Department of Orthopedics, Chinese PLA General Hospital, Beijing, China
- Nan Zhang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Jing Li
- Department of Radiology, Changhai Hospital, Shanghai, China
- Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Benqiang Yang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- *Correspondence: Benqiang Yang
16
Wang J, Chen Z, Yang C, Qu B, Ma L, Fan W, Zhou Q, Zheng Q, Xu S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front Oncol 2022; 12:833816. [PMID: 35433460 PMCID: PMC9008357 DOI: 10.3389/fonc.2022.833816]
Abstract
Purpose The purpose of this study was to evaluate and explore the differences between an atlas-based and a deep learning (DL)-based auto-segmentation scheme for organs at risk (OARs) in nasopharyngeal carcinoma cases, to provide valuable help for clinical practice. Methods 120 nasopharyngeal carcinoma cases were entered into the MIM Maestro (atlas) database and used to train a DL-based model (AccuContour®), and another 20 nasopharyngeal carcinoma cases were randomly selected from outside the atlas database. Experienced physicians contoured 14 OARs in the 20 patients based on the published consensus guidelines, and these were defined as the reference volumes (Vref). Meanwhile, these OARs were auto-contoured using an atlas-based model, a pre-built DL-based model, and an on-site trained DL-based model; these volumes were named Vatlas, VDL-pre-built, and VDL-trained, respectively. The similarities between Vatlas, VDL-pre-built, VDL-trained, and Vref were assessed using the Dice similarity coefficient (DSC), Jaccard coefficient (JAC), maximum Hausdorff distance (HDmax), and deviation of centroid (DC) methods. A one-way ANOVA test was carried out to show the differences between each pair of methods. Results The results of the three methods were almost similar for the brainstem and eyes. For the inner ears and temporomandibular joints, the pre-built DL-based model gave the worst results, as did atlas-based auto-segmentation for the lens. For the segmentation of the optic nerves, the trained DL-based model showed the best performance (p < 0.05). For the contouring of the oral cavity, the DSC value of VDL-pre-built was the smallest and that of VDL-trained the largest (p < 0.05). For the parotid glands, the DSC of Vatlas was the smallest (about 0.80), and those of VDL-pre-built and VDL-trained were slightly larger (about 0.82). Apart from the oral cavity, parotid glands, and brainstem, the maximum Hausdorff distances of the other organs were below 0.5 cm with the trained DL-based segmentation model. The trained DL-based segmentation method performed well in the contouring of all organs, with a maximum average deviation of the centroid of no more than 0.3 cm. Conclusion The trained DL-based segmentation performs significantly better than atlas-based segmentation for nasopharyngeal carcinoma, especially for OARs with small volumes. Although some delineation results still need further modification, auto-segmentation methods improve work efficiency and provide a degree of help for clinical work.
Affiliation(s)
- Jinyuan Wang
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Baolin Qu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Lin Ma
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Wenjun Fan
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Qichao Zhou
- Manteia Technologies Co., Ltd., Xiamen, China
- Qingzeng Zheng
- Department of Radiation Oncology, Beijing Geriatric Hospital, Beijing, China
- Shouping Xu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
17
3D-Image-Guided Multi-Catheter Interstitial Brachytherapy for Bulky and High-Risk Stage IIB-IVB Cervical Cancer. Cancers (Basel) 2022; 14:1257. [PMID: 35267565 PMCID: PMC8909688 DOI: 10.3390/cancers14051257]
Abstract
Simple Summary The prognosis of locally advanced cervical cancer remains poor. Recently, image-guided brachytherapy has improved local control and pelvic control in these patients. Additionally, concurrent chemoradiotherapy with interstitial brachytherapy (ISBT) has demonstrated more favorable outcomes than that with intracavitary brachytherapy. The purpose of our study was to evaluate the efficacy and safety of CT-MRI-guided multi-catheter ISBT for bulky (≥4 cm) and high-risk stage IIB-IVB cervical cancer. A total of 18 patients with squamous cell carcinoma who received concurrent chemoradiotherapy with ISBT were assessed. Four (22.2%), seven (38.9%), and seven (38.9%) patients were diagnosed with stage II, III, and IV cervical cancer, respectively. The four-year local control, pelvic control, disease-free survival, and overall survival rates were 100%, 100%, 81.6%, and 87.8%, respectively. Although three (16.7%) patients experienced grade 3 late adverse events, none had procedure-related complications. CT-MRI-guided multi-catheter ISBT could be a promising treatment strategy for locally advanced cervical cancer.
Abstract This study aimed to evaluate the efficacy and safety of computed tomography-magnetic resonance imaging (CT-MRI)-guided multi-catheter interstitial brachytherapy for patients with bulky (≥4 cm) and high-risk, stage IIB–IVB advanced cervical cancer. Eighteen patients who underwent concurrent chemoradiotherapy with multi-catheter interstitial brachytherapy between September 2014 and August 2020 were enrolled. The prescribed dose of external beam radiotherapy was 45–50.4 Gy, and the aim for high-dose-rate brachytherapy was 25–30 Gy in 5 fractions. The endpoints were the four-year local and pelvic control rates, the four-year disease-free and overall survival rates, and the rate of adverse events. The median follow-up period was 48.4 months (9.1–87.5 months). Fifteen patients received concurrent cisplatin therapy (40 mg/m2, q1week). Four (22.2%), seven (38.9%), and seven (38.9%) patients had stage II, III, and IV cervical cancer, respectively. Pelvic and para-aortic lymph node metastases were observed in 11 (61.1%) and 2 (11.1%) patients, respectively. The median pre-treatment volume was 87.5 cm3. The four-year local control, pelvic control, disease-free survival, and overall survival rates were 100%, 100%, 81.6%, and 87.8%, respectively. Three (16.7%) patients experienced grade 3 adverse events, and none experienced grade 4–5 adverse events. CT-MRI-guided multi-catheter interstitial brachytherapy could be a promising treatment strategy for locally advanced cervical cancer.