1. Rajendran P, Yang Y, Niedermayr TR, Gensheimer M, Beadle B, Le QT, Xing L, Dai X. Large Language Model-Augmented Auto-Delineation of Treatment Target Volume in Radiation Therapy. arXiv 2024; arXiv:2407.07296v1. [PMID: 39040646] [PMCID: PMC11261986]
Abstract
Radiation therapy (RT) is one of the most effective treatments for cancer, and its success relies on accurate delineation of targets. However, target delineation is a comprehensive medical decision that currently relies purely on manual processes by human experts. Manual delineation is time-consuming, laborious, and subject to interobserver variation. Although advances in artificial intelligence (AI) have significantly enhanced auto-contouring of normal tissues, accurate delineation of RT target volumes remains a challenge. In this study, we propose a visual language model-based RT target volume auto-delineation network termed Radformer. Radformer uses a hierarchical vision transformer as its backbone and incorporates large language models to extract text-rich features from clinical data. We introduce a visual language attention module (VLAM) that integrates visual and linguistic features for language-aware visual encoding (LAVE). Radformer was evaluated on a dataset of 2985 patients with head-and-neck cancer who underwent RT. Metrics including the Dice similarity coefficient (DSC), intersection over union (IoU), and 95th-percentile Hausdorff distance (HD95) were used to quantitatively evaluate the model's performance. Our results demonstrate that Radformer achieves superior segmentation performance compared with other state-of-the-art models, validating its potential for adoption in RT practice.
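The evaluation metrics named in this abstract (DSC, IoU, HD95) can be sketched in a few lines of numpy/scipy. This is an illustrative simplification, not the paper's implementation; in particular, this HD95 measures distances between all foreground voxels rather than extracted surfaces:

```python
import numpy as np
from scipy.spatial import cKDTree

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance.

    Simplification: uses all foreground voxels rather than an extracted
    surface; `spacing` converts voxel indices to physical units.
    """
    p = np.argwhere(pred) * np.asarray(spacing)
    g = np.argwhere(gt) * np.asarray(spacing)
    d_pg = cKDTree(g).query(p)[0]  # pred -> gt nearest-neighbor distances
    d_gp = cKDTree(p).query(g)[0]  # gt -> pred nearest-neighbor distances
    return np.percentile(np.hstack([d_pg, d_gp]), 95)
```

For identical masks these return DSC = 1, IoU = 1, and HD95 = 0, which is a useful sanity check when wiring up an evaluation pipeline.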
Affiliation(s)
- Praveenbalaji Rajendran
- Stanford University, Stanford, CA 94305 USA. Now with Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Yong Yang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Thomas R Niedermayr
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Michael Gensheimer
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Beth Beadle
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Quynh-Thu Le
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
- Xianjin Dai
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305 USA
2. McDonald BA, Dal Bello R, Fuller CD, Balermpas P. The Use of MR-Guided Radiation Therapy for Head and Neck Cancer and Recommended Reporting Guidance. Semin Radiat Oncol 2024; 34:69-83. [PMID: 38105096] [PMCID: PMC11372437] [DOI: 10.1016/j.semradonc.2023.10.003]
Abstract
Although magnetic resonance imaging (MRI) has become part of the standard diagnostic workup for head and neck malignancies and is currently recommended by most radiological societies for pharyngeal and oral carcinomas, its utilization in radiotherapy has been heterogeneous over the last decades. However, few would argue that using MRI for annotation of target volumes and organs at risk provides several advantages, so implementation of the modality for this purpose is widely accepted. Today, the term MR-guidance has acquired a much broader meaning, encompassing MRI for adaptive treatments, MR-gating and tracking during radiotherapy delivery, MR features as biomarkers, and MR-only workflows. The first studies on treatment of head and neck cancer on commercially available dedicated hybrid platforms (MR-linacs), which share distinct common features but also differ among themselves, have recently been reported, as has "biological adaptation" based on evaluation of early treatment response via functional MRI sequences such as diffusion-weighted imaging. Yet all of these approaches to head and neck treatment remain in their infancy, especially compared with other radiotherapy indications. Moreover, the lack of standardization for reporting MR-guided radiotherapy is a major obstacle both to further progress in the field and to conducting and comparing clinical trials. The goals of this article are to present and explain the different aspects of MR-guidance for radiotherapy of head and neck cancer, summarize the evidence as well as the possible advantages and challenges of the method, and provide comprehensive reporting guidance for use in clinical routine and trials.
Affiliation(s)
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
3. Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023; 22:104. [PMID: 37915046] [PMCID: PMC10621161] [DOI: 10.1186/s12938-023-01159-y]
Abstract
PURPOSE The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study conducts a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations. METHODS A literature search of the PubMed, Embase, and Cochrane Library databases was conducted for studies on DL contouring of head and neck OARs; the Dice similarity coefficient (DSC) of four categories of OARs from each study was selected as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS Of 149 articles retrieved, 22 studies were included in the meta-analysis after removal of duplicates and two rounds of screening. The combined effect sizes of DSC for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands were 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the combined effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland using CT versus MRI images were 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87, respectively; pooled effect sizes using 2D versus 3D images were 0.88/0.87, 0.92/0.92, 0.75/0.71, and 0.87/0.85.
CONCLUSIONS Automated contouring based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized, and refined treatment plans for "precision radiotherapy". Improving DL performance requires the construction of high-quality datasets and continued algorithm optimization and innovation.
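The "combined effect sizes" in this abstract come from standard meta-analytic pooling of per-study DSC values. As a hedged sketch only: the abstract does not state which pooling model was used, so the fixed-effect inverse-variance form and the numbers below are illustrative assumptions, not the review's method:

```python
import numpy as np

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes.

    Returns the pooled estimate and its standard error. A random-effects
    model (e.g. DerSimonian-Laird) would add a between-study variance term;
    this is the simplest illustrative variant.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# hypothetical per-study DSC values for one organ, with hypothetical variances
dsc = [0.86, 0.88, 0.87]
var = [0.0004, 0.0009, 0.0001]
est, se = pool_fixed_effect(dsc, var)
```

With equal variances the pooled estimate reduces to the plain mean, which makes the weighting easy to sanity-check.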
Affiliation(s)
- Peiru Liu
- Department of Radiation Oncology, General Hospital of Northern Theater Command, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun
- Department of Radiation Oncology, General Hospital of Northern Theater Command, Shenyang, China
- Xinzhuo Zhao
- School of Electrical Engineering, Shenyang University of Technology, Shenyang, China
- Ying Yan
- Department of Radiation Oncology, General Hospital of Northern Theater Command, Shenyang, China
4. Fujima N, Kamagata K, Ueda D, Fujita S, Fushimi Y, Yanagawa M, Ito R, Tsuboyama T, Kawamura M, Nakaura T, Yamada A, Nozaki T, Fujioka T, Matsui Y, Hirata K, Tatsugami F, Naganawa S. Current State of Artificial Intelligence in Clinical Applications for Head and Neck MR Imaging. Magn Reson Med Sci 2023; 22:401-414. [PMID: 37532584] [PMCID: PMC10552661] [DOI: 10.2463/mrms.rev.2023-0047]
Abstract
Owing primarily to its excellent soft-tissue contrast, head and neck MRI is widely applied in clinical practice to assess various diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Kyoto, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Kumamoto, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Okayama, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Hiroshima, Hiroshima, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
5. Wang J, Peng Y. MHL-Net: A Multistage Hierarchical Learning Network for Head and Neck Multiorgan Segmentation. IEEE J Biomed Health Inform 2023; 27:4074-4085. [PMID: 37171918] [DOI: 10.1109/jbhi.2023.3275746]
Abstract
Accurate segmentation of head and neck organs at risk is crucial in radiotherapy. However, existing methods suffer from incomplete feature mining, insufficient information utilization, and difficulty in simultaneously improving the segmentation of both small and large organs. In this paper, a multistage hierarchical learning network is designed to fully extract multidimensional features, combining anatomical prior information with imaging features and using multistage subnetworks to improve segmentation performance. First, multilevel subnetworks are constructed for primary segmentation, localization, and fine segmentation by dividing organs into two levels, large and small. Each network has its own learning focus, while feature reuse and information sharing among them comprehensively improve the segmentation performance for all organs. Second, an anatomical prior probability map and a boundary contour attention mechanism are developed to address complex anatomical shapes; prior information and boundary contour features effectively assist in detecting and segmenting such shapes. Finally, a multidimensional combined attention mechanism is proposed to analyze axial, coronal, and sagittal information, capture spatial and channel features, and maximize the use of the structural information and semantic features of 3D medical images. Experimental results on several datasets showed that our method is competitive with state-of-the-art methods and improves segmentation results for multiscale organs.
6. Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023; 13:946. [PMID: 37373935] [DOI: 10.3390/jpm13060946]
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs at risk (OARs), making radiotherapy (RT) planning, optimization, and delivery challenging. In this review, we provide a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on OAR segmentation. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool for automating the RT workflow in the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups that include clinicians and computer scientists.
Affiliation(s)
- Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
- Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
7. van Elst S, de Bloeme CM, Noteboom S, de Jong MC, Moll AC, Göricke S, de Graaf P, Caan MWA. Automatic segmentation and quantification of the optic nerve on MRI using a 3D U-Net. J Med Imaging (Bellingham) 2023; 10:034501. [PMID: 37197374] [PMCID: PMC10185127] [DOI: 10.1117/1.jmi.10.3.034501]
Abstract
Purpose Pathological conditions associated with the optic nerve (ON) can cause structural changes in the nerve. Quantifying these changes could provide further understanding of disease mechanisms. We aim to develop a framework that automatically segments the ON separately from its surrounding cerebrospinal fluid (CSF) on magnetic resonance imaging (MRI) and quantifies the diameter and cross-sectional area along the entire length of the nerve. Approach Multicenter data were obtained from retinoblastoma referral centers, providing a heterogeneous dataset of 40 high-resolution 3D T2-weighted MRI scans with manual ground-truth delineations of both ONs. A 3D U-Net was used for ON segmentation, and performance was assessed in tenfold cross-validation (n = 32) and on a separate test set (n = 8) by measuring spatial, volumetric, and distance agreement with the manual ground truths. Segmentations were used to quantify the diameter and cross-sectional area along the length of the ON, using centerline extraction of tubular 3D surface models. Absolute agreement between automated and manual measurements was assessed by the intraclass correlation coefficient (ICC). Results The segmentation network achieved high performance, with a mean Dice similarity coefficient of 0.84, median Hausdorff distance of 0.64 mm, and ICC of 0.95 on the test set. The quantification method obtained acceptable correspondence to manual reference measurements, with mean ICC values of 0.76 for the diameter and 0.71 for the cross-sectional area. Compared with other methods, our method precisely identifies the ON from the surrounding CSF and accurately estimates its diameter along the nerve's centerline. Conclusions Our automated framework provides an objective method for ON assessment in vivo.
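Absolute agreement between two raters (here, automated vs. manual measurements) is commonly quantified with ICC(2,1): two-way random effects, absolute agreement, single measurement (Shrout-Fleiss). The abstract does not state which ICC form was used, so treating it as ICC(2,1) is an assumption; a minimal numpy sketch:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `ratings` is an (n_subjects, k_raters) array, e.g. automated and manual
    measurements in the two columns.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-rater means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # mean square, subjects
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because this form measures absolute agreement, a constant offset between raters lowers the ICC even when the two columns are perfectly correlated.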
Affiliation(s)
- Sabien van Elst
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Christiaan M. de Bloeme
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Samantha Noteboom
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, Amsterdam, The Netherlands
- Marcus C. de Jong
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Annette C. Moll
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Ophthalmology, Amsterdam, The Netherlands
- Sophia Göricke
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Pim de Graaf
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Matthan W. A. Caan
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
8. Zhang Y, Chen C, Huang W, Teng Y, Shu X, Zhao F, Xu J, Zhang L. Preoperative volume of the optic chiasm is an easily obtained predictor for visual recovery of pituitary adenoma patients following endoscopic endonasal transsphenoidal surgery: a cohort study. Int J Surg 2023; 109:896-904. [PMID: 36999782] [PMCID: PMC10389445] [DOI: 10.1097/js9.0000000000000357]
Abstract
BACKGROUND Predicting the postoperative visual outcome of pituitary adenoma patients is important but remains challenging. This study aimed to identify a novel prognostic predictor that can be automatically obtained from routine MRI using a deep learning approach. MATERIALS AND METHODS A total of 220 pituitary adenoma patients were prospectively enrolled and stratified into recovery and nonrecovery groups according to visual outcome at 6 months after endoscopic endonasal transsphenoidal surgery. The optic chiasm was manually segmented on preoperative coronal T2WI, and its morphometric parameters were measured, including suprasellar extension distance, chiasmal thickness, and chiasmal volume. Univariate and multivariate analyses of clinical and morphometric parameters were conducted to identify predictors of visual recovery. Additionally, a deep learning model for automated segmentation and volumetric measurement of the optic chiasm was developed with the nnU-Net architecture and evaluated on a multicenter dataset covering 1026 pituitary adenoma patients from four institutions. RESULTS Larger preoperative chiasmal volume was significantly associated with better visual outcomes (P = 0.001). Multivariate logistic regression suggested it could be taken as an independent predictor of visual recovery (odds ratio = 2.838, P < 0.001). The auto-segmentation model showed good performance and generalizability in the internal test set (Dice = 0.813) and three independent external test sets (Dice = 0.786, 0.818, and 0.808, respectively). Moreover, the model achieved accurate volumetric evaluation of the optic chiasm, with an intraclass correlation coefficient of more than 0.83 in both internal and external test sets. CONCLUSION The preoperative volume of the optic chiasm can be used as a prognostic predictor of visual recovery in pituitary adenoma patients after surgery. Moreover, the proposed deep learning-based model allows automated segmentation and volumetric measurement of the optic chiasm on routine MRI.
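An odds ratio such as the 2.838 reported here is recovered from the fitted logistic-regression coefficient as OR = exp(beta), with a Wald confidence interval exp(beta ± 1.96·SE). A small sketch; the standard error below is hypothetical, not taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald CI from a logistic-regression coefficient.

    beta: fitted coefficient; se: its standard error; z: normal quantile
    (1.96 for a 95% interval).
    """
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# beta back-calculated from the reported OR; se = 0.25 is a hypothetical value
beta = math.log(2.838)
or_, (lo, hi) = odds_ratio_ci(beta, se=0.25)
```

Since exp is monotonic, a coefficient whose CI excludes 0 corresponds to an OR whose CI excludes 1, which is the usual significance check on the odds-ratio scale.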
Affiliation(s)
- Yang Zhang
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Chaoyue Chen
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Wei Huang
- College of Computer Science, Sichuan University
- Yuen Teng
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Xin Shu
- College of Computer Science, Sichuan University
- Fumin Zhao
- Department of Radiology, West China Second University Hospital, Sichuan University
- Jianguo Xu
- Department of Neurosurgery, West China Hospital, Sichuan University
- Department of Radiology, West China Hospital, Sichuan University
- Lei Zhang
- College of Computer Science, Sichuan University
9. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023; arXiv:2303.11378v2. [PMID: 36994167] [PMCID: PMC10055493]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
10. Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372] [DOI: 10.1002/mp.16197]
Abstract
PURPOSE Radiotherapy (RT) is an important treatment modality for cancer of the head and neck (HaN). Segmentation of organs at risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. Maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases, 75%) and test Set 2 (14 cases, 25%). Baseline auto-segmentation results are also provided, obtained by training the publicly available nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine; OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
11. Deep learning-based two-step organs at risk auto-segmentation model for brachytherapy planning in parotid gland carcinoma. J Contemp Brachytherapy 2022; 14:527-535. [PMID: 36819465] [PMCID: PMC9924151] [DOI: 10.5114/jcb.2022.123972]
Abstract
Purpose Delineation of organs at risk (OARs) is a crucial step both for tailored delivery of radiation doses and for prevention of radiation-induced toxicity in brachytherapy. Given the lack of studies on auto-segmentation methods in head and neck cancers, our study proposes a deep learning-based two-step approach for auto-segmentation of organs at risk in parotid carcinoma brachytherapy. Material and methods Computed tomography images of 200 patients with parotid gland carcinoma were used to train and evaluate our in-house two-step 3D nnU-Net-based model for OAR auto-segmentation. OARs during brachytherapy were defined as the auricula, condylar process, skin, mastoid process, external auditory canal, and mandibular ramus. Auto-segmentation results were compared to manual segmentations by expert oncologists. Accuracy was quantitatively evaluated in terms of the Dice similarity coefficient (DSC), Jaccard index, 95th-percentile Hausdorff distance (95HD), and precision and recall. Qualitative evaluation of the auto-segmentation results was also performed. Results The mean DSC values for each OAR were 0.88, 0.91, 0.75, 0.89, 0.74, and 0.93, respectively, indicating close resemblance of the auto-segmentation results to the manual contours. In addition, auto-segmentation could be completed within a minute, compared with over 20 minutes for manual segmentation. All generated results were deemed clinically acceptable. Conclusions Our proposed deep learning-based two-step OAR auto-segmentation model demonstrated high efficiency and good agreement with gold-standard manual contours. This novel approach thus has the potential to expedite the treatment planning process of brachytherapy for parotid gland cancers while allowing more accurate radiation delivery to minimize toxicity.
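The precision and recall used in this evaluation are per-voxel quantities over binary masks. A minimal numpy sketch, illustrative rather than the study's actual code:

```python
import numpy as np

def precision_recall(pred, gt):
    """Per-voxel precision and recall for binary segmentation masks.

    precision = TP / (TP + FP): fraction of predicted voxels that are correct.
    recall    = TP / (TP + FN): fraction of reference voxels that are found.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp), tp / (tp + fn)
```

High precision with low recall indicates undersegmentation (contours too tight), while the reverse indicates oversegmentation, which is why both are reported alongside the DSC.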