1. Podobnik G, Ibragimov B, Peterlin P, Strojan P, Vrtovec T. vOARiability: Interobserver and intermodality variability analysis in OAR contouring from head and neck CT and MR images. Med Phys 2024;51:2175-2186. PMID: 38230752. DOI: 10.1002/mp.16924.
Abstract
BACKGROUND Accurate and consistent contouring of organs-at-risk (OARs) from medical images is a key step of radiotherapy (RT) cancer treatment planning. Most contouring approaches rely on computed tomography (CT) images, but the integration of the complementary magnetic resonance (MR) modality is highly recommended, especially from the perspective of OAR contouring, synthetic CT and MR image generation for MR-only RT, and MR-guided RT. Although MR has been recognized as valuable for contouring OARs in the head and neck (HaN) region, the accuracy and consistency of the resulting contours have not yet been objectively evaluated. PURPOSE To analyze the interobserver and intermodality variability in contouring OARs in the HaN region, performed by observers with different levels of experience from CT and MR images of the same patients. METHODS In the final cohort of 27 CT and MR images of the same patients, contours of up to 31 OARs were obtained by a radiation oncology resident (junior observer, JO) and a board-certified radiation oncologist (senior observer, SO). The resulting contours were evaluated in terms of interobserver variability, characterized as the agreement among different observers (JO and SO) when contouring OARs in a selected modality (CT or MR), and intermodality variability, characterized as the agreement among different modalities (CT and MR) when OARs were contoured by a selected observer (JO or SO), both by the Dice coefficient (DC) and the 95th-percentile Hausdorff distance (HD95). RESULTS The mean (± standard deviation) interobserver variability was 69.0 ± 20.2% and 5.1 ± 4.1 mm, while the mean intermodality variability was 61.6 ± 19.0% and 6.1 ± 4.3 mm in terms of DC and HD95, respectively, across all OARs. Statistically significant differences were found only for specific OARs. The performed MR to CT image registration resulted in a mean target registration error of 1.7 ± 0.5 mm, which was considered valid for the analysis of intermodality variability. CONCLUSIONS The contouring variability was, in general, similar for both image modalities, and experience did not considerably affect contouring performance. However, the results indicate that an OAR that is difficult to contour remains difficult regardless of whether it is contoured in the CT or MR image, and that observer experience may be an important factor for OARs that are deemed difficult to contour. Several of the observed differences in variability can also be attributed to adherence to guidelines, especially for OARs with poor visibility or without distinctive boundaries in either CT or MR images. Although considerable contouring differences were observed for specific OARs, it can be concluded that almost all OARs can be contoured with a similar degree of variability in either the CT or MR modality, which works in favor of MR images from the perspective of MR-only and MR-guided RT.
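The two agreement measures used in this study, Dice and the 95th-percentile Hausdorff distance, can be sketched for binary masks as follows. This is a brute-force illustration, not the authors' implementation (which is not published in the abstract); it assumes isotropic voxel `spacing` and masks that do not touch the array border. Production code would typically use a library such as SimpleITK or scipy instead.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = inside the contour)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance between mask boundaries."""
    def boundary(m):
        m = m.astype(bool)
        # A voxel is on the boundary if any face-neighbour lies outside the mask.
        interior = m.copy()
        for axis in range(m.ndim):
            interior &= np.roll(m, 1, axis) & np.roll(m, -1, axis)
        return np.argwhere(m & ~interior) * spacing

    pa, pb = boundary(a), boundary(b)
    # Directed nearest-neighbour distances in both directions.
    d_ab = np.min(np.linalg.norm(pa[:, None] - pb[None], axis=-1), axis=1)
    d_ba = np.min(np.linalg.norm(pb[:, None] - pa[None], axis=-1), axis=1)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

On this reading, the reported 69.0% interobserver DC corresponds to `dice_coefficient` averaged over OARs and patients, and the 5.1 mm value to `hd95` with the scanner's voxel spacing.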
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
2. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024;25:e14155. PMID: 37712893. PMCID: PMC10860468. DOI: 10.1002/acm2.14155.
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies written on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and the current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multimodal, vision transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
3. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024;65:1-9. PMID: 37996085. PMCID: PMC10803173. DOI: 10.1093/jrr/rrad090.
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Affiliation(s)
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Takeshi Kamomae
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
4. McDonald BA, Dal Bello R, Fuller CD, Balermpas P. The Use of MR-Guided Radiation Therapy for Head and Neck Cancer and Recommended Reporting Guidance. Semin Radiat Oncol 2024;34:69-83. PMID: 38105096. DOI: 10.1016/j.semradonc.2023.10.003.
Abstract
Although magnetic resonance imaging (MRI) has become standard diagnostic workup for head and neck malignancies and is currently recommended by most radiological societies for pharyngeal and oral carcinomas, its utilization in radiotherapy has been heterogeneous over the last decades. However, few would dispute that implementing MRI for the annotation of target volumes and organs at risk provides several advantages, so the modality is widely accepted for this purpose. Today, the term MR-guidance has acquired a much broader meaning, including MRI for adaptive treatments, MR-gating and tracking during radiotherapy delivery, MR features as biomarkers, and MR-only workflows. First studies on the treatment of head and neck cancer on commercially available dedicated hybrid platforms (MR-linacs), with distinct common features but also differences among them, have recently been reported, as has "biological adaptation" based on the evaluation of early treatment response via functional MRI sequences such as diffusion-weighted imaging. Yet all of these approaches to head and neck treatment remain in their infancy, especially compared with other radiotherapy indications. Moreover, the lack of standardization for reporting MR-guided radiotherapy is a major obstacle both to further progress in the field and to conducting and comparing clinical trials. The goals of this article are to present and explain the different aspects of MR-guidance for radiotherapy of head and neck cancer, summarize the evidence as well as the possible advantages and challenges of the method, and provide comprehensive reporting guidance for use in clinical routine and trials.
Affiliation(s)
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
5. Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023;22:104. PMID: 37915046. PMCID: PMC10621161. DOI: 10.1186/s12938-023-01159-y.
Abstract
PURPOSE The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial, yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study conducts a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations. METHODS A literature search of the PubMed, Embase, and Cochrane Library databases was conducted to identify studies on DL contouring of head and neck OARs; the Dice similarity coefficient (DSC) of four categories of OARs was selected from each study as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS Of 149 retrieved articles, 22 studies were included in the meta-analysis after removal of duplicates, primary screening, and re-screening. The pooled DSC effect sizes for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands are 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the pooled effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland from CT versus MRI images are 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87, respectively, and for contouring from 2D versus 3D images 0.88/0.87, 0.92/0.92, 0.75/0.71, and 0.87/0.85.
CONCLUSIONS Automated contouring based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized, and refined treatment plans for implementing "precision radiotherapy". Improving DL performance requires the construction of high-quality data sets and further algorithm optimization and innovation.
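The abstract reports pooled per-study mean DSC values as combined effect sizes. A minimal sketch of one common way to compute such a pooled value is shown below: inverse-variance weighting with a DerSimonian-Laird random-effects correction. The review's exact pooling model is not stated in the abstract, so this model choice, the function name `pooled_dsc`, and the example numbers are assumptions for illustration.

```python
import math

def pooled_dsc(means, variances):
    """Pool per-study mean DSC values (with their variances) into a single
    effect size and a 95% confidence interval, using inverse-variance
    weights plus a DerSimonian-Laird between-study variance (tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)
    # Cochran's Q and the DL estimate of between-study heterogeneity.
    q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(means) - 1)) / c) if c > 0 else 0.0
    # Re-weight with tau^2 added to each within-study variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

The pooled estimate always lies between the smallest and largest study means; heterogeneous studies (large Q) inflate tau^2, which widens the confidence interval and evens out the study weights.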
Affiliation(s)
- Peiru Liu
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Xinzhuo Zhao
- Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
- Ying Yan
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
6. Fujima N, Kamagata K, Ueda D, Fujita S, Fushimi Y, Yanagawa M, Ito R, Tsuboyama T, Kawamura M, Nakaura T, Yamada A, Nozaki T, Fujioka T, Matsui Y, Hirata K, Tatsugami F, Naganawa S. Current State of Artificial Intelligence in Clinical Applications for Head and Neck MR Imaging. Magn Reson Med Sci 2023;22:401-414. PMID: 37532584. PMCID: PMC10552661. DOI: 10.2463/mrms.rev.2023-0047.
Abstract
Owing primarily to its excellent soft-tissue contrast, head and neck MRI is widely applied in clinical practice to assess various diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients presenting with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Kyoto, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Kumamoto, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Okayama, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Hiroshima, Hiroshima, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
7. Chen Y, Gensheimer MF, Bagshaw HP, Butler S, Yu L, Zhou Y, Shen L, Kovalchuk N, Surucu M, Chang DT, Xing L, Han B. Patient-Specific Auto-segmentation on Daily kVCT Images for Adaptive Radiation Therapy. Int J Radiat Oncol Biol Phys 2023;117:505-514. PMID: 37141982. DOI: 10.1016/j.ijrobp.2023.04.026.
Abstract
PURPOSE This study explored deep-learning-based patient-specific auto-segmentation using transfer learning on daily RefleXion kilovoltage computed tomography (kVCT) images to facilitate adaptive radiation therapy, based on data from the first group of patients treated with the innovative RefleXion system. METHODS AND MATERIALS For head and neck (HaN) and pelvic cancers, a deep convolutional segmentation network was initially trained on a population data set that contained 67 and 56 patient cases, respectively. Then the pretrained population network was adapted to the specific RefleXion patient by fine-tuning the network weights with a transfer learning method. For each of the 6 collected RefleXion HaN cases and 4 pelvic cases, initial planning computed tomography (CT) scans and 5 to 26 sets of daily kVCT images were used for the patient-specific learning and evaluation separately. The performance of the patient-specific network was compared with the population network and the clinical rigid registration method and evaluated by the Dice similarity coefficient (DSC) with manual contours being the reference. The corresponding dosimetric effects resulting from different auto-segmentation and registration methods were also investigated. RESULTS The proposed patient-specific network achieved mean DSC results of 0.88 for 3 HaN organs at risk (OARs) of interest and 0.90 for 8 pelvic target and OARs, outperforming the population network (0.70 and 0.63) and the registration method (0.72 and 0.72). The DSC of the patient-specific network gradually increased with the increment of longitudinal training cases and approached saturation with more than 6 training cases. Compared with using the registration contour, the target and OAR mean doses and dose-volume histograms obtained using the patient-specific auto-segmentation were closer to the results using the manual contour. 
CONCLUSIONS Auto-segmentation of RefleXion kVCT images based on the patient-specific transfer learning could achieve higher accuracy, outperforming a common population network and clinical registration-based method. This approach shows promise in improving dose evaluation accuracy in RefleXion adaptive radiation therapy.
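The two-stage idea in this study (pretrain a network on a population cohort, then fine-tune the weights on a patient's own daily images) can be illustrated with a toy numerical stand-in. The linear per-pixel classifier below is an assumption for illustration only, not the paper's convolutional segmentation network, and all data and names here are synthetic: the "population" and "patient" distributions differ, so starting from the population weights and taking a few gradient steps on patient samples should improve patient-specific accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.5, steps=300):
    """Plain gradient descent on the logistic loss; a stand-in for
    network training (pretraining or fine-tuning, depending on init)."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# Stage 1: "population" training data (many patients, generic anatomy).
X_pop = rng.normal(size=(2000, 5))
w_true_pop = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
y_pop = (X_pop @ w_true_pop + rng.normal(scale=0.5, size=2000)) > 0

# Stage 2: a specific patient whose "anatomy" deviates from the population.
w_true_pat = w_true_pop + np.array([0.0, 1.5, 0.0, -1.0, 0.0])
X_pat = rng.normal(size=(200, 5))
y_pat = (X_pat @ w_true_pat + rng.normal(scale=0.5, size=200)) > 0

w_population = train(np.zeros(5), X_pop, y_pop)          # pretraining
w_patient = train(w_population, X_pat[:50], y_pat[:50],  # fine-tune on a few
                  lr=0.2, steps=100)                     # "daily images"

test_X, test_y = X_pat[50:], y_pat[50:]
```

The analogue of the paper's finding is that `w_patient` outperforms `w_population` on held-out patient data, and that the gain saturates once enough patient-specific samples have been seen.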
Affiliation(s)
- Yizheng Chen
- Department of Radiation Oncology, Stanford University, Stanford, California
- Hilary P Bagshaw
- Department of Radiation Oncology, Stanford University, Stanford, California
- Santino Butler
- Department of Radiation Oncology, Stanford University, Stanford, California
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California Santa Cruz, Santa Cruz, California
- Liyue Shen
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Nataliya Kovalchuk
- Department of Radiation Oncology, Stanford University, Stanford, California
- Murat Surucu
- Department of Radiation Oncology, Stanford University, Stanford, California
- Daniel T Chang
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, California
- Bin Han
- Department of Radiation Oncology, Stanford University, Stanford, California
8. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023;10:1078. PMID: 37760180. PMCID: PMC10525905. DOI: 10.3390/bioengineering10091078.
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis; the remainder investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. Medical image synthesis is limited by the size and availability of medical datasets, especially paired datasets of different modalities; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
9. Lei Y, Wang T, Roper J, Tian S, Patel P, Bradley JD, Jani AB, Liu T, Yang X. Automatic segmentation of neurovascular bundle on MRI using deep learning based topological modulated network. Med Phys 2023;50:5479-5488. PMID: 36939189. PMCID: PMC10509305. DOI: 10.1002/mp.16378.
Abstract
PURPOSE Radiation damage to neurovascular bundles (NVBs) may be the cause of sexual dysfunction after radiotherapy for prostate cancer. However, it is challenging to delineate NVBs as organs-at-risk from planning CTs during radiotherapy. Recently, the integration of MR into radiotherapy has made NVB contour delineation possible. In this study, we aim to develop an MRI-based deep learning method for automatic NVB segmentation. METHODS The proposed method, named topological modulated network, consists of three subnetworks: a focal modulation network, a hierarchical block, and a topological fully convolutional network (FCN). The focal modulation derives the locations and bounds of the left and right NVBs, that is, the candidate volumes-of-interest (VOIs). The hierarchical block highlights NVB boundary information on the derived feature maps. The topological FCN then segments the NVBs inside the VOIs while enforcing the topological consistency expected of vascular structures. Based on the location information of the candidate VOIs, the NVB segmentations are then mapped back to the input MRI's coordinate system. RESULTS A five-fold cross-validation study was performed on 60 patient cases to evaluate the performance of the proposed method, with the segmented results compared against manual contours. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95) are 0.81 ± 0.10 and 1.49 ± 0.88 mm for the left NVB, and 0.80 ± 0.15 and 1.54 ± 1.22 mm for the right NVB, respectively. CONCLUSION We proposed a novel deep learning-based segmentation method for NVBs on pelvic MR images. The good agreement of our method with the manually drawn ground-truth contours supports its feasibility; the method can potentially be used to spare NVBs during proton and photon radiotherapy and thereby improve quality of life for prostate cancer patients.
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
10
Song Y, Hu J, Wang Q, Yu C, Su J, Chen L, Jiang X, Chen B, Zhang L, Yu Q, Li P, Wang F, Bai S, Luo Y, Yi Z. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin Transl Radiat Oncol 2023; 41:100635. [PMID: 37251619 PMCID: PMC10213188 DOI: 10.1016/j.ctro.2023.100635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 04/26/2023] [Accepted: 05/01/2023] [Indexed: 05/31/2023] Open
Abstract
Background To comprehensively investigate the behaviors of oncologists with different working experience and institute group styles in deep learning-based organs-at-risk (OAR) contouring. Methods A deep learning-based contouring system (DLCS) was modeled from 188 CT datasets of patients with nasopharyngeal carcinoma (NPC) at institute A. Three institute oncology groups, A, B, and C, were included; each contained a beginner and an expert. For each of the 28 OARs, two trials were performed on ten test cases: manual contouring first and post-DLCS editing later. Contouring performance and group consistency were quantified by volumetric and surface Dice coefficients. A volume-based and a surface-based oncologist satisfaction rate (VOSR and SOSR) were defined to evaluate the oncologists' acceptance of the DLCS. Results With the DLCS, inconsistency attributable to experience was eliminated. Intra-institute inconsistency was eliminated for group C but persisted for groups A and B. Group C benefited most from the DLCS, with the highest number of improved OARs (8 for volumetric Dice and 10 for surface Dice), followed by group B. Beginners obtained more improved OARs than experts (7 vs. 4 for volumetric Dice and 5 vs. 4 for surface Dice). VOSR and SOSR varied across institute groups, but for OARs showing significant differences between experience groups, the rates of beginners were all significantly higher than those of experts. A remarkable positive linear relationship was found between VOSR and post-DLCS-edition volumetric Dice, with a coefficient of 0.78. Conclusions The DLCS was effective across institutes, and beginners benefited more than experts.
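As an illustrative aside (not taken from the cited paper), the surface Dice coefficient used in this study rewards boundary agreement within a tolerance, rather than volumetric overlap. A minimal pure-Python sketch, with the point-set contour representation and the toy data being our own assumptions:

```python
import math

def surface_dice(boundary_a, boundary_b, tol):
    """Surface Dice at tolerance `tol`: fraction of boundary points of each
    contour lying within `tol` of the other contour's boundary."""
    def within(src, dst):
        # count points of `src` whose nearest neighbor in `dst` is within `tol`
        return sum(1 for p in src if min(math.dist(p, q) for q in dst) <= tol)
    hits = within(boundary_a, boundary_b) + within(boundary_b, boundary_a)
    return hits / (len(boundary_a) + len(boundary_b))

# Toy 2D contours: two unit-spaced boundaries offset by one pixel
a = [(x, 0) for x in range(10)]
b = [(x, 1) for x in range(10)]
print(surface_dice(a, b, tol=1.0))  # 1.0 — every point is within 1 px
print(surface_dice(a, b, tol=0.5))  # 0.0 — no point is within 0.5 px
```

The tolerance makes this metric forgiving of small, clinically irrelevant boundary shifts that would still penalize a volumetric Dice score.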
Affiliation(s)
- Ying Song
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Qiang Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Chengrong Yu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Jiachong Su
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Lin Chen
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Xiaorui Jiang
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Bo Chen
- Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Lei Zhang
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Qian Yu
- Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Ping Li
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Feng Wang
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Sen Bai
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Yong Luo
- Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
11
Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023; 13:946. [PMID: 37373935 DOI: 10.3390/jpm13060946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 06/01/2023] [Accepted: 06/01/2023] [Indexed: 06/29/2023] Open
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs-at-risk (OARs), making radiotherapy (RT) planning, optimization, and delivery challenging. In this review, we provide a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on the OAR segmentation process. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool for automating the RT workflow in the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups that include clinicians and computer scientists.
Affiliation(s)
- Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
- Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
- Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
12
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 11/17/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
PURPOSE For cancer in the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not yet been thoroughly explored. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. The patients were randomly split into training Set 1 (42 cases, 75%) and test Set 2 (14 cases, 25%) while maintaining the distributions of patient age, gender, and annotation type. Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine; OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
13
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise the future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric used and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice Similarity Coefficient, used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently: in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was given in only 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework for deciding the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
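The two geometric metrics that dominate this review, the Dice similarity coefficient and the 95th-percentile Hausdorff distance, are straightforward to compute from contour data. A minimal illustrative sketch, not drawn from any of the cited papers (the voxel/point-set representation and the nearest-rank percentile convention are our assumptions):

```python
import math

def dice(mask_a, mask_b):
    """Volumetric Dice between two structures given as sets of voxel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        # nearest-neighbor distance from each point of src to the set dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(pts_a, pts_b) + directed(pts_b, pts_a))
    # 95th percentile via the nearest-rank method
    return d[min(len(d) - 1, math.ceil(0.95 * len(d)) - 1)]

# Toy 2D example: two 3x3 voxel squares shifted by one voxel
a = [(x, y) for x in range(3) for y in range(3)]
b = [(x + 1, y) for x in range(3) for y in range(3)]
print(round(dice(a, b), 3))  # 0.667
print(hd95(a, b))            # 1.0
```

In practice, clinical tools compute these on full 3D masks with anisotropic voxel spacing, which this toy sketch ignores.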
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
- A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
14
Lei Y, Wang T, Jeong JJ, Janopaul-Naylor J, Kesarwala AH, Roper J, Tian S, Bradley JD, Liu T, Higgins K, Yang X. Automated lung tumor delineation on positron emission tomography/computed tomography via a hybrid regional network. Med Phys 2023; 50:274-283. [PMID: 36203393 PMCID: PMC9868056 DOI: 10.1002/mp.16001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 09/20/2022] [Accepted: 09/20/2022] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of the tumor or involved lymph nodes for radiation planning. PURPOSE In this paper, we propose a hybrid regional network method for automatically segmenting lung tumors from PET/CT images. METHODS The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, while the mask regional convolutional neural network (R-CNN) and scoring fine-tune the regional location and the quality of the output segmentation. The model consists of five major subnetworks: a dual feature representation network (DFRN), a regional proposal network (RPN), a tumor-specific R-CNN, a mask-Net, and a score head. Given a PET/CT image as input, the DFRN extracts feature maps from the PET and CT images. The RPN and R-CNN then work together to localize lung tumors and reduce the image and feature-map sizes by removing irrelevant regions. The mask-Net segments the tumor within a volume-of-interest (VOI), with the score head evaluating the segmentation produced by the mask-Net. Finally, the segmented tumor within the VOI is mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jaccard index, 95th-percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; (2) Bland-Altman analysis and volumetric Pearson correlation analysis. RESULTS In fivefold cross-validation, this method achieved a Dice coefficient and MSD of 0.84 ± 0.15 and 1.38 ± 2.2 mm, respectively. The model segments a new PET/CT image in 1 s. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION The proposed method shows great promise for automatically delineating NSCLC tumors on PET/CT images, thereby allowing a more streamlined clinical workflow that is faster and reduces physician effort.
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Jiwoong J Jeong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, School of Medicine, Atlanta, Georgia, USA
15
Cubero L, Castelli J, Simon A, de Crevoisier R, Acosta O, Pascau J. Deep Learning-Based Segmentation of Head and Neck Organs-at-Risk with Clinical Partially Labeled Data. ENTROPY (BASEL, SWITZERLAND) 2022; 24:e24111661. [PMID: 36421515 PMCID: PMC9689629 DOI: 10.3390/e24111661] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/28/2022] [Accepted: 11/09/2022] [Indexed: 06/06/2023]
Abstract
Radiotherapy is one of the main treatments for localized head and neck (HN) cancer. To design a personalized treatment with reduced radio-induced toxicity, accurate delineation of organs-at-risk (OARs) is a crucial step. Manual delineation is time- and labor-consuming, as well as observer-dependent. Deep learning (DL)-based segmentation has proven to overcome some of these limitations, but it requires large databases of homogeneously contoured image sets for robust training. However, these are not easily obtained from standard clinical protocols, as the OARs delineated may vary depending on the patient's tumor site and specific treatment plan, resulting in incomplete or partially labeled data. This paper presents a solution for training a robust DL-based automated segmentation tool from a clinical, partially labeled dataset. We propose a two-step workflow for OAR segmentation: first, we developed longitudinal OAR-specific 3D segmentation models for pseudo-contour generation, completing the missing contours for some patients; with all OARs available, we then trained a multi-class 3D convolutional neural network (nnU-Net) for final OAR segmentation. Results obtained on 44 independent datasets showed superior performance of the proposed methodology for the segmentation of fifteen OARs, with an average Dice similarity coefficient and surface Dice similarity coefficient of 80.59% and 88.74%, respectively. We demonstrated that the model can be straightforwardly integrated into the clinical workflow for standard and adaptive radiotherapy.
Affiliation(s)
- Lucía Cubero
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Joël Castelli
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Antoine Simon
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Oscar Acosta
- Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Javier Pascau
- Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
16
Kalendralis P, Sloep M, Moni George N, Snel J, Veugen J, Hoebers F, Wesseling F, Unipan M, Veening M, Langendijk JA, Dekker A, van Soest J, Fijten R. Independent validation of a dysphagia dose response model for the selection of head and neck cancer patients to proton therapy. Phys Imaging Radiat Oncol 2022; 24:47-52. [PMID: 36158240 PMCID: PMC9493379 DOI: 10.1016/j.phro.2022.09.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 09/09/2022] [Accepted: 09/09/2022] [Indexed: 12/09/2022] Open
Abstract
Background and purpose The model-based approach uses normal tissue complication probability models to select head and neck cancer patients for proton therapy. Our goal was to validate the clinical utility of the associated dysphagia model using an independent patient cohort. Materials and methods A dataset of 277 head and neck cancer (pharynx and larynx) patients treated with (chemo)radiotherapy between 2019 and 2021 was acquired. To evaluate the model's discrimination, we used statistical metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). After the validation, we evaluated whether the dysphagia model could be improved using the closed testing procedure, the Brier score, and the Hosmer-Lemeshow test. Results The performance of the original normal tissue complication probability model for grade II-IV dysphagia at 6 months was good (AUC = 0.80). According to the graphical calibration assessment, the original model underestimated the dysphagia risk. The closed testing procedure indicated that the model had to be updated and selected a revised model with new predictor coefficients as the optimal model. The revised model also had satisfactory discrimination (AUC = 0.83) with improved calibration. Conclusion The validation of the normal tissue complication probability model for grade II-IV dysphagia was successful in our independent validation cohort. However, the closed testing procedure indicated that the model should be updated with new coefficients.
17
Wang J, Chen Z, Yang C, Qu B, Ma L, Fan W, Zhou Q, Zheng Q, Xu S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front Oncol 2022; 12:833816. [PMID: 35433460 PMCID: PMC9008357 DOI: 10.3389/fonc.2022.833816] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Accepted: 02/25/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose The purpose of this study was to evaluate and explore the differences between atlas-based and deep learning (DL)-based auto-segmentation schemes for organs-at-risk (OARs) in nasopharyngeal carcinoma cases, to provide valuable guidance for clinical practice. Methods 120 nasopharyngeal carcinoma cases were entered into the MIM Maestro (atlas) database and used to train a DL-based model (AccuContour®), and another 20 nasopharyngeal carcinoma cases were randomly selected from outside the atlas database. Experienced physicians contoured 14 OARs in the 20 patients based on published consensus guidelines; these were defined as the reference volumes (Vref). Meanwhile, these OARs were auto-contoured using an atlas-based model, a pre-built DL-based model, and an on-site trained DL-based model; the resulting volumes were named Vatlas, VDL-pre-built, and VDL-trained, respectively. The similarities between Vatlas, VDL-pre-built, VDL-trained, and Vref were assessed using the Dice similarity coefficient (DSC), Jaccard coefficient (JAC), maximum Hausdorff distance (HDmax), and deviation of centroid (DC). A one-way ANOVA test was carried out to show the differences between each pair of methods. Results The results of the three methods were similar for the brainstem and eyes. For the inner ears and temporomandibular joints, the pre-built DL-based model performed worst, as did the atlas-based auto-segmentation for the lens. For the segmentation of the optic nerves, the trained DL-based model showed the best performance (p < 0.05). For the contouring of the oral cavity, the DSC value of VDL-pre-built was the smallest and that of VDL-trained the largest (p < 0.05). For the parotid glands, the DSC of Vatlas was the smallest (about 0.80), while VDL-pre-built and VDL-trained were slightly larger (about 0.82). Apart from the oral cavity, parotid glands, and brainstem, the maximum Hausdorff distances of the other organs were below 0.5 cm with the trained DL-based segmentation model. The trained DL-based segmentation method performed well in the contouring of all organs, with a maximum average centroid deviation of no more than 0.3 cm. Conclusion The trained DL-based segmentation performs significantly better than atlas-based segmentation for nasopharyngeal carcinoma, especially for OARs with small volumes. Although some delineation results still need further modification, auto-segmentation methods improve work efficiency and provide a level of help for clinical work.
Affiliation(s)
- Jinyuan Wang
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Baolin Qu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Lin Ma
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Wenjun Fan
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Qichao Zhou
- Manteia Technologies Co., Ltd., Xiamen, China
- Qingzeng Zheng
- Department of Radiation Oncology, Beijing Geriatric Hospital, Beijing, China
- Shouping Xu
- Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
18
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the fields of AI application in RT, limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow addressed by the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
19
Amjad A, Xu J, Thill D, Lawton C, Hall W, Awan MJ, Shukla M, Erickson BA, Li XA. General and custom deep learning autosegmentation models for organs in head and neck, abdomen, and male pelvis. Med Phys 2022; 49:1686-1700. [PMID: 35094390 PMCID: PMC8917093 DOI: 10.1002/mp.15507] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 01/19/2022] [Accepted: 01/21/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To reduce workload and inconsistencies in organ segmentation for radiation treatment planning, we developed and evaluated general and custom autosegmentation models on computed tomography (CT) for three major tumor sites using a well-established deep convolutional neural network (DCNN). METHODS Five CT-based autosegmentation models for 42 organs at risk (OARs) in the head and neck (HN), abdomen (ABD), and male pelvis (MP) were developed using a full three-dimensional (3D) DCNN architecture. Two types of deep learning (DL) models were trained separately, using either general diversified multi-institutional datasets or custom well-controlled single-institution datasets. To improve segmentation accuracy, an adaptive spatial resolution approach for small and/or narrow OARs and a pseudo scan extension approach, applied when the CT scan length is too short to cover entire organs, were implemented. The performance of the obtained models was evaluated in terms of accuracy and clinical applicability of the autosegmented contours using qualitative visual inspection and quantitative calculation of the Dice similarity coefficient (DSC), mean distance to agreement (MDA), and time efficiency. RESULTS The five DL autosegmentation models developed for the three anatomical sites were found to have high accuracy (DSC ranging from 0.8 to 0.98) for 74% of OARs and marginally acceptable accuracy for the remaining 26%. The custom models performed slightly better than the general models, even though smaller custom datasets were used for the custom model training. The organ-based approaches improved autosegmentation accuracy for small or complex organs (e.g., eye lens, optic nerves, inner ears, and bowels). Compared with traditional manual contouring times, the autosegmentation times, including subsequent manual editing where necessary, were substantially reduced: by 88% for the MP, 80% for the HN, and 65% for the ABD models.
CONCLUSIONS The obtained autosegmentation models, incorporating organ-based approaches, were found to be effective and accurate for most OARs in the male pelvis, head and neck, and abdomen. We have demonstrated that our multianatomical DL autosegmentation models are clinically useful for radiation treatment planning.
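The abstracts above evaluate contour agreement with the Dice similarity coefficient (DSC), defined as twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch of this metric on NumPy arrays (function name and example masks are illustrative, not from the cited papers):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (identical masks).
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: two 4x4 squares on a 10x10 grid, overlapping in a 3x3 region
auto = np.zeros((10, 10), dtype=bool)
manual = np.zeros((10, 10), dtype=bool)
auto[2:6, 2:6] = True      # 16 voxels
manual[3:7, 3:7] = True    # 16 voxels, 9 shared with `auto`
print(dice_coefficient(auto, manual))  # 2*9 / (16+16) = 0.5625
```

Distance-based metrics such as the 95th-percentile Hausdorff distance (HD95) used in the first abstract complement DSC by penalizing boundary outliers that volume overlap alone can miss.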
Affiliation(s)
- Asma Amjad
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Colleen Lawton
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- William Hall
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Musaddiq J. Awan
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Monica Shukla
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- Beth A. Erickson
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
- X. Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, WI, USA
|