1
Oliver J, Alapati R, Lee J, Bur A. Artificial Intelligence in Head and Neck Surgery. Otolaryngol Clin North Am 2024; 57:803-820. [PMID: 38910064 PMCID: PMC11374486 DOI: 10.1016/j.otc.2024.05.001]
Abstract
This article explores artificial intelligence's (AI's) role in otolaryngology for head and neck cancer diagnosis and management. It highlights AI's potential in pattern recognition for early cancer detection, prognostication, and treatment planning, primarily through image analysis using clinical, endoscopic, and histopathologic images. Radiomics is also discussed at length, along with the many ways radiologic image analysis can be utilized, including for diagnosis, lymph node metastasis prediction, and evaluation of treatment response. The article highlights AI's promise and limitations, underlining the need for clinician-data scientist collaboration to enhance head and neck cancer care.
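The radiomics approach mentioned above reduces a segmented image region to quantitative features. As a rough illustration of the idea (not the article's method), here is a minimal numpy sketch of first-order feature extraction over a masked region; the feature set and toy image are invented for illustration:

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """Compute a few first-order radiomics-style features over a masked region.

    image: 2D/3D intensity array; mask: boolean array of the same shape.
    The feature set here is illustrative, not a radiomics standard."""
    voxels = image[mask]
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }

# Toy example: a uniformly bright "lesion" inside a dark background
img = np.zeros((16, 16))
img[4:8, 4:8] = 5.0
roi = img > 0
feats = first_order_features(img, roi)
```

A real pipeline (e.g. pyradiomics) computes dozens of standardized first-order, shape, and texture features, but each is, like these, a deterministic function of the masked intensities.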
Affiliation(s)
- Jamie Oliver
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Rahul Alapati
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Jason Lee
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Andrés Bur
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA.
2
Liu J, Zhang Y, Wang K, Yavuz MC, Chen X, Yuan Y, Li H, Yang Y, Yuille A, Tang Y, Zhou Z. Universal and extensible language-vision models for organ segmentation and tumor detection from abdominal computed tomography. Med Image Anal 2024; 97:103226. [PMID: 38852215 DOI: 10.1016/j.media.2024.103226]
Abstract
The advancement of artificial intelligence (AI) for organ segmentation and tumor detection is propelled by the growing availability of computed tomography (CT) datasets with detailed, per-voxel annotations. However, these AI models often struggle with flexibility for partially annotated datasets and extensibility for new classes due to limitations in the one-hot encoding, architectural design, and learning scheme. To overcome these limitations, we propose a universal, extensible framework enabling a single model, termed Universal Model, to deal with multiple public datasets and adapt to new classes (e.g., organs/tumors). Firstly, we introduce a novel language-driven parameter generator that leverages language embeddings from large language models, enriching semantic encoding compared with one-hot encoding. Secondly, the conventional output layers are replaced with lightweight, class-specific heads, allowing Universal Model to simultaneously segment 25 organs and six types of tumors and ease the addition of new classes. We train our Universal Model on 3410 CT volumes assembled from 14 publicly available datasets and then test it on 6173 CT volumes from four external datasets. Universal Model achieves first place on six CT tasks in the Medical Segmentation Decathlon (MSD) public leaderboard and leading performance on the Beyond The Cranial Vault (BTCV) dataset. In summary, Universal Model exhibits remarkable computational efficiency (6× faster than other dataset-specific models), demonstrates strong generalization across different hospitals, transfers well to numerous downstream tasks, and more importantly, facilitates the extensibility to new classes while alleviating the catastrophic forgetting of previously learned classes. Codes, models, and datasets are available at https://github.com/ljwztc/CLIP-Driven-Universal-Model.
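The abstract's two key ideas, language-derived class encodings and lightweight class-specific heads, can be sketched in a few lines. The sketch below is a loose illustration, not the paper's implementation: random vectors stand in for frozen language embeddings, a single linear map plays the role of the parameter generator, and all shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, FEAT = 8, 16          # embedding and feature dims (illustrative)

# Stand-in for frozen language embeddings of class names; random here.
class_embeddings = {"liver": rng.normal(size=EMB),
                    "liver_tumor": rng.normal(size=EMB)}

# Language-driven parameter generator: maps a class embedding to the
# parameters of a lightweight per-class head (here a linear map FEAT -> 1).
W_gen = rng.normal(size=(EMB, FEAT + 1)) * 0.1

def head_params(embedding):
    theta = embedding @ W_gen
    return theta[:FEAT], theta[FEAT]          # weights, bias

def segment(features, class_name):
    """Per-voxel logits for one class from shared backbone features."""
    w, b = head_params(class_embeddings[class_name])
    return features @ w + b                   # (n_voxels,)

feats = rng.normal(size=(100, FEAT))          # fake backbone output
logits = segment(feats, "liver")

# Adding a new class needs only a new embedding, not a new output layer:
class_embeddings["pancreas"] = rng.normal(size=EMB)
new_logits = segment(feats, "pancreas")
```

The point of this structure, as the abstract describes, is that the output layer is no longer a fixed-width one-hot map, so new organ or tumor classes can be bolted on without re-architecting the model.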
Affiliation(s)
- Jie Liu
- City University of Hong Kong, Hong Kong
- Yixiao Zhang
- Johns Hopkins University, United States of America
- Kang Wang
- University of California, San Francisco, United States of America
- Mehmet Can Yavuz
- University of California, San Francisco, United States of America
- Xiaoxi Chen
- University of Illinois Urbana-Champaign, United States of America
- Yang Yang
- University of California, San Francisco, United States of America
- Alan Yuille
- Johns Hopkins University, United States of America
- Zongwei Zhou
- Johns Hopkins University, United States of America.
3
Park IS, Kim S, Jang JW, Park SW, Yeo NY, Seo SY, Jeon I, Shin SH, Kim Y, Choi HS, Kim C. Multi-modality multi-task model for mRS prediction using diffusion-weighted resonance imaging. Sci Rep 2024; 14:20572. [PMID: 39232178 PMCID: PMC11374799 DOI: 10.1038/s41598-024-71072-4]
Abstract
This study focuses on predicting the prognosis of acute ischemic stroke patients with focal neurologic symptoms using a combination of diffusion-weighted magnetic resonance imaging (DWI) and clinical information. The primary outcome is a poor functional outcome, defined by a modified Rankin Scale (mRS) score of 3-6 at 3 months after stroke. Employing nnUnet for DWI lesion segmentation, the study utilizes both multi-task and multi-modality methodologies, integrating DWI and clinical data for prognosis prediction. Integrating the two modalities was shown to improve performance by 0.04 compared to using DWI only. The model achieves notable performance metrics, with a Dice score of 0.7375 for lesion segmentation and an area under the curve of 0.8080 for mRS prediction. These results surpass existing scoring systems, showing a 0.16 improvement over the Totaled Health Risks in Vascular Events score. The study further employs gradient-weighted class activation maps (Grad-CAM) to identify critical brain regions influencing mRS scores. Analysis of the feature map reveals the efficacy of the multi-tasking nnUnet in predicting poor outcomes, providing insights into the interplay between DWI and clinical data. In conclusion, the integrated approach demonstrates significant advancements in prognosis prediction for cerebral infarction patients, offering a superior alternative to current scoring systems.
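The reported area under the curve of 0.8080 refers to the ROC curve for the binary poor-outcome label. As a reminder of what that number measures, here is a minimal rank-based AUC sketch on toy data (the labels and scores below are invented, not the study's):

```python
import numpy as np

def auc(labels, scores):
    """Probability that a random positive outranks a random negative
    (equivalent to the area under the ROC curve); ties count 1/2."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: predicted risk of poor outcome (mRS 3-6) vs. the truth
y = [0, 0, 1, 1]
p = [0.1, 0.4, 0.35, 0.8]
print(auc(y, p))  # 0.75: 3 of the 4 positive/negative pairs are ordered correctly
```

An AUC of 0.8080 therefore means that, for a randomly chosen poor-outcome and good-outcome pair, the model ranks the poor-outcome patient higher about 81% of the time.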
Affiliation(s)
- In-Seo Park
- Department of Convergence Security, Kangwon National University, Chuncheon, 24253, Korea
- ZIOVISION, Chuncheon, 24341, Korea
- Seongheon Kim
- Department of Medical Informatics, Kangwon National University, Chuncheon, 24253, Korea
- Department of Neurology, Kangwon National University Hospital, Chuncheon, 24253, Korea
- Jae-Won Jang
- Department of Convergence Security, Kangwon National University, Chuncheon, 24253, Korea
- Department of Medical Bigdata Convergence, Kangwon National University, Chuncheon, 24253, Korea
- Department of Medical Informatics, Kangwon National University, Chuncheon, 24253, Korea
- Department of Neurology, Kangwon National University Hospital, Chuncheon, 24253, Korea
- Sang-Won Park
- Department of Medical Informatics, Kangwon National University, Chuncheon, 24253, Korea
- Department of Neurology, Kangwon National University Hospital, Chuncheon, 24253, Korea
- Na-Young Yeo
- Department of Medical Bigdata Convergence, Kangwon National University, Chuncheon, 24253, Korea
- Department of Neurology, Kangwon National University Hospital, Chuncheon, 24253, Korea
- Soo Young Seo
- Institute of New Frontier Research Team, Hallym University College of Medicine, Chuncheon, 24252, Korea
- Chuncheon Artificial Intelligence Center, Chuncheon Sacred Heart Hospital, Chuncheon, 24253, Korea
- Inyeop Jeon
- Chuncheon Artificial Intelligence Center, Chuncheon Sacred Heart Hospital, Chuncheon, 24253, Korea
- Seung-Ho Shin
- Chuncheon Artificial Intelligence Center, Chuncheon Sacred Heart Hospital, Chuncheon, 24253, Korea
- Yoon Kim
- Department of Computer Science and Engineering, Kangwon National University, Chuncheon, 24253, Korea
- ZIOVISION, Chuncheon, 24341, Korea
- Hyun-Soo Choi
- Department of Computer Science and Engineering, Seoul National University of Science and Technology, Seoul, South Korea.
- ZIOVISION, Chuncheon, 24341, Korea.
- Chulho Kim
- Department of Neurology, Chuncheon Sacred Heart Hospital, Chuncheon, 24253, Korea.
4
Arjmandi N, Momennezhad M, Arastouei S, Mosleh-Shirazi MA, Albawi A, Pishevar Z, Nasseri S. Deep learning-based automated liver contouring using a small sample of radiotherapy planning computed tomography images. Radiography (Lond) 2024:S1078-8174(24)00203-7. [PMID: 39179459 DOI: 10.1016/j.radi.2024.08.005]
Abstract
INTRODUCTION No study has yet investigated the minimum amount of data required for deep learning-based liver contouring. Therefore, this study aimed to investigate the feasibility of automated liver contouring using limited data. METHODS Radiotherapy planning Computed tomography (CT) images were subjected to various preprocessing methods, such as denoising and windowing. Segmentation was conducted using the modified Attention U-Net and Residual U-Net networks. Two different modified networks were trained separately for different training sizes. For each architecture, the model trained with the training set size that achieved the highest dice similarity coefficient (DSC) score was selected for further evaluation. Two unseen external datasets with different distributions from the training set were also used to examine the generalizability of the proposed method. RESULTS The modified Residual U-Net and Attention U-Net networks achieved average DSCs of 97.62% and 96.48%, respectively, on the test set, using 62 training cases. The average Hausdorff distances (AHDs) for the modified Residual U-Net and Attention U-Net networks were 0.57 mm and 0.71 mm, respectively. Also, the modified Residual U-Net and Attention U-Net networks were tested on two unseen external datasets, achieving DSCs of 95.35% and 95.82% for data from another center and 95.16% and 94.93% for the AbdomenCT-1K dataset, respectively. CONCLUSION This study demonstrates that deep learning models can accurately segment livers using a small training set. The method, utilizing simple preprocessing and modified network architectures, shows strong performance on unseen datasets, indicating its generalizability. IMPLICATIONS FOR PRACTICE This promising result suggests its potential for automated liver contouring in radiotherapy planning.
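The Dice similarity coefficient used throughout these segmentation studies has a simple definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch on toy binary masks (illustrative, not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two overlapping square "liver" masks on a toy grid
pred = np.zeros((10, 10), bool); pred[2:6, 2:6] = True   # 16 pixels
gt   = np.zeros((10, 10), bool); gt[3:7, 3:7] = True     # 16 pixels, 9 overlap
print(dice(pred, gt))  # 2*9/(16+16) = 0.5625
```

DSC values near 0.97, as reported above, correspond to near-complete voxel-wise overlap between the automated and reference contours.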
Affiliation(s)
- N Arjmandi
- Department of Medical Physics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Student research committee, Mashhad University of medical sciences, Mashhad, Iran.
- M Momennezhad
- Department of Medical Physics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran; Medical Physics Research Center, School of Medicine, Mashhad University of Medical Sciences, Mashhad.
- S Arastouei
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran.
- M A Mosleh-Shirazi
- Physics Unit, Department of Radio-Oncology, Shiraz University of Medical Sciences, Shiraz, Iran; Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran.
- A Albawi
- Radiology Techniques Department, College of Medical Technology, The Islamic University, Najaf, Iraq; Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
- Z Pishevar
- Department of Radiation Oncology, Mashhad University of Medical Sciences, Mashhad, Iran.
- S Nasseri
- Department of Medical Physics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran.
5
Tian S, Liu Y, Mao X, Xu X, He S, Jia L, Zhang W, Peng P, Wang J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Sci 2024. [PMID: 39119927 DOI: 10.1111/cas.16304]
Abstract
A precise radiotherapy plan is crucial to ensure accurate segmentation of glioblastomas (GBMs) for radiation therapy. However, the traditional manual segmentation process is labor-intensive and heavily reliant on the experience of radiation oncologists. In this retrospective study, a novel auto-segmentation method is proposed to address these problems. To assess the method's applicability across diverse scenarios, we conducted its development and evaluation using a cohort of 148 eligible patients drawn from four multicenter datasets and retrospective data collection including noncontrast CT, multisequence MRI scans, and corresponding medical records. All patients were diagnosed with histologically confirmed high-grade glioma (HGG). A deep learning-based method (PKMI-Net) for automatically segmenting gross tumor volume (GTV) and clinical target volumes (CTV1 and CTV2) of GBMs was proposed by leveraging prior knowledge from multimodal imaging. The proposed PKMI-Net demonstrated high accuracy in segmenting GTV, CTV1, and CTV2 in an 11-patient test set, achieving, respectively, Dice similarity coefficients (DSC) of 0.94, 0.95, and 0.92; 95% Hausdorff distances (HD95) of 2.07, 1.18, and 3.95 mm; average surface distances (ASD) of 0.69, 0.39, and 1.17 mm; and relative volume differences (RVD) of 5.50%, 9.68%, and 3.97%. Moreover, the vast majority of the GTV, CTV1, and CTV2 contours produced by PKMI-Net are clinically acceptable and require no revision for clinical practice. In our multicenter evaluation, PKMI-Net exhibited consistent and robust generalizability across the various datasets, demonstrating its effectiveness in automatically segmenting GBMs. The proposed method using prior knowledge in multimodal imaging can improve the contouring accuracy of GBMs, which holds the potential to improve the quality and efficiency of radiotherapy for GBMs.
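The HD95 and ASD metrics reported above compare structure surfaces rather than overlapping volumes. A minimal pure-numpy sketch on toy contour point sets (coordinates invented; real implementations first extract boundary voxels from the masks and work in physical millimeters):

```python
import numpy as np

def hd95(a_pts, b_pts):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (boundary coordinates) — a robust variant of the maximum surface distance."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

def asd(a_pts, b_pts):
    """Average symmetric surface distance between two point sets."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two unit-square contours offset by 1 mm along x (corner points, in mm)
a = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
b = a + np.array([1.0, 0.0])
```

Taking the 95th percentile instead of the maximum, as in HD95, keeps a handful of outlier boundary voxels from dominating the metric.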
Affiliation(s)
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Yinglong Liu
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Xinhui Mao
- Radiotherapy Center, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
- Xin Xu
- Department of Radiation Oncology, The Second Affiliated Hospital of Shandong First Medical University, Tai'an, China
- Shumeng He
- Intelligent Radiation Treatment Laboratory, United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Lecheng Jia
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Peng Peng
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Junjie Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
6
Huang Y, Gomaa A, Höfler D, Schubert P, Gaipl U, Frey B, Fietkau R, Bert C, Putz F. Principles of artificial intelligence in radiooncology. Strahlenther Onkol 2024:10.1007/s00066-024-02272-0. [PMID: 39105746 DOI: 10.1007/s00066-024-02272-0]
Abstract
PURPOSE In the rapidly expanding field of artificial intelligence (AI) there is a wealth of literature detailing the myriad applications of AI, particularly in the realm of deep learning. However, a review that elucidates the technical principles of deep learning as relevant to radiation oncology in an easily understandable manner is still notably lacking. This paper aims to fill this gap by providing a comprehensive guide to the principles of deep learning that is specifically tailored toward radiation oncology. METHODS In light of the extensive variety of AI methodologies, this review selectively concentrates on the specific domain of deep learning. It emphasizes the principal categories of deep learning models and delineates the methodologies for training these models effectively. RESULTS This review initially delineates the distinctions between AI and deep learning as well as between supervised and unsupervised learning. Subsequently, it elucidates the fundamental principles of major deep learning models, encompassing multilayer perceptrons (MLPs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), diffusion-based generative models, and reinforcement learning. For each category, it presents representative networks alongside their specific applications in radiation oncology. Moreover, the review outlines critical factors essential for training deep learning models, such as data preprocessing, loss functions, optimizers, and other pivotal training parameters including learning rate and batch size. CONCLUSION This review provides a comprehensive overview of deep learning principles tailored toward radiation oncology. It aims to enhance the understanding of AI-based research and software applications, thereby bridging the gap between complex technological concepts and clinical practice in radiation oncology.
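Of the model families the review covers, the multilayer perceptron is the simplest; a minimal forward-pass sketch in numpy (the dimensions, random weights, and binary-output head are illustrative, not from the review):

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer: linear -> ReLU -> linear -> sigmoid."""
    h = relu(x @ W1 + b1)                 # hidden activations
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits))  # e.g. probability of an event

# Illustrative sizes: 10 input features, 32 hidden units, 1 output
W1, b1 = rng.normal(size=(10, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)) * 0.1, np.zeros(1)
probs = mlp_forward(rng.normal(size=(4, 10)), W1, b1, W2, b2)
```

Training, as the review describes, consists of adjusting W1, b1, W2, b2 with an optimizer so that a loss function over such outputs decreases; the CNNs, transformers, and other architectures surveyed are elaborations of this same forward/backward pattern.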
Affiliation(s)
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany.
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany.
- Ahmed Gomaa
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Daniel Höfler
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Philipp Schubert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Udo Gaipl
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Benjamin Frey
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Translational Radiobiology, Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
- Florian Putz
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, 91054, Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), 91054, Erlangen, Germany
7
Tsui T, Podgorsak A, Roeske JC, Small W, Refaat T, Kang H. Geometric and dosimetric evaluation for breast and regional nodal auto-segmentation structures. J Appl Clin Med Phys 2024:e14461. [PMID: 39092893 DOI: 10.1002/acm2.14461]
Abstract
The accuracy of artificial intelligence (AI) generated contours for intact-breast and post-mastectomy radiotherapy plans was evaluated. Geometric and dosimetric comparisons were performed between auto-contours (ACs) and manual-contours (MCs) produced by physicians for target structures. Breast and regional nodal structures were manually delineated on 66 breast cancer patients. ACs were retrospectively generated. The characteristics of the breast/post-mastectomy chestwall (CW) and regional nodal structures (axillary [AxN], supraclavicular [SC], internal mammary [IM]) were geometrically evaluated by Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance. The structures were also evaluated dosimetrically by superimposing the MC clinically delivered plans onto the ACs to assess the impact of utilizing ACs with target dose (Vx%) evaluation. Positive geometric correlations between volume and DSC for intact-breast, AxN, and CW were observed. Weak or negative correlations between volume and DSC were observed for IM and SC. For intact-breast plans, insignificant dosimetric differences between ACs and MCs were observed for AxNV95% (p = 0.17) and SCV95% (p = 0.16), while IMNV90% ACs and MCs were significantly different. The average V95% for intact-breast MCs (98.4%) and ACs (97.1%) were comparable but statistically different (p = 0.02). For post-mastectomy plans, AxNV95% (p = 0.35) and SCV95% (p = 0.08) were consistent between ACs and MCs, while IMNV90% was significantly different. Additionally, 94.1% of AC-breasts met ΔV95% variation <5% when DSC > 0.7. However, only 62.5% of AC-CWs achieved the same metric, despite AC-CWV95% (p = 0.43) being statistically insignificant. The AC intact-breast structure was dosimetrically similar to MCs. The AC AxN and SC may require manual adjustments. Careful review should be performed for AC post-mastectomy CW and IMN before treatment planning.
The findings of this study may guide the clinical decision-making process for the utilization of AI-driven ACs for intact-breast and post-mastectomy plans. Before clinical implementation of this auto-segmentation software, an in-depth assessment of agreement with each local facility's MCs is needed.
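The Vx% metrics above denote the percentage of a structure's volume receiving at least x% of the prescription dose; a minimal sketch on a toy dose grid (all numbers and names invented for illustration):

```python
import numpy as np

def v_percent(dose, mask, prescription, x):
    """Percentage of the masked volume receiving >= x% of the prescription dose."""
    structure_dose = dose[mask]
    threshold = prescription * x / 100.0
    return 100.0 * (structure_dose >= threshold).mean()

# Toy voxel doses (Gy): prescription 50 Gy, one of four target voxels underdosed
dose = np.array([50.0, 50.0, 49.0, 40.0])
target = np.ones(4, dtype=bool)
print(v_percent(dose, target, prescription=50.0, x=95))  # 75.0
```

Superimposing the clinically delivered dose onto an auto-contour and recomputing Vx%, as the study does, reveals whether geometric contour differences actually change the reported target coverage.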
Affiliation(s)
- Tiffany Tsui
- Department of Radiation Oncology, Loyola University Chicago, Stritch School of Medicine, Maywood, Illinois, USA
- Department of Radiation Oncology, Cardinal Bernard Cancer Center, Maywood, Illinois, USA
- Alexander Podgorsak
- Department of Radiation Oncology, University of Rochester Medical Center, Rochester, New York, USA
- John C Roeske
- Department of Radiation Oncology, Loyola University Chicago, Stritch School of Medicine, Maywood, Illinois, USA
- Department of Radiation Oncology, Cardinal Bernard Cancer Center, Maywood, Illinois, USA
- William Small
- Department of Radiation Oncology, Loyola University Chicago, Stritch School of Medicine, Maywood, Illinois, USA
- Department of Radiation Oncology, Cardinal Bernard Cancer Center, Maywood, Illinois, USA
- Tamer Refaat
- Department of Radiation Oncology, Loyola University Chicago, Stritch School of Medicine, Maywood, Illinois, USA
- Department of Radiation Oncology, Cardinal Bernard Cancer Center, Maywood, Illinois, USA
- Hyejoo Kang
- Department of Radiation Oncology, Loyola University Chicago, Stritch School of Medicine, Maywood, Illinois, USA
- Department of Radiation Oncology, Cardinal Bernard Cancer Center, Maywood, Illinois, USA
8
Nerella S, Bandyopadhyay S, Zhang J, Contreras M, Siegel S, Bumin A, Silva B, Sena J, Shickel B, Bihorac A, Khezeli K, Rashidi P. Transformers and large language models in healthcare: A review. Artif Intell Med 2024; 154:102900. [PMID: 38878555 DOI: 10.1016/j.artmed.2024.102900]
Abstract
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the transformer neural network architecture is rapidly changing many applications. The transformer is a type of deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks and has subsequently been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. We also include articles that used the transformer architecture for generating surgical instructions and predicting adverse outcomes after surgery under the umbrella of critical care. Under diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
Affiliation(s)
- Subhash Nerella
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Jiaqing Zhang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Miguel Contreras
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Scott Siegel
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Aysegul Bumin
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Brandon Silva
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Jessica Sena
- Department Of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Benjamin Shickel
- Department of Medicine, University of Florida, Gainesville, United States
- Azra Bihorac
- Department of Medicine, University of Florida, Gainesville, United States
- Kia Khezeli
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, United States.
9
Rasmussen ME, Akbarov K, Titovich E, Nijkamp JA, Van Elmpt W, Primdahl H, Lassen P, Cacicedo J, Cordero-Mendez L, Uddin AFMK, Mohamed A, Prajogi B, Brohet KE, Nyongesa C, Lomidze D, Prasiko G, Ferraris G, Mahmood H, Stojkovski I, Isayev I, Mohamad I, Shirley L, Kochbati L, Eftodiev L, Piatkevich M, Bonilla Jara MM, Spahiu O, Aralbayev R, Zakirova R, Subramaniam S, Kibudde S, Tsegmed U, Korreman SS, Eriksen JG. Potential of E-Learning Interventions and Artificial Intelligence-Assisted Contouring Skills in Radiotherapy: The ELAISA Study. JCO Glob Oncol 2024; 10:e2400173. [PMID: 39236283 DOI: 10.1200/go.24.00173]
Abstract
PURPOSE Most research on artificial intelligence-based auto-contouring as a template (AI-assisted contouring) for organs-at-risk (OARs) stems from high-income countries. The effect and safety are, however, likely to depend on local factors. This study aimed to investigate the effects of AI-assisted contouring and teaching on contouring time and contour quality among radiation oncologists (ROs) working in low- and middle-income countries (LMICs). MATERIALS AND METHODS Ninety-seven ROs were randomly assigned to either manual or AI-assisted contouring of eight OARs for two head-and-neck cancer cases with an in-between teaching session on contouring guidelines. Thereby, the effect of teaching (yes/no) and AI-assisted contouring (yes/no) was quantified. Second, ROs completed short-term and long-term follow-up cases all using AI assistance. Contour quality was quantified with Dice Similarity Coefficient (DSC) between ROs' contours and expert consensus contours. Groups were compared using absolute differences in medians with 95% CIs. RESULTS AI-assisted contouring without previous teaching increased absolute DSC for optic nerve (by 0.05 [0.01; 0.10]), oral cavity (0.10 [0.06; 0.13]), parotid (0.07 [0.05; 0.12]), spinal cord (0.04 [0.01; 0.06]), and mandible (0.02 [0.01; 0.03]). Contouring time decreased for brain stem (-1.41 [-2.44; -0.25]), mandible (-6.60 [-8.09; -3.35]), optic nerve (-0.19 [-0.47; -0.02]), parotid (-1.80 [-2.66; -0.32]), and thyroid (-1.03 [-2.18; -0.05]). Without AI-assisted contouring, teaching increased DSC for oral cavity (0.05 [0.01; 0.09]) and thyroid (0.04 [0.02; 0.07]), and contouring time increased for mandible (2.36 [-0.51; 5.14]), oral cavity (1.42 [-0.08; 4.14]), and thyroid (1.60 [-0.04; 2.22]). CONCLUSION The study suggested that AI-assisted contouring is safe and beneficial to ROs working in LMICs. Prospective clinical trials on AI-assisted contouring should, however, be conducted upon clinical implementation to confirm the effects.
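The group comparisons above use absolute differences in medians with 95% CIs. One common way to obtain such an interval is a percentile bootstrap, sketched below on fabricated DSC samples (this generic sketch is not necessarily the study's exact procedure):

```python
import numpy as np

def median_diff_ci(a, b, n_boot=2000, seed=0):
    """Difference in medians (a - b) with a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Resample each group with replacement and recompute the statistic
    diffs = [np.median(rng.choice(a, a.size)) - np.median(rng.choice(b, b.size))
             for _ in range(n_boot)]
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return np.median(a) - np.median(b), (lo, hi)

# Toy DSC samples: AI-assisted vs. manual contours (fabricated numbers)
ai = [0.90, 0.88, 0.92, 0.85, 0.91, 0.89]
manual = [0.84, 0.86, 0.83, 0.87, 0.82, 0.85]
diff, (lo, hi) = median_diff_ci(ai, manual)
```

A CI that excludes zero, like several of those reported above, indicates a difference unlikely to be explained by sampling variability alone.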
Affiliation(s)
- Wouter Van Elmpt
- MAASTRO clinic, Maastricht University Medical Centre, Maastricht, the Netherlands
- Hanne Primdahl
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Pernille Lassen
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jon Cacicedo
- Department of Radiation Oncology, Cruces University Hospital, Bilbao, Spain
- A F M Kamal Uddin
- Labaid Cancer Hospital and Super Speciality Centre, Dhaka, Bangladesh
- Ahmed Mohamed
- National Cancer Institute, University of Gezira, Wad Madani, Sudan
- Ben Prajogi
- Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Darejan Lomidze
- Tbilisi State Medical University and Ingorokva High Medical Technology University Clinic, Tbilisi, Georgia
- Igor Stojkovski
- University Clinic of Radiotherapy and Oncology, Skopje, Macedonia
- Isa Isayev
- National Center of Oncology, Baku, Azerbaijan
- Leivon Shirley
- Christian Institute of Health Science and Research, Dimapur, India
10
|
Ryan ML, Wang S, Pandya SR. Integrating Artificial Intelligence Into the Visualization and Modeling of Three-Dimensional Anatomy in Pediatric Surgical Patients. J Pediatr Surg 2024:S0022-3468(24)00425-1. [PMID: 39095281 DOI: 10.1016/j.jpedsurg.2024.07.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 07/05/2024] [Accepted: 07/11/2024] [Indexed: 08/04/2024]
Abstract
BACKGROUND Pediatric surgeons often treat patients with complex anatomical considerations due to congenital anomalies or distortion of normal structures by solid organ tumors. There are multiple applications for three-dimensional visualization of these structures based on cross-sectional imaging. Recently, advances in artificial intelligence (AI) applications and graphics hardware have made rapid 3D modeling of individual structures within the body accessible to surgeons without sophisticated and expensive hardware. In this report, we provide an overview of these applications and their uses in preoperative planning for pediatric surgeons. METHODS Deidentified DICOM files containing cross-sectional imaging of preoperative pediatric surgery patients were loaded from an institutional PACS database onto a secure PC with dedicated graphics and AI hardware (NVIDIA GeForce RTX 4070 laptop GPU). Visualization was obtained using an open-source imaging platform (3D Slicer). AI extensions to the platform were utilized to delineate the anatomy of interest. RESULTS Segmentations of skeletal and visceral structures within a scan were obtained using the TotalSegmentator extension with an average processing time under 5 min. Additional AI modules were utilized for detailed mapping of the airways (AirwaySegmentation), lungs (Chest Imaging Platform), liver (SlicerLiver), or vasculature (SlicerVMTK). Other extensions were used for delineation of tumors within the hepatic parenchyma (MONAI Auto3DSeg) and hepatic vessels (RVesselX). CONCLUSION AI algorithms for image interpretation and processors dedicated to AI functions have significantly decreased the technical and financial requirements for obtaining detailed three-dimensional images of patient anatomy. Models obtained using AI algorithms have potential applications in preoperative planning, surgical simulation, patient education, and training. LEVEL OF EVIDENCE V, Case Series, Description of Technique.
Collapse
Affiliation(s)
- Mark L Ryan
- Division of Pediatric Surgery, Department of Surgery, Children's Medical Center Dallas/University of Texas Southwestern Medical Center, Dallas, TX, USA.
| | - Shengqing Wang
- University of Texas Southwestern School of Medicine, Dallas, TX, USA
| | - Samir R Pandya
- Division of Pediatric Surgery, Department of Surgery, Children's Medical Center Dallas/University of Texas Southwestern Medical Center, Dallas, TX, USA
| |
Collapse
|
11
|
Hochreuter KM, Ren J, Nijkamp J, Korreman SS, Lukacova S, Kallehauge JF, Trip AK. The effect of editing clinical contours on deep-learning segmentation accuracy of the gross tumor volume in glioblastoma. Phys Imaging Radiat Oncol 2024; 31:100620. [PMID: 39220114 PMCID: PMC11364127 DOI: 10.1016/j.phro.2024.100620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Revised: 07/29/2024] [Accepted: 08/01/2024] [Indexed: 09/04/2024] Open
Abstract
Background and purpose Deep-learning (DL) models for segmentation of the gross tumor volume (GTV) in radiotherapy are generally based on clinical delineations, which suffer from inter-observer variability. The aim of this study was to compare the performance of a DL-model based on clinical glioblastoma GTVs to that of a model based on a single-observer edited version of the same GTVs. Materials and methods The dataset included imaging data (Computed Tomography (CT), T1, contrast-T1 (T1C), and fluid-attenuated-inversion-recovery (FLAIR)) of 259 glioblastoma patients treated with post-operative radiotherapy between 2012 and 2019 at a single institute. The clinical GTVs were edited using all imaging data. The dataset was split into 207 cases for training/validation and 52 for testing. GTV segmentation models (nnUNet) were trained on clinical and edited GTVs separately and compared using Surface Dice with 1 mm tolerance (sDSC1mm). We also evaluated model performance with respect to extent of resection (EOR) and different imaging combinations (T1C/T1/FLAIR/CT, T1C/FLAIR/CT, T1C/FLAIR, T1C/CT, T1C/T1, T1C). A Wilcoxon test was used for significance testing. Results The median (range) sDSC1mm of the clinical-GTV-model and the edited-GTV-model, both evaluated with the edited contours, was 0.76 (0.43-0.94) vs. 0.92 (0.60-0.98), respectively (p < 0.001). sDSC1mm was not significantly different between patients with a biopsy, partial, or complete resection. T1C as a single input performed as well as the imaging combinations. Conclusions High segmentation accuracy was obtained by the DL-models. Editing of the clinical GTVs significantly increased DL performance with a relevant effect size. DL performance was robust to EOR and highly accurate using only T1C.
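The surface Dice with tolerance used above scores boundary agreement rather than volume overlap. A simplified 2D NumPy sketch (brute-force pairwise distances on toy masks; not nnUNet's optimized implementation):

```python
import numpy as np

def boundary_points(mask):
    """Coordinates of mask pixels that touch the background (4-neighbourhood)."""
    m = mask.astype(bool)
    padded = np.pad(m, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(m & ~interior)

def surface_dice(a, b, tol=1.0):
    """Fraction of boundary points of each mask lying within `tol` of the other's boundary."""
    sa, sb = boundary_points(a), boundary_points(b)
    if len(sa) == 0 and len(sb) == 0:
        return 1.0  # both structures empty: perfect agreement by convention
    if len(sa) == 0 or len(sb) == 0:
        return 0.0  # one structure missing entirely
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)  # all pairwise distances
    close_a = (d.min(axis=1) <= tol).sum()  # A-boundary points near B's boundary
    close_b = (d.min(axis=0) <= tol).sum()  # B-boundary points near A's boundary
    return (close_a + close_b) / (len(sa) + len(sb))

# Toy masks: two 4x4 squares offset by one pixel
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
```

With a 1-pixel tolerance the offset squares agree perfectly, while a zero tolerance penalizes the shift, which is the tolerance's intended effect.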
Collapse
Affiliation(s)
- Kim M. Hochreuter
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Jintao Ren
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
| | - Jasper Nijkamp
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Stine S. Korreman
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
| | - Slávka Lukacova
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
| | - Jesper F. Kallehauge
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Anouk K. Trip
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
| |
Collapse
|
12
|
Iorio GC, Denaro N, Livi L, Desideri I, Nardone V, Ricardi U. Editorial: Advances in radiotherapy for head and neck cancer. Front Oncol 2024; 14:1437237. [PMID: 38912069 PMCID: PMC11190330 DOI: 10.3389/fonc.2024.1437237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Accepted: 05/31/2024] [Indexed: 06/25/2024] Open
Affiliation(s)
| | - Nerina Denaro
- Medical Oncology Unit, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, Milan, Italy
| | - Lorenzo Livi
- Department of Experimental and Clinical Biomedical Sciences “Mario Serio”, University of Florence, Florence, Italy
| | - Isacco Desideri
- Department of Experimental and Clinical Biomedical Sciences “Mario Serio”, University of Florence, Florence, Italy
| | - Valerio Nardone
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, Naples, Italy
| | - Umberto Ricardi
- Department of Oncology, Radiation Oncology, University of Turin, Turin, Italy
| |
Collapse
|
13
|
Li Z, Gan G, Guo J, Zhan W, Chen L. Accurate object localization facilitates automatic esophagus segmentation in deep learning. Radiat Oncol 2024; 19:55. [PMID: 38735947 PMCID: PMC11088757 DOI: 10.1186/s13014-024-02448-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Accepted: 05/01/2024] [Indexed: 05/14/2024] Open
Abstract
BACKGROUND Currently, automatic esophagus segmentation remains a challenging task due to the esophagus's small size, low contrast, and large shape variation. We aimed to improve the performance of esophagus segmentation in deep learning by applying a strategy that locates the object first and then performs the segmentation task. METHODS A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object localization network, was employed to locate the center of the esophagus for each slice. Subsequently, 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the object center updated according to the 3D U-net model. The Dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes of delineation performance. The characteristics of the esophageal contours automatically delineated by the 2D U-net and 3D U-net models were summarized. Additionally, the impact of the accuracy of object localization on delineation performance was analyzed. Finally, delineation performance in different segments of the esophagus was also summarized. RESULTS The mean Dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively. The 95% Hausdorff distances for the above models were 6.55, 3.57, and 3.76, respectively. Compared with the 2D U-net, the 3D U-net had a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average Dice coefficient improved by 5.5% in cases with a Dice coefficient less than 0.75, but by only 0.3% in cases with a Dice coefficient greater than 0.75. The Dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation compared with the other regions. CONCLUSION The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object localization can enhance the robustness of the segmentation model and significantly improve esophageal delineation performance, especially for cases with poor delineation results.
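The locate-then-segment strategy above crops a window around the predicted esophagus center before segmentation. A hedged sketch of just the cropping step, with border clamping (the window size and the predicted center are illustrative assumptions, not the paper's values):

```python
import numpy as np

def crop_around_center(image, center, size):
    """Crop a (size x size) window around `center`, clamping so the crop stays in-bounds."""
    h, w = image.shape
    half = size // 2
    r0 = min(max(int(center[0]) - half, 0), max(h - size, 0))
    c0 = min(max(int(center[1]) - half, 0), max(w - size, 0))
    return image[r0:r0 + size, c0:c0 + size], (r0, c0)

# A center predicted near the image border still yields a full-size crop
img = np.arange(100).reshape(10, 10)
patch, origin = crop_around_center(img, center=(1, 8), size=4)
```

Clamping keeps the crop a fixed shape for the downstream network even when the localization stage predicts a center near the image edge.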
Collapse
Affiliation(s)
- Zhibin Li
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Guanghui Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Jian Guo
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Long Chen
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China.
| |
Collapse
|
14
|
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715 PMCID: PMC11023777 DOI: 10.1016/j.ijrobp.2023.10.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 09/16/2023] [Accepted: 10/14/2023] [Indexed: 11/19/2023]
Abstract
Deep learning neural networks (DLNNs), a form of artificial intelligence (AI), have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based auto-segmentation models have shown high accuracy in early studies conducted in research settings and controlled environments (single institution). Vendor-provided commercial AI models are made available either as part of the integrated treatment planning system (TPS) or as stand-alone tools with streamlined workflows that interact with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefits in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges arise when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Standardization of contouring nomenclature and guidelines has been a main task undertaken by NRG Oncology. For clinical trial participants, AI auto-segmentation holds the potential to reduce interobserver variation, nomenclature non-compliance, and contouring guideline deviations; meanwhile, trial reviewers could use AI tools to verify the contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments for addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, along with precautions concerning their challenges and limitations.
Collapse
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
| | - Quan Chen
- City of Hope Comprehensive Cancer Center Duarte, CA
| | - Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
| | | | | | | | - Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
| | - Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
| | - Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
| | | | - Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
| | | | - X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
| |
Collapse
|
15
|
Zhang L, Liu Z, Zhang L, Wu Z, Yu X, Holmes J, Feng H, Dai H, Li X, Li Q, Wong WW, Vora SA, Zhu D, Liu T, Liu W. Technical Note: Generalizable and Promptable Artificial Intelligence Model to Augment Clinical Delineation in Radiation Oncology. Med Phys 2024; 51:2187-2199. [PMID: 38319676 PMCID: PMC10939804 DOI: 10.1002/mp.16965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 12/29/2023] [Accepted: 01/14/2024] [Indexed: 03/13/2024] Open
Abstract
BACKGROUND Efficient and accurate delineation of organs at risk (OARs) is a critical procedure for treatment planning and dose evaluation. Deep learning-based auto-segmentation of OARs has shown promising results and is increasingly being used in radiation therapy. However, existing deep learning-based auto-segmentation approaches face two challenges in clinical practice: generalizability and human-AI interaction. A generalizable and promptable auto-segmentation model, which segments OARs of multiple disease sites simultaneously and supports on-the-fly human-AI interaction, could significantly enhance the efficiency of radiation therapy treatment planning. PURPOSE Meta's Segment Anything Model (SAM) was proposed as a generalizable and promptable model for next-generation natural image segmentation. We evaluated the performance of SAM in radiotherapy segmentation. METHODS Computed tomography (CT) images of clinical cases from four disease sites at our institute were collected: prostate, lung, gastrointestinal, and head & neck. For each case, we selected the OARs important in radiotherapy treatment planning. We then compared the Dice coefficients and Jaccard indices derived from three distinct methods: manual delineation (ground truth), automatic segmentation using SAM's 'segment anything' mode, and automatic segmentation using SAM's 'box prompt' mode, which implements manual interaction via live prompts during segmentation. RESULTS Our results indicate that SAM's segment anything mode can achieve clinically acceptable segmentation results in most OARs, with Dice scores higher than 0.7. SAM's box prompt mode further improves Dice scores by 0.1-0.5, and similar results were observed for the Jaccard indices. SAM performs better for prostate and lung but worse for gastrointestinal and head & neck. When organ size and boundary distinctiveness are considered, SAM shows better performance for large organs with distinct boundaries, such as lung and liver, and worse for smaller organs with less distinct boundaries, such as parotid and cochlea. CONCLUSIONS Our results demonstrate SAM's robust generalizability with consistent accuracy in automatic segmentation for radiotherapy. Furthermore, the box-prompt method enables users to augment auto-segmentation interactively and dynamically, leading to patient-specific auto-segmentation in radiation therapy. SAM's generalizability across disease sites and modalities makes it feasible to develop a generic auto-segmentation model for radiotherapy.
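This entry reports both Dice scores and Jaccard indices; for binary masks the two are deterministically related by J = D / (2 - D). A small sketch on toy masks (illustrative data, not SAM outputs) verifies both metrics and the identity:

```python
import numpy as np

def dice_and_jaccard(a, b):
    """Volumetric Dice and Jaccard indices for two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 2 * inter / (a.sum() + b.sum()), inter / union

# Toy masks: two 8x8 squares offset by two pixels
a = np.zeros((16, 16), dtype=bool); a[4:12, 4:12] = True
b = np.zeros((16, 16), dtype=bool); b[6:14, 4:12] = True
d, j = dice_and_jaccard(a, b)
```

Because of this monotone relation, the two metrics always rank segmentations the same way, which is why the abstract's Dice and Jaccard findings move together.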
Collapse
Affiliation(s)
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| | - Zhengliang Liu
- School of Computing, University of Georgia, Athens, GA 30602, USA
| | - Lu Zhang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
| | - Zihao Wu
- School of Computing, University of Georgia, Athens, GA 30602, USA
| | - Xiaowei Yu
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
| | - Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| | - Hongying Feng
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| | - Haixing Dai
- School of Computing, University of Georgia, Athens, GA 30602, USA
| | - Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02115, USA
| | - Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02115, USA
| | - William W. Wong
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| | - Sujay A. Vora
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| | - Dajiang Zhu
- School of Computing, University of Georgia, Athens, GA 30602, USA
| | - Tianming Liu
- School of Computing, University of Georgia, Athens, GA 30602, USA
| | - Wei Liu
- Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ 85054, USA
| |
Collapse
|
16
|
Dei D, Lambri N, Crespi L, Brioso RC, Loiacono D, Clerici E, Bellu L, De Philippis C, Navarria P, Bramanti S, Carlo-Stella C, Rusconi R, Reggiori G, Tomatis S, Scorsetti M, Mancosu P. Deep learning and atlas-based models to streamline the segmentation workflow of total marrow and lymphoid irradiation. LA RADIOLOGIA MEDICA 2024; 129:515-523. [PMID: 38308062 DOI: 10.1007/s11547-024-01760-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 01/03/2024] [Indexed: 02/04/2024]
Abstract
PURPOSE To improve the workflow of total marrow and lymphoid irradiation (TMLI) by enhancing the delineation of organs at risk (OARs) and the clinical target volume (CTV) using deep learning (DL) and atlas-based (AB) segmentation models. MATERIALS AND METHODS Ninety-five TMLI plans optimized at our institute were analyzed. Two commercial DL software packages were tested for segmenting 18 OARs. An AB model for lymph node CTV (CTV_LN) delineation was built using 20 TMLI patients. The AB model was evaluated on 20 independent patients, and a semiautomatic approach was tested by correcting the automatic contours. The generated OAR and CTV_LN contours were compared to manual contours in terms of topological agreement, dose statistics, and time workload. A clinical decision tree was developed to define a specific contouring strategy for each OAR. RESULTS The two DL models achieved a median [interquartile range] Dice similarity coefficient (DSC) of 0.84 [0.71;0.93] and 0.85 [0.70;0.93] across the OARs. The absolute median Dmean difference between the manual contours and the two DL models was 2.0 [0.7;6.6]% and 2.4 [0.9;7.1]%. The AB model achieved a median DSC of 0.70 [0.66;0.74] for CTV_LN delineation, increasing to 0.94 [0.94;0.95] after manual revision, with minimal Dmean differences. Since September 2022, our institution has implemented the DL and AB models for all TMLI patients, reducing the time required to complete the entire segmentation process from 5 to 2 h. CONCLUSION DL models can streamline the TMLI contouring process for OARs. Manual revision is still necessary for lymph node delineation using AB models.
Collapse
Affiliation(s)
- Damiano Dei
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy.
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy.
| | - Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Health Data Science Centre, Human Technopole, Milan, Italy
| | - Ricardo Coimbra Brioso
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
| | - Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
| | - Elena Clerici
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Luisa Bellu
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Chiara De Philippis
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Pierina Navarria
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Stefania Bramanti
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Carmelo Carlo-Stella
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Roberto Rusconi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Giacomo Reggiori
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Stefano Tomatis
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20072, Pieve Emanuele, Milan, Italy
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| | - Pietro Mancosu
- Department of Radiotherapy and Radiosurgery, IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089, Rozzano, Milan, Italy
| |
Collapse
|
17
|
Koo J, Caudell J, Latifi K, Moros EG, Feygelman V. Essentially unedited deep-learning-based OARs are suitable for rigorous oropharyngeal and laryngeal cancer treatment planning. J Appl Clin Med Phys 2024; 25:e14202. [PMID: 37942993 DOI: 10.1002/acm2.14202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 10/19/2023] [Accepted: 10/25/2023] [Indexed: 11/10/2023] Open
Abstract
Quality of organ at risk (OAR) autosegmentation is often judged by concordance metrics against a human-generated gold standard. However, the ultimate goal is the ability to use unedited autosegmented OARs in treatment planning while maintaining plan quality. We tested this approach with head and neck (HN) OARs generated by a prototype deep-learning (DL) model on patients previously treated for oropharyngeal and laryngeal cancer. Forty patients were selected, with all structures delineated by an experienced physician. For each patient, a set of 13 OARs was generated by the DL model. Each patient was re-planned based on the original targets and the unedited DL-produced OARs. The new dose distributions were then applied back to the manually delineated structures. Target coverage was evaluated with the inhomogeneity index (II) and the relative volume of regret. For the OARs, the Dice similarity coefficient (DSC) of areas under the DVH curves, individual DVH objectives, and a composite continuous plan quality metric (PQM) were compared. Nearly identical primary target coverage was achieved for the original and re-generated plans, with the same II and relative volume of regret values. The average DSC of the areas under the corresponding pairs of DVH curves was 0.97 ± 0.06. The number of critical DVH points that met the clinical objectives with the dose optimized on autosegmented structures but failed when evaluated on the manual ones was 5 of 896 (0.6%). The average OAR PQM score with the re-planned dose distributions was essentially the same whether evaluated on the autosegmented or manual OARs. Thus, rigorous HN treatment planning is possible with OARs segmented by a prototype DL algorithm with minimal, if any, manual editing.
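The DVH-based comparison above presumes a cumulative dose-volume histogram: the percent of a structure's volume receiving at least each dose level. A minimal sketch on a toy dose grid (illustrative values, not the study's plans):

```python
import numpy as np

def cumulative_dvh(dose, mask, levels):
    """Percent of the structure's volume receiving at least each dose level."""
    d = dose[mask.astype(bool)]
    return np.array([(d >= level).mean() * 100.0 for level in levels])

# Toy 2x2 dose grid; the mask selects three of the four voxels
dose = np.array([[10.0, 20.0], [30.0, 40.0]])
mask = np.array([[1, 1], [1, 0]], dtype=bool)
dvh = cumulative_dvh(dose, mask, levels=[0.0, 15.0, 25.0, 35.0])
```

The curve starts at 100% of volume at zero dose and decreases monotonically, which is what makes comparing areas under paired DVH curves (as the study does) well defined.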
Collapse
Affiliation(s)
- Jihye Koo
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
- Department of Physics, University of South Florida, Tampa, Florida, USA
| | - Jimmy Caudell
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Kujtim Latifi
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Eduardo G Moros
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| | - Vladimir Feygelman
- Department of Radiation Oncology, Moffitt Cancer Center, Tampa, Florida, USA
| |
Collapse
|
18
|
Polymeri E, Johnsson ÅA, Enqvist O, Ulén J, Pettersson N, Nordström F, Kindblom J, Trägårdh E, Edenbrandt L, Kjölhede H. Artificial Intelligence-Based Organ Delineation for Radiation Treatment Planning of Prostate Cancer on Computed Tomography. Adv Radiat Oncol 2024; 9:101383. [PMID: 38495038 PMCID: PMC10943520 DOI: 10.1016/j.adro.2023.101383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 08/30/2023] [Indexed: 03/19/2024] Open
Abstract
Purpose Meticulous manual delineations of the prostate and the surrounding organs at risk are necessary for prostate cancer radiation therapy to avoid side effects to the latter. This process is time consuming and hampered by inter- and intraobserver variability, all of which could be alleviated by artificial intelligence (AI). This study aimed to evaluate the performance of AI compared with manual organ delineations on computed tomography (CT) scans for radiation treatment planning. Methods and Materials Manual delineations of the prostate, urinary bladder, and rectum of 1530 patients with prostate cancer who received curative radiation therapy from 2006 to 2018 were included. Approximately 50% of those CT scans were used as a training set, 25% as a validation set, and 25% as a test set. Patients with hip prostheses were excluded because of metal artifacts. After training and fine-tuning with the validation set, automated delineations of the prostate and organs at risk were obtained for the test set. Sørensen-Dice similarity coefficient, mean surface distance, and Hausdorff distance were used to evaluate the agreement between the manual and automated delineations. Results The median Sørensen-Dice similarity coefficient between the manual and AI delineations was 0.82, 0.95, and 0.88 for the prostate, urinary bladder, and rectum, respectively. The median mean surface distance and Hausdorff distance were 1.7 and 9.2 mm for the prostate, 0.7 and 6.7 mm for the urinary bladder, and 1.1 and 13.5 mm for the rectum, respectively. Conclusions Automated CT-based organ delineation for prostate cancer radiation treatment planning is feasible and shows good agreement with manually performed contouring.
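The mean surface distance and Hausdorff distance reported above compare delineation boundaries as point sets. A brute-force sketch on toy 2D points (illustrative data, not the study's contours):

```python
import numpy as np

def hausdorff_and_msd(a_pts, b_pts):
    """Hausdorff distance and mean surface distance between two point sets."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)  # pairwise distances
    a2b, b2a = d.min(axis=1), d.min(axis=0)  # directed nearest-neighbour distances
    hausdorff = max(a2b.max(), b2a.max())    # worst-case boundary disagreement
    msd = (a2b.mean() + b2a.mean()) / 2.0    # average boundary disagreement
    return hausdorff, msd

# Toy boundary point sets
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 4.0]])
hd, msd = hausdorff_and_msd(a, b)
```

The Hausdorff distance is dominated by a single outlying point, which is why the study's Hausdorff values (e.g. 13.5 mm for rectum) are much larger than the corresponding mean surface distances.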
Collapse
Affiliation(s)
- Eirini Polymeri
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Åse A. Johnsson
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Olof Enqvist
- Department of Electrical Engineering, Region Västra Götaland, Chalmers University of Technology, Gothenburg, Sweden
- Eigenvision AB, Malmö, Sweden
| | | | - Niclas Pettersson
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Medical Physics and Biomedical Engineering, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Fredrik Nordström
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Medical Physics and Biomedical Engineering, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Jon Kindblom
- Department of Oncology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Elin Trägårdh
- Department of Clinical Physiology and Nuclear Medicine, Lund University and Skåne University Hospital, Malmö, Sweden
| | - Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Henrik Kjölhede
- Department of Urology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Urology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| |
Collapse
|
19
|
Singh S, Singh BK, Kumar A. Multi-organ segmentation of organ-at-risk (OAR's) of head and neck site using ensemble learning technique. Radiography (Lond) 2024; 30:673-680. [PMID: 38364707 DOI: 10.1016/j.radi.2024.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 11/25/2023] [Accepted: 02/05/2024] [Indexed: 02/18/2024]
Abstract
INTRODUCTION This paper presents a novel approach to automate the segmentation of organs-at-risk (OARs) in head and neck cancer patients using deep learning models combined with ensemble learning techniques. The study aims to improve the accuracy and efficiency of OAR segmentation, essential for radiotherapy treatment planning. METHODS The dataset comprised computed tomography (CT) scans of 182 patients in DICOM format, obtained from an institutional image bank. Experienced radiation oncologists manually segmented seven OARs for each scan. Two models, 3D U-Net and 3D DenseNet-FCN, were trained on reduced CT scans (192 × 192 × 128) due to memory limitations. Ensemble learning techniques were employed to enhance accuracy and segmentation metrics. Testing was conducted on 78 patients from the institutional dataset and an open-source dataset (TCGA-HNSC and Head-Neck Cetuximab) consisting of 31 patient scans. RESULTS Using the ensemble learning technique, the average Dice similarity coefficient for OARs ranged from 0.990 to 0.994, indicating high segmentation accuracy. The 95% Hausdorff distance (mm) ranged from 1.3 to 2.1, demonstrating precise segmentation boundaries. CONCLUSION The proposed automated segmentation method achieved efficient and accurate OAR segmentation, surpassing human expert performance in terms of time and accuracy. IMPLICATIONS FOR PRACTICE This approach has implications for improving treatment planning and patient care in radiotherapy. By reducing reliance on manual segmentation, the proposed method offers significant time savings and potential improvements in treatment planning efficiency and precision for head and neck cancer patients.
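The 95% Hausdorff distance (HD95) quoted above trims the classic Hausdorff distance's sensitivity to single outlier points by taking the 95th percentile of surface distances instead of the maximum. A brute-force sketch on small point sets (illustrative only; a production implementation would operate on surface voxels and account for voxel spacing):

```python
import numpy as np

def percentile_hausdorff(pts_a, pts_b, q=95.0):
    """Symmetric qth-percentile Hausdorff distance between two point sets."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # full pairwise Euclidean distance matrix (fine for small contours)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-surface distance for each point of A
    b_to_a = d.min(axis=0)
    return max(np.percentile(a_to_b, q), np.percentile(b_to_a, q))

# Same square contour shifted by 1 mm in x
gt = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
pred = gt + np.array([1.0, 0.0])
print(percentile_hausdorff(gt, pred))  # → 1.0
```

On real contours the percentile matters: a single stray predicted point inflates the maximum-based Hausdorff distance but barely moves HD95.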
Collapse
Affiliation(s)
- S Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India; Department of Radiation Oncology, Division of Medical Physics, Rajiv Gandhi Cancer Institute and Research Center, New Delhi, India.
| | - B K Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India.
| | - A Kumar
- Department of Radiotherapy, S N. Medical College, Agra, Uttar Pradesh, India.
| |
Collapse
|
20
|
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024; 11:214. [PMID: 38534488 DOI: 10.3390/bioengineering11030214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 02/19/2024] [Accepted: 02/22/2024] [Indexed: 03/28/2024] Open
Abstract
The delineation of parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation processes ensure precise target positioning and treatment accuracy, facilitate monitoring of anatomical changes, enable plan adaptation, and enhance overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. Extensive evaluation of the model is performed on two public datasets and one private dataset, while segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess replanning necessity during treatment, an additional registration method is implemented on the segmentation output, aligning images of different modalities (Computed Tomography (CT) and Cone Beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice Similarity Coefficient: 82.65% ± 1.03, Hausdorff Distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure displays increased similarity, providing insights into the effects of RT procedures for treatment planning adaptations. The implementation of the proposed methods indicates the effectiveness of DL not only for automatic delineation of the anatomical structures, but also for the provision of information for adaptive RT support.
Collapse
Affiliation(s)
- Ioannis Kakkos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
| | - Theodoros P Vagenas
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
| | - Anna Zygogianni
- Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
| | - George K Matsopoulos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
| |
Collapse
|
21
|
Hanna EM, Sargent E, Hua CH, Merchant TE, Ates O. Performance analysis and knowledge-based quality assurance of critical organ auto-segmentation for pediatric craniospinal irradiation. Sci Rep 2024; 14:4251. [PMID: 38378834 PMCID: PMC11310500 DOI: 10.1038/s41598-024-55015-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Accepted: 02/19/2024] [Indexed: 02/22/2024] Open
Abstract
Craniospinal irradiation (CSI) is a vital therapeutic approach utilized for young patients suffering from central nervous system disorders such as medulloblastoma. The task of accurately outlining the treatment area is particularly time-consuming due to the presence of several sensitive organs at risk (OAR) that can be affected by radiation. This study aimed to assess two different methods for automating the segmentation process: an atlas technique and a deep learning neural network approach. Additionally, a novel method was devised to prospectively evaluate the accuracy of automated segmentation as a knowledge-based quality assurance (QA) tool. Involving a patient cohort of 100, ranging in age from 2 to 25 years with a median age of 8, this study employed quantitative metrics centered around overlap and distance calculations to determine the most effective approach for practical clinical application. The contours generated by the two methods, atlas and neural network, were compared to ground truth contours approved by a radiation oncologist, utilizing 13 distinct metrics. Furthermore, an innovative QA tool was conceptualized, designed for forthcoming cases based on the baseline dataset of 100 patient cases. The calculated metrics indicated that, in the majority of cases (60.58%), the neural network method demonstrated a notably higher alignment with the ground truth. Instances where no difference was observed accounted for 31.25%, while utilization of the atlas method represented 8.17%. The QA tool results showed that the two approaches achieved 100% agreement in 39.4% of instances for the atlas method and in 50.6% of instances for the neural network auto-segmentation. The results indicate that the neural network approach shows superior performance, with significantly closer physical alignment to the ground truth contours in the majority of cases. The metrics derived from overlap and distance measurements enable clinicians to discern the optimal choice for practical clinical application.
Collapse
Affiliation(s)
- Emeline M Hanna
- St. Jude Children's Research Hospital, Memphis, TN, 38105, USA
| | - Emma Sargent
- St. Jude Children's Research Hospital, Memphis, TN, 38105, USA
| | - Chia-Ho Hua
- St. Jude Children's Research Hospital, Memphis, TN, 38105, USA
| | | | - Ozgur Ates
- St. Jude Children's Research Hospital, Memphis, TN, 38105, USA.
| |
Collapse
|
22
|
Pu Q, Xi Z, Yin S, Zhao Z, Zhao L. Advantages of transformer and its application for medical image segmentation: a survey. Biomed Eng Online 2024; 23:14. [PMID: 38310297 PMCID: PMC10838005 DOI: 10.1186/s12938-024-01212-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2023] [Accepted: 01/22/2024] [Indexed: 02/05/2024] Open
Abstract
PURPOSE Convolution operator-based neural networks have shown great success in medical image segmentation over the past decade. The U-shaped network with a codec structure is one of the most widely used models. The transformer, a technology from natural language processing, can capture long-distance dependencies and has been applied in the Vision Transformer to achieve state-of-the-art performance on image classification tasks. Recently, researchers have extended the transformer to medical image segmentation tasks, resulting in strong models. METHODS This review comprises publications selected through a Web of Science search. We focused on papers published since 2018 that applied the transformer architecture to medical image segmentation. We conducted a systematic analysis of these studies and summarized the results. RESULTS To better comprehend the benefits of convolutional neural networks and transformers, the construction of the codec and transformer modules is first explained. Second, transformer-based medical image segmentation models are summarized. The typically used assessment metrics for medical image segmentation tasks are then listed. Finally, a large number of medical segmentation datasets are described. CONCLUSION Even for pure transformer models without any convolution operator, the limited sample sizes typical of medical image segmentation still restrict the transformer's growth, although this can be relieved by pretrained models. More often than not, researchers are still designing models that combine transformer and convolution operators.
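The long-distance dependency capture that motivates this survey comes from scaled dot-product attention, in which every token (image patch) attends to every other token in a single step. A bare NumPy sketch of that core operation (illustrative, not any specific surveyed model):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each query attends to all keys at once, so dependencies of any
    distance are modelled in one layer, unlike a convolution's local window."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))  # 4 patch embeddings of dimension 8
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
```

In a Vision Transformer, `q`, `k`, and `v` are learned linear projections of the patch embeddings; self-attention, as sketched here, derives all three from the same token sequence.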
Collapse
Affiliation(s)
- Qiumei Pu
- School of Information Engineering, Minzu University of China, Beijing, 100081, China
| | - Zuoxin Xi
- School of Information Engineering, Minzu University of China, Beijing, 100081, China
- CAS Key Laboratory for Biomedical Effects of Nanomaterials and Nanosafety Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, 100049, China
| | - Shuai Yin
- School of Information Engineering, Minzu University of China, Beijing, 100081, China
| | - Zhe Zhao
- The Fourth Medical Center of PLA General Hospital, Beijing, 100039, China
| | - Lina Zhao
- CAS Key Laboratory for Biomedical Effects of Nanomaterials and Nanosafety Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, 100049, China.
| |
Collapse
|
23
|
Salazar RM, Duryea JD, Leone AO, Nair SS, Mumme RP, De B, Corrigan KL, Rooney MK, Das P, Holliday EB, Court LE, Niedzielski JS. Random Forest Modeling of Acute Toxicity in Anal Cancer: Effects of Peritoneal Cavity Contouring Approaches on Model Performance. Int J Radiat Oncol Biol Phys 2024; 118:554-564. [PMID: 37619789 DOI: 10.1016/j.ijrobp.2023.08.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 08/04/2023] [Accepted: 08/13/2023] [Indexed: 08/26/2023]
Abstract
PURPOSE Our purpose was to analyze the effect on gastrointestinal (GI) toxicity models when their dose-volume metrics predictors are derived from segmentations of the peritoneal cavity after different contouring approaches. METHODS AND MATERIALS A random forest machine learning approach was used to predict acute grade ≥3 GI toxicity from dose-volume metrics and clinicopathologic factors for 246 patients (toxicity incidence = 9.5%) treated with definitive chemoradiation for squamous cell carcinoma of the anus. Three types of random forest models were constructed based on different bowel bag segmentation approaches: (1) physician-delineated after Radiation Therapy Oncology Group (RTOG) guidelines, (2) autosegmented by a deep learning model (nnU-Net) following RTOG guidelines, and (3) autosegmented but spanning the entire bowel space. Each model type was evaluated using repeated cross-validation (100 iterations; 50%/50% training/test split). The performance of the models was assessed using area under the precision-recall curve (AUPRC) and the receiver operating characteristic curve (AUROCC), as well as optimal F1 score. RESULTS When following RTOG guidelines, the models based on the nnU-Net auto segmentations (mean values: AUROCC, 0.71 ± 0.07; AUPRC, 0.42 ± 0.09; F1 score, 0.46 ± 0.08) significantly outperformed (P < .001) those based on the physician-delineated contours (mean values: AUROCC, 0.67 ± 0.07; AUPRC, 0.34 ± 0.08; F1 score, 0.36 ± 0.07). When spanning the entire bowel space, the performance of the autosegmentation models improved considerably (mean values: AUROCC, 0.87 ± 0.05; AUPRC, 0.70 ± 0.09; F1 score, 0.68 ± 0.09). CONCLUSIONS Random forest models were superior at predicting acute grade ≥3 GI toxicity when based on RTOG-defined bowel bag autosegmentations rather than physician-delineated contours. Models based on autosegmentations spanning the entire bowel space show further considerable improvement in model performance. 
The results of this study should be further validated using an external data set.
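The optimal F1 score reported for these models is found by sweeping the classification threshold over the predicted probabilities and keeping the best precision-recall trade-off. A small framework-free sketch with hypothetical scores (not the study's data):

```python
import numpy as np

def best_f1(y_true, scores):
    """Scan every candidate threshold and return the optimal F1 score."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    best = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & y_true)
        fp = np.sum(pred & ~y_true)
        fn = np.sum(~pred & y_true)
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

y_true = [1, 1, 0, 0]          # grade >=3 toxicity labels (hypothetical)
scores = [0.9, 0.4, 0.6, 0.1]  # model probabilities (hypothetical)
print(round(best_f1(y_true, scores), 3))  # → 0.8
```

With a 9.5% toxicity incidence, threshold-dependent metrics such as F1 and the area under the precision-recall curve are more informative than accuracy, which a trivial "no toxicity" classifier would already score above 90%.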
Collapse
Affiliation(s)
- Ramon M Salazar
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Jack D Duryea
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Alexandra O Leone
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Saurabh S Nair
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Raymond P Mumme
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Brian De
- Department of Radiation Oncology, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Kelsey L Corrigan
- Department of Radiation Oncology, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Michael K Rooney
- Department of Radiation Oncology, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Prajnan Das
- Department of Radiation Oncology, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Emma B Holliday
- Department of Radiation Oncology, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Laurence E Court
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas
| | - Joshua S Niedzielski
- Department of Radiation Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas.
| |
Collapse
|
24
|
Wang Y, Jian W, Zhu L, Cai C, Zhang B, Wang X. Attention-Gated Deep-Learning-Based Automatic Digitization of Interstitial Needles in High-Dose-Rate Brachytherapy for Cervical Cancer. Adv Radiat Oncol 2024; 9:101340. [PMID: 38260236 PMCID: PMC10801665 DOI: 10.1016/j.adro.2023.101340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 07/31/2023] [Indexed: 01/24/2024] Open
Abstract
Purpose Deep learning can be used to automatically digitize interstitial needles in high-dose-rate (HDR) brachytherapy for patients with cervical cancer. The aim of this study was to design a novel attention-gated deep-learning model, which may further improve the accuracy of needle digitization and better differentiate needles. Methods and Materials Seventeen patients with cervical cancer with 56 computed tomography-based interstitial HDR brachytherapy plans from the local hospital were retrospectively chosen with the local institutional review board's approval. Among them, 50 plans were randomly selected as the training set and the rest as the validation set. Spatial and channel attention gates (AGs) were added to 3-dimensional convolutional neural networks (CNNs) to highlight needle features and suppress irrelevant regions; this was expected to facilitate convergence and improve the accuracy of automatic needle digitization. Subsequently, the automatically digitized needles were exported to the Oncentra treatment planning system (Elekta Solutions AB, Stockholm, Sweden) for dose evaluation. The geometric and dosimetric accuracy of automatic needle digitization was compared among 3 methods: (1) clinically approved plans with manual needle digitization (ground truth); (2) the conventional deep-learning (CNN) method; and (3) the attention-added deep-learning (CNN + AG) method, in terms of the Dice similarity coefficient (DSC), tip and shaft positioning errors, dose distribution in the high-risk clinical target volume (HR-CTV), organs at risk, and so on. Results The attention-gated CNN model was superior to CNN without AGs, with a greater DSC (approximately 94% for CNN + AG vs 89% for CNN). The needle tip and shaft errors of the CNN + AG method (1.1 mm and 1.8 mm, respectively) were also much smaller than those of the CNN method (2.0 mm and 3.3 mm, respectively).
Finally, the dose difference for the HR-CTV D90 was much smaller with the CNN + AG method than with CNN (0.4% vs 1.7%). Conclusions The attention-added deep-learning model was successfully implemented for automatic needle digitization in HDR brachytherapy, with clinically acceptable geometric and dosimetric accuracy. Compared with conventional deep-learning neural networks, attention-gated deep learning may have superior performance and great clinical potential.
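The attention gates (AGs) above learn coefficients that rescale CNN feature maps so needle voxels are emphasised and background is suppressed. A deliberately simplified, scalar-weight sketch of the additive gating idea (the published model uses learned convolutional weights in 3-D; all names here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x=1.0, w_g=1.0, psi=1.0):
    """Additive attention gate: combine the feature map x with a coarser
    gating signal g, squash to coefficients in (0, 1), and rescale x."""
    q = np.maximum(w_x * x + w_g * g, 0.0)  # ReLU of the additive combination
    alpha = sigmoid(psi * q)                # attention coefficients
    return alpha * x, alpha

# Toy 1-D "feature map": only the second position is supported by the gate
x = np.array([1.0, 5.0])
g = np.array([0.0, 3.0])
gated, alpha = attention_gate(x, g)
```

Positions where the feature map and the gating signal agree receive coefficients near 1 and pass through nearly unchanged; unsupported regions are attenuated, which is how irrelevant anatomy is suppressed before the decoder.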
Collapse
Affiliation(s)
- Yuenan Wang
- Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut
| | - Wanwei Jian
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| |
Collapse
|
25
|
He W, Zhang C, Dai J, Liu L, Wang T, Liu X, Jiang Y, Li N, Xiong J, Wang L, Xie Y, Liang X. A statistical deformation model-based data augmentation method for volumetric medical image segmentation. Med Image Anal 2024; 91:102984. [PMID: 37837690 DOI: 10.1016/j.media.2023.102984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2022] [Revised: 07/15/2023] [Accepted: 09/28/2023] [Indexed: 10/16/2023]
Abstract
The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets. Obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional data augmentation techniques cannot generate more realistic deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance in numerous OARs segmentation challenges. This innovative approach holds considerable potential as a powerful tool for various medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
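A statistical deformation model of this kind is typically built by running PCA on displacement fields from the training cohort and sampling new, anatomically plausible deformations from the resulting modes. A toy NumPy sketch with random stand-in data (in practice the fields come from registering training images to a template; every name here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for registration output: 10 subjects, displacement fields
# flattened to 60-dimensional vectors
fields = rng.normal(size=(10, 60))

mean_field = fields.mean(axis=0)
centered = fields - mean_field
# PCA via SVD of the centered data matrix
_, s, vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s**2 / (fields.shape[0] - 1)  # variance captured by each mode

def sample_deformation(n_modes=3, scale=1.0):
    """Draw a plausible new displacement field from the statistical model."""
    coeffs = rng.normal(scale=scale, size=n_modes)
    return mean_field + (coeffs * np.sqrt(eigvals[:n_modes])) @ vt[:n_modes]

new_field = sample_deformation()
print(new_field.shape)  # → (60,)
```

Each sampled field can then warp a training CT and its labels together, producing realistic augmented pairs rather than the simple rotations or Gaussian smoothing the abstract contrasts against.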
Collapse
Affiliation(s)
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
| | - Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Xuan Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Yuming Jiang
- Department of Radiation Oncology, Wake Forest University School of Medicine, Winston Salem, North Carolina 27157, USA
| | - Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, China
| | - Jing Xiong
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Lei Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
| | - Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
| |
Collapse
|
26
|
Balgobind BV, Visser J, Grehn M, Marquard Knap M, de Ruysscher D, Levis M, Alcantara P, Boda-Heggemann J, Both M, Cozzi S, Cvek J, Dieleman EMT, Elicin O, Giaj-Levra N, Jumeau R, Krug D, Algara López M, Mayinger M, Mehrhof F, Miszczyk M, Pérez-Calatayud MJ, van der Pol LHG, van der Toorn PP, Vitolo V, Postema PG, Pruvot E, Verhoeff JC, Blanck O. Refining critical structure contouring in STereotactic Arrhythmia Radioablation (STAR): Benchmark results and consensus guidelines from the STOPSTORM.eu consortium. Radiother Oncol 2023; 189:109949. [PMID: 37827279 DOI: 10.1016/j.radonc.2023.109949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 09/05/2023] [Accepted: 10/09/2023] [Indexed: 10/14/2023]
Abstract
BACKGROUND AND PURPOSE In patients with recurrent ventricular tachycardia (VT), STereotactic Arrhythmia Radioablation (STAR) shows promising results. The STOPSTORM.eu consortium was established to investigate and harmonise STAR treatment in Europe. The primary goals of this benchmark study were to standardise contouring of organs at risk (OAR) for STAR, including detailed substructures of the heart, and accredit each participating centre. MATERIALS AND METHODS Centres within the STOPSTORM.eu consortium were asked to delineate 31 OAR in three STAR cases. Delineation was reviewed by the consortium expert panel, and after a dedicated workshop, feedback and accreditation were provided to all participants. Further quantitative analysis was performed by calculating DICE similarity coefficients (DSC), median distance to agreement (MDA), and 95th percentile distance to agreement (HD95). RESULTS Twenty centres participated in this study. Based on DSC, MDA and HD95, the delineations of well-known OAR in radiotherapy were similar, such as lungs (median DSC = 0.96, median MDA = 0.1 mm and median HD95 = 1.1 mm) and aorta (median DSC = 0.90, median MDA = 0.1 mm and median HD95 = 1.5 mm). Some centres did not include the gastro-oesophageal junction, leading to differences in stomach and oesophagus delineations. For cardiac substructures, such as chambers (median DSC = 0.83, median MDA = 0.2 mm and median HD95 = 0.5 mm), valves (median DSC = 0.16, median MDA = 4.6 mm and median HD95 = 16.0 mm), coronary arteries (median DSC = 0.4, median MDA = 0.7 mm and median HD95 = 8.3 mm) and the sinoatrial and atrioventricular nodes (median DSC = 0.29, median MDA = 4.4 mm and median HD95 = 11.4 mm), deviations between centres occurred more frequently. After the dedicated workshop, all centres were accredited and contouring consensus guidelines for STAR were established.
CONCLUSION This STOPSTORM multi-centre critical structure contouring benchmark study showed high agreement for standard radiotherapy OAR. However, for cardiac substructures larger disagreement in contouring occurred, which may have significant impact on STAR treatment planning and dosimetry evaluation. To standardize OAR contouring, consensus guidelines for critical structure contouring in STAR were established.
Collapse
Affiliation(s)
- Brian V Balgobind
- Department of Radiation Oncology, Amsterdam UMC location University of Amsterdam, Amsterdam, the Netherlands.
| | - Jorrit Visser
- Department of Radiation Oncology, Amsterdam UMC location University of Amsterdam, Amsterdam, the Netherlands
| | - Melanie Grehn
- Department of Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel, Germany
| | | | - Dirk de Ruysscher
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University, Maastricht, the Netherlands
| | - Mario Levis
- Department of Oncology, University of Torino, Torino, Italy
| | - Pino Alcantara
- Department of Radiation Oncology, Hospital Clínico San Carlos, Faculty of Medicine, University Complutense of Madrid, Madrid, Spain
| | - Judit Boda-Heggemann
- Department of Radiation Oncology, University Medical Center Mannheim, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
| | - Marcus Both
- Department of Radiology and Neuroradiology, University Medical Center Schleswig-Holstein, Kiel, Germany
| | - Salvatore Cozzi
- Radiation Oncology Unit, Azienda USL-IRCCS, Reggio Emilia, Italy; Radiation Oncology Department, Centre Léon Bérard, Lyon, France
| | - Jakub Cvek
- Department of Oncology, University Hospital and Faculty of Medicine, Ostrava, Czech Republic
| | - Edith M T Dieleman
- Department of Radiation Oncology, Amsterdam UMC location University of Amsterdam, Amsterdam, the Netherlands
| | - Olgun Elicin
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Niccolò Giaj-Levra
- Advanced Radiation Oncology Department, IRCCS Sacro Cuore Don Calabria Hospital, Negrar, Verona, Italy
| | - Raphaël Jumeau
- Department of Radio-Oncology, Lausanne University Hospital, Lausanne, Switzerland
| | - David Krug
- Department of Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel, Germany
| | - Manuel Algara López
- Department of Radiotherapy, Hospital del Mar, Universitat Pompeu Fabra, Barcelona, Spain
| | - Michael Mayinger
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, Switzerland
| | - Felix Mehrhof
- Department for Radiation Oncology, Charité - Universitätsmedizin Berlin, Berlin, Germany
| | - Marcin Miszczyk
- IIIrd Radiotherapy and Chemotherapy Department, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice, Poland
| | | | - Luuk H G van der Pol
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
| | | | - Viviana Vitolo
- Radiation Oncology Clinical Department, National Center of Oncological Hadrontherapy (Fondazione CNAO), Pavia, Italy
| | - Pieter G Postema
- Department of Cardiology, Amsterdam UMC location University of Amsterdam, Amsterdam, the Netherlands
| | - Etienne Pruvot
- Heart and Vessel Department, Service of Cardiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Joost C Verhoeff
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Oliver Blanck
- Department of Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel, Germany
| |
Collapse
|
27
|
Liao W, Luo X, He Y, Dong Y, Li C, Li K, Zhang S, Zhang S, Wang G, Xiao J. Comprehensive Evaluation of a Deep Learning Model for Automatic Organs-at-Risk Segmentation on Heterogeneous Computed Tomography Images for Abdominal Radiation Therapy. Int J Radiat Oncol Biol Phys 2023; 117:994-1006. [PMID: 37244625 DOI: 10.1016/j.ijrobp.2023.05.034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 03/13/2023] [Accepted: 05/18/2023] [Indexed: 05/29/2023]
Abstract
PURPOSE Our purpose was to develop a deep learning model (AbsegNet) that produces accurate contours of 16 organs at risk (OARs) for abdominal malignancies as an essential part of fully automated radiation treatment planning. METHODS AND MATERIALS Three data sets with 544 computed tomography scans were retrospectively collected. Data set 1 was split into 300 training cases and 128 test cases (cohort 1) for AbsegNet. Data set 2, including cohort 2 (n = 24) and cohort 3 (n = 20), were used to validate AbsegNet externally. Data set 3, including cohort 4 (n = 40) and cohort 5 (n = 32), were used to clinically assess the accuracy of AbsegNet-generated contours. Each cohort was from a different center. The Dice similarity coefficient and 95th-percentile Hausdorff distance were calculated to evaluate the delineation quality for each OAR. Clinical accuracy evaluation was classified into 4 levels: no revision, minor revisions (0% < volumetric revision degrees [VRD] ≤ 10%), moderate revisions (10% ≤ VRD < 20%), and major revisions (VRD ≥20%). RESULTS For all OARs, AbsegNet achieved a mean Dice similarity coefficient of 86.73%, 85.65%, and 88.04% in cohorts 1, 2, and 3, respectively, and a mean 95th-percentile Hausdorff distance of 8.92, 10.18, and 12.40 mm, respectively. The performance of AbsegNet outperformed SwinUNETR, DeepLabV3+, Attention-UNet, UNet, and 3D-UNet. When experts evaluated contours from cohorts 4 and 5, 4 OARs (liver, kidney_L, kidney_R, and spleen) of all patients were scored as having no revision, and over 87.5% of patients with contours of the stomach, esophagus, adrenals, or rectum were considered as having no or minor revisions. Only 15.0% of patients with colon and small bowel contours required major revisions. CONCLUSIONS We propose a novel deep-learning model to delineate OARs on diverse data sets. Most contours produced by AbsegNet are accurate and robust and are, therefore, clinically applicable and helpful to facilitate radiation therapy workflow.
Collapse
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Ye Dong
- Department of NanFang PET Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Churong Li
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Kang Li
- West China Biomedical Big Data Center
| | - Shichuan Zhang
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai AI Laboratory, Shanghai, China
| | - Jianghong Xiao
- Radiotherapy Physics & Technology Center, Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China.
| |
Collapse
|
28
|
Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023; 22:104. [PMID: 37915046 PMCID: PMC10621161 DOI: 10.1186/s12938-023-01159-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 09/21/2023] [Indexed: 11/03/2023] Open
Abstract
PURPOSE The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial, yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study aims to conduct a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations in contour planning. METHODS A literature search of the PubMed, Embase, and Cochrane Library databases was conducted for studies on DL contouring of head and neck OARs, and the Dice similarity coefficient (DSC) of four categories of OARs reported in each study was selected as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS 149 articles were retrieved, and 22 studies were included in the meta-analysis after removal of duplicates, primary screening, and re-screening. The combined effect sizes of DSC for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands are 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the combined effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland using CT versus MRI images are 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87, respectively. Pooled effect sizes using 2D versus 3D images for contouring of the brainstem, mandible, left optic nerve, and left parotid gland are 0.88/0.87, 0.92/0.92, 0.75/0.71, and 0.87/0.85.
CONCLUSIONS The use of automated contouring technology based on DL algorithms is an essential tool for contouring head and neck OARs, achieving high accuracy, reducing the workload of clinical radiation oncologists, and providing individualized, standardized, and refined treatment plans for implementing "precision radiotherapy". Improving DL performance requires the construction of high-quality data sets and enhancing algorithm optimization and innovation.
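The "combined effect sizes" reported above come from meta-analytic weighting of per-study mean DSC values. A minimal fixed-effect, inverse-variance sketch follows; the numbers are illustrative, not taken from the review, and the review itself may have used a random-effects model (e.g. DerSimonian-Laird), which additionally estimates between-study variance:

```python
import numpy as np

def pooled_dsc(means, sds, ns):
    """Fixed-effect inverse-variance pooling of per-study mean DSC values.

    means : per-study mean DSC
    sds   : per-study standard deviation of DSC
    ns    : per-study sample size
    Returns the pooled estimate and a 95% confidence interval.
    """
    means, sds, ns = map(np.asarray, (means, sds, ns))
    se2 = sds ** 2 / ns          # squared standard error of each study mean
    w = 1.0 / se2                # inverse-variance weights
    pooled = np.sum(w * means) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

With two equally sized, equally precise studies, the pooled value is simply the mean of the two study means; unequal precision shifts the pooled value toward the more precise study.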
Collapse
Affiliation(s)
- Peiru Liu
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
| | - Ying Sun
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
| | - Xinzhuo Zhao
- Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
| | - Ying Yan
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China.
| |
Collapse
|
29
|
Luan S, Wei C, Ding Y, Xue X, Wei W, Yu X, Wang X, Ma C, Zhu B. PCG-net: feature adaptive deep learning for automated head and neck organs-at-risk segmentation. Front Oncol 2023; 13:1177788. [PMID: 37927463 PMCID: PMC10623055 DOI: 10.3389/fonc.2023.1177788] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 10/03/2023] [Indexed: 11/07/2023] Open
Abstract
Introduction Radiation therapy is a common treatment option for Head and Neck Cancer (HNC), where accurate segmentation of Head and Neck (HN) Organs-At-Risk (OARs) is critical for effective treatment planning. Manual labeling of HN OARs is time-consuming and subjective, so deep learning segmentation methods have been widely used. However, HN OAR segmentation remains challenging because of small-sized OARs such as the optic chiasm and optic nerves. Methods To address this challenge, we propose a parallel network architecture called PCG-Net, which incorporates both convolutional neural networks (CNNs) and a Gate-Axial-Transformer (GAT) to effectively capture local information and global context. Additionally, we employ a cascade graph module (CGM) to enhance feature fusion through message-passing functions and information aggregation strategies. We conducted extensive experiments to evaluate the effectiveness of PCG-Net and its robustness in three different downstream tasks. Results The results show that PCG-Net outperforms other methods and improves the accuracy of HN OAR segmentation, which can potentially improve treatment planning for HNC patients. Discussion In summary, the PCG-Net model effectively establishes the dependency between local information and global context and employs the CGM to enhance feature fusion for accurate segmentation of HN OARs. The results demonstrate the superiority of PCG-Net over other methods, making it a promising approach for HNC treatment planning.
Collapse
Affiliation(s)
- Shunyao Luan
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
| | - Changchao Wei
- Key Laboratory of Artificial Micro and Nano-structures of Ministry of Education, Center for Theoretical Physics, School of Physics and Technology, Wuhan University, Wuhan, China
| | - Yi Ding
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Xudong Xue
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Wei Wei
- Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Xiao Yu
- Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
| | - Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
| | - Chi Ma
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States
| | - Benpeng Zhu
- School of Integrated Circuit, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
30
|
Sundström E, Laudato M. Machine Learning-Based Segmentation of the Thoracic Aorta with Congenital Valve Disease Using MRI. Bioengineering (Basel) 2023; 10:1216. [PMID: 37892946 PMCID: PMC10604748 DOI: 10.3390/bioengineering10101216] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Revised: 09/21/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023] Open
Abstract
Subjects with bicuspid aortic valves (BAV) are at risk of developing valve dysfunction and need regular clinical imaging surveillance. Management of BAV involves manual and time-consuming segmentation of the aorta for assessing left ventricular function, jet velocity, gradient, shear stress, and valve area with aortic valve stenosis. This paper aims to employ machine learning-based (ML) segmentation as a potential means of improving BAV assessment and reducing manual bias. The focus is on quantifying the relationship between valve morphology and vortical structures, and on analyzing how valve morphology influences the aorta's susceptibility to shear stress that may lead to valve incompetence. The ML-based segmentation model is trained on whole-body Computed Tomography (CT). Magnetic Resonance Imaging (MRI) is acquired from six subjects, three with tricuspid aortic valves (TAV) and three with functionally BAV with right-left leaflet fusion. These are used for segmentation of the cardiovascular system and delineation of four-dimensional phase-contrast magnetic resonance imaging (4D-PCMRI) for quantification of vortical structures and wall shear stress. The ML-based segmentation model exhibits a high Dice score (0.86) for the heart, indicating robust segmentation; the Dice score for the thoracic aorta is comparatively poor (0.72). Wall shear stress is found to be predominantly symmetric in TAVs, whereas BAVs exhibit highly asymmetric wall shear stress, with the region opposite the fused coronary leaflets experiencing elevated tangential wall shear stress. This is due to the higher tangential velocity explained by helical flow proximal to the sinotubular junction of the ascending aorta.
ML-based segmentation not only reduces the runtime of assessing the hemodynamic effectiveness, but also identifies the significance of the tangential wall shear stress in addition to the axial wall shear stress that may lead to the progression of valve incompetence in BAVs, which could guide potential adjustments in surgical interventions.
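The axial versus tangential wall shear stress distinction above can be made concrete: given a local outward wall normal and the vessel centerline direction, the WSS vector splits into an in-plane axial component and a circumferential (tangential) component. A hedged sketch, assuming unit normals and centerline tangents precomputed from the segmentation:

```python
import numpy as np

def decompose_wss(tau, axis, normal):
    """Split a wall shear stress vector into axial and circumferential parts.

    tau    : WSS vector (Pa), already tangent to the wall
    axis   : unit vector along the local vessel centerline
    normal : unit outward wall normal
    """
    axis = axis / np.linalg.norm(axis)
    normal = normal / np.linalg.norm(normal)
    # in-plane axial direction: centerline axis projected onto the wall plane
    ax_t = axis - np.dot(axis, normal) * normal
    ax_t = ax_t / np.linalg.norm(ax_t)
    circ = np.cross(normal, ax_t)          # circumferential direction
    return np.dot(tau, ax_t), np.dot(tau, circ)
```

In a TAV-like symmetric flow the circumferential component is small; the helical flow described for BAVs shows up as a large circumferential term at the wall points opposite the fused leaflets.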
Collapse
Affiliation(s)
- Elias Sundström
- Department of Engineering Mechanics, FLOW Research Center, KTH Royal Institute of Technology, Teknikringen 8, 10044 Stockholm, Sweden
| | - Marco Laudato
- Department of Engineering Mechanics, FLOW Research Center, KTH Royal Institute of Technology, Teknikringen 8, 10044 Stockholm, Sweden
- Department of Engineering Mechanics, The Marcus Wallenberg Laboratory for Sound and Vibration Research, KTH Royal Institute of Technology, Teknikringen 8, 10044 Stockholm, Sweden
| |
Collapse
|
31
|
Wasserthal J, Breit HC, Meyer MT, Pradella M, Hinck D, Sauter AW, Heye T, Boll DT, Cyriac J, Yang S, Bach M, Segeroth M. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiol Artif Intell 2023; 5:e230024. [PMID: 37795137 PMCID: PMC10546353 DOI: 10.1148/ryai.230024] [Citation(s) in RCA: 111] [Impact Index Per Article: 111.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 05/16/2023] [Accepted: 06/14/2023] [Indexed: 10/06/2023]
Abstract
Purpose To present a deep learning segmentation model that can automatically and robustly segment all major anatomic structures on body CT images. Materials and Methods In this retrospective study, 1204 CT examinations (from 2012, 2016, and 2020) were used to segment 104 anatomic structures (27 organs, 59 bones, 10 muscles, and eight vessels) relevant for use cases such as organ volumetry, disease characterization, and surgical or radiation therapy planning. The CT images were randomly sampled from routine clinical studies and thus represent a real-world dataset (different ages, abnormalities, scanners, body parts, sequences, and sites). The authors trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance. The trained algorithm was applied to a second dataset of 4004 whole-body CT examinations to investigate age-dependent volume and attenuation changes. Results The proposed model showed a high Dice score (0.943) on the test set, which included a wide range of clinical data with major abnormalities. The model significantly outperformed another publicly available segmentation model on a separate dataset (Dice score, 0.932 vs 0.871; P < .001). The aging study demonstrated significant correlations between age and volume and mean attenuation for a variety of organ groups (eg, age and aortic volume [rs = 0.64; P < .001]; age and mean attenuation of the autochthonous dorsal musculature [rs = -0.74; P < .001]). Conclusion The developed model enables robust and accurate segmentation of 104 anatomic structures. The annotated dataset (https://doi.org/10.5281/zenodo.6802613) and toolkit (https://www.github.com/wasserth/TotalSegmentator) are publicly available. Keywords: CT, Segmentation, Neural Networks. Supplemental material is available for this article. © RSNA, 2023. See also the commentary by Sebro and Mongan in this issue.
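The aging analysis above reduces each segmented structure to two numbers, volume and mean attenuation. Given a CT volume in Hounsfield units and an integer label map (such as the output of a segmentation model), both follow directly; the function below is an illustrative sketch, not part of the TotalSegmentator toolkit, and the label value and spacing are assumptions:

```python
import numpy as np

def organ_stats(ct_hu, labels, organ_id, spacing_mm):
    """Volume (mL) and mean attenuation (HU) of one labelled structure.

    ct_hu      : CT volume in Hounsfield units
    labels     : integer label map aligned with ct_hu
    organ_id   : label value of the structure of interest
    spacing_mm : voxel spacing (z, y, x) in mm
    """
    mask = labels == organ_id
    voxel_ml = np.prod(spacing_mm) / 1000.0       # mm^3 per voxel -> mL
    volume_ml = mask.sum() * voxel_ml
    mean_hu = ct_hu[mask].mean() if mask.any() else float("nan")
    return volume_ml, mean_hu
```

Correlating these per-patient values against age (e.g. with Spearman's rs, as in the paper) reproduces the kind of aging curves reported for aortic volume and dorsal musculature attenuation.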
Collapse
Affiliation(s)
- Jakob Wasserthal
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Hanns-Christian Breit
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Manfred T. Meyer
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Maurice Pradella
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Daniel Hinck
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Alexander W. Sauter
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Tobias Heye
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Daniel T. Boll
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Joshy Cyriac
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Shan Yang
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Michael Bach
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| | - Martin Segeroth
- From the Clinic of Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland, Petersgraben 4, 4031 Basel, Switzerland
| |
Collapse
|
32
|
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821 DOI: 10.1016/j.compbiomed.2023.107268] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 05/30/2023] [Accepted: 07/16/2023] [Indexed: 07/28/2023]
Abstract
The transformer originated in the field of natural language processing and has recently been adopted, with promising results, in the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also benefits greatly from this state-of-the-art technique. In this review, we first recap the transformer's core component, the attention mechanism, and the detailed structure of the transformer. We then depict the recent progress of the transformer in MIA, organizing the applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. The large number of experiments examined in this review illustrates that transformer-based methods outperform existing methods across multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review, with up-to-date content, detailed information, and comprehensive comparisons, may greatly benefit the broad MIA community.
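The attention mechanism this review recaps is, at its core, scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal single-head NumPy sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V: (n_tokens, d_k) arrays. Returns the attended output and the
    attention weight matrix (each row sums to 1).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
    weights = softmax(scores, axis=-1)   # each query attends over all keys
    return weights @ V, weights
```

Because every query attends over all keys, the receptive field is global from the first layer, which is the property the medical-imaging literature contrasts with the local receptive fields of CNNs.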
Collapse
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| | - Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
| | - Ziduo Yang
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China.
| | - Yifan Li
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| | - Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore.
| | - Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
| |
Collapse
|
33
|
Shamshad F, Khan S, Zamir SW, Khan MH, Hayat M, Khan FS, Fu H. Transformers in medical imaging: A survey. Med Image Anal 2023; 88:102802. [PMID: 37315483 DOI: 10.1016/j.media.2023.102802] [Citation(s) in RCA: 81] [Impact Index Per Article: 81.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/11/2023] [Accepted: 03/23/2023] [Indexed: 06/16/2023]
Abstract
Following their unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, restoration, synthesis, registration, clinical report generation, and other tasks. For each of these applications, we develop a taxonomy, identify application-specific challenges, provide insights to solve them, and highlight recent trends. Further, we offer a critical discussion of the field's current state as a whole, including the identification of key challenges and open problems and an outline of promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development of this field, we intend to regularly update the relevant latest papers and their open-source implementations at https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging.
Collapse
Affiliation(s)
- Fahad Shamshad
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates.
| | - Salman Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; CECS, Australian National University, Canberra ACT 0200, Australia
| | - Syed Waqas Zamir
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
| | | | - Munawar Hayat
- Faculty of IT, Monash University, Clayton VIC 3800, Australia
| | - Fahad Shahbaz Khan
- MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Computer Vision Laboratory, Linköping University, Sweden
| | - Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| |
Collapse
|
34
|
Uh J, Wang C, Jordan JA, Pirlepesov F, Becksfort JB, Ates O, Krasin MJ, Hua CH. A hybrid method of correcting CBCT for proton range estimation with deep learning and deformable image registration. Phys Med Biol 2023; 68:10.1088/1361-6560/ace754. [PMID: 37442128 PMCID: PMC10846632 DOI: 10.1088/1361-6560/ace754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 07/13/2023] [Indexed: 07/15/2023]
Abstract
Objective. This study aimed to develop a novel method for generating synthetic CT (sCT) from cone-beam CT (CBCT) of the abdomen/pelvis with bowel gas pockets to facilitate estimation of proton ranges. Approach. CBCT, the same-day repeat CT, and the planning CT (pCT) of 81 pediatric patients were used for training (n = 60), validation (n = 6), and testing (n = 15) of the method. The proposed method hybridizes unsupervised deep learning (CycleGAN) and deformable image registration (DIR) of the pCT to CBCT. The CycleGAN and DIR are applied to generate, respectively, the geometry-weighted (high spatial-frequency) and intensity-weighted (low spatial-frequency) components of the sCT, so that each process deals only with the component weighted toward its strength. The resultant sCT is further improved in bowel gas regions and other tissues by iteratively feeding back the sCT to adjust incorrect DIR and by increasing the contribution of the deformed pCT in regions of accurate DIR. Main results. The hybrid sCT was more accurate than the deformed pCT and the CycleGAN-only sCT, as indicated by the smaller mean absolute error in CT numbers (28.7 ± 7.1 HU versus 38.8 ± 19.9 HU/53.2 ± 5.5 HU; P ≤ 0.012) and the higher Dice similarity of the internal gas regions (0.722 ± 0.088 versus 0.180 ± 0.098/0.659 ± 0.129; P ≤ 0.002). Accordingly, the hybrid method resulted in more accurate proton ranges for the beams intersecting gas pockets (11 fields in 6 patients) than the individual methods (90th-percentile error in 80% distal fall-off, 1.8 ± 0.6 mm versus 6.5 ± 7.8 mm/3.7 ± 1.5 mm; P ≤ 0.013). The gamma passing rates also showed a significant dosimetric advantage for the hybrid method (99.7 ± 0.8% versus 98.4 ± 3.1%/98.3 ± 1.8%; P ≤ 0.007). Significance. The hybrid method significantly improved the accuracy of sCT and shows promise for CBCT-based proton range verification and adaptive replanning of abdominal/pelvic proton therapy even when gas pockets are present in the beam path.
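The hybridization idea, low spatial frequencies from the deformably registered pCT and high spatial frequencies from the CycleGAN output, can be illustrated with a simple Gaussian frequency split. This is an illustrative decomposition only; the paper's actual weighting scheme and iterative DIR feedback are more elaborate:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_sct(cyclegan_sct, deformed_pct, sigma=3.0):
    """Combine the intensity-weighted (low-frequency) component of the
    deformed planning CT with the geometry-weighted (high-frequency)
    component of the CycleGAN output.

    sigma: Gaussian cutoff scale (voxels) separating the two bands.
    """
    # low-pass: accurate CT numbers from the registered planning CT
    low = gaussian_filter(deformed_pct, sigma)
    # high-pass: anatomy-of-the-day edges (e.g. gas pockets) from CycleGAN
    high = cyclegan_sct - gaussian_filter(cyclegan_sct, sigma)
    return low + high
```

When the two inputs agree, the split-and-recombine operation is an identity; where they disagree, each input contributes only the band it is trusted for.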
Collapse
Affiliation(s)
- Jinsoo Uh
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Chuang Wang
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Jacob A Jordan
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- College of Medicine, The University of Tennessee Health Science Center, Memphis, TN, United States of America
| | - Fakhriddin Pirlepesov
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Jared B Becksfort
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Ozgur Ates
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Matthew J Krasin
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| | - Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
| |
Collapse
|
35
|
Xiao H, Li L, Liu Q, Zhu X, Zhang Q. Transformers in medical image segmentation: A review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104791] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
|
36
|
Kachelrieß M. [Risk-minimizing tube current modulation for computed tomography]. RADIOLOGIE (HEIDELBERG, GERMANY) 2023:10.1007/s00117-023-01160-5. [PMID: 37306750 DOI: 10.1007/s00117-023-01160-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Accepted: 04/28/2023] [Indexed: 06/13/2023]
Abstract
AIM/PROBLEM Every computed tomography (CT) examination is accompanied by radiation exposure. The aim is to reduce this as much as possible without compromising image quality by using a tube current modulation technique. STANDARD PROCEDURE CT tube current modulation (TCM), which has been in use for about two decades, adjusts the tube current to the patient's attenuation (in the angular and z‑directions) in a way that minimizes the mAs product (tube current-time product) of the scan without compromising image quality. This mAsTCM, present in all CT devices, is associated with a significant dose reduction in those anatomical areas that have high attenuation differences between anterior-posterior (a.p.) and lateral, particularly the shoulder and pelvis. Radiation risk of individual organs or of the patient is not considered in mAsTCM. METHODOLOGICAL INNOVATION Recently, a TCM method was proposed that directly minimizes the patient's radiation risk by predicting organ dose levels and taking them into account when choosing tube current. It is shown that this so-called riskTCM is significantly superior to mAsTCM in all body regions. To be able to use riskTCM in clinical routine, only a software adaptation of the CT system would be necessary. CONCLUSIONS With riskTCM, significant dose reductions can be achieved compared to the standard procedure, typically around 10%-30%. This is especially true in those body regions where the standard procedure shows only moderate advantages over a scan without any tube current modulation at all. It is now up to the CT vendors to take action and implement riskTCM.
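The difference between mAsTCM and riskTCM can be illustrated with a toy optimization: choose per-view tube currents that minimize a weighted dose (risk) subject to a fixed total noise variance. With all weights equal this reduces to a plain mAs-minimizing modulation. The noise model and closed form below are illustrative assumptions, not the algorithm of the paper:

```python
import numpy as np

def tcm_currents(attenuation, risk_weights, noise_budget):
    """Toy risk-minimizing tube current modulation.

    Minimize total risk  sum_i w_i * I_i  subject to a fixed total noise
    variance  sum_i exp(a_i) / I_i = N  (per-view variance assumed to scale
    as exp(attenuation)/current). Lagrange multipliers give the closed form
    I_i = sqrt(c_i / w_i) * sum_j sqrt(c_j * w_j) / N,  c_i = exp(a_i).
    """
    c = np.exp(np.asarray(attenuation, dtype=float))
    w = np.asarray(risk_weights, dtype=float)
    s = np.sum(np.sqrt(c * w))
    return np.sqrt(c / w) * s / noise_budget
```

By construction the risk-weighted currents never yield a higher risk than the uniform-weight (mAs-minimizing) currents at the same noise budget, which is the qualitative point of riskTCM.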
Collapse
Affiliation(s)
- Marc Kachelrieß
- Abteilung Röntgenbildgebung und Computertomographie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg, Deutschland.
| |
Collapse
|
37
|
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 30] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask the question: can Transformer models transform medical imaging? In this paper, we attempt to respond to that inquiry. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of the state-of-the-art Transformer-based approaches for medical imaging and exhibit current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, which are mostly derived from comparing the Transformer and the CNN, and on its type of architecture, which specifies the manner in which the Transformer and CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Collapse
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
| | - Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
| | - Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
| | - Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China.
| |
Collapse
|
38
|
Zhong Y, Guo Y, Fang Y, Wu Z, Wang J, Hu W. Geometric and dosimetric evaluation of deep learning based auto-segmentation for clinical target volume on breast cancer. J Appl Clin Med Phys 2023:e13951. [PMID: 36920901 DOI: 10.1002/acm2.13951] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 02/09/2023] [Accepted: 02/12/2023] [Indexed: 03/16/2023] Open
Abstract
BACKGROUND Recently, target auto-segmentation techniques based on deep learning (DL) have shown promising results. However, inaccurate target delineation directly affects the treatment-planning dose distribution and the subsequent course of radiotherapy. Evaluation based on geometric metrics alone may not be sufficient to assess target delineation accuracy. The purpose of this paper is to validate the performance of automatic segmentation with dosimetric metrics and to construct new geometric evaluation metrics, so as to comprehensively understand the dose-response relationship from the perspective of clinical application. MATERIALS AND METHODS A DL-based target segmentation model was developed using 186 manually delineated modified radical mastectomy breast cancer cases. The resulting DL model was used to generate alternative target contours in a new set of 48 patients. The Auto-plan was reoptimized to ensure the same optimization parameters as the reference Manual-plan. To assess the dosimetric impact of target auto-segmentation, common geometric metrics were supplemented with new spatial parameters based on distance and relative volume (R_V) to the target. Correlations between segmentation evaluation metrics and dosimetric changes were assessed using Spearman's correlation. RESULTS Only strong (|R²| > 0.6, p < 0.01) or moderate (|R²| > 0.4, p < 0.01) correlation was established between the traditional geometric metrics and three dosimetric evaluation indices for the target (conformity index, homogeneity index, and mean dose). For organs at risk (OARs), a weaker or no significant relationship was found between geometric parameters and dosimetric differences. Furthermore, we found that the OAR dose distribution was affected by boundary errors of the target segmentation rather than by distance and R_V to the target. CONCLUSIONS Current geometric metrics reflect the dose effect of target variation only to a certain degree. To find target contour variations that do lead to OAR dosimetry changes, clinically oriented metrics that more accurately reflect how segmentation quality affects dosimetry should be constructed.
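The geometric-versus-dosimetric comparison above leans on overlap scores, chiefly the Dice similarity coefficient used throughout this literature. A minimal sketch of DSC on binary masks (toy masks, not the study's data):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy segmentations on a 5x5 grid
ref = np.zeros((5, 5), dtype=bool)
ref[1:4, 1:4] = True    # 3x3 reference region (9 voxels)
auto = np.zeros((5, 5), dtype=bool)
auto[1:4, 2:5] = True   # laterally shifted auto-contour (9 voxels, 6 overlap)

print(round(dice_coefficient(ref, auto), 3))  # → 0.667
```

As the study's results suggest, a shift like this can leave DSC moderate while its dosimetric consequence depends entirely on where the boundary error sits relative to the OARs.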
Collapse
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Ying Guo
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Zhiqiang Wu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| |
Collapse
|
39
|
Artificial intelligence-supported applications in head and neck cancer radiotherapy treatment planning and dose optimisation. Radiography (Lond) 2023; 29:496-502. [PMID: 36889022 DOI: 10.1016/j.radi.2023.02.018] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 02/11/2023] [Accepted: 02/20/2023] [Indexed: 03/08/2023]
Abstract
INTRODUCTION The aim of this review is to describe how various AI-supported applications are used in head and neck cancer radiotherapy treatment planning, and their impact on dose management with regard to the target volume and nearby organs at risk (OARs). METHODS Literature searches were conducted in the databases and publisher portals PubMed, Science Direct, CINAHL, Ovid, and ProQuest for peer-reviewed studies published between 2015 and 2021. RESULTS Out of 464 potential articles, ten covering the topic were selected. The benefit of using deep learning-based methods to automatically segment OARs is that they make the process more efficient while producing clinically acceptable OAR doses. In some cases, automated treatment planning systems can outperform traditional systems in dose prediction. CONCLUSIONS Based on the selected articles, AI-based systems generally produced time savings. AI-based solutions also perform at the same level as, or better than, traditional planning systems in auto-segmentation, treatment planning, and dose prediction. However, their clinical implementation into routine standard of care should be carefully validated. IMPLICATIONS TO PRACTICE AI has a primary benefit of reducing treatment planning time and improving plan quality, allowing dose reduction to the OARs and thereby enhancing patients' quality of life. It has a secondary benefit of reducing the time radiation therapists spend annotating, freeing their time for, e.g., patient encounters.
Collapse
|
40
|
Elhakim T, Trinh K, Mansur A, Bridge C, Daye D. Role of Machine Learning-Based CT Body Composition in Risk Prediction and Prognostication: Current State and Future Directions. Diagnostics (Basel) 2023; 13:968. [PMID: 36900112 PMCID: PMC10000509 DOI: 10.3390/diagnostics13050968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 02/11/2023] [Accepted: 02/18/2023] [Indexed: 03/08/2023] Open
Abstract
CT body composition analysis has been shown to play an important role in predicting health and has the potential to improve patient outcomes if implemented clinically. Recent advances in artificial intelligence and machine learning have led to high speed and accuracy for extracting body composition metrics from CT scans. These may inform preoperative interventions and guide treatment planning. This review aims to discuss the clinical applications of CT body composition in clinical practice, as it moves towards widespread clinical implementation.
Collapse
Affiliation(s)
- Tarig Elhakim
- Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Kelly Trinh
- School of Medicine, Texas Tech University Health Sciences Center, Lubbock, TX 79430, USA
| | - Arian Mansur
- Harvard Medical School, Harvard University, Boston, MA 02115, USA
| | - Christopher Bridge
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Harvard University, Boston, MA 02115, USA
| | - Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Harvard University, Boston, MA 02115, USA
| |
Collapse
|
41
|
Finnegan RN, Chin V, Chlap P, Haidar A, Otton J, Dowling J, Thwaites DI, Vinod SK, Delaney GP, Holloway L. Open-source, fully-automated hybrid cardiac substructure segmentation: development and optimisation. Phys Eng Sci Med 2023; 46:377-393. [PMID: 36780065 PMCID: PMC10030448 DOI: 10.1007/s13246-023-01231-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 01/30/2023] [Indexed: 02/14/2023]
Abstract
Radiotherapy for thoracic and breast tumours is associated with a range of cardiotoxicities. Emerging evidence suggests cardiac substructure doses may be more predictive of specific outcomes; however, the quantitative data necessary to develop clinical planning constraints are lacking. Retrospective analysis of patient data is required, which relies on accurate segmentation of cardiac substructures. In this study, a novel model was designed to deliver reliable, accurate, and anatomically consistent segmentation of 18 cardiac substructures on computed tomography (CT) scans. Thirty manually contoured CT scans were included. The proposed multi-stage method leverages deep learning (DL), multi-atlas mapping, and geometric modelling to automatically segment the whole heart, cardiac chambers, great vessels, heart valves, coronary arteries, and conduction nodes. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), mean distance to agreement (MDA), Hausdorff distance (HD), and volume ratio. Performance was reliable, with no errors observed and acceptable variation in accuracy between cases, including in challenging cases with imaging artefacts and atypical patient anatomy. The median DSC range was 0.81-0.93 for the whole heart and cardiac chambers, 0.43-0.76 for great vessels and conduction nodes, and 0.22-0.53 for heart valves. For all structures the median MDA was below 6 mm, the median HD ranged from 7.7 to 19.7 mm, and the median volume ratio was close to one (0.95-1.49) for all structures except the left main coronary artery (2.07). The fully automatic algorithm takes between 9 and 23 min per case. The proposed fully-automatic method accurately delineates cardiac substructures on radiotherapy planning CT scans. Robust and anatomically consistent segmentation, particularly of smaller structures, represents a major advantage of the proposed approach. The open-source software will facilitate more precise evaluation of cardiac doses and risks from available clinical datasets.
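The surface-distance metrics reported above (MDA and HD) can be computed from point samples of two contour surfaces; a brute-force sketch for small point sets follows (a KD-tree would replace the pairwise distance matrix at clinical scale):

```python
import numpy as np

def nearest_distances(pts_a, pts_b):
    """For each point in pts_a, Euclidean distance to its nearest point in pts_b."""
    diff = pts_a[:, None, :] - pts_b[None, :, :]   # (n_a, n_b, dim) differences
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

def mda_and_hd(pts_a, pts_b):
    d_ab = nearest_distances(pts_a, pts_b)
    d_ba = nearest_distances(pts_b, pts_a)
    mda = (d_ab.mean() + d_ba.mean()) / 2.0        # mean distance to agreement
    hd = max(d_ab.max(), d_ba.max())               # symmetric Hausdorff distance
    return mda, hd

# Two toy 2D "surfaces" (point samples along contours)
a = np.array([[0.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.0], [0.0, 3.0]])
mda, hd = mda_and_hd(a, b)
print(mda, hd)  # → 0.75 2.0
```

Note how HD is driven by the single worst point while MDA averages over the whole surface, which is why small structures such as valves can have acceptable MDA yet large HD.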
Collapse
Affiliation(s)
- Robert N Finnegan
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia.
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia.
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia.
| | - Vicky Chin
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Phillip Chlap
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Ali Haidar
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - James Otton
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Jason Dowling
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- CSIRO Health and Biosecurity, The Australian e-Health and Research Centre, Herston, QLD, Australia
- School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW, Australia
| | - David I Thwaites
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- Radiotherapy Research Group, Leeds Institute of Medical Research, St James's Hospital and University of Leeds, Leeds, UK
| | - Shalini K Vinod
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Geoff P Delaney
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Lois Holloway
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
| |
Collapse
|
42
|
Nagami N, Arimura H, Nojiri J, Yunhao C, Ninomiya K, Ogata M, Oishi M, Ohira K, Kitamura S, Irie H. Dual segmentation models for poorly and well-differentiated hepatocellular carcinoma using two-step transfer deep learning on dynamic contrast-enhanced CT images. Phys Eng Sci Med 2023; 46:83-97. [PMID: 36469246 DOI: 10.1007/s13246-022-01202-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 11/17/2022] [Indexed: 12/12/2022]
Abstract
The aim of this study was to develop dual segmentation models for poorly and well-differentiated hepatocellular carcinoma (HCC), using two-step transfer learning (TSTL) based on dynamic contrast-enhanced (DCE) computed tomography (CT) images. From 2013 to 2019, DCE-CT images of 128 patients with 80 poorly differentiated and 48 well-differentiated HCCs were selected at our hospital. In the first transfer learning (TL) step, a segmentation model pre-trained on 192 CT images of lung cancer patients was retrained as a poorly differentiated HCC model. In the second TL step, a well-differentiated HCC model was built from the poorly differentiated HCC model. The average three-dimensional Dice similarity coefficient (3D-DSC) and the 95th percentile of the Hausdorff distance (95% HD) were employed to evaluate segmentation accuracy, based on a nested fourfold cross-validation test. The DSC denotes the degree of regional similarity between the HCC reference regions and the regions estimated by the proposed models. The 95% HD is the 95th percentile of the distances measuring how far two subsets of a metric space are from each other. The average 3D-DSC and 95% HD were 0.849 ± 0.078 and 1.98 ± 0.71 mm, respectively, for poorly differentiated HCC regions, and 0.811 ± 0.089 and 2.01 ± 0.84 mm, respectively, for well-differentiated HCC regions. The average 3D-DSC for both regions was 1.2 times higher than that obtained without the TSTL. The proposed model using TSTL from the lung cancer dataset showed the potential to segment poorly and well-differentiated HCC regions on DCE-CT images.
Collapse
Affiliation(s)
- Noriyuki Nagami
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Hidetaka Arimura
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan.
| | - Junichi Nojiri
- Medical Corporation Kouhoukai, Takagi Hospital, 141-11, Sakemi, Okawa City, Fukuoka, 831-0016, Japan
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Cui Yunhao
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
| | - Kenta Ninomiya
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1, Maidashi, Higashi-Ku, Fukuoka City, Fukuoka, 812-8582, Japan
| | - Manabu Ogata
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Mitsutoshi Oishi
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Keiichi Ohira
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Shigetoshi Kitamura
- Department of Radiology, Saga University Hospital, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| | - Hiroyuki Irie
- Department of Radiology, Faculty of Medicine, Saga University, 5-1-1, Nabeshima, Saga City, Saga, 849-8501, Japan
| |
Collapse
|
43
|
Li W, Song H, Li Z, Lin Y, Shi J, Yang J, Wu W. OrbitNet-A fully automated orbit multi-organ segmentation model based on transformer in CT images. Comput Biol Med 2023; 155:106628. [PMID: 36809695 DOI: 10.1016/j.compbiomed.2023.106628] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 01/11/2023] [Accepted: 01/28/2023] [Indexed: 02/18/2023]
Abstract
The delineation of orbital organs is a vital step in orbital disease diagnosis and preoperative planning. However, accurate multi-organ segmentation remains a clinical problem with two limitations. First, the contrast of soft tissue is relatively low, so organ boundaries are often not clearly visible. Second, the optic nerve and the rectus muscle are difficult to distinguish because they are spatially adjacent and have similar geometry. To address these challenges, we propose the OrbitNet model to automatically segment orbital organs in CT images. Specifically, we present a global feature extraction module based on the transformer architecture, called the FocusTrans encoder, which enhances the ability to extract boundary features. To make the network focus on extracting edge features of the optic nerve and the rectus muscle, an SA block replaces the convolution block in the decoding stage. In addition, we use the structural similarity measure (SSIM) loss as part of a hybrid loss function to better learn the edge differences between organs. OrbitNet was trained and tested on a CT dataset collected by the Eye Hospital of Wenzhou Medical University. The experimental results show that our proposed model achieved superior results: the average Dice similarity coefficient (DSC) is 83.9%, the average 95% Hausdorff distance (HD95) is 1.62 mm, and the average symmetric surface distance (ASSD) is 0.47 mm. Our model also performs well on the MICCAI 2015 challenge dataset.
Collapse
Affiliation(s)
- Wentao Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China.
| | - Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China.
| | - Zongyu Li
- School of Medical and Technology, Beijing Institute of Technology, Beijing, 100081, China.
| | - Yucong Lin
- School of Medical and Technology, Beijing Institute of Technology, Beijing, 100081, China.
| | - Jieliang Shi
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325072, China.
| | - Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.
| | - Wencan Wu
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325072, China.
| |
Collapse
|
44
|
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for the types of metric used and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%), including the Dice similarity coefficient in 113 (96.6%). Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were used less frequently: in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment differed in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was given in only 11 (9.4%) papers. A single manual contour served as the ground-truth comparator in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular; however, their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework for deciding the most appropriate metrics. This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
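The naming heterogeneity this review documents is easy to reproduce even within a single metric: "HD95" appears in the literature both as a per-direction percentile followed by a maximum and as a percentile over the pooled nearest-surface distances, and the two conventions disagree. A small illustration with hypothetical distance arrays (not from any study in the review):

```python
import numpy as np

# Nearest-surface distances in each direction (auto→ref and ref→auto), toy values
d_ab = np.array([0.0, 0.0, 0.0, 1.0])
d_ba = np.array([0.0, 0.0, 0.0, 10.0])

hd_max = max(d_ab.max(), d_ba.max())                      # classic Hausdorff distance
hd95_per_dir = max(np.percentile(d_ab, 95),               # variant 1: per-direction
                   np.percentile(d_ba, 95))               #   percentile, then max
hd95_pooled = np.percentile(np.concatenate([d_ab, d_ba]), 95)  # variant 2: pooled

print(hd_max, hd95_per_dir, hd95_pooled)  # → 10.0 8.5 6.85
```

Three defensible numbers for "the" Hausdorff-style distance of the same pair of contours, which is precisely the kind of ambiguity a standardised reporting convention would remove.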
Collapse
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK.
| | - D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| | - B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
| | - K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
| | - A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
| |
Collapse
|
45
|
Ladbury C, Abuali T, Liu J, Watkins W, Du D, Massarelli E, Villaflor V, Liu A, Salgia R, Williams T, Glaser S, Amini A. Prognostic Role of Biologically Active Volume of Disease in Patients With Metastatic Lung Adenocarcinoma. Clin Lung Cancer 2023; 24:244-251. [PMID: 36759265 DOI: 10.1016/j.cllc.2023.01.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 01/11/2023] [Accepted: 01/12/2023] [Indexed: 01/22/2023]
Abstract
BACKGROUND The number of metastatic sites can identify patient populations with non-small cell lung cancer (NSCLC) that benefit from aggressive therapy. The total volume of disease is also relevant. We evaluated the prognostic impact of biologically active volume of disease (BaVD) in patients with metastatic lung adenocarcinoma. MATERIALS AND METHODS Positron emission tomography/computed tomography (PET/CT) scans from patients with newly diagnosed lung adenocarcinoma, acquired prior to starting any therapy, were identified. SUV thresholds of 3 and 4 were used to auto-contour all FDG-avid areas. Kaplan-Meier analysis and Cox regression were performed to examine the influence on overall survival (OS). RESULTS One hundred forty-eight patients were included in the analysis. The median BaVD was 122.8 mL at an SUV threshold of 3 and 46.2 mL at an SUV threshold of 4. When stratified by median BaVD using an SUV of 3, median OS was higher for patients with ≤122.8 mL (2.12 years) compared to patients with >122.8 mL (1.46 years) (log-rank P = .001). Similarly, when stratified by median BaVD using an SUV of 4, median OS was higher for patients with ≤46.2 mL (1.91 years; 95% CI: 1.65-3.22 years) compared to patients with >46.2 mL (1.48 years; 95% CI: 1.07-1.80 years) (log-rank P = .007). On multivariable analysis, BaVD was significantly associated with OS at SUV thresholds of both 3 (HR: 20.169, P < .001) and 4 (HR: 4.117, P < .001). CONCLUSION BaVD is an important prognostic factor in metastatic lung adenocarcinoma and may aid identification of patients with limited disease who may be candidates for more aggressive therapies.
Collapse
Affiliation(s)
- Colton Ladbury
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Tariq Abuali
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Jason Liu
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - William Watkins
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Dongsu Du
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Erminia Massarelli
- Department of Medical Oncology and Therapeutics Research, City of Hope National Medical Center, Duarte, CA
| | - Victoria Villaflor
- Department of Medical Oncology and Therapeutics Research, City of Hope National Medical Center, Duarte, CA
| | - An Liu
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Ravi Salgia
- Department of Medical Oncology and Therapeutics Research, City of Hope National Medical Center, Duarte, CA
| | - Terence Williams
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Scott Glaser
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| | - Arya Amini
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA
| |
Collapse
|
46
|
Li X, Bagher-Ebadian H, Gardner S, Kim J, Elshaikh M, Movsas B, Zhu D, Chetty IJ. An uncertainty-aware deep learning architecture with outlier mitigation for prostate gland segmentation in radiotherapy treatment planning. Med Phys 2023; 50:311-322. [PMID: 36112996 DOI: 10.1002/mp.15982] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 08/24/2022] [Accepted: 08/25/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE Task automation is essential for efficient and consistent image segmentation in radiation oncology. We report on a deep learning architecture, comprising a U-Net and a variational autoencoder (VAE), for automatic contouring of the prostate gland that incorporates interobserver variation for radiotherapy treatment planning. The U-Net/VAE generates an ensemble set of segmentations for each CT image slice. A novel outlier mitigation (OM) technique was implemented to enhance the model's segmentation accuracy. METHODS The primary source dataset (source_prim) consisted of 19 200 CT slices (from 300 patient planning CT image datasets) with manually contoured prostate glands. A smaller secondary source dataset (source_sec) comprised 640 CT slices (from 10 patient CT datasets), in which the prostate gland was segmented by five independent physicians on each dataset to account for interobserver variability. Data augmentation via random rotation (<5 degrees), cropping, and horizontal flipping was applied to each dataset to increase the sample size by a factor of 100. A probabilistic hierarchical U-Net with VAE was implemented and pretrained on the augmented source_prim dataset for 30 epochs. The model parameters of the U-Net/VAE were then fine-tuned on the augmented source_sec dataset for 100 epochs. After the first round of training, outlier contours in the training dataset were automatically detected and replaced by the most accurate contours (based on the Dice similarity coefficient, DSC) generated by the model. The U-Net/OM-VAE was then retrained using the revised training dataset. Metrics for comparison included DSC, Hausdorff distance (HD, mm), normalized cross-correlation (NCC) coefficient, and center-of-mass (COM) distance (mm).
RESULTS Results for U-Net/OM-VAE with outliers replaced in the training dataset versus U-Net/VAE without OM were as follows: DSC = 0.82 ± 0.01 versus 0.80 ± 0.02 (p = 0.019), HD = 9.18 ± 1.22 versus 10.18 ± 1.35 mm (p = 0.043), NCC = 0.59 ± 0.07 versus 0.62 ± 0.06, and COM = 3.36 ± 0.81 versus 4.77 ± 0.96 mm over the average of 15 contours. For the average of 15 highest accuracy contours, values were as follows: DSC = 0.90 ± 0.02 versus 0.85 ± 0.02, HD = 5.47 ± 0.02 versus 7.54 ± 1.36 mm, and COM = 1.03 ± 0.58 versus 1.46 ± 0.68 mm (p < 0.03 for all metrics). Results for the U-Net/OM-VAE with outliers removed were as follows: DSC = 0.78 ± 0.01, HD = 10.65 ± 1.95 mm, NCC = 0.46 ± 0.10, COM = 4.17 ± 0.79 mm for the average of 15 contours, and DSC = 0.88 ± 0.02, HD = 7.00 ± 1.17 mm, COM = 1.58 ± 0.63 mm for the average of 15 highest accuracy contours. All metrics for U-Net/VAE trained on the source_prim and source_sec datasets via pretraining, followed by fine-tuning, show statistically significant improvement over that trained on the source_sec dataset only. Finally, all metrics for U-Net/VAE with or without OM showed statistically significant improvement over those for the standard U-Net. CONCLUSIONS A VAE combined with a hierarchical U-Net and an OM strategy (U-Net/OM-VAE) demonstrates promise toward capturing interobserver variability and produces accurate prostate auto-contours for radiotherapy planning. The availability of multiple contours for each CT slice enables clinicians to determine trade-offs in selecting the "best fitting" contour on each CT slice. Mitigation of outlier contours in the training dataset improves prediction accuracy, but one must be wary of reduction in variability in the training dataset.
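The outlier-mitigation step described above, detecting low-accuracy training contours by DSC and substituting the ensemble's best candidate, can be sketched abstractly as follows (the function name and threshold are illustrative, not the paper's exact procedure):

```python
def mitigate_outliers(contours, dsc_scores, threshold=0.7):
    """Replace contours whose DSC falls below `threshold` with the
    highest-DSC contour from the same ensemble (the OM idea, simplified)."""
    best = max(range(len(contours)), key=lambda i: dsc_scores[i])
    return [contours[i] if dsc_scores[i] >= threshold else contours[best]
            for i in range(len(contours))]

# Toy ensemble: contour IDs with their DSC against the reference
contours = ["c1", "c2", "c3", "c4"]
scores = [0.91, 0.55, 0.83, 0.68]
print(mitigate_outliers(contours, scores))  # → ['c1', 'c1', 'c3', 'c1']
```

This also makes the paper's closing caveat concrete: every substitution removes one distinct contour from the training set, so aggressive mitigation trades outliers for reduced variability.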
Collapse
Affiliation(s)
- Xin Li
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
| | - Hassan Bagher-Ebadian
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| | - Stephen Gardner
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| | - Joshua Kim
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| | - Mohamed Elshaikh
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| | - Benjamin Movsas
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| | - Dongxiao Zhu
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
| | - Indrin J Chetty
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
| |
Collapse
|
47
|
Costea M, Zlate A, Durand M, Baudier T, Grégoire V, Sarrut D, Biston MC. Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system. Radiother Oncol 2022; 177:61-70. [PMID: 36328093 DOI: 10.1016/j.radonc.2022.10.029] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 10/21/2022] [Accepted: 10/23/2022] [Indexed: 11/06/2022]
Abstract
BACKGROUND AND PURPOSE To investigate the performance of head-and-neck (HN) organs-at-risk (OAR) automatic segmentation (AS) using four atlas-based (ABAS) and two deep learning (DL) solutions. MATERIAL AND METHODS All patients underwent iodine contrast-enhanced planning CT. Fourteen OAR were manually delineated. The DL.1 and DL.2 solutions were trained with 63 mono-centric patients and >1000 multi-centric patients, respectively. Ten and 15 patients with varied anatomies were selected for the atlas library and for testing, respectively. The evaluation was based on geometric indices (DICE coefficient and 95th percentile-Hausdorff Distance (HD95%)), the time needed for manual corrections, and clinical dosimetric endpoints obtained using automated treatment planning. RESULTS Both DICE and HD95% results indicated that the DL algorithms generally performed better than the ABAS algorithms for automatic segmentation of HN OAR. However, the hybrid-ABAS (ABAS.3) algorithm sometimes provided the highest agreement with the reference contours compared with the two DL solutions. Compared with DL.2 and ABAS.3, DL.1 contours were the fastest to correct. For the three solutions, the differences in dose distributions obtained using AS contours and AS plus manually corrected contours were not statistically significant. High dose differences could be observed when OAR contours were at short distances to the targets, although the two were not always correlated. CONCLUSION DL methods generally showed higher delineation accuracy than ABAS methods for AS of HN OAR. Most ABAS contours had high conformity to the reference but were more time consuming than DL algorithms, especially when considering both the computing time and the time spent on manual corrections.
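The HD95% index used in this study replaces the maximum surface distance with a 95th percentile, so isolated outlier points no longer dominate the result as they do for the plain Hausdorff distance. A hedged NumPy sketch follows; conventions vary between toolkits (this version pools both directed nearest-neighbour distance sets, and works in voxel units on toy masks of my own):

```python
import numpy as np

def hd95(a, b):
    """95th-percentile Hausdorff distance (HD95%) between two binary
    masks, in voxel units. Pools both directed nearest-neighbour
    distance sets and takes their 95th percentile."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    # Pairwise distances, then the nearest-neighbour distance in each direction.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return float(np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95))
```

Some implementations instead take the maximum of the two directed 95th percentiles; reported HD95% values should therefore state which convention was used.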
Collapse
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | - Morgane Durand
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France
| | - Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | - David Sarrut
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
| | - Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France.
| |
Collapse
|
48
|
Artificial intelligence and machine learning in cancer imaging. COMMUNICATIONS MEDICINE 2022; 2:133. [PMID: 36310650 PMCID: PMC9613681 DOI: 10.1038/s43856-022-00199-0] [Citation(s) in RCA: 59] [Impact Index Per Article: 29.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.
Collapse
|
49
|
Li Z, Zhu Q, Zhang L, Yang X, Li Z, Fu J. A deep learning-based self-adapting ensemble method for segmentation in gynecological brachytherapy. Radiat Oncol 2022; 17:152. [PMID: 36064571 PMCID: PMC9446699 DOI: 10.1186/s13014-022-02121-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Accepted: 08/29/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Fast and accurate outlining of the organs at risk (OARs) and high-risk clinical tumor volume (HRCTV) is especially important in high-dose-rate brachytherapy due to the highly time-intensive online treatment planning process and the high dose gradient around the HRCTV. This study aims to apply a self-configuring ensemble method for fast and reproducible auto-segmentation of OARs and HRCTVs in gynecological cancer. MATERIALS AND METHODS We applied nnU-Net (no new U-Net), an automatically adapted deep convolutional neural network based on U-Net, to segment the bladder, rectum and HRCTV on CT images in gynecological cancer. In nnU-Net, three architectures, 2D U-Net, 3D U-Net and 3D-Cascade U-Net, were trained and finally ensembled. 207 cases were randomly chosen for training and 30 for testing. Quantitative evaluation used well-established image segmentation metrics, including the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95%), and average surface distance (ASD). Qualitative analysis of the automated segmentation results was performed visually by two radiation oncologists. Dosimetric evaluation compared the dose-volume parameters of the predicted segmentations and the human contouring. RESULTS nnU-Net obtained high qualitative and quantitative segmentation accuracy on the test dataset and performed better than previously reported methods in bladder and rectum segmentation. In the quantitative evaluation, 3D-Cascade achieved the best performance for the bladder (DSC: 0.936 ± 0.051, HD95%: 3.503 ± 1.956, ASD: 0.944 ± 0.503), rectum (DSC: 0.831 ± 0.074, HD95%: 7.579 ± 5.857, ASD: 3.6 ± 3.485), and HRCTV (DSC: 0.836 ± 0.07, HD95%: 7.42 ± 5.023, ASD: 2.094 ± 1.311). In the qualitative evaluation, over 76% of the test dataset had no or only minor visually detectable errors in segmentation. CONCLUSION This work showed nnU-Net's superiority in segmenting OARs and HRCTV in gynecological brachytherapy cases in our center, with 3D-Cascade showing the highest segmentation accuracy across different applicators and patient anatomies.
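The ensembling step this abstract describes combines the 2D, 3D, and 3D-Cascade configurations by averaging their per-class probability maps and taking the argmax, which is the strategy nnU-Net uses by default. A minimal sketch of that step, with illustrative array shapes and names (not taken from the paper's code):

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average per-model class-probability maps, each shaped
    (n_classes, *spatial), and return the argmax label map."""
    return np.mean(np.stack(prob_maps), axis=0).argmax(axis=0)

# Two toy "models" scoring 2 classes over 2 voxels:
m1 = np.array([[0.9, 0.2], [0.1, 0.8]])  # model 1: class 0, then class 1
m2 = np.array([[0.6, 0.6], [0.4, 0.4]])  # model 2: class 0, then class 0
labels = ensemble_predict([m1, m2])      # averaged: [[0.75, 0.4], [0.25, 0.6]]
```

Averaging probabilities rather than majority-voting hard labels lets a confident model outvote an uncertain one, as in the second voxel above.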
Collapse
Affiliation(s)
- Zhen Li
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| | - Qingyuan Zhu
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| | - Lihua Zhang
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| | - Xiaojing Yang
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| | - Zhaobin Li
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| | - Jie Fu
- Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Xuhui District, Shanghai, China
| |
Collapse
|
50
|
WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image. Med Image Anal 2022; 82:102642. [DOI: 10.1016/j.media.2022.102642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 08/18/2022] [Accepted: 09/20/2022] [Indexed: 11/22/2022]
|