1. Oliver J, Alapati R, Lee J, Bur A. Artificial Intelligence in Head and Neck Surgery. Otolaryngol Clin North Am 2024; 57:803-820. PMID: 38910064; PMCID: PMC11374486; DOI: 10.1016/j.otc.2024.05.001
Abstract
This article explores artificial intelligence's (AI's) role in otolaryngology for head and neck cancer diagnosis and management. It highlights AI's potential in pattern recognition for early cancer detection, prognostication, and treatment planning, primarily through image analysis using clinical, endoscopic, and histopathologic images. The article also discusses radiomics at length, along with the many uses of radiologic image analysis, including diagnosis, lymph node metastasis prediction, and evaluation of treatment response. It highlights AI's promise and limitations, underlining the need for clinician-data scientist collaboration to enhance head and neck cancer care.
Affiliations
- Jamie Oliver: Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Rahul Alapati: Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Jason Lee: Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Andrés Bur: Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
2. Alapati R, Renslo B, Wagoner SF, Karadaghy O, Serpedin A, Kim YE, Feucht M, Wang N, Ramesh U, Bon Nieves A, Lawrence A, Virgen C, Sawaf T, Rameau A, Bur AM. Assessing the Reporting Quality of Machine Learning Algorithms in Head and Neck Oncology. Laryngoscope 2024. PMID: 39258420; DOI: 10.1002/lary.31756
Abstract
OBJECTIVE This study aimed to assess reporting quality of machine learning (ML) algorithms in the head and neck oncology literature using the TRIPOD-AI criteria. DATA SOURCES A comprehensive search was conducted using PubMed, Scopus, Embase, and Cochrane Database of Systematic Reviews, incorporating search terms related to "artificial intelligence," "machine learning," "deep learning," "neural network," and various head and neck neoplasms. REVIEW METHODS Two independent reviewers analyzed each published study for adherence to the 65-point TRIPOD-AI criteria. Items were classified as "Yes," "No," or "NA" for each publication. The proportion of studies satisfying each TRIPOD-AI criterion was calculated. Additionally, the evidence level for each study was evaluated independently by two reviewers using the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence. Discrepancies were reconciled through discussion until consensus was reached. RESULTS The study highlights the need for improvements in ML algorithm reporting in head and neck oncology. This includes more comprehensive descriptions of datasets, standardization of model performance reporting, and increased sharing of ML models, data, and code with the research community. Adoption of TRIPOD-AI is necessary for achieving standardized ML research reporting in head and neck oncology. CONCLUSION Current reporting of ML algorithms hinders clinical application, reproducibility, and understanding of the data used for model training. To overcome these limitations and improve patient and clinician trust, ML developers should provide open access to models, code, and source data, fostering iterative progress through community critique, thus enhancing model accuracy and mitigating biases. LEVEL OF EVIDENCE NA Laryngoscope, 2024.
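The checklist-adherence tallies described in the review methods reduce to a simple per-item proportion over "Yes"/"No"/"NA" ratings. The sketch below is a minimal illustration; the item labels and ratings are hypothetical, not data from the study:

```python
def adherence_by_item(ratings):
    """Proportion of studies rated 'Yes' for each TRIPOD-AI item.

    ratings maps an item label to one rating per reviewed study;
    'NA' ratings are excluded from the denominator.
    """
    out = {}
    for item, values in ratings.items():
        applicable = [v for v in values if v != "NA"]
        out[item] = (sum(v == "Yes" for v in applicable) / len(applicable)
                     if applicable else None)
    return out

# Hypothetical ratings for three studies on two checklist items.
example = {
    "Data description": ["Yes", "No", "Yes"],
    "Code shared": ["No", "No", "NA"],
}
proportions = adherence_by_item(example)
```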
Affiliations
- Rahul Alapati: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Bryan Renslo: Department of Otolaryngology-Head & Neck Surgery, Thomas Jefferson University, Philadelphia, Pennsylvania, U.S.A.
- Sarah F Wagoner: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Omar Karadaghy: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Aisha Serpedin: Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A.
- Yeo Eun Kim: Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A.
- Maria Feucht: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Naomi Wang: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Uma Ramesh: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Antonio Bon Nieves: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Amelia Lawrence: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Celina Virgen: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
- Tuleen Sawaf: Department of Otolaryngology-Head & Neck Surgery, University of Maryland, Baltimore, Maryland, U.S.A.
- Anaïs Rameau: Department of Otolaryngology-Head & Neck Surgery, Weill Cornell, New York City, New York, U.S.A.
- Andrés M Bur: Department of Otolaryngology-Head & Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, U.S.A.
3. Schmitz E, Guo Y, Wang J. Adaptive fine-tuning based transfer learning for the identification of MGMT promoter methylation status. Biomed Phys Eng Express 2024; 10:055018. PMID: 39029475; PMCID: PMC11288403; DOI: 10.1088/2057-1976/ad6573
Abstract
Background. Glioblastoma multiforme (GBM) is an aggressive malignant brain tumor with a generally poor prognosis. O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation has been shown to be a predictive biomarker for treatment resistance in GBM, but determining methylation status is invasive and time-consuming. Efforts have been made to predict MGMT methylation status from MRI scans using machine learning, which requires only the pre-operative scans that are already part of standard-of-care for GBM patients. Purpose. To improve on conventional transfer learning for identifying MGMT promoter methylation status, we developed a 3D SpotTune network with adaptive fine-tuning capability. Using the pretrained weights of MedicalNet with the SpotTune network, we compared its performance with a randomly initialized network across different combinations of MR modalities. Methods. Using ResNet50 as the base network, three categories of networks were created: (1) a 3D SpotTune network to process volumetric MR images, (2) a network with randomly initialized weights, and (3) a network pre-trained on MedicalNet. These three networks were trained and evaluated on a public GBM dataset provided by the University of Pennsylvania, comprising MRI scans from 240 patients with 11 different modalities spanning perfusion, diffusion, and structural sequences. Performance was evaluated using 5-fold cross-validation with a held-out testing dataset. Results. The SpotTune network outperformed the randomly initialized network. The best-performing SpotTune model achieved an area under the receiver operating characteristic curve (AUC) of 0.6604, average precision of the precision-recall curve (AP) of 0.6179, sensitivity of 0.6667, and specificity of 0.6061. Conclusions. SpotTune makes transfer learning adaptive to individual patients, improving the prediction of MGMT promoter methylation status in GBM compared with a randomly initialized network using equivalent MRI modalities.
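SpotTune's adaptive fine-tuning can be sketched compactly: each sample is routed, block by block, through either a frozen pretrained copy or a fine-tuned copy of that block. The toy example below uses tiny fully connected residual blocks in place of 3D ResNet50 blocks, and hard-coded routes in place of the learned policy network; it illustrates only the routing mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

def res_block(x, W):
    """One toy residual block: x + ReLU(x @ W)."""
    return x + np.maximum(x @ W, 0.0)

# Frozen (pretrained) weights and their fine-tuned copies for two blocks.
W_frozen = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(2)]
W_tuned = [W + rng.normal(scale=0.01, size=(dim, dim)) for W in W_frozen]

def spottune_forward(x, route):
    """Route each sample through the frozen or fine-tuned copy of each block.

    route[i, l] is the gate for sample i at layer l (1 = fine-tuned path).
    SpotTune learns these gates with a small policy network, relaxing the
    discrete choice via Gumbel-softmax during training; here they are given.
    """
    for Wf, Wt in zip(W_frozen, W_tuned):
        r = route[:, :1]
        route = route[:, 1:]
        x = r * res_block(x, Wt) + (1.0 - r) * res_block(x, Wf)
    return x

x = rng.normal(size=(3, dim))
route = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
features = spottune_forward(x, route)
```

With an all-zero route the forward pass reduces exactly to the frozen pretrained network, which is the behavior the policy network falls back to for layers that should not be adapted.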
Affiliations
- Erich Schmitz: Advanced Imaging and Informatics for Radiation Therapy (AIRT) and Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Yunhui Guo: Department of Computer Science, The University of Texas at Dallas, Richardson, TX, United States of America
- Jing Wang: Advanced Imaging and Informatics for Radiation Therapy (AIRT) and Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
4. Chen M, Wang K, Wang J. Vision Transformer-Based Multilabel Survival Prediction for Oropharynx Cancer After Radiation Therapy. Int J Radiat Oncol Biol Phys 2024; 118:1123-1134. PMID: 37939732; PMCID: PMC11161220; DOI: 10.1016/j.ijrobp.2023.10.022
Abstract
PURPOSE A reliable and comprehensive cancer prognosis model for oropharyngeal cancer (OPC) could better assist in personalizing treatment. In this work, we developed a vision transformer-based (ViT-based) multilabel model with multimodal input to learn complementary information from available pretreatment data and predict multiple associated endpoints for radiation therapy for patients with OPC. METHODS AND MATERIALS A publicly available data set of 512 patients with OPC was used for both model training and evaluation. Planning computed tomography images, primary gross tumor volume masks, and 16 clinical variables representing patient demographics, diagnosis, and treatment were used as inputs. To extract deep image features with global attention, we used a ViT module. Clinical variables were concatenated with the learned image features and fed into fully connected layers to incorporate cross-modality features. To learn the mapping between the features and correlated survival outcomes, including overall survival, local failure-free survival, regional failure-free survival, and distant failure-free survival, we employed 4 multitask logistic regression layers. The proposed model was optimized by combining the multitask logistic regression negative-log likelihood losses of different prediction targets. RESULTS We employed the C-index and area under the curve metrics to assess the performance of our model for time-to-event prediction and time-specific binary prediction, respectively. Our proposed model outperformed corresponding single-modality and single-label models on all prediction labels, achieving C-indices of 0.773, 0.765, 0.776, and 0.773 for overall survival, local failure-free survival, regional failure-free survival, and distant failure-free survival, respectively. The area under the curve values ranged between 0.799 and 0.844 for different tasks at different time points. 
Using the medians of predicted risks as thresholds to stratify patients into high-risk and low-risk groups, log-rank tests showed significant separation between the groups for all event-free survival endpoints. CONCLUSION We developed the first model capable of predicting multiple labels for OPC simultaneously. Our model demonstrated better prognostic ability for all prediction targets compared with the corresponding single-modality and single-label models.
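The multitask logistic regression (MTLR) layers used as prediction heads can be illustrated with a small NumPy sketch. This is a simplified discrete-time version under assumed shapes (random weights, K time bins, no censoring handling or smoothness penalty), not the paper's implementation:

```python
import numpy as np

def mtlr_predict(x, W, b):
    """Simplified multi-task logistic regression (MTLR) survival head.

    With K discrete time bins, the unnormalized score for the event
    falling in bin k is the sum of the per-bin logits from bin k onward;
    a softmax over these scores gives P(event in bin k), and the
    survival curve is the tail probability S(t_k) = P(T > t_k).
    """
    logits = x @ W + b                                   # (n, K)
    scores = np.cumsum(logits[:, ::-1], axis=1)[:, ::-1]  # sum from bin k on
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    p = p / p.sum(axis=1, keepdims=True)                 # P(event in bin k)
    survival = 1.0 - np.cumsum(p, axis=1)                # S(t_k)
    return p, survival

rng = np.random.default_rng(0)
n, d, K = 5, 8, 4
p, survival = mtlr_predict(rng.normal(size=(n, d)),
                           rng.normal(size=(d, K)),
                           rng.normal(size=K))
```

The tail-sum construction is what lets one head emit a whole monotone survival curve, which is why the paper can attach one such head per endpoint (OS, local, regional, and distant failure-free survival) and train them jointly.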
Affiliations
- Meixu Chen: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
- Kai Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas
5. Wang K, George-Jones NA, Chen L, Hunter JB, Wang J. Joint Vestibular Schwannoma Enlargement Prediction and Segmentation Using a Deep Multi-task Model. Laryngoscope 2023; 133:2754-2760. PMID: 36495306; PMCID: PMC10256836; DOI: 10.1002/lary.30516
Abstract
OBJECTIVE To develop a deep-learning-based multi-task (DMT) model for joint tumor enlargement prediction (TEP) and automatic tumor segmentation (TS) in vestibular schwannoma (VS) patients using their initial diagnostic contrast-enhanced T1-weighted (ceT1) magnetic resonance images (MRIs). METHODS Initial ceT1 MRIs of VS patients meeting the study's inclusion/exclusion criteria were retrospectively collected. VSs on the initial MRIs and their first follow-up scans were manually contoured, and tumor volume and enlargement ratio were measured from the expert contours. A DMT model was constructed for joint TS and TEP. The manually segmented VS volume on the initial scan and the tumor enlargement label (≥20% volumetric growth) were used as the ground truth for training and evaluating the TS and TEP modules, respectively. RESULTS We performed 5-fold cross-validation on the eligible patients (n = 103). The model achieved a median segmentation Dice coefficient of 84.20%, prediction sensitivity of 0.68, specificity of 0.78, accuracy of 0.72, and area under the receiver operating characteristic curve (AUC) of 0.77. The segmentation result is significantly better than that of a separate TS network (Dice coefficient of 83.13%, p = 0.03) and marginally lower than the state-of-the-art segmentation model nnU-Net (Dice coefficient of 86.45%, p = 0.16). The TEP performance is significantly better than a single-task prediction model (AUC = 0.60, p = 0.01) and marginally better than a radiomics-based prediction model (AUC = 0.70, p = 0.17). CONCLUSION The proposed DMT model has higher learning efficiency and achieves promising performance on TEP and TS. The proposed technology has the potential to improve VS patient management. LEVEL OF EVIDENCE NA Laryngoscope, 133:2754-2760, 2023.
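The study's two ground-truth quantities, the Dice coefficient for TS and the ≥20% volumetric-growth label for TEP, are straightforward to compute from masks and volumes. A minimal sketch (array shapes and values are illustrative, not taken from the study):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def enlargement_label(initial_volume, followup_volume, threshold=0.20):
    """Ground-truth enlargement label: >= 20% volumetric growth
    between the initial scan and the first follow-up."""
    return (followup_volume - initial_volume) / initial_volume >= threshold

# Illustrative 4x4x4 mask with a 2x2x2 "tumor" cube.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
```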
Affiliations
- Kai Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Nicholas A George-Jones: Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Department of Otolaryngology-Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Liyuan Chen: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jacob B Hunter: Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
6. Sher DJ, Moon DH, Vo D, Wang J, Chen L, Dohopolski M, Hughes R, Sumer BD, Ahn C, Avkshtol V. Efficacy and Quality-of-Life Following Involved Nodal Radiotherapy for Head and Neck Squamous Cell Carcinoma: The INRT-AIR Phase II Clinical Trial. Clin Cancer Res 2023; 29:3284-3291. PMID: 37363993; DOI: 10.1158/1078-0432.ccr-23-0334
Abstract
PURPOSE Elective neck irradiation (ENI) has long been considered mandatory when treating head and neck squamous cell carcinoma (HNSCC) with definitive radiotherapy, but it is associated with significant dose to normal organs-at-risk (OAR). In this prospective phase II study, we investigated the efficacy and tolerability of eliminating ENI and strictly treating involved and suspicious lymph nodes (LN) with intensity-modulated radiotherapy. PATIENTS AND METHODS Patients with newly diagnosed HNSCC of the oropharynx, larynx, and hypopharynx were eligible for enrollment. Each LN was characterized as involved or suspicious based on radiologic criteria and an in-house artificial intelligence (AI)-based classification model. Gross disease received 70 Gray (Gy) in 35 fractions and suspicious LNs were treated with 66.5 Gy, without ENI. The primary endpoint was solitary elective volume recurrence, with secondary endpoints including patterns-of-failure and patient-reported outcomes. RESULTS Sixty-seven patients were enrolled, with 18 larynx/hypopharynx and 49 oropharynx cancers. With a median follow-up of 33.4 months, the 2-year risk of solitary elective nodal recurrence was 0%. Gastrostomy tubes were placed in 14 patients (21%), with median removal after 2.9 months for disease-free patients; no disease-free patient remains chronically dependent. Grade I and II dermatitis were seen in 90% and 10%, respectively. There was no significant decline in composite MD Anderson Dysphagia Index scores after treatment, with means of 89.1 and 92.6 at 12 and 24 months, respectively. CONCLUSIONS These results suggest that eliminating ENI is oncologically sound for HNSCC, with highly favorable quality-of-life outcomes. Additional prospective studies are needed to support this promising paradigm before implementation in any nontrial setting.
Affiliations
- David J Sher: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas; Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Dominic H Moon: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Dat Vo: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Liyuan Chen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Michael Dohopolski: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas; Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Randall Hughes: Department of Medical Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
- Baran D Sumer: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, Texas
- Chul Ahn: Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Vladimir Avkshtol: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
7. Nikulin P, Zschaeck S, Maus J, Cegla P, Lombardo E, Furth C, Kaźmierska J, Rogasch JMM, Holzgreve A, Albert NL, Ferentinos K, Strouthos I, Hajiyianni M, Marschner SN, Belka C, Landry G, Cholewinski W, Kotzerke J, Hofheinz F, van den Hoff J. A convolutional neural network with self-attention for fully automated metabolic tumor volume delineation of head and neck cancer in [18F]FDG PET/CT. Eur J Nucl Med Mol Imaging 2023; 50:2751-2766. PMID: 37079128; PMCID: PMC10317885; DOI: 10.1007/s00259-023-06197-1
Abstract
PURPOSE PET-derived metabolic tumor volume (MTV) and total lesion glycolysis of the primary tumor are known to be prognostic of clinical outcome in head and neck cancer (HNC). Including evaluation of lymph node metastases can further increase the prognostic value of PET, but accurate manual delineation and classification of all lesions is time-consuming and prone to interobserver variability. Our goal, therefore, was development and evaluation of an automated tool for MTV delineation/classification of primary tumor and lymph node metastases in PET/CT investigations of HNC patients. METHODS Automated lesion delineation was performed with a residual 3D U-Net convolutional neural network (CNN) incorporating a multi-head self-attention block. 698 [18F]FDG PET/CT scans from 3 different sites and 5 public databases were used for network training and testing. An external dataset of 181 [18F]FDG PET/CT scans from 2 additional sites was employed to assess the generalizability of the network. In these data, primary tumor and lymph node (LN) metastases were interactively delineated and labeled by two experienced physicians. Performance of the trained network models was assessed by 5-fold cross-validation in the main dataset and by pooling results from the 5 developed models in the external dataset. The Dice similarity coefficient (DSC) for individual delineation tasks and the primary tumor/metastasis classification accuracy were used as evaluation metrics. Additionally, a survival analysis using univariate Cox regression was performed comparing achieved group separation for manual and automated delineation, respectively. RESULTS In the cross-validation experiment, delineation of all malignant lesions with the trained U-Net models achieved DSCs of 0.885, 0.805, and 0.870 for primary tumor, LN metastases, and the union of both, respectively. In external testing, the DSC reached 0.850, 0.724, and 0.823, respectively. The voxel classification accuracy was 98.0% in cross-validation and 97.9% in the external data. Univariate Cox analysis in both cross-validation and external testing revealed that manually and automatically derived total MTVs are both highly prognostic with respect to overall survival, yielding essentially identical hazard ratios (HR) in both settings. CONCLUSION To the best of our knowledge, this work presents the first CNN model for successful MTV delineation and lesion classification in HNC. In the vast majority of patients, the network performs satisfactory delineation and classification of primary tumor and lymph node metastases and only rarely requires more than minimal manual correction. It can thus massively facilitate study data evaluation in large patient groups and has clear potential for supervised clinical application.
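Two building blocks of this pipeline are easy to sketch: the scaled dot-product self-attention applied to bottleneck voxel tokens, and the conversion of a binary delineation mask into an MTV. The code below is a simplified single-head illustration with random weights, not the network's actual multi-head block:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over voxel tokens.

    x holds one feature vector per (downsampled) voxel position; in the
    paper this kind of block sits inside the residual 3D U-Net.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)        # rows sum to 1
    return attn @ v

def metabolic_tumor_volume_ml(mask, voxel_spacing_mm):
    """MTV in millilitres from a binary delineation mask."""
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0
    return mask.sum() * voxel_ml

rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 16))                 # 10 voxel tokens, 16 features
W = [rng.normal(size=(16, 16)) * 0.1 for _ in range(3)]
out = self_attention(tokens, *W)
```

The attention step is what gives the CNN a global receptive field over the scan, which matters here because the primary tumor/metastasis distinction depends on context far beyond a local convolution window.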
Affiliations
- Pavel Nikulin: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Sebastian Zschaeck: Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jens Maus: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Paulina Cegla: Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland
- Elia Lombardo: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Christian Furth: Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Joanna Kaźmierska: Electroradiology Department, University of Medical Sciences, Poznan, Poland; Radiotherapy Department II, Greater Poland Cancer Centre, Poznan, Poland
- Julian M M Rogasch: Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany; Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Adrien Holzgreve: Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Nathalie L Albert: Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Konstantinos Ferentinos: Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Iosif Strouthos: Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Marina Hajiyianni: Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Sebastian N Marschner: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Claus Belka: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Guillaume Landry: Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Witold Cholewinski: Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland; Electroradiology Department, University of Medical Sciences, Poznan, Poland
- Jörg Kotzerke: Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Frank Hofheinz: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Jörg van den Hoff: Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany; Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
8. Wang K, Dohopolski M, Zhang Q, Sher D, Wang J. Towards reliable head and neck cancers locoregional recurrence prediction using delta-radiomics and learning with rejection option. Med Phys 2023; 50:2212-2223. PMID: 36484346; PMCID: PMC10121744; DOI: 10.1002/mp.16132
Abstract
PURPOSE A reliable locoregional recurrence (LRR) prediction model is important for the personalized management of head and neck cancer (HNC) patients who received radiotherapy. This work aims to develop a delta-radiomics feature-based multi-classifier, multi-objective, and multi-modality (Delta-mCOM) model for post-treatment HNC LRR prediction. Furthermore, we adopt a learning with rejection option (LRO) strategy to boost the reliability of the Delta-mCOM model by rejecting predictions for samples with high prediction uncertainties. METHODS In this retrospective study, we collected PET/CT images and clinical data from 224 HNC patients who received radiotherapy (RT) at our institution. We calculated the differences between radiomics features extracted from PET/CT images acquired before and after radiotherapy and used them, in conjunction with pre-treatment radiomics features, as the input features. Using clinical parameters, PET radiomics features, and CT radiomics features, we built and optimized three separate single-modality models, using multiple classifiers for model construction and employing sensitivity and specificity simultaneously as the training objectives. For testing samples, we fused the output probabilities from all these single-modality models to obtain the final output probabilities of the Delta-mCOM model. In the LRO strategy, we estimated the epistemic and aleatoric uncertainties when predicting with a trained Delta-mCOM model and identified patients whose predictions carried higher reliability (low uncertainty estimates). The epistemic and aleatoric uncertainties were estimated using an autoencoder-style anomaly detection model and test-time augmentation (TTA) of predictions from the Delta-mCOM model, respectively. Predictions with epistemic or aleatoric uncertainty above given thresholds were deemed unreliable and rejected before providing a final prediction. Different thresholds, corresponding to different low-reliability prediction rejection ratios, were applied; their values were based on the distribution of estimated epistemic and aleatoric uncertainties in the validation data. RESULTS The Delta-mCOM model performed significantly better than the single-modality models, whether those were trained with pre-treatment, post-treatment, or concatenated baseline and delta-radiomics features (BL-DRFs). It was numerically superior to the PET/CT-fused BL-DRF model, although the difference was not statistically significant. Using the LRO strategy with the Delta-mCOM model, most evaluation metrics improved as the rejection ratio increased from 0% to around 25%. Using both epistemic and aleatoric uncertainty for rejection yielded modestly better metrics than either alone at approximately a 25% rejection ratio (not statistically significant), and metrics were significantly better than the no-rejection method when the rejection ratio exceeded 50%. CONCLUSIONS Including delta-radiomics features improved the accuracy of HNC LRR prediction, and the proposed Delta-mCOM model can give more reliable predictions by rejecting high-uncertainty samples using the LRO strategy.
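The aleatoric branch of the LRO strategy, TTA followed by thresholded rejection, can be sketched as follows. The classifier here is a hypothetical stand-in (a fixed logistic function), and Gaussian input noise stands in for the actual augmentations; only the reject-by-uncertainty logic is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(x):
    """Stand-in for the trained recurrence classifier (hypothetical:
    a fixed logistic function over the summed input features)."""
    return 1.0 / (1.0 + np.exp(-x.sum(axis=-1)))

def tta_uncertainty(x, n_aug=64, noise=0.05):
    """Aleatoric-style uncertainty via test-time augmentation: the spread
    of predictions over randomly perturbed copies of each input."""
    preds = np.stack([predict_proba(x + rng.normal(scale=noise, size=x.shape))
                      for _ in range(n_aug)])
    return preds.mean(axis=0), preds.std(axis=0)

def predict_with_rejection(x, max_uncertainty):
    """Return mean predictions plus an accept mask; samples whose TTA
    spread exceeds the threshold are rejected (deferred to a clinician)."""
    mean, spread = tta_uncertainty(x)
    return mean, spread <= max_uncertainty

x = rng.normal(size=(6, 4))
probs, accepted = predict_with_rejection(x, max_uncertainty=0.02)
```

In the paper the threshold is not fixed by hand as here but chosen from the validation-set uncertainty distribution to hit a target rejection ratio, and an epistemic (anomaly-detection) check is applied alongside this aleatoric one.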
Affiliations
- Kai Wang: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Michael Dohopolski: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Qiongwen Zhang: Department of Head and Neck Oncology, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu 610041, China; Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- David Sher: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Jing Wang: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
9
|
Lv W, Zhou Z, Peng J, Peng L, Lin G, Wu H, Xu H, Lu L. Functional-structural sub-region graph convolutional network (FSGCN): Application to the prognosis of head and neck cancer with PET/CT imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107341. [PMID: 36682111 DOI: 10.1016/j.cmpb.2023.107341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 12/14/2022] [Accepted: 01/06/2023] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate risk stratification is crucial for enabling personalized treatment of head and neck cancer (HNC). Current PET/CT image-based prognostic methods include radiomics analysis and convolutional neural networks (CNNs), but extracting radiomics or deep features in grid Euclidean space has inherent limitations for risk stratification. Here, we propose a functional-structural sub-region graph convolutional network (FSGCN) for accurate risk stratification of HNC. METHODS This study collected 642 patients from 8 different centers in The Cancer Imaging Archive (TCIA): 507 patients from 5 centers were used for training and 135 patients from 3 centers for testing. The tumor was first clustered into multiple sub-regions using PET and CT voxel information, and radiomics features were extracted from each sub-region to characterize its functional and structural information. A graph was then constructed for each patient to model the relationships and differences among sub-regions in non-Euclidean space and passed through a residual gated graph convolutional network, which generated a prognostic score to predict progression-free survival (PFS). RESULTS In the testing cohort, compared with the radiomics, FSGCN, or clinical models alone, the model PETCTFea_CTROI + Cli, which integrates the FSGCN prognostic score and clinical parameters, achieved the highest C-index and AUC for PFS prediction: 0.767 (95% CI: 0.759-0.774) and 0.781 (95% CI: 0.774-0.788), respectively. It also showed good prognostic performance on the secondary endpoints OS, RFS, and MFS in the testing cohort, with C-indices of 0.786 (95% CI: 0.778-0.795), 0.775 (95% CI: 0.767-0.782), and 0.781 (95% CI: 0.772-0.789), respectively. CONCLUSIONS The proposed FSGCN better captures the metabolic and anatomic differences and interactions among sub-regions of the whole tumor imaged with PET/CT. Extensive multi-center experiments demonstrated its prognostic capability and generalizability in HNC over conventional radiomics analysis.
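The sub-region step this abstract describes, clustering tumor voxels by joint PET (functional) and CT (structural) intensity, can be sketched with a plain k-means pass over masked voxels. This is a toy illustration under stated assumptions, not the authors' pipeline: the volumes and tumor mask are synthetic, and `k`, the normalization, and the iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def cluster_subregions(pet, ct, mask, k=3, n_iter=20):
    """Cluster tumor voxels into k sub-regions using joint PET/CT
    intensities (a simple k-means stand-in for the paper's
    functional-structural sub-region step)."""
    voxels = np.stack([pet[mask], ct[mask]], axis=1).astype(float)
    # normalize each modality so neither dominates the distance metric
    voxels = (voxels - voxels.mean(0)) / (voxels.std(0) + 1e-8)
    centers = voxels[rng.choice(len(voxels), k, replace=False)]
    for _ in range(n_iter):
        # assign each voxel to its nearest center, then recompute centers
        labels = np.argmin(((voxels[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = voxels[labels == j].mean(0)
    return labels  # sub-region index for each tumor voxel

# toy 3-D volumes with a hypothetical tumor mask
pet = rng.random((8, 8, 8))
ct = rng.random((8, 8, 8))
mask = pet > 0.2
labels = cluster_subregions(pet, ct, mask)
print(np.unique(labels))
```

In the paper, each resulting sub-region would then yield a radiomics feature vector, and the sub-regions become nodes of the per-patient graph fed to the GCN.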
Collapse
Affiliation(s)
- Wenbing Lv
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Department of Electronic Engineering, Information School, Yunnan University, Kunming 650091, China
| | - Zidong Zhou
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Junyi Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Lihong Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Guoyu Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Huiqin Wu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Hui Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
| | - Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510515, China.
| |
Collapse
|
10
|
Santer M, Kloppenburg M, Gottfried TM, Runge A, Schmutzhard J, Vorbach SM, Mangesius J, Riedl D, Mangesius S, Widmann G, Riechelmann H, Dejaco D, Freysinger W. Current Applications of Artificial Intelligence to Classify Cervical Lymph Nodes in Patients with Head and Neck Squamous Cell Carcinoma-A Systematic Review. Cancers (Basel) 2022; 14:5397. [PMID: 36358815 PMCID: PMC9654953 DOI: 10.3390/cancers14215397] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 10/28/2022] [Accepted: 10/29/2022] [Indexed: 07/22/2023] Open
Abstract
Locally advanced head and neck squamous cell carcinoma (HNSCC) is mainly defined by the presence of pathologic cervical lymph nodes (LNs) with or without extracapsular spread (ECS). Current radiologic criteria for classifying LNs as non-pathologic, pathologic, or pathologic with ECS are primarily shape-based; however, imaging modalities contain significantly more quantitative information, which could be exploited for LN classification in patients with locally advanced HNSCC by means of artificial intelligence (AI). Various reviews exploring the role of AI in HNSCC are available, but reviews specifically addressing the current role of AI in classifying LNs in HNSCC patients are sparse. The present work systematically reviews original articles that specifically explore the role of AI in classifying LNs in locally advanced HNSCC, applying the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and the Study Quality Assessment Tool of the National Institutes of Health (NIH). Of 69 studies published between 2001 and 2022, a total of 13 retrospective, mainly monocentric, studies were identified. The majority included patients with oropharyngeal and oral cavity HNSCC (9 and 7 of 13 studies, respectively). Histopathologic findings were defined as the reference standard in 9 of 13 studies. All 13 studies applied machine learning, 9 of them deep learning. The mean number of included patients was 75 (SD ± 72; range 10-258) and of LNs was 340 (SD ± 268; range 21-791). The mean diagnostic accuracy was 86% (SD ± 14%; range 43-99%) for the training sets and 86% (SD ± 5%; range 76-92%) for the testing sets. Consequently, all of the identified studies concluded that AI is a potentially promising diagnostic support tool for LN classification in HNSCC. However, adequately powered, prospective, randomized controlled trials are urgently required to further assess the role of AI in LN classification in locally advanced HNSCC.
Collapse
Affiliation(s)
- Matthias Santer
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Marcel Kloppenburg
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Timo Maria Gottfried
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Annette Runge
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Joachim Schmutzhard
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Samuel Moritz Vorbach
- Department of Radiation-Oncology, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Julian Mangesius
- Department of Radiation-Oncology, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - David Riedl
- University Hospital of Psychiatry II, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Ludwig-Boltzmann Institute for Rehabilitation Research, 1100 Vienna, Austria
| | - Stephanie Mangesius
- Department of Radiology, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Gerlig Widmann
- Department of Radiology, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Herbert Riechelmann
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Daniel Dejaco
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| | - Wolfgang Freysinger
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
| |
Collapse
|