1
Caruso CM, Guarrasi V, Ramella S, Soda P. A deep learning approach for overall survival prediction in lung cancer with missing values. Comput Methods Programs Biomed 2024; 254:108308. [PMID: 38968829] [DOI: 10.1016/j.cmpb.2024.108308] [Received: 07/28/2023; Revised: 06/24/2024; Accepted: 06/24/2024]
Abstract
BACKGROUND AND OBJECTIVE In the field of lung cancer research, particularly in the analysis of overall survival (OS), artificial intelligence (AI) serves crucial roles with specific aims. Given the prevalent issue of missing data in the medical domain, our primary objective is to develop an AI model capable of dynamically handling this missing data. We also aim to leverage all accessible data, effectively analyzing both uncensored patients who have experienced the event of interest and censored patients who have not, by embedding within our AI model a specialized technique not commonly utilized in other AI tasks. Through these objectives, our model aims to provide precise OS predictions for non-small cell lung cancer (NSCLC) patients, overcoming these significant challenges. METHODS We present a novel approach to survival analysis with missing values in the context of NSCLC, which exploits the strengths of the transformer architecture to account only for available features, without requiring any imputation strategy. More specifically, this model tailors the transformer architecture to tabular data by adapting its feature embedding and masked self-attention to mask missing data and fully exploit the available data. By making use of losses designed ad hoc for OS, it is able to account for both censored and uncensored patients, as well as for changes in risk over time. RESULTS We compared our method with state-of-the-art models for survival analysis coupled with different imputation strategies. We evaluated the results over a period of 6 years at different time granularities, obtaining a Ct-index, a time-dependent variant of the C-index, of 71.97, 77.58 and 80.72 for time units of 1 month, 1 year and 2 years, respectively, outperforming all state-of-the-art methods regardless of the imputation method used.
CONCLUSIONS The results show that our model not only outperforms the state of the art but also simplifies the analysis in the presence of missing data, effectively eliminating the need to identify the most appropriate imputation strategy for predicting OS in NSCLC patients.
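The core mechanism this abstract describes, masked self-attention that attends only to observed tabular features instead of imputing the missing ones, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function, shapes and toy data are all hypothetical, and the sketch shows only the idea of excluding missing features from the attention weights.

```python
import numpy as np

def masked_attention(features, mask):
    """Self-attention over tabular feature embeddings in which missing
    features (mask == 0) are excluded from the attention weights.

    features: (n_features, d) array of per-feature embeddings
    mask:     (n_features,) array, 1 = observed, 0 = missing
    """
    d = features.shape[1]
    scores = features @ features.T / np.sqrt(d)             # (n, n) attention logits
    scores = np.where(mask[None, :] == 1, scores, -np.inf)  # hide missing keys
    scores -= scores.max(axis=1, keepdims=True)             # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)           # rows sum to 1 over observed keys
    out = weights @ features                                # attended feature representations
    return out * mask[:, None]                              # zero the rows of missing queries

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
m = np.array([1, 1, 0, 1])        # third feature is missing for this patient
y = masked_attention(x, m)
assert np.allclose(y[2], 0)       # the missing feature contributes nothing
```

Because exp(-inf) is exactly zero, missing features receive zero attention weight everywhere, so no imputation step is needed.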
Affiliation(s)
- Camillo Maria Caruso
- Research Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy.
- Valerio Guarrasi
- Research Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy.
- Sara Ramella
- Operative Research Unit of Radiation Oncology, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy.
- Paolo Soda
- Research Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy; Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, Umeå, Sweden.
2
Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2024:10.1007/s00066-024-02277-9. [PMID: 39138806] [DOI: 10.1007/s00066-024-02277-9] [Received: 03/01/2024; Accepted: 07/07/2024]
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Adrian Thummerer
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- Bavarian Cancer Research Center (BZKF), Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
3
Lan T, Kuang S, Liang P, Ning C, Li Q, Wang L, Wang Y, Lin Z, Hu H, Yang L, Li J, Liu J, Li Y, Wu F, Chai H, Song X, Huang Y, Duan X, Zeng D, Li J, Cao H. MRI-based deep learning and radiomics for prediction of occult cervical lymph node metastasis and prognosis in early-stage oral and oropharyngeal squamous cell carcinoma: a diagnostic study. Int J Surg 2024; 110:4648-4659. [PMID: 38729119] [PMCID: PMC11325978] [DOI: 10.1097/js9.0000000000001578] [Received: 12/21/2023; Accepted: 04/25/2024]
Abstract
INTRODUCTION The incidence of occult cervical lymph node metastases (OCLNM) is reported to be 20-30% in early-stage oral cancer and oropharyngeal cancer. An accurate diagnostic method to predict occult lymph node metastasis and help surgeons make precise treatment decisions is lacking. AIM To construct and evaluate a preoperative diagnostic method to predict OCLNM in early-stage oral and oropharyngeal squamous cell carcinoma (OC and OP SCC) based on deep learning features (DLFs) and radiomics features. METHODS A total of 319 patients diagnosed with early-stage OC or OP SCC were retrospectively enrolled and divided into training, test and external validation sets. Traditional radiomics features and DLFs were extracted from their MRI scans. Least absolute shrinkage and selection operator (LASSO) analysis was employed to identify the most valuable features. Prediction models for OCLNM were developed using radiomics features and DLFs. The effectiveness of the models and their clinical applicability were evaluated using the area under the curve (AUC), decision curve analysis (DCA) and survival analysis. RESULTS Seventeen prediction models were constructed. The ResNet50 deep learning (DL) model based on the combination of radiomics and DL features achieved the best performance, with AUC values of 0.928 (95% CI: 0.881-0.975), 0.878 (95% CI: 0.766-0.990), 0.796 (95% CI: 0.666-0.927) and 0.834 (95% CI: 0.721-0.947) in the training, test and external validation sets 1 and 2, respectively. Moreover, the ResNet50 model showed strong prognostic value in patients with early-stage OC and OP SCC. CONCLUSION The proposed MRI-based ResNet50 DL model demonstrated high capability in diagnosing OCLNM and predicting prognosis in early-stage OC and OP SCC, and could help refine the clinical diagnosis and treatment of these cancers.
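The AUC values reported above are rank statistics: the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case. A small self-contained sketch with hypothetical predicted probabilities (not data from the study):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs in which the positive case
    is ranked higher (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical predicted probabilities of occult nodal metastasis
y_true = [1, 0, 1, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.8, 0.6, 0.5]
auc = auc_score(y_true, y_prob)
print(auc)  # 7 of 9 pairs correctly ranked
```

A model that ranks every positive above every negative scores 1.0; chance performance scores 0.5.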
Affiliation(s)
- Tianjun Lan
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Shijia Kuang
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Peisheng Liang
- Guanghua School of Stomatology, Hospital of Stomatology, Guangdong Province Key Laboratory of Stomatology, Sun Yat-Sen University, Guangzhou
- Chenglin Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou
- Qunxing Li
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Liansheng Wang
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Youyuan Wang
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Zhaoyu Lin
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Huijun Hu
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Lingjie Yang
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Jintao Li
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Jingkang Liu
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Yanyan Li
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Fan Wu
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Hua Chai
- School of Mathematics and Big Data, Foshan University, Foshan, Guangdong
- Xinpeng Song
- School of Mathematics and Big Data, Foshan University, Foshan, Guangdong
- Yiqian Huang
- School of Mathematics and Big Data, Foshan University, Foshan, Guangdong
- Xiaohui Duan
- Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, People’s Republic of China
- Jinsong Li
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
- Haotian Cao
- Department of Oral and Maxillofacial Surgery, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangdong-Hong Kong Joint Laboratory for RNA Medicine, Medical Research Center, Sun Yat-Sen Memorial Hospital, Guangzhou
4
Taciuc IA, Dumitru M, Vrinceanu D, Gherghe M, Manole F, Marinescu A, Serboiu C, Neagos A, Costache A. Applications and challenges of neural networks in otolaryngology (Review). Biomed Rep 2024; 20:92. [PMID: 38765859] [PMCID: PMC11099604] [DOI: 10.3892/br.2024.1781] [Received: 01/28/2024; Accepted: 04/05/2024]
Abstract
Artificial intelligence (AI) has become a topic of interest that is frequently debated across research fields. The medical field is no exception, where several questions remain unanswered, most frequently when and how daily clinical routines can benefit from AI support. The present review aims to present the types of neural networks (NNs) available for development, discussing their advantages, disadvantages and practical applications. In addition, it summarizes how NNs (combined with various other features) have already been applied in studies in the ear, nose and throat research field, from assisting diagnosis to treatment management. Although definitive answers to these questions remain elusive, understanding the basics and types of applicable NNs can lead to future studies possibly using more than one type of NN, an approach that may bypass the current limitations in the accuracy and relevance of AI-generated information. The reviewed studies, the majority of which used convolutional NNs, obtained accuracies varying from 70 to 98%, with a number of studies training the AI on a limited number of cases (<100 patients). The lack of standardization in AI protocols for research negatively affects data homogeneity and the transparency of databases.
Affiliation(s)
- Iulian-Alexandru Taciuc
- Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
- Mihai Dumitru
- Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
- Daniela Vrinceanu
- Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
- Mirela Gherghe
- Department of Nuclear Medicine, ‘Carol Davila’ University of Medicine and Pharmacy, 022328 Bucharest, Romania
- Felicia Manole
- Department of ENT, Faculty of Medicine, University of Oradea, 410073 Oradea, Romania
- Andreea Marinescu
- Department of Radiology and Medical Imaging, ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
- Crenguta Serboiu
- Department of Cell Biology, Molecular and Histology, ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
- Adriana Neagos
- Department of ENT, ‘George Emil Palade’ University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540142 Mures, Romania
- Adrian Costache
- Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
5
Wu H, Peng L, Du D, Xu H, Lin G, Zhou Z, Lu L, Lv W. BAF-Net: bidirectional attention-aware fluid pyramid feature integrated multimodal fusion network for diagnosis and prognosis. Phys Med Biol 2024; 69:105007. [PMID: 38593831] [DOI: 10.1088/1361-6560/ad3cb2] [Received: 12/27/2023; Accepted: 04/09/2024]
Abstract
Objective. To go beyond the deficiencies of the three conventional multimodal fusion strategies (i.e. input-, feature- and output-level fusion), we propose a bidirectional attention-aware fluid pyramid feature integrated fusion network (BAF-Net) with cross-modal interactions for multimodal medical image diagnosis and prognosis. Approach. BAF-Net is composed of two identical branches to preserve the unimodal features and one bidirectional attention-aware distillation stream to progressively assimilate cross-modal complements and to learn supplementary features in both bottom-up and top-down processes. Fluid pyramid connections were adopted to integrate the hierarchical features at different levels of the network, and channel-wise attention modules were exploited to mitigate cross-modal cross-level incompatibility. Furthermore, depth-wise separable convolution was introduced to fuse the cross-modal cross-level features to greatly alleviate the increase in parameters. The generalization abilities of BAF-Net were evaluated on two clinical tasks: (1) an in-house PET-CT dataset with 174 patients for differentiation between lung cancer and pulmonary tuberculosis (LC-PTB); (2) a public multicenter PET-CT head and neck cancer dataset with 800 patients from nine centers for overall survival prediction. Main results. On the LC-PTB dataset, improved performance was found for BAF-Net (AUC = 0.7342) compared with the input-level fusion model (AUC = 0.6825; p < 0.05), the feature-level fusion model (AUC = 0.6968; p = 0.0547) and the output-level fusion model (AUC = 0.7011; p < 0.05). On the H&N cancer dataset, BAF-Net (C-index = 0.7241) outperformed the input-, feature- and output-level fusion models, with C-index increments of 2.95%, 3.77% and 1.52% (p = 0.3336, 0.0479 and 0.2911, respectively). The ablation experiments demonstrated the effectiveness of all the designed modules regarding all the evaluated metrics on both datasets. Significance. Extensive experiments on two datasets demonstrated better performance and robustness of BAF-Net than the three conventional fusion strategies and unimodal PET or CT networks in terms of diagnosis and prognosis.
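The three conventional fusion strategies this abstract contrasts differ only in where the modalities are combined. A schematic NumPy sketch of the distinction, with a toy encoder and prediction head standing in for the real networks (all names and shapes hypothetical, not the BAF-Net architecture):

```python
import numpy as np

rng = np.random.default_rng(42)
pet = rng.normal(size=(16,))   # toy PET feature vector
ct = rng.normal(size=(16,))    # toy CT feature vector

def encoder(x):
    """Stand-in for a unimodal network branch."""
    return np.tanh(x)

def head(z):
    """Stand-in for a prediction head producing a probability."""
    return 1 / (1 + np.exp(-z.mean()))

# 1) input-level fusion: concatenate modalities before any encoding
p_input = head(encoder(np.concatenate([pet, ct])))

# 2) feature-level fusion: encode each modality, then concatenate features
p_feature = head(np.concatenate([encoder(pet), encoder(ct)]))

# 3) output-level fusion: average the unimodal predictions
p_output = (head(encoder(pet)) + head(encoder(ct))) / 2

print(p_input, p_feature, p_output)
```

BAF-Net instead exchanges information between the two branches at multiple depths, which none of the three schemes above can do.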
Affiliation(s)
- Huiqin Wu
- Department of Medical Imaging, Guangdong Second Provincial General Hospital, Guangzhou, Guangdong, 518037, People's Republic of China
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lihong Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Dongyang Du
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Hui Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guoyu Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Zidong Zhou
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Pazhou Lab, Guangzhou, Guangdong, 510330, People's Republic of China
- Wenbing Lv
- School of Information and Yunnan Key Laboratory of Intelligent Systems and Computing, Yunnan University, Kunming, Yunnan, 650504, People's Republic of China
6
De Biase A, Ma B, Guo J, van Dijk LV, Langendijk JA, Both S, van Ooijen PMA, Sijtsema NM. Deep learning-based outcome prediction using PET/CT and automatically predicted probability maps of primary tumor in patients with oropharyngeal cancer. Comput Methods Programs Biomed 2024; 244:107939. [PMID: 38008678] [DOI: 10.1016/j.cmpb.2023.107939] [Received: 02/28/2023; Revised: 11/20/2023; Accepted: 11/20/2023]
Abstract
BACKGROUND AND OBJECTIVE Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. The Gross Tumor Volume of the primary tumor (GTVp) segmentation is used as an additional input channel to DL algorithms to improve model performance. However, the binary segmentation mask of the GTVp directs the focus of the network only, and uniformly, to the defined tumor region. DL models trained for tumor segmentation have also been used to generate predicted tumor probability maps (TPM), in which each pixel value corresponds to the degree of certainty that the pixel belongs to the tumor. The aim of this study was to explore the effect of using a TPM as an extra input channel of CT- and PET-based DL prediction models for oropharyngeal cancer (OPC) patients in terms of local control (LC), regional control (RC), DMFS and OS. METHODS We included 399 OPC patients from our institute who were treated with definitive (chemo)radiation. For each patient, CT and PET scans and GTVp contours, used for radiotherapy treatment planning, were collected. We first trained a previously developed 2.5D DL framework for tumor probability prediction by 5-fold cross-validation using 131 patients. Then, a 3D ResNet18 was trained for outcome prediction using the 3D TPM as one of the possible inputs. The endpoints were LC, RC, DMFS and OS. We performed 3-fold cross-validation on 168 patients for each endpoint using different combinations of image modalities as input. The final prediction in the test set (100 patients) was obtained by averaging the predictions of the 3-fold models. The C-index was used to evaluate the discriminative performance of the models. RESULTS The models trained with the TPM in place of the GTVp contours achieved the highest C-indexes for LC (0.74) and RC (0.60) prediction. For OS, using the TPM or the GTVp as an additional image modality resulted in comparable C-indexes (0.72 and 0.74). CONCLUSIONS Adding predicted TPMs instead of GTVp contours as an additional input channel to DL-based outcome prediction models improved model performance for LC and RC.
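The input arrangement this study compares, a soft tumor probability map versus a binary GTVp contour as the extra channel alongside CT and PET, amounts to a channel-stacking step before the 3D network. A hypothetical NumPy illustration (toy shapes and random volumes, not the authors' pipeline):

```python
import numpy as np

# toy 3D volumes (depth, height, width); real inputs would be co-registered scans
shape = (8, 32, 32)
rng = np.random.default_rng(7)
ct = rng.random(shape)    # CT intensities (normalized)
pet = rng.random(shape)   # PET uptake (normalized)
tpm = rng.random(shape)   # per-voxel tumor probability in [0, 1]

# binary GTVp-style mask (baseline extra channel) vs. the soft TPM channel;
# the 0.5 threshold here is an arbitrary illustrative choice
gtvp = (tpm > 0.5).astype(np.float32)

# stack channels for a 3D CNN input: (channels, depth, height, width)
x_baseline = np.stack([ct, pet, gtvp])  # binary contour as extra channel
x_tpm = np.stack([ct, pet, tpm])        # probability map as extra channel
assert x_tpm.shape == (3, 8, 32, 32)
```

The only difference between the two inputs is the third channel: the binary mask treats every tumor voxel identically, while the TPM lets the network weight voxels by segmentation confidence.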
Affiliation(s)
- Alessia De Biase
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands; Data Science Centre in Health (DASH), University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
- Baoqiang Ma
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands.
- Jiapan Guo
- Computer Science and Artificial Intelligence, Bernoulli Institute for Mathematics, University of Groningen (RUG), Groningen, AK 9700, the Netherlands
- Lisanne V van Dijk
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
- Johannes A Langendijk
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
- Stefan Both
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
- Peter M A van Ooijen
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands; Data Science Centre in Health (DASH), University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
- Nanna M Sijtsema
- Department of Radiation Oncology, University Medical Centre Groningen (UMCG), RB, Groningen 9700, the Netherlands
7
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology: A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588] [PMCID: PMC10672270] [DOI: 10.3390/jcm12226973] [Received: 10/14/2023; Revised: 11/02/2023; Accepted: 11/06/2023]
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT) and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, integrating these technologies becomes imperative for augmenting patient care, fostering innovation and participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, using the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence' and 'deep learning', and manually searched the reference sections of the included articles. Our search culminated in the retrieval of 121 related articles, which were subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece; (M.A.); (N.S.M.)
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece; (G.-I.V.); (I.M.)
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece; (A.G.); (E.G.)
8
Ma B, Guo J, Chu H, van Dijk LV, van Ooijen PM, Langendijk JA, Both S, Sijtsema NM. Comparison of computed tomography image features extracted by radiomics, self-supervised learning and end-to-end deep learning for outcome prediction of oropharyngeal cancer. Phys Imaging Radiat Oncol 2023; 28:100502. [PMID: 38026084] [PMCID: PMC10663809] [DOI: 10.1016/j.phro.2023.100502] [Received: 08/03/2023; Revised: 10/02/2023; Accepted: 10/17/2023]
Abstract
Background and purpose To compare the prediction performance of computed tomography (CT) image features extracted by radiomics, self-supervised learning and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS) and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy. Methods and materials The OPC-Radiomics dataset was used for model development and independent internal testing, and the UMCG-OPC set for external testing. Image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans using radiomics or a self-supervised learning-based method (autoencoder). Clinical models were built using multivariable Cox proportional-hazards analysis with clinical features only, and combined (radiomics, autoencoder or end-to-end) models with both clinical and image features, for LC, RC, LRC, DMFS, TSS, OS and DFS prediction. Results In the internal test set, combined autoencoder models performed better than clinical models and combined radiomics models for LC, RC, LRC, DMFS, TSS and DFS prediction (largest improvements in C-index: 0.91 vs. 0.76 for RC and 0.74 vs. 0.60 for DMFS). In the external test set, combined radiomics models performed better than clinical and combined autoencoder models for all endpoints (largest improvement for LC: 0.82 vs. 0.71). Furthermore, combined models performed better in risk stratification than clinical models and showed good calibration for most endpoints. Conclusions Image features extracted using self-supervised learning showed the best internal prediction performance, while radiomics features generalized better externally.
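The C-index comparisons above handle censored follow-up by scoring only comparable patient pairs, those in which the earlier time is an observed event. A minimal pure-Python sketch of Harrell's concordance index with hypothetical follow-up data (not the study's data or code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs (patient i has
    an observed event before patient j's time), the fraction in which the
    higher predicted risk belongs to the earlier failure (ties count 1/2)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # comparable only if i's event is observed and precedes j's time;
            # pairs where the earlier time is censored are skipped
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# hypothetical follow-up times (months), event flags (1 = event, 0 = censored),
# and predicted risk scores for five patients
t = [5, 12, 20, 9, 30]
e = [1, 0, 1, 1, 0]
r = [0.9, 0.3, 0.2, 0.7, 0.4]
ci = c_index(t, e, r)
print(ci)  # 7 of 8 comparable pairs concordant
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which is the scale on which the internal (0.91) and external (0.82) results above should be read.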
Affiliation(s)
- Baoqiang Ma
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands
- Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, Netherlands
- Hung Chu
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands
- Center for Information Technology, University of Groningen, Groningen, Netherlands
- Lisanne V. van Dijk
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Peter M.A. van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Machine Learning Lab, Data Science Center in Health (DASH), Groningen, Netherlands
- Johannes A. Langendijk
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Stefan Both
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Nanna M. Sijtsema
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
9
Huynh BN, Groendahl AR, Tomic O, Liland KH, Knudtsen IS, Hoebers F, van Elmpt W, Malinen E, Dale E, Futsaether CM. Head and neck cancer treatment outcome prediction: a comparison between machine learning with conventional radiomics features and deep learning radiomics. Front Med (Lausanne) 2023; 10:1217037. [PMID: 37711738 PMCID: PMC10498924 DOI: 10.3389/fmed.2023.1217037] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 07/07/2023] [Indexed: 09/16/2023] Open
Abstract
Background Radiomics can provide in-depth characterization of cancers for treatment outcome prediction. Conventional radiomics relies on extraction of image features within a pre-defined image region of interest (ROI), which are typically fed to a classification algorithm for prediction of a clinical endpoint. Deep learning radiomics allows for a simpler workflow where images can be used directly as input to a convolutional neural network (CNN) with or without a pre-defined ROI. Purpose The purpose of this study was to evaluate (i) conventional radiomics and (ii) deep learning radiomics for predicting overall survival (OS) and disease-free survival (DFS) for patients with head and neck squamous cell carcinoma (HNSCC) using pre-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG PET) and computed tomography (CT) images. Materials and methods FDG PET/CT images and clinical data of patients with HNSCC treated with radio(chemo)therapy at Oslo University Hospital (OUS; n = 139) and Maastricht University Medical Center (MAASTRO; n = 99) were collected retrospectively. OUS data were used for model training and initial evaluation. MAASTRO data were used for external testing to assess cross-institutional generalizability. Models trained on clinical and/or conventional radiomics features, with or without feature selection, were compared to CNNs trained on PET/CT images without or with the gross tumor volume (GTV) included. Model performance was measured using accuracy, area under the receiver operating characteristic curve (AUC), Matthews correlation coefficient (MCC), and the F1 score calculated for both classes separately. Results CNNs trained directly on images achieved the highest performance on external data for both endpoints. Adding both clinical and radiomics features to these image-based models increased performance further. Conventional radiomics including clinical data could achieve competitive performance. 
However, feature selection on clinical and radiomics data led to overfitting and poor cross-institutional generalizability. CNNs without tumor and node contours achieved performance close to that of CNNs including contours. Conclusion High performance and cross-institutional generalizability can be achieved by combining clinical data, radiomics features and medical images together with deep learning models. However, deep learning models trained on images without contours can achieve competitive performance and could see potential use as an initial screening tool for high-risk patients.
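The Matthews correlation coefficient (MCC) used as an evaluation metric above can be computed directly from confusion-matrix counts. A minimal sketch, assuming binary 0/1 label lists (illustrative, not the authors' code):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC for binary labels: (TP*TN - FP*FN) / sqrt of the product of the
    four marginal totals; returns 0.0 when any marginal is empty."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike accuracy, MCC stays informative on imbalanced endpoints (such as survival classes), which is presumably why the study reports it alongside AUC and F1.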
Affiliation(s)
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Kristian Hovde Liland
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Ingerid Skjei Knudtsen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Frank Hoebers
- Department of Radiation Oncology (MAASTRO), Maastricht University Medical Center, Maastricht, Netherlands
- GROW School for Oncology and Reproduction, Maastricht University Medical Center, Maastricht, Netherlands
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), Maastricht University Medical Center, Maastricht, Netherlands
- GROW School for Oncology and Reproduction, Maastricht University Medical Center, Maastricht, Netherlands
- Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
10
Nikulin P, Zschaeck S, Maus J, Cegla P, Lombardo E, Furth C, Kaźmierska J, Rogasch JMM, Holzgreve A, Albert NL, Ferentinos K, Strouthos I, Hajiyianni M, Marschner SN, Belka C, Landry G, Cholewinski W, Kotzerke J, Hofheinz F, van den Hoff J. A convolutional neural network with self-attention for fully automated metabolic tumor volume delineation of head and neck cancer in [18F]FDG PET/CT. Eur J Nucl Med Mol Imaging 2023; 50:2751-2766. [PMID: 37079128 PMCID: PMC10317885 DOI: 10.1007/s00259-023-06197-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Accepted: 03/14/2023] [Indexed: 04/21/2023]
Abstract
PURPOSE PET-derived metabolic tumor volume (MTV) and total lesion glycolysis of the primary tumor are known to be prognostic of clinical outcome in head and neck cancer (HNC). Including evaluation of lymph node metastases can further increase the prognostic value of PET but accurate manual delineation and classification of all lesions is time-consuming and prone to interobserver variability. Our goal, therefore, was development and evaluation of an automated tool for MTV delineation/classification of primary tumor and lymph node metastases in PET/CT investigations of HNC patients. METHODS Automated lesion delineation was performed with a residual 3D U-Net convolutional neural network (CNN) incorporating a multi-head self-attention block. 698 [18F]FDG PET/CT scans from 3 different sites and 5 public databases were used for network training and testing. An external dataset of 181 [18F]FDG PET/CT scans from 2 additional sites was employed to assess the generalizability of the network. In these data, primary tumor and lymph node (LN) metastases were interactively delineated and labeled by two experienced physicians. Performance of the trained network models was assessed by 5-fold cross-validation in the main dataset and by pooling results from the 5 developed models in the external dataset. The Dice similarity coefficient (DSC) for individual delineation tasks and the primary tumor/metastasis classification accuracy were used as evaluation metrics. Additionally, a survival analysis using univariate Cox regression was performed comparing achieved group separation for manual and automated delineation, respectively. RESULTS In the cross-validation experiment, delineation of all malignant lesions with the trained U-Net models achieves DSC of 0.885, 0.805, and 0.870 for primary tumor, LN metastases, and the union of both, respectively. 
In external testing, the DSC reaches 0.850, 0.724, and 0.823 for primary tumor, LN metastases, and the union of both, respectively. The voxel classification accuracy was 98.0% and 97.9% in cross-validation and external data, respectively. Univariate Cox analysis reveals that manually and automatically derived total MTVs are both highly prognostic with respect to overall survival, yielding essentially identical hazard ratios (HR) in both cross-validation and external testing. CONCLUSION To the best of our knowledge, this work presents the first CNN model for successful MTV delineation and lesion classification in HNC. In the vast majority of patients, the network performs satisfactory delineation and classification of primary tumor and lymph node metastases and only rarely requires more than minimal manual correction. It is thus able to massively facilitate study data evaluation in large patient groups and also has clear potential for supervised clinical application.
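The Dice similarity coefficient (DSC) reported above is defined as DSC = 2|A ∩ B| / (|A| + |B|) for predicted and reference segmentations A and B. A minimal sketch over flattened binary masks (illustrative; the interface is my own, not the authors' implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 sequences of equal length: 2*|intersection| / (|A| + |B|).
    Two empty masks are treated as a perfect match (DSC = 1.0)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

In practice the masks would be flattened 3D label volumes; the convention for two empty masks varies between toolkits, so the `1.0` fallback here is one common choice.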
Affiliation(s)
- Pavel Nikulin
- Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany.
- Sebastian Zschaeck
- Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jens Maus
- Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Paulina Cegla
- Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland
- Elia Lombardo
- Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Christian Furth
- Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Joanna Kaźmierska
- Electroradiology Department, University of Medical Sciences, Poznan, Poland
- Radiotherapy Department II, Greater Poland Cancer Centre, Poznan, Poland
- Julian M M Rogasch
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Department of Nuclear Medicine, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Adrien Holzgreve
- Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Nathalie L Albert
- Department of Nuclear Medicine, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Iosif Strouthos
- Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Marina Hajiyianni
- Department of Radiation Oncology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany
- Sebastian N Marschner
- Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Guillaume Landry
- Department of Radiation Oncology, University Hospital, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
- Witold Cholewinski
- Department of Nuclear Medicine, Greater Poland Cancer Centre, Poznan, Poland
- Electroradiology Department, University of Medical Sciences, Poznan, Poland
- Jörg Kotzerke
- Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Frank Hofheinz
- Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Jörg van den Hoff
- Helmholtz-Zentrum Dresden-Rossendorf, PET Center, Institute of Radiopharmaceutical Cancer Research, Bautzner Landstrasse 400, 01328, Dresden, Germany
- Department of Nuclear Medicine, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
11
Lv W, Zhou Z, Peng J, Peng L, Lin G, Wu H, Xu H, Lu L. Functional-structural sub-region graph convolutional network (FSGCN): Application to the prognosis of head and neck cancer with PET/CT imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107341. [PMID: 36682111 DOI: 10.1016/j.cmpb.2023.107341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 12/14/2022] [Accepted: 01/06/2023] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate risk stratification is crucial for enabling personalized treatment for head and neck cancer (HNC). Current PET/CT image-based prognostic methods include radiomics analysis and convolutional neural networks (CNNs), but extracting radiomics or deep features in grid Euclidean space has inherent limitations for risk stratification. Here, we propose a functional-structural sub-region graph convolutional network (FSGCN) for accurate risk stratification of HNC. METHODS This study collected 642 patients from 8 different centers in The Cancer Imaging Archive (TCIA): 507 patients from 5 centers were used for training, and 135 patients from 3 centers were used for testing. The tumor was first clustered into multiple sub-regions using PET and CT voxel information, and radiomics features were extracted from each sub-region to characterize its functional and structural information. A graph was then constructed for each patient to capture the relationships/differences among sub-regions in non-Euclidean space and passed through a residual gated graph convolutional network, which finally generated a prognostic score to predict progression-free survival (PFS). RESULTS In the testing cohort, compared with the radiomics, FSGCN or clinical model alone, the model PETCTFea_CTROI + Cli that integrates the FSGCN prognostic score and clinical parameters achieved the highest C-index and AUC of 0.767 (95% CI: 0.759-0.774) and 0.781 (95% CI: 0.774-0.788), respectively, for PFS prediction. Besides, it also showed good prognostic performance on the secondary endpoints OS, RFS, and MFS in the testing cohort, with C-indices of 0.786 (95% CI: 0.778-0.795), 0.775 (95% CI: 0.767-0.782) and 0.781 (95% CI: 0.772-0.789), respectively. CONCLUSIONS The proposed FSGCN can better capture the metabolic or anatomic differences/interactions among sub-regions of the whole tumor imaged with PET/CT. 
Extensive multi-center experiments demonstrated its prognostic capability and generalizability in HNC over conventional radiomics analysis.
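The sub-region construction step described above (clustering tumor voxels by their PET and CT intensities before building the graph) can be illustrated with plain k-means. This is a hedged stand-in: the function, its interface, and the choice of vanilla k-means are assumptions for illustration, not the paper's actual clustering method.

```python
import random

def kmeans_subregions(intensities, k=3, iters=50, seed=0):
    """Cluster voxel feature vectors (e.g. paired [PET, CT] intensities)
    into k sub-regions with plain k-means. Returns (labels, centers)."""
    rng = random.Random(seed)
    centers = rng.sample(intensities, k)  # initialize from k distinct voxels
    for _ in range(iters):
        # assign each voxel to its nearest center (squared Euclidean distance)
        labels = [min(range(k),
                      key=lambda c: sum((v - m) ** 2
                                        for v, m in zip(x, centers[c])))
                  for x in intensities]
        # recompute each center as the mean of its assigned voxels
        new_centers = []
        for c in range(k):
            members = [x for x, l in zip(intensities, labels) if l == c]
            if members:
                new_centers.append([sum(col) / len(members)
                                    for col in zip(*members)])
            else:
                new_centers.append(centers[c])  # keep an empty cluster's center
        if new_centers == centers:
            break  # converged
        centers = new_centers
    return labels, centers
```

Per-sub-region radiomics features would then be extracted from the voxels sharing a label, one graph node per sub-region.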
Affiliation(s)
- Wenbing Lv
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Department of Electronic Engineering, Information School, Yunnan University, Kunming 650091, China
- Zidong Zhou
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Junyi Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Lihong Peng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Guoyu Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Huiqin Wu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Hui Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China
- Lijun Lu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou 510515, China; Pazhou Lab, Guangzhou 510515, China.
12
Longitudinal and Multimodal Radiomics Models for Head and Neck Cancer Outcome Prediction. Cancers (Basel) 2023; 15:673. [PMID: 36765628 PMCID: PMC9913206 DOI: 10.3390/cancers15030673] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/10/2023] [Accepted: 01/16/2023] [Indexed: 01/25/2023] Open
Abstract
Radiomics analysis provides a promising avenue towards enabling personalized radiotherapy. Most frequently, prognostic radiomics models are based on features extracted from medical images that are acquired before treatment. Here, we investigate whether combining data from multiple timepoints during treatment and from multiple imaging modalities can improve the predictive ability of radiomics models. We extracted radiomics features from computed tomography (CT) images acquired before treatment as well as two and three weeks after the start of radiochemotherapy for 55 patients with locally advanced head and neck squamous cell carcinoma (HNSCC). Additionally, we obtained features from FDG-PET images taken before treatment and three weeks after the start of therapy. Cox proportional hazards models were then built based on features of the different image modalities, treatment timepoints, and combinations thereof using two different feature selection methods in a five-fold cross-validation approach. Based on the cross-validation results, feature signatures were derived and their performance was independently validated. Discrimination regarding loco-regional control was assessed by the concordance index (C-index), and log-rank tests were performed to assess risk stratification. The best prognostic performance was obtained for timepoints during treatment for all modalities. Overall, CT was the best discriminating modality, with an independent validation C-index of 0.78 for week two and weeks two and three combined. However, none of these models achieved statistically significant patient stratification. Models based on FDG-PET features from week three provided both satisfactory discrimination (C-index = 0.61 and 0.64) and statistically significant stratification (p=0.044 and p<0.001), but produced highly imbalanced risk groups. 
After independent validation on larger datasets, the value of (multimodal) radiomics models combining several imaging timepoints should be prospectively assessed for personalized treatment strategies.
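The log-rank tests used above for risk stratification reduce to a chi-square statistic comparing observed and expected events in one group at each event time. A minimal two-group sketch in pure Python (the function name and list-based interface are my own, for illustration only):

```python
def logrank_statistic(times, events, groups):
    """Log-rank chi-square statistic (1 d.o.f.) comparing survival between
    two groups, with groups[i] in {0, 1}. At each distinct event time t:
    accumulate observed-minus-expected events in group 1 and the
    hypergeometric variance over the at-risk set."""
    o_minus_e, var = 0.0, 0.0
    for t in sorted({tt for tt, e in zip(times, events) if e}):
        at_risk = [i for i, tt in enumerate(times) if tt >= t]
        n = len(at_risk)
        n1 = sum(1 for i in at_risk if groups[i] == 1)
        d = sum(1 for i, tt in enumerate(times) if tt == t and events[i])
        d1 = sum(1 for i, tt in enumerate(times)
                 if tt == t and events[i] and groups[i] == 1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var if var else 0.0
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom to obtain p-values such as the p=0.044 reported above; larger values indicate stronger separation between the risk groups.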