1. Intelligent predicting of salt pond’s ion concentration based on support vector regression and neural network. Neural Comput Appl 2019. DOI: 10.1007/s00521-018-03979-9.
|
2. Folkert MR, Setton J, Apte AP, Grkovski M, Young RJ, Schöder H, Thorstad WL, Lee NY, Deasy JO, Oh JH. Predictive modeling of outcomes following definitive chemoradiotherapy for oropharyngeal cancer based on FDG-PET image characteristics. Phys Med Biol 2017;62:5327-5343. PMID: 28604368; PMCID: PMC5729737; DOI: 10.1088/1361-6560/aa73cc.
Abstract
In this study, we investigate the use of imaging-feature-based outcomes research ('radiomics') combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy-four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT, with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans, were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine-learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model-building methods were tested by cross-validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross-validation, with area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029), whereas the models for ACM and DM did not reach statistical significance but showed predictive power comparable to the 5-fold cross-validation results, with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity correlated significantly with mortality, LF, and DM on 5-fold cross-validation in a relatively uniform single-institution cohort. The LF model also retained significance in an independent population.
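The pipeline described above (machine-learning-based feature selection feeding a multiparameter logistic regression, scored by 5-fold cross-validated AUC) can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the authors' code; the feature counts and the `SelectKBest` selection step are assumptions standing in for the paper's 24 PET features and its selection method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for 174 patients with 24 imaging features.
X, y = make_classification(n_samples=174, n_features=24, n_informative=6,
                           random_state=0)

# Feature selection is kept inside the pipeline so it is re-fit within
# each fold, which is what prevents selection bias from inflating the AUC.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=8)),
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(round(aucs.mean(), 2))
```

Placing the selection step inside the cross-validation loop, rather than selecting features once on the full dataset, is the detail that makes a cross-validated AUC an honest estimate.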
Affiliation(s)
- Michael R. Folkert: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Jeremy Setton: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Aditya P. Apte: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Milan Grkovski: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Robert J. Young: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Heiko Schöder: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wade L. Thorstad: Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Nancy Y. Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joseph O. Deasy: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Jung Hun Oh: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
3. Cawley GC, Talbot NLC. Kernel learning at the first level of inference. Neural Netw 2014;53:69-80. PMID: 24561452; DOI: 10.1016/j.neunet.2014.01.011. Received 28 Mar 2013; revised 22 Jan 2014; accepted 24 Jan 2014.
Abstract
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.
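The core idea, moving the kernel parameters into the first level of inference with their own regularisation term so that only two hyperparameters remain for model selection, can be sketched for an LS-SVM with an ARD RBF kernel. This is a minimal illustration under assumed forms of the criterion and a crude coordinate search, not the optimiser or exact criterion used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=40))  # only feature 0 is relevant

def rbf_ard(X, log_theta):
    # ARD RBF kernel: one inverse squared length-scale per input dimension.
    theta = np.exp(log_theta)
    d = (X[:, None, :] - X[None, :, :]) ** 2
    return np.exp(-(d * theta).sum(axis=-1))

def criterion(log_theta, lam1=1e-2, lam2=1e-2):
    # First-level training criterion: LS-SVM regularised loss plus an
    # extra penalty on the kernel parameters.  Only lam1 and lam2 would
    # need tuning at the second level, which is the point of the method.
    K = rbf_ard(X, log_theta)
    n = len(y)
    alpha = np.linalg.solve(K + lam1 * np.eye(n), y)  # LS-SVM coefficients
    resid = y - K @ alpha
    return resid @ resid + lam1 * alpha @ K @ alpha + lam2 * log_theta @ log_theta

# Crude coordinate search over the ARD parameters; alpha is re-solved
# inside criterion(), so coefficients and kernel are optimised jointly.
log_theta = np.zeros(3)
for _ in range(50):
    for j in range(3):
        for step in (0.2, -0.2):
            trial = log_theta.copy()
            trial[j] += step
            if criterion(trial) < criterion(log_theta):
                log_theta = trial
print(np.exp(log_theta))
```

Because the coefficients are recomputed in closed form for every trial kernel, the search effectively descends the joint criterion over both sets of parameters.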
Affiliation(s)
- Gavin C Cawley: School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
- Nicola L C Talbot: School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
4.
Abstract
A novel approach to generalisation is presented that is able, under certain circumstances, to guarantee generalisation to binary-output data for which no targets have been given. The basis of the guarantee is the recognition of a persistent global-minimum-error solution. An empirical test of whether the guarantee holds is provided, using a technique called target reversal: two neural networks are trained with opposing targets, and the convergence of both signals the validity of the guarantee.
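The target-reversal test described above can be sketched as follows: train one network on the targets and a second, identical network on the reversed targets, and observe whether both converge. This is an illustrative reconstruction with a toy MLP and synthetic data, not the original method's implementation; the network size, learning rate, and loss are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(float)  # linearly separable binary targets

def train(X, t, epochs=500, lr=0.5, seed=0):
    # Minimal one-hidden-layer MLP trained by gradient descent on
    # squared error; returns the final mean squared training error.
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = r.normal(scale=0.5, size=8);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden layer
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
        err = p - t
        dp = err * p * (1 - p)                   # gradient at the logit
        gW2 = h.T @ dp / len(t); gb2 = dp.mean()
        dh = np.outer(dp, W2) * (1 - h ** 2)
        gW1 = X.T @ dh / len(t); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return np.mean((p - t) ** 2)

# Target reversal: the second network gets the opposing targets 1 - y.
err_forward  = train(X, y)
err_reversed = train(X, 1 - y)
print(round(err_forward, 3), round(err_reversed, 3))
```

On this separable toy problem both networks can drive their training error low, which in the paper's scheme is the empirical signal that the guarantee applies.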
Affiliation(s)
- J G Polhill: Land Use Change Programme, Macaulay Land Use Research Institute, Aberdeen, Scotland, UK