1. Hong X, Chen S, Gao J, Harris CJ. Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization. IEEE Transactions on Cybernetics 2015; 45:2925-2936. [PMID: 25643422; DOI: 10.1109/tcyb.2015.2389524]
Abstract
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks, with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each RBF kernel has its own width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each pair associated with one kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed OFR algorithm is likewise capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as a separate procedure to optimize the kernel width, the proposed OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, dramatically reducing the required computational complexity. Nonlinear system identification examples demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
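The per-step LOOMSE selection can be sketched in code. This is not the paper's nested-regularization recursion (which also tunes a kernel width and regularization parameter per selected term inside the OFR loop); it is a minimal illustration of greedy forward selection in which each candidate regressor is scored by the closed-form LOO mean square error of a ridge-regularized model. The function names and toy data are illustrative, not from the paper.

```python
import numpy as np

def loo_mse(Phi, y, lam=1e-3):
    """LOO mean square error of a ridge-regularized linear-in-weights model,
    computed in closed form via the hat-matrix (PRESS) identity."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T)
    e = y - H @ y
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

def forward_select(candidates, y, n_terms, lam=1e-3):
    """Greedy forward regression: at each step add the candidate column
    whose inclusion yields the smallest LOO MSE."""
    chosen, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_terms):
        scores = [loo_mse(candidates[:, chosen + [j]], y, lam) for j in remaining]
        best = remaining[int(np.argmin(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy data: y depends on candidate columns 0 and 2 only
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=200)
print(sorted(forward_select(X, y, 2)))  # → [0, 2]
```

The hat-matrix shortcut is what makes LOO scoring affordable inside the selection loop: no model is ever actually refitted with a sample deleted.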
2. Hong X, Gao J, Chen S, Zia T. Sparse Density Estimation on the Multinomial Manifold. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:2972-2977. [PMID: 25647665; DOI: 10.1109/tnnls.2015.2389273]
Abstract
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model places them on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm to solve this problem. The first- and second-order Riemannian geometry of the multinomial manifold is derived and utilized in the RTR algorithm. Numerical examples demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with that of existing kernel density estimators.
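As a rough illustration of the constraint set involved (not the paper's RTR solver, which needs the derived Riemannian gradient and Hessian), the sketch below shows a finite Gaussian mixture whose weights lie on the simplex, together with one commonly used retraction that keeps an update on the multinomial manifold. All names and numbers are illustrative assumptions.

```python
import numpy as np

def mixture_density(x, centers, sigma, weights):
    # finite Gaussian mixture; the weights are constrained to the simplex
    # (nonnegative, summing to one), i.e. the multinomial manifold
    K = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    return K @ weights / (sigma * np.sqrt(2.0 * np.pi))

def retract(p, v):
    # one commonly used retraction on the simplex interior: move from p
    # along a tangent direction v (with sum(v) == 0), then renormalize
    q = p * np.exp(v / p)
    return q / q.sum()

p = np.array([0.5, 0.3, 0.2])
v = np.array([0.1, -0.05, -0.05])
q = retract(p, v)  # still nonnegative and sums to one
```

The retraction is what lets an unconstrained trust-region step be mapped back onto the manifold, so the mixing coefficients never leave the feasible set.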
3. Hong X, Chen S, Qatawneh A, Daqrouq K, Sheikh M, Morfeq A. A radial basis function network classifier to maximise leave-one-out mutual information. Applied Soft Computing 2014. [DOI: 10.1016/j.asoc.2014.06.003]
4. Hong X, Chen S, Qatawneh A, Daqrouq K, Sheikh M, Morfeq A. Sparse probability density function estimation using the minimum integrated square error. Neurocomputing 2013. [DOI: 10.1016/j.neucom.2013.02.003]
5. Chen H, Gong Y, Hong X. Online modeling with tunable RBF network. IEEE Transactions on Cybernetics 2013; 43:935-947. [PMID: 23096075; DOI: 10.1109/tsmcb.2012.2218804]
Abstract
In this paper, we propose a novel online modeling algorithm for nonlinear and nonstationary systems using a radial basis function (RBF) neural network with a fixed number of hidden nodes. Each RBF basis function has a tunable center vector and an adjustable diagonal covariance matrix. A multi-innovation recursive least squares (MRLS) algorithm is applied to update the RBF weights online while the modeling performance is monitored. When the modeling residual of the RBF network becomes large in spite of the weight adaptation, a node identified as insignificant is replaced with a new node, whose tunable center vector and diagonal covariance matrix are optimized using the quantum particle swarm optimization (QPSO) algorithm. The major contribution is to combine the MRLS weight adaptation and QPSO node structure optimization in an innovative way, so that the algorithm can track the local characteristics of a nonstationary system well with a very sparse model. Simulation results show that the proposed algorithm performs significantly better than existing approaches.
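The weight-adaptation half of the scheme can be sketched with a standard single-innovation recursive least squares update; the paper's MRLS generalizes this to a block of recent innovations, and the QPSO node replacement is omitted entirely here. Class and variable names, and the toy sinusoid target, are illustrative assumptions.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations for scalar inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

class RLS:
    """Standard recursive least squares with a forgetting factor.
    (A sketch only: MRLS updates on a sliding block of innovations.)"""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)          # RBF output weights
        self.P = delta * np.eye(n)    # inverse-correlation estimate
        self.lam = lam                # forgetting factor
    def update(self, phi, y):
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        e = y - phi @ self.w                                # innovation
        self.w += k * e
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return e

centers = np.linspace(-3.0, 3.0, 9)
rls = RLS(len(centers))
rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.uniform(-3.0, 3.0, size=1)
    phi = rbf_features(x, centers, 0.8)[0]
    rls.update(phi, np.sin(x[0]) + 0.01 * rng.normal())
```

After the stream of updates, the fixed-structure network approximates the (here stationary) target; in the paper, a persistent residual instead triggers node replacement.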
Affiliation(s)
- Hao Chen, School of Systems Engineering, University of Reading, Reading, West Berkshire RG6 6UR, UK.
6. A novel automatic two-stage locally regularized classifier construction method using the extreme learning machine. Neurocomputing 2013. [DOI: 10.1016/j.neucom.2011.12.052]
7. Du D, Li X, Fei M, Irwin GW. A novel locally regularized automatic construction method for RBF neural models. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2011.05.045]
8. Gao M, Hong X, Chen S, Harris CJ. A combined SMOTE and PSO based RBF classifier for two-class imbalanced problems. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2011.06.010]
9. Fast automatic two-stage nonlinear model identification based on the extreme learning machine. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2010.11.035]
10.

11. Chen S, Hong X, Harris CJ. Probability density estimation with tunable kernels using orthogonal forward regression. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2009; 40:1101-1114. [PMID: 20007052; DOI: 10.1109/tsmcb.2009.2034732]
Abstract
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact density estimate accurately.
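The final weight-update step can be sketched as a multiplicative update of the MNQP family for min ½c′Bc − v′c subject to c ≥ 0 and Σc = 1, where B is a kernel Gram matrix with nonnegative entries. Treat the exact recursion, the Lagrange-multiplier step, and the illustrative data below as assumptions rather than the paper's precise algorithm.

```python
import numpy as np

def mnqp(B, v, iters=200):
    # multiplicative update for: min 0.5*c@B@c - v@c  s.t. c >= 0, sum(c) = 1
    # assumes B and v have nonnegative entries (true for Gaussian kernel Grams);
    # the multiplier h keeps each iterate on the unit-sum constraint
    c = np.full(len(v), 1.0 / len(v))
    for _ in range(iters):
        t = c / (B @ c)
        h = (1.0 - t @ v) / t.sum()
        c = np.clip(t * (v + h), 0.0, None)
        c /= c.sum()
    return c

# illustrative data: Gaussian kernel Gram B and mean kernel response v
centers = np.linspace(-2.0, 2.0, 8)
B = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 0.5) ** 2)
rng = np.random.default_rng(5)
data = rng.normal(size=300)
v = np.exp(-0.5 * ((centers[:, None] - data[None, :]) / 0.5) ** 2).mean(axis=1)
c = mnqp(B, v)
```

Because the update is multiplicative, a weight driven to zero stays at zero, which is how this step can also prune kernels and shrink the model.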
Affiliation(s)
- Sheng Chen, School of Electronics and Computer Science, University of Southampton, Southampton, UK.
12. Chen S, Hong X, Luk B, Harris C. Orthogonal-least-squares regression: A unified approach for data modelling. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2008.10.002]
13. Chen S, Hong X, Luk BL, Harris CJ. Construction of tunable radial basis function networks using orthogonal forward selection. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2008; 39:457-466. [PMID: 19095548; DOI: 10.1109/tsmcb.2008.2006688]
Abstract
An orthogonal forward selection (OFS) algorithm based on leave-one-out (LOO) criteria is proposed for the construction of radial basis function (RBF) networks with tunable nodes. Each stage of the construction process determines an RBF node, namely its center vector and diagonal covariance matrix, by minimizing the LOO statistics. For regression applications, the LOO criterion is chosen to be the LOO mean-square error, while for two-class classification the LOO misclassification rate is adopted. This OFS-LOO algorithm is computationally efficient, and it is capable of constructing parsimonious RBF networks that generalize well. Moreover, the proposed algorithm is fully automatic, and the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF network construction procedure is demonstrated using examples taken from both regression and classification applications.
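A tunable RBF node of the kind constructed here, with its own center vector and diagonal covariance matrix, evaluates as in the minimal sketch below; the function name and test inputs are illustrative.

```python
import numpy as np

def rbf_node(X, mu, diag_cov):
    """One tunable Gaussian RBF node: its own center vector mu and diagonal
    covariance diag_cov, both of which OFS-LOO optimizes per node."""
    return np.exp(-0.5 * (((X - mu) ** 2) / diag_cov).sum(axis=1))

X = np.array([[0.0, 0.0], [1.0, 2.0]])
mu = np.array([1.0, 2.0])
cov = np.array([0.5, 2.0])
r = rbf_node(X, mu, cov)  # response is 1 at the center, < 1 elsewhere
```

Contrast this with the standard fixed-kernel model, where every node shares one common variance and centers are restricted to the training points.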
Affiliation(s)
- Sheng Chen, School of Electronics and Computer Science, University of Southampton, Southampton, UK.
14. Hong X, Chen S, Harris CJ. A Forward-Constrained Regression Algorithm for Sparse Kernel Density Estimation. IEEE Transactions on Neural Networks 2008; 19:193-198. [DOI: 10.1109/tnn.2007.908645]
15. Chen S, Hong X, Harris C. An orthogonal forward regression technique for sparse kernel density estimation. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.02.008]
16. McDowell EJ, Ellerbee AK, Choma MA, Applegate BE, Izatt JA. Spectral domain phase microscopy for local measurements of cytoskeletal rheology in single cells. Journal of Biomedical Optics 2007; 12:044008. [PMID: 17867812; DOI: 10.1117/1.2753755]
Abstract
We present spectral domain phase microscopy (SDPM) as a new tool for measurements at the cellular scale. SDPM is a functional extension of spectral domain optical coherence tomography that allows for the detection of cellular motions and dynamics with nanometer-scale sensitivity in real time. Our goal was to use SDPM to investigate the mechanical properties of the cytoskeleton of MCF-7 cells. Magnetic tweezers were designed to apply a vertical force to ligand-coated magnetic beads attached to integrin receptors on the cell surfaces. SDPM was used to resolve cell surface motions induced by the applied stresses. The cytoskeletal response to an applied force is shown for both normal cells and those with compromised actin networks due to treatment with Cytochalasin D. The cell response data were fit to several models for cytoskeletal rheology, including one- and two-exponential mechanical models, as well as a power law. Finally, we correlated displacement measurements to physical characteristics of individual cells to better compare properties across many cells, reducing the coefficient of variation of extracted model parameters by up to 50%.
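One of the rheology models mentioned, the power law, can be fit to displacement-versus-time data by ordinary least squares in log-log coordinates. The sketch below uses synthetic data with assumed parameter values, not the paper's measurements.

```python
import numpy as np

# synthetic creep response following a power law d(t) = k * t**alpha,
# one of the candidate cytoskeletal models named in the abstract
rng = np.random.default_rng(2)
t = np.linspace(0.1, 2.0, 200)                             # time, s
d = 50.0 * t ** 0.3 + rng.normal(scale=0.5, size=t.size)   # displacement, nm

# a power law is linear in log-log coordinates, so a straight-line
# least-squares fit recovers the exponent and prefactor
alpha, log_k = np.polyfit(np.log(t), np.log(d), 1)
```

Note that additive measurement noise becomes slightly non-Gaussian in log space; for the noise levels here the bias is negligible, but a nonlinear fit in linear coordinates is the more careful choice.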
Affiliation(s)
- Emily J McDowell, Duke University, Department of Biomedical Engineering, Durham, North Carolina 27708, USA.
17. Chen S, Hong X, Harris CJ, Sharkey PM. Sparse modeling using orthogonal forward regression with PRESS statistic and regularization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2004; 34:898-911. [PMID: 15376838; DOI: 10.1109/tsmcb.2003.817107]
Abstract
This paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models by directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation during model construction. Computational efficiency is ensured by using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
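The identity that makes incremental PRESS minimization cheap states that, for a linear least-squares model, the deleted residual equals e_i/(1 − h_ii), where h_ii is the ith diagonal of the hat matrix. The sketch below verifies this against brute-force leave-one-out refitting on toy data; it illustrates the statistic itself, not the paper's recursive orthogonal formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)

# hat matrix of ordinary least squares, H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y
press_fast = e / (1.0 - np.diag(H))   # closed-form LOO (PRESS) residuals

# brute force: refit with each sample deleted in turn
press_slow = np.empty(40)
for i in range(40):
    m = np.ones(40, dtype=bool)
    m[i] = False
    w = np.linalg.lstsq(X[m], y[m], rcond=None)[0]
    press_slow[i] = y[i] - X[i] @ w

print(np.allclose(press_fast, press_slow))  # → True
```

The closed form replaces n refits with a single fit, which is why the PRESS statistic can be evaluated at every step of a forward regression.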
Affiliation(s)
- Sheng Chen, Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK.
18. Chen S, Hong X, Harris CJ. Sparse Kernel Density Construction Using Orthogonal Forward Regression With Leave-One-Out Test Score and Local Regularization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2004; 34:1708-1717. [PMID: 15462438; DOI: 10.1109/tsmcb.2004.828199]
Abstract
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic: the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user must specify some critical algorithm parameters. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
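The full-sample Parzen window baseline referred to above places one kernel on every training point with equal weight 1/N; a minimal sketch (illustrative names and data):

```python
import numpy as np

def parzen(x_eval, samples, h):
    """Full-sample Parzen window estimate: one Gaussian kernel per data
    point, a single common width h, equal weights 1/N. This is the dense
    baseline a sparse kernel density estimate is compared against."""
    z = (x_eval[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(4)
data = rng.normal(size=500)
grid = np.linspace(-5.0, 5.0, 1001)
p = parzen(grid, data, h=0.3)
```

A sparse estimator keeps only a handful of these kernels (with optimized weights) while aiming for the same test accuracy, at a fraction of the evaluation cost.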
Affiliation(s)
- Sheng Chen, School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK.
19. Hong X, Chen S, Sharkey PM. Automatic kernel regression modelling using combined leave-one-out test score and regularised orthogonal least squares. International Journal of Neural Systems 2004; 14:27-37. [PMID: 15034945; DOI: 10.1142/s0129065704001875]
Abstract
This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches: parameter regularisation via ridge regression, and selection of the model structure with optimal generalisation. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive formula for computing PRESS errors within the regularised orthogonal least squares forward regression framework, and hence to construct a model with good generalisation properties. Owing to the properties of the PRESS statistic, the proposed algorithm achieves a fully automated model construction procedure without resorting to any other validation data set for model evaluation.
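The regularised orthogonal weight model underlying the derivation decomposes the regression matrix as P = WA (W with mutually orthogonal columns, A unit upper triangular) and regularises the weights in the orthogonal space. Below is a minimal classical Gram-Schmidt sketch of that decomposition, not the paper's recursive PRESS formula; all names are illustrative.

```python
import numpy as np

def regularised_ols(P, y, lam=1e-2):
    # classical Gram-Schmidt decomposition P = W A, then ridge-style
    # regularised weights g in the orthogonal space, back-substituted
    # through A to recover the original model weights theta
    n, M = P.shape
    W = P.astype(float).copy()
    A = np.eye(M)
    for k in range(M):
        for j in range(k):
            A[j, k] = (W[:, j] @ P[:, k]) / (W[:, j] @ W[:, j])
            W[:, k] -= A[j, k] * W[:, j]
    g = np.array([(W[:, k] @ y) / (W[:, k] @ W[:, k] + lam) for k in range(M)])
    theta = np.linalg.solve(A, g)   # A is unit upper triangular
    return W, A, g, theta

rng = np.random.default_rng(6)
P = rng.normal(size=(50, 4))
y = P @ np.array([1.0, 0.5, -1.0, 2.0]) + 0.05 * rng.normal(size=50)
W, A, g, theta = regularised_ols(P, y)
```

Because W'W is diagonal, each orthogonal weight (and, in the paper, each PRESS error) can be updated independently as terms are added, which is what makes the forward regression recursion efficient; with lam → 0 the estimate reduces to ordinary least squares.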
Affiliation(s)
- X Hong, Department of Cybernetics, University of Reading, Reading, RG6 6AY, UK.