1. Ren Z, Li R, Chen B, Zhang H, Ma Y, Wang C, Lin Y, Zhang Y. EEG-Based Driving Fatigue Detection Using a Two-Level Learning Hierarchy Radial Basis Function. Front Neurorobot 2021;15:618408. [PMID: 33643018] [PMCID: PMC7905350] [DOI: 10.3389/fnbot.2021.618408]
Abstract
Electroencephalography (EEG)-based driving fatigue detection has gained increasing attention recently due to the non-invasive, low-cost, and portable nature of EEG technology, but it is still challenging to extract informative features from noisy EEG signals for driving fatigue detection. The radial basis function (RBF) neural network has drawn considerable attention as a promising classifier due to its linear-in-the-parameters network structure, strong non-linear approximation ability, and desirable generalization properties. RBF network performance heavily relies on network parameters such as the number of hidden nodes, the center vectors, the widths, and the output weights. However, global optimization methods that directly optimize all the network parameters often result in high evaluation cost and slow convergence. To enhance the accuracy and efficiency of the EEG-based driving fatigue detection model, this study aims to develop a two-level learning hierarchy RBF network (RBF-TLLH) that allows for global optimization of the key network parameters. Experimental EEG data were collected, in both fatigue and alert states, from six healthy participants in a simulated driving environment. Principal component analysis was first utilized to extract features from EEG signals, and the proposed RBF-TLLH was then employed for driving status (fatigue vs. alert) classification. The results demonstrated that the proposed RBF-TLLH approach achieved a better classification performance (mean accuracy: 92.71%; area under the receiver operating characteristic curve: 0.9199) compared to other widely used artificial neural networks. Moreover, only three core parameters need to be determined from the training data in the proposed RBF-TLLH classifier, which increases its reliability and applicability. The findings demonstrate that the proposed RBF-TLLH approach can be used as a promising framework for reliable EEG-based driving fatigue detection.
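As a rough illustration of the pipeline described in this abstract (not the authors' RBF-TLLH itself), the sketch below wires together PCA feature extraction and a Gaussian RBF layer with least-squares output weights on synthetic data; the feature matrix, the k-means choice of centres, the single shared width, and the ridge penalty are all illustrative assumptions.

```python
# Minimal sketch of a PCA + Gaussian-RBF-network pipeline on synthetic data.
# This is NOT the authors' RBF-TLLH: centres come from k-means and the output
# weights from ridge-regularised least squares; the two-level hierarchical
# optimisation of the paper is not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for epoch-level EEG features: 200 trials x 64 channels.
X = rng.normal(size=(200, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(float)      # synthetic alert/fatigue labels

X_pca = PCA(n_components=10).fit_transform(X)     # feature extraction

# RBF hidden layer: Gaussian activations around k-means centres.
centres = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X_pca).cluster_centers_
dists = np.linalg.norm(X_pca[:, None, :] - centres[None, :, :], axis=2)
width = np.median(dists)                          # one shared width (assumption)
Phi = np.exp(-dists ** 2 / (2 * width ** 2))

# Output weights by ridge-regularised least squares.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
acc = np.mean((Phi @ w > 0.5) == y)
print(f"training accuracy of the sketch: {acc:.2f}")
```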
Affiliation(s)
- Ziwu Ren, Robotics and Microsystems Center, Soochow University, Suzhou, China
- Rihui Li, Department of Biomedical Engineering, University of Houston, Houston, TX, United States
- Bin Chen, College of Automation, Intelligent Control & Robotics Institute, Hangzhou Dianzi University, Hangzhou, China
- Hongmiao Zhang, Robotics and Microsystems Center, Soochow University, Suzhou, China
- Yuliang Ma, College of Automation, Intelligent Control & Robotics Institute, Hangzhou Dianzi University, Hangzhou, China
- Chushan Wang, Guangdong Provincial Work Injury Rehabilitation Hospital, Guangzhou, China
- Ying Lin, Department of Industrial Engineering, University of Houston, Houston, TX, United States
- Yingchun Zhang, Department of Biomedical Engineering, University of Houston, Houston, TX, United States
2. Zhang L, Li K, Bai EW, Irwin GW. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction. IEEE Trans Neural Netw Learn Syst 2015;26:1608-1621. [PMID: 25222956] [DOI: 10.1109/tnnls.2014.2346399]
Abstract
A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem in which a compact model is selected from all candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce only suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and backward stages to avoid repetitive computations, exploiting the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
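A minimal sketch of the forward stage that both the TSFRA and the proposed method build on: greedy orthogonal least squares selection with the error reduction ratio (ERR) criterion. The backward refinement and term-exchange stage introduced in the paper is not reproduced, and the candidate matrix and target below are synthetic.

```python
# Forward orthogonal least squares subset selection with the ERR criterion
# (a sketch of the classical forward stage only; the paper's backward stage
# and simplified forward-backward relationship are not shown).
import numpy as np

def ols_forward(P, y, n_terms):
    """Greedily select n_terms columns of the candidate matrix P."""
    n, m = P.shape
    selected, W = [], []                    # chosen indices, orthogonalised columns
    for _ in range(n_terms):
        best_err, best_j, best_w = -np.inf, None, None
        for j in range(m):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wq in W:                    # Gram-Schmidt against selected terms
                w -= (wq @ P[:, j]) / (wq @ wq) * wq
            denom = w @ w
            if denom < 1e-12:
                continue
            err = (w @ y) ** 2 / (denom * (y @ y))   # error reduction ratio
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        selected.append(best_j)
        W.append(best_w)
    return selected

rng = np.random.default_rng(1)
P = rng.normal(size=(100, 20))                       # candidate regressors
y = 2 * P[:, 3] - P[:, 7] + 0.1 * rng.normal(size=100)
print(ols_forward(P, y, 2))                          # expect columns 3 and 7
```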
3. Hong X, Gao J, Chen S, Harris CJ. Particle swarm optimisation assisted classification using elastic net prefiltering. Neurocomputing 2013. [DOI: 10.1016/j.neucom.2013.06.030]
4. Hong X, Chen S, Harris CJ. Elastic-Net Prefiltering for Two-Class Classification. IEEE Trans Cybern 2013;43:286-295. [PMID: 22829416] [DOI: 10.1109/tsmcb.2012.2205677]
Abstract
A two-stage linear-in-the-parameters model construction algorithm is proposed for noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage, which constructs a sparse linear-in-the-parameters classifier. The prefiltering stage is a two-level process aimed at maximizing the model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and the two regularization parameters are then optimized at the upper level by a particle-swarm-optimization algorithm minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be computed analytically without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations on noisy data sets illustrate the competitiveness of this approach for such classification problems.
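To make the upper-level search concrete, here is a hedged sketch that replaces the paper's particle swarm optimiser with a plain grid over the two elastic-net regularisation parameters and scores each setting by an explicit leave-one-out misclassification rate; the analytic LOO formula and the second-stage sparse classifier are not reproduced, and the data set, grid values, and +/-1 target coding are illustrative.

```python
# Grid search (standing in for PSO) over the two elastic-net regularisation
# parameters, scored by explicit leave-one-out misclassification.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
t = 2.0 * y - 1.0                                    # +/-1 regression targets

best = (np.inf, None)
for alpha in [0.01, 0.1, 1.0]:
    for l1_ratio in [0.2, 0.5, 0.8]:
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
        pred = cross_val_predict(model, X, t, cv=LeaveOneOut())
        loo_err = np.mean(np.sign(pred) != np.sign(t))   # LOO misclassification
        if loo_err < best[0]:
            best = (loo_err, (alpha, l1_ratio))
print("best LOO error %.3f at (alpha, l1_ratio)=%s" % best)
```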
5. Qi C, Li HX, Zhao X, Li S, Gao F. Hammerstein Modeling with Structure Identification for Multi-input Multi-output Nonlinear Industrial Processes. Ind Eng Chem Res 2011. [DOI: 10.1021/ie102273c]
Affiliation(s)
- Han-Xiong Li, Department of Manufacturing Engineering & Engineering Management, City University of Hong Kong, Hong Kong, China
6. Chen S, Hong X, Harris C. Regression based D-optimality experimental design for sparse kernel density estimation. Neurocomputing 2010. [DOI: 10.1016/j.neucom.2009.11.002]
7. Hong X, Chen S. A New RBF Neural Network With Boundary Value Constraints. IEEE Trans Syst Man Cybern B Cybern 2009;39:298-303. [DOI: 10.1109/tsmcb.2008.2005124]
8. Chen S, Hong X, Harris C, Hanzo L. Fully complex-valued radial basis function networks: Orthogonal least squares regression and classification. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.12.003]
9.
Abstract
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm achieves minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
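A toy sketch of two ingredients mentioned here, class-frequency weighting in a regularised least-squares fit and an AUC-based score, with the leave-one-out AUC estimated by an explicit loop rather than the paper's analytic formula; the synthetic imbalanced data, the class-based weights, and the ridge term are illustrative assumptions, and orthogonal forward selection is not shown.

```python
# Class-weighted, regularised least-squares scoring on imbalanced data,
# evaluated by a brute-force leave-one-out AUC (not the analytic LOO-AUC).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
n_pos, n_neg = 15, 85                                # imbalanced two-class data
X = np.vstack([rng.normal(1.0, 1.0, (n_pos, 3)), rng.normal(-0.3, 1.0, (n_neg, 3))])
y = np.hstack([np.ones(n_pos), np.zeros(n_neg)])
w = np.where(y == 1, 1.0 / n_pos, 1.0 / n_neg)       # weight samples by class size

scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    Xt, yt, wt = X[train], y[train], w[train]
    A = Xt * wt[:, None]
    theta = np.linalg.solve(Xt.T @ A + 1e-3 * np.eye(3), A.T @ yt)  # regularised WLS
    scores[test] = X[test] @ theta
print("LOO AUC of the sketch: %.3f" % roc_auc_score(y, scores))
```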
Affiliation(s)
- Xia Hong, Cybernetic Intelligence Research Group, School of Systems Engineering, University of Reading, Reading RG6 6AY, UK
10. Wang X, Chen S, Lowe D, Harris C. Sparse support vector regression based on orthogonal forward selection for the generalised kernel model. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2005.12.129]
11. Hong X. A fast identification algorithm for Box-Cox transformation based radial basis function neural network. IEEE Trans Neural Netw 2006;17:1064-1069. [PMID: 16856667] [DOI: 10.1109/tnn.2006.875986]
Abstract
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, in which the RBF neural network represents the transformed system output. Initially, a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). A new fast identification algorithm is then introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the matrix block decomposition lemma for matrix inversion. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
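The Box-Cox step can be illustrated in isolation with SciPy's maximum-likelihood estimate of the transformation parameter; this is only a stand-in for the Gauss-Newton estimation embedded in the RBF model, and the skewed synthetic output below is an assumption.

```python
# Maximum-likelihood Box-Cox transformation of a skewed synthetic output
# (the paper's Gauss-Newton/RBF machinery and D-optimality OFR are not shown).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = np.exp(rng.normal(size=500))          # positive, strongly skewed "output"
y_bc, lam = stats.boxcox(y)               # MLE of the Box-Cox lambda
print("estimated lambda: %.3f" % lam)     # close to 0, i.e. a log transform
print("skew before/after: %.2f / %.2f" % (stats.skew(y), stats.skew(y_bc)))
```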
12. Hong X, Chen S. M-Estimator and D-Optimality Model Construction Using Orthogonal Forward Regression. IEEE Trans Syst Man Cybern B Cybern 2005;35:155-162. [PMID: 15719945] [DOI: 10.1109/tsmcb.2004.839910]
13. Chen S, Hong X, Harris CJ, Sharkey PM. Sparse modeling using orthogonal forward regression with PRESS statistic and regularization. IEEE Trans Syst Man Cybern B Cybern 2004;34:898-911. [PMID: 15376838] [DOI: 10.1109/tsmcb.2003.817107]
Abstract
The paper introduces an efficient construction algorithm for obtaining sparse linear-in-the-weights regression models based on an approach of directly optimizing model generalization capability. This is achieved by utilizing the delete-1 cross-validation concept and the associated leave-one-out test error, also known as the predicted residual sums of squares (PRESS) statistic, without resorting to any other validation data set for model evaluation in the model construction process. Computational efficiency is ensured by using an orthogonal forward regression, but the algorithm incrementally minimizes the PRESS statistic instead of the usual sum of squared training errors. A local regularization method can naturally be incorporated into the model selection procedure to further enforce model sparsity. The proposed algorithm is fully automatic, and the user is not required to specify any criterion to terminate the model construction procedure. Comparisons with some existing state-of-the-art modeling methods are given, and several examples are included to demonstrate the ability of the proposed algorithm to effectively construct sparse models that generalize well.
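For a fixed linear-in-the-weights model, the PRESS statistic can be computed from the hat matrix without refitting, which is the property the incremental algorithm exploits; the sketch below verifies that closed form against a brute-force delete-1 loop on synthetic data, and does not reproduce the paper's orthogonal forward regression or local regularization.

```python
# PRESS (leave-one-out) statistic from the hat matrix, checked by delete-1 refits.
import numpy as np

rng = np.random.default_rng(4)
Phi = rng.normal(size=(80, 6))                          # design / regressor matrix
y = Phi @ np.array([1.0, -2.0, 0.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=80)

H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)           # hat matrix
e = y - H @ y                                           # ordinary residuals
press = np.sum((e / (1.0 - np.diag(H))) ** 2)           # leave-one-out PRESS

# Brute-force check: refit with each sample deleted.
press_check = 0.0
for i in range(len(y)):
    idx = np.arange(len(y)) != i
    w = np.linalg.lstsq(Phi[idx], y[idx], rcond=None)[0]
    press_check += (y[i] - Phi[i] @ w) ** 2
print("PRESS via hat matrix: %.4f, brute force: %.4f" % (press, press_check))
```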
Affiliation(s)
- Sheng Chen, Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
14. Hong X, Harris CJ, Chen S. Robust neurofuzzy rule base knowledge extraction and estimation using subspace decomposition combined with regularization and D-optimality. IEEE Trans Syst Man Cybern B Cybern 2004;34:598-608. [PMID: 15369096] [DOI: 10.1109/tsmcb.2003.817089]
Abstract
A new robust neurofuzzy model construction algorithm is introduced for modeling a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximized model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces so as to enhance model transparency, with the capability of interpreting the derived rule base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, is extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
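A toy Takagi-Sugeno inference example with two hand-written rules and linear consequents makes the rule-base side of the mapping concrete; the membership centres, widths, and consequent coefficients are invented for illustration, and the subspace decomposition, regularization, and D-optimality selection of the paper are not shown.

```python
# Two-rule Takagi-Sugeno inference with Gaussian memberships and linear consequents.
import numpy as np

def gauss(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def ts_output(x):
    # Rule 1: IF x is "low"  THEN y = 0.5 * x + 1
    # Rule 2: IF x is "high" THEN y = 2.0 * x - 2
    mu = np.array([gauss(x, -1.0, 1.0), gauss(x, 2.0, 1.0)])   # firing strengths
    y_rule = np.array([0.5 * x + 1.0, 2.0 * x - 2.0])          # rule consequents
    return np.sum(mu * y_rule) / np.sum(mu)                    # weighted average

for x in (-2.0, 0.5, 3.0):
    print(f"x = {x:+.1f} -> y = {ts_output(x):+.3f}")
```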
Affiliation(s)
- Xia Hong, Cybernetic Intelligence Research Group, Department of Cybernetics, University of Reading, Reading RG6 6AY, UK
15. Hong X, Chen S, Brown M, Harris C. Sparse model identification using orthogonal forward regression with basis pursuit and D-optimality. IEE Proc Control Theory Appl 2004. [DOI: 10.1049/ip-cta:20040693]
16. Hong X, Chen S, Sharkey PM. Automatic kernel regression modelling using combined leave-one-out test score and regularised orthogonal least squares. Int J Neural Syst 2004;14:27-37. [PMID: 15034945] [DOI: 10.1142/s0129065704001875]
Abstract
This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (predicted residual sums of squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches: parameter regularisation via ridge regression and selection of an optimal model structure for generalisation. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive computation formula for PRESS errors in the regularised orthogonal least squares forward regression framework, and hence to construct a model with good generalisation properties. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated model construction procedure without resorting to any other validation data set for model evaluation.
Affiliation(s)
- X Hong, Department of Cybernetics, University of Reading, Reading RG6 6AY, UK
17. Hong X, Sharkey P, Warwick K. Automatic nonlinear predictive model-construction algorithm using forward regression and the PRESS statistic. IEE Proc Control Theory Appl 2003. [DOI: 10.1049/ip-cta:20030311]
18. Chen S, Harris C, Hong X. Sparse multioutput radial basis function network construction using combined locally regularised orthogonal least square and D-optimality experimental design. IEE Proc Control Theory Appl 2003. [DOI: 10.1049/ip-cta:20030253]
19. Hong X, Sharkey P, Warwick K. A robust nonlinear identification algorithm using PRESS statistic and forward regression. IEEE Trans Neural Netw 2003;14:454-458. [DOI: 10.1109/tnn.2003.809422]