151
152
Reconstruction of electroencephalographic data using radial basis functions. Clin Neurophysiol 2016; 127:1978-83. [PMID: 26971479] [DOI: 10.1016/j.clinph.2016.01.003]
Abstract
OBJECTIVE In this paper we introduce a new method for scalp potential interpolation. The predictive value of this new interpolation technique (the multiquadric method) is compared to commonly used interpolation techniques such as nearest-neighbour averaging and spherical splines. METHODS The method of comparison is cross-validation, where the data of one or two electrodes are predicted from the rest of the data. The difference between the predicted and the measured data is used to determine two error measures: the maximal error of an interpolation technique and the mean square error. The methods are tested on data from 30-channel EEG recordings of 10 healthy volunteers. RESULTS The multiquadric interpolation methods performed best on both error measures and were easier to compute than spherical splines. CONCLUSION Multiquadrics are a good alternative to commonly used EEG reconstruction methods. SIGNIFICANCE Multiquadrics have been widely used for reconstruction on sphere-like surfaces, but until now their advantages had not been investigated for EEG reconstruction.
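To make the comparison above concrete, here is a minimal Python sketch of multiquadric interpolation scored by leave-one-electrode-out cross-validation. The shape parameter c, the random stand-in electrode positions and potentials, and the two error measures are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def multiquadric_interpolate(centers, values, query, c=1.0):
    """Fit a multiquadric RBF interpolant phi(r) = sqrt(r^2 + c^2) through
    (centers, values) and evaluate it at the query points."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    weights = np.linalg.solve(np.sqrt(d**2 + c**2), values)
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return np.sqrt(dq**2 + c**2) @ weights

def loo_errors(positions, potentials, c=1.0):
    """Leave-one-electrode-out cross-validation: predict each electrode from the
    others and report the maximal absolute error and the mean square error."""
    errors = []
    for i in range(len(positions)):
        mask = np.arange(len(positions)) != i
        pred = multiquadric_interpolate(positions[mask], potentials[mask],
                                        positions[i:i + 1], c=c)
        errors.append(pred[0] - potentials[i])
    errors = np.asarray(errors)
    return np.abs(errors).max(), np.mean(errors**2)

# Example with random stand-in data: 30 electrode positions on a unit sphere
rng = np.random.default_rng(0)
pos = rng.standard_normal((30, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
pot = rng.standard_normal(30)
print(loo_errors(pos, pot))
```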
153
Rouhani M, Javan DS. Two fast and accurate heuristic RBF learning rules for data classification. Neural Netw 2016; 75:150-61. [PMID: 26797472] [DOI: 10.1016/j.neunet.2015.12.011]
Abstract
This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers and the number of hidden neurons of the network such that higher efficiency is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers some of the training data. The termination condition is to cover all training data or to reach the maximum number of neurons. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in the light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to some well-known datasets and the simulation results, compared with SVM and some other leading RBF learning methods, show their satisfactory and comparable performance.
Affiliation(s)
- Modjtaba Rouhani
- Faculty of engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
- Dawood S Javan
- Faculty of engineering, Ferdowsi University of Mashhad, Mashhad, Iran.
154
Wang N, Sun JC, Er MJ, Liu YC. Hybrid recursive least squares algorithm for online sequential identification using data chunks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.09.090]
155
Ferreira Cruz DP, Dourado Maia R, da Silva LA, de Castro LN. BeeRBF: A bee-inspired data clustering approach to design RBF neural network classifiers. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.03.106]
156
Cui Y, Shi J, Wang Z. Lazy Quantum clustering induced radial basis function networks (LQC-RBFN) with effective centers selection and radii determination. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.10.091]
157
Tominaga D, Mori K, Aburatani S. Linear and Nonlinear Regression for Combinatorial Optimization Problem of Multiple Transgenesis. 2016. [DOI: 10.2197/ipsjtbio.9.7]
Affiliation(s)
- Daisuke Tominaga
- Biotechnology Research Institute for Drug Discovery, National Institute of Advanced Industrial Science and Technology
- Kazuki Mori
- Technology Research Association of Highly Efficient Gene Design
- Sachiyo Aburatani
- Biotechnology Research Institute for Drug Discovery, National Institute of Advanced Industrial Science and Technology
158
Halali MA, Azari V, Arabloo M, Mohammadi AH, Bahadori A. Application of a radial basis function neural network to estimate pressure gradient in water–oil pipelines. J Taiwan Inst Chem Eng 2016. [DOI: 10.1016/j.jtice.2015.06.042]
159
Hong X, Chen S, Gao J, Harris CJ. Nonlinear Identification Using Orthogonal Forward Regression With Nested Optimal Regularization. IEEE Transactions on Cybernetics 2015; 45:2925-2936. [PMID: 25643422] [DOI: 10.1109/tcyb.2015.2389524]
Abstract
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross-validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection, as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, the proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike the LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and the regularization parameters within a single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison with the well-known support vector machine and least absolute shrinkage and selection operator approaches, as well as the LROLS algorithm.
160
Wan TH, Saccoccio M, Chen C, Ciucci F. Influence of the Discretization Methods on the Distribution of Relaxation Times Deconvolution: Implementing Radial Basis Functions with DRTtools. Electrochim Acta 2015. [DOI: 10.1016/j.electacta.2015.09.097]
161
Shahdi SO, Abu-Bakar SAR. Neural Network-Based Approach for Face Recognition Across Varying Pose. Int J Pattern Recogn 2015. [DOI: 10.1142/s0218001415560157]
Abstract
At present, frontal or even near-frontal face recognition is no longer considered a challenge; recently, the focus has shifted to improving the recognition rate for non-frontal faces. In this work, a neural network paradigm based on the radial basis function approach is proposed to tackle the challenge of recognizing faces in different poses. Exploiting the symmetry of the human face, our work takes advantage of the availability of even half of the face. The strategy is to maximize the linearity relationship based on the local information of the face rather than on the global information. To establish this relationship, our proposed method employs the discrete wavelet transform and a multi-color uniform local binary pattern (ULBP) to obtain features for the local information. The local information is then represented by a single vector known as the face feature vector, which is used to estimate the frontal face feature vector for matching against the actual vector. With such an approach, our proposed method relies on a database that contains only single frontal face images. The results shown in this paper demonstrate the robustness of our proposed method even under low-resolution conditions.
Affiliation(s)
- Seyed Omid Shahdi
- Department of Electrical, Biomedical and Mechatronics Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
- S. A. R. Abu-Bakar
- CvviP Research Lab, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
162
Jin X, Shin YC. Nonlinear discrete time optimal control based on Fuzzy Models. Journal of Intelligent & Fuzzy Systems 2015. [DOI: 10.3233/ifs-141376]
163
Chen ZY, Kuo RJ. Evolutionary Algorithm-Based Radial Basis Function Neural Network Training for Industrial Personal Computer Sales Forecasting. Comput Intell 2015. [DOI: 10.1111/coin.12073]
Affiliation(s)
- Zhen-Yao Chen
- Department of Business Administration; De Lin Institute of Technology; New Taipei City Taiwan
- R. J. Kuo
- Department of Industrial Management; National Taiwan University of Science and Technology; Taipei, Taiwan
164
Zhao N, Wen X, Yang J, Li S, Wang Z. Modeling and prediction of viscosity of water-based nanofluids by radial basis function neural networks. Powder Technol 2015. [DOI: 10.1016/j.powtec.2015.04.058]
165
Dash CSK, Sahoo P, Dehuri S, Cho SB. An Empirical Analysis of Evolved Radial Basis Function Networks and Support Vector Machines with Mixture of Kernels. Int J Artif Intell T 2015. [DOI: 10.1142/s021821301550013x]
Abstract
Classification is one of the most fundamental and formidable tasks in many domains, including the biomedical domain. In most biomedical datasets, the distribution of data across the predefined classes is significantly uneven (i.e., the classes are imbalanced). Many mathematical, statistical, and machine learning approaches have been developed for the classification of biomedical datasets, with varying degrees of success. This paper analyzes the empirical performance of two leading machine learning algorithms, each designed for the classification problem and extended with some novelty to address the problem of imbalanced datasets. The evolved radial basis function network with a novel kernel and the support vector machine with a mixture of kernels are suitably designed for the classification of imbalanced datasets. The experimental outcome shows that both algorithms are promising compared with simple radial basis function neural networks and support vector machines, respectively. However, on average, the support vector machine with mixture kernels performs better than the evolved radial basis function neural network.
Affiliation(s)
- Ch. Sanjeev Kumar Dash
- Silicon Institute of Technology, Silicon Hills, Patia, Bhubaneswar-751024, Odisha, India
- Pulak Sahoo
- Silicon Institute of Technology, Silicon Hills, Patia, Bhubaneswar-751024, Odisha, India
- Satchidananda Dehuri
- Department of Systems Engineering, Ajou University, San 5, Woncheon-dong, Yeongtong-gu, Suwon-443-749, South Korea
- Sung-Bae Cho
- Soft Computing Laboratory, Department of Computer Science, Yonsei University, 134 Shinchon-dong, Sudaemoon-gu, Seoul 120-749, South Korea
166
Zhang L, Li K, Bai EW, Irwin GW. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1608-1621. [PMID: 25222956] [DOI: 10.1109/tnnls.2014.2346399]
Abstract
A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
167
Zhang Q, Hu X, Zhang B. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1828-1833. [PMID: 25532195] [DOI: 10.1109/tnnls.2014.2377245]
Abstract
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
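As an illustration of the RBF-design role that sparse coding plays in the comparison above, the sketch below (Python, scikit-learn assumed available; the Gaussian width, network size and function names are illustrative assumptions) uses orthogonal matching pursuit to pick a sparse set of RBF centres from a dictionary built over all training points, the role traditionally played by orthogonal least squares.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def rbf_dictionary(X, centers, width=1.0):
    """Gaussian RBF design matrix: one column per candidate centre."""
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width**2))

def design_rbf_with_omp(X, y, width=1.0, n_centers=20):
    """Every training point is a candidate centre; OMP keeps n_centers of them."""
    Phi = rbf_dictionary(X, X, width)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_centers).fit(Phi, y)
    return np.flatnonzero(omp.coef_), omp   # chosen centre indices, fitted model

# Toy usage: fit y = sin(3x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
idx, model = design_rbf_with_omp(X, y, width=0.5, n_centers=15)
y_hat = model.predict(rbf_dictionary(X, X, 0.5))
print(len(idx), np.mean((y - y_hat)**2))
```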
168
Yuan J, Zhang Q, Wang YX, Wei J, Zhou J. Accuracy and uncertainty of asymmetric magnetization transfer ratio quantification for amide proton transfer (APT) imaging at 3T: a Monte Carlo study. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2015; 2013:5139-42. [PMID: 24110892] [DOI: 10.1109/embc.2013.6610705]
Abstract
Amide proton transfer (APT) imaging offers a novel and powerful MRI contrast mechanism for quantitative molecular imaging based on the principle of chemical exchange saturation transfer (CEST). Asymmetric magnetization transfer ratio (MTR(asym)) quantification is crucial for Z-spectrum analysis of APT imaging, but is still challenging, particularly at clinical field strength. This paper studies the accuracy and uncertainty of MTR(asym) quantification for APT imaging at 3T, using high-order polynomial fitting of the Z-spectrum through Monte Carlo simulation. Results show that polynomial fitting is a biased estimator that consistently underestimates MTR(asym). For a fixed polynomial order, the accuracy of MTR(asym) is almost constant with respect to the signal-to-noise ratio (SNR), while the uncertainty decreases exponentially with SNR. Higher-order polynomial fitting increases both the accuracy and the uncertainty of MTR(asym). For different APT signal intensity levels, the relative accuracy and the absolute uncertainty remain constant for a fixed polynomial order. These results indicate the limitations and pitfalls of polynomial fitting for MTR(asym) quantification, so a better quantification technique for MTR(asym) estimation is warranted.
169
Water Quality Modeling in Reservoirs Using Multivariate Linear Regression and Two Neural Network Models. 2015. [DOI: 10.1155/2015/521721]
Abstract
In this study, two artificial neural network models (i.e., a radial basis function neural network, RBFN, and an adaptive neurofuzzy inference system approach, ANFIS) and a multilinear regression (MLR) model were developed to simulate the DO, TP, Chl a, and SD in the Mingder Reservoir of central Taiwan. The input variables of the neural network and the MLR models were determined using linear regression. The performances were evaluated using the RBFN, ANFIS, and MLR models based on statistical errors, including the mean absolute error, the root mean square error, and the correlation coefficient, computed from the measured and the model-simulated DO, TP, Chl a, and SD values. The results indicate that the performance of the ANFIS model is superior to those of the MLR and RBFN models. The study results show that the neural network using the ANFIS model is suitable for simulating the water quality variables with reasonable accuracy, suggesting that the ANFIS model can be used as a valuable tool for reservoir management in Taiwan.
170
Huang GB. What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt’s Dream and John von Neumann’s Puzzle. Cognit Comput 2015. [DOI: 10.1007/s12559-015-9333-0]
171
Weruaga L, Vía J. Sparse multivariate Gaussian mixture regression. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1098-1108. [PMID: 25029490] [DOI: 10.1109/tnnls.2014.2334596]
Abstract
Fitting a multivariate Gaussian mixture to data is an attractive as well as challenging problem, especially when sparsity in the solution is demanded. Achieving this objective requires the concurrent update of all parameters (weights, centers, and precisions) of all multivariate Gaussian functions during the learning process. Such is the focus of this paper, which presents a novel method founded on the minimization of the error of the generalized logarithmic utility function (GLUF). This choice, which allows us to move smoothly from the mean square error (MSE) criterion to one based on the logarithmic error, yields an optimization problem that resembles a locally convex problem and can be solved with a quasi-Newton method. The GLUF framework also facilitates a comparative study between both extremes, concluding that the classical MSE optimization is not the most adequate for the task. The performance of the proposed technique is demonstrated on simulated as well as realistic scenarios.
172
Adaptive structure radial basis function network model for processes with operating region migration. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.12.030]
173
Pani AK, Mohanta HK. Online monitoring and control of particle size in the grinding process using least square support vector regression and resilient back propagation neural network. ISA Transactions 2015; 56:206-221. [PMID: 25528293] [DOI: 10.1016/j.isatra.2014.11.011]
Abstract
Particle size soft sensing in cement mills can be very helpful in maintaining the desired cement fineness or Blaine. Despite the growing use of vertical roller mills (VRM) for clinker grinding, very little research work is available on VRM modeling. This article reports the design of three types of feed-forward neural network models and a least square support vector regression (LS-SVR) model of a VRM for online monitoring of cement fineness based on mill data collected from a cement plant. In the data pre-processing step, a comparative study of various outlier detection algorithms was performed. Subsequently, for model development, the advantage of algorithm-based data splitting over random selection is presented. The training data set obtained with the Kennard-Stone maximal intra-distance criterion (CADEX algorithm) was used for the development of the LS-SVR, back propagation neural network, radial basis function neural network and generalized regression neural network models. Simulation results show that the resilient back propagation model performs better than the RBF network, the generalized regression network and the LS-SVR model. Model implementation was done on the SIMULINK platform, showing online detection of abnormal data and real-time estimation of cement Blaine from knowledge of the input variables. Finally, a closed-loop study shows how the model can be effectively utilized to maintain cement fineness at the desired value.
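A minimal Python sketch of the Kennard-Stone (CADEX) maximal intra-distance splitting mentioned above; the Euclidean metric, the train-set size, and the toy data are assumptions for illustration rather than the paper's configuration.

```python
import numpy as np

def kennard_stone_split(X, n_train):
    """Kennard-Stone (CADEX) selection: start from the two most distant samples,
    then repeatedly add the remaining sample whose minimum distance to the
    already-selected set is largest.  Returns (train indices, test indices)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train and remaining:
        nearest = d[np.ix_(remaining, selected)].min(axis=1)   # distance to selected set
        pick = remaining[int(np.argmax(nearest))]
        selected.append(pick)
        remaining.remove(pick)
    return np.array(selected), np.array(remaining)

# Example: 70/30 split of a toy process data set
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
train_idx, test_idx = kennard_stone_split(X, n_train=70)
print(len(train_idx), len(test_idx))
```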
Affiliation(s)
- Ajaya Kumar Pani
- Department of Chemical Engineering, Birla Institute of Technology and Science, Pilani, Rajasthan 333031, India.
- Hare Krishna Mohanta
- Department of Chemical Engineering, Birla Institute of Technology and Science, Pilani, Rajasthan 333031, India.
174
Han Z, Feng RB, Yan Wan W, Leung CS. Online training and its convergence for faulty networks with multiplicative weight noise. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.12.049]
175
Ha Q, Wahid H, Duc H, Azzi M. Enhanced radial basis function neural networks for ozone level estimation. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.12.048]
176
A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network. PLoS One 2015; 10:e0122385. [PMID: 25849483] [PMCID: PMC4388524] [DOI: 10.1371/journal.pone.0122385]
Abstract
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a radial basis function neural network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is applied to the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
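For readers unfamiliar with (2D)2PCA, the following Python sketch shows the feature-extraction step on sliding-window matrices of technical indicators; the window shape, the number of retained directions, and the variable names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def two_directional_2dpca(windows, k_rows=5, k_cols=5):
    """(2D)^2 PCA: project each 2-D sample onto the leading row- and
    column-direction eigenvectors, giving a small k_rows x k_cols feature
    matrix per sample (here, a sliding window of stock indicators)."""
    A = np.asarray(windows, dtype=float)               # shape (N, rows, cols)
    C = A - A.mean(axis=0)
    G_col = np.einsum('nij,nik->jk', C, C) / len(A)    # column-direction scatter
    G_row = np.einsum('nji,nki->jk', C, C) / len(A)    # row-direction scatter
    X = np.linalg.eigh(G_col)[1][:, ::-1][:, :k_cols]  # top right-projection axes
    Z = np.linalg.eigh(G_row)[1][:, ::-1][:, :k_rows]  # top left-projection axes
    return np.einsum('ri,nrc,cj->nij', Z, A, X)        # Z^T A X for every sample

# Toy usage: 500 windows of 20 days x 36 technical indicators
rng = np.random.default_rng(0)
windows = rng.standard_normal((500, 20, 36))
features = two_directional_2dpca(windows, k_rows=4, k_cols=6)
print(features.shape)   # (500, 4, 6); flatten each matrix as an RBFNN input
```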
177
Li Y, Wee CY, Jie B, Peng Z, Shen D. Sparse multivariate autoregressive modeling for mild cognitive impairment classification. Neuroinformatics 2015; 12:455-69. [PMID: 24595922] [DOI: 10.1007/s12021-014-9221-x]
Abstract
Brain connectivity networks derived from functional magnetic resonance imaging (fMRI) are becoming increasingly prevalent in research on cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of the brain network, particularly when the ultimate goal is to obtain good control-patient classification performance with biologically meaningful interpretations. Understanding directed functional interactions between brain regions via a brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating the sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize the temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reduces spurious causal connectivity in the assessment of directed functional interactions from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several highly discriminative regions, including the middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with results reported in previous findings. A relatively high classification accuracy of 91.89% is also achieved, an increment of 5.4% compared with the fully-connected, non-directional Pearson-correlation-based functional connectivity approach.
Affiliation(s)
- Yang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
178
Improved Complex-valued Radial Basis Function (ICRBF) neural networks on multiple crack identification. Appl Soft Comput 2015. [DOI: 10.1016/j.asoc.2014.10.044]
179
Gan M, Li HX, Peng H. A variable projection approach for efficient estimation of RBF-ARX model. IEEE Transactions on Cybernetics 2015; 45:476-485. [PMID: 24988599] [DOI: 10.1109/tcyb.2014.2328438]
Abstract
Radial basis function network-based autoregressive with exogenous inputs (RBF-ARX) models have many more linear parameters than nonlinear parameters. Taking advantage of this special structure, a variable projection algorithm is proposed to estimate the model parameters more efficiently by eliminating the linear parameters through orthogonal projection. The proposed method not only substantially reduces the dimension of the parameter space of the RBF-ARX model but also results in a better-conditioned problem. In this paper, both the full Jacobian matrix of Golub and Pereyra and Kaufman's simplification are used to test the performance of the algorithm. An example of chaotic time series modeling is presented for numerical comparison. It clearly demonstrates that the proposed approach is computationally more efficient than the previous structured nonlinear parameter optimization method and the conventional Levenberg-Marquardt algorithm without parameter separation. Finally, the proposed method is also applied to a simulated nonlinear single-input single-output process, a time-varying nonlinear process and a real multi-input multi-output nonlinear industrial process to illustrate its usefulness.
180
Distributed Extreme Learning Machine for Nonlinear Learning over Network. Entropy 2015. [DOI: 10.3390/e17020818]
181
Liu Y, Huang H, Huang T. WITHDRAWN: An improved maximum spread algorithm with application to complex-valued RBF neural networks. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.02.006]
182
Feng G, Lan Y, Zhang X, Qian Z. Dynamic adjustment of hidden node parameters for extreme learning machine. IEEE Transactions on Cybernetics 2015; 45:279-288. [PMID: 24919208] [DOI: 10.1109/tcyb.2014.2325594]
Abstract
Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. ELMs have proved very fast and effective, especially for solving function approximation problems with a predetermined network structure. However, an ELM may contain insignificant hidden nodes. In this paper, we propose the dynamic adjustment ELM (DA-ELM), which further tunes the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved in this paper that the energy error can be effectively reduced by applying the recursive expectation-minimization theorem. In DA-ELM, the input parameters of insignificant hidden nodes are updated in the decreasing direction of the energy error in each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as Bayesian ELM, optimally-pruned ELM, two-stage ELM, Levenberg-Marquardt, the sensitivity-based linear learning method, and the preliminary ELM.
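For context, a minimal Python sketch of the baseline ELM that DA-ELM refines: random input weights and biases, a sigmoid hidden layer, and output weights solved in closed form. The dynamic adjustment of insignificant hidden nodes is not reproduced here; the sizes, activation and names are illustrative assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Baseline ELM: random hidden-layer parameters, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                      # output weights (closed form)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression example
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
W, b, beta = elm_train(X, y, n_hidden=50)
print(np.mean((elm_predict(X, W, b, beta) - y)**2))
```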
183
Cheng C, Sa-Ngasoongsong A, Beyca O, Le T, Yang H, Kong Z(J), Bukkapatnam ST. Time series forecasting for nonlinear and non-stationary processes: a review and comparative study. 2015. [DOI: 10.1080/0740817x.2014.999180]
184
185
Hirata Y, Shiro M, Takahashi N, Aihara K, Suzuki H, Mas P. Approximating high-dimensional dynamics by barycentric coordinates with linear programming. Chaos 2015; 25:013114. [PMID: 25637925] [DOI: 10.1063/1.4906746]
Abstract
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Affiliation(s)
- Yoshito Hirata
- Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- Masanori Shiro
- Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan
- Nozomu Takahashi
- Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193, Spain
- Kazuyuki Aihara
- Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- Hideyuki Suzuki
- Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
- Paloma Mas
- Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193, Spain
186
Zhang Y, Li Y, Sun J, Ji J. Estimates on compressed neural networks regression. Neural Netw 2014; 63:10-7. [PMID: 25463391] [DOI: 10.1016/j.neunet.2014.10.008]
Abstract
When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises, since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error and an upper bound of the excess error is given.
Affiliation(s)
- Yongquan Zhang
- Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China.
- Youmei Li
- Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China
- Jianyong Sun
- School of Engineering, University of Greenwich, Central Avenue, Chatham Maritime, Kent ME4 4TB, UK
- Jiabing Ji
- Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China
187
Pérez-Godoy M, Rivera AJ, Carmona C, del Jesus M. Training algorithms for Radial Basis Function Networks to tackle learning processes with imbalanced data-sets. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.09.011]
188
Sanchez E, Peng W, Toro C, Sanin C, Graña M, Szczerbicki E, Carrasco E, Guijarro F, Brualla L. Decisional DNA for modeling and reuse of experiential clinical assessments in breast cancer diagnosis and treatment. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.06.032]
189
Kodogiannis VS, Kontogianni E, Lygouras JN. RETRACTED: Neural network based identification of meat spoilage using Fourier-transform infrared spectra. J Food Eng 2014. [DOI: 10.1016/j.jfoodeng.2014.06.018]
190
Deng Z, Choi KS, Jiang Y, Wang S. Generalized hidden-mapping ridge regression, knowledge-leveraged inductive transfer learning for neural networks, fuzzy systems and kernel methods. IEEE Transactions on Cybernetics 2014; 44:2585-2599. [PMID: 24710838] [DOI: 10.1109/tcyb.2014.2311014]
Abstract
Inductive transfer learning has attracted increasing attention for the training of effective model in the target domain by leveraging the information in the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems and kernel methods. Furthermore, the knowledge-leverage based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms.
191
Huang G, Song S, Gupta JND, Wu C. Semi-supervised and unsupervised extreme learning machines. IEEE Transactions on Cybernetics 2014; 44:2405-2417. [PMID: 25415946] [DOI: 10.1109/tcyb.2014.2307349]
Abstract
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
192
Knefati MA, Chauvet PE, N’Guyen S, Daya B. Reference Curves Estimation Using Conditional Quantile and Radial Basis Function Network with Mass Constraint. Neural Process Lett 2014. [DOI: 10.1007/s11063-014-9399-9]
193
Huang G, Huang GB, Song S, You K. Trends in extreme learning machines: a review. Neural Netw 2014; 61:32-48. [PMID: 25462632] [DOI: 10.1016/j.neunet.2014.10.001]
Abstract
Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM that further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended to clustering, feature selection, representational learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
Affiliation(s)
- Gao Huang
- Department of Automation, Tsinghua University, Beijing 100084, China.
194
Yu H, Reiner PD, Xie T, Bartczak T, Wilamowski BM. An incremental design of radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:1793-1803. [PMID: 25203995] [DOI: 10.1109/tnnls.2013.2295813]
Abstract
This paper proposes an offline algorithm for incrementally constructing and training radial basis function (RBF) networks. In each iteration of the error correction (ErrCor) algorithm, one RBF unit is added to fit and then eliminate the highest peak (or lowest valley) in the error surface. This process is repeated until a desired error level is reached. Experimental results on real world data sets show that the ErrCor algorithm designs very compact RBF networks compared with the other investigated algorithms. Several benchmark tests such as the duplicate patterns test and the two spiral problem were applied to show the robustness of the ErrCor algorithm. The proposed ErrCor algorithm generates very compact networks. This compactness leads to greatly reduced computation times of trained networks.
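A rough Python sketch of the error-correction idea described above: place each new Gaussian unit at the largest residual peak and re-solve all output weights. The original ErrCor algorithm also adjusts centres and widths with second-order training, which is omitted here; the fixed width, stopping values and toy data are assumptions.

```python
import numpy as np

def errcor_rbf(X, y, max_units=30, tol=1e-3, width=1.0):
    """Incrementally add Gaussian RBF units at the highest error peak (or lowest
    valley), re-fitting all output weights by least squares after each addition."""
    centers, weights = [], np.zeros(0)
    residual = y.astype(float).copy()
    while len(centers) < max_units and np.abs(residual).max() > tol:
        centers.append(X[np.argmax(np.abs(residual))])       # worst-fit sample
        C = np.asarray(centers)
        d2 = ((X[:, None, :] - C[None, :, :])**2).sum(axis=-1)
        Phi = np.exp(-d2 / (2.0 * width**2))
        weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # re-solve output weights
        residual = y - Phi @ weights
    return np.asarray(centers), weights

# Toy usage on a 1-D target
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sinc(X[:, 0])
centers, weights = errcor_rbf(X, y, max_units=15, tol=1e-4, width=0.6)
print(len(centers))
```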
195
Wang N, Er MJ, Han M. Parsimonious extreme learning machine using recursive orthogonal least squares. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:1828-1841. [PMID: 25291736] [DOI: 10.1109/tnnls.2013.2296048]
Abstract
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multiinput-multioutput single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and is derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
196
Hong X, Chen S, Qatawneh A, Daqrouq K, Sheikh M, Morfeq A. A radial basis function network classifier to maximise leave-one-out mutual information. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.06.003]
197
Shivaie M, Salemnia A, Ameli MT. A multi-objective approach to optimal placement and sizing of multiple active power filters using a music-inspired algorithm. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.05.011]
198
Research on Neural Network Prediction of Power Transmission and Transformation Project Cost Based on GA-RBF and PSO-RBF. 2014. [DOI: 10.4028/www.scientific.net/amm.644-650.2526]
Abstract
This paper's primary aim is to predict the cost of power transmission and transformation projects in a certain Chinese province using GA-RBF and PSO-RBF neural networks. The project data are divided into two main categories, power transformation projects and power line construction projects, with the cost per capacity (RMB/kVA) and the cost per unit length (RMB/km) as the indicators of each category. After filtering out the main influencing factors and initializing the data, the normalized data are fed into the GA-RBF and PSO-RBF prediction models. The empirical analysis is carried out in Matlab. The prediction accuracy can be compared intuitively from the neural network outputs, and the results show that GA-RBF is more precise than PSO-RBF when applied to project cost prediction.
199
Cusmano I, Sterpi I, Mazzone A, Ramat S, Delconte C, Pisano F, Colombo R. Evaluation of upper limb sense of position in healthy individuals and patients after stroke. Journal of Healthcare Engineering 2014; 5:145-62. [PMID: 24918181] [DOI: 10.1260/2040-2295.5.2.145]
Abstract
The aims of this study were to develop and evaluate reliability of a quantitative assessment tool for upper limb sense of position on the horizontal plane. We evaluated 15 healthy individuals (controls) and 9 stroke patients. A robotic device passively moved one arm of the blindfolded participant who had to actively move his/her opposite hand to the mirror location in the workspace. Upper-limb's position was evaluated by a digital camera. The position of the passive hand was compared with the active hand's 'mirror' position. Performance metrics were then computed to measure the mean absolute errors, error variability, spatial contraction/expansion, and systematic shifts. No significant differences were observed between dominant and non-dominant active arms of controls. All performance parameters of the post-stroke group differed significantly from those of controls. This tool can provide a quantitative measure of upper limb sense of position, therefore allowing detection of changes due to rehabilitation.
Affiliation(s)
- I Cusmano
- IRCCS, Service of Bioengineering, "Salvatore Maugeri" Foundation, Pavia, Italy
- I Sterpi
- IRCCS, Service of Bioengineering, "Salvatore Maugeri" Foundation, Pavia, Italy
- A Mazzone
- IRCCS, Service of Bioengineering, "Salvatore Maugeri" Foundation, Veruno (NO), Italy
- S Ramat
- Department of Computer and Systems Science, University of Pavia, Pavia, Italy
- C Delconte
- IRCCS, Division of Neurology, "Salvatore Maugeri" Foundation, Veruno (NO), Italy
- F Pisano
- IRCCS, Division of Neurology, "Salvatore Maugeri" Foundation, Veruno (NO), Italy
- R Colombo
- IRCCS, Service of Bioengineering, "Salvatore Maugeri" Foundation, Pavia, Italy; IRCCS, Service of Bioengineering, "Salvatore Maugeri" Foundation, Veruno (NO), Italy
200
Zhang Y, Yu X, Guo D, Yin Y, Zhang Z. Weights and structure determination of multiple-input feed-forward neural network activated by Chebyshev polynomials of Class 2 via cross-validation. Neural Comput Appl 2014. [DOI: 10.1007/s00521-014-1667-0]