1
Gao S, Zhou M, Wang Z, Sugiyama D, Cheng J, Wang J, Todo Y. Fully Complex-Valued Dendritic Neuron Model. IEEE Trans Neural Netw Learn Syst 2023; 34:2105-2118. [PMID: 34487498] [DOI: 10.1109/tnnls.2021.3105901]
Abstract
A single dendritic neuron model (DNM), which possesses the nonlinear information-processing ability of dendrites, has been widely used for classification and prediction. Complex-valued neural networks consisting of multiple- or deep-layer McCulloch-Pitts neurons have achieved great success since neural computing was first applied to signal processing, yet complex-valued representations have not appeared in single-neuron architectures. In this article, we first extend the DNM from the real-valued domain to a complex-valued one. The performance of the complex-valued DNM (CDNM) is evaluated on a complex XOR problem, a non-minimum-phase equalization problem, and a real-world wind prediction task. A comparative analysis of a set of elementary transcendental functions as activation functions is also carried out, along with preparatory experiments for determining hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, a complex-valued multilayer perceptron, and other complex-valued neuron models.
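A dendritic neuron model composes a synaptic sigmoid layer, a multiplicative dendritic layer, a summing membrane, and a soma; the complex-valued extension runs the same pipeline with complex weights and a fully complex sigmoid. A minimal forward-pass sketch (layer names and parameter shapes follow common DNM formulations and are not necessarily the paper's exact notation):

```python
import numpy as np

def csigmoid(z, k=1.0):
    # Fully complex sigmoid applied directly to the complex argument
    return 1.0 / (1.0 + np.exp(-k * z))

def cdnm_forward(x, W, Theta):
    """Forward pass of a dendritic neuron model with complex parameters.

    x:     complex input vector, shape (n_inputs,)
    W:     complex synaptic weights, shape (n_dendrites, n_inputs)
    Theta: complex synaptic thresholds, same shape as W
    """
    # Synaptic layer: one sigmoid gate per (input, dendrite) pair
    syn = csigmoid(W * x - Theta)            # (n_dendrites, n_inputs)
    # Dendritic layer: multiplicative interaction along each branch
    dend = np.prod(syn, axis=1)              # (n_dendrites,)
    # Membrane layer sums branch currents; the soma applies a final sigmoid
    return csigmoid(dend.sum())

rng = np.random.default_rng(0)
x = np.array([1 + 1j, -1 + 0.5j])
W = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
Theta = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
out = cdnm_forward(x, W, Theta)
print(out)
```

Training such a model typically unfolds these complex-valued operations into a gradient (e.g., via Wirtinger calculus); the sketch only shows the inference path.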
2
Kafiyan-Safari M, Rouhani M. Adaptive one-pass passive-aggressive radial basis function for classification problems. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.047]
3
Wan L, Liu Z. Multiple O(t^-q) stability and instability of time-varying delayed fractional-order Cohen-Grossberg neural networks with Gaussian activation functions. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.018]
4
Improved Stabilization Results for Markovian Switching CVNNs with Partly Unknown Transition Rates. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10299-4]
5
Saoud LS, Ghorbani R. Metacognitive Octonion-Valued Neural Networks as They Relate to Time Series Analysis. IEEE Trans Neural Netw Learn Syst 2020; 31:539-548. [PMID: 30990445] [DOI: 10.1109/tnnls.2019.2905643]
Abstract
In this paper, a metacognitive octonion-valued neural network (Mc-OVNN) learning algorithm and its application to diverse time-series prediction are presented. The Mc-OVNN comprises two components: the octonion-valued neural network, which represents the cognitive component, and the metacognitive component, which self-regulates the learning algorithm. At each epoch, the metacognitive component decides if, how, and when learning occurs; the algorithm deletes unneeded samples and stores only those that will be used. This decision is determined by the octonion magnitude and the seven phases. To evaluate the Mc-OVNN algorithm's performance, it is applied to five real-world forecasting problems: the power consumption of a home in Honolulu, HI, USA; the Box and Jenkins J series; Euro to Algerian Dinar (DZ) real-time conversion rates; the Mackey-Glass equation; and Europe Brent oil price prediction. Comparisons with other relevant techniques demonstrate the Mc-OVNN's capability for efficient time-series prediction. A real-time evaluation of the proposed algorithm is also presented, using the power consumption of a home in Boumerdès, Algeria, as a case study.
6
Aruna Kumar S, Harish B, Mahanand B, Sundararajan N. An efficient Meta-cognitive Fuzzy C-Means clustering approach. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105838]
7
Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application. Neural Netw 2019; 115:50-64. [DOI: 10.1016/j.neunet.2019.02.011]
8
Liu P, Nie X, Liang J, Cao J. Multiple Mittag-Leffler stability of fractional-order competitive neural networks with Gaussian activation functions. Neural Netw 2018; 108:452-465. [DOI: 10.1016/j.neunet.2018.09.005]
9
Chen CLP. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems. IEEE Trans Cybern 2018; 48:2866-2874. [PMID: 28981436] [DOI: 10.1109/tcyb.2017.2751558]
Abstract
For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear part and a nonlinear part and take full advantage of reliable linear least squares techniques. The original problem is thereby turned into a reduced problem that involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth uses only the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm and may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
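The separable structure is easy to see in code: for a model y ≈ Φ(α)c, the linear coefficients c are eliminated by a least-squares solve inside the residual, and a Levenberg-Marquardt optimizer then works only on the nonlinear parameters α. A sketch on assumed synthetic data, a two-term exponential fit; this illustrates the variable projection idea, not any of the paper's four specific Jacobian variants:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data: y = c1*exp(-a1*t) + c2*exp(-a2*t) with small noise
rng = np.random.default_rng(1)
t = np.linspace(0, 4, 80)
a_true, c_true = np.array([0.5, 2.0]), np.array([1.0, 3.0])
y = np.exp(-np.outer(t, a_true)) @ c_true + 0.01 * rng.standard_normal(t.size)

def basis(alpha):
    # Columns of Phi depend only on the nonlinear parameters alpha
    return np.exp(-np.outer(t, alpha))

def reduced_residual(alpha):
    # Eliminate the linear part: c*(alpha) = argmin_c ||Phi(alpha) c - y||
    Phi = basis(alpha)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

# Optimize only over alpha with a Levenberg-Marquardt-type method
sol = least_squares(reduced_residual, x0=[0.2, 1.0], method="lm")

# Recover the linear coefficients at the optimal alpha
Phi = basis(sol.x)
c_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.sort(sol.x), np.sort(c_hat))
```

Solving the full four-parameter problem directly would couple c and α in one Jacobian; the reduced problem here has only two unknowns, which is the practical payoff the abstract describes.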
10
Towards the use of fuzzy logic systems in rotary wing unmanned aerial vehicle: a review. Artif Intell Rev 2018. [DOI: 10.1007/s10462-018-9653-z]
11
Kobayashi M. Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function. IEEE Trans Neural Netw Learn Syst 2018; 29:1900-1907. [PMID: 28422693] [DOI: 10.1109/tnnls.2017.2688322]
Abstract
There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) with hidden neurons not employing biases. The situation of CVNNs with hidden neurons employing biases, however, is very complicated. Exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity has not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.
13
Pratama M, Lughofer E, Er MJ, Anavatti S, Lim CP. Data driven modelling based on Recurrent Interval-Valued Metacognitive Scaffolding Fuzzy Neural Network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.10.093]
16
Wen H, Xie W, Pei J. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy. PLoS One 2016; 11:e0164719. [PMID: 27792737] [PMCID: PMC5085025] [DOI: 10.1371/journal.pone.0164719]
Abstract
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network and a BP network in cascade, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network for nonlinear classification. The optimized learning strategy is as follows. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and the node parameters, and a form of heterogeneous-sample repulsive force is designed to further optimize the parameters of each generated RBF hidden node; the optimized structure-adaptive RBF network then performs adaptive nonlinear mapping of the sample space. Next, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Compared with other algorithms applied to different data sets, experiments show the superiority of SAHRBF-BP, especially on most low-dimensional data sets with large numbers of samples, where its classification performance outperforms other algorithms for training SLFNs.
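The cascade is straightforward to prototype: an RBF layer performs the nonlinear kernel mapping, and a gradient-trained logistic output stands in for the BP stage. A toy sketch on assumed synthetic blobs; the centers are sampled at random rather than placed by the paper's potential-function procedure, and the BP stage is reduced to a single logistic neuron:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-class data: two well-separated Gaussian blobs
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# RBF stage: fixed centers drawn from the data (the paper instead adapts
# the number and position of centers with a potential function)
centers = X[rng.choice(len(X), 8, replace=False)]

def rbf_map(X, centers, gamma=1.0):
    # Gaussian kernel mapping: one feature per center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = rbf_map(X, centers)  # nonlinear kernel features, shape (100, 8)

# "BP" stage: a single logistic output trained by gradient descent
w, b = np.zeros(Phi.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))
    g = p - y                       # gradient of the cross-entropy loss
    w -= 0.5 * Phi.T @ g / len(y)
    b -= 0.5 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(Phi @ w + b))) > 0.5) == y).mean()
print(acc)
```

The RBF features linearize the problem enough that even this one-neuron output layer separates the classes; SAHRBF-BP's contribution is in choosing how many such features to generate and where to place them.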
Affiliation(s)
- Hui Wen: ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China
- Weixin Xie: ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China
- Jihong Pei: ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China
17
Liu P, Zeng Z, Wang J. Complete stability of delayed recurrent neural networks with Gaussian activation functions. Neural Netw 2016; 85:21-32. [PMID: 27814464] [DOI: 10.1016/j.neunet.2016.09.006]
Abstract
This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of nonsingular M-matrices, sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points, with 0 ≤ k ≤ n, among which 2^k are locally exponentially stable and 3^k - 2^k are unstable. Moreover, it is shown that all states converge to one of the equilibrium points, i.e., the neural networks are completely stable. The derived conditions can be easily tested. Finally, a numerical example is given to illustrate the theoretical results.
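For a single neuron the 3^k / 2^k count can be checked numerically: a scalar network dx/dt = -x + w·g(x) + u with a Gaussian g can have up to three equilibria, of which the two where the right-hand side has negative slope are stable. A sketch with hand-picked parameters; w, μ, and u below are chosen only so that three crossings occur, and the delay is dropped:

```python
import numpy as np

# Scalar Gaussian-activation network: dx/dt = -x + w*g(x) + u.
# Parameters are hand-picked so that exactly three equilibria appear.
w, mu, sigma, u = 3.0, 2.0, 1.0, -0.5

def f(x):
    g = np.exp(-((x - mu) ** 2) / sigma**2)   # Gaussian activation
    return -x + w * g + u

# Locate equilibria as sign changes of f on a fine grid
xs = np.linspace(-5.0, 10.0, 20001)
vals = f(xs)
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
equilibria = [(xs[i] + xs[i + 1]) / 2.0 for i in idx]

# Stability test: f'(x*) < 0 means the equilibrium is locally stable
fp = np.gradient(vals, xs)
stable = [e for i, e in zip(idx, equilibria) if fp[i] < 0]

print(len(equilibria), len(stable))  # 3 equilibria, 2 of them stable
```

This matches the paper's count for n = k = 1: 3^1 = 3 equilibria, 2^1 = 2 stable, 3^1 - 2^1 = 1 unstable (the middle crossing, where the Gaussian's rising edge outruns the linear decay).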
Affiliation(s)
- Peng Liu: School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China
- Zhigang Zeng: School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China
- Jun Wang: Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
18
Pratama M, Lu J, Lughofer E, Zhang G, Anavatti S. Scaffolding type-2 classifier for incremental learning under concept drifts. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.049]
19
Venkatesh Babu R, Rangarajan B, Sundaram S, Tom M. Human action recognition in H.264/AVC compressed domain using meta-cognitive radial basis function network. Appl Soft Comput 2015. [DOI: 10.1016/j.asoc.2015.06.054]
20
Xu D, Zhang H, Mandic DP. Convergence analysis of an augmented algorithm for fully complex-valued neural networks. Neural Netw 2015; 69:44-50. [DOI: 10.1016/j.neunet.2015.05.003]
21
Liu Y, Huang H, Huang T. WITHDRAWN: An improved maximum spread algorithm with application to complex-valued RBF neural networks. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.02.006]
22
Subramanian K, Savitha R, Suresh S. A complex-valued neuro-fuzzy inference system and its learning mechanism. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.06.009]
23
A meta-cognitive interval type-2 fuzzy inference system and its projection based learning algorithm. Evolving Systems 2013. [DOI: 10.1007/s12530-013-9102-9]
24
Feng R, Xiao Y, Leung CS, Tsang PWM, Sum J. An Improved Fault-Tolerant Objective Function and Learning Algorithm for Training the Radial Basis Function Neural Network. Cognit Comput 2013. [DOI: 10.1007/s12559-013-9236-x]
26
Savitha R, Suresh S, Sundararajan N. Projection-based fast learning fully complex-valued relaxation neural network. IEEE Trans Neural Netw Learn Syst 2013; 24:529-541. [PMID: 24808375] [DOI: 10.1109/tnnls.2012.2235460]
Abstract
This paper presents a fully complex-valued relaxation network (FCRN) with its projection-based learning algorithm. The FCRN is a single-hidden-layer network with a Gaussian-like sech activation function in the hidden layer and an exponential activation function in the output layer. For a given number of hidden neurons, the input weights are assigned randomly and the output weights are estimated by minimizing a nonlinear logarithmic function (called an energy function) that explicitly contains both the magnitude and phase errors. A projection-based learning algorithm determines the optimal output weights corresponding to the minima of the energy function by converting the nonlinear programming problem into one of solving a set of simultaneous linear algebraic equations. The resultant FCRN approximates the desired output more accurately with lower computational effort. The classification ability of the FCRN is evaluated using a set of real-valued benchmark classification problems from the University of California, Irvine machine learning repository, where a circular transformation is used to map the real-valued input features to the complex domain. The FCRN is then used to solve three practical problems: quadrature amplitude modulation channel equalization, adaptive beamforming, and mammogram classification. The performance results clearly indicate the superior classification/approximation performance of the FCRN.
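The projection step that makes FCRN fast is, at its core, a linear solve for the output weights once the randomly weighted hidden layer is fixed. A stripped-down sketch: it keeps the sech hidden activation but replaces the logarithmic magnitude-phase energy and exponential output activation with a plain complex least-squares fit, so it illustrates only the random-hidden-layer-plus-projection structure, not the paper's exact energy function:

```python
import numpy as np

def csech(z):
    # Gaussian-like "sech" activation used in the hidden layer
    return 1.0 / np.cosh(z)

rng = np.random.default_rng(2)
n_in, n_hidden, n_samples = 3, 20, 100

# Random complex inputs and a synthetic smooth complex-valued target
X = rng.standard_normal((n_samples, n_in)) + 1j * rng.standard_normal((n_samples, n_in))
t = np.exp(1j * X[:, 0].real)  # arbitrary target on the unit circle

# Input weights are fixed at random; only output weights are estimated
V = rng.standard_normal((n_in, n_hidden)) + 1j * rng.standard_normal((n_in, n_hidden))
H = csech(X @ V)  # hidden responses, shape (n_samples, n_hidden)

# Projection step: optimal output weights from one linear solve,
# instead of iterative nonlinear optimization
W, *_ = np.linalg.lstsq(H, t, rcond=None)
pred = H @ W
err = np.mean(np.abs(pred - t) ** 2)
print(err)
```

The single `lstsq` call is the whole "training" of the output layer, which is why the abstract can claim lower computational effort than gradient-trained complex networks.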
27
Babu GS, Suresh S. Sequential projection-based metacognitive learning in a radial basis function network for classification problems. IEEE Trans Neural Netw Learn Syst 2013; 24:194-206. [PMID: 24808275] [DOI: 10.1109/tnnls.2012.2226748]
Abstract
In this paper, we present a sequential projection-based metacognitive learning algorithm in a radial basis function network (PBL-McRBFN) for classification problems. The algorithm is inspired by human metacognitive learning principles and has two components: a cognitive component and a metacognitive component. The cognitive component is a single-hidden-layer radial basis function network with an evolving architecture. The metacognitive component controls the learning process in the cognitive component by choosing the best learning strategy for the current sample and adapts the learning strategies by implementing self-regulation. In addition, sample overlapping conditions and past knowledge of the samples, in the form of pseudosamples, are used for proper initialization of new hidden neurons to minimize misclassification. The parameter update strategy uses projection-based direct minimization of the hinge loss error. The interaction of the cognitive and metacognitive components efficiently addresses the what-to-learn, when-to-learn, and how-to-learn principles of human learning. The performance of the PBL-McRBFN is evaluated using a set of benchmark classification problems from the University of California, Irvine machine learning repository. The statistical performance evaluation on these problems demonstrates the superior performance of the PBL-McRBFN classifier over results reported in the literature. We also evaluate the proposed algorithm on a practical Alzheimer's disease detection problem. The results on the Open Access Series of Imaging Studies (OASIS) and Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets, which are obtained from different demographic regions, clearly show that the PBL-McRBFN can handle problems with a change in distribution.
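The metacognitive component's per-sample decision can be illustrated with a toy three-way rule on the prediction error. The thresholds and the single-error criterion here are placeholders; PBL-McRBFN's actual self-regulatory criteria are class-specific and also involve hinge-loss and novelty measures:

```python
def metacognitive_strategy(pred_error, delete_thr=0.05, learn_thr=0.3):
    """Pick a learning strategy for one sample from its prediction error.

    Illustrative thresholds only, not the paper's exact criteria:
    - small error  -> the sample carries no new knowledge, discard it
    - large error  -> significant novelty, use it to update the network
    - in between   -> reserve the sample and reconsider it later
    """
    if pred_error < delete_thr:
        return "delete"
    if pred_error > learn_thr:
        return "learn"
    return "reserve"

errors = [0.01, 0.5, 0.12, 0.9, 0.02]
decisions = [metacognitive_strategy(e) for e in errors]
print(decisions)  # ['delete', 'learn', 'reserve', 'learn', 'delete']
```

Routing samples this way is what lets a sequential learner skip redundant data (what-to-learn) and defer ambiguous data (when-to-learn) while keeping the network compact.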
28
Meta-cognitive RBF Network and its Projection Based Learning algorithm for classification problems. Appl Soft Comput 2013. [DOI: 10.1016/j.asoc.2012.08.047]
29
Subramanian K, Suresh S. Human action recognition using meta-cognitive neuro-fuzzy inference system. Int J Neural Syst 2012. [DOI: 10.1142/s0129065712500281]
Abstract
We propose a sequential meta-cognitive learning algorithm for a neuro-fuzzy inference system (McFIS) to efficiently recognize human actions from video sequences. Optical flow information between two consecutive image planes can represent actions hierarchically from the local pixel level to the global object level, and it is therefore used to describe the human action in the McFIS classifier. The McFIS classifier and its sequential learning algorithm are developed based on the principles of self-regulation observed in human meta-cognition. McFIS decides what to learn, when to learn, and how to learn based on the knowledge stored in the classifier and the information contained in the new training samples. The sequential learning algorithm of McFIS is controlled and monitored by the meta-cognitive components, which use class-specific, knowledge-based criteria along with self-regulatory thresholds to decide on one of the following strategies: (i) sample deletion, (ii) sample learning, and (iii) sample reserve. The performance of the proposed McFIS-based human action recognition system is evaluated using the benchmark Weizmann and KTH video sequences. The simulation results are compared with a well-known SVM classifier and with state-of-the-art action recognition results reported in the literature. The results clearly indicate that the McFIS action recognition system achieves better performance with minimal computational effort.
Affiliation(s)
- K. Subramanian: School of Computer Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798, Singapore
- S. Suresh: School of Computer Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798, Singapore
31
Savitha R, Suresh S, Sundararajan N. A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network. Neural Netw 2012; 32:209-218. [DOI: 10.1016/j.neunet.2012.02.015]
32
Venkatesh Babu R, Suresh S, Savitha R. Human action recognition using a fast learning fully complex-valued classifier. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2012.03.003]
33
Savitha R, Suresh S, Sundararajan N. Fast learning complex-valued classifiers for real-valued classification problems. Int J Mach Learn Cybern 2012. [DOI: 10.1007/s13042-012-0112-x]