1. Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10141-10153. PMID: 35436200. DOI: 10.1109/tnnls.2022.3164930
Abstract
The brain-inspired spiking neural networks (SNNs) hold the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed the theoretical advance and applications of SNNs. The majority of the existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to the presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over the existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, and it achieves better performance compared with its counterparts.
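The paper's exact spike-time derivatives through the membrane dynamics are beyond an abstract-level sketch, but the core idea of learning a synaptic delay by gradient descent on an output spike time can be caricatured with a one-synapse toy model (all function names and constants below are illustrative, not taken from the paper):

```python
def output_spike_time(t_pre, delay, rise=5.0):
    """Toy model: a single strong input spike arriving at t_pre + delay
    drives the neuron over threshold after a fixed rise time, so
    t_out = t_pre + delay + rise and d t_out / d delay = 1."""
    return t_pre + delay + rise

def learn_delay(t_pre, t_target, delay=0.0, lr=0.5, steps=60):
    """Gradient descent on the synaptic delay to place the output spike
    at a target time; the timing error shrinks by (1 - lr) per step."""
    for _ in range(steps):
        delay -= lr * (output_spike_time(t_pre, delay) - t_target)
    return delay

d = learn_delay(t_pre=2.0, t_target=12.0)   # learned delay converges to 5.0
```

In the paper this scalar derivative is replaced by exact derivatives of postsynaptic spike times with respect to presynaptic spike times, chained backward through the layers.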
2. Amiri M, Jafari AH, Makkiabadi B, Nazari S. A Novel Unsupervised Spatial–Temporal Learning Mechanism in a Bio-inspired Spiking Neural Network. Cognit Comput 2022. DOI: 10.1007/s12559-022-10097-1
3. Yang Y, Ren J, Duan F. The Spiking Rates Inspired Encoder and Decoder for Spiking Neural Networks: An Illustration of Hand Gesture Recognition. Cognit Comput 2022. DOI: 10.1007/s12559-022-10027-1
4. Nobukawa S, Nishimura H, Wagatsuma N, Ando S, Yamanishi T. Long-Tailed Characteristic of Spiking Pattern Alternation Induced by Log-Normal Excitatory Synaptic Distribution. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3525-3537. PMID: 32822305. DOI: 10.1109/tnnls.2020.3015208
Abstract
Studies of structural connectivity at the synaptic level show that in synaptic connections of the cerebral cortex, the excitatory postsynaptic potential (EPSP) in most synapses exhibits sub-mV values, while a small number of synapses exhibit large EPSPs (greater than approximately 1.0 mV). That is, the distribution of EPSPs fits a log-normal distribution. Beyond structural connectivity, skewed, long-tailed distributions have been widely observed in neural activities, such as in the occurrence of spiking rates and the size of synchronously spiking populations. Many studies have modeled this long-tailed distribution of neural activity; however, its causal factors remain controversial. This study focused on the long-tailed EPSP distributions and the interlateral synaptic connections primarily observed in cortical network structures, and constructed a spiking neural network consistent with these features. Specifically, we constructed two coupled modules of spiking neural networks with excitatory and inhibitory neural populations and a log-normal EPSP distribution. We evaluated the spiking activities for different input frequencies and with/without strong synaptic connections. These coupled modules exhibited intermittent intermodule alternation, given a moderate input frequency and the existence of strong synaptic and intermodule connections. Moreover, power analysis, multiscale entropy analysis, and surrogate data analysis revealed that the long-tailed EPSP distribution and intermodule connections enhanced the complexity of spiking activity at large temporal scales and induced nonlinear dynamics and neural activity that followed a long-tailed distribution.
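As a quick illustration of the log-normal EPSP statistics described above (the parameters here are illustrative, not the paper's fitted values), sampling such a distribution shows the characteristic heavy tail: most synapses are weak, a minority are strong, and the mean sits well above the median:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative log-normal parameters for EPSP amplitudes in mV
mu, sigma = -0.7, 1.0
epsp = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

median = np.exp(mu)                   # analytic median of a log-normal
mean = np.exp(mu + sigma**2 / 2)      # analytic mean, pulled up by the tail
frac_strong = (epsp > 1.0).mean()     # fraction of "strong" (> 1 mV) synapses
```

With these parameters roughly a quarter of synapses exceed 1 mV while the median EPSP stays well below it, the qualitative picture the abstract describes.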
5. Pregowska A. Signal Fluctuations and the Information Transmission Rates in Binary Communication Channels. Entropy 2021; 23:e23010092. PMID: 33435243. PMCID: PMC7826906. DOI: 10.3390/e23010092
Abstract
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied relations between the Information Transmission Rate (ITR) of spike trains and their correlations and frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuation, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between ITR and signal fluctuations strongly depends on the parameter s, a sum of transition probabilities from the no-spike state to the spike state. The ITR was estimated by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for s<1, the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for s>1, ITR/σ is bounded away from 0. It was also shown that the quotient of ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment (s>1), to obtain appropriate reliability and efficiency of transmission, IS with a higher tendency to transition from the no-spike to the spike state should be applied. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
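A minimal sketch of the quantities involved, assuming the standard entropy-rate formula for a two-state Markov chain; the paper's exact parameterization of s and its ITR estimates are not reproduced here:

```python
import math

def h(p):
    """Binary Shannon entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_rate(p01, p10):
    """Entropy rate (bits per time bin) of a two-state Markov chain with
    P(spike | no spike) = p01 and P(no spike | spike) = p10.
    Stationary distribution: pi1 = p01 / (p01 + p10)."""
    pi1 = p01 / (p01 + p10)
    return (1 - pi1) * h(p01) + pi1 * h(p10)

def spike_std(p01, p10):
    """Standard deviation of the stationary spike indicator
    (a Bernoulli variable with mean pi1)."""
    pi1 = p01 / (p01 + p10)
    return math.sqrt(pi1 * (1 - pi1))

# The quotient studied in the paper is of the form entropy_rate / spike_std
ratio = entropy_rate(0.3, 0.4) / spike_std(0.3, 0.4)
```

For the symmetric chain p01 = p10 = 0.5 the source reduces to a fair coin, giving the maximal entropy rate of 1 bit per bin, a useful sanity check.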
Affiliation(s)
- Agnieszka Pregowska: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
6. Rashvand P, Ahmadzadeh MR, Shayegh F. Design and Implementation of a Spiking Neural Network with Integrate-and-Fire Neuron Model for Pattern Recognition. Int J Neural Syst 2020; 31:2050073. PMID: 33353527. DOI: 10.1142/s0129065720500732
Abstract
In contrast to previous artificial neural networks (ANNs), spiking neural networks (SNNs) work based on temporal coding approaches. The design of the proposed SNN, including the number of neurons, the neuron model, the encoding method, and the learning algorithm, is described clearly. It is also argued that optimizing the SNN parameters based on physiology and maximizing the information they pass lead to a more robust network. In this paper, inspired by the "center-surround" structure of receptive fields in the retina and the amount of overlap they have, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network by a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy, with 60 input neurons, was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was used as input to the network, 600 input neurons were required, and the accuracy of the network was 90.5%. Next, 14 structural features were used as input; the number of input neurons therefore decreased to 210, and the accuracy increased to 95%, meaning that an SNN with fewer input neurons and good skill was implemented. The proposed SNN was also applied to the ABIDE1 dataset: of the 184 samples, 79 are from healthy individuals and 105 from people with autism. One characteristic that can differentiate these two classes is the entropy of the data, so Shannon entropy is used for feature extraction. Applying these values to the proposed SNN, an accuracy of 84.42% was achieved in only 120 iterations, which is a good result compared to recent results.
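The time-to-first-spike coding mentioned above can be sketched as a simple latency map in which stronger inputs fire earlier; the linear mapping below is an illustrative choice, not the paper's exact encoder:

```python
import numpy as np

def time_to_first_spike(intensity, t_max=10.0):
    """Latency (time-to-first-spike) encoding: a brighter pixel fires
    earlier. Intensities in [0, 1] map linearly onto spike times in
    [0, t_max]; intensity 1.0 fires immediately, intensity 0.0 fires
    at the latest possible time."""
    intensity = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - intensity)

times = time_to_first_spike([0.0, 0.5, 1.0])
```

Because all information sits in a single early spike per neuron, this code is what makes the network's fast convergence and low spike counts possible.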
Affiliation(s)
- Parvaneh Rashvand, Mohammad Reza Ahmadzadeh, Farzaneh Shayegh: Digital Signal Processing Research Lab, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
7. Deterministic characteristics of spontaneous activity detected by multi-fractal analysis in a spiking neural network with long-tailed distributions of synaptic weights. Cogn Neurodyn 2020; 14:829-836. PMID: 33101534. DOI: 10.1007/s11571-020-09605-6
Abstract
Cortical neural networks maintain autonomous electrical activity called spontaneous activity, which represents the brain's dynamic internal state even in the absence of sensory stimuli. The spatio-temporal complexity of spontaneous activity is strongly related to perceptual, learning, and cognitive brain functions; multi-fractal analysis can be utilized to evaluate its complexity. Recent studies have shown that the deterministic dynamic behavior of spontaneous activity especially reflects topological neural network characteristics and changes in neural network structures. However, it remains unclear whether multi-fractal analysis, recently widely utilized for neural activity, is effective for detecting the complexity of the deterministic dynamic process. To verify this point, we evaluated the multi-fractality of spontaneous activity in a spiking neural network with a log-normal distribution of excitatory postsynaptic potentials (EPSPs). We found that the spiking activities exhibited multi-fractal characteristics. Moreover, to investigate the presence of a deterministic process in the spiking activity, we conducted a surrogate data analysis on the time series of spiking activity. The results showed that the spontaneous spiking activity included deterministic dynamic behavior. Overall, the combination of multi-fractal analysis and surrogate data analysis can detect deterministic complex neural activity. The multi-fractal analysis of neural activity used in this study could be widely utilized for brain modeling and for evaluating signals obtained by neuroimaging modalities.
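A minimal sketch of the surrogate-data idea: a surrogate destroys the temporal structure of a time series while preserving its amplitude distribution, so any statistic that differs between data and surrogates points to deterministic temporal structure. A random-shuffle surrogate is shown for simplicity; the paper's specific surrogate method may differ:

```python
import numpy as np

def shuffle_surrogate(x, rng):
    """Random-shuffle surrogate: permutes the samples, destroying temporal
    correlations while exactly preserving the amplitude distribution."""
    s = np.array(x, copy=True)
    rng.shuffle(s)
    return s

rng = np.random.default_rng(1)
rates = np.arange(20.0)          # stand-in for a spike-rate time series
surr = shuffle_surrogate(rates, rng)
```

Comparing, say, a multi-fractal spectrum of `rates` against an ensemble of such surrogates is what lets the analysis reject the null hypothesis of a purely stochastic process.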
8. Yewale A, Methekar R, Agrawal S. Multiple model-based control of multi variable continuous microbial fuel cell (CMFC) using machine learning approaches. Comput Chem Eng 2020. DOI: 10.1016/j.compchemeng.2020.106884
9. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. PMID: 32146356. DOI: 10.1016/j.neunet.2020.02.011
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
Affiliation(s)
- Xiangwen Wang, Xianghong Lin, Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
10. Hao Y, Huang X, Dong M, Xu B. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule. Neural Netw 2019; 121:387-395. PMID: 31593843. DOI: 10.1016/j.neunet.2019.09.007
Abstract
Spiking neural networks (SNNs) possess energy-efficient potential due to event-based computation. However, supervised training of SNNs remains a challenge, as spike activities are non-differentiable. Previous SNN training methods can be generally categorized into two basic classes, i.e., backpropagation-like training methods and plasticity-based learning methods. The former methods depend on energy-inefficient real-valued computation and non-local transmission, as also required in artificial neural networks (ANNs), whereas the latter are either considered biologically implausible or exhibit poor performance. Hence, biologically plausible (bio-plausible) high-performance supervised learning (SL) methods for SNNs remain deficient. In this paper, we proposed a novel bio-plausible SNN model for SL based on the symmetric spike-timing dependent plasticity (sym-STDP) rule found in neuroscience. By combining the sym-STDP rule with bio-plausible synaptic scaling and intrinsic plasticity of the dynamic threshold, our SNN model implemented SL well and achieved good performance on the benchmark recognition task (MNIST dataset). To reveal the underlying mechanism of our SL model, we visualized both layer-based activities and synaptic weights using the t-distributed stochastic neighbor embedding (t-SNE) method after training and found that they were well clustered, thereby demonstrating excellent classification ability. Furthermore, to verify the robustness of our model, we trained it on another more realistic dataset (Fashion-MNIST), which also showed good performance. As the learning rules were bio-plausible and based purely on local spike events, our model could easily be applied to neuromorphic hardware for online training and may be helpful for understanding SL information processing at the synaptic level in biological neural systems.
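The defining feature of a symmetric STDP window is that the weight change depends only on the magnitude of the pre/post spike-time difference, not its sign, unlike the antisymmetric classical window. A sketch with illustrative constants (not the paper's fitted parameters):

```python
import math

def sym_stdp(dt, a_plus=1.0, a_minus=0.4, tau=20.0, tau_wide=40.0):
    """Symmetric STDP window: potentiation near coincidence (small |dt|),
    mild depression for large |dt|. Depends only on |dt|, so pre-before-post
    and post-before-pre spike pairs produce the same weight change."""
    adt = abs(dt)
    return a_plus * math.exp(-adt / tau) - a_minus * math.exp(-adt / tau_wide)
```

In the paper this local rule is combined with synaptic scaling and a dynamic firing threshold to realize supervised learning from spike events alone.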
Affiliation(s)
- Yunzhe Hao: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China
- Xuhui Huang: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
- Meng Dong: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
- Bo Xu: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, 100190 Beijing, China
11. Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity. Journal of Artificial Intelligence and Soft Computing Research 2019. DOI: 10.2478/jaiscr-2019-0009
Abstract
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been attracting attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. We therefore studied pattern classification in a network that combines a recurrent spiking neural network with STDP for unsupervised learning and an output layer trained by DA-STDP for supervised learning. We confirmed that this network could perform pattern classification, with STDP emphasizing features of the input spike pattern and DA-STDP providing the supervised learning. Our proposed spiking neural network may therefore prove to be a useful approach for machine learning problems.
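The division of labor described above, with STDP events charging a local eligibility trace and dopamine gating the actual weight change, can be sketched in a few lines; the discrete-time form and constants are illustrative, not the paper's model:

```python
def da_stdp_step(w, elig, stdp_event, dopamine, lr=0.05, decay=0.95):
    """One time step of dopamine-modulated STDP: STDP events accumulate in
    a decaying eligibility trace instead of changing the weight directly;
    the weight only moves when a dopamine (reward) signal arrives."""
    elig = decay * elig + stdp_event        # trace accumulates STDP events
    w = w + lr * dopamine * elig            # dopamine gates the update
    return w, elig

w, e = 0.5, 0.0
for _ in range(10):                          # STDP events but no reward:
    w, e = da_stdp_step(w, e, stdp_event=1.0, dopamine=0.0)
# the weight is unchanged, but the eligibility trace has accumulated
w_after, _ = da_stdp_step(w, e, stdp_event=0.0, dopamine=1.0)
```

This gating is what makes the rule reward-related: correlated spiking alone marks a synapse as eligible, and only a subsequent reward converts that eligibility into learning.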
12. Nazari S, Faez K. Spiking pattern recognition using informative signal of image and unsupervised biologically plausible learning. Neurocomputing 2019. DOI: 10.1016/j.neucom.2018.10.066