1
Bukh AV, Rybalova EV, Shepelev IA, Vadivasova TE. Classification of musical intervals by spiking neural networks: Perfect student in solfège classes. Chaos (Woodbury, N.Y.) 2024; 34:063102. PMID: 38829796. DOI: 10.1063/5.0210790.
Abstract
We investigate the spiking activity of a network of excitable FitzHugh-Nagumo neurons driven by constant two-frequency auditory signals. The neurons are supplemented with linear frequency filters and nonlinear input-signal converters. We show that the network can be configured to recognize a specific frequency ratio (musical interval) by selecting the parameters of the neurons, the input filters, and the coupling between neurons. A set of appropriately configured subnetworks with different topologies and coupling strengths can serve as a classifier for musical intervals. We find that the selective properties of the classifier stem from the specific topology of coupling between the neurons of the network.
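The excitable dynamics underlying this classifier can be reproduced in a few lines. Below is a minimal Euler-integration sketch of a single FitzHugh-Nagumo neuron under constant drive; the parameter values and the spike-detection threshold are illustrative choices, not those used in the paper.

```python
import numpy as np

def fitzhugh_nagumo(t_max=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08, I=0.5):
    """Euler integration of one FitzHugh-Nagumo neuron with constant input I:
    dv/dt = v - v^3/3 - w + I,   dw/dt = eps * (v + a - b*w)."""
    n = int(t_max / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = -1.0, -0.5
    for i in range(n - 1):
        v[i + 1] = v[i] + dt * (v[i] - v[i] ** 3 / 3.0 - w[i] + I)
        w[i + 1] = w[i] + dt * eps * (v[i] + a - b * w[i])
    return v, w

v, w = fitzhugh_nagumo()
# count upward crossings of v = 1.0 as spikes; with this drive the neuron fires tonically
n_spikes = int(np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0)))
```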
Affiliation(s)
- A V Bukh, Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- E V Rybalova, Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
- I A Shepelev, Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia; Almetyevsk State Petroleum Institute, 2 Lenin Street, Almetyevsk 423462, Russia
- T E Vadivasova, Institute of Physics, Saratov State University, 83 Astrakhanskaya Street, Saratov 410012, Russia
2
Sanchez-Garcia M, Chauhan T, Cottereau BR, Beyeler M. Efficient multi-scale representation of visual objects using a biologically plausible spike-latency code and winner-take-all inhibition. Biological Cybernetics 2023; 117:95-111. PMID: 37004546. DOI: 10.1007/s00422-023-00956-x.
Abstract
Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but they require large amounts of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and the biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to represent visual stimuli efficiently using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels before being fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as few as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain but also lead to novel and efficient artificial vision systems.
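As a rough illustration of the two mechanisms named in this abstract, the sketch below maps pixel intensity to first-spike latency (brighter pixels fire earlier) and applies a hard winner-take-all step that silences all but the earliest unit. The linear intensity-to-latency mapping and the all-or-nothing inhibition are simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def latency_encode(image, t_max=20.0):
    """Spike-latency code: normalized intensity x in [0, 1] fires at
    t = t_max * (1 - x), so the brightest pixel spikes first."""
    x = image.astype(float) / image.max()
    return t_max * (1.0 - x)

def wta_inhibit(latencies):
    """Hard winner-take-all inhibition: keep only the earliest spike,
    suppress every other unit (latency = inf means 'never fires')."""
    out = np.full(latencies.shape, np.inf)
    winner = int(np.argmin(latencies))
    out[winner] = latencies[winner]
    return out

lat = latency_encode(np.array([3, 9, 1, 6]))
active = wta_inhibit(lat)   # only the unit encoding intensity 9 survives
```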
Affiliation(s)
- Tushar Chauhan, The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, USA; CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- Benoit R Cottereau, CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France; IPAL, CNRS IRL 2955, Singapore, Singapore
- Michael Beyeler, Department of Computer Science, University of California, Santa Barbara, CA, USA; Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
3
Research Progress of spiking neural network in image classification: a review. Appl Intell 2023. DOI: 10.1007/s10489-023-04553-0.
4
Shirsavar SR, Vahabie AH, Dehaqani MRA. Models Developed for Spiking Neural Networks. MethodsX 2023; 10:102157. PMID: 37077894. PMCID: PMC10106956. DOI: 10.1016/j.mex.2023.102157.
Abstract
The emergence of deep neural networks (DNNs) has once again drawn enormous attention to artificial neural networks (ANNs). They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from the brain. Spiking neural networks (SNNs) have been around for a long time and have been investigated to understand the dynamics of the brain, but their application to real-world, complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks; owing to their energy efficiency and temporal dynamics, their future development holds much promise. In this work, we review the structures and performance of SNNs on image classification tasks. The comparisons illustrate that these networks show great capability on more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, are a potential alternative to the backpropagation algorithm used in DNNs.
- Different building blocks of spiking neural networks are explained in this work.
- Developed models for SNNs are introduced based on their characteristics and building blocks.
5
Cheng X, Zhang T, Jia S, Xu B. Meta neurons improve spiking neural networks for efficient spatio-temporal learning. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.02.029.
6
Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.02.026.
7
Zhou W, Wen S, Liu Y, Liu L, Liu X, Chen L. Forgetting memristor based STDP learning circuit for neural networks. Neural Netw 2023; 158:293-304. PMID: 36493532. DOI: 10.1016/j.neunet.2022.11.023.
Abstract
The circuit implementation of STDP based on memristors is of great significance for the application of neural networks. However, pure circuit implementations of STDP with forgetting memristors remain rare. This paper proposes a new STDP learning-rule circuit based on the forgetting memristor. Such forgetting memristive synapses give the neural network a time-division multiplexing capability, but the instability of short-term memory can degrade the network's learning ability. This paper analyzes and discusses the influence of synapses with long-term and short-term memory on the STDP learning characteristics of neural networks, laying a foundation for constructing time-division-multiplexed neural networks with long- and short-term memory synapses. Using this circuit, we find that the volatile memristor responds differently to the stimulus signal depending on its initial state, and the resulting LTP phenomenon better matches the forgetting effect in biology. The circuit has multiple adjustable parameters and can fit STDP learning rules under different conditions. A neural network application demonstrates the usability of the circuit.
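For reference, the pair-based STDP window that such circuits emulate can be written in a few lines. The amplitudes and time constants below are generic textbook values, not the fitted parameters of the memristor circuit.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses,
    both decaying exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

ltp = stdp_dw(10.0)    # potentiation: positive weight change
ltd = stdp_dw(-10.0)   # depression: negative weight change
```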
Affiliation(s)
- Wenhao Zhou, Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Shiping Wen, Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia
- Yi Liu, Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Lu Liu, Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China
- Xin Liu, Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland
- Ling Chen, Electronic Information and Engineering, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, 400715, China; Computer Vision and Pattern Recognition Laboratory, School of Engineering Science, Lappeenranta-Lahti University of Technology LUT, Finland
8
Amiri M, Jafari AH, Makkiabadi B, Nazari S. A Novel Unsupervised Spatial–Temporal Learning Mechanism in a Bio-inspired Spiking Neural Network. Cognit Comput 2022. DOI: 10.1007/s12559-022-10097-1.
9
Zhang B, Zhuge Y, Yin Z. Design and implementation of an EEG-based recognition mechanism for the openness trait of the Big Five. Front Neurosci 2022; 16:926256. PMID: 36161161. PMCID: PMC9490266. DOI: 10.3389/fnins.2022.926256.
Abstract
The differentiation between openness and the other dimensions of the Big Five personality model indicates that a specific paradigm is needed as a supplement to Big Five recognition. The present study examined the relationship between one's openness trait and the task-related power change of the upper alpha band (10–12 Hz). We found that individuals from the high-openness group displayed stronger alpha synchronization over the frontal area in a symbolic reasoning task, while the reverse applied in a deductive reasoning task. The results indicate that these two kinds of reasoning tasks could be used as a supplement to Big Five recognition. In addition, we divided openness scores into three levels and proposed a hybrid SNN (Spiking Neural Network)-ANN (Analog Neural Network) architecture based on EEGNet, named Spike-EEGNet, to recognize one's openness level. The recognition accuracy on the two tasks was 90.6% and 92.2%. This result validates the use of a hybrid SNN-ANN architecture for EEG-based openness trait recognition.
10
Yu Q, Song S, Ma C, Wei J, Chen S, Tan KC. Temporal Encoding and Multispike Learning Framework for Efficient Recognition of Visual Patterns. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:3387-3399. PMID: 33531306. DOI: 10.1109/tnnls.2021.3052804.
Abstract
Biological systems, built on parallel and spike-based computation, endow individuals with prompt and reliable responses to different stimuli. Spiking neural networks (SNNs) have thus been developed to emulate their efficiency and to explore principles of spike-based processing. However, the design of a biologically plausible and efficient SNN for image classification remains a challenging task. Previous efforts can generally be clustered into two major categories according to the coding scheme employed: rate-based and temporal. The rate-based schemes suffer from inefficiency, whereas the temporal ones typically end with relatively poor accuracy. It is therefore important to develop an SNN that considers both efficiency and efficacy. In this article, we focus on temporal approaches and advance their accuracy by a large margin while retaining their efficiency. A new temporal framework integrated with multispike learning is developed for efficient recognition of visual patterns. Different encoding and learning approaches under our framework are evaluated on the MNIST and Fashion-MNIST data sets. Experimental results demonstrate efficient and effective performance across a variety of conditions, improving accuracies to levels comparable to rate-based approaches but, importantly, with a lighter network structure and far fewer spikes. This article extends advanced multispike learning to the challenging task of image recognition and brings the state of the art in temporal approaches to a new level. The results could be favorable for low-power and high-speed applications in artificial intelligence and may attract more efforts toward brain-like computing.
11
Yang X, Lei Y, Wang M, Cai J, Wang M, Huan Z, Lin X. Evaluation of the Effect of the Dynamic Behavior and Topology Co-Learning of Neurons and Synapses on the Small-Sample Learning Ability of Spiking Neural Network. Brain Sci 2022; 12:139. PMID: 35203904. PMCID: PMC8870633. DOI: 10.3390/brainsci12020139.
Abstract
Small-sample learning is one of the most significant characteristics of the human brain, but its mechanism is yet to be fully unveiled. In recent years, brain-inspired artificial intelligence has become a very active research domain, with researchers exploring brain-inspired technologies and architectures to construct neural networks that achieve human-like intelligence. In this work, we evaluate the effect of the dynamic behavior and topology co-learning of neurons and synapses on the small-sample learning ability of spiking neural networks. Results show that the co-learning mechanism presented here significantly reduces the number of required samples while maintaining reasonable performance on the MNIST data set, resulting in a very lightweight neural network structure.
Affiliation(s)
- Xu Yang: Correspondence; Tel.: +86-010-6891-3467
12
Gangopadhyay A, Chakrabartty S. A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons. Front Neurosci 2021; 15:715451. PMID: 34393719. PMCID: PMC8355563. DOI: 10.3389/fnins.2021.715451.
Abstract
Growth-transform (GT) neurons and their population models allow independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible, distributed energy functional involving continuous-valued neural variables. In this paper we describe a backpropagation-less learning approach to train a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure. We show how the approach can be used for unsupervised and supervised learning such that minimizing the training error is equivalent to minimizing the overall spiking activity across the network. We then build on this framework to implement three multi-layer spiking network architectures with progressively increasing flexibility in training and, consequently, sparsity. We demonstrate the applicability of the proposed algorithm for resource-efficient learning using a publicly available machine olfaction dataset with unique challenges such as sensor drift and a wide range of stimulus concentrations. In all of these case studies we show that a GT network trained with the proposed approach minimizes the network-level spiking activity while producing classification accuracies comparable to standard approaches on the same dataset.
Affiliation(s)
- Shantanu Chakrabartty, Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, United States
13
Pattern Recognition of Spiking Neural Networks Based on Visual Mechanism and Supervised Synaptic Learning. Neural Plast 2020; 2020:8851351. PMID: 33193755. PMCID: PMC7641668. DOI: 10.1155/2020/8851351.
Abstract
Electrophysiological studies have shown that the mammalian primary visual cortex is selective for the orientations of visual stimuli. Inspired by this mechanism, we propose a hierarchical spiking neural network (SNN) for image classification. Grayscale input images are fed through a feed-forward network of orientation-selective neurons, whose outputs are then projected to a layer of downstream classifier neurons through the spike-based supervised tempotron learning rule. Based on the orientation-selective mechanism of the visual cortex and the tempotron learning rule, the network can effectively classify images from the extensively studied MNIST database of handwritten digits, achieving 96% classification accuracy with only 2000 training samples (the standard training set contains 60,000). Compared with other classification methods, our model not only preserves biological plausibility and classification accuracy but also significantly reduces the number of training samples needed. Given that the most commonly used deep learning networks need big data samples and high power consumption for image recognition, this brain-inspired model, based on the layer-by-layer hierarchical image-processing mechanism of the visual cortex, may provide a basis for the wide application of spiking neural networks in intelligent computing.
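A minimal sketch of a tempotron-style update, assuming the standard double-exponential PSP kernel and a single error-driven weight change at the time of peak membrane potential (a simplification of the full rule used in the paper; the kernel constants and learning rate are illustrative):

```python
import numpy as np

def psp(t, tau=10.0, tau_s=2.5):
    """Double-exponential postsynaptic potential kernel; zero for t <= 0."""
    t = np.maximum(t, 0.0)
    return np.exp(-t / tau) - np.exp(-t / tau_s)

def tempotron_step(w, spike_times, label, v_th=1.0, lr=0.1, t_end=50.0):
    """One tempotron update: if the fire/no-fire decision disagrees with the
    binary label, shift each weight by the PSP its afferent contributed at
    the time of maximum membrane potential."""
    ts = np.arange(0.0, t_end, 0.5)
    v = sum(w[i] * psp(ts - t_i) for i, t_i in enumerate(spike_times))
    k = int(np.argmax(v))
    fired = bool(v[k] >= v_th)
    if fired != bool(label):
        sign = 1.0 if label else -1.0
        for i, t_i in enumerate(spike_times):
            w[i] += sign * lr * psp(ts[k] - t_i)
    return w, fired

# target class 1 but the neuron stays silent -> both weights are potentiated
w, fired = tempotron_step(np.full(2, 0.1), [5.0, 10.0], label=1)
```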
14
Research on learning mechanism designing for equilibrated bipolar spiking neural networks. Artif Intell Rev 2020. DOI: 10.1007/s10462-020-09818-5.
15
Chen R, Li L. Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation. Neural Comput 2020; 32:2557-2600. PMID: 32946710. DOI: 10.1162/neco_a_01319.
Abstract
Spiking neural networks (SNNs) with the event-driven manner of transmitting spikes consume ultra-low power on neuromorphic chips. However, training deep SNNs is still challenging compared to convolutional neural networks (CNNs). The SNN training algorithms have not achieved the same performance as CNNs. In this letter, we aim to understand the intrinsic limitations of SNN training to design better algorithms. First, the pros and cons of typical SNN training algorithms are analyzed. Then it is found that the spatiotemporal backpropagation algorithm (STBP) has potential in training deep SNNs due to its simplicity and fast convergence. Later, the main bottlenecks of the STBP algorithm are analyzed, and three conditions for training deep SNNs with the STBP algorithm are derived. By analyzing the connection between CNNs and SNNs, we propose a weight initialization algorithm to satisfy the three conditions. Moreover, we propose an error minimization method and a modified loss function to further improve the training performance. Experimental results show that the proposed method achieves 91.53% accuracy on the CIFAR10 data set with 1% accuracy increase over the STBP algorithm and decreases the training epochs on the MNIST data set to 15 epochs (over 13 times speed-up compared to the STBP algorithm). The proposed method also decreases classification latency by over 25 times compared to the CNN-SNN conversion algorithms. In addition, the proposed method works robustly for very deep SNNs, while the STBP algorithm fails in a 19-layer SNN.
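The core trick in STBP-style training mentioned above is to backpropagate through the non-differentiable spike with a surrogate derivative. The sketch below uses a rectangular surrogate, one of several common choices in the literature; the width parameter is an illustrative value.

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: Heaviside step on membrane potential,
    non-differentiable at v = v_th."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, width=1.0):
    """Backward pass: rectangular surrogate for d(spike)/dv, a box of the
    given width centered on the threshold (replacing the true derivative,
    which is zero almost everywhere)."""
    return (np.abs(v - v_th) < width / 2).astype(float) / width

v = np.array([0.2, 0.9, 1.1, 3.0])
s = spike(v)             # spikes only where v >= 1.0
g = surrogate_grad(v)    # gradient flows only near the threshold
```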
Affiliation(s)
- Ruizhi Chen, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China, and University of Chinese Academy of Sciences, Beijing 100049, China
- Ling Li, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China, and University of Chinese Academy of Sciences, Beijing 100049, China
16
Liu D, Bellotto N, Yue S. Deep Spiking Neural Network for Video-Based Disguise Face Recognition Based on Dynamic Facial Movements. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:1843-1855. PMID: 31329135. DOI: 10.1109/tnnls.2019.2927274.
Abstract
With the increasing popularity of social media and smart devices, the face as one of the key biometrics becomes vital for person identification. Among those face recognition algorithms, video-based face recognition methods could make use of both temporal and spatial information just as humans do to achieve better classification performance. However, they cannot identify individuals when certain key facial areas, such as eyes or nose, are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this paper. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as the sole input. An event-driven continuous spike-timing-dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. The experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well, i.e., it achieves from 95% to 100% correct classification rates under various realistic experimental scenarios.
17
Zhong H, Wang R. Neural mechanism of visual information degradation from retina to V1 area. Cogn Neurodyn 2020; 15:299-313. PMID: 33854646. DOI: 10.1007/s11571-020-09599-1.
Abstract
The information-processing mechanism of the visual nervous system is an unresolved scientific problem that has long puzzled neuroscientists. The amount of visual information is significantly degraded by the time it reaches V1 after entering the retina; nevertheless, this does not affect our visual perception of the outside world. The mechanisms of this degradation from retina to V1 are still unclear. The current study used the experimental data summarized by Marcus E. Raichle to investigate the neural mechanisms underlying the degradation of the large amount of data in the topological mapping from retina to V1, starting from a photoreceptor model. The results showed that the edge features of visual information were extracted by a convolution operation, reflecting the function of synaptic plasticity, as visual signals were hierarchically processed from low to high levels. Visual processing was characterized by information degradation, and this compensatory mechanism embodied the principles of energy minimization and maximization of transmission efficiency in brain activity, matching the experimental data summarized by Raichle. Our results further the understanding of the information-processing mechanism of the visual nervous system.
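The edge-extraction step described here is an ordinary 2-D convolution. Below is a minimal sketch with a Laplacian kernel, an illustrative choice rather than the paper's synaptically learned kernels:

```python
import numpy as np

def convolve2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D convolution (cross-correlation convention)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
img = np.zeros((5, 5))
img[:, 2:] = 1.0                            # vertical step edge
edges = convolve2d_valid(img, laplacian)    # responds only near the edge
```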
Affiliation(s)
- Haixin Zhong, Institute for Cognitive Neurodynamics, East China University of Science and Technology, 130 Meilong Road, Shanghai, People's Republic of China
- Rubin Wang, Institute for Cognitive Neurodynamics, East China University of Science and Technology, 130 Meilong Road, Shanghai, People's Republic of China
18
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. PMID: 32146356. DOI: 10.1016/j.neunet.2020.02.011.
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
Affiliation(s)
- Xiangwen Wang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
- Xianghong Lin, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
- Xiaochao Dang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
19
Xu Q, Peng J, Shen J, Tang H, Pan G. Deep CovDenseSNN: A hierarchical event-driven dynamic framework with spiking neurons in noisy environment. Neural Netw 2020; 121:512-519. DOI: 10.1016/j.neunet.2019.08.034.
20
Hao Y, Huang X, Dong M, Xu B. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule. Neural Netw 2019; 121:387-395. PMID: 31593843. DOI: 10.1016/j.neunet.2019.09.007.
Abstract
Spiking neural networks (SNNs) possess energy-efficient potential due to event-based computation. However, supervised training of SNNs remains a challenge because spike activities are non-differentiable. Previous SNN training methods can generally be categorized into two basic classes, i.e., backpropagation-like training methods and plasticity-based learning methods. The former depend on energy-inefficient real-valued computation and non-local transmission, as also required in artificial neural networks (ANNs), whereas the latter are either considered biologically implausible or exhibit poor performance. Hence, biologically plausible (bio-plausible), high-performance supervised learning (SL) methods for SNNs are still lacking. In this paper, we propose a novel bio-plausible SNN model for SL based on the symmetric spike-timing-dependent plasticity (sym-STDP) rule found in neuroscience. By combining the sym-STDP rule with bio-plausible synaptic scaling and intrinsic plasticity of the dynamic threshold, our SNN model implements SL well and achieves good performance on the benchmark recognition task (MNIST dataset). To reveal the underlying mechanism of our SL model, we visualized both layer-based activities and synaptic weights using the t-distributed stochastic neighbor embedding (t-SNE) method after training and found that they were well clustered, demonstrating excellent classification ability. Furthermore, to verify the robustness of our model, we trained it on another, more realistic dataset (Fashion-MNIST), on which it also showed good performance. As the learning rules are bio-plausible and based purely on local spike events, our model could easily be applied to neuromorphic hardware for online training and may be helpful for understanding SL information processing at the synaptic level in biological neural systems.
Collapse
Affiliation(s)
- Yunzhe Hao
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China
| | - Xuhui Huang
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China.
| | - Meng Dong
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
| | - Bo Xu
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China; University of Chinese Academy of Sciences, 100049 Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, 100190 Beijing, China.
| |
Collapse
|
21
|
Abstract
This article introduces the Turn-Taking Spiking Neural Network (TTSNet), which is a cognitive model to perform early turn-taking prediction about a human or agent’s intentions. The TTSNet framework relies on implicit and explicit multimodal communication cues (physical, neurological and physiological) to be able to predict when the turn-taking event will occur in a robust and unambiguous fashion. To test the theories proposed, the TTSNet framework was implemented on an assistant robotic nurse, which predicts surgeon’s turn-taking intentions and delivers surgical instruments accordingly. Experiments were conducted to evaluate TTSNet’s performance in early turn-taking prediction. It was found to reach an [Formula: see text] score of 0.683 given 10% of completed action, and an [Formula: see text] score of 0.852 at 50% and 0.894 at 100% of the completed action. This performance outperformed multiple state-of-the-art algorithms, and surpassed human performance when limited partial observation is given (<40%). Such early turn-taking prediction capability would allow robots to perform collaborative actions proactively, in order to facilitate collaboration and increase team efficiency.
Collapse
Affiliation(s)
- Tian Zhou
- Industrial Engineering, Purdue University, USA
| | | |
Collapse
|
22
|
Abstract
In this paper, we present an electrical circuit of a leaky integrate-and-fire neuron with one VO2 switch, which models the properties of biological neurons. Based on VO2 neurons, a two-layer spiking neural network consisting of nine input and three output neurons is modeled in the SPICE simulator. The network contains excitatory and inhibitory couplings, and implements the winner-takes-all principle in pattern recognition. Using a supervised Spike-Timing-Dependent Plasticity training method and a timing method of information coding, the network was trained to recognize three patterns with dimensions of 3 × 3 pixels. The neural network is able to recognize up to 10^5 images per second, and has the potential to increase the recognition speed further.
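The leaky integrate-and-fire dynamics underlying such a circuit can be sketched in a few lines; this is the textbook discrete-time LIF model, not the paper's VO2 circuit equations, and all parameter values are illustrative:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r=1.0):
    """Discrete-time leaky integrate-and-fire neuron: the membrane
    potential leaks toward rest, integrates input, and emits a spike
    (then resets) whenever it crosses the threshold."""
    v = v_rest
    spike_steps = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:
            spike_steps.append(step)
            v = v_reset
    return spike_steps

# A constant supra-threshold current produces regular spiking.
spike_steps = simulate_lif([1.5] * 200)
```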
Collapse
|
23
|
Liu D, Yue S. Event-Driven Continuous STDP Learning With Deep Structure for Visual Pattern Recognition. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:1377-1390. [PMID: 29994790 DOI: 10.1109/tcyb.2018.2801476] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Human beings can achieve reliable and fast visual pattern recognition with limited time and learning samples. Underlying this capability, the ventral stream plays an important role in object representation and form recognition. Modeling the ventral stream may shed light on further understanding the visual brain in humans and on building artificial vision systems for pattern recognition. Current methods to model the mechanism of the ventral stream are far from exhibiting fast, continuous, and event-driven learning like the human brain. To create a visual system similar to the human ventral stream with fast learning capability, in this paper we propose a new spiking neural system with an event-driven continuous spike timing dependent plasticity (STDP) learning method using specific spiking timing sequences. Two novel continuous input mechanisms have been used to obtain the continuous input spiking pattern sequence. With the event-driven STDP learning rule, the proposed learning procedure is activated whenever the neuron receives a pre- or post-synaptic spike event. The experimental results on the MNIST database show that the proposed method outperforms all other methods in fast learning scenarios and most of the current models in exhaustive learning experiments.
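Event-driven STDP is commonly implemented with exponentially decaying pre- and post-synaptic traces that are only updated when a spike event arrives; the sketch below shows that standard trace formulation (the paper's specific continuous-input mechanisms are not reproduced, and all constants are illustrative):

```python
import math

class EventDrivenSTDP:
    """Pair-based STDP with exponentially decaying traces; state is
    only touched at spike events (event-driven), never per time step."""
    def __init__(self, a_plus=0.01, a_minus=0.012, tau=20.0):
        self.a_plus, self.a_minus, self.tau = a_plus, a_minus, tau
        self.x_pre = 0.0   # presynaptic trace
        self.x_post = 0.0  # postsynaptic trace
        self.t_last = 0.0  # time of the previous event

    def _decay_to(self, t):
        d = math.exp(-(t - self.t_last) / self.tau)
        self.x_pre *= d
        self.x_post *= d
        self.t_last = t

    def on_pre_spike(self, t):
        """Pre spike: depression proportional to the post trace."""
        self._decay_to(t)
        self.x_pre += 1.0
        return -self.a_minus * self.x_post

    def on_post_spike(self, t):
        """Post spike: potentiation proportional to the pre trace."""
        self._decay_to(t)
        self.x_post += 1.0
        return self.a_plus * self.x_pre

stdp = EventDrivenSTDP()
stdp.on_pre_spike(0.0)             # pre spike at t=0
dw_ltp = stdp.on_post_spike(5.0)   # post at t=5: pre-before-post, LTP
dw_ltd = stdp.on_pre_spike(10.0)   # pre at t=10: post-before-pre, LTD
```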
Collapse
|
24
|
Wang X, Lin X, Dang X. A Delay Learning Algorithm Based on Spike Train Kernels for Spiking Neurons. Front Neurosci 2019; 13:252. [PMID: 30971877 PMCID: PMC6445871 DOI: 10.3389/fnins.2019.00252] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 03/04/2018] [Indexed: 11/13/2022] Open
Abstract
Neuroscience research confirms that synaptic delays are not constant but can be modulated. This paper proposes a supervised delay learning algorithm for spiking neurons with temporal encoding, in which both the weight and delay of a synaptic connection can be adjusted to enhance learning performance. The proposed algorithm first defines spike train kernels that transform discrete spike trains during the learning phase into continuous analog signals so that common mathematical operations can be performed on them, and then derives supervised learning rules for synaptic weights and delays by the gradient descent method. The proposed algorithm is successfully applied to various spike train learning tasks, and the effects of the synaptic delay parameters are analyzed in detail. Experimental results show that the network with dynamic delays achieves higher learning accuracy and requires fewer learning epochs than the network with static delays. The delay learning algorithm is further validated on a practical example of an image classification problem. The results again show that it can achieve good classification performance with a proper receptive field. Therefore, synaptic delay learning is significant for practical applications and theoretical research on spiking neural networks.
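The core trick — turning discrete spike trains into objects that support inner products and distances — can be sketched with a generic kernel; the Laplacian kernel and the tau value below are assumptions, not necessarily the paper's exact choice:

```python
import math

def spike_train_inner_product(s1, s2, tau=5.0):
    """Inner product of two spike trains under a Laplacian kernel,
    turning lists of discrete spike times into a continuous
    similarity measure (illustrative kernel choice)."""
    return sum(math.exp(-abs(t1 - t2) / tau) for t1 in s1 for t2 in s2)

def spike_train_distance(s1, s2, tau=5.0):
    """Kernel-induced squared distance between spike trains."""
    return (spike_train_inner_product(s1, s1, tau)
            - 2 * spike_train_inner_product(s1, s2, tau)
            + spike_train_inner_product(s2, s2, tau))

d_same = spike_train_distance([10.0, 20.0], [10.0, 20.0])
d_diff = spike_train_distance([10.0, 20.0], [12.0, 25.0])
```

A gradient-descent rule for weights and delays can then differentiate such kernel expressions with respect to each spike time.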
Collapse
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
| | - Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
| | - Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
| |
Collapse
|
25
|
Nazari S, Faez K. Novel systematic mathematical computation based on the spiking frequency gate (SFG): Innovative organization of spiking computer. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2018.09.059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
26
|
Nazari S, Faez K. Spiking pattern recognition using informative signal of image and unsupervised biologically plausible learning. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.066] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
27
|
Guo S, Yu Z, Deng F, Hu X, Chen F. Hierarchical Bayesian Inference and Learning in Spiking Neural Networks. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:133-145. [PMID: 29990165 DOI: 10.1109/tcyb.2017.2768554] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Numerous experimental data from neuroscience and psychological science suggest that the human brain utilizes Bayesian principles to deal with the complex environment. Furthermore, hierarchical Bayesian inference has been proposed as an appropriate theoretical framework for modeling cortical processing. However, it remains unknown how such a computation is organized in a network of biologically plausible spiking neurons. In this paper, we propose a hierarchical network of winner-take-all circuits that can carry out hierarchical Bayesian inference and learning through a spike-based variational expectation maximization (EM) algorithm. In particular, we show how the firing activities of spiking neurons in response to input stimuli and the spike-timing-dependent plasticity rule can be understood, respectively, as the variational E-step and M-step of variational EM. Finally, we demonstrate the utility of this spiking neural network on the MNIST benchmark for unsupervised classification of handwritten digits.
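In spike-based variational-EM models of this kind, the firing probabilities of a winner-take-all circuit typically form a softmax posterior over hidden causes; a minimal sketch of that E-step (the network architecture and all numbers are illustrative, not taken from the paper):

```python
import math

def wta_posterior(inputs, weights, biases):
    """Soft winner-take-all: the K output neurons' firing probabilities
    form a softmax posterior over K hidden causes (the E-step)."""
    u = [b + sum(w_i * x_i for w_i, x_i in zip(w, inputs))
         for w, b in zip(weights, biases)]
    m = max(u)                         # stabilize the exponentials
    e = [math.exp(ui - m) for ui in u]
    z = sum(e)
    return [ei / z for ei in e]

# Two hidden causes, three binary inputs (all numbers illustrative).
weights = [[2.0, 2.0, -1.0], [-1.0, -1.0, 2.0]]
biases = [0.0, 0.0]
posterior = wta_posterior([1, 1, 0], weights, biases)
```

The M-step then corresponds to STDP-driven weight updates toward the expected statistics under this posterior.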
Collapse
|
28
|
Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T, Maida A. Deep learning in spiking neural networks. Neural Netw 2018; 111:47-63. [PMID: 30682710 DOI: 10.1016/j.neunet.2018.12.002] [Citation(s) in RCA: 225] [Impact Index Per Article: 37.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Revised: 12/02/2018] [Accepted: 12/03/2018] [Indexed: 12/14/2022]
Abstract
In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
Collapse
Affiliation(s)
- Amirhossein Tavanaei
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA.
| | - Masoud Ghodrati
- Department of Physiology, Monash University, Clayton, VIC, Australia
| | - Saeed Reza Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
| | | | - Anthony Maida
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
| |
Collapse
|
29
|
Mozafari M, Kheradpisheh SR, Masquelier T, Nowzari-Dalini A, Ganjtabesh M. First-Spike-Based Visual Categorization Using Reward-Modulated STDP. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:6178-6190. [PMID: 29993898 DOI: 10.1109/tnnls.2018.2826721] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European champion at the game of Go. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. If this assumption was correct, the neuron was rewarded, i.e., spike-timing-dependent plasticity (STDP) was applied, which reinforced the neuron's selectivity. Otherwise, anti-STDP was applied, which encouraged the neuron to learn something else. As demonstrated on various image data sets (Caltech, ETH-80, and NORB), this reward-modulated STDP (R-STDP) approach extracts particularly discriminative visual features, whereas classic unsupervised STDP extracts any feature that consistently repeats. As a result, R-STDP outperformed STDP on these data sets. Furthermore, R-STDP is suitable for online learning and can adapt to drastic changes such as label permutations. Finally, it is worth mentioning that both feature extraction and classification were done with spikes, using at most one spike per neuron. Thus, the network is hardware friendly and energy efficient.
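The reward gating described above can be sketched as an STDP-derived eligibility trace multiplied by a scalar reward, so the same timing event potentiates when rewarded and depresses when punished (the eligibility values and learning rate are illustrative):

```python
def r_stdp_update(eligibility, reward, lr=0.05):
    """Reward-modulated STDP sketch: the per-synapse STDP eligibility
    is gated by a scalar reward (+1 when the first-spiking neuron was
    correct, -1 otherwise), flipping STDP into anti-STDP."""
    return [lr * reward * e for e in eligibility]

elig = [0.8, -0.2, 0.5]                       # per-synapse eligibility
dw_rewarded = r_stdp_update(elig, reward=+1)  # STDP applied
dw_punished = r_stdp_update(elig, reward=-1)  # anti-STDP applied
```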
Collapse
|
30
|
|
31
|
Hwu T, Wang AY, Oros N, Krichmar JL. Adaptive Robot Path Planning Using a Spiking Neuron Algorithm With Axonal Delays. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2655539] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
32
|
Ferré P, Mamalet F, Thorpe SJ. Unsupervised Feature Learning With Winner-Takes-All Based STDP. Front Comput Neurosci 2018; 12:24. [PMID: 29674961 PMCID: PMC5895733 DOI: 10.3389/fncom.2018.00024] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 03/20/2018] [Indexed: 11/24/2022] Open
Abstract
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent Plasticity (STDP) biological learning rule. We show the equivalence between rank-order-coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next, we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework, which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods.
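Rank-order coding keeps only the firing order of the inputs (stronger inputs fire earlier), which is what makes the single feed-forward pass possible; a minimal sketch with made-up pixel values:

```python
def rank_order_code(pixels):
    """Rank-order coding: stronger inputs fire earlier; only the firing
    order is kept, the exact intensities are discarded."""
    order = sorted(range(len(pixels)), key=lambda i: -pixels[i])
    ranks = [0] * len(pixels)
    for rank, idx in enumerate(order):
        ranks[idx] = rank              # rank 0 = earliest spike
    return ranks

ranks = rank_order_code([0.1, 0.9, 0.5, 0.7])  # illustrative pixels
```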
Collapse
Affiliation(s)
- Paul Ferré
- Centre National de la Recherche Scientifique, UMR-5549, Toulouse, France.,Brainchip SAS, Balma, France
| | | | - Simon J Thorpe
- Centre National de la Recherche Scientifique, UMR-5549, Toulouse, France
| |
Collapse
|
33
|
Kheradpisheh SR, Ganjtabesh M, Thorpe SJ, Masquelier T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw 2018; 99:56-67. [PMID: 29328958 DOI: 10.1016/j.neunet.2017.12.005] [Citation(s) in RCA: 185] [Impact Index Per Article: 30.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2017] [Revised: 11/23/2017] [Accepted: 12/08/2017] [Indexed: 11/25/2022]
|
34
|
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout. Neural Netw 2018; 99:134-147. [PMID: 29414535 DOI: 10.1016/j.neunet.2017.12.015] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Revised: 12/08/2017] [Accepted: 12/26/2017] [Indexed: 01/28/2023]
Abstract
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike train and using this to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy, yet significantly low energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices.
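The unsupervised readout clusters liquid-state responses with Fuzzy c-Means; the standard FCM membership formula for a single scalar feature is sketched below (the actual feature vectors, centers, and fuzzifier used in the paper are not reproduced):

```python
def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of one scalar sample x in each cluster:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)), so memberships sum to 1.
    m is the fuzzifier (m -> 1 approaches hard clustering)."""
    d = [abs(x - c) for c in centers]
    if 0.0 in d:                       # sample coincides with a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[k]) ** p for k in range(len(centers)))
            for i in range(len(centers))]

u = fcm_memberships(1.0, centers=[0.0, 3.0])  # illustrative numbers
```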
Collapse
|
35
|
Lin Z, Ma D, Meng J, Chen L. Relative ordering learning in spiking neural network for pattern recognition. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.05.009] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
36
|
Matsubara T. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns. Front Comput Neurosci 2017; 11:104. [PMID: 29209191 PMCID: PMC5702355 DOI: 10.3389/fncom.2017.00104] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Accepted: 11/02/2017] [Indexed: 12/15/2022] Open
Abstract
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.
Collapse
Affiliation(s)
- Takashi Matsubara
- Computational Intelligence, Fundamentals of Computational Science, Department of Computational Science, Graduate School of System Informatics, Kobe University, Hyogo, Japan
| |
Collapse
|
37
|
Samadi A, Lillicrap TP, Tweed DB. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. Neural Comput 2017; 29:578-602. [PMID: 28095195 DOI: 10.1162/neco_a_00929] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
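The feedback-alignment variation mentioned above replaces the transpose of the learned forward weights with a fixed random feedback matrix when propagating the error; a minimal sketch with a hard-coded example matrix (all numbers are illustrative):

```python
def feedback_alignment_delta(error, fixed_b, hidden_deriv):
    """Hidden-layer error signal computed by sending the output error
    backward through a FIXED random matrix B instead of the transpose
    of the learned forward weights (feedback alignment)."""
    return [dh * sum(b_jk * e_k for b_jk, e_k in zip(row, error))
            for row, dh in zip(fixed_b, hidden_deriv)]

# B is drawn randomly once at initialization and never updated (the
# key idea); hard-coded here so the example is reproducible.
B = [[0.3, -0.7], [0.9, 0.1], [-0.5, 0.4]]
delta_hidden = feedback_alignment_delta(
    error=[0.5, -0.1],               # output-layer error (illustrative)
    fixed_b=B,
    hidden_deriv=[1.0, 0.2, 0.8])    # activation derivatives (illustrative)
```

Because B never changes, the forward weights gradually align themselves with B during training, which is what makes the random feedback pathway usable.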
Collapse
Affiliation(s)
- Arash Samadi
- Department of Physiology, University of Toronto, Toronto, Ontario, M5S 1A8, Canada
| | | | - Douglas B Tweed
- Department of Physiology, University of Toronto, Toronto, Ontario, M5S 1A8, Canada, and Centre for Vision Research, York University, Toronto, Ontario, M3J 1PC, Canada
| |
Collapse
|
38
|
Liu Q, Pineda-García G, Stromatias E, Serrano-Gotarredona T, Furber SB. Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation. Front Neurosci 2016; 10:496. [PMID: 27853419 PMCID: PMC5090001 DOI: 10.3389/fnins.2016.00496] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2016] [Accepted: 10/17/2016] [Indexed: 11/13/2022] Open
Abstract
Today, increasing attention is being paid to research into spike-based neural computation, both to gain a better understanding of the brain and to explore biologically-inspired computation. Within this field, the primate visual pathway and its hierarchical organization have been extensively studied. Spiking Neural Networks (SNNs), inspired by the understanding of observed biological structure and function, have been successfully applied to visual recognition and classification tasks. In addition, implementations on neuromorphic hardware have enabled large-scale networks to run in (or even faster than) real time, making spike-based neural vision processing accessible on mobile robots. Neuromorphic sensors such as silicon retinas are able to feed such mobile systems with real-time visual stimuli. A new set of vision benchmarks for spike-based neural processing is now needed to measure progress quantitatively within this rapidly advancing field. We propose that a large dataset of spike-based visual stimuli is needed to provide meaningful comparisons between different systems, and a corresponding evaluation methodology is also required to measure the performance of SNN models and their hardware implementations. In this paper we first propose an initial NE (Neuromorphic Engineering) dataset based on standard computer vision benchmarks and that uses digits from the MNIST database. This dataset is compatible with the state of current research on spike-based image recognition. The corresponding spike trains are produced using a range of techniques: rate-based Poisson spike generation, rank order encoding, and recorded output from a silicon retina with both flashing and oscillating input stimuli. In addition, a complementary evaluation methodology is presented to assess both model-level and hardware-level performance.
Finally, we demonstrate the use of the dataset and the evaluation methodology using two SNN models to validate the performance of the models and their hardware implementations. With this dataset we hope to (1) promote meaningful comparison between algorithms in the field of neural computation, (2) allow comparison with conventional image recognition methods, (3) provide an assessment of the state of the art in spike-based visual recognition, and (4) help researchers identify future directions and advance the field.
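The simplest of the spike-train generation techniques listed above, rate-based Poisson encoding, can be sketched as an independent Bernoulli draw per time step (the rate mapping, time step, and seed here are assumptions for illustration):

```python
import random

def poisson_spike_train(rate_hz, duration_ms, dt_ms=1.0, seed=42):
    """Rate-based Poisson encoding: in each time step a spike occurs
    independently with probability rate*dt (valid for rate*dt << 1)."""
    rng = random.Random(seed)
    p = rate_hz * dt_ms / 1000.0
    return [t for t in range(int(duration_ms / dt_ms))
            if rng.random() < p]

# Encode one input (e.g., a pixel intensity mapped to 100 Hz) over 1 s;
# the expected spike count is rate * duration = 100.
spikes = poisson_spike_train(rate_hz=100.0, duration_ms=1000.0)
```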
Collapse
Affiliation(s)
- Qian Liu
- Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
| | - Garibaldi Pineda-García
- Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
| | | | | | - Steve B. Furber
- Advanced Processor Technologies Research Group, School of Computer Science, University of Manchester, Manchester, UK
| |
Collapse
|
39
|
Neftci EO, Pedroni BU, Joshi S, Al-Shedivat M, Cauwenberghs G. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines. Front Neurosci 2016; 10:241. [PMID: 27445650 PMCID: PMC4925698 DOI: 10.3389/fnins.2016.00241] [Citation(s) in RCA: 80] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 05/17/2016] [Indexed: 01/24/2023] Open
Abstract
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
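The "random mask over the connections" can be sketched as each synapse transmitting independently with some probability, so the average drive approaches p times the deterministic sum; the weights, inputs, and transmission probability below are illustrative:

```python
import random

def masked_forward(inputs, weights, p_transmit, rng):
    """Stochastic synapses: each connection transmits its weight
    independently with probability p_transmit (a fresh random mask
    per event, DropConnect-style)."""
    return sum(w * x for w, x in zip(weights, inputs)
               if rng.random() < p_transmit)

inputs = [1.0, 1.0, 1.0, 1.0]
weights = [1.0, 1.0, 1.0, 1.0]
full = sum(w * x for w, x in zip(weights, inputs))  # deterministic drive
trials = [masked_forward(inputs, weights, 0.5, random.Random(s))
          for s in range(2000)]
mean_drive = sum(trials) / len(trials)  # approaches p_transmit * full
```

The trial-to-trial variability of `mean_drive`'s summands is exactly the Monte Carlo sampling the abstract exploits, while the masking doubles as a DropConnect-like regularizer during learning.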
Collapse
Affiliation(s)
- Emre O. Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
| | - Bruno U. Pedroni
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
| | - Siddharth Joshi
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA
| | - Maruan Al-Shedivat
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Gert Cauwenberghs
- Department of Bioengineering, University of California San Diego, La Jolla, CA, USA
| |
Collapse
|
40
|
Rekabdar B, Nicolescu M, Nicolescu M, Louis S. Using patterns of firing neurons in spiking neural networks for learning and early recognition of spatio-temporal patterns. Neural Comput Appl 2016. [DOI: 10.1007/s00521-016-2283-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
|
41
|
Diehl PU, Cook M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front Comput Neurosci 2015; 9:99. [PMID: 26941637 PMCID: PMC4522567 DOI: 10.3389/fncom.2015.00099] [Citation(s) in RCA: 309] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Accepted: 07/16/2015] [Indexed: 11/13/2022] Open
Abstract
In order to understand how the mammalian neocortex performs computations, two things are necessary: we need a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there has been increasing interest in how spiking neural networks (SNNs) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs which use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to an SNN. We present an SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks.
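Of the mechanisms listed, the adaptive spiking threshold is easy to sketch: each spike raises the neuron's own threshold, which then decays slowly back toward baseline, so over-active neurons become harder to excite and competition spreads across the population (all constants are illustrative, not the paper's):

```python
def run_adaptive_neuron(inputs, tau_v=20.0, theta0=1.0,
                        theta_plus=0.1, tau_theta=200.0):
    """LIF neuron with an adaptive threshold: every spike raises the
    threshold, which slowly decays back to baseline (homeostasis)."""
    v, theta = 0.0, theta0
    spikes = []
    for step, i_in in enumerate(inputs):
        v += (-v + i_in) / tau_v
        theta += -(theta - theta0) / tau_theta  # slow decay to baseline
        if v >= theta:
            spikes.append(step)
            v = 0.0                             # reset potential
            theta += theta_plus                 # raise the bar
    return spikes, theta

spikes, final_theta = run_adaptive_neuron([2.0] * 300)
```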
Collapse
Affiliation(s)
- Peter U Diehl
- Institute of Neuroinformatics, ETH Zurich and University Zurich Zurich, Switzerland
| | - Matthew Cook
- Institute of Neuroinformatics, ETH Zurich and University Zurich Zurich, Switzerland
| |
Collapse
|
42
|
A Scale and Translation Invariant Approach for Early Classification of Spatio-Temporal Patterns Using Spiking Neural Networks. Neural Process Lett 2015. [DOI: 10.1007/s11063-015-9436-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
43
|
Galluppi F, Lagorce X, Stromatias E, Pfeiffer M, Plana LA, Furber SB, Benosman RB. A framework for plasticity implementation on the SpiNNaker neural architecture. Front Neurosci 2015; 8:429. [PMID: 25653580 PMCID: PMC4299433 DOI: 10.3389/fnins.2014.00429] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2014] [Accepted: 12/07/2014] [Indexed: 11/21/2022] Open
Abstract
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing-dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmissions and updates are badly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to processing synaptic plasticity updates while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible, and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
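The rate-based BCM rule mentioned alongside the STDP variants can be sketched compactly. The Euler discretization and parameter names below are illustrative assumptions, not taken from the SpiNNaker implementation:

```python
def bcm_step(w, pre_rate, post_rate, theta, eta=1e-3, tau_theta=100.0, dt=1.0):
    """One Euler step of the BCM rule with a sliding modification threshold.
    Postsynaptic activity above theta potentiates, below theta depresses;
    theta itself slowly tracks the square of the postsynaptic rate."""
    dw = eta * pre_rate * post_rate * (post_rate - theta) * dt
    dtheta = (post_rate ** 2 - theta) / tau_theta * dt
    return w + dw, theta + dtheta
```

The sliding threshold is what stabilizes the rule: sustained high postsynaptic activity raises theta, making further potentiation harder, which prevents the runaway weight growth of plain Hebbian learning.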
Affiliation(s)
- Francesco Galluppi, Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
- Xavier Lagorce, Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
- Evangelos Stromatias, Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Michael Pfeiffer, Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland
- Luis A. Plana, Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Steve B. Furber, Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Ryad B. Benosman, Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
|
44
|
An Unsupervised Approach to Learning and Early Detection of Spatio-Temporal Patterns Using Spiking Neural Networks. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0179-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
45
|
Deadwyler SA, Berger TW, Opris I, Song D, Hampson RE. Neurons and networks organizing and sequencing memories. Brain Res 2014; 1621:335-44. [PMID: 25553617 DOI: 10.1016/j.brainres.2014.12.037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2014] [Revised: 12/16/2014] [Accepted: 12/17/2014] [Indexed: 01/23/2023]
Abstract
Hippocampal CA1 and CA3 neurons sampled randomly in large numbers in the primate brain show conclusive examples of hierarchical encoding of task-specific information. Hierarchical encoding allows multi-task utilization of the same hippocampal neural networks via distributed firing between neurons that respond to subsets, attributes, or "categories" of stimulus features, which can be applied to events in different contexts. In addition, such networks are uniquely adaptable to neural systems unrestricted by rigid synaptic architecture (i.e., columns, layers, or "patches"), which physically limits the number of possible task-specific interactions between neurons. Hierarchical encoding is also not random; it requires multiple exposures to the same types of relevant events to elevate synaptic connectivity between neurons for different stimulus features that occur in different task-dependent contexts. The large number of cells within associated hierarchical circuits in structures such as the hippocampus provides efficient processing of information relevant to common memory-dependent behavioral decisions within different contextual circumstances. This article is part of a Special Issue entitled SI: Brain and Memory.
Affiliation(s)
- Sam A Deadwyler, Department of Physiology & Pharmacology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157-1083, USA
- Theodore W Berger, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, 1042 Downey Way (DRB140), Los Angeles, CA 90089-1111, USA
- Ioan Opris, Department of Physiology & Pharmacology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157-1083, USA
- Dong Song, Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, 1042 Downey Way (DRB140), Los Angeles, CA 90089-1111, USA
- Robert E Hampson, Department of Physiology & Pharmacology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157-1083, USA
|
46
|
Afshar S, George L, Tapson J, van Schaik A, Hamilton TJ. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels. Front Neurosci 2014; 8:377. [PMID: 25505378 PMCID: PMC4243566 DOI: 10.3389/fnins.2014.00377] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2014] [Accepted: 11/05/2014] [Indexed: 11/17/2022] Open
Abstract
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation, or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, which is demonstrated through an implementation on a field-programmable gate array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
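A toy version of the race between neurons with additive dendritic kernels can illustrate the idea. This sketch uses triangular ramp kernels and invented parameters; the published SKAN model differs in detail (its kernels adapt online, which is omitted here):

```python
def tri_kernel(s, slope=0.1, width=10):
    """Triangular synapto-dendritic kernel: rises for `width` time steps
    after onset, then falls back to zero. Purely additive, no multiplication
    of state variables, in the spirit of SKAN's simple building blocks."""
    if 0 <= s <= width:
        return slope * s
    if width < s <= 2 * width:
        return slope * (2 * width - s)
    return 0.0

def time_to_threshold(delays, input_times, threshold=1.5, t_max=200):
    """First time step at which the summed kernels cross threshold.
    Each input spike at input_times[i] starts a kernel delays[i] steps later."""
    for t in range(t_max):
        v = sum(tri_kernel(t - (ti + d)) for ti, d in zip(input_times, delays))
        if v >= threshold:
            return t
    return None  # summed kernels never reach threshold

def race(neuron_delays, input_times, threshold=1.5):
    """Winner-take-all race: the fastest neuron to spike wins; in the full
    model it would then suppress, and so "hide" the pattern from, its
    neighbors. Returns the index of the winning neuron (or None)."""
    times = [time_to_threshold(d, input_times, threshold)
             for d in neuron_delays]
    candidates = [(t, i) for i, t in enumerate(times) if t is not None]
    return min(candidates)[1] if candidates else None
```

A neuron whose kernel delays align the ramps so that their peaks coincide crosses threshold first; a neuron with misaligned kernels may never cross at all, so pattern selectivity falls out of the race itself.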
Affiliation(s)
- Saeed Afshar, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Libin George, School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
- Jonathan Tapson, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- André van Schaik, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Tara J. Hamilton, Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia; School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
|