1. Wang Z, Cruz L. Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning. Neural Comput 2024; 36:2136-2169. [PMID: 39177970] [DOI: 10.1162/neco_a_01702]
Abstract
Spiking neural networks (SNNs) are the next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible way to improve the performance of SNNs is to consider plastic parameters other than weights and time delays, drawn from the inherent complexity of the neural system of the brain, which may help SNNs improve their information processing ability and achieve brainlike functions. Here, we propose reference spikes as a new type of plastic parameter in a supervised learning scheme in SNNs. A neuron receives reference spikes through synapses that provide reference information independent of the input to help during learning; the number and timing of these spikes are trainable by error backpropagation. Theoretically, reference spikes improve the temporal information processing of SNNs by modulating the integration of incoming spikes at a detailed level. Through comparative computational experiments with supervised learning, we demonstrate that reference spikes improve the memory capacity of SNNs to map input spike patterns to target output spike patterns and increase classification accuracy on the MNIST, Fashion-MNIST, and SHD data sets, where both input and target output are temporally encoded. Our results demonstrate that applying reference spikes improves the performance of SNNs by enhancing their temporal information processing ability.
Affiliation(s)
- Zeyuan Wang: Department of Physics, Drexel University, Philadelphia, PA 19104, U.S.A.
- Luis Cruz: Department of Physics, Drexel University, Philadelphia, PA 19104, U.S.A.
2. Sun P, Wu J, Zhang M, Devos P, Botteldooren D. Delay learning based on temporal coding in Spiking Neural Networks. Neural Netw 2024; 180:106678. [PMID: 39260007] [DOI: 10.1016/j.neunet.2024.106678]
Abstract
Spiking Neural Networks (SNNs) hold great potential for mimicking the brain's efficient processing of information. Although biological evidence suggests that precise spike timing is crucial for effective information encoding, contemporary SNN research mainly concentrates on adjusting connection weights. In this work, we introduce Delay Learning based on Temporal Coding (DLTC), an innovative approach that integrates delay learning with a temporal coding strategy to optimize spike timing in SNNs. DLTC utilizes a learnable delay shift, which assigns varying levels of importance to different informational elements. This is complemented by an adjustable threshold that regulates firing times, allowing for earlier or later neuron activation as needed. We have tested DLTC's effectiveness in various contexts, including vision and auditory classification tasks, where it consistently outperformed traditional weight-only SNNs. The results indicate that DLTC achieves remarkable improvements in accuracy and computational efficiency, marking a step forward in advancing SNNs towards real-world applications. Our codes are accessible at https://github.com/sunpengfei1122/DLTC.
Affiliation(s)
- Pengfei Sun: Department of Information Technology, WAVES Research Group, Ghent University, Gent, Belgium
- Jibin Wu: Department of Data Science and Artificial Intelligence and Department of Computing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China
- Malu Zhang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Paul Devos: Department of Information Technology, WAVES Research Group, Ghent University, Gent, Belgium
- Dick Botteldooren: Department of Information Technology, WAVES Research Group, Ghent University, Gent, Belgium
3. Wang J. Training multi-layer spiking neural networks with plastic synaptic weights and delays. Front Neurosci 2024; 17:1253830. [PMID: 38328553] [PMCID: PMC10847234] [DOI: 10.3389/fnins.2023.1253830]
Abstract
Spiking neural networks are usually considered the third generation of neural networks; they hold the potential of ultra-low power consumption on corresponding hardware platforms and are well suited for temporal information processing. However, how to efficiently train spiking neural networks remains an open question, and most existing learning methods consider only the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multiple-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and delays are treated as adjustable parameters to improve both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp, which can make full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that it achieves a competitive learning performance compared with existing related works. Finally, the differences between the proposed method and the existing mainstream multi-layer training algorithms are discussed.
Affiliation(s)
- Jing Wang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
4. Yu Q, Gao J, Wei J, Li J, Tan KC, Huang T. Improving Multispike Learning With Plastic Synaptic Delays. IEEE Trans Neural Netw Learn Syst 2023; 34:10254-10265. [PMID: 35442893] [DOI: 10.1109/tnnls.2022.3165527]
Abstract
Emulating the spike-based processing in the brain, spiking neural networks (SNNs) have been developed as a promising candidate for the new generation of artificial neural networks that aim to produce efficient cognition as the brain does. Due to the complex dynamics and nonlinearity of SNNs, designing efficient learning algorithms has remained a major difficulty, which attracts great research attention. Most existing algorithms focus on the adjustment of synaptic weights. However, other components, such as synaptic delays, are found to be adaptive and important in modulating neural behavior. How plasticity on different components could cooperate to improve the learning of SNNs remains an interesting question. Advancing our previous multispike learning, we propose a new joint weight-delay plasticity rule, named TDP-DL, in this article. Plastic delays are integrated into the learning framework, and as a result, the performance of multispike learning is significantly improved. Simulation results highlight the effectiveness and efficiency of our TDP-DL rule compared to baseline ones. Moreover, we reveal the underlying principle of how synaptic weights and delays cooperate with each other through a synthetic task of interval selectivity, and show that plastic delays can enhance the selectivity and flexibility of neurons by shifting information across time. Due to this capability, useful information distributed across the time domain can be effectively integrated for better accuracy, as highlighted in our generalization tasks of image, speech, and event-based object recognition. Our work is thus valuable for improving the performance of spike-based neuromorphic computing.
5. Luo X, Qu H, Wang Y, Yi Z, Zhang J, Zhang M. Supervised Learning in Multilayer Spiking Neural Networks With Spike Temporal Error Backpropagation. IEEE Trans Neural Netw Learn Syst 2023; 34:10141-10153. [PMID: 35436200] [DOI: 10.1109/tnnls.2022.3164930]
Abstract
Brain-inspired spiking neural networks (SNNs) hold the advantages of lower power consumption and powerful computing capability. However, the lack of effective learning algorithms has obstructed the theoretical advance and applications of SNNs. The majority of the existing learning algorithms for SNNs are based on synaptic weight adjustment. However, neuroscience findings confirm that synaptic delays can also be modulated to play an important role in the learning process. Here, we propose a gradient descent-based learning algorithm for synaptic delays to enhance the sequential learning performance of a single spiking neuron. Moreover, we extend the proposed method to multilayer SNNs with spike temporal-based error backpropagation. In the proposed multilayer learning algorithm, information is encoded in the relative timing of individual neuronal spikes, and learning is performed based on the exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and accuracy over existing spike temporal-based learning algorithms. We also evaluate the proposed learning method in an SNN-based multimodal computational model for audiovisual pattern recognition, where it achieves better performance than its counterparts.
6. Sun P, Chua Y, Devos P, Botteldooren D. Learnable axonal delay in spiking neural networks improves spoken word recognition. Front Neurosci 2023; 17:1275944. [PMID: 38027508] [PMCID: PMC10665570] [DOI: 10.3389/fnins.2023.1275944]
Abstract
Spiking neural networks (SNNs), which are composed of biologically plausible spiking neurons and combined with bio-physically realistic auditory periphery models, offer a means to explore and understand human auditory processing, especially in tasks where precise timing is essential. However, because of the inherent temporal complexity in spike sequences, the performance of SNNs has remained less competitive compared to artificial neural networks (ANNs). To tackle this challenge, a fundamental research topic is the configuration of spike timing and the exploration of more intricate architectures. In this work, we demonstrate that a learnable axonal delay combined with local skip-connections yields state-of-the-art performance on challenging benchmarks for spoken word recognition. Additionally, we introduce an auxiliary loss term to further enhance accuracy and stability. Experiments on the neuromorphic speech benchmark datasets NTIDIDIGITS and SHD show improvements in performance when incorporating our delay module in comparison to vanilla feedforward SNNs. Specifically, with the integration of our delay module, the performance on NTIDIDIGITS and SHD improves by 14% and 18%, respectively. When paired with local skip-connections and the auxiliary loss, our approach surpasses both recurrent and convolutional neural networks, yet uses 10x fewer parameters for NTIDIDIGITS and 7x fewer for SHD.
Affiliation(s)
- Pengfei Sun: Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Yansong Chua: Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology, Jiaxing, China
- Paul Devos: Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
- Dick Botteldooren: Department of Information Technology, WAVES Research Group, Ghent University, Ghent, Belgium
7. Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.026]
8. Zhou Y, Xu N, Gao B, Zhuge F, Tang Z, Deng X, Li Y, He Y, Miao X. Complementary Memtransistor-Based Multilayer Neural Networks for Online Supervised Learning Through (Anti-)Spike-Timing-Dependent Plasticity. IEEE Trans Neural Netw Learn Syst 2022; 33:6640-6651. [PMID: 34081587] [DOI: 10.1109/tnnls.2021.3082911]
Abstract
We propose a complete hardware-based architecture for multilayer neural networks (MNNs), including electronic synapses, neurons, and periphery circuitry, to implement the supervised learning (SL) algorithm of the extended remote supervised method (ReSuMe). In this system, complementary (a pair of n- and p-type) memtransistors (C-MTs) are used as an electrical synapse. By applying the spike-timing-dependent plasticity (STDP) learning rule to the memtransistor connecting the presynaptic neuron to the output neuron, and the contrary anti-STDP rule to the other memtransistor connecting the presynaptic neuron to the teacher neuron, extended ReSuMe with multiple layers is realized without the complicated supervising modules used in previous approaches. In this way, both the chip area and the power consumption of the C-MT-based learning circuit for the weight updating operation are drastically decreased compared with conventional single-memtransistor (S-MT)-based designs. Two typical benchmarks, the linearly nonseparable XOR problem and recognition on the Modified National Institute of Standards and Technology (MNIST) database, have been successfully tackled using the proposed MNN system, while the impact of the nonideal factors of realistic devices has been evaluated.
9. Lu Y, Zhang W, Fu B, Du J, He Z. Synaptic delay plasticity based on frequency-switched VCSELs for optical delay-weight spiking neural networks. Opt Lett 2022; 47:5587-5590. [PMID: 37219277] [DOI: 10.1364/ol.470512]
Abstract
In this Letter, we propose an optical delay-weight spiking neural network (SNN) architecture constructed from cascaded frequency- and intensity-switched vertical-cavity surface-emitting lasers (VCSELs). The synaptic delay plasticity of frequency-switched VCSELs is studied in depth by numerical analysis and simulations. The principal factors related to delay manipulation are investigated, with tunable spiking delays of up to 60 ns. Moreover, a two-layer spiking neural network based on the delay-weight supervised learning algorithm is applied to a spiking sequence pattern training task and then to a classification task on the Iris dataset. The proposed optical SNN provides a compact and cost-efficient solution for a delay-weighted computing architecture without the need for extra programmable optical delay lines.
10. Susi G, Antón-Toro LF, Maestú F, Pereda E, Mirasso C. nMNSD: A Spiking Neuron-Based Classifier That Combines Weight-Adjustment and Delay-Shift. Front Neurosci 2021; 15:582608. [PMID: 33679293] [PMCID: PMC7933525] [DOI: 10.3389/fnins.2021.582608]
Abstract
The recent “multi-neuronal spike sequence detector” (MNSD) architecture integrates the weight- and delay-adjustment methods by combining heterosynaptic plasticity with the neurocomputational feature of spike latency, representing a new opportunity to understand the mechanisms underlying biological learning. Unfortunately, the range of problems to which this topology can be applied is limited because of the low cardinality of the parallel spike trains that it can process and the lack of a visualization mechanism for understanding its internal operation. We present here the nMNSD structure, a generalization of the MNSD to any number of inputs. The mathematical framework of the structure is introduced, together with the “trapezoid method,” a reduced method to analyze the recognition mechanism operated by the nMNSD in response to a specific input parallel spike train. We apply the nMNSD to a classification problem previously addressed with the classical MNSD by the same authors, showing the new possibilities the nMNSD opens, with an associated improvement in classification performance. Finally, we benchmark the nMNSD on the classification of static inputs (the MNIST database), obtaining state-of-the-art accuracy together with advantageous time- and energy-efficiency compared to similar classification methods.
Affiliation(s)
- Gianluca Susi: UPM-UCM Laboratory of Cognitive and Computational Neuroscience, Centro de Tecnologia Biomedica, Madrid, Spain; Departamento de Psicología Experimental, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain; Department of Civil Engineering and Computer Science, University of Rome "Tor Vergata", Rome, Italy
- Luis F Antón-Toro: UPM-UCM Laboratory of Cognitive and Computational Neuroscience, Centro de Tecnologia Biomedica, Madrid, Spain; Departamento de Psicología Experimental, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain
- Fernando Maestú: UPM-UCM Laboratory of Cognitive and Computational Neuroscience, Centro de Tecnologia Biomedica, Madrid, Spain; Departamento de Psicología Experimental, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain; CIBER-BBN: Networking Research Center on Bioengineering, Biomaterials and Nanomedicine, Madrid, Spain
- Ernesto Pereda: UPM-UCM Laboratory of Cognitive and Computational Neuroscience, Centro de Tecnologia Biomedica, Madrid, Spain; Departamento de Ingeniería Industrial & IUNE & ITB, Universidad de La Laguna, Tenerife, Spain
- Claudio Mirasso: Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Palma de Mallorca, Spain
11. Zhang Y, Qu H, Luo X, Chen Y, Wang Y, Zhang M, Li Z. A new recursive least squares-based learning algorithm for spiking neurons. Neural Netw 2021; 138:110-125. [PMID: 33636484] [DOI: 10.1016/j.neunet.2021.01.016]
Abstract
Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of their temporal coding makes it an arduous task to put forward an effective supervised learning algorithm, which still puzzles researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate a desired spatio-temporal spike train. During the learning process of our method, the weight update is driven by a cost function defined by the difference between the membrane potential and the firing threshold. The amount of weight modification depends not only on the impact of the current error function but also on the previous error functions, which are evaluated by the current weights. In order to improve the learning performance, we integrate a modified synaptic delay learning into the proposed RLSBLR. We conduct experiments in different settings, such as spiking lengths, numbers of inputs, firing rates, noises, and learning parameters, to thoroughly investigate the performance of this learning algorithm. The proposed RLSBLR is compared with the competitive algorithms Perceptron-Based Spiking Neuron Learning Rule (PBSNLR) and Remote Supervised Method (ReSuMe). Experimental results demonstrate that the proposed RLSBLR achieves higher learning accuracy, higher efficiency, and better robustness against different types of noise. In addition, we apply the proposed RLSBLR to the open-source TIDIGITS database, and the results show that our algorithm performs well in practical applications.
Affiliation(s)
- Yun Zhang: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yuchen Wang: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Malu Zhang: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Zefang Li: China Coal Research Institute, Beijing 100013, PR China
12. Zhang M, Wu J, Belatreche A, Pan Z, Xie X, Chua Y, Li G, Qu H, Li H. Supervised learning in spiking neural networks with synaptic delay-weight plasticity. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.03.079]
13. Taherkhani A, Cosma G, McGinnity TM. Optimization of Output Spike Train Encoding for a Spiking Neuron Based on its Spatio-Temporal Input Pattern. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2909355]
14. Hussain I, Thounaojam DM. SpiFoG: an efficient supervised learning algorithm for the network of spiking neurons. Sci Rep 2020; 10:13122. [PMID: 32753645] [PMCID: PMC7403331] [DOI: 10.1038/s41598-020-70136-5]
Abstract
There has been a lot of research on supervised learning in spiking neural networks (SNNs) over the past couple of decades to improve computational efficiency. However, evolutionary algorithm-based supervised learning for SNNs has not been investigated thoroughly and is still in an embryonic stage. This paper introduces an efficient algorithm (SpiFoG) to train multilayer feed-forward SNNs in a supervised manner using an elitist floating-point genetic algorithm with hybrid crossover. Evidence from neuroscience suggests that the brain uses spike times with random synaptic delays for information processing. Therefore, a leaky integrate-and-fire spiking neuron with random synaptic delays is used in this research. SpiFoG allows both excitatory and inhibitory neurons by permitting a mixture of positive and negative synaptic weights. In addition, random synaptic delays are trained alongside synaptic weights in an efficient manner. Moreover, the computational efficiency of SpiFoG was increased by reducing the total simulation time and increasing the time step, since a larger time step within the total simulation time requires fewer iterations. SpiFoG is benchmarked on the Iris and WBC datasets drawn from the UCI machine learning repository and shows better performance than state-of-the-art techniques.
Affiliation(s)
- Irshed Hussain: Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
- Dalton Meitei Thounaojam: Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam, 788010, India
15. Haessig G, Milde MB, Aceituno PV, Oubari O, Knight JC, van Schaik A, Benosman RB, Indiveri G. Event-Based Computation for Touch Localization Based on Precise Spike Timing. Front Neurosci 2020; 14:420. [PMID: 32528239] [PMCID: PMC7248403] [DOI: 10.3389/fnins.2020.00420]
Abstract
Precise spike timing and temporal coding are used extensively within the nervous system of insects and in the sensory periphery of higher-order animals. However, conventional Artificial Neural Networks (ANNs) and machine learning algorithms cannot take advantage of this coding strategy, due to their rate-based representation of signals. Even in the case of artificial Spiking Neural Networks (SNNs), identifying applications where temporal coding outperforms the rate coding strategies of ANNs is still an open challenge. Neuromorphic sensory-processing systems provide an ideal context for exploring the potential advantages of temporal coding, as they are able to efficiently extract the information required to cluster or classify spatio-temporal activity patterns from relative spike timing. Here we propose a neuromorphic model inspired by the sand scorpion to explore the benefits of temporal coding, and validate it in an event-based sensory-processing task. The task consists of localizing a target using only the relative spike timing of eight spatially separated vibration sensors. We propose two different approaches in which the SNN learns to cluster spatio-temporal patterns in an unsupervised manner, and we demonstrate how the task can be solved both analytically and through numerical simulation of multiple SNN models. We argue that the models presented are optimal for spatio-temporal pattern classification using precise spike timing in a task that could be used as a standard benchmark for evaluating event-based sensory processing models based on temporal coding.
Affiliation(s)
- Germain Haessig: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Moritz B Milde: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Pau Vilimelis Aceituno: Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
- Omar Oubari: Institut de la Vision, Sorbonne Université, Paris, France
- James C Knight: Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- André van Schaik: International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Ryad B Benosman: Institut de la Vision, Sorbonne Université, Paris, France; University of Pittsburgh, Pittsburgh, PA, United States; Carnegie Mellon University, Pittsburgh, PA, United States
- Giacomo Indiveri: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
16. Anwani N, Rajendran B. Training multi-layer spiking neural networks using NormAD based spatio-temporal error backpropagation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.104]
17. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356] [DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
Affiliation(s)
- Xiangwen Wang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
18. Pan Z, Chua Y, Wu J, Zhang M, Li H, Ambikairajah E. An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks. Front Neurosci 2020; 13:1420. [PMID: 32038132] [PMCID: PMC6987407] [DOI: 10.3389/fnins.2019.01420]
Abstract
The auditory front-end is an integral part of a spiking neural network (SNN) performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech or audio, into an efficient, effective, and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the perceptual components of the human auditory system: the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we built and published two spiking versions of speech datasets, Spike-TIDIGITS and Spike-TIMIT, for researchers to use and to benchmark future SNN research.
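The notion of a "reconstructable" spike code can be illustrated with a deliberately simplified sketch. The send-on-delta encoder below is a generic, hypothetical stand-in, not the authors' BAE pipeline (which involves a cochlear filter bank, hair-cell and masking models); the function names and the threshold parameter are illustrative assumptions.

```python
# Hypothetical sketch: a generic send-on-delta spike encoder/decoder.
# A spike (index, polarity) is emitted whenever the signal drifts by
# `threshold` from the last encoded level, so a staircase approximation
# of the original signal can be reconstructed from the spikes alone.

def encode_send_on_delta(signal, threshold):
    """Encode an analog sample sequence into (index, polarity) spikes."""
    spikes = []
    level = signal[0]
    for i, x in enumerate(signal):
        while x - level >= threshold:
            level += threshold
            spikes.append((i, +1))
        while level - x >= threshold:
            level -= threshold
            spikes.append((i, -1))
    return spikes

def decode_send_on_delta(spikes, n, start_level, threshold):
    """Rebuild a staircase approximation of the original n samples."""
    out = [start_level] * n
    level = start_level
    j = 0
    for i in range(n):
        while j < len(spikes) and spikes[j][0] <= i:
            level += spikes[j][1] * threshold
            j += 1
        out[i] = level
    return out
```

By construction the reconstruction error stays below the threshold, which is the sense in which such a spike pattern is "reconstructable".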
Affiliation(s)
- Zihan Pan
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Yansong Chua
- Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore, Singapore
- Jibin Wu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Malu Zhang
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Haizhou Li
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Eliathamby Ambikairajah
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW, Australia

19
Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Netw 2019; 122:253-272. [PMID: 31726331 DOI: 10.1016/j.neunet.2019.09.036] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 09/17/2019] [Accepted: 09/23/2019] [Indexed: 11/30/2022]
Abstract
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), which draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in the learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next, a critical review of state-of-the-art learning algorithms for SNNs using single and multiple spikes is given. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
Affiliation(s)
- Aboozar Taherkhani
- School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma
- Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK
- T M McGinnity
- Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK; School of Science and Technology, Nottingham Trent University, Nottingham, UK

20

21
Xu Y, Yang J, Zeng X. An optimal time interval of input spikes involved in synaptic adjustment of spike sequence learning. Neural Netw 2019; 116:11-24. [DOI: 10.1016/j.neunet.2019.03.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Revised: 02/19/2019] [Accepted: 03/26/2019] [Indexed: 10/27/2022]
22
Luo X, Qu H, Zhang Y, Chen Y. First Error-Based Supervised Learning Algorithm for Spiking Neural Networks. Front Neurosci 2019; 13:559. [PMID: 31244594 PMCID: PMC6563788 DOI: 10.3389/fnins.2019.00559] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Accepted: 05/15/2019] [Indexed: 11/13/2022] Open
Abstract
Neural circuits respond to multiple sensory stimuli by firing precisely timed spikes. Inspired by this phenomenon, spike timing-based spiking neural networks (SNNs) have been proposed to process and memorize spatiotemporal spike patterns. However, the response speed and accuracy of existing SNN learning algorithms still fall short of those of the human brain. To further improve the performance of learning precisely timed spikes, we propose a new weight updating mechanism that always adjusts the synaptic weights at the time of the first wrong output spike. The proposed learning algorithm accurately adjusts the synaptic weights that contribute to the membrane potential at desired and non-desired firing times. Experimental results demonstrate that the proposed algorithm achieves higher accuracy and better robustness with fewer computational resources than the remote supervised method (ReSuMe) and the spike pattern association neuron (SPAN), which are classic sequence learning algorithms. In addition, an SNN-based computational model equipped with the proposed learning method achieves better recognition results in a speech recognition task than other bio-inspired baseline systems.
Affiliation(s)
- Xiaoling Luo
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Hong Qu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yun Zhang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yi Chen
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China

23
Wang X, Lin X, Dang X. A Delay Learning Algorithm Based on Spike Train Kernels for Spiking Neurons. Front Neurosci 2019; 13:252. [PMID: 30971877 PMCID: PMC6445871 DOI: 10.3389/fnins.2019.00252] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 03/04/2018] [Indexed: 11/13/2022] Open
Abstract
Neuroscience research confirms that synaptic delays are not constant but can be modulated. This paper proposes a supervised delay learning algorithm for spiking neurons with temporal encoding, in which both the weight and the delay of a synaptic connection can be adjusted to enhance learning performance. The proposed algorithm first defines spike train kernels that transform discrete spike trains into continuous analog signals during the learning phase, so that common mathematical operations can be performed on them, and then derives the supervised learning rules for synaptic weights and delays by the gradient descent method. The algorithm is successfully applied to various spike train learning tasks, and the effects of the synaptic delay parameters are analyzed in detail. Experimental results show that a network with dynamic delays achieves higher learning accuracy and needs fewer learning epochs than a network with static delays. The delay learning algorithm is further validated on a practical image classification problem. The results again show that it can achieve good classification performance with a proper receptive field. Therefore, synaptic delay learning is significant for both practical applications and theoretical research on spiking neural networks.
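The kernel idea described above can be sketched as follows, assuming a Gaussian kernel (the paper's exact kernel and learning rules may differ): smoothing two spike trains with a kernel admits a closed-form inner product, which in turn defines a differentiable distance between actual and desired output trains that gradient descent can act on.

```python
import math

def kernel_inner_product(train_a, train_b, sigma=1.0):
    """Inner product of two Gaussian-smoothed spike trains.
    The convolution of two Gaussians is again Gaussian, so the integral
    reduces (up to a constant factor) to a pairwise sum over spike times."""
    return sum(math.exp(-((s - t) ** 2) / (4 * sigma ** 2))
               for s in train_a for t in train_b)

def spike_train_distance(train_a, train_b, sigma=1.0):
    """Squared distance induced by the kernel inner product."""
    return (kernel_inner_product(train_a, train_a, sigma)
            - 2 * kernel_inner_product(train_a, train_b, sigma)
            + kernel_inner_product(train_b, train_b, sigma))
```

Identical trains yield distance zero, and the distance grows smoothly as spike times drift apart, which is exactly what makes a gradient-based weight/delay rule possible on otherwise discrete events.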
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China

24
Zhang M, Qu H, Belatreche A, Chen Y, Yi Z. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:123-137. [PMID: 29993588 DOI: 10.1109/tnnls.2018.2833077] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, the existing learning methods used to realize such computation often result in relatively low accuracy and poor robustness to noise. To address these limitations, we propose a novel, highly effective, and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than the current state-of-the-art spiking neuron learning methods. While traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called the skip scan training strategy, that selectively identifies the time steps at which to apply weight adjustments. The proposed strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is further improved to help mitigate the impact of input noise on the timing accuracy and reliability of the neuron firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and is further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise.
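The membrane-potential-driven error idea can be illustrated with a toy current-based sketch (illustrative constants and function names, not the MemPo-Learn equations themselves): rather than comparing spike times, weights are nudged so the membrane potential reaches the firing threshold at desired times and stays below it at undesired times.

```python
import math

def membrane_potential(weights, pre_spikes, t, tau=5.0):
    """Toy potential: weighted sum of exponentially decaying PSPs."""
    v = 0.0
    for w, spikes in zip(weights, pre_spikes):
        for tp in spikes:
            if tp <= t:
                v += w * math.exp(-(t - tp) / tau)
    return v

def potential_driven_step(weights, pre_spikes, desired_times,
                          undesired_times, theta=1.0, lr=0.05, tau=5.0):
    """One update: error = threshold-potential gap at each checked time."""
    w = list(weights)
    for t in desired_times:          # potential should reach threshold here
        err = theta - membrane_potential(w, pre_spikes, t, tau)
        if err > 0:
            for i, spikes in enumerate(pre_spikes):
                w[i] += lr * err * sum(math.exp(-(t - tp) / tau)
                                       for tp in spikes if tp <= t)
    for t in undesired_times:        # potential should stay below threshold
        err = membrane_potential(w, pre_spikes, t, tau) - theta
        if err > 0:
            for i, spikes in enumerate(pre_spikes):
                w[i] -= lr * err * sum(math.exp(-(t - tp) / tau)
                                       for tp in spikes if tp <= t)
    return w
```

The error signal here is continuous in the weights even when no spike is emitted at all, which is the practical advantage of potential-driven over spike-time-driven error functions.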
25
Taherkhani A, Belatreche A, Li Y, Maguire LP. A Supervised Learning Algorithm for Learning Precise Timing of Multiple Spikes in Multilayer Spiking Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:5394-5407. [PMID: 29993611 DOI: 10.1109/tnnls.2018.2797801] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
There is biological evidence that information in the brain is coded through the precise timing of spikes. However, training a population of spiking neurons in a multilayer network to fire at multiple precise times remains a challenging task. Delay learning, and the effect of a delay on weight learning in a spiking neural network (SNN), have not been investigated thoroughly. This paper proposes a novel biologically plausible supervised learning algorithm for learning precisely timed multiple spikes in multilayer SNNs. Based on the spike-timing-dependent plasticity learning rule, the proposed method trains an SNN through the synergy between weight and delay learning. The weights of the hidden and output neurons are adjusted in parallel. The proposed method captures the contribution of synaptic delays to the learning of synaptic weights. Interaction between different layers of the network is realized through biofeedback signals sent by the output neurons. The trained SNN is used for the classification of spatiotemporal input patterns. The proposed method also trains the spiking network not to fire spikes at undesired times that contribute to misclassification. Experimental evaluation on benchmark data sets from the UCI machine learning repository shows that the proposed method yields results comparable to classical rate-based methods such as deep belief networks and autoencoder models. Moreover, it can achieve higher classification accuracy than a single-layer SNN and a similar multilayer SNN.
26
Susi G, Antón Toro L, Canuet L, López ME, Maestú F, Mirasso CR, Pereda E. A Neuro-Inspired System for Online Learning and Recognition of Parallel Spike Trains, Based on Spike Latency, and Heterosynaptic STDP. Front Neurosci 2018; 12:780. [PMID: 30429767 PMCID: PMC6220070 DOI: 10.3389/fnins.2018.00780] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2018] [Accepted: 10/09/2018] [Indexed: 11/17/2022] Open
Abstract
Humans perform remarkably well in many cognitive tasks, including pattern recognition. However, the neuronal mechanisms underlying this process are not well understood. Nevertheless, artificial neural networks, inspired by brain circuits, have been designed and used to tackle spatio-temporal pattern recognition tasks. In this paper we present a multi-neuronal spike pattern detection structure able to autonomously implement online learning and recognition of parallel spike sequences (i.e., sequences of pulses belonging to different neurons/neural ensembles). The operating principle of this structure is based on two spiking/synaptic neurocomputational characteristics: spike latency, which enables neurons to fire spikes with a certain delay, and heterosynaptic plasticity, which allows the self-regulation of synaptic weights. From the perspective of information representation, the structure maps a spatio-temporal stimulus into a multi-dimensional, temporal feature space, in which the coordinate and the time at which a neuron fires represent one specific feature; in this sense, each feature can be considered to span a single temporal axis. We applied the proposed scheme to experimental data obtained from a motor-inhibitory cognitive task. The results show that our method exhibits performance similar to other classification methods, indicating the effectiveness of our approach. In addition, its simplicity and low computational cost suggest large-scale implementation for real-time recognition applications in several areas, such as brain-computer interfaces, personal biometric authentication, or early detection of diseases.
Affiliation(s)
- Gianluca Susi
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Dipartimento di Ingegneria Civile e Ingegneria Informatica, Università di Roma 'Tor Vergata', Rome, Italy
- Luis Antón Toro
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain
- Leonides Canuet
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Departamento de Psicología Clinica, Psicobiología y Metodología, Universidad de La Laguna, La Laguna, Spain
- Maria Eugenia López
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain
- Fernando Maestú
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Facultad de Psicología, Universidad Complutense de Madrid, Madrid, Spain
- Claudio R Mirasso
- Instituto de Fisica Interdisciplinar y Sistemas Complejos, CSIC-UIB, Campus Universitat de les Illes Balears, Palma de Mallorca, Spain
- Ernesto Pereda
- UCM-UPM Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain; Departamento de Ingeniería Industrial, Escuela Superior de Ingeniería y Tecnología & IUNE, Universidad de La Laguna, La Laguna, Spain

27

28
Nandakumar S, Kulkarni SR, Babu AV, Rajendran B. Building Brain-Inspired Computing Systems: Examining the Role of Nanoscale Devices. IEEE NANOTECHNOLOGY MAGAZINE 2018. [DOI: 10.1109/mnano.2018.2845078] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
29
Zhang M, Qu H, Belatreche A, Xie X. EMPD: An Efficient Membrane Potential Driven Supervised Learning Algorithm for Spiking Neurons. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2651943] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
30
Kulkarni SR, Rajendran B. Spiking neural networks for handwritten digit recognition-Supervised learning and network optimization. Neural Netw 2018; 103:118-127. [PMID: 29674234 DOI: 10.1016/j.neunet.2018.03.019] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2017] [Revised: 02/13/2018] [Accepted: 03/27/2018] [Indexed: 12/17/2022]
Abstract
We demonstrate supervised learning in Spiking Neural Networks (SNNs) for the problem of handwritten digit recognition using the spike-triggered Normalized Approximate Descent (NormAD) algorithm. Our network, which employs neurons operating at sparse biological spike rates below 300 Hz, achieves a classification accuracy of 98.17% on the MNIST test database with four times fewer parameters than the state-of-the-art. We present several insights from extensive numerical experiments regarding the optimization of learning parameters and network configuration to improve accuracy. We also describe a number of strategies to optimize the SNN for implementation in memory- and energy-constrained hardware, including approximations in computing the neuronal dynamics and reduced precision in storing the synaptic weights. Experiments reveal that even with 3-bit synaptic weights, the classification accuracy of the designed SNN degrades by no more than 1% compared to the floating-point baseline. Further, the proposed SNN, which is trained on precise spike timing information, outperforms an equivalent non-spiking artificial neural network (ANN) trained using backpropagation, especially at low bit precision. Thus, our study shows the potential for realizing efficient neuromorphic systems that use spike-based information encoding and learning for real-world applications.
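The reduced-precision result mentioned above can be illustrated with a toy uniform, symmetric weight quantizer; the paper's actual quantization scheme is not specified here, so this is an assumption used only to show the mechanics of mapping trained floating-point weights onto a 3-bit grid.

```python
def quantize_weights(weights, bits=3):
    """Uniformly quantize weights to a symmetric grid of 2**(bits-1)-1
    positive levels, their negatives, and zero (illustrative scheme)."""
    levels = 2 ** (bits - 1) - 1          # e.g. 3 bits -> levels -3..+3
    w_max = max(abs(w) for w in weights) or 1.0
    scale = w_max / levels                # grid step size
    return [round(w / scale) * scale for w in weights]
```

Each quantized weight differs from the original by at most half a grid step (scale/2), which is the kind of bounded perturbation that lets accuracy survive aggressive bit-width reduction.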
Affiliation(s)
- Shruti R Kulkarni
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA
- Bipin Rajendran
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, NJ, 07102, USA

31
Zhang X, Foderaro G, Henriquez C, Ferrari S. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks. Int J Neural Syst 2018; 28:1750015. [DOI: 10.1142/s0129065717500150] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Recent developments in neural stimulation and recording technologies are giving scientists the ability to record and control the activity of individual neurons in vitro or in vivo with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths change only as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.
Affiliation(s)
- Xu Zhang
- Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, NC, US
- Greg Foderaro
- Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, NC, US
- Craig Henriquez
- Biomedical Engineering, Duke University, Box 90281 Hudson Hall, Durham, 27708, US
- Silvia Ferrari
- Sibley School of Mechanical and Aerospace Engineering, Cornell University, 105 Upson Hall, Ithaca, New York, 14853, US

32
Matsubara T. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns. Front Comput Neurosci 2017; 11:104. [PMID: 29209191 PMCID: PMC5702355 DOI: 10.3389/fncom.2017.00104] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Accepted: 11/02/2017] [Indexed: 12/15/2022] Open
Abstract
Precise spike timing is considered to play a fundamental role in communication and signal processing in biological neural networks. Understanding the mechanisms that adjust spike timing would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, adjusting the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm and classifies input data encoded into spatio-temporal spike patterns. Even in supervised classification, the algorithm requires no external spikes indicating the desired spike timings, unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it may capture the mechanism underlying biological delay learning.
Affiliation(s)
- Takashi Matsubara
- Computational Intelligence, Fundamentals of Computational Science, Department of Computational Science, Graduate School of System Informatics, Kobe University, Hyogo, Japan

33
Xie X, Qu H, Liu G, Zhang M. Efficient training of supervised spiking neural networks via the normalized perceptron based learning rule. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.086] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]