1. Hwang S, Hwang Y, Kim D, Lee J, Choe HK, Lee J, Kang H, Kung J. ReplaceNet: real-time replacement of a biological neural circuit with a hardware-assisted spiking neural network. Front Neurosci 2023; 17:1161592. PMID: 37638314; PMCID: PMC10448768; DOI: 10.3389/fnins.2023.1161592.
Abstract
Recent developments in artificial neural networks and their learning algorithms have enabled new research directions in computer vision, language modeling, and neuroscience. Among various neural network algorithms, spiking neural networks (SNNs) are well-suited for understanding the behavior of biological neural circuits. In this work, we propose to guide the training of a sparse SNN in order to replace a sub-region of a cultured hippocampal network with limited hardware resources. To verify our approach with a realistic experimental setup, we record spikes of cultured hippocampal neurons with a microelectrode array (in vitro). The main focus of this work is to dynamically cut unimportant synapses during SNN training on the fly so that the model can be realized on resource-constrained hardware, e.g., implantable devices. To do so, we adopt a simple STDP learning rule to easily select the important synapses that impact the quality of spike timing learning. By combining the STDP rule with online supervised learning, we can precisely predict the spike pattern of the cultured network in real time. The reduction in model complexity, i.e., the reduced number of connections, significantly lowers the required hardware resources, which is crucial in developing an implantable chip for the treatment of neurological disorders. In addition to the new learning algorithm, we prototype sparse SNN hardware on a small FPGA with pipelined execution and parallel computing to verify the possibility of real-time replacement. As a result, we can replace a sub-region of the biological neural circuit within 22 μs using 2.5× fewer hardware resources, i.e., by allowing 80% sparsity in the SNN model, compared to the fully-connected SNN model. With energy-efficient algorithms and hardware, this work presents an essential step toward real-time neuroprosthetic computation.
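The core mechanism described above, an STDP rule used to rank synapses plus on-the-fly pruning of the unimportant ones, can be sketched generically. The abstract does not give the exact rule or constants, so the pair-based STDP window and the magnitude-pruning criterion below (including `a_plus`, `a_minus`, `tau`, and `sparsity`) are illustrative assumptions, not the authors' implementation:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (assumed window shape)."""
    dt = t_post - t_pre
    if dt >= 0:
        return w + a_plus * math.exp(-dt / tau)
    return w - a_minus * math.exp(dt / tau)

def prune_smallest(weights, sparsity=0.8):
    """Zero out the smallest-magnitude fraction of synapses so only the
    important ones survive, shrinking the hardware footprint."""
    k = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:k]:
        pruned[i] = 0.0
    return pruned
```

At 80% sparsity only one in five synapses survives, which is consistent with the 2.5× hardware saving the abstract reports for the FPGA prototype.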
Affiliation(s)
- Sangwoo Hwang, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Yujin Hwang, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Duhee Kim, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Junhee Lee, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Han Kyoung Choe, Department of Brain Sciences, DGIST, Daegu, Republic of Korea
- Junghyup Lee, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Hongki Kang, Department of Electrical Engineering and Computer Science, DGIST, Daegu, Republic of Korea
- Jaeha Kung, School of Electrical Engineering, Korea University, Seoul, Republic of Korea
2. Zhan Q, Liu G, Xie X, Sun G, Tang H. Effective Transfer Learning Algorithm in Spiking Neural Networks. IEEE Transactions on Cybernetics 2022; 52:13323-13335. PMID: 34270439; DOI: 10.1109/tcyb.2021.3079097.
Abstract
As the third generation of neural networks, spiking neural networks (SNNs) have recently gained much attention because of their high energy efficiency on neuromorphic hardware. However, like traditional artificial neural networks (ANNs), training deep SNNs requires large amounts of labeled data that are expensive to obtain in real-world applications. To address this issue, transfer learning has been proposed and widely used in traditional ANNs, but it has seen limited use in SNNs. In this article, we propose an effective transfer learning framework for deep SNNs based on domain-invariant representations. Specifically, we analyze the suitability of centered kernel alignment (CKA) as a domain distance measurement relative to maximum mean discrepancy (MMD) in deep SNNs. In addition, we study feature transferability across different layers by testing on the Office-31, Office-Caltech-10, and PACS datasets. The experimental results demonstrate the transferability of SNNs and show the effectiveness of the proposed transfer learning framework using CKA in SNNs.
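The CKA measure the authors analyze has a simple linear form. The following is a minimal sketch of linear CKA only, not the paper's full transfer framework:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x d1) and Y (n x d2)
    computed over the same n examples; 1.0 means identical
    representations up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))
```

Unlike MMD, which compares mean feature embeddings, CKA compares the full similarity structure of two representations and is invariant to orthogonal transforms and isotropic scaling, one reason it can serve as a domain-distance measurement for layer-wise comparisons.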
3. Hu L, Liao X. Voltage slope guided learning in spiking neural networks. Front Neurosci 2022; 16:1012964. PMID: 36440266; PMCID: PMC9685168; DOI: 10.3389/fnins.2022.1012964.
Abstract
A thorny problem in machine learning is how to extract useful clues related to delayed feedback signals from the clutter of input activity, known as the temporal credit-assignment problem. Aggregate-label learning algorithms represent this problem explicitly by training spiking neurons to assign the aggregate feedback signal to potentially effective clues. However, earlier aggregate-label learning algorithms were inefficient because of their heavy computation, while recent algorithms that solve the efficiency problem may fail to learn because they cannot always find adjustment points. We therefore propose a membrane voltage slope guided algorithm (VSG) to cope with this limitation. Depending directly on the membrane voltage when locating the key points for weight adjustment lets VSG avoid intensive computation; more importantly, because the membrane voltage is always available, an adjustment point can never be lost. Experimental results show that the proposed algorithm can correlate delayed feedback signals with the effective clues embedded in background spiking activity, and it also achieves excellent performance on real medical and speech classification datasets. This superior performance makes it a meaningful reference for aggregate-label learning in spiking neural networks.
Affiliation(s)
- Lvhui Hu, School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Xin Liao, Information Center, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
4. Zhang A, Li X, Gao Y, Niu Y. Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1986-1995. PMID: 34106868; DOI: 10.1109/tnnls.2021.3084955.
Abstract
The biologically discovered intrinsic plasticity (IP) learning rule, which changes the intrinsic excitability of an individual neuron by adaptively tuning its firing threshold, has been shown to be crucial for efficient information processing. However, this learning rule needs extra time for updating operations at each step, causing extra energy consumption and reducing computational efficiency. The event-driven, spike-based coding strategy of spiking neural networks (SNNs), in which neurons are active only when driven by incoming spike trains, employs all-or-none pulses (spikes) to transmit information, contributing to sparseness in neuron activations. In this article, we propose two event-driven IP learning rules, namely input-driven and self-driven IP, based on basic IP learning. Input-driven means that an IP update occurs only when the neuron receives spiking inputs from its presynaptic neurons, whereas self-driven means that an IP update occurs only when the neuron generates a spike. A spiking convolutional neural network (SCNN) is developed based on the ANN2SNN conversion method, i.e., converting a well-trained rate-based artificial neural network to an SNN by directly mapping the connection weights. By comparing the computational performance of SCNNs with different IP rules on the recognition of the MNIST, FashionMNIST, Cifar10, and SVHN datasets, we demonstrate that the two event-based IP rules can remarkably reduce IP updating operations, contributing to sparse computation and accelerating the recognition process. This work may give insights into the modeling of brain-inspired SNNs for low-power applications.
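As a rough illustration of the self-driven variant, consider a discrete-time LIF neuron whose threshold is nudged only at its own spike events. The threshold update rule and all constants here are stand-ins for illustration, not the paper's IP rule:

```python
def simulate(inputs, v_th=1.0, leak=0.9, eta=0.05, self_driven=True):
    """LIF neuron whose firing threshold adapts (intrinsic plasticity)
    only on spike events, so IP updates are as sparse as the spikes."""
    v, spikes, updates = 0.0, [], 0
    for x in inputs:
        v = leak * v + x            # leaky integration of input current
        if v >= v_th:
            spikes.append(1)
            v = 0.0                 # reset after firing
            if self_driven:         # event-driven IP: adapt only on a spike
                v_th += eta         # raise threshold to regulate firing rate
                updates += 1
        else:
            spikes.append(0)
    return spikes, v_th, updates
```

Compared with basic IP, which would update the threshold at every time step, the update count here is bounded by the number of output spikes, which is the source of the claimed reduction in updating operations.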
5. Zhang Y, Qu H, Luo X, Chen Y, Wang Y, Zhang M, Li Z. A new recursive least squares-based learning algorithm for spiking neurons. Neural Netw 2021; 138:110-125. PMID: 33636484; DOI: 10.1016/j.neunet.2021.01.016.
Abstract
Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of their temporal coding makes it an arduous task to put forward an effective supervised learning algorithm, which still puzzles researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate desired spatio-temporal spike trains. During learning, the weight update is driven by a cost function defined by the difference between the membrane potential and the firing threshold. The amount of weight modification depends not only on the current error function but also on previous error functions, which are evaluated with the current weights. To improve learning performance, we integrate modified synaptic delay learning into the proposed RLSBLR. We conduct experiments in different settings, such as spike train lengths, numbers of inputs, firing rates, noise levels, and learning parameters, to thoroughly investigate the performance of this learning algorithm. The proposed RLSBLR is compared with the competitive Perceptron-Based Spiking Neuron Learning Rule (PBSNLR) and Remote Supervised Method (ReSuMe). Experimental results demonstrate that the proposed RLSBLR achieves higher learning accuracy, higher efficiency, and better robustness against different types of noise. In addition, we apply the proposed RLSBLR to the open-source TIDIGITS database, and the results show that our algorithm has good practical application performance.
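The recursion behind any RLS-based rule is the standard one. Below is a dependency-free sketch for a plain linear readout; in the paper the recursion would be driven by the membrane-potential error rather than a regression target, so treat this as the generic building block only:

```python
def rls_step(w, P, x, target, lam=1.0):
    """One recursive-least-squares update of weights w for a linear
    readout y = w . x. P tracks the inverse input correlation matrix
    and lam is the forgetting factor (1.0 = remember everything)."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [p / denom for p in Px]                        # gain vector
    err = target - sum(w[i] * x[i] for i in range(n))  # a-priori error
    w = [w[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]
    return w, P
```

Unlike plain gradient descent, each update accounts for all previously seen inputs through P, which is what gives RLS-style rules their fast convergence; the trade-off is the O(n²) cost of maintaining P.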
Affiliation(s)
- Yun Zhang, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yuchen Wang, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Malu Zhang, Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Zefang Li, China Coal Research Institute, Beijing 100013, PR China
6. Rashvand P, Ahmadzadeh MR, Shayegh F. Design and Implementation of a Spiking Neural Network with Integrate-and-Fire Neuron Model for Pattern Recognition. Int J Neural Syst 2020; 31:2050073. PMID: 33353527; DOI: 10.1142/s0129065720500732.
Abstract
In contrast to previous artificial neural networks (ANNs), spiking neural networks (SNNs) operate on temporal coding approaches. In the proposed SNN, the number of neurons, the neuron model, the encoding method, and the learning algorithm design are described clearly. It is also discussed how optimizing the SNN parameters based on physiology, and maximizing the information they pass, leads to a more robust network. In this paper, inspired by the "center-surround" structure of the receptive fields in the retina and the amount of overlap between them, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network by a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy with 60 input neurons was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was taken as input to the network, 600 input neurons were required and the accuracy was 90.5%. Next, 14 structural features were used as input; the number of input neurons therefore decreased to 210 and the accuracy increased to 95%, meaning that an SNN with fewer input neurons and good accuracy was implemented. The ABIDE1 dataset was also applied to the proposed SNN. Of the 184 samples, 79 are from healthy individuals and 105 from individuals with autism. One characteristic that can differentiate these two classes is the entropy of the data, so Shannon entropy is used for feature extraction. Applying these values to the proposed SNN, an accuracy of 84.42% was achieved in only 120 iterations, which is a good result compared to recent work.
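A toy version of the two named ingredients, time-to-first-spike encoding and IF integration, is given below. The receptive-field front-end and the training rule are omitted, and `t_max` plus the example weights are arbitrary choices for illustration:

```python
def time_to_first_spike(intensity, t_max=10.0):
    """Encode a [0, 1] intensity as a spike latency:
    stronger inputs fire earlier."""
    return t_max * (1.0 - intensity)

def if_first_spike(spike_times, weights, v_th=1.0):
    """Non-leaky integrate-and-fire neuron: accumulate weighted inputs
    in time order and return the time the threshold is first crossed."""
    v = 0.0
    for t, w in sorted(zip(spike_times, weights)):
        v += w
        if v >= v_th:
            return t
    return None  # the neuron never fires
```

Because information sits in the latency of a single spike per input, the output spike time itself can act as the classification signal, which is what makes this coding scheme both fast and sparse.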
Affiliation(s)
- Parvaneh Rashvand, Digital Signal Processing Research Lab, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
- Mohammad Reza Ahmadzadeh, Digital Signal Processing Research Lab, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
- Farzaneh Shayegh, Digital Signal Processing Research Lab, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
7. Zhang M, Wu J, Belatreche A, Pan Z, Xie X, Chua Y, Li G, Qu H, Li H. Supervised learning in spiking neural networks with synaptic delay-weight plasticity. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.03.079.
8. Yu Z, Chen F, Liu JK. Sampling-Tree Model: Efficient Implementation of Distributed Bayesian Inference in Neural Networks. IEEE Trans Cogn Dev Syst 2020. DOI: 10.1109/tcds.2019.2927808.
9. Taherkhani A, Cosma G, McGinnity TM. Optimization of Output Spike Train Encoding for a Spiking Neuron Based on its Spatio-Temporal Input Pattern. IEEE Trans Cogn Dev Syst 2020. DOI: 10.1109/tcds.2019.2909355.
10. Zuo L, Chen Y, Zhang L, Chen C. A spiking neural network with probability information transmission. Neurocomputing 2020. DOI: 10.1016/j.neucom.2020.01.109.
11. Hussain I, Thounaojam DM. SpiFoG: an efficient supervised learning algorithm for the network of spiking neurons. Sci Rep 2020; 10:13122. PMID: 32753645; PMCID: PMC7403331; DOI: 10.1038/s41598-020-70136-5.
Abstract
Supervised learning in spiking neural networks (SNNs) has been researched for a couple of decades with the aim of improving computational efficiency. However, evolutionary-algorithm-based supervised learning for SNNs has not been investigated thoroughly and is still in its infancy. This paper introduces an efficient algorithm (SpiFoG) that trains multilayer feed-forward SNNs in a supervised manner using an elitist floating-point genetic algorithm with hybrid crossover. Evidence from neuroscience suggests that the brain uses spike times with random synaptic delays for information processing; therefore, a leaky integrate-and-fire spiking neuron with random synaptic delays is used in this research. SpiFoG allows both excitatory and inhibitory neurons by permitting a mixture of positive and negative synaptic weights, and the random synaptic delays are trained efficiently alongside the weights. Moreover, the computational efficiency of SpiFoG was increased by reducing the total simulation time and enlarging the time step, since a larger time step within the same total simulation time requires fewer iterations. SpiFoG was benchmarked on the Iris and WBC datasets drawn from the UCI machine learning repository and performed better than state-of-the-art techniques.
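The skeleton of an elitist real-coded GA can be sketched in a few lines. Blend crossover plus Gaussian mutation stand in for SpiFoG's hybrid crossover here; in the paper the genome would hold synaptic weights and delays and the fitness would come from spike-train error, whereas this sketch minimizes a toy quadratic:

```python
import random

def evolve(fitness, dim, pop_size=20, gens=80, elite=2, sigma=0.1, seed=0):
    """Elitist real-coded GA: the best `elite` individuals survive each
    generation unchanged; children combine blend crossover with
    Gaussian mutation. Minimizes `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:elite]                              # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from top half
            alpha = rng.random()
            child = [alpha * x + (1 - alpha) * y + rng.gauss(0, sigma)
                     for x, y in zip(a, b)]            # crossover + mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

Because the elite individuals are copied over verbatim, the best fitness found so far can never regress, which is the property that distinguishes an elitist GA from a plain generational one.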
Affiliation(s)
- Irshed Hussain, Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam 788010, India
- Dalton Meitei Thounaojam, Computer Vision Laboratory, Department of Computer Science and Engineering, National Institute of Technology Silchar, Silchar, Assam 788010, India
12. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. PMID: 32146356; DOI: 10.1016/j.neunet.2020.02.011.
Abstract
As a new brain-inspired computational model of artificial neural networks, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicitly nonlinear mechanisms, formulating efficient supervised learning algorithms for spiking neural networks is difficult and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of the algorithms. A performance comparison of spike train learning for some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and present a new taxonomy of supervised learning algorithms based on these criteria. Finally, some future research directions in this field are outlined.
Affiliation(s)
- Xiangwen Wang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
- Xianghong Lin, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
- Xiaochao Dang, College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China
13. Pan Z, Chua Y, Wu J, Zhang M, Li H, Ambikairajah E. An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks. Front Neurosci 2020; 13:1420. PMID: 32038132; PMCID: PMC6987407; DOI: 10.3389/fnins.2019.01420.
Abstract
The auditory front-end is an integral part of a spiking neural network (SNN) performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech and audio, into an efficient, effective, and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the perceptual components of the human auditory system: the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we also built and published spike versions of two speech datasets, Spike-TIDIGITS and Spike-TIMIT, for use and benchmarking in future SNN research.
Affiliation(s)
- Zihan Pan, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Yansong Chua, Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore, Singapore
- Jibin Wu, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Malu Zhang, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Haizhou Li, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Eliathamby Ambikairajah, School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW, Australia
14. Luo X, Qu H, Zhang Y, Chen Y. First Error-Based Supervised Learning Algorithm for Spiking Neural Networks. Front Neurosci 2019; 13:559. PMID: 31244594; PMCID: PMC6563788; DOI: 10.3389/fnins.2019.00559.
Abstract
Neural circuits respond to multiple sensory stimuli by firing precisely timed spikes. Inspired by this phenomenon, spike-timing-based spiking neural networks (SNNs) have been proposed to process and memorize spatiotemporal spike patterns. However, the response speed and accuracy of existing SNN learning algorithms still fall short of the human brain. To further improve the performance of learning precisely timed spikes, we propose a new weight updating mechanism that always adjusts the synaptic weights at the first wrong output spike time. The proposed learning algorithm can accurately adjust the synaptic weights that contribute to the membrane potential at desired and non-desired firing times. Experimental results demonstrate that the proposed algorithm shows higher accuracy, better robustness, and lower computational resource usage than the remote supervised method (ReSuMe) and the spike pattern association neuron (SPAN), which are classic sequence learning algorithms. In addition, the SNN-based computational model equipped with the proposed learning method achieves better recognition results in a speech recognition task than other bio-inspired baseline systems.
Affiliation(s)
- Xiaoling Luo, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Hong Qu, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yun Zhang, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yi Chen, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
15. Zhang M, Qu H, Belatreche A, Chen Y, Yi Z. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:123-137. PMID: 29993588; DOI: 10.1109/tnnls.2018.2833077.
Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, the existing learning methods, used to realize such computation, often result in relatively low accuracy performance and poor robustness to noise. In order to address these limitations, we propose a novel highly effective and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than the current state-of-the-art spiking neuron learning methods. While the traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called skip scan training strategy, that selectively identifies the time steps when to apply weight adjustment. The proposed strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is improved further to help mitigate the impact of the input noise on the timing accuracy and reliability of the neuron firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and is further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise.
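The central idea, driving learning with the gap between membrane potential and firing threshold rather than with spike-time differences, can be caricatured in a few lines. This is a hypothetical error measure for illustration only, not the published MemPo-Learn rule:

```python
def mempo_error(v_trace, desired, v_th=1.0):
    """Membrane-potential-driven error: at a desired spike time the
    potential should reach threshold; at all other times it should
    stay below threshold. Returns the total shortfall/overshoot."""
    err = 0.0
    for t, v in enumerate(v_trace):
        if t in desired and v < v_th:
            err += v_th - v      # too low where a spike is wanted
        elif t not in desired and v >= v_th:
            err += v - v_th      # crossed threshold where none is wanted
    return err
```

Because the potential is a continuous quantity available at every time step, this error is defined even when the neuron fires no spike at all, which is exactly the failure mode that purely spike-driven error functions struggle with.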
16. Dong M, Huang X, Xu B. Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network. PLoS One 2018; 13:e0204596. PMID: 30496179; PMCID: PMC6264808; DOI: 10.1371/journal.pone.0204596.
Abstract
Speech recognition (SR) has been improved significantly by artificial neural networks (ANNs), but ANNs have the drawbacks of biological implausibility and excessive power consumption because of their nonlocal transfer of real-valued errors and weights. Spiking neural networks (SNNs) have the potential to overcome these drawbacks thanks to their efficient spike communication and their natural ability to use the kinds of synaptic plasticity rules found in the brain for weight modification. However, existing SNN models for SR either performed poorly or were trained in biologically implausible ways. In this paper, we present a biologically inspired convolutional SNN model for SR. The network adopts the time-to-first-spike coding scheme for fast and efficient information processing. A biological learning rule, spike-timing-dependent plasticity (STDP), is used to adjust the synaptic weights of convolutional neurons to form receptive fields in an unsupervised way. In the convolutional structure, a strategy of local weight sharing is introduced, which can lead to better feature extraction of speech signals than global weight sharing. We first evaluated the SNN model with a linear support vector machine (SVM) on the TIDIGITS dataset, where it achieved 97.5% accuracy, comparable to the best results of ANNs. Deep analysis of the network outputs showed that not only are the output data more linearly separable, they also have fewer dimensions and become sparse. To further confirm the validity of our model, we trained it on a more difficult recognition task based on the TIMIT dataset, where it achieved a high performance of 93.8%. Moreover, a linear spike-based classifier, the tempotron, can also achieve accuracies very close to those of the SVM on both tasks. These results demonstrate that an STDP-based convolutional SNN model equipped with local weight sharing and temporal coding is capable of solving the SR task accurately and efficiently.
Affiliation(s)
- Meng Dong, School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, China; Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Xuhui Huang, Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Bo Xu, Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
17
Wu J, Chua Y, Zhang M, Li H, Tan KC. A Spiking Neural Network Framework for Robust Sound Classification. Front Neurosci 2018; 12:836. PMID: 30510500; PMCID: PMC6252336; DOI: 10.3389/fnins.2018.00836.
Abstract
Environmental sounds form part of our daily life. With the advancement of deep learning models and the abundance of training data, the performance of automatic sound classification (ASC) systems has improved significantly in recent years. However, the high computational cost, and hence high power consumption, remains a major hurdle for large-scale implementation of ASC systems on mobile and wearable devices. Motivated by the observation that humans are highly effective and consume little power whilst analyzing complex audio scenes, we propose a biologically plausible ASC framework, namely SOM-SNN. This framework uses the unsupervised self-organizing map (SOM) for representing frequency contents embedded within the acoustic signals, followed by an event-based spiking neural network (SNN) for spatiotemporal spiking pattern classification. We report experimental results on the RWCP environmental sound and TIDIGITS spoken digits datasets, which demonstrate competitive classification accuracies over other deep learning and SNN-based models. The SOM-SNN framework is also shown to be highly robust to corrupting noise after multi-condition training, whereby the model is trained with noise-corrupted sound samples. Moreover, we discover the early decision-making capability of the proposed framework: an accurate classification can be made with only a partial presentation of the input.
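The SOM front end described here learns an ordered codebook of frequency patterns competitively. A minimal 1-D sketch of one SOM training step follows; it is a generic illustration under assumed names and hyperparameters (`lr`, `sigma`), not the SOM-SNN paper's code:

```python
import math

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One update of a 1-D self-organizing map: find the best-matching unit
    (BMU) for input vector x, then move every node toward x, scaled by a
    Gaussian neighborhood centered on the BMU so nearby nodes learn similar
    patterns."""
    dists = [math.dist(w, x) for w in weights]
    bmu = dists.index(min(dists))                           # winning node
    updated = []
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood factor
        updated.append([wj + lr * h * (xj - wj) for wj, xj in zip(w, x)])
    return updated, bmu
```

In a SOM-then-SNN pipeline, the index of the winning node per time frame can then be converted into spike events for the downstream spiking classifier.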
Affiliation(s)
- Jibin Wu
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Yansong Chua
  - Institute for Infocomm Research, A*STAR, Singapore, Singapore
- Malu Zhang
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Haizhou Li
  - Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
  - Institute for Infocomm Research, A*STAR, Singapore, Singapore
- Kay Chen Tan
  - Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
18
Taherkhani A, Belatreche A, Li Y, Maguire LP. A Supervised Learning Algorithm for Learning Precise Timing of Multiple Spikes in Multilayer Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2018; 29:5394-5407. PMID: 29993611; DOI: 10.1109/TNNLS.2018.2797801.
Abstract
There is biological evidence that information is coded through the precise timing of spikes in the brain. However, training a population of spiking neurons in a multilayer network to fire at multiple precise times remains a challenging task. Delay learning, and the effect of a delay on weight learning in a spiking neural network (SNN), have not been investigated thoroughly. This paper proposes a novel biologically plausible supervised learning algorithm for learning precisely timed multiple spikes in multilayer SNNs. Based on the spike-timing-dependent plasticity learning rule, the proposed method trains an SNN through the synergy between weight and delay learning. The weights of the hidden and output neurons are adjusted in parallel. The proposed method captures the contribution of synaptic delays to the learning of synaptic weights. Interaction between different layers of the network is realized through biofeedback signals sent by the output neurons. The trained SNN is used for the classification of spatiotemporal input patterns. The proposed method also trains the spiking network not to fire spikes at undesired times that contribute to misclassification. Experimental evaluation on benchmark data sets from the UCI machine learning repository shows that the proposed method achieves results comparable to those of classical rate-based methods such as deep belief networks and autoencoder models. Moreover, the proposed method can achieve higher classification accuracies than a single-layer SNN and a similar multilayer SNN.
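Why a synaptic delay is a useful learnable parameter alongside the weight can be seen from a simple membrane-potential model: shifting a delay moves a postsynaptic potential (PSP) in time, so delays control *when* inputs contribute while weights control *how much*. The sketch below is a generic alpha-kernel illustration under assumed names and a chosen `tau`, not the cited algorithm:

```python
import math

def membrane_potential(t, spike_times, weights, delays, tau=10.0):
    """Potential of a simple integrate-and-fire-style neuron at time t: each
    input spike arrives after its synaptic delay and adds a weighted,
    alpha-shaped PSP that peaks tau ms after arrival.  Changing a delay
    shifts its PSP in time, which is the extra degree of freedom a joint
    weight-and-delay learning rule can exploit."""
    v = 0.0
    for t_spk, w, d in zip(spike_times, weights, delays):
        s = t - (t_spk + d)                            # time since delayed arrival
        if s >= 0:
            v += w * (s / tau) * math.exp(1 - s / tau)  # alpha PSP, peak value w at s=tau
    return v
```

A supervised rule for precise spike timing can then nudge weights and delays so that the summed potential crosses threshold exactly at the desired output times and stays below it elsewhere.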