1.
Wu Y, Shi B, Zheng Z, Zheng H, Yu F, Liu X, Luo G, Deng L. Adaptive spatiotemporal neural networks through complementary hybridization. Nat Commun 2024; 15:7355. [PMID: 39191782] [PMCID: PMC11350166] [DOI: 10.1038/s41467-024-51641-x]
Abstract
Processing spatiotemporal data sources with both high spatial dimension and rich temporal information is a ubiquitous need in machine intelligence. Recurrent neural networks in the machine learning domain and bio-inspired spiking neural networks in the neuromorphic computing domain are two promising candidate models for dealing with spatiotemporal data, via extrinsic dynamics and intrinsic dynamics, respectively. Nevertheless, these networks follow disparate modeling paradigms and deliver different performance profiles, making it hard for either to cover the diverse data sources and performance requirements encountered in practice. Constructing a unified modeling framework that can effectively and adaptively process variable spatiotemporal data in different situations remains challenging. In this work, we propose hybrid spatiotemporal neural networks, created by combining recurrent neural networks and spiking neural networks under a unified surrogate-gradient learning framework and a Hessian-aware neuron selection method. By flexibly tuning the ratio between the two types of neurons, the hybrid model demonstrates better adaptive ability in balancing different performance metrics, including accuracy, robustness, and efficiency, on several typical benchmarks, and generally outperforms conventional single-paradigm recurrent neural networks and spiking neural networks. Furthermore, we demonstrate the great potential of the proposed network on a robotic task in varying environments. As a proof of concept, the proposed hybrid model provides a generic modeling route for processing spatiotemporal data sources in the open world.
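The surrogate-gradient idea underlying this paper can be sketched as follows: the spike threshold is non-differentiable, so training replaces its derivative with a smooth stand-in. This is a minimal illustrative sketch of a leaky integrate-and-fire (LIF) neuron with a rectangular surrogate derivative; the time constant, threshold, and input values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def lif_step(v, x, w, tau=2.0, v_th=1.0):
    """One step of a leaky integrate-and-fire neuron.

    v: membrane potential, x: input vector, w: input weights.
    Returns (new potential, binary spike)."""
    v = v / tau + np.dot(w, x)          # leak, then integrate input
    spike = float(v >= v_th)            # hard threshold (non-differentiable)
    v = v * (1.0 - spike)               # hard reset after a spike
    return v, spike

def surrogate_grad(v, v_th=1.0, width=1.0):
    """Rectangular surrogate derivative of the spike function, used in
    place of the ill-defined gradient of the hard threshold."""
    return float(abs(v - v_th) < width / 2) / width

# drive the neuron with a constant input and count its spikes
v, spikes = 0.0, []
w, x = np.array([0.6]), np.array([1.0])
for _ in range(10):
    v, s = lif_step(v, x, w)
    spikes.append(s)
```

In the forward pass the hard threshold is kept, so the network still emits binary spikes; only the backward pass uses `surrogate_grad`, which is what makes end-to-end gradient training of the spiking units possible.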
Affiliation(s)
- Yujie Wu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bizhao Shi
- School of Computer Science, Peking University, Beijing, China
- Center for Energy-Efficient Computing and Applications, Peking University, Beijing, China
- Zhong Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Hanle Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Fangwen Yu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Guojie Luo
- School of Computer Science, Peking University, Beijing, China
- Center for Energy-Efficient Computing and Applications, Peking University, Beijing, China
- Lei Deng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
2.
Gorgan Mohammadi A, Ganjtabesh M. On computational models of theory of mind and the imitative reinforcement learning in spiking neural networks. Sci Rep 2024; 14:1945. [PMID: 38253595] [PMCID: PMC10803361] [DOI: 10.1038/s41598-024-52299-7]
Abstract
Theory of Mind refers to the ability to infer others' mental states, and it plays a crucial role in social cognition and learning. Biological evidence indicates that complex circuits are involved in this ability, including the mirror neuron system, which underpins imitation and action understanding and thereby enables learning through observing others. To simulate this imitative learning behavior, a Theory-of-Mind-based Imitative Reinforcement Learning (ToM-based ImRL) framework is proposed. Employing bio-inspired spiking neural networks and the mechanisms of the mirror neuron system, ToM-based ImRL is a bio-inspired computational model that enables an agent to effectively learn how to act in an interactive environment by observing an expert, inferring its goals, and imitating its behavior. This paper reviews computational attempts at modeling ToM and explains the proposed ToM-based ImRL framework, which is tested in the River Raid game from the Atari 2600 series.
Affiliation(s)
- Ashena Gorgan Mohammadi
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
- Mohammad Ganjtabesh
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
3.
Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.026]
4.
Wu J, Chua Y, Zhang M, Li G, Li H, Tan KC. A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:446-460. [PMID: 34288879] [DOI: 10.1109/tnnls.2021.3095724]
Abstract
Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures. However, due to the nondifferentiable nature of spiking neuronal functions, the standard error backpropagation algorithm is not directly applicable to SNNs. In this work, we propose a tandem learning framework that consists of an SNN and an artificial neural network (ANN) coupled through weight sharing. The ANN is an auxiliary structure that facilitates the error backpropagation for the training of the SNN at the spike-train level. To this end, we consider the spike count as the discrete neural representation in the SNN and design an ANN neuronal activation function that can effectively approximate the spike count of the coupled SNN. The proposed tandem learning rule demonstrates competitive pattern recognition and regression capabilities on both the conventional frame- and event-based vision datasets, with at least an order of magnitude reduced inference time and total synaptic operations over other state-of-the-art SNN implementations. Therefore, the proposed tandem learning rule offers a novel solution to training efficient, low latency, and high-accuracy deep SNNs with low computing resources.
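The core of the tandem idea above is an ANN activation that approximates the spike count of a coupled spiking neuron over a fixed time window. As a hedged sketch (the window length, threshold, and reset-by-subtraction scheme are illustrative assumptions, not the paper's exact formulation), a clipped linear unit tracks the count of a non-leaky integrate-and-fire neuron driven by a constant current:

```python
import numpy as np

T, V_TH = 10, 1.0   # time window and firing threshold (assumed values)

def snn_spike_count(z):
    """Spike count of a non-leaky integrate-and-fire neuron driven by a
    constant input current z for T steps, with reset by subtraction."""
    v, count = 0.0, 0
    for _ in range(T):
        v += z
        if v >= V_TH:
            v -= V_TH
            count += 1
    return count

def ann_activation(z):
    """ANN-side surrogate: a clipped linear unit that approximates the
    spike count of the coupled spiking neuron in the same window."""
    return np.clip(np.floor(z * T / V_TH), 0, T)

# the two representations agree across a range of input currents
for z in [0.0, 0.15, 0.35, 0.8, 1.5]:
    assert ann_activation(z) == snn_spike_count(z), z
```

Because the ANN activation is differentiable almost everywhere, errors can be backpropagated through the ANN side while the shared weights drive the spiking side at inference time.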
5.
Noshad A, Fallahi S. A new hybrid framework based on deep neural networks and JAYA optimization algorithm for feature selection using SVM applied to classification of acute lymphoblastic leukaemia. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2157748]
Affiliation(s)
- Ali Noshad
- Department of Engineering, Polytechnic University of Milan, Milan, Italy
- Saeed Fallahi
- Department of Mathematics, Salman Farsi University of Kazerun, Kazerun, Iran
6.
Zhou Y, Wang Y, Zhuge F, Guo J, Ma S, Wang J, Tang Z, Li Y, Miao X, He Y, Chai Y. A Reconfigurable Two-WSe2-Transistor Synaptic Cell for Reinforcement Learning. Adv Mater 2022; 34:e2107754. [PMID: 35104378] [DOI: 10.1002/adma.202107754]
Abstract
Reward-modulated spike-timing-dependent plasticity (R-STDP) is a brain-inspired reinforcement learning (RL) rule with potential for decision-making tasks and artificial general intelligence. However, hardware implementation of the reward-modulation process in R-STDP usually requires complicated Si complementary metal-oxide-semiconductor (CMOS) circuit design, which causes high power consumption and a large footprint. Here, a design with two synaptic transistors (2T) connected in a parallel structure is experimentally demonstrated. The 2T unit, based on WSe2 ferroelectric transistors, exhibits reconfigurable polarity behavior, where one channel can be tuned as n-type and the other as p-type owing to nonvolatile ferroelectric polarization. In this way, opposite synaptic weight-update behaviors with multilevel (>6 bit) conductance states, ultralow nonlinearity (0.56/-1.23), and a large Gmax/Gmin ratio of 30 are realized. By applying a positive/negative reward to the (anti-)STDP component of the 2T cell, R-STDP learning rules are realized for training a spiking neural network, demonstrated by solving the classical cart-pole problem, pointing the way toward low-power (32 pJ per forward process) and highly area-efficient (100 µm²) hardware chips for reinforcement learning.
Affiliation(s)
- Yue Zhou
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Yasai Wang
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Fuwei Zhuge
- School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430000, China
- Jianmiao Guo
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Sijie Ma
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Jingli Wang
- Frontier Institute of Chip and System, Fudan University, Shanghai, 200433, China
- Zijian Tang
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Yi Li
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Xiangshui Miao
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Yuhui He
- Wuhan National Laboratory for Optoelectronics, School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, 430000, China
- Yang Chai
- Department of Applied Physics, The Hong Kong Polytechnic University, Hong Kong, 999077, China
7.
Hu L, Liao X. Voltage slope guided learning in spiking neural networks. Front Neurosci 2022; 16:1012964. [PMID: 36440266] [PMCID: PMC9685168] [DOI: 10.3389/fnins.2022.1012964]
Abstract
A thorny problem in machine learning is how to extract useful clues related to delayed feedback signals from cluttered input activity, known as the temporal credit-assignment problem. Aggregate-label learning algorithms represent this problem explicitly by training spiking neurons to assign the aggregate feedback signal to potentially effective clues. However, earlier aggregate-label learning algorithms suffered from inefficiency due to their large amount of computation, while recent algorithms that solve this problem may fail to learn because they cannot find adjustment points. We therefore propose a membrane-voltage-slope-guided algorithm (VSG) to cope with this limitation. By depending directly on the membrane voltage to locate the key points of weight adjustment, VSG avoids intensive computation; more importantly, because the membrane voltage is always available, an adjustment point can never be lost. Experimental results show that the proposed algorithm can correlate delayed feedback signals with the effective clues embedded in background spiking activity, and it also achieves excellent performance on real medical and speech classification datasets. This superior performance makes it a meaningful reference for aggregate-label learning in spiking neural networks.
Affiliation(s)
- Lvhui Hu
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Xin Liao
- Information Center, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
8.
Comsa IM, Potempa K, Versari L, Fischbacher T, Gesmundo A, Alakuijala J. Temporal Coding in Spiking Neural Networks With Alpha Synaptic Function: Learning With Backpropagation. IEEE Trans Neural Netw Learn Syst 2022; 33:5939-5952. [PMID: 33900924] [DOI: 10.1109/tnnls.2021.3071976]
Abstract
The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically plausible synaptic transfer function. In addition, we use trainable pulses that provide bias, add flexibility during training, and exploit the decayed part of the synaptic function. We show that such networks can be successfully trained on multiple data sets encoded in time, including MNIST. Our model outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. The spiking network spontaneously discovers two operating modes, mirroring the accuracy-speed tradeoff observed in human decision-making: a highly accurate but slow regime, and a fast but slightly lower accuracy regime. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks toward energy-efficient, state-based biologically inspired neural architectures. We provide open-source code for the model.
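The two ingredients described above, alpha-shaped postsynaptic potentials and a first-neuron-to-spike readout, can be sketched numerically. This is a simplified illustration on a dense time grid (the threshold, time constant, weights, and spike times are illustrative assumptions; the paper computes exact spike-time derivatives rather than a grid search):

```python
import numpy as np

def alpha_psp(t, t_spike, tau=1.0):
    """Alpha-shaped postsynaptic potential kernel: rises then decays,
    peaking one time constant after the presynaptic spike."""
    s = np.maximum(t - t_spike, 0.0)
    return s * np.exp(-s / tau)

def first_spike_time(weights, in_times, v_th=0.5,
                     t_grid=np.linspace(0, 10, 2001)):
    """Earliest time the weighted sum of alpha PSPs crosses threshold,
    or inf if it never does (dense-grid approximation)."""
    v = sum(w * alpha_psp(t_grid, ts) for w, ts in zip(weights, in_times))
    above = np.nonzero(v >= v_th)[0]
    return t_grid[above[0]] if above.size else np.inf

# two output neurons fed by the same two input spikes; the one with the
# larger weights reaches threshold first and wins the temporal readout
t_a = first_spike_time([1.0, 1.0], [0.0, 0.5])
t_b = first_spike_time([0.4, 0.4], [0.0, 0.5])
winner = 0 if t_a < t_b else 1
```

In this scheme the class label is carried by *when* the winning neuron fires, not by an activation magnitude, which is what allows backpropagation through spike times.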
9.
Haşegan D, Deible M, Earl C, D'Onofrio D, Hazan H, Anwar H, Neymotin SA. Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 2022; 16:1017284. [PMID: 36249482] [PMCID: PMC9563231] [DOI: 10.3389/fncom.2022.1017284]
Abstract
Artificial neural networks (ANNs) have been successfully trained to perform a wide range of sensory-motor behaviors. In contrast, the performance of spiking neuronal network (SNN) models trained to perform similar behaviors remains relatively suboptimal. In this work, we aimed to push the field of SNNs forward by exploring the potential of different learning mechanisms to achieve optimal performance. We trained SNNs to solve the CartPole reinforcement learning (RL) control problem using two learning mechanisms operating at different timescales: (1) spike-timing-dependent reinforcement learning (STDP-RL) and (2) evolutionary strategy (EVOL). Though the role of STDP-RL in biological systems is well established, several other mechanisms, though not fully understood, work in concert during learning in vivo. Recreating accurate models that capture the interaction of STDP-RL with these diverse learning mechanisms is extremely difficult. EVOL is an alternative method and has been successfully used in many studies to fit model neural responsiveness to electrophysiological recordings and, in some cases, for classification problems. One advantage of EVOL is that it may not need to capture all interacting components of synaptic plasticity and thus provides a better alternative to STDP-RL. Here, we compared the performance of each algorithm after training, which revealed EVOL as a powerful method for training SNNs to perform sensory-motor behaviors. Our modeling opens up new capabilities for SNNs in RL and could serve as a testbed for neurobiologists aiming to understand multi-timescale learning mechanisms and dynamics in neuronal circuits.
Affiliation(s)
- Daniel Haşegan
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, United States
- Matt Deible
- Department of Computer Science, University of Pittsburgh, Pittsburgh, PA, United States
- Christopher Earl
- Department of Computer Science, University of Massachusetts Amherst, Amherst, MA, United States
- David D'Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Hananel Hazan
- Allen Discovery Center, Tufts University, Boston, MA, United States
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, United States
10.
Lu S, Xu F. Linear leaky-integrate-and-fire neuron model based spiking neural networks and its mapping relationship to deep neural networks. Front Neurosci 2022; 16:857513. [PMID: 36090262] [PMCID: PMC9448910] [DOI: 10.3389/fnins.2022.857513]
Abstract
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability. Previous works have shown that converting Artificial Neural Networks (ANNs) into SNNs is a practical and efficient approach for implementing an SNN. However, the basic principles and theoretical groundwork for training an SNN without accuracy loss have been lacking. This paper establishes a precise mathematical mapping between the biological parameters of the linear Leaky-Integrate-and-Fire (LIF) model in SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs). The mapping relationship is analytically proven under certain conditions and demonstrated by simulation and real-data experiments. It can serve as the theoretical basis for combining the respective merits of the two categories of neural networks.
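The flavor of such a mapping can be demonstrated in its simplest special case: over a long window, the firing rate of a (here non-leaky, reset-by-subtraction) integrate-and-fire neuron driven by a constant current tracks a ReLU of that current. This is an illustrative sketch of the rate-to-ReLU correspondence, not the paper's full linear-LIF mapping, and the window length and threshold are assumed values:

```python
def if_rate(z, T=1000, v_th=1.0):
    """Empirical firing rate of a non-leaky integrate-and-fire neuron
    driven by constant input z, with reset by subtraction."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z
        while v >= v_th:    # subtract the threshold per emitted spike
            v -= v_th
            spikes += 1
    return spikes / T

def relu(z):
    return max(z, 0.0)

# the spike rate tracks a ReLU of the input current: negative inputs
# never fire, positive inputs fire proportionally to their magnitude
for z in [-0.5, 0.0, 0.25, 0.6]:
    assert abs(if_rate(z) - relu(z)) < 1e-2
```

Adding leak and a finite simulation window is what introduces the correction terms that a precise parameter mapping, like the one the paper derives, must account for.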
11.
Wang H, He Z, Wang T, He J, Zhou X, Wang Y, Liu L, Wu N, Tian M, Shi C. TripleBrain: A Compact Neuromorphic Hardware Core With Fast On-Chip Self-Organizing and Reinforcement Spike-Timing Dependent Plasticity. IEEE Trans Biomed Circuits Syst 2022; 16:636-650. [PMID: 35802542] [DOI: 10.1109/tbcas.2022.3189240]
Abstract
The human brain cortex is a rich source of inspiration for constructing efficient artificial cognitive systems. In this paper, we investigate incorporating multiple brain-inspired computing paradigms for a compact, fast, and high-accuracy neuromorphic hardware implementation. We propose the TripleBrain hardware core that tightly combines three common brain-inspired factors: spike-based processing and plasticity, the self-organizing map (SOM) mechanism, and the reinforcement learning scheme, to improve object recognition accuracy and processing throughput while keeping resource costs low. The proposed hardware core is fully event-driven to mitigate unnecessary operations, and enables various on-chip learning rules (including the proposed SOM-STDP & R-STDP rule and the R-SOM-STDP rule, regarded as two variants of our TripleBrain learning rule) with different accuracy-latency tradeoffs to satisfy user requirements. An FPGA prototype of the neuromorphic core was implemented and elaborately tested. It realized high-speed learning (1349 frame/s) and inference (2698 frame/s), and obtained comparably high recognition accuracies of 95.10%, 80.89%, 100%, 94.94%, 82.32%, 100% and 97.93% on the MNIST, ETH-80, ORL-10, Yale-10, N-MNIST, Poker-DVS and Posture-DVS datasets, respectively, while consuming only 4146 (7.59%) slices, 32 (3.56%) DSPs and 131 (24.04%) Block RAMs on a Xilinx Zynq-7045 FPGA chip. Our neuromorphic core is very attractive for real-time, resource-limited edge intelligent systems.
12.
A Spike Neural Network Model for Lateral Suppression of Spike-Timing-Dependent Plasticity with Adaptive Threshold. Appl Sci (Basel) 2022. [DOI: 10.3390/app12125980]
Abstract
To address the practical constraints of high resource occupancy and complex computation in existing Spike Neural Network (SNN) image-classification models, and to seek a more lightweight and efficient machine-vision solution, this paper proposes an adaptive-threshold SNN model with lateral inhibition of Spike-Timing-Dependent Plasticity (STDP). Grayscale images are converted into spike sequences by convolution normalization and time-to-first-spike coding. The network self-organizes its classification by combining the classical STDP algorithm with a lateral-suppression algorithm, and overfitting is effectively suppressed by introducing an adaptive threshold. Experimental results on the MNIST dataset show that, compared with a traditional SNN classification model, the complexity of the weight-update algorithm is reduced from O(n²) to O(1) while the accuracy remains stable at about 96%. The proposed model facilitates migrating the software algorithm onto the bottom layer of a hardware platform, and can serve as a reference for efficient, low-power edge-computing solutions on small intelligent hardware terminals.
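The interplay of the three mechanisms named above, STDP-style weight updates, lateral inhibition, and an adaptive threshold, can be sketched with a toy winner-take-all layer. All constants, class names, and the simplified rate-free STDP rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

class WTALayer:
    """Toy competitive layer: the most strongly driven neuron fires,
    suppresses the others (lateral inhibition), raises its own
    threshold (adaptive threshold / homeostasis), and updates its
    weights with a simplified pair-based STDP rule."""

    def __init__(self, n_in, n_out, lr=0.05, theta_plus=0.1):
        self.w = rng.random((n_out, n_in))
        self.theta = np.zeros(n_out)       # adaptive threshold offsets
        self.lr, self.theta_plus = lr, theta_plus

    def step(self, x):
        drive = self.w @ x - self.theta    # threshold penalizes frequent winners
        winner = int(np.argmax(drive))     # lateral inhibition: one neuron spikes
        # STDP-like update: potentiate synapses from active inputs,
        # depress synapses from silent ones
        self.w[winner] += self.lr * (x - self.w[winner])
        self.theta[winner] += self.theta_plus
        return winner

layer = WTALayer(n_in=4, n_out=2)
a, b = np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])
winners = [layer.step(p) for p in [a, b] * 50]
assert layer.step(a) != layer.step(b)  # the two patterns end up claimed by different neurons
```

The adaptive threshold is what forces the neurons to share the input space: a neuron that wins too often becomes harder to excite, which pushes the other neuron to claim the remaining pattern.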
13.
Zhang M, Wang J, Wu J, Belatreche A, Amornpaisannon B, Zhang Z, Miriyala VPK, Qu H, Chua Y, Carlson TE, Li H. Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:1947-1958. [PMID: 34534091] [DOI: 10.1109/tnnls.2021.3110991]
Abstract
Spiking neural networks (SNNs) use spatiotemporal spike patterns to represent and transmit information, which are not only biologically realistic but also suitable for ultralow-power event-driven neuromorphic implementation. Just like other deep learning techniques, deep SNNs (DeepSNNs) benefit from the deep architecture. However, the training of DeepSNNs is not straightforward because the well-studied error backpropagation (BP) algorithm is not directly applicable. In this article, we first establish an understanding as to why error BP does not work well in DeepSNNs. We then propose a simple yet efficient rectified linear postsynaptic potential function (ReL-PSP) for spiking neurons and a spike-timing-dependent BP (STDBP) learning algorithm for DeepSNNs where the timing of individual spikes is used to convey information (temporal coding), and learning (BP) is performed based on spike timing in an event-driven manner. We show that DeepSNNs trained with the proposed single spike time-based learning algorithm can achieve the state-of-the-art classification accuracy. Furthermore, by utilizing the trained model parameters obtained from the proposed STDBP learning algorithm, we demonstrate ultralow-power inference operations on a recently proposed neuromorphic inference accelerator. The experimental results also show that the neuromorphic hardware consumes 0.751 mW of the total power consumption and achieves a low latency of 47.71 ms to classify an image from the Modified National Institute of Standards and Technology (MNIST) dataset. Overall, this work investigates the contribution of spike timing dynamics for information encoding, synaptic plasticity, and decision-making, providing a new perspective to the design of future DeepSNNs and neuromorphic hardware.
14.
Javanshir A, Nguyen TT, Mahmud MAP, Kouzani AZ. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput 2022; 34:1289-1328. [PMID: 35534005] [DOI: 10.1162/neco_a_01499]
Abstract
Artificial neural networks (ANNs) have experienced rapid advancement owing to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs by drawing inspiration from mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient and brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands a large amount of power and time. Therefore, hardware designers have developed neuromorphic platforms that execute SNNs in an approach combining fast processing and low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their advantages, such as high flexibility, short design time, and excellent stability. This review describes recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training methods. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
Affiliation(s)
- Thanh Thi Nguyen
- School of Information Technology, Deakin University (Burwood Campus), Burwood, VIC 3125, Australia
- M A Parvez Mahmud
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
- Abbas Z Kouzani
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
15.
Yan Y, Chu H, Jin Y, Huan Y, Zou Z, Zheng L. Backpropagation With Sparsity Regularization for Spiking Neural Network Learning. Front Neurosci 2022; 16:760298. [PMID: 35495028] [PMCID: PMC9047717] [DOI: 10.3389/fnins.2022.760298]
Abstract
The spiking neural network (SNN) is a possible pathway toward low-power, energy-efficient processing and computing that exploits the spike-driven and sparsity features of biological systems. This article proposes a sparsity-driven SNN learning algorithm, namely backpropagation with sparsity regularization (BPSR), aiming to achieve improved spiking and synaptic sparsity. Backpropagation incorporating spiking regularization is used to minimize the spiking firing rate with guaranteed accuracy. The backpropagation captures temporal information and extends to spiking recurrent layers to support brain-like structure learning. A rewiring mechanism with synaptic regularization is suggested to further mitigate the redundancy of the network structure: rewiring based on weight and gradient regulates the pruning and growth of synapses. Experimental results demonstrate that the network learned by BPSR has synaptic sparsity and is highly similar to the biological system. It not only balances accuracy and firing rate, but also facilitates SNN learning by suppressing information redundancy. We evaluate the proposed BPSR on the visual datasets MNIST, N-MNIST, and CIFAR10, and further test it on the MIT-BIH and gas-sensor datasets. The results show that our algorithm achieves comparable or superior accuracy to related works, with sparse spikes and synapses.
Affiliation(s)
- Zhuo Zou
- School of Information Science and Technology, Fudan University, Shanghai, China
- Lirong Zheng
- School of Information Science and Technology, Fudan University, Shanghai, China
16.
Yu Q, Ma C, Song S, Zhang G, Dang J, Tan KC. Constructing Accurate and Efficient Deep Spiking Neural Networks With Double-Threshold and Augmented Schemes. IEEE Trans Neural Netw Learn Syst 2022; 33:1714-1726. [PMID: 33471769] [DOI: 10.1109/tnnls.2020.3043415]
Abstract
Spiking neural networks (SNNs) are considered a potential candidate to overcome current challenges, such as the high power consumption encountered by artificial neural networks (ANNs); however, there is still a gap between them with respect to recognition accuracy on various tasks. A conversion strategy was thus introduced recently to bridge this gap by mapping a trained ANN to an SNN. However, it is still unclear to what extent the obtained SNN can benefit from both the accuracy advantage of ANNs and the high efficiency of the spike-based paradigm of computation. In this article, we propose two new conversion methods, namely TerMapping and AugMapping. TerMapping is a straightforward extension of a typical threshold-balancing method with a double-threshold scheme, while AugMapping additionally incorporates a new scheme of augmented spikes that employs a spike coefficient to carry the number of typical all-or-nothing spikes occurring at a time step. We examine the performance of our methods on the MNIST, Fashion-MNIST, and CIFAR10 data sets. The results show that the proposed double-threshold scheme can effectively improve the accuracies of the converted SNNs. More importantly, the proposed AugMapping is more advantageous for constructing accurate, fast, and efficient deep SNNs compared with other state-of-the-art approaches. Our study therefore provides new approaches for further integrating advanced techniques from ANNs to improve the performance of SNNs, which could be of great merit to applied developments with spike-based neuromorphic computing.
|
17
|
Learning general temporal point processes based on dynamic weight generation. Appl Intell 2022. [DOI: 10.1007/s10489-021-02590-1]
|
18
|
Zhang A, Niu Y, Gao Y, Wu J, Gao Z. Second-order information bottleneck based spiking neural networks for sEMG recognition. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.11.065]
|
19
|
Schuman CD, Kulkarni SR, Parsa M, Mitchell JP, Date P, Kay B. Opportunities for neuromorphic computing algorithms and applications. Nat Comput Sci 2022; 2:10-19. [PMID: 38177712] [DOI: 10.1038/s43588-021-00184-y]
Abstract
Neuromorphic computing technologies will be important for the future of computing, but much of the work in neuromorphic computing has focused on hardware development. Here, we review recent results in neuromorphic computing algorithms and applications. We highlight characteristics of neuromorphic computing technologies that make them attractive for the future of computing, and we discuss opportunities for the future development of algorithms and applications on these systems.
|
20
|
Liu F, Zhao W, Chen Y, Wang Z, Yang T, Jiang L. SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training. Front Neurosci 2021; 15:756876. [PMID: 34803591] [PMCID: PMC8603828] [DOI: 10.3389/fnins.2021.756876]
Abstract
Spiking Neural Networks (SNNs) could potentially empower low-power, event-driven neuromorphic hardware thanks to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs in various deep learning fields. However, since the signals transmitted in an SNN are non-differentiable discrete binary spike events, the spike-based activation function prevents gradient-based optimization algorithms from being applied directly to SNNs, leading to a performance gap (i.e., accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges backpropagation (BP)-based learning and spike-time-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme incorporates the global optimization process from BP and the efficient weight update derived from STDP. It not only avoids the non-differentiable derivation in the BP process but also utilizes the local feature extraction property of STDP. Consequently, our method lowers the possibility of vanishing spikes in BP training and reduces the number of time steps, thereby reducing network latency. SSTDP employs temporal coding and uses the Integrate-and-Fire (IF) neuron model to provide considerable computational benefits. Our experiments show the effectiveness of the proposed SSTDP learning algorithm, which achieves the best classification accuracy among SNNs trained with other learning methods: 99.3% on the Caltech 101 dataset, 98.1% on the MNIST dataset, and 91.3% on the CIFAR-10 dataset. It also surpasses the best inference accuracy of directly trained SNNs while requiring 25-32× lower inference latency. Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference in the spiking domain, where SSTDP achieves 1.3-37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
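The abstract names the non-leaky Integrate-and-Fire (IF) neuron with temporal coding as the model behind SSTDP's computational benefits. A minimal sketch of that neuron, with illustrative parameter names (the paper's exact dynamics may differ): the output quantity is the time of the first threshold crossing, which a latency-based error can then be defined on.

```python
# Hedged sketch of a non-leaky IF neuron with time-to-first-spike output.

def if_first_spike_time(weights, spike_times, threshold, t_max):
    """Integrate weighted input spikes over discrete time; return the first
    time step at which the membrane potential crosses threshold, or t_max
    if the neuron never fires (a 'vanishing spike')."""
    v = 0.0
    for t in range(t_max):
        for w, ts in zip(weights, spike_times):
            if ts == t:
                v += w          # non-leaky: the potential never decays
        if v >= threshold:
            return t
    return t_max

# Stronger, earlier input drive -> earlier first spike.
early = if_first_spike_time([1.0, 1.0], [0, 1], threshold=1.5, t_max=10)
late = if_first_spike_time([0.5, 0.5], [0, 4], threshold=0.9, t_max=10)
```

Because there is no leak term, each input spike contributes a single addition, which is where the abstract's saving in addition operations per inference comes from.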
|
21
|
|
22
|
Qiao G, Ning N, Zuo Y, Hu S, Yu Q, Liu Y. Direct training of hardware-friendly weight binarized spiking neural network with surrogate gradient learning towards spatio-temporal event-based dynamic data recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.06.070]
|
23
|
Teichmann M, Larisch R, Hamker FH. Performance of biologically grounded models of the early visual system on standard object recognition tasks. Neural Netw 2021; 144:210-228. [PMID: 34507042] [DOI: 10.1016/j.neunet.2021.08.009]
Abstract
Computational neuroscience models of vision and neural network models for object recognition are often framed by different research agendas. Computational neuroscience mainly aims at replicating experimental data, while (artificial) neural networks target high performance on classification tasks. However, we propose that models of vision should be validated on object recognition tasks, as the mechanisms of realistic neuro-computational models of the visual cortex must eventually prove convincing in object recognition as well. To foster this idea, we report the recognition accuracy of two different neuro-computational models of the visual cortex on several object recognition datasets. The models were trained using unsupervised Hebbian learning rules on natural scene inputs so that receptive fields comparable to their biological counterparts emerged. We assume that the emerged receptive fields form a general codebook of features applicable to a variety of visual scenes. We report performances on datasets with different levels of difficulty, ranging from the simple MNIST to the more complex CIFAR-10 or ETH-80. We found that both networks show good results on simple digit recognition, comparable with previously published biologically plausible models. We also observed that our deeper-layer neurons provide a better recognition codebook for naturalistic datasets. Since recognition results of biologically grounded models are not yet available for most datasets, our results provide a broad basis of performance values against which methodologically similar models can be compared.
|
24
|
Nobukawa S, Nishimura H, Wagatsuma N, Ando S, Yamanishi T. Long-Tailed Characteristic of Spiking Pattern Alternation Induced by Log-Normal Excitatory Synaptic Distribution. IEEE Trans Neural Netw Learn Syst 2021; 32:3525-3537. [PMID: 32822305] [DOI: 10.1109/tnnls.2020.3015208]
Abstract
Studies of structural connectivity at the synaptic level show that in synaptic connections of the cerebral cortex, the excitatory postsynaptic potential (EPSP) of most synapses exhibits sub-mV values, while a small number of synapses exhibit large EPSPs (>~1.0 mV). This means the distribution of EPSPs fits a log-normal distribution. Beyond structural connectivity, skewed, long-tailed distributions have been widely observed in neural activity, such as in the occurrence of spiking rates and the size of synchronously spiking populations. Many studies have modeled this long-tailed distribution of EPSPs and neural activity; however, its causal factors remain controversial. This study focused on the long-tailed EPSP distributions and the interlateral synaptic connections primarily observed in cortical network structures, and constructed a spiking neural network consistent with these features. Specifically, we constructed two coupled modules of spiking neural networks with excitatory and inhibitory neural populations and a log-normal EPSP distribution. We evaluated the spiking activities for different input frequencies, with and without strong synaptic connections. The coupled modules exhibited intermittent intermodule-alternating behavior given a moderate input frequency and the presence of strong synaptic and intermodule connections. Moreover, power analysis, multiscale entropy analysis, and surrogate data analysis revealed that the long-tailed EPSP distribution and intermodule connections enhanced the complexity of spiking activity at large temporal scales and induced nonlinear dynamics and neural activity that followed the long-tailed distribution.
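A quick numerical illustration (not from the paper) of the abstract's premise: a log-normal EPSP distribution puts most synapses below 1 mV while leaving a small tail above 1 mV. The `mu`/`sigma` values are assumptions chosen only to reproduce that qualitative shape.

```python
# Sampling an illustrative log-normal EPSP distribution.
import random

random.seed(0)
mu, sigma = -0.64, 0.5   # log-mV parameters (illustrative, not the paper's fit)
epsps = [random.lognormvariate(mu, sigma) for _ in range(10_000)]

# Most synapses are sub-mV; a small minority form the long tail above 1 mV.
frac_sub_mv = sum(1 for e in epsps if e < 1.0) / len(epsps)
frac_large = sum(1 for e in epsps if e > 1.0) / len(epsps)
```

With these parameters the median EPSP is about exp(-0.64) ≈ 0.53 mV, so roughly nine in ten synapses stay sub-mV, matching the qualitative picture in the abstract.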
|
25
|
Xiang S, Ren Z, Song Z, Zhang Y, Guo X, Han G, Hao Y. Computing Primitive of Fully VCSEL-Based All-Optical Spiking Neural Network for Supervised Learning and Pattern Classification. IEEE Trans Neural Netw Learn Syst 2021; 32:2494-2505. [PMID: 32673197] [DOI: 10.1109/tnnls.2020.3006263]
Abstract
We propose a computing primitive for an all-optical spiking neural network (SNN) based on vertical-cavity surface-emitting lasers (VCSELs) for supervised learning using biologically plausible mechanisms. The spike-timing-dependent plasticity (STDP) model was established based on the dynamics of the vertical-cavity semiconductor optical amplifier (VCSOA) subject to dual-optical-pulse injection. The neuron-synapse self-consistent unified model of the all-optical SNN was developed, which reproduces the essential neuron-like dynamics and the STDP function. Optical characters representing the ten digits are trained and tested by the proposed fully VCSEL-based all-optical SNN. Simulation results show that the proposed all-optical SNN is capable of recognizing the ten digits with a supervised learning algorithm, in which the input and output patterns, as well as the teacher signals, are represented in spatiotemporal fashion. Moreover, lateral inhibition is not required in our proposed architecture, which simplifies hardware implementation. The system-level unified model enables architecture-algorithm codesign and optimization of all-optical SNNs. To the best of our knowledge, a computing primitive of an all-optical SNN based on VCSELs for supervised learning has not yet been reported; this work paves the way toward fully VCSEL-based large-scale photonic neuromorphic systems with low power consumption.
|
26
|
Debat G, Chauhan T, Cottereau BR, Masquelier T, Paindavoine M, Baures R. Event-Based Trajectory Prediction Using Spiking Neural Networks. Front Comput Neurosci 2021; 15:658764. [PMID: 34108870] [PMCID: PMC8180888] [DOI: 10.3389/fncom.2021.658764]
Abstract
In recent years, event-based sensors have been combined with spiking neural networks (SNNs) to create a new generation of bio-inspired artificial vision systems. These systems can process spatio-temporal data in real time and are highly energy efficient. In this study, we used a new hybrid event-based camera in conjunction with a multi-layer spiking neural network trained with a spike-timing-dependent plasticity learning rule. We showed that neurons learn from repeated and correlated spatio-temporal patterns in an unsupervised way and become selective to motion features, such as direction and speed. This motion selectivity can then be used to predict ball trajectories by adding a simple read-out layer composed of polynomial regressions trained in a supervised manner. Hence, we show that an SNN receiving inputs from an event-based sensor can extract relevant spatio-temporal patterns to process and predict ball trajectories.
|
27
|
Guo W, Fouda ME, Eltawil AM, Salama KN. Neural Coding in Spiking Neural Networks: A Comparative Study for Robust Neuromorphic Systems. Front Neurosci 2021; 15:638474. [PMID: 33746705] [PMCID: PMC7970006] [DOI: 10.3389/fnins.2021.638474]
Abstract
Various hypotheses of information representation in the brain, referred to as neural codes, have been proposed to explain information transmission between neurons. Neural coding plays an essential role in enabling brain-inspired spiking neural networks (SNNs) to perform different tasks. To search for the best coding scheme, we performed an extensive comparative study of the impact and performance of four important neural coding schemes, namely rate coding, time-to-first-spike (TTFS) coding, phase coding, and burst coding. The comparative study was carried out using a biological two-layer SNN trained with an unsupervised spike-timing-dependent plasticity (STDP) algorithm. Various aspects of network performance were considered, including classification accuracy, processing latency, synaptic operations (SOPs), hardware implementation, network compression efficacy, input and synaptic noise resilience, and synaptic fault tolerance. Classification tasks on the Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets were used in our study. For hardware implementation, area and power consumption were estimated for these coding schemes, and network compression efficacy was analyzed using pruning and quantization techniques. Different types of input noise and noise variations in the datasets were considered and applied. Furthermore, the robustness of each coding scheme to non-ideality-induced synaptic noise and faults in analog neuromorphic systems was studied and compared. Our results show that TTFS coding is the best choice for achieving the highest computational performance with very low hardware implementation overhead: it requires 4×/7.5× lower processing latency and 3.5×/6.5× fewer SOPs than rate coding during the training/inference process. Phase coding is the most resilient scheme to input noise. Burst coding offers the highest network compression efficacy and the best overall robustness to hardware non-idealities for both training and inference. The study reveals the design space created by the choice of coding scheme, allowing designers to weigh each scheme's strengths and weaknesses against a design's constraints and considerations in neuromorphic systems.
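A hedged sketch of the TTFS scheme the study singles out: a stronger input fires earlier, so each input contributes at most one spike per presentation. The linear intensity-to-latency mapping below is one common choice, not necessarily the one used in the paper.

```python
# Illustrative time-to-first-spike (TTFS) encoding of a normalized intensity.

def ttfs_encode(intensity, t_max=100):
    """Map a normalized intensity in [0, 1] to a spike time: intensity 1.0
    fires at t=0, intensity 0.0 never fires (returns None)."""
    if intensity <= 0.0:
        return None
    return round((1.0 - intensity) * t_max)

# Brighter pixels spike earlier; a zero pixel emits no spike at all,
# which is the source of TTFS's latency and SOP savings over rate coding.
times = [ttfs_encode(i) for i in (1.0, 0.5, 0.1, 0.0)]
```

Because each active input emits exactly one spike, the number of synaptic operations per presentation is bounded by the number of inputs, unlike rate coding, where it grows with the firing rate.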
|
28
|
Mirsadeghi M, Shalchian M, Kheradpisheh SR, Masquelier T. STiDi-BP: Spike time displacement based error backpropagation in multilayer spiking neural networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.11.052]
|
29
|
Deterministic characteristics of spontaneous activity detected by multi-fractal analysis in a spiking neural network with long-tailed distributions of synaptic weights. Cogn Neurodyn 2020; 14:829-836. [PMID: 33101534] [DOI: 10.1007/s11571-020-09605-6]
Abstract
Cortical neural networks maintain autonomous electrical activity called spontaneous activity, which represents the brain's dynamic internal state even in the absence of sensory stimuli. The spatio-temporal complexity of spontaneous activity is strongly related to perceptual, learning, and cognitive brain functions, and multi-fractal analysis can be utilized to evaluate this complexity. Recent studies have shown that the deterministic dynamic behavior of spontaneous activity particularly reflects topological neural network characteristics and changes in neural network structure. However, it remains unclear whether multi-fractal analysis, recently widely applied to neural activity, is effective for detecting the complexity of this deterministic dynamic process. To verify this point, we evaluated the multi-fractality of spontaneous activity in a spiking neural network with a log-normal distribution of excitatory postsynaptic potentials (EPSPs). We found that the spiking activities exhibited multi-fractal characteristics. Moreover, to investigate the presence of a deterministic process in the spiking activity, we conducted a surrogate data analysis of the spiking time series. The results showed that the spontaneous spiking activity included deterministic dynamic behavior. Overall, the combination of multi-fractal analysis and surrogate data analysis can detect deterministic complex neural activity. The multi-fractal analysis of neural activity used in this study could be widely utilized in brain modeling and in evaluation methods for signals obtained from neuroimaging modalities.
Collapse
|
30
|
Kirkland P, Di Caterina G, Soraghan J, Matich G. Perception Understanding Action: Adding Understanding to the Perception Action Cycle With Spiking Segmentation. Front Neurorobot 2020; 14:568319. [PMID: 33192434] [PMCID: PMC7604290] [DOI: 10.3389/fnbot.2020.568319]
Abstract
Traditionally, the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low Size, Weight and Power (SWaP) package. Within complex scenarios, however, this method can lack contextual understanding of the scene, such as object-recognition-based tracking or system attention. Object detection, identification and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNNs) have shown significant success, although such networks often have large computational overhead and power requirements that are not ideal for smaller robotics tasks. Furthermore, cloud computing and massively parallel processing, as in Graphics Processing Units (GPUs), fall outside the specification of many tasks due to their latency and SWaP constraints. In response, Spiking Convolutional Neural Networks (SCNNs) aim to provide the feature extraction benefits of CNNs while maintaining low latency and power overhead thanks to their asynchronous, spiking, event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented that combines the feature extraction benefits of CNNs with the low-latency processing of SCNNs. The PUA utilizes a Neuromorphic Vision Sensor for Perception, which feeds asynchronous processing within a spiking fully convolutional neural network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. With this approach, the aim is to bring features of deep learning into the lower levels of autonomous robotics while maintaining a biologically plausible STDP rule throughout the learned encoding part of the network. The network is shown to provide more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that this system delivers over 96% accuracy and 81% Intersection over Union, ensuring it can be successfully used in object recognition, classification and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise tracking updates with minimal latency.
|
31
|
Rank order coding based spiking convolutional neural network architecture with energy-efficient membrane voltage updates. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.031]
|
32
|
Kheradpisheh SR, Masquelier T. Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron. Int J Neural Syst 2020; 30:2050027. [DOI: 10.1142/s0129065720500276]
Abstract
We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised, fully connected multilayer SNNs: test accuracy of 97.4% on the MNIST dataset and 99.2% on the Caltech Face/Motorbike dataset. Yet the neuron model we use, the non-leaky integrate-and-fire, is much simpler than those used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN.
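A minimal sketch of the rank-order readout rule the abstract describes: every readout neuron fires exactly once, and the predicted class is the neuron that fires first. The spike-time values below are made up purely for illustration.

```python
# Illustrative rank-order (first-spike) classification readout.

def rank_order_predict(first_spike_times):
    """Return the index of the earliest-firing readout neuron; that index
    is the predicted class under rank-order coding."""
    return min(range(len(first_spike_times)),
               key=first_spike_times.__getitem__)

# Readout latencies for a hypothetical 4-class problem: neuron 2 fires
# first, so class 2 is predicted.
pred = rank_order_predict([12, 9, 3, 7])
```

A latency-based loss like S4NN's can then push the correct neuron's first-spike time earlier than its competitors', which is what "backpropagation based on latencies" optimizes.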
|
33
|
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. [PMID: 31247574] [DOI: 10.1109/tnnls.2019.2919662]
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
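The abstract's "spike gradient threshold rule" caps gradients during SpikeProp-style training to prevent the gradient exploding problem. The simple clamping form below is an assumption for illustration; the paper's exact rule may differ.

```python
# Hedged sketch of a gradient-threshold (clamping) rule against explosion.

def threshold_gradient(grad, g_max):
    """Clamp a single gradient component to the interval [-g_max, g_max],
    so one near-threshold spike cannot blow up the weight update."""
    return max(-g_max, min(g_max, grad))

# A pathological gradient of -12.0 is capped at -5.0; in-range values pass
# through unchanged.
clipped = [threshold_gradient(g, 5.0) for g in (0.3, -12.0, 7.5)]
```

Clamping like this keeps the update direction while bounding its magnitude, which is the usual intent of a gradient-threshold rule.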
|
34
|
Huang K, Ma X, Song R, Rong X, Tian X, Li Y. A self-organizing developmental cognitive architecture with interactive reinforcement learning. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.07.109]
|
35
|
Kim H, Tang H, Choi W, Park J. An Energy-Quality Scalable STDP Based Sparse Coding Processor With On-Chip Learning Capability. IEEE Trans Biomed Circuits Syst 2020; 14:125-137. [PMID: 31905147] [DOI: 10.1109/tbcas.2019.2963676]
Abstract
Two main bottlenecks encountered when implementing energy-efficient spike-timing-dependent plasticity (STDP) based sparse coding are the complex computation of the winner-take-all (WTA) operation and the repetitive neuronal operations of time-domain processing. In this article, we present an energy-efficient STDP-based sparse coding processor. The low-cost hardware is based on the following algorithmic reduction techniques: first, the complex WTA operation is simplified by predicting which neurons will emit spikes; second, sparsity-based approximations in the spatial and temporal domains efficiently remove redundant neurons with negligible loss of algorithmic accuracy. We designed and implemented the STDP-based sparse coding hardware in a 65 nm CMOS process. By exploiting input sparsity, the proposed SNN architecture can dynamically trade algorithmic quality for computation energy (up to 74% savings) on natural-image (maximum 0.01 RMSE increase) and MNIST (no accuracy loss) applications. In inference mode, the SNN hardware achieves a throughput of 374 Mpixels/s and 840.2 GSOP/s with an energy efficiency of 781.52 pJ/pixel and 0.35 pJ/SOP.
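A sketch (an assumed software analogue, not the chip's design) of the two reductions the abstract names: the WTA pick is restricted to neurons predicted to spike, and neurons with no active input are skipped outright, exploiting input sparsity.

```python
# Illustrative sparsity-aware winner-take-all selection.

def sparse_wta(potentials, inputs_active):
    """Return the index of the winning (highest-potential) neuron among
    those with any active input, or None when sparsity lets every neuron
    be skipped entirely."""
    candidates = [i for i, active in enumerate(inputs_active) if active]
    if not candidates:
        return None          # all-zero input: skip the whole computation
    return max(candidates, key=potentials.__getitem__)

# Neuron 1 has no active input, so only neurons 0 and 2 compete.
winner = sparse_wta([0.2, 0.9, 0.7], [True, False, True])
```

Skipping inactive neurons is where the energy-quality trade-off comes from: sparser inputs mean fewer candidates evaluated, at the cost of occasionally missing a would-be winner.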
|
36
|
Tang H, Kim H, Kim H, Park J. Spike Counts Based Low Complexity SNN Architecture With Binary Synapse. IEEE Trans Biomed Circuits Syst 2019; 13:1664-1677. [PMID: 31603797] [DOI: 10.1109/tbcas.2019.2945406]
Abstract
In this paper, we present an energy- and area-efficient spiking neural network (SNN) processor based on novel spike-count-based methods. For a low-cost SNN design, we propose hardware-friendly complexity reduction techniques for both the learning and inference modes of operation. First, for the unsupervised learning process, we propose a spike-count-based learning method. This learning approach uses pre- and post-synaptic spike counts to reduce the bit-width of the synaptic weights as well as the number of weight updates. For energy-efficient inference, we propose an accumulation-based computing scheme, in which the number of input spikes for each input axon is accumulated, without immediate membrane updates, until a predefined number of spikes is reached. In addition, computation-skip schemes identify meaningless computations and skip them to improve energy efficiency. Based on the proposed low-complexity design techniques, we designed and implemented the SNN processor in a 65 nm CMOS process. According to the implementation results, the SNN processor achieves 87.4% recognition accuracy on the MNIST dataset using only 230 k 1-bit synaptic weights with 400 excitatory neurons. Energy consumption is 0.26 pJ/SOP and 0.31 μJ/inference in inference mode, and 1.42 pJ/SOP and 2.63 μJ/learning in learning mode.
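The accumulation-based scheme described above can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' implementation; the function name, the threshold of four spikes, and the toy weights are assumptions.

```python
# Hypothetical sketch of accumulation-based computing: input spikes per axon
# are counted, and a bulk membrane update is applied only once a predefined
# count is reached; zero-weight synapses are skipped entirely.

def accumulate_and_update(spike_counts, weights, potential, threshold_count=4):
    """Apply bulk membrane updates for axons whose spike count hit the limit."""
    for axon, count in enumerate(spike_counts):
        if count >= threshold_count:
            w = weights[axon]
            if w != 0:                      # computation-skip: zero weights do nothing
                potential += w * count      # one bulk update instead of `count` updates
            spike_counts[axon] = 0          # reset the accumulator
    return potential

counts = [5, 2, 4, 0]                       # accumulated input spikes per axon
weights = [1, -1, 0, 2]                     # low-bit-width synaptic weights
v = accumulate_and_update(counts, weights, potential=0.0)
# only axon 0 triggers an update; axon 2 reaches the count but is skipped (w = 0)
```

Replacing per-spike membrane updates with one multiply per accumulator flush is what yields the energy savings the abstract reports.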
|
37
|
Amirshahi A, Hashemi M. ECG Classification Algorithm Based on STDP and R-STDP Neural Networks for Real-Time Monitoring on Ultra Low-Power Personal Wearable Devices. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:1483-1493. [PMID: 31647445 DOI: 10.1109/tbcas.2019.2948920] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This paper presents a novel ECG classification algorithm for inclusion in real-time cardiac monitoring systems on ultra-low-power wearable devices. The proposed solution is based on spiking neural networks, the third generation of neural networks. Specifically, we employ spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP), in which the model weights are trained according to the timing of spike signals and reward or punishment signals. Experiments show that the proposed solution is suitable for real-time operation, achieves accuracy comparable to previous methods, and, more importantly, consumes significantly less energy in real-time classification of ECG signals. Specifically, energy consumption is 1.78 μJ per beat, which is 2 to 9 orders of magnitude smaller than previous neural-network-based ECG classification methods.
|
38
|
Fast and robust learning in Spiking Feed-forward Neural Networks based on Intrinsic Plasticity mechanism. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.07.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
39
|
Pattern Classification by Spiking Neural Networks Combining Self-Organized and Reward-Related Spike-Timing-Dependent Plasticity. JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH 2019. [DOI: 10.2478/jaiscr-2019-0009] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Many recent studies have applied spiking neural networks with spike-timing-dependent plasticity (STDP) to machine learning problems. The learning abilities of dopamine-modulated STDP (DA-STDP) for reward-related synaptic plasticity have also been gathering attention. Following these studies, we hypothesize that a network structure combining self-organized STDP and reward-related DA-STDP can solve the machine learning problem of pattern classification. We therefore studied the pattern-classification ability of a network in which a recurrent spiking neural network with STDP performs unsupervised learning and an attached output layer with DA-STDP performs supervised learning. We confirmed that this network could perform pattern classification, using the STDP effect to emphasize features of the input spike pattern and DA-STDP for supervised learning. Our proposed spiking neural network may therefore prove to be a useful approach for machine learning problems.
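The two plasticity rules combined above can be illustrated with a minimal pair-based sketch; the learning-rate constants, time constant, and function names are illustrative assumptions, not values from the paper.

```python
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0    # assumed learning rates and time constant (ms)

def stdp_dw(t_pre, t_post):
    """Unsupervised pair-based STDP: potentiate when pre precedes post."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

def da_stdp_dw(t_pre, t_post, reward):
    """Reward-modulated (DA-)STDP: a dopamine-like signal gates the update."""
    return reward * stdp_dw(t_pre, t_post)

dw = stdp_dw(10.0, 15.0)                            # pre at 10 ms, post at 15 ms -> positive
dw_punished = da_stdp_dw(10.0, 15.0, reward=-1.0)   # punishment flips the sign
```

In the hybrid architecture, updates of the first form shape the recurrent layer without labels, while reward-gated updates of the second form train the output layer.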
|
40
|
Camuñas-Mesa LA, Linares-Barranco B, Serrano-Gotarredona T. Neuromorphic Spiking Neural Networks and Their Memristor-CMOS Hardware Implementations. MATERIALS (BASEL, SWITZERLAND) 2019; 12:E2745. [PMID: 31461877 PMCID: PMC6747825 DOI: 10.3390/ma12172745] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 08/02/2019] [Accepted: 08/10/2019] [Indexed: 11/17/2022]
Abstract
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal-Oxide-Semiconductor (CMOS) technology offers a very promising way to emulate the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
Affiliation(s)
- Luis A Camuñas-Mesa
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
- Teresa Serrano-Gotarredona
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, 41092 Sevilla, Spain
|
41
|
Locally connected spiking neural networks for unsupervised feature learning. Neural Netw 2019; 119:332-340. [PMID: 31499357 DOI: 10.1016/j.neunet.2019.08.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Revised: 08/08/2019] [Accepted: 08/14/2019] [Indexed: 11/22/2022]
Abstract
In recent years, spiking neural networks (SNNs) have demonstrated great success in completing various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn features from different locations of the input space. These locally connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biological neurons. We explore a biologically inspired n-gram classification approach allowing parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets, which respectively matches state-of-the-art performance and constitutes the first such results to date. LC-SNNs have the advantage of fast convergence to a dataset representation, and they require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests demonstrate that LC-SNNs exhibit graceful degradation of performance despite the random deletion of large numbers of synapses and neurons. Our results were obtained using the BindsNET library, which allows efficient machine learning implementations of spiking neural networks.
|
42
|
Mozafari M, Ganjtabesh M, Nowzari-Dalini A, Masquelier T. SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron. Front Neurosci 2019; 13:625. [PMID: 31354403 PMCID: PMC6640212 DOI: 10.3389/fnins.2019.00625] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 05/31/2019] [Indexed: 11/13/2022] Open
Abstract
Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest, since SNNs are hardware-friendly and energy-efficient. Unlike their non-spiking counterparts, most existing SNN simulation frameworks are not efficient enough in practice for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source, high-speed simulation framework based on PyTorch. The framework simulates convolutional SNNs with at most one spike per neuron and a rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, and other rules can be added easily. Beyond these properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and performed entirely by PyTorch functions, which in turn enables just-in-time optimization for running on CPU, GPU, or multi-GPU platforms.
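The "at most one spike per neuron" rank-order coding that SpykeTorch simulates can be sketched in plain Python. SpykeTorch itself is tensor-based; the function below is an illustrative stand-in for the idea, not its API, and the mapping of rank to time step is an assumption.

```python
# Intensity-to-latency (rank-order) encoding sketch: stronger inputs spike
# earlier, and each neuron emits at most one spike over the simulation window.

def rank_order_encode(intensities, n_steps):
    """Return each neuron's spike time step (None means it never fires)."""
    order = sorted(range(len(intensities)),
                   key=lambda i: -intensities[i])    # descending intensity
    spike_times = [None] * len(intensities)
    for rank, i in enumerate(order):
        if intensities[i] > 0:                       # zero intensity: no spike
            spike_times[i] = min(rank, n_steps - 1)  # clamp rank to the window
    return spike_times

times = rank_order_encode([0.9, 0.1, 0.5, 0.0], n_steps=4)
# strongest input (index 0) fires first; the silent input (index 3) never fires
```

Because each neuron fires at most once, a whole stimulus is represented by the order of first spikes, which is what makes this coding so cheap to simulate and to implement in hardware.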
Affiliation(s)
- Milad Mozafari
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran; CERCO UMR 5549, CNRS - Université Toulouse 3, Toulouse, France
- Mohammad Ganjtabesh
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
- Abbas Nowzari-Dalini
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
|
43
|
Wunderlich T, Kungl AF, Müller E, Hartel A, Stradmann Y, Aamir SA, Grübl A, Heimbrecht A, Schreiber K, Stöckel D, Pehle C, Billaudelle S, Kiene G, Mauch C, Schemmel J, Meier K, Petrovici MA. Demonstrating Advantages of Neuromorphic Computation: A Pilot Study. Front Neurosci 2019; 13:260. [PMID: 30971881 PMCID: PMC6444279 DOI: 10.3389/fnins.2019.00260] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Accepted: 03/05/2019] [Indexed: 11/26/2022] Open
Abstract
Neuromorphic devices represent an attempt to mimic aspects of the brain's architecture and dynamics with the aim of replicating its hallmark functional capabilities in terms of computational power, robust learning and energy efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic system to implement a proof-of-concept demonstration of reward-modulated spike-timing-dependent plasticity in a spiking network that learns to play a simplified version of the Pong video game by smooth pursuit. This system combines an electronic mixed-signal substrate for emulating neuron and synapse dynamics with an embedded digital processor for on-chip learning, which in this work also serves to simulate the virtual environment and learning agent. The analog emulation of neuronal membrane dynamics enables a 1000-fold acceleration with respect to biological real-time, with the entire chip operating on a power budget of 57 mW. Compared to an equivalent simulation using state-of-the-art software, the on-chip emulation is at least one order of magnitude faster and three orders of magnitude more energy-efficient. We demonstrate how on-chip learning can mitigate the effects of fixed-pattern noise, which is unavoidable in analog substrates, while making use of temporal variability for action exploration. Learning compensates for imperfections of the physical substrate, as manifested in neuronal parameter variability, by adapting synaptic weights to match the respective excitability of individual neurons.
Affiliation(s)
- Timo Wunderlich, Akos F Kungl, Eric Müller, Andreas Hartel, Yannik Stradmann, Syed Ahmed Aamir, Andreas Grübl, Arthur Heimbrecht, Korbinian Schreiber, David Stöckel, Christian Pehle, Sebastian Billaudelle, Gerd Kiene, Christian Mauch, Johannes Schemmel, Karlheinz Meier: Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mihai A Petrovici: Department of Physics, Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
|
44
|
Hazan H, Saunders DJ, Khan H, Patel D, Sanghavi DT, Siegelmann HT, Kozma R. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python. Front Neuroinform 2018; 12:89. [PMID: 30631269 PMCID: PMC6315182 DOI: 10.3389/fninf.2018.00089] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Accepted: 11/13/2018] [Indexed: 01/08/2023] Open
Abstract
The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.
Affiliation(s)
- Hananel Hazan
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Computer and Information Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Daniel J. Saunders
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Computer and Information Sciences, University of Massachusetts Amherst, Amherst, MA, United States
|
45
|
Cejnar P, Vysata O, Valis M, Prochazka A. The Complex Behaviour of a Simple Neural Oscillator Model in the Human Cortex. IEEE Trans Neural Syst Rehabil Eng 2018; 27:337-347. [PMID: 30507514 DOI: 10.1109/tnsre.2018.2883618] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The brain is a complex organ responsible for memory storage and reasoning; however, the mechanisms underlying these processes remain unknown. This paper contributes to a body of theoretical studies devoted to regular or chaotic oscillations of interconnected neurons, assuming that the smallest information unit in the brain is not a single neuron but rather a coupling of inhibitory and excitatory neurons forming a simple oscillator. Several coefficients of variation for peak intervals and correlation coefficients for peak-interval histograms are evaluated, and the sensitivity of such oscillator units is tested against changes in initial membrane potentials, interconnection signal delays, and synaptic weights, based on histologically verified neuron couplings. The results show only a weak dependence of the oscillation patterns on changes in initial membrane potentials or interconnection signal delays, in contrast to a strong sensitivity to changes in synaptic weights, demonstrating the stability and robustness of the encoded oscillating patterns to signal outages or the remoteness of interconnected neurons. The presented simulations show that the selected neuronal couplings can produce a variety of behavioural patterns, with periodicity ranging from milliseconds to thousands of milliseconds between spikes. The many different intrinsic frequencies detected support the idea of a potentially large informational capacity of such memory units.
|
46
|
Yousefzadeh A, Stromatias E, Soto M, Serrano-Gotarredona T, Linares-Barranco B. On Practical Issues for Stochastic STDP Hardware With 1-bit Synaptic Weights. Front Neurosci 2018; 12:665. [PMID: 30374283 PMCID: PMC6196279 DOI: 10.3389/fnins.2018.00665] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Accepted: 09/04/2018] [Indexed: 11/21/2022] Open
Abstract
In computational neuroscience, synaptic plasticity learning rules are typically studied using the full 64-bit floating-point precision that computers provide. For dedicated hardware implementations, however, the precision used directly penalizes not only the required memory resources but also the computing, communication, and energy resources. In hardware engineering, a key question is always to find the minimum number of bits needed to keep the neurocomputational system working satisfactorily. Here we present techniques and results obtained when limiting synaptic weights to 1-bit precision, applied to a spike-timing-dependent plasticity (STDP) learning rule in spiking neural networks (SNNs). We first illustrate 1-bit-synapse STDP operation by replicating a classical biological experiment on visual orientation tuning, using a simple four-neuron setup. We then apply 1-bit STDP learning to the hidden feature-extraction layer of a two-layer system, where for the second (output) layer we use previously reported SNN classifiers. The systems are tested on two spiking datasets: a Dynamic Vision Sensor (DVS) recording of poker card symbols and a Poisson-distributed spike representation of MNIST. Tests are performed using the in-house MegaSim event-driven behavioral simulator and by implementing the systems on FPGA (Field Programmable Gate Array) hardware.
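One common way to make a 1-bit synapse learn is to flip the binary weight stochastically, with a probability set by the analog update it would otherwise have received. The sketch below illustrates that idea under assumed semantics; it is not the paper's exact rule, and the function name and probabilities are illustrative.

```python
import random

def stochastic_1bit_stdp(weight, dw, rng):
    """`weight` is 0 or 1; `dw` is the analog STDP update it would receive."""
    if dw > 0 and weight == 0 and rng.random() < min(1.0, dw):
        return 1                  # stochastic potentiation: switch the synapse on
    if dw < 0 and weight == 1 and rng.random() < min(1.0, -dw):
        return 0                  # stochastic depression: switch it off
    return weight                 # otherwise the bit is left unchanged

rng = random.Random(0)
w_on = stochastic_1bit_stdp(0, 1.0, rng)     # saturated probability -> certain flip
w_off = stochastic_1bit_stdp(1, -1.0, rng)   # saturated depression -> certain flip
w_kept = stochastic_1bit_stdp(1, 0.5, rng)   # already on: potentiation is a no-op
```

Averaged over many presentations, the expected weight tracks the analog STDP rule while only a single bit per synapse ever needs to be stored.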
Affiliation(s)
- Amirreza Yousefzadeh
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain
- Evangelos Stromatias
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain
- Miguel Soto
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain
|
47
|
Masquelier T, Kheradpisheh SR. Optimal Localist and Distributed Coding of Spatiotemporal Spike Patterns Through STDP and Coincidence Detection. Front Comput Neurosci 2018; 12:74. [PMID: 30279653 PMCID: PMC6153331 DOI: 10.3389/fncom.2018.00074] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Accepted: 08/17/2018] [Indexed: 11/13/2022] Open
Abstract
Repeating spatiotemporal spike patterns exist and carry information. Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to any one of several patterns (distributed coding, i.e., the neuron's response is ambiguous but the identity of the pattern could be inferred from the responses of multiple neurons), but not to random inputs. To do so, we extended a theory developed in a previous paper (Masquelier, 2017), which was limited to localist coding. More specifically, we computed analytically the signal-to-noise ratio (SNR) of a multi-pattern-detector neuron, using a threshold-free leaky integrate-and-fire (LIF) neuron model with non-plastic unitary synapses and homogeneous Poisson inputs. Surprisingly, when the number of patterns is increased, the SNR decreases slowly and remains acceptable for several tens of independent patterns. In addition, we investigated whether spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimal SNR. To this aim, we simulated a LIF equipped with STDP and repeatedly exposed it to multiple input spike patterns embedded in equally dense Poisson spike trains. The LIF progressively became selective to every repeating pattern with no supervision and stopped discharging during the Poisson spike trains. Furthermore, by tuning certain STDP parameters, the resulting pattern detectors became optimal. Tens of independent patterns could be learned by a single neuron using a low adaptive threshold, in contrast with previous studies, in which higher thresholds led to localist coding only. Taken together, these results suggest that coincidence detection and STDP are powerful mechanisms, fully compatible with distributed coding. Yet we acknowledge that our theory is limited to single neurons, and thus also applies to feed-forward networks, but not to recurrent ones.
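The coincidence-detection effect at the heart of this analysis can be reproduced with a minimal threshold-free LIF sketch (unitary non-plastic synapses, exponential postsynaptic potentials). The time constant and spike times below are illustrative assumptions, not values from the paper.

```python
import math

def lif_potential(spike_times, t, tau=10.0, w=1.0):
    """Membrane potential at time t from summed unitary exponential PSPs."""
    return sum(w * math.exp(-(t - s) / tau)
               for s in spike_times if s <= t)

coincident = [50.0, 50.5, 51.0, 51.5]    # a tight burst, as in a repeating pattern
spread = [10.0, 25.0, 40.0, 55.0]        # the same number of spikes, spread out

v_pattern = lif_potential(coincident, 52.0)
v_background = lif_potential(spread, 52.0)
# the coincident burst drives the potential far higher than the spread input,
# which is exactly the signal the SNR analysis above quantifies
```

With leaky integration, input that arrives within roughly one membrane time constant sums almost linearly, while temporally spread input decays away before it can accumulate.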
Affiliation(s)
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, UMR5549 CNRS-Université Toulouse 3, Toulouse, France; Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC, Universidad de Sevilla, Sevilla, Spain
- Saeed R Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
|
48
|
Wu Y, Deng L, Li G, Zhu J, Shi L. Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks. Front Neurosci 2018; 12:331. [PMID: 29875621 PMCID: PMC5974215 DOI: 10.3389/fnins.2018.00331] [Citation(s) in RCA: 192] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 04/30/2018] [Indexed: 11/28/2022] Open
Abstract
Spiking neural networks (SNNs) are promising for achieving brain-like behaviors, since spikes are capable of encoding spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make high-performance supervised training of SNNs possible. However, these methods focus primarily on spatial-domain information and attach less significance to the dynamics in the temporal domain. This can create a performance bottleneck and require many additional training techniques. Another underlying problem is that spike activity is inherently non-differentiable, raising further difficulties for the supervised training of SNNs. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. To solve the non-differentiability of SNNs, we propose an approximated derivative for spike activity that is suitable for gradient-descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD) and does not require any additional complicated techniques. We evaluate this method with both fully connected and convolutional architectures on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. The results show that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective for investigating high-performance SNNs for a future brain-like computing paradigm with rich spatio-temporal dynamics.
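The key trick, replacing the zero-almost-everywhere derivative of the spike function with an approximate one near the firing threshold, can be sketched as follows. The rectangular window is one of the candidate surrogate derivatives discussed in the paper; its width is an assumed hyperparameter, and a full STBP implementation would propagate this gradient through both layers and time steps.

```python
V_TH = 1.0       # firing threshold
WIDTH = 1.0      # assumed width of the rectangular surrogate window

def spike(u):
    """Forward pass: non-differentiable Heaviside step at the threshold."""
    return 1.0 if u >= V_TH else 0.0

def surrogate_grad(u):
    """Backward pass: rectangular approximation of d(spike)/du."""
    return 1.0 / WIDTH if abs(u - V_TH) < WIDTH / 2 else 0.0

s = spike(1.2)                   # membrane above threshold -> spike
g_near = surrogate_grad(0.9)     # near threshold -> gradient flows
g_far = surrogate_grad(-2.0)     # far from threshold -> gradient is zero
```

In a framework such as PyTorch this pair would typically be wrapped in a custom autograd function, so the forward pass stays a true step while the backward pass uses the surrogate.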
Affiliation(s)
- Yujie Wu
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Beijing Innovation Center for Future Chip, Tsinghua University, Beijing, China
- Lei Deng
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Beijing Innovation Center for Future Chip, Tsinghua University, Beijing, China; Department of Electrical and Computer Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States
- Guoqi Li
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Beijing Innovation Center for Future Chip, Tsinghua University, Beijing, China
- Jun Zhu
- State Key Lab of Intelligence Technology and System, Tsinghua National Lab for Information Science and Technology, Tsinghua University, Beijing, China
- Luping Shi
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Beijing Innovation Center for Future Chip, Tsinghua University, Beijing, China
|