1
Jiao L, Zhao J, Wang C, Liu X, Liu F, Li L, Shang R, Li Y, Ma W, Yang S. Nature-Inspired Intelligent Computing: A Comprehensive Survey. RESEARCH (WASHINGTON, D.C.) 2024; 7:0442. [PMID: 39156658 PMCID: PMC11327401 DOI: 10.34133/research.0442] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/21/2024] [Accepted: 07/14/2024] [Indexed: 08/20/2024]
Abstract
Nature, with its numerous surprising rules, serves as a rich source of creativity for the development of artificial intelligence, inspiring researchers to create several nature-inspired intelligent computing paradigms based on natural mechanisms. Over the past decades, these paradigms have revealed effective and flexible solutions to practical and complex problems. This paper summarizes the natural mechanisms of diverse advanced nature-inspired intelligent computing paradigms, which provide valuable lessons for building general-purpose machines capable of adapting to the environment autonomously. According to the natural mechanisms, we classify nature-inspired intelligent computing paradigms into 4 types: evolutionary-based, biological-based, social-cultural-based, and science-based. Moreover, this paper also illustrates the interrelationship between these paradigms and natural mechanisms, as well as their real-world applications, offering a comprehensive algorithmic foundation for mitigating unreasonable metaphors. Finally, based on the detailed analysis of natural mechanisms, the challenges of current nature-inspired paradigms and promising future research directions are presented.
Affiliation(s)
- Licheng Jiao
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Jiaxuan Zhao
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Chao Wang
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Xu Liu
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Fang Liu
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Lingling Li
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Ronghua Shang
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Yangyang Li
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Wenping Ma
- School of Artificial Intelligence, Xidian University, Xi’an, China
- Shuyuan Yang
- School of Artificial Intelligence, Xidian University, Xi’an, China
2
Qin L, Wang Z, Yan R, Tang H. Attention-Based Deep Spiking Neural Networks for Temporal Credit Assignment Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:10301-10311. [PMID: 37022405 DOI: 10.1109/tnnls.2023.3240176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
The temporal credit assignment (TCA) problem, which aims to detect predictive features hidden in distracting background streams, remains a core challenge in biological and machine learning. Aggregate-label (AL) learning is proposed by researchers to resolve this problem by matching spikes with delayed feedback. However, the existing AL learning algorithms only consider the information of a single timestep, which is inconsistent with the real situation. Meanwhile, there is no quantitative evaluation method for TCA problems. To address these limitations, we propose a novel attention-based TCA (ATCA) algorithm and a minimum editing distance (MED)-based quantitative evaluation method. Specifically, we define a loss function based on the attention mechanism to deal with the information contained within the spike clusters and use MED to evaluate the similarity between the spike train and the target clue flow. Experimental results on musical instrument recognition (MedleyDB), speech recognition (TIDIGITS), and gesture recognition (DVS128-Gesture) show that the ATCA algorithm can reach the state-of-the-art (SOTA) level compared with other AL learning algorithms.
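For orientation, a minimal sketch of a minimum-edit-distance computation between two symbol sequences follows; mapping an output spike train and the target clue flow onto such sequences is an assumption made here for illustration, not the paper's exact evaluation procedure.

```python
def minimum_edit_distance(predicted, target):
    """Levenshtein distance between two symbol sequences.

    Used here only to illustrate how similarity between a (symbolised)
    output spike train and the target clue sequence could be quantified.
    """
    m, n = len(predicted), len(target)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # deletions
    for j in range(n + 1):
        dist[0][j] = j                      # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if predicted[i - 1] == target[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution
    return dist[m][n]
```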
3
Wang J. Training multi-layer spiking neural networks with plastic synaptic weights and delays. Front Neurosci 2024; 17:1253830. [PMID: 38328553 PMCID: PMC10847234 DOI: 10.3389/fnins.2023.1253830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Accepted: 12/04/2023] [Indexed: 02/09/2024] Open
Abstract
Spiking neural networks are often regarded as the third generation of neural networks; they hold the potential for ultra-low power consumption on corresponding hardware platforms and are well suited to temporal information processing. However, how to train spiking neural networks efficiently remains an open question, and most existing learning methods consider only the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the classical SpikeProp method. In the proposed method, both the synaptic weights and delays are treated as adjustable parameters to improve both biological plausibility and learning performance. In addition, the method inherits the advantages of SpikeProp, making full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that it achieves competitive learning performance compared with existing related work. Finally, the differences between the proposed method and existing mainstream multi-layer training algorithms are discussed.
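As a rough illustration of the idea of making delays trainable alongside weights, here is a sketch under an assumed spike-response kernel and parameter names; it is not the paper's exact formulation.

```python
import numpy as np

def psp_kernel(t, tau=5.0):
    """Spike-response (PSP) kernel for an input spike arriving at t = 0."""
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def membrane_potential(t, in_times, weights, delays):
    """SpikeProp-style potential in which weights AND delays are adjustable.

    Because a delay only shifts the kernel in time, dV/d(delay) is simply the
    negative time-derivative of the kernel, so delays can be updated with the
    same chain-rule machinery that SpikeProp uses for weights.
    """
    return float(np.sum(weights * psp_kernel(t - (in_times + delays))))
```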
Affiliation(s)
- Jing Wang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
4
Zhang Z, Xiao M, Ji T, Jiang Y, Lin T, Zhou X, Lin Z. Efficient and generalizable cross-patient epileptic seizure detection through a spiking neural network. Front Neurosci 2024; 17:1303564. [PMID: 38268711 PMCID: PMC10805904 DOI: 10.3389/fnins.2023.1303564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Accepted: 12/18/2023] [Indexed: 01/26/2024] Open
Abstract
Introduction: Epilepsy is a global chronic disease that brings pain and inconvenience to patients, and the electroencephalogram (EEG) is the main analytical tool. For clinical aid that can be applied to any patient, an automatic cross-patient epileptic seizure detection algorithm is of great significance. Spiking neural networks (SNNs) are modeled on biological neurons and are energy-efficient on neuromorphic hardware, so they can be expected to handle brain signals well and benefit real-world, low-power applications. However, automatic epileptic seizure detection has rarely considered SNNs. Methods: In this article, we explore SNNs for cross-patient seizure detection and discover that SNNs can achieve state-of-the-art performance comparable to, or even better than, that of artificial neural networks (ANNs). We propose an EEG-based spiking neural network (EESNN) with a recurrent spiking convolution structure, which may better take advantage of the temporal and biological characteristics of EEG signals. Results: We extensively evaluate the performance of different SNN structures, training methods, and time settings, which builds a solid basis for understanding and evaluating SNNs in seizure detection. Moreover, we show that our EESNN model can achieve an energy reduction of several orders of magnitude compared with ANNs, according to theoretical estimation. Discussion: These results show the potential for building high-performance, low-power neuromorphic systems for seizure detection and also broaden the real-world application scenarios of SNNs.
Affiliation(s)
- Zongpeng Zhang
- Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Mingqing Xiao
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Taoyun Ji
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yuwu Jiang
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Tong Lin
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Xiaohua Zhou
- Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Beijing International Center for Mathematical Research, Peking University, Beijing, China
- Peking University Chongqing Institute for Big Data, Chongqing, China
- Zhouchen Lin
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
5
Yu Q, Gao J, Wei J, Li J, Tan KC, Huang T. Improving Multispike Learning With Plastic Synaptic Delays. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:10254-10265. [PMID: 35442893 DOI: 10.1109/tnnls.2022.3165527] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Emulating the spike-based processing of the brain, spiking neural networks (SNNs) have been developed as a promising candidate for a new generation of artificial neural networks that aim to achieve efficient cognition as the brain does. Due to the complex dynamics and nonlinearity of SNNs, designing efficient learning algorithms has remained a major difficulty and attracts great research attention. Most existing algorithms focus on the adjustment of synaptic weights. However, other components, such as synaptic delays, are found to be adaptive and important in modulating neural behavior. How plasticity of different components could cooperate to improve the learning of SNNs remains an interesting question. Advancing our previous multispike learning, we propose a new joint weight-delay plasticity rule, named TDP-DL, in this article. Plastic delays are integrated into the learning framework, and as a result, the performance of multispike learning is significantly improved. Simulation results highlight the effectiveness and efficiency of our TDP-DL rule compared to baseline ones. Moreover, we reveal the underlying principle of how synaptic weights and delays cooperate with each other through a synthetic task of interval selectivity and show that plastic delays can enhance the selectivity and flexibility of neurons by shifting information across time. Due to this capability, useful information distributed far apart in the time domain can be effectively integrated for better accuracy, as highlighted in our generalization tasks of image, speech, and event-based object recognition. Our work is thus valuable for improving the performance of spike-based neuromorphic computing.
6
Zhang Y, Shi K, Luo X, Chen Y, Wang Y, Qu H. A biologically inspired auto-associative network with sparse temporal population coding. Neural Netw 2023; 166:670-682. [PMID: 37604076 DOI: 10.1016/j.neunet.2023.07.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 06/25/2023] [Accepted: 07/26/2023] [Indexed: 08/23/2023]
Abstract
Associative systems have attracted increasing attention because they can store basic information and then infer details to match perception using an efficient self-organization algorithm. However, implementing an associative system on real-world data is relatively difficult. To address this issue, we propose a novel biologically inspired auto-associative (BIAA) network to explore the structure, encoding, and formation of associative memory, as well as to extend this ability to real-world applications. Our network is constructed by imitating the organization of cortical minicolumns, where each minicolumn contains many parallel biological spiking neurons. To allow the network to learn and predict one symbol per theta cycle, we incorporate synaptic delay and theta oscillation into the neuron dynamics. Subsequently, we design a sparse temporal population (STP) coding scheme that allows each input symbol to be represented as a stable, unique, and easily recallable sparsely distributed representation. By combining associative learning dynamics with STP coding, our network realizes efficient storage and inference in an ordered manner. Experimental results indicate that the proposed network successfully performs sequence retrieval from partial text and sequence recovery from distorted information. The BIAA network provides new insight into introducing biologically inspired mechanisms into associative systems and has enormous potential for hardware and software applications.
Affiliation(s)
- Ya Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Kexin Shi
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yucheng Wang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
7
Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353 PMCID: PMC10244748 DOI: 10.3389/fncom.2023.1136010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 04/25/2023] [Indexed: 06/10/2023] Open
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of those models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback necessary, which we show to be equivalent to the error-based losses used in machine learning. Moreover, we show that the use of differential Hebbian updates works similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
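A toy sketch of a differential Hebbian update of the kind described above (rate-based STDP: the weight change follows the presynaptic rate times the temporal derivative of the postsynaptic rate); the variable names and learning rate are illustrative assumptions, not the paper's definitions.

```python
def differential_hebbian_update(w, pre_rate, post_rate_prev, post_rate_now, dt, lr=1e-3):
    """Rate-based analogue of STDP.

    If the postsynaptic rate rises after presynaptic activity (the rate
    analogue of pre-before-post spiking), the weight is potentiated; if it
    falls, the weight is depressed.
    """
    dpost_dt = (post_rate_now - post_rate_prev) / dt
    return w + lr * pre_rate * dpost_dt
```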
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
8
Hanganu-Opatz IL, Klausberger T, Sigurdsson T, Nieder A, Jacob SN, Bartos M, Sauer JF, Durstewitz D, Leibold C, Diester I. Resolving the prefrontal mechanisms of adaptive cognitive behaviors: A cross-species perspective. Neuron 2023; 111:1020-1036. [PMID: 37023708 DOI: 10.1016/j.neuron.2023.03.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 02/15/2023] [Accepted: 03/10/2023] [Indexed: 04/08/2023]
Abstract
The prefrontal cortex (PFC) enables a staggering variety of complex behaviors, such as planning actions, solving problems, and adapting to new situations according to external information and internal states. These higher-order abilities, collectively defined as adaptive cognitive behavior, require cellular ensembles that coordinate the tradeoff between the stability and flexibility of neural representations. While the mechanisms underlying the function of cellular ensembles are still unclear, recent experimental and theoretical studies suggest that temporal coordination dynamically binds prefrontal neurons into functional ensembles. A so far largely separate stream of research has investigated the prefrontal efferent and afferent connectivity. These two research streams have recently converged on the hypothesis that prefrontal connectivity patterns influence ensemble formation and the function of neurons within ensembles. Here, we propose a unitary concept that, leveraging a cross-species definition of prefrontal regions, explains how prefrontal ensembles adaptively regulate and efficiently coordinate multiple processes in distinct cognitive behaviors.
Affiliation(s)
- Ileana L Hanganu-Opatz
- Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, Hamburg Center of Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Thomas Klausberger
- Center for Brain Research, Division of Cognitive Neurobiology, Medical University of Vienna, Vienna, Austria
- Torfi Sigurdsson
- Institute of Neurophysiology, Goethe University, Frankfurt, Germany
- Andreas Nieder
- Animal Physiology Unit, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Simon N Jacob
- Translational Neurotechnology Laboratory, Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Marlene Bartos
- Institute for Physiology I, Medical Faculty, University of Freiburg, Freiburg im Breisgau, Germany
- Jonas-Frederic Sauer
- Institute for Physiology I, Medical Faculty, University of Freiburg, Freiburg im Breisgau, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health & Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Christian Leibold
- Faculty of Biology, Bernstein Center Freiburg, BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
- Ilka Diester
- Optophysiology - Optogenetics and Neurophysiology, IMBIT // BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
9
Ji M, Wang Z, Yan R, Liu Q, Xu S, Tang H. SCTN: Event-based object tracking with energy-efficient deep convolutional spiking neural networks. Front Neurosci 2023; 17:1123698. [PMID: 36875665 PMCID: PMC9978206 DOI: 10.3389/fnins.2023.1123698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Accepted: 01/30/2023] [Indexed: 02/18/2023] Open
Abstract
Event cameras are asynchronous, neuromorphically inspired visual sensors that have shown great potential in object tracking because they can easily detect moving objects. Since event cameras output discrete events, they are inherently suited to coordination with Spiking Neural Networks (SNNs), which have a unique event-driven computation characteristic and energy-efficient computing. In this paper, we tackle the problem of event-based object tracking with a novel architecture based on a discriminatively trained SNN, called the Spiking Convolutional Tracking Network (SCTN). Taking a segment of events as input, SCTN not only exploits implicit associations among events better than event-wise processing, but also fully utilizes precise temporal information and maintains a sparse representation in segments instead of frames. To make SCTN more suitable for object tracking, we propose a new loss function that introduces an exponential Intersection over Union (IoU) in the voltage domain. To the best of our knowledge, this is the first tracking network directly trained with an SNN. Besides, we present a new event-based tracking dataset, dubbed DVSOT21. Experimental results on DVSOT21 demonstrate that, in contrast to other competing trackers, our method achieves competitive performance with very low energy consumption compared with ANN-based trackers. With such low energy consumption, tracking on neuromorphic hardware will reveal its advantage.
Affiliation(s)
- Mingcheng Ji
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Ziling Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Rui Yan
- College of Computer Science, Zhejiang University of Technology, Hangzhou, China
- Qingjie Liu
- Machine Intelligence Laboratory, China Nanhu Academy of Electronics and Information Technology, Jiaxing, China
- Shu Xu
- Machine Intelligence Laboratory, China Nanhu Academy of Electronics and Information Technology, Jiaxing, China
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Zhejiang Lab, Hangzhou, China
10
Dehghani-Habibabadi M, Pawelzik K. Synaptic self-organization of spatio-temporal pattern selectivity. PLoS Comput Biol 2023; 19:e1010876. [PMID: 36780564 PMCID: PMC9977062 DOI: 10.1371/journal.pcbi.1010876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 03/01/2023] [Accepted: 01/17/2023] [Indexed: 02/15/2023] Open
Abstract
Spiking model neurons can be set up to respond selectively to specific spatio-temporal spike patterns by optimization of their input weights. It is unknown, however, whether existing synaptic plasticity mechanisms can achieve this temporal mode of neuronal coding and computation. Here it is shown that changes of synaptic efficacies which tend to balance excitatory and inhibitory synaptic inputs can make neurons sensitive to particular input spike patterns. Simulations demonstrate that a combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is sufficient for self-organizing sensitivity to spatio-temporal spike patterns that repeat in the input. In networks, the inclusion of hetero-synaptic plasticity that depends on the pre-synaptic neurons leads to specialization and faithful representation of pattern sequences by a group of target neurons. Pattern detection is robust against a range of distortions and noise. The proposed combination of Hebbian mechanisms, hetero-synaptic plasticity and synaptic scaling is found to protect the memories of specific patterns from being overwritten by ongoing learning during extended periods when the patterns are not present. This suggests a novel explanation for the long-term robustness of memory traces despite ongoing activity with substantial synaptic plasticity. Taken together, our results promote the plausibility of precise temporal coding in the brain.
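A minimal sketch of how two of the ingredients named above, a Hebbian term and synaptic scaling toward a target rate, can be combined in a single update; the hetero-synaptic term is omitted and all parameter values are assumptions for illustration, so this is not the paper's model.

```python
import numpy as np

def plasticity_step(w, pre, post, recent_rate, target_rate=5.0,
                    lr_hebb=1e-3, lr_scale=1e-4):
    """One combined update on a weight vector w.

    pre: vector of presynaptic activity, post: scalar postsynaptic activity,
    recent_rate: the neuron's recent output rate (Hz). Values are illustrative.
    """
    w = w + lr_hebb * pre * post                             # Hebbian co-activity term
    w = w * (1.0 + lr_scale * (target_rate - recent_rate))   # synaptic scaling
    return w
```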
Affiliation(s)
- Klaus Pawelzik
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
11
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. [PMID: 36844916 PMCID: PMC9950635 DOI: 10.3389/fninf.2023.941696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 01/16/2023] [Indexed: 02/12/2023] Open
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. The systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, in comparison to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 · 10⁶ neurons (> 3 · 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 · 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
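The scaling statement above amounts to a simple linear cost model; the sketch below is an interpretation for orientation only, with hypothetical parameter names that would have to be fitted per simulator (NEST or GeNN) and per machine.

```python
def wall_clock_estimate(fixed_cost_s, cost_per_synapse_update_s, n_synapses, bio_time_s):
    """Wall-clock time ~ fixed setup cost + a variable cost that grows linearly
    with the number of synapses and with the simulated biological time."""
    return fixed_cost_s + cost_per_synapse_update_s * n_synapses * bio_time_s

# Real-time operation then requires wall_clock_estimate(...) <= bio_time_s.
```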
Affiliation(s)
- Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
12
Hu L, Liao X. Voltage slope guided learning in spiking neural networks. Front Neurosci 2022; 16:1012964. [PMID: 36440266 PMCID: PMC9685168 DOI: 10.3389/fnins.2022.1012964] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Accepted: 10/25/2022] [Indexed: 04/19/2024] Open
Abstract
A thorny problem in machine learning is how to extract useful clues related to delayed feedback signals from the clutter of input activity, known as the temporal credit-assignment problem. Aggregate-label learning algorithms represent this problem explicitly by training spiking neurons to assign the aggregate feedback signal to potentially effective clues. However, earlier aggregate-label learning algorithms suffered from inefficiency due to the large amount of computation involved, while recent algorithms that have solved this problem may fail to learn because they cannot find adjustment points. Therefore, we propose a membrane voltage slope guided algorithm (VSG) to cope with this limitation. Depending directly on the membrane voltage to find the key points of weight adjustment allows VSG to avoid intensive computation; more importantly, because the membrane voltage always exists, the adjustment points can never be lost. Experimental results show that the proposed algorithm can correlate delayed feedback signals with the effective clues embedded in background spiking activity, and it also achieves excellent performance on real medical and speech classification datasets. This superior performance makes it a meaningful reference for aggregate-label learning in spiking neural networks.
Affiliation(s)
- Lvhui Hu
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Xin Liao
- Information Center, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
13
Yu Q, Song S, Ma C, Wei J, Chen S, Tan KC. Temporal Encoding and Multispike Learning Framework for Efficient Recognition of Visual Patterns. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3387-3399. [PMID: 33531306 DOI: 10.1109/tnnls.2021.3052804] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Biological systems, with their parallel, spike-based computation, endow individuals with the ability to respond promptly and reliably to different stimuli. Spiking neural networks (SNNs) have thus been developed to emulate their efficiency and to explore principles of spike-based processing. However, the design of a biologically plausible and efficient SNN for image classification remains a challenging task. Previous efforts can generally be clustered into two major categories in terms of the coding schemes employed: rate and temporal. Rate-based schemes suffer from inefficiency, whereas temporal-based ones typically end up with relatively poor accuracy. It is intriguing and important to develop an SNN in which both efficiency and efficacy are considered. In this article, we focus on temporal-based approaches and advance their accuracy by a large margin while retaining their efficiency. A new temporal-based framework integrated with multispike learning is developed for efficient recognition of visual patterns. Different approaches to encoding and learning under our framework are evaluated with the MNIST and Fashion-MNIST data sets. Experimental results demonstrate the efficient and effective performance of our temporal-based approaches across a variety of conditions, improving accuracies to levels that are even comparable to rate-based ones, but importantly with a lighter network structure and far fewer spikes. This article attempts to extend advanced multispike learning to the challenging task of image recognition and to bring the state of the art in temporal-based approaches to a new level. The experimental results could be favorable to low-power and high-speed requirements in the field of artificial intelligence and contribute to attracting more effort toward brain-like computing.
14
Dong J, Jiang R, Xiao R, Yan R, Tang H. Event stream learning using spatio-temporal event surface. Neural Netw 2022; 154:543-559. [DOI: 10.1016/j.neunet.2022.07.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/12/2022] [Accepted: 07/10/2022] [Indexed: 11/29/2022]
15
Abstract
Hebbian theory seeks to explain how the neurons in the brain adapt to stimuli to enable learning. An interesting feature of Hebbian learning is that it is an unsupervised method and, as such, does not require feedback, making it suitable in contexts where systems have to learn autonomously. This paper explores how molecular systems can be designed to show such protointelligent behaviors and proposes the first chemical reaction network (CRN) that can exhibit autonomous Hebbian learning across arbitrarily many input channels. The system emulates a spiking neuron, and we demonstrate that it can learn statistical biases of incoming inputs. The basic CRN is a minimal, thermodynamically plausible set of microreversible chemical equations that can be analyzed with respect to their energy requirements. However, to explore how such chemical systems might be engineered de novo, we also propose an extended version based on enzyme-driven compartmentalized reactions. Finally, we show how a purely DNA system, built upon the paradigm of DNA strand displacement, can realize neuronal dynamics. Our analysis provides a compelling blueprint for exploring autonomous learning in biological settings, bringing us closer to realizing real synthetic biological intelligence.
Affiliation(s)
- Jakub Fil
- APT Group, School of Computer Science, The University of Manchester, Manchester M13 9PL, United Kingdom
- Neil Dalchau
- Microsoft Research, Cambridge CB1 2FB, United Kingdom
- Dominique Chu
- CEMS, School of Computing, University of Kent, Canterbury CT2 7NF, United Kingdom
16
Mo L, Wang G, Long E, Zhuo M. ALSA: Associative Learning Based Supervised Learning Algorithm for SNN. Front Neurosci 2022; 16:838832. [PMID: 35431777 PMCID: PMC9008323 DOI: 10.3389/fnins.2022.838832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Accepted: 03/07/2022] [Indexed: 11/13/2022] Open
Abstract
The spiking neural network (SNN) is considered the brain-like model that best conforms to the biological mechanisms of the brain. Due to the non-differentiability of the spike, training methods for SNNs are still incomplete. This paper proposes a supervised learning method for SNNs based on associative learning: ALSA. The method is based on the associative learning mechanism, and its realization resembles the conditioned-reflex process in animal training, giving it strong physiological plausibility. It uses improved spike-timing-dependent plasticity (STDP) rules, combined with a teacher layer that induces spikes in output neurons, to strengthen synaptic connections between input spike patterns and specified output neurons and to weaken synaptic connections between unrelated patterns and unrelated output neurons. Using ALSA, this paper also completes supervised classification tasks on the IRIS and MNIST datasets, achieving 95.7% and 91.58% recognition accuracy, respectively, which demonstrates that ALSA is a feasible supervised learning method for SNNs. The contribution of this paper is to establish a biologically plausible supervised learning method for SNNs, based on STDP learning rules and the associative learning mechanism that is widespread in animal training.
17
Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:1364-1376. [PMID: 32356771 DOI: 10.1109/tcyb.2020.2984888] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Spikes are the currency of central nervous systems for information transmission and processing. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention to the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remain a challenging problem. In this article, we make our contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve processing efficiency. Based on the neuron model, we propose two new multispike learning rules which demonstrate better performance than other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules demonstrate high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task, and notably, a single neuron is capable of solving multicategory classification with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and find a new phenomenon of selectivity loss. In contrast, our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied. Moreover, our rules can not only detect features but also discriminate between them. The improved performance of our methods would contribute to neuromorphic computing as a preferable choice.
18
Yu Q, Song S, Ma C, Pan L, Tan KC. Synaptic Learning With Augmented Spikes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1134-1146. [PMID: 33471768 DOI: 10.1109/tnnls.2020.3040969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Traditional neuron models use analog values for information representation and computation, while all-or-nothing spikes are employed in the spiking ones. With a more brain-like processing paradigm, spiking neurons are more promising for improvements in efficiency and computational capability. They extend the computation of traditional neurons with an additional dimension of time carried by all-or-nothing spikes. Could one benefit from both the accuracy of analog values and the time-processing capability of spikes? In this article, we introduce a concept of augmented spikes to carry complementary information with spike coefficients in addition to spike latencies. New augmented spiking neuron model and synaptic learning rules are proposed to process and learn patterns of augmented spikes. We provide systematic insights into the properties and characteristics of our methods, including classification of augmented spike patterns, learning capacity, construction of causality, feature detection, robustness, and applicability to practical tasks, such as acoustic and visual pattern recognition. Our augmented approaches show several advanced learning properties and reliably outperform the baseline ones that use typical all-or-nothing spikes. Our approaches significantly improve the accuracies of a temporal-based approach on sound and MNIST recognition tasks to 99.38% and 97.90%, respectively, highlighting the effectiveness and potential merits of our methods. More importantly, our augmented approaches are versatile and can be easily generalized to other spike-based systems, contributing to a potential development for them, including neuromorphic computing.
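To make the notion concrete, a minimal sketch of how an augmented spike (a latency plus a coefficient) might be represented and weighted in a synaptic sum; the class name and kernel argument are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AugmentedSpike:
    time: float         # spike latency
    coefficient: float  # analog value carried alongside the all-or-nothing spike

def synaptic_drive(spikes: List[AugmentedSpike], weight: float,
                   kernel: Callable[[float], float], t: float) -> float:
    """Each spike contributes its kernel response scaled by BOTH the synaptic
    weight and its own coefficient."""
    return sum(weight * s.coefficient * kernel(t - s.time) for s in spikes)
```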
19
Bird AD, Jedlicka P, Cuntz H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput Biol 2021; 17:e1009202. [PMID: 34370727 PMCID: PMC8407571 DOI: 10.1371/journal.pcbi.1009202] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Revised: 08/31/2021] [Accepted: 06/19/2021] [Indexed: 11/25/2022] Open
Abstract
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron's afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
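A minimal sketch of the normalisation as described (each unit's afferent weights divided by the number of its incoming contacts); the array layout and mask convention are assumptions for illustration.

```python
import numpy as np

def dendritic_normalisation(weights, mask):
    """Divide each unit's afferent weights by its number of incoming contacts.

    weights: (n_in, n_out) dense weight matrix; mask: same shape, 1 where a
    sparse connection exists and 0 elsewhere.
    """
    fan_in = np.maximum(mask.sum(axis=0, keepdims=True), 1)  # contacts per output unit
    return (weights * mask) / fan_in
```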
Affiliation(s)
- Alex D. Bird
- Ernst Strüngmann Institute for Neuroscience (ESI) in co-operation with Max Planck Society, Frankfurt, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Peter Jedlicka
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Hermann Cuntz
- Ernst Strüngmann Institute for Neuroscience (ESI) in co-operation with Max Planck Society, Frankfurt, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
20
Wunderlich TC, Pehle C. Event-based backpropagation can compute exact gradients for spiking neural networks. Sci Rep 2021; 11:12829. [PMID: 34145314 PMCID: PMC8213775 DOI: 10.1038/s41598-021-91786-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 05/28/2021] [Indexed: 11/09/2022] Open
Abstract
Spiking neural networks combine analog computation with event-based communication using discrete spikes. While the impressive advances of deep learning are enabled by training non-spiking artificial neural networks using the backpropagation algorithm, applying this algorithm to spiking networks was previously hindered by the existence of discrete spike events and discontinuities. For the first time, this work derives the backpropagation algorithm for a continuous-time spiking neural network and a general loss function by applying the adjoint method together with the proper partial derivative jumps, allowing for backpropagation through discrete spike events without approximations. This algorithm, EventProp, backpropagates errors at spike times in order to compute the exact gradient in an event-based, temporally and spatially sparse fashion. We use gradients computed via EventProp to train networks on the Yin-Yang and MNIST datasets using either a spike time or voltage based loss function and report competitive performance. Our work supports the rigorous study of gradient-based learning algorithms in spiking neural networks and provides insights toward their implementation in novel brain-inspired hardware.
Affiliation(s)
- Timo C Wunderlich
- Kirchhoff-Institute for Physics, Heidelberg University, 69120 Heidelberg, Germany
- Berlin Institute of Health, Charité-Universitätsmedizin, 10117 Berlin, Germany
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, 69120 Heidelberg, Germany
21
Langdon AJ, Chaudhuri R. An evolving perspective on the dynamic brain: Notes from the Brain Conference on Dynamics of the brain: Temporal aspects of computation. Eur J Neurosci 2021; 53:3511-3524. [PMID: 32896026 PMCID: PMC7946155 DOI: 10.1111/ejn.14963] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 08/15/2020] [Accepted: 08/26/2020] [Indexed: 11/29/2022]
Affiliation(s)
- Angela J. Langdon
- Princeton Neuroscience Institute & Department of Psychology, Princeton University, Princeton, NJ, USA
- Rishidev Chaudhuri
- Center for Neuroscience, Department of Mathematics and Department of Neurobiology, Physiology & Behavior, University of California, Davis, Davis CA, USA
22
Song S, Ma C, Sun W, Xu J, Dang J, Yu Q. Efficient learning with augmented spikes: A case study with image classification. Neural Netw 2021; 142:205-212. [PMID: 34023641 DOI: 10.1016/j.neunet.2021.05.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 02/15/2021] [Accepted: 05/06/2021] [Indexed: 10/21/2022]
Abstract
Efficient learning of spikes plays a valuable role in training spiking neural networks (SNNs) to have desired responses to input stimuli. However, current learning rules are limited to a binary form of spikes. The seemingly ubiquitous phenomenon of burst in nervous systems suggests a new way to carry more information with spike bursts in addition to times. Based on this, we introduce an advanced form, the augmented spikes, where spike coefficients are used to carry additional information. How could neurons learn and benefit from augmented spikes remains unclear. In this paper, we propose two new efficient learning rules to process spatiotemporal patterns composed of augmented spikes. Moreover, we examine the learning abilities of our methods with a synthetic recognition task of augmented spike patterns and two practical ones for image classification. Experimental results demonstrate that our rules are capable of extracting information carried by both the timing and coefficient of spikes. Our proposed approaches achieve remarkable performance and good robustness under various noise conditions, as compared to benchmarks. The improved performance indicates the merits of augmented spikes and our learning rules, which could be beneficial and generalized to a broad range of spike-based platforms.
Affiliation(s)
- Shiming Song
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Chenxiang Ma
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Wei Sun
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Junhai Xu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Jianwu Dang
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
- Qiang Yu
- Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, 300350, China
23
Chalk M, Tkacik G, Marre O. Inferring the function performed by a recurrent neural network. PLoS One 2021; 16:e0248940. [PMID: 33857170 PMCID: PMC8049287 DOI: 10.1371/journal.pone.0248940] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Accepted: 03/08/2021] [Indexed: 11/19/2022] Open
Abstract
A central goal in systems neuroscience is to understand the functions performed by neural circuits. Previous top-down models addressed this question by comparing the behaviour of an ideal model circuit, optimised to perform a given function, with neural recordings. However, this requires guessing in advance what function is being performed, which may not be possible for many neural systems. To address this, we propose an inverse reinforcement learning (RL) framework for inferring the function performed by a neural network from data. We assume that the responses of each neuron in a network are optimised so as to drive the network towards 'rewarded' states, that are desirable for performing a given function. We then show how one can use inverse RL to infer the reward function optimised by the network from observing its responses. This inferred reward function can be used to predict how the neural network should adapt its dynamics to perform the same function when the external environment or network structure changes. This could lead to theoretical predictions about how neural network dynamics adapt to deal with cell death and/or varying sensory stimulus statistics.
Affiliation(s)
- Matthew Chalk
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, Paris, France
- Olivier Marre
- Institut de la Vision, INSERM, CNRS, Sorbonne Université, Paris, France
24
Zenke F, Vogels TP. The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Comput 2021; 33:899-925. [PMID: 33513328 DOI: 10.1162/neco_a_01367] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 11/06/2020] [Indexed: 01/10/2023]
Abstract
Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
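For orientation, a minimal sketch of the surrogate-gradient idea: the forward pass keeps the non-differentiable spike, while the backward pass substitutes a smooth derivative whose scale is a free parameter. The fast-sigmoid shape and the parameter beta are assumptions for illustration, not necessarily the study's specific choice.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: hard, non-differentiable threshold (Heaviside)."""
    return (v >= threshold).astype(float)

def surrogate_derivative(v, threshold=1.0, beta=10.0):
    """Backward-pass stand-in for d(spike)/dv: a fast-sigmoid surrogate.

    beta sets the surrogate's scale/steepness; the study reports that this
    scale, more than the surrogate's shape, can strongly affect learning.
    """
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2
```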
Affiliation(s)
- Friedemann Zenke
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K., and Institute for Science and Technology, 3400 Klosterneuburg, Austria
25
Zhang Y, Qu H, Luo X, Chen Y, Wang Y, Zhang M, Li Z. A new recursive least squares-based learning algorithm for spiking neurons. Neural Netw 2021; 138:110-125. [PMID: 33636484 DOI: 10.1016/j.neunet.2021.01.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Revised: 12/15/2020] [Accepted: 01/18/2021] [Indexed: 10/22/2022]
Abstract
Spiking neural networks (SNNs) are regarded as effective models for processing spatio-temporal information. However, the inherent complexity of temporal coding makes it an arduous task to put forward an effective supervised learning algorithm, which still puzzles researchers in this area. In this paper, we propose a Recursive Least Squares-Based Learning Rule (RLSBLR) for SNNs to generate the desired spatio-temporal spike train. During the learning process of our method, the weight update is driven by the cost function defined by the difference between the membrane potential and the firing threshold. The amount of weight modification depends not only on the impact of the current error function, but also on the previous error functions, which are evaluated with the current weights. To improve the learning performance, we integrate modified synaptic delay learning into the proposed RLSBLR. We conduct experiments in different settings, such as spike train lengths, numbers of inputs, firing rates, noise levels and learning parameters, to thoroughly investigate the performance of this learning algorithm. The proposed RLSBLR is compared with the competitive algorithms Perceptron-Based Spiking Neuron Learning Rule (PBSNLR) and Remote Supervised Method (ReSuMe). Experimental results demonstrate that the proposed RLSBLR can achieve higher learning accuracy, higher efficiency and better robustness against different types of noise. In addition, we apply the proposed RLSBLR to the open-source TIDIGITS database, and the results show that our algorithm performs well in practical applications.
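A generic recursive-least-squares update is sketched below for orientation; here the scalar `error` stands in for the gap between membrane potential and firing threshold that drives the rule described above, and the forgetting factor and initialisation are assumptions, not the paper's settings.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS estimator over an n-dimensional input (e.g. PSP traces)."""

    def __init__(self, n, forgetting=0.99, delta=1.0):
        self.w = np.zeros(n)          # synaptic weights
        self.P = np.eye(n) / delta    # inverse correlation estimate
        self.forgetting = forgetting

    def step(self, x, error):
        """Update weights from input vector x and a scalar error signal.

        Through P, the update reflects previous errors as well as the current
        one, which is the defining property of recursive least squares.
        """
        Px = self.P @ x
        gain = Px / (self.forgetting + x @ Px)
        self.w = self.w + gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.forgetting
        return self.w
```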
Affiliation(s)
- Yun Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Hong Qu
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Xiaoling Luo
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yi Chen
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Yuchen Wang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Malu Zhang
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, PR China
- Zefang Li
- China Coal Research Institute, Beijing 100013, PR China
26
Yu Q, Yao Y, Wang L, Tang H, Dang J, Tan KC. Robust Environmental Sound Recognition With Sparse Key-Point Encoding and Efficient Multispike Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:625-638. [PMID: 32203038 DOI: 10.1109/tnnls.2020.2978764] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The capability for environmental sound recognition (ESR) can determine the fitness of individuals, allowing them to avoid dangers or pursue opportunities when critical sound events occur. The fundamental principles of biological systems that give rise to this remarkable ability remain mysterious. Additionally, the practical importance of ESR has attracted an increasing amount of research attention, but the chaotic and nonstationary nature of environmental sounds continues to make it a challenging task. In this article, we propose a spike-based framework for the ESR task from a more brain-like perspective. Our framework is a unifying system with consistent integration of three major functional parts: sparse encoding, efficient learning, and robust readout. We first introduce a simple sparse encoding, where key points are used for feature representation, and demonstrate its generalization to both spike- and nonspike-based systems. Then, we evaluate the learning properties of different learning rules in detail, adding our own contributions for improvement. Our results highlight the advantages of multispike learning, providing a selection reference for various spike-based developments. Finally, we combine the multispike readout with the other parts to form a system for ESR. Experimental results show that our framework performs best compared with other baseline approaches. In addition, we show that our spike-based framework has several advantageous characteristics, including early decision making, learning from small datasets, and ongoing dynamic processing. Our framework is the first attempt to apply the multispike characteristic of nervous neurons to ESR. The strong performance of our approach may draw more research effort toward pushing the boundaries of the spike-based paradigm to a new horizon.
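A simple way to picture the sparse key-point encoding mentioned here is to keep only local maxima of a log-spectrogram as spike events. The sketch below is a toy construction of ours, not the paper's pipeline; the synthetic signal, window sizes and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

# Toy sparse key-point encoding: local maxima of a log-spectrogram become
# spike events (frequency channel, spike time). Parameters are assumptions.

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)  # toy "sound"

f, times, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
logS = np.log(S + 1e-10)

# a bin is a key point if it is the maximum of its local neighbourhood
# and sits sufficiently far above the mean spectrogram energy
local_max = logS == maximum_filter(logS, size=(5, 5))
key_points = local_max & (logS > logS.mean() + 2 * logS.std())

channels, frames = np.nonzero(key_points)
spike_times = times[frames]
print(f"{spike_times.size} spikes kept out of {logS.size} time-frequency bins")
```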
Collapse
|
27
|
Chu D, Le Nguyen H. Constraints on Hebbian and STDP learned weights of a spiking neuron. Neural Netw 2021; 135:192-200. [PMID: 33401225 DOI: 10.1016/j.neunet.2020.12.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 12/09/2020] [Accepted: 12/10/2020] [Indexed: 10/22/2022]
Abstract
We analyse mathematically the constraints on weights resulting from Hebbian and STDP learning rules applied to a spiking neuron with weight normalisation. In the case of pure Hebbian learning, we find that the normalised weights equal the promotion probabilities of weights up to correction terms that depend on the learning rate and are usually small. A similar relation can be derived for STDP algorithms, where the normalised weight values reflect a difference between the promotion and demotion probabilities of the weight. These relations are practically useful in that they allow checking for convergence of Hebbian and STDP algorithms. Another application is novelty detection. We demonstrate this using the MNIST dataset.
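The relation analysed here can be checked numerically with a very small simulation: apply a Hebbian promotion to the weights of active afferents, renormalise the weights, and compare the converged values with the empirical promotion frequencies. The rule and all parameters below are our own toy assumptions, used only to illustrate the kind of convergence check the paper motivates.

```python
import numpy as np

# Toy check of the weight/promotion-probability relation: Hebbian promotion of
# active afferents followed by weight normalisation. Parameters are assumptions.

rng = np.random.default_rng(2)
n_in, eta, steps = 20, 0.01, 20000
p_active = rng.uniform(0.05, 0.5, n_in)      # how often each afferent is active

w = np.full(n_in, 1.0 / n_in)
promotions = np.zeros(n_in)

for _ in range(steps):
    x = (rng.random(n_in) < p_active).astype(float)  # input pattern
    w += eta * x                                     # Hebbian promotion
    w /= w.sum()                                     # weight normalisation
    promotions += x

promo_prob = promotions / promotions.sum()           # empirical promotion probabilities
print("max |normalised weight - promotion probability|:",
      np.abs(w - promo_prob).max())
```

With a small learning rate the residual difference stays on the order of the correction terms the paper describes, which is what makes this a practical convergence test.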
Collapse
Affiliation(s)
- Dominique Chu
- CEMS, School of Computing, University of Kent, CT2 7NF, Canterbury, UK.
| | - Huy Le Nguyen
- CEMS, School of Computing, University of Kent, CT2 7NF, Canterbury, UK
| |
Collapse
|
28
|
Zhu X, Zhao B, Ma D, Tang H. An Efficient Learning Algorithm for Direct Training Deep Spiking Neural Networks. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2021.3073846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Xiaolei Zhu
- College of Microelectronics, Zhejiang University, Hangzhou, China, and also with Zhejiang Lab, Hangzhou, China
| | - Baixin Zhao
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China.
| | - De Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China, and also with Zhejiang Lab, Hangzhou, China
| | - Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China, and also with Zhejiang Lab, Hangzhou, China
| |
Collapse
|
29
|
Rapp H, Nawrot MP. A spiking neural program for sensorimotor control during foraging in flying insects. Proc Natl Acad Sci U S A 2020; 117:28412-28421. [PMID: 33122439 PMCID: PMC7668073 DOI: 10.1073/pnas.2009821117] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Foraging is a vital behavioral task for living organisms. Behavioral strategies and abstract mathematical models thereof have been described in detail for various species. To explore the link between underlying neural circuits and computational principles, we present how a biologically detailed neural circuit model of the insect mushroom body implements sensory processing, learning, and motor control. We focus on cast and surge strategies employed by flying insects when foraging within turbulent odor plumes. Using a spike-based plasticity rule, the model rapidly learns to associate individual olfactory sensory cues paired with food in a classical conditioning paradigm. We show that, without retraining, the system dynamically recalls memories to detect relevant cues in complex sensory scenes. Accumulation of this sensory evidence on short time scales generates cast-and-surge motor commands. Our generic systems approach predicts that population sparseness facilitates learning, while temporal sparseness is required for dynamic memory recall and precise behavioral control. Our work successfully combines biological computational principles with spike-based machine learning. It shows how knowledge transfer from static to arbitrarily complex dynamic conditions can be achieved by foraging insects and may serve as inspiration for agent-based machine learning.
Collapse
Affiliation(s)
- Hannes Rapp
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne 50674, Germany
| | - Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne 50674, Germany
| |
Collapse
|
30
|
Zhang M, Wu J, Belatreche A, Pan Z, Xie X, Chua Y, Li G, Qu H, Li H. Supervised learning in spiking neural networks with synaptic delay-weight plasticity. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.03.079] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
|
31
|
Abstract
We present a theory of neural circuits’ design and function, inspired by the random connectivity of real neural circuits and the mathematical power of random projections. Specifically, we introduce a family of statistical models for large neural population codes, a straightforward neural circuit architecture that would implement these models, and a biologically plausible learning rule for such circuits. The resulting neural architecture suggests a design principle for neural circuits, namely that they learn to compute the mathematical surprise of their inputs, given past inputs, without an explicit teaching signal. We applied these models to recordings from large neural populations in monkeys’ visual and prefrontal cortices and found them to be highly accurate, efficient, and scalable. The brain represents and reasons probabilistically about complex stimuli and motor actions using a noisy, spike-based neural code. A key building block for such neural computations, as well as the basis for supervised and unsupervised learning, is the ability to estimate the surprise or likelihood of incoming high-dimensional neural activity patterns. Despite progress in statistical modeling of neural responses and deep learning, current approaches either do not scale to large neural populations or cannot be implemented using biologically realistic mechanisms. Inspired by the sparse and random connectivity of real neuronal circuits, we present a model for neural codes that accurately estimates the likelihood of individual spiking patterns and has a straightforward, scalable, efficient, learnable, and realistic neural implementation. This model’s performance on simultaneously recorded spiking activity of >100 neurons in the monkey visual and prefrontal cortices is comparable with or better than that of state-of-the-art models. Importantly, the model can be learned using a small number of samples and using a local learning rule that utilizes noise intrinsic to neural circuits. Slower, structural changes in random connectivity, consistent with rewiring and pruning processes, further improve the efficiency and sparseness of the resulting neural representations. Our results merge insights from neuroanatomy, machine learning, and theoretical neuroscience to suggest random sparse connectivity as a key design principle for neuronal computation.
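A toy version of the surprise-estimation idea can be built from sparse random projections: threshold a few hundred random sums of the population pattern, learn the marginal activation probability of each projection unit, and score new patterns by the summed log-probabilities. The sketch below is our own simplified construction under these assumptions, not the authors' model.

```python
import numpy as np

# Toy random-projection surprise estimator (an assumption-heavy sketch, not the
# published model): sparse random wiring, thresholded projection units, learned
# unit marginals, and a log-likelihood score used for novelty detection.

rng = np.random.default_rng(9)
n_neurons, n_proj, n_samples = 100, 300, 5000

base_rates = rng.uniform(0.05, 0.4, n_neurons)                  # "familiar" statistics
train = (rng.random((n_samples, n_neurons)) < base_rates).astype(float)

A = (rng.random((n_proj, n_neurons)) < 0.1).astype(float)       # sparse random wiring
thresholds = A.sum(axis=1) * 0.25

def project(patterns):
    return (patterns @ A.T > thresholds).astype(float)

p_unit = project(train).mean(axis=0).clip(1e-3, 1 - 1e-3)       # learned marginals

def log_likelihood(pattern):
    z = project(pattern[None, :])[0]
    return np.sum(z * np.log(p_unit) + (1 - z) * np.log(1 - p_unit))

familiar = (rng.random(n_neurons) < base_rates).astype(float)
novel = (rng.random(n_neurons) < 0.8).astype(float)             # off-distribution pattern
print("familiar vs novel log-likelihood:",
      log_likelihood(familiar), log_likelihood(novel))
```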
Collapse
|
32
|
Taherkhani A, Cosma G, McGinnity TM. Optimization of Output Spike Train Encoding for a Spiking Neuron Based on its Spatio–Temporal Input Pattern. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2019.2909355] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
33
|
Fil J, Chu D. Minimal Spiking Neuron for Solving Multilabel Classification Tasks. Neural Comput 2020; 32:1408-1429. [PMID: 32433898 DOI: 10.1162/neco_a_01290] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The multispike tempotron (MST) is a powerful, single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
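The "temporal autocorrelation of the membrane potential" singled out here is easy to see in a minimal leaky unit: a per-step decay factor couples successive membrane values. The snippet below is an illustrative sketch of ours, not the paper's GNM; the decay, threshold and input statistics are assumptions.

```python
import numpy as np

# Minimal leaky-membrane sketch: the decay factor alpha induces the temporal
# autocorrelation of the membrane variable. All constants are assumptions.

rng = np.random.default_rng(3)
n_in, T = 30, 500                   # afferents, time steps
alpha = 0.9                         # per-step membrane decay
theta = 2.0                         # firing threshold

w = rng.normal(0, 0.3, n_in)
spikes_in = rng.random((T, n_in)) < 0.05    # random input spike raster

v = 0.0
v_trace, out_spikes = [], []
for t in range(T):
    v = alpha * v + w @ spikes_in[t].astype(float)   # leaky integration
    if v >= theta:
        out_spikes.append(t)
        v = 0.0                                      # reset after an output spike
    v_trace.append(v)

v_trace = np.array(v_trace)
lag1 = np.corrcoef(v_trace[:-1], v_trace[1:])[0, 1]
print(f"{len(out_spikes)} output spikes, lag-1 autocorrelation of V: {lag1:.2f}")
```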
Collapse
Affiliation(s)
- Jakub Fil
- School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
| | - Dominique Chu
- School of Computing, University of Kent, Canterbury CT2 7NF, U.K.
| |
Collapse
|
34
|
Xie X, Liu G, Cai Q, Sun G, Zhang M, Qu H. An end-to-end functional spiking model for sequential feature learning. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105643] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
35
|
Moldwin T, Segev I. Perceptron Learning and Classification in a Modeled Cortical Pyramidal Cell. Front Comput Neurosci 2020; 14:33. [PMID: 32390819 PMCID: PMC7193948 DOI: 10.3389/fncom.2020.00033] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 03/25/2020] [Indexed: 12/04/2022] Open
Abstract
The perceptron learning algorithm and its multiple-layer extension, the backpropagation algorithm, are the foundations of the present-day machine learning revolution. However, these algorithms utilize a highly simplified mathematical abstraction of a neuron; it is not clear to what extent real biophysical neurons with morphologically extended non-linear dendritic trees and conductance-based synapses can realize perceptron-like learning. Here we implemented the perceptron learning algorithm in a realistic biophysical model of a layer 5 cortical pyramidal cell with a full complement of non-linear dendritic channels. We tested this biophysical perceptron (BP) on a classification task, where it needed to correctly classify 100, 1,000, or 2,000 patterns into two classes, and on a generalization task, where it was required to discriminate between two "noisy" patterns. We show that the BP performs these tasks with an accuracy comparable to that of the original perceptron, though the classification capacity of the apical tuft is somewhat limited. We conclude that cortical pyramidal neurons can act as powerful classification devices.
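For reference, the textbook perceptron rule that the study maps onto the detailed pyramidal cell model looks as follows in its point-neuron form. The synthetic random patterns and constants are our own assumptions for illustration.

```python
import numpy as np

# Classical perceptron learning rule on synthetic patterns (a minimal sketch;
# pattern statistics and constants are assumptions).

rng = np.random.default_rng(4)
n_syn, n_patterns, eta = 100, 80, 0.1

X = rng.random((n_patterns, n_syn))          # presynaptic activation patterns
labels = rng.choice([-1, 1], n_patterns)     # desired binary classes

w, b = np.zeros(n_syn), 0.0
for epoch in range(100):
    errors = 0
    for x, y in zip(X, labels):
        if y * (w @ x + b) <= 0:             # pattern currently misclassified
            w += eta * y * x                 # strengthen or weaken active synapses
            b += eta * y
            errors += 1
    if errors == 0:                          # perceptron has converged
        break

print("epochs used:", epoch + 1, "errors in final epoch:", errors)
```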
Collapse
Affiliation(s)
- Toviah Moldwin
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Idan Segev
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
36
|
Rapp H, Nawrot MP, Stern M. Numerical Cognition Based on Precise Counting with a Single Spiking Neuron. iScience 2020; 23:100852. [PMID: 32058964 PMCID: PMC7005464 DOI: 10.1016/j.isci.2020.100852] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Revised: 11/24/2019] [Accepted: 01/14/2020] [Indexed: 12/24/2022] Open
Abstract
Insects are able to solve basic numerical cognition tasks. We show that estimation of numerosity can be realized and learned by a single spiking neuron with an appropriate synaptic plasticity rule. This model can be efficiently trained to detect arbitrary spatiotemporal spike patterns on a noisy and dynamic background with high precision and low variance. When put to the test in a task that requires counting of visual concepts in a static image, it required considerably fewer training epochs than a convolutional neural network to achieve equal performance. When mimicking a behavioral task in free-flying bees that requires numerical cognition, the model reaches a similar success rate in making correct decisions. We propose that using action potentials to represent basic numerical concepts with a single spiking neuron is beneficial for organisms with small brains and limited neuronal resources.
Collapse
Affiliation(s)
- Hannes Rapp
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Zülpicher Straße 47b, 50923 Cologne, Germany.
| | - Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Zülpicher Straße 47b, 50923 Cologne, Germany
| | - Merav Stern
- Department of Applied Mathematics, University of Washington, Lewis Hall 201, Box 353925, Seattle, WA 98195-3925, USA
| |
Collapse
|
37
|
Pan Z, Chua Y, Wu J, Zhang M, Li H, Ambikairajah E. An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks. Front Neurosci 2020; 13:1420. [PMID: 32038132 PMCID: PMC6987407 DOI: 10.3389/fnins.2019.01420] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Accepted: 12/16/2019] [Indexed: 12/11/2022] Open
Abstract
The auditory front-end is an integral part of a spiking neural network (SNN) when performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech and audio, into an efficient, effective and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, including the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we also built and published two spiking versions of speech datasets, Spike-TIDIGITS and Spike-TIMIT, for researchers to use and for benchmarking in future SNN research.
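A stripped-down auditory front-end in this spirit can be put together from a band-pass filter bank, half-wave rectification, and threshold crossings emitted as spikes. The sketch below is our own simplification: it omits the psychoacoustic masking stage, and the filter bank, rectifier and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Simplified auditory spike encoder (our sketch, not BAE): band-pass filter
# bank -> half-wave rectification -> threshold crossings as spike times.

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

centre_freqs = [250, 500, 1000, 2000, 4000]          # assumed "cochlear" channels (Hz)
thr = 0.3                                            # assumed spike threshold
spike_trains = []
for fc in centre_freqs:
    b, a = butter(2, [0.7 * fc / (fs / 2), 1.3 * fc / (fs / 2)], btype="band")
    env = np.maximum(lfilter(b, a, audio), 0.0)      # half-wave rectified band output
    crossings = np.flatnonzero((env[1:] >= thr) & (env[:-1] < thr))
    spike_trains.append(crossings / fs)              # spike times in seconds

for fc, st in zip(centre_freqs, spike_trains):
    print(f"{fc:5d} Hz channel: {st.size} spikes")
```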
Collapse
Affiliation(s)
- Zihan Pan
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Yansong Chua
- Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore, Singapore
| | - Jibin Wu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Malu Zhang
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Haizhou Li
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Eliathamby Ambikairajah
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW, Australia
| |
Collapse
|
38
|
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513 PMCID: PMC6821748 DOI: 10.1038/s41467-019-12572-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 09/18/2019] [Indexed: 01/11/2023] Open
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
Collapse
|
39
|
Deng L, Wu Y, Hu X, Liang L, Ding Y, Li G, Zhao G, Li P, Xie Y. Rethinking the performance comparison between SNNS and ANNS. Neural Netw 2019; 121:294-307. [PMID: 31586857 DOI: 10.1016/j.neunet.2019.09.005] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Revised: 09/04/2019] [Accepted: 09/05/2019] [Indexed: 01/21/2023]
Abstract
Artificial neural networks (ANNs), a popular path towards artificial intelligence, have experienced remarkable success via mature models, various benchmarks, open-source datasets, and powerful computing platforms. Spiking neural networks (SNNs), a category of promising models that mimic the neuronal dynamics of the brain, have gained much attention for brain-inspired computing and have been widely deployed on neuromorphic devices. However, for a long time there have been ongoing debates and skepticism about the value of SNNs in practical applications. Aside from the low-power benefit of spike-driven processing, SNNs usually perform worse than ANNs, especially in terms of application accuracy. Recently, researchers have attempted to address this issue by borrowing learning methodologies from ANNs, such as backpropagation, to train high-accuracy SNN models. The rapid progress in this domain continuously produces impressive results with ever-increasing network sizes, following a growth path similar to that of deep learning. Although these approaches endow SNNs with the capability to approach the accuracy of ANNs, the natural superiorities of SNNs and the ways to outperform ANNs are potentially lost due to the use of ANN-oriented workloads and simplistic evaluation metrics. In this paper, we take the visual recognition task as a case study to answer the questions of what workloads are ideal for SNNs and how SNNs should be evaluated. We design a series of contrast tests using different types of datasets (ANN-oriented and SNN-oriented), diverse processing models, signal conversion methods, and learning algorithms. We propose comprehensive metrics on application accuracy and the cost of memory and compute to evaluate these models, and we conduct extensive experiments. We show that on ANN-oriented workloads, SNNs fail to beat their ANN counterparts, while on SNN-oriented workloads, SNNs can indeed perform better. We further demonstrate that in SNNs there exists a trade-off between application accuracy and execution cost, which is affected by the simulation time window and the firing threshold. Based on these analyses, we recommend the most suitable model for each scenario. To the best of our knowledge, this is the first work to use systematic comparisons to explicitly show that straightforwardly porting workloads from ANNs to SNNs is unwise, even though many works do so, and that a comprehensive evaluation indeed matters. Finally, we highlight the urgent need to build a benchmarking framework for SNNs with broader tasks, datasets, and metrics.
Collapse
Affiliation(s)
- Lei Deng
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China; Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
| | - Yujie Wu
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China.
| | - Xing Hu
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
| | - Ling Liang
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
| | - Yufei Ding
- Department of Computer Science, University of California, Santa Barbara, CA 93106, USA.
| | - Guoqi Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing 100084, China.
| | - Guangshe Zhao
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Peng Li
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
| | - Yuan Xie
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
| |
Collapse
|
40
|
Puelma Touzel M, Wolf F. Statistical mechanics of spike events underlying phase space partitioning and sequence codes in large-scale models of neural circuits. Phys Rev E 2019; 99:052402. [PMID: 31212548 DOI: 10.1103/physreve.99.052402] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2017] [Indexed: 11/07/2022]
Abstract
Cortical circuits operate in an inhibition-dominated regime of spiking activity. Recently, it was found that spiking circuit models in this regime can, despite disordered connectivity and asynchronous irregular activity, exhibit a locally stable dynamics that may be used for neural computation. The lack of existing mathematical tools has precluded analytical insight into this phase. Here we present analytical methods tailored to the granularity of spike-based interactions for analyzing attractor geometry in high-dimensional spiking dynamics. We apply them to reveal the properties of the complex geometry of trajectories of population spiking activity in a canonical model of locally stable spiking dynamics. We find that attractor basin boundaries are the preimages of spike-time collision events involving connected neurons. These spike-based instabilities control the divergence rate of neighboring basins and have no equivalent in rate-based models. They are located according to the disordered connectivity at a random subset of edges in a hypercube representation of the phase space. Iterating backward these edges using the stable dynamics induces a partition refinement on this space that converges to the attractor basins. We formulate a statistical theory of the locations of such events relative to attracting trajectories via a tractable representation of local trajectory ensembles. Averaging over the disorder, we derive the basin diameter distribution, whose characteristic scale emerges from the relative strengths of the stabilizing inhibitory coupling and destabilizing spike interactions. Our study provides an approach to analytically dissect how connectivity, coupling strength, and single-neuron dynamics shape the phase space geometry in the locally stable regime of spiking neural circuit dynamics.
Collapse
Affiliation(s)
- Maximilian Puelma Touzel
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany and Mila, Université de Montréal, Montréal, Quebec, Canada H2S 3H1
| | - Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; Faculty of Physics, Georg August University, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, 37077 Göttingen, Germany; and Kavli Institute for Theoretical Physics, University of California, Santa Barbara, Santa Barbara, California 93106-4111, USA
| |
Collapse
|
41
|
Luo X, Qu H, Zhang Y, Chen Y. First Error-Based Supervised Learning Algorithm for Spiking Neural Networks. Front Neurosci 2019; 13:559. [PMID: 31244594 PMCID: PMC6563788 DOI: 10.3389/fnins.2019.00559] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Accepted: 05/15/2019] [Indexed: 11/13/2022] Open
Abstract
Neural circuits respond to multiple sensory stimuli by firing precisely timed spikes. Inspired by this phenomenon, spike timing-based spiking neural networks (SNNs) have been proposed to process and memorize spatiotemporal spike patterns. However, the response speed and accuracy of existing SNN learning algorithms still lag behind those of the human brain. To further improve the performance of learning precisely timed spikes, we propose a new weight updating mechanism that always adjusts the synaptic weights at the time of the first wrong output spike. The proposed learning algorithm can accurately adjust the synaptic weights that contribute to the membrane potential at desired and non-desired firing times. Experimental results demonstrate that the proposed algorithm shows higher accuracy, better robustness, and lower computational cost compared with the remote supervised method (ReSuMe) and the spike pattern association neuron (SPAN), which are classic sequence learning algorithms. In addition, the SNN-based computational model equipped with the proposed learning method achieves better recognition results in a speech recognition task compared with other bio-inspired baseline systems.
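The "first wrong output spike" idea can be sketched very compactly: simulate the neuron, find the earliest time step where the actual and desired spike trains disagree, and adjust weights only from that one error. The code below is an assumption-heavy toy of ours (simplified SRM-style neuron, crude reset, made-up constants), not the published algorithm.

```python
import numpy as np

# Toy first-error-based learning: correct only the earliest disagreement
# between actual and desired output spikes. All constants are assumptions.

rng = np.random.default_rng(5)
n_in, T, dt = 40, 300, 1.0                 # afferents, time steps, ms per step
tau, theta, eta = 10.0, 1.0, 0.01

inputs = rng.random((T, n_in)) < 0.02      # Poisson-like input raster
desired = {100, 200}                       # desired output spike steps
w = rng.normal(0.05, 0.01, n_in)

def run(w):
    """Return output spike steps and the PSP trace used for credit assignment."""
    psp, out, psp_hist = np.zeros(n_in), [], []
    for t in range(T):
        psp = psp * np.exp(-dt / tau) + inputs[t]
        psp_hist.append(psp.copy())
        if w @ psp >= theta:
            out.append(t)
            psp = np.zeros(n_in)           # crude reset after an output spike
    return out, psp_hist

for epoch in range(200):
    out, psp_hist = run(w)
    missing = sorted(desired - set(out))
    spurious = sorted(set(out) - desired)
    if not (missing or spurious):
        break                              # output matches the desired train
    first_err = min(missing + spurious)
    sign = 1.0 if first_err in desired else -1.0
    w += eta * sign * psp_hist[first_err]  # correct only the first error

print("epochs used:", epoch, "final output spikes:", out)
```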
Collapse
Affiliation(s)
- Xiaoling Luo
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Hong Qu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yun Zhang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yi Chen
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
42
|
Yu Q, Li H, Tan KC. Spike Timing or Rate? Neurons Learn to Make Decisions for Both Through Threshold-Driven Plasticity. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:2178-2189. [PMID: 29993593 DOI: 10.1109/tcyb.2018.2821692] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Spikes play an essential role in information transmission in the central nervous system, but how neurons learn from them remains a challenging question. Most algorithms have studied how to train spiking neurons to process patterns encoded under the sole assumption of either a rate or a temporal code. Is there a general learning algorithm capable of processing both codes, regardless of the intense debate between them within the neuroscience community? In this paper, we propose several threshold-driven plasticity algorithms to address this question. In addition to formulating the algorithms, we also provide proofs of several properties, such as robustness and convergence. The experimental results illustrate that our algorithms are simple, effective and efficient for training neurons to learn spike patterns. Due to their simplicity and high efficiency, our algorithms would be potentially beneficial for both software and hardware implementations. Neurons trained with our algorithms can also detect and recognize embedded features from background sensory activity. With the proposed algorithms, a single neuron can successfully perform multicategory classification by making decisions based on its output spike count in response to each category. Spike patterns being processed can be encoded with both spike rates and precise timings. When afferent spike timings matter, neurons will automatically extract temporal features without being explicitly instructed which time points to fire at.
Collapse
|
43
|
Zhang M, Qu H, Belatreche A, Chen Y, Yi Z. A Highly Effective and Robust Membrane Potential-Driven Supervised Learning Method for Spiking Neurons. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:123-137. [PMID: 29993588 DOI: 10.1109/tnnls.2018.2833077] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Spiking neurons are becoming increasingly popular owing to their biological plausibility and promising computational properties. Unlike traditional rate-based neural models, spiking neurons encode information in the temporal patterns of the transmitted spike trains, which makes them more suitable for processing spatiotemporal information. One of the fundamental computations of spiking neurons is to transform streams of input spike trains into precisely timed firing activity. However, the existing learning methods, used to realize such computation, often result in relatively low accuracy performance and poor robustness to noise. In order to address these limitations, we propose a novel highly effective and robust membrane potential-driven supervised learning (MemPo-Learn) method, which enables the trained neurons to generate desired spike trains with higher precision, higher efficiency, and better noise robustness than the current state-of-the-art spiking neuron learning methods. While the traditional spike-driven learning methods use an error function based on the difference between the actual and desired output spike trains, the proposed MemPo-Learn method employs an error function based on the difference between the output neuron membrane potential and its firing threshold. The efficiency of the proposed learning method is further improved through the introduction of an adaptive strategy, called skip scan training strategy, that selectively identifies the time steps when to apply weight adjustment. The proposed strategy enables the MemPo-Learn method to effectively and efficiently learn the desired output spike train even when much smaller time steps are used. In addition, the learning rule of MemPo-Learn is improved further to help mitigate the impact of the input noise on the timing accuracy and reliability of the neuron firing dynamics. The proposed learning method is thoroughly evaluated on synthetic data and is further demonstrated on real-world classification tasks. Experimental results show that the proposed method can achieve high learning accuracy with a significant improvement in learning time and better robustness to different types of noise.
Collapse
|
44
|
Wu J, Chua Y, Zhang M, Li H, Tan KC. A Spiking Neural Network Framework for Robust Sound Classification. Front Neurosci 2018; 12:836. [PMID: 30510500 PMCID: PMC6252336 DOI: 10.3389/fnins.2018.00836] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 10/26/2018] [Indexed: 11/26/2022] Open
Abstract
Environmental sounds form part of our daily life. With the advancement of deep learning models and the abundance of training data, the performance of automatic sound classification (ASC) systems has improved significantly in recent years. However, the high computational cost, and hence high power consumption, remains a major hurdle for large-scale implementation of ASC systems on mobile and wearable devices. Motivated by the observation that humans are highly effective and consume little power whilst analyzing complex audio scenes, we propose a biologically plausible ASC framework, namely SOM-SNN. This framework uses an unsupervised self-organizing map (SOM) for representing the frequency contents embedded within the acoustic signals, followed by an event-based spiking neural network (SNN) for spatiotemporal spiking pattern classification. We report experimental results on the RWCP environmental sound and TIDIGITS spoken digits datasets, which demonstrate competitive classification accuracies over other deep learning and SNN-based models. The SOM-SNN framework is also shown to be highly robust to corrupting noise after multi-condition training, whereby the model is trained with noise-corrupted sound samples. Moreover, we demonstrate the early decision making capability of the proposed framework: an accurate classification can be made with only a partial presentation of the input.
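The unsupervised front half of such a framework, the self-organizing map, can be written in a few lines; the spiking classifier back end is omitted here. Map size, learning rate and neighbourhood schedule in this sketch are assumptions, and the random "frame spectra" stand in for real audio features.

```python
import numpy as np

# Minimal self-organising map trained on stand-in spectral frames (a sketch;
# map size, schedules and data are assumptions, and the SNN back end is omitted).

rng = np.random.default_rng(6)
n_feat, grid = 20, (8, 8)                        # e.g. 20 frequency bands, 8x8 map
data = rng.random((1000, n_feat))                # stand-in for frame spectra

weights = rng.random((grid[0], grid[1], n_feat))
coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1)

n_iter = 5000
for i in range(n_iter):
    x = data[rng.integers(len(data))]
    # best-matching unit: node whose weight vector is closest to the input
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), grid)
    # learning rate and neighbourhood radius both shrink over time
    lr = 0.5 * (1 - i / n_iter)
    sigma = 3.0 * (1 - i / n_iter) + 0.5
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

print("quantisation error:",
      np.mean([np.min(np.linalg.norm(weights - d, axis=-1)) for d in data]))
```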
Collapse
Affiliation(s)
- Jibin Wu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Yansong Chua
- Institute for Infocomm Research, ASTAR, Singapore, Singapore
| | - Malu Zhang
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Haizhou Li
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore.,Institute for Infocomm Research, ASTAR, Singapore, Singapore
| | - Kay Chen Tan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
| |
Collapse
|
45
|
Masquelier T. STDP Allows Close-to-Optimal Spatiotemporal Spike Pattern Detection by Single Coincidence Detector Neurons. Neuroscience 2018; 389:133-140. [PMID: 28668487 PMCID: PMC6372004 DOI: 10.1016/j.neuroscience.2017.06.032] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2017] [Accepted: 06/19/2017] [Indexed: 11/24/2022]
Abstract
Repeating spike patterns exist and are informative. Can a single cell do the readout? We show how a leaky integrate-and-fire (LIF) neuron can do this readout optimally. The optimal membrane time constant is short, possibly much shorter than the pattern. Spike-timing-dependent plasticity (STDP) can turn a neuron into an optimal detector. These results may explain how humans can learn repeating visual or auditory sequences.
Repeating spatiotemporal spike patterns exist and carry information. How this information is extracted by downstream neurons is unclear. Here we theoretically investigate to what extent a single cell could detect a given spike pattern and what the optimal parameters to do so are, in particular the membrane time constant τ. Using a leaky integrate-and-fire (LIF) neuron with homogeneous Poisson input, we computed this optimum analytically. We found that a relatively small τ (at most a few tens of ms) is usually optimal, even when the pattern is much longer. This is somewhat counter-intuitive as the resulting detector ignores most of the pattern, due to its fast memory decay. Next, we wondered if spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimum. We simulated a LIF equipped with additive STDP, and repeatedly exposed it to a given input spike pattern. As in previous studies, the LIF progressively became selective to the repeating pattern with no supervision, even when the pattern was embedded in Poisson activity. Here we show that, using certain STDP parameters, the resulting pattern detector is optimal. These mechanisms may explain how humans learn repeating sensory sequences. Long sequences could be recognized thanks to coincidence detectors working at a much shorter timescale. This is consistent with the fact that recognition is still possible if a sound sequence is compressed, played backward, or scrambled using 10-ms bins. Coincidence detection is a simple yet powerful mechanism, which could be the main function of neurons in the brain.
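The experimental setup analysed here (an LIF neuron with additive STDP, repeatedly exposed to a pattern embedded in Poisson activity) can be reproduced schematically as follows. Every constant in this sketch (time step, time constants, STDP amplitudes, rates) is an illustrative assumption rather than a published parameter value.

```python
import numpy as np

# Schematic LIF neuron with additive STDP and a repeating embedded pattern
# (assumption-heavy sketch, not the paper's simulation).

rng = np.random.default_rng(7)
n_in, dt = 200, 0.001                       # afferents, 1 ms time step
tau_m, theta = 0.01, 15.0                   # short membrane time constant, threshold
a_plus, a_minus, tau_stdp = 0.01, 0.012, 0.02

pattern = rng.random((50, n_in)) < 0.02     # a fixed 50 ms spike pattern
w = rng.uniform(0.0, 1.0, n_in)

for _ in range(300):
    raster = rng.random((200, n_in)) < 0.02          # 200 ms Poisson background
    raster[75:125] = pattern                         # embed the pattern mid-trial
    v, last_pre, last_post = 0.0, np.full(n_in, -np.inf), -np.inf
    for t in range(raster.shape[0]):
        now = t * dt
        v = v * np.exp(-dt / tau_m) + w[raster[t]].sum()
        last_pre[raster[t]] = now
        if v >= theta:                               # postsynaptic spike
            v = 0.0
            # LTP: potentiate synapses whose last spike preceded this output spike
            w += a_plus * np.exp(-(now - last_pre) / tau_stdp)
            last_post = now
        elif last_post > -np.inf:
            # LTD: depress synapses that spike shortly after the last output spike
            w[raster[t]] -= a_minus * np.exp(-(now - last_post) / tau_stdp)
        w = np.clip(w, 0.0, 1.0)

in_pattern = pattern.any(axis=0)
print("mean weight, pattern vs background afferents:",
      w[in_pattern].mean(), w[~in_pattern].mean())
```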
Collapse
|
46
|
Redundancy in synaptic connections enables neurons to learn optimally. Proc Natl Acad Sci U S A 2018; 115:E6871-E6879. [PMID: 29967182 DOI: 10.1073/pnas.1803274115] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
Recent experimental studies suggest that, in cortical microcircuits of the mammalian brain, the majority of neuron-to-neuron connections are realized by multiple synapses. However, it is not known whether such redundant synaptic connections provide any functional benefit. Here, we show that redundant synaptic connections enable near-optimal learning in cooperation with synaptic rewiring. By constructing a simple dendritic neuron model, we demonstrate that with multisynaptic connections synaptic plasticity approximates a sample-based Bayesian filtering algorithm known as particle filtering, and wiring plasticity implements its resampling process. Extending the proposed framework to a detailed single-neuron model of perceptual learning in the primary visual cortex, we show that the model accounts for many experimental observations. In particular, the proposed model reproduces the dendritic position dependence of spike-timing-dependent plasticity and the functional synaptic organization on the dendritic tree based on the stimulus selectivity of presynaptic neurons. Our study provides a conceptual framework for synaptic plasticity and rewiring.
Collapse
|
47
|
Zenke F, Ganguli S. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Comput 2018; 30:1514-1541. [PMID: 29652587 PMCID: PMC6118408 DOI: 10.1162/neco_a_01086] [Citation(s) in RCA: 135] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Accepted: 01/23/2018] [Indexed: 01/02/2023]
Abstract
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
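The surrogate-gradient idea at the heart of this approach can be illustrated on a single neuron: use the hard spike threshold in the forward pass, but substitute a smooth "fast sigmoid" derivative of the membrane potential for the spike's gradient in the weight update. The rate-matching toy task and all constants below are our assumptions, not the published SuperSpike code.

```python
import numpy as np

# Toy surrogate-gradient update for a single spiking unit (a sketch; the task
# and every constant are assumptions, not the published implementation).

rng = np.random.default_rng(8)
n_in, T, beta, theta, eta = 30, 100, 5.0, 1.0, 0.05

def surrogate_grad(v):
    """Fast-sigmoid derivative, used in place of the true (zero or undefined) spike gradient."""
    return 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2

inputs = rng.random((T, n_in)) < 0.05       # fixed input spike raster
target_rate = 0.1                           # desired output spikes per time step
w = rng.normal(0.0, 0.1, n_in)

for epoch in range(200):
    syn, alpha = np.zeros(n_in), 0.9        # synaptic trace and its decay
    grad, n_spikes = np.zeros(n_in), 0.0
    for t in range(T):
        syn = alpha * syn + inputs[t]
        v = w @ syn                         # membrane potential
        n_spikes += float(v >= theta)       # hard threshold in the forward pass
        # d(spike)/dw is approximated by surrogate_grad(v) * d(v)/dw
        grad += surrogate_grad(v) * syn
    err = n_spikes / T - target_rate        # simple firing-rate loss
    w -= eta * err * grad / T               # gradient step through the surrogate

print("final firing rate:", n_spikes / T)
```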
Collapse
Affiliation(s)
- Friedemann Zenke
- Department of Applied Physics, Stanford University, Stanford, CA 94305, U.S.A., and Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, U.K
| | - Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA 94305, U.S.A.
| |
Collapse
|
48
|
Abstract
What can artificial intelligence learn from neuroscience, and vice versa?
Collapse
Affiliation(s)
- Adam Shai
- Department of Biology, Stanford University, Stanford, United States
| | | |
Collapse
|
49
|
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. Proc Natl Acad Sci U S A 2017; 114:E9366-E9375. [PMID: 29042519 DOI: 10.1073/pnas.1705841114] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly.
Collapse
|
50
|
Kuśmierz Ł, Isomura T, Toyoizumi T. Learning with three factors: modulating Hebbian plasticity with errors. Curr Opin Neurobiol 2017; 46:170-177. [PMID: 28918313 DOI: 10.1016/j.conb.2017.08.020] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2017] [Accepted: 08/30/2017] [Indexed: 01/06/2023]
Abstract
Synaptic plasticity is a central theme in neuroscience. A framework of three-factor learning rules provides a powerful abstraction, helping to navigate through the abundance of models of synaptic plasticity. It is well-known that the dopamine modulation of learning is related to reward, but theoretical models predict other functional roles of the modulatory third factor; it may encode errors for supervised learning, summary statistics of the population activity for unsupervised learning or attentional feedback. Specialized structures may be needed in order to generate and propagate third factors in the neural network.
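In its simplest form, a three-factor rule multiplies a Hebbian coincidence term by a global modulatory signal. The toy example below gates the pre-times-post term with a reward-prediction error; the task, the response rule and all constants are assumptions made for illustration only.

```python
import numpy as np

# Minimal three-factor update: Hebbian term (pre x post) gated by a
# dopamine-like reward-prediction error (an illustrative toy, not a model
# from the review; all constants are assumptions).

rng = np.random.default_rng(9)
n_in, eta, trials = 10, 0.05, 2000
target = (rng.random(n_in) > 0.5).astype(float)   # hidden "correct" stimulus-response map

w = np.zeros(n_in)
reward_baseline = 0.0
for _ in range(trials):
    pre = (rng.random(n_in) < 0.3).astype(float)             # presynaptic activity
    post = float(w @ pre + rng.normal(0, 0.1) > 0.5)         # noisy postsynaptic response
    reward = float(post == float(target @ pre > 0.5))        # 1 if the response was correct
    third_factor = reward - reward_baseline                  # reward-prediction error
    reward_baseline += 0.01 * (reward - reward_baseline)     # running reward estimate
    w += eta * third_factor * pre * post                     # three-factor weight change

print("running average reward after training:", round(reward_baseline, 3))
```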
Collapse
Affiliation(s)
- Łukasz Kuśmierz
- RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Takuya Isomura
- RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Taro Toyoizumi
- RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan.
| |
Collapse
|