1
Vilimelis Aceituno P, Ehsani M, Jost J. Spiking time-dependent plasticity leads to efficient coding of predictions. Biol Cybern 2020; 114:43-61. PMID: 31873797; PMCID: PMC7062862; DOI: 10.1007/s00422-019-00813-w.
Abstract
Latency reduction in postsynaptic spikes is a well-known effect of spiking time-dependent plasticity (STDP). We extend this notion to long postsynaptic spike trains on single neurons, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. We then study the consequences of this phenomenon for coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic cost of frequent stimuli. Finally, we illustrate that the reduction in postsynaptic latencies can lead to the emergence of predictions.
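The core effect is straightforward to illustrate in simulation. Below is a minimal sketch under our own assumptions (a bare LIF with instantaneous EPSPs, a pair-based additive STDP rule, and illustrative constants; this is not the authors' model): the neuron is driven repeatedly by the same fixed input spike train, and STDP should both thin out the postsynaptic spike train and pull the surviving spikes earlier.

```python
import numpy as np

# Sketch only: illustrative parameters, not the paper's model.
rng = np.random.default_rng(0)
n_in, T, dt = 100, 100.0, 0.1            # inputs, trial length (ms), step (ms)
tau_m, v_thresh = 10.0, 1.0              # membrane time constant, threshold
a_plus, a_minus, tau_stdp = 0.02, 0.021, 20.0   # STDP amplitudes and window
w = rng.uniform(0.3, 0.5, n_in)          # initial synaptic weights
pre_times = rng.uniform(0, T, n_in)      # fixed input train: one spike each

def run_trial(w):
    """One trial: simulate the LIF, then apply pair-based additive STDP."""
    v, post_spikes = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        v *= np.exp(-dt / tau_m)                      # membrane leak
        v += w[np.abs(pre_times - t) < dt / 2].sum()  # EPSPs arriving now
        if v >= v_thresh:
            post_spikes.append(t)
            v = 0.0                                   # reset
    for t_post in post_spikes:
        lag = t_post - pre_times                      # >0: pre before post (LTP)
        w += np.where(lag > 0,
                      a_plus * np.exp(-lag / tau_stdp),
                      -a_minus * np.exp(lag / tau_stdp))
    return post_spikes, np.clip(w, 0.0, 1.0)

for _ in range(200):
    spikes, w = run_trial(w)
print("postsynaptic spikes after learning (ms):", np.round(spikes, 1))
```

Tracking the spike count and the first spike time across trials should show both falling, which is the latency-reduction effect the paper builds on.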
Affiliation(s)
- Pau Vilimelis Aceituno
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103, Leipzig, Germany.
- Max Planck School of Cognition, Stephanstraße 1a, 04103, Leipzig, Germany.
- Masud Ehsani
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103, Leipzig, Germany
- Jürgen Jost
- Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103, Leipzig, Germany
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM, 87501, USA
2
Masquelier T. STDP Allows Close-to-Optimal Spatiotemporal Spike Pattern Detection by Single Coincidence Detector Neurons. Neuroscience 2018; 389:133-140. PMID: 28668487; PMCID: PMC6372004; DOI: 10.1016/j.neuroscience.2017.06.032.
Abstract
Repeating spike patterns exist and are informative. Can a single cell do the readout? We show how a leaky integrate-and-fire (LIF) neuron can do this readout optimally. The optimal membrane time constant is short, possibly much shorter than the pattern. Spike-timing-dependent plasticity (STDP) can turn a neuron into an optimal detector. These results may explain how humans can learn repeating visual or auditory sequences.
Repeating spatiotemporal spike patterns exist and carry information. How this information is extracted by downstream neurons is unclear. Here we theoretically investigate to what extent a single cell could detect a given spike pattern and what the optimal parameters to do so are, in particular the membrane time constant τ. Using a leaky integrate-and-fire (LIF) neuron with homogeneous Poisson input, we computed this optimum analytically. We found that a relatively small τ (at most a few tens of ms) is usually optimal, even when the pattern is much longer. This is somewhat counter-intuitive as the resulting detector ignores most of the pattern, due to its fast memory decay. Next, we wondered if spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimum. We simulated a LIF equipped with additive STDP, and repeatedly exposed it to a given input spike pattern. As in previous studies, the LIF progressively became selective to the repeating pattern with no supervision, even when the pattern was embedded in Poisson activity. Here we show that, using certain STDP parameters, the resulting pattern detector is optimal. These mechanisms may explain how humans learn repeating sensory sequences. Long sequences could be recognized thanks to coincidence detectors working at a much shorter timescale. This is consistent with the fact that recognition is still possible if a sound sequence is compressed, played backward, or scrambled using 10-ms bins. Coincidence detection is a simple yet powerful mechanism, which could be the main function of neurons in the brain.
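The protocol described in this abstract can be sketched compactly. The toy below uses our own illustrative parameters and a simplified STDP shortcut (on each postsynaptic spike, recently active synapses are potentiated and all others slightly depressed) rather than the paper's exact additive rule; qualitatively, the LIF should become selective to the frozen pattern.

```python
import numpy as np

# Sketch only: illustrative parameters and simplified STDP, not the paper's.
rng = np.random.default_rng(1)
n_in, rate_hz, dt = 500, 5.0, 1.0         # afferents, background rate, ms step
tau_m, thresh = 10.0, 8.0                 # short membrane time constant
a_plus, a_minus, ltp_window = 0.03, 0.025, 20.0
p = rate_hz * dt / 1000.0
w = rng.uniform(0.2, 0.6, n_in)

pat_len = 50                              # 50-ms frozen pattern
pattern = rng.random((pat_len, n_in)) < p # same statistics as the noise

last_pre = np.full(n_in, -1e9)
v, t = 0.0, 0.0
for seg in range(2000):
    block = pattern if seg % 4 == 0 else rng.random((pat_len, n_in)) < p
    for row in block:                     # one row of input spikes per ms
        t += dt
        v = v * np.exp(-dt / tau_m) + w[row].sum()
        last_pre[row] = t
        if v >= thresh:                   # postsynaptic spike
            recent = (t - last_pre) < ltp_window
            w = np.clip(w + np.where(recent, a_plus, -a_minus), 0.0, 1.0)
            v = 0.0

def responses(block, w):
    """Count postsynaptic spikes for one 50-ms block with frozen weights."""
    v, hits = 0.0, 0
    for row in block:
        v = v * np.exp(-dt / tau_m) + w[row].sum()
        if v >= thresh:
            hits, v = hits + 1, 0.0
    return hits

print("pattern:", responses(pattern, w),
      "noise:", responses(rng.random((pat_len, n_in)) < p, w))
```

After enough presentations the first count should be positive and the second near zero, i.e. the neuron fires selectively within the learned pattern.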
3
Masquelier T, Kheradpisheh SR. Optimal Localist and Distributed Coding of Spatiotemporal Spike Patterns Through STDP and Coincidence Detection. Front Comput Neurosci 2018; 12:74. PMID: 30279653; PMCID: PMC6153331; DOI: 10.3389/fncom.2018.00074.
Abstract
Repeating spatiotemporal spike patterns exist and carry information. Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to either one of several patterns (distributed coding, i.e., the neuron's response is ambiguous but the identity of the pattern could be inferred from the response of multiple neurons), but not to random inputs. To do so, we extended a theory developed in a previous paper (Masquelier, 2017), which was limited to localist coding. More specifically, we computed analytically the signal-to-noise ratio (SNR) of a multi-pattern-detector neuron, using a threshold-free leaky integrate-and-fire (LIF) neuron model with non-plastic unitary synapses and homogeneous Poisson inputs. Surprisingly, when the number of patterns is increased, the SNR decreases slowly and remains acceptable for several tens of independent patterns. In addition, we investigated whether spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimal SNR. To this end, we simulated a LIF equipped with STDP and repeatedly exposed it to multiple input spike patterns embedded in equally dense Poisson spike trains. The LIF progressively became selective to every repeating pattern with no supervision, and stopped discharging during the Poisson spike trains. Furthermore, with appropriately tuned STDP parameters, the resulting pattern detectors were optimal. Tens of independent patterns could be learned by a single neuron using a low adaptive threshold, in contrast with previous studies, in which higher thresholds led to localist coding only. Taken together, these results suggest that coincidence detection and STDP are powerful mechanisms, fully compatible with distributed coding. We acknowledge, however, that our theory is limited to single neurons, and thus also applies to feed-forward networks, but not to recurrent ones.
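The analytical SNR can be approximated by Monte-Carlo simulation, keeping a few of the paper's ingredients (threshold-free LIF, unitary synapses, homogeneous Poisson inputs) while everything else is our own simplifying assumption (disjoint patterns, illustrative parameters). The point to observe is the slow decay of the SNR as patterns are added.

```python
import numpy as np

# Sketch only: Monte-Carlo stand-in for the paper's analytical computation.
rng = np.random.default_rng(2)
tau_m, window, dt = 10.0, 50.0, 1.0   # membrane tau, pattern length, step (ms)
M, rate_hz = 100, 5.0                 # afferents per pattern, Poisson rate
p = rate_hz * dt / 1000.0

def peak(drive):
    """Peak potential of a threshold-free LIF over one window."""
    v, vmax = 0.0, 0.0
    for s in drive:
        v = v * np.exp(-dt / tau_m) + s
        vmax = max(vmax, v)
    return vmax

def snr(n_patterns, trials=300):
    n_syn = n_patterns * M            # unitary synapses, patterns disjoint
    steps = int(window / dt)
    pat = np.bincount(rng.integers(0, steps, M), minlength=steps).astype(float)
    sig = [peak(rng.binomial(n_syn - M, p, steps) + pat) for _ in range(trials)]
    noi = [peak(rng.binomial(n_syn, p, steps).astype(float)) for _ in range(trials)]
    return (np.mean(sig) - np.mean(noi)) / np.std(noi)

for P in (1, 5, 20, 50):
    print(P, "patterns -> SNR ~", round(snr(P), 2))
```

The slow decay falls out of the construction: each added pattern enlarges the synapse pool, so the noise standard deviation grows roughly with the square root of the pattern count while the per-pattern signal stays fixed.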
Affiliation(s)
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, UMR5549 CNRS-Université Toulouse 3, Toulouse, France.
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC, Universidad de Sevilla, Sevilla, Spain
- Saeed R Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
4
Wang R, van Schaik A. Breaking Liebig's Law: An Advanced Multipurpose Neuromorphic Engine. Front Neurosci 2018; 12:593. PMID: 30210278; PMCID: PMC6123369; DOI: 10.3389/fnins.2018.00593.
Abstract
We present a massively parallel, scalable, multi-purpose neuromorphic engine. All existing neuromorphic hardware systems suffer from Liebig's law (the performance of a system is limited by the component in shortest supply), as they have fixed numbers of dedicated neurons and synapses for specific types of plasticity. For any application, it is always the availability of one of these components that limits the size of the model, leaving the others unused. To overcome this problem, our engine adopts a novel architecture: an array of identical components, each of which can be configured as a leaky integrate-and-fire (LIF) neuron, a learning synapse, or an axon with a trainable delay. Spike-timing-dependent plasticity (STDP) and spike-timing-dependent delay plasticity (STDDP) are the two supported learning rules. All parameters are stored in SRAM, so the engine supports runtime reconfiguration. As a proof of concept, we implemented a prototype system with 16 neural engines, each consisting of 32768 (32k) components, yielding half a million components on an entry-level FPGA (Altera Cyclone V). We verified the prototype system with measurement results. To demonstrate that our neuromorphic engine is a high-performance and scalable digital design, we also implemented it in TSMC 28 nm HPC technology. Place-and-route results using Cadence Innovus at a clock frequency of 2.5 GHz show that this engine achieves an excellent area efficiency of 1.68 μm² per component: 256k (2^18) components in a silicon area of 650 μm × 680 μm (∼0.44 mm², with 98.7% utilization of the silicon area). The power consumption of this engine is 37 mW, yielding a power efficiency of 0.92 pJ per synaptic operation (SOP).
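The headline figures are internally consistent, as a quick back-of-envelope check confirms (inputs are the numbers quoted above; the throughput on the last line is our own inference, not a reported figure):

```python
components = 2 ** 18            # 256k components
area_um2 = 650 * 680            # quoted die area, in square micrometers
print(area_um2 / components)    # -> ~1.69 um^2 per component (quoted: 1.68)

power_w = 37e-3                 # 37 mW
j_per_sop = 0.92e-12            # 0.92 pJ per synaptic operation
print(power_w / j_per_sop)      # -> ~4.0e10 SOP/s implied peak throughput
```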
Affiliation(s)
- Runchun Wang
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
5
Bhalla US. Dendrites, deep learning, and sequences in the hippocampus. Hippocampus 2017; 29:239-251. PMID: 29024221; DOI: 10.1002/hipo.22806.
Abstract
The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This holds for sensory representations, for memory, and for behavioral context. How does it accommodate such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, motor control, action planning, and memory. Despite this strong evidence for the phenomenon, there are currently more models than definitive experiments about how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been extensively studied at the behavioral, systems, and modeling levels, but here too the physiological mechanisms are less well established. It is against this backdrop that I discuss two recent developments in neural sequence computation that at face value share little beyond the label "neural": dendritic sequence discrimination and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory, apparently extreme ends of the spectrum of neural circuit detail. I suggest that each of these topics holds deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation.
Affiliation(s)
- Upinder S Bhalla
- Neurobiology, National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bellary Road, Bangalore 560065, Karnataka, India
6
Burroni J, Taylor P, Corey C, Vachnadze T, Siegelmann HT. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks. Front Neurosci 2017; 11:80. PMID: 28289370; PMCID: PMC5326782; DOI: 10.3389/fnins.2017.00080.
Abstract
Overview: We model energy constraints in a network of spiking neurons and use the model to explore, in abstract terms, how resource limitation shapes network function. Background: Metabolic states such as dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet computational models of glia-neuron cooperation have not previously explored the effects of direct, realistic energy costs on network activity in spiking neurons. Biologically realistic spiking neural networks currently assume that membrane potential is the main driving factor for neural spiking and do not take energetic costs into consideration. Methods: We define local energy pools that constrain a neuron model, termed the Spiking Neuron Energy Pool (SNEP) model, which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation provides an easy-to-use GUI, can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes the behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function that attempt biological realism may benefit from including energy constraints. Further, we assert that the observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications.
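The Lotka-Volterra analysis mentioned in the results can be illustrated with a two-variable toy reduction; this is our own construction with illustrative constants, not the SNEP simulator. Mean population activity consumes a regenerating energy pool, and the coupled pair oscillates with no external pacemaker.

```python
import numpy as np

# Sketch only: a Lotka-Volterra caricature of the activity/energy coupling.
alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0

def deriv(s):
    e, a = s
    return np.array([alpha * e - beta * e * a,    # energy regrows, spiking consumes it
                     delta * e * a - gamma * a])  # activity feeds on energy, else decays

def rk4_step(s, h):
    """One fourth-order Runge-Kutta step (Euler would spiral outward here)."""
    k1 = deriv(s); k2 = deriv(s + h / 2 * k1)
    k3 = deriv(s + h / 2 * k2); k4 = deriv(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s, h, activity = np.array([0.5, 1.5]), 0.01, []
for _ in range(5000):
    s = rk4_step(s, h)
    activity.append(s[1])
print("activity range over the run:", round(min(activity), 2), "-", round(max(activity), 2))
```

The persistent gap between the minimum and maximum is the self-sustained oscillation; raising beta, the energy cost of spiking, changes its amplitude and period, in the spirit of observation (2) above.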
Affiliation(s)
- Javier Burroni
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
- P Taylor
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
- Neuroscience and Behavior Program, University of Massachusetts Amherst, MA, USA
- Cassian Corey
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
- Tengiz Vachnadze
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
- Hava T Siegelmann
- Biologically Inspired Neural and Dynamical Systems Laboratory, College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
- Neuroscience and Behavior Program, University of Massachusetts Amherst, MA, USA
7
Koutsou A, Bugmann G, Christodoulou C. On learning time delays between the spikes from different input neurons in a biophysical model of a pyramidal neuron. Biosystems 2015; 136:80-89. PMID: 26341613; DOI: 10.1016/j.biosystems.2015.08.005.
Abstract
Biological systems are able to recognise temporal sequences of stimuli and to compute in the temporal domain. In this paper we explore whether a biophysical model of a pyramidal neuron can detect and learn systematic time delays between the spikes from different input neurons. In particular, we investigate whether it is possible to reinforce pairs of synapses separated by a dendritic propagation time delay corresponding to the arrival time difference of two spikes from two different input neurons. We examine two subthreshold learning approaches: the first relies on the backpropagation of EPSPs (excitatory postsynaptic potentials), and the second on the backpropagation of a somatic action potential whose production is supported by a learning-enabling background current. The first approach does not provide a learning signal that sufficiently differentiates between synapses at different locations, while in the second approach, somatic spikes do not provide a reliable signal distinguishing arrival time differences of the order of the dendritic propagation time. It appears that the firing of pyramidal neurons shows little sensitivity to heterosynaptic spike arrival time differences of several milliseconds. Such a neuron is therefore unlikely to be able to learn to detect these differences.
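Stripped of the biophysics, the learning goal can be stated as a toy problem; everything below is our own abstraction with illustrative values, not the paper's pyramidal-cell model. Input B fires a fixed lag after input A, and a coincidence-based rule reinforces whichever candidate dendritic delay compensates that lag.

```python
import numpy as np

# Sketch only: an idealized delay-selection rule, not the biophysical model.
rng = np.random.default_rng(3)
lag = 6.0                               # true A-to-B arrival difference (ms)
delays = np.arange(0.0, 11.0, 1.0)      # candidate dendritic delays for A
w = np.ones_like(delays)                # one synapse pair per candidate delay
tau_c, lr = 1.0, 0.1                    # coincidence width (ms), learning rate

for _ in range(500):
    t_a = rng.uniform(0, 50)
    t_b = t_a + lag + rng.normal(0, 0.5)          # jittered lag
    # EPSP from A via delay d arrives at t_a + d; from B directly at t_b.
    coincidence = np.exp(-((t_a + delays - t_b) ** 2) / (2 * tau_c ** 2))
    w += lr * coincidence                         # Hebbian-style reinforcement

print("reinforced delay:", delays[np.argmax(w)], "ms; true lag:", lag, "ms")
```

The paper's negative result is precisely that a biophysical pyramidal neuron does not appear to supply a learning signal this clean for arrival-time differences of a few milliseconds.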
Affiliation(s)
- Achilleas Koutsou
- Department of Computer Science, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus.
- Guido Bugmann
- School of Computing, Electronics and Mathematics, Plymouth University, Drake Circus, PL4 8AA Plymouth, United Kingdom.
- Chris Christodoulou
- Department of Computer Science, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus.
8
Bhalla US. Multiscale modeling and synaptic plasticity. Prog Mol Biol Transl Sci 2014; 123:351-386. PMID: 24560151; DOI: 10.1016/b978-0-12-397897-4.00012-7.
Abstract
Synaptic plasticity is a major convergence point for theory and computation, and the process of plasticity engages physiology, cell biology, and molecular biology. In its many manifestations, plasticity is at the hub of basic neuroscience questions about memory and development, as well as more medically themed questions of neural damage and recovery. As an important cellular locus of memory, synaptic plasticity has received a huge amount of experimental and theoretical attention. If computational models have tended to pick specific aspects of plasticity, such as STDP, and reduce them to an equation, some experimental studies are equally guilty of oversimplification each time they identify a new molecule and declare it the last word in plasticity and learning. Multiscale modeling begins with the acknowledgment that synaptic function spans many levels of signaling, and that these are so tightly coupled that we risk losing essential features of plasticity if we focus exclusively on any one level. Despite the technical challenges and the gaps in data for model specification, an increasing number of multiscale modeling studies have taken on key questions in plasticity. These have provided new insights and, importantly, have opened new avenues for questioning. This review discusses a wide range of multiscale models in plasticity, including their technical landscape and their implications.
Affiliation(s)
- Upinder S Bhalla
- National Centre for Biological Sciences, Bangalore, Karnataka, India
9
Modeling the formation process of grouping stimuli sets through cortical columns and microcircuits to feature neurons. Comput Intell Neurosci 2013; 2013:290358. PMID: 24369455; PMCID: PMC3863480; DOI: 10.1155/2013/290358.
Abstract
A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits that decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature-classifier layer become orientation selective when they receive bar patterns of different slopes from an input layer. The input layer is mapped and intertwined by self-evolving neuronal microcircuits to the feature-classifier layer. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform that maps the input pattern space to a feature-representing output space. The self-learning of this mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced that serve as a guide for applying model-derived repetitive stimulus pattern sets to in vitro cultures of neuron ensembles, in order to condition them to learn and execute a mathematical transform.
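The orientation-selectivity case study can be caricatured with a competitive Hebbian rule; the toy below is our own construction with illustrative parameters and is far simpler than the paper's column-and-microcircuit model. Feature-layer neurons repeatedly shown oriented bars come to prefer distinct orientations.

```python
import numpy as np

# Sketch only: winner-take-all Hebbian learning on oriented bar patterns.
rng = np.random.default_rng(4)
size, n_feat, lr = 8, 4, 0.05

def bar(theta):
    """Flattened 8x8 binary image of a bar through the center at angle theta."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    return (np.abs(x * np.sin(theta) - y * np.cos(theta)) < 0.25).astype(float).ravel()

angles = np.linspace(0, np.pi, n_feat, endpoint=False)
W = rng.random((n_feat, size * size))     # feature-classifier weights

for _ in range(2000):
    stim = bar(rng.choice(angles) + rng.normal(0, 0.05))  # jittered slope
    winner = np.argmax(W @ stim)          # winner-take-all competition
    W[winner] += lr * (stim - W[winner])  # move the winner toward the stimulus

for th in angles:
    print(f"bar at {np.degrees(th):5.1f} deg -> neuron {np.argmax(W @ bar(th))}")
```

With four well-separated orientations, each feature neuron typically captures one of them, a minimal instance of the input-to-feature-space transform the overview describes.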