51
Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks. J Comput Neurosci 2012; 34:319-36. [PMID: 23053861] [PMCID: PMC3605499] [DOI: 10.1007/s10827-012-0423-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6]
Abstract
We study the collective dynamics of a Leaky Integrate and Fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store several phase-coded spike patterns in the connectivity, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and we measure the storage capacity. Modulating the spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle while preserving the phase relationships, allowing a coding scheme in which phase, rate, and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is also studied: because the dynamics is a retrieval process, the units maintain stable and precise phase relationships and a single common oscillation frequency, even in noisy conditions and with heterogeneous internal parameters.
52
Florian RV. The chronotron: a neuron that learns to fire temporally precise spike patterns. PLoS One 2012; 7:e40233. [PMID: 22879876] [PMCID: PMC3412872] [DOI: 10.1371/journal.pone.0040233] [Citation(s) in RCA: 131] [Impact Index Per Article: 10.9]
Abstract
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
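The I-learning rule summarized above has a compact form: each input's weight changes in proportion to that input's synaptic current, sampled positively at the target output spike times and negatively at the actual ones. A rough Python sketch of this idea follows (not the paper's code; the kernel shape, time constant, and learning rate are assumptions):

```python
import numpy as np

def psc_kernel(t, tau=5.0):
    """Causal exponential postsynaptic-current kernel (assumed shape)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    out[t >= 0] = np.exp(-t[t >= 0] / tau)
    return out

def i_learning_step(w, input_spikes, actual_out, target_out, lr=0.01):
    """w: (n_syn,) weights; input_spikes: one spike-time array per synapse;
    actual_out/target_out: lists of output spike times."""
    for i, spikes in enumerate(input_spikes):
        current_at = lambda t: psc_kernel(t - spikes).sum()
        # potentiate toward the target spikes, depress at erroneous actual spikes
        w[i] += lr * (sum(current_at(t) for t in target_out)
                      - sum(current_at(t) for t in actual_out))
    return w
```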
Affiliation(s)
- Răzvan V Florian
- Center for Cognitive and Neural Studies, Romanian Institute of Science and Technology, Cluj-Napoca, Romania.
53
Mohemmed A, Schliebs S, Matsuda S, Kasabov N. SPAN: Spike Pattern Association Neuron for Learning Spatio-Temporal Spike Patterns. Int J Neural Syst 2012; 22:1250012. [PMID: 22830962] [DOI: 10.1142/s0129065712500128] [Citation(s) in RCA: 111] [Impact Index Per Article: 9.3]
Abstract
Spiking Neural Networks (SNN) have been shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion, allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, the well-known Widrow–Hoff rule can be applied directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated with respect to its learning capabilities, its memory capacity, its robustness to noisy stimuli, and its classification performance. Differences and similarities between SPAN and two related algorithms, ReSuMe and Chronotron, are discussed.
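Because the abstract spells out the mechanism (convolve every spike train with a kernel, then apply Widrow–Hoff to the resulting analog signals), a minimal numpy sketch is possible; the kernel, bin width, and learning rate below are assumptions rather than the authors' settings:

```python
import numpy as np

def alpha_kernel(tau=5.0, dt=0.1, length=50.0):
    t = np.arange(0.0, length, dt)
    return (t / tau) * np.exp(1.0 - t / tau)   # unit-peak alpha kernel

def span_update(w, x_spikes, y_actual, y_target, dt=0.1, lr=0.001):
    """w: (n_syn,); x_spikes: (n_syn, T) binned 0/1 inputs; y_*: (T,) 0/1."""
    k = alpha_kernel(dt=dt)
    smooth = lambda s: np.convolve(s, k)[: len(s)]   # causal convolution
    err = smooth(y_target) - smooth(y_actual)        # analog error signal
    for i in range(len(w)):
        # Widrow-Hoff on the convolved (now analog) spike trains
        w[i] += lr * dt * np.sum(smooth(x_spikes[i]) * err)
    return w
```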
Affiliation(s)
- Ammar Mohemmed
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand
- Stefan Schliebs
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand
- Satoshi Matsuda
- Department of Mathematical Information Engineering, Nihon University, Japan
- Nikola Kasabov
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, New Zealand
- Institute for Neuroinformatics, ETH and University of Zurich, Switzerland
54
Grüning A, Sporea I. Supervised Learning of Logical Operations in Layered Spiking Neural Networks with Spike Train Encoding. Neural Process Lett 2012. [DOI: 10.1007/s11063-012-9225-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6]
55
Taniguchi T, Sawaragi T. Incremental acquisition of behaviors and signs based on a reinforcement learning schemata model and a spike timing-dependent plasticity network. Adv Robot 2012. [DOI: 10.1163/156855307781389374] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2]
Affiliation(s)
- Tadahiro Taniguchi
- Graduate School of Engineering, Kyoto University, Yoshida-honmachi, Sakyo, Kyoto 606-8501, Japan
- Tetsuo Sawaragi
- Graduate School of Engineering, Kyoto University, Yoshida-honmachi, Sakyo, Kyoto 606-8501, Japan
56
Kasabov N. Evolving Spiking Neural Networks and Neurogenetic Systems for Spatio- and Spectro-Temporal Data Modelling and Pattern Recognition. Advances in Computational Intelligence 2012. [DOI: 10.1007/978-3-642-30687-7_12] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3]
57
58
Abstract
Most current Artificial Neural Network (ANN) models are based on highly simplified brain dynamics. They have been used as powerful computational tools to solve complex pattern recognition, function estimation, and classification problems. ANNs have been evolving towards more powerful and more biologically realistic models. In the past decade, Spiking Neural Networks (SNNs) have been developed, which are composed of spiking neurons. Information transfer in these neurons mimics the information transfer in biological neurons, i.e., via the precise timing of spikes or a sequence of spikes. To facilitate learning in such networks, new learning algorithms with varying degrees of biological plausibility have also been developed recently. The addition of the temporal dimension for information encoding in SNNs yields new insight into the dynamics of the human brain and could result in compact representations of large neural networks. As such, SNNs have great potential for solving complicated time-dependent pattern recognition problems because of their inherent dynamic representation. This article presents a state-of-the-art review of the development of spiking neurons and SNNs, and provides insight into their evolution as the third generation of neural networks.
Affiliation(s)
- Hojjat Adeli
- Departments of Biomedical Engineering, Biomedical Informatics, Civil and Environmental Engineering and Geodetic Science, Electrical and Computer Engineering, Neurological Surgery and Neuroscience, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Avenue, Columbus, Ohio 43210, USA
59
Synaptic consolidation: an approach to long-term learning. Cogn Neurodyn 2011; 6:251-7. [PMID: 23730356] [DOI: 10.1007/s11571-011-9177-6] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4]
Abstract
Synaptic plasticity is thought to be the basis of learning and memory, but it is mostly studied on the timescale of mere minutes. This review discusses synaptic consolidation, a process that enables synapses to retain their strength for a much longer time (days to years), instead of returning to their original value. The process involves specific plasticity-related proteins, and depends on the dopamine D1/D5 receptors. Here, we review the research on synaptic consolidation, describing electrophysiology experiments, recent modeling work, as well as behavioral correlates.
60
Friedrich J, Urbanczik R, Senn W. Spatio-temporal credit assignment in neuronal population learning. PLoS Comput Biol 2011; 7:e1002092. [PMID: 21738460] [PMCID: PMC3127803] [DOI: 10.1371/journal.pcbi.1002092] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.2]
Abstract
In learning from trial and error, animals need to relate behavioral decisions to environmental reinforcement even though it may be difficult to assign credit to a particular decision when outcomes are uncertain or subject to delays. When considering the biophysical basis of learning, the credit-assignment problem is compounded because the behavioral decisions themselves result from the spatio-temporal aggregation of many synaptic releases. We present a model of plasticity induction for reinforcement learning in a population of leaky integrate and fire neurons which is based on a cascade of synaptic memory traces. Each synaptic cascade correlates presynaptic input first with postsynaptic events, next with the behavioral decisions and finally with external reinforcement. For operant conditioning, learning succeeds even when reinforcement is delivered with a delay so large that temporal contiguity between decision and pertinent reward is lost due to intervening decisions which are themselves subject to delayed reinforcement. This shows that the model provides a viable mechanism for temporal credit assignment. Further, learning speeds up with increasing population size, so the plasticity cascade simultaneously addresses the spatial problem of assigning credit to synapses in different population neurons. Simulations on other tasks, such as sequential decision making, serve to contrast the performance of the proposed scheme to that of temporal difference-based learning. We argue that, due to their comparative robustness, synaptic plasticity cascades are attractive basic models of reinforcement learning in the brain.

The key mechanisms supporting memory and learning in the brain rely on changing the strength of synapses which control the transmission of information between neurons. But how are appropriate changes determined when animals learn from trial and error? Information on success or failure is likely signaled to synapses by neurotransmitters like dopamine. But interpreting this reward signal is difficult because the number of synaptic transmissions occurring during behavioral decision making is huge and each transmission may have contributed differently to the decision, or perhaps not at all. Extrapolating from experimental evidence on synaptic plasticity, we suggest a computational model where each synapse collects information about its contributions to the decision process by means of a cascade of transient memory traces. The final trace then remodulates the reward signal when the persistent change of the synaptic strength is triggered. Simulation results show that with the suggested synaptic plasticity rule a simple neural network can learn even difficult tasks by trial and error, e.g., when the decision-reward sequence is scrambled due to large delays in reward delivery.
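As a toy illustration of the trace cascade described above (my simplification, not the authors' equations), each synapse chains decaying traces that correlate pre/post coincidences first with the behavioral decision and then with a possibly delayed reward:

```python
def cascade_step(c1, c2, w, pre, post, decision, reward,
                 lam1=0.9, lam2=0.95, lr=0.01):
    c1 = lam1 * c1 + pre * post      # trace 1: pre/post coincidences
    c2 = lam2 * c2 + c1 * decision   # trace 2: tag them with the decision
    w = w + lr * reward * c2         # delayed reward finally gates the change
    return c1, c2, w
```

Because each stage decays on its own timescale, credit can survive the delay between a decision and its reward, which is the point the simulations make.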
Affiliation(s)
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
61
Glackin C, Maguire L, McDaid L, Sayers H. Receptive field optimisation and supervision of a fuzzy spiking neural network. Neural Netw 2011; 24:247-56. [DOI: 10.1016/j.neunet.2010.11.008] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9]
62
Evolving Probabilistic Spiking Neural Networks for Spatio-temporal Pattern Recognition: A Preliminary Study on Moving Object Recognition. Lect Notes Comput Sci 2011. [DOI: 10.1007/978-3-642-24965-5_25] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2]
63
Florian RV. Challenges for interactivist-constructivist robotics. New Ideas Psychol 2010. [DOI: 10.1016/j.newideapsych.2009.09.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
64
Wade JJ, McDaid LJ, Santos JA, Sayers HM. SWAT: A Spiking Neural Network Training Algorithm for Classification Problems. IEEE Trans Neural Netw 2010; 21:1817-30. [PMID: 20876015] [DOI: 10.1109/tnn.2010.2074212] [Citation(s) in RCA: 81] [Impact Index Per Article: 5.8]
Affiliation(s)
- John J Wade
- Intelligent Systems Research Center, University of Ulster, School of Computing and Intelligent Systems, Derry, Northern Ireland, U.K.
65
Klampfl S, Maass W. A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction. Neural Comput 2010; 22:2979-3035. [PMID: 20858129] [DOI: 10.1162/neco_a_00050] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7]
Abstract
Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. How they can learn to achieve this, especially without the help of a supervisor, is an open question. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states that lasts several hundred ms. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed point attractors. It also provides a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
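The SFA principle used here is concrete enough to state in a few lines: whiten the input trajectory, then keep the directions along which the whitened signal changes most slowly. A linear-SFA sketch (assumes a full-rank covariance; regularization omitted):

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """X: (T, d) trajectory of input activity traces; returns the
    n_features slowest linear features."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (evecs / np.sqrt(evals))   # whiten the inputs
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    # eigh sorts ascending: smallest derivative variance = slowest features
    return Z @ d_evecs[:, :n_features]
```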
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
66
Gilson M, Burkitt A, van Hemmen JL. STDP in Recurrent Neuronal Networks. Front Comput Neurosci 2010; 4. [PMID: 20890448] [PMCID: PMC2947928] [DOI: 10.3389/fncom.2010.00023] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.3]
Abstract
Recent results about spike-timing-dependent plasticity (STDP) in recurrently connected neurons are reviewed, with a focus on the relationship between the weight dynamics and the emergence of network structure. In particular, the evolution of synaptic weights in the two cases of incoming connections for a single neuron and recurrent connections are compared and contrasted. A theoretical framework is used that is based upon Poisson neurons with a temporally inhomogeneous firing rate and the asymptotic distribution of weights generated by the learning dynamics. Different network configurations examined in recent studies are discussed and an overview of the current understanding of STDP in recurrently connected neuronal networks is presented.
67
Maguire L. Does Soft Computing Classify Research in Spiking Neural Networks? Int J Comput Intell Syst 2010. [DOI: 10.1080/18756891.2010.9727688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
68
Spierer L, De Lucia M, Bernasconi F, Grivel J, Bourquin NMP, Clarke S, Murray MM. Learning-induced plasticity in human audition: objects, time, and space. Hear Res 2010; 271:88-102. [PMID: 20430070] [DOI: 10.1016/j.heares.2010.03.086] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9]
Abstract
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent there are common mechanisms subserving such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated along basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ~300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neuroscience, Vaudois University Hospital Center and University of Lausanne, Switzerland
69
Bernasconi F, Grivel J, Murray MM, Spierer L. Plastic brain mechanisms for attaining auditory temporal order judgment proficiency. Neuroimage 2010; 50:1271-9. [DOI: 10.1016/j.neuroimage.2010.01.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8]
70
Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity. Proc Natl Acad Sci U S A 2010; 107:4722-7. [PMID: 20167805] [DOI: 10.1073/pnas.0909394107] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4]
Abstract
It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.
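For reference, the classic perceptron rule that the STDP/SFA combination is shown to approximate, in its simplest threshold-unit form (threshold and learning rate arbitrary; w and x are numpy arrays):

```python
import numpy as np

def perceptron_step(w, x, target, theta=1.0, lr=0.1):
    y = float(w @ x > theta)       # unit fires (1) iff drive exceeds threshold
    w += lr * (target - y) * x     # potentiate on misses, depress on false alarms
    return w
```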
71
Ponulak F, Kasiński A. Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification, and Spike Shifting. Neural Comput 2010; 22:467-510. [PMID: 19842989] [DOI: 10.1162/neco.2009.11-08-901] [Citation(s) in RCA: 181] [Impact Index Per Article: 12.9]
Abstract
Learning from instructions or demonstrations is a fundamental property of our brain necessary to acquire new knowledge and develop novel skills or behavioral patterns. This type of learning is thought to be involved in most of our daily routines. Although the concept of instruction-based learning has been studied for several decades, the exact neural mechanisms implementing this process remain unknown. One of the central questions in this regard is: how do neurons learn to reproduce template signals (instructions) encoded in precisely timed sequences of spikes? Here we present a model of supervised learning for biologically plausible neurons that addresses this question. In a set of experiments, we demonstrate that our approach enables us to train spiking neurons to reproduce arbitrary template spike patterns in response to given synaptic stimuli even in the presence of various sources of noise. We show that the learning rule can also be used for decision-making tasks. Neurons can be trained to classify categories of input signals based on only a temporal configuration of spikes. The decision is communicated by emitting precisely timed spike trains associated with given input categories. Trained neurons can perform the classification task correctly even if stimuli and corresponding decision times are temporally separated and the relevant information is consequently highly overlapped by the ongoing neural activity. Finally, we demonstrate that neurons can be trained to reproduce sequences of spikes with a controllable time shift with respect to target templates. A reproduced signal can follow or even precede the targets. This surprising result suggests that spiking neurons can potentially be applied to forecast the behavior (firing times) of other reference neurons or networks.
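The ReSuMe rule itself can be paraphrased as: weight change = (target spike train minus actual spike train) times (a constant plus a filtered trace of the presynaptic input). A discrete-time sketch with assumed parameter values (the original formulation is continuous-time):

```python
import numpy as np

def resume_update(w, s_in, s_out, s_target, dt=1.0, a=0.01, A=0.05, tau=10.0):
    """w: (n_syn,); s_in: (n_syn, T) binned 0/1 input trains;
    s_out, s_target: (T,) binned 0/1 output and template trains."""
    trace = np.zeros_like(w)
    for t in range(s_target.shape[0]):
        trace = trace * np.exp(-dt / tau) + s_in[:, t]  # filtered presynaptic input
        err = s_target[t] - s_out[t]   # +1 for a missing spike, -1 for an extra one
        w = w + err * (a + A * trace)  # non-Hebbian term a plus Hebbian term
    return w
```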
Affiliation(s)
- Filip Ponulak
- Institute of Control and Information Engineering, Poznań University of Technology, Poznań 60-965, Poland, and Bernstein Center for Computational Neuroscience, Albert-Ludwigs University Freiburg, Freiburg 79104, Germany
| | - Andrzej Kasiński
- Institute of Control and Information Engineering, Poznań University of Technology, Poznań 60-965, Poland
72
Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci 2010; 13:344-52. [PMID: 20098420] [DOI: 10.1038/nn.2479] [Citation(s) in RCA: 348] [Impact Index Per Article: 24.9]
Abstract
Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among many weak connections. To explain these connectivity patterns, we created a model of spike timing-dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.
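One Euler step of a voltage-based rule of the type described (LTD gated by a presynaptic spike and a slowly filtered voltage, LTP by a filtered presynaptic trace and two voltage thresholds) might look as follows; the constants are illustrative placeholders, not the paper's fitted values:

```python
def clopath_like_step(w, pre_spike, u, u_minus, u_plus, x_bar,
                      A_ltd=1e-4, A_ltp=1e-4, th_minus=-70.0, th_plus=-45.0):
    """u: membrane potential (mV); u_minus/u_plus: low-pass filtered u with
    two different time constants; x_bar: filtered presynaptic spike train."""
    ltd = A_ltd * pre_spike * max(u_minus - th_minus, 0.0)
    ltp = A_ltp * x_bar * max(u - th_plus, 0.0) * max(u_plus - th_minus, 0.0)
    return w - ltd + ltp
```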
Affiliation(s)
- Claudia Clopath
- Laboratory of Computational Neuroscience, Brain-Mind Institute and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
73
Synchronization enhances synaptic efficacy through spike timing-dependent plasticity in the olfactory system. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2009.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
74
Abstract
Predictive learning rules, where synaptic changes are driven by the difference between a random input and its reconstruction derived from internal variables, have proven to be very stable and efficient. However, it is not clear how such learning rules could take place in biological synapses. Here we propose an implementation that exploits the synchronization of neural activities within a recurrent network. In this framework, the asymmetric shape of spike-timing-dependent plasticity (STDP) can be interpreted as a self-stabilizing mechanism. Our results suggest a novel hypothesis concerning the computational role of neural synchrony and oscillations.
Affiliation(s)
- Thomas Voegtlin
- INRIA, Campus Scientifique, F-54506 Vandoeuvre-lès-Nancy Cedex, France.
75
On the relation between bursts and dynamic synapse properties: a modulation-based ansatz. Comput Intell Neurosci 2009:658474. [PMID: 19584940] [PMCID: PMC2703876] [DOI: 10.1155/2009/658474] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1]
Abstract
When entering a synapse, presynaptic pulse trains are filtered according to the recent pulse history at the synapse and also with respect to their own pulse time course. Various behavioral models have tried to reproduce these complex filtering properties. In particular, the quantal model of neurotransmitter release has been shown to be highly selective for particular presynaptic pulse patterns. However, since the original, pulse-iterative quantal model does not lend itself to mathematical analysis, investigations have only been carried out via simulations. In contrast, we derive a comprehensive explicit expression for the quantal model. We show the correlation between the parameters of this explicit expression and the preferred spike train pattern of the synapse. In particular, our analysis of the transmission of modulated pulse trains across a dynamic synapse links the original parameters of the quantal model to the transmission efficacy of two major spiking regimes, that is, bursting and constant-rate ones.
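The pulse-iterative model analyzed above is close in spirit to the widely used Tsodyks-Markram recursion; its per-spike form (my paraphrase, with illustrative parameters) shows how the recent pulse history filters a train:

```python
import numpy as np

def dynamic_synapse(spike_times, U=0.2, tau_rec=500.0, tau_fac=50.0):
    """Returns the relative efficacy of each presynaptic spike for a
    facilitating/depressing synapse (times in ms)."""
    u, R, last, out = 0.0, 1.0, None, []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u *= np.exp(-dt / tau_fac)                   # facilitation decays
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)  # resources recover
        u += U * (1.0 - u)   # spike-triggered facilitation jump
        out.append(u * R)    # released fraction = efficacy of this spike
        R *= 1.0 - u         # depletion of the released resources
        last = t
    return out
```

With parameters like these, a short burst is transmitted with growing efficacy while a sustained high rate depresses, the kind of regime selectivity the explicit expression makes analyzable.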
76
Klampfl S, Legenstein R, Maass W. Spiking neurons can learn to solve information bottleneck problems and extract independent components. Neural Comput 2009; 21:911-59. [PMID: 19018708] [DOI: 10.1162/neco.2008.01-07-432] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9]
Abstract
Independent component analysis (or blind source separation) is assumed to be an essential component of sensory processing in the brain and could provide a less redundant representation of the external world. Another powerful processing strategy is the optimization of internal representations according to the information bottleneck method. This method would allow extracting preferentially those components from high-dimensional sensory input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. However, there is a lack of models explaining how spiking neurons could learn to execute either of these two processing strategies. We show in this article how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components. We derive suitable learning rules, which extend the well-known BCM rule, from abstract information optimization principles. These rules will simultaneously keep the firing rate of the neuron within a biologically realistic range.
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
77
Abstract
Recently it has been shown that a repeating arbitrary spatiotemporal spike pattern hidden in equally dense distracter spike trains can be robustly detected and learned by a single neuron equipped with spike-timing-dependent plasticity (STDP) (Masquelier, Guyonneau, & Thorpe, 2008). To be precise, the neuron becomes selective to successive coincidences of the pattern. Here we extend this scheme to a more realistic scenario with multiple repeating patterns and multiple STDP neurons “listening” to the incoming spike trains. These “listening” neurons are in competition: as soon as one fires, it strongly inhibits the others through lateral connections (one-winner-take-all mechanism). This tends to prevent the neurons from learning the same (parts of the) repeating patterns, as shown in simulations. Instead, the population self-organizes, trying to cover the different patterns or coding one pattern by the successive firings of several neurons, and a powerful distributed coding scheme emerges. Taken together, these results illustrate how the brain could easily encode and decode information in the spike times, a theory referred to as temporal coding, and how STDP could play a key role by detecting repeating patterns and generating selective response to them.
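A skeletal version of the competition described above, with binned time, simplified all-or-none STDP, and a hard one-winner-take-all constraint (all constants are assumptions, not the paper's settings):

```python
import numpy as np

def stdp_wta_step(W, pre_trace, pre_spikes, post_spikes,
                  a_plus=0.01, a_minus=0.0075, tau=20.0, dt=1.0):
    """W: (n_neurons, n_inputs); pre_trace: (n_inputs,) decaying memory of
    presynaptic spikes; pre_spikes: (n_inputs,) 0/1; post_spikes: (n_neurons,) 0/1."""
    pre_trace = pre_trace * np.exp(-dt / tau) + pre_spikes
    if post_spikes.any():
        winner = int(np.argmax(post_spikes))   # first neuron to fire wins
        recent = pre_trace > 0.1               # inputs active shortly before
        W[winner, recent] += a_plus            # LTP: they predicted the spike
        W[winner, ~recent] -= a_minus          # LTD: silent inputs are pruned
        np.clip(W[winner], 0.0, 1.0, out=W[winner])
        post_spikes[np.arange(post_spikes.size) != winner] = 0  # lateral inhibition
    return W, pre_trace
```

The winner-take-all reset is what discourages two neurons from converging on the same repeating pattern, the self-organization the abstract describes.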
Affiliation(s)
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3, Centre National de la Recherche Scientifique, Faculté de Médecine de Rangueil, Toulouse 31062, France, and Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona E-08003, Spain
- Rudy Guyonneau
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3, Centre National de la Recherche Scientifique, Faculté de Médecine de Rangueil, Toulouse 31062, France
- Simon J. Thorpe
- Centre de Recherche Cerveau et Cognition, Université Toulouse 3, Centre National de la Recherche Scientifique, Faculté de Médecine de Rangueil, Toulouse 31062, France, and SpikeNet Technology SARL, Prologue 1, La Pyrénéenne, Labège 31673, France
78
Mitra S, Fusi S, Indiveri G. Real-Time Classification of Complex Patterns Using Spike-Based Learning in Neuromorphic VLSI. IEEE Trans Biomed Circuits Syst 2009; 3:32-42. [PMID: 23853161] [DOI: 10.1109/tbcas.2008.2005781] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.2]
Abstract
Real-time classification of patterns of spike trains is a difficult computational problem that both natural and artificial networks of spiking neurons are confronted with. The solution to this problem not only could contribute to understanding the fundamental mechanisms of computation used in the biological brain, but could also lead to efficient hardware implementations of a wide range of applications ranging from autonomous sensory-motor systems to brain-machine interfaces. Here we demonstrate real-time classification of complex patterns of mean firing rates, using a VLSI network of spiking neurons and dynamic synapses which implement a robust spike-driven plasticity mechanism. The learning rule implemented is a supervised one: a teacher signal provides the output neuron with an extra input spike-train during training, in parallel to the spike-trains that represent the input pattern. The teacher signal simply indicates if the neuron should respond to the input pattern with a high rate or with a low one. The learning mechanism modifies the synaptic weights only as long as the current generated by all the stimulated plastic synapses does not match the output desired by the teacher, as in the perceptron learning rule. We describe the implementation of this learning mechanism and present experimental data that demonstrate how the VLSI neural network can learn to classify patterns of neural activities, also in the case in which they are highly correlated.
79
Abstract
The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate and fire neurons. Networks trained with our multilayer training rule are shown to have similar generalization abilities for spike latency pattern classification as Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears to be capable of.
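For context, the theta neuron named above is the one-dimensional phase model dθ/dt = (1 − cos θ) + α(1 + cos θ)I, which emits a spike when the phase crosses π. A minimal simulator (α and the step size are assumed):

```python
import numpy as np

def theta_neuron(I, dt=0.1, alpha=1.0):
    """I: (T,) input current sampled every dt; returns spike times."""
    theta, spikes = 0.0, []
    for n, i_t in enumerate(I):
        theta += dt * ((1.0 - np.cos(theta)) + alpha * (1.0 + np.cos(theta)) * i_t)
        if theta > np.pi:            # phase crossed pi: register a spike
            spikes.append(n * dt)
            theta -= 2.0 * np.pi     # wrap the phase around the circle
    return spikes
```

Because spike times depend smoothly on the phase trajectory, their gradients with respect to the weights can be taken without reference to a postsynaptic current shape, which appears to be the property the learning rule exploits.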
Affiliation(s)
- Sam McKennoch
- INRIA, Campus Scientifique, F-54506 Vandoeuvre-lès-Nancy, France
- Thomas Voegtlin
- INRIA, Campus Scientifique, F-54506 Vandoeuvre-lès-Nancy, France
- Linda Bushnell
- Department of Electrical Engineering, University of Washington, Seattle, WA 98195, U.S.A
80
Lim H, Choe Y. Extrapolative delay compensation through facilitating synapses and its relation to the flash-lag effect. IEEE Trans Neural Netw 2008; 19:1678-88. [PMID: 18842473] [DOI: 10.1109/tnn.2008.2001002] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8]
Abstract
Neural conduction delay is a serious issue for organisms that need to act in real time. Various forms of flash-lag effect (FLE) suggest that the nervous system may perform extrapolation to compensate for delay. For example, in motion FLE, the position of a moving object is perceived to be ahead of a brief flash when they are actually colocalized. However, the precise mechanism for extrapolation at a single-neuron level has not been fully investigated. Our hypothesis is that facilitating synapses, with their dynamic sensitivity to the rate of change in the input, can serve as a neural basis for extrapolation. To test this hypothesis, we constructed and tested models of facilitating dynamics. First, we derived a spiking neuron model of facilitating dynamics at a single-neuron level, and tested it in the luminance FLE domain. Second, the spiking neuron model was extended to include multiple neurons and spike-timing-dependent plasticity (STDP), and was tested with orientation FLE. The results showed a strong relationship between delay compensation, FLE, and facilitating synapses/STDP. The results are expected to shed new light on real time and predictive processing in the brain, at the single neuron level.
Affiliation(s)
- Heejin Lim
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX 77030, USA.
81
Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 2008; 98:459-78. [PMID: 18491160] [PMCID: PMC2799003] [DOI: 10.1007/s00422-008-0233-1] [Citation(s) in RCA: 284] [Impact Index Per Article: 17.8]
Abstract
Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.
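The simplest model in the reviewed class, pair-based STDP, is fully local when written with one exponentially decaying trace per spike train; a sketch with illustrative amplitudes and a common time constant:

```python
import numpy as np

def pair_stdp_step(w, x_pre, y_post, pre_spike, post_spike,
                   A_plus=0.005, A_minus=0.0055, tau=20.0, dt=1.0):
    """x_pre/y_post: decaying traces of the pre- and postsynaptic trains."""
    decay = np.exp(-dt / tau)
    x_pre, y_post = x_pre * decay, y_post * decay
    if pre_spike:                # pre after post -> depression
        w -= A_minus * y_post
        x_pre += 1.0
    if post_spike:               # post after pre -> potentiation
        w += A_plus * x_pre
        y_post += 1.0
    return w, x_pre, y_post
```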
Affiliation(s)
- Abigail Morrison
- Computational Neuroscience Group, RIKEN Brain Science Institute, Wako City, Japan
| | - Markus Diesmann
- Computational Neuroscience Group, RIKEN Brain Science Institute, Wako City, Japan
- Bernstein Center for Computational Neuroscience, Albert-Ludwigs-University, Freiburg, Germany
| | - Wulfram Gerstner
- Laboratory of Computational Neuroscience, LCN, Brain Mind Institute and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Station 15, 1015 Lausanne, Switzerland
82
Brader JM, Senn W, Fusi S. Learning Real-World Stimuli in a Neural Network with Spike-Driven Synaptic Dynamics. Neural Comput 2007; 19:2881-912. [PMID: 17883345] [DOI: 10.1162/neco.2007.19.11.2881] [Citation(s) in RCA: 143] [Impact Index Per Article: 8.4]
Abstract
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
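A stripped-down sketch of the bistable, spike-driven synapse described above; the calcium-based stop-learning conditions of the full model are omitted, and all thresholds and rates are placeholders:

```python
def bistable_synapse_step(X, V_post, pre_spike,
                          theta_V=-55.0, a=0.1, b=0.1,
                          theta_X=0.5, drift=0.005):
    """X: internal synaptic variable in [0, 1]; efficacy is read out as
    X > theta_X, so the synapse is effectively binary and nonvolatile."""
    if pre_spike:
        X += a if V_post > theta_V else -b   # jump set by postsynaptic depolarization
    X += drift if X > theta_X else -drift    # drift toward the nearer stable state
    return min(max(X, 0.0), 1.0)
```

The drift toward one of two stable states is what lets the synapse preserve its state indefinitely in the absence of stimulation.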
83
Farries MA, Fairhall AL. Reinforcement learning with modulated spike timing dependent synaptic plasticity. J Neurophysiol 2007; 98:3648-65. [PMID: 17928565] [DOI: 10.1152/jn.00364.2007] [Citation(s) in RCA: 100] [Impact Index Per Article: 5.9]
Abstract
Spike timing-dependent synaptic plasticity (STDP) has emerged as the preferred framework linking patterns of pre- and postsynaptic activity to changes in synaptic strength. Although synaptic plasticity is widely believed to be a major component of learning, it is unclear how STDP itself could serve as a mechanism for general purpose learning. On the other hand, algorithms for reinforcement learning work on a wide variety of problems, but lack an experimentally established neural implementation. Here, we combine these paradigms in a novel model in which a modified version of STDP achieves reinforcement learning. We build this model in stages, identifying a minimal set of conditions needed to make it work. Using a performance-modulated modification of STDP in a two-layer feedforward network, we can train output neurons to generate arbitrarily selected spike trains or population responses. Furthermore, a given network can learn distinct responses to several different input patterns. We also describe in detail how this model might be implemented biologically. Thus our model offers a novel and biologically plausible implementation of reinforcement learning that is capable of training a neural population to produce a very wide range of possible mappings between synaptic input and spiking output.
Affiliation(s)
- Michael A Farries
- Department of Biology, University of Texas at San Antonio, San Antonio, TX 78249, USA.
84
Florian RV. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput 2007; 19:1468-502. [PMID: 17444757] [DOI: 10.1162/neco.2007.19.6.1468] [Citation(s) in RCA: 207] [Impact Index Per Article: 12.2]
Abstract
The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other one involves an eligibility trace stored at each synapse that keeps a decaying memory of recent pre- and postsynaptic spike pairings (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks, regardless of the neural model used, and suggest the experimental investigation in animals of the existence of reward-modulated STDP.
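The second rule described, modulated STDP with an eligibility trace, has a compact discrete-time form; the names and constants below are assumptions, not the paper's notation:

```python
import numpy as np

def rstdp_step(w, elig, reward, pre_trace, post_trace, pre_spike, post_spike,
               lr=0.01, tau_e=100.0, A_plus=1.0, A_minus=1.0, dt=1.0):
    stdp = 0.0
    if post_spike:
        stdp += A_plus * pre_trace    # pre-before-post pairing
    if pre_spike:
        stdp -= A_minus * post_trace  # post-before-pre pairing
    elig = elig * np.exp(-dt / tau_e) + stdp   # decaying memory of pairings
    w = w + lr * reward * elig                 # global reward gates the change
    return w, elig
```

Choosing tau_e large relative to the reward delay is what lets the rule learn from delayed reinforcement, as the abstract notes.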
Affiliation(s)
- Răzvan V Florian
- Center for Cognitive and Neural Studies (Coneural), 400504 Cluj-Napoca, Romania.
85
Thivierge JP, Rivest F, Monchi O. Spiking neurons, dopamine, and plasticity: timing is everything, but concentration also matters. Synapse 2007; 61:375-90. [PMID: 17372980] [DOI: 10.1002/syn.20378] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.0]
Abstract
While both dopamine (DA) fluctuations and spike-timing-dependent plasticity (STDP) are known to influence long-term corticostriatal plasticity, little attention has been devoted to the interaction between these two fundamental mechanisms. Here, a theoretical framework is proposed to account for experimental results specifying the role of presynaptic activation, postsynaptic activation, and concentrations of extracellular DA in synaptic plasticity. Our starting point was an explicitly implemented multiplicative rule linking STDP to Michaelis-Menten equations that models the dynamics of extracellular DA fluctuations. This rule captures a wide range of results on conditions leading to long-term potentiation and depression in simulations that manipulate the frequency of induced corticostriatal stimulation and DA release. A well-documented biphasic function relating DA concentrations to synaptic plasticity emerges naturally from simulations involving a multiplicative rule linking DA and neural activity. This biphasic function is found consistently across different neural coding schemes employed (voltage-based vs. spike-based models). By comparison, an additive rule fails to capture these results. The proposed framework is the first to generate testable predictions on the dual influence of DA concentrations and STDP on long-term plasticity, suggesting a way in which the biphasic influence of DA concentrations can modulate the direction and magnitude of change induced by STDP, and raising the possibility that DA concentrations may invert the LTP/LTD components of the STDP rule.
86
Lazar A, Pipa G, Triesch J. Fading memory and time series prediction in recurrent networks with different forms of plasticity. Neural Netw 2007; 20:312-22. [PMID: 17556114] [DOI: 10.1016/j.neunet.2007.04.020] [Citation(s) in RCA: 49] [Impact Index Per Article: 2.9]
Abstract
We investigate how different forms of plasticity shape the dynamics and computational properties of simple recurrent spiking neural networks. In particular, we study the effect of combining two forms of neuronal plasticity: spike timing dependent plasticity (STDP), which changes the synaptic strength, and intrinsic plasticity (IP), which changes the excitability of individual neurons to maintain homeostasis of their activity. We find that the interaction of these forms of plasticity gives rise to interesting network dynamics characterized by a comparatively large number of stable limit cycles. We study the response of such networks to external input and find that they exhibit a fading memory of recent inputs. We then demonstrate that the combination of STDP and IP shapes the network structure and dynamics in ways that allow the discovery of patterns in input time series and lead to good performance in time series prediction. Our results underscore the importance of studying the interaction of different forms of plasticity on network behavior.
Affiliation(s)
- Andreea Lazar
- Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Max-von-Laue-Str. 1, 60438 Frankfurt am Main, Germany.
87
Bohte SM, Mozer MC. Reducing the Variability of Neural Responses: A Computational Theory of Spike-Timing-Dependent Plasticity. Neural Comput 2007; 19:371-403. [PMID: 17206869] [DOI: 10.1162/neco.2007.19.2.371] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.5]
Abstract
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity (STDP). We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's response variability to a given presynaptic input, causing the neuron's output to become more reliable in the face of noise. Using an objective function that minimizes response variability and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena, including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of unsupervised cortical adaptation.
Affiliation(s)
- Sander M Bohte
- Netherlands Centre for Mathematics and Computer Science (CWI), 1098 SJ Amsterdam, The Netherlands.
88
Mossbridge JA, Fitzgerald MB, O'Connor ES, Wright BA. Perceptual-learning evidence for separate processing of asynchrony and order tasks. J Neurosci 2006; 26:12708-16. [PMID: 17151274] [PMCID: PMC6674828] [DOI: 10.1523/jneurosci.2254-06.2006] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.2]
Abstract
Normal perception depends, in part, on accurate judgments of the temporal relationships between sensory events. Two such relative-timing skills are the ability to detect stimulus asynchrony and to discriminate stimulus order. Here we investigated the neural processes contributing to the performance of auditory asynchrony and order tasks in humans, using a perceptual-learning paradigm. In each of two parallel experiments, we tested listeners on a pretest and a posttest consisting of auditory relative-timing conditions. Between these two tests, we trained a subset of listeners approximately 1 h/d for 6-8 d on a single relative-timing condition. The trained listeners practiced asynchrony detection in one experiment and order discrimination in the other. Both groups were trained at sound onset with tones at 0.25 and 4.0 kHz. The remaining listeners in each experiment, who served as controls, did not receive multihour training during the 8-10 d between the pretest and posttest. These controls improved even without intervening training, adding to evidence that a single session of exposure to perceptual tasks can yield learning. Most importantly, each of the two groups of trained listeners learned more on their respective trained conditions than controls, but this learning occurred only on the two trained conditions. Neither group of trained listeners generalized their learning to the other task (order or asynchrony), an untrained temporal position (sound offset), or untrained frequency pairs. Thus, it appears that multihour training on relative-timing skills affects task-specific neural circuits that are tuned to a given temporal position and combination of stimulus components.
Collapse
Affiliation(s)
- Julia A. Mossbridge
- Department of Communication Sciences and Disorders, Northwestern University, Frances Searle Building, Evanston, Illinois 60208
- Matthew B. Fitzgerald
- Department of Communication Sciences and Disorders, Northwestern University, Frances Searle Building, Evanston, Illinois 60208
- Erin S. O'Connor
- Department of Communication Sciences and Disorders, Northwestern University, Frances Searle Building, Evanston, Illinois 60208
- Beverly A. Wright
- Department of Communication Sciences and Disorders, Northwestern University, Frances Searle Building, Evanston, Illinois 60208, and
- Northwestern University Institute for Neuroscience, Chicago, Illinois 60611-3010
89
Maass W, Joshi P, Sontag ED. Computational aspects of feedback in neural circuits. PLoS Comput Biol 2006; 3:e165. [PMID: 17238280] [PMCID: PMC1779299] [DOI: 10.1371/journal.pcbi.0020165] [Citation(s) in RCA: 89] [Impact Index Per Article: 4.9]
Abstract
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as, for example, genetic regulatory networks.

Circuits of neurons in the brain have an abundance of feedback connections, both on the level of local microcircuits and on the level of synaptic connections between brain areas. But the functional role of these feedback connections is largely unknown. We present a computational theory that characterizes the gain in computational power that feedback can provide in such circuits. It shows that feedback endows standard models for neural circuits with the capability to emulate arbitrary Turing machines. In fact, with suitable feedback they can simulate any dynamical system, in particular any conceivable analog computer. Under realistic noise conditions, the computational power of these circuits is necessarily reduced. But we demonstrate through computer simulations that feedback also provides a significant gain in computational power for quite detailed models of cortical microcircuits with in vivo–like high levels of noise. In particular it enables generic cortical microcircuits to carry out computations that combine information from working memory and persistent internal states in real time with new information from online input streams.
Collapse
Affiliation(s)
- Wolfgang Maass
- Institute for Theoretical Computer Science, Technische Universitaet Graz, Graz, Austria.
Collapse
|
90
|
Tripp B, Eliasmith C. Neural Populations Can Induce Reliable Postsynaptic Currents without Observable Spike Rate Changes or Precise Spike Timing. Cereb Cortex 2006; 17:1830-40. [PMID: 17043082 DOI: 10.1093/cercor/bhl092] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Fine temporal patterns of firing in much of the brain are highly irregular. In some circuits, the precise pattern of irregularity contains information beyond that contained in mean firing rates. However, the capacity of neural circuits to use this additional information for computational purposes is not well understood. Here we employ computational methods to show that an ensemble of neurons firing at a constant mean rate can induce arbitrarily chosen temporal current patterns in postsynaptic cells. If the presynaptic neurons fire with nearly uniform interspike intervals, then current patterns are sensitive to variations in spike timing. But irregular, Poisson-like firing can drive current patterns robustly, even if spike timing varies by tens of milliseconds from trial to trial. Notably, irregular firing patterns can drive useful patterns of current even if they are so variable that several hundred repeated experimental trials would be needed to distinguish them from random firing. Together, these results describe an unrestrictive set of conditions in which postsynaptic cells might exploit virtually any information contained in spike timing. We speculate as to how this capability may underlie an extension of population coding to the temporal domain.
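A toy version of this decoding idea can be written in a few lines: fix the mean rate, then fit per-neuron synaptic weights so the summed postsynaptic current traces an arbitrary waveform. The sketch below is an assumption-laden illustration (Poisson inputs, exponential synapses, least-squares fit), not the authors' method.

```python
# Toy version of the ensemble-to-current idea (parameters are illustrative):
# N Poisson neurons fire at a fixed mean rate; least squares fits per-neuron
# synaptic weights so their summed postsynaptic current approximates an
# arbitrary target waveform, with no change in any neuron's firing rate.
import numpy as np

rng = np.random.default_rng(1)
N, dt = 100, 1e-3
t = np.arange(0.0, 1.0, dt)
rate = 40.0                                   # constant mean rate (Hz)
spikes = rng.random((N, t.size)) < rate * dt  # independent Poisson trains

tau = 0.01                                    # synaptic time constant (s)
kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
psc = np.array([np.convolve(s, kernel)[: t.size] for s in spikes])

target = np.sin(2 * np.pi * 3 * t)            # arbitrarily chosen current
w, *_ = np.linalg.lstsq(psc.T, target, rcond=None)
approx = w @ psc
print(f"rms error: {np.sqrt(np.mean((approx - target) ** 2)):.3f}")
```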
Collapse
Affiliation(s)
- Bryan Tripp
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
Collapse
|
91
|
Pfister JP, Toyoizumi T, Barber D, Gerstner W. Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning. Neural Comput 2006; 18:1318-48. [PMID: 16764506 DOI: 10.1162/neco.2006.18.6.1318] [Citation(s) in RCA: 129] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed.
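The shape of this argument can be made concrete with a standard escape-noise formulation. The notation below ($g$, $\rho$, $\epsilon$) is ours and stands in for, but need not match, the paper's exact derivation. With membrane potential and firing intensity

$$u(t) = \sum_i w_i \sum_f \epsilon\!\left(t - t_i^f\right), \qquad \rho(t) = g\!\left(u(t)\right),$$

the log-likelihood of a desired output spike train $\{t^d\}$ over a trial of duration $T$ is

$$\log P\!\left(\{t^d\}\right) = \sum_d \log \rho\!\left(t^d\right) - \int_0^T \rho(s)\,\mathrm{d}s,$$

and gradient ascent on $w_i$ gives

$$\Delta w_i \propto \frac{\partial \log P}{\partial w_i} = \sum_d \frac{g'\!\left(u(t^d)\right)}{g\!\left(u(t^d)\right)} \sum_f \epsilon\!\left(t^d - t_i^f\right) - \int_0^T g'\!\left(u(s)\right) \sum_f \epsilon\!\left(s - t_i^f\right) \mathrm{d}s.$$

The first sum potentiates synapses whose EPSPs overlap the desired firing times, so the potentiation window inherits the EPSP time course; the second, normalizing term is where constraints such as rate control enter, which is why the shape of depression under reversed timing is not uniquely determined.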
Collapse
Affiliation(s)
- Jean-Pascal Pfister
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland.
Collapse
|
92
|
Gütig R, Sompolinsky H. The tempotron: a neuron that learns spike timing–based decisions. Nat Neurosci 2006; 9:420-8. [PMID: 16474393 DOI: 10.1038/nn1643] [Citation(s) in RCA: 324] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2005] [Accepted: 01/17/2006] [Indexed: 11/08/2022]
Abstract
The timing of action potentials in sensory neurons contains substantial information about the eliciting stimuli. Although the computational advantages of spike timing-based neuronal codes have long been recognized, it is unclear whether, and if so how, neurons can learn to read out such representations. We propose a new, biologically plausible supervised synaptic learning rule that enables neurons to efficiently learn a broad range of decision rules, even when information is embedded in the spatiotemporal structure of spike patterns rather than in mean firing rates. The number of categorizations of random spatiotemporal patterns that a neuron can implement is several times larger than the number of its synapses. The underlying nonlinear temporal computation allows neurons to access information beyond single-neuron statistics and to discriminate between inputs on the basis of multineuronal spike statistics. Our work demonstrates the high capacity of neural systems to learn to decode information embedded in distributed patterns of spike synchrony.
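The learning rule lends itself to a compact sketch: classify by whether the voltage ever crosses threshold, and on an error nudge each synapse in proportion to its PSP value at the time of the voltage maximum. The constants and pattern statistics below are illustrative choices, not the paper's.

```python
# Minimal tempotron-style sketch (constants are illustrative): the voltage
# is a weighted sum of PSP kernels; the pattern is classified by whether
# the voltage crosses threshold; on an error each synapse is nudged in
# proportion to its PSP value at the voltage maximum.
import numpy as np

rng = np.random.default_rng(2)
n_syn, dt = 50, 1e-3
t = np.arange(0.0, 0.5, dt)
tau, tau_s = 0.015, 0.003                      # membrane / synaptic constants

def kernel(s):
    s_pos = np.maximum(s, 0.0)                 # causal double-exponential PSP
    return (np.exp(-s_pos / tau) - np.exp(-s_pos / tau_s)) * (s > 0)

def voltage(w, pattern):
    v = np.zeros_like(t)
    for i, t_f in enumerate(pattern):          # one spike per synapse here
        v += w[i] * kernel(t - t_f)
    return v

patterns = [rng.uniform(0.0, 0.5, n_syn) for _ in range(10)]
labels = rng.integers(0, 2, 10)                # random binary targets

w, theta, lr = rng.normal(0.0, 0.01, n_syn), 1.0, 0.05
for _ in range(500):
    for p, y in zip(patterns, labels):
        v = voltage(w, p)
        t_max = t[np.argmax(v)]
        if (v.max() > theta) != bool(y):       # error: potentiate (+) on a
            sign = 1.0 if y else -1.0          # miss, depress (-) on a false
            w += sign * lr * kernel(t_max - p) # alarm, scaled by each PSP
errors = sum((voltage(w, p).max() > theta) != bool(y)
             for p, y in zip(patterns, labels))
print(f"training errors remaining: {errors}/10")
```

Note that only synapses whose spikes precede the voltage peak receive an update, since the PSP kernel is causal; this is what makes the rule sensitive to spatiotemporal structure rather than mean rates.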
Collapse
Affiliation(s)
- Robert Gütig
- Racah Institute of Physics, Hebrew University, 91904 Jerusalem, Israel.
Collapse
|
93
|
Maass W, Natschläger T, Markram H. Fading memory and kernel properties of generic cortical microcircuit models. J Physiol Paris 2005; 98:315-30. [PMID: 16310350 DOI: 10.1016/j.jphysparis.2005.09.020] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
It is quite difficult to construct circuits of spiking neurons that can carry out complex computational tasks. On the other hand, even randomly connected circuits of spiking neurons can in principle be used for complex computational tasks such as time-warp invariant speech recognition. This is possible because such circuits have an inherent tendency to integrate incoming information in such a way that simple linear readouts can be trained to transform the current circuit activity into the target output for a very large number of computational tasks. Consequently, we propose to analyze circuits of spiking neurons in terms of their roles as analog fading memory and non-linear kernels, rather than as implementations of specific computational operations and algorithms. This article is a sequel to [W. Maass, T. Natschläger, H. Markram, Real-time computing without stable states: a new framework for neural computation based on perturbations, Neural Comput. 14 (11) (2002) 2531-2560], and contains new results about the performance of generic neural microcircuit models for the recognition of speech that is subject to linear and non-linear time-warps, as well as for computations on time-varying firing rates. These computations rely, apart from general properties of generic neural microcircuit models, just on the capabilities of simple linear readouts trained by linear regression. This article also provides detailed data on the fading memory property of generic neural microcircuit models, and a quick review of other new results on the computational power of such circuits of spiking neurons.
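The fading memory property itself is easy to probe numerically: train one linear readout per delay to recover the input from d steps ago and watch the accuracy decay with d. The sketch below uses a rate-based stand-in for the spiking microcircuit; all parameters are ours, not the paper's.

```python
# Quick illustration of the fading-memory property (rate-based stand-in for
# a spiking circuit; parameters are illustrative): linear readouts trained
# by linear regression recover the input from d steps ago, with accuracy
# decaying as d grows.
import numpy as np

rng = np.random.default_rng(3)
N, T = 150, 5000
W = 0.9 * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # echo-state-like scaling
w_in = rng.normal(0.0, 1.0, N)
u = rng.uniform(-1.0, 1.0, T)

X, x = np.zeros((T, N)), np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

for d in (1, 5, 10, 20):                       # one readout per delay d
    w = np.linalg.lstsq(X[d:], u[:-d], rcond=None)[0]
    r = np.corrcoef(X[d:] @ w, u[:-d])[0, 1]
    print(f"delay {d:2d}: readout correlation {r:.2f}")
```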
Collapse
Affiliation(s)
- Wolfgang Maass
- Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria.
Collapse
|