1
[Comparison of electrophysiological properties of parvalbumin neurons in the tail of the striatum and the auditory cortex of mice]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2022; 42:1889-1895. PMID: 36651259; PMCID: PMC9878424; DOI: 10.12122/j.issn.1673-4254.2022.12.19.
Abstract
OBJECTIVE To compare the electrophysiological properties of parvalbumin (PV) neurons in the auditory cortex (AC) and in its descending auditory projection area, the tail of the striatum (TS). METHODS Responses of PV neurons to step-current stimulation were recorded in PV-Cre-Ai14 mice using the in vitro patch-clamp technique, and the firing and waveform characteristics of PV neuron action potentials (APs) were analyzed with Clampfit and MATLAB. The firing characteristics included AP onset, rheobase, average firing rate, F/I slope, and spike-frequency adaptation (SFA); the waveform characteristics included the spike waveform and afterpotential properties. RESULTS PV neurons in the TS and the AC had significantly different electrophysiological characteristics. In terms of spike waveform, PV neurons in the TS showed a smaller half-peak width (P < 0.001), a larger amplitude (P < 0.01), and larger maximum rising (P < 0.01) and falling (P < 0.05) slopes. For afterpotential characteristics, PV neurons in the TS showed a larger afterhyperpolarization (P < 0.01) and a shorter recovery time to the resting potential (P < 0.01). In terms of firing characteristics, PV neurons in the TS featured a higher rheobase (P < 0.01), a larger F/I slope (P < 0.01), a longer firing-onset delay (P < 0.001), and stronger SFA (P < 0.01). CONCLUSION PV neurons in the TS and the AC of mice show significantly different electrophysiological characteristics in processing auditory information.
2
Brivio S, Ly DRB, Vianello E, Spiga S. Non-linear Memristive Synaptic Dynamics for Efficient Unsupervised Learning in Spiking Neural Networks. Front Neurosci 2021; 15:580909. PMID: 33633531; PMCID: PMC7901913; DOI: 10.3389/fnins.2021.580909. Received 07/07/2020; accepted 01/06/2021.
Abstract
Spiking neural networks (SNNs) are a computational tool in which information is coded into spikes, as in some parts of the brain, unlike conventional neural networks (NNs), which compute over real numbers. SNNs can therefore implement intelligent information extraction in real time at the edge of data acquisition, complementing conventional NNs running in the cloud. Both NN classes face hardware constraints due to limited computing parallelism and the separation of logic and memory. Emerging memory devices, such as resistive switching memories, phase change memories, or memristive devices in general, are strong candidates to remove these hurdles for NN applications. The well-established training procedures of conventional NNs helped define the desiderata for memristive device dynamics implementing synaptic units. The generally agreed requirements are a linear evolution of memristive conductance upon stimulation with trains of identical pulses, and a symmetric response for conductance increase and decrease. Conversely, little work has been done to understand the main properties of memristive devices supporting efficient SNN operation, because a background theory for their training is lacking. As a consequence, the requirements for conventional NNs have been taken as a reference for developing memristive devices for SNNs. In the present work, we show that, for efficient CMOS/memristive SNNs, the requirements for synaptic memristive dynamics are very different from the needs of a conventional NN. We perform system-level simulations of an SNN trained to classify hand-written digit images through a spike-timing-dependent plasticity protocol, considering various linear and non-linear plausible synaptic memristive dynamics. We consider memristive dynamics bounded by artificial hard conductance values, and dynamics limited by the natural evolution toward asymptotic values (soft boundaries). We quantitatively analyze the impact of the resolution and non-linearity properties of the synapses on network training and classification performance. Finally, we demonstrate that non-linear synapses with hard boundary values enable higher classification performance and realize the best trade-off between classification accuracy and required training time. In light of these results, we discuss how memristive devices with non-linear dynamics constitute a technologically convenient solution for the development of on-line SNN training.
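The distinction between hard- and soft-boundary synaptic dynamics described above can be sketched in a few lines. The update rules and all parameter values here are illustrative assumptions, not the device models used in the paper.

```python
# Sketch of two memristive synaptic update rules: a linear update clipped
# at artificial hard boundaries, and a non-linear (soft-boundary) update
# whose step shrinks as the conductance approaches its asymptotic value.

def update_linear(g, direction, step=0.05, g_min=0.0, g_max=1.0):
    """Linear update with hard boundaries: each identical pulse moves the
    conductance by a fixed step, clipped to [g_min, g_max]."""
    g = g + step if direction == "potentiate" else g - step
    return min(max(g, g_min), g_max)

def update_soft(g, direction, alpha=0.2, g_min=0.0, g_max=1.0):
    """Soft-boundary update: the step is proportional to the remaining
    distance to the asymptote, so repeated identical pulses produce a
    saturating, non-linear conductance evolution."""
    if direction == "potentiate":
        return g + alpha * (g_max - g)
    return g - alpha * (g - g_min)

# Repeated identical potentiating pulses: the linear rule climbs at a
# constant rate until it hits the hard boundary; the soft rule saturates.
g_lin = g_soft = 0.0
trace_lin, trace_soft = [], []
for _ in range(30):
    g_lin = update_linear(g_lin, "potentiate")
    g_soft = update_soft(g_soft, "potentiate")
    trace_lin.append(g_lin)
    trace_soft.append(g_soft)
```

Under the soft-boundary rule the effective weight resolution shrinks near the asymptote, which is the kind of non-linearity whose impact on training the abstract says is analyzed quantitatively.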
Affiliation(s)
- Stefano Brivio
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
- Denys R B Ly
- Université Grenoble Alpes, CEA, Leti, Grenoble, France
- Sabina Spiga
- CNR - IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
3
Lim S. Mechanisms underlying sharpening of visual response dynamics with familiarity. eLife 2019; 8:e44098. PMID: 31393260; PMCID: PMC6711664; DOI: 10.7554/eLife.44098. Received 12/03/2018; accepted 08/07/2019.
Abstract
Experience-dependent modifications of synaptic connections are thought to change patterns of network activity and stimulus tuning with learning. However, few studies have explored how synaptic plasticity shapes the response dynamics of cortical circuits. Here, we investigated the mechanism underlying the sharpening of both stimulus selectivity and response dynamics with familiarity observed in monkey inferotemporal cortex. A broader distribution of activities and stronger oscillations in the response dynamics after learning provide evidence that synaptic plasticity in recurrent connections modifies the strength of positive feedback. Its interplay with slow negative feedback via firing-rate adaptation is critical in sharpening response dynamics. Analysis of changes in temporal patterns also enables us to disentangle recurrent and feedforward synaptic plasticity and provides a measure of the strength of recurrent synaptic plasticity. Overall, this work highlights the importance of analyzing changes in dynamics as well as network patterns to further reveal the mechanisms of visual learning.
Affiliation(s)
- Sukbin Lim
- Neural Science, NYU Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
4
Henderson JA, Gong P. Functional mechanisms underlie the emergence of a diverse range of plasticity phenomena. PLoS Comput Biol 2018; 14:e1006590. PMID: 30419014; PMCID: PMC6258383; DOI: 10.1371/journal.pcbi.1006590. Received 01/08/2018; revised 11/26/2018; accepted 10/23/2018.
Abstract
Diverse plasticity mechanisms are orchestrated to shape the spatiotemporal dynamics underlying brain functions. However, why these plasticity rules emerge and how their dynamics interact with neural activity to give rise to complex neural circuit dynamics remain largely unknown. Here we show that both Hebbian and homeostatic plasticity rules emerge from a functional perspective of neuronal dynamics whereby each neuron learns to encode its own activity in the population activity, so that the activity of the presynaptic neuron can be decoded from the activity of its postsynaptic neurons. We explain how a range of experimentally observed plasticity phenomena with widely separated timescales emerge from learning this encoding function, including STDP and its frequency dependence, and metaplasticity. We show that when implemented in neural circuits, these plasticity rules naturally give rise to essential neural response properties, including variable neural dynamics with balanced excitation and inhibition, and approximately log-normal distributions of synaptic strengths, while simultaneously encoding a complex real-world visual stimulus. These findings establish a novel function-based account of diverse plasticity mechanisms, providing a unifying framework relating plasticity, dynamics and neural computation. Many experiments have documented a variety of ways in which the connectivity strengths between neurons change in response to the activity of neurons. These changes are an important part of learning. However, it is not understood how such a diverse range of observations can be understood as consequences of an underlying algorithm used by brains for learning. In order to understand such a learning algorithm it is also necessary to understand the neural computation that is being learned, that is, how the functions of the brain are encoded in the activity of its neurons and its connectivity. In this work we propose a simple way in which information can be encoded and decoded in a network of neurons operating on real-world stimuli, and how this can be learned using two fundamental plasticity rules that change the strength of connections between neurons in response to neural activity. Surprisingly, many experimental observations emerge as consequences of this approach, indicating that studying the learning of function provides a novel framework for unifying plasticity, dynamics, and neural computation.
Affiliation(s)
- James A. Henderson
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- ARC Centre of Excellence for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
- Pulin Gong
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- ARC Centre of Excellence for Integrative Brain Function, The University of Sydney, Sydney, NSW, Australia
5
Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160259. PMID: 28093557; PMCID: PMC5247595; DOI: 10.1098/rstb.2016.0259. Accepted 11/09/2016.
Abstract
We review a body of theoretical and experimental research on Hebbian and homeostatic plasticity, starting from a puzzling observation: while the homeostasis of synapses found in experiments is a slow compensatory process, most mathematical models of synaptic plasticity use rapid compensatory processes (RCPs). Even worse, with the slow homeostatic plasticity reported in experiments, simulations of existing plasticity models cannot maintain network stability unless further control mechanisms are implemented. To solve this paradox, we suggest that in addition to slow forms of homeostatic plasticity there are RCPs which stabilize synaptic plasticity on short timescales. These rapid processes may include heterosynaptic depression triggered by episodes of high postsynaptic firing rate. While slower forms of homeostatic plasticity are not sufficient to stabilize Hebbian plasticity, they are important for fine-tuning neural circuits. Taken together, we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'.
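A rapid compensatory process of the kind proposed here, heterosynaptic depression gated by episodes of high postsynaptic firing, can be sketched in a few lines. The rate threshold and scaling factor are illustrative assumptions, not values from the review.

```python
def rapid_compensatory_update(weights, post_rate_hz, rate_threshold=50.0, beta=0.1):
    """Rapid compensatory process (RCP) sketch: when the postsynaptic
    firing rate exceeds a threshold, all synapses onto the neuron are
    scaled down (heterosynaptic depression), counteracting the positive
    feedback of Hebbian potentiation on a short timescale."""
    if post_rate_hz > rate_threshold:
        return [w * (1.0 - beta) for w in weights]
    return list(weights)
```

Because the depression hits every synapse on the neuron, not only the recently potentiated ones, it acts as a fast normalization that slow homeostatic scaling cannot provide.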
Affiliation(s)
- Friedemann Zenke
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Wulfram Gerstner
- Brain Mind Institute, School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
6
Hiratani N, Fukai T. Mixed signal learning by spike correlation propagation in feedback inhibitory circuits. PLoS Comput Biol 2015; 11:e1004227. PMID: 25910189; PMCID: PMC4409403; DOI: 10.1371/journal.pcbi.1004227. Received 12/02/2014; accepted 03/06/2015.
Abstract
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is a candidate synaptic-level mechanism. Because sensory inputs typically carry spike correlations, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlations in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial for efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions under which neurons detect minor signals with STDP, and show that, depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
Affiliation(s)
- Naoki Hiratani
- Department of Complexity Science and Engineering, The University of Tokyo, Kashiwa, Chiba, Japan
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
- Tomoki Fukai
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, Japan
7
Bernacchia A. The interplay of plasticity and adaptation in neural circuits: a generative model. Front Synaptic Neurosci 2014; 6:26. PMID: 25400577; PMCID: PMC4214225; DOI: 10.3389/fnsyn.2014.00026. Received 05/20/2014; accepted 10/09/2014.
Abstract
Multiple neural and synaptic phenomena take place in the brain. They operate over a broad range of timescales, and the consequences of their interplay are still unclear. In this work, I study a computational model of a recurrent neural network in which two dynamic processes take place: sensory adaptation and synaptic plasticity. Both phenomena are ubiquitous in the brain, but their dynamic interplay has not been investigated. I show that when both processes are included, the neural circuit is able to perform a specific computation: it becomes a generative model for certain distributions of input stimuli. The neural circuit is able to generate spontaneous patterns of activity that reproduce exactly the probability distribution of experienced stimuli. In particular, the landscape of the phase space includes a large number of stable states (attractors) that sample precisely this prior distribution. This work demonstrates that the interplay between distinct dynamical processes gives rise to useful computation, and proposes a framework in which neural circuit models for Bayesian inference may be developed in the future.
Affiliation(s)
- Alberto Bernacchia
- School of Engineering and Science, Jacobs University Bremen, Bremen, Germany
8
Zenke F, Hennequin G, Gerstner W. Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Comput Biol 2013; 9:e1003330. PMID: 24244138; PMCID: PMC3828150; DOI: 10.1371/journal.pcbi.1003330. Received 04/19/2013; accepted 09/25/2013.
Abstract
Hebbian changes of excitatory synapses are driven by, and further enhance, correlations between pre- and postsynaptic activities. Hence, Hebbian plasticity forms a positive feedback loop that can lead to instability in simulated neural networks. To keep activity at healthy, low levels, plasticity must therefore incorporate homeostatic control mechanisms. We find in numerical simulations of recurrent networks with a realistic triplet-based spike-timing-dependent plasticity rule (triplet STDP) that homeostasis has to detect rate changes on a timescale of seconds to minutes to keep the activity stable. We confirm this result in a generic mean-field formulation of network activity and homeostatic plasticity. Our results strongly suggest the existence of a homeostatic regulatory mechanism that reacts to firing rate changes on the order of seconds to minutes. Learning and memory in the brain are thought to be mediated through Hebbian plasticity. When a group of neurons is repetitively active together, their connections get strengthened. This can cause co-activation even in the absence of the stimulus that triggered the change. To avoid runaway behavior it is important to prevent neurons from forming excessively strong connections. This is achieved by regulatory homeostatic mechanisms that constrain the overall activity. Here we study the stability of background activity in a recurrent network model with a plausible Hebbian learning rule and homeostasis. We find that the activity in our model is unstable unless homeostasis reacts to rate changes on a timescale of minutes or faster. Since this timescale is incompatible with most known forms of homeostasis, this implies the existence of a previously unknown, rapid homeostatic regulatory mechanism capable of either gating the rate of plasticity or otherwise affecting synaptic efficacies on a short timescale.
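The core argument, that Hebbian positive feedback is only stable when homeostasis reads out the firing rate quickly, can be illustrated with a minimal mean-field sketch. The linear rate neuron and every parameter below are simplifying assumptions, not the paper's triplet-STDP network.

```python
# Minimal mean-field sketch: Hebbian growth is positive feedback on the
# weight; a homeostatic term driven by a fast low-pass "rate detector"
# pulls the firing rate back toward a set point before the loop runs away.

dt, t_end = 0.01, 50.0
eta, k = 0.05, 1.0          # Hebbian learning rate, homeostatic gain
tau_detect = 0.1            # FAST rate detector (slow detectors destabilize)
x, r_target = 1.0, 1.0      # presynaptic rate, homeostatic set point

w, r_bar = 0.2, 0.0
t = 0.0
while t < t_end:
    r = w * x                                # linear rate neuron
    r_bar += dt / tau_detect * (r - r_bar)   # low-pass rate estimate
    dw = eta * x * r + k * w * (r_target - r_bar)
    w += dt * dw
    t += dt

# Fixed point of the combined dynamics: r* = r_target + eta * x**2 / k
```

With these parameters the rate settles just above the set point instead of diverging; slowing the detector (large `tau_detect`) weakens the negative feedback and lets the Hebbian term dominate, which is the instability the paper quantifies.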
Affiliation(s)
- Friedemann Zenke
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland
- Guillaume Hennequin
- Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland
9
How lateral connections and spiking dynamics may separate multiple objects moving together. PLoS One 2013; 8:e69952. PMID: 23936362; PMCID: PMC3732294; DOI: 10.1371/journal.pone.0069952. Received 12/15/2012; accepted 06/13/2013.
Abstract
Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in inferotemporal cortex form a remarkably rapid and robust basis for object recognition, which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network composed of 'spiking' neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining 'Mexican hat' functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through spike-timing-dependent plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network's ability to develop appropriate input representations and subsequently output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the 'superposition catastrophe'. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.
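The 'Mexican hat' lateral connectivity that the model combines with firing-rate adaptation can be sketched as a difference of Gaussians. The amplitudes and widths below are illustrative assumptions, not the paper's parameters.

```python
import math

def mexican_hat(d, a_exc=1.5, sigma_exc=1.0, a_inh=1.0, sigma_inh=3.0):
    """'Mexican hat' lateral weight as a function of cortical distance d:
    narrow short-range excitation minus broader inhibition, so nearby
    neurons support each other while more distant competitors are
    suppressed."""
    exc = a_exc * math.exp(-d * d / (2 * sigma_exc ** 2))
    inh = a_inh * math.exp(-d * d / (2 * sigma_inh ** 2))
    return exc - inh
```

A kernel of this shape makes spatially separated stimulus representations compete; adding firing-rate adaptation then lets the winning representation tire and hand activity over to its competitor, producing the anti-phase oscillations (perceptual cycles) described above.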
10
Pawlak V, Greenberg DS, Sprekeler H, Gerstner W, Kerr JND. Changing the responses of cortical neurons from sub- to suprathreshold using single spikes in vivo. eLife 2013; 2:e00012. PMID: 23359858; PMCID: PMC3552422; DOI: 10.7554/eLife.00012. Received 05/04/2012; accepted 11/29/2012.
Abstract
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here, we show in vivo that a spike-timing-dependent plasticity (STDP) protocol (pairing a postsynaptic AP with visually driven presynaptic inputs) modifies a neuron's AP response bidirectionally, depending on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by restructuring of the neuron's responses to surround stimulus locations and of the membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity. DOI:http://dx.doi.org/10.7554/eLife.00012.001 Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin 'wire-like' processes called axons. Axons can extend across the brain, enabling neurons to form connections, or synapses, with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded. The basic unit of communication between neurons is the action potential, often called a 'spike', which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to.
These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they 'recognize' a particular feature in a scene, a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception. Memory storage and formation are thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, and this process is often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, the changes must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission, that is, the strength of the synapse, can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as 'spike-timing-dependent plasticity'), this has never been shown to have an effect on the number of action potentials generated in a single neuron in vivo. It is therefore unknown whether this process is functionally relevant. Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons.
After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing. This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron. DOI:http://dx.doi.org/10.7554/eLife.00012.002
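The timing dependence of this pairing protocol follows the standard pairwise STDP window. The exponential window shape, amplitudes, and the weight threshold below are generic textbook assumptions, not fits to the recorded data.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP window; dt_ms = t_post - t_pre. Pre-before-post
    (dt_ms > 0) potentiates, post-before-pre depresses, with exponential
    decay of the effect as the spikes move apart in time."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

# Repeated pairings, as in the in vivo protocol: an evoked postsynaptic AP
# 10 ms after the visually driven input strengthens the synapse, while the
# reverse order weakens it.
w_up, w_down, threshold = 0.8, 0.8, 1.0   # illustrative weights/threshold
for _ in range(40):
    w_up += stdp_dw(+10.0)     # post follows pre -> potentiation
    w_down += stdp_dw(-10.0)   # post precedes pre -> depression
```

After enough pre-before-post pairings the weight crosses the (illustrative) threshold, mirroring the conversion of a previously subthreshold visual response into a suprathreshold one.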
Affiliation(s)
- Verena Pawlak
- Network Imaging Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
11
Cutsuridis V. Interaction of inhibition and triplets of excitatory spikes modulates the NMDA-R-mediated synaptic plasticity in a computational model of spike timing-dependent plasticity. Hippocampus 2012; 23:75-86. PMID: 22851353; DOI: 10.1002/hipo.22057. Accepted 07/05/2012.
Abstract
Spike timing-dependent plasticity (STDP) experiments have shown that a synapse is strengthened when a presynaptic spike precedes a postsynaptic one, and depressed when the order is reversed. The canonical form of STDP has an asymmetric shape, with peak long-term potentiation at +6 ms and peak long-term depression at -5 ms. Experiments in hippocampal cultures with more complex stimuli such as triplets (one presynaptic spike combined with two postsynaptic spikes, or one postsynaptic spike with two presynaptic spikes) have shown that pre-post-pre spike triplets result in no change in synaptic strength, whereas post-pre-post spike triplets lead to significant potentiation. The sign and magnitude of STDP have also been hypothesized experimentally to be modulated by inhibition. Recently, a computational study showed that the asymmetrical form of STDP in the CA1 pyramidal cell dendrite when two spikes interact switches to a symmetrical one in the presence of inhibition under certain conditions. In the present study, I investigate computationally how inhibition modulates STDP in the CA1 pyramidal neuron dendrite when it is driven by triplets. The model uses calcium as the postsynaptic signaling agent for STDP and is consistent with the experimental triplet observations in the absence of inhibition: simulated pre-post-pre spike triplets result in no change in synaptic strength, whereas simulated post-pre-post spike triplets lead to significant potentiation. When inhibition is bounded by the onset and offset of the triplet stimulation, the strength of the synapse decreases as the strength of inhibition increases. When inhibition arrives either a few milliseconds before or at the onset of the last spike in the pre-post-pre triplet stimulation, the synapse is potentiated. Variability in the frequency of inhibition (50 vs. 100 Hz) produces no change in synaptic strength. Finally, a 5% variation in the model's calcium threshold parameters shows that its performance is robust.
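The calcium-threshold logic underlying such a model can be sketched as a two-threshold readout. The threshold values and the peak-calcium levels assigned to each triplet ordering are hypothetical, chosen only to reproduce the qualitative outcomes stated above, not taken from the model.

```python
def synaptic_change(ca_peak, theta_d=0.35, theta_p=0.55):
    """Calcium-control readout: peak calcium below theta_d leaves the
    synapse unchanged, intermediate calcium gives LTD, and high calcium
    gives LTP. Threshold values are illustrative placeholders."""
    if ca_peak >= theta_p:
        return "LTP"
    if ca_peak >= theta_d:
        return "LTD"
    return "no change"

# Hypothetical peak calcium for the two triplet orderings (assumptions):
ca_post_pre_post = 0.70   # post-pre-post -> high calcium -> potentiation
ca_pre_post_pre = 0.30    # pre-post-pre -> low calcium -> no change
```

In this reading, inhibition shapes plasticity by shaving the dendritic calcium transient, moving it between the no-change, LTD, and LTP regions.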
12
Viriyopase A, Bojak I, Zeitler M, Gielen S. When Long-Range Zero-Lag Synchronization is Feasible in Cortical Networks. Front Comput Neurosci 2012; 6:49. PMID: 22866034; PMCID: PMC3406310; DOI: 10.3389/fncom.2012.00049. Received 01/26/2012; accepted 06/27/2012.
Abstract
Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the beta and gamma bands with frequencies in the range of 14–30 and 40–80 Hz, respectively. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, such as for neuronal communication, attention, memory, and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization may occur with non-zero phase lags. This raises the questions whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model where two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo–Strogatz neurons and type II Hodgkin–Huxley neurons. We have investigated the dynamics of the model for various types of synaptic coupling and importantly considered the potential impact of Spike-Timing Dependent Plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin–Huxley neurons, which have a biphasic phase response curve, than for type I neurons. STDP facilitates zero-lag synchrony as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP.
Affiliation(s)
- Atthaphon Viriyopase
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen (Medical Centre), Nijmegen, Netherlands
|
13
|
Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS Comput Biol 2012; 8:e1002584. [PMID: 22792056] [PMCID: PMC3390410] [DOI: 10.1371/journal.pcbi.1002584]
Abstract
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and the STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered, since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains; in other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds for typical STDP parameters.
Extracting features from sensory stimuli is an important function of synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge from exposure to moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) is not. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally over the last decade. Following recent studies demonstrating that STDP can perform ICA in specific cases, we show how STDP relates to PCA or ICA and, in particular, explain the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
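The pairwise STDP rule underlying these analyses can be sketched as an all-to-all sum of an exponential learning window over pre/post spike pairs (a generic textbook form with illustrative amplitudes and time constant, not the paper's exact kernel):

```python
import numpy as np

def stdp_dw(pre, post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pairwise exponential STDP: sum the learning window W(dt) over all
    pre/post spike pairs (all-to-all interaction), dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    dt = np.subtract.outer(np.asarray(post), np.asarray(pre))
    W = np.where(dt > 0,  A_plus  * np.exp(-dt / tau),
                 np.where(dt < 0, -A_minus * np.exp(dt / tau), 0.0))
    return W.sum()

# Pre leading post by 5 ms at every pairing -> net potentiation;
# reversing the timing yields net depression.
print(stdp_dw(pre=[10, 50, 90], post=[15, 55, 95]) > 0)
```

Because the window weights each pair by a kernel of the spike-time difference, the expected weight change is a kernel-weighted integral of the pre/post cross-correlogram — the quantity the paper's spectral analysis is built on.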
|
14
|
Ren Q, Zhang Z, Zhao J. Effect on information transfer of synaptic pruning driven by spike-timing-dependent plasticity. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:022901. [PMID: 22463266] [DOI: 10.1103/PhysRevE.85.022901]
Abstract
Spike-timing-dependent plasticity (STDP) is an important driving force of self-organization in neural systems. With properly chosen input signals, STDP can yield a synaptic pruning process, whose functional role needs to be further investigated. We explore this issue from an information theoretic standpoint. Temporally correlated stimuli are introduced to neurons of an input layer. Then synapses on the dendrite, and thus the receptive field, of an output neuron are refined by STDP. The mutual information between input and output spike trains is calculated with the context tree method. The results show that synapse removal can enhance information transfer, i.e., that "less can be more" under certain constraints that stress the balance between potentiation and depression dictated by the parameters of the STDP rule, as well as the temporal scale of the input correlation.
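The quantity this study estimates — mutual information between input and output spike trains — can be illustrated with the simplest plug-in estimator on binarized (binned) trains. The paper itself uses the more sophisticated context-tree method; this naive estimator is only a sketch of the same quantity:

```python
import numpy as np

def mutual_info(x, y):
    """Naive plug-in estimate of the mutual information (in bits) between
    two binary sequences, e.g. binned spike trains, from their joint
    histogram. Biased for short sequences; the context-tree method used
    in the paper handles temporal dependencies that this ignores."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))   # joint probability
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

x = np.array([0, 1, 0, 1, 1, 0, 1, 0])
print(mutual_info(x, x))      # identical trains: MI equals H(x) = 1.0 bit
print(mutual_info(x, 1 - x))  # deterministic function of x: also 1.0 bit
```

"Less can be more" then means: removing synapses can *increase* this input-output quantity when the surviving receptive field better matches the temporal correlation structure of the stimulus.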
Affiliation(s)
- Quansheng Ren
- School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, People's Republic of China
|
15
|
Wade JJ, McDaid LJ, Harkin J, Crunelli V, Kelso JAS. Bidirectional coupling between astrocytes and neurons mediates learning and dynamic coordination in the brain: a multiple modeling approach. PLoS One 2011; 6:e29445. [PMID: 22242121] [PMCID: PMC3248449] [DOI: 10.1371/journal.pone.0029445]
Abstract
Recent research suggests that astrocyte networks, in addition to their nutrient- and waste-processing functions, regulate both structural and synaptic plasticity. Understanding the biological mechanisms that underpin such plasticity requires the development of cell-level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron, or AN, model) which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. First, spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites: results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow spike-timing-dependent plasticity based learning to occur at the associated synapses. Second, bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity) and the modeling strategy may be extended to coordination among remote neuron clusters.
Affiliation(s)
- John J Wade
- Intelligent Systems Research Centre, School of Computing and Intelligent Systems, University of Ulster, Derry, Northern Ireland.
|
16
|
A triplet spike-timing-dependent plasticity model generalizes the Bienenstock-Cooper-Munro rule to higher-order spatiotemporal correlations. Proc Natl Acad Sci U S A 2011; 108:19383-8. [PMID: 22080608] [DOI: 10.1073/pnas.1105933108]
Abstract
Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing-dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity.
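A trace-based triplet rule of the kind described above can be sketched event by event (a minimal all-to-all formulation in the spirit of Pfister and Gerstner's model; the parameter values and exact trace bookkeeping here are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

def triplet_stdp(pre, post, A2p=5e-3, A3p=6e-3, A2m=7e-3,
                 tau_p=16.8, tau_m=33.7, tau_y=114.0):
    """Minimal triplet STDP (illustrative parameters, times in ms).
    Traces: r1 tracks pre spikes; o1 and o2 track post spikes on fast
    and slow timescales. A pre spike pairs with the post trace o1 for
    depression; a post spike pairs with the pre trace r1 for
    potentiation, boosted by the slow post trace o2 (the triplet term
    that makes potentiation grow with postsynaptic rate, as in BCM)."""
    events = sorted([(t, 'pre') for t in pre] + [(t, 'post') for t in post])
    w, r1, o1, o2, last = 0.0, 0.0, 0.0, 0.0, events[0][0]
    for t, kind in events:
        dt = t - last
        r1 *= np.exp(-dt / tau_p); o1 *= np.exp(-dt / tau_m)
        o2 *= np.exp(-dt / tau_y); last = t
        if kind == 'pre':
            w -= o1 * A2m                # pre after post: depression
            r1 += 1.0
        else:
            w += r1 * (A2p + A3p * o2)   # post after pre, triplet boost
            o1 += 1.0; o2 += 1.0         # o2 used at its pre-spike value
    return w
```

Repeated pre-before-post pairings yield net potentiation, reversed timing net depression; at high pairing rates the `A3p * o2` term dominates, which is what recovers the rate dependence of the BCM rule.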
|
17
|
Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PLoS One 2011; 6:e25339. [PMID: 22003389] [PMCID: PMC3189213] [DOI: 10.1371/journal.pone.0025339]
Abstract
Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks the sufficiently strong competition required to obtain clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to this specific weight dependence, the new model can produce broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further into the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models.
Through long-tail weight distributions, log-STDP achieves both stable weight dynamics and robust synaptic competition, which are crucial for spike-based information processing.
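The key mechanism — sublinear weight dependence of depression — can be seen in a simple drift calculation (the `log1p` form below is an illustrative stand-in for a sublinear rule, not necessarily the paper's exact parametrization):

```python
import numpy as np

def drift(w, A=0.05, w0=1.0, mode="mult"):
    """Expected weight change per unit time under additive potentiation
    balanced against weight-dependent depression. 'mult' scales
    depression linearly with w (multiplicative STDP); 'log' scales it
    sublinearly, in the spirit of log-STDP (illustrative log1p form)."""
    dep = (w / w0) if mode == "mult" else np.log1p(w / w0)
    return A * (1.0 - dep)

# Equilibria: mult -> w* = w0; log -> w* = (e - 1) * w0.
# Far above equilibrium the sublinear rule pulls the weight back much
# more weakly, which is what lets a long tail of strong weights survive
# without a hard upper bound.
w_big = 10.0
print(drift(w_big, mode="mult"), drift(w_big, mode="log"))
```

With linear depression the restoring drift grows in proportion to the weight, clamping the distribution; with sublinear depression it grows only logarithmically, so fluctuations can carry strongly driven synapses deep into the tail — the "graded competition" the abstract describes.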
|
18
|
Vandecasteele M, Deniau JM, Venance L. Spike frequency adaptation is developmentally regulated in substantia nigra pars compacta dopaminergic neurons. Neuroscience 2011; 192:1-10. [PMID: 21767612] [DOI: 10.1016/j.neuroscience.2011.07.017]
Abstract
Dopaminergic neurons of the substantia nigra pars compacta play a key role in the modulation of the basal ganglia and provide a reward-related teaching signal essential for adaptive motor control. They are generally considered a homogeneous population despite several chemical and electrophysiological heterogeneities, which could underlie different preferential patterns of activity and/or different roles. Using whole-cell patch-clamp recordings in juvenile rat brain slices, we observed that the evoked activity of dopaminergic neurons displays variable spike frequency adaptation patterns. The intensity of spike frequency adaptation decreased during postnatal development. The adaptation was associated with an increase in the initial firing frequency due to faster kinetics of the afterhyperpolarization component of the spike. Adaptation was enhanced when small-conductance calcium-activated potassium (SK) channels were blocked with bath application of apamin. Lastly, spike frequency adaptation of the evoked discharge was associated with greater irregularity in the spontaneous firing pattern. Altogether, these results show a developmental heterogeneity and electrophysiological maturation of substantia nigra dopaminergic neurons.
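The phenomenon measured here — spike frequency adaptation (SFA) — can be reproduced by a minimal leaky integrate-and-fire neuron with a spike-triggered adaptation current (a generic textbook mechanism standing in for the SK-channel biophysics studied in the paper; all parameters are illustrative, in dimensionless voltage units with time in ms):

```python
import numpy as np

def adapting_lif(I=2.0, b=0.15, T=500.0, dt=0.1):
    """LIF neuron with a spike-triggered adaptation variable w:
    each spike increments w, which subtracts from the drive and decays
    slowly, so successive inter-spike intervals (ISIs) lengthen."""
    tau_m, tau_w = 20.0, 100.0          # membrane / adaptation time constants (ms)
    v_rest, v_th, v_reset = 0.0, 1.0, 0.0
    v, w, spikes = v_rest, 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau_m * (-(v - v_rest) + I - w)
        w += dt / tau_w * (-w)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
            w += b                       # spike-triggered adaptation jump
    return np.diff(spikes)               # inter-spike intervals

isi = adapting_lif()
# Later intervals are longer than the first ones: the firing rate
# adapts over the course of the current step.
```

Blocking the adaptation current (set `b=0`) abolishes the ISI lengthening, loosely mirroring the effect of apamin on SK channels described in the abstract.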
Affiliation(s)
- M Vandecasteele
- Laboratory of Dynamics and Pathophysiology of Neuronal Networks, CIRB, INSERM-U1050, CNRS-UMR7241, Collège de France, Paris, France
|
19
|
Pool RR, Mato G. Spike-timing-dependent plasticity and reliability optimization: the role of neuron dynamics. Neural Comput 2011; 23:1768-89. [PMID: 21492013] [DOI: 10.1162/neco_a_00140]
Abstract
Plastic changes in synaptic efficacy can depend on the time ordering of presynaptic and postsynaptic spikes. This phenomenon is called spike-timing-dependent plasticity (STDP). One of the most striking aspects of this plasticity mechanism is that the STDP windows display a great variety of forms in different parts of the nervous system. We explore this issue from a theoretical point of view. We choose as the optimization principle the minimization of conditional entropy or maximization of reliability in the transmission of information. We apply this principle to two types of postsynaptic dynamics, designated type I and type II. The first is characterized as being an integrator, while the second is a resonator. We find that, depending on the parameters of the models, the optimization principle can give rise to a wide variety of STDP windows, such as antisymmetric Hebbian, predominantly depressing or symmetric with one positive region and two lateral negative regions. We can relate each of these forms to the dynamical behavior of the different models. We also propose experimental tests to assess the validity of the optimization principle.
Affiliation(s)
- R Rossi Pool
- Comisión Nacional de Energía Atómica and CONICET, Centro Atómico Bariloche and Instituto Balseiro, 8400 San Carlos de Bariloche, RN, Argentina.
|
20
|
Kunkel S, Diesmann M, Morrison A. Limits to the development of feed-forward structures in large recurrent neuronal networks. Front Comput Neurosci 2011; 4:160. [PMID: 21415913] [PMCID: PMC3042733] [DOI: 10.3389/fncom.2010.00160]
Abstract
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. The demonstrations that do exist typically rely on artificially constraining the problem: techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable such development.
Affiliation(s)
- Susanne Kunkel
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Germany
|