1. Gillett M, Brunel N. Dynamic control of sequential retrieval speed in networks with heterogeneous learning rules. eLife 2024; 12:RP88805. PMID: 39197099; PMCID: PMC11357343; DOI: 10.7554/elife.88805.
Abstract
Temporal rescaling of sequential neural activity has been observed in multiple brain areas during behaviors involving time estimation and motor execution at variable speeds. Temporally asymmetric Hebbian rules have been used in network models to learn and retrieve sequential activity, with characteristics that are qualitatively consistent with experimental observations. However, in these models sequential activity is retrieved at a fixed speed. Here, we investigate the effects of a heterogeneity of plasticity rules on network dynamics. In a model in which neurons differ by the degree of temporal symmetry of their plasticity rule, we find that retrieval speed can be controlled by varying external inputs to the network. Neurons with temporally symmetric plasticity rules act as brakes and tend to slow down the dynamics, while neurons with temporally asymmetric rules act as accelerators of the dynamics. We also find that such networks can naturally generate separate 'preparatory' and 'execution' activity patterns with appropriate external inputs.
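A minimal rate-model sketch of the construction described in this abstract (pattern statistics, gain function, the uniform inhibition term, and all parameter values are illustrative assumptions rather than the published model): each neuron's incoming weights are learned with either a temporally asymmetric or a temporally symmetric Hebbian rule, and the network is then cued with the first pattern of the sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 400, 12, 0.2                          # neurons, sequence length, coding level (assumed)
xi = (rng.random((P, N)) < f).astype(float)     # sparse binary patterns forming the sequence

is_asym = rng.random(N) < 0.5                   # half the neurons use the asymmetric rule

W = np.zeros((N, N))
for mu in range(P - 1):
    forward = np.outer(xi[mu + 1] - f, xi[mu] - f)       # "next follows current" (asymmetric)
    backward = np.outer(xi[mu] - f, xi[mu + 1] - f)
    symmetric = 0.5 * (forward + backward)               # order-insensitive (symmetric)
    W += np.where(is_asym[:, None], forward, symmetric)  # each row uses its own rule
W /= N * f * (1 - f)
np.fill_diagonal(W, 0.0)

def phi(x):                                     # simple saturating gain (assumed)
    return np.tanh(np.clip(x, 0.0, None))

r = xi[0].copy()                                # cue the first pattern
dt, tau, I_ext = 0.1, 1.0, 0.3                  # changing I_ext is the knob for retrieval speed
for _ in range(1000):
    # global inhibition term (assumed) keeps activity sparse
    r += dt / tau * (-r + phi(W @ r - 2.0 * r.mean() + I_ext))

overlaps = xi @ r / (N * f)
print("pattern with the largest overlap at the end:", int(np.argmax(overlaps)))
```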
Affiliation(s)
- Maxwell Gillett: Department of Neurobiology, Duke University, Durham, United States
- Nicolas Brunel: Department of Neurobiology, Duke University, Durham, United States; Department of Physics, Duke University, Durham, United States
2. Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. PMID: 38684893; DOI: 10.1038/s41593-024-01630-6.
Abstract
Reliable execution of precise behaviors requires that brain circuits are resilient to variations in neuronal dynamics. Genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, in adult songbirds with stereotypical songs triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support the functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular and systems-level restorative mechanisms that ensure behavioral resilience.
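A toy sketch of the population-level idea (the feedforward chain and the multiplicative scaling rule are assumptions, not the authors' model): after most neurons are silenced, an unsupervised homeostatic rule that pushes each surviving neuron's summed excitatory input back toward its pre-perturbation set point strengthens the remaining synapses, echoing the increased excitatory input onto unmanipulated neurons reported here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, per_group = 20, 50
N = n_groups * per_group
group = np.repeat(np.arange(n_groups), per_group)

W = np.zeros((N, N))                              # feedforward weights: group g -> group g+1
for g in range(n_groups - 1):
    pre, post = np.where(group == g)[0], np.where(group == g + 1)[0]
    W[np.ix_(post, pre)] = rng.uniform(0.5, 1.5, (per_group, per_group)) / per_group

target_input = W.sum(axis=1).copy()               # summed input of each neuron before the perturbation

silenced = rng.random(N) < 0.6                    # perturbation: 60% of neurons lose their output
W[:, silenced] = 0.0

for _ in range(200):                              # offline homeostatic scaling of surviving inputs
    total_in = W.sum(axis=1)
    alive = (total_in > 0) & (target_input > 0)
    gain = 1.0 + 0.05 * (target_input[alive] / total_in[alive] - 1.0)
    W[alive] *= gain[:, None]

recovered = np.isclose(W.sum(axis=1)[alive], target_input[alive], rtol=1e-2).mean()
print(f"fraction of surviving neurons back at their input set point: {recovered:.2f}")
```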
Affiliation(s)
- Bo Wang: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Zsofia Torok: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell: Computational Neuroscience Center, University of Washington, Seattle, WA, USA; Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Computational Neuroscience Center, University of Washington, Seattle, WA, USA; Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
3. Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. PMID: 38918347; DOI: 10.1007/978-3-031-60183-5_5.
Abstract
Converging experimental and computational evidence indicates that, on the scale of seconds, the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. Neural sequences, which have been observed across species, brain areas, and behavioral paradigms, are a prototypical example of neural population clocks. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime to perform temporal computations.
Affiliation(s)
- Dean V Buonomano: Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA; Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
4. Mackevicius EL, Gu S, Denisenko NI, Fee MS. Self-organization of songbird neural sequences during social isolation. eLife 2023; 12:e77262. PMID: 37252761; PMCID: PMC10229124; DOI: 10.7554/elife.77262.
Abstract
Behaviors emerge via a combination of experience and innate predispositions. As the brain matures, it undergoes major changes in cellular, network, and functional properties that can be due to sensory experience as well as developmental processes. In normal birdsong learning, neural sequences emerge to control song syllables learned from a tutor. Here, we disambiguate the role of tutor experience and development in neural sequence formation by delaying exposure to a tutor. Using functional calcium imaging, we observe neural sequences in the absence of tutoring, demonstrating that tutor experience is not necessary for the formation of sequences. However, after exposure to a tutor, pre-existing sequences can become tightly associated with new song syllables. Since we delayed tutoring, only half our birds learned new syllables following tutor exposure. The birds that failed to learn were the birds in which pre-tutoring neural sequences were most 'crystallized,' that is, already tightly associated with their (untutored) song.
Affiliation(s)
- Emily L Mackevicius: McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Shijie Gu: McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Natalia I Denisenko: McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Michale S Fee: McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
5. Wang X, Jin Y, Hao K. Computational Modeling of Structural Synaptic Plasticity in Echo State Networks. IEEE Trans Cybern 2022; 52:11254-11266. PMID: 33760748; DOI: 10.1109/tcyb.2021.3060466.
Abstract
Most existing studies on computational modeling of neural plasticity have focused on synaptic plasticity. However, regulation of the internal weights in the reservoir based on synaptic plasticity often results in unstable learning dynamics. In this article, a structural synaptic plasticity learning rule is proposed to train the weights and add or remove neurons within the reservoir, which is shown to alleviate the instability of synaptic plasticity and to increase the memory capacity of the network. Our experimental results also reveal that a few stronger connections may last for a longer period of time in a constantly changing network structure, and are relatively resistant to decay or disruptions in the learning process. These results are consistent with the evidence observed in biological systems. Finally, we show that an echo state network (ESN) using the proposed structural plasticity rule outperforms an ESN using synaptic plasticity and three state-of-the-art ESNs on four benchmark tasks.
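A minimal sketch of the general idea (the prune-and-regrow criterion, reservoir size, and task are assumptions and do not reproduce the article's algorithm): an echo state network whose reservoir connections are periodically pruned and regrown before a ridge-regression readout is trained.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 2000
u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.standard_normal(T)    # toy input signal
y_target = np.roll(u, -5)                                        # predict 5 steps ahead

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.1)     # sparse random reservoir

def rescale(W, rho=0.9):                     # keep the echo state property
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (rho / radius) if radius > 0 else W

def collect_states(W):
    x, X = np.zeros(N), np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x
    return X

for _ in range(3):                           # structural-plasticity rounds
    nz = np.flatnonzero(W)
    weak = nz[np.argsort(np.abs(W.flat[nz]))[: len(nz) // 20]]
    W.flat[weak] = 0.0                       # prune the weakest 5% of existing synapses
    free = np.flatnonzero(W == 0)
    new = rng.choice(free, size=len(weak), replace=False)
    W.flat[new] = 0.1 * rng.standard_normal(len(weak))    # regrow the same number elsewhere

W = rescale(W)
X = collect_states(W)
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ y_target)   # ridge readout
print("readout MSE:", float(np.mean((X @ w_out - y_target) ** 2)))
```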
6. Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. PMID: 34717794; PMCID: PMC9194601; DOI: 10.1016/j.neuron.2021.10.011.
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.
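A toy illustration of the central statistical point (state names and dwell-time distributions are assumed; this is not the authors' attractor network or hidden Markov model analysis): the order of states is identical on every trial while the dwell time in each state is stochastic, so the action sequence is reliable even though its timing is highly variable.

```python
import numpy as np

rng = np.random.default_rng(8)
actions = ["approach", "poke", "wait", "press", "collect"]
mean_dwell = np.array([0.4, 0.3, 1.2, 0.5, 0.8])        # seconds, assumed

# each row is a trial; the state order is fixed, only the dwell times vary
dwells = rng.exponential(mean_dwell, size=(2000, len(actions)))
transition_times = np.cumsum(dwells, axis=1)

for i, name in enumerate(actions):
    t = transition_times[:, i]
    print(f"{name:>8}: mean transition time {t.mean():.2f} s, SD across trials {t.std():.2f} s")
```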
7. Egger R, Tupikov Y, Elmaleh M, Katlowitz KA, Benezra SE, Picardo MA, Moll F, Kornfeld J, Jin DZ, Long MA. Local Axonal Conduction Shapes the Spatiotemporal Properties of Neural Sequences. Cell 2020; 183:537-548.e12. PMID: 33064989; DOI: 10.1016/j.cell.2020.09.019.
Abstract
Sequential activation of neurons has been observed during various behavioral and cognitive processes, but the underlying circuit mechanisms remain poorly understood. Here, we investigate premotor sequences in HVC (proper name) of the adult zebra finch forebrain that are central to the performance of the temporally precise courtship song. We use high-density silicon probes to measure song-related population activity, and we compare these observations with predictions from a range of network models. Our results support a circuit architecture in which heterogeneous delays between sequentially active neurons shape the spatiotemporal patterns of HVC premotor neuron activity. We gauge the impact of several delay sources, and we find the primary contributor to be slow conduction through axonal collaterals within HVC, which typically adds between 1 and 7.5 ms for each link within the sequence. Thus, local axonal "delay lines" can play an important role in determining the dynamical repertoire of neural circuits.
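A minimal numerical illustration of the delay-line argument (the uniform 1-7.5 ms distribution is an assumption based only on the range quoted above): each neuron's burst time is the cumulative sum of the per-link delays, so heterogeneous axonal conduction delays alone determine how far the sequence stretches in time.

```python
import numpy as np

rng = np.random.default_rng(3)
n_links, n_samples = 40, 1000
delays = rng.uniform(1.0, 7.5, size=(n_samples, n_links))   # ms added by each link (assumed uniform)
burst_times = np.cumsum(delays, axis=1)                     # burst time of each neuron in the chain

print(f"mean duration of a 40-link sequence: {burst_times[:, -1].mean():.1f} ms")
print(f"spread of that duration across resampled delay sets: {burst_times[:, -1].std():.1f} ms")
```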
Affiliation(s)
- Robert Egger: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Yevhen Tupikov: Department of Physics and Center for Neural Engineering, Pennsylvania State University, University Park, PA 16802, USA
- Margot Elmaleh: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Kalman A Katlowitz: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Sam E Benezra: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Michel A Picardo: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Felix Moll: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Jörgen Kornfeld: Max Planck Institute of Neurobiology, 82152 Martinsried, Germany
- Dezhe Z Jin: Department of Physics and Center for Neural Engineering, Pennsylvania State University, University Park, PA 16802, USA
- Michael A Long: NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
8. Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. PMID: 33734085; PMCID: PMC7972481; DOI: 10.7554/elife.63751.
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
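A minimal sketch of a synaptic eligibility-trace rule of the general kind invoked here (trace time constants, input statistics, and the single gating event are illustrative assumptions; the paper's specific rule is not reproduced): Hebbian coincidences load a decaying trace, and a later gating signal converts whatever trace survives into a lasting weight change.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 1000, 1.0                      # ms
tau_pre, tau_e, eta = 20.0, 200.0, 0.05

pre = rng.random(T) < 0.02             # presynaptic Poisson spikes (~20 Hz, assumed)
post = np.roll(pre, 5)                 # postsynaptic spikes 5 ms after pre (toy causal pairing)
gate = np.zeros(T)
gate[600] = 1.0                        # delayed gating signal at t = 600 ms

w, e, x_pre = 0.5, 0.0, 0.0
for t in range(T):
    x_pre += -dt / tau_pre * x_pre + float(pre[t])        # fast presynaptic trace
    e += -dt / tau_e * e + x_pre * float(post[t])         # coincidence loads the eligibility trace
    w += eta * e * gate[t]                                # trace converted into a weight change
print(f"weight after the gated update: {w:.3f}")
```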
Affiliation(s)
- Ian Cone: Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States; Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval: Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
9. Tupikov Y, Jin DZ. Addition of new neurons and the emergence of a local neural circuit for precise timing. PLoS Comput Biol 2021; 17:e1008824. PMID: 33730085; PMCID: PMC8007041; DOI: 10.1371/journal.pcbi.1008824.
Abstract
During development, neurons arrive at local brain areas over an extended period of time, but how they form local neural circuits is unknown. Here we computationally model the emergence of a network for precise timing in the premotor nucleus HVC in songbird. We show that new projection neurons, added to HVC post hatch at early stages of song development, are recruited to the end of a growing feedforward network. High spontaneous activity of the new neurons makes them the prime targets for recruitment in a self-organized process via synaptic plasticity. Once recruited, the new neurons fire readily at precise times, and they become mature. Neurons that are not recruited become silent and are replaced by new immature neurons. Our model incorporates realistic HVC features such as interneurons, spatial distributions of neurons, and distributed axonal delays. The model predicts that the birth order of the projection neurons correlates with their burst timing during the song. Functions of local neural circuits depend on their specific network structures, but how the networks are wired is unknown. We show that such structures can emerge during development through a self-organized process, during which the network is wired by neuron-by-neuron recruitment. This growth is facilitated by a steady supply of immature neurons, which are highly excitable and plastic. We suggest that neuron maturation dynamics are an integral part of constructing local neural circuits.
Affiliation(s)
- Yevhen Tupikov: Departments of Physics and Huck Institutes of the Life Sciences, Pennsylvania State University, University Park, Pennsylvania, United States of America
- Dezhe Z. Jin: Departments of Physics and Huck Institutes of the Life Sciences, Pennsylvania State University, University Park, Pennsylvania, United States of America
10. Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. PMID: 33764970; PMCID: PMC8023498; DOI: 10.1371/journal.pcbi.1008866.
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
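A data-structure illustration of the hierarchy described above (motif contents and syntax are invented; this is not the spiking model): motifs are stored once and a separate syntax sequence indexes them, so relearning only the syntax reuses the stored motifs, which is one reason hierarchical relearning can be faster than relearning a flat sequence.

```python
# fast time scale: individual motifs (contents are assumed)
motifs = {
    "A": ["a1", "a2", "a3"],
    "B": ["b1", "b2"],
    "C": ["c1", "c2", "c3", "c4"],
}
# slow time scale: the syntax, i.e. the order in which motifs are chained
syntax = ["A", "B", "A", "C"]

def replay(syntax, motifs):
    """Expand the syntax through the motif dictionary to produce the full sequence."""
    for m in syntax:
        for element in motifs[m]:
            yield element

print(list(replay(syntax, motifs)))
# Relearning only the syntax (e.g. ["C", "A"]) leaves the stored motifs untouched.
print(list(replay(["C", "A"], motifs)))
```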
Affiliation(s)
- Amadeus Maes: Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona: Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath: Bioengineering Department, Imperial College London, London, United Kingdom
11. An avian cortical circuit for chunking tutor song syllables into simple vocal-motor units. Nat Commun 2020; 11:5029. PMID: 33024101; PMCID: PMC7538968; DOI: 10.1038/s41467-020-18732-x.
Abstract
How are brain circuits constructed to achieve complex goals? The brains of young songbirds develop motor circuits that achieve the goal of imitating a specific tutor song to which they are exposed. Here, we set out to examine how song-generating circuits may be influenced early in song learning by a cortical region (NIf) at the interface between auditory and motor systems. Single-unit recordings reveal that, during juvenile babbling, NIf neurons burst at syllable onsets, with some neurons exhibiting selectivity for particular emerging syllable types. When juvenile birds listen to their tutor, NIf neurons are also activated at tutor syllable onsets, and are often selective for particular syllable types. We examine a simple computational model in which tutor exposure imprints the correct number of syllable patterns as ensembles in an interconnected NIf network. These ensembles are then reactivated during singing to train a set of syllable sequences in the motor network. Young songbirds learn to imitate their parents’ songs. Here, the authors find that, in baby birds, neurons in a brain region at the interface of auditory and motor circuits signal the onsets of song syllables during both tutoring and babbling, suggesting a specific neural mechanism for vocal imitation.
12. Cabessa J, Tchaptchet A. Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings. Neural Netw 2020; 126:312-334. PMID: 32278841; DOI: 10.1016/j.neunet.2020.03.019.
Abstract
Synfire rings are neural circuits capable of conveying synchronous, temporally precise and self-sustained activities in a robust manner. We propose a cell assembly based paradigm for abstract neural computation centered on the concept of synfire rings. More precisely, we empirically show that Hodgkin-Huxley neural networks modularly composed of synfire rings are automata complete. We provide an algorithmic construction which, starting from any given finite state automaton, builds a corresponding Hodgkin-Huxley neural network modularly composed of synfire rings and capable of simulating it. We illustrate the correctness of the construction on two specific examples. We further analyze the stability and robustness of the construction as a function of changes in the ring topologies as well as with respect to cell death and synaptic failure mechanisms, respectively. These results establish the possibility of achieving abstract computation with bio-inspired neural networks. They might constitute a theoretical ground for the realization of biological neural computers.
Affiliation(s)
- Jérémie Cabessa: Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), Université Paris 2, Panthéon-Assas, 75005 Paris, France; Institute of Computer Science of the Czech Academy of Sciences, P. O. Box 5, 18207 Prague 8, Czech Republic
- Aubin Tchaptchet: Institute of Physiology, Philipps University of Marburg, 35037 Marburg, Germany
13. Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. PMID: 32009924; PMCID: PMC6978734; DOI: 10.3389/fncom.2019.00097.
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be “learned” by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism is necessary. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slow changing stimuli lead to PA, while fast changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
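A small sketch of the core intuition (toy stimulus statistics; not the paper's bifurcation analysis): with a temporally asymmetric Hebbian rule, slowly changing stimuli mostly pair a pattern with itself and build a largely symmetric, autoassociative connectivity component that favors persistent activity, whereas rapidly changing stimuli build a relatively larger asymmetric component that favors sequential activity.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 200, 10
patterns = (rng.random((P, N)) < 0.1).astype(float)   # sparse binary stimulus patterns (assumed)

def sym_asym_norms(dwell):
    """Present the P patterns in order, each for `dwell` steps, learn with the asymmetric
    rule W += xi(t+1) xi(t)^T, and split W into symmetric and antisymmetric parts."""
    seq = np.repeat(np.arange(P), dwell)
    W = np.zeros((N, N))
    for t in range(len(seq) - 1):
        W += np.outer(patterns[seq[t + 1]], patterns[seq[t]])
    W /= len(seq)
    sym = 0.5 * (W + W.T)
    asym = 0.5 * (W - W.T)
    return np.linalg.norm(sym), np.linalg.norm(asym)

for dwell in (50, 2):
    s, a = sym_asym_norms(dwell)
    print(f"dwell = {dwell:2d} steps: ||W_sym|| = {s:.3f}, ||W_asym|| = {a:.3f}, ratio = {a / s:.2f}")
```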
Affiliation(s)
- Ulises Pereira: Department of Statistics, The University of Chicago, Chicago, IL, United States
- Nicolas Brunel: Department of Statistics, The University of Chicago, Chicago, IL, United States; Department of Neurobiology, The University of Chicago, Chicago, IL, United States; Department of Neurobiology, Duke University, Durham, NC, United States; Department of Physics, Duke University, Durham, NC, United States
14. Cabessa J. Turing complete neural computation based on synaptic plasticity. PLoS One 2019; 14:e0223451. PMID: 31618230; PMCID: PMC6795493; DOI: 10.1371/journal.pone.0223451.
Abstract
In neural computation, the essential information is generally encoded into the neurons via their spiking configurations, activation values or (attractor) dynamics. The synapses and their associated plasticity mechanisms are, by contrast, mainly used to process this information and implement the crucial learning features. Here, we propose a novel Turing complete paradigm of neural computation where the essential information is encoded into discrete synaptic states, and the updating of this information achieved via synaptic plasticity mechanisms. More specifically, we prove that any 2-counter machine—and hence any Turing machine—can be simulated by a rational-weighted recurrent neural network employing spike-timing-dependent plasticity (STDP) rules. The computational states and counter values of the machine are encoded into discrete synaptic strengths. The transitions between those synaptic weights are then achieved via STDP. These considerations show that a Turing complete synaptic-based paradigm of neural computation is theoretically possible and potentially exploitable. They support the idea that synapses are not only crucially involved in information processing and learning features, but also in the encoding of essential information. This approach represents a paradigm shift in the field of neural computation.
Affiliation(s)
- Jérémie Cabessa: Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), University Paris 2 – Panthéon-Assas, 75005 Paris, France; Institute of Computer Science, Czech Academy of Sciences, 18207 Prague 8, Czech Republic
15. Adler A, Zhao R, Shin ME, Yasuda R, Gan WB. Somatostatin-Expressing Interneurons Enable and Maintain Learning-Dependent Sequential Activation of Pyramidal Neurons. Neuron 2019; 102:202-216.e7. PMID: 30792151; DOI: 10.1016/j.neuron.2019.01.036.
Abstract
The activities of neuronal populations exhibit temporal sequences that are thought to mediate spatial navigation, cognitive processing, and motor actions. The mechanisms underlying the generation and maintenance of sequential neuronal activity remain unclear. We found that layer 2 and/or 3 pyramidal neurons (PNs) showed sequential activation in the mouse primary motor cortex during motor skill learning. Concomitantly, the activity of somatostatin (SST)-expressing interneurons increased and decreased in a task-specific manner. Activating SST interneurons during motor training, either directly or via inhibiting vasoactive-intestinal-peptide-expressing interneurons, prevented learning-induced sequential activities of PNs and behavioral improvement. Conversely, inactivating SST interneurons during the learning of a new motor task reversed sequential activities and behavioral improvement that occurred during a previous task. Furthermore, the control of SST interneurons over sequential activation of PNs required CaMKII-dependent synaptic plasticity. These findings indicate that SST interneurons enable and maintain synaptic plasticity-dependent sequential activation of PNs during motor skill learning.
Affiliation(s)
- Avital Adler: Skirball Institute, Department of Neuroscience and Physiology, Department of Anesthesiology, New York University School of Medicine, New York, NY 10016, USA
- Ruohe Zhao: Skirball Institute, Department of Neuroscience and Physiology, Department of Anesthesiology, New York University School of Medicine, New York, NY 10016, USA
- Myung Eun Shin: Max Planck Florida Institute of Neuroscience, Jupiter, FL 33458, USA
- Ryohei Yasuda: Max Planck Florida Institute of Neuroscience, Jupiter, FL 33458, USA; Department of Neurobiology, Duke University Medical Center, Durham, NC 27710, USA
- Wen-Biao Gan: Skirball Institute, Department of Neuroscience and Physiology, Department of Anesthesiology, New York University School of Medicine, New York, NY 10016, USA
16. Matheus Gauy M, Lengler J, Einarsson H, Meier F, Weissenberger F, Yanik MF, Steger A. A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences. Front Neurosci 2018; 12:961. PMID: 30618583; PMCID: PMC6306028; DOI: 10.3389/fnins.2018.00961.
Abstract
The hippocampus is known to play a crucial role in the formation of long-term memory. For this, fast replays of previously experienced activities during sleep or after reward experiences are believed to be crucial. But how such replays are generated is still completely unclear. In this paper we propose a possible mechanism for this: we present a model that can store experienced trajectories on a behavioral timescale after a single run, and can subsequently bidirectionally replay such trajectories, thereby omitting specifics of the previous behavior, such as speed, while allowing repetitions of events, even with different subsequent events. Our solution builds on well-known concepts, one-shot learning and synfire chains, enhancing them with additional mechanisms using global inhibition and disinhibition. For replays our approach relies on dendritic spikes and cholinergic modulation, as supported by experimental data. We also hypothesize a functional role of disinhibition as a pacemaker during behavioral time.
Affiliation(s)
- Marcelo Matheus Gauy: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Johannes Lengler: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Hafsteinn Einarsson: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Florian Meier: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Felix Weissenberger: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Mehmet Fatih Yanik: Department of Information Technology and Electrical Engineering, Institute for Neuroinformatics, ETH Zurich, Zurich, Switzerland
- Angelika Steger: Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
17. Zhao J, Qin YM, Che YQ. Effects of topologies on signal propagation in feedforward networks. Chaos 2018; 28:013117. PMID: 29390642; DOI: 10.1063/1.4999996.
Abstract
We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with same number of both in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.
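A minimal FitzHugh-Nagumo building block for this kind of simulation (standard textbook equations with assumed parameters and a crude all-to-all coupling; the paper's specific degree distributions are not reproduced): a current pulse drives layer 1, and layer 1's suprathreshold activity supplies the input current to layer 2, giving layer-to-layer signal propagation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, T, dt = 20, 6000, 0.05
a, b, eps, g = 0.7, 0.8, 0.08, 0.08          # standard FHN parameters, assumed coupling gain g

def step(v, w, I):
    dv = v - v**3 / 3 - w + I
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

v1, w1 = rng.uniform(-1.5, 1.0, n), rng.uniform(-0.5, 0.5, n)
v2, w2 = rng.uniform(-1.5, 1.0, n), rng.uniform(-0.5, 0.5, n)
active2 = []

for t in range(T):
    I1 = 0.6 if 1000 <= t < 3000 else 0.0    # external pulse to layer 1
    v1, w1 = step(v1, w1, I1)
    I2 = g * np.sum(v1 > 1.0)                # layer-1 suprathreshold units drive layer 2 (all-to-all)
    v2, w2 = step(v2, w2, I2)
    active2.append(int(np.sum(v2 > 1.0)))

print("max simultaneously active layer-2 units:", max(active2))
```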
Affiliation(s)
- Jia Zhao: Key Laboratory of Cognition and Personality (Ministry of Education) and Faculty of Psychology, Southwest University, Chongqing 400715, China
- Ying-Mei Qin: Tianjin Key Laboratory of Information Sensing and Intelligent Control, Tianjin University of Technology and Education, Tianjin 300222, China
- Yan-Qiu Che: Tianjin Key Laboratory of Information Sensing and Intelligent Control, Tianjin University of Technology and Education, Tianjin 300222, China
18. Mackevicius EL, Fee MS. Building a state space for song learning. Curr Opin Neurobiol 2017; 49:59-68. PMID: 29268193; DOI: 10.1016/j.conb.2017.12.001.
Abstract
The songbird system has shed light on how the brain produces precisely timed behavioral sequences, and how the brain implements reinforcement learning (RL). RL is a powerful strategy for learning what action to produce in each state, but requires a unique representation of the states involved in the task. Songbird RL circuitry is thought to operate using a representation of each moment within song syllables, consistent with the sparse sequential bursting of neurons in premotor cortical nucleus HVC. However, such sparse sequences are not present in very young birds, which sing highly variable syllables of random lengths. Here, we review and expand upon a model for how the songbird brain could construct latent sequences to support RL, in light of new data elucidating connections between HVC and auditory cortical areas. We hypothesize that learning occurs via four distinct plasticity processes: 1) formation of 'tutor memory' sequences in auditory areas; 2) formation of appropriately-timed latent HVC sequences, seeded by inputs from auditory areas spontaneously replaying the tutor song; 3) strengthening, during spontaneous replay, of connections from HVC to auditory neurons of corresponding timing in the 'tutor memory' sequence, aligning auditory and motor representations for subsequent song evaluation; and 4) strengthening of connections from premotor neurons to motor output neurons that produce the desired sounds, via well-described song RL circuitry.
Affiliation(s)
- Emily Lambert Mackevicius: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, 46-5133 Cambridge, MA, USA
- Michale Sean Fee: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, 46-5133 Cambridge, MA, USA
19. High frequency neurons determine effective connectivity in neuronal networks. Neuroimage 2017; 166:349-359. PMID: 29128543; DOI: 10.1016/j.neuroimage.2017.11.014.
Abstract
The emergence of flexible information channels in brain networks is a fundamental question in neuroscience. Understanding the mechanisms of dynamic routing of information would have far-reaching implications in a number of disciplines ranging from biology and medicine to information technologies and engineering. In this work, we show that the presence of a node firing at a higher frequency in a network with local connections, leads to reliable transmission of signals and establishes a preferential direction of information flow. Thus, by raising the firing rate a low degree node can behave as a functional hub, spreading its activity patterns polysynaptically in the network. Therefore, in an otherwise homogeneous and undirected network, firing rate is a tunable parameter that introduces directionality and enhances the reliability of signal transmission. The intrinsic firing rate across neuronal populations may thus determine preferred routes for signal transmission that can be easily controlled by changing the firing rate in specific nodes. We show that the results are generic and the same mechanism works in the networks with more complex topology.
20. Weissenberger F, Meier F, Lengler J, Einarsson H, Steger A. Long Synfire Chains Emerge by Spike-Timing Dependent Plasticity Modulated by Population Activity. Int J Neural Syst 2017; 27:1750044. DOI: 10.1142/s0129065717500447.
Abstract
Sequences of precisely timed neuronal activity are observed in many brain areas in various species. Synfire chains are a well-established model that can explain such sequences. However, it is unknown under which conditions synfire chains can develop in initially unstructured networks by self-organization. This work shows that with spike-timing dependent plasticity (STDP), modulated by global population activity, long synfire chains emerge in sparse random networks. The learning rule fosters neurons to participate multiple times in the chain or in multiple chains. Such reuse of neurons has been experimentally observed and is necessary for high capacity. Sparse networks prevent the chains from being short and cyclic and show that the formation of specific synapses is not essential for chain formation. Analysis of the learning rule in a simple network of binary threshold neurons reveals the asymptotically optimal length of the emerging chains. The theoretical results generalize to simulated networks of conductance-based leaky integrate-and-fire (LIF) neurons. As an application of the emerged chain, we propose a one-shot memory for sequences of precisely timed neuronal activity.
Affiliation(s)
- Felix Weissenberger: Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland
- Florian Meier: Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland
- Johannes Lengler: Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland
- Hafsteinn Einarsson: Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland
- Angelika Steger: Department of Computer Science, ETH Zürich, Universitätsstrasse 6, 8092 Zürich, Switzerland
21. Sripad A, Sanchez G, Zapata M, Pirrone V, Dorta T, Cambria S, Marti A, Krishnamourthy K, Madrenas J. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture. Neural Netw 2017; 97:28-45. PMID: 29054036; DOI: 10.1016/j.neunet.2017.09.011.
Abstract
The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model, and to monitor the SNN's activity. Our contribution is a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures and significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities.
Affiliation(s)
- Athul Sripad: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Giovanny Sanchez: Instituto Politecnico Nacional, ESIME Culhuacan, Av. Santa Ana N 1000, Coyoacan, 04260, Distrito Federal, Mexico
- Mireya Zapata: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Vito Pirrone: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Taho Dorta: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Salvatore Cambria: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Albert Marti: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Karthikeyan Krishnamourthy: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
- Jordi Madrenas: Dept. of Electronics Engineering, Universitat Politècnica de Catalunya, Jordi Girona, 1-3, edif. C4, 08034 Barcelona, Catalunya, Spain
22. Picardo MA, Merel J, Katlowitz KA, Vallentin D, Okobi DE, Benezra SE, Clary RC, Pnevmatikakis EA, Paninski L, Long MA. Population-Level Representation of a Temporal Sequence Underlying Song Production in the Zebra Finch. Neuron 2016; 90:866-76. PMID: 27196976; DOI: 10.1016/j.neuron.2016.02.016.
Abstract
The zebra finch brain features a set of clearly defined and hierarchically arranged motor nuclei that are selectively responsible for producing singing behavior. One of these regions, a critical forebrain structure called HVC, contains premotor neurons that are active at precise time points during song production. However, the neural representation of this behavior at a population level remains elusive. We used two-photon microscopy to monitor ensemble activity during singing, integrating across multiple trials by adopting a Bayesian inference approach to more precisely estimate burst timing. Additionally, we examined spiking and motor-related synaptic inputs using intracellular recordings during singing. With both experimental approaches, we find that premotor events do not occur preferentially at the onsets or offsets of song syllables or at specific subsyllabic motor landmarks. These results strongly support the notion that HVC projection neurons collectively exhibit a temporal sequence during singing that is uncoupled from ongoing movements.
Affiliation(s)
- Michel A Picardo: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Josh Merel: Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA
- Kalman A Katlowitz: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Daniela Vallentin: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Daniel E Okobi: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Sam E Benezra: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Rachel C Clary: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Eftychios A Pnevmatikakis: Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Simons Center for Data Analysis, Simons Foundation, New York, NY 10010, USA
- Liam Paninski: Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA
- Michael A Long: New York University Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
23. Bi Z, Zhou C. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs. Front Comput Neurosci 2016; 10:14. PMID: 26941634; PMCID: PMC4763167; DOI: 10.3389/fncom.2016.00014.
Abstract
In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns, and investigate their influence onto the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. We anticipate our work important for understanding functional processes of neuronal networks (such as memory) and neural development.
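A sketch of pair-based additive STDP in a converging motif with a homeostatic bound on the mean incoming weight (the kernel parameters, input-correlation scheme, and crude threshold neuron are all assumptions; this is not the paper's analysis): making the inputs more synchronous increases how far individual efficacies spread apart, the kind of efficacy variability the study quantifies.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(c, n_pre=50, T=100_000, dt=1.0, rate=10.0,
             A_plus=0.005, A_minus=0.00525, tau=20.0):
    """Converging motif: n_pre Poisson inputs with pairwise correlation level c drive one
    postsynaptic unit (simplified here to fire whenever enough weighted inputs coincide)."""
    p = rate * dt / 1000.0
    w = np.full(n_pre, 0.5)
    x_pre = np.zeros(n_pre)                               # presynaptic traces
    x_post = 0.0                                          # postsynaptic trace
    for _ in range(T):
        common = rng.random() < p                         # shared event injecting synchrony
        pre = (rng.random(n_pre) < p) | (common & (rng.random(n_pre) < c))
        post = (w * pre).sum() > 2.0                      # crude threshold neuron
        x_pre = x_pre * np.exp(-dt / tau) + pre
        x_post = x_post * np.exp(-dt / tau) + post
        w += A_plus * x_pre * post - A_minus * x_post * pre   # additive pair-based STDP
        w *= 0.5 * n_pre / max(w.sum(), 1e-9)             # homeostasis: fixed mean incoming weight
        w = np.clip(w, 0.0, 1.0)
    return w

for c in (0.0, 0.3):
    w = simulate(c)
    print(f"input synchrony level {c}: SD of weights across synapses = {w.std():.3f}")
```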
Affiliation(s)
- Zedong Bi: State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China; Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Changsong Zhou: Department of Physics, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems, Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Beijing Computational Science Research Center, Beijing, China; Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
24. Growth and splitting of neural sequences in songbird vocal development. Nature 2015; 528:352-7. PMID: 26618871; PMCID: PMC4957523; DOI: 10.1038/nature15741.
Abstract
Neural sequences are a fundamental feature of brain dynamics underlying diverse behaviors, but the mechanisms by which they develop during learning remain unknown. Songbirds learn vocalizations composed of syllables; in adult birds, each syllable is produced by a different sequence of action potential bursts in the premotor cortical area HVC. Here we carried out recordings of large populations of HVC neurons in singing juvenile birds throughout learning to examine the emergence of neural sequences. Early in vocal development, HVC neurons begin producing rhythmic bursts, temporally locked to a ‘prototype’ syllable. Different neurons are active at different latencies relative to syllable onset to form a continuous sequence. Through development, as new syllables emerge from the prototype syllable, initially highly overlapping burst sequences become increasingly distinct. We propose a mechanistic model in which multiple neural sequences can emerge from the growth and splitting of a common precursor sequence.
25. Fonollosa J, Neftci E, Rabinovich M. Learning of Chunking Sequences in Cognition and Behavior. PLoS Comput Biol 2015; 11:e1004592. PMID: 26584306; PMCID: PMC4652905; DOI: 10.1371/journal.pcbi.1004592.
Abstract
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remains unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes to each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activities share several features observed in behavioral experiments, such as the pauses between boundaries of chunks, their size and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson’s disease and Schizophrenia. Because chunking is a hallmark of the brain’s organization, efforts to understand its dynamics can provide valuable insights into the brain and its disorders. For identifying the dynamical principles of chunking learning, we hypothesize that perceptual sequences can be learned and stored as a chain of metastable fixed points in a low-dimensional dynamical system, similar to the trajectory of a ball rolling down a pinball machine. During a learning phase, the interactions in the network evolve such that the network learns a chunking representation of the sequence, as when memorizing a phone number in segments. In the example of the pinball machine, learning can be identified with the gradual placement of the pins. After learning, the pins are placed in a way that, at each run, the ball follows the same trajectory (recall of the same sequence) that encodes the perceptual sequence. Simulations show that the dynamics are endowed with the hallmarks of chunking observed in behavioral experiments, such as increased delays observed before loading new chunks.
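A minimal winnerless-competition sketch (a standard generalized Lotka-Volterra construction with hand-picked asymmetric inhibition; parameters are illustrative and not taken from the paper): activity passes through a chain of metastable states, one mode winning after another, which is the dynamical skeleton used here for sequential and chunked memory.

```python
import numpy as np

rng = np.random.default_rng(10)
K = 5                                          # number of competing modes
rho = np.full((K, K), 2.0)                     # strong mutual inhibition by default
np.fill_diagonal(rho, 1.0)
for i in range(K):
    rho[(i + 1) % K, i] = 0.5                  # the "next" mode is weakly inhibited, so it takes over

a = np.full(K, 1e-4)
a[0] = 0.5                                     # start near mode 0
dt = 0.01
order = []
for _ in range(80000):
    # generalized Lotka-Volterra dynamics plus a small positive noise floor
    a += dt * a * (1.0 - rho @ a) + dt * 1e-5 * np.abs(rng.standard_normal(K))
    a = np.clip(a, 1e-9, None)
    winner = int(np.argmax(a))
    if not order or order[-1] != winner:
        order.append(winner)

print("sequence of winning modes:", order[:12])
```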
Affiliation(s)
- Jordi Fonollosa
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America
- Institute for Bioengineering of Catalonia, Barcelona, Spain
- Emre Neftci
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, Irvine, California, United States of America
- Mikhail Rabinovich
- Biocircuits Institute, University of California, San Diego, La Jolla, California, United States of America

26
Veliz-Cuba A, Shouval HZ, Josić K, Kilpatrick ZP. Networks that learn the precise timing of event sequences. J Comput Neurosci 2015; 39:235-54. [PMID: 26334992 DOI: 10.1007/s10827-015-0574-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2015] [Revised: 08/06/2015] [Accepted: 08/10/2015] [Indexed: 11/28/2022]
Abstract
Neuronal circuits can learn and replay firing patterns evoked by sequences of sensory stimuli. After training, a brief cue can trigger a spatiotemporal pattern of neural activity similar to that evoked by a learned stimulus sequence. Network models show that such sequence learning can occur through the shaping of feedforward excitatory connectivity via long term plasticity. Previous models describe how event order can be learned, but they typically do not explain how precise timing can be recalled. We propose a mechanism for learning both the order and precise timing of event sequences. In our recurrent network model, long term plasticity leads to the learning of the sequence, while short term facilitation enables temporally precise replay of events. Learned synaptic weights between populations determine the time necessary for one population to activate another. Long term plasticity adjusts these weights so that the trained event times are matched during playback. While we chose short term facilitation as a time-tracking process, we also demonstrate that other mechanisms, such as spike rate adaptation, can fulfill this role. We also analyze the impact of trial-to-trial variability, showing how observational errors as well as neuronal noise result in variability in learned event times. The dynamics of the playback process determines how stochasticity is inherited in learned sequence timings. Future experiments that characterize such variability can therefore shed light on the neural mechanisms of sequence learning.
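As a toy illustration of the core idea (feedforward weights set the delay with which one population activates the next, and plasticity tunes those weights until replayed delays match the trained intervals), consider the sketch below; the inverse delay-weight relation, the learning rate, and the target intervals are assumptions made for the example, not the model equations of the paper.

import numpy as np

def delay(w, tau=50.0):
    return tau / w                     # assumed form: stronger weight -> faster activation

target = np.array([120.0, 80.0, 200.0])    # trained inter-event intervals (ms), illustrative
w = np.ones(3)                             # initial feedforward weights
eta = 0.05

for trial in range(300):
    replayed = delay(w)                    # delays produced during playback
    err = replayed - target                # positive where replay is too slow
    w += (eta / 50.0) * err * w            # error-correcting update: slow links get stronger

print(np.round(delay(w), 1), "ms vs. targets", target)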
Affiliation(s)
- Alan Veliz-Cuba
- Department of Mathematics, University of Houston, Houston, TX, 77204, USA.
- Harel Z Shouval
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, 77030, USA.
- Krešimir Josić
- Department of Mathematics, University of Houston, Houston, TX, 77204, USA.
- Department of Biology, University of Houston, Houston, TX, 77204, USA.

27
Bouchard KE, Ganguli S, Brainard MS. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences. Front Comput Neurosci 2015; 9:92. [PMID: 26257637 PMCID: PMC4508839 DOI: 10.3389/fncom.2015.00092] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 06/30/2015] [Indexed: 12/11/2022] Open
Abstract
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
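The central analytical result, that Hebbian learning with presynaptic competition drives weights toward the conditional forward transition probabilities of the input sequence, can be reproduced in a few lines; the sketch below uses a simple normalization step as the competition and a small three-state Markov chain as input, with all constants chosen for illustration only.

import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.1, 0.7, 0.2],
              [0.6, 0.1, 0.3],
              [0.3, 0.2, 0.5]])   # P[next, current]: forward transition probabilities
n = P.shape[0]

W = np.full((n, n), 1.0 / n)     # synapses from presynaptic state j to postsynaptic state i
eta = 0.005
state = 0
for t in range(200_000):
    nxt = rng.choice(n, p=P[:, state])
    W[nxt, state] += eta              # Hebbian: pre active at t, post active at t + 1
    W[:, state] /= W[:, state].sum()  # presynaptic competition (outgoing weights normalized)
    state = nxt

print(np.round(W, 2))  # columns approximate the forward transition probabilities in P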
Affiliation(s)
- Kristofer E Bouchard
- Life Sciences and Computational Research Divisions, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Michael S Brainard
- Department of Physiology, University of California, San Francisco, and Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA

28
Fauth M, Wörgötter F, Tetzlaff C. The formation of multi-synaptic connections by the interaction of synaptic and structural plasticity and their functional consequences. PLoS Comput Biol 2015; 11:e1004031. [PMID: 25590330 PMCID: PMC4295841 DOI: 10.1371/journal.pcbi.1004031] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2014] [Accepted: 11/06/2014] [Indexed: 11/19/2022] Open
Abstract
Cortical connectivity emerges from the permanent interaction between neuronal activity and synaptic as well as structural plasticity. An important experimentally observed feature of this connectivity is the distribution of the number of synapses from one neuron to another, which has been measured in several cortical layers. All of these distributions are bimodal with one peak at zero and a second one at a small number (3–8) of synapses. In this study, using a probabilistic model of structural plasticity, which depends on the synaptic weights, we explore how these distributions can emerge and which functional consequences they have. We find that bimodal distributions arise generically from the interaction of structural plasticity with synaptic plasticity rules that fulfill the following biologically realistic constraints: First, the synaptic weights have to grow with the postsynaptic activity. Second, this growth curve and/or the input-output relation of the postsynaptic neuron have to change sub-linearly (negative curvature). As most neurons show such input-output relations, these constraints can be fulfilled by many biologically reasonable systems. Given such a system, we show that the different activities, which can explain the layer-specific distributions, correspond to experimentally observed activities. Considering these activities as the working point of the system and varying the pre- or postsynaptic stimulation reveals a hysteresis in the number of synapses. As a consequence of this, the connectivity between two neurons can be controlled by activity but is also safeguarded against overly fast changes. These results indicate that the complex dynamics between activity and plasticity will, already between a pair of neurons, induce a variety of possible stable synaptic distributions, which could support memory mechanisms. The connectivity between neurons is modified by different mechanisms. On a time scale of minutes to hours one finds synaptic plasticity, whereas mechanisms for structural changes at axons or dendrites may take days. One main factor determining structural changes is the weight of a connection, which, in turn, is adapted by synaptic plasticity. Both mechanisms, synaptic and structural plasticity, are influenced and determined by the activity pattern in the network. Hence, it is important to understand how activity and the different plasticity mechanisms influence each other. In particular, how activity influences rewiring in adult networks is still an open question. We present a model that captures these complex interactions by abstracting structural plasticity with weight-dependent probabilities. This allows for calculating the distribution of the number of synapses between two neurons analytically. We report that biologically realistic connection patterns for different cortical layers generically arise with synaptic plasticity rules in which the synaptic weights grow with postsynaptic activity. The connectivity patterns also lead to different activity levels resembling those found in the different cortical layers. Interestingly, such a system exhibits a hysteresis by which connections remain stable longer than expected, which may add to the stability of information storage in the network.
Affiliation(s)
- Michael Fauth
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Florentin Wörgötter
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Christian Tetzlaff
- Georg-August University Göttingen, Third Institute of Physics, Bernstein Center for Computational Neuroscience, Göttingen, Germany

29
Jahnke S, Memmesheimer RM, Timme M. Oscillation-induced signal transmission and gating in neural circuits. PLoS Comput Biol 2014; 10:e1003940. [PMID: 25503492 PMCID: PMC4263355 DOI: 10.1371/journal.pcbi.1003940] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2014] [Accepted: 09/26/2014] [Indexed: 11/19/2022] Open
Abstract
Reliable signal transmission constitutes a key requirement for neural circuit function. The propagation of synchronous pulse packets through recurrent circuits is hypothesized to be one robust form of signal transmission and has been extensively studied in computational and theoretical works. Yet, although externally or internally generated oscillations are ubiquitous across neural systems, their influence on such signal propagation is unclear. Here we systematically investigate the impact of oscillations on propagating synchrony. We find that for standard, additive couplings and a net excitatory effect of oscillations, robust propagation of synchrony is enabled in less prominent feed-forward structures than in systems without oscillations. In the presence of non-additive coupling (as mediated by fast dendritic spikes), even balanced oscillatory inputs may enable robust propagation. Here, emerging resonances create complex locking patterns between oscillations and spike synchrony. Interestingly, these resonances make the circuits capable of selecting specific pathways for signal transmission. Oscillations may thus promote reliable transmission and, in co-action with dendritic nonlinearities, provide a mechanism for information processing by selectively gating and routing signals. Our results are of particular interest for the interpretation of sharp wave/ripple complexes in the hippocampus, where previously learned spike patterns are replayed in conjunction with global high-frequency oscillations. We suggest that the oscillations may serve to stabilize the replay. Rhythmic activity in the brain is ubiquitous, but its functions are debated. Here we show that it may contribute to the reliable transmission of information within brain areas. We find that its effect is particularly strong if we take nonlinear coupling into account. This experimentally found neuronal property implies that inputs which arrive nearly simultaneously can have a much stronger impact than expected from the sum of their individual strengths. In such systems, rhythmic activity supports information transmission even if its positive and negative parts exactly cancel at all times. Further, the information transmission can adapt to the oscillation frequency to optimally benefit from it. Finally, we show that rhythms with different frequencies may enable or disable communication channels, and are thus suitable for the steering of information flow.
Affiliation(s)
- Sven Jahnke
- Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), Göttingen, Germany
- Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany
- Institute for Nonlinear Dynamics, Fakultät für Physik, Georg-August Universität Göttingen, Göttingen, Germany
- Marc Timme
- Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), Göttingen, Germany
- Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany
- Institute for Nonlinear Dynamics, Fakultät für Physik, Georg-August Universität Göttingen, Göttingen, Germany

30
James LS, Sakata JT. Vocal motor changes beyond the sensitive period for song plasticity. J Neurophysiol 2014; 112:2040-52. [PMID: 25057147 DOI: 10.1152/jn.00217.2014] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Behavior is critically shaped during sensitive periods in development. Birdsong is a learned vocal behavior that undergoes dramatic plasticity during a sensitive period of sensorimotor learning. During this period, juvenile songbirds engage in vocal practice to shape their vocalizations into relatively stereotyped songs. By the time songbirds reach adulthood, their songs are relatively stable and thought to be "crystallized." Recent studies, however, highlight the potential for adult song plasticity and suggest that adult song could naturally change over time. As such, we investigated the degree to which temporal and spectral features of song changed over time in adult Bengalese finches. We observed that the sequencing and timing of song syllables became more stereotyped over time. Increases in the stereotypy of syllable sequencing were due to the pruning of infrequently produced transitions and, to a lesser extent, increases in the prevalence of frequently produced transitions. Changes in song tempo were driven by decreases in the duration and variability of intersyllable gaps. In contrast to significant changes to temporal song features, we found little evidence that the spectral structure of adult song syllables changed over time. These data highlight differences in the degree to which temporal and spectral features of adult song change over time and support evidence for distinct mechanisms underlying the control of syllable sequencing, timing, and structure. Furthermore, the observed changes to temporal song features are consistent with a Hebbian framework of behavioral plasticity and support the notion that adult song should be considered a form of vocal practice.
Affiliation(s)
- Logan S James
- Department of Biology, McGill University, Montreal, Quebec, Canada
- Jon T Sakata
- Department of Biology, McGill University, Montreal, Quebec, Canada

31
Zheng P, Triesch J. Robust development of synfire chains from multiple plasticity mechanisms. Front Comput Neurosci 2014; 8:66. [PMID: 25071537 PMCID: PMC4074894 DOI: 10.3389/fncom.2014.00066] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2014] [Accepted: 06/02/2014] [Indexed: 11/13/2022] Open
Abstract
Biological neural networks are shaped by a large number of plasticity mechanisms operating at different time scales. How these mechanisms work together to sculpt such networks into effective information processing circuits is still poorly understood. Here we study the spontaneous development of synfire chains in a self-organizing recurrent neural network (SORN) model that combines a number of different plasticity mechanisms including spike-timing-dependent plasticity, structural plasticity, as well as homeostatic forms of plasticity. We find that the network develops an abundance of feed-forward motifs giving rise to synfire chains. The chains develop into ring-like structures, which we refer to as "synfire rings." These rings emerge spontaneously in the SORN network and allow for stable propagation of activity on a fast time scale. A single network can contain multiple non-overlapping rings suppressing each other. On a slower time scale activity switches from one synfire ring to another maintaining firing rate homeostasis. Overall, our results show how the interaction of multiple plasticity mechanisms might give rise to the robust formation of synfire chains in biological neural networks.
Affiliation(s)
- Pengsheng Zheng
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany

32
Miller A, Jin DZ. Potentiation decay of synapses and length distributions of synfire chains self-organized in recurrent neural networks. Phys Rev E Stat Nonlin Soft Matter Phys 2013; 88:062716. [PMID: 24483495 DOI: 10.1103/physreve.88.062716] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2013] [Indexed: 06/03/2023]
Abstract
Synfire chains are thought to underlie precisely timed sequences of spikes observed in various brain regions and across species. How they are formed is not understood. Here we analyze self-organization of synfire chains through the spike-timing dependent plasticity (STDP) of the synapses, axon remodeling, and potentiation decay of synaptic weights in networks of neurons driven by noisy external inputs and subject to dominant feedback inhibition. Potentiation decay is the gradual, activity-independent reduction of synaptic weights over time. We show that potentiation decay enables a dynamic and statistically stable network connectivity when neurons spike spontaneously. Periodic stimulation of a subset of neurons leads to the formation of synfire chains through a random recruitment process, which terminates when the chain connects to itself and forms a loop. We demonstrate that chain length distributions depend on the potentiation decay. Fast potentiation decay leads to long chains with wide distributions, while slow potentiation decay leads to short chains with narrow distributions. We suggest that the potentiation decay, which corresponds to the decay of early long-term potentiation of synapses, is an important synaptic plasticity rule in regulating the formation of neural circuitry through STDP.
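A minimal sketch of the two ingredients combined in this model, pair-based STDP and a slow, activity-independent decay of potentiation toward a baseline weight, is given below; the window shape, amplitudes, and time constants are generic textbook choices and assumptions, not the parameters used in the paper.

import numpy as np

A_plus, A_minus = 0.02, 0.021       # STDP amplitudes (illustrative)
tau_stdp = 20.0                     # ms, STDP window
w_base, tau_decay = 0.1, 50_000.0   # baseline weight and slow potentiation-decay constant

def stdp(dt_spike):
    """Pair-based STDP; dt_spike = t_post - t_pre in ms."""
    if dt_spike >= 0:
        return A_plus * np.exp(-dt_spike / tau_stdp)
    return -A_minus * np.exp(dt_spike / tau_stdp)

def potentiation_decay(w, dt):
    """Activity-independent relaxation of the weight toward baseline over time dt (ms)."""
    return w_base + (w - w_base) * np.exp(-dt / tau_decay)

w = 0.5
w = potentiation_decay(w, 10_000.0)      # 10 s without pairings erodes potentiation
w = np.clip(w + stdp(+5.0), 0.0, 1.0)    # a causal pre->post pairing re-potentiates the synapse
print(round(float(w), 3))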
Affiliation(s)
- Aaron Miller
- Department of Physics, Bridgewater College, Bridgewater, Virginia 22812, USA
- Dezhe Z Jin
- Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA

33
Palmer JHC, Gong P. Formation and regulation of dynamic patterns in two-dimensional spiking neural circuits with spike-timing-dependent plasticity. Neural Comput 2013; 25:2833-57. [PMID: 24001345 DOI: 10.1162/neco_a_00511] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Spike-timing-dependent plasticity (STDP) is an important synaptic dynamics that is capable of shaping the complex spatiotemporal activity of neural circuits. In this study, we examine the effects of STDP on the spatiotemporal patterns of a spatially extended, two-dimensional spiking neural circuit. We show that STDP can promote the formation of multiple, localized spiking wave patterns or multiple spike timing sequences in a broad parameter space of the neural circuit. Furthermore, we illustrate that the formation of these dynamic patterns is due to the interaction between the dynamics of ongoing patterns in the neural circuit and STDP. This interaction is analyzed by developing a simple model able to capture its essential dynamics, which give rise to symmetry breaking. This occurs in a fundamentally self-organizing manner, without fine-tuning of the system parameters. Moreover, we find that STDP provides a synaptic mechanism to learn the paths taken by spiking waves and modulate the dynamics of their interactions, enabling them to be regulated. This regulation mechanism has error-correcting properties. Our results therefore highlight the important roles played by STDP in facilitating the formation and regulation of spiking wave patterns that may have crucial functional roles in brain information processing.
Affiliation(s)
- John H C Palmer
- School of Physics, University of Sydney, Sydney 2006, NSW, Australia

34
Amin N, Gastpar M, Theunissen FE. Selective and efficient neural coding of communication signals depends on early acoustic and social environment. PLoS One 2013; 8:e61417. [PMID: 23630587 PMCID: PMC3632581 DOI: 10.1371/journal.pone.0061417] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2013] [Accepted: 03/13/2013] [Indexed: 11/18/2022] Open
Abstract
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications of these changes for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations and for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that a layer-specific differential development of the auditory cortex requires patterned acoustic input, and a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment.
Affiliation(s)
- Noopur Amin
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, United States of America
- Michael Gastpar
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, United States of America
- Psychology Department, University of California, Berkeley, California, United States of America

35
Glaze CM, Troyer TW. Development of temporal structure in zebra finch song. J Neurophysiol 2012; 109:1025-35. [PMID: 23175805 DOI: 10.1152/jn.00578.2012] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Zebra finch song has provided an excellent case study in the neural basis of sequence learning, with a high degree of temporal precision and tight links with precisely timed bursting in forebrain neurons. To examine the development of song timing, we measured the following four aspects of song temporal structure at four age ranges between 65 and 375 days posthatch: the mean durations of song syllables and the silent gaps between them, timing variability linked to song tempo, timing variability expressed independently across syllables and gaps, and transition probabilities between consecutive syllable pairs. We found substantial increases in song tempo between 65 and 85 days posthatch, due almost entirely to a shortening of gaps. We also found a decrease in tempo variability, also specific to gaps. Both the magnitude of the increase in tempo and the decrease in tempo variability were correlated on a gap-by-gap basis with increases in the reliability of the corresponding syllable transitions. Syllables showed no systematic increase in tempo or decrease in tempo variability. In contrast to tempo parameters, both syllables and gaps showed an early sharp reduction in independent variability followed by continued reductions over the first year. The data suggest that links between syllable-based representations are strengthened during the later parts of the traditional period of song learning and that song rhythm continues to become more regular throughout the first year of life. Similar learning patterns have been identified in human sequence learning, suggesting a potentially rich area of comparative research.
Affiliation(s)
- Christopher M Glaze
- Program in Neuroscience and Cognitive Science, Department of Psychology, University of Maryland, College Park, Maryland, USA.

36
Waddington A, Appleby PA, De Kamps M, Cohen N. Triphasic spike-timing-dependent plasticity organizes networks to produce robust sequences of neural activity. Front Comput Neurosci 2012; 6:88. [PMID: 23162457 PMCID: PMC3495293 DOI: 10.3389/fncom.2012.00088] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2012] [Accepted: 10/05/2012] [Indexed: 11/13/2022] Open
Abstract
Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure.
37
Vincent K, Tauskela JS, Thivierge JP. Extracting functionally feedforward networks from a population of spiking neurons. Front Comput Neurosci 2012; 6:86. [PMID: 23091458 PMCID: PMC3476068 DOI: 10.3389/fncom.2012.00086] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2012] [Accepted: 10/03/2012] [Indexed: 11/02/2022] Open
Abstract
Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits.
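The Schur-decomposition step can be sketched directly: rotate the estimated connectivity matrix into its (real) Schur form and compare the mass in the strictly upper-triangular, feedforward part against the whole. The normalization below is one plausible choice for such an FFC index, not necessarily the exact measure used in the paper.

import numpy as np
from scipy.linalg import schur

def ffc_strength(W):
    """Feedforward strength of a directed connectivity matrix W = Z T Z^T (real Schur form):
    fraction of the matrix norm carried by the strictly upper-triangular part of T."""
    T, Z = schur(W, output='real')
    return np.linalg.norm(np.triu(T, k=1)) / np.linalg.norm(T)

rng = np.random.default_rng(0)
chain = np.diag(np.ones(19), k=1)             # a strictly feedforward chain of 20 nodes
random_net = 0.3 * rng.standard_normal((20, 20))
print(round(ffc_strength(chain), 2), round(ffc_strength(random_net), 2))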
38
High-capacity embedding of synfire chains in a cortical network model. J Comput Neurosci 2012; 34:185-209. [PMID: 22878688 PMCID: PMC3605496 DOI: 10.1007/s10827-012-0413-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2011] [Revised: 04/18/2012] [Accepted: 07/02/2012] [Indexed: 10/28/2022]
Abstract
Synfire chains, sequences of pools linked by feedforward connections, support the propagation of precisely timed spike sequences, or synfire waves. An important question remains, how synfire chains can efficiently be embedded in cortical architecture. We present a model of synfire chain embedding in a cortical scale recurrent network using conductance-based synapses, balanced chains, and variable transmission delays. The network attains substantially higher embedding capacities than previous spiking neuron models and allows all its connections to be used for embedding. The number of waves in the model is regulated by recurrent background noise. We computationally explore the embedding capacity limit, and use a mean field analysis to describe the equilibrium state. Simulations confirm the mean field analysis over broad ranges of pool sizes and connectivity levels; the number of pools embedded in the system trades off against the firing rate and the number of waves. An optimal inhibition level balances the conflicting requirements of stable synfire propagation and limited response to background noise. A simplified analysis shows that the present conductance-based synapses achieve higher contrast between the responses to synfire input and background noise compared to current-based synapses, while regulation of wave numbers is traced to the use of variable transmission delays.
39
Bamford SA, Murray AF, Willshaw DJ. Spike-timing-dependent plasticity with weight dependence evoked from physical constraints. IEEE Trans Biomed Circuits Syst 2012; 6:385-398. [PMID: 23853183 DOI: 10.1109/tbcas.2012.2184285] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Analogue and mixed-signal VLSI implementations of Spike-Timing-Dependent Plasticity (STDP) are reviewed. A circuit is presented with a compact implementation of STDP suitable for parallel integration in large synaptic arrays. In contrast to previously published circuits, it uses the limitations of the silicon substrate to achieve various forms and degrees of weight dependence of STDP. It also uses reverse-biased transistors to reduce leakage from a capacitance representing weight. Chip results are presented showing: various ways in which the learning rule may be shaped; how synaptic weights may retain some indication of their learned values over periods of minutes; and how distributions of weights for synapses convergent on single neurons may shift between more or less extreme bimodality according to the strength of correlational cues in their inputs.
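For context, the kind of weight dependence the circuit is designed to realize can be written as the standard interpolation between additive and multiplicative STDP; the sketch below uses that generic rule family with assumed constants and is not a model of the chip itself.

import numpy as np

def dw(w, dt_spike, mu=0.5, A_plus=0.01, A_minus=0.0105, tau=20.0, w_max=1.0):
    """Weight-dependent STDP: mu = 0 gives additive, mu = 1 multiplicative updates."""
    if dt_spike >= 0:                                   # causal pairing: potentiation
        return A_plus * (w_max - w) ** mu * np.exp(-dt_spike / tau)
    return -A_minus * w ** mu * np.exp(dt_spike / tau)  # anti-causal pairing: depression

for mu in (0.0, 0.5, 1.0):       # additive, intermediate, multiplicative regimes
    print(mu, round(dw(0.8, +5.0, mu), 4), round(dw(0.8, -5.0, mu), 4))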
Affiliation(s)
- Simeon A Bamford
- Neuroinformatics Doctoral Training Centre, University of Edinburgh, Edinburgh, Scotland EH8 9AB, UK.

40
Two distinct modes of forebrain circuit dynamics underlie temporal patterning in the vocalizations of young songbirds. J Neurosci 2012; 31:16353-68. [PMID: 22072687 DOI: 10.1523/jneurosci.3009-11.2011] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Accurate timing is a critical aspect of motor control, yet the temporal structure of many mature behaviors emerges during learning from highly variable exploratory actions. How does a developing brain acquire the precise control of timing in behavioral sequences? To investigate the development of timing, we analyzed the songs of young juvenile zebra finches. These highly variable vocalizations, akin to human babbling, gradually develop into temporally stereotyped adult songs. We find that the durations of syllables and silences in juvenile singing are formed by a mixture of two distinct modes of timing: a random mode producing broadly distributed durations early in development, and a stereotyped mode underlying the gradual emergence of stereotyped durations. Using lesions, inactivations, and localized brain cooling, we investigated the roles of neural dynamics within two premotor cortical areas in the production of these temporal modes. We find that LMAN (lateral magnocellular nucleus of the nidopallium) is required specifically for the generation of the random mode of timing and that mild cooling of LMAN causes an increase in the durations produced by this mode. On the contrary, HVC (used as a proper name) is required specifically for producing the stereotyped mode of timing, and its cooling causes a slowing of all stereotyped components. These results show that two neural pathways contribute to the timing of juvenile songs and suggest an interesting organization in the forebrain, whereby different brain areas are specialized for the production of distinct forms of neural dynamics.
41
Tetzlaff C, Kolodziejski C, Timme M, Wörgötter F. Synaptic scaling in combination with many generic plasticity mechanisms stabilizes circuit connectivity. Front Comput Neurosci 2011; 5:47. [PMID: 22203799 PMCID: PMC3214727 DOI: 10.3389/fncom.2011.00047] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2011] [Accepted: 10/20/2011] [Indexed: 11/13/2022] Open
Abstract
Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks.
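In the spirit of the framework described here, a single synapse's dynamics can be sketched as a fast Hebbian term plus a slow, multiplicative scaling term that pulls the postsynaptic rate toward a target; the rate values and constants below are assumptions for illustration, not the paper's parameter set.

def dw_dt(w, pre, post, rate_target=5.0, mu=1e-5, gamma=1e-4):
    hebb = mu * pre * post                        # conventional plasticity (LTP-like)
    scaling = gamma * (rate_target - post) * w    # slow multiplicative synaptic scaling
    return hebb + scaling

w = 0.1
for _ in range(20_000):                           # sustained high postsynaptic activity
    w += dw_dt(w, pre=8.0, post=12.0)
print(round(w, 3))   # the weight settles where scaling balances Hebbian growth (about 1.37 here)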
Affiliation(s)
- Christian Tetzlaff
- Institute for Physics - Biophysics, Georg-August-University Göttingen, Germany

42
Hanuschkin A, Diesmann M, Morrison A. A reafferent and feed-forward model of song syntax generation in the Bengalese finch. J Comput Neurosci 2011; 31:509-32. [PMID: 21404048 PMCID: PMC3232349 DOI: 10.1007/s10827-011-0318-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2010] [Revised: 01/28/2011] [Accepted: 02/03/2011] [Indexed: 12/04/2022]
Abstract
Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of the song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in the HVC codes for individual syllables of the song and priming signals from the auditory network reduce the competition between syllables to allow only those transitions that are permitted by the syntax. Both imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model also reproduces for the first time experimental findings on the influence of altered auditory feedback on the song syntax generation, and predicts song- and species-specific low frequency components in the LFP. This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
Affiliation(s)
- Alexander Hanuschkin
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Schänzlestrasse 1, 79104 Freiburg, Germany.

43
Blättler F, Hahnloser RHR. An efficient coding hypothesis links sparsity and selectivity of neural responses. PLoS One 2011; 6:e25506. [PMID: 22022405 PMCID: PMC3192758 DOI: 10.1371/journal.pone.0025506] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2011] [Accepted: 09/05/2011] [Indexed: 11/18/2022] Open
Abstract
To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis projects that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
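Schematically, the cost function described above can be written as a reconstruction error plus a Huber-like penalty on each neuron's synaptic current u, quadratic below the firing threshold θ and linear above it; the exact parameterization in the paper may differ, so the expression below is only a sketch of the structure.

C_\theta(u) \;=\;
\begin{cases}
\tfrac{1}{2}\,u^2, & u \le \theta,\\
\theta\,u - \tfrac{1}{2}\,\theta^2, & u > \theta,
\end{cases}
\qquad
E \;=\; \sum_t \Big[\, \lVert x_t - \hat{x}(r_t) \rVert^2 \;+\; \lambda \sum_i C_\theta\big(u_i(t)\big) \Big],

where x_t is the stimulus, \hat{x}(r_t) its reconstruction from the elicited responses, and the trade-off weight \lambda is an assumed bookkeeping parameter balancing reconstruction accuracy against firing cost.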
Affiliation(s)
- Florian Blättler
- Institute of Neuroinformatics, University of Zurich/ETH Zurich, Zurich, Switzerland

44
Verduzco-Flores S, Ermentrout B, Bodner M. Modeling neuropathologies as disruption of normal sequence generation in working memory networks. Neural Netw 2011; 27:21-31. [PMID: 22112921 DOI: 10.1016/j.neunet.2011.09.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2011] [Revised: 09/09/2011] [Accepted: 09/28/2011] [Indexed: 10/16/2022]
Abstract
Recurrent networks of cortico-cortical connections have been implicated as the substrate of persistent activity in working memory and of the patterned, sequenced representations needed in cognitive function. We examine the pathological behavior that may result from specific changes to the normal parameters or architecture of a biologically plausible computational working memory model capable of learning and reproducing sequences that come from external stimuli. Specifically, we examine systematic reductions in network inhibition, excitatory potentiation, delays in excitatory connections, and heterosynaptic plasticity. We show that these changes result in a set of dynamics that may be associated with cognitive symptoms of different neuropathologies, particularly epilepsy, schizophrenia, and obsessive-compulsive disorder. We demonstrate how cognitive symptoms in these disorders may arise from similar or the same general mechanisms acting in the recurrent working memory networks. We suggest that these pathological dynamics may form a set of overlapping states within normal network function, and relate this to observed associations between different pathologies.
45
A model for complex sequence learning and reproduction in neural populations. J Comput Neurosci 2011; 32:403-23. [DOI: 10.1007/s10827-011-0360-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2011] [Revised: 08/12/2011] [Accepted: 08/15/2011] [Indexed: 10/17/2022]
46
Kunkel S, Diesmann M, Morrison A. Limits to the development of feed-forward structures in large recurrent neuronal networks. Front Comput Neurosci 2011; 4:160. [PMID: 21415913 PMCID: PMC3042733 DOI: 10.3389/fncom.2010.00160] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2010] [Accepted: 12/25/2010] [Indexed: 11/25/2022] Open
Abstract
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it.
Affiliation(s)
- Susanne Kunkel
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Germany

47
Schrader S, Diesmann M, Morrison A. A compositionality machine realized by a hierarchic architecture of synfire chains. Front Comput Neurosci 2011; 4:154. [PMID: 21258641 PMCID: PMC3020397 DOI: 10.3389/fncom.2010.00154] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2010] [Accepted: 12/05/2010] [Indexed: 11/17/2022] Open
Abstract
The composition of complex behavior is thought to rely on the concurrent and sequential activation of simpler action components, or primitives. Systems of synfire chains have previously been proposed to account for either the simultaneous or the sequential aspects of compositionality; however, the compatibility of the two aspects has so far not been addressed. Moreover, the simultaneous activation of primitives has up until now only been investigated in the context of reactive computations, i.e., the perception of stimuli. In this study we demonstrate how a hierarchical organization of synfire chains is capable of generating both aspects of compositionality for proactive computations such as the generation of complex and ongoing action. To this end, we develop a network model consisting of two layers of synfire chains. Using simple drawing strokes as a visualization of abstract primitives, we map the feed-forward activity of the upper level synfire chains to motion in two-dimensional space. Our model is capable of producing drawing strokes that are combinations of primitive strokes by binding together the corresponding chains. Moreover, when the lower layer of the network is constructed in a closed-loop fashion, drawing strokes are generated sequentially. The generated pattern can be random or deterministic, depending on the connection pattern between the lower level chains. We propose quantitative measures for simultaneity and sequentiality, revealing a wide parameter range in which both aspects are fulfilled. Finally, we investigate the spiking activity of our model to propose candidate signatures of synfire chain computation in measurements of neural activity during action execution.
48
Support for a synaptic chain model of neuronal sequence generation. Nature 2010; 468:394-9. [PMID: 20972420 PMCID: PMC2998755 DOI: 10.1038/nature09514] [Citation(s) in RCA: 256] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2010] [Accepted: 09/02/2010] [Indexed: 01/20/2023]
Abstract
In songbirds, the remarkable temporal precision of song is generated by a sparse sequence of bursts in the premotor nucleus HVC (proper name). To distinguish between two possible classes of models of neural sequence generation, we carried out intracellular recordings of HVC neurons in singing birds. We found that the subthreshold membrane potential is characterized by a large rapid depolarization 5–10 ms prior to burst onset, consistent with a synaptically-connected chain of neurons in HVC. We found no evidence for the slow membrane potential modulation predicted by models in which burst timing is controlled by subthreshold dynamics. Furthermore, bursts ride on an underlying depolarization of ~10ms duration, likely the result of a regenerative calcium spike within HVC neurons that could facilitate the propagation of activity through a chain network with high temporal precision. Our results shed light on the fundamental mechanisms by which neural circuits can generate complex sequential behaviours.
49
Hanuschkin A, Herrmann JM, Morrison A, Diesmann M. Compositionality of arm movements can be realized by propagating synchrony. J Comput Neurosci 2010; 30:675-97. [PMID: 20953686 PMCID: PMC3108016 DOI: 10.1007/s10827-010-0285-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2010] [Revised: 09/02/2010] [Accepted: 09/30/2010] [Indexed: 11/29/2022]
Abstract
We present a biologically plausible spiking neuronal network model of free monkey scribbling that reproduces experimental findings on cortical activity and the properties of the scribbling trajectory. The model is based on the idea that synfire chains can encode movement primitives. Here, we map the propagation of activity in a chain to a linearly evolving preferred velocity, which results in parabolic segments that fulfill the two-thirds power law. Connections between chains that match the final velocity of one encoded primitive to the initial velocity of the next allow the composition of random sequences of primitives with smooth transitions. The model provides an explanation for the segmentation of the trajectory and the experimentally observed deviations of the trajectory from the parabolic shape at primitive transition sites. Furthermore, the model predicts low frequency oscillations (<10 Hz) of the motor cortex local field potential during ongoing movements and increasing firing rates of non-specific motor cortex neurons before movement onset.
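For reference, the two-thirds power law mentioned above relates movement speed to path curvature: writing v for tangential speed, κ for curvature, and ω = vκ for angular speed,

v(t) \;=\; \gamma\,\kappa(t)^{-1/3}
\qquad\Longleftrightarrow\qquad
\omega(t) \;=\; \gamma\,\kappa(t)^{2/3},

with γ the velocity gain factor; the parabolic segments generated by the model are stated above to fulfill this relation.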
Affiliation(s)
- Alexander Hanuschkin
- Functional Neural Circuits Group, Faculty of Biology, Schänzlestrasse 1, 79104, Freiburg, Germany.

50
Olveczky BP, Gardner TJ. A bird's eye view of neural circuit formation. Curr Opin Neurobiol 2010; 21:124-31. [PMID: 20943369 DOI: 10.1016/j.conb.2010.08.001] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2010] [Revised: 08/03/2010] [Accepted: 08/04/2010] [Indexed: 11/29/2022]
Abstract
Neural circuits underlying complex learned behaviors, such as speech in humans, develop under genetic constraints and in response to environmental influences. Little is known about the rules and mechanisms through which such circuits form. We argue that songbirds, with their discrete and well studied neural pathways underlying a complex and naturally learned behavior, provide a powerful model for addressing these questions. We briefly review current knowledge of how the song circuit develops during learning and discuss new possibilities for advancing the field given recent technological advances.
Affiliation(s)
- Bence P Olveczky
- Harvard University, Department of Organismic and Evolutionary Biology and Center for Brain Science, 52 Oxford Street, Cambridge, MA 02138, USA.