1. Sun SED, Levenstein D, Li B, Mandelberg N, Chenouard N, Suutari BS, Sanchez S, Tian G, Rinzel J, Buzsáki G, Tsien RW. Synaptic homeostasis transiently leverages Hebbian mechanisms for a multiphasic response to inactivity. Cell Rep 2024; 43:113839. PMID: 38507409. DOI: 10.1016/j.celrep.2024.113839.
Abstract
Homeostatic regulation of synapses is vital for nervous system function and key to understanding a range of neurological conditions. Synaptic homeostasis is proposed to operate over hours to counteract the destabilizing influence of long-term potentiation (LTP) and long-term depression (LTD). The prevailing view holds that synaptic scaling is a slow first-order process that regulates postsynaptic glutamate receptors and fundamentally differs from LTP or LTD. Surprisingly, we find that the dynamics of scaling induced by neuronal inactivity are not exponential or monotonic, and the mechanism requires calcineurin and CaMKII, molecules dominant in LTD and LTP. Our quantitative model of these enzymes reconstructs the unexpected dynamics of homeostatic scaling and reveals how synapses can efficiently safeguard future capacity for synaptic plasticity. This mechanism of synaptic adaptation supports a broader set of homeostatic changes, including action potential autoregulation, and invites further inquiry into how such a mechanism varies in health and disease.
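For context, the "slow first-order process" of the prevailing view makes a sharp quantitative prediction that the data here violate: synaptic strength should relax monotonically and exponentially toward a set point. Below is a minimal sketch of that null model, not the authors' calcineurin/CaMKII model; the time constant, set point, and nominal hour units are all illustrative assumptions.

```python
def first_order_scaling(s0, s_target, tau, dt, t_end):
    """Integrate ds/dt = (s_target - s) / tau with forward Euler.

    A first-order process predicts a monotonic, exponential approach to
    the set point; the paper's point is that measured scaling dynamics
    are neither monotonic nor exponential."""
    s, t, traj = s0, 0.0, [s0]
    while t < t_end:
        s += dt * (s_target - s) / tau
        t += dt
        traj.append(s)
    return traj

# Inactivity raises the set point; strength relaxes up toward it.
traj = first_order_scaling(s0=1.0, s_target=2.0, tau=6.0, dt=0.01, t_end=48.0)
assert all(b >= a for a, b in zip(traj, traj[1:]))   # monotonic
assert abs(traj[-1] - 2.0) < 1e-3                    # settles at set point
```

Any dynamics that overshoot, pause, or reverse, as reported here, rule this one-parameter relaxation out.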
Affiliation(s)
- Simón E D Sun
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Daniel Levenstein
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Montreal Neurological Institute, Department of Neurology and Neurosurgery, McGill University, 3810 University Street, Montreal, QC, Canada
- Boxing Li
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Neuroscience Program, Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine and the Fifth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510810, China
- Nataniel Mandelberg
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Nicolas Chenouard
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Sorbonne Université, INSERM U1127, UMR CNRS 7225, Institut du Cerveau (ICM), 47 bld de l'hôpital, 75013 Paris, France
- Benjamin S Suutari
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Sandrine Sanchez
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Guoling Tian
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- John Rinzel
- Center for Neural Science, New York University, New York, NY 10003, USA
- György Buzsáki
- Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Richard W Tsien
- Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
2. Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. PMID: 38918347. DOI: 10.1007/978-3-031-60183-5_5.
Abstract
Converging experimental and computational evidence indicates that, on the scale of seconds, the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. A prototypical example of a neural population clock is the neural sequence, which has been observed across species, brain areas, and behavioral paradigms. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime for performing temporal computations.
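As a toy illustration of the population-clock idea (not the authors' model): elapsed time can be read out from a sequence of transiently active units, each tuned to a different moment. The unit count, Gaussian tuning width, and argmax decoder below are all illustrative assumptions.

```python
import math

N_UNITS, T_MAX, SIGMA = 50, 5.0, 0.2     # illustrative values (units tile 5 s)
PREF = [i * T_MAX / (N_UNITS - 1) for i in range(N_UNITS)]  # preferred times

def population_activity(t):
    """Sequential activation: unit i is maximally active near PREF[i],
    with a Gaussian temporal tuning curve of width SIGMA."""
    return [math.exp(-((t - p) ** 2) / (2 * SIGMA ** 2)) for p in PREF]

def decode_time(activity):
    """Population-clock readout: elapsed time is the preferred time of
    the most active unit."""
    peak = max(range(len(activity)), key=lambda i: activity[i])
    return PREF[peak]

# The population state at 2.3 s decodes back to 2.3 s, up to the spacing
# of the units' preferred times.
t_hat = decode_time(population_activity(2.3))
assert abs(t_hat - 2.3) < T_MAX / (N_UNITS - 1)
```

The key property is that each moment maps to a distinct population state, so a downstream decoder can recover time without any dedicated clock variable.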
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
3. Balcı F, Simen P. Neurocomputational Models of Interval Timing: Seeing the Forest for the Trees. Adv Exp Med Biol 2024; 1455:51-78. PMID: 38918346. DOI: 10.1007/978-3-031-60183-5_4.
Abstract
Extracting temporal regularities and relations from experience and observation is critical for organisms' adaptiveness (communication, foraging, predation, prediction) in their ecological niches. It is therefore not surprising that the internal clock that enables the perception of seconds-to-minutes-long intervals (interval timing) is evolutionarily well preserved across many species of animals. This comparative claim is primarily supported by the fact that the timing behavior of many vertebrates exhibits common statistical signatures (e.g., on-average accuracy, scalar variability, positive skew). These ubiquitous statistical features of timing behavior serve as empirical benchmarks for modelers in their efforts to unravel the processing dynamics of the internal clock (namely, answering how the internal clock "ticks"). In this chapter, we introduce prominent (neuro)computational approaches to modeling interval timing at a level that can be understood by a general audience. These models include Treisman's pacemaker-accumulator model, the information-processing variant of scalar expectancy theory, the striatal beat frequency model, behavioral expectancy theory, the learning-to-time model, the time-adaptive opponent Poisson drift-diffusion model, time cell models, and neural trajectory models. Crucially, we discuss these models within an overarching conceptual framework that categorizes them as threshold versus clock-adaptive models and as dedicated clock/ramping versus emergent time/population code models.
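The pacemaker-accumulator family surveyed in the chapter can be sketched in a few lines. The sketch below is a loose caricature, not any specific published model: tick counts come from a normal approximation to a Poisson pacemaker, and trial-to-trial clock-rate variability is added, one standard way such models recover scalar (rather than square-root) growth of variability. All parameter values are illustrative.

```python
import random

def timed_estimates(duration, n_trials=2000, mean_rate=50.0, rate_cv=0.15):
    """Pacemaker-accumulator sketch: a pacemaker emits ticks, an
    accumulator counts them for `duration` seconds, and the count is
    converted back to seconds. With a fixed rate, count noise alone gives
    sd ~ sqrt(duration) (sub-scalar); trial-to-trial rate variability
    (rate_cv) pushes the model toward scalar variability."""
    estimates = []
    for _ in range(n_trials):
        rate = max(1.0, random.gauss(mean_rate, rate_cv * mean_rate))
        lam = duration * rate                         # expected tick count
        ticks = max(0.0, random.gauss(lam, lam ** 0.5))  # Poisson, approx.
        estimates.append(ticks / mean_rate)           # decode: count / rate
    return estimates

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
short, long_ = timed_estimates(2.0), timed_estimates(8.0)
# On-average accuracy: mean estimates track the true durations.
assert abs(mean(short) - 2.0) < 0.1 and abs(mean(long_) - 8.0) < 0.4
# Near-scalar variability: coefficient of variation roughly constant.
assert 0.5 < (sd(long_) / mean(long_)) / (sd(short) / mean(short)) < 2.0
```

The two assertions correspond to the chapter's two benchmark signatures: on-average accuracy and (approximately) scalar variability.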
Affiliation(s)
- Fuat Balcı
- Department of Biological Sciences, University of Manitoba, Winnipeg, MB, Canada
- Patrick Simen
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
4. Buonomano DV, Buzsáki G, Davachi L, Nobre AC. Time for Memories. J Neurosci 2023; 43:7565-7574. PMID: 37940593. PMCID: PMC10634580. DOI: 10.1523/jneurosci.1430-23.2023.
Abstract
The ability to store information about the past to dynamically predict and prepare for the future is among the most fundamental tasks the brain performs. To date, the problems of understanding how the brain stores and organizes information about the past (memory) and how the brain represents and processes temporal information for adaptive behavior have generally been studied as distinct cognitive functions. This Symposium explores the inherent link between memory and temporal cognition, as well as the potential shared neural mechanisms between them. We suggest that working memory and implicit timing are interconnected and may share overlapping neural mechanisms. Additionally, we explore how temporal structure is encoded in associative and episodic memory and, conversely, the influences of episodic memory on subsequent temporal anticipation and the perception of time. We suggest that neural sequences provide a general computational motif that contributes to timing and working memory, as well as the spatiotemporal coding and recall of episodes.
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, California 90095
- Department of Psychology, University of California, Los Angeles, Los Angeles, California 90095
- Integrative Center for Learning and Memory, UCLA, Los Angeles, California 90025
- György Buzsáki
- Neuroscience Institute and Department of Neurology, NYU Grossman School of Medicine, New York University, New York, New York 10016
- Center for Neural Science, New York University, New York, New York 10003
- Lila Davachi
- Department of Psychology, Columbia University, New York, New York 10027
- Center for Clinical Research, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York 10962
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
- Department of Psychology, Yale University, New Haven, Connecticut 06510
- Wu Tsai Center for Neurocognition and Behavior, Wu Tsai Institute, Yale University, New Haven, Connecticut 06510
5. Ghirlanda S, Enquist M. How associations become behavior. Neurobiol Learn Mem 2023; 205:107833. PMID: 37778687. DOI: 10.1016/j.nlm.2023.107833.
Abstract
The Rescorla and Wagner (1972) model is the first mathematical theory to explain associative learning in the presence of multiple stimuli. Its main theoretical construct is that of associative strength, but this is connected to behavior only loosely. We propose a model in which behavior is described by a collection of Poisson processes, each with a rate proportional to an associative strength. The model predicts that the time between behaviors follows an exponential or hypoexponential distribution. This prediction is supported by two data sets on autoshaped and instrumental behavior in rats.
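The model's distributional prediction is easy to illustrate: a single Poisson process yields exponentially distributed gaps between behaviors, while a response that must pass through two sequential exponential stages yields hypoexponentially distributed gaps. A toy simulation with arbitrary rates (not fitted to the rat data):

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def cv(xs):
    """Coefficient of variation: sd / mean."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

n = 20000
# Single Poisson process: inter-behavior times are exponential.
exp_gaps = [random.expovariate(2.0) for _ in range(n)]
# Two sequential exponential stages with different rates: each gap is the
# sum of the stage times, i.e., hypoexponentially distributed.
hypo_gaps = [random.expovariate(2.0) + random.expovariate(5.0)
             for _ in range(n)]

# Means match theory: 1/rate, and 1/rate1 + 1/rate2.
assert abs(mean(exp_gaps) - 0.5) < 0.02
assert abs(mean(hypo_gaps) - 0.7) < 0.02
# A distributional diagnostic: the exponential has CV = 1, while the
# hypoexponential has CV < 1 (less variable than exponential).
assert cv(exp_gaps) > 0.9 and cv(hypo_gaps) < 0.9
```

In the paper's framing, the rates themselves would be proportional to Rescorla-Wagner associative strengths; the code only shows why the gap distributions take these two forms.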
Affiliation(s)
- Stefano Ghirlanda
- Department of Psychology, Brooklyn College, United States; Departments of Psychology and Biology, CUNY Graduate Center, United States; Centre for Cultural Evolution, Stockholm University, Sweden
- Magnus Enquist
- Centre for Cultural Evolution, Stockholm University, Sweden
6. Mackevicius EL, Gu S, Denisenko NI, Fee MS. Self-organization of songbird neural sequences during social isolation. eLife 2023; 12:e77262. PMID: 37252761. PMCID: PMC10229124. DOI: 10.7554/elife.77262.
Abstract
Behaviors emerge via a combination of experience and innate predispositions. As the brain matures, it undergoes major changes in cellular, network, and functional properties that can be due to sensory experience as well as developmental processes. In normal birdsong learning, neural sequences emerge to control song syllables learned from a tutor. Here, we disambiguate the role of tutor experience and development in neural sequence formation by delaying exposure to a tutor. Using functional calcium imaging, we observe neural sequences in the absence of tutoring, demonstrating that tutor experience is not necessary for the formation of sequences. However, after exposure to a tutor, pre-existing sequences can become tightly associated with new song syllables. Since we delayed tutoring, only half our birds learned new syllables following tutor exposure. The birds that failed to learn were the birds in which pre-tutoring neural sequences were most 'crystallized,' that is, already tightly associated with their (untutored) song.
Affiliation(s)
- Emily L Mackevicius
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Shijie Gu
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Natalia I Denisenko
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
- Michale S Fee
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, MIT, Cambridge, United States
7. Chinoy RB, Tanwar A, Buonomano DV. A Recurrent Neural Network Model Accounts for Both Timing and Working Memory Components of an Interval Discrimination Task. Timing Time Percept 2022. DOI: 10.1163/22134468-bja10058.
Abstract
Interval discrimination is of fundamental importance to many forms of sensory processing, including speech and music. Standard interval discrimination tasks require comparing two intervals separated in time, and thus include both working memory (WM) and timing components. Models of interval discrimination invoke separate circuits for the timing and WM components. Here we examine if, in principle, the same recurrent neural network can implement both. Using human psychophysics, we first explored the role of the WM component by varying the interstimulus delay. Consistent with previous studies, discrimination was significantly worse for a 250 ms delay, compared to 750 and 1500 ms delays, suggesting that the first interval is stably stored in WM for longer delays. We next successfully trained a recurrent neural network (RNN) on the task, demonstrating that the same network can implement both the timing and WM components. Many units in the RNN were tuned to specific intervals during the sensory epoch, and others encoded the first interval during the delay period. Overall, the encoding strategy was consistent with the notion of mixed selectivity. Units generally encoded more interval information during the sensory epoch than in the delay period, reflecting categorical encoding of short versus long in WM, rather than encoding of the specific interval. Our results demonstrate that, in contrast to standard models of interval discrimination that invoke a separate memory module, the same network can, in principle, solve the timing, WM, and comparison components of an interval discrimination task.
Affiliation(s)
- Rehan B. Chinoy
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
- Ashita Tanwar
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
- Dean V. Buonomano
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA 90095–1763, USA
8. Tsao A, Yousefzadeh SA, Meck WH, Moser MB, Moser EI. The neural bases for timing of durations. Nat Rev Neurosci 2022; 23:646-665. PMID: 36097049. DOI: 10.1038/s41583-022-00623-3.
Abstract
Durations are defined by a beginning and an end, and a major distinction is drawn between durations that start in the present and end in the future ('prospective timing') and durations that start in the past and end either in the past or the present ('retrospective timing'). Different psychological processes are thought to be engaged in each of these cases. The former is thought to engage a clock-like mechanism that accurately tracks the continuing passage of time, whereas the latter is thought to engage a reconstructive process that utilizes both temporal and non-temporal information from the memory of past events. We propose that, from a biological perspective, these two forms of duration 'estimation' are supported by computational processes that are both reliant on population state dynamics but are nevertheless distinct. Prospective timing is effectively carried out in a single step where the ongoing dynamics of population activity directly serve as the computation of duration, whereas retrospective timing is carried out in two steps: the initial generation of population state dynamics through the process of event segmentation and the subsequent computation of duration utilizing the memory of those dynamics.
Affiliation(s)
- Albert Tsao
- Department of Biology, Stanford University, Stanford, CA, USA
- Warren H Meck
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- May-Britt Moser
- Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
9. Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:e2023832118. PMID: 34772802. DOI: 10.1073/pnas.2023832118.
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
10. Zhou S, Masmanidis SC, Buonomano DV. Neural Sequences as an Optimal Dynamical Regime for the Readout of Time. Neuron 2020; 108:651-658.e5. PMID: 32946745. DOI: 10.1016/j.neuron.2020.08.020.
Abstract
Converging evidence suggests that the brain encodes time through dynamically changing patterns of neural activity, including neural sequences, ramping activity, and complex spatiotemporal dynamics. However, the potential computational significance and advantage of these different regimes have remained unaddressed. We combined large-scale recordings and modeling to compare population dynamics between premotor cortex and striatum in mice performing a two-interval timing task. Conventional decoders revealed that the dynamics within each area encoded time equally well; however, the dynamics in striatum exhibited a higher degree of sequentiality. Analysis of premotor and striatal dynamics, together with a large set of simulated prototypical dynamical regimes, revealed that regimes with higher sequentiality allowed a biologically constrained artificial downstream network to better read out time. These results suggest that, although different strategies exist for encoding time in the brain, neural sequences represent an ideal and flexible dynamical regime for enabling downstream areas to read out this information.
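One intuition for why sequentiality aids readout, sketched as a caricature rather than the authors' biologically constrained network: in a sequence code, a downstream unit only needs to weight the neuron tuned to the target time, whereas reading a target time off a ramp requires a precise threshold whose crossing time shifts with gain fluctuations. The gain-noise model and all parameters below are assumptions for illustration.

```python
import random

random.seed(2)
T, TARGET = 100, 60          # time steps; desired readout time (arbitrary)

def readout_time_sequence(gain):
    """Sequence code: exactly one unit is active per time step, so a
    readout weighting only the TARGET-tuned unit fires when it does."""
    for t in range(T):
        activity = 1.0 if t == TARGET else 0.0   # the TARGET-tuned unit
        if gain * activity > 0.5:
            return t
    return T

def readout_time_ramp(gain):
    """Ramping code: a population ramp from 0 to 1; the readout fires at
    a threshold crossing, whose time shifts when the gain fluctuates."""
    thresh = TARGET / (T - 1)
    for t in range(T):
        if gain * t / (T - 1) >= thresh:
            return t
    return T

errors_seq, errors_ramp = [], []
for _ in range(200):
    gain = random.gauss(1.0, 0.1)                # multiplicative gain noise
    errors_seq.append(abs(readout_time_sequence(gain) - TARGET))
    errors_ramp.append(abs(readout_time_ramp(gain) - TARGET))
# The sequence readout is untouched by gain noise; the ramp readout drifts.
assert sum(errors_seq) < sum(errors_ramp)
```

This toy captures only one facet (robustness of a fixed-time readout); the paper's claim rests on decoding from recorded and simulated population dynamics.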
Affiliation(s)
- Shanglin Zhou
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Sotiris C Masmanidis
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA 90095, USA; California Nanosystems Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA 90095, USA
11. Cabessa J, Tchaptchet A. Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings. Neural Netw 2020; 126:312-334. PMID: 32278841. DOI: 10.1016/j.neunet.2020.03.019.
Abstract
Synfire rings are neural circuits capable of conveying synchronous, temporally precise and self-sustained activities in a robust manner. We propose a cell assembly based paradigm for abstract neural computation centered on the concept of synfire rings. More precisely, we empirically show that Hodgkin-Huxley neural networks modularly composed of synfire rings are automata complete. We provide an algorithmic construction which, starting from any given finite state automaton, builds a corresponding Hodgkin-Huxley neural network modularly composed of synfire rings and capable of simulating it. We illustrate the correctness of the construction on two specific examples. We further analyze the stability and robustness of the construction as a function of changes in the ring topologies as well as with respect to cell death and synaptic failure mechanisms, respectively. These results establish the possibility of achieving abstract computation with bio-inspired neural networks. They might constitute a theoretical ground for the realization of biological neural computers.
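The computational target of the construction, simulating an arbitrary finite state automaton, can be stated abstractly: in the paper, each automaton state would be realized by a synfire ring whose self-sustained activity marks the current state. Below only the abstract automaton is kept, with a hypothetical two-state parity example standing in for "any given finite state automaton."

```python
def run_dfa(transitions, start, accepting, inputs):
    """Simulate a deterministic finite automaton.

    In the paper's construction, each state would correspond to a synfire
    ring and each transition to routing activity between rings; here only
    the abstract machine being simulated is shown."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# Hypothetical example: a two-state automaton accepting inputs with an
# odd number of 1s.
parity = {("even", 0): "even", ("even", 1): "odd",
          ("odd", 0): "odd", ("odd", 1): "even"}
assert run_dfa(parity, "even", {"odd"}, [1, 0, 1, 1]) is True   # three 1s
assert run_dfa(parity, "even", {"odd"}, [1, 1, 0]) is False     # two 1s
```

"Automata complete" then means: for every such transition table, an equivalent Hodgkin-Huxley network of rings can be built algorithmically.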
Affiliation(s)
- Jérémie Cabessa
- Laboratory of Mathematical Economics and Applied Microeconomics (LEMMA), Université Paris 2, Panthéon-Assas, 75005 Paris, France; Institute of Computer Science of the Czech Academy of Sciences, P. O. Box 5, 18207 Prague 8, Czech Republic
- Aubin Tchaptchet
- Institute of Physiology, Philipps University of Marburg, 35037 Marburg, Germany
12. Susman L, Brenner N, Barak O. Stable memory with unstable synapses. Nat Commun 2019; 10:4441. PMID: 31570719. PMCID: PMC6768856. DOI: 10.1038/s41467-019-12306-2.
Abstract
What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes memory acquisition to changes in synaptic efficacy, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations and the accompanying stabilizing homeostatic mechanisms is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations differentially affects different aspects of network connectivity. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed points. Such dynamic attractors can be learned by biologically plausible learning rules and support associative retrieval. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point to experimentally measurable signatures.

How are stable memories maintained in the brain despite significant ongoing fluctuations in synaptic strengths? Here, the authors show that a model consistent with fluctuations, homeostasis, and biologically plausible learning rules naturally leads to memories implemented as dynamic attractors.
Affiliation(s)
- Lee Susman
- Interdisciplinary Program in Applied Mathematics, Technion Israel Institute of Technology, Haifa, 32000, Israel; Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel
- Naama Brenner
- Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Department of Chemical Engineering, Technion Israel Institute of Technology, Haifa, 32000, Israel
- Omri Barak
- Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa, 32000, Israel
13. Paton JJ, Buonomano DV. The Neural Basis of Timing: Distributed Mechanisms for Diverse Functions. Neuron 2018; 98:687-705. PMID: 29772201. DOI: 10.1016/j.neuron.2018.03.045.
Abstract
Timing is critical to most forms of learning, behavior, and sensory-motor processing. Converging evidence supports the notion that, precisely because of its importance across a wide range of brain functions, timing relies on intrinsic and general properties of neurons and neural circuits; that is, the brain uses its natural cellular and network dynamics to solve a diversity of temporal computations. Many circuits have been shown to encode elapsed time in dynamically changing patterns of neural activity, so-called population clocks. But temporal processing encompasses a wide range of different computations, and just as there are different circuits and mechanisms underlying computations about space, there are a multitude of circuits and mechanisms underlying the ability to tell time and generate temporal patterns.
Affiliation(s)
- Joseph J Paton
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Dean V Buonomano
- Departments of Neurobiology and Psychology and Brain Research Institute, Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, CA, USA
14. Hardy NF, Buonomano DV. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model. Neural Comput 2018; 30:378-396. PMID: 29162002. PMCID: PMC5873300. DOI: 10.1162/neco_a_01041.
Abstract
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency (a measure of network interconnectedness) decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that the observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
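The notion of a "functionally feedforward trajectory" embedded in recurrent connectivity can be illustrated with a toy rate model: a shifted-diagonal weight matrix (recurrent in form, feedforward in effect) passes an activity packet down a chain of units. The chain length and weight below are arbitrary, and none of the paper's E/I balance or Weber's-law machinery is included.

```python
import math

N = 8        # units in the chain (arbitrary)
W_FF = 2.0   # feedforward weight along the chain (illustrative value)

def step(x):
    """Rate-model update x(t+1) = tanh(W x(t)), where W is a shifted
    diagonal: unit i drives unit i+1 and nothing else."""
    new = [0.0] * N
    for i in range(N - 1):
        new[i + 1] = math.tanh(W_FF * x[i])
    return new

x = [1.0] + [0.0] * (N - 1)   # transient kick to unit 0
active = []
for _ in range(N - 1):
    x = step(x)
    active.append(max(range(N), key=lambda i: x[i]))
# The peak of activity advances one unit per time step: a neural sequence,
# generated by a matrix that is formally recurrent.
assert active == [1, 2, 3, 4, 5, 6, 7]
```

With W_FF above the tanh fixed-point threshold, the packet is self-sustaining along the chain; the paper's contribution is showing that trained RNNs embed several such effective chains in one densely recurrent matrix.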
Affiliation(s)
- N F Hardy
- Neuroscience Interdepartmental Program and Department of Neurobiology, University of California Los Angeles, Los Angeles, CA 90095, USA
- Dean V Buonomano
- Neuroscience Interdepartmental Program and Departments of Neurology and Psychology, University of California Los Angeles, Los Angeles, CA 90095, USA
15. Ravid Tannenbaum N, Burak Y. Shaping Neural Circuits by High Order Synaptic Interactions. PLoS Comput Biol 2016; 12:e1005056. PMID: 27517461. PMCID: PMC4982676. DOI: 10.1371/journal.pcbi.1005056.
Abstract
Spike timing dependent plasticity (STDP) is believed to play an important role in shaping the structure of neural circuits. Here we show that STDP generates effective interactions between synapses of different neurons, which were neglected in previous theoretical treatments, and can be described as a sum over contributions from structural motifs. These interactions can have a pivotal influence on the connectivity patterns that emerge under the influence of STDP. In particular, we consider two highly ordered forms of structure: wide synfire chains, in which groups of neurons project to each other sequentially, and self-connected assemblies. We show that high order synaptic interactions can enable the formation of both structures, depending on the form of the STDP function and the time course of synaptic currents. Furthermore, within a certain regime of biophysical parameters, emergence of the ordered connectivity occurs robustly and autonomously in a stochastic network of spiking neurons, without a need to expose the neural network to structured inputs during learning.

Plasticity between neural connections plays a key role in our ability to process and store information. One of the fundamental questions about plasticity is the extent to which local processes, affecting individual synapses, are responsible for large-scale structures of neural connectivity. Here we focus on two types of structure: synfire chains and self-connected assemblies. These structures are often proposed as forms of neural connectivity that can support brain functions such as memory and the generation of motor activity. We show that an important plasticity mechanism, spike timing dependent plasticity, can lead to the autonomous emergence of these large-scale structures in the brain: in contrast to previous theoretical proposals, we show that emergence can occur autonomously even if instructive signals are not fed into the neural network while its form is shaped by synaptic plasticity.
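For readers unfamiliar with the plasticity rule at the center of the paper, a standard pairwise STDP window can be written down directly. The parameter values below are generic textbook choices, not the paper's specific parameterization (whose shape, the authors show, determines whether chains or assemblies emerge).

```python
import math

# Illustrative pairwise STDP window (parameters are not from the paper):
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants, ms

def stdp(delta_t):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre. Pre-before-post (delta_t > 0) potentiates;
    post-before-pre depresses; both effects decay exponentially with the
    pairing interval."""
    if delta_t > 0:
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)

# Causal pairing strengthens, anti-causal weakens, and the effect decays
# with the interval -- the temporal asymmetry that can favor chain-like
# (synfire) motifs over symmetric ones.
assert stdp(10.0) > 0 > stdp(-10.0)
assert stdp(10.0) > stdp(40.0) > 0
```

The paper's higher-order analysis asks what this pairwise rule implies once correlations among many synapses, summed over structural motifs, are taken into account.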
Affiliation(s)
- Neta Ravid Tannenbaum
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| | - Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
| |
|
16
|
Mitani K, Kashino M. Self-Produced Time Intervals Are Perceived as More Variable and/or Shorter Depending on Temporal Context in Subsecond and Suprasecond Ranges. Front Integr Neurosci 2016; 10:19. [PMID: 27313515 PMCID: PMC4887498 DOI: 10.3389/fnint.2016.00019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 03/29/2016] [Accepted: 05/17/2016] [Indexed: 11/24/2022]
Abstract
The processing of time intervals is fundamental for sensorimotor and cognitive functions. Perceptual and motor timing are often performed concurrently (e.g., playing a musical instrument). Although previous studies have shown the influence of body movements on time perception, how we perceive self-produced time intervals has remained unclear. Furthermore, it has been suggested that the timing mechanisms are distinct for the sub- and suprasecond ranges. Here, we compared perceptual performances for self-produced and passively presented time intervals in random contexts (i.e., multiple target intervals presented in a session) across the sub- and suprasecond ranges (Experiment 1) and within the sub- (Experiment 2) and suprasecond (Experiment 3) ranges, and in a constant context (i.e., a single target interval presented in a session) in the sub- and suprasecond ranges (Experiment 4). We show that self-produced time intervals were perceived as shorter and more variable across the sub- and suprasecond ranges and within the suprasecond range, but not within the subsecond range, in a random context. In a constant context, the self-produced time intervals were perceived as more variable in the suprasecond range but not in the subsecond range. These impairments indicate that motor timing interferes with perceptual timing. The dependence of the impairment on temporal context suggests multiple timing mechanisms for the subsecond and suprasecond ranges. In addition, violation of the scalar property (i.e., a constant ratio of variability to target interval) was observed between the sub- and suprasecond ranges. The violation was clearer for motor timing than for perceptual timing, suggesting that the multiple timing mechanisms for the sub- and suprasecond ranges overlap more for perception than for motor timing. Moreover, the central tendency effect (i.e., where shorter base intervals are overestimated and longer base intervals are underestimated) disappeared with motor timing within the subsecond range, suggesting multiple subsecond timing systems for perception and motor timing.
Affiliation(s)
- Keita Mitani
- Department of Information Processing, Tokyo Institute of Technology, Yokohama, Japan
| | - Makio Kashino
- Department of Information Processing, Tokyo Institute of Technology, Yokohama, Japan; Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
| |
|
17
|
Cannon J, Miller P. Synaptic and intrinsic homeostasis cooperate to optimize single neuron response properties and tune integrator circuits. J Neurophysiol 2016; 116:2004-2022. [PMID: 27306675 DOI: 10.1152/jn.00253.2016] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Received: 03/24/2016] [Accepted: 06/15/2016] [Indexed: 11/22/2022]
Abstract
Homeostatic processes that provide negative feedback to regulate neuronal firing rate are essential for normal brain function, and observations suggest that multiple such processes may operate simultaneously in the same network. We pose two questions: why might a diversity of homeostatic pathways be necessary, and how can they operate in concert without opposing and undermining each other? To address these questions, we perform a computational and analytical study of cell-intrinsic homeostasis and synaptic homeostasis in single-neuron and recurrent circuit models. We demonstrate analytically and in simulation that when two such mechanisms are controlled on a long time scale by firing rate via simple and general feedback rules, they can robustly operate in tandem to tune the mean and variance of a single neuron's firing rate to desired goals. This property allows the system to recover desired behavior after chronic changes in input statistics. We illustrate the power of this homeostatic tuning scheme by using it to regain high mutual information between neuronal input and output after major changes in input statistics. We then show that such dual homeostasis can be applied to tune the behavior of a neural integrator, a system that is notoriously sensitive to variation in parameters. These results are robust to variation in goals and model parameters. We argue that a set of homeostatic processes that appear to redundantly regulate mean firing rate may work together to control firing rate mean and variance and thus maintain performance in a parameter-sensitive task such as integration.
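A minimal sketch of the dual-homeostasis idea described here — one slow feedback rule adjusting intrinsic excitability from the mean-rate error, another adjusting synaptic gain from the variance error — might look like the following. The targets, input statistics, and learning rates are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
target_mean, target_var = 5.0, 4.0   # hypothetical firing-rate goals (Hz, Hz^2)
eta = 1e-3                           # slow homeostatic time scale

gain, bias = 1.0, 0.0                # synaptic scaling factor; intrinsic excitability
rates = []
for drive in rng.normal(3.0, 3.0, 200_000):   # fluctuating synaptic input
    rate = max(gain * drive + bias, 0.0)      # rectified-linear firing rate
    # Negative feedback: bias tracks the mean error,
    # gain tracks the variance error (measured against the mean goal).
    bias += eta * (target_mean - rate)
    gain += eta * 0.1 * (target_var - (rate - target_mean) ** 2)
    rates.append(rate)

late = np.array(rates[-50_000:])     # after convergence, mean and variance sit near goal
```

Because each rule drives the measured statistic's error to zero in expectation, the two controllers settle jointly rather than fighting each other, which is the cooperation the paper analyzes.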
Affiliation(s)
- Jonathan Cannon
- Department of Biology, Brandeis University, Waltham, Massachusetts
| | - Paul Miller
- Department of Biology, Brandeis University, Waltham, Massachusetts
| |
|
18
|
Watson BO, Levenstein D, Greene JP, Gelinas JN, Buzsáki G. Network Homeostasis and State Dynamics of Neocortical Sleep. Neuron 2016; 90:839-52. [PMID: 27133462 PMCID: PMC4873379 DOI: 10.1016/j.neuron.2016.03.036] [Citation(s) in RCA: 189] [Impact Index Per Article: 23.6] [Received: 12/29/2015] [Revised: 02/22/2016] [Accepted: 03/30/2016] [Indexed: 12/23/2022]
Abstract
Sleep exerts many effects on mammalian forebrain networks, including homeostatic effects on both synaptic strengths and firing rates. We used large-scale recordings to examine the activity of neurons in the frontal cortex of rats and first observed that the distribution of pyramidal cell firing rates was wide and strongly skewed toward high firing rates. Moreover, neurons from different parts of that distribution were differentially modulated by sleep substates. Periods of nonREM sleep reduced the activity of high firing rate neurons and tended to upregulate firing of slow-firing neurons. By contrast, the effect of REM was to reduce firing rates across the entire rate spectrum. Microarousals, interspersed within nonREM epochs, increased firing rates of slow-firing neurons. The net result of sleep was to homogenize the firing rate distribution. These findings are at variance with current homeostatic models and provide a novel view of sleep in adjusting network excitability.
Affiliation(s)
- Brendon O Watson
- New York University Neuroscience Institute, New York University, New York, NY 10016, USA; Department of Psychiatry, Weill Cornell Medical College, New York, NY 10065, USA
| | - Daniel Levenstein
- New York University Neuroscience Institute, New York University, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10016, USA
| | - J Palmer Greene
- New York University Neuroscience Institute, New York University, New York, NY 10016, USA
| | - Jennifer N Gelinas
- New York University Neuroscience Institute, New York University, New York, NY 10016, USA
| | - György Buzsáki
- New York University Neuroscience Institute, New York University, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10016, USA.
| |
|
19
|
|
20
|
Rajan K, Harvey CD, Tank DW. Recurrent Network Models of Sequence Generation and Memory. Neuron 2016; 90:128-42. [PMID: 26971945 DOI: 10.1016/j.neuron.2016.02.009] [Citation(s) in RCA: 186] [Impact Index Per Article: 23.3] [Received: 08/22/2015] [Revised: 12/03/2015] [Accepted: 02/02/2016] [Indexed: 12/29/2022]
Abstract
Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.
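The defining constraint of PINning — only a small random subset of recurrent weights is eligible for modification — can be sketched with a sparse mask over an error-driven update. The paper trains with a recursive-least-squares rule; the simple delta-rule below is a stand-in, and all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, lr = 100, 0.1, 0.01
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random recurrent weights
mask = rng.random((N, N)) < p                  # ~10% of synapses are plastic
J0 = J.copy()                                  # keep the initial weights for comparison

target = np.sin(np.linspace(0.0, 2.0 * np.pi, N))  # desired population pattern
x = rng.normal(0.0, 0.5, N)
for _ in range(200):
    r = np.tanh(J @ x)
    x = x + 0.1 * (-x + r)                 # leaky rate dynamics
    err = target - x
    J += lr * mask * np.outer(err, r)      # update touches only the plastic subset
```

After training, the unmasked (frozen) entries of `J` are bit-for-bit unchanged, so any sequence or memory behavior the network acquires is carried by the small trained fraction, mirroring the paper's "largely disordered" starting point.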
Affiliation(s)
- Kanaka Rajan
- Joseph Henry Laboratories of Physics and Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ 08544, USA.
| | | | - David W Tank
- Department of Molecular Biology and Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA.
| |
|
21
|
Growth and splitting of neural sequences in songbird vocal development. Nature 2015; 528:352-7. [PMID: 26618871 PMCID: PMC4957523 DOI: 10.1038/nature15741] [Citation(s) in RCA: 85] [Impact Index Per Article: 9.4] [Received: 01/21/2015] [Accepted: 09/22/2015] [Indexed: 12/29/2022]
Abstract
Neural sequences are a fundamental feature of brain dynamics underlying diverse behaviors, but the mechanisms by which they develop during learning remain unknown. Songbirds learn vocalizations composed of syllables; in adult birds, each syllable is produced by a different sequence of action potential bursts in the premotor cortical area HVC. Here we carried out recordings of large populations of HVC neurons in singing juvenile birds throughout learning to examine the emergence of neural sequences. Early in vocal development, HVC neurons begin producing rhythmic bursts, temporally locked to a ‘prototype’ syllable. Different neurons are active at different latencies relative to syllable onset to form a continuous sequence. Through development, as new syllables emerge from the prototype syllable, initially highly overlapping burst sequences become increasingly distinct. We propose a mechanistic model in which multiple neural sequences can emerge from the growth and splitting of a common precursor sequence.
|
22
|
Miura K, Aoki T. Hodge-Kodaira decomposition of evolving neural networks. Neural Netw 2014; 62:20-4. [PMID: 24958507 DOI: 10.1016/j.neunet.2014.05.021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 01/30/2014] [Revised: 05/22/2014] [Accepted: 05/28/2014] [Indexed: 10/25/2022]
Abstract
Although it is very important to scrutinize recurrent structures of neural networks for elucidating brain functions, conventional methods often have difficulty in characterizing global loops within a network systematically. Here we applied the Hodge-Kodaira decomposition, a topological method, to an evolving neural network model in order to characterize its loop structure. By controlling a learning rule parametrically, we found that a model with an STDP-rule, which tends to form paths coincident with causal firing orders, had the most loops. Furthermore, by counting the number of global loops in the network, we detected the inhomogeneity inside the chaotic region, which is usually considered intractable.
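The decomposition is easy to state for edge flows on a graph: subtracting the best gradient (potential-difference) fit leaves a divergence-free loop component, whose size is the kind of quantity used to characterize recurrent structure. A toy example on a four-node graph with one directed cycle; the graph and flow values are invented for illustration.

```python
import numpy as np

# Directed graph with one 3-cycle (0->1->2->0) plus a branch edge (2->3).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
f = np.array([1.0, 1.0, 1.0, 0.5])   # edge flows: a unit circulation plus a branch flow

B = np.zeros((n, len(edges)))        # node-by-edge incidence matrix
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = -1.0, 1.0

# Gradient part: best least-squares fit of f by potential differences phi[v] - phi[u].
phi, *_ = np.linalg.lstsq(B.T, f, rcond=None)
f_grad = B.T @ phi
f_loop = f - f_grad                  # divergence-free (loop) component
```

Here the residual `f_loop` recovers exactly the unit circulation around the 3-cycle, while the branch flow is absorbed into the gradient part; counting or weighting such loop components over an evolving network is the spirit of the paper's analysis.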
Affiliation(s)
- Keiji Miura
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan.
| | - Takaaki Aoki
- Faculty of Education, Kagawa University, Takamatsu, Japan.
| |
|
23
|
Savin C, Triesch J. Emergence of task-dependent representations in working memory circuits. Front Comput Neurosci 2014; 8:57. [PMID: 24904395 PMCID: PMC4035833 DOI: 10.3389/fncom.2014.00057] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 03/26/2014] [Accepted: 05/10/2014] [Indexed: 01/31/2023]
Abstract
A wealth of experimental evidence suggests that working memory circuits preferentially represent information that is behaviorally relevant. Still, we are missing a mechanistic account of how these representations come about. Here we provide a simple explanation for a range of experimental findings, in light of prefrontal circuits adapting to task constraints by reward-dependent learning. In particular, we model a neural network shaped by reward-modulated spike-timing dependent plasticity (r-STDP) and homeostatic plasticity (intrinsic excitability and synaptic scaling). We show that the experimentally-observed neural representations naturally emerge in an initially unstructured circuit as it learns to solve several working memory tasks. These results point to a critical, and previously unappreciated, role for reward-dependent learning in shaping prefrontal cortex activity.
Affiliation(s)
- Cristina Savin
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| | - Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Physics Department, Goethe University, Frankfurt am Main, Germany
| |
|
24
|
Rivest F, Kalaska JF, Bengio Y. Conditioning and time representation in long short-term memory networks. Biol Cybern 2014; 108:23-48. [PMID: 24258005 DOI: 10.1007/s00422-013-0575-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 06/24/2013] [Accepted: 10/19/2013] [Indexed: 06/02/2023]
Abstract
Dopaminergic models based on the temporal-difference learning algorithm usually do not differentiate trace from delay conditioning. Instead, they use a fixed temporal representation of elapsed time since conditioned stimulus onset. Recently, a new model was proposed in which timing is learned within a long short-term memory (LSTM) artificial neural network representing the cerebral cortex (Rivest et al. in J Comput Neurosci 28(1):107-130, 2010). In this paper, that model's ability to reproduce and explain relevant data, as well as its ability to make interesting new predictions, are evaluated. The model reveals a strikingly different temporal representation between trace and delay conditioning since trace conditioning requires working memory to remember the past conditioned stimulus while delay conditioning does not. On the other hand, the model predicts no important difference in DA responses between those two conditions when trained on one conditioning paradigm and tested on the other. The model predicts that in trace conditioning, animal timing starts with the conditioned stimulus offset as opposed to its onset. In classical conditioning, it predicts that if the conditioned stimulus does not disappear after the reward, the animal may expect a second reward. Finally, the last simulation reveals that the buildup of activity of some units in the networks can adapt to new delays by adjusting their rate of integration. Most importantly, the paper shows that it is possible, with the proposed architecture, to acquire discharge patterns similar to those observed in dopaminergic neurons and in the cerebral cortex on those tasks simply by minimizing a predictive cost function.
Affiliation(s)
- Francois Rivest
- Department of Mathematics and Computer Science, Royal Military College of Canada, PO Box 17000, Station Forces, Kingston, ON, K7K 7B4, Canada,
| | | | | |
|
25
|
Neurocomputational Models of Time Perception. Adv Exp Med Biol 2014; 829:49-71. [DOI: 10.1007/978-1-4939-1782-2_4] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Indexed: 12/23/2022]
|
26
|
Yu Q, Tang H, Tan KC, Li H. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns. PLoS One 2013; 8:e78318. [PMID: 24223789 PMCID: PMC3818323 DOI: 10.1371/journal.pone.0078318] [Citation(s) in RCA: 111] [Impact Index Per Article: 10.1] [Received: 04/01/2013] [Accepted: 09/11/2013] [Indexed: 11/18/2022]
Abstract
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
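The rule as described — the error between desired and actual output spikes, scaled by an exponentially decaying eligibility trace of afferent spikes — can be sketched directly. The time step, trace constant, and learning rate below are illustrative, not the paper's values.

```python
import numpy as np

def psd_update(pre_spikes, desired, actual, eta=0.01, tau=10.0, dt=1.0):
    """Sketch of the PSD rule.  pre_spikes: (T, N) 0/1 array of afferent
    spikes; desired, actual: (T,) 0/1 output spike trains.  Returns the
    accumulated weight change per afferent."""
    T, N = pre_spikes.shape
    trace = np.zeros(N)              # eligibility trace per afferent
    dw = np.zeros(N)
    decay = np.exp(-dt / tau)
    for t in range(T):
        trace = trace * decay + pre_spikes[t]
        err = desired[t] - actual[t]  # +1 -> potentiation, -1 -> depression
        dw += eta * err * trace
    return dw
```

A missed desired spike shortly after an afferent spike potentiates that afferent, while a spurious output spike depresses it, which is the supervised error-correction behavior the abstract describes.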
Affiliation(s)
- Qiang Yu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Huajin Tang
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore
- College of Computer Science, Sichuan University, Chengdu, China
| | - Kay Chen Tan
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
| | - Haizhou Li
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore, Singapore
- School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia
| |
|
27
|
Pearce TC, Karout S, Rácz Z, Capurro A, Gardner JW, Cole M. Rapid processing of chemosensor transients in a neuromorphic implementation of the insect macroglomerular complex. Front Neurosci 2013; 7:119. [PMID: 23874265 PMCID: PMC3709137 DOI: 10.3389/fnins.2013.00119] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Received: 11/02/2012] [Accepted: 06/20/2013] [Indexed: 12/28/2022]
Abstract
We present a biologically-constrained neuromorphic spiking model of the insect antennal lobe macroglomerular complex that encodes concentration ratios of chemical components existing within a blend, implemented using a set of programmable logic neuronal modeling cores. Depending upon the level of inhibition and symmetry in its inhibitory connections, the model exhibits two dynamical regimes: fixed point attractor (winner-takes-all type), and limit cycle attractor (winnerless competition type) dynamics. We show that, when driven by chemosensor input in real-time, the dynamical trajectories of the model's projection neuron population activity accurately encode the concentration ratios of binary odor mixtures in both dynamical regimes. By deploying spike timing-dependent plasticity in a subset of the synapses in the model, we demonstrate that a Hebbian-like associative learning rule is able to organize weights into a stable configuration after exposure to a randomized training set comprising a variety of input ratios. Examining the resulting local interneuron weights in the model shows that each inhibitory neuron competes to represent possible ratios across the population, forming a ratiometric representation via mutual inhibition. After training the resulting dynamical trajectories of the projection neuron population activity show amplification and better separation in their response to inputs of different ratios. Finally, we demonstrate that by using limit cycle attractor dynamics, it is possible to recover and classify blend ratio information from the early transient phases of chemosensor responses in real-time more rapidly and accurately compared to a nearest-neighbor classifier applied to the normalized chemosensor data. Our results demonstrate the potential of biologically-constrained neuromorphic spiking models in achieving rapid and efficient classification of early phase chemosensor array transients with execution times well beyond biological timescales.
Affiliation(s)
- Timothy C Pearce
- Centre for Bioengineering, Department of Engineering, University of Leicester, Leicester, East Midlands, UK
| | | | | | | | | | | |
|
28
|
Dockendorf K, Srinivasa N. Learning and prospective recall of noisy spike pattern episodes. Front Comput Neurosci 2013; 7:80. [PMID: 23801961 PMCID: PMC3689221 DOI: 10.3389/fncom.2013.00080] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 03/08/2013] [Accepted: 06/03/2013] [Indexed: 11/13/2022]
Abstract
Spike patterns in vivo are often incomplete or corrupted with noise that makes inputs to neuronal networks appear to vary although they may, in fact, be samples of a single underlying pattern or repeated presentation. Here we present a recurrent spiking neural network (SNN) model that learns noisy pattern sequences through the use of homeostasis and spike-timing dependent plasticity (STDP). We find that the changes in the synaptic weight vector during learning of patterns of random ensembles are approximately orthogonal in a reduced dimension space when the patterns are constructed to minimize overlap in representations. Using this model, representations of sparse patterns may be associated through co-activated firing and integrated into ensemble representations. While the model is tolerant to noise, prospective activity and pattern completion differ in their ability to adapt in the presence of noise. One version of the model is able to demonstrate the recently discovered phenomena of preplay and replay reminiscent of hippocampal-like behaviors.
Affiliation(s)
- Karl Dockendorf
- Information and System Sciences Lab, Center for Neural and Emergent Systems, HRL Laboratories LLC, Malibu, CA, USA
| | | |
|
29
|
Goel A, Buonomano DV. Chronic electrical stimulation homeostatically decreases spontaneous activity, but paradoxically increases evoked network activity. J Neurophysiol 2013; 109:1824-36. [PMID: 23324317 DOI: 10.1152/jn.00612.2012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Indexed: 11/22/2022]
Abstract
Neural dynamics generated within cortical networks play a fundamental role in brain function. However, the learning rules that allow recurrent networks to generate functional dynamic regimes, and the degree to which these regimes are themselves plastic, are not known. In this study we examined plasticity of network dynamics in cortical organotypic slices in response to chronic changes in activity. Studies have typically manipulated network activity pharmacologically; we used chronic electrical stimulation to increase activity in in vitro cortical circuits in a more physiological manner. Slices were stimulated with "implanted" electrodes for 4 days. Chronic electrical stimulation or treatment with bicuculline decreased spontaneous activity as predicted by homeostatic learning rules. Paradoxically, however, whereas bicuculline decreased evoked network activity, chronic stimulation actually increased the likelihood that evoked stimulation elicited polysynaptic activity, despite a decrease in evoked monosynaptic strength. Furthermore, there was an inverse correlation between spontaneous and evoked activity, suggesting a homeostatic tradeoff between spontaneous and evoked activity. Within-slice experiments revealed that cells close to the stimulated electrode exhibited more evoked polysynaptic activity and less spontaneous activity than cells close to a control electrode. Collectively, our results establish that chronic stimulation changes the dynamic regimes of networks. In vitro studies of homeostatic plasticity typically lack any external input, and thus neurons must rely on "spontaneous" activity to reach homeostatic "set points." However, in the presence of external input we propose that homeostatic learning rules seem to shift networks from spontaneous to evoked regimes.
Affiliation(s)
- Anubhuti Goel
- Department of Neurobiology and Psychology, Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, CA 90095, USA
| | | |
|
30
|
Lee TP, Buonomano DV. Unsupervised formation of vocalization-sensitive neurons: a cortical model based on short-term and homeostatic plasticity. Neural Comput 2012; 24:2579-603. [PMID: 22845822 DOI: 10.1162/neco_a_00345] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Indexed: 11/04/2022]
Abstract
The discrimination of complex auditory stimuli relies on the spatiotemporal structure of spike patterns arriving in the cortex. While recordings from auditory areas reveal that many neurons are highly selective to specific spatiotemporal stimuli, the mechanisms underlying this selectivity are unknown. Using computer simulations, we show that selectivity can emerge in neurons in an entirely unsupervised manner. The model is based on recurrently connected spiking neurons and synapses that exhibit short-term synaptic plasticity. During a developmental stage, spoken digits were presented to the network; the only type of long-term plasticity present was a form of homeostatic synaptic plasticity. From an initially unresponsive state, training generated a high percentage of neurons that responded selectively to individual digits. Furthermore, units within the network exhibited a cardinal feature of vocalization-sensitive neurons in vivo: differential responses between forward and reverse stimulus presentations. Direction selectivity deteriorated significantly, however, if short-term synaptic plasticity was removed. These results establish that a simple form of homeostatic plasticity is capable of guiding recurrent networks into regimes in which complex stimuli can be discriminated. In addition, one computational function of short-term synaptic plasticity may be to provide an inherent temporal asymmetry, thus contributing to the characteristic forward-reverse selectivity.
Affiliation(s)
- Tyler P Lee
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
| | | |
|
31
|
Calcium control of triphasic hippocampal STDP. J Comput Neurosci 2012; 33:495-514. [DOI: 10.1007/s10827-012-0397-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Received: 10/14/2011] [Revised: 04/07/2012] [Accepted: 04/11/2012] [Indexed: 10/28/2022]
|
32
|
Abstract
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction error units. A spike-timing dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of the recent past inputs to anticipate on future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are presented, and a new magnetoencephalography experiment in healthy human subjects is presented that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
|
33
|
Heterogeneous reallocation of presynaptic efficacy in recurrent excitatory circuits adapting to inactivity. Nat Neurosci 2011; 15:250-7. [PMID: 22179109 PMCID: PMC3558750 DOI: 10.1038/nn.3004] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.7] [Received: 07/25/2011] [Accepted: 11/04/2011] [Indexed: 02/08/2023]
Abstract
Recurrent excitatory circuits face extreme challenges in balancing efficacy and stability. We recorded from CA3 pyramidal neuron pairs in rat hippocampal slice cultures to characterize synaptic and circuit-level changes in recurrent synapses resulting from long-term inactivity. Chronic tetrodotoxin treatment greatly reduced the percentage of connected CA3-CA3 neurons, but enhanced the strength of the remaining connections; presynaptic release probability sharply increased, whereas quantal size was unaltered. Connectivity was decreased in activity-deprived circuits by functional silencing of synapses, whereas three-dimensional anatomical analysis revealed no change in spine or bouton density or aggregate dendrite length. The silencing arose from enhanced Cdk5 activity and could be reverted by acute Cdk5 inhibition with roscovitine. Our results suggest that recurrent circuits adapt to chronic inactivity by reallocating presynaptic weights heterogeneously, strengthening certain connections while silencing others. This restricts synaptic output and input, preserving signaling efficacy among a subset of neuronal ensembles while protecting network stability.
34.
A hierarchical neuronal model for generation and online recognition of birdsongs. PLoS Comput Biol 2011; 7:e1002303. [PMID: 22194676] [PMCID: PMC3240584] [DOI: 10.1371/journal.pcbi.1002303] [Citation(s) in RCA: 32]
Abstract
The neuronal system underlying learning, generation and recognition of song in birds is one of the best-studied systems in the neurosciences. Here, we use these experimental findings to derive a neurobiologically plausible, dynamic, hierarchical model of birdsong generation and transform it into a functional model of birdsong recognition. The generation model consists of neuronal rate models and includes critical anatomical components like the premotor song-control nucleus HVC (proper name), the premotor nucleus RA (robust nucleus of the arcopallium), and a model of the syringeal and respiratory organs. We use Bayesian inference of this dynamical system to derive a possible mechanism for how birds can efficiently and robustly recognize the songs of their conspecifics in an online fashion. Our results indicate that the specific way birdsong is generated enables a listening bird to robustly and rapidly perceive embedded information at multiple time scales of a song. The resulting mechanism can be useful for investigating the functional roles of auditory recognition areas and providing predictions for future birdsong experiments. How do birds communicate via their songs? Investigating this question may not only lead to a better understanding of communication via birdsong, but many believe that the answer will also give us hints about how humans decode speech from complex sound wave modulations. In birds, the output and neuronal responses of the song generation system can be measured precisely and this has resulted in a considerable body of experimental findings. We used these findings to assemble a complete model of birdsong generation and use it as the basis for constructing a potentially neurobiologically plausible, artificial recognition system based on state-of-the-art Bayesian inference techniques. 
Our artificial system resembles the real birdsong system when performing recognition tasks and may be used as a functional model to explain and predict experimental findings in song recognition.
35.
Bush D, Jin Y. A unified computational model of the genetic regulatory networks underlying synaptic, intrinsic and homeostatic plasticity. BMC Neurosci 2011; 12(Suppl 1):P161. [PMCID: PMC3240258] [DOI: 10.1186/1471-2202-12-s1-p161] [Citation(s) in RCA: 0]
36.
Hanuschkin A, Diesmann M, Morrison A. A reafferent and feed-forward model of song syntax generation in the Bengalese finch. J Comput Neurosci 2011; 31:509-32. [PMID: 21404048] [PMCID: PMC3232349] [DOI: 10.1007/s10827-011-0318-z] [Citation(s) in RCA: 21]
Abstract
Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of the song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in the HVC codes for individual syllables of the song and priming signals from the auditory network reduce the competition between syllables to allow only those transitions that are permitted by the syntax. Both imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model also reproduces for the first time experimental findings on the influence of altered auditory feedback on the song syntax generation, and predicts song- and species-specific low frequency components in the LFP. This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
Affiliation(s)
- Alexander Hanuschkin
- Functional Neural Circuits Group, Faculty of Biology, Albert-Ludwig University of Freiburg, Schänzlestrasse 1, 79104 Freiburg, Germany.
37.
Laje R, Cheng K, Buonomano DV. Learning of temporal motor patterns: an analysis of continuous versus reset timing. Front Integr Neurosci 2011; 5:61. [PMID: 22016724] [PMCID: PMC3192320] [DOI: 10.3389/fnint.2011.00061] [Citation(s) in RCA: 25]
Abstract
Our ability to generate well-timed sequences of movements is critical to an array of behaviors, including the ability to play a musical instrument or a video game. Here we address two questions relating to timing with the goal of better understanding the neural mechanisms underlying temporal processing. First, how do accuracy and variance change over the course of learning of complex spatiotemporal patterns? Second, is the timing of sequential responses most consistent with starting and stopping an internal timer at each interval or with continuous timing? To address these questions we used a psychophysical task in which subjects learned to reproduce a sequence of finger taps in the correct order and at the correct times – much like playing a melody at the piano. This task allowed us to calculate the variance of the responses at different time points using data from the same trials. Our results show that while “standard” Weber’s law is clearly violated, variance does increase as a function of time squared, as expected according to the generalized form of Weber’s law – which separates the source of variance into time-dependent and time-independent components. Over the course of learning, both the time-independent variance and the coefficient of the time-dependent term decrease. Our analyses also suggest that timing of sequential events does not rely on the resetting of an internal timer at each event. We describe and interpret our results in the context of computer simulations that capture some of our psychophysical findings. Specifically, we show that continuous timing, as opposed to “reset” timing, is consistent with “population clock” models in which timing emerges from the internal dynamics of recurrent neural networks.
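The generalized Weber's law described in this abstract, with separate time-independent and time-dependent variance components, can be fit by ordinary least squares. The sketch below is illustrative only (function name and synthetic data are not from the paper):

```python
import numpy as np

def generalized_weber_fit(times, variances):
    """Fit var(t) = sigma0_sq + k * t**2 (generalized Weber's law)."""
    # Design matrix: a constant column and a t^2 column
    A = np.column_stack([np.ones_like(times), times**2])
    coef, *_ = np.linalg.lstsq(A, variances, rcond=None)
    return coef  # [time-independent variance, time-dependent coefficient]

# Synthetic check: recover known components from noiseless data
t = np.linspace(0.25, 2.0, 8)
sigma0_sq, k = 0.01, 0.05
est = generalized_weber_fit(t, sigma0_sq + k * t**2)
```

Learning, on this account, would show up as a decrease in both fitted coefficients across training sessions.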
Affiliation(s)
- Rodrigo Laje
- Department of Neurobiology, University of California Los Angeles, CA, USA
38.
Liu JK. Learning rule of homeostatic synaptic scaling: presynaptic dependent or not. Neural Comput 2011; 23:3145-61. [PMID: 21919784] [DOI: 10.1162/neco_a_00210] [Citation(s) in RCA: 10]
Abstract
It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear. Whether it is dependent on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rules: traditional synaptic scaling (SS) without presynaptic effect and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that transition matrices between consecutive synaptic matrices are distinct: they are diagonal and linear to neural activity under SS, but become nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not SS, which suggests that PSD is a better form to describe homeostatic synaptic scaling plasticity. Matrix analysis used in the study may provide a novel way to examine the stability of learning dynamics.
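The contrast between the two rules can be made concrete with a toy update step. The functional forms below are illustrative assumptions, not the paper's equations: SS scales each neuron's incoming weights by a single postsynaptic term (a diagonal, linear transition), while PSD additionally weights the update by each presynaptic rate (nondiagonal, nonlinear in activity):

```python
import numpy as np

def ss_step(W, post_rate, target, eta=0.1):
    # Traditional synaptic scaling: every incoming weight of a neuron is
    # multiplied by the same postsynaptic error factor.
    return W * (1 + eta * (target - post_rate))[:, None]

def psd_step(W, pre_rate, post_rate, target, eta=0.1):
    # Presynaptic-dependent scaling: the postsynaptic error is additionally
    # gated by each presynaptic rate, so silent inputs are left untouched.
    return W * (1 + eta * np.outer(target - post_rate, pre_rate))

W = np.full((3, 3), 0.5)
post = np.array([0.5, 1.0, 2.0])   # postsynaptic rates relative to target 1.0
pre = np.array([1.0, 0.0, 2.0])    # presynaptic rates; neuron 1 is silent
W_ss = ss_step(W, post, target=1.0)
W_psd = psd_step(W, pre, post, target=1.0)
```

Note the qualitative difference: under SS every synapse onto a deviant neuron changes, whereas under PSD synapses from the silent presynaptic neuron are unchanged.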
Affiliation(s)
- Jian K Liu
- Laboratory of Neurophysics and Physiology, CNRS UMR 8119, Université Paris Descartes, Paris, France.
39.
Abstract
Homeostatic processes that regulate electrical activity in neurones are now an established aspect of physiology and rest on a large body of experimental evidence that points to roles in development, learning and memory, and disease. However, the concepts underlying homeostasis are too often summarized in ways that restrict their explanatory power and obscure important subtleties. Here, we present a review of the underlying theory of homeostasis--control theory--in an attempt to reconcile some existing conceptual problems in the context of neuronal physiology. In addition to clarifying the underlying theory, this review highlights the remaining challenges posed when analysing homeostatic phenomena that underlie the regulation of neuronal excitability. Moreover, we suggest approaches for future experimental and computational work that will further our understanding of neuronal homeostasis and the fundamental neurophysiological functions it serves.
Affiliation(s)
- Timothy O'Leary
- Centre for Integrative Physiology, Hugh Robson Building, University of Edinburgh, George Square, Edinburgh EH8 9XD, UK.
40.
Buonomano DV, Laje R. Population clocks: motor timing with neural dynamics. Trends Cogn Sci 2011; 14:520-7. [PMID: 20889368] [DOI: 10.1016/j.tics.2010.09.002] [Citation(s) in RCA: 107]
Abstract
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing.
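A minimal illustration of the population-clock idea: in a random recurrent network, each time step after a triggering stimulus produces a distinct population state, so elapsed time can be read out from which pattern is currently active. Network size, gain, and the nearest-neighbor decoder here are arbitrary choices for the sketch, not a model from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 40
W = rng.normal(0.0, 1.5 / np.sqrt(N), size=(N, N))  # random recurrent weights

x = rng.normal(size=N)          # initial state set by a triggering stimulus
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x)          # simple rate dynamics
    states[t] = x

def decode_time(state, states):
    # "Reading the clock": the stored state most similar to the current one
    # identifies the elapsed time step.
    return int(np.argmin(np.linalg.norm(states - state, axis=1)))
```

The key point is that no neuron explicitly counts time; timing is implicit in the evolving trajectory of the whole population.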
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Box 951761, Los Angeles, CA 90095, USA.
41.
Li JX, Lisberger SG. Learned timing of motor behavior in the smooth eye movement region of the frontal eye fields. Neuron 2011; 69:159-69. [PMID: 21220106] [DOI: 10.1016/j.neuron.2010.11.043] [Citation(s) in RCA: 12]
Abstract
Proper timing is a critical aspect of motor learning. We report a relationship between a representation of time and an expression of learned timing in neurons in the smooth eye movement region of the frontal eye fields (FEF(SEM)). During prelearning pursuit of target motion at a constant velocity, each FEF(SEM) neuron is most active at a distinct time relative to the onset of pursuit tracking. In response to an instructive change in target direction, a neuron expresses the most learning when the instruction occurs near the time of its maximal participation in prelearning pursuit. Different neurons are most active, and undergo the most learning, at distinct times during pursuit. We suggest that the representation of time in the FEF(SEM) drives learning that is temporally linked to an instructive change in target motion, and that this may be a general function of motor areas of the cortex.
Affiliation(s)
- Jennifer X Li
- Department of Physiology, University of California, San Francisco, CA 94143, USA.
42.
Schrader S, Diesmann M, Morrison A. A compositionality machine realized by a hierarchic architecture of synfire chains. Front Comput Neurosci 2011; 4:154. [PMID: 21258641] [PMCID: PMC3020397] [DOI: 10.3389/fncom.2010.00154] [Citation(s) in RCA: 17]
Abstract
The composition of complex behavior is thought to rely on the concurrent and sequential activation of simpler action components, or primitives. Systems of synfire chains have previously been proposed to account for either the simultaneous or the sequential aspects of compositionality; however, the compatibility of the two aspects has so far not been addressed. Moreover, the simultaneous activation of primitives has up until now only been investigated in the context of reactive computations, i.e., the perception of stimuli. In this study we demonstrate how a hierarchical organization of synfire chains is capable of generating both aspects of compositionality for proactive computations such as the generation of complex and ongoing action. To this end, we develop a network model consisting of two layers of synfire chains. Using simple drawing strokes as a visualization of abstract primitives, we map the feed-forward activity of the upper level synfire chains to motion in two-dimensional space. Our model is capable of producing drawing strokes that are combinations of primitive strokes by binding together the corresponding chains. Moreover, when the lower layer of the network is constructed in a closed-loop fashion, drawing strokes are generated sequentially. The generated pattern can be random or deterministic, depending on the connection pattern between the lower level chains. We propose quantitative measures for simultaneity and sequentiality, revealing a wide parameter range in which both aspects are fulfilled. Finally, we investigate the spiking activity of our model to propose candidate signatures of synfire chain computation in measurements of neural activity during action execution.
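The core synfire mechanism that both layers of this model build on can be sketched as a chain of neuron pools in which an activity volley propagates only if enough convergent input arrives at the next pool. Pool size, connection probability, and threshold below are arbitrary illustrative values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pools, pool_size = 10, 50
p_connect, threshold = 0.6, 20   # each neuron needs >= 20 input spikes to fire

def propagate(n_active_start):
    """Return the number of active neurons in each successive pool."""
    active = n_active_start
    counts = [active]
    for _ in range(n_pools - 1):
        # Spikes each next-pool neuron receives from the currently active set
        inputs = rng.binomial(active, p_connect, size=pool_size)
        active = int(np.sum(inputs >= threshold))
        counts.append(active)
    return counts

full = propagate(pool_size)   # a complete volley propagates down the chain
weak = propagate(10)          # a weak volley dies out within one step
```

Stable propagation of the full volley versus extinction of the weak one is the all-or-none behavior that lets a chain serve as a reliable primitive for composition.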
43.
Postsynaptic GluA1 enables acute retrograde enhancement of presynaptic function to coordinate adaptation to synaptic inactivity. Proc Natl Acad Sci U S A 2010; 107:21806-11. [PMID: 21098665] [DOI: 10.1073/pnas.1016399107] [Citation(s) in RCA: 59]
Abstract
Prolonged blockade of AMPA-type glutamate receptors in hippocampal neuron cultures leads to homeostatic enhancements of pre- and postsynaptic function that appear correlated at individual synapses, suggesting some form of transsynaptic coordination. The respective modifications are important for overall synaptic strength but their interrelationship, dynamics, and molecular underpinnings are unclear. Here we demonstrate that adaptation begins postsynaptically but is ultimately communicated to presynaptic terminals and expressed as an accelerated turnover of synaptic vesicles. Critical postsynaptic modifications occur over hours, but enable retrograde communication within minutes once AMPA receptor (AMPAR) blockade is removed, causing elevation of both spontaneous and evoked vesicle fusion. The retrograde signaling does not require spiking activity and can be interrupted by NBQX, philanthotoxin, postsynaptic BAPTA, or external sequestration of BDNF, consistent with the acute release of a retrograde messenger triggered by postsynaptic Ca(2+) elevation via Ca(2+)-permeable AMPARs.
44.
Hanuschkin A, Herrmann JM, Morrison A, Diesmann M. Compositionality of arm movements can be realized by propagating synchrony. J Comput Neurosci 2010; 30:675-97. [PMID: 20953686] [PMCID: PMC3108016] [DOI: 10.1007/s10827-010-0285-9] [Citation(s) in RCA: 10]
Abstract
We present a biologically plausible spiking neuronal network model of free monkey scribbling that reproduces experimental findings on cortical activity and the properties of the scribbling trajectory. The model is based on the idea that synfire chains can encode movement primitives. Here, we map the propagation of activity in a chain to a linearly evolving preferred velocity, which results in parabolic segments that fulfill the two-thirds power law. Connections between chains that match the final velocity of one encoded primitive to the initial velocity of the next allow the composition of random sequences of primitives with smooth transitions. The model provides an explanation for the segmentation of the trajectory and the experimentally observed deviations of the trajectory from the parabolic shape at primitive transition sites. Furthermore, the model predicts low frequency oscillations (<10 Hz) of the motor cortex local field potential during ongoing movements and increasing firing rates of non-specific motor cortex neurons before movement onset.
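The claim that parabolic segments fulfill the two-thirds power law can be checked directly. Along a parabola r(t) = v0*t + a*t^2/2 the acceleration is constant, so the cross product of velocity and acceleration is constant, and the tangential-velocity form of the law, v = g * kappa**(-1/3) with fixed gain g, holds exactly. The numbers below are an arbitrary worked example, not taken from the paper:

```python
import numpy as np

def speed_curvature_gain(v0, a, ts):
    """Gain g = v * kappa**(1/3) along a planar parabolic segment."""
    v = v0[None, :] + ts[:, None] * a[None, :]   # velocity at each time
    speed = np.linalg.norm(v, axis=1)
    cross = v[:, 0] * a[1] - v[:, 1] * a[0]      # 2D cross product v x a
    curvature = np.abs(cross) / speed**3         # curvature of the path
    return speed * curvature**(1.0 / 3.0)

g = speed_curvature_gain(np.array([1.0, 0.0]),   # initial velocity
                         np.array([0.0, 2.0]),   # constant acceleration
                         np.linspace(0.0, 1.0, 9))
# g is the same at every sample: the two-thirds power law holds on a parabola
```

Since v x a reduces to the constant v0 x a, the gain equals |v0 x a|**(1/3) everywhere on the segment, which is why parabolic primitives are a natural fit for this law.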
Affiliation(s)
- Alexander Hanuschkin
- Functional Neural Circuits Group, Faculty of Biology, Schänzlestrasse 1, 79104, Freiburg, Germany.
45.
Spike-time precision and network synchrony are controlled by the homeostatic regulation of the D-type potassium current. J Neurosci 2010; 30:12885-95. [PMID: 20861392] [DOI: 10.1523/jneurosci.0740-10.2010] [Citation(s) in RCA: 74]
Abstract
Homeostatic plasticity of neuronal intrinsic excitability (HPIE) operates to maintain networks within physiological bounds in response to chronic changes in activity. Classically, this form of plasticity adjusts the output firing level of the neuron through the regulation of voltage-gated ion channels. Ion channels also determine spike timing in individual neurons by shaping subthreshold synaptic and intrinsic potentials. Thus, an intriguing hypothesis is that HPIE can also regulate network synchronization. We show here that the dendrotoxin-sensitive D-type K+ current (ID) disrupts the precision of AP generation in CA3 pyramidal neurons and may, in turn, limit network synchronization. The reduced precision is mediated by the sequence of outward ID followed by inward Na+ current. The homeostatic downregulation of ID increases both spike-time precision and the propensity for synchronization in iteratively constructed networks in vitro. Thus, network synchronization is adjusted in area CA3 through activity-dependent remodeling of ID.
46.
Abstract
Working memory (WM) is the part of the brain's memory system that provides temporary storage and manipulation of information necessary for cognition. Although WM has limited capacity at any given time, it has vast memory content in the sense that it acts on the brain's nearly infinite repertoire of lifetime long-term memories. Using simulations, we show that large memory content and WM functionality emerge spontaneously if we take the spike-timing nature of neuronal processing into account. Here, memories are represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spike-timing patterns, called polychronous patterns; and synapses forming such polychronous neuronal groups (PNGs) are subject to associative synaptic plasticity in the form of both long-term and short-term spike-timing dependent plasticity. While long-term potentiation is essential in PNG formation, we show how short-term plasticity can temporarily strengthen the synapses of selected PNGs and lead to an increase in the spontaneous reactivation rate of these PNGs. This increased reactivation rate, consistent with in vivo recordings during WM tasks, results in high interspike interval variability and irregular, yet systematically changing, elevated firing rate profiles within the neurons of the selected PNGs. Additionally, our theory explains the relationship between such slowly changing firing rates and precisely timed spikes, and it reveals a novel relationship between WM and the perception of time on the order of seconds. Working memory (WM) is the part of the brain's vast memory system that provides temporary storage and manipulation of the information necessary for complex cognitive tasks, such as language comprehension, learning, and reasoning. Despite extensive neuroscience research, its mechanism is not clearly understood. 
We exploit a well-known feature of the brain — its ability to use precisely timed spiking events in its operation — to show how working memory functionality can emerge in the brain's vast memory repertoire. Our neural simulations explain many features of neural activity observed in vivo during working memory tasks, previously thought to be unrelated, and our results point to a relationship between working memory and other mental functions such as perception of time. This work contributes to our understanding of these brain functions and, by giving testable predictions, has the potential to impact the broader neuroscience research field.
Affiliation(s)
- Botond Szatmáry
- The Neurosciences Institute, San Diego, California, United States of America
| | - Eugene M. Izhikevich
- The Neurosciences Institute, San Diego, California, United States of America
47.
Fiete IR, Senn W, Wang CZ, Hahnloser RH. Spike-time-dependent plasticity and heterosynaptic competition organize networks to produce long scale-free sequences of neural activity. Neuron 2010; 65:563-76. [DOI: 10.1016/j.neuron.2010.02.003] [Citation(s) in RCA: 157]
48.
Embedding multiple trajectories in simulated recurrent neural networks in a self-organizing manner. J Neurosci 2009; 29:13172-81. [PMID: 19846705] [DOI: 10.1523/jneurosci.2358-09.2009] [Citation(s) in RCA: 75]
Abstract
Complex neural dynamics produced by the recurrent architecture of neocortical circuits is critical to the cortex's computational power. However, the synaptic learning rules underlying the creation of stable propagation and reproducible neural trajectories within recurrent networks are not understood. Here, we examined synaptic learning rules with the goal of creating recurrent networks in which evoked activity would: (1) propagate throughout the entire network in response to a brief stimulus while avoiding runaway excitation; (2) exhibit spatially and temporally sparse dynamics; and (3) incorporate multiple neural trajectories, i.e., different input patterns should elicit distinct trajectories. We established that an unsupervised learning rule, termed presynaptic-dependent scaling (PSD), can achieve the proposed network dynamics. To quantify the structure of the trained networks, we developed a recurrence index, which revealed that presynaptic-dependent scaling generated a functionally feedforward network when training with a single stimulus. However, training the network with multiple input patterns established that: (1) multiple non-overlapping stable trajectories can be embedded in the network; and (2) the structure of the network became progressively more complex (recurrent) as the number of training patterns increased. In addition, we determined that PSD and spike-timing-dependent plasticity operating in parallel improved the ability of the network to incorporate multiple and less variable trajectories, but also shortened the duration of the neural trajectory. Together, these results establish one of the first learning rules that can embed multiple trajectories, each of which recruits all neurons, within recurrent neural networks in a self-organizing manner.
49.
Abstract
In this issue of Neuron, Sussillo and Abbott describe a new learning rule that helps harness the computational power of recurrent neural networks.
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA 90095, USA.
50.
Buonomano DV, Bramen J, Khodadadifar M. Influence of the interstimulus interval on temporal processing and learning: testing the state-dependent network model. Philos Trans R Soc Lond B Biol Sci 2009; 364:1865-73. [PMID: 19487189] [DOI: 10.1098/rstb.2009.0019] [Citation(s) in RCA: 58]
Abstract
The ability to determine the interval and duration of sensory events is fundamental to most forms of sensory processing, including speech and music perception. Recent experimental data support the notion that different mechanisms underlie temporal processing in the subsecond and suprasecond range. Here, we examine the predictions of one class of subsecond timing models: state-dependent networks. We establish that the interval between the comparison and test intervals (the interstimulus interval, ISI) in a two-interval forced-choice discrimination task alters the accuracy of interval discrimination but not the point of subjective equality; that is, while timing was impaired, subjective time contraction or expansion was not observed. We also examined whether the deficit in temporal processing produced by short ISIs can be reduced by learning, and determined the resulting generalization patterns. These results show that training subjects on a task using a short or long ISI produces dramatically different generalization patterns, suggesting that different forms of perceptual learning are engaged. Together, our results are consistent with the notion that timing in the range of hundreds of milliseconds is local as opposed to centralized, and that rapid stimulus presentation rates impair temporal discrimination. This interference is, however, decreased if the stimuli are presented to different sensory channels.
Affiliation(s)
- Dean V Buonomano
- Brain Research Institute, Department of Neurobiology, University of California-Los Angeles, Los Angeles, CA 90095, USA.