1
Gillett M, Brunel N. Dynamic control of sequential retrieval speed in networks with heterogeneous learning rules. eLife 2024; 12:RP88805. [PMID: 39197099] [PMCID: PMC11357343] [DOI: 10.7554/elife.88805]
Abstract
Temporal rescaling of sequential neural activity has been observed in multiple brain areas during behaviors involving time estimation and motor execution at variable speeds. Temporally asymmetric Hebbian rules have been used in network models to learn and retrieve sequential activity, with characteristics that are qualitatively consistent with experimental observations. However, in these models sequential activity is retrieved at a fixed speed. Here, we investigate the effects of heterogeneity in plasticity rules on network dynamics. In a model in which neurons differ by the degree of temporal symmetry of their plasticity rule, we find that retrieval speed can be controlled by varying external inputs to the network. Neurons with temporally symmetric plasticity rules act as brakes and tend to slow down the dynamics, while neurons with temporally asymmetric rules act as accelerators of the dynamics. We also find that such networks can naturally generate separate 'preparatory' and 'execution' activity patterns with appropriate external inputs.
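The brake/accelerator picture lends itself to a compact simulation. The sketch below is a cartoon of the setup described in the abstract, not the paper's equations: Gaussian patterns, a tanh rate network, and a gain factor on the asymmetrically wired subpopulation standing in for the external input that controls retrieval speed. All amplitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 12                         # neurons, sequence length
xi = rng.standard_normal((P, N))        # random patterns

asym = rng.random(N) < 0.5              # "accelerator" neurons
J = np.zeros((N, N))
for mu in range(P - 1):
    # accelerators: temporally asymmetric rule wiring pattern mu -> mu+1
    J[asym] += 4.0 * np.outer(xi[mu + 1][asym], xi[mu]) / N
    # brakes: temporally symmetric, autoassociative rule
    J[~asym] += 1.5 * np.outer(xi[mu][~asym], xi[mu]) / N

def peak_times(gain_asym, steps=3000, dt=0.05):
    """Cue the first pattern and report when each pattern's overlap peaks."""
    g = np.where(asym, gain_asym, 1.0)  # stand-in for external-input control
    r = np.tanh(xi[0])
    overlaps = [xi @ r / N]
    for _ in range(steps):
        r += dt * (-r + np.tanh(g * (J @ r)))
        overlaps.append(xi @ r / N)
    return np.argmax(np.array(overlaps), axis=0)

# More gain on the accelerator subpopulation -> earlier overlap peaks.
print("overlap peak times, low accelerator gain :", peak_times(0.8))
print("overlap peak times, high accelerator gain:", peak_times(1.2))
```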
Affiliation(s)
- Maxwell Gillett
- Department of Neurobiology, Duke University, Durham, United States
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, United States
- Department of Physics, Duke University, Durham, United States
2
Rostami V, Rost T, Schmitt FJ, van Albada SJ, Riehle A, Nawrot MP. Spiking attractor model of motor cortex explains modulation of neural and behavioral variability by prior target information. Nat Commun 2024; 15:6304. [PMID: 39060243] [PMCID: PMC11282312] [DOI: 10.1038/s41467-024-49889-4]
Abstract
When preparing a movement, we often rely on partial or incomplete information, which can degrade task performance. In behaving monkeys, we show that the degree of cued target information is reflected both in neural variability in motor cortex and in behavioral reaction times. We study the underlying mechanisms in a spiking motor-cortical attractor model. By introducing a biologically realistic network topology in which excitatory neuron clusters are locally balanced with inhibitory neuron clusters, we robustly achieve metastable network activity across a wide range of network parameters. Applied to the monkey task, the model performs target-specific action selection and accurately reproduces the task-epoch-dependent reduction of trial-to-trial variability observed in vivo, where the degree of reduction directly reflects the amount of processed target information, while spiking irregularity remains constant throughout the task. In the context of incomplete cue information, the increased target selection time of the model can explain the increased behavioral reaction times. We conclude that context-dependent neural and behavioral variability is a signature of attractor computation in the motor cortex.
Affiliation(s)
- Vahid Rostami
- Institute of Zoology, University of Cologne, Cologne, Germany
- Thomas Rost
- Institute of Zoology, University of Cologne, Cologne, Germany
- Sacha Jennifer van Albada
- Institute of Zoology, University of Cologne, Cologne, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Center, Jülich, Germany
- Alexa Riehle
- Institute for Advanced Simulation (IAS-6), Jülich Research Center, Jülich, Germany
- UMR7289 Institut de Neurosciences de la Timone (INT), Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université (AMU), Marseille, France
3
Choucry A, Nomoto M, Inokuchi K. Engram mechanisms of memory linking and identity. Nat Rev Neurosci 2024; 25:375-392. [PMID: 38664582] [DOI: 10.1038/s41583-024-00814-0]
Abstract
Memories are thought to be stored in neuronal ensembles referred to as engrams. Studies have suggested that when two memories occur in quick succession, a proportion of their engrams overlap and the memories become linked (in a process known as prospective linking) while maintaining their individual identities. In this Review, we summarize the key principles of memory linking through engram overlap, as revealed by experimental and modelling studies. We describe evidence of the involvement of synaptic memory substrates, spine clustering and non-linear neuronal capacities in prospective linking, and suggest a dynamic somato-synaptic model, in which memories are shared between neurons yet remain separable through distinct dendritic and synaptic allocation patterns. We also bring into focus retrospective linking, in which memories become associated after encoding via offline reactivation, and discuss key temporal and mechanistic differences between prospective and retrospective linking, as well as the potential differences in their cognitive outcomes.
Affiliation(s)
- Ali Choucry
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- Department of Pharmacology and Toxicology, Faculty of Pharmacy, Cairo University, Cairo, Egypt
- Masanori Nomoto
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
- Japan Agency for Medical Research and Development (AMED), Tokyo, Japan
- Kaoru Inokuchi
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
4
She L, Benna MK, Shi Y, Fusi S, Tsao DY. Temporal multiplexing of perception and memory codes in IT cortex. Nature 2024; 629:861-868. [PMID: 38750353] [PMCID: PMC11111405] [DOI: 10.1038/s41586-024-07349-5]
Abstract
A central assumption of neuroscience is that long-term memories are represented by the same brain areas that encode sensory stimuli [1]. Neurons in inferotemporal (IT) cortex represent the sensory percept of visual objects using a distributed axis code [2-4]. Whether and how the same IT neural population represents the long-term memory of visual objects remains unclear. Here we examined how familiar faces are encoded in the IT anterior medial face patch (AM), perirhinal face patch (PR) and temporal pole face patch (TP). In AM and PR we observed that the encoding axis for familiar faces is rotated relative to that for unfamiliar faces at long latency; in TP this memory-related rotation was much weaker. Contrary to previous claims, the relative response magnitude to familiar versus unfamiliar faces was not a stable indicator of familiarity in any patch [5-11]. The mechanism underlying the memory-related axis change is likely intrinsic to IT cortex, because inactivation of PR did not affect axis change dynamics in AM. Overall, our results suggest that memories of familiar faces are represented in AM and perirhinal cortex by a distinct long-latency code, explaining how the same cell population can encode both the percept and memory of faces.
Affiliation(s)
- Liang She
- Division of Biology and Biological Engineering, Caltech, Pasadena, CA, USA
- Marcus K Benna
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY, USA
- Neurobiology Section, Division of Biological Sciences, University of California, San Diego, San Diego, CA, USA
- Yuelin Shi
- Division of Biology and Biological Engineering, Caltech, Pasadena, CA, USA
- Stefano Fusi
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY, USA
- Doris Y Tsao
- Division of Biology and Biological Engineering, Caltech, Pasadena, CA, USA
- Howard Hughes Medical Institute, University of California, Berkeley, CA, USA
- Department of Neuroscience, University of California, Berkeley, CA, USA
5
Pereira-Obilinovic U, Hou H, Svoboda K, Wang XJ. Brain mechanism of foraging: Reward-dependent synaptic plasticity versus neural integration of values. Proc Natl Acad Sci U S A 2024; 121:e2318521121. [PMID: 38551832] [PMCID: PMC10998608] [DOI: 10.1073/pnas.2318521121]
Abstract
During foraging behavior, action values are persistently encoded in neural activity and updated depending on the history of choice outcomes. What is the neural mechanism for action value maintenance and updating? Here, we explore two contrasting network models: synaptic learning of action value versus neural integration. We show that both models can reproduce extant experimental data, but they yield distinct predictions about the underlying biological neural circuits. In particular, the neural integrator model but not the synaptic model requires that reward signals are mediated by neural pools selective for action alternatives and their projections are aligned with linear attractor axes in the valuation system. We demonstrate experimentally observable neural dynamical signatures and feasible perturbations to differentiate the two contrasting scenarios, suggesting that the synaptic model is a more robust candidate mechanism. Overall, this work provides a modeling framework to guide future experimental research on probabilistic foraging.
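Of the two scenarios, the synaptic one is easy to caricature in a few lines. The sketch below is a generic reward-prediction update with softmax choice, meant only to illustrate how action values can live in synaptic weights during probabilistic foraging; the learning rate, reward schedule, and block switch are invented for illustration and are not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(10)

p_reward = np.array([0.4, 0.1])        # true reward probabilities (L, R)
W = np.zeros(2)                         # synaptically stored action values
eta, beta = 0.1, 5.0                    # learning rate, choice inverse temp.

choices = []
for t in range(2000):
    if t == 1000:                       # block switch, as in foraging tasks
        p_reward = p_reward[::-1]
    p_choice = np.exp(beta * W) / np.exp(beta * W).sum()
    a = rng.choice(2, p=p_choice)
    r = float(rng.random() < p_reward[a])
    W[a] += eta * (r - W[a])            # reward-dependent synaptic update
    choices.append(a)

choices = np.array(choices)
print("P(choose L) before/after switch:",
      (choices[:1000] == 0).mean().round(2),
      (choices[1000:] == 0).mean().round(2))
```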
Affiliation(s)
- Ulises Pereira-Obilinovic
- Center for Neural Science, New York University, New York, NY 10003
- Allen Institute for Neural Dynamics, Seattle, WA 98109
- Han Hou
- Allen Institute for Neural Dynamics, Seattle, WA 98109
- Karel Svoboda
- Allen Institute for Neural Dynamics, Seattle, WA 98109
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY 10003
6
Gosti G, Milanetti E, Folli V, de Pasquale F, Leonetti M, Corbetta M, Ruocco G, Della Penna S. A recurrent Hopfield network for estimating meso-scale effective connectivity in MEG. Neural Netw 2024; 170:72-93. [PMID: 37977091] [DOI: 10.1016/j.neunet.2023.11.027]
Abstract
The architecture of communication within the brain, represented by the human connectome, has gained a paramount role in the neuroscience community. Several features of this communication, e.g., the frequency content, spatial topology, and temporal dynamics, are currently well established. However, identifying generative models providing the underlying patterns of inhibition/excitation is very challenging. To address this issue, we present a novel generative model to estimate large-scale effective connectivity from MEG. The dynamic evolution of this model is determined by a recurrent Hopfield neural network with asymmetric connections, and is thus denoted the Recurrent Hopfield Mass Model (RHoMM). Since RHoMM must be applied to binary neurons, it is suitable for analyzing Band Limited Power (BLP) dynamics following a binarization process. We trained RHoMM to predict the MEG dynamics through gradient descent minimization, and we validated it in two steps. First, we showed a significant agreement between the similarity of the effective connectivity patterns and that of the interregional BLP correlation, demonstrating RHoMM's ability to capture individual variability of BLP dynamics. Second, we showed that the simulated BLP correlation connectomes, obtained from RHoMM evolutions of BLP, preserved some important topological features, e.g., the centrality of the real data, ensuring the reliability of RHoMM. Compared to other biophysical models, RHoMM is based on recurrent Hopfield neural networks and thus has the advantage of being data-driven, less demanding in terms of hyperparameters, and scalable to encompass large-scale system interactions. These features are promising for investigating the dynamics of inhibition/excitation at different spatial scales.
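A minimal sketch of the training idea, under stated assumptions: binary ±1 "regional" activity generated here by a ground-truth asymmetric coupling matrix (synthetic data, not band-limited-power MEG), and a gradient-descent fit of an asymmetric Hopfield-style coupling matrix to the one-step transitions. The tanh surrogate for the sign update, the learning rate, and the noise level are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(11)
N, T, eta = 20, 1000, 0.05               # regions, samples, learning rate

# Synthetic binary dynamics from an asymmetric ground-truth coupling.
W_true = rng.standard_normal((N, N)) / np.sqrt(N)
s, data = np.sign(rng.standard_normal(N)), np.empty((T, N))
for t in range(T):
    s = np.sign(W_true @ s + 0.3 * rng.standard_normal(N))
    data[t] = s

W = np.zeros((N, N))                     # estimated effective connectivity
for epoch in range(300):
    pred = np.tanh(data[:-1] @ W.T)      # smooth surrogate of the sign update
    err = data[1:] - pred
    W += eta * (err * (1 - pred ** 2)).T @ data[:-1] / T  # gradient step

acc = (np.sign(data[:-1] @ W.T) == data[1:]).mean()
print(f"one-step prediction accuracy: {acc:.2f}")
print(f"coupling correlation with ground truth: "
      f"{np.corrcoef(W.ravel(), W_true.ravel())[0, 1]:.2f}")
```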
Affiliation(s)
- Giorgio Gosti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; Istituto di Scienze del Patrimonio Culturale, Sede di Roma, Consiglio Nazionale delle Ricerche, CNR-ISPC, Via Salaria km, 34900 Rome, Italy.
- Edoardo Milanetti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy.
- Viola Folli
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy.
- Francesco de Pasquale
- Faculty of Veterinary Medicine, University of Teramo, 64100 Piano D'Accio, Teramo, Italy.
- Marco Leonetti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy.
- Maurizio Corbetta
- Department of Neuroscience, University of Padova, Via Belzoni, 160, 35121, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Via Orus, 2/B, 35129, Padova, Italy; Veneto Institute of Molecular Medicine (VIMM), Via Orus, 2, 35129, Padova, Italy.
- Giancarlo Ruocco
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy.
- Stefania Della Penna
- Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies, "G. d'Annunzio" University of Chieti-Pescara, Via Luigi Polacchi, 11, 66100 Chieti, Italy.
7
Gastaldi C, Gerstner W. A Computational Framework for Memory Engrams. Adv Neurobiol 2024; 38:237-257. [PMID: 39008019] [DOI: 10.1007/978-3-031-62983-9_13]
Abstract
Memory engrams in mice brains are potentially related to groups of concept cells in human brains. A single concept cell in human hippocampus responds, for example, not only to different images of the same object or person but also to its name written down in characters. Importantly, a single mental concept (object or person) is represented by several concept cells and each concept cell can respond to more than one concept. Computational work shows how mental concepts can be embedded in recurrent artificial neural networks as memory engrams and how neurons that are shared between different engrams can lead to associations between concepts. Therefore, observations at the level of neurons can be linked to cognitive notions of memory recall and association chains between memory items.
Affiliation(s)
- Chiara Gastaldi
- Brain Mind Institute - School of Computer and Communication Sciences - School of Life Sciences, EPFL, Lausanne, Switzerland
- Wulfram Gerstner
- Brain Mind Institute - School of Computer and Communication Sciences - School of Life Sciences, EPFL, Lausanne, Switzerland
8
Boscaglia M, Gastaldi C, Gerstner W, Quian Quiroga R. A dynamic attractor network model of memory formation, reinforcement and forgetting. PLoS Comput Biol 2023; 19:e1011727. [PMID: 38117859] [PMCID: PMC10766193] [DOI: 10.1371/journal.pcbi.1011727]
Abstract
Empirical evidence shows that memories that are frequently revisited are easy to recall, and that familiar items involve larger hippocampal representations than less familiar ones. In line with these observations, here we develop a modelling approach to provide a mechanistic understanding of how hippocampal neural assemblies evolve differently, depending on the frequency of presentation of the stimuli. For this, we added an online Hebbian learning rule, background firing activity, neural adaptation and heterosynaptic plasticity to a rate attractor network model, thus creating dynamic memory representations that can persist, increase or fade according to the frequency of presentation of the corresponding memory patterns. Specifically, we show that a dynamic interplay between Hebbian learning and background firing activity can explain the relationship between the memory assembly sizes and their frequency of stimulation. Frequently stimulated assemblies increase their size independently from each other (i.e. creating orthogonal representations that do not share neurons, thus avoiding interference). Importantly, connections between neurons of assemblies that are not further stimulated become labile so that these neurons can be recruited by other assemblies, providing a neuronal mechanism of forgetting.
Affiliation(s)
- Marta Boscaglia
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- School of Psychology and Vision Sciences, University of Leicester, United Kingdom
- Chiara Gastaldi
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Rodrigo Quian Quiroga
- Centre for Systems Neuroscience, University of Leicester, United Kingdom
- Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, People’s Republic of China
9
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194] [PMCID: PMC10461857] [DOI: 10.1371/journal.pcbi.1011315]
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
10
Levenstein D, Okun M. Logarithmically scaled, gamma distributed neuronal spiking. J Physiol 2023; 601:3055-3069. [PMID: 36086892] [PMCID: PMC10952267] [DOI: 10.1113/jp282758]
Abstract
Naturally log-scaled quantities abound in the nervous system. Distributions of these quantities have non-intuitive properties, which have implications for data analysis and the understanding of neural circuits. Here, we review the log-scaled statistics of neuronal spiking and the relevant analytical probability distributions. Recent work using log-scaling revealed that interspike intervals of forebrain neurons segregate into discrete modes reflecting spiking at different timescales and are each well-approximated by a gamma distribution. Each neuron spends most of the time in an irregular spiking 'ground state' with the longest intervals, which determines the mean firing rate of the neuron. Across the entire neuronal population, firing rates are log-scaled and well approximated by the gamma distribution, with a small number of highly active neurons and an overabundance of low rate neurons (the 'dark matter'). These results are intricately linked to a heterogeneous balanced operating regime, which confers upon neuronal circuits multiple computational advantages and has evolutionarily ancient origins.
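A quick numerical illustration of the population-level claim: if log firing rates follow a gamma distribution, the rate distribution itself is strongly skewed, with a mean pulled up by a few highly active neurons and a large mass of low-rate "dark matter". The shape, scale, and offset below are invented for illustration, not fitted to the datasets in the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gamma distribution of log10 firing rates.
shape, scale, offset = 2.0, 0.35, -1.5
log_rates = rng.gamma(shape, scale, size=10_000) + offset
rates = 10.0 ** log_rates                      # spikes/s

print(f"mean rate   : {rates.mean():.2f} Hz")  # dominated by a few hot neurons
print(f"median rate : {np.median(rates):.2f} Hz")
print(f"fraction below 0.1 Hz ('dark matter'): {(rates < 0.1).mean():.2f}")
```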
Affiliation(s)
- Daniel Levenstein
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Mila, Montréal, QC, Canada
- Michael Okun
- Department of Psychology and Neuroscience Institute, University of Sheffield, Sheffield, UK
11
Clusella P, Köksal-Ersöz E, Garcia-Ojalvo J, Ruffini G. Comparison between an exact and a heuristic neural mass model with second-order synapses. Biol Cybern 2023; 117:5-19. [PMID: 36454267] [PMCID: PMC10160168] [DOI: 10.1007/s00422-022-00952-7]
Abstract
Neural mass models (NMMs) are designed to reproduce the collective dynamics of neuronal populations. A common framework for NMMs assumes heuristically that the output firing rate of a neural population can be described by a static nonlinear transfer function (NMM1). However, a recent exact mean-field theory for quadratic integrate-and-fire (QIF) neurons challenges this view by showing that the mean firing rate is not a static function of the neuronal state but follows two coupled nonlinear differential equations (NMM2). Here we analyze and compare these two descriptions in the presence of second-order synaptic dynamics. First, we derive the mathematical equivalence between the two models in the infinitely slow synapse limit, i.e., we show that NMM1 is an approximation of NMM2 in this regime. Next, we evaluate the applicability of this limit in the context of realistic physiological parameter values by analyzing the dynamics of models with inhibitory or excitatory synapses. We show that NMM1 fails to reproduce important dynamical features of the exact model, such as the self-sustained oscillations of an inhibitory interneuron QIF network. Furthermore, in the exact model but not in the limit one, stimulation of a pyramidal cell population induces resonant oscillatory activity whose peak frequency and amplitude increase with the self-coupling gain and the external excitatory input. This may play a role in the enhanced response of densely connected networks to weak uniform inputs, such as the electric fields produced by noninvasive brain stimulation.
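The "NMM2" description refers to the exact quadratic integrate-and-fire mean field, whose core is a pair of coupled ODEs for the population rate r and mean voltage v (the Montbrió-Pazó-Roxin equations). The sketch below integrates those two equations with instantaneous coupling only, leaving out the second-order synaptic filter the paper analyzes; all parameter values are illustrative.

```python
import numpy as np

tau, delta, eta, J, I_ext = 0.02, 1.0, -5.0, 15.0, 0.0  # tau in s; rest dimensionless

def nmm2_rhs(r, v):
    """Exact QIF mean field: two coupled ODEs, not a static transfer function."""
    dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
    dv = (v ** 2 + eta + J * tau * r + I_ext - (np.pi * tau * r) ** 2) / tau
    return dr, dv

r, v, dt = 0.0, -2.0, 1e-5
for _ in range(int(0.5 / dt)):         # 500 ms of simulated time, Euler steps
    dr, dv = nmm2_rhs(r, v)
    r, v = r + dt * dr, v + dt * dv
print(f"steady state: r = {r:.1f} spikes/s, v = {v:.2f}")
```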
Affiliation(s)
- Pau Clusella
- Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Barcelona Biomedical Research Park, 08003, Barcelona, Spain
- Elif Köksal-Ersöz
- LTSI - UMR 1099, INSERM, Univ Rennes, Campus Beaulieu, 35000, Rennes, France
- Jordi Garcia-Ojalvo
- Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Barcelona Biomedical Research Park, 08003, Barcelona, Spain
- Giulio Ruffini
- Brain Modeling Department, Neuroelectrics, Av. Tibidabo, 47b, 08035, Barcelona, Spain
12
Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. [PMID: 36640766] [PMCID: PMC9992137] [DOI: 10.1016/j.neuron.2022.12.016]
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
Affiliation(s)
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France
13
Goldt S, Krzakala F, Zdeborová L, Brunel N. Bayesian reconstruction of memories stored in neural networks from their connectivity. PLoS Comput Biol 2023; 19:e1010813. [PMID: 36716332] [PMCID: PMC9910750] [DOI: 10.1371/journal.pcbi.1010813]
Abstract
The advent of comprehensive synaptic wiring diagrams of large neural circuits has created the field of connectomics and given rise to a number of open research questions. One such question is whether it is possible to reconstruct the information stored in a recurrent network of neurons, given its synaptic connectivity matrix. Here, we address this question by determining when solving such an inference problem is theoretically possible in specific attractor network models and by providing a practical algorithm to do so. The algorithm builds on ideas from statistical physics to perform approximate Bayesian inference and is amenable to exact analysis. We study its performance on three different models, compare the algorithm to standard algorithms such as PCA, and explore the limitations of reconstructing stored patterns from synaptic connectivity.
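To make the inference problem concrete, here is a toy version of its setup together with the spectral baseline the paper compares against (a PCA-style eigendecomposition); the Bayesian message-passing algorithm itself does not fit in a few lines. Pattern statistics and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 500, 5
patterns = rng.choice([-1, 1], size=(P, N)).astype(float)

# Hopfield-style Hebbian connectivity storing the patterns.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

# Spectral (PCA-like) baseline: the top eigenvectors of J span
# approximately the same subspace as the stored patterns.
eigvals, eigvecs = np.linalg.eigh(J)
top = eigvecs[:, -P:]                          # P largest eigenvalues

proj = top @ (top.T @ patterns.T)              # project patterns onto subspace
captured = (np.linalg.norm(proj, axis=0) ** 2) / N
print("norm fraction of each pattern inside the top-P subspace:",
      captured.round(3))
```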
Affiliation(s)
- Sebastian Goldt
- International School of Advanced Studies (SISSA), Trieste, Italy
- Florent Krzakala
- IdePHICS laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Lenka Zdeborová
- SPOC laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, North Carolina, United States of America
- Department of Physics, Duke University, Durham, North Carolina, United States of America
14
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488] [PMCID: PMC9894562] [DOI: 10.1371/journal.pcbi.1010855]
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France
15
Wang B, Aljadeff J. Multiplicative Shot-Noise: A New Route to Stability of Plastic Networks. Phys Rev Lett 2022; 129:068101. [PMID: 36018633] [DOI: 10.1103/physrevlett.129.068101]
Abstract
Fluctuations of synaptic weights, among many other physical, biological, and ecological quantities, are driven by coincident events of two "parent" processes. We propose a multiplicative shot-noise model that can capture the behaviors of a broad range of such natural phenomena, and analytically derive an approximation that accurately predicts its statistics. We apply our results to study the effects of a multiplicative synaptic plasticity rule that was recently extracted from measurements in physiological conditions. Using mean-field theory analysis and network simulations, we investigate how this rule shapes the connectivity and dynamics of recurrent spiking neural networks. The multiplicative plasticity rule is shown to support efficient learning of input stimuli, and it gives a stable, unimodal synaptic-weight distribution with a large fraction of strong synapses. The strong synapses remain stable over long times but do not "run away." Our results suggest that the multiplicative shot-noise offers a new route to understand the tradeoff between flexibility and stability in neural circuits and other dynamic networks.
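The "coincident events of two parent processes" idea can be seen in a toy simulation: a weight jumps only when pre- and postsynaptic Poisson trains land in the same time bin, with a jump proportional to the current weight (the multiplicative part) and a slow relaxation toward a resting value. Rates, jump size, and relaxation time are illustrative, not the paper's fitted rule.

```python
import numpy as np

rng = np.random.default_rng(3)

rate_pre, rate_post = 20.0, 20.0     # parent Poisson rates (Hz)
dt, T = 1e-3, 500.0                  # time step and duration (s)
a_jump = 0.05                        # relative jump at a coincidence
w_rest, tau_w = 0.5, 20.0            # resting weight, relaxation time (s)

n = int(T / dt)
w = np.empty(n); w[0] = w_rest
for t in range(1, n):
    pre = rng.random() < rate_pre * dt
    post = rng.random() < rate_post * dt
    jump = a_jump * w[t - 1] if (pre and post) else 0.0  # multiplicative shot
    w[t] = w[t - 1] + jump + dt * (w_rest - w[t - 1]) / tau_w

print(f"mean weight {w.mean():.2f}, std {w.std():.2f}, max {w.max():.2f}")
```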
Affiliation(s)
- Bin Wang
- Department of Physics, University of California San Diego, La Jolla, California 92093, USA
- Johnatan Aljadeff
- Department of Neurobiology, University of California San Diego, La Jolla, California 92093, USA
16
Herbert E, Ostojic S. The impact of sparsity in low-rank recurrent neural networks. PLoS Comput Biol 2022; 18:e1010426. [PMID: 35944030] [PMCID: PMC9390915] [DOI: 10.1371/journal.pcbi.1010426]
Abstract
Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently developed class of computational models that offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically realistic extent.
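The bulk-plus-outlier picture is easy to reproduce numerically. Below, a rank-one matrix built from correlated vectors (so it has an order-one outlier eigenvalue) is randomly sparsified and rescaled; the spectrum then shows a continuous bulk around the origin with the outlier preserved. The size and connection probability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, keep = 1000, 0.2                   # network size, connection probability

m = rng.standard_normal(N)
n = 2.0 * m + rng.standard_normal(N)  # correlated -> outlier eigenvalue ~ 2
J_full = np.outer(m, n) / N           # rank-one connectivity

mask = rng.random((N, N)) < keep      # random sparsification
J_sparse = J_full * mask / keep       # rescaled to preserve the mean

eig = np.sort(np.abs(np.linalg.eigvals(J_sparse)))
print(f"outlier |eigenvalue| : {eig[-1]:.2f}")   # close to m.n/N, about 2
print(f"bulk radius (approx) : {eig[-2]:.2f}")   # new bulk from sparsification
```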
Affiliation(s)
- Elizabeth Herbert
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’Études Cognitives, INSERM U960, École Normale Supérieure - PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’Études Cognitives, INSERM U960, École Normale Supérieure - PSL University, Paris, France
17
Valente A, Ostojic S, Pillow J. Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models. Neural Comput 2022; 34:1871-1892. [PMID: 35896161] [DOI: 10.1162/neco_a_01522]
Abstract
A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
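The RNN-to-LDS direction of the mapping can be checked numerically in the simplest case: for a deterministic linear low-rank RNN, the projections z = Lᵀx/N obey a closed linear recursion, i.e., a latent linear dynamical system. This is a minimal sketch; the paper's general construction, with noise and partial observations, needs latent dimensionality up to twice the rank.

```python
import numpy as np

rng = np.random.default_rng(5)
N, R, dt = 200, 2, 0.1                  # neurons, rank, Euler step

M = rng.standard_normal((N, R))
L = rng.standard_normal((N, R))         # connectivity J = M @ L.T / N
x = rng.standard_normal(N)

zs = []
for _ in range(100):                    # linear low-rank RNN dynamics
    x = x + dt * (-x + (M @ (L.T @ x)) / N)
    zs.append(L.T @ x / N)              # candidate latent state
zs = np.array(zs)

# The latents close on themselves: z_{t+1} = A z_t exactly, with
# A = (1 - dt) I + dt * (L.T @ M / N) -- a latent linear dynamical system.
A = (1 - dt) * np.eye(R) + dt * (L.T @ M) / N
err = np.abs(zs[1:] - zs[:-1] @ A.T).max()
print(f"max deviation from the latent linear recursion: {err:.2e}")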
Affiliation(s)
- Adrian Valente
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Jonathan Pillow
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
18
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. [PMID: 34993613] [DOI: 10.1007/s00422-021-00916-3]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
19
Bing Z, Sewisy AE, Zhuang G, Walter F, Morin FO, Huang K, Knoll A. Toward Cognitive Navigation: Design and Implementation of a Biologically Inspired Head Direction Cell Network. IEEE Trans Neural Netw Learn Syst 2022; 33:2147-2158. [PMID: 34860654] [DOI: 10.1109/tnnls.2021.3128380]
Abstract
As a vital cognitive function of animals, navigation is built first on accurate perception of the directional heading in the environment. Head direction cells (HDCs), found in the limbic system of animals, have been shown to play an important role in identifying the directional heading allocentrically in the horizontal plane, independent of the animal's location and the ambient conditions of the environment. However, practical HDC models that can be implemented in robotic applications are rarely investigated, especially those that are biologically plausible and yet applicable to the real world. In this article, we propose a computational HDC network that is consistent with several neurophysiological findings concerning biological HDCs and then implement it in robotic navigation tasks. The HDC network keeps a representation of the directional heading relying only on angular velocity as an input. We examine the proposed HDC model in extensive simulations and real-world experiments and demonstrate its excellent performance in terms of accuracy and real-time capability.
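The underlying computation (holding a heading estimate in an attractor and shifting it with an angular-velocity signal) can be sketched generically. The network below is a textbook ring attractor, not the paper's model; gains, time constants, and the velocity coupling are illustrative, and calibrating the bump's rotation rate and sign to physical units is part of what such a system must engineer.

```python
import numpy as np

N = 100                                   # head-direction cells on a ring
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Textbook ring attractor: local excitation minus global inhibition keeps
# a bump of activity whose position encodes heading; an angular-velocity
# input shifts the bump.
W = 8.0 * np.cos(theta[:, None] - theta[None, :]) - 3.0
d_theta = np.roll(np.eye(N), -1, axis=0) - np.eye(N)   # discrete derivative

dt, tau, ang_vel = 1e-3, 0.02, 2.0        # step (s), time constant (s), rad/s
r = np.exp(3.0 * np.cos(theta)); r /= r.max()          # bump at heading 0
for _ in range(int(3.0 / dt)):            # integrate for 3 s
    drive = W @ r / N + ang_vel * (d_theta @ r)        # velocity gain = 1 here
    r += dt / tau * (-r + np.tanh(np.maximum(drive, 0.0)))

heading = np.angle(np.sum(r * np.exp(1j * theta)))
print(f"decoded heading after 3 s: {heading:+.2f} rad")
# The rotation rate is proportional to ang_vel but must be calibrated
# (and its sign convention fixed) before use in real navigation.
```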
20
Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. [PMID: 34717794] [PMCID: PMC9194601] [DOI: 10.1016/j.neuron.2021.10.011]
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.
21
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. Phys Rev X 2022; 12:011011. [PMID: 36545030] [PMCID: PMC9762509] [DOI: 10.1103/physrevx.12.011011]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices to ML practitioners.
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
- Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
- David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
22
Gastaldi C, Schwalger T, De Falco E, Quiroga RQ, Gerstner W. When shared concept cells support associations: Theory of overlapping memory engrams. PLoS Comput Biol 2021; 17:e1009691. [PMID: 34968383] [PMCID: PMC8754331] [DOI: 10.1371/journal.pcbi.1009691]
Abstract
Assemblies of neurons, called concept cells, encode acquired concepts in the human medial temporal lobe. Those concept cells that are shared between two assemblies have been hypothesized to encode associations between concepts. Here we test this hypothesis in a computational model of attractor neural networks. We find that for concepts encoded in sparse neural assemblies there is a minimal fraction c_min of neurons shared between assemblies below which associations cannot be reliably implemented, and a maximal fraction c_max of shared neurons above which single concepts can no longer be retrieved. In the presence of a periodically modulated background signal, such as hippocampal oscillations, recall takes the form of association chains reminiscent of those postulated by theories of free recall of words. Predictions of an iterative overlap-generating model match experimental data on the number of concepts to which a neuron responds.

Experimental evidence suggests that associations between concepts are encoded in the hippocampus by cells shared between neuronal assemblies (“overlap” of concepts). What is the necessary overlap that ensures a reliable encoding of associations? Under which conditions can associations induce a simultaneous or a chain-like activation of concepts? Our theoretical model shows that the ideal overlap presents a tradeoff: the overlap should be larger than a minimum value in order to reliably encode associations, but lower than a maximum value to prevent loss of individual memories. Our theory explains experimental data from the human medial temporal lobe and provides a mechanism for chain-like recall in the presence of inhibition, while still allowing for simultaneous recall if inhibition is weak.
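The c_min intuition can be illustrated with a toy sparse attractor network: two binary assemblies sharing a controllable fraction of cells, a covariance-type Hebbian weight matrix, and threshold dynamics. Below, cueing assembly A co-activates assembly B only once the shared fraction is large enough; all numbers (sparsity, threshold, sizes) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, f = 2000, 0.05                       # neurons, coding sparsity
n_act = int(f * N)

def make_pair(shared_frac):
    """Two sparse binary assemblies sharing a fraction of their cells."""
    shared = int(shared_frac * n_act)
    cells = rng.choice(N, 2 * n_act - shared, replace=False)
    a = np.zeros(N); a[cells[:n_act]] = 1.0
    b = np.zeros(N); b[cells[n_act - shared:]] = 1.0
    return a, b

def recall(cue, pats, theta=0.35, steps=20):
    """Threshold dynamics with a covariance-rule weight matrix."""
    J = sum(np.outer(p - f, p - f) for p in pats) / (N * f * (1.0 - f))
    x = cue.copy()
    for _ in range(steps):
        x = (J @ x > theta).astype(float)
    return [x @ p / n_act for p in pats]   # final overlap with each assembly

for c in (0.0, 0.2, 0.6):
    a, b = make_pair(c)
    m_a, m_b = recall(a, [a, b])
    print(f"shared fraction {c:.1f}: overlap with A = {m_a:.2f}, B = {m_b:.2f}")
```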
Affiliation(s)
- Chiara Gastaldi
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Tilo Schwalger
- Institut für Mathematik, Technische Universität Berlin, Berlin, Germany
- Emanuela De Falco
- School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Rodrigo Quian Quiroga
- Centre for Systems Neuroscience, University of Leicester, Leicester, United Kingdom
- Peng Cheng Laboratory, Shenzhen, China
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
23
Trial-to-Trial Variability of Spiking Delay Activity in Prefrontal Cortex Constrains Burst-Coding Models of Working Memory. J Neurosci 2021; 41:8928-8945. [PMID: 34551937] [DOI: 10.1523/jneurosci.0167-21.2021]
Abstract
A hallmark neuronal correlate of working memory (WM) is stimulus-selective spiking activity of neurons in PFC during mnemonic delays. These observations have motivated an influential computational modeling framework in which WM is supported by persistent activity. Recently, this framework has been challenged by arguments that the observed persistent activity may be an artifact of trial-averaging, which potentially masks high variability of delay activity at the single-trial level. In an alternative scenario, WM delay activity could be encoded in bursts of selective neuronal firing that occur intermittently across trials. However, this alternative proposal has not been tested on single-neuron spike-train data. Here, we developed a framework for addressing this issue by characterizing the trial-to-trial variability of neuronal spiking, quantified by the Fano factor (FF). By building a doubly stochastic Poisson spiking model, we first demonstrated that the burst-coding proposal implies a significant increase in FF positively correlated with firing rate, and thus a loss of stability across trials during the delay. Simulation of spiking cortical circuit WM models further confirmed that FF is a sensitive measure that can dissociate distinct WM mechanisms. We then tested these predictions on datasets of single-neuron recordings from macaque PFC during three WM tasks. In sharp contrast to the burst-coding model predictions, we found only a small fraction of neurons showing increased WM-dependent burstiness, and stability across trials during the delay was strengthened in the empirical data. Therefore, reduced trial-to-trial variability during the delay provides strong constraints on the contribution of single-neuron intermittent bursting to WM maintenance.

SIGNIFICANCE STATEMENT: There are diverging classes of theoretical models explaining how information is maintained in working memory by cortical circuits. In an influential model class, neurons exhibit persistent, memorandum-selective elevated firing, whereas a recently developed class of burst-coding models suggests that persistent activity is an artifact of trial-averaging and that spiking is sparse in each single trial, subserved by brief intermittent bursts. However, this alternative picture has not been characterized or tested on empirical spike-train data. Here we combine mathematical analysis, computational model simulation, and experimental data analysis to empirically test these two classes of models, and we show that the trial-to-trial variability of empirical spike trains is not consistent with burst coding. These findings provide constraints for theoretical models of working memory.
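The core diagnostic is easy to state numerically: for a doubly stochastic Poisson process, trial-to-trial rate fluctuations inflate the spike-count Fano factor above the Poisson value of 1. The sketch below draws a trial-wise delay rate from a gamma distribution (shape values invented for illustration) and shows FF rising as rate variability grows, which is the signature the recorded PFC data did not show.

```python
import numpy as np

rng = np.random.default_rng(8)
n_trials, T, mean_rate = 400, 1.0, 20.0   # trials, delay (s), spikes/s

def fano(shape):
    """Doubly stochastic Poisson: rate varies across trials (gamma-distributed);
    shape -> infinity recovers a stationary Poisson process with FF -> 1."""
    lam = rng.gamma(shape, mean_rate / shape, size=n_trials)
    counts = rng.poisson(lam * T)
    return counts.var() / counts.mean()

for shape in (1e6, 4.0, 1.0):             # stationary -> increasingly bursty
    print(f"rate-variability shape {shape:>9.0f}: Fano factor = {fano(shape):.2f}")
```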
24
Pietras B, Gallice N, Schwalger T. Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys Rev E 2021; 102:022407. [PMID: 32942450] [DOI: 10.1103/physreve.102.022407]
Abstract
The macroscopic dynamics of large populations of neurons can be mathematically analyzed using low-dimensional firing-rate or neural-mass models. However, these models fail to capture spike synchronization effects and nonstationary responses of the population activity to rapidly changing stimuli. Here we derive low-dimensional firing-rate models for homogeneous populations of neurons modeled as time-dependent renewal processes. The class of renewal neurons includes integrate-and-fire models driven by white noise and has been frequently used to model neuronal refractoriness and spike synchronization dynamics. The derivation is based on an eigenmode expansion of the associated refractory density equation, which generalizes previous spectral methods for Fokker-Planck equations to arbitrary renewal models. We find a simple relation between the eigenvalues characterizing the timescales of the firing rate dynamics and the Laplace transform of the interspike interval density, for which explicit expressions are available for many renewal models. Retaining only the first eigenmode already yields a reliable low-dimensional approximation of the firing-rate dynamics that captures spike synchronization effects and fast transient dynamics at stimulus onset. We explicitly demonstrate the validity of our model for a large homogeneous population of Poisson neurons with absolute refractoriness and other renewal models that admit an explicit analytical calculation of the eigenvalues. The eigenmode expansion presented here provides a systematic framework for alternative firing-rate models in computational neuroscience based on spiking neuron dynamics with refractoriness.
Affiliation(s)
- Bastian Pietras
- Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Noé Gallice
- Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Station 15, CH-1015 Lausanne, Switzerland
- Tilo Schwalger
- Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany

25

Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. PMID: 34175521. DOI: 10.1016/j.conb.2021.05.005.
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA

26

Wyrick D, Mazzucato L. State-Dependent Regulation of Cortical Processing Speed via Gain Modulation. J Neurosci 2021; 41:3988-4005. PMID: 33858943. PMCID: PMC8176754. DOI: 10.1523/jneurosci.1895-20.2021.
Abstract
To thrive in dynamic environments, animals must be capable of rapidly and flexibly adapting behavioral responses to a changing context and internal state. Examples of behavioral flexibility include faster stimulus responses when attentive and slower responses when distracted. Contextual or state-dependent modulations may occur early in the cortical hierarchy and may be implemented via top-down projections from corticocortical or neuromodulatory pathways. However, the computational mechanisms mediating the effects of such projections are not known. Here, we introduce a theoretical framework to classify the effects of cell type-specific top-down perturbations on the information processing speed of cortical circuits. Our theory demonstrates that perturbation effects on stimulus processing can be predicted by intrinsic gain modulation, which controls the timescale of the circuit dynamics. Our theory leads to counterintuitive effects, such as improved performance with increased input variance. We tested the model predictions using large-scale electrophysiological recordings from the visual hierarchy in freely running mice, where we found that a decrease in single-cell intrinsic gain during locomotion led to an acceleration of visual processing. Our results establish a novel theory of cell type-specific perturbations, applicable to top-down modulation as well as optogenetic and pharmacological manipulations. Our theory links connectivity, dynamics, and information processing via gain modulation.

SIGNIFICANCE STATEMENT: To thrive in dynamic environments, animals adapt their behavior to changing circumstances and different internal states. Examples of behavioral flexibility include faster responses to sensory stimuli when attentive and slower responses when distracted. Previous work suggested that contextual modulations may be implemented via top-down inputs to sensory cortex coming from higher brain areas or neuromodulatory pathways. Here, we introduce a theory explaining how the speed at which sensory cortex processes incoming information is adjusted by changes in these top-down projections, which control the timescale of neural activity. We tested our model predictions in freely running mice, revealing that locomotion accelerates visual processing. Our theory is applicable to internal modulation as well as optogenetic and pharmacological manipulations and links circuit connectivity, dynamics, and information processing.
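The link between gain and processing timescale can be illustrated with a one-unit caricature (not the paper's circuit model): in a linear rate unit with recurrent feedback, the effective time constant is tau/(1 - g*w), so lowering the gain g speeds up the response to a step input. The values of `tau`, `w`, and the gains are illustrative assumptions.

```python
import numpy as np

# One linear rate unit with recurrent feedback w and input gain g:
#   tau * dx/dt = -x + g*(w*x + I)
# The effective time constant is tau_eff = tau / (1 - g*w): lowering the
# gain shrinks tau_eff and accelerates stimulus processing.
tau, w, dt = 0.02, 0.8, 1e-4

def step_response_time(g, T=2.0):
    x, target = 0.0, g / (1.0 - g * w)           # steady state for I = 1
    for i in range(int(T / dt)):
        x += dt / tau * (-x + g * (w * x + 1.0))
        if x >= target * (1.0 - np.exp(-1.0)):   # one effective time constant
            return i * dt
    return float("nan")

for g in (1.0, 0.8, 0.6):
    print(f"gain {g:.1f}: reaches (1 - 1/e) of steady state in "
          f"{1e3 * step_response_time(g):6.1f} ms "
          f"(theory: tau_eff = {1e3 * tau / (1.0 - g * w):5.1f} ms)")
```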
Affiliation(s)
- David Wyrick
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403
- Luca Mazzucato
- Department of Biology and Institute of Neuroscience
- Departments of Mathematics and Physics, University of Oregon, Eugene, Oregon 97403

27

28

Schönsberg F, Roudi Y, Treves A. Efficiency of Local Learning Rules in Threshold-Linear Associative Networks. Phys Rev Lett 2021; 126:018301. PMID: 33480759. DOI: 10.1103/physrevlett.126.018301.
Abstract
We derive the Gardner storage capacity for associative networks of threshold-linear units and show that, with Hebbian learning, they can operate closer to this Gardner bound than binary networks, and even surpass it. This is largely achieved through a sparsification of the retrieved patterns, which we analyze for theoretical and empirical distributions of activity. Since reaching the optimal capacity via nonlocal learning rules like backpropagation requires slow and neurally implausible training procedures, our results indicate that one-shot self-organized Hebbian learning can be just as efficient.
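As a hedged illustration of one-shot Hebbian storage in threshold-linear units (a covariance rule plus a rate ceiling for stability, not the Gardner calculation itself), the sketch below performs pattern completion from a degraded cue; the network size, load, sparseness, gain, and threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, a = 1000, 50, 0.1            # neurons, stored patterns, coding sparseness

# Sparse binary patterns stored with a one-shot covariance (Hebbian) rule
xi = (rng.random((P, N)) < a).astype(float)
J = (xi - a).T @ (xi - a) / (N * a * (1 - a))
np.fill_diagonal(J, 0.0)

def retrieve(cue, gain=2.0, theta=0.3, r_max=1.0, steps=50):
    """Threshold-linear dynamics, clipped at r_max for stability."""
    r = cue.copy()
    for _ in range(steps):
        r = np.clip(gain * (J @ r - theta), 0.0, r_max)
    return r

cue = xi[0] * (rng.random(N) < 0.8)    # corrupt pattern 0: drop 20% of its units
r = retrieve(cue)
active = r > 0.5
print("hits:", int((active & (xi[0] == 1)).sum()), "/", int(xi[0].sum()),
      "  false alarms:", int((active & (xi[0] == 0)).sum()))
```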
Affiliation(s)
- Yasser Roudi
- Kavli Institute for Systems Neuroscience & Centre for Neural Computation, NTNU, Trondheim, Norway
- Alessandro Treves
- SISSA, Scuola Internazionale Superiore di Studi Avanzati, Trieste, Italy
- Kavli Institute for Systems Neuroscience & Centre for Neural Computation, NTNU, Trondheim, Norway

29

Gillett M, Pereira U, Brunel N. Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. Proc Natl Acad Sci U S A 2020; 117:29948-29958. PMID: 33177232. PMCID: PMC7703604. DOI: 10.1073/pnas.1918674117.
Abstract
Sequential activity is a prominent feature of many neural systems, in multiple behavioral contexts. Here, we investigate how Hebbian rules lead to storage and recall of random sequences of inputs in both rate and spiking recurrent networks. In the case of the simplest (bilinear) rule, we extensively characterize the regions in parameter space that allow sequence retrieval and analytically compute the storage capacity of the network. We show that nonlinearities in the learning rule can lead to sparse sequences and find that sequences maintain robust decoding, but display highly labile dynamics, in response to continuous changes in the connectivity matrix, similar to recent observations in hippocampus and parietal cortex.

Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
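The bilinear temporally asymmetric rule can be sketched in a binary caricature of the rate networks studied here (an illustration, not the paper's model): cueing the first pattern makes the network state step through the stored sequence, visible as a moving peak in the pattern overlaps. Network size and sequence length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 2000, 10                               # neurons, sequence length
xi = rng.choice([-1.0, 1.0], size=(P, N))     # random patterns to be chained

# Temporally asymmetric (bilinear) Hebbian rule: pattern mu recalls pattern mu+1
J = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

s = xi[0].copy()                              # cue the first pattern
for t in range(P):
    overlaps = xi @ s / N                     # correlation with each stored pattern
    print(f"t = {t}: best match = pattern {overlaps.argmax()}, "
          f"overlap = {overlaps.max():.2f}")
    s = np.sign(J @ s + 1e-12)                # synchronous update; epsilon avoids sign(0)
```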

30

Natale JL, Hentschel HGE, Nemenman I. Precise spatial memory in local random networks. Phys Rev E 2020; 102:022405. PMID: 32942429. DOI: 10.1103/physreve.102.022405.
Abstract
Self-sustained, elevated neuronal activity persisting on timescales of 10 s or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we study numerical simulations of a geometrically embedded model with a local, but otherwise random, connectivity profile; imposing a global regulation of our system's mean firing rate produces localized, finely spaced discrete attractors that effectively span a two-dimensional manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We then measure the network's storage capacity numerically and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
Affiliation(s)
- Joseph L Natale
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA
- Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA

31

Sampath S, Srivastava V. On stability and associative recall of memories in attractor neural networks. PLoS One 2020; 15:e0238054. PMID: 32941475. PMCID: PMC7498056. DOI: 10.1371/journal.pone.0238054.
Abstract
Attractor neural networks such as the Hopfield model can be used to model associative memory. An efficient associative memory should be able to store a large number of patterns, all of which must be stable. We study in detail the meaning and definition of stability of network states. We reexamine the meanings of retrieval, recognition, and recall, and assign precise mathematical meanings to each of these terms. We also examine the relations between them and how they relate to the memory capacity of the network. We have shown earlier in this journal that an orthogonalization scheme provides an effective way of overcoming the catastrophic interference that limits the memory capacity of the Hopfield model. It is not immediately apparent whether the improvement made by orthogonalization affects the processes of retrieval, recognition, and recall equally. We show that this influence occurs to different degrees, and hence affects the relations between them. We then show that the conditions for pattern stability can be split into a necessary condition (recognition) and a sufficient one (recall). We interpret in cognitive terms the information stored in the Hopfield model, both before and after orthogonalization. We also study the alterations in the dynamics of the Hopfield network upon the introduction of orthogonalization, and their effects on the efficiency of the network as an associative memory.
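A minimal sketch of the stability question, with a Gram-Schmidt projector standing in for the authors' orthogonalization scheme (an assumption for illustration, not their exact construction): above the classic ~0.14N load, plain Hebbian storage loses exact fixed points, while storing the orthogonalized directions keeps every pattern stable.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400

def stable_fraction(J, pats):
    """Fraction of patterns that are exact fixed points of sign(J @ p)."""
    return np.mean([np.array_equal(np.sign(J @ p + 1e-12), p) for p in pats])

for P in (20, 50, 80):                        # below and above the ~0.14*N limit
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    J_hebb = xi.T @ xi / N                    # standard Hebbian outer products
    np.fill_diagonal(J_hebb, 0.0)
    Q, _ = np.linalg.qr(xi.T)                 # orthonormal basis of the patterns
    J_orth = Q @ Q.T                          # store the orthogonalized directions
    np.fill_diagonal(J_orth, 0.0)
    print(f"P = {P}: stable Hebbian = {stable_fraction(J_hebb, xi):.2f}, "
          f"stable orthogonalized = {stable_fraction(J_orth, xi):.2f}")
```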
Affiliation(s)
- Suchitra Sampath
- Centre for Neural and Cognitive Sciences, University of Hyderabad, Hyderabad, Telangana, India
- * E-mail:
| | - Vipin Srivastava
- School of Physics, University of Hyderabad, Hyderabad, Telangana, India
| |

32

Rolls ET, Mills P. The Generation of Time in the Hippocampal Memory System. Cell Rep 2019; 28:1649-1658.e6. PMID: 31412236. DOI: 10.1016/j.celrep.2019.07.042.
Abstract
We propose that ramping time cells in the lateral entorhinal cortex can be produced by synaptic adaptation and demonstrate this in an integrate-and-fire attractor network model. We propose that competitive networks in the hippocampal system can convert these entorhinal ramping cells into hippocampal time cells and demonstrate this in a competitive network. We propose that this conversion is necessary to provide orthogonal hippocampal time representations to encode the temporal sequence of events in hippocampal episodic memory, and we support this with analytic arguments. We demonstrate that this processing can produce hippocampal neuronal ensembles that not only show replay of the sequence later on, but can also do so in reverse order (reverse replay). This research addresses a major issue in neuroscience: the mechanisms by which time is encoded in the brain, and how these time representations are then useful in the hippocampal memory of events and their order.
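A toy version of the proposed two-stage mechanism (not the paper's integrate-and-fire model): adaptation turns a step input into a slowly decaying ramp, and a competitive readout slices the ramp into sequentially active 'time cells'. All parameters below are illustrative assumptions.

```python
import numpy as np

# A rate unit with slow adaptation: after stimulus onset the rate first jumps,
# then decays slowly -- a caricature of a 'ramping' entorhinal time cell.
dt, T = 1e-3, 8.0
tau_r, tau_a, w = 0.02, 2.0, 0.6
t = np.arange(0.0, T, dt)
r, a = np.zeros_like(t), 0.0
for i in range(1, len(t)):
    drive = w * r[i - 1] - a + 1.0                  # constant input after onset
    r[i] = r[i - 1] + dt / tau_r * (-r[i - 1] + max(drive, 0.0))
    a += dt / tau_a * (-a + r[i - 1])               # slow adaptation variable

# A competitive readout converts the decaying ramp into sequential 'time
# cells': unit k wins while the ramp passes through its preferred band.
pk = int(r.argmax())
bands = np.linspace(r[-1], r[pk], 6)                # increasing threshold bands
winner = np.digitize(r[pk:], bands)
tt = t[pk:]
for k in range(5, 0, -1):                           # in order of activation
    sel = tt[winner == k]
    if sel.size:
        print(f"time cell {6 - k}: active {sel[0]:5.2f}-{sel[-1]:5.2f} s")
```

Note the expanding activation windows of later cells, a side effect of slicing an exponential-like decay into equal rate bands.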

33

Ocker GK, Buice MA. Flexible neural connectivity under constraints on total connection strength. PLoS Comput Biol 2020; 16:e1008080. PMID: 32745134. PMCID: PMC7425997. DOI: 10.1371/journal.pcbi.1008080.
Abstract
Neural computation is determined by neurons’ dynamics and circuit connectivity. Uncertain and dynamic environments may require neural hardware to adapt to different computational tasks, each requiring different connectivity configurations. At the same time, connectivity is subject to a variety of constraints, placing limits on the possible computations a given neural circuit can perform. Here we examine the hypothesis that the organization of neural circuitry favors computational flexibility: that it makes many computational solutions available, given physiological constraints. From this hypothesis, we develop models of connectivity degree distributions based on constraints on a neuron’s total synaptic weight. To test these models, we examine reconstructions of the mushroom bodies from the first instar larva and adult Drosophila melanogaster. We perform a Bayesian model comparison for two constraint models and a random wiring null model. Overall, we find that flexibility under a homeostatically fixed total synaptic weight describes Kenyon cell connectivity better than other models, suggesting a principle shaping the apparently random structure of Kenyon cell wiring. Furthermore, we find evidence that larval Kenyon cells are more flexible earlier in development, suggesting a mechanism whereby neural circuits begin as flexible systems that develop into specialized computational circuits.

High-throughput electron microscopic anatomical experiments have begun to yield detailed maps of neural circuit connectivity. Uncovering the principles that govern these circuit structures is a major challenge for systems neuroscience. Healthy neural circuits must be able to perform computational tasks while satisfying physiological constraints. Those constraints can restrict a neuron’s possible connectivity, and thus potentially restrict its computation. Here we examine simple models of constraints on total synaptic weights, and calculate the number of circuit configurations they allow: a simple measure of their computational flexibility. We propose probabilistic models of connectivity that weight the number of synaptic partners according to computational flexibility under a constraint and test them using recent wiring diagrams from a learning center, the mushroom body, in the fly brain. We compare constraints that fix or bound a neuron’s total connection strength to a simple random wiring null model. Of these models, the fixed total connection strength matched the overall connectivity best in mushroom bodies from both larval and adult flies. We also provide evidence suggesting that neural circuits are more flexible in early stages of development and lose this flexibility as they grow towards specialized function.
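The counting argument at the heart of the constraint models can be sketched with an integer toy version (stars and bars); this is an assumption for illustration, not the paper's Bayesian model comparison.

```python
import numpy as np
from math import comb

# Toy 'flexibility under a fixed weight budget': a neuron distributes W integer
# weight quanta across K partners (each partner gets at least one quantum).
# The number of admissible weight configurations is the stars-and-bars count
# C(W-1, K-1); weighting each degree K by this count yields a degree
# distribution peaked at intermediate K rather than at the extremes.
W = 40                                               # total weight, in quanta
counts = np.array([comb(W - 1, K - 1) for K in range(1, W + 1)], dtype=float)
pK = counts / counts.sum()
print(f"most flexible degree: K = {pK.argmax() + 1} "
      f"(P = {pK.max():.3f}; mean K = {(np.arange(1, W + 1) * pK).sum():.1f})")
```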
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- * E-mail:
| | - Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| |

34

Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020; 16:e1007790. PMID: 32841234. PMCID: PMC7505475. DOI: 10.1371/journal.pcbi.1007790.
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated with synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered an indication of reduced computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, to a good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
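A rate-based caricature of the homeostasis result (the paper itself uses spiking networks and analyzes perturbation sensitivity, not only rates): deleting a fraction f of E-to-E synapses lowers excitatory rates, and up-scaling the surviving E-to-E synapses by 1/(1-f) restores them. Network sizes, weights, and f are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N_e, N_i, p, I0 = 800, 200, 0.1, 1.0
J_ee, J_ie, J_ei, J_ii = 0.15, 0.15, -0.6, -0.6      # inhibition-dominated

def block(n_pre, n_post, J):                          # sparse random weight block
    return J * (rng.random((n_post, n_pre)) < p) / (p * n_pre)

def rates(W, steps=500, dt=0.1):
    r = np.zeros(W.shape[0])
    for _ in range(steps):                            # rectified-linear rate dynamics
        r += dt * (-r + np.maximum(W @ r + I0, 0.0))
    return r

W = np.block([[block(N_e, N_e, J_ee), block(N_i, N_e, J_ei)],
              [block(N_e, N_i, J_ie), block(N_i, N_i, J_ii)]])
r0 = rates(W)[:N_e].mean()

f = 0.4                                               # fraction of E->E synapses lost
W_loss = W.copy()
W_loss[:N_e, :N_e] *= rng.random((N_e, N_e)) >= f
r1 = rates(W_loss)[:N_e].mean()

W_home = W_loss.copy()                                # homeostatic up-scaling of the
W_home[:N_e, :N_e] /= (1.0 - f)                       # surviving E->E synapses
r2 = rates(W_home)[:N_e].mean()
print(f"mean E rate: intact {r0:.3f} | after loss {r1:.3f} | homeostasis {r2:.3f}")
```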
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |

35

Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. PMID: 32009924. PMCID: PMC6978734. DOI: 10.3389/fncom.2019.00097.
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be “learned” by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism is necessary. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while rapidly changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
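The dependence on input statistics can be sketched by accumulating the temporally asymmetric rule while the input dwells on each pattern for D steps: slow inputs build predominantly symmetric connectivity (attractor-like, favoring PA), while fast inputs build asymmetric connectivity (sequence-like, favoring SA). This sketch is illustrative and omits the paper's homeostatic stabilization.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P = 1000, 8
xi = rng.choice([-1.0, 1.0], size=(P, N))

def learn(D):
    """Temporally asymmetric Hebbian rule J += post(t+1) pre(t)^T / N, while the
    input steps through patterns xi[0..P-1], holding each one for D time steps."""
    seq = np.repeat(np.arange(P), D)
    J = np.zeros((N, N))
    for t in range(len(seq) - 1):
        J += np.outer(xi[seq[t + 1]], xi[seq[t]]) / N
    return J

for D, label in ((1, "fast-changing inputs"), (20, "slow-changing inputs")):
    J = learn(D)
    sym  = np.linalg.norm((J + J.T) / 2)     # attractor-like (PA) component
    asym = np.linalg.norm((J - J.T) / 2)     # sequence-like (SA) component
    print(f"{label}: symmetric/asymmetric weight ratio = {sym / asym:.1f}")
```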
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL, United States
| | - Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL, United States.,Department of Neurobiology, The University of Chicago, Chicago, IL, United States.,Department of Neurobiology, Duke University, Durham, NC, United States.,Department of Physics, Duke University, Durham, NC, United States
| |

36

Schwalger T, Chizhov AV. Mind the last spike - firing rate models for mesoscopic populations of spiking neurons. Curr Opin Neurobiol 2019; 58:155-166. PMID: 31590003. DOI: 10.1016/j.conb.2019.08.003.
Abstract
The dominant modeling framework for understanding cortical computations is that of heuristic firing-rate models. Despite their success, these models fall short of capturing spike synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, yields a powerful theoretical framework for building rate-based models of mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved by the RDM in obtaining efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons - a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. These new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
Affiliation(s)
- Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany.
| | - Anton V Chizhov
- Ioffe Institute, 194021 Saint-Petersburg, Russia; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, 194223 Saint-Petersburg, Russia
| |

37

Lim S. Mechanisms underlying sharpening of visual response dynamics with familiarity. eLife 2019; 8:e44098. PMID: 31393260. PMCID: PMC6711664. DOI: 10.7554/elife.44098.
Abstract
Experience-dependent modifications of synaptic connections are thought to change patterns of network activities and stimulus tuning with learning. However, only a few studies have explored how synaptic plasticity shapes the response dynamics of cortical circuits. Here, we investigated the mechanism underlying the sharpening of both stimulus selectivity and response dynamics with familiarity observed in monkey inferotemporal cortex. A broader distribution of activities and stronger oscillations in the response dynamics after learning provide evidence that synaptic plasticity in recurrent connections modifies the strength of positive feedback. Its interplay with slow negative feedback via firing-rate adaptation is critical in sharpening response dynamics. Analysis of changes in temporal patterns also enables us to disentangle recurrent and feedforward synaptic plasticity and provides a measure of the strength of recurrent synaptic plasticity. Overall, this work highlights the importance of analyzing changes in dynamics as well as network patterns to further reveal the mechanisms of visual learning.
Affiliation(s)
- Sukbin Lim
- Neural Science, NYU Shanghai, Shanghai, China.,NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
| |

38

Muscinelli SP, Gerstner W, Schwalger T. How single neuron properties shape chaotic dynamics and signal transmission in random neural networks. PLoS Comput Biol 2019; 15:e1007122. PMID: 31181063. PMCID: PMC6586367. DOI: 10.1371/journal.pcbi.1007122.
Abstract
While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of "resonant chaos", characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
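The 'resonant chaos' phenomenon can be reproduced qualitatively in a few lines: a random network of two-dimensional rate units (activity plus a slow adaptation variable) in the strong-coupling regime develops narrow-band chaotic fluctuations. The parameters below are illustrative guesses, not the paper's values, and may need tuning to sit clearly in the chaotic regime.

```python
import numpy as np

rng = np.random.default_rng(6)
N, g = 300, 3.0                         # network size, coupling strength
tau_x, tau_a, beta = 1.0, 10.0, 1.0     # slow adaptation on every unit
J = rng.normal(0.0, g / np.sqrt(N), (N, N))

dt, n_steps = 0.05, 20000
x, a = rng.normal(0.0, 1.0, N), np.zeros(N)
trace = np.empty(n_steps)
for i in range(n_steps):
    phi = np.tanh(x)
    x += dt / tau_x * (-x - beta * a + J @ phi)
    a += dt / tau_a * (-a + x)
    trace[i] = phi[0]

# Power spectrum of one unit (second half, transient discarded): adaptation
# concentrates the chaotic fluctuations in a narrow band ('resonant chaos')
seg = trace[n_steps // 2:] - trace[n_steps // 2:].mean()
spec = np.abs(np.fft.rfft(seg)) ** 2
freqs = np.fft.rfftfreq(seg.size, d=dt)
print(f"spectral peak at f = {freqs[spec.argmax()]:.3f} (units of 1/tau_x)")
```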
Affiliation(s)
- Samuel P. Muscinelli
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École polytechnique fédérale de Lausanne, Station 15, CH-1015 Lausanne EPFL, Switzerland
| | - Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany
- Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany
| |

39

Llera-Montero M, Sacramento J, Costa RP. Computational roles of plastic probabilistic synapses. Curr Opin Neurobiol 2018; 54:90-97. PMID: 30308457. DOI: 10.1016/j.conb.2018.09.002.
Abstract
The probabilistic nature of synaptic transmission has remained enigmatic. However, recent developments have started to shed light on why the brain may rely on probabilistic synapses. Here, we start out by reviewing experimental evidence on the specificity and plasticity of synaptic response statistics. Next, we review different computational perspectives on the function of plastic probabilistic synapses for constrained, statistical, and deep learning. We highlight that all of these views require some form of optimisation of probabilistic synapses, which has recently gained support from theoretical analysis of long-term synaptic plasticity experiments. Finally, we contrast these different computational views and propose avenues for future research. Overall, we argue that the time is ripe for a better understanding of the computational functions of probabilistic synapses.
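The synaptic response statistics being reviewed are commonly summarized by a binomial release model; the small sketch below (the quantal size `q` and the mV units are illustrative assumptions) shows how plasticity acting on release probability versus on the number of release sites leaves distinct fingerprints in the mean and coefficient of variation of the response.

```python
import numpy as np

rng = np.random.default_rng(7)

# Binomial model of stochastic transmission: n release sites, release
# probability p, quantal amplitude q. Plasticity can act on p or on n,
# with distinct response statistics (CV = sqrt((1 - p) / (n * p))).
def epsp_samples(n, p, q=0.2, trials=100000):
    return q * rng.binomial(n, p, size=trials)

for n, p in ((5, 0.5), (5, 0.9), (10, 0.5)):      # 'potentiation' via p or via n
    s = epsp_samples(n, p)
    print(f"n = {n:2d}, p = {p:.1f}: mean EPSP = {s.mean():.2f} mV, "
          f"CV = {s.std() / s.mean():.2f}")
```

Raising p and raising n both increase the mean response, but only raising p sharply reduces the trial-to-trial variability, which is one way response statistics constrain where plasticity acts.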
Affiliation(s)
- Milton Llera-Montero
- Computational Neuroscience Unit, Department of Computer Science, School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics, Faculty of Engineering, University of Bristol, United Kingdom; Bristol Neuroscience, University of Bristol, United Kingdom; School of Psychological Science, Faculty of Life Sciences, University of Bristol, United Kingdom
| | | | - Rui Ponte Costa
- Computational Neuroscience Unit, Department of Computer Science, School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics, Faculty of Engineering, University of Bristol, United Kingdom; Bristol Neuroscience, University of Bristol, United Kingdom; Department of Physiology, University of Bern, Switzerland; Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, United Kingdom.
| |