1. Denagamage S, Morton MP, Hudson NV, Reynolds JH, Jadi MP, Nandy AS. Laminar mechanisms of saccadic suppression in primate visual cortex. Cell Rep 2023; 42:112720. PMID: 37392385; PMCID: PMC10528056; DOI: 10.1016/j.celrep.2023.112720.
Abstract
Saccadic eye movements are known to cause saccadic suppression, a temporary reduction in visual sensitivity and visual cortical firing rates. While saccadic suppression has been well characterized at the level of perception and single neurons, relatively little is known about the visual cortical networks governing this phenomenon. Here we examine the effects of saccadic suppression on distinct neural subpopulations within visual area V4. We find subpopulation-specific differences in the magnitude and timing of peri-saccadic modulation. Input-layer neurons show changes in firing rate and inter-neuronal correlations prior to saccade onset, and putative inhibitory interneurons in the input layer elevate their firing rate during saccades. A computational model of this circuit recapitulates our empirical observations and demonstrates that an input-layer-targeting pathway can initiate saccadic suppression by enhancing local inhibitory activity. Collectively, our results provide a mechanistic understanding of how eye movement signaling interacts with cortical circuitry to enforce visual stability.
Affiliation(s)
- Sachira Denagamage
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Mitchell P Morton
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA
- Nyomi V Hudson
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA
- John H Reynolds
- Systems Neurobiology Laboratories, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Monika P Jadi
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Department of Psychiatry, Yale University, New Haven, CT 06511, USA; Kavli Institute for Neuroscience, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA.
- Anirvan S Nandy
- Department of Neuroscience, Yale University, New Haven, CT 06511, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511, USA; Kavli Institute for Neuroscience, Yale University, New Haven, CT 06511, USA; Wu Tsai Institute, Yale University, New Haven, CT 06511, USA.
2. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Coherent noise enables probabilistic sequence replay in spiking neuronal networks. PLoS Comput Biol 2023; 19:e1010989. PMID: 37130121; PMCID: PMC10153753; DOI: 10.1371/journal.pcbi.1010989.
Abstract
Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
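The abstract's central statistical point, that uncorrelated noise averages out under population encoding while locally correlated ("coherent") noise survives, can be illustrated with a minimal numpy sketch. All numbers here are arbitrary; this is not the paper's spiking network model, only the averaging argument:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 20000   # population size, number of samples

def readout_noise(c):
    """Noise seen by a population-averaged readout when each unit's
    input noise contains a shared (coherent) variance fraction c."""
    shared = rng.normal(size=(T, 1))
    private = rng.normal(size=(T, N))
    noise = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private
    return noise.mean(axis=1)       # what the population readout sees

sd_uncorr = readout_noise(0.0).std()    # shrinks like 1/sqrt(N)
sd_coherent = readout_noise(0.5).std()  # ~sqrt(c): survives the averaging
```

With N = 100 units, the uncorrelated case leaves the readout with roughly a tenth of the single-unit noise, while a 50% coherent component keeps most of it, which is why correlated noise can drive explorative recall without large noise amplitudes.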
Affiliation(s)
- Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Dirk J Wouters
- Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, & Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
3. Ranft J, Lindner B. A self-consistent analytical theory for rotator networks under stochastic forcing: Effects of intrinsic noise and common input. Chaos 2022; 32:063131. PMID: 35778158; DOI: 10.1063/5.0096000.
Abstract
Despite the incredible complexity of our brains' neural networks, theoretical descriptions of neural dynamics have led to profound insights into possible network states and dynamics. It remains challenging to develop theories that apply to spiking networks and thus allow one to characterize the dynamic properties of biologically more realistic networks. Here, we build on recent work by van Meegen and Lindner, who showed that "rotator networks," while considerably simpler than real spiking networks and therefore more amenable to mathematical analysis, still capture dynamical properties of networks of spiking neurons. This framework can easily be extended to the case where individual units receive uncorrelated stochastic input, which can be interpreted as intrinsic noise. However, the assumptions of the theory no longer apply when the input received by the single rotators is strongly correlated among units. As we show, in this case the network fluctuations become significantly non-Gaussian, which calls for a reworking of the theory. Using a cumulant expansion, we develop a self-consistent analytical theory that accounts for the observed non-Gaussian statistics. Our theory provides a starting point for further studies of more general network setups and of the information transmission properties of these networks.
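Why common input makes population-level fluctuations non-Gaussian can be seen in a toy calculation: averaging many independent skewed inputs pushes higher cumulants toward zero (central limit theorem), whereas one shared skewed source keeps them intact. A hedged sketch of that statistical point only, not of the rotator model itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 50000   # units, samples

def skewness(x):
    """Third standardized moment (zero for a Gaussian)."""
    x = x - x.mean()
    return (x**3).mean() / (x**2).mean()**1.5

# exponential noise is strongly skewed (skewness = 2)
independent = rng.exponential(size=(T, N)).mean(axis=1)  # N private sources
common = rng.exponential(size=T)                         # one shared source

s_indep = skewness(independent)  # ~ 2/sqrt(N): averaging Gaussianizes
s_common = skewness(common)      # ~ 2: non-Gaussianity survives
```

Under independent noise the population-averaged input is nearly Gaussian, so a Gaussian theory suffices; under common input the higher cumulants persist, which is the regime the cumulant expansion is built for.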
Affiliation(s)
- Jonas Ranft
- Institut de Biologie de l'ENS, Ecole Normale Supérieure, CNRS, Inserm, Université PSL, 46 rue d'Ulm, 75005 Paris, France
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany; Department of Physics, Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
4. Gallinaro JV, Gašparović N, Rotter S. Homeostatic control of synaptic rewiring in recurrent networks induces the formation of stable memory engrams. PLoS Comput Biol 2022; 18:e1009836. PMID: 35143489; PMCID: PMC8865699; DOI: 10.1371/journal.pcbi.1009836.
Abstract
Brain networks store new memories using functional and structural synaptic plasticity. Memory formation is generally attributed to Hebbian plasticity, while homeostatic plasticity is thought to play an ancillary role in stabilizing network dynamics. Here we report that homeostatic plasticity alone can also lead to the formation of stable memories. We analyze this phenomenon using a new theory of network remodeling, combined with numerical simulations of recurrent spiking neural networks that exhibit structural plasticity based on firing rate homeostasis. These networks are able to store repeatedly presented patterns and recall them upon the presentation of incomplete cues. Storage is fast, governed by the homeostatic drift. In contrast, forgetting is slow, driven by a diffusion process. Joint stimulation of neurons induces the growth of associative connections between them, leading to the formation of memory engrams. These memories are stored in a distributed fashion throughout the connectivity matrix, and individual synaptic connections have only a small influence. Although memory-specific connections are increased in number, the total numbers of inputs and outputs per neuron undergo only small changes during stimulation. We find that homeostatic structural plasticity induces a specific type of "silent memory," different from conventional attractor states.
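The engram mechanism sketched in the abstract can be caricatured in a few lines: stimulation drives a group above its firing-rate set-point, homeostasis deletes that group's synaptic elements, and when stimulation ends the group regrows free elements and, being the only neurons with free elements, rewires preferentially to itself. This is an illustrative cartoon with arbitrary numbers, not the paper's spiking-network simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
S = np.arange(10)                 # jointly stimulated subgroup
rest = np.arange(10, N)
W = np.zeros((N, N), dtype=int)   # synapse counts

def pair_free_elements(axon, dend, W):
    """Randomly pair free axonal (output) and dendritic (input) elements
    into synapses, as in homeostatic structural-plasticity models."""
    pre = np.repeat(np.arange(N), axon)
    post = np.repeat(np.arange(N), dend)
    rng.shuffle(pre); rng.shuffle(post)
    for i, j in zip(pre, post):
        if i != j:                # no autapses in this toy
            W[i, j] += 1
    return W

# 1) baseline: every neuron sits at its set-point, grows 10 elements,
#    and wires up randomly
W = pair_free_elements(np.full(N, 10), np.full(N, 10), W)

# 2) stimulation pushes S above the firing-rate set-point, so homeostasis
#    removes the stimulated neurons' synaptic elements (and their synapses)
W[S, :] = 0
W[:, S] = 0

# 3) stimulation off: S is below set-point and regrows free elements;
#    only S has free elements, so random pairing wires S to itself
axon = np.zeros(N, dtype=int); dend = np.zeros(N, dtype=int)
axon[S] = 10; dend[S] = 10
W = pair_free_elements(axon, dend, W)

within = W[np.ix_(S, S)].mean()    # engram: enhanced within-group wiring
across = W[np.ix_(S, rest)].mean()
```

The "memory" is silent in the sense that nothing marks individual synapses; the associative structure emerges purely from when free elements happen to coexist.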
Affiliation(s)
- Júlia V. Gallinaro
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Bioengineering Department, Imperial College London, London, United Kingdom
- Nebojša Gašparović
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
5. Sadeh S, Clopath C. Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks. Sci Adv 2021; 7:eabg8411. PMID: 34731002; PMCID: PMC8565910; DOI: 10.1126/sciadv.abg8411.
Abstract
Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of these assemblies feasible, yet how various parameters of induction can be optimized is not clear. Here, we studied this question in large-scale cortical network models with excitatory-inhibitory balance. We found that the background network in which assemblies are embedded can strongly modulate their dynamics and formation. Networks with dominant excitatory interactions enabled a fast formation of assemblies, but this was accompanied by recruitment of other non-perturbed neurons, leading to some degree of nonspecific induction. On the other hand, networks with strong excitatory-inhibitory interactions ensured that the formation of assemblies remained constrained to the perturbed neurons, but slowed down the induction. Our results suggest that these two regimes can be suitable for computational and cognitive tasks with different trade-offs between speed and specificity.
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London SW7 2AZ, UK
6. Manz P, Goedeke S, Memmesheimer RM. Dynamics and computation in mixed networks containing neurons that accelerate towards spiking. Phys Rev E 2019; 100:042404. PMID: 31770941; DOI: 10.1103/physreve.100.042404.
Abstract
Networks in the brain consist of different types of neurons. Here we investigate the influence of neuron diversity on the dynamics, phase space structure, and computational capabilities of spiking neural networks. We find that even a single neuron of a different type can qualitatively change the network dynamics, and that mixed networks may combine the computational capabilities of networks with a single neuron type. We study inhibitory networks of concave leaky (LIF) and convex "antileaky" (XIF) integrate-and-fire neurons that generalize irregularly spiking, nonchaotic LIF neuron networks. Endowed with simple conductance-based synapses for XIF neurons, our networks can generate a balanced state of irregular asynchronous spiking as well. We determine the voltage probability distributions and self-consistent firing rates assuming Poisson input with finite-size spike impacts. Further, we compute the full spectrum of Lyapunov exponents (LEs) and the covariant Lyapunov vectors (CLVs) specifying the corresponding perturbation directions. We find that there is approximately one positive LE for each XIF neuron. This indicates, in particular, that a single XIF neuron renders the network dynamics chaotic. A simple mean-field approach, which can be justified by properties of the CLVs, explains this finding. As an application, we propose a spike-based computing scheme in which our networks serve as computational reservoirs and their different stability properties yield different computational capabilities.
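The concave/convex distinction between LIF and "antileaky" XIF neurons is easy to visualize: under constant drive, a LIF membrane decelerates as it approaches threshold, while an XIF membrane accelerates toward spiking. A minimal Euler integration with invented parameters (a single neuron, no synapses, purely for illustration):

```python
import numpy as np

def first_passage(kind, dt=0.01, tau=10.0, vth=1.0):
    """Integrate one neuron from reset (v = 0) to threshold and return
    the voltage trajectory. 'lif': dv/dt = (-v + 1.5)/tau decays toward a
    suprathreshold input; 'xif': dv/dt = (v + 0.2)/tau grows with v."""
    v, traj = 0.0, [0.0]
    while v < vth:
        dv = (-v + 1.5) / tau if kind == "lif" else (v + 0.2) / tau
        v += dt * dv
        traj.append(v)
    return np.array(traj)

lif = first_passage("lif")   # concave approach: slows near threshold
xif = first_passage("xif")   # convex approach: accelerates toward spiking
```

The trajectory curvatures (concave for LIF, convex for XIF) are the single-neuron signature behind the network-level result that each XIF neuron contributes roughly one positive Lyapunov exponent.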
Affiliation(s)
- Paul Manz
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
| | - Sven Goedeke
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
| | - Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute for Genetics, University of Bonn, 53115 Bonn, Germany
7. Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The Dynamical Regime of Sensory Cortex: Stable Dynamics around a Single Stimulus-Tuned Attractor Account for Patterns of Noise Variability. Neuron 2018; 98:846-860.e5. PMID: 29772203; PMCID: PMC5971207; DOI: 10.1016/j.neuron.2018.04.017.
Abstract
Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states ("attractors") or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic "stabilized supralinear network"), best explains these modulations. Given the supralinear input/output functions of cortical neurons, increased stimulus drive strengthens effective network connectivity. This shifts the balance from interactions that amplify variability to suppressive inhibitory feedback, quenching correlated variability around more strongly driven steady states. Comparing to previously published and original data analyses, we show that this mechanism, unlike previous proposals, uniquely accounts for the spatial patterns and fast temporal dynamics of variability suppression. Specifying the cortical operating regime is key to understanding the computations underlying perception.
Highlights:
- A simple network model explains stimulus tuning of cortical variability suppression
- Inhibition stabilizes recurrently interacting neurons with supralinear I/O functions
- Stimuli strengthen inhibitory stabilization around a stable state, quenching variability
- Single-trial V1 data are compatible with this model and rule out competing proposals
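The step "increased stimulus drive strengthens effective network connectivity" follows directly from the slope of a supralinear power-law nonlinearity: if f(v) = k [v]_+^n with n > 1, the local gain f'(v) = n k [v]_+^(n-1) grows with the operating point, so the linearized (effective) weights W * f'(v) grow with drive. A tiny sketch with made-up constants, not the paper's fitted model:

```python
import numpy as np

k, n = 0.3, 2.0                             # supralinear I/O: f(v) = k [v]_+^n
f = lambda v: k * np.maximum(v, 0.0)**n
gain = lambda v: k * n * np.maximum(v, 0.0)**(n - 1)  # slope f'(v)

W = np.array([[1.0, -0.7],
              [1.2, -0.9]])                 # toy E/I weight matrix

W_eff_weak = W * gain(1.0)                  # effective coupling at weak drive
W_eff_strong = W * gain(4.0)                # stronger drive scales it up 4x
```

With n = 2, quadrupling the drive quadruples every effective weight; in the full model it is the inhibitory part of this strengthened coupling that ends up dominating and quenching variability.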
Affiliation(s)
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK.
| | - Yashar Ahmadian
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Centre de Neurophysique, Physiologie, et Pathologie, CNRS, 75270 Paris Cedex 06, France; Institute of Neuroscience, Department of Biology and Mathematics, University of Oregon, Eugene, OR 97403, USA
| | - Daniel B Rubin
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK; Department of Cognitive Science, Central European University, 1051 Budapest, Hungary
| | - Kenneth D Miller
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
8. Zhang J, Shao Y, Rangan AV, Tao L. A coarse-graining framework for spiking neuronal networks: from strongly-coupled conductance-based integrate-and-fire neurons to augmented systems of ODEs. J Comput Neurosci 2019; 46:211-232. PMID: 30788694; DOI: 10.1007/s10827-019-00712-w.
Abstract
Homogeneously structured, fluctuation-driven networks of spiking neurons can exhibit a wide variety of dynamical behaviors, ranging from homogeneity to synchrony. We extend our partitioned-ensemble average (PEA) formalism proposed in Zhang et al. (Journal of Computational Neuroscience, 37(1), 81-104, 2014a) to systematically coarse grain the heterogeneous dynamics of strongly coupled, conductance-based integrate-and-fire neuronal networks. The population dynamics models derived here successfully capture the so-called multiple-firing events (MFEs), which emerge naturally in fluctuation-driven networks of strongly coupled neurons. Although these MFEs likely play a crucial role in the generation of the neuronal avalanches observed in vitro and in vivo, the mechanisms underlying these MFEs cannot easily be understood using standard population dynamic models. Using our PEA formalism, we systematically generate a sequence of model reductions, going from Master equations, to Fokker-Planck equations, and finally, to an augmented system of ordinary differential equations. Furthermore, we show that these reductions can faithfully describe the heterogeneous dynamic regimes underlying the generation of MFEs in strongly coupled conductance-based integrate-and-fire neuronal networks.
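A multiple-firing event can be reproduced in a deliberately crude toy: when coupling is strong, one threshold crossing instantaneously recruits other near-threshold neurons into the same event. The sketch below only illustrates the phenomenon the reductions must capture; it is not the conductance-based model of the paper, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, vth = 100, 0.05, 1.0       # strong coupling: each spike kicks all by J
v = rng.uniform(0.0, vth, N)
sizes = []                       # number of neurons per firing event

for t in range(3000):
    v += 0.01 * rng.random(N)            # slow fluctuating drive
    fired = []
    frontier = np.flatnonzero(v >= vth)
    while frontier.size:                 # resolve the instantaneous cascade
        fired.extend(frontier.tolist())
        v += J * frontier.size           # every spike excites all neurons
        v[fired] = -np.inf               # spent neurons await reset
        frontier = np.flatnonzero(v >= vth)
    if fired:
        v[fired] = rng.uniform(0.0, 0.5, len(fired))
        sizes.append(len(fired))
```

Events of size greater than one are the MFEs: a standard mean-rate population model, which assumes spikes decouple within a time step, misses exactly these cascades, which is what motivates the augmented ODE reductions.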
Affiliation(s)
- Jiwei Zhang
- School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, China; Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, 430072, China
- Yuxiu Shao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China; Center for Quantitative Biology, Peking University, Beijing, 100871, China
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, 100871, China; Center for Quantitative Biology, Peking University, Beijing, 100871, China
9. Pernice V, da Silveira RA. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits. PLoS Comput Biol 2018; 14:e1005979. PMID: 29408930; PMCID: PMC5833435; DOI: 10.1371/journal.pcbi.1005979.
Abstract
Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing.
Author summary: The response of neurons to a stimulus is variable across trials. A natural solution for reliable coding in the face of noise is the averaging across a neural population. The nature of this averaging depends on the structure of noise correlations in the neural population. In turn, the correlation structure depends on the way noise and correlations are generated in neural circuits. It is in general difficult to identify the origin of correlations from the observed population activity alone. In this article, we explore different theoretical scenarios of the way in which correlations can be generated, and we relate these to the architecture of feed-forward and recurrent neural circuits. Analyzing population recordings of the activity in mouse auditory cortex in response to sound stimuli, we find that population statistics are consistent with those generated in a recurrent network model. Using this model, we can then quantify the effects of network properties on average population responses, noise correlations, and the representation of sensory information.
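The architecture-to-covariance link that such circuit models exploit can be made explicit in a toy linear recurrent model: for x[t+1] = W x[t] + noise, the stationary covariance solves Sigma = W Sigma W^T + Q, so recurrence shapes correlations even when each neuron's noise is private. A sketch with a hypothetical 3-neuron connectivity matrix, not the model fitted to the auditory-cortex data:

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.2, 0.4, 0.0],
              [0.0, 0.1, 0.5],
              [0.3, 0.0, 0.2]])        # hypothetical effective connectivity
Q = np.diag([1.0, 0.5, 0.8])          # private (uncorrelated) noise variances

# analytic stationary covariance of x[t+1] = W x[t] + xi[t]:
# Sigma = W Sigma W^T + Q  =>  vec(Sigma) = (I - W kron W)^-1 vec(Q)
n = W.shape[0]
Sigma = np.linalg.solve(np.eye(n * n) - np.kron(W, W), Q.ravel()).reshape(n, n)

# cross-check the prediction with a direct simulation
T = 100000
sd = np.sqrt(np.diag(Q))
x = np.zeros(n)
samples = np.empty((T, n))
for t in range(T):
    x = W @ x + sd * rng.normal(size=n)
    samples[t] = x
Sigma_emp = np.cov(samples.T)
```

The off-diagonal entries of Sigma are nonzero despite the diagonal Q: the recurrent pathways alone generate the correlations, which is the kind of signature the paper's relations are designed to read out from data.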
Affiliation(s)
- Volker Pernice
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
10. Stimulus Dependence of Correlated Variability across Cortical Areas. J Neurosci 2016; 36:7546-56. PMID: 27413163; DOI: 10.1523/jneurosci.0504-16.2016.
Abstract
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across multiple visual areas respond to any given visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model in which MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas.
Significance statement: Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas.
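The divisive nonlinearity invoked above has a characteristic signature: the response to superimposed stimuli is sublinear in the individual responses. A toy version of such a normalization stage; the functional form and constants are illustrative only, not the fitted model of the paper:

```python
import numpy as np

def mt_response(v1_rates, sigma=1.0):
    """Toy normalization stage: an MT unit sums its V1 inputs and divides
    by the total drive plus a semi-saturation constant sigma."""
    drive = float(np.sum(v1_rates))
    return drive / (sigma + drive)

r1 = mt_response([10.0])            # one stimulus component alone
r2 = mt_response([8.0])             # the other component alone
r_both = mt_response([10.0, 8.0])   # both components superimposed
```

Because the output saturates with total drive, adding a second stimulus raises the response only modestly (r_both is well below r1 + r2), and it is this stimulus-dependent gain that makes cross-area correlations depend on the presence of a second stimulus.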
11. Deniz T, Rotter S. Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons. Phys Rev E 2017; 95:012412. PMID: 28208505; DOI: 10.1103/physreve.95.012412.
Abstract
Pairs of neurons in brain networks often share much of the input they receive from other neurons. Due to essential nonlinearities of the neuronal dynamics, the consequences for the correlation of the output spike trains are generally not well understood. Here we analyze the case of two leaky integrate-and-fire neurons using an approach which is nonperturbative with respect to the degree of input correlation. Our treatment covers both weakly and strongly correlated dynamics, generalizing previous results based on linear response theory.
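The setting of the paper, two neurons sharing part of their input, is straightforward to simulate directly, and the simulation exposes the quantity the theory targets: the output spike-count correlation grows with the input correlation, and generally nonlinearly so. A brute-force LIF sketch with arbitrary parameters (the paper analyzes the corresponding two-dimensional Fokker-Planck equation instead of estimating this by simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_count_corr(c, T=200000, dt=0.1, tau=20.0, mu=0.8, sig=0.5, vth=1.0):
    """Correlation of 50 ms spike counts for two LIF neurons whose
    Gaussian input currents share a fraction c of their variance."""
    shared = rng.normal(size=T)
    private = rng.normal(size=(T, 2))
    xi = np.sqrt(c) * shared[:, None] + np.sqrt(1.0 - c) * private
    amp = sig * np.sqrt(2.0 * dt / tau)
    v = np.zeros(2)
    spikes = np.zeros((T, 2), dtype=int)
    for t in range(T):
        v += dt * (mu - v) / tau + amp * xi[t]   # leaky integration
        fired = v >= vth
        spikes[t, fired] = 1
        v[fired] = 0.0                           # reset after a spike
    counts = spikes.reshape(-1, 500, 2).sum(axis=1)   # 500 steps = 50 ms
    return np.corrcoef(counts.T)[0, 1]

rho_weak = spike_count_corr(0.2)    # weakly correlated input
rho_strong = spike_count_corr(0.9)  # strongly correlated input
```

Linear response theory describes the weak-c end of this curve; the strongly correlated end is exactly where the paper's nonperturbative treatment is needed.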
Affiliation(s)
- Taşkın Deniz
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
12. Shi L, Niu X, Wan H, Shang Z, Wang Z. A small-world-based population encoding model of the primary visual cortex. Biol Cybern 2015; 109:377-388. PMID: 25753903; DOI: 10.1007/s00422-015-0649-3.
Abstract
A wide range of evidence has shown that information encoding in the visual cortex involves complex activities of neuronal populations. However, the effects of the neuronal connectivity structure on a population's encoding performance remain poorly understood. In this paper, a small-world-based population encoding model of the primary visual cortex (V1) is established on the basis of the generalized linear model (GLM) to describe the computation of the neuronal population. The model consists mainly of three sets of filters: a spatiotemporal stimulus filter, a post-spike history filter, and a set of coupling filters, with the coupled neurons organized as a small-world network. The parameters of the model were fitted with neuronal data from the rat V1 recorded with a micro-electrode array. Compared to a traditional GLM that does not consider the small-world structure of the neuronal population, the proposed model produced more accurate spiking responses to grating stimuli and enhanced the capability of the neuronal population to carry information. These comparisons support the validity of the proposed model and suggest a role for small-world structure in the encoding performance of local populations in V1, providing new insight into the encoding mechanisms of small-scale populations in the visual system.
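The model structure described above (stimulus filter, post-spike history filter, and coupling filters on a small-world graph) can be sketched as a toy GLM simulator. Everything here is reduced to single-tap "filters" with invented constants; it only illustrates the architecture, not the model fitted to the rat V1 recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

def small_world(N, k, p):
    """Watts-Strogatz ring lattice: k nearest neighbours per node,
    each edge rewired to a random target with probability p."""
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for d in range(1, k // 2 + 1):
            A[i, (i + d) % N] = A[(i + d) % N, i] = 1
    for i, j in np.argwhere(np.triu(A)):
        if rng.random() < p:                  # rewire this edge
            A[i, j] = A[j, i] = 0
            new = rng.integers(N)
            while new == i or A[i, new]:
                new = rng.integers(N)
            A[i, new] = A[new, i] = 1
    return A

N, T, dt = 30, 2000, 0.001
A = small_world(N, 4, 0.1)       # coupling graph of the population
stim = rng.normal(size=T)        # one-dimensional stimulus trace
b = np.log(5.0)                  # baseline log-rate (~5 Hz)
k_stim, h_hist, c_coup = 0.5, -2.0, 0.3   # single-tap "filters" (toy)

spikes = np.zeros((T, N), dtype=int)
for t in range(1, T):
    drive = (b + k_stim * stim[t]              # stimulus filter
             + h_hist * spikes[t - 1]          # post-spike history filter
             + c_coup * (spikes[t - 1] @ A))   # coupling along the graph
    lam = np.exp(drive)                        # conditional intensity (Hz)
    spikes[t] = rng.random(N) < lam * dt       # Bernoulli spiking
```

In the full model each scalar here is a temporal filter estimated from data; the point of the small-world structure is which coupling filters exist at all, i.e., the sparsity pattern of A.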
Affiliation(s)
- Li Shi
- The School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
13. Smith GB, Sederberg A, Elyada YM, Van Hooser SD, Kaschube M, Fitzpatrick D. The development of cortical circuits for motion discrimination. Nat Neurosci 2015; 18:252-61. PMID: 25599224; PMCID: PMC4334116; DOI: 10.1038/nn.3921.
Abstract
Stimulus discrimination depends on the selectivity and variability of neural responses, as well as the size and correlation structure of the responsive population. For direction discrimination in visual cortex, only the selectivity of neurons has been well characterized across development. Here we show in ferrets that at eye opening, the cortical response to visual stimulation exhibits several immaturities, including a high density of active neurons that display prominent wave-like activity, a high degree of variability and strong noise correlations. Over the next three weeks, the population response becomes increasingly sparse, wave-like activity disappears, and variability and noise correlations are markedly reduced. Similar changes were observed in identified neuronal populations imaged repeatedly over days. Furthermore, experience with a moving stimulus was capable of driving a reduction in noise correlations over a matter of hours. These changes in variability and correlation contribute significantly to a marked improvement in direction discriminability over development.
Affiliation(s)
- Gordon B Smith
- Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, Florida, USA
- Audrey Sederberg
- Department of Physics, Princeton University, Princeton, New Jersey, USA
- Yishai M Elyada
- Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, Florida, USA
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany; Bernstein Focus: Neurotechnology Frankfurt, Frankfurt, Germany
- David Fitzpatrick
- Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, Florida, USA
14
Sadeh S, Rotter S. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics. PLoS Comput Biol 2015; 11:e1004045. [PMID: 25569445 PMCID: PMC4287576 DOI: 10.1371/journal.pcbi.1004045]
Abstract
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in the absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast-invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common mode, the amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier with the appropriate smooth input-output transfer function. These results are robust, do not depend on the state of network dynamics, and hold equally well for mean-driven and fluctuation-driven regimes of activity.
Affiliation(s)
- Sadra Sadeh
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
15
Lagzi F, Rotter S. A Markov model for the temporal dynamics of balanced random networks of finite size. Front Comput Neurosci 2014; 8:142. [PMID: 25520644 PMCID: PMC4253948 DOI: 10.3389/fncom.2014.00142]
Abstract
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyzing the joint dynamics of multiple interacting networks.
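The two-state neuron at the core of this model can be sketched in a few lines. The constant transition rates below are made-up placeholders; in the paper they are nonlinear functions of the excitatory and inhibitory input, identified from LIF network simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov neuron: a spike is the transition active -> refractory.
dt = 0.0005            # sub-millisecond time resolution (s)
steps = 20000          # 10 s of simulated time
r_spike = 20.0         # active -> refractory transition rate (1/s), placeholder
r_recover = 200.0      # refractory -> active transition rate (1/s), placeholder

active = True
n_spikes = 0
for _ in range(steps):
    if active:
        if rng.random() < r_spike * dt:    # spike: leave the active state
            active = False
            n_spikes += 1
    elif rng.random() < r_recover * dt:    # recover from refractoriness
        active = True
```

With these rates the neuron spends roughly 90% of its time active, so the effective firing rate sits a little below `r_spike`.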
Affiliation(s)
- Fereshteh Lagzi
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
16
Chariker L, Young LS. Emergent spike patterns in neuronal populations. J Comput Neurosci 2014; 38:203-20. [PMID: 25326365 DOI: 10.1007/s10827-014-0534-4]
Abstract
This numerical study documents and analyzes emergent spiking behavior in local neuronal populations. Emphasis is given to a phenomenon we call clustering, by which we refer to a tendency of random groups of neurons large and small to spontaneously coordinate their spiking activity in some fashion. Using a sparsely connected network of integrate-and-fire neurons, we demonstrate that spike clustering occurs ubiquitously in both high firing and low firing regimes. As a practical tool for quantifying such spike patterns, we propose a simple scheme with two parameters, one setting the temporal scale and the other the amount of deviation from the mean to be regarded as significant. Viewing population activity as a sequence of events, meaning relatively brief durations of elevated spiking, separated by inter-event times, we observe that background activity tends to give rise to extremely broad distributions of event sizes and inter-event times, while driving a system imposes a certain regularity on its inter-event times, producing a rhythm consistent with broad-band gamma oscillations. We note also that event sizes and inter-event times decorrelate very quickly. Dynamical analyses supported by numerical evidence are offered.
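The two-parameter event scheme described here can be illustrated on surrogate data. The bin width, the threshold parameter `k`, the Poisson background, and the injected burst are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Surrogate population spike counts per time bin; the temporal-scale
# parameter is the bin width used to produce these counts.
counts = rng.poisson(5.0, size=1000).astype(float)
counts[200:205] += 30.0        # an injected burst of elevated spiking

# Second parameter: deviation from the mean regarded as significant.
k = 3.0
threshold = counts.mean() + k * counts.std()
event_bins = np.flatnonzero(counts > threshold)

# Runs of supra-threshold bins form "events"; the gaps between them are
# the inter-event times whose distribution the paper analyzes.
inter_event = np.diff(event_bins)
```

The injected burst clears the threshold easily, while typical background bins do not; varying the bin width and `k` trades off sensitivity against false detections, which is exactly the role the abstract assigns to the two parameters.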
Affiliation(s)
- Logan Chariker
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
17
Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 2014; 82:1394-406. [PMID: 24945778 DOI: 10.1016/j.neuron.2014.04.045]
Abstract
Populations of neurons in motor cortex engage in complex transient dynamics of large amplitude during the execution of limb movements. Traditional network models with stochastically assigned synapses cannot reproduce this behavior. Here we introduce a class of cortical architectures with strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition, optimized from a control theory perspective. Such networks transiently amplify specific activity states and can be used to reliably execute multidimensional movement patterns. Similar to the experimental observations, these transients must be preceded by a steady-state initialization phase from which the network relaxes back into the background state by way of complex internal dynamics. In our networks, excitation and inhibition are as tightly balanced as recently reported in experiments across several brain areas, suggesting inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.
Affiliation(s)
- Guillaume Hennequin
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Tim P Vogels
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, UK
- Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
18
Moreno-Bote R. Poisson-like spiking in circuits with probabilistic synapses. PLoS Comput Biol 2014; 10:e1003522. [PMID: 25032705 PMCID: PMC4102400 DOI: 10.1371/journal.pcbi.1003522]
Abstract
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates, from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex.

Neurons in cortex fire irregularly and in an irreproducible way under repeated presentations of an identical stimulus. Where is this spiking variability coming from? One unexplored possibility is that cortical variability originates from the amplification of a particular type of noise that is present throughout cortex: synaptic failures. In this paper we found that probabilistic synapses are sufficient to lead to cortical-like firing over several orders of magnitude in firing rate. Moreover, the resulting variability displays the property that the variance of the spike counts is proportional to the mean for every cell in the network, the so-called Poisson-like firing, a well-known property of sensory cortical responses. We finally argue that, far from being harmful, probabilistic synapses allow networks to sample neuronal states and sustain probabilistic population codes. Therefore, synaptic noise is not only a robust mechanism for the type of variability found in cortex, but it also provides cortical circuits with the computational properties to perform probabilistic inference under noisy and ambiguous stimulation.
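One ingredient of the Fano-factor argument can be checked with a toy calculation (release probability, rates, and window are made-up numbers, and this is not the paper's recurrent network): Bernoulli release failures thin Poisson presynaptic spike counts, and a thinned Poisson count is again Poisson, so the variance-to-mean ratio of successful synaptic events stays near one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy numbers: 100 Poisson inputs passed through unreliable synapses.
p_release = 0.3      # synaptic release probability (placeholder)
n_inputs = 100
rate = 10.0          # presynaptic firing rate (spikes/s)
window = 0.1         # counting window (s)
trials = 5000

# Presynaptic spike counts per window, then Bernoulli-thinned by release:
pre = rng.poisson(rate * window, size=(trials, n_inputs))
released = rng.binomial(pre, p_release).sum(axis=1)

fano = released.var() / released.mean()   # variance/mean of event counts
```

The mean count per window is `n_inputs * rate * window * p_release = 30`, and `fano` comes out close to 1, the Poisson signature.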
Affiliation(s)
- Rubén Moreno-Bote
- Research Unit, Parc Sanitari Sant Joan de Déu and Universitat de Barcelona, Esplugues de Llobregat, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Esplugues de Llobregat, Barcelona, Spain
19
Yim MY, Kumar A, Aertsen A, Rotter S. Impact of correlated inputs to neurons: modeling observations from in vivo intracellular recordings. J Comput Neurosci 2014; 37:293-304. [PMID: 24789376 PMCID: PMC4159600 DOI: 10.1007/s10827-014-0502-z]
Abstract
In vivo recordings in rat somatosensory cortex suggest that excitatory and inhibitory inputs are often correlated during spontaneous and sensory-evoked activity. Using a computational approach, we study how the interplay of input correlations and timing observed in experiments controls the spiking probability of single neurons. Several correlation-based mechanisms are identified, which can effectively switch a neuron on and off. In addition, we investigate the transfer of input correlation to output correlation in pairs of neurons, at the spike train and the membrane potential levels, by considering spike-driving and non-spike-driving inputs separately. In particular, we propose a plausible explanation for the in vivo finding that membrane potentials in neighboring neurons are correlated, but the spike-triggered averages of membrane potentials preceding a spike are not: Neighboring neurons possibly receive an ongoing bombardment of correlated subthreshold background inputs, and occasionally uncorrelated spike-driving inputs.
Affiliation(s)
- Man Yi Yim
- Department of Mathematics, University of Hong Kong, Pokfulam Road, Hong Kong
20
Sadeh S, Cardanobile S, Rotter S. Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons. SpringerPlus 2014; 3:148. [PMID: 24790806 PMCID: PMC4003001 DOI: 10.1186/2193-1801-3-148]
Abstract
Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.
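The separation between common-mode attenuation and modulation transfer can be seen in a reduced linear rate sketch. This is not the paper's spiking network: the uniform inhibitory coupling matrix and the coupling value are illustrative. In this reduced setting, the uniform direction carries the exceptional eigenvalue w of the coupling matrix, so the baseline is attenuated by a factor 1/(1 - w), while the orientation-tuned modulation, orthogonal to the uniform mode, passes with gain one.

```python
import numpy as np

N = 8
w = -4.0                        # strong uniform inhibition (made-up value)
W = np.full((N, N), w / N)      # uniform coupling: exceptional eigenvalue w,
                                # all other eigenvalues zero

theta = np.arange(N) * 2 * np.pi / N
inp = 1.0 + 0.2 * np.cos(theta)        # baseline + orientation-tuned modulation

# Linear steady state of r = inp + W @ r:
r = np.linalg.solve(np.eye(N) - W, inp)

baseline_gain = r.mean() / inp.mean()                        # 1 / (1 - w)
mod_gain = (r - r.mean()).std() / (inp - inp.mean()).std()   # ~ 1
```

With w = -4 the baseline gain is 0.2: the unspecific common mode is suppressed fivefold while the informative modulation is untouched, a linear caricature of the selective attenuation described in the abstract.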
Affiliation(s)
- Sadra Sadeh
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastr. 9a, 79104 Freiburg, Germany
- Stefano Cardanobile
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastr. 9a, 79104 Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastr. 9a, 79104 Freiburg, Germany
21
The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLoS Comput Biol 2014; 10:e1003428. [PMID: 24453955 PMCID: PMC3894226 DOI: 10.1371/journal.pcbi.1003428]
Abstract
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network. This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.

The co-occurrence of action potentials of pairs of neurons within short time intervals has been known for a long time. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently, the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: one argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. Another relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent; only the latter correctly predicts the cell-type-specific correlations.
22
Kriener B, Helias M, Rotter S, Diesmann M, Einevoll GT. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Front Comput Neurosci 2014; 7:187. [PMID: 24501591 PMCID: PMC3882721 DOI: 10.3389/fncom.2013.00187]
Abstract
Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation-invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models that describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form, they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, provided the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight depends strongly on the characteristics of the neuronal input, i.e., on whether neurons are mean- or fluctuation-driven, and different limits in linearizing the full nonlinear system apply in order to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed-point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed-point rate, as well as the input mean and variance, are important parameters in the determination of the critical weight. We demonstrate that, interestingly, even in "intermediate" regimes, when the system is technically fluctuation-driven, the simple linearization neglecting the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness, introduced by rewiring individual synapses or redistributing weights, as well as of coarse-graining, on the formation of inhomogeneous activity patterns.
Affiliation(s)
- Birgit Kriener
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
- Stefan Rotter
- Faculty of Biology, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany; Medical Faculty, RWTH Aachen University, Aachen, Germany
- Gaute T Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
23
Lengler J, Jug F, Steger A. Reliable neuronal systems: the importance of heterogeneity. PLoS One 2013; 8:e80694. [PMID: 24324621 PMCID: PMC3851464 DOI: 10.1371/journal.pone.0080694]
Abstract
For every engineer it goes without saying: in order to build a reliable system we need components that consistently behave precisely as they should. It is also well known that neurons, the building blocks of brains, do not satisfy this constraint. Even neurons of the same type come with huge variances in their properties, and these properties also vary over time. Synapses, the connections between neurons, are highly unreliable in forwarding signals. In this paper we argue that both of these facts add variance to neuronal processes, and that this variance is not a handicap of neural systems; instead, predictable and reliable functional behavior of neural systems depends crucially on it. In particular, we show that higher variance allows a recurrently connected neural population to react more sensitively to incoming signals, and to process them faster and more energy-efficiently. This, for example, challenges the general assumption that the intrinsic variability of neurons in the brain is a defect that has to be overcome by synaptic plasticity in the process of learning.
Affiliation(s)
- Johannes Lengler
- Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Florian Jug
- Max-Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Angelika Steger
- Institute of Theoretical Computer Science, ETH Zürich, Zürich, Switzerland
- Collegium Helveticum, Zürich, Switzerland
24
Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Front Comput Neurosci 2013; 7:131. [PMID: 24151463 PMCID: PMC3799216 DOI: 10.3389/fncom.2013.00131]
Abstract
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein-Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
Affiliation(s)
- Dmytro Grytskyy
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
25
Quiroga-Lombard CS, Hass J, Durstewitz D. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation. J Neurophysiol 2013; 110:562-72. [PMID: 23636729 DOI: 10.1152/jn.00186.2013]
Abstract
Correlations among neurons are supposed to play an important role in computation and information coding in the nervous system. Empirically, functional interactions between neurons are most commonly assessed by cross-correlation functions. Recent studies have suggested that pairwise correlations may indeed be sufficient to capture most of the information present in neural interactions. Many applications of correlation functions, however, implicitly tend to assume that the underlying processes are stationary. This assumption will usually fail for real neurons recorded in vivo since their activity during behavioral tasks is heavily influenced by stimulus-, movement-, or cognition-related processes as well as by more general processes like slow oscillations or changes in state of alertness. To address the problem of nonstationarity, we introduce a method for assessing stationarity empirically and then “slicing” spike trains into stationary segments according to the statistical definition of weak-sense stationarity. We examine pairwise Pearson cross-correlations (PCCs) under both stationary and nonstationary conditions and identify another source of covariance that can be differentiated from the covariance of the spike times and emerges as a consequence of residual nonstationarities after the slicing process: the covariance of the firing rates defined on each segment. Based on this, a correction of the PCC is introduced that accounts for the effect of segmentation. We probe these methods both on simulated data sets and on in vivo recordings from the prefrontal cortex of behaving rats. Rather than for removing nonstationarities, the present method may also be used for detecting significant events in spike trains.
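The effect of nonstationarity on the Pearson cross-correlation (PCC) can be illustrated on surrogate data. The segment boundary, bin width, and rate step below are chosen by hand for the illustration; the paper instead locates stationary segments with an empirical weak-sense-stationarity test and derives a correction for residual rate covariance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two conditionally independent neurons sharing a nonstationary rate:
# 10 ms count bins, rate stepping from 5 to 20 spikes/s halfway through.
rate = np.concatenate([np.full(1000, 5.0), np.full(1000, 20.0)])
x = rng.poisson(rate * 0.01)
y = rng.poisson(rate * 0.01)

def pcc(a, b):
    """Pearson cross-correlation of two binned spike trains."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

naive = pcc(x, y)            # whole recording: the shared rate step
                             # contributes spurious rate covariance
by_segment = [pcc(x[:1000], y[:1000]),    # PCC within each (weak-sense)
              pcc(x[1000:], y[1000:])]    # stationary segment
```

The whole-recording PCC picks up covariance of the firing rates across segments, whereas the per-segment values reflect only spike-time covariance, which is (near) zero for these conditionally independent trains.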
Collapse
Affiliation(s)
- Claudio S. Quiroga-Lombard
- Bernstein Center for Computational Neuroscience, Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany
| | - Joachim Hass
- Bernstein Center for Computational Neuroscience, Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany
| | - Daniel Durstewitz
- Bernstein Center for Computational Neuroscience, Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, Mannheim, Germany
26
Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks. J Comput Neurosci 2013; 36:279-95. [PMID: 23851661 DOI: 10.1007/s10827-013-0472-6]
Abstract
Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous "multiple firing events" (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.
27
Deger M, Kumar A, Aertsen A, Rotter S. Linking neural mass signals and spike train statistics through point process and linear systems theory. BMC Neurosci 2013. [PMCID: PMC3704725 DOI: 10.1186/1471-2202-14-s1-p330]
28
Emergent dynamics in a model of visual cortex. J Comput Neurosci 2013; 35:155-67. [PMID: 23519442 PMCID: PMC3766520 DOI: 10.1007/s10827-013-0445-9]
Abstract
This paper proposes that the network dynamics of the mammalian visual cortex are highly structured and strongly shaped by temporally localized barrages of excitatory and inhibitory firing we call ‘multiple-firing events’ (MFEs). Our proposal is based on careful study of a network of spiking neurons built to reflect the coarse physiology of a small patch of layer 2/3 of V1. When appropriately benchmarked, this network is capable of reproducing the qualitative features of a range of phenomena observed in the real visual cortex, including spontaneous background patterns, orientation-specific responses, surround suppression and gamma-band oscillations. Detailed investigation into the relevant regimes reveals causal relationships among dynamical events driven by a strong competition between the excitatory and inhibitory populations. It suggests that, along with firing rates, MFE characteristics can be a powerful signature of a regime. Testable predictions based on model observations and dynamical analysis are proposed.
29
Abstract
Over the past 20 years, neuroimaging has become a predominant technique in systems neuroscience. One might envisage that over the next 20 years the neuroimaging of distributed processing and connectivity will play a major role in disclosing the brain's functional architecture and operational principles. The inception of this journal has been foreshadowed by an ever-increasing number of publications on functional connectivity, causal modeling, connectomics, and multivariate analyses of distributed patterns of brain responses. I accepted the invitation to write this review with great pleasure and hope to celebrate and critique the achievements to date, while addressing the challenges ahead.
Affiliation(s)
- Karl J Friston
- The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom.
30
Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb Cortex 2012. [PMID: 23203991 PMCID: PMC3920768 DOI: 10.1093/cercor/bhs358]
Abstract
In the past decade, the cell-type specific connectivity and activity of local cortical networks have been characterized experimentally to some detail. In parallel, modeling has been established as a tool to relate network structure to activity dynamics. While available comprehensive connectivity maps (Thomson, West, et al. 2002; Binzegger et al. 2004) have been used in various computational studies, prominent features of the simulated activity such as the spontaneous firing rates do not match the experimental findings. Here, we analyze the properties of these maps to compile an integrated connectivity map, which additionally incorporates insights on the specific selection of target types. Based on this integrated map, we build a full-scale spiking network model of the local cortical microcircuit. The simulated spontaneous activity is asynchronous irregular and cell-type specific firing rates are in agreement with in vivo recordings in awake animals, including the low rate of layer 2/3 excitatory cells. The interplay of excitation and inhibition captures the flow of activity through cortical layers after transient thalamic stimulation. In conclusion, the integration of a large body of the available connectivity data enables us to expose the dynamical consequences of the cortical microcircuitry.
Affiliation(s)
- Tobias C Potjans
- Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Juelich, Juelich, Germany
31
Rangan AV, Young LS. Dynamics of spiking neurons: between homogeneity and synchrony. J Comput Neurosci 2012; 34:433-60. [PMID: 23096934 DOI: 10.1007/s10827-012-0429-1]
Abstract
Randomly connected networks of neurons driven by Poisson inputs are often assumed to produce "homogeneous" dynamics, characterized by largely independent firing and approximable by diffusion processes. At the same time, it is well known that such networks can fire synchronously. Between these two much-studied scenarios lies a vastly complex dynamical landscape that is relatively unexplored. In this paper, we discuss a phenomenon that commonly manifests in these intermediate regimes, namely brief spurts of spiking activity which we call multiple firing events (MFE). These events depend neither on structured network architecture nor on structured input; they are an emergent property of the system. We came upon them in an earlier modeling paper, in which we discovered, through a careful benchmarking process, that MFEs are the single most important dynamical mechanism behind many of the V1 phenomena we were able to replicate. In this paper we explain in a simpler setting how MFEs come about, as well as their potential dynamic consequences. Although the mechanism underlying MFEs cannot easily be captured by current population dynamics models, this phenomenon should not be ignored during analysis; there is a growing body of evidence that such collaborative activity may be a key to unlocking the possible functional properties of many neuronal networks.
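The MFE notion described above, a brief spurt in which many neurons fire within a short window, can be operationalized with a simple population-count threshold. The following is a toy sketch under an assumed independent-Poisson null, not the authors' definition; all parameters are illustrative.

```python
import numpy as np

def detect_mfes(raster, threshold=None):
    """Flag candidate multiple-firing events (MFEs) in a binned 0/1 raster.

    raster: (n_neurons, n_bins) array. A bin is flagged when its population
    spike count exceeds a threshold far above the level expected if neurons
    fired independently (here, mean + 4 SD under a Poisson-like null).
    """
    counts = raster.sum(axis=0)
    if threshold is None:
        mu = counts.mean()
        threshold = mu + 4.0 * np.sqrt(mu)
    return np.flatnonzero(counts > threshold), counts

rng = np.random.default_rng(1)
n_neurons, n_bins = 200, 1000
raster = (rng.random((n_neurons, n_bins)) < 0.02).astype(int)  # background firing
raster[:, 500] = (rng.random(n_neurons) < 0.5).astype(int)     # one injected spurt
events, counts = detect_mfes(raster)
print(events)  # the injected bin (index 500) should be among the flagged bins
```

A threshold crossing of the population count is only a crude proxy; the papers in this list characterize MFEs dynamically, via cascades of recurrent excitation curtailed by inhibition.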
Affiliation(s)
- Aaditya V Rangan
- Courant Institute of Mathematical Sciences, New York University, New York, USA
32
Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 2012; 8:e1002596. [PMID: 23133368 PMCID: PMC3487539 DOI: 10.1371/journal.pcbi.1002596]
Abstract
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
Affiliation(s)
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Jülich, Germany.
33
Voges N, Perrinet L. Complex dynamics in recurrent cortical networks based on spatially realistic connectivities. Front Comput Neurosci 2012; 6:41. [PMID: 22787446 PMCID: PMC3392693 DOI: 10.3389/fncom.2012.00041]
Abstract
Most studies on the dynamics of recurrent cortical networks are either based on purely random wiring or neighborhood couplings. Neuronal cortical connectivity, however, shows a complex spatial pattern composed of local and remote patchy connections. We ask to what extent such geometric traits influence the “idle” dynamics of two-dimensional (2d) cortical network models composed of conductance-based integrate-and-fire (iaf) neurons. In contrast to the typical 1 mm2 used in most studies, we employ an enlarged spatial set-up of 25 mm2 to provide for long-range connections. Our models range from purely random to distance-dependent connectivities including patchy projections, i.e., spatially clustered synapses. Analyzing the characteristic measures for synchronicity and regularity in neuronal spiking, we explore and compare the phase spaces and activity patterns of our simulation results. Depending on the input parameters, different dynamical states appear, similar to the known synchronous regular “SR” or asynchronous irregular “AI” firing in random networks. Our structured networks, however, exhibit shifted and sharper transitions, as well as more complex activity patterns. Distance-dependent connectivity structures induce a spatio-temporal spread of activity, e.g., propagating waves, that random networks cannot account for. Spatially and temporally restricted activity injections reveal that a high amount of local coupling induces rather unstable AI dynamics. We find that the amount of local versus long-range connections is an important parameter, whereas the structurally advantageous wiring cost optimization of patchy networks has little bearing on the phase space.
Affiliation(s)
- N Voges
- Institut des Neurosciences de la Timone (INT), Aix-Marseille Université, CNRS (UMR 7289) Marseille, France
34
Hennequin G, Vogels TP, Gerstner W. Non-normal amplification in random balanced neuronal networks. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 86:011909. [PMID: 23005454 DOI: 10.1103/physreve.86.011909]
Abstract
In dynamical models of cortical networks, the recurrent connectivity can amplify the input given to the network in two distinct ways. One is induced by the presence of near-critical eigenvalues in the connectivity matrix W, producing large but slow activity fluctuations along the corresponding eigenvectors (dynamical slowing). The other relies on W not being normal, which allows the network activity to make large but fast excursions along specific directions. Here we investigate the trade-off between non-normal amplification and dynamical slowing in the spontaneous activity of large random neuronal networks composed of excitatory and inhibitory neurons. We use a Schur decomposition of W to separate the two amplification mechanisms. Assuming linear stochastic dynamics, we derive an exact expression for the expected amount of purely non-normal amplification. We find that amplification is very limited if dynamical slowing must be kept weak. We conclude that, to achieve strong transient amplification with little slowing, the connectivity must be structured. We show that unidirectional connections between neurons of the same type, together with reciprocal connections between neurons of different types, allow for amplification already in the fast dynamical regime. Finally, our results also shed light on the differences between balanced networks in which inhibition exactly cancels excitation and those where inhibition dominates.
Affiliation(s)
- Guillaume Hennequin
- School of Computer and Communication Sciences and Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, 1015 EPFL, Switzerland.
35

36
Pernice V, Staude B, Cardanobile S, Rotter S. Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:031916. [PMID: 22587132 DOI: 10.1103/physreve.85.031916]
Abstract
The population activity of random networks of excitatory and inhibitory leaky integrate-and-fire neurons has been studied extensively. In particular, a state of asynchronous activity with low firing rates and low pairwise correlations emerges in sparsely connected networks. We apply linear response theory to evaluate the influence of detailed network structure on neuron dynamics. It turns out that pairwise correlations induced by direct and indirect network connections can be related to the matrix of direct linear interactions. Furthermore, we study the influence of the characteristics of the neuron model. Interpreting the reset as self-inhibition, we examine its influence, via the spectrum of single-neuron activity, on network autocorrelation functions and the overall correlation level. The neuron model also affects the form of interaction kernels and consequently the time-dependent correlation functions. We find that a linear instability of networks with Erdös-Rényi topology coincides with a global transition to a highly correlated network state. Our work shows that recurrent interactions have a profound impact on spike train statistics and provides tools to study the effects of specific network topologies.
Affiliation(s)
- Volker Pernice
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany.
37
Lefebvre J, Perkins TJ. Neural population densities shape network correlations. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:021914. [PMID: 22463251 DOI: 10.1103/physreve.85.021914]
Abstract
The way sensory microcircuits manage cellular response correlations is a crucial question in understanding how such systems integrate external stimuli and encode information. Most sensory systems exhibit heterogeneities in terms of population sizes and features, which all impact their dynamics. This work addresses how correlations between the dynamics of neural ensembles depend on the relative size or density of excitatory and inhibitory populations. To do so, we study an apparently symmetric system of coupled stochastic differential equations that model the evolution of the populations' activities. Excitatory and inhibitory populations are connected by reciprocal recurrent connections, and both receive different stimuli exhibiting a certain level of correlation with each other. A stability analysis is performed, which reveals an intrinsic asymmetry in the distribution of the fixed points with respect to the threshold of the nonlinearities. Based on this, we show how the cross correlation between the population responses depends on the density of the inhibitory population, and that a specific ratio between both population sizes leads to a state of zero correlation. We show that this so-called asynchronous state subsists, despite the presence of stimulus correlation, and most importantly, that it occurs only in asymmetrical systems where one population outnumbers the other. Using linear approximations, we derive analytical expressions for the root of the cross-correlation function and study how the asynchronous state is impacted by the model's parameters. This work suggests a possible explanation for why inhibitory cells outnumber excitatory cells in the visual system.
Affiliation(s)
- Jérémie Lefebvre
- Ottawa Hospital Research Institute, 501 Smyth Road, Ottawa, Ontario K1H 8L6, Canada.
38
Abstract
It is a common and good practice in experimental sciences to assess the statistical significance of measured outcomes. For this, the probability of obtaining the actual results is estimated under the assumption of an appropriately chosen null-hypothesis. If this probability is smaller than some threshold, the results are deemed statistically significant and the researchers are content in having revealed, within their own experimental domain, a “surprising” anomaly, possibly indicative of a hitherto hidden fragment of the underlying “ground-truth”. What is often neglected, though, is the actual importance of these experimental outcomes for understanding the system under investigation. We illustrate this point by giving practical and intuitive examples from the field of systems neuroscience. Specifically, we use the notion of embeddedness to quantify the impact of a neuron's activity on its downstream neurons in the network. We show that the network response strongly depends on the embeddedness of stimulated neurons and that embeddedness is a key determinant of the importance of neuronal activity on local and downstream processing. We extrapolate these results to other fields in which networks are used as a theoretical framework.
39
Fox PT, Friston KJ. Distributed processing; distributed functions? Neuroimage 2012; 61:407-26. [PMID: 22245638 DOI: 10.1016/j.neuroimage.2011.12.051]
Abstract
After more than twenty years busily mapping the human brain, what have we learned from neuroimaging? This review (coda) considers this question from the point of view of structure-function relationships and the two cornerstones of functional neuroimaging: functional segregation and integration. Despite remarkable advances and insights into the brain's functional architecture, the earliest and simplest challenge in human brain mapping remains unresolved: We do not have a principled way to map brain function onto its structure in a way that speaks directly to cognitive neuroscience. Having said this, there are distinct clues about how this might be done: First, there is a growing appreciation of the role of functional integration in the distributed nature of neuronal processing. Second, there is an emerging interest in data-driven cognitive ontologies, i.e., ontologies that are internally consistent with functional anatomy. We will focus this review on the growing momentum in the fields of functional connectivity and distributed brain responses and consider this in the light of meta-analyses that use very large data sets to disclose large-scale structure-function mappings in the human brain.
Affiliation(s)
- Peter T Fox
- Research Imaging Institute and Department of Radiology, University of Texas Health Science Center at San Antonio, 7703 Floyd Curl Drive, San Antonio, TX, USA.
40
Cardanobile S, Rotter S. Emergent properties of interacting populations of spiking neurons. Front Comput Neurosci 2011; 5:59. [PMID: 22207844 PMCID: PMC3245521 DOI: 10.3389/fncom.2011.00059]
Abstract
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known by their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
41
Reitsma P, Doiron B, Rubin J. Correlation transfer from basal ganglia to thalamus in Parkinson's disease. Front Comput Neurosci 2011; 5:58. [PMID: 22355287 PMCID: PMC3280480 DOI: 10.3389/fncom.2011.00058]
Abstract
Spike trains from neurons in the basal ganglia of parkinsonian primates show increased pairwise correlations, oscillatory activity, and burst rate compared to those from neurons recorded during normal brain activity. However, it is not known how these changes affect the behavior of downstream thalamic neurons. To understand how patterns of basal ganglia population activity may affect thalamic spike statistics, we study pairs of model thalamocortical (TC) relay neurons receiving correlated inhibitory input from the internal segment of the globus pallidus (GPi), a primary output nucleus of the basal ganglia. We observe that the strength of correlations of TC neuron spike trains increases with the GPi correlation level, and bursty firing patterns such as those seen in the parkinsonian GPi allow for stronger transfer of correlations than do firing patterns found under normal conditions. We also show that the T-current in the TC neurons does not significantly affect correlation transfer, despite its pronounced effects on spiking. Oscillatory firing patterns in GPi are shown to affect the timescale at which correlations are best transferred through the system. To explain this last result, we analytically compute the spike count correlation coefficient for oscillatory cases in a reduced point process model. Our analysis indicates that the dependence of the timescale of correlation transfer is robust to different levels of input spike and rate correlations and arises due to differences in instantaneous spike correlations, even when the long timescale rhythmic modulations of neurons are identical. Overall, these results show that parkinsonian firing patterns in GPi do affect the transfer of correlations to the thalamus.
Affiliation(s)
- Pamela Reitsma
- Department of Mathematics, University of Pittsburgh Pittsburgh, PA, USA
42
Pernice V, Staude B, Cardanobile S, Rotter S. Effect of network structure on spike train correlations in networks of integrate-and-fire neurons. BMC Neurosci 2011. [PMCID: PMC3240381 DOI: 10.1186/1471-2202-12-s1-p272]
43
Enger H, Tetzlaff T, Kriener B, Gewaltig MO, Einevoll GT. Dynamics of self-sustained activity in random networks with strong synapses. BMC Neurosci 2011. [PMCID: PMC3240560 DOI: 10.1186/1471-2202-12-s1-p89]
44
Mäki-Marttunen T, Aćimović J, Nykter M, Kesseli J, Ruohonen K, Yli-Harja O, Linne ML. Information diversity in structure and dynamics of simulated neuronal networks. Front Comput Neurosci 2011; 5:26. [PMID: 21852970 PMCID: PMC3151619 DOI: 10.3389/fncom.2011.00026]
Abstract
Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
Affiliation(s)
- Tuomo Mäki-Marttunen
- Department of Signal Processing, Tampere University of Technology Tampere, Finland
45
Ledoux E, Brunel N. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Front Comput Neurosci 2011; 5:25. [PMID: 21647353 PMCID: PMC3103906 DOI: 10.3389/fncom.2011.00025]
Abstract
We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan type firing rate model; and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons, in a strong noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function, as a function of the connectivity. We find that the transfer function can be either low-pass, or with a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, that can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, due to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change qualitatively the network transfer function. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.
Affiliation(s)
- Erwan Ledoux
- Laboratory of Neurophysics and Physiology, UMR 8119, CNRS, Université Paris Descartes Paris, France
|
46
|
Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol 2011; 7:e1002059. [PMID: 21625580 PMCID: PMC3098224 DOI: 10.1371/journal.pcbi.1002059] [Citation(s) in RCA: 145] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2010] [Accepted: 04/01/2011] [Indexed: 11/19/2022] Open
Abstract
Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
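The power-series decomposition the abstract refers to can be made concrete for a linear (Hawkes-like) model: there the covariance matrix takes the form C = (I - G)^-1 Y (I - G)^-T, and expanding (I - G)^-1 = sum_k G^k attributes correlations to interaction paths of increasing length. A minimal numpy sketch, with a random network and parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
G = rng.normal(0.0, 0.1, (n, n))                  # effective interaction matrix
G *= 0.6 / np.abs(np.linalg.eigvals(G)).max()     # spectral radius 0.6 -> series converges

Y = np.diag(rng.uniform(0.5, 1.5, n))             # intrinsic (Poisson-like) variances
B = np.linalg.inv(np.eye(n) - G)
C_exact = B @ Y @ B.T                             # covariance of the linearly coupled units

# power-series decomposition: (I - G)^-1 = sum_k G^k, i.e. contributions
# of interaction paths of length 0, 1, 2, ...
S = np.zeros((n, n))
Gk = np.eye(n)
for _ in range(80):
    S += Gk
    Gk = Gk @ G
C_series = S @ Y @ S.T
```

Truncating the sum at small k isolates the effect of short motifs; the full series recovers the exact covariance whenever the spectral radius of G is below one.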
|
47
|
Roxin A. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons. Front Comput Neurosci 2011; 5:8. [PMID: 21556129 PMCID: PMC3058136 DOI: 10.3389/fncom.2011.00008] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2010] [Accepted: 02/07/2011] [Indexed: 12/03/2022] Open
Abstract
Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions, which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases, increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike-count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while CC of the total current is essentially unaffected as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
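A hedged sketch of the interpolation idea: mix samples from a narrow binomial in-degree distribution with samples from a truncated power law. The mixing scheme and exponent below are illustrative only; the paper's actual parametrization (which also keeps the mean degree matched across the interpolation) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k_mean = 5000, 50   # network size and mean in-degree (illustrative)

def sample_in_degrees(gamma, k_max=1000):
    """Sample n in-degrees; gamma=0 gives the binomial ensemble,
    gamma=1 a truncated power law (illustrative mixing, not the
    paper's mean-matched parametrization)."""
    binom = rng.binomial(n - 1, k_mean / (n - 1), size=n)
    ks = np.arange(1, k_max + 1)
    p = ks ** -1.5
    p /= p.sum()                        # truncated power-law pmf
    power = rng.choice(ks, size=n, p=p)
    pick = rng.random(n) < gamma        # per-node choice between ensembles
    return np.where(pick, power, binom)

narrow = sample_in_degrees(0.0)   # binomial: relatively narrow
broad = sample_in_degrees(1.0)    # power law: heavy-tailed
```

The heavy-tailed ensemble has a much wider spread of in-degrees at the same network size, which is the source of the input heterogeneity the rate model in the abstract accounts for.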
Affiliation(s)
- Alex Roxin
- Center for Theoretical Neuroscience, Columbia University, New York NY, USA
|
48
|
Rice A, Fuglevand AJ, Laine CM, Fregosi RF. Synchronization of presynaptic input to motor units of tongue, inspiratory intercostal, and diaphragm muscles. J Neurophysiol 2011; 105:2330-6. [PMID: 21307319 DOI: 10.1152/jn.01078.2010] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The respiratory central pattern generator distributes rhythmic excitatory input to phrenic, intercostal, and hypoglossal premotor neurons. The degree to which this input shapes motor neuron activity can vary across respiratory muscles and motor neuron pools. We evaluated the extent to which respiratory drive synchronizes the activation of motor unit pairs in tongue (genioglossus, hyoglossus) and chest-wall (diaphragm, external intercostals) muscles using coherence analysis. This is a frequency domain technique, which characterizes the frequency and relative strength of neural inputs that are common to each of the recorded motor units. We also examined coherence across the two tongue muscles, as our previous work shows that, despite being antagonists, they are strongly coactivated during the inspiratory phase, suggesting that excitatory input from the premotor neurons is distributed broadly throughout the hypoglossal motoneuron pool. All motor unit pairs showed highly correlated activity in the low-frequency range (1-8 Hz), reflecting the fundamental respiratory frequency and its harmonics. Coherence of motor unit pairs recorded either within or across the tongue muscles was similar, consistent with broadly distributed premotor input to the hypoglossal motoneuron pool. Interestingly, motor units from diaphragm and external intercostal muscles showed significantly higher coherence across the 10-20-Hz bandwidth than tongue-muscle units. We propose that the lower coherence in tongue-muscle motor units over this range reflects a larger constellation of presynaptic inputs, which collectively lead to a reduction in the coherence between hypoglossal motoneurons in this frequency band. This, in turn, may reflect the relative simplicity of the respiratory drive to the diaphragm and intercostal muscles, compared with the greater diversity of functions fulfilled by muscles of the tongue.
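The coherence analysis described above can be sketched with scipy: two noisy signals sharing a common slow drive show a coherence peak at the drive frequency. The signals and parameters below are synthetic stand-ins for the recorded motor-unit activity, not the study's data.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 200.0                             # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)           # 60 s of simulated activity
drive = np.sin(2 * np.pi * 2.0 * t)    # shared 2 Hz drive (stand-in for respiratory input)
unit_a = drive + rng.normal(0, 1, t.size)   # two "motor units" with independent noise
unit_b = drive + rng.normal(0, 1, t.size)

# Welch-averaged magnitude-squared coherence between the two units
f, Cxy = coherence(unit_a, unit_b, fs=fs, nperseg=1024)
f_at_peak = f[np.argmax(Cxy)]          # sits at the shared drive frequency
```

Coherence is high only at frequencies where the two units receive common input, which is exactly how the study separates shared low-frequency respiratory drive from unit-specific noise.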
Affiliation(s)
- Amber Rice
- Department of Physiology, The University of Arizona, Tucson, AZ 85721-0093, USA
|
49
|
Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. J Comput Neurosci 2011; 31:229-45. [PMID: 21222148 DOI: 10.1007/s10827-010-0310-z] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2009] [Revised: 10/26/2010] [Accepted: 12/20/2010] [Indexed: 10/18/2022]
Abstract
The relationship between the dynamics of neural networks and their patterns of connectivity is far from clear, despite its importance for understanding functional properties. Here, we have studied sparsely-connected networks of conductance-based integrate-and-fire (IF) neurons with balanced excitatory and inhibitory connections and with finite axonal propagation speed. We focused on the genesis of states with highly irregular spiking activity and synchronous firing patterns at low rates, called slow Synchronous Irregular (SI) states. In such balanced networks, we examined the "macroscopic" properties of the spiking activity, such as ensemble correlations and mean firing rates, for different intracortical connectivity profiles ranging from randomly connected networks to networks with Gaussian-distributed local connectivity. We systematically computed the distance-dependent correlations at the extracellular (spiking) and intracellular (membrane potential) levels between randomly assigned pairs of neurons. The main finding is that such properties, when they are averaged at a macroscopic scale, are invariant with respect to the different connectivity patterns, provided the excitatory-inhibitory balance is the same. In particular, the same correlation structure holds for different connectivity profiles. In addition, we examined the response of such networks to external input, and found that the correlation landscape can be modulated by the mean level of synchrony imposed by the external drive. This modulation was found again to be independent of the external connectivity profile. We conclude that first and second-order "mean-field" statistics of such networks do not depend on the details of the connectivity at a microscopic scale. This study is an encouraging step toward a mean-field description of topological neuronal networks.
|
50
|
Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks V: self-organization schemes and weight dependence. Biol Cybern 2010; 103:365-386. [PMID: 20882297 DOI: 10.1007/s00422-010-0405-7] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/25/2009] [Accepted: 08/23/2010] [Indexed: 05/29/2023]
Abstract
Spike-timing-dependent plasticity (STDP) determines the evolution of the synaptic weights according to their pre- and post-synaptic activity, which in turn changes the neuronal activity on a (much) slower time scale. This paper examines the effect of STDP in a recurrently connected network stimulated by external pools of input spike trains, where both input and recurrent synapses are plastic. Our previously developed theoretical framework is extended to incorporate weight-dependent STDP and dendritic delays. The weight dynamics is determined by an interplay between the neuronal activation mechanisms, the input spike-time correlations, and the learning parameters. For the case of two external input pools, the resulting learning scheme can exhibit a symmetry breaking of the input connections such that two neuronal groups emerge, each specialized to one input pool only. In addition, we show how the recurrent connections within each neuronal group can be strengthened by STDP at the expense of those between the two groups. This neuronal self-organization can be seen as a basic dynamical ingredient for the emergence of neuronal maps induced by activity-dependent plasticity.
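The weight-dependent STDP discussed above can be sketched as an exponentially windowed update whose amplitude depends on the current weight. The parametrization below, a common interpolation between additive and multiplicative rules via an exponent mu, is illustrative and not the paper's exact model.

```python
import numpy as np

def stdp_dw(dt_ms, w, a_plus=0.01, a_minus=0.012,
            tau=20.0, mu=0.5, w_max=1.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    mu interpolates between additive (mu=0) and multiplicative (mu=1)
    weight dependence; all parameters are illustrative."""
    if dt_ms > 0:   # pre before post: potentiation, limited by headroom
        return a_plus * (1.0 - w / w_max) ** mu * np.exp(-dt_ms / tau)
    # post before pre: depression, scaled by the current weight
    return -a_minus * (w / w_max) ** mu * np.exp(dt_ms / tau)
```

With mu > 0, potentiation shrinks as a weight approaches its bound while depression grows, which stabilizes the weight distribution; this weight dependence is one of the ingredients the paper adds to its earlier framework.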
Affiliation(s)
- Matthieu Gilson
- Department of Electrical and Electronic Engineering, University of Melbourne, Melbourne, VIC 3010, Australia.
|