1
Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591] [DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure, and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
2
Mandal S, Shrimali MD. Learning unidirectional coupling using an echo-state network. Phys Rev E 2023; 107:064205. [PMID: 37464638] [DOI: 10.1103/physreve.107.064205]
Abstract
Reservoir computing has found many potential applications in the field of complex dynamics. In this article, we explore the capability of the echo-state network (ESN) model to learn a unidirectional coupling scheme from only a few time series of the system. We show that, once trained on a few example dynamics of a drive-response system, the machine is able to predict the response system's dynamics for any driver signal with the same coupling. A few time series from an A-B type drive-response system are sufficient for the ESN to learn the coupling scheme during training. After training, even if we replace drive system A with a different system C, the ESN can reproduce the dynamics of response system B using only the dynamics of the new drive system C.
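A minimal sketch of the setup described in this abstract: an ESN with a ridge-regression readout, trained on one driver and tested on an unseen one. The reservoir size, leak rate, spectral radius, and the specific drive-response pair (a low-pass filter driven by a scalar signal) are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Echo-state network; sizes and scalings are illustrative assumptions ---
N, leak, rho = 200, 0.5, 0.9
Win = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= rho / max(abs(np.linalg.eigvals(W)))          # set spectral radius

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state matrix."""
    x, states = np.zeros(N), np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W @ x + Win[:, 0] * ut)
        states[t] = x
    return states

# --- Hypothetical drive-response pair: response low-pass filters the drive ---
def response(u, k=2.0, dt=0.05):
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = y[t - 1] + dt * (-y[t - 1] + k * u[t - 1])   # dy/dt = -y + k*u
    return y

steps = 4000
kern = np.exp(-0.5 * (np.arange(-50, 51) / 10.0) ** 2)
kern /= kern.sum()
drive_A = 3 * np.convolve(rng.normal(0, 1, steps), kern, mode="same")  # training driver
t = np.arange(steps) * 0.05
drive_C = np.sin(0.7 * t) * np.cos(0.2 * t)                            # unseen driver C

# Train a ridge readout to map reservoir states to the response of system B
S, Y = run_reservoir(drive_A)[200:], response(drive_A)[200:]           # drop transient
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ Y)

# Test: same A->B coupling, but a different drive system C
pred = run_reservoir(drive_C)[200:] @ Wout
true = response(drive_C)[200:]
rmse = float(np.sqrt(np.mean((pred - true) ** 2)))
print(f"test RMSE on unseen driver: {rmse:.4f}")
```

Because the readout is trained on a broadband driver, it learns the coupling itself rather than a single trajectory, which is why it transfers to the unseen driver C.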
3
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488] [PMCID: PMC9894562] [DOI: 10.1371/journal.pcbi.1010855]
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments, where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks, where the global connectivity matrix is known and, in particular, its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are interrelated and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics and the statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
4
Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol 2022; 18:e1010590. [DOI: 10.1371/journal.pcbi.1010590]
Abstract
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
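The central quantity in this abstract, the largest Lyapunov exponent of a driven rate network, can be estimated directly by tracking two nearby trajectories with repeated renormalisation (the Benettin method). The network below is a generic random rate network, not the balanced architecture of the paper, and the gain, drive amplitude, and sizes are illustrative assumptions; it shows the basic effect that a strong time-varying input suppresses internally generated chaos.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, dt, T = 200, 2.0, 0.05, 4000       # g > 1: chaotic when undriven (assumed sizes)
J = g * rng.normal(0, 1, (N, N)) / np.sqrt(N)

def lyapunov(amp, phases, freq=1.0, d0=1e-8):
    """Largest Lyapunov exponent of dx/dt = -x + J tanh(x) + amp*sin(freq*t + phases)."""
    x = rng.normal(0, 1, N)
    y = x + d0 * rng.normal(0, 1, N) / np.sqrt(N)
    lam = 0.0
    for t in range(T):
        I = amp * np.sin(freq * t * dt + phases)
        x = x + dt * (-x + J @ np.tanh(x) + I)
        y = y + dt * (-y + J @ np.tanh(y) + I)
        d = np.linalg.norm(y - x)
        lam += np.log(d / d0)
        y = x + (y - x) * (d0 / d)            # renormalise the separation
    return lam / (T * dt)

lam_free = lyapunov(0.0, np.zeros(N))                      # undriven: chaotic
lam_driven = lyapunov(3.0, rng.uniform(0, 2 * np.pi, N))   # strong independent-phase drive
print(f"undriven: lambda_max = {lam_free:+.3f}")
print(f"driven:   lambda_max = {lam_driven:+.3f}")
```

The non-stationary mean-field theory in this entry predicts how this exponent depends on input amplitude, frequency, and correlations; estimators of this kind are how such predictions are checked against simulation.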
5
Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. [PMID: 36056006] [PMCID: PMC9440011] [DOI: 10.1038/s41467-022-32646-w]
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity is due to Hlocal only requiring small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
6
Carugno G, Neri I, Vivo P. Instabilities of complex fluids with partially structured and partially random interactions. Phys Biol 2022; 19. [PMID: 35172289] [DOI: 10.1088/1478-3975/ac55f9]
Abstract
We develop a theory for thermodynamic instabilities of complex fluids composed of many interacting chemical species organised in families. The model includes partially structured and partially random interactions and can be solved exactly using tools from random matrix theory. It exhibits three kinds of fluid instabilities: one in which the species form a condensate with a local density that depends on their family (family condensation); one in which species demix into two phases depending on their family (family demixing); and one in which species demix in a random manner irrespective of their family (random demixing). We determine the critical spinodal density for the three types of instabilities and find that it is finite for both family condensation and family demixing, while for random demixing it grows as the square root of the number of species. We use the developed framework to describe a phase-separation instability of the cytoplasm induced by a change in pH.
Affiliation(s)
- Giorgio Carugno
- Mathematics, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Izaak Neri
- Mathematics, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Pierpaolo Vivo
- School of Natural and Mathematical Sciences, King's College London, Strand, London, WC2R 2LS, United Kingdom
7
Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. [PMID: 35049496] [PMCID: PMC8776256] [DOI: 10.7554/elife.68422]
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination even among neurons that are most likely not directly connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: we find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Collapse
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
- Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
- Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
8
Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700] [PMCID: PMC7864452] [DOI: 10.1371/journal.pcbi.1008621]
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
9
Zhang GH, Nelson DR. Eigenvalue repulsion and eigenvector localization in sparse non-Hermitian random matrices. Phys Rev E 2019; 100:052315. [PMID: 31870007] [DOI: 10.1103/physreve.100.052315]
Abstract
Complex networks with directed, local interactions are ubiquitous in nature and often occur with probabilistic connections due to both intrinsic stochasticity and disordered environments. Sparse non-Hermitian random matrices arise naturally in this context and are key to describing statistical properties of the nonequilibrium dynamics that emerges from interactions within the network structure. Here we study one-dimensional (1D) spatial structures and focus on sparse non-Hermitian random matrices in the spirit of tight-binding models in solid state physics. We first investigate two-point eigenvalue correlations in the complex plane for sparse non-Hermitian random matrices using methods developed for the statistical mechanics of inhomogeneous two-dimensional interacting particles. We find that eigenvalue repulsion in the complex plane directly correlates with eigenvector delocalization. In addition, for 1D chains and rings with both disordered nearest-neighbor connections and self-interactions, the self-interaction disorder tends to decorrelate eigenvalues and localize eigenvectors more than simple hopping disorder. However, remarkable resistance to eigenvector localization by disorder is provided by large cycles, such as those embodied in 1D periodic boundary conditions under strong directional bias. The directional bias also spatially separates the left and right eigenvectors, leading to interesting dynamics in excitation and response. These phenomena have important implications for asymmetric random networks and highlight a need for mathematical tools to describe and understand them analytically.
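One effect described above, that large cycles with strong directional bias resist eigenvector localization by on-site disorder, can be checked numerically on a small ring in the spirit of a Hatano-Nelson-type model; the ring size, disorder width, and bias value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, W, g = 200, 3.0, 1.0     # ring size, on-site disorder width, directional bias (assumed)

def ring_matrix(bias):
    """1D ring with diagonal disorder and asymmetric nearest-neighbour hopping."""
    M = np.diag(rng.uniform(-W / 2, W / 2, N))
    for i in range(N):
        M[(i + 1) % N, i] = np.exp(bias)    # hopping enhanced along the bias
        M[i, (i + 1) % N] = np.exp(-bias)   # hopping suppressed against the bias
    return M

def mean_ipr(M):
    """Mean inverse participation ratio of right eigenvectors (~1/N when delocalised)."""
    _, V = np.linalg.eig(M)
    P = np.abs(V) ** 2
    P /= P.sum(axis=0)
    return float(np.mean(np.sum(P ** 2, axis=0)))

ipr_hermitian = mean_ipr(ring_matrix(0.0))  # no bias: disorder localises eigenvectors
ipr_biased = mean_ipr(ring_matrix(g))       # strong bias on the ring resists localisation
print(f"mean IPR, no bias:     {ipr_hermitian:.3f}")
print(f"mean IPR, strong bias: {ipr_biased:.3f}")
```

The inverse participation ratio summarises eigenvector localization in a single number: it stays of order one for localized states and falls towards 1/N for delocalized ones.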
Affiliation(s)
- Grace H Zhang
- Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
- David R Nelson
- Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
10
Vegué M, Roxin A. Firing rate distributions in spiking networks with heterogeneous connectivity. Phys Rev E 2019; 100:022208. [PMID: 31574753] [DOI: 10.1103/physreve.100.022208]
Abstract
Mean-field theory for networks of spiking neurons based on the so-called diffusion approximation has been used to calculate certain measures of neuronal activity which can be compared with experimental data. This includes the distribution of firing rates across the network. However, the theory in its current form applies only to networks in which there is relatively little heterogeneity in the number of incoming and outgoing connections per neuron. Here we extend this theory to include networks with arbitrary degree distributions. Furthermore, the theory takes into account correlations in the in-degree and out-degree of neurons, which would arise, e.g., in the case of networks with hublike neurons. Finally, we show that networks with broad and positively correlated degrees can generate a large-amplitude sustained response to transient stimuli which does not occur in more homogeneous networks.
Affiliation(s)
- Marina Vegué
- Centre de Recerca Matemàtica, Bellaterra, Spain and Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain and Instituto de Neurociencias, Consejo Superior de Investigaciones Científicas y Universidad Miguel Hernández, Sant Joan d'Alacant, Alicante, Spain
- Alex Roxin
- Centre de Recerca Matemàtica, Bellaterra, Spain and Barcelona Graduate School of Mathematics, Barcelona, Spain
11
Merkt B, Schüßler F, Rotter S. Propagation of orientation selectivity in a spiking network model of layered primary visual cortex. PLoS Comput Biol 2019; 15:e1007080. [PMID: 31323031] [PMCID: PMC6641049] [DOI: 10.1371/journal.pcbi.1007080]
Abstract
Neurons in different layers of sensory cortex generally have different functional properties. But what determines firing rates and tuning properties of neurons in different layers? Orientation selectivity in primary visual cortex (V1) is an interesting case to study these questions. Thalamic projections essentially determine the preferred orientation of neurons that receive direct input. But how is this tuning propagated through layers, and how can selective responses emerge in layers that do not have direct access to the thalamus? Here we combine numerical simulations with mathematical analyses to address this problem. We find that a large-scale network, which just accounts for experimentally measured layer- and cell-type-specific connection probabilities, yields firing rates and orientation selectivities matching electrophysiological recordings in rodent V1 surprisingly well. Further analysis, however, is complicated by the fact that neuronal responses emerge in a dynamic fashion and cannot be directly inferred from static neuroanatomy, as some connections tend to have unintuitive effects due to recurrent interactions and strong feedback loops. These emergent phenomena can be understood by linearizing and coarse-graining. In fact, we were able to derive a low-dimensional linear dynamical system effectively describing stimulus-driven activity layer by layer. This low-dimensional system explains layer-specific firing rates and orientation tuning by accounting for the different gain factors of the aggregate system. Our theory can also be used to design novel optogenetic stimulation experiments, thus facilitating further exploration of the interplay between connectivity and function.
Understanding the precise roles of neuronal sub-populations in shaping the activity of networks is a fundamental objective of neuroscience research. In complex neuronal network structures like the neocortex, the relation between the connectome and the algorithm implemented in it is often not self-explanatory. To this end, our work makes three important contributions. First, we show that the connectivity extracted by anatomical and physiological experiments in visual cortex suffices to explain important properties of the various sub-populations, including their selectivity to visual stimulation. Second, we introduce a novel system-level approach for the analysis of input-output relations of recurrent networks, which leads to the observed activity patterns. Third, we present a method for the design of future optogenetic experiments that can be used to devise specific stimuli resulting in a predictable change of neuronal activity. In summary, we introduce a novel framework to determine the relevant features of neuronal microcircuit function that can be applied to a wide range of neuronal systems.
Affiliation(s)
- Benjamin Merkt
- Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Faculty of Biology, University of Freiburg, Freiburg, Germany
12
Ceni A, Ashwin P, Livi L. Interpreting Recurrent Neural Networks Behaviour via Excitable Network Attractors. Cognit Comput 2019. [DOI: 10.1007/s12559-019-09634-2]
13
Muir DR, Molina-Luna P, Roth MM, Helmchen F, Kampa BM. Specific excitatory connectivity for feature integration in mouse primary visual cortex. PLoS Comput Biol 2017; 13:e1005888. [PMID: 29240769] [PMCID: PMC5746254] [DOI: 10.1371/journal.pcbi.1005888]
Abstract
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a 'like-to-like' scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a 'feature binding' scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.
Affiliation(s)
- Dylan R. Muir
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Patricia Molina-Luna
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Morgane M. Roth
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Fritjof Helmchen
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Björn M. Kampa
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Department of Neurophysiology, Institute of Biology 2, RWTH Aachen University, Aachen, Germany
- JARA-BRAIN, Aachen, Germany
14
Piet AT, Erlich JC, Kopec CD, Brody CD. Rat Prefrontal Cortex Inactivations during Decision Making Are Explained by Bistable Attractor Dynamics. Neural Comput 2017; 29:2861-2886. [PMID: 28777728] [DOI: 10.1162/neco_a_01005]
Abstract
Two-node attractor networks are flexible models for neural activity during decision making. Depending on the network configuration, these networks can model distinct aspects of decisions, including evidence integration, evidence categorization, and decision memory. Here, we use attractor networks to model recent causal perturbations of the frontal orienting fields (FOF) in rat cortex during a perceptual decision-making task (Erlich, Brunton, Duan, Hanks, & Brody, 2015). We focus on a striking feature of the perturbation results: pharmacological silencing of the FOF resulted in a stimulus-independent bias. We fit several models to test whether integration, categorization, or decision memory could account for this bias and found that only the memory configuration successfully accounts for it. This memory model naturally accounts for optogenetic perturbations of FOF in the same task and correctly predicts a memory-duration-dependent deficit caused by silencing FOF in a different task. Our results provide mechanistic support for a "postcategorization" memory role of the FOF in upcoming choices.
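The memory configuration can be illustrated with a minimal two-node mutual-inhibition attractor; the weights, bias, and pulse duration below are illustrative choices, not the model parameters fitted in the paper. A transient evidence pulse flips the network into one of two stable states, and the chosen state persists after the stimulus ends:

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))     # sigmoidal rate function

def run(evidence, w_self=8.0, w_inh=6.0, bias=-2.0, dt=0.01, T=3000, pulse=500):
    """Two-node attractor; evidence > 0 transiently favours node 0."""
    r = np.array([0.5, 0.5])
    for t in range(T):
        ev = evidence if t < pulse else 0.0          # evidence only during the pulse
        I = np.array([bias + ev, bias - ev])
        # dr/dt = -r + f(self-excitation - cross-inhibition + input)
        r = r + dt * (-r + f(w_self * r - w_inh * r[::-1] + I))
    return r

left = run(+1.0)     # brief leftward-evidence pulse
right = run(-1.0)    # brief rightward-evidence pulse
print("final rates after left evidence: ", np.round(left, 3))
print("final rates after right evidence:", np.round(right, 3))
```

Because the winning state outlasts the stimulus, the network stores the categorized choice; a perturbation applied after the pulse (for instance, clamping one node's rate toward zero to mimic silencing) would bias the outcome independently of the stimulus.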
Affiliation(s)
- Alex T Piet
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
- Jeffrey C Erlich
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200122, China
- Charles D Kopec
- Princeton Neuroscience Institute and Department of Molecular Biology, Princeton University, Princeton, NJ 08544, U.S.A.
- Carlos D Brody
- Princeton Neuroscience Institute, Department of Molecular Biology, and Howard Hughes Medical Institute, Princeton University, Princeton, NJ 08544, U.S.A.
15
Abstract
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks' spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities, including those of different cell types, combine with connectivity to shape population activity and function. Neuronal networks, like many biological systems, exhibit variable activity. This activity is shaped by both the underlying biology of the component neurons and the structure of their interactions. How can we combine knowledge of these two things, that is, models of individual neurons and of their interactions, to predict the statistics of single- and multi-neuron activity? Current approaches rely on linearizing neural activity around a stationary state. In the face of neural nonlinearities, however, these linear methods can fail to predict spiking statistics and even fail to correctly predict whether activity is stable or pathological.
Here, we show how to calculate any spike train cumulant in a broad class of models, while systematically accounting for nonlinear effects. We then study a fundamental effect of nonlinear input-rate transfer, coupling between different orders of spiking statistics, and how this depends on single-neuron and network properties.
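The linear-response baseline that this expansion goes beyond can be sketched numerically: for a linearly interacting point-process (Hawkes-like) network, the tree-level prediction for long-timescale spike-count covariances follows from the propagator (I - W)^{-1}. The sketch below is illustrative only; the weight matrix, gain scale, and baseline rates are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# Hypothetical weakly coupled network: synaptic gain matrix W and baseline rates
W = rng.standard_normal((N, N)) * 0.1 / np.sqrt(N)   # spectral radius ~0.1, stable
r0 = np.full(N, 5.0)                                  # baseline firing rates

# Tree-level (linear-response) covariance of spike counts on long timescales:
# C = Delta diag(r0) Delta^T, with propagator Delta = (I - W)^{-1}.
# The diagrammatic fluctuation expansion adds loop corrections from single-neuron
# nonlinearities that this linear prediction misses.
Delta = np.linalg.inv(np.eye(N) - W)
C = Delta @ np.diag(r0) @ Delta.T

print(C.shape)
```

The diagonal of C gives single-neuron count variances and the off-diagonal entries the pairwise covariances generated by recurrent connectivity at linear order.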
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Krešimir Josić
- Department of Mathematics and Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
- Department of BioSciences, Rice University, Houston, Texas, United States of America
| | - Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, and UW Institute of Neuroengineering, University of Washington, Seattle, Washington, United States of America
| | - Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America

| |
Collapse
|
16
|
Mastrogiuseppe F, Ostojic S. Intrinsically-generated fluctuating activity in excitatory-inhibitory networks. PLoS Comput Biol 2017; 13:e1005498. [PMID: 28437436 PMCID: PMC5421821 DOI: 10.1371/journal.pcbi.1005498] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 05/08/2017] [Accepted: 04/04/2017] [Indexed: 12/05/2022] Open
Abstract
Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How such intrinsically generated fluctuations appear in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean-field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which the DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.
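A minimal sketch of the kind of model discussed here, a rate network with segregated excitatory and inhibitory populations and a positive, bounded transfer function, can be simulated directly. All parameters below (network size, coupling strengths, the activity bound) are illustrative choices, not those of the paper; the E and I coupling strengths are chosen so that excitation and inhibition balance on average.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 200, 0.05, 4000       # neurons, Euler step, number of steps (illustrative)
f = 0.8                          # fraction of excitatory neurons
g_E, g_I = 1.0, 4.0              # coupling strengths; chosen so E and I balance on average
phi = lambda x: np.clip(x, 0.0, 5.0)   # positive, bounded transfer function

# Random connectivity with segregated excitatory and inhibitory columns
nE = int(f * N)
J = np.empty((N, N))
J[:, :nE] = g_E * rng.random((N, nE)) / np.sqrt(N)        # excitatory inputs
J[:, nE:] = -g_I * rng.random((N, N - nE)) / np.sqrt(N)   # inhibitory inputs

x = rng.standard_normal(N)
rates = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + J @ phi(x))   # rate dynamics dx/dt = -x + J phi(x)
    rates[t] = phi(x)

print(rates[-1000:].mean(), rates[-1000:].std())
```

Increasing the overall coupling scale moves the network from the inhibition-stabilized fluctuating regime toward the regime where rates are held down only by the upper bound on phi.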
Collapse
Affiliation(s)
- Francesca Mastrogiuseppe
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France
- Laboratoire de Physique Statistique, CNRS UMR 8550, École Normale Supérieure - PSL Research University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France
| |
Collapse
|
17
|
Nykamp DQ, Friedman D, Shaker S, Shinn M, Vella M, Compte A, Roxin A. Mean-field equations for neuronal networks with arbitrary degree distributions. Phys Rev E 2017; 95:042323. [PMID: 28505854 DOI: 10.1103/physreve.95.042323] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Indexed: 06/07/2023]
Abstract
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies of network dynamics have assumed simple topologies, such as connections made randomly and independently with a fixed probability (an Erdős-Rényi, or ER, network) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the numbers of presynaptic and postsynaptic contacts for each neuron, the in- and out-degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
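A network with a joint in/out-degree distribution of the kind considered here can be sampled with a configuration-model construction: draw a correlated (in-degree, out-degree) pair per neuron, then randomly pair outgoing and incoming connection stubs. The sketch below assumes a Gaussian joint degree distribution with hypothetical parameters; it is not the construction from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
mean_deg, rho = 20.0, 0.5        # hypothetical mean degree and in/out-degree correlation

# Sample a joint (in-degree, out-degree) pair per neuron from a correlated Gaussian
cov = mean_deg * np.array([[1.0, rho], [rho, 1.0]])
deg = rng.multivariate_normal([mean_deg, mean_deg], cov, size=N)
k_in, k_out = np.maximum(np.rint(deg).astype(int), 0).T

# Configuration-model wiring: randomly pair outgoing and incoming stubs
out_stubs = np.repeat(np.arange(N), k_out)
in_stubs = np.repeat(np.arange(N), k_in)
rng.shuffle(out_stubs)
rng.shuffle(in_stubs)
m = min(out_stubs.size, in_stubs.size)
edges = np.stack([out_stubs[:m], in_stubs[:m]], axis=1)   # (pre, post) pairs

print(np.corrcoef(k_in, k_out)[0, 1])
```

Setting rho to zero recovers an uncorrelated-degree network, while rho near one makes hub neurons both strongly sending and strongly receiving, the kind of higher-order structure the mean-field equations account for.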
Collapse
Affiliation(s)
- Duane Q Nykamp
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Daniel Friedman
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Sammy Shaker
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Maxwell Shinn
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Michael Vella
- School of Mathematics, University of Minnesota 127 Vincent Hall, Minneapolis, Minnesota 55455, USA
| | - Albert Compte
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Carrer Rosselló 149, 08036 Barcelona, Spain
| | - Alex Roxin
- Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C 08193 Bellaterra, Spain
| |
Collapse
|
18
|
Kuczala A, Sharpee TO. Eigenvalue spectra of large correlated random matrices. Phys Rev E 2016; 94:050101. [PMID: 27967175 PMCID: PMC5161118 DOI: 10.1103/physreve.94.050101] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2016] [Indexed: 11/07/2022]
Abstract
Using the diagrammatic method, we derive a set of self-consistent equations that describe eigenvalue distributions of large correlated asymmetric random matrices. The matrix elements can have different variances and be correlated with each other. The analytical results are confirmed by numerical simulations. The results have implications for the dynamics of neural and other biological networks where plasticity induces correlations in the connection strengths within the network. We find that the presence of correlations can have a major impact on network stability.
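The simplest special case of such correlations, a uniform correlation between the reciprocal pair J[i, j] and J[j, i], recovers the classic elliptic-law spectrum and is easy to check numerically. The matrix size and the mixing parameter below are illustrative; the paper's self-consistent equations cover the more general case of element-dependent variances and correlations.

```python
import numpy as np

rng = np.random.default_rng(2)
N, tau = 1000, 0.5   # matrix size and mixing parameter (illustrative)

# Entries have variance 1/N and correlation eta = 2*tau/(1 + tau**2)
# between J[i, j] and J[j, i]
A = rng.standard_normal((N, N))
J = (A + tau * A.T) / np.sqrt((1.0 + tau**2) * N)

ev = np.linalg.eigvals(J)
eta = 2 * tau / (1 + tau**2)
# Elliptic law: eigenvalues fill an ellipse with semi-axes 1 + eta (real axis)
# and 1 - eta (imaginary axis); positive eta stretches the spectrum along the
# real axis, shifting the stability boundary of the associated linear dynamics.
print(eta, np.abs(ev.real).max(), np.abs(ev.imag).max())
```

This is the mechanism behind the stability effect noted in the abstract: plasticity-induced reciprocal correlations push eigenvalues outward along the real axis even when individual connection variances are unchanged.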
Collapse
Affiliation(s)
- Alexander Kuczala
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
| | - Tatyana O Sharpee
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037, USA and Department of Physics, University of California, San Diego, California 92161, USA
| |
Collapse
|