1
Holt CJ, Miller KD, Ahmadian Y. The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations. PLoS Comput Biol 2024; 20:e1012190. PMID: 38935792. DOI: 10.1371/journal.pcbi.1012190.
Abstract
When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column's receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN that, however, lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency.
We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.
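The mechanism invoked above rests on a supralinear, non-saturating input-output function. As a rough illustration, here is a minimal two-population (E, I) rate sketch using generic SSN-style parameters common in this modeling literature; it is not the paper's retinotopic model, and all values are illustrative.

```python
import numpy as np

# Two-population (E, I) stabilized supralinear network (SSN) sketch.
# Rates obey tau_a * dr_a/dt = -r_a + k * [sum_b W_ab r_b + c * g_a]_+ ** n,
# with a supralinear power-law transfer function (n > 1).

def simulate_ssn(contrast, T=0.5, dt=1e-4):
    k, n = 0.04, 2.0
    W = np.array([[2.5, -1.3],      # E <- E, E <- I
                  [2.4, -1.0]])     # I <- E, I <- I
    g = np.array([1.0, 1.0])        # external drive per unit contrast
    tau = np.array([0.020, 0.010])  # time constants (s); I faster than E
    r = np.zeros(2)                 # rates [r_E, r_I]
    for _ in range(int(T / dt)):
        inp = W @ r + contrast * g
        r = r + dt * (-r + k * np.maximum(inp, 0.0) ** n) / tau
    return r

low = simulate_ssn(10.0)    # low contrast
high = simulate_ssn(50.0)   # high contrast
```

Because the transfer function never saturates, the neurons' effective gain keeps growing with contrast; in the paper's account, this contrast-dependent change in effective recurrent coupling is what shifts the gamma resonance frequency.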
Affiliation(s)
- Caleb J Holt
- Department of Physics, Institute of Neuroscience, University of Oregon, Eugene, Oregon, United States of America
- Kenneth D Miller
- Department of Neuroscience, Center for Theoretical Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, and Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Yashar Ahmadian
- Department of Engineering, Computational and Biological Learning Lab, University of Cambridge, Cambridge, United Kingdom
2
Eckmann S, Young EJ, Gjorgjieva J. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks. Proc Natl Acad Sci U S A 2024; 121:e2305326121. PMID: 38870059. PMCID: PMC11194505. DOI: 10.1073/pnas.2305326121.
Abstract
Cortical networks exhibit complex stimulus-response patterns that are based on specific recurrent interactions between neurons. For example, the balance between excitatory and inhibitory currents has been identified as a central component of cortical computations. However, it remains unclear how the required synaptic connectivity can emerge in developing circuits where synapses between excitatory and inhibitory neurons are simultaneously plastic. Using theory and modeling, we propose that a wide range of cortical response properties can arise from a single plasticity paradigm that acts simultaneously at all excitatory and inhibitory connections: Hebbian learning that is stabilized by the synapse-type-specific competition for a limited supply of synaptic resources. In plastic recurrent circuits, this competition enables the formation and decorrelation of inhibition-balanced receptive fields. Networks develop an assembly structure with stronger synaptic connections between similarly tuned excitatory and inhibitory neurons and exhibit response normalization and orientation-specific center-surround suppression, reflecting the stimulus statistics during training. These results demonstrate how neurons can self-organize into functional networks and suggest an essential role for synapse-type-specific competitive learning in the development of cortical circuits.
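The competitive principle can be caricatured in a few lines: a single linear neuron with excitatory inputs, plain Hebbian potentiation, and a fixed total synaptic "budget" enforced by multiplicative rescaling. This sketch is only loosely inspired by the paper's synapse-type-specific rule; the two-group input statistics and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nE = 20
wE = np.full(nE, 1.0 / nE)   # excitatory weights sharing one resource pool
eta, total = 0.05, 1.0

for _ in range(2000):
    # Two input groups; group 0 (first half) is co-active more often.
    x = rng.random(nE) * 0.1          # background activity
    if rng.random() < 0.7:
        x[: nE // 2] += 1.0
    else:
        x[nE // 2 :] += 1.0
    y = wE @ x                        # linear postsynaptic rate
    wE += eta * y * x                 # Hebbian potentiation
    wE *= total / wE.sum()            # competition: limited total resource
```

Because potentiation is normalized against the shared resource pool, the more frequently co-active group outcompetes the other, forming a receptive field without any weight ever going negative.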
Affiliation(s)
- Samuel Eckmann
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Edward James Young
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
- School of Life Sciences, Technical University Munich, 85354 Freising, Germany
3
Saiki-Ishikawa A, Agrios M, Savya S, Forrest A, Sroussi H, Hsu S, Basrai D, Xu F, Miri A. Hierarchy between forelimb premotor and primary motor cortices and its manifestation in their firing patterns. bioRxiv 2024: 2023.09.23.559136. PMID: 38798685. PMCID: PMC11118350. DOI: 10.1101/2023.09.23.559136.
Abstract
Though hierarchy is commonly invoked in descriptions of motor cortical function, its presence and manifestation in firing patterns remain poorly resolved. Here we use optogenetic inactivation to demonstrate that short-latency influence between forelimb premotor and primary motor cortices is asymmetric during reaching in mice, revealing a partial hierarchy between the endogenous activity in each region. Multi-region recordings revealed that some activity is captured by similar but delayed patterns where either region's activity leads, with premotor activity leading more. Yet firing in each region is dominated by patterns shared between regions and is equally predictive of firing in the other region at the single-neuron level. In dual-region network models fit to data, regions differed in their dependence on across-region input, rather than in the amount of such input they received. Our results indicate that motor cortical hierarchy, while present, may not be exposed when inferring interactions between populations from firing patterns alone.
4
Koren V, Malerba SB, Schwalger T, Panzeri S. Structure, dynamics, coding and optimal biophysical parameters of efficient excitatory-inhibitory spiking networks. bioRxiv 2024: 2024.04.24.590955. PMID: 38712237. PMCID: PMC11071478. DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we rigorously derive the structural, coding, biophysical and dynamical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-stimulus-specific excitatory external input regulating metabolic cost. The efficient network has excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implementing feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal biophysical parameters include a 4-to-1 ratio of excitatory to inhibitory neurons and a 3-to-1 ratio of mean inhibitory-to-inhibitory vs. excitatory-to-inhibitory connectivity, closely matching those of cortical sensory networks. The efficient network has biologically plausible spiking dynamics, with a tight instantaneous E-I balance that makes it capable of efficiently encoding external stimuli varying over multiple time scales. Together, these results explain how efficient coding may be implemented in cortical networks and suggest that key properties of biological neural networks may be accounted for by efficient coding.
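The idea of deriving spiking from a coding objective can be illustrated with a classic greedy spike-coding toy model in the tradition this work builds on: a neuron fires only when its spike reduces the instantaneous reconstruction error. This is a sketch of the general approach, not the paper's derived E-I network; the signal, decoder weights, and constants are all invented.

```python
import numpy as np

# Greedy spike coding of a scalar signal x(t): readout x_hat decays with
# rate lam and jumps by a neuron's decoding weight d_i when it spikes.
# Neuron i's "voltage" is the projected coding error d_i * (x - x_hat);
# it spikes only if that exceeds d_i**2 / 2, i.e. if spiking lowers the
# squared error (x - x_hat)**2.

def run(N=20, T=2000, dt=1e-3, lam=10.0):
    d = np.where(np.arange(N) % 2 == 0, 0.1, -0.1)  # decoding weights
    x_hat = 0.0
    err_net = err_zero = 0.0
    for t in range(T):
        x = np.sin(2 * np.pi * t * dt)     # target signal
        x_hat += dt * (-lam * x_hat)       # leaky readout decay
        V = d * (x - x_hat)                # projected coding error
        i = int(np.argmax(V - d ** 2 / 2))
        if V[i] > d[i] ** 2 / 2:           # spike only if it helps
            x_hat += d[i]
        err_net += (x - x_hat) ** 2
        err_zero += x ** 2                 # error of a silent network
    return err_net / T, err_zero / T

e_net, e_zero = run()
```

The spiking network tracks the signal far better than a silent one, while each neuron stays quiet whenever its spike would not pay for itself, which is the efficiency intuition behind the paper's loss-function derivation.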
5
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. PMID: 38658028. DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
6
Waitzmann F, Wu YK, Gjorgjieva J. Top-down modulation in canonical cortical circuits with short-term plasticity. Proc Natl Acad Sci U S A 2024; 121:e2311040121. PMID: 38593083. PMCID: PMC11032497. DOI: 10.1073/pnas.2311040121.
Abstract
Cortical dynamics and computations are strongly influenced by diverse GABAergic interneurons, including those expressing parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP). Together with excitatory (E) neurons, they form a canonical microcircuit and exhibit counterintuitive nonlinear phenomena. One instance of such phenomena is response reversal, whereby SST neurons show opposite responses to top-down modulation via VIP depending on the presence of bottom-up sensory input, indicating that the network may function in different regimes under different stimulation conditions. Combining analytical and computational approaches, we demonstrate that model networks with multiple interneuron subtypes and experimentally identified short-term plasticity mechanisms can implement response reversal. Surprisingly, despite not directly affecting SST and VIP activity, PV-to-E short-term depression has a decisive impact on SST response reversal. We show how response reversal relates to inhibition stabilization and the paradoxical effect in the presence of several short-term plasticity mechanisms, demonstrating that response reversal coincides with a change in the indispensability of SST for network stabilization. In summary, our work suggests a role of short-term plasticity mechanisms in generating nonlinear phenomena in networks with multiple interneuron subtypes and makes several experimentally testable predictions.
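Short-term depression of the kind invoked here is commonly modeled with Tsodyks-Markram resource dynamics. The sketch below gives only the steady state of one depressing synapse with illustrative parameters, not the paper's four-population circuit.

```python
# Tsodyks-Markram short-term depression, steady state.
# The available resource x obeys dx/dt = (1 - x)/tau_rec - U * x * r, so at
# a constant presynaptic rate r, x_ss = 1 / (1 + U * tau_rec * r), and the
# effective synaptic efficacy U * x_ss falls as the rate rises.

def efficacy(rate, U=0.5, tau_rec=0.2):
    x_ss = 1.0 / (1.0 + U * tau_rec * rate)
    return U * x_ss

low = efficacy(5.0)    # weakly driven synapse
high = efficacy(50.0)  # strongly driven synapse: depressed
```

This rate-dependent efficacy is what allows a connection such as PV-to-E to reshape the circuit's operating regime across stimulation conditions without any change in anatomical weights.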
Affiliation(s)
- Felix Waitzmann
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Yue Kris Wu
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Julijana Gjorgjieva
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
7
Politi A, Torcini A. A robust balancing mechanism for spiking neural networks. Chaos 2024; 34:041102. PMID: 38639569. DOI: 10.1063/5.0199298.
Abstract
Dynamical balance of excitation and inhibition is usually invoked to explain the irregular, low-rate firing activity observed in the cortex. We propose a robust nonlinear balancing mechanism for a random network of spiking neurons, which also works in the absence of strong external currents. Biologically, the mechanism exploits the short-term depression of excitatory-excitatory synapses. Mathematically, the nonlinear response of the synaptic activity is the key ingredient responsible for the emergence of a stable balanced regime. Our claim is supported by a simple self-consistent analysis accompanied by extensive simulations performed for increasing network sizes. The observed regime is essentially fluctuation driven and characterized by highly irregular spiking dynamics of all neurons.
Affiliation(s)
- Antonio Politi
- Institute for Complex Systems and Mathematical Biology and Department of Physics, University of Aberdeen, Aberdeen AB24 3UE, United Kingdom
- CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy
- Alessandro Torcini
- CNR-Consiglio Nazionale delle Ricerche-Istituto dei Sistemi Complessi, via Madonna del Piano 10, 50019 Sesto Fiorentino, Italy
- Laboratoire de Physique Théorique et Modélisation, CY Cergy Paris Université, CNRS UMR 8089, 95302 Cergy-Pontoise cedex, France
- INFN Sezione di Firenze, Via Sansone 1, 50019 Sesto Fiorentino, Italy
8
Goris RLT, Coen-Cagli R, Miller KD, Priebe NJ, Lengyel M. Response sub-additivity and variability quenching in visual cortex. Nat Rev Neurosci 2024; 25:237-252. PMID: 38374462. DOI: 10.1038/s41583-024-00795-0.
Abstract
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
Affiliation(s)
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA.
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Department of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Swartz Program in Theoretical Neuroscience, Columbia University, New York, NY, USA
- Nicholas J Priebe
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
9
Liang J, Yang Z, Zhou C. Excitation-Inhibition Balance, Neural Criticality, and Activities in Neuronal Circuits. Neuroscientist 2024: 10738584231221766. PMID: 38291889. DOI: 10.1177/10738584231221766.
Abstract
Neural activities in local circuits exhibit complex and multilevel dynamic features. Individual neurons spike irregularly, which is believed to originate from receiving balanced amounts of excitatory and inhibitory inputs, known as the excitation-inhibition balance. The spatial-temporal cascades of clustered neuronal spikes occur in variable sizes and durations, manifested as neural avalanches with scale-free features. These may be explained by the neural criticality hypothesis, which posits that neural systems operate around the transition between distinct dynamic states. Here, we summarize the experimental evidence for and the underlying theory of excitation-inhibition balance and neural criticality. Furthermore, we review recent studies of excitatory-inhibitory networks with synaptic kinetics as a simple solution to reconcile these two apparently distinct theories in a single circuit model. This provides a more unified understanding of multilevel neural activities in local circuits, from spontaneous to stimulus-response dynamics.
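The scale-free avalanches mentioned above are often analyzed through the lens of a branching process, in which a control parameter sigma (the mean number of units activated per active unit) marks the critical point at sigma = 1. This is a generic caricature of the criticality hypothesis, not a model taken from the review; all constants are illustrative.

```python
import numpy as np

# Branching-process caricature of neural avalanches: each active unit
# activates Poisson(sigma) units at the next step. sigma < 1 is subcritical
# (small avalanches with mean size 1/(1 - sigma)); sigma = 1 is critical,
# where the size distribution becomes heavy-tailed.

def avalanche_sizes(sigma, n_trials=1000, cap=5000, seed=0):
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_trials):
        active, size = 1, 1
        while active > 0 and size < cap:   # cap avoids endless critical runs
            active = rng.poisson(sigma * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

sub = avalanche_sizes(0.5)    # subcritical: mean size near 2
crit = avalanche_sizes(1.0)   # critical: mean dominated by rare large events
```

At criticality the mean avalanche size is dominated by rare, very large events, which is the hallmark that avalanche-size statistics are used to detect in recordings.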
Affiliation(s)
- Junhao Liang
- Eberhard Karls University of Tübingen and Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Zhuda Yang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Life Science Imaging Centre, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Research Centre, Hong Kong Baptist University Institute of Research and Continuing Education, Shenzhen, China
10
Peters B, DiCarlo JJ, Gureckis T, Haefner R, Isik L, Tenenbaum J, Konkle T, Naselaris T, Stachenfeld K, Tavares Z, Tsao D, Yildirim I, Kriegeskorte N. How does the primate brain combine generative and discriminative computations in vision? arXiv 2024: arXiv:2401.06005v1. PMID: 38259351. PMCID: PMC10802669.
Abstract
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include, in roughly equal numbers, scientists rooted in each of the two conceptions, motivated to overcome what might be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
Affiliation(s)
- Benjamin Peters
- Zuckerman Mind Brain Behavior Institute, Columbia University
- School of Psychology & Neuroscience, University of Glasgow
- James J DiCarlo
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Quest for Intelligence, Schwarzman College of Computing, MIT
- Ralf Haefner
- Brain and Cognitive Sciences, University of Rochester
- Center for Visual Science, University of Rochester
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University
- Joshua Tenenbaum
- Department of Brain and Cognitive Sciences, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Computer Science and Artificial Intelligence Laboratory, MIT
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Natural and Artificial Intelligence, Harvard University
- Zenna Tavares
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Data Science Institute, Columbia University
- Doris Tsao
- Dept of Molecular & Cell Biology, University of California Berkeley
- Howard Hughes Medical Institute
- Ilker Yildirim
- Department of Psychology, Yale University
- Department of Statistics and Data Science, Yale University
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Department of Psychology, Columbia University
- Department of Neuroscience, Columbia University
- Department of Electrical Engineering, Columbia University
11
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact Analysis of the Subthreshold Variability for Conductance-Based Neuronal Models with Synchronous Synaptic Inputs. Phys Rev X 2024; 14:011021. PMID: 38911939. PMCID: PMC11194039. DOI: 10.1103/physrevx.14.011021.
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons lead to the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced-state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
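For intuition about why asynchronous input constrains subthreshold variance, the much simpler current-based shot-noise limit is useful: Campbell's theorem fixes the first two stationary moments of a passive membrane driven by a Poisson train. This is not the paper's conductance-based, synchrony-aware framework; the rate, weight, and time constant below are invented for illustration.

```python
import numpy as np

# Passive membrane tau * dV/dt = -V + tau * w * sum_k delta(t - t_k), driven
# by a Poisson train of rate r: each spike kicks V up by w.
# Campbell's theorem: mean V = r*w*tau, var V = r*w**2*tau/2.

def simulate(r=1000.0, w=0.1, tau=0.02, T=50.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    spikes = rng.poisson(r * dt, size=n)       # spike counts per time bin
    V = np.zeros(n)
    for t in range(1, n):
        V[t] = V[t - 1] * (1.0 - dt / tau) + w * spikes[t]
    burn = n // 10                             # discard initial transient
    return V[burn:].mean(), V[burn:].var()

mean_v, var_v = simulate()
# theory: mean = r*w*tau = 2.0, var = r*w**2*tau/2 = 0.1
```

Note that the variance scales as w²: halving w while doubling r preserves the mean but halves the variance, which is the intuition behind the vanishing-weight scaling limits in which, absent synchrony, subthreshold variability averages out to zero.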
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
- Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Mathematics, The University of Texas at Austin, Austin, Texas 78712, USA
12
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. arXiv 2023: arXiv:2304.09280v3. PMID: 37131877. PMCID: PMC10153295.
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state neurons fire independently from one another, so that the probability that a neuron experience synchronous synaptic inputs is exceedingly low. While the models of asynchronous neurons lead to observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance ≃ 4 - 9 m V 2 ) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. 
This result challenges the theoretical basis for mean-field theories of the asynchronous state.
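The headline comparison can be sketched with Campbell's theorem for filtered shot noise. This is a deliberately simplified current-based caricature, not the paper's conductance-based jump-process analysis, and all parameter values below are illustrative:

```python
def shot_noise_stats(rate_hz, n_syn, weight_mv, tau_s, sync=False):
    """Stationary voltage statistics (Campbell's theorem) for a leaky
    membrane receiving Poisson synaptic impulses. sync=True makes all
    n_syn synapses fire together as a single compound jump."""
    if sync:
        jump, jump_rate = n_syn * weight_mv, rate_hz     # rare, large jumps
    else:
        jump, jump_rate = weight_mv, n_syn * rate_hz     # frequent, small jumps
    mean = jump_rate * jump * tau_s                      # lambda * w * tau
    var = jump_rate * jump ** 2 * tau_s / 2              # lambda * w^2 * tau / 2
    return mean, var

# 1000 cortical-like synapses at 5 Hz, 0.05 mV jumps, 20 ms membrane
m_async, v_async = shot_noise_stats(5.0, 1000, 0.05, 0.02, sync=False)
m_sync, v_sync = shot_noise_stats(5.0, 1000, 0.05, 0.02, sync=True)
print(f"asynchronous: mean={m_async:.2f} mV, var={v_async:.3f} mV^2")
print(f"synchronous : mean={m_sync:.2f} mV, var={v_sync:.3f} mV^2")
```

With dense asynchronous inputs the stationary variance falls far below the 4-9 mV² range quoted above, while making the same synapses fire together inflates the variance by a factor of the synapse count at identical mean, which is the qualitative point of the abstract.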
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
- Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Department of Mathematics, The University of Texas at Austin

13
O'Rawe JF, Zhou Z, Li AJ, LaFosse PK, Goldbach HC, Histed MH. Excitation creates a distributed pattern of cortical suppression due to varied recurrent input. Neuron 2023; 111:4086-4101.e5. [PMID: 37865083] [PMCID: PMC10872553] [DOI: 10.1016/j.neuron.2023.09.010] [Received: 11/04/2022] [Revised: 05/14/2023] [Accepted: 09/08/2023]
Abstract
Dense local, recurrent connections are a major feature of cortical circuits, yet how they affect neurons' responses has been unclear, with some studies reporting weak recurrent effects, some reporting amplification, and others indicating local suppression. Here, we show that optogenetic input to mouse V1 excitatory neurons generates salt-and-pepper patterns of both excitation and suppression. Responses in individual neurons are not strongly predicted by that neuron's direct input. A balanced-state network model reconciles a set of diverse observations: the observed dynamics, suppressed responses, decoupling of input and output, and long tail of excited responses. The model shows recurrent excitatory-excitatory connections are strong and also variable across neurons. Together, these results demonstrate that excitatory recurrent connections can have major effects on cortical computations by shaping and changing neurons' responses to input.
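The core observation, that uniform excitatory drive can suppress some cells through recurrent input, already appears in a hand-built three-neuron linear rate model. The weights below are invented purely for illustration, not taken from the paper's balanced-state model:

```python
import numpy as np

# Toy linear rate network: two excitatory cells (E1, E2) and one
# inhibitory cell (I). Columns are presynaptic sources.
W = np.array([[0.0, 0.0, -2.0],    # E1 gets strong inhibition from I
              [0.0, 0.0, -0.5],    # E2 gets weak inhibition from I
              [1.0, 1.0,  0.0]])   # I pools both excitatory cells

h = np.ones(3)                     # uniform "optogenetic" drive to all cells
# Steady state of tau dr/dt = -r + W r + h  =>  delta_r = (I - W)^{-1} h
delta_r = np.linalg.solve(np.eye(3) - W, h)
print("rate changes (E1, E2, I):", delta_r)
```

E1 receives the same direct drive as E2, yet its steady-state rate change is negative because it receives stronger recurrent inhibition: the response is set by the recurrent input, not by the direct input.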
Affiliation(s)
- Jonathan F O'Rawe
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
- Zhishang Zhou
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
- Anna J Li
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
- Paul K LaFosse
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA; NIH-University of Maryland Graduate Partnerships Program, Bethesda, MD, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA
- Hannah C Goldbach
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
- Mark H Histed
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA

14
Sanzeni A, Palmigiano A, Nguyen TH, Luo J, Nassi JJ, Reynolds JH, Histed MH, Miller KD, Brunel N. Mechanisms underlying reshuffling of visual responses by optogenetic stimulation in mice and monkeys. Neuron 2023; 111:4102-4115.e9. [PMID: 37865082] [PMCID: PMC10841937] [DOI: 10.1016/j.neuron.2023.09.018] [Received: 11/03/2022] [Revised: 05/05/2023] [Accepted: 09/15/2023]
Abstract
The ability to optogenetically perturb neural circuits opens an unprecedented window into mechanisms governing circuit function. We analyzed and theoretically modeled neuronal responses to visual and optogenetic inputs in mouse and monkey V1. In both species, optogenetic stimulation of excitatory neurons strongly modulated the activity of single neurons yet had weak or no effects on the distribution of firing rates across the population. Thus, the optogenetic inputs reshuffled firing rates across the network. Key statistics of mouse and monkey responses lay on a continuum, with mice and monkeys occupying the low- and high-rate regions, respectively. We show that neuronal reshuffling emerges generically in randomly connected excitatory-inhibitory networks, provided the coupling strength (a combination of recurrent coupling and external input) is strong enough that powerful inhibitory feedback cancels the mean optogenetic input. A more realistic model, distinguishing tuned visual from untuned optogenetic input in a structured network, reduces the coupling strength needed to explain reshuffling.
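The cancellation argument can be illustrated with a two-unit linear rate model: as overall coupling grows, inhibitory feedback cancels an input targeting the excitatory population. The weight matrix is a hypothetical sketch, not the paper's fitted model:

```python
import numpy as np

def opto_response(g):
    """Steady-state rate change of a 2-unit (E, I) linear rate model to a
    unit 'optogenetic' input targeting E, at overall coupling strength g.
    Base weights are illustration values; the fixed point is stable for
    all g > 0 here (negative trace, positive determinant of I - gW)."""
    W = g * np.array([[1.0, -2.0],
                      [2.0, -1.5]])
    h = np.array([1.0, 0.0])            # opto drive to E only
    return np.linalg.solve(np.eye(2) - W, h)

weak, strong = opto_response(1.0), opto_response(10.0)
print("E-rate change, weak coupling :", weak[0])
print("E-rate change, strong coupling:", strong[0])
```

At ten times the coupling, the same drive moves the mean E rate an order of magnitude less: this is the sense in which strong inhibitory feedback "cancels" the mean optogenetic input, even though individual neurons in a heterogeneous network can still reshuffle.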
Affiliation(s)
- Alessandro Sanzeni
- Department of Computing Sciences, Bocconi University, 20100 Milan, Italy; Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neurobiology, Duke University, Durham, NC 27710, USA
- Agostina Palmigiano
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Tuan H Nguyen
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Physics, Columbia University, New York, NY 10027, USA
- Junxiang Luo
- Systems Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Jonathan J Nassi
- Systems Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- John H Reynolds
- Systems Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, CA 92037, USA
- Mark H Histed
- National Institute of Mental Health Intramural Program, NIH, Bethesda, MD 20814, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY 10027, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, NC 27710, USA; Department of Physics, Duke University, Durham, NC 27710, USA

15
Yoshida K, Toyoizumi T. Computational role of sleep in memory reorganization. Curr Opin Neurobiol 2023; 83:102799. [PMID: 37844426] [DOI: 10.1016/j.conb.2023.102799] [Received: 04/04/2023] [Revised: 09/07/2023] [Accepted: 09/21/2023]
Abstract
Sleep is considered to play an essential role in memory reorganization. Despite its importance, classical theoretical models have not focused on some characteristics of sleep. Here, we review recent theoretical approaches investigating the roles of these characteristics in learning and discuss the possibility that non-rapid eye movement (NREM) sleep selectively consolidates memory while rapid eye movement (REM) sleep reorganizes the representations of memories. We first review the possibility that slow waves during NREM sleep contribute to memory selection by using sequential firing patterns and the existence of up and down states. Second, we discuss the role of dreaming during REM sleep in developing neuronal representations. We finally discuss how to develop these points further, emphasizing connections to experimental neuroscience and machine learning.
Affiliation(s)
- Kensuke Yoshida
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

16
Barbosa J, Proville R, Rodgers CC, DeWeese MR, Ostojic S, Boubenec Y. Early selection of task-relevant features through population gating. Nat Commun 2023; 14:6837. [PMID: 37884507] [PMCID: PMC10603060] [DOI: 10.1038/s41467-023-42519-5] [Received: 10/24/2022] [Accepted: 10/12/2023]
Abstract
Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within primary auditory cortex (A1) but controlled by top-down inputs from the prelimbic region of the medial prefrontal cortex (mPFC), can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from mPFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
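A minimal sketch of population gating: two threshold-linear populations, each preferring one stimulus, with a top-down context bias deciding which one crosses threshold. All parameters are invented for illustration; in the paper the mechanism was identified in trained low-rank RNNs rather than built in by hand:

```python
def relu(x):
    return max(x, 0.0)

def a1_output(stim_a, stim_b, context):
    """Two A1-like threshold-linear populations gated by a top-down
    context bias (hypothetical parameters). context=+1 selects
    stimulus A, context=-1 selects stimulus B."""
    bias_a = 1.0 if context > 0 else -1.0   # mPFC-like top-down input
    bias_b = -bias_a
    pop_a = relu(stim_a + bias_a - 1.0)     # subthreshold unless biased up
    pop_b = relu(stim_b + bias_b - 1.0)
    return pop_a + pop_b                    # readout pools both populations

# Identical stimuli, opposite contexts: only the gated stimulus is passed on
out_ctx_a = a1_output(stim_a=0.8, stim_b=0.3, context=+1)
out_ctx_b = a1_output(stim_a=0.8, stim_b=0.3, context=-1)
print(out_ctx_a, out_ctx_b)
```

The same stimulus pair is read out as 0.8 in context A and 0.3 in context B: the context bias, not the feedforward input, determines which population transmits.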
Affiliation(s)
- Joao Barbosa
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005, Paris, France
- Rémi Proville
- Tailored Data Solutions, 192 Cours Gambetta, 84300, Cavaillon, France
- Chris C Rodgers
- Department of Neurosurgery, Emory University, Atlanta, GA, 30033, USA
- Michael R DeWeese
- Department of Physics, Helen Wills Neuroscience Institute, and Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005, Paris, France
- Yves Boubenec
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure PSL Research University, CNRS, Paris, France

17
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194] [PMCID: PMC10461857] [DOI: 10.1371/journal.pcbi.1011315] [Received: 11/25/2022] [Revised: 08/28/2023] [Accepted: 06/27/2023]
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints; in particular, they represent individual neurons as abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel nonlinear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
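The low-dimensionality that the paper shows carrying over to spiking networks can be previewed in the rate setting: with rank-one connectivity m nᵀ/N and external input along m, the dynamics stay confined to the line spanned by m. A minimal sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 200
m_vec = rng.normal(size=n_units)        # left (output) connectivity vector
n_vec = rng.normal(size=n_units)        # right (input-selection) vector
dt, steps = 0.01, 2000

x = np.zeros(n_units)
for _ in range(steps):
    kappa = n_vec @ np.tanh(x) / n_units              # 1-D latent variable
    x = x + dt * (-x + kappa * m_vec + 0.5 * m_vec)   # input along m_vec

# Activity stays confined to the line spanned by m_vec: the residual
# after projecting x onto m_vec is at numerical-noise level, because
# directions orthogonal to m_vec simply decay.
proj = (x @ m_vec) / (m_vec @ m_vec) * m_vec
residual = np.linalg.norm(x - proj)
print("||x|| =", np.linalg.norm(x), "residual =", residual)
```

The interesting question the paper addresses is how much of this clean geometry survives once the units emit discrete spikes on top of random excitatory-inhibitory connectivity.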
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France

18
Handy G, Borisyuk A. Investigating the ability of astrocytes to drive neural network synchrony. PLoS Comput Biol 2023; 19:e1011290. [PMID: 37556468] [PMCID: PMC10441806] [DOI: 10.1371/journal.pcbi.1011290] [Received: 09/30/2022] [Revised: 08/21/2023] [Accepted: 06/21/2023]
Abstract
Recent experimental works have implicated astrocytes as a significant cell type underlying several neuronal processes in the mammalian brain, from encoding sensory information to neurological disorders. Despite this progress, it is still unclear how astrocytes communicate with and drive their neuronal neighbors. While previous computational modeling works have helped propose mechanisms responsible for driving these interactions, they have primarily focused on interactions at the synaptic level, with microscale models of calcium dynamics and neurotransmitter diffusion. Since it is computationally infeasible to include the intricate microscale details in a network-scale model, little computational work has been done to understand how astrocytes may influence the spiking patterns and synchronization of large networks. We overcome this issue by first developing an "effective" astrocyte that can be easily incorporated into established network frameworks. We do this by showing that astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Thus, our "effective" astrocytes can be incorporated by considering heterogeneous synaptic time constants, which are parametrized only by the degree of astrocytic proximity at that synapse. We then apply our framework to large networks of exponential integrate-and-fire neurons with various spatial structures. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state and produce spatially correlated patterns.
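The "effective astrocyte" idea reduces to re-parametrizing each synapse by its degree of ensheathment. The scaling factors below are made-up illustration values, not the paper's calibrated ones; only the direction of the changes (faster, weaker, less reliable) follows the abstract:

```python
def ensheathed_synapse(tau_ms=5.0, weight=1.0, p_release=0.9, ensheathment=0.0):
    """'Effective astrocyte' parametrization (illustrative scalings):
    stronger ensheathment makes transmission faster (smaller tau),
    weaker (smaller weight), and less reliable (lower release prob)."""
    e = ensheathment                      # 0 = no astrocyte, 1 = fully wrapped
    tau = tau_ms * (1.0 - 0.5 * e)
    w = weight * (1.0 - 0.4 * e)
    p = p_release * (1.0 - 0.3 * e)
    psp_area = p * w * tau                # expected area of an exponential PSP
    return tau, w, p, psp_area

free = ensheathed_synapse(ensheathment=0.0)
wrapped = ensheathed_synapse(ensheathment=0.8)
print("free   :", free)
print("wrapped:", wrapped)
```

The payoff of this reduction is that a network simulation never touches calcium or diffusion microscale variables: each synapse just carries one extra scalar, its ensheathment degree.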
Affiliation(s)
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, Illinois, United States of America
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois, United States of America
- Alla Borisyuk
- Department of Mathematics, University of Utah, Salt Lake City, Utah, United States of America

19
Akitake B, Douglas HM, LaFosse PK, Beiran M, Deveau CE, O'Rawe J, Li AJ, Ryan LN, Duffy SP, Zhou Z, Deng Y, Rajan K, Histed MH. Amplified cortical neural responses as animals learn to use novel activity patterns. Curr Biol 2023; 33:2163-2174.e4. [PMID: 37148876] [DOI: 10.1016/j.cub.2023.04.032] [Received: 06/22/2022] [Revised: 02/09/2023] [Accepted: 04/14/2023]
Abstract
Cerebral cortex supports representations of the world in patterns of neural activity, used by the brain to make decisions and guide behavior. Past work has found diverse, or limited, changes in the primary sensory cortex in response to learning, suggesting that the key computations might occur in downstream regions. Alternatively, sensory cortical changes may be central to learning. We studied cortical learning by using controlled inputs we insert: we trained mice to recognize entirely novel, non-sensory patterns of cortical activity in the primary visual cortex (V1) created by optogenetic stimulation. As animals learned to use these novel patterns, we found that their detection abilities improved by an order of magnitude or more. The behavioral change was accompanied by large increases in V1 neural responses to fixed optogenetic input. Neural response amplification to novel optogenetic inputs had little effect on existing visual sensory responses. A recurrent cortical model shows that this amplification can be achieved by a small mean shift in recurrent network synaptic strength. Amplification would seem to be desirable to improve decision-making in a detection task; therefore, these results suggest that adult recurrent cortical plasticity plays a significant role in improving behavioral performance during learning.
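The claimed mechanism, amplification from a small mean shift in recurrent strength, is visible in the simplest possible linear rate network, where the steady-state gain to uniform input is 1/(1-w) for mean weight w. This is a toy stand-in for the paper's recurrent cortical model, not its actual fitted network:

```python
import numpy as np

def network_gain(mean_weight, n=100):
    """Response gain of a linear recurrent network r = (I - W)^{-1} h to
    a uniform input, with W a pure mean-coupling matrix."""
    W = np.full((n, n), mean_weight / n)      # every row sums to mean_weight
    h = np.ones(n)
    r = np.linalg.solve(np.eye(n) - W, h)
    return r.mean()                           # equals 1 / (1 - mean_weight)

before, after = network_gain(0.3), network_gain(0.6)
print(f"gain before learning: {before:.3f}, after: {after:.3f}")
```

Raising the mean weight from 0.3 to 0.6 increases the gain from about 1.43 to 2.5, illustrating how a modest shift in recurrent synaptic strength can produce a large amplification of responses to a fixed input.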
Affiliation(s)
- Bradley Akitake
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Hannah M Douglas
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Paul K LaFosse
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Manuel Beiran
- Nash Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ciana E Deveau
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Jonathan O'Rawe
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Anna J Li
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Lauren N Ryan
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Samuel P Duffy
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Zhishang Zhou
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Yanting Deng
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA
- Kanaka Rajan
- Nash Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Mark H Histed
- Unit on Neural Computation and Behavior, National Institute of Mental Health Intramural Program, National Institutes of Health, Bethesda, MD 20892, USA

20
Kim CM, Finkelstein A, Chow CC, Svoboda K, Darshan R. Distributing task-related neural activity across a cortical network through task-independent connections. Nat Commun 2023; 14:2851. [PMID: 37202424] [DOI: 10.1038/s41467-023-38529-y] [Received: 07/18/2022] [Accepted: 05/05/2023]
Abstract
Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. Task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading activity from a subset of plastic neurons to the entire network through task-independent strong synapses.
Affiliation(s)
- Christopher M Kim
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Arseny Finkelstein
- Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA

21
Ekelmans P, Kraynyukovas N, Tchumatchenko T. Targeting operational regimes of interest in recurrent neural networks. PLoS Comput Biol 2023; 19:e1011097. [PMID: 37186668] [DOI: 10.1371/journal.pcbi.1011097] [Received: 09/22/2022] [Revised: 05/25/2023] [Accepted: 04/11/2023]
Abstract
Neural computations emerge from local recurrent neural circuits or computational units such as cortical columns that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, for spiking networks, it is challenging to predict which connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions of the computational state of cortical spiking circuits are diverse, including the balanced state, where excitatory and inhibitory inputs balance almost perfectly, and the inhibition-stabilized network (ISN) state, where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We establish a mapping between the stabilized supralinear network (SSN) and spiking activity which allows us to pinpoint the location in parameter space where these activity regimes occur. Notably, we find that biologically sized spiking networks can exhibit irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input, and we show that the dynamic firing-rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
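One of the regimes mentioned, inhibitory stabilization, has a standard linear-algebra signature: extra input delivered only to inhibitory cells paradoxically lowers their steady-state rate. A sketch with illustrative weights (not the paper's parameters):

```python
import numpy as np

# Linear(ized) E-I rate model in the inhibition-stabilized regime: the
# E-to-E weight exceeds 1, so the excitatory subnetwork alone would be
# unstable, but feedback inhibition stabilizes the full circuit
# (trace of -I + W is negative, det of I - W is positive).
W = np.array([[1.5, -1.3],
              [2.0, -1.0]])
assert W[0, 0] > 1.0                      # ISN condition: E alone unstable

h_to_I = np.array([0.0, 1.0])             # extra input to inhibition only
delta_r = np.linalg.solve(np.eye(2) - W, h_to_I)
print("paradoxical effect: dr_E =", delta_r[0], " dr_I =", delta_r[1])
```

Driving inhibition withdraws excitatory drive to the inhibitory cells faster than the direct input adds to it, so both E and I rates fall; this paradoxical sign is the classic experimental test for the ISN regime.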
Affiliation(s)
- Pierre Ekelmans
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Nataliya Kraynyukovas
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Institute of Experimental Epileptology and Cognition Research, Life and Brain Center, Universitätsklinikum Bonn, Bonn, Germany
- Tatjana Tchumatchenko
- Theory of Neural Dynamics group, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Institute of Experimental Epileptology and Cognition Research, Life and Brain Center, Universitätsklinikum Bonn, Bonn, Germany
- Institute of Physiological Chemistry, Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany

22
Holt CJ, Miller KD, Ahmadian Y. The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations. bioRxiv 2023:2023.05.11.540442. [PMID: 37214812] [PMCID: PMC10197697] [DOI: 10.1101/2023.05.11.540442] [Indexed: 05/24/2023]
Abstract
When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power-spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column's receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN which however lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency. 
We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.
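The reduced-model mechanism can be sketched by linearizing a two-population SSN-like model: a supralinear transfer function makes the effective gain grow with the operating point, and the imaginary part of the Jacobian eigenvalues, i.e., the damped-oscillation (gamma) frequency, grows with that gain. The parameters below are illustrative rather than the paper's, with equal E and I time constants and the operating point crudely set proportional to contrast (recurrent corrections to the fixed point are ignored):

```python
import numpy as np

def gamma_freq_hz(contrast, k=0.04, n=2.0, tau_ms=10.0,
                  w_ee=1.0, w_ei=1.5, w_ie=1.5, w_ii=1.0):
    """Damped-oscillation frequency from linearizing a 2-population
    SSN-like model, tau dr/dt = -r + phi(W r + h), with power-law
    transfer phi(u) = k u^n. The gain phi'(u) = n k u^(n-1) grows with
    the operating point u, taken here simply proportional to contrast."""
    u = contrast                          # crude operating point ~ contrast
    gain = n * k * u ** (n - 1.0)         # supralinear => grows with contrast
    J = np.array([[gain * w_ee - 1.0, -gain * w_ei],
                  [gain * w_ie, -gain * w_ii - 1.0]]) / tau_ms
    eig = np.linalg.eigvals(J)
    assert np.all(eig.real < 0)           # stable fixed point: damped rhythm
    return np.abs(eig.imag).max() / (2 * np.pi) * 1000.0   # 1/ms -> Hz

low, high = gamma_freq_hz(25.0), gamma_freq_hz(50.0)
print(f"peak frequency: {low:.1f} Hz at low contrast, {high:.1f} Hz at high")
```

With these numbers the frequency moves from roughly 36 Hz at contrast 25 to roughly 71 Hz at contrast 50; in this equal-time-constant case the oscillation frequency is exactly proportional to the gain, so the non-saturating supralinear transfer function directly yields the contrast dependence of the gamma peak.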
Affiliation(s)
- Caleb J Holt
- Institute of Neuroscience, Department of Physics, University of Oregon, OR, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, and Dept. of Neuroscience, College of Physicians and Surgeons and Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, NY, USA
- Yashar Ahmadian
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK

23
Negrón A, Getz MP, Handy G, Doiron B. The mechanics of correlated variability in segregated cortical excitatory subnetworks. bioRxiv 2023:2023.04.25.538323. [PMID: 37162867] [PMCID: PMC10168290] [DOI: 10.1101/2023.04.25.538323] [Indexed: 05/11/2023]
Abstract
Understanding the genesis of shared trial-to-trial variability in neural activity within sensory cortex is critical to uncovering the biological basis of information processing in the brain. Shared variability is often a reflection of the structure of cortical connectivity, since this variability likely arises, in part, from local circuit inputs. A series of experiments on segregated networks of (excitatory) pyramidal neurons in mouse primary visual cortex challenges this view. Specifically, the across-network correlations were found to be larger than predicted given the known weak cross-network connectivity. We aim to uncover the circuit mechanisms responsible for these enhanced correlations through biologically motivated cortical circuit models. Our central finding is that coupling each excitatory subpopulation with a specific inhibitory subpopulation provides the most robust network-intrinsic solution for shaping these enhanced correlations. This result argues for the existence of excitatory-inhibitory functional assemblies in early sensory areas which mirror not just response properties but also connectivity between pyramidal cells.
Affiliation(s)
- Alex Negrón
- Department of Applied Mathematics, Illinois Institute of Technology
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Matthew P. Getz
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago

24
Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol 2022; 18:e1010590. [DOI: 10.1371/journal.pcbi.1010590]
Abstract
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
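The core phenomenon, internally generated chaos in a random firing-rate network and its suppression by sufficiently strong independent input, can be sketched with a standard largest-Lyapunov-exponent estimate. Network size, gain, and input parameters below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 200, 2.0, 0.1
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random coupling, gain g > 1
phases = rng.uniform(0.0, 2.0 * np.pi, N)          # independent input phase per neuron

def lyapunov_estimate(amp, steps=3000, eps=1e-6):
    """Largest Lyapunov exponent of x' = -x + J tanh(x) + input, estimated by
    tracking the growth of a small perturbation with per-step renormalization."""
    x = 0.1 * np.ones(N)
    y = x.copy()
    y[0] += eps
    acc = 0.0
    for t in range(steps):
        u = amp * np.sin(0.5 * t * dt + phases)    # phase-scrambled (independent) drive
        x = x + dt * (-x + J @ np.tanh(x) + u)
        y = y + dt * (-y + J @ np.tanh(y) + u)
        d = np.linalg.norm(y - x)
        acc += np.log(d / eps)
        y = x + (eps / d) * (y - x)                # renormalize the perturbation
    return acc / (steps * dt)

lam_auto = lyapunov_estimate(0.0)     # autonomous network: chaotic for g > 1
lam_driven = lyapunov_estimate(5.0)   # strong independent drive reduces the exponent
```

Replacing the independent phases with a single common phase reproduces the paper's harder case: common input is partially canceled by the recurrent feedback and suppresses chaos less effectively.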
25
Miehl C, Gjorgjieva J. Stability and learning in excitatory synapses by nonlinear inhibitory plasticity. PLoS Comput Biol 2022; 18:e1010682. [PMID: 36459503] [PMCID: PMC9718420] [DOI: 10.1371/journal.pcbi.1010682]
Abstract
Synaptic changes are hypothesized to underlie learning and memory formation in the brain. But Hebbian synaptic plasticity of excitatory synapses on its own is unstable, leading to either unlimited growth of synaptic strengths or silencing of neuronal activity without additional homeostatic mechanisms. To control excitatory synaptic strengths, we propose a novel form of synaptic plasticity at inhibitory synapses. Using computational modeling, we suggest two key features of inhibitory plasticity: dominance of inhibition over excitation, and a nonlinear dependence on the firing rate of postsynaptic excitatory neurons whereby inhibitory synaptic strengths change with the same sign (potentiate or depress) as excitatory synaptic strengths. We demonstrate that the stable synaptic strengths realized by this novel inhibitory plasticity model affect excitatory/inhibitory weight ratios in agreement with experimental results. Applying a disinhibitory signal can gate plasticity and lead to the generation of receptive fields and strong bidirectional connectivity in a recurrent network. Hence, a novel form of nonlinear inhibitory plasticity can simultaneously stabilize excitatory synaptic strengths and enable learning upon disinhibition.
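A heavily simplified, single-neuron caricature of the proposed mechanism (not the paper's actual model): Hebbian excitatory plasticity alone drives runaway growth, while an inhibitory rule whose sign flips nonlinearly around a target rate pins the postsynaptic rate. All parameters and the specific rule below are illustrative assumptions; the sketch stabilizes the rate, whereas the paper's full model additionally stabilizes the weights:

```python
# Single rectified-linear neuron with one plastic E and one plastic I synapse.
x_e, x_i = 2.0, 1.0            # fixed presynaptic E and I rates (illustrative)
eta_e, eta_i = 0.001, 0.02     # inhibition dominates (eta_i >> eta_e)
theta = 5.0                    # rate threshold where the inhibitory rule changes sign
w_e, w_i = 3.0, 0.5

for _ in range(2000):
    r = max(0.0, w_e * x_e - w_i * x_i)     # rectified-linear postsynaptic rate
    w_e += eta_e * x_e * r                  # Hebbian E plasticity: unstable on its own
    w_i += eta_i * x_i * r * (r - theta)    # nonlinear I rule: potentiates above theta,
                                            # depresses below it (same sign as E changes)

r_final = max(0.0, w_e * x_e - w_i * x_i)
r_star = theta + eta_e * x_e**2 / (eta_i * x_i**2)   # predicted fixed-point rate
```

The rate converges to r_star because, above it, inhibitory potentiation outpaces Hebbian excitatory growth, and below it the inhibitory rule depresses.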
Affiliation(s)
- Christoph Miehl
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
- Julijana Gjorgjieva
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany

26
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435] [PMCID: PMC10151026] [DOI: 10.1016/j.visres.2022.108131]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibration there are (and how we decide); how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems Computational Biology, and Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx NY
- Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK

27
Ordering in heterogeneous connectome weights for visual information processing. Proc Natl Acad Sci U S A 2022; 119:e2216092119. [PMID: 36409900] [PMCID: PMC9860139] [DOI: 10.1073/pnas.2216092119]
28
Wagatsuma N, Nobukawa S, Fukai T. A microcircuit model involving parvalbumin, somatostatin, and vasoactive intestinal polypeptide inhibitory interneurons for the modulation of neuronal oscillation during visual processing. Cereb Cortex 2022; 33:4459-4477. [PMID: 36130096] [PMCID: PMC10110453] [DOI: 10.1093/cercor/bhac355]
Abstract
Various subtypes of inhibitory interneurons contact one another to organize cortical networks. Most cortical inhibitory interneurons express one of three genes: parvalbumin (PV), somatostatin (SOM), or vasoactive intestinal polypeptide (VIP). This diversity of inhibition allows the flexible regulation of neuronal responses within and between cortical areas. However, the exact roles of these interneuron subtypes and of excitatory pyramidal (Pyr) neurons in regulating neuronal network activity and establishing perception (via interactions between feedforward sensory and feedback attentional signals) remain largely unknown. To explore the regulatory roles of distinct neuronal types in cortical computation, we developed a computational microcircuit model of visual cortex layers 2/3 that combined biologically plausible Pyr neurons and the three inhibitory interneuron subtypes to generate network activity. In simulations with our model, inhibitory signals from PV and SOM neurons preferentially induced neuronal firing at gamma (30-80 Hz) and beta (20-30 Hz) frequencies, respectively, in agreement with observed physiological results. Furthermore, our model indicated that rapid inhibition from VIP to SOM subtypes underlies the marked attentional modulation of Pyr neuron responses at low-gamma frequencies (30-50 Hz). Our results suggest distinct but cooperative roles of inhibitory interneuron subtypes in the establishment of visual perception.
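The association of fast (PV-like) inhibition with gamma and slower (SOM-like) inhibition with beta can be illustrated by linearizing a two-population E-I rate model: the imaginary part of the Jacobian's eigenvalues sets the oscillation frequency, and it falls as the inhibitory time constant grows. Parameters below are illustrative, not fitted to the paper's microcircuit:

```python
import numpy as np

tau_e = 0.006                                  # excitatory time constant (s)
w_ee, w_ei, w_ie, w_ii = 1.2, 2.0, 2.0, 0.0    # illustrative weights

def damped_oscillation(tau_i):
    """Eigenvalues of the linearized E-I rate model
    tau_e E' = -E + w_ee E - w_ei I,  tau_i I' = -I + w_ie E - w_ii I."""
    A = np.array([[(w_ee - 1.0) / tau_e, -w_ei / tau_e],
                  [w_ie / tau_i,         -(1.0 + w_ii) / tau_i]])
    lam = np.linalg.eigvals(A)
    freq_hz = np.abs(lam.imag).max() / (2.0 * np.pi)
    return freq_hz, lam.real.max()

f_fast, re_fast = damped_oscillation(0.004)   # PV-like: fast inhibition, gamma-range
f_slow, re_slow = damped_oscillation(0.025)   # SOM-like: slow inhibition, beta-range
```

Negative real parts mean the oscillation is damped (a resonance rather than a limit cycle), which is the linear counterpart of a noisy rhythmic peak in the power spectrum.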
Affiliation(s)
- Nobuhiko Wagatsuma
- Faculty of Science, Toho University, 2-2-1 Miyama, Funabashi, Chiba 274-8510, Japan
- Sou Nobukawa
- Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba 275-0016, Japan; Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo 187-8502, Japan
- Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Kunigami-gun, Okinawa 904-0495, Japan

29
Wang B, Aljadeff J. Multiplicative Shot-Noise: A New Route to Stability of Plastic Networks. Phys Rev Lett 2022; 129:068101. [PMID: 36018633] [DOI: 10.1103/physrevlett.129.068101]
Abstract
Fluctuations of synaptic weights, among many other physical, biological, and ecological quantities, are driven by coincident events of two "parent" processes. We propose a multiplicative shot-noise model that can capture the behaviors of a broad range of such natural phenomena, and analytically derive an approximation that accurately predicts its statistics. We apply our results to study the effects of a multiplicative synaptic plasticity rule that was recently extracted from measurements in physiological conditions. Using mean-field theory analysis and network simulations, we investigate how this rule shapes the connectivity and dynamics of recurrent spiking neural networks. The multiplicative plasticity rule is shown to support efficient learning of input stimuli, and it gives a stable, unimodal synaptic-weight distribution with a large fraction of strong synapses. The strong synapses remain stable over long times but do not "run away." Our results suggest that multiplicative shot-noise offers a new route to understanding the tradeoff between flexibility and stability in neural circuits and other dynamic networks.
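The basic ingredient, weight updates triggered by coincidences of two parent point processes with multiplicative (weight-proportional) amplitudes, can be sketched as follows; the coincidence count is checked against the elementary Poisson prediction r1*r2*dt*T. Rates and the learning parameter are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 500.0                 # time bin (s) and total duration (s)
n = int(T / dt)
r1 = r2 = 20.0                      # rates of the two independent "parent" processes (Hz)

s1 = rng.random(n) < r1 * dt        # Bernoulli approximation of Poisson trains
s2 = rng.random(n) < r2 * dt
n_coinc = int(np.count_nonzero(s1 & s2))
expected = r1 * r2 * dt * T         # elementary prediction for same-bin coincidences

# Multiplicative update: each coincidence scales the weight by (1 + eta),
# so the jump size is proportional to the current weight.
eta, w0 = 0.05, 1.0
w_final = w0 * (1.0 + eta) ** n_coinc
```

In the multiplicative case the log-weight performs a random walk driven by the coincidence process, which is what makes the stationary statistics analytically tractable.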
Affiliation(s)
- Bin Wang
- Department of Physics, University of California San Diego, La Jolla, California 92093, USA
- Johnatan Aljadeff
- Department of Neurobiology, University of California San Diego, La Jolla, California 92093, USA

30
Mean-field limits for non-linear Hawkes processes with excitation and inhibition. Stoch Process Their Appl 2022. [DOI: 10.1016/j.spa.2022.07.006]
31
Chew KCM, Kumar V, Tan AYY. Different Excitation-Inhibition Correlations Between Spontaneous and Tone-evoked Activity in Primary Auditory Cortex Neurons. Neuroscience 2022; 496:205-218. [PMID: 35728764] [DOI: 10.1016/j.neuroscience.2022.06.022]
Abstract
Tone-evoked synaptic excitation and inhibition are highly correlated in many neurons with V-shaped tuning curves in the primary auditory cortex of pentobarbital-anesthetized rats. In contrast, there is less correlation between spontaneous excitation and inhibition in visual cortex neurons under the same anesthetic conditions. However, it was not known whether the primary auditory cortex resembles visual cortex in having spontaneous excitation and inhibition that are less correlated than tone-evoked excitation and inhibition. Here we report whole-cell voltage-clamp measurements of spontaneous excitation and inhibition in primary auditory cortex neurons of pentobarbital-anesthetized rats. Spontaneous excitatory and inhibitory currents appeared to mainly consist of distinct events, with the inhibitory event rate typically lower than the excitatory event rate. We use the ratio of the excitatory event rate to the inhibitory event rate, and the assumption that the excitatory and inhibitory synaptic currents can each be reasonably described as a filtered Poisson process, to estimate the maximum spontaneous excitatory-inhibitory correlation for each neuron. In a subset of neurons, we also measured tone-evoked excitation and inhibition. In neurons with V-shaped tuning curves, although tone-evoked excitation and inhibition were highly correlated, the spontaneous inhibitory event rate was typically sufficiently lower than the spontaneous excitatory event rate to indicate a lower excitatory-inhibitory correlation for spontaneous activity than for tone-evoked responses.
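The estimation logic in the abstract, bounding the excitatory-inhibitory correlation by the ratio of event rates under a filtered-Poisson model, can be sketched by simulation: if inhibitory events are a thinned copy (fraction p) of the excitatory events, the correlation of the identically filtered traces is sqrt(p) = sqrt(rI/rE). Rates, the filter, and p below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, tau = 1e-3, 400.0, 0.010           # bin width, duration, synaptic filter constant (s)
n = int(T / dt)
r_e, p = 50.0, 0.5                        # E event rate (Hz); I events = thinned E events

spk_e = rng.random(n) < r_e * dt          # excitatory event train (Bernoulli approx.)
spk_i = spk_e & (rng.random(n) < p)       # inhibitory events: shared, thinned copy

kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)   # exponential synaptic filter
g_e = np.convolve(spk_e.astype(float), kernel)[:n]
g_i = np.convolve(spk_i.astype(float), kernel)[:n]

rho = np.corrcoef(g_e, g_i)[0, 1]         # should approach sqrt(p)
```

By Campbell's theorem the covariance comes only from the shared events, giving rho = p*rE / sqrt(rE * p*rE) = sqrt(p), which is why the event-rate ratio upper-bounds the achievable correlation.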
Affiliation(s)
- Katherine C M Chew
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.
- Vineet Kumar
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.
- Andrew Y Y Tan
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.

32
Evaluating the extent to which homeostatic plasticity learns to compute prediction errors in unstructured neuronal networks. J Comput Neurosci 2022; 50:357-373. [PMID: 35657570] [DOI: 10.1007/s10827-022-00820-0]
Abstract
The brain is believed to operate in part by making predictions about sensory stimuli and encoding deviations from these predictions in the activity of "prediction error neurons." This principle defines the widely influential theory of predictive coding. The precise circuitry and plasticity mechanisms through which animals learn to compute and update their predictions are unknown. Homeostatic inhibitory synaptic plasticity is a promising mechanism for training neuronal networks to perform predictive coding. Homeostatic plasticity causes neurons to maintain a steady, baseline firing rate in response to inputs that closely match the inputs on which a network was trained, but firing rates can deviate from this baseline in response to stimuli that are mismatched from training. We systematically combine computer simulations and mathematical analysis to test the extent to which randomly connected, unstructured networks compute prediction errors after training with homeostatic inhibitory synaptic plasticity. We find that homeostatic plasticity alone is sufficient for computing prediction errors for trivial time-constant stimuli, but not for more realistic time-varying stimuli. We use a mean-field theory of plastic networks to explain our findings and characterize the assumptions under which they apply.
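The mechanism in the first half of the abstract, homeostatic inhibitory plasticity driving the response to a trained stimulus back to a baseline rate so that mismatched stimuli evoke deviations, reduces for a single rectified-linear unit with one plastic inhibitory weight to a few lines (all numbers illustrative):

```python
w_exc = [1.0, 1.0]                 # fixed excitatory weights
x_train = [2.0, 1.0]               # trained stimulus
x_test = [3.0, 1.0]                # mismatched stimulus
y_inh, r_target, eta = 1.0, 1.0, 0.1
w_inh = 0.0

def rate(x, w_inh):
    """Rectified-linear response to stimulus x given inhibitory weight w_inh."""
    drive = sum(w * xi for w, xi in zip(w_exc, x)) - w_inh * y_inh
    return max(0.0, drive)

# Homeostatic inhibitory plasticity: push the trained response toward r_target.
for _ in range(200):
    w_inh += eta * y_inh * (rate(x_train, w_inh) - r_target)

r_trained = rate(x_train, w_inh)   # near baseline: no prediction error
r_mismatch = rate(x_test, w_inh)   # deviates from baseline: prediction error signal
```

For the constant stimulus this converges geometrically to the baseline, which is exactly the "trivial time-constant" case the paper finds homeostasis can solve; time-varying stimuli break this simple picture.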
33
Layer M, Senk J, Essink S, van Meegen A, Bos H, Helias M. NNMT: Mean-Field Based Analysis Tools for Neuronal Network Models. Front Neuroinform 2022; 16:835657. [PMID: 35712677] [PMCID: PMC9196133] [DOI: 10.3389/fninf.2022.835657]
Abstract
Mean-field theory of neuronal networks has led to numerous advances in our analytical and intuitive understanding of their dynamics during the past decades. In order to make mean-field based analysis tools more accessible, we implemented an extensible, easy-to-use open-source Python toolbox that collects a variety of mean-field methods for the leaky integrate-and-fire neuron model. The Neuronal Network Mean-field Toolbox (NNMT) in its current state allows for estimating properties of large neuronal networks, such as firing rates, power spectra, and dynamical stability in mean-field and linear response approximation, without running simulations. In this article, we describe how the toolbox is implemented, show how it is used to reproduce results of previous studies, and discuss different use-cases, such as parameter space explorations, or mapping different network models. Although the initial version of the toolbox focuses on methods for leaky integrate-and-fire neurons, its structure is designed to be open and extensible. It aims to provide a platform for collecting analytical methods for neuronal network model analysis, such that the neuroscientific community can take maximal advantage of them.
Affiliation(s)
- Moritz Layer
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Simon Essink
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Institute of Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
- Hannah Bos
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany

34
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. Phys Rev X 2022; 12:011011. [PMID: 36545030] [PMCID: PMC9762509] [DOI: 10.1103/physrevx.12.011011]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system into strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing ML practitioners with a map for principled parameter initialization choices.
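The timescale claim, that an update gate z interpolates between fast relaxation and near-integrator behavior, can be seen in the gated leak update h <- (1-z)h + z*tanh(.) with the input removed: the decay factor per step is (1-z), so the effective time constant is roughly 1/z for small z. A minimal sketch:

```python
import math

def steps_to_decay(z, h0=1.0):
    """Steps for the gated state h <- (1 - z) * h (input off, tanh(0) = 0)
    to fall below h0/e; roughly 1/z for small gate values z."""
    h, n = h0, 0
    while h > h0 / math.e:
        h = (1.0 - z) * h
        n += 1
    return n

n_slow = steps_to_decay(0.02)   # nearly closed update gate: long memory
n_fast = steps_to_decay(0.5)    # open gate: fast relaxation
```

Taking z toward zero makes the decay factor approach one, which is the marginally stable, integrator-like limit discussed in the abstract.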
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
- Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
- David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA

35
Kumaravelu K, Sombeck J, Miller LE, Bensmaia SJ, Grill WM. Stoney vs. Histed: Quantifying the spatial effects of intracortical microstimulation. Brain Stimul 2022; 15:141-151. [PMID: 34861412] [PMCID: PMC8816873] [DOI: 10.1016/j.brs.2021.11.015]
Abstract
BACKGROUND Intracortical microstimulation (ICMS) is used to map neural circuits and restore lost sensory modalities such as vision, hearing, and somatosensation. The spatial effects of ICMS remain controversial: Stoney and colleagues proposed that the volume of somatic activation increases with stimulation intensity, while Histed et al. suggested that activation density, but not somatic activation volume, increases with stimulation intensity. OBJECTIVE We used computational modeling to quantify the spatial effects of ICMS intensity and unify the apparently paradoxical findings of Histed and Stoney. METHODS We implemented a biophysically based computational model of a cortical column comprising neurons with realistic morphology and representative synapses. We quantified the spatial effects of single pulses and short trains of ICMS, including the volume of activated neurons and the density of activated neurons as a function of stimulation intensity. RESULTS At all amplitudes, the dominant mode of somatic activation was antidromic propagation to the soma following axonal activation, rather than transsynaptic activation. There were no occurrences of direct activation of somata or dendrites. The volume over which antidromic action potentials were initiated grew with stimulation amplitude, while the volume of somatic activation increased only marginally. However, the density of somatic activation within the activated volume increased with stimulation amplitude. CONCLUSIONS The results resolve the apparent paradox between Stoney's and Histed's results by demonstrating that the volume over which action potentials are initiated grows with ICMS amplitude, consistent with Stoney. However, the volume occupied by the activated somata remains approximately constant, while the density of activated neurons within that volume increases, consistent with Histed.
Affiliation(s)
- Joseph Sombeck
- Department of Physiology, Northwestern University, Chicago, IL; Department of Biomedical Engineering, Northwestern University, Chicago, IL
- Lee E. Miller
- Department of Physiology, Northwestern University, Chicago, IL; Department of Biomedical Engineering, Northwestern University, Chicago, IL; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL
- Sliman J. Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL; Committee on Computational Neuroscience, University of Chicago, Chicago, IL; Neuroscience Institute, University of Chicago, Chicago, IL
- Warren M. Grill
- Department of Biomedical Engineering, Duke University, Durham, NC; Department of Electrical and Computer Engineering, Duke University, Durham, NC; Department of Neurobiology, Duke University, Durham, NC; Department of Neurosurgery, Duke University, Durham, NC; Correspondence: Warren M. Grill, Ph.D., Duke University, Department of Biomedical Engineering, Rm. 1427, Fitzpatrick CIEMAS, 101 Science Drive, Campus Box 90281, Durham, NC, 27708, USA; Phone: 919 660-5276; Fax: 919 684-4488

36
Sanzeni A, Histed MH, Brunel N. Emergence of Irregular Activity in Networks of Strongly Coupled Conductance-Based Neurons. Phys Rev X 2022; 12:011044. [PMID: 35923858] [PMCID: PMC9344604] [DOI: 10.1103/physrevx.12.011044]
Abstract
Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e., if the mean number of synapses per neuron K is large and synaptic efficacy is of the order of 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of the order of 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine-tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.
Affiliation(s)
- A. Sanzeni
- Center for Theoretical Neuroscience, Columbia University, New York, New York, USA
- Department of Neurobiology, Duke University, Durham, North Carolina, USA
- National Institute of Mental Health Intramural Program, NIH, Bethesda, Maryland, USA
- M. H. Histed
- National Institute of Mental Health Intramural Program, NIH, Bethesda, Maryland, USA
- N. Brunel
- Department of Neurobiology, Duke University, Durham, North Carolina, USA
- Department of Physics, Duke University, Durham, North Carolina, USA

37
Bryson A, Berkovic SF, Petrou S, Grayden DB. State transitions through inhibitory interneurons in a cortical network model. PLoS Comput Biol 2021; 17:e1009521. [PMID: 34653178] [PMCID: PMC8550371] [DOI: 10.1371/journal.pcbi.1009521]
Abstract
Inhibitory interneurons shape the spiking characteristics and computational properties of cortical networks. Interneuron subtypes can precisely regulate cortical function, but the roles of these subtypes in promoting different regimes of cortical activity remain unclear. Therefore, we investigated the impact of fast spiking and non-fast spiking interneuron subtypes on cortical activity using a network model with connectivity and synaptic properties constrained by experimental data. We found that network properties were more sensitive to modulation of the fast spiking population, with reductions of fast spiking excitability generating strong spike correlations and network oscillations. Paradoxically, reduced fast spiking excitability produced a reduction of global excitation-inhibition balance and features of an inhibition-stabilised network, in which firing rates were driven by the activity of excitatory neurons within the network. Further analysis revealed that the synaptic interactions and biophysical features associated with fast spiking interneurons, in particular their rapid intrinsic response properties and short synaptic latency, enabled this state transition by enhancing gain within the excitatory population. Therefore, fast spiking interneurons may be uniquely positioned to control the strength of recurrent excitatory connectivity and the transition to an inhibition-stabilised regime. Overall, our results suggest that interneuron subtypes can exert selective control over excitatory gain, allowing for differential modulation of global network state. Inhibitory interneurons comprise a significant proportion of all cortical neurons and play a crucial role in sustaining normal neural activity in the brain. Although it is well established that there exist distinct subtypes of interneurons, the impact of different interneuron subtypes upon cortical function remains unclear.
In this work, we explore the role of interneuron subtypes for modulating neural activity using a network model containing two of the most common interneuron subtypes. We find that one interneuron subtype, known as fast spiking interneurons, preferentially controls the strength of activity between excitatory neurons to regulate changes in network state. These findings suggest that interneuron subtypes may selectively modulate cortical activity to promote different computational capabilities.
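The inhibition-stabilised signature reported above, firing sustained by recurrent excitation with the paradoxical property that extra drive to inhibitory cells lowers their steady-state rate, can be checked in a linear two-population rate model with w_EE > 1 (weights illustrative, not from this study):

```python
import numpy as np

W = np.array([[1.5, -2.0],      # strong recurrent excitation (w_EE > 1): ISN regime,
              [2.0, -1.0]])     # stabilized by feedback inhibition

def steady_rates(g):
    """Steady state of tau r' = -r + W r + g, i.e. solve (I - W) r = g."""
    return np.linalg.solve(np.eye(2) - W, g)

r_low = steady_rates(np.array([3.0, 1.0]))
r_high = steady_rates(np.array([3.0, 2.0]))   # extra external drive to the I population
```

Despite the added drive to I, the inhibitory steady-state rate drops: withdrawing excitation from E reduces the excitatory feedback onto I by more than the added input, the classic paradoxical effect of inhibition-stabilised networks.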
Affiliation(s)
- Alexander Bryson
- Ion Channels and Disease Group, The Florey Institute of Neuroscience and Mental Health, University of Melbourne, Australia
- Department of Neurology, Austin Health, Heidelberg, Australia
- Samuel F. Berkovic
- Epilepsy Research Centre, Department of Medicine, University of Melbourne, Austin Health, Heidelberg, Australia
- Steven Petrou
- Ion Channels and Disease Group, The Florey Institute of Neuroscience and Mental Health, University of Melbourne, Australia
- David B. Grayden
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia