1
Meissner-Bernard C, Zenke F, Friedrich RW. Geometry and dynamics of representations in a precisely balanced memory network related to olfactory cortex. eLife 2025; 13:RP96303. PMID: 39804831; PMCID: PMC11733691; DOI: 10.7554/elife.96303.
Abstract
Biological memory networks are thought to store information by experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand the computational consequences of E/I assemblies under biologically realistic constraints, we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that 'focused' activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both the relatedness of inputs and an individual's experience. Such continuous representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
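A minimal sketch of the connectivity structure described above, with illustrative sizes and weights rather than the fitted Dp model: excitatory and inhibitory neurons are grouped into joint E/I assemblies, and within-assembly connections are strengthened so that each excitatory cell receives co-tuned inhibition.

```python
# Hypothetical sketch (not the authors' model): a weight matrix with E/I assemblies,
# i.e., excitatory and inhibitory neurons that share tuning are wired together more
# strongly, so each excitatory assembly receives matched ("co-tuned") inhibition.
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh, n_assemblies = 200, 50, 4
w_base, w_boost = 0.02, 0.10          # illustrative weights, not fitted to area Dp

# Assign each neuron to one assembly.
exc_id = rng.integers(n_assemblies, size=n_exc)
inh_id = rng.integers(n_assemblies, size=n_inh)

def block(pre_id, post_id, sign):
    """Random sparse weights, strengthened within an assembly."""
    same = (post_id[:, None] == pre_id[None, :])
    w = w_base + w_boost * same
    mask = rng.random(w.shape) < 0.1          # 10% connection probability
    return sign * w * mask

W_ee = block(exc_id, exc_id, +1.0)            # E -> E
W_ie = block(exc_id, inh_id, +1.0)            # E -> I
W_ei = block(inh_id, exc_id, -1.0)            # I -> E
W_ii = block(inh_id, inh_id, -1.0)            # I -> I

# Crude check of E/I co-tuning: for each excitatory cell, compare the within-assembly
# excitation and inhibition it receives (they need not be equal in this toy).
for a in range(n_assemblies):
    exc_in = W_ee[:, exc_id == a].sum(axis=1)[exc_id == a].mean()
    inh_in = -W_ei[:, inh_id == a].sum(axis=1)[exc_id == a].mean()
    print(f"assembly {a}: within-assembly E input {exc_in:.2f}, I input {inh_in:.2f}")
```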
Affiliation(s)
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- University of Basel, Basel, Switzerland
- Rainer W Friedrich
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- University of Basel, Basel, Switzerland
2
Shao Y, Dahmen D, Recanatesi S, Shea-Brown E, Ostojic S. Identifying the impact of local connectivity patterns on dynamics in excitatory-inhibitory networks. arXiv 2024; arXiv:2411.06802v2. PMID: 39650608; PMCID: PMC11623704.
Abstract
Networks of excitatory and inhibitory (EI) neurons form a canonical circuit in the brain. Seminal theoretical results on the dynamics of such networks are based on the assumption that synaptic strengths depend on the type of neurons they connect, but are otherwise statistically independent. Recent synaptic physiology datasets, however, highlight the prominence of specific connectivity patterns that go well beyond what is expected from independent connections. While decades of influential research have demonstrated the strong role of the basic EI cell-type structure, to what extent additional connectivity features influence dynamics remains to be fully determined. Here we examine the effects of pair-wise connectivity motifs on the linear dynamics of excitatory-inhibitory networks using an analytical framework that approximates the connectivity in terms of low-rank structures. This low-rank approximation is based on a mathematical derivation of the dominant eigenvalues of the connectivity matrix and predicts the impact of connectivity motifs, and of their interactions with cell-type structure, on responses to external inputs. Our results reveal that a particular pattern of connectivity, the chain motif, has a much stronger impact on dominant eigenmodes than other pair-wise motifs. In particular, an over-representation of chain motifs induces a strong positive eigenvalue in inhibition-dominated networks and generates a potential instability that requires revisiting the classical excitation-inhibition balance criteria. Examining the effects of external inputs, we show that chain motifs can on their own induce paradoxical responses, where an increased input to inhibitory neurons leads to a decrease in their activity due to recurrent feedback. These findings have direct implications for the interpretation of experiments in which responses to optogenetic perturbations are measured and used to infer the dynamical regime of cortical circuits.
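A toy numerical illustration of the motif effect described above (this construction and its parameters are assumptions, not the paper's low-rank derivation): making each neuron's incoming and outgoing weights co-fluctuate over-represents chain motifs (among other correlations), and the leading eigenvalue of the resulting matrix can be compared with a motif-free control.

```python
# Toy construction (an assumption, not the paper's analytical framework): introduce
# chain-like motifs by letting each neuron's incoming and outgoing weights share a
# common factor, then compare leading eigenvalues with a motif-free control.
import numpy as np

rng = np.random.default_rng(1)
n = 400
n_exc = 320                                   # 80% excitatory, 20% inhibitory
signs = np.ones(n)
signs[n_exc:] = -5.0                          # inhibition-dominated mean weights
base = np.abs(rng.normal(0.0, 1.0, (n, n))) / np.sqrt(n)

def connectivity(chain_strength):
    z = rng.normal(0.0, 1.0, n)               # one latent factor per neuron
    # Row i is scaled via z_i (its inputs) and column j via z_j (its outputs);
    # sharing z correlates a neuron's in- and out-weights, i.e., chain motifs.
    gain = 1.0 + chain_strength * (z[:, None] + z[None, :])
    return np.clip(gain, 0.0, None) * base * signs[None, :]

for c in (0.0, 0.4):
    W = connectivity(c)
    lead = np.max(np.linalg.eigvals(W).real)
    print(f"chain strength {c:.1f}: leading eigenvalue (real part) = {lead:.2f}")
```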
Affiliation(s)
- Yuxiu Shao
- School of Systems Science, Beijing Normal University, Beijing, China
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960
- Ecole Normale Superieure - PSL Research University, Paris, France
- David Dahmen
- Institute for Advanced Simulation (IAS-6) Computational and Systems Neuroscience, Jülich Research Center, Jülich, Germany
- Eric Shea-Brown
- Department of Applied Mathematics and Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Allen Institute for Brain Science, Seattle, WA, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960
- Ecole Normale Superieure - PSL Research University, Paris, France
3
Song D, Ruff D, Cohen M, Huang C. Neuronal heterogeneity of normalization strength in a circuit model. bioRxiv 2024; 2024.11.22.624903. PMID: 39605397; PMCID: PMC11601594; DOI: 10.1101/2024.11.22.624903.
Abstract
The size of a neuron's receptive field increases along the visual hierarchy. Neurons in higher-order visual areas integrate information through a canonical computation called normalization, in which neurons respond sublinearly to multiple stimuli in their receptive field. Neurons in the visual cortex exhibit highly heterogeneous degrees of normalization. Recent population recordings from visual cortex find that the interactions between neurons, measured by spike count correlations, depend on their normalization strengths. However, the circuit mechanism underlying this heterogeneity of normalization is unclear. In this work, we study normalization in a spiking neuron network model of visual cortex. The model produces a wide range of normalization strengths across neurons, and this heterogeneity is highly correlated with the inhibitory current each neuron receives. Our model reproduces the dependence of spike count correlations on normalization observed in experimental data, which is explained by their covariance with the inhibitory current. We find that neurons with stronger normalization are more sensitive to contrast differences in images and encode information more efficiently. In addition, networks with more heterogeneity in normalization encode more information about visual stimuli. Together, our model provides a mechanistic explanation of heterogeneous normalization strengths in the visual cortex and sheds new light on the computational benefits of neuronal heterogeneity.
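A minimal sketch of the normalization-strength heterogeneity discussed above, using the standard divisive-normalization form rather than the paper's spiking circuit (all values are illustrative assumptions):

```python
# Minimal sketch (assumed divisive-normalization form, not the paper's circuit):
# neurons with heterogeneous normalization strength and the usual index comparing
# the response to two stimuli against the sum of single-stimulus responses.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 100
drive1 = rng.uniform(5.0, 20.0, n_neurons)    # drive from stimulus 1 alone
drive2 = rng.uniform(5.0, 20.0, n_neurons)    # drive from stimulus 2 alone
alpha = rng.uniform(0.0, 1.0, n_neurons)      # per-neuron normalization strength

r1, r2 = drive1, drive2                       # single-stimulus responses = drives
# With alpha = 1 the paired response is the average (full normalization);
# with alpha = 0 it is the sum (no normalization).
r_pair = (drive1 + drive2) / (1.0 + alpha)

norm_index = r_pair / (r1 + r2)               # 0.5 = averaging, 1.0 = summation
print("normalization index: mean %.2f, range %.2f-%.2f"
      % (norm_index.mean(), norm_index.min(), norm_index.max()))
```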
Affiliation(s)
- Deying Song
- Joint Program in Neural Computation and Machine Learning, Neuroscience Institute, and Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA
- Center for the Neural Basis of Cognition, Pittsburgh, PA
- Douglas Ruff
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL
- Marlene Cohen
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA
- Department of Neuroscience and Department of Mathematics, University of Pittsburgh, Pittsburgh, PA
4
Xiao G, Cai Y, Zhang Y, Xie J, Wu L, Xie H, Wu J, Dai Q. Mesoscale neuronal granular trial variability in vivo illustrated by nonlinear recurrent network in silico. Nat Commun 2024; 15:9894. PMID: 39548098; PMCID: PMC11567969; DOI: 10.1038/s41467-024-54346-3.
Abstract
Large-scale neural recording with single-neuron resolution has revealed the functional complexity of neural systems. However, even under well-designed task conditions, the cortex-wide network exhibits highly dynamic trial variability, posing challenges to conventional trial-averaged analysis. To study mesoscale trial variability, we conducted a comparative study between fluorescence imaging of layer-2/3 neurons in vivo and network simulation in silico. We imaged the responses of up to 40,000 cortical neurons triggered by deep brain stimulation (DBS), and we built an in silico network to reproduce the phenomena we observed in vivo. We demonstrated the existence of ineluctable trial variability and found that it is influenced by input amplitude and range. Moreover, we showed that a spatially heterogeneous coding community accounts for more reliable inter-trial coding despite single-unit trial variability. A deeper understanding of trial variability from the perspective of dynamical systems may help uncover capacities such as parallel coding and creativity.
Affiliation(s)
- Guihua Xiao
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yeyi Cai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Lifan Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
5
Pattadkal JJ, O'Shea RT, Hansel D, Taillefumier T, Brager D, Priebe NJ. Synchrony dynamics underlie irregular neocortical spiking. bioRxiv 2024; 2024.10.15.618398. PMID: 39464165; PMCID: PMC11507790; DOI: 10.1101/2024.10.15.618398.
Abstract
Cortical neurons are characterized by their variable spiking patterns. We challenge prevalent theories for the origin of spiking variability. We examine the specific hypothesis that cortical synchrony drives spiking variability in vivo. Using dynamic clamp, we demonstrate that intrinsic neuronal properties do not contribute substantially to spiking variability, but rather spiking variability emerges from weakly synchronous network drive. With large-scale electrophysiology we quantify the degree of synchrony and its time scale in cortical networks in vivo. We demonstrate that physiological levels of synchrony are sufficient to generate irregular responses found in vivo. Further, this synchrony shifts over timescales ranging from 25 to 200 ms, depending on the presence of external sensory input. Such shifts occur when the network moves from spontaneous to driven modes, leading naturally to a decline in response variability as observed across cortical areas. Finally, while individual neurons exhibit reliable responses to physiological drive, different neurons respond in a distinct fashion according to their intrinsic properties, contributing to stable synchrony across the neural network.
6
Wu S, Huang C, Snyder AC, Smith MA, Doiron B, Yu BM. Automated customization of large-scale spiking network models to neuronal population activity. Nat Comput Sci 2024; 4:690-705. PMID: 39285002; DOI: 10.1038/s43588-024-00688-3.
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet their activity's dependence on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models, thereby enabling deeper insight into how networks of neurons give rise to brain function.
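A schematic sketch of the customization loop described above, with a stand-in statistics generator and random search instead of SNOPS's spiking network and Bayesian optimization (all names and target values are assumptions):

```python
# Hedged sketch of the overall idea only: SNOPS customizes a spiking network with
# Bayesian optimization; here a stand-in simulator and random search illustrate the
# loop of matching population statistics. Targets and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(3)
target = {"rate": 5.0, "fano": 1.5, "corr": 0.05}        # target population statistics

def simulate_stats(theta, n_neurons=50, n_trials=200):
    """Stand-in for a spiking-network simulation returning summary statistics."""
    gain, shared = theta
    common = shared * rng.normal(size=n_trials)           # shared fluctuation
    counts = rng.poisson(np.clip(gain * (1.0 + common)[:, None], 0.01, None),
                         size=(n_trials, n_neurons))
    rate = counts.mean()
    fano = counts.var(axis=0).mean() / counts.mean(axis=0).mean()
    corr = np.corrcoef(counts.T)
    mean_corr = corr[np.triu_indices(n_neurons, k=1)].mean()
    return {"rate": rate, "fano": fano, "corr": mean_corr}

def cost(stats):
    return sum(((stats[k] - target[k]) / target[k]) ** 2 for k in target)

best_theta, best_cost = None, np.inf
for _ in range(300):                                      # random search over parameters
    theta = (rng.uniform(1.0, 10.0), rng.uniform(0.0, 0.5))
    c = cost(simulate_stats(theta))
    if c < best_cost:
        best_theta, best_cost = theta, c
print("best parameters:", best_theta, "cost:", round(best_cost, 3))
```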
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam C Snyder
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Matthew A Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Brent Doiron
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
7
Tian GJ, Zhu O, Shirhatti V, Greenspon CM, Downey JE, Freedman DJ, Doiron B. Neuronal firing rate diversity lowers the dimension of population covariability. bioRxiv 2024; 2024.08.30.610535. PMID: 39257801; PMCID: PMC11383671; DOI: 10.1101/2024.08.30.610535.
Abstract
Populations of neurons produce activity with two central features. First, neuronal responses are very diverse: specific stimuli or behaviors prompt some neurons to emit many action potentials, while other neurons remain relatively silent. Second, the trial-to-trial fluctuations of neuronal responses occupy a low-dimensional space, owing to significant correlations between the activity of neurons. These two features define the quality of neuronal representation. We link these two aspects of population response using a recurrent circuit model and derive the following relation: the more diverse the firing rates of neurons in a population, the lower the effective dimension of population trial-to-trial covariability. This surprising prediction is tested and validated using simultaneously recorded neuronal populations from numerous brain areas in mice and non-human primates and from the motor cortex of human participants. Using this relation, we present a theory in which a more diverse neuronal code leads to better fine discrimination performance from population activity. In line with this theory, we show that neuronal populations across the brain exhibit both more diverse mean responses and lower-dimensional fluctuations when the brain is in more heightened states of information processing. In sum, we present a key organizational principle of neuronal population response that is widely observed across the nervous system and acts to synergistically improve population representation.
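A short sketch of the two quantities being related above, computed on simulated spike counts (illustrative data; this is not the circuit model or the derivation): rate diversity as the coefficient of variation of mean responses, and the effective dimension of trial-to-trial covariability as the participation ratio of the covariance eigenvalues.

```python
# Sketch of the two metrics named in the abstract, on simulated data (assumed
# generative model, not the paper's recurrent circuit).
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_trials = 100, 500
mean_rates = rng.lognormal(mean=1.0, sigma=0.8, size=n_neurons)   # diverse mean rates
shared = rng.normal(size=n_trials)                                 # one shared factor
counts = rng.poisson(np.clip(mean_rates[None, :] * (1 + 0.3 * shared[:, None]),
                             0.01, None))

# Rate diversity: coefficient of variation of the mean responses across neurons.
rate_cv = counts.mean(0).std() / counts.mean(0).mean()

# Effective dimension of trial-to-trial covariability: participation ratio of the
# eigenvalues of the spike-count covariance matrix.
cov = np.cov(counts.T)
eig = np.linalg.eigvalsh(cov)
participation_ratio = eig.sum() ** 2 / (eig ** 2).sum()

print(f"rate diversity (CV) = {rate_cv:.2f}, "
      f"effective dimension = {participation_ratio:.1f}")
```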
8
Xia J, Jasper A, Kohn A, Miller KD. Circuit-motivated generalized affine models characterize stimulus-dependent visual cortical shared variability. iScience 2024; 27:110512. PMID: 39156642; PMCID: PMC11328009; DOI: 10.1016/j.isci.2024.110512.
Abstract
Correlated variability in the visual cortex is modulated by stimulus properties. The stimulus dependence of correlated variability impacts stimulus coding and is indicative of circuit structure. An affine model combining a multiplicative factor and an additive offset has been proposed to explain how correlated variability in primary visual cortex (V1) depends on stimulus orientations. However, whether the affine model could be extended to explain modulations by other stimulus variables or variability shared between two brain areas is unknown. Motivated by a simple neural circuit mechanism, we modified the affine model to better explain the contrast dependence of neural variability shared within either primary or secondary visual cortex (V1 or V2) as well as the orientation dependence of neural variability shared between V1 and V2. Our results bridge neural circuit mechanisms and statistical models and provide a parsimonious explanation for the stimulus dependence of correlated variability within and between visual areas.
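A minimal sketch of an affine model of shared variability in the spirit described above (the generative form and parameters are assumptions, not the fitted model): each trial's population response is the tuned response scaled by a shared multiplicative factor plus a shared additive offset.

```python
# Minimal sketch (assumed generative form, not the paper's fitted model): shared
# variability as a trial-wise multiplicative factor and additive offset applied to
# the tuned population response, recovered afterwards by per-trial regression.
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_trials = 80, 400
tuning = rng.uniform(1.0, 10.0, n_neurons)          # mean response to this stimulus

g = 1.0 + 0.2 * rng.normal(size=n_trials)           # multiplicative factor per trial
h = 0.5 * rng.normal(size=n_trials)                 # additive offset per trial
responses = g[:, None] * tuning[None, :] + h[:, None] \
            + 0.3 * rng.normal(size=(n_trials, n_neurons))

# Recover the two shared factors per trial by regressing each trial's population
# response onto [tuning, ones]; the affine model predicts this fits the data well.
X = np.column_stack([tuning, np.ones(n_neurons)])
coef, *_ = np.linalg.lstsq(X, responses.T, rcond=None)   # shape (2, n_trials)
g_hat, h_hat = coef
print("corr(g, g_hat) = %.2f, corr(h, h_hat) = %.2f"
      % (np.corrcoef(g, g_hat)[0, 1], np.corrcoef(h, h_hat)[0, 1]))
```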
Affiliation(s)
- Ji Xia
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Anna Jasper
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D. Miller
- Center for Theoretical Neuroscience and Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, NY 10027, USA
9
Frechou MA, Martin SS, McDermott KD, Huaman EA, Gökhan Ş, Tomé WA, Coen-Cagli R, Gonçalves JT. Adult neurogenesis improves spatial information encoding in the mouse hippocampus. Nat Commun 2024; 15:6410. PMID: 39080283; PMCID: PMC11289285; DOI: 10.1038/s41467-024-50699-x.
Abstract
Adult neurogenesis is a unique form of neuronal plasticity in which newly generated neurons are integrated into the adult dentate gyrus in a process that is modulated by environmental stimuli. Adult-born neurons can contribute to spatial memory, but it is unknown whether they alter neural representations of space in the hippocampus. Using in vivo two-photon calcium imaging, we find that male and female mice previously housed in an enriched environment, which triggers an increase in neurogenesis, have increased spatial information encoding in the dentate gyrus. Ablating adult neurogenesis blocks the effect of enrichment and lowers spatial information, as does the chemogenetic silencing of adult-born neurons. Both ablating neurogenesis and silencing adult-born neurons decrease the calcium activity of dentate gyrus neurons, resulting in a decreased amplitude of place-specific responses. These findings are in contrast with previous studies that suggested a predominantly inhibitory action for adult-born neurons. We propose that adult neurogenesis improves representations of space by increasing the gain of dentate gyrus neurons and thereby improving their ability to tune to spatial features. This mechanism may mediate the beneficial effects of environmental enrichment on spatial learning and memory.
Affiliation(s)
- M Agustina Frechou
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Gottesmann Institute for Stem Cell Biology and Regenerative Medicine, Albert Einstein College of Medicine, Bronx, NY, USA
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Sunaina S Martin
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Gottesmann Institute for Stem Cell Biology and Regenerative Medicine, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Kelsey D McDermott
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Gottesmann Institute for Stem Cell Biology and Regenerative Medicine, Albert Einstein College of Medicine, Bronx, NY, USA
- Evan A Huaman
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Gottesmann Institute for Stem Cell Biology and Regenerative Medicine, Albert Einstein College of Medicine, Bronx, NY, USA
- Şölen Gökhan
- Saul R. Korey Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
- Wolfgang A Tomé
- Saul R. Korey Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Radiation Oncology, Albert Einstein College of Medicine, Bronx, NY, USA
- Ruben Coen-Cagli
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- J Tiago Gonçalves
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Gottesmann Institute for Stem Cell Biology and Regenerative Medicine, Albert Einstein College of Medicine, Bronx, NY, USA
10
Rostami V, Rost T, Schmitt FJ, van Albada SJ, Riehle A, Nawrot MP. Spiking attractor model of motor cortex explains modulation of neural and behavioral variability by prior target information. Nat Commun 2024; 15:6304. PMID: 39060243; PMCID: PMC11282312; DOI: 10.1038/s41467-024-49889-4.
Abstract
When preparing a movement, we often rely on partial or incomplete information, which can degrade task performance. In behaving monkeys we show that the degree of cued target information is reflected in both neural variability in motor cortex and behavioral reaction times. We study the underlying mechanisms in a spiking motor-cortical attractor model. By introducing a biologically realistic network topology in which excitatory neuron clusters are locally balanced with inhibitory neuron clusters, we robustly achieve metastable network activity across a wide range of network parameters. In application to the monkey task, the model performs target-specific action selection and accurately reproduces the task-epoch-dependent reduction of trial-to-trial variability observed in vivo, where the degree of reduction directly reflects the amount of processed target information, while spiking irregularity remains constant throughout the task. In the context of incomplete cue information, the increased target selection time of the model can explain the increased behavioral reaction times. We conclude that context-dependent neural and behavioral variability is a signature of attractor computation in the motor cortex.
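A brief sketch of the network topology described above, with assumed sizes and weights: excitatory clusters paired with, and locally balanced by, inhibitory clusters, expressed as a block-structured weight matrix.

```python
# Sketch (assumed parameters, not the paper's fitted model) of a clustered E/I
# topology: each excitatory cluster is paired with an inhibitory cluster, and
# within-cluster connections of all four types are strengthened.
import numpy as np

rng = np.random.default_rng(6)
n_clusters, exc_per, inh_per = 8, 50, 12
j_ee, j_ei, j_ie, j_ii = 0.2, -0.8, 0.3, -0.6        # baseline synaptic weights
boost = 3.0                                          # within-cluster strengthening

exc_id = np.repeat(np.arange(n_clusters), exc_per)
inh_id = np.repeat(np.arange(n_clusters), inh_per)

def block(post_id, pre_id, j, p_conn=0.2):
    """Sparse random block with stronger weights inside matched clusters."""
    same = (post_id[:, None] == pre_id[None, :]).astype(float)
    mask = rng.random((post_id.size, pre_id.size)) < p_conn
    return j * (1.0 + (boost - 1.0) * same) * mask

W = np.block([[block(exc_id, exc_id, j_ee), block(exc_id, inh_id, j_ei)],
              [block(inh_id, exc_id, j_ie), block(inh_id, inh_id, j_ii)]])
print("weight matrix shape:", W.shape)   # (n_clusters * (exc_per + inh_per)) squared
```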
Affiliation(s)
- Vahid Rostami
- Institute of Zoology, University of Cologne, Cologne, Germany
- Thomas Rost
- Institute of Zoology, University of Cologne, Cologne, Germany
- Sacha Jennifer van Albada
- Institute of Zoology, University of Cologne, Cologne, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Center, Jülich, Germany
- Alexa Riehle
- Institute for Advanced Simulation (IAS-6), Jülich Research Center, Jülich, Germany
- UMR7289 Institut de Neurosciences de la Timone (INT), Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université (AMU), Marseille, France
11
Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024; 28:614-627. PMID: 38580528; DOI: 10.1016/j.tics.2024.02.011.
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activity. However, recent evidence shows that neural population activity during WM maintenance undergoes dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons for why classical models do not exhibit dynamic coding, whereas task-optimized models naturally do. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
12
Eckmann S, Young EJ, Gjorgjieva J. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks. Proc Natl Acad Sci U S A 2024; 121:e2305326121. PMID: 38870059; PMCID: PMC11194505; DOI: 10.1073/pnas.2305326121.
Abstract
Cortical networks exhibit complex stimulus-response patterns that are based on specific recurrent interactions between neurons. For example, the balance between excitatory and inhibitory currents has been identified as a central component of cortical computations. However, it remains unclear how the required synaptic connectivity can emerge in developing circuits where synapses between excitatory and inhibitory neurons are simultaneously plastic. Using theory and modeling, we propose that a wide range of cortical response properties can arise from a single plasticity paradigm that acts simultaneously at all excitatory and inhibitory connections: Hebbian learning that is stabilized by the synapse-type-specific competition for a limited supply of synaptic resources. In plastic recurrent circuits, this competition enables the formation and decorrelation of inhibition-balanced receptive fields. Networks develop an assembly structure with stronger synaptic connections between similarly tuned excitatory and inhibitory neurons and exhibit response normalization and orientation-specific center-surround suppression, reflecting the stimulus statistics during training. These results demonstrate how neurons can self-organize into functional networks and suggest an essential role for synapse-type-specific competitive learning in the development of cortical circuits.
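A minimal sketch of the plasticity rule named above, under assumed toy input statistics: a Hebbian update applied to all input weights, followed by a separate rescaling of the excitatory and the inhibitory weights to fixed budgets, so that synapses compete for resources only within their own type.

```python
# Minimal sketch (assumed toy dynamics, not the paper's recurrent circuit): Hebbian
# updates on all inputs, with a synapse-type-specific normalization so that
# excitatory and inhibitory weights each compete for a limited, fixed budget.
import numpy as np

rng = np.random.default_rng(7)
n_in_e, n_in_i, eta = 40, 10, 0.05
w_e = rng.uniform(0.0, 0.1, n_in_e)       # excitatory input weights (>= 0)
w_i = rng.uniform(0.0, 0.1, n_in_i)       # inhibitory input weights (>= 0, subtracted)
budget_e, budget_i = 2.0, 1.0             # limited synaptic resources per type

for step in range(1000):
    x_e = rng.random(n_in_e)              # presynaptic excitatory activity
    x_i = rng.random(n_in_i)              # presynaptic inhibitory activity
    y = max(w_e @ x_e - w_i @ x_i, 0.0)   # postsynaptic rate (rectified)

    # Hebbian updates for both synapse types.
    w_e += eta * y * x_e
    w_i += eta * y * x_i
    # Synapse-type-specific competition: rescale each type to its fixed budget.
    w_e *= budget_e / w_e.sum()
    w_i *= budget_i / w_i.sum()

print("final excitatory weights (first 5):", np.round(w_e[:5], 3))
```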
Affiliation(s)
- Samuel Eckmann
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Edward James Young
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- School of Life Sciences, Technical University Munich, Freising 85354, Germany
13
Holt CJ, Miller KD, Ahmadian Y. The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations. PLoS Comput Biol 2024; 20:e1012190. PMID: 38935792; PMCID: PMC11236182; DOI: 10.1371/journal.pcbi.1012190.
Abstract
When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power-spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column's receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN which however lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency. We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.
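A hedged sketch of a reduced two-population SSN (all parameter values are assumptions, not the paper's fitted model): for each contrast the fixed point is found by integration, the dynamics are linearized there, and the frequency of the least-damped oscillatory mode serves as a proxy for the gamma peak frequency.

```python
# Reduced two-population SSN sketch (assumed parameters): supralinear power-law
# input-output function r = k*[z]_+^n, fixed point found by Euler integration,
# then the Jacobian's imaginary eigenvalue part gives a resonance frequency.
import numpy as np

k, n = 0.04, 2.0
W = np.array([[1.0, -1.0],                    # E<-E, E<-I
              [1.5, -0.5]])                   # I<-E, I<-I
tau = np.array([0.005, 0.005])                # time constants (s)

def fixed_point(c, steps=5000, dt=1e-4):
    r = np.zeros(2)
    for _ in range(steps):
        z = W @ r + c                         # contrast enters as equal input to E and I
        r += dt / tau * (-r + k * np.maximum(z, 0.0) ** n)
    return r

for c in (10.0, 25.0, 50.0):
    r = fixed_point(c)
    z = W @ r + c
    gain = k * n * np.maximum(z, 0.0) ** (n - 1)          # f'(z) at the fixed point
    J = (np.diag(gain) @ W - np.eye(2)) / tau[:, None]    # linearized dynamics
    freq = np.max(np.abs(np.linalg.eigvals(J).imag)) / (2 * np.pi)
    print(f"contrast {c:4.0f}: rates E/I = {r[0]:5.1f}/{r[1]:5.1f} Hz, "
          f"resonance ~ {freq:5.1f} Hz")
```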
Affiliation(s)
- Caleb J Holt
- Department of Physics, Institute of Neuroscience, University of Oregon, Eugene, Oregon, United States of America
- Kenneth D Miller
- Department of Neuroscience, Center for Theoretical Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Yashar Ahmadian
- Department of Engineering, Computational and Biological Learning Lab, University of Cambridge, Cambridge, United Kingdom
14
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. PMID: 38658028; DOI: 10.1162/neco_a_01658.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
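A numerical illustration of the function-approximation idea stated above, without the spiking-network construction: a convex piecewise-linear "boundary" is a max over linear pieces, and a difference of two such boundaries can approximate a nonlinear target. The decomposition below is one convenient choice, not the paper's.

```python
# Illustration only (assumed target and decomposition): write sin(2x) as a difference
# of two convex functions and approximate each by a max over tangent lines, i.e., a
# convex piecewise-linear boundary.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)
target = np.sin(2.0 * x)

# sin(2x) = g1(x) - g2(x), with g1(x) = sin(2x) + 4x^2 (convex, g1'' = 8 - 4 sin(2x) > 0)
# and g2(x) = 4x^2 (convex).
def max_of_tangents(f, df, knots, x):
    """Approximate a convex function by the max over its tangent lines."""
    lines = [f(k) + df(k) * (x - k) for k in knots]
    return np.max(lines, axis=0)

knots = np.linspace(-2.0, 2.0, 15)            # 15 "threshold boundaries" per population
g1_hat = max_of_tangents(lambda k: np.sin(2 * k) + 4 * k ** 2,
                         lambda k: 2 * np.cos(2 * k) + 8 * k, knots, x)
g2_hat = max_of_tangents(lambda k: 4 * k ** 2, lambda k: 8 * k, knots, x)

approx = g1_hat - g2_hat
print(f"max approximation error: {np.max(np.abs(approx - target)):.3f}")
```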
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
15
Waitzmann F, Wu YK, Gjorgjieva J. Top-down modulation in canonical cortical circuits with short-term plasticity. Proc Natl Acad Sci U S A 2024; 121:e2311040121. PMID: 38593083; PMCID: PMC11032497; DOI: 10.1073/pnas.2311040121.
Abstract
Cortical dynamics and computations are strongly influenced by diverse GABAergic interneurons, including those expressing parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP). Together with excitatory (E) neurons, they form a canonical microcircuit and exhibit counterintuitive nonlinear phenomena. One instance of such phenomena is response reversal, whereby SST neurons show opposite responses to top-down modulation via VIP depending on the presence of bottom-up sensory input, indicating that the network may function in different regimes under different stimulation conditions. Combining analytical and computational approaches, we demonstrate that model networks with multiple interneuron subtypes and experimentally identified short-term plasticity mechanisms can implement response reversal. Surprisingly, despite not directly affecting SST and VIP activity, PV-to-E short-term depression has a decisive impact on SST response reversal. We show how response reversal relates to inhibition stabilization and the paradoxical effect in the presence of several short-term plasticity mechanisms, demonstrating that response reversal coincides with a change in the indispensability of SST for network stabilization. In summary, our work suggests a role of short-term plasticity mechanisms in generating nonlinear phenomena in networks with multiple interneuron subtypes and makes several experimentally testable predictions.
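A reduced sketch of the paradoxical effect mentioned above, in a plain two-population E-I rate model without short-term plasticity (assumed parameters; the paper's circuit has four cell types and STP): in an inhibition-stabilized network, extra input to the inhibitory population lowers its steady-state rate.

```python
# Reduced sketch of the paradoxical effect (assumed parameters, two populations only):
# strong recurrent excitation makes the network inhibition-stabilized, so increasing
# the input to I paradoxically decreases the I steady-state rate.
import numpy as np

W = np.array([[2.0, -1.5],       # E<-E, E<-I: W_EE > 1 puts the E subnetwork in the ISN regime
              [2.5, -1.0]])      # I<-E, I<-I
tau = np.array([0.01, 0.005])

def steady_state(inputs, T=1.0, dt=1e-4):
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + np.maximum(W @ r + inputs, 0.0))
    return r

base = steady_state(np.array([10.0, 5.0]))
perturbed = steady_state(np.array([10.0, 7.0]))    # extra drive to inhibition
print(f"I rate: {base[1]:.2f} -> {perturbed[1]:.2f} (paradoxical decrease)")
```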
Affiliation(s)
- Felix Waitzmann
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Yue Kris Wu
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
- Julijana Gjorgjieva
- School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany
16
Tian Y, Murphy MJH, Steiner LA, Kalia SK, Hodaie M, Lozano AM, Hutchison WD, Popovic MR, Milosevic L, Lankarany M. Modeling Instantaneous Firing Rate of Deep Brain Stimulation Target Neuronal Ensembles in the Basal Ganglia and Thalamus. Neuromodulation 2024; 27:464-475. PMID: 37140523; DOI: 10.1016/j.neurom.2023.03.012.
Abstract
OBJECTIVE: Deep brain stimulation (DBS) is an effective treatment for movement disorders, including Parkinson disease and essential tremor. However, the underlying mechanisms of DBS remain elusive. Despite the capability of existing models to interpret experimental data qualitatively, there are very few unified computational models that quantitatively capture the dynamics of the neuronal activity of varying stimulated nuclei, including the subthalamic nucleus (STN), substantia nigra pars reticulata (SNr), and ventral intermediate nucleus (Vim), across different DBS frequencies.
MATERIALS AND METHODS: Both synthetic and experimental data were used in the model fitting; the synthetic data were generated by an established spiking neuron model reported in our previous work, and the experimental data were obtained from single-unit microelectrode recordings (MERs) during DBS (microelectrode stimulation). Based on these data, we developed a novel mathematical model to represent the firing rate of neurons receiving DBS, including neurons in STN, SNr, and Vim, across different DBS frequencies. In our model, the DBS pulses were filtered through a synapse model and a nonlinear transfer function to formulate the firing rate variability. For each DBS-targeted nucleus, we fitted a single set of optimal model parameters consistent across varying DBS frequencies.
RESULTS: Our model accurately reproduced the firing rates observed and calculated from both synthetic and experimental data. The optimal model parameters were consistent across different DBS frequencies.
CONCLUSIONS: The results of our model fitting were in agreement with experimental single-unit MER data during DBS. Reproducing the neuronal firing rates of different nuclei of the basal ganglia and thalamus during DBS can help to further clarify the mechanisms of DBS and to optimize stimulation parameters based on their actual effects on neuronal activity.
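An illustrative sketch of the model structure described above (the logistic transfer function and all parameter values are assumptions, not the fitted model): a DBS pulse train is filtered by a synapse model and passed through a nonlinear transfer function to yield an instantaneous firing rate.

```python
# Illustrative sketch (assumed parameters and logistic nonlinearity, not the fitted
# model): DBS pulses -> exponential synaptic filter -> nonlinear transfer function
# -> instantaneous firing rate.
import numpy as np

dt = 1e-4                                    # 0.1 ms time step
t = np.arange(0.0, 1.0, dt)                  # 1 s of simulation
dbs_freq = 130.0                             # stimulation frequency (Hz)
pulses = (np.floor(t * dbs_freq) != np.floor((t - dt) * dbs_freq)).astype(float)

tau_syn, gain = 0.01, 2.5                    # synaptic time constant (s), drive per pulse
r_max, r_base, slope, theta = 80.0, 10.0, 4.0, 1.0

s = np.zeros_like(t)                         # filtered synaptic drive
for i in range(1, len(t)):
    s[i] = s[i - 1] + dt * (-s[i - 1] / tau_syn) + gain * pulses[i]

# Nonlinear transfer function (logistic) mapping drive to instantaneous rate.
rate = r_base + (r_max - r_base) / (1.0 + np.exp(-slope * (s - theta)))
print(f"mean rate during 130 Hz DBS: {rate[int(0.5 / dt):].mean():.1f} spk/s")
```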
Affiliation(s)
- Yupeng Tian
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE Research Institute, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada
- Leon A Steiner
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; Berlin Institute of Health, Berlin, Germany; Department of Surgery, University of Toronto, Toronto, ON, Canada; Department of Neurology, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Suneil K Kalia
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; KITE Research Institute, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada; Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, ON, Canada
- Mojgan Hodaie
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada; Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, ON, Canada
- Andres M Lozano
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada; Division of Neurosurgery, Toronto Western Hospital, University Health Network, Toronto, ON, Canada
- William D Hutchison
- CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada; Department of Physiology, University of Toronto, Toronto, ON, Canada
- Milos R Popovic
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE Research Institute, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada
- Luka Milosevic
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE Research Institute, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada
- Milad Lankarany
- Krembil Research Institute - University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; KITE Research Institute, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada; CRANIA, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Physiology, University of Toronto, Toronto, ON, Canada
17
Goris RLT, Coen-Cagli R, Miller KD, Priebe NJ, Lengyel M. Response sub-additivity and variability quenching in visual cortex. Nat Rev Neurosci 2024; 25:237-252. PMID: 38374462; PMCID: PMC11444047; DOI: 10.1038/s41583-024-00795-0.
Abstract
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
Affiliation(s)
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Department of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Swartz Program in Theoretical Neuroscience, Columbia University, New York, NY, USA
- Nicholas J Priebe
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
18
Pan X, Coen-Cagli R, Schwartz O. Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks. Neural Comput 2024; 36:621-644. PMID: 38457752; PMCID: PMC11164410; DOI: 10.1162/neco_a_01652.
Abstract
Computational neuroscience studies have shown that the structure of neural variability in response to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied these questions in a convolutional neural network model with dropout in both the training and testing phases. We found that the trial-by-trial correlation between neurons (i.e., noise correlation) is positive and low-dimensional. Neurons that are close in a feature map have larger noise correlations. These properties are surprisingly similar to findings in the visual cortex. We further analyzed the alignment of the main axes of the covariance matrix. We found that different images share a common trial-by-trial noise covariance subspace and that it is aligned with the global signal covariance. This alignment of noise and signal covariance suggests that noise covariance in dropout neural networks reduces network accuracy, which we further verified directly with a trial-shuffling procedure commonly used in neuroscience. These findings highlight a previously overlooked aspect of dropout layers that can affect network performance. Such dropout networks could also potentially serve as a computational model of neural variability.
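A NumPy-only sketch of the mechanism studied above (a small random network rather than the paper's CNN): a fixed input is passed through the network with a fresh dropout mask on every "trial", and noise correlations and the noise-covariance spectrum are computed from the resulting variability.

```python
# NumPy-only sketch (not the paper's CNN or training setup): Bernoulli dropout at
# "test time" turns a deterministic network into a stochastic one, and the induced
# trial-by-trial covariance can be analyzed like neural noise correlations.
import numpy as np

rng = np.random.default_rng(8)
n_in, n_hidden, n_out, n_trials, p_drop = 50, 200, 30, 1000, 0.5
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hidden, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_out, n_hidden))
x = rng.normal(size=n_in)                     # one fixed input vector

responses = np.empty((n_trials, n_out))
for trial in range(n_trials):
    mask = rng.random(n_hidden) > p_drop      # dropout mask, resampled per trial
    h = np.maximum(W1 @ x, 0.0) * mask / (1 - p_drop)
    responses[trial] = W2 @ h

# Trial-by-trial ("noise") correlations and the spectrum of the noise covariance.
noise_corr = np.corrcoef(responses.T)
off_diag = noise_corr[np.triu_indices(n_out, k=1)]
eigvals = np.linalg.eigvalsh(np.cov(responses.T))[::-1]
print(f"mean noise correlation: {off_diag.mean():.3f}")
print("top 5 covariance eigenvalues:", np.round(eigvals[:5], 3))
```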
Affiliation(s)
- Xu Pan
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, USA
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Dominick Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Odelia Schwartz
- Department of Computer Science, University of Miami, Coral Gables, FL 33146, USA
19
Pattadkal JJ, Zemelman BV, Fiete I, Priebe NJ. Primate neocortex performs balanced sensory amplification. Neuron 2024; 112:661-675.e7. PMID: 38091984; PMCID: PMC10922204; DOI: 10.1016/j.neuron.2023.11.005.
Abstract
The sensory cortex amplifies relevant features of external stimuli. This sensitivity and selectivity arise through the transformation of inputs by cortical circuitry. We characterize the circuit mechanisms and dynamics of cortical amplification by making large-scale simultaneous measurements of single cells in awake primates and testing computational models. By comparing network activity in both driven and spontaneous states with models, we identify the circuit as operating in a regime of non-normal balanced amplification. Incoming inputs are strongly but transiently amplified by strong recurrent feedback from the disruption of excitatory-inhibitory balance in the network. Strong inhibition rapidly quenches responses, thereby permitting the tracking of time-varying stimuli.
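A two-dimensional sketch of non-normal (balanced) amplification under assumed parameters: with a triangular, hence non-normal, coupling from a difference-like mode to a sum-like mode, a brief input is transiently amplified and then quenched even though both eigenvalues are stable.

```python
# Minimal sketch (assumed parameters, not the paper's fitted network): non-normal
# amplification in a 2D linear system where the "difference" mode feeds the "sum"
# mode; the response grows transiently and is then quenched.
import numpy as np

tau = 0.01                                   # 10 ms time constant
w_ff = 8.0                                   # strong feedforward (non-normal) coupling
# Triangular coupling matrix: both eigenvalues equal -1/tau, so nothing is slow,
# yet the matrix is non-normal and transiently amplifies input in the second mode.
A = np.array([[-1.0, w_ff],
              [0.0, -1.0]]) / tau

dt, T = 1e-4, 0.3
x = np.array([0.0, 1.0])                     # brief input deposited in the difference mode
peak = 0.0
for _ in range(int(T / dt)):
    x = x + dt * (A @ x)
    peak = max(peak, abs(x[0]))
print(f"sum-mode response: starts at 0, peaks at {peak:.2f}, ends at {abs(x[0]):.4f}")
```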
Affiliation(s)
- Jagruti J Pattadkal
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
- Boris V Zemelman
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
- Ila Fiete
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA
- Nicholas J Priebe
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
20
Oldenburg IA, Hendricks WD, Handy G, Shamardani K, Bounds HA, Doiron B, Adesnik H. The logic of recurrent circuits in the primary visual cortex. Nat Neurosci 2024; 27:137-147. PMID: 38172437; PMCID: PMC10774145; DOI: 10.1038/s41593-023-01510-5.
Abstract
Recurrent cortical activity sculpts visual perception by refining, amplifying or suppressing visual input. However, the rules that govern the influence of recurrent activity remain enigmatic. We used ensemble-specific two-photon optogenetics in the mouse visual cortex to isolate the impact of recurrent activity from external visual input. We found that the spatial arrangement and the visual feature preference of the stimulated ensemble and the neighboring neurons jointly determine the net effect of recurrent activity. Photoactivation of these ensembles drives suppression in all cells beyond 30 µm but uniformly drives activation in closer similarly tuned cells. In nonsimilarly tuned cells, compact, cotuned ensembles drive net suppression, while diffuse, cotuned ensembles drive activation. Computational modeling suggests that highly local recurrent excitatory connectivity and selective convergence onto inhibitory neurons explain these effects. Our findings reveal a straightforward logic in which space and feature preference of cortical ensembles determine their impact on local recurrent activity.
Affiliation(s)
- Ian Antón Oldenburg
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, NJ, USA
- William D Hendricks
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Gregory Handy
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Minnesota, Minneapolis, MN, USA
- Kiarash Shamardani
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Hayley A Bounds
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Hillel Adesnik
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA
- The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
21
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact Analysis of the Subthreshold Variability for Conductance-Based Neuronal Models with Synchronous Synaptic Inputs. Phys Rev X 2024; 14:011021. PMID: 38911939; PMCID: PMC11194039; DOI: 10.1103/physrevx.14.011021.
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons lead to the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects postspiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) only when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that, without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
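A Monte Carlo sketch of the effect analyzed above (a crude simplification of the exact jump-process analysis; all parameters are assumptions): a passive membrane is driven by excitatory synapses whose activations are grouped into population-wide events, and the subthreshold voltage variance is estimated for increasing input synchrony at a fixed per-synapse rate.

```python
# Crude Monte Carlo sketch (not the paper's exact moment analysis; parameters are
# assumptions): a passive membrane receives excitatory input events in which each
# synapse participates with probability `sync`, keeping the per-synapse rate fixed.
import numpy as np

rng = np.random.default_rng(9)
dt, T = 1e-4, 20.0                      # time step and duration (s)
tau_m = 0.015                           # membrane time constant (s)
e_leak, e_exc = -70.0, 0.0              # leak and excitatory reversal potentials (mV)
n_syn, rate, w = 400, 5.0, 0.005        # synapse count, rate/synapse (Hz), jump fraction

def voltage_variance(sync):
    """sync = probability that a given synapse participates in each input event."""
    event_rate = rate / sync            # keeps the per-synapse rate fixed at `rate`
    v, samples = e_leak, []
    for _ in range(int(T / dt)):
        n_events = rng.poisson(event_rate * dt)
        if n_events > 0:
            n_active = rng.binomial(n_syn * n_events, sync)
            v += n_active * w * (e_exc - v)      # conductance-like jump toward E_exc
        v += dt * (e_leak - v) / tau_m           # leak decay
        samples.append(v)
    return np.var(samples[int(1.0 / dt):])       # discard a 1 s transient

for sync in (1 / n_syn, 0.01, 0.05):             # asynchronous -> weakly synchronous
    print(f"synchrony {sync:.4f}: voltage variance = {voltage_variance(sync):5.1f} mV^2")
```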
Collapse
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Learning and Memory, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Psychology, The University of Texas at Austin, Austin, Texas 78712, USA
| | - Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, Texas 78712, USA
- Department of Mathematics, The University of Texas at Austin, Austin, Texas 78712, USA
| |
Collapse
|
22
|
Becker LA, Li B, Priebe NJ, Seidemann E, Taillefumier T. Exact analysis of the subthreshold variability for conductance-based neuronal models with synchronous synaptic inputs. ARXIV 2023:arXiv:2304.09280v3. [PMID: 37131877 PMCID: PMC10153295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
The spiking activity of neocortical neurons exhibits a striking level of variability, even when these networks are driven by identical stimuli. The approximately Poisson firing of neurons has led to the hypothesis that these neural networks operate in the asynchronous state. In the asynchronous state, neurons fire independently from one another, so that the probability that a neuron experiences synchronous synaptic inputs is exceedingly low. While models of asynchronous neurons reproduce the observed spiking variability, it is not clear whether the asynchronous state can also account for the level of subthreshold membrane potential variability. We propose a new analytical framework to rigorously quantify the subthreshold variability of a single conductance-based neuron in response to synaptic inputs with prescribed degrees of synchrony. Technically, we leverage the theory of exchangeability to model input synchrony via jump-process-based synaptic drives; we then perform a moment analysis of the stationary response of a neuronal model with all-or-none conductances that neglects post-spiking reset. As a result, we produce exact, interpretable closed forms for the first two stationary moments of the membrane voltage, with explicit dependence on the input synaptic numbers, strengths, and synchrony. For biophysically relevant parameters, we find that the asynchronous regime only yields realistic subthreshold variability (voltage variance ≃ 4-9 mV²) when driven by a restricted number of large synapses, compatible with strong thalamic drive. By contrast, we find that achieving realistic subthreshold variability with dense cortico-cortical inputs requires including weak but nonzero input synchrony, consistent with measured pairwise spiking correlations. We also show that without synchrony, the neural variability averages out to zero for all scaling limits with vanishing synaptic weights, independent of any balanced state hypothesis. This result challenges the theoretical basis for mean-field theories of the asynchronous state.
Collapse
Affiliation(s)
- Logan A. Becker
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
| | - Baowang Li
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
| | - Nicholas J. Priebe
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Learning and Memory, The University of Texas at Austin
| | - Eyal Seidemann
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Center for Perceptual Systems, The University of Texas at Austin
- Department of Psychology, The University of Texas at Austin
| | - Thibaud Taillefumier
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin
- Department of Neuroscience, The University of Texas at Austin
- Department of Mathematics, The University of Texas at Austin
| |
Collapse
|
23
|
Zhang WH, Wu S, Josić K, Doiron B. Sampling-based Bayesian inference in recurrent circuits of stochastic spiking neurons. Nat Commun 2023; 14:7074. [PMID: 37925497 PMCID: PMC10625605 DOI: 10.1038/s41467-023-41743-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 09/15/2023] [Indexed: 11/06/2023] Open
Abstract
Two facts about cortex are widely accepted: neuronal responses show large spiking variability with near Poisson statistics and cortical circuits feature abundant recurrent connections between neurons. How these spiking and circuit properties combine to support sensory representation and information processing is not well understood. We build a theoretical framework showing that these two ubiquitous features of cortex combine to produce optimal sampling-based Bayesian inference. Recurrent connections store an internal model of the external world, and Poissonian variability of spike responses drives flexible sampling from the posterior stimulus distributions obtained by combining feedforward and recurrent neuronal inputs. We illustrate how this framework for sampling-based inference can be used by cortex to represent latent multivariate stimuli organized either hierarchically or in parallel. A neural signature of such network sampling is internally generated differential correlations whose amplitude is determined by the prior stored in the circuit, providing an experimentally testable prediction for our framework.
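The core idea, recurrent weights storing an internal model while response variability drives samples from the posterior, can be caricatured with a continuous Langevin sampler; this strips out the Poisson spiking mechanism the paper actually analyzes, and the prior, likelihood, and observation below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative generative model
P = np.array([[1.0, 0.8], [0.8, 1.0]])      # prior covariance ("stored in recurrent weights")
R = np.diag([0.5, 0.5])                      # observation noise covariance ("feedforward")
y = np.array([1.0, -0.5])                    # current observation

Lam = np.linalg.inv(P) + np.linalg.inv(R)    # posterior precision
mu = np.linalg.solve(Lam, np.linalg.inv(R) @ y)

# Langevin dynamics: drift toward the posterior mean plus injected noise;
# the stationary distribution of x is the posterior N(mu, Lam^-1).
dt, n = 1e-3, 200_000
x = np.zeros(2)
samples = np.empty((n, 2))
for t in range(n):
    x += -dt * Lam @ (x - mu) + np.sqrt(2 * dt) * rng.standard_normal(2)
    samples[t] = x

print("posterior mean (analytic):", mu)
print("posterior mean (samples): ", samples[n // 2:].mean(0))
print("posterior cov (analytic):\n", np.linalg.inv(Lam))
print("posterior cov (samples):\n", np.cov(samples[n // 2:].T))
```

The drift combines the feedforward (likelihood) and stored (prior) terms, and the injected variability is what makes the long-run samples match the posterior covariance rather than collapse onto its mean.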
Collapse
Affiliation(s)
- Wen-Hao Zhang
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
| | - Si Wu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, 100871, China
- Center of Quantitative Biology, Peking University, Beijing, 100871, China
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, TX, USA.
- Department of Biology and Biochemistry, University of Houston, Houston, TX, USA.
| | - Brent Doiron
- Department of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA.
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA.
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
| |
Collapse
|
24
|
Weiss O, Bounds HA, Adesnik H, Coen-Cagli R. Modeling the diverse effects of divisive normalization on noise correlations. PLoS Comput Biol 2023; 19:e1011667. [PMID: 38033166 PMCID: PMC10715670 DOI: 10.1371/journal.pcbi.1011667] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 12/12/2023] [Accepted: 11/07/2023] [Indexed: 12/02/2023] Open
Abstract
Divisive normalization, a prominent descriptive model of neural activity, is employed by theories of neural coding across many different brain areas. Yet, the relationship between normalization and the statistics of neural responses beyond single neurons remains largely unexplored. Here we focus on noise correlations, a widely studied pairwise statistic, because its stimulus and state dependence plays a central role in neural coding. Existing models of covariability typically ignore normalization despite empirical evidence suggesting it affects correlation structure in neural populations. We therefore propose a pairwise stochastic divisive normalization model that accounts for the effects of normalization and other factors on covariability. We first show that normalization modulates noise correlations in qualitatively different ways depending on whether normalization is shared between neurons, and we discuss how to infer when normalization signals are shared. We then apply our model to calcium imaging data from mouse primary visual cortex (V1), and find that it accurately fits the data, often outperforming a popular alternative model of correlations. Our analysis indicates that normalization signals are often shared between V1 neurons in this dataset. Our model will enable quantifying the relation between normalization and covariability in a broad range of neural systems, which could provide new constraints on circuit mechanisms of normalization and their role in information transmission and representation.
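A toy version of the central manipulation, whether the fluctuating normalization signal is shared between two neurons, and its effect on their noise correlation; the drive and normalization statistics below are invented and this is not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000

def noise_correlation(shared_norm):
    # Independent stochastic drives for the two neurons (illustrative lognormal fluctuations)
    d1 = np.exp(0.2 * rng.standard_normal(n_trials)) * 10.0
    d2 = np.exp(0.2 * rng.standard_normal(n_trials)) * 10.0
    if shared_norm:
        n1 = n2 = np.exp(0.3 * rng.standard_normal(n_trials)) * 5.0   # one shared normalization signal
    else:
        n1 = np.exp(0.3 * rng.standard_normal(n_trials)) * 5.0        # private normalization signals
        n2 = np.exp(0.3 * rng.standard_normal(n_trials)) * 5.0
    sigma = 1.0
    r1 = d1 / (sigma + n1)                                             # divisive normalization
    r2 = d2 / (sigma + n2)
    return np.corrcoef(r1, r2)[0, 1]

print("noise correlation, independent normalization: %.3f" % noise_correlation(False))
print("noise correlation, shared normalization:      %.3f" % noise_correlation(True))
```

Even with fully independent drives, a shared stochastic denominator alone induces a sizable positive noise correlation, which is the qualitative effect the model is built to capture and infer.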
Collapse
Affiliation(s)
- Oren Weiss
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
| | - Hayley A. Bounds
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
| | - Hillel Adesnik
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California, United States of America
| | - Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York, United States of America
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, New York, United States of America
| |
Collapse
|
25
|
Levy ERJ, Carrillo-Segura S, Park EH, Redman WT, Hurtado JR, Chung S, Fenton AA. A manifold neural population code for space in hippocampal coactivity dynamics independent of place fields. Cell Rep 2023; 42:113142. [PMID: 37742193 PMCID: PMC10842170 DOI: 10.1016/j.celrep.2023.113142] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 06/14/2023] [Accepted: 08/30/2023] [Indexed: 09/26/2023] Open
Abstract
Hippocampus place cell discharge is temporally unreliable across seconds and days, and place fields are multimodal, suggesting an "ensemble cofiring" spatial coding hypothesis with manifold dynamics that does not require reliable spatial tuning, in contrast to hypotheses based on place field (spatial tuning) stability. We imaged mouse CA1 (cornu ammonis 1) ensembles in two environments across three weeks to evaluate these coding hypotheses. While place fields "remap," being more distinct between than within environments, coactivity relationships generally change less. Decoding location and environment from 1-s ensemble location-specific activity is effective and improves with experience. Decoding environment from cell-pair coactivity relationships is also effective and improves with experience, even after removing place tuning. Discriminating environments from 1-s ensemble coactivity relies crucially on the cells with the most anti-coactive cell-pair relationships because activity is internally organized on a low-dimensional manifold of non-linear coactivity relationships that intermittently reregisters to environments according to the anti-cofiring subpopulation activity.
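The flavor of the coactivity-based decoding can be conveyed with a toy example: two 'environments' differ only in their pairwise correlation structure, and held-out activity windows are classified from their vector of pairwise coactivities. Nothing here models place fields or calcium imaging; all sizes and statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

n_cells, n_win, win_len = 30, 40, 200

def make_env():
    # Environment-specific mixing matrix that fixes the pairwise coactivity structure
    A = rng.standard_normal((n_cells, n_cells)) / np.sqrt(n_cells)
    return np.eye(n_cells) + 0.5 * A

def coactivity(A):
    # One short activity window and its vector of pairwise correlations
    x = A @ rng.standard_normal((n_cells, win_len))
    c = np.corrcoef(x)
    return c[np.triu_indices(n_cells, 1)]

A1, A2 = make_env(), make_env()
train = {env: np.mean([coactivity(A) for _ in range(n_win)], axis=0)
         for env, A in [(0, A1), (1, A2)]}

correct = 0
for env, A in [(0, A1), (1, A2)]:
    for _ in range(n_win):
        v = coactivity(A)
        pred = min(train, key=lambda k: np.linalg.norm(v - train[k]))   # nearest coactivity pattern
        correct += (pred == env)
print("decoding accuracy from pairwise coactivity: %.2f" % (correct / (2 * n_win)))
```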
Collapse
Affiliation(s)
| | - Simón Carrillo-Segura
- Center for Neural Science, New York University, New York, NY 10003, USA; Graduate Program in Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
| | - Eun Hye Park
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - William Thomas Redman
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA 93106, USA
| | | | - SueYeon Chung
- Center for Neural Science, New York University, New York, NY 10003, USA; Flatiron Institute Center for Computational Neuroscience, New York, NY 10010, USA
| | - André Antonio Fenton
- Center for Neural Science, New York University, New York, NY 10003, USA; Neuroscience Institute at the NYU Langone Medical Center, New York, NY 10016, USA.
| |
Collapse
|
26
|
Wu S, Huang C, Snyder A, Smith M, Doiron B, Yu B. Automated customization of large-scale spiking network models to neuronal population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.21.558920. [PMID: 37790533 PMCID: PMC10542160 DOI: 10.1101/2023.09.21.558920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
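A heavily stripped-down cartoon of the optimization loop: a stand-in 'simulator' with two parameters is tuned by random search until its population statistics (mean rate, Fano factor, mean pairwise correlation) approach target values. SNOPS itself fits spiking network models with a sample-efficient Bayesian optimizer; the simulator, search method, and target numbers below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(gain, shared_sd, n_neurons=50, n_trials=200):
    """Stand-in 'network simulator': doubly stochastic Poisson counts whose
    statistics depend on the two parameters (gain, shared_sd)."""
    shared = shared_sd * rng.standard_normal(n_trials)                    # shared latent fluctuation
    rates = gain * np.exp(shared[:, None] + 0.1 * rng.standard_normal((n_trials, n_neurons)))
    return rng.poisson(rates)

def summary_stats(counts):
    mean_rate = counts.mean()
    fano = (counts.var(0) / counts.mean(0)).mean()
    c = np.corrcoef(counts.T)
    mean_corr = c[np.triu_indices_from(c, 1)].mean()
    return np.array([mean_rate, fano, mean_corr])

target = np.array([5.0, 1.8, 0.12])          # e.g. statistics measured from recordings (invented here)
best, best_cost = None, np.inf
for _ in range(300):                          # random search as a stand-in for the Bayesian optimizer
    theta = np.array([rng.uniform(1, 20), rng.uniform(0.0, 0.6)])
    cost = np.abs(summary_stats(simulate(*theta)) / target - 1).sum()
    if cost < best_cost:
        best, best_cost = theta, cost
print("best (gain, shared_sd):", np.round(best, 2), " cost: %.3f" % best_cost)
```

Swapping the stand-in for an actual spiking network simulation and the random search for a sample-efficient optimizer is, roughly, the structure of the procedure the abstract describes.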
Collapse
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| | - Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
| | - Adam Snyder
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
| | - Matthew Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| | - Brent Doiron
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
| | - Byron Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| |
Collapse
|
27
|
Bernáez Timón L, Ekelmans P, Kraynyukova N, Rose T, Busse L, Tchumatchenko T. How to incorporate biological insights into network models and why it matters. J Physiol 2023; 601:3037-3053. [PMID: 36069408 DOI: 10.1113/jp282755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 08/24/2022] [Indexed: 11/08/2022] Open
Abstract
Due to the staggering complexity of the brain and its neural circuitry, neuroscientists rely on the analysis of mathematical models to elucidate its function. From Hodgkin and Huxley's detailed description of the action potential in 1952 to today, new theories and increasing computational power have opened up novel avenues to study how neural circuits implement the computations that underlie behaviour. Computational neuroscientists have developed many models of neural circuits that differ in complexity, biological realism or emergent network properties. With recent advances in experimental techniques for detailed anatomical reconstructions or large-scale activity recordings, rich biological data have become more available. The challenge when building network models is to reflect experimental results, either through a high level of detail or by finding an appropriate level of abstraction. Meanwhile, machine learning has facilitated the development of artificial neural networks, which are trained to perform specific tasks. While they have proven successful at achieving task-oriented behaviour, they are often abstract constructs that differ in many features from the physiology of brain circuits. Thus, it is unclear whether the mechanisms underlying computation in biological circuits can be investigated by analysing artificial networks that accomplish the same function but differ in their mechanisms. Here, we argue that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits and behaviour. More specifically, we advocate for network models that consider the connectivity structure and the recorded activity dynamics while evaluating task performance.
Collapse
Affiliation(s)
- Laura Bernáez Timón
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
| | - Pierre Ekelmans
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
| | - Nataliya Kraynyukova
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
| | - Tobias Rose
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
| | - Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
| | - Tatjana Tchumatchenko
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
| |
Collapse
|
28
|
Levenstein D, Okun M. Logarithmically scaled, gamma distributed neuronal spiking. J Physiol 2023; 601:3055-3069. [PMID: 36086892 PMCID: PMC10952267 DOI: 10.1113/jp282758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 07/28/2022] [Indexed: 11/08/2022] Open
Abstract
Naturally log-scaled quantities abound in the nervous system. Distributions of these quantities have non-intuitive properties, which have implications for data analysis and the understanding of neural circuits. Here, we review the log-scaled statistics of neuronal spiking and the relevant analytical probability distributions. Recent work using log-scaling revealed that interspike intervals of forebrain neurons segregate into discrete modes reflecting spiking at different timescales and are each well-approximated by a gamma distribution. Each neuron spends most of the time in an irregular spiking 'ground state' with the longest intervals, which determines the mean firing rate of the neuron. Across the entire neuronal population, firing rates are log-scaled and well approximated by the gamma distribution, with a small number of highly active neurons and an overabundance of low rate neurons (the 'dark matter'). These results are intricately linked to a heterogeneous balanced operating regime, which confers upon neuronal circuits multiple computational advantages and has evolutionarily ancient origins.
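A short illustration of the population-rate picture: firing rates drawn from a gamma distribution with shape < 1 show the overabundance of low-rate neurons and the long right tail, and the parameters can be recovered with a standard fit. The shape and scale values are assumptions, not estimates from the paper.

```python
import numpy as np
from scipy import stats

# Illustrative population firing rates from a gamma distribution with shape < 1,
# which produces many very low-rate neurons ("dark matter") plus a heavy tail
# of highly active cells.
shape, scale = 0.6, 5.0
rates = stats.gamma.rvs(shape, scale=scale, size=2000, random_state=4)

fit_shape, _, fit_scale = stats.gamma.fit(rates, floc=0)
print("fitted gamma shape=%.2f scale=%.2f" % (fit_shape, fit_scale))
print("median rate %.2f Hz, mean rate %.2f Hz (mean >> median for shape < 1)"
      % (np.median(rates), rates.mean()))
print("fraction of neurons below 1 Hz: %.2f" % (rates < 1.0).mean())
```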
Collapse
Affiliation(s)
- Daniel Levenstein
- Department of Neurology and Neurosurgery, McGill University, Montreal, QC, Canada
- Mila, Montréal, QC, Canada
| | - Michael Okun
- Department of Psychology and Neuroscience Institute, University of Sheffield, Sheffield, UK
| |
Collapse
|
29
|
Naik S, Adibpour P, Dubois J, Dehaene-Lambertz G, Battaglia D. Event-related variability is modulated by task and development. Neuroimage 2023; 276:120208. [PMID: 37268095 DOI: 10.1016/j.neuroimage.2023.120208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 05/11/2023] [Accepted: 05/30/2023] [Indexed: 06/04/2023] Open
Abstract
In carefully designed experimental paradigms, cognitive scientists interpret the mean event-related potentials (ERP) in terms of cognitive operations. However, the huge signal variability from one trial to the next calls into question how representative such mean events are. We explored here whether this variability is unwanted noise or an informative part of the neural response. We took advantage of the rapid changes in the visual system during human infancy and analyzed the variability of visual responses to central and lateralized faces in 2- to 6-month-old infants compared to adults using high-density electroencephalography (EEG). We observed that neural trajectories of individual trials always remain very far from ERP components, only moderately bending their direction with a substantial temporal jitter across trials. However, single-trial trajectories displayed characteristic patterns of acceleration and deceleration when approaching ERP components, as if they were under the active influence of steering forces causing transient attraction and stabilization. These dynamic events could only partly be accounted for by induced microstate transitions or phase reset phenomena. Importantly, these structured modulations of response variability, both between and within trials, had a rich sequential organization, which, in infants, was modulated by task difficulty and age. Our approaches to characterize Event Related Variability (ERV) expand on classic ERP analyses and provide the first evidence for the functional role of ongoing neural variability in human infants.
Collapse
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
| | - Parvaneh Adibpour
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
| | - Jessica Dubois
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France; Université de Paris, NeuroDiderot, Inserm, F-75019 Paris, France
| | | | - Demian Battaglia
- Institute for System Neuroscience U1106, Aix-Marseille Université, F-13005 Marseille, France; University of Strasbourg Institute for Advanced Studies (USIAS), F-67000 Strasbourg, France.
| |
Collapse
|
30
|
Holt CJ, Miller KD, Ahmadian Y. The stabilized supralinear network accounts for the contrast dependence of visual cortical gamma oscillations. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.11.540442. [PMID: 37214812 PMCID: PMC10197697 DOI: 10.1101/2023.05.11.540442] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
When stimulated, neural populations in the visual cortex exhibit fast rhythmic activity with frequencies in the gamma band (30-80 Hz). The gamma rhythm manifests as a broad resonance peak in the power-spectrum of recorded local field potentials, which exhibits various stimulus dependencies. In particular, in macaque primary visual cortex (V1), the gamma peak frequency increases with increasing stimulus contrast. Moreover, this contrast dependence is local: when contrast varies smoothly over visual space, the gamma peak frequency in each cortical column is controlled by the local contrast in that column's receptive field. No parsimonious mechanistic explanation for these contrast dependencies of V1 gamma oscillations has been proposed. The stabilized supralinear network (SSN) is a mechanistic model of cortical circuits that has accounted for a range of visual cortical response nonlinearities and contextual modulations, as well as their contrast dependence. Here, we begin by showing that a reduced SSN model without retinotopy robustly captures the contrast dependence of gamma peak frequency, and provides a mechanistic explanation for this effect based on the observed non-saturating and supralinear input-output function of V1 neurons. Given this result, the local dependence on contrast can trivially be captured in a retinotopic SSN which however lacks horizontal synaptic connections between its cortical columns. However, long-range horizontal connections in V1 are in fact strong, and underlie contextual modulation effects such as surround suppression. We thus explored whether a retinotopically organized SSN model of V1 with strong excitatory horizontal connections can exhibit both surround suppression and the local contrast dependence of gamma peak frequency. We found that retinotopic SSNs can account for both effects, but only when the horizontal excitatory projections are composed of two components with different patterns of spatial fall-off with distance: a short-range component that only targets the source column, combined with a long-range component that targets columns neighboring the source column. We thus make a specific qualitative prediction for the spatial structure of horizontal connections in macaque V1, consistent with the columnar structure of cortex.
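One way to see the proposed mechanism without the retinotopic model: linearize a two-population E-I circuit around its operating point. In the SSN the supralinear input-output function makes the effective synaptic gain grow with stimulus contrast, so here the gain g is simply varied by hand and the damped-oscillation (resonance) frequency is read off the Jacobian eigenvalues. The weights, time constants, and contrast-to-gain mapping are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Linearization of a two-population E-I rate circuit; g is the effective gain,
# which in an SSN increases with stimulus contrast because of the supralinear
# input-output function. All numbers are illustrative.
tauE, tauI = 6.0, 4.0                          # ms
wEE, wEI, wIE, wII = 1.5, 1.3, 3.0, 1.0

for contrast, g in [(12, 1.0), (25, 1.5), (50, 2.0)]:   # assumed contrast -> gain mapping
    J = np.array([[(-1 + g * wEE) / tauE, -g * wEI / tauE],
                  [ g * wIE / tauI,       (-1 - g * wII) / tauI]])
    lam = np.linalg.eigvals(J)
    f_res = np.abs(lam.imag).max() / (2 * np.pi) * 1000  # Hz (J is in units of 1/ms)
    stable = lam.real.max() < 0
    print("contrast %3d%%: resonance ~ %5.1f Hz, stable fixed point: %s"
          % (contrast, f_res, stable))
```

With these numbers the resonance moves from the low to the high gamma range as the effective gain rises, which is the qualitative contrast dependence the paper derives from the SSN.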
Collapse
Affiliation(s)
- Caleb J Holt
- Institute of Neuroscience, Department of Physics, University of Oregon, OR, USA
| | - Kenneth D Miller
- Center for Theoretical Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, and Dept. of Neuroscience, College of Physicians and Surgeons and Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, NY, USA
| | - Yashar Ahmadian
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
| |
Collapse
|
31
|
Zhu RJB, Wei XX. Unsupervised approach to decomposing neural tuning variability. Nat Commun 2023; 14:2298. [PMID: 37085524 PMCID: PMC10121715 DOI: 10.1038/s41467-023-37982-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 04/07/2023] [Indexed: 04/23/2023] Open
Abstract
Neural representation is often described by the tuning curves of individual neurons with respect to certain stimulus variables. Despite this tradition, it has become increasingly clear that neural tuning can vary substantially in accordance with a collection of internal and external factors. A challenge we are facing is the lack of appropriate methods to accurately capture the moment-to-moment tuning variability directly from the noisy neural responses. Here we introduce an unsupervised statistical approach, Poisson functional principal component analysis (Pf-PCA), which identifies different sources of systematic tuning fluctuations, moreover encompassing several current models (e.g., multiplicative gain models) as special cases. Applying this method to neural data recorded from macaque primary visual cortex - a paradigmatic case for which the tuning curve approach has been scientifically essential - we discovered a simple relationship governing the variability of orientation tuning, which unifies different types of gain changes proposed previously. By decomposing the neural tuning variability into interpretable components, our method enables discovery of unexpected structure of the neural code, capturing the influence of the external stimulus drive and internal states simultaneously.
Collapse
Affiliation(s)
- Rong J B Zhu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China.
- MOE Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, and MOE Frontiers Center for Brain Science, Shanghai, China.
| | - Xue-Xin Wei
- Department of Neuroscience, The University of Texas at Austin, Austin, USA.
- Department of Psychology, The University of Texas at Austin, Austin, USA.
- Center for Perceptual Systems, The University of Texas at Austin, Austin, USA.
- Center for Theoretical and Computational Neuroscience, The University of Texas at Austin, Austin, USA.
| |
Collapse
|
32
|
Zeraati R, Shi YL, Steinmetz NA, Gieselmann MA, Thiele A, Moore T, Levina A, Engel TA. Intrinsic timescales in the visual cortex change with selective attention and reflect spatial connectivity. Nat Commun 2023; 14:1858. [PMID: 37012299 PMCID: PMC10070246 DOI: 10.1038/s41467-023-37613-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 03/24/2023] [Indexed: 04/05/2023] Open
Abstract
Intrinsic timescales characterize dynamics of endogenous fluctuations in neural activity. Variation of intrinsic timescales across the neocortex reflects functional specialization of cortical areas, but less is known about how intrinsic timescales change during cognitive tasks. We measured intrinsic timescales of local spiking activity within columns of area V4 in male monkeys performing spatial attention tasks. The ongoing spiking activity unfolded across at least two distinct timescales, fast and slow. The slow timescale increased when monkeys attended to the receptive field's location and correlated with reaction times. By evaluating predictions of several network models, we found that spatiotemporal correlations in V4 activity were best explained by the model in which multiple timescales arise from recurrent interactions shaped by spatially arranged connectivity, and attentional modulation of timescales results from an increase in the efficacy of recurrent interactions. Our results suggest that multiple timescales may arise from the spatial connectivity in the visual cortex and flexibly change with the cognitive state due to dynamic effective interactions between neurons.
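The basic measurement, extracting fast and slow intrinsic timescales from the autocorrelation of ongoing activity, can be sketched on synthetic data built from two Ornstein-Uhlenbeck processes so that the recovered values can be checked; the 20 ms and 200 ms timescales and the fitting choices are illustrative, not the paper's estimates or pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

dt, T = 0.001, 200.0                        # s
n = int(T / dt)
tau_fast, tau_slow = 0.02, 0.2              # s (illustrative)

def ou(tau):
    # Unit-variance Ornstein-Uhlenbeck process with time constant tau
    x = np.empty(n); x[0] = 0.0
    for t in range(1, n):
        x[t] = x[t - 1] - dt / tau * x[t - 1] + np.sqrt(2 * dt / tau) * rng.standard_normal()
    return x

sig = ou(tau_fast) + 0.7 * ou(tau_slow)     # activity with two intrinsic timescales

max_lag = int(0.6 / dt)
sig0 = sig - sig.mean()
ac = np.array([np.mean(sig0[:n - k] * sig0[k:]) for k in range(max_lag)])
ac /= ac[0]
lags = np.arange(max_lag) * dt

double_exp = lambda t, a, t1, t2: a * np.exp(-t / t1) + (1 - a) * np.exp(-t / t2)
(a, t1, t2), _ = curve_fit(double_exp, lags, ac, p0=(0.5, 0.01, 0.1),
                           bounds=([0, 1e-3, 1e-3], [1, 1, 2]))
print("recovered timescales: %.0f ms and %.0f ms" % (1e3 * min(t1, t2), 1e3 * max(t1, t2)))
```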
Collapse
Affiliation(s)
- Roxana Zeraati
- International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Yan-Liang Shi
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
| | | | - Marc A Gieselmann
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
| | - Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
| | - Tirin Moore
- Department of Neurobiology and Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
| | - Anna Levina
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany.
- Department of Computer Science, University of Tübingen, Tübingen, Germany.
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany.
| | - Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
| |
Collapse
|
33
|
Bermudez-Contreras E, Schjetnan AGP, Luczak A, Mohajerani MH. Sensory experience selectively reorganizes the late component of evoked responses. Cereb Cortex 2023; 33:2626-2640. [PMID: 35704850 PMCID: PMC10016043 DOI: 10.1093/cercor/bhac231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 05/13/2022] [Accepted: 05/14/2022] [Indexed: 11/13/2022] Open
Abstract
In response to sensory stimulation, the cortex exhibits an early transient response followed by late and slower activation. Recent studies suggest that the early component represents features of the stimulus while the late component is associated with stimulus perception. Although very informative, these studies only focus on the amplitude of the evoked responses to study its relationship with sensory perception. In this work, we expand upon the study of how patterns of evoked and spontaneous activity are modified by experience at the mesoscale level using voltage and extracellular glutamate transient recordings over widespread regions of mouse dorsal neocortex. We find that repeated tactile or auditory stimulation selectively modifies the spatiotemporal patterns of cortical activity, mainly of the late evoked response in anesthetized mice injected with amphetamine and also in awake mice. This modification lasted up to 60 min and resulted in an increase in the amplitude of the late response after repeated stimulation and in an increase in the similarity between the spatiotemporal patterns of the late evoked response. This similarity increase occurs only for the evoked responses of the sensory modality that received the repeated stimulation. Thus, this selective long-lasting spatiotemporal modification of the cortical activity patterns might provide evidence that evoked responses are a cortex-wide phenomenon. This work opens new questions about how perception-related cortical activity changes with sensory experience across the cortex.
Collapse
Affiliation(s)
- Edgar Bermudez-Contreras
- Canadian Centre for Behavioral Neuroscience, University of Lethbridge, Lethbridge, AB T1K 3M4, Canada
| | | | - Artur Luczak
- Canadian Centre for Behavioral Neuroscience, University of Lethbridge, Lethbridge, AB T1K 3M4, Canada
| | - Majid H Mohajerani
- Corresponding author: Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, AB T1K 3M4, Canada.
| |
Collapse
|
34
|
Galgali AR, Sahani M, Mante V. Residual dynamics resolves recurrent contributions to neural computation. Nat Neurosci 2023; 26:326-338. [PMID: 36635498 DOI: 10.1038/s41593-022-01230-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 11/08/2022] [Indexed: 01/14/2023]
Abstract
Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals-that is, trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveals recurrent dynamics that is time dependent, but consistently stable, and suggests that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
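The residual-dynamics fit itself is straightforward to sketch: subtract the condition-averaged trajectory, regress residuals at time t+1 on residuals at time t across trials, and inspect the eigenvalues of the fitted maps. Here the ground truth is a known stable rotation so the recovery can be checked; dimensionality, dynamics, and noise are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

n_trials, n_time, n_dim = 300, 50, 3
theta = 0.25
A_true = 0.92 * np.array([[np.cos(theta), -np.sin(theta), 0.0],
                          [np.sin(theta),  np.cos(theta), 0.0],
                          [0.0,            0.0,           0.8]])

mean_traj = np.cumsum(rng.standard_normal((n_time, n_dim)), axis=0)   # arbitrary condition mean
X = np.empty((n_trials, n_time, n_dim))
for i in range(n_trials):
    res = rng.standard_normal(n_dim)
    for t in range(n_time):
        res = A_true @ res + 0.3 * rng.standard_normal(n_dim)
        X[i, t] = mean_traj[t] + res

residuals = X - X.mean(axis=0)                # trial-by-trial variability around the mean trajectory
eigs = []
for t in range(n_time - 1):
    R0, R1 = residuals[:, t, :], residuals[:, t + 1, :]
    A_t_T, *_ = np.linalg.lstsq(R0, R1, rcond=None)    # solves R0 @ A_t^T ≈ R1
    eigs.append(np.sort(np.abs(np.linalg.eigvals(A_t_T))))

print("true |eigenvalues|:  ", np.round(np.sort(np.abs(np.linalg.eigvals(A_true))), 2))
print("fitted |eigenvalues| (median over time):", np.round(np.median(eigs, axis=0), 2))
```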
Collapse
Affiliation(s)
- Aniruddh R Galgali
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| | - Valerio Mante
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
| |
Collapse
|
35
|
Qiao L, Ghosh P, Rangamani P. Design principles of improving the dose-response alignment in coupled GTPase switches. NPJ Syst Biol Appl 2023; 9:3. [PMID: 36720885 PMCID: PMC9889403 DOI: 10.1038/s41540-023-00266-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Accepted: 01/17/2023] [Indexed: 02/02/2023] Open
Abstract
"Dose-response alignment" (DoRA), where the downstream response of cellular signaling pathways closely matches the fraction of activated receptor, can improve the fidelity of dose information transmission. The negative feedback has been experimentally identified as a key component for DoRA, but numerical simulations indicate that negative feedback is not sufficient to achieve perfect DoRA, i.e., perfect match of downstream response and receptor activation level. Thus a natural question is whether there exist design principles for signaling motifs within only negative feedback loops to improve DoRA to near-perfect DoRA. Here, we investigated several model formulations of an experimentally validated circuit that couples two molecular switches-mGTPase (monomeric GTPase) and tGTPase (heterotrimeric GTPases) - with negative feedback loops. In the absence of feedback, the low and intermediate mGTPase activation levels benefit DoRA in mass action and Hill-function models, respectively. Adding negative feedback has versatile roles on DoRA: it may impair DoRA in the mass action model with low mGTPase activation level and Hill-function model with intermediate mGTPase activation level; in other cases, i.e., the mass action model with a high mGTPase activation level or the Hill-function model with a non-intermediate mGTPase activation level, it improves DoRA. Furthermore, we found that DoRA in a longer cascade (i.e., tGTPase) can be obtained using Hill-function kinetics under certain conditions. In summary, we show how ranges of activity of mGTPase, reaction kinetics, the negative feedback, and the cascade length affect DoRA. This work provides a framework for improving the DoRA performance in signaling motifs with negative feedback.
Collapse
Affiliation(s)
- Lingxia Qiao
- Department of Mechanical and Aerospace Engineering, Jacob's School of Engineering, University of California San Diego, La Jolla, CA, USA
| | - Pradipta Ghosh
- Department of Cellular and Molecular Medicine, School of Medicine, University of California San Diego, La Jolla, CA, USA.
- Moores Comprehensive Cancer Center, University of California San Diego, La Jolla, CA, USA.
- Department of Medicine, School of Medicine, University of California San Diego, La Jolla, CA, USA.
| | - Padmini Rangamani
- Department of Mechanical and Aerospace Engineering, Jacob's School of Engineering, University of California San Diego, La Jolla, CA, USA.
| |
Collapse
|
36
|
Nurminen L, Bijanzadeh M, Angelucci A. Size tuning of neural response variability in laminar circuits of macaque primary visual cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.17.524397. [PMID: 36711786 PMCID: PMC9882156 DOI: 10.1101/2023.01.17.524397] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
A defining feature of the cortex is its laminar organization, which is likely critical for cortical information processing. For example, visual stimuli of different size evoke distinct patterns of laminar activity. Visual information processing is also influenced by the response variability of individual neurons and the degree to which this variability is correlated among neurons. To elucidate laminar processing, we studied how neural response variability across the layers of macaque primary visual cortex is modulated by visual stimulus size. Our laminar recordings revealed that single neuron response variability and the shared variability among neurons are tuned for stimulus size, and this size-tuning is layer-dependent. In all layers, stimulation of the receptive field (RF) reduced single neuron variability, and the shared variability among neurons, relative to their pre-stimulus values. As the stimulus was enlarged beyond the RF, both single neuron and shared variability increased in supragranular layers, but either did not change or decreased in other layers. Surprisingly, we also found that small visual stimuli could increase variability relative to baseline values. Our results suggest multiple circuits and mechanisms as the source of variability in different layers and call for the development of new models of neural response variability.
Collapse
Affiliation(s)
- Lauri Nurminen
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, 65 Mario Capecchi Drive, Salt Lake City, UT 84132, USA
- Present address: College of Optometry, University of Houston, 4401 Martin Luther King Boulevard, Houston, TX 77204-2020, USA
| | - Maryam Bijanzadeh
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, 65 Mario Capecchi Drive, Salt Lake City, UT 84132, USA
| | - Alessandra Angelucci
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, 65 Mario Capecchi Drive, Salt Lake City, UT 84132, USA
| |
Collapse
|
37
|
Chadwick A, Khan AG, Poort J, Blot A, Hofer SB, Mrsic-Flogel TD, Sahani M. Learning shapes cortical dynamics to enhance integration of relevant sensory input. Neuron 2023; 111:106-120.e10. [PMID: 36283408 PMCID: PMC7614688 DOI: 10.1016/j.neuron.2022.10.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 07/14/2022] [Accepted: 09/30/2022] [Indexed: 11/05/2022]
Abstract
Adaptive sensory behavior is thought to depend on processing in recurrent cortical circuits, but how dynamics in these circuits shapes the integration and transmission of sensory information is not well understood. Here, we study neural coding in recurrently connected networks of neurons driven by sensory input. We show analytically how information available in the network output varies with the alignment between feedforward input and the integrating modes of the circuit dynamics. In light of this theory, we analyzed neural population activity in the visual cortex of mice that learned to discriminate visual features. We found that over learning, slow patterns of network dynamics realigned to better integrate input relevant to the discrimination task. This realignment of network dynamics could be explained by changes in excitatory-inhibitory connectivity among neurons tuned to relevant features. These results suggest that learning tunes the temporal dynamics of cortical circuits to optimally integrate relevant sensory input.
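The alignment argument can be demonstrated in a few lines with a linear recurrent network: the same small input is much easier to discriminate from the network state when it points along a slowly decaying (integrating) mode than along a fast mode. The network size, eigenvalues, noise level, and readout are arbitrary choices, not the model fitted to the imaging data.

```python
import numpy as np

rng = np.random.default_rng(7)

n, T, n_trials = 20, 100, 400
v = np.zeros(n); v[0] = 1.0                        # slowly decaying "integrating" mode
W = 0.97 * np.outer(v, v) + 0.3 * (np.eye(n) - np.outer(v, v))

def dprime(u):
    w = u / np.linalg.norm(u)                      # read out along the input direction
    proj = np.zeros((2, n_trials))
    for s in (0, 1):                               # two stimuli to be discriminated
        for k in range(n_trials):
            x = np.zeros(n)
            for t in range(T):
                x = W @ x + s * u + 0.5 * rng.standard_normal(n)
            proj[s, k] = w @ x                     # decode from the final state
    d = abs(proj[1].mean() - proj[0].mean())
    return d / np.sqrt(0.5 * (proj[0].var() + proj[1].var()))

u_aligned = 0.2 * v
u_orthogonal = np.zeros(n); u_orthogonal[1] = 0.2
print("d' with input aligned to the slow mode:    %.2f" % dprime(u_aligned))
print("d' with input orthogonal to the slow mode: %.2f" % dprime(u_orthogonal))
```

In this caricature, "learning" would amount to rotating either the input direction or the recurrent weights so that task-relevant input lands on the integrating mode.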
Collapse
Affiliation(s)
- Angus Chadwick
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK.
| | - Adil G Khan
- Centre for Developmental Neurobiology, King's College London, London, UK
| | - Jasper Poort
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
| | - Antonin Blot
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
| | - Sonja B Hofer
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
| | - Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK.
| |
Collapse
|
38
|
Efficient coding theory of dynamic attentional modulation. PLoS Biol 2022; 20:e3001889. [PMID: 36542662 PMCID: PMC9831638 DOI: 10.1371/journal.pbio.3001889] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Revised: 01/10/2023] [Accepted: 10/24/2022] [Indexed: 12/24/2022] Open
Abstract
Activity of sensory neurons is driven not only by external stimuli but also by feedback signals from higher brain areas. Attention is one particularly important internal signal whose presumed role is to modulate sensory representations such that they only encode information currently relevant to the organism at minimal cost. This hypothesis has, however, not yet been expressed in a normative computational framework. Here, by building on normative principles of probabilistic inference and efficient coding, we developed a model of dynamic population coding in the visual cortex. By continuously adapting the sensory code to changing demands of the perceptual observer, an attention-like modulation emerges. This modulation can dramatically reduce the amount of neural activity without deteriorating the accuracy of task-specific inferences. Our results suggest that a range of seemingly disparate cortical phenomena such as intrinsic gain modulation, attention-related tuning modulation, and response variability could be manifestations of the same underlying principles, which combine efficient sensory coding with optimal probabilistic inference in dynamic environments.
Collapse
|
39
|
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022; 23:744-766. [DOI: 10.1038/s41583-022-00642-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/22/2022] [Indexed: 11/06/2022]
|
40
|
Willumsen A, Midtgaard J, Jespersen B, Hansen CKK, Lam SN, Hansen S, Kupers R, Fabricius ME, Litman M, Pinborg L, Tascón-Vidarte JD, Sabers A, Roland PE. Local networks from different parts of the human cerebral cortex generate and share the same population dynamic. Cereb Cortex Commun 2022; 3:tgac040. [PMID: 36530950 PMCID: PMC9753090 DOI: 10.1093/texcom/tgac040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 10/05/2022] [Accepted: 10/10/2022] [Indexed: 11/07/2022] Open
Abstract
A major goal of neuroscience is to reveal mechanisms supporting collaborative actions of neurons in local and larger-scale networks. However, no clear overall principle of operation has emerged despite decades-long experimental efforts. Here, we used an unbiased method to extract and identify the dynamics of local postsynaptic network states contained in the cortical field potential. Field potentials were recorded by depth electrodes targeting a wide selection of cortical regions during spontaneous activity and during sensory, motor, and cognitive experimental tasks. Despite different architectures and different activities, all local cortical networks generated the same type of dynamic, confined to only one region of state space. Surprisingly, within this region, state trajectories expanded and contracted continuously during all brain activities and generated a single expansion followed by a contraction in a single trial. This behavior deviates from known attractors and attractor networks. The state-space contractions of particular subsets of brain regions were cross-correlated during perceptual, motor, and cognitive tasks. Our results imply that the cortex does not need to change its dynamic to shift between different activities, making task-switching inherent in the dynamic of collective cortical operations. Our results provide a mathematically described general explanation of local and larger-scale cortical dynamics.
Collapse
Affiliation(s)
- Alex Willumsen
- Department of Neuroscience, Panum Institute, University of Copenhagen, Denmark
| | - Jens Midtgaard
- Department of Neuroscience, Panum Institute, University of Copenhagen, Denmark
| | - Bo Jespersen
- Department of Neurosurgery, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | | | - Salina N Lam
- Department of Neuroscience, Panum Institute, University of Copenhagen, Denmark
| | - Sabine Hansen
- Department of Neuroscience, Panum Institute, University of Copenhagen, Denmark
| | - Ron Kupers
- Department of Neuroscience, Panum Institute, University of Copenhagen, Denmark; Department of Neurosurgery, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | - Martin E Fabricius
- Department of Clinical Neurophysiology, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | - Minna Litman
- Epilepsy Clinic, Department of Neurology, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | - Lars Pinborg
- Epilepsy Clinic, Department of Neurology, Rigshospitalet, University Hospital of Copenhagen, Denmark; Neurobiology Research Unit, Department of Neurology, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | | | - Anne Sabers
- Epilepsy Clinic, Department of Neurology, Rigshospitalet, University Hospital of Copenhagen, Denmark
| | - Per E Roland
- Corresponding author: Per E. Roland, Department of Neuroscience, Panum Institute, University of Copenhagen, Blegdamsvej 3B, 2200 Copenhagen, Denmark.
| |
Collapse
|
41
|
A circuit mechanism for independent modulation of excitatory and inhibitory firing rates after sensory deprivation. Proc Natl Acad Sci U S A 2022; 119:e2116895119. [PMID: 35925891 PMCID: PMC9371725 DOI: 10.1073/pnas.2116895119] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The cortex is particularly vulnerable to perturbations during sensitive periods, such as the critical period when manipulating sensory experience can induce long-lasting changes in brain structure. Depriving rodents of vision in one eye (known as monocular deprivation [MD]) reduces network activity over two days, whereby inhibitory neurons decrease their firing rates one day after MD, while excitatory neurons are delayed by an additional day. We use spiking networks to mechanistically dissect the requirements for this independent firing-rate regulation after sensory deprivation. We find that in networks stabilized by recurrent inhibition, at least two interneuron subtypes (parvalbumin-expressing and somatostatin-expressing interneurons) are necessary to dynamically alter the circuit response after deprivation and generalize the result across sensory cortices. Diverse interneuron subtypes shape sensory processing in mature cortical circuits. During development, sensory deprivation evokes powerful synaptic plasticity that alters circuitry, but how different inhibitory subtypes modulate circuit dynamics in response to this plasticity remains unclear. We investigate how deprivation-induced synaptic changes affect excitatory and inhibitory firing rates in a microcircuit model of the sensory cortex with multiple interneuron subtypes. We find that with a single interneuron subtype (parvalbumin-expressing [PV]), excitatory and inhibitory firing rates can only be comodulated—increased or decreased together. To explain the experimentally observed independent modulation, whereby one firing rate increases and the other decreases, requires strong feedback from a second interneuron subtype (somatostatin-expressing [SST]). Our model applies to the visual and somatosensory cortex, suggesting a general mechanism across sensory cortices. Therefore, we provide a mechanistic explanation for the differential role of interneuron subtypes in regulating firing rates, contributing to the already diverse roles they serve in the cortex.
Collapse
|
42
|
Hao X, Liu Q, Chan J, Li N, Shi X, Gu Y. Binocular visual experience drives the maturation of response variability and reliability in the visual cortex. iScience 2022; 25:104984. [PMID: 36105593 PMCID: PMC9465340 DOI: 10.1016/j.isci.2022.104984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 06/23/2022] [Accepted: 08/16/2022] [Indexed: 10/25/2022] Open
|
43
|
Chew KCM, Kumar V, Tan AYY. Different Excitation-Inhibition Correlations Between Spontaneous and Tone-evoked Activity in Primary Auditory Cortex Neurons. Neuroscience 2022; 496:205-218. [PMID: 35728764 DOI: 10.1016/j.neuroscience.2022.06.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 05/18/2022] [Accepted: 06/14/2022] [Indexed: 10/18/2022]
Abstract
Tone-evoked synaptic excitation and inhibition are highly correlated in many neurons with V-shaped tuning curves in the primary auditory cortex of pentobarbital-anesthetized rats. In contrast, there is less correlation between spontaneous excitation and inhibition in visual cortex neurons under the same anesthetic conditions. However, it was not known whether the primary auditory cortex resembles visual cortex in having spontaneous excitation and inhibition that is less correlated than tone-evoked excitation and inhibition. Here we report whole-cell voltage-clamp measurements of spontaneous excitation and inhibition in primary auditory cortex neurons of pentobarbital-anesthetized rats. Spontaneous excitatory and inhibitory currents appeared to mainly consist of distinct events, with the inhibitory event rate typically lower than the excitatory event rate. We use the ratio of the excitatory event rate to the inhibitory event rate, and the assumption that the excitatory and inhibitory synaptic currents can each be reasonably described as a filtered Poisson process, to estimate the maximum spontaneous excitatory-inhibitory correlation for each neuron. In a subset of neurons, we also measured tone-evoked excitation and inhibition. In neurons with V-shaped tuning curves, although tone-evoked excitation and inhibition were highly correlated, the spontaneous inhibitory event rate was typically sufficiently lower than the spontaneous excitatory event rate to indicate a lower excitatory-inhibitory correlation for spontaneous activity than for tone-evoked responses.
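The logic of the rate-ratio bound can be written out in a simplified form (a back-of-the-envelope version of the argument, assuming identical synaptic kernels for excitation and inhibition and perfectly coincident shared events; the paper's estimate is more careful):

```latex
% Shot noise x(t) = \sum_k a\, f(t - t_k) with event rate \lambda has, by Campbell's theorem,
%   \mathrm{Var}[x] = \lambda\, a^2 \int f(t)^2\, dt.
% If inhibitory events (rate \lambda_I \le \lambda_E) can at best all coincide with excitatory
% events, the covariance of the two filtered trains is at most \lambda_I a_E a_I \int f^2, so
\rho_{EI} \;\le\;
\frac{\lambda_I\, a_E a_I \int f^2}
     {\sqrt{\lambda_E a_E^2 \int f^2}\,\sqrt{\lambda_I a_I^2 \int f^2}}
\;=\; \sqrt{\frac{\lambda_I}{\lambda_E}} .
```

Under these assumptions, an inhibitory event rate well below the excitatory one already caps the spontaneous excitatory-inhibitory correlation, which is why the event-rate ratio is informative about the maximum correlation per neuron.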
Collapse
Affiliation(s)
- Katherine C M Chew
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.
| | - Vineet Kumar
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.
| | - Andrew Y Y Tan
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Cardiovascular Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore; Neurobiology Programme, Life Sciences Institute, National University of Singapore, 28 Medical Drive, Singapore 117456, Republic of Singapore.
| |
Collapse
|
44
|
Huang C, Pouget A, Doiron B. Internally generated population activity in cortical networks hinders information transmission. SCIENCE ADVANCES 2022; 8:eabg5244. [PMID: 35648863 PMCID: PMC9159697 DOI: 10.1126/sciadv.abg5244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 04/14/2022] [Indexed: 06/15/2023]
Abstract
How neuronal variability affects sensory coding is a central question in systems neuroscience, often with complex and model-dependent answers. Many studies explore population models with a parametric structure for response tuning and variability, preventing an analysis of how synaptic circuitry establishes neural codes. We study stimulus coding in networks of spiking neuron models with spatially ordered excitatory and inhibitory connectivity. The wiring structure is capable of producing rich population-wide shared neuronal variability that agrees with many features of recorded cortical activity. While both the spatial scales of feedforward and recurrent projections strongly affect noise correlations, only recurrent projections, and in particular inhibitory projections, can introduce correlations that limit the stimulus information available to a decoder. Using a spatial neural field model, we relate the recurrent circuit conditions for information limiting noise correlations to how recurrent excitation and inhibition can form spatiotemporal patterns of population-wide activity.
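The notion of information-limiting correlations can be illustrated with linear Fisher information, I = f'^T C^{-1} f' (a generic sketch with made-up tuning slopes and noise parameters, not the paper's spatial spiking network). Shared variability that lies along the signal direction f' caps the information a linear decoder can extract no matter how many neurons are read out, whereas an equally strong but randomly oriented shared mode does not:

```python
# Sketch (illustrative numbers, not the paper's model): linear Fisher
# information I = f'^T C^{-1} f' with and without noise along the signal
# direction f' ("differential", information-limiting correlations).
import numpy as np

rng = np.random.default_rng(1)

def linear_fisher(fprime, cov):
    return fprime @ np.linalg.solve(cov, fprime)

eps = 1e-3                                  # strength of differential noise
for N in (50, 200, 800):
    fprime = rng.normal(1.0, 0.2, N)        # tuning-curve slopes
    private = np.eye(N)                     # independent variability
    u = rng.normal(0.0, 1.0, N)
    u /= np.linalg.norm(u)
    shared = 0.5 * np.outer(u, u)           # a generic shared mode
    diff = eps * np.outer(fprime, fprime)   # noise aligned with the signal
    I_generic = linear_fisher(fprime, private + shared)
    I_limited = linear_fisher(fprime, private + shared + diff)
    print(f"N={N:3d}  generic correlations: {I_generic:7.1f}"
          f"   + differential correlations: {I_limited:6.1f}")
# The second column saturates toward 1/eps = 1000 as N grows.
```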
Collapse
Affiliation(s)
- Chengcheng Huang
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| | - Alexandre Pouget
- Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
| |
Collapse
|
45
|
|
46
|
Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives in which the motor cortex is involved in motor control via neuronal preference for kinetics and kinematics, a dynamical system perspective emerging in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power, and the controversies surrounding them, from both empirical and computational points of view. We then aim to reconcile the two perspectives and evaluate their theoretical impact, future directions, and potential applications in brain-machine interfaces.
Collapse
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China.,Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China.,University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China.,Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China.,University of Chinese Academy of Sciences, Beijing, 100049, China
| | - He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China. .,Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China. .,University of Chinese Academy of Sciences, Beijing, 100049, China.
| |
Collapse
|
47
|
Echeveste R, Ferrante E, Milone DH, Samengo I. Bridging physiological and perceptual views of autism by means of sampling-based Bayesian inference. Netw Neurosci 2022; 6:196-212. [PMID: 36605888 PMCID: PMC9810278 DOI: 10.1162/netn_a_00219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Accepted: 12/01/2021] [Indexed: 01/09/2023] Open
Abstract
Theories for autism spectrum disorder (ASD) have been formulated at different levels, ranging from physiological observations to perceptual and behavioral descriptions. Understanding the physiological underpinnings of perceptual traits in ASD remains a significant challenge in the field. Here we show how a recurrent neural circuit model that was optimized to perform sampling-based inference and displays characteristic features of cortical dynamics can help bridge this gap. The model was able to establish a mechanistic link between two descriptive levels for ASD: a physiological level, in terms of inhibitory dysfunction, neural variability, and oscillations, and a perceptual level, in terms of hypopriors in Bayesian computations. We took two parallel paths, inducing hypopriors in the probabilistic model and an inhibitory dysfunction in the network model, which led to consistent results in terms of the represented posteriors, providing support for the view that both descriptions might constitute two sides of the same coin.
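The perceptual-level notion of a hypoprior has a compact Bayesian reading (a toy conjugate-Gaussian sketch with placeholder numbers, not the paper's sampling network): broadening the prior shifts the posterior toward the sensory evidence and widens it, which is the kind of posterior change the network model reproduces through inhibitory dysfunction:

```python
# Toy illustration (not the paper's model): effect of a hypoprior (broader
# prior) on a conjugate Gaussian posterior for a single sensory observation.
import numpy as np

def gaussian_posterior(mu_prior, var_prior, x_obs, var_likelihood):
    var_post = 1.0 / (1.0 / var_prior + 1.0 / var_likelihood)
    mu_post = var_post * (mu_prior / var_prior + x_obs / var_likelihood)
    return mu_post, var_post

x_obs, var_likelihood = 2.0, 1.0         # one noisy observation
mu_prior = 0.0                           # contextual expectation

rng = np.random.default_rng(4)
for label, var_prior in [("typical prior    ", 0.5), ("hypoprior (broad)", 5.0)]:
    mu_p, var_p = gaussian_posterior(mu_prior, var_prior, x_obs, var_likelihood)
    # In a sampling-based code, neural activity is read as draws from this posterior.
    samples = rng.normal(mu_p, np.sqrt(var_p), 5)
    print(f"{label}: posterior mean {mu_p:.2f}, variance {var_p:.2f}, "
          f"example samples {np.round(samples, 2)}")
```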
Collapse
Affiliation(s)
- Rodrigo Echeveste
- Research Institute for Signals, Systems, and Computational Intelligence sinc(i) (FICH-UNL/CONICET), Santa Fe, Argentina
| | - Enzo Ferrante
- Research Institute for Signals, Systems, and Computational Intelligence sinc(i) (FICH-UNL/CONICET), Santa Fe, Argentina
| | - Diego H. Milone
- Research Institute for Signals, Systems, and Computational Intelligence sinc(i) (FICH-UNL/CONICET), Santa Fe, Argentina
| | - Inés Samengo
- Medical Physics Department and Balseiro Institute (CNEA-UNCUYO/CONICET), Bariloche, Argentina
| |
Collapse
|
48
|
Liang J, Zhou C. Criticality enhances the multilevel reliability of stimulus responses in cortical neural networks. PLoS Comput Biol 2022; 18:e1009848. [PMID: 35100254 PMCID: PMC8830719 DOI: 10.1371/journal.pcbi.1009848] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 02/10/2022] [Accepted: 01/18/2022] [Indexed: 11/18/2022] Open
Abstract
Cortical neural networks exhibit high internal variability in spontaneous dynamic activities, yet they can respond robustly and reliably to external stimuli, with multilevel features ranging from microscopic irregular spiking of neurons to macroscopic oscillatory local field potentials. A comprehensive study integrating these multilevel features of spontaneous and stimulus–evoked dynamics, which have seemingly distinct mechanisms, is still lacking. Here, we study the stimulus–response dynamics of biologically plausible excitation–inhibition (E–I) balanced networks. We confirm that networks around critical synchronous transition states can maintain strong internal variability but are sensitive to external stimuli. In this dynamical region, applying a stimulus to the network can reduce the trial-to-trial variability and shift the network oscillatory frequency while preserving the dynamical criticality. These multilevel features, widely observed in different experiments, cannot simultaneously occur in non-critical dynamical states. Furthermore, the dynamical mechanisms underlying these multilevel features are revealed using a semi-analytical mean-field theory that derives the macroscopic network field equations from the microscopic neuronal networks, enabling analysis by nonlinear dynamics theory and the linear noise approximation. The generic dynamical principle revealed here contributes to a more integrative understanding of neural systems and brain functions and incorporates multimodal and multilevel experimental observations. The E–I balanced neural network, in combination with the effective mean-field theory, can serve as a mechanistic modeling framework to study the multilevel neural dynamics underlying neural information and cognitive processes.
The complexity and variability of brain dynamical activity range from neuronal spiking and neural avalanches to oscillatory local field potentials of local neural circuits in both spontaneous and stimulus-evoked states. Such multilevel variable brain dynamics are functionally and behaviorally relevant and are principal components of the underlying circuit organization. To clarify their neural mechanisms more comprehensively, we use a bottom-up approach to study the stimulus–response dynamics of neural circuits. Our model assumes the following key biologically plausible components: excitation–inhibition (E–I) neuronal interaction and chemical synaptic coupling. We show that circuits with E–I balance have a special dynamic sub-region, the critical region. Circuits around this region could account for the emergence of multilevel brain response patterns, both ongoing and stimulus-induced, observed in different experiments, including the reduction of trial-to-trial variability, effective modulation of gamma frequency, and preservation of criticality in the presence of a stimulus. We further analyze the corresponding nonlinear dynamical principles using a novel and highly generalizable semi-analytical mean-field theory. Our computational and theoretical studies explain the cross-level brain dynamical organization of spontaneous and evoked states in a more integrative manner.
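As a caricature of the variability-quenching result (a one-dimensional normal-form sketch with arbitrary parameters, not the paper's spiking network or its mean-field equations), a noisy system poised at a critical point shows large trial-to-trial fluctuations in its spontaneous state, and a constant stimulus that pushes it away from the critical point contracts the trajectories and reduces the across-trial variance:

```python
# Sketch (not the paper's model): trial-to-trial variability of a noisy
# normal-form system x' = a*x - x^3 + input + noise, poised at a = 0.
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
n_pre, n_stim = 1000, 1000              # 1 s spontaneous, 1 s stimulus
a, sigma, stim = 0.0, 0.3, 1.0          # critical point, noise, stimulus drive
n_trials = 200

x = np.zeros(n_trials)
traces = np.empty((n_pre + n_stim, n_trials))
for t in range(n_pre + n_stim):
    drive = stim if t >= n_pre else 0.0
    x += dt * (a * x - x**3 + drive) + np.sqrt(dt) * sigma * rng.normal(size=n_trials)
    traces[t] = x

var_spont = traces[n_pre // 2 : n_pre].var(axis=1).mean()
var_evoked = traces[n_pre + n_stim // 2 :].var(axis=1).mean()
print(f"across-trial variance, spontaneous: {var_spont:.3f}")
print(f"across-trial variance, stimulus   : {var_evoked:.3f}")
```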
Collapse
Affiliation(s)
- Junhao Liang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Centre for Integrative Neuroscience, Eberhard Karls University of Tübingen, Tübingen, Germany
- Department for Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Department of Physics, Zhejiang University, Hangzhou, China
| |
Collapse
|
49
|
Shi YL, Steinmetz NA, Moore T, Boahen K, Engel TA. Cortical state dynamics and selective attention define the spatial pattern of correlated variability in neocortex. Nat Commun 2022; 13:44. [PMID: 35013259 PMCID: PMC8748999 DOI: 10.1038/s41467-021-27724-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Accepted: 12/03/2021] [Indexed: 01/20/2023] Open
Abstract
Correlated activity fluctuations in the neocortex influence sensory responses and behavior. Neural correlations reflect anatomical connectivity but also change dynamically with cognitive states such as attention. Yet, the network mechanisms defining the population structure of correlations remain unknown. We measured correlations within columns in the visual cortex. We show that the magnitude of correlations, their attentional modulation, and dependence on lateral distance are explained by columnar On-Off dynamics, which are synchronous activity fluctuations reflecting cortical state. We developed a network model in which the On-Off dynamics propagate across nearby columns generating spatial correlations with the extent controlled by attentional inputs. This mechanism, unlike previous proposals, predicts spatially non-uniform changes in correlations during attention. We confirm this prediction in our columnar recordings by showing that in superficial layers the largest changes in correlations occur at intermediate lateral distances. Our results reveal how spatially structured patterns of correlated variability emerge through interactions of cortical state dynamics, anatomical connectivity, and attention.
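A toy version of the propagation idea (a coupled two-state chain with made-up switching probabilities, not the paper's network model) already produces the qualitative signature: On-Off fluctuations that spread to neighbouring columns generate pairwise correlations that fall off with lateral distance, and strengthening the coupling, as an attention-like input might, extends their spatial range:

```python
# Sketch (illustrative parameters, not the paper's model): a chain of columns,
# each a binary On-Off process whose probability of switching On grows with the
# number of 'On' nearest neighbours, so On episodes propagate laterally.
import numpy as np

rng = np.random.default_rng(3)

def simulate_chain(n_cols=20, n_steps=20000, p_on=0.05, p_off=0.10, coupling=0.05):
    s = np.zeros(n_cols, dtype=int)
    states = np.empty((n_steps, n_cols), dtype=int)
    for t in range(n_steps):
        neigh = np.zeros(n_cols)
        neigh[1:] += s[:-1]
        neigh[:-1] += s[1:]
        turn_on = (s == 0) & (rng.random(n_cols) < p_on + coupling * neigh)
        turn_off = (s == 1) & (rng.random(n_cols) < p_off)
        s = np.where(turn_on, 1, np.where(turn_off, 0, s))
        states[t] = s
    return states

for coupling in (0.05, 0.20):            # weak vs strong lateral propagation
    c = np.corrcoef(simulate_chain(coupling=coupling).T)
    avg = [np.mean([c[i, i + d] for i in range(c.shape[0] - d)]) for d in (1, 3, 6)]
    print(f"coupling={coupling:.2f}  mean correlation at distance 1/3/6: "
          + "  ".join(f"{a:.2f}" for a in avg))
```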
Collapse
Affiliation(s)
- Yan-Liang Shi
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | | | - Tirin Moore
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
| | - Kwabena Boahen
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
| | | |
Collapse
|
50
|
Wu YK, Zenke F. Nonlinear transient amplification in recurrent neural networks with short-term plasticity. eLife 2021; 10:e71263. [PMID: 34895468 PMCID: PMC8820736 DOI: 10.7554/elife.71263] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 12/10/2021] [Indexed: 11/24/2022] Open
Abstract
To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological run-away activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the run-away dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
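The two phases can be caricatured with a single supralinear unit (a sketch with arbitrary parameters, not the paper's E-I network): without short-term plasticity, the fixed points of r = (w r + h)^2 show a low state plus an unstable threshold for weak input and no fixed point at all (run-away amplification) once h exceeds 1/(4w), while a depression-weakened effective weight w/(1 + c r) restores a bounded elevated state, that is, it quenches the run-away phase:

```python
# Sketch (one supralinear unit, illustrative parameters; the full E-I analysis
# is in the paper): fixed points of r = (w_eff(r) * r + h)^2, where w_eff is
# either constant (no STP) or depressed as w / (1 + c*r).
import numpy as np

def fixed_points(h, w=1.0, c=0.0, r_max=200.0, n_grid=400001):
    r = np.linspace(0.0, r_max, n_grid)
    w_eff = w / (1.0 + c * r)
    g = (w_eff * r + h) ** 2 - r                 # zero crossings = fixed points
    idx = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    return np.round(r[idx], 2)

for h in (0.10, 0.40):                           # below vs above h* = 1/(4w) = 0.25
    print(f"input h = {h:.2f}")
    print("  no STP  :", fixed_points(h, c=0.0))   # weak: low state + threshold; strong: none (run-away)
    print("  with STP:", fixed_points(h, c=0.5))   # run-away replaced by a bounded elevated state
```

The stability analysis of the quenched state, and its dependence on co-tuned inhibition, are what the full network model in the paper addresses; this snippet only locates where the fixed points sit.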
Collapse
Affiliation(s)
- Yue Kris Wu
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- Faculty of Natural Sciences, University of BaselBaselSwitzerland
- Max Planck Institute for Brain ResearchFrankfurtGermany
- School of Life Sciences, Technical University of MunichFreisingGermany
| | - Friedemann Zenke
- Friedrich Miescher Institute for Biomedical ResearchBaselSwitzerland
- Faculty of Natural Sciences, University of BaselBaselSwitzerland
| |
Collapse
|