151
Mackwood O, Naumann LB, Sprekeler H. Learning excitatory-inhibitory neuronal assemblies in recurrent networks. eLife 2021; 10:e59715. PMID: 33900199; PMCID: PMC8075581; DOI: 10.7554/elife.59715. Received 06/05/2020; accepted 03/03/2021.
Abstract
Understanding the connectivity observed in the brain and how it emerges from local plasticity rules is a grand challenge in modern neuroscience. In the primary visual cortex (V1) of mice, synapses between excitatory pyramidal neurons and inhibitory parvalbumin-expressing (PV) interneurons tend to be stronger for neurons that respond to similar stimulus features, although these neurons are not topographically arranged according to their stimulus preference. The presence of such excitatory-inhibitory (E/I) neuronal assemblies indicates a stimulus-specific form of feedback inhibition. Here, we show that activity-dependent synaptic plasticity on input and output synapses of PV interneurons generates a circuit structure that is consistent with mouse V1. Computational modeling reveals that both forms of plasticity must act in synergy to form the observed E/I assemblies. Once established, these assemblies produce a stimulus-specific competition between pyramidal neurons. Our model suggests that activity-dependent plasticity can refine inhibitory circuits to actively shape cortical computations.
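The synergy between plasticity on PV input and output synapses can be illustrated with a toy rate model. This is a sketch under simplifying assumptions (two discrete stimuli, feedforward PV responses, generic Hebbian and homeostatic rules, and made-up constants `eta`, `theta`, `r_target`), not the authors' published model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_e, n_pv, n_stim = 20, 10, 2
pref = np.repeat(np.arange(n_stim), n_e // n_stim)   # E tuning: half per stimulus

W_ep = rng.uniform(0.0, 0.1, (n_pv, n_e))   # E -> PV weights
W_pe = np.zeros((n_e, n_pv))                # PV -> E inhibition (magnitudes)
eta, theta, r_target = 0.05, 0.3, 0.5       # illustrative constants

for _ in range(2000):
    s = rng.integers(n_stim)
    r_e = np.where(pref == s, 1.0, 0.1)              # tuned E rates
    r_pv = np.maximum(W_ep @ r_e - theta, 0.0)       # rectified PV rates
    # Hebbian plasticity on PV *input* synapses with divisive normalization:
    # each PV unit gradually specializes on one stimulus group
    W_ep += eta * np.outer(r_pv, r_e)
    W_ep /= W_ep.sum(axis=1, keepdims=True)
    # Homeostatic plasticity on PV *output* synapses: inhibition onto an E cell
    # grows when it fires above target while the PV cell is active
    W_pe = np.clip(W_pe + eta * np.outer(r_e - r_target, r_pv), 0.0, None)

# Co-tuned E-PV pairs end up reciprocally and strongly coupled
pv_pref = (W_ep[:, pref == 1].sum(1) > W_ep[:, pref == 0].sum(1)).astype(int)
same = np.equal.outer(pv_pref, pref)        # PV x E co-tuning mask
print(W_ep[same].mean(), W_ep[~same].mean())
print(W_pe[same.T].mean(), W_pe[~same.T].mean())
```

Both rules matter in this sketch: without input plasticity the PV cells never acquire stimulus tuning, and without output plasticity inhibition is not routed back preferentially onto co-tuned pyramidal cells.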
Affiliation(s)
- Owen Mackwood
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Laura B Naumann
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Henning Sprekeler
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
152
Sachdeva PS, Livezey JA, Dougherty ME, Gu BM, Berke JD, Bouchard KE. Improved inference in coupling, encoding, and decoding models and its consequence for neuroscientific interpretation. J Neurosci Methods 2021; 358:109195. PMID: 33905791; DOI: 10.1016/j.jneumeth.2021.109195. Received 12/01/2020; revised 04/08/2021; accepted 04/10/2021.
Abstract
BACKGROUND: A central goal of systems neuroscience is to understand the relationships among constituent units in neural populations, and their modulation by external factors, using high-dimensional and stochastic neural recordings. Parametric statistical models (e.g., coupling, encoding, and decoding models) play an instrumental role in accomplishing this goal. However, extracting conclusions from a parametric model requires that it be fit using an inference algorithm capable of selecting the correct parameters and properly estimating their values. Traditional approaches to parameter inference have been shown to suffer from failures in both selection and estimation. The recent development of algorithms that ameliorate these deficiencies raises the question of whether past work relying on such inference procedures has produced inaccurate systems neuroscience models, thereby impairing their interpretation. NEW METHOD: We used algorithms based on the Union of Intersections (UoI), a statistical inference framework built on stability principles and capable of improved selection and estimation. COMPARISON: We fit functional coupling, encoding, and decoding models across a battery of neural datasets using both UoI and baseline inference procedures (e.g., ℓ1-penalized GLMs), and compared the structure of their fitted parameters. RESULTS: Across recording modality, brain region, and task, we found that UoI inferred models with increased sparsity, improved stability, and qualitatively different parameter distributions, while maintaining predictive performance. We obtained highly sparse functional coupling networks with substantially different community structure, more parsimonious encoding models, and decoding models that relied on fewer single units. CONCLUSIONS: Together, these results demonstrate that improved parameter inference, achieved via UoI, reshapes interpretation in diverse neuroscience contexts.
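The flavor of UoI-style inference (feature selection by intersecting lasso supports across bootstraps, then estimation by averaging unpenalized fits on the selected support) can be sketched on synthetic data. This is a simplified illustration of the idea, not the published UoI algorithm or its software:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 20
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]    # sparse ground truth
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.5 * rng.normal(size=n)

def lasso_cd(X, y, lam, n_sweeps=100):
    """Plain coordinate-descent lasso with per-sample penalty lam."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return b

# Selection module: keep only features chosen in *every* bootstrap (intersection)
n_boot, lam = 10, 0.1
support = np.ones(p, dtype=bool)
for _ in range(n_boot):
    idx = rng.integers(0, n, n)
    support &= lasso_cd(X[idx], y[idx], lam) != 0

# Estimation module: unpenalized OLS on the selected support, bagged over bootstraps
est = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, n)
    est[support] += np.linalg.lstsq(X[idx][:, support], y[idx], rcond=None)[0] / n_boot

print(np.flatnonzero(support))
```

The intersection step suppresses false positives that a single ℓ1-penalized fit would admit, while the unpenalized estimation step removes the shrinkage bias of the lasso coefficients.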
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Department of Physics, University of California, Berkeley, CA 94720, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Maximilian E Dougherty
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Bon-Mi Gu
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94143, USA
- Joshua D Berke
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94143, USA; Department of Psychiatry; Neuroscience Graduate Program; Kavli Institute for Fundamental Neuroscience; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94143, USA
- Kristofer E Bouchard
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Computational Resources Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
153
Inhibitory neurons exhibit high controlling ability in the cortical microconnectome. PLoS Comput Biol 2021; 17:e1008846. PMID: 33831009; PMCID: PMC8031186; DOI: 10.1371/journal.pcbi.1008846. Received 06/04/2020; accepted 03/01/2021.
Abstract
The brain is a network system in which excitatory and inhibitory neurons keep activity balanced within the highly non-random connectivity of the microconnectome. The proportion of inhibitory neurons in the cortex is much smaller than that of excitatory neurons, which raises the fundamental question of how inhibitory neurons maintain this balance with their excitatory neighbours. This study quantitatively evaluated the relatively high functional contribution of inhibitory neurons, not only in terms of individual-neuron properties such as firing rate but also in terms of network topology and the ability to control other, excitatory neurons. We combined simultaneous electrical recordings (~2.5 hours) from ~1000 neurons in vitro with a quantitative evaluation of neuronal interactions, including excitatory-inhibitory categorization. Recording targets, such as brain regions and cortical layers, were accurately defined by cross-referencing MRI and immunostaining data. The resulting interaction networks allowed us to quantify the topological influence of individual neurons in terms of their ability to control other neurons. Highly influential inhibitory neurons showed stronger controlling ability than excitatory neurons and were relatively often found in the deeper layers of the cortex. Furthermore, the neurons with high controlling ability were fewer in number than the central nodes of k-cores, and participated in more clustered motifs. In summary, this study suggests that the high controlling ability of inhibitory neurons, beyond a simply higher firing rate, is a key mechanism for keeping balance with the much larger population of excitatory neurons.
How a small number of inhibitory neurons functionally balances a much larger number of excitatory neurons through mutual control is a fundamental question. Because selection by controlling ability singles out far fewer neurons than selection by centrality measures, and targets them directly by their capacity to control other neurons, this selection method should be useful both for building realistic computational models and for stimulating the brain effectively and selectively in E/I-imbalanced disease states.
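The comparison with "central nodes of k-cores" refers to the standard k-core decomposition, obtained by iteratively peeling nodes of degree below k. A minimal version on a toy graph, unrelated to the paper's data:

```python
import numpy as np

def k_core(adj, k):
    """Return the nodes of the k-core: repeatedly remove nodes with degree < k."""
    alive = np.ones(len(adj), dtype=bool)
    while True:
        deg = adj[alive][:, alive].sum(axis=1)
        drop = deg < k
        if not drop.any():
            return np.flatnonzero(alive)
        alive[np.flatnonzero(alive)[drop]] = False

# Toy network: a dense 10-clique plus a sparse chain-like periphery
n = 30
adj = np.zeros((n, n), dtype=int)
adj[:10, :10] = 1 - np.eye(10, dtype=int)    # the clique
for i in range(10, n):
    adj[i, i - 1] = adj[i - 1, i] = 1        # chain edge
    adj[i, i % 10] = adj[i % 10, i] = 1      # one link into the clique

print(k_core(adj, 5))    # the sparse periphery is peeled away
```

Here the 5-core recovers exactly the densely connected cluster; the study's point is that neurons selected by controlling ability form an even smaller, more targeted set than such centrality-based cores.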
154
Robinson PA, Gao X, Han Y. Relationships between lognormal distributions of neural properties, activity, criticality, and connectivity. Biol Cybern 2021; 115:121-130. PMID: 33825983; DOI: 10.1007/s00422-021-00871-z. Received 11/27/2020; accepted 03/15/2021.
Abstract
Relationships between convergence of inputs onto neurons, divergence of outputs from them, synaptic strengths, nonlinear firing response properties, and randomness of axonal ranges are systematically explored by interrelating means and variances of synaptic strengths, firing rates, and soma voltages. When self-consistency is imposed, it is found that broad distributions of synaptic strength are a necessary concomitant of the known massive convergence of inputs to individual neurons, and observed widths of lognormal distributions of synaptic strength and firing rate are explained provided the brain is in a near-critical state, consistent with independent observations. The strongest individual synapses are shown to have an effect on soma voltage comparable to the effect of all others combined, which supports suggestions that they may have a key role in neural communication. Remarkably, inclusion of moderate randomness in characteristic axonal ranges is shown to account for the observed [Formula: see text]-fold variability in two-point connectivity at a given separation and [Formula: see text]-fold overall when the known mean exponential fall-off is included, consistent with observed near-lognormal distributions. Inferred axonal deviations from straight-line paths are also consistent with independent estimates.
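The claim that the strongest individual synapses rival the combined effect of all others can be probed numerically for lognormal weights. The distribution widths below are illustrative choices for the demonstration, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000    # synapses converging on one neuron

def strongest_vs_rest(sigma, n_trials=200):
    """Median ratio of the strongest synaptic weight to the sum of all others."""
    w = rng.lognormal(mean=0.0, sigma=sigma, size=(n_trials, n))
    top = w.max(axis=1)
    return np.median(top / (w.sum(axis=1) - top))

r_narrow = strongest_vs_rest(0.5)   # narrow distribution: strongest synapse negligible
r_wide = strongest_vs_rest(3.0)     # very broad distribution: strongest synapse dominant
print(r_narrow, r_wide)
```

For narrow lognormals the maximum is a tiny fraction of the total input, while for sufficiently broad ones a single synapse carries a weight comparable to a sizeable fraction of the remaining population, which is the qualitative effect the paper quantifies self-consistently.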
Affiliation(s)
- P A Robinson
- School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia
- Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales 2006, Australia
- Xiao Gao
- School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia
- Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales 2006, Australia
- Y Han
- School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia
- Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales 2006, Australia
155
Aitchison L, Jegminat J, Menendez JA, Pfister JP, Pouget A, Latham PE. Synaptic plasticity as Bayesian inference. Nat Neurosci 2021; 24:565-571. PMID: 33707754; DOI: 10.1038/s41593-021-00809-5. Received 06/19/2020; accepted 01/26/2021.
Abstract
Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
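The first hypothesis (learning rate scales with uncertainty) is naturally illustrated by a Kalman filter tracking a slowly drifting target weight: the gain, i.e. the learning rate, is large exactly when the posterior variance is large. This is a generic sketch of the principle with made-up constants, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
drift, obs_noise = 0.01, 0.5      # random-walk speed of the "true" weight; feedback noise

w_true = 0.0
mu, var = 0.0, 1.0                # synapse's posterior mean and variance ("error bars")
errs, vars_ = [], []
for t in range(T):
    w_true += drift * rng.normal()              # the optimal weight slowly drifts
    var += drift ** 2                           # prediction step: uncertainty grows
    obs = w_true + obs_noise * rng.normal()     # noisy feedback about the weight
    K = var / (var + obs_noise ** 2)            # Kalman gain = adaptive learning rate
    mu += K * (obs - mu)                        # more uncertainty -> faster learning
    var *= (1 - K)                              # uncertainty shrinks after each update
    errs.append((mu - w_true) ** 2)
    vars_.append(var)

# Second hypothesis: PSP variability proportional to uncertainty, i.e. the
# transmitted weight is a sample from the posterior rather than its mean
psp = mu + np.sqrt(var) * rng.normal()
print(np.mean(errs[100:]), np.mean(vars_[100:]), psp)
```

When the filter is calibrated, the average squared tracking error matches the posterior variance it reports, which is the sense in which the synapse's "error bars" are meaningful.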
Affiliation(s)
- Laurence Aitchison
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Department of Computer Science, University of Bristol, Bristol, UK
- Jannes Jegminat
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
- Jorge Aurelio Menendez
- Gatsby Computational Neuroscience Unit, University College London, London, UK; CoMPLEX, University College London, London, UK
- Jean-Pascal Pfister
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
- Alexandre Pouget
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Peter E Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
156
Amgalan A, Taylor P, Mujica-Parodi LR, Siegelmann HT. Unique scales preserve self-similar integrate-and-fire functionality of neuronal clusters. Sci Rep 2021; 11:5331. PMID: 33674620; PMCID: PMC7936002; DOI: 10.1038/s41598-021-82461-4. Received 02/24/2020; accepted 01/19/2021.
Abstract
Brains demonstrate varying spatial scales of nested hierarchical clustering. Identifying the brain's neuronal cluster size to be presented as nodes in a network computation is critical to both neuroscience and artificial intelligence, as these define the cognitive blocks capable of building intelligent computation. Experiments support various forms and sizes of neural clustering, from handfuls of dendrites to thousands of neurons, and hint at their behavior. Here, we use computational simulations with a brain-derived fMRI network to show that not only do brain networks remain structurally self-similar across scales but also neuron-like signal integration functionality ("integrate and fire") is preserved at particular clustering scales. As such, we propose a coarse-graining of neuronal networks to ensemble-nodes, with multiple spikes making up its ensemble-spike and time re-scaling factor defining its ensemble-time step. This fractal-like spatiotemporal property, observed in both structure and function, permits strategic choice in bridging across experimental scales for computational modeling while also suggesting regulatory constraints on developmental and evolutionary "growth spurts" in brain size, as per punctuated equilibrium theories in evolutionary biology.
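The proposed coarse-graining (ensemble-nodes, ensemble-spikes, and a re-scaled time step) can be written down directly for a toy network. The cluster labels and re-scaling factor below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 120, 6                      # neurons, ensemble-nodes
labels = np.arange(n) % k          # cluster assignment of each neuron
W = (rng.random((n, n)) < 0.1).astype(float)    # binary synaptic graph

# Membership matrix M (k x n): M[c, i] = 1 if neuron i belongs to cluster c
M = np.zeros((k, n))
M[labels, np.arange(n)] = 1.0
sizes = M.sum(axis=1)

# Ensemble-node network: total connection count between clusters,
# normalised by cluster sizes so that entries are connection densities
W_coarse = (M @ W @ M.T) / np.outer(sizes, sizes)

# Ensemble-spike: a cluster "fires" when a majority of its neurons spike
# within one re-scaled (ensemble) time step of tau original bins
spikes = rng.random((n, 1000)) < 0.05           # neuron x time binary raster
tau = 5                                         # ensemble-time re-scaling factor
pooled = spikes.reshape(n, -1, tau).any(axis=2)             # coarse time bins
ensemble_spikes = (M @ pooled) > 0.5 * sizes[:, None]       # majority rule

print(W_coarse.shape, ensemble_spikes.shape)
```

The paper's claim is that for particular choices of cluster size and `tau`, the coarse network's structure and its thresholded ("integrate and fire"-like) ensemble dynamics remain self-similar to the original.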
Affiliation(s)
- Anar Amgalan
- Physics and Astronomy Department, Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook, NY, USA
- Laboratory for Computational Neurodiagnostics, Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
- Patrick Taylor
- College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, USA
- Lilianne R Mujica-Parodi
- Physics and Astronomy Department, Laufer Center for Physical and Quantitative Biology, Stony Brook University, Stony Brook, NY, USA
- Laboratory for Computational Neurodiagnostics, Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
- Hava T Siegelmann
- College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, USA
- Neuroscience and Behavior Program, University of Massachusetts, Amherst, MA, USA
- Center for Data Science, University of Massachusetts, Amherst, MA, USA
157
Weidel P, Duarte R, Morrison A. Unsupervised learning and clustered connectivity enhance reinforcement learning in spiking neural networks. Front Comput Neurosci 2021; 15:543872. PMID: 33746728; PMCID: PMC7970044; DOI: 10.3389/fncom.2021.543872. Received 03/18/2020; accepted 02/08/2021.
Abstract
Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
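The two-stage architecture, unsupervised organization of the input projections followed by reinforcement learning on the output projections, can be distilled into a compact rate-free sketch. Here k-means stands in for unsupervised synaptic plasticity and a reward-averaging readout stands in for the spiking RL rule; both substitutions are ours, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
protos = np.zeros((2, 10))
protos[0, :5] = 2.0        # two synthetic stimulus classes in R^10
protos[1, 5:] = 2.0

def batch(m):
    c = rng.integers(0, 2, m)
    return c, protos[c] + 0.1 * rng.normal(size=(m, 10))

# --- Unsupervised stage: cluster the inputs (k-means, farthest-pair init) ---
_, X = batch(300)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
i, j = np.unravel_index(np.argmax(d2), d2.shape)
cent = X[[i, j]]
for _ in range(10):
    assign = ((X[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
    cent = np.array([X[assign == c].mean(0) for c in range(2)])

# --- Reinforcement stage: learn action values on the self-organized cluster code ---
W_out = np.zeros((2, 2))               # action x cluster value table
for _ in range(2000):
    c, x = batch(1)
    c, x = c[0], x[0]
    k = ((cent - x) ** 2).sum(1).argmin()            # active cluster
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(W_out[:, k]))
    reward = float(a == c)                           # correct action rewarded
    W_out[a, k] += 0.1 * (reward - W_out[a, k])      # running reward average

# Greedy evaluation of the learned policy
c_test, X_test = batch(500)
k_test = ((X_test[:, None, :] - cent[None]) ** 2).sum(-1).argmin(1)
acc = (np.argmax(W_out[:, k_test], axis=0) == c_test).mean()
print(acc)
```

The point mirrored from the paper: the agent never sees hand-designed place cells or receptive fields; a clustered representation emerges from the inputs themselves, and the reward-driven readout only needs that clearly separable cluster code.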
Affiliation(s)
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
158
Probing the structure-function relationship with neural networks constructed by solving a system of linear equations. Sci Rep 2021; 11:3808. PMID: 33589672; PMCID: PMC7884791; DOI: 10.1038/s41598-021-82964-0. Received 07/08/2020; accepted 01/27/2021.
Abstract
Neural network models are an invaluable tool to understand brain function since they allow us to connect the cellular and circuit levels with behaviour. Neural networks usually comprise a huge number of parameters, which must be chosen carefully such that networks reproduce anatomical, behavioural, and neurophysiological data. These parameters are usually fitted with off-the-shelf optimization algorithms that iteratively change network parameters and simulate the network to evaluate its performance and improve fitting. Here we propose to invert the fitting process by proceeding from the network dynamics towards network parameters. Firing state transitions are chosen according to the transition graph associated with the solution of a task. Then, a system of linear equations is constructed from the network firing states and membrane potentials, in a way that guarantees the consistency of the system. This allows us to uncouple the dynamical features of the model, like its neurons firing rate and correlation, from the structural features, and the task-solving algorithm implemented by the network. We employed our method to probe the structure–function relationship in a sequence memory task. The networks obtained showed connectivity and firing statistics that recapitulated experimental observations. We argue that the proposed method is a complementary and needed alternative to the way neural networks are constructed to model brain function.
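The core inversion, prescribing the firing-state trajectory first and then solving a linear system for the weights, can be sketched for a binary threshold network. This is a simplified stand-in for the paper's spiking formulation, with an arbitrary random trajectory in place of a task-derived transition graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 30, 8
# Desired trajectory of binary firing states (columns are time steps)
S = (rng.random((n, T)) < 0.5).astype(float)

# Prescribe membrane potentials consistent with the next state: above
# threshold (0) where a spike is required, below it otherwise
margin = 1.0
V = np.where(S[:, 1:] > 0.5, margin, -margin)

# Solve W @ S[:, :-1] = V for the weights; the system is underdetermined
# (more neurons than transitions), so an exact min-norm solution exists
W = np.linalg.lstsq(S[:, :-1].T, V.T, rcond=None)[0].T

# Replay: the constructed network walks through the prescribed states
s = S[:, 0]
for t in range(T - 1):
    s = (W @ s > 0).astype(float)
print("final state matches:", bool(np.array_equal(s, S[:, -1])))
```

Because the dynamics are fixed by construction, structural properties of `W` (sparsity, sign constraints, added via extra linear constraints) can be varied independently of the firing statistics, which is the uncoupling the paper exploits.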
159
Doostdar N, Airey J, Radulescu CI, Melgosa-Ecenarro L, Zabouri N, Pavlidi P, Kopanitsa M, Saito T, Saido T, Barnes SJ. Multi-scale network imaging in a mouse model of amyloidosis. Cell Calcium 2021; 95:102365. PMID: 33610083; DOI: 10.1016/j.ceca.2021.102365. Received 11/10/2020; revised 01/22/2021; accepted 01/24/2021.
Abstract
The adult neocortex is not hard-wired but instead retains the capacity to reorganise across multiple spatial scales long into adulthood. Plastic reorganisation occurs at the level of mesoscopic sensory maps, functional neuronal assemblies and synaptic ensembles and is thought to be a critical feature of neuronal network function. Here, we describe a series of approaches that use calcium imaging to measure network reorganisation across multiple spatial scales in vivo. At the mesoscopic level, we demonstrate that sensory activity can be measured in animals undergoing longitudinal behavioural assessment involving automated touchscreen tasks. At the cellular level, we show that network dynamics can be longitudinally measured at both stable and transient functional assemblies. At the level of single synapses, we show that functional subcellular calcium imaging approaches can be used to measure synaptic ensembles of dendritic spines in vivo. Finally, we demonstrate that all three levels of imaging can be spatially related to local pathology in a preclinical rodent model of amyloidosis. We propose that multi-scale in vivo calcium imaging can be used to measure parallel plasticity processes operating across multiple spatial scales in both the healthy brain and preclinical models of disease.
Affiliation(s)
- Nazanin Doostdar
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Joseph Airey
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Carola I Radulescu
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Leire Melgosa-Ecenarro
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Nawal Zabouri
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Pavlina Pavlidi
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Maksym Kopanitsa
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
- Takashi Saito
- Department of Neurocognitive Science, Institute of Brain Science, Nagoya City University Graduate School of Medical Sciences, Aichi, 467-8601, Japan
- Takaomi Saido
- Laboratory for Proteolytic Neuroscience, RIKEN Centre for Brain Science, Wako-shi, Saitama, 351-0198, Japan
- Samuel J Barnes
- UK Dementia Research Institute, Department of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, Du Cane Road, London, W12 0NN, United Kingdom
160
Daie K, Svoboda K, Druckmann S. Targeted photostimulation uncovers circuit motifs supporting short-term memory. Nat Neurosci 2021; 24:259-265. PMID: 33495637; DOI: 10.1038/s41593-020-00776-3. Received 08/12/2020; accepted 12/15/2020.
Abstract
Short-term memory is associated with persistent neural activity that is maintained by positive feedback between neurons. To explore the neural circuit motifs that produce memory-related persistent activity, we measured coupling between functionally characterized motor cortex neurons in mice performing a memory-guided response task. Targeted two-photon photostimulation of small (<10) groups of neurons produced sparse calcium responses in coupled neurons over approximately 100 μm. Neurons with similar task-related selectivity were preferentially coupled. Photostimulation of different groups of neurons modulated activity in different subpopulations of coupled neurons. Responses of stimulated and coupled neurons persisted for seconds, far outlasting the duration of the photostimuli. Photostimuli produced behavioral biases that were predictable based on the selectivity of the perturbed neuronal population, even though photostimulation preceded the behavioral response by seconds. Our results suggest that memory-related neural circuits contain intercalated, recurrently connected modules, which can independently maintain selective persistent activity.
Affiliation(s)
- Kayvon Daie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Shaul Druckmann
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Stanford University, Stanford, CA, USA
161
Monosynaptic inference via finely-timed spikes. J Comput Neurosci 2021; 49:131-157. PMID: 33507429; DOI: 10.1007/s10827-020-00770-5. Received 09/12/2019; revised 09/04/2020; accepted 10/19/2020.
Abstract
Observations of finely-timed spike relationships in population recordings have been used to support partial reconstruction of neural microcircuit diagrams. In this approach, fine-timescale components of paired spike train interactions are isolated and subsequently attributed to synaptic parameters. Recent perturbation studies strengthen the case for such an inference, yet the complete set of measurements needed to calibrate statistical models is unavailable. To address this gap, we study features of pairwise spiking in a large-scale in vivo dataset where presynaptic neurons were explicitly decoupled from network activity by juxtacellular stimulation. We then construct biophysical models of paired spike trains to reproduce the observed phenomenology of in vivo monosynaptic interactions, including both fine-timescale spike-spike correlations and firing irregularity. A key characteristic of these models is that the paired neurons are coupled by rapidly-fluctuating background inputs. We quantify a monosynapse's causal effect by comparing the postsynaptic train with its counterfactual, when the monosynapse is removed. Subsequently, we develop statistical techniques for estimating this causal effect from the pre- and post-synaptic spike trains. A particular focus is the justification and application of a nonparametric separation of timescale principle to implement synaptic inference. Using simulated data generated from the biophysical models, we characterize the regimes in which the estimators accurately identify the monosynaptic effect. A secondary goal is to initiate a critical exploration of neurostatistical assumptions in terms of biophysical mechanisms, particularly with regards to the challenging but arguably fundamental issue of fast, unobservable nonstationarities in background dynamics.
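A common way to apply the separation-of-timescales idea is jitter correction of the cross-correlogram: resampling presynaptic spike times within a small window destroys fine-timescale (putatively synaptic) structure while preserving slow comodulation, so subtracting the jittered baseline isolates the fast component. A self-contained sketch on simulated trains (all rates, windows, and the 2 ms synaptic lag are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 2000.0                      # 1 ms bins, recording length in seconds
n_bins = int(T / dt)

# Presynaptic train: Poisson with a slow shared rate modulation
slow = 5.0 * (1 + 0.5 * np.sin(2 * np.pi * np.arange(n_bins) * dt / 0.5))
pre = rng.random(n_bins) < slow * dt

# Postsynaptic train: same slow background plus a fast monosynaptic effect,
# an excess spike probability 2 ms after each presynaptic spike
lag_true = 2
syn = np.zeros(n_bins)
syn[lag_true:] = 0.02 * pre[:-lag_true]
post = rng.random(n_bins) < slow * dt + syn

def ccg(pre, post, max_lag):
    """Raw cross-correlogram: post spike counts at each lag around pre spikes."""
    idx = np.flatnonzero(pre)
    idx = idx[(idx > max_lag) & (idx < len(post) - max_lag)]
    lags = np.arange(-max_lag, max_lag + 1)
    return np.array([post[idx + k].sum() for k in lags], dtype=float), lags

max_lag = 25
raw, lags = ccg(pre, post, max_lag)

# Baseline: average CCG after jittering pre spikes within +/-10 ms
n_jit, jit = 20, 10
baseline = np.zeros_like(raw)
for _ in range(n_jit):
    shift = rng.integers(-jit, jit + 1, size=int(pre.sum()))
    jidx = np.clip(np.flatnonzero(pre) + shift, 0, n_bins - 1)
    jpre = np.zeros(n_bins, dtype=bool)
    jpre[jidx] = True
    b, _ = ccg(jpre, post, max_lag)
    baseline += b / n_jit

excess = raw - baseline                    # fine-timescale (putative synaptic) part
print(lags[np.argmax(excess)])             # lag of the corrected CCG peak
```

The corrected correlogram peaks at the simulated synaptic delay even though the raw correlogram is dominated by the broad peak from shared slow background input.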
162
Vezoli J, Magrou L, Goebel R, Wang XJ, Knoblauch K, Vinck M, Kennedy H. Cortical hierarchy, dual counterstream architecture and the importance of top-down generative networks. Neuroimage 2021; 225:117479. PMID: 33099005; PMCID: PMC8244994; DOI: 10.1016/j.neuroimage.2020.117479. Received 04/03/2020; revised 09/29/2020; accepted 10/15/2020.
Abstract
Hierarchy is a major organizational principle of the cortex and underpins modern computational theories of cortical function. The local microcircuit amplifies long-distance inter-areal inputs, which show distance-dependent changes in their laminar profiles. Statistical modeling of these changes demonstrates that inputs from multiple hierarchical levels to their target areas are remarkably consistent, allowing the construction of a cortical hierarchy based on a principle of hierarchical distance. The same statistical modeling applied to structure can also be applied to laminar differences in the oscillatory coherence between areas, thereby determining a functional hierarchy of the cortex. Close examination of the anatomy of inter-areal connectivity reveals a dual counterstream architecture, with well-defined distance-dependent feedback and feedforward pathways in both the supra- and infragranular layers, suggesting a multiplicity of feedback pathways with well-defined functional properties. These findings are consistent with feedback connections providing a generative network involved in a wide range of cognitive functions. A dynamical model constrained by connectivity data provides insight into the experimentally observed signatures of frequency-dependent Granger causality for feedforward versus feedback signaling. Concerted experiments capitalizing on recent technical advances and combining tract-tracing, high-resolution fMRI, optogenetics and mathematical modeling hold the promise of a much improved understanding of lamina-constrained mechanisms of neural computation and cognition. However, because inter-areal interactions involve cortical layers that have been the target of important evolutionary changes in the primate lineage, these investigations will need to include comparisons between human and non-human primates.
Collapse
Affiliation(s)
- Julien Vezoli
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany
| | - Loïc Magrou
- Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500 Bron, France
| | - Rainer Goebel
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, the Netherlands
| | - Xiao-Jing Wang
- Center for Neural Science, New York University (NYU), New York, NY 10003, USA
| | - Kenneth Knoblauch
- Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500 Bron, France
| | - Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt, Germany.
| | - Henry Kennedy
- Univ Lyon, Université Claude Bernard Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, 69500 Bron, France; Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences (CAS) Key Laboratory of Primate Neurobiology, CAS, Shanghai 200031, China.
| |
Collapse
|
163
|
Causal Network Inference for Neural Ensemble Activity. Neuroinformatics 2021; 19:515-527. [PMID: 33393054 PMCID: PMC8233245 DOI: 10.1007/s12021-020-09505-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/03/2020] [Indexed: 11/11/2022]
Abstract
Interactions among cellular components forming a mesoscopic scale brain network (microcircuit) display characteristic neural dynamics. Analysis of microcircuits provides a system-level understanding of the neurobiology of health and disease. Causal discovery aims to detect causal relationships among variables based on observational data. A key barrier in causal discovery is the high dimensionality of the variable space. A method called Causal Inference for Microcircuits (CAIM) is proposed to reconstruct causal networks from calcium imaging or electrophysiology time series. CAIM combines neural recording, Bayesian network modeling, and neuron clustering. Validation experiments based on simulated data and a real-world reaching task dataset demonstrated that CAIM accurately revealed causal relationships among neural clusters.
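The key idea above is to tame dimensionality by clustering neurons before scoring directed relationships between cluster activities. A hedged sketch of that pipeline stage, using a lagged correlation as a crude stand-in for Bayesian-network edge scoring (the data and scoring rule here are illustrative, not CAIM's):

```python
import numpy as np

# Stand-in illustration: after clustering, each cluster is summarized by one
# time series, and directed influence between clusters is scored. Here a
# simple lagged correlation plays the role of the edge score.

rng = np.random.default_rng(1)
n_bins = 500
driver = rng.normal(size=n_bins)                         # "cause" cluster
follower = np.roll(driver, 1) + 0.1 * rng.normal(size=n_bins)  # lags by 1 bin

def lagged_corr(x, y, lag=1):
    """Correlation between x[t] and y[t + lag]."""
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

# The driver->follower score exceeds the reverse direction:
print(lagged_corr(driver, follower) > lagged_corr(follower, driver))  # True
```

Real causal discovery replaces this proxy with conditional-independence tests or Bayesian-network structure scores, but the dimensionality argument is the same: scoring a handful of clusters is tractable where scoring thousands of neurons is not.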
Collapse
|
164
|
Tumulty JS, Royster M, Cruz L. Columnar grouping preserves synchronization in neuronal networks with distance-dependent time delays. Phys Rev E 2021; 101:022408. [PMID: 32168702 DOI: 10.1103/physreve.101.022408] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Accepted: 01/10/2020] [Indexed: 11/07/2022]
Abstract
Neuronal connectivity at the cellular level in the cerebral cortex is far from random, with characteristics that point to a hierarchical design with intricately connected neuronal clusters. Here we investigate computationally the effects of varying neuronal cluster connectivity on network synchronization for two different spatial distributions of clusters: one where clusters are arranged in columns in a grid and the other where neurons from different clusters are spatially intermixed. We characterize each case by measuring the degree of neuronal spiking synchrony as a function of the number of connections per neuron and the degree of intercluster connectivity. We find that in both cases as the number of connections per neuron increases, there is an asynchronous to synchronous transition dependent only on intrinsic parameters of the biophysical model. We also observe in both cases that with very low intercluster connectivity clusters have independent firing dynamics yielding a low degree of synchrony. More importantly, we find that for a high number of connections per neuron but intermediate intercluster connectivity, the two spatial distributions of clusters differ in their response where the clusters in a grid have a higher degree of synchrony than the clusters that are intermixed.
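The study's central measurement is a degree of spiking synchrony as connectivity parameters vary. One common way to quantify this (an illustrative choice, not necessarily the paper's exact measure) is the mean pairwise correlation of binned spike counts:

```python
import numpy as np

# Illustrative synchrony measure: mean pairwise Pearson correlation of
# binned spike counts across neurons. Correlated (shared-drive) activity
# scores high; independent activity scores near zero.

def mean_pairwise_synchrony(spike_counts: np.ndarray) -> float:
    """spike_counts: (n_neurons, n_bins) array of binned spike counts."""
    c = np.corrcoef(spike_counts)         # n x n correlation matrix
    iu = np.triu_indices_from(c, k=1)     # upper triangle, excluding diagonal
    return float(np.mean(c[iu]))

rng = np.random.default_rng(0)
shared = rng.poisson(5, size=200)                         # common drive
sync = np.stack([shared + rng.poisson(1, 200) for _ in range(5)])
async_ = rng.poisson(5, size=(5, 200))                    # independent neurons
print(mean_pairwise_synchrony(sync) > mean_pairwise_synchrony(async_))  # True
```

Sweeping such a measure against connections per neuron and intercluster connectivity is how the asynchronous-to-synchronous transition described above is mapped out.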
Collapse
Affiliation(s)
- Joseph S Tumulty
- Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, Pennsylvania 19104, United States
| | - Michael Royster
- Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, Pennsylvania 19104, United States
| | - Luis Cruz
- Department of Physics, Drexel University, 3141 Chestnut Street, Philadelphia, Pennsylvania 19104, United States
| |
Collapse
|
165
|
Abstract
Complex networks are abundant in nature and many share an important structural property: they contain a few nodes that are abnormally highly connected (hubs). Some of these hubs are called influencers because they couple strongly to the network and play fundamental dynamical and structural roles. Strikingly, despite the abundance of networks with influencers, little is known about their response to stochastic forcing. Here, for oscillatory dynamics on influencer networks, we show that subjecting influencers to an optimal intensity of noise can result in enhanced network synchronization. This new network dynamical effect, which we call coherence resonance in influencer networks, emerges from a synergy between network structure and stochasticity and is highly nonlinear, vanishing when the noise is too weak or too strong. Our results reveal that the influencer backbone can sharply increase the dynamical response in complex systems of coupled oscillators.
Collapse
|
166
|
Glutamatergic Dysfunction and Synaptic Ultrastructural Alterations in Schizophrenia and Autism Spectrum Disorder: Evidence from Human and Rodent Studies. Int J Mol Sci 2020; 22:ijms22010059. [PMID: 33374598 PMCID: PMC7793137 DOI: 10.3390/ijms22010059] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Revised: 12/15/2020] [Accepted: 12/22/2020] [Indexed: 12/12/2022] Open
Abstract
The correlation between dysfunction in the glutamatergic system and neuropsychiatric disorders, including schizophrenia and autism spectrum disorder, is undisputed. Both disorders are associated with molecular and ultrastructural alterations that affect synaptic plasticity and thus the molecular and physiological basis of learning and memory. Altered synaptic plasticity, accompanied by changes in protein synthesis and trafficking of postsynaptic proteins, as well as structural modifications of excitatory synapses, are critically involved in the postnatal development of the mammalian nervous system. In this review, we summarize glutamatergic alterations and ultrastructural changes in synapses in schizophrenia and autism spectrum disorder of genetic or drug-related origin, and briefly comment on the possible reversibility of these neuropsychiatric disorders in the light of findings in regular synaptic physiology.
Collapse
|
167
|
Ren N, Ito S, Hafizi H, Beggs JM, Stevenson IH. Model-based detection of putative synaptic connections from spike recordings with latency and type constraints. J Neurophysiol 2020; 124:1588-1604. [DOI: 10.1152/jn.00066.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Detecting synaptic connections using large-scale extracellular spike recordings is a difficult statistical problem. Here, we develop an extension of a generalized linear model that explicitly separates fast synaptic effects and slow background fluctuations in cross-correlograms between pairs of neurons while incorporating circuit properties learned from the whole network. This model outperforms two previously developed synapse detection methods in the simulated networks and recovers plausible connections from hundreds of neurons in in vitro multielectrode array data.
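The raw ingredient behind this kind of model-based detection is the cross-correlogram between a putative presynaptic and postsynaptic spike train, in which a monosynaptic connection appears as a fast, short-latency peak. A minimal sketch of that ingredient alone (the paper's GLM additionally models slow baseline fluctuations and network-level constraints, omitted here):

```python
import numpy as np

# Cross-correlogram: for each presynaptic spike, histogram the lags of
# postsynaptic spikes within +/- window. A synapse shows up as a narrow
# peak at a small positive lag.

def cross_correlogram(pre, post, window=10.0, bin_size=1.0):
    """pre, post: sorted spike times (ms). Returns (bin_edges, counts)."""
    edges = np.arange(-window, window + bin_size, bin_size)
    lags = []
    for t in pre:
        nearby = post[(post > t - window) & (post <= t + window)]
        lags.extend(nearby - t)
    counts, _ = np.histogram(lags, bins=edges)
    return edges, counts

pre = np.array([10.0, 50.0, 90.0])
post = pre + 2.0            # every presynaptic spike is followed at +2 ms
edges, counts = cross_correlogram(pre, post)
# All mass lands in the bin at lag +2 ms: a synaptic-like peak.
```

Separating such fast peaks from slow comodulation is exactly where the extended GLM described above earns its keep.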
Collapse
Affiliation(s)
- Naixin Ren
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
| | - Shinya Ito
- Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, California
| | - Hadi Hafizi
- Department of Physics, Indiana University, Bloomington, Indiana
| | - John M. Beggs
- Department of Physics, Indiana University, Bloomington, Indiana
| | - Ian H. Stevenson
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
| |
Collapse
|
168
|
Ju H, Kim JZ, Beggs JM, Bassett DS. Network structure of cascading neural systems predicts stimulus propagation and recovery. J Neural Eng 2020; 17:056045. [PMID: 33036007 PMCID: PMC11191848 DOI: 10.1088/1741-2552/abbff1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown. APPROACH: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons using mathematical tools from linear systems theory, network control theory, and information theory. MAIN RESULTS: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks. SIGNIFICANCE: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.
Collapse
Affiliation(s)
- Harang Ju
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - Jason Z Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - John M Beggs
- Department of Physics, Indiana University, Bloomington, IN 47405, United States of America
| | - Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, United States of America
| |
Collapse
|
169
|
Srivastava P, Nozari E, Kim JZ, Ju H, Zhou D, Becker C, Pasqualetti F, Pappas GJ, Bassett DS. Models of communication and control for brain networks: distinctions, convergence, and future outlook. Netw Neurosci 2020; 4:1122-1159. [PMID: 33195951 PMCID: PMC7655113 DOI: 10.1162/netn_a_00158] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 07/21/2020] [Indexed: 12/13/2022] Open
Abstract
Recent advances in computational models of signal propagation and routing in the human brain have underscored the critical role of white-matter structure. A complementary approach has utilized the framework of network control theory to better understand how white matter constrains the manner in which a region or set of regions can direct or control the activity of other regions. Despite the potential for both of these approaches to enhance our understanding of the role of network structure in brain function, little work has sought to understand the relations between them. Here, we seek to explicitly bridge computational models of communication and principles of network control in a conceptual review of the current literature. By drawing comparisons between communication and control models in terms of the level of abstraction, the dynamical complexity, the dependence on network attributes, and the interplay of multiple spatiotemporal scales, we highlight the convergence of and distinctions between the two frameworks. Based on the understanding of the intertwined nature of communication and control in human brain networks, this work provides an integrative perspective for the field and outlines exciting directions for future work.
Collapse
Affiliation(s)
- Pragya Srivastava
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Erfan Nozari
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Jason Z. Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Harang Ju
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Dale Zhou
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Cassiano Becker
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Fabio Pasqualetti
- Department of Mechanical Engineering, University of California, Riverside, CA USA
| | - George J. Pappas
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Danielle S. Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA USA
- Department of Neurology, University of Pennsylvania, Philadelphia, PA USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA USA
- Santa Fe Institute, Santa Fe, NM USA
| |
Collapse
|
170
|
Wright EAP, Goltsev AV. Statistical analysis of unidirectional and reciprocal chemical connections in the C. elegans connectome. Eur J Neurosci 2020; 52:4525-4535. [PMID: 33022789 DOI: 10.1111/ejn.14988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Revised: 09/14/2020] [Accepted: 09/15/2020] [Indexed: 11/29/2022]
Abstract
We analyze unidirectional and reciprocally connected pairs of neurons in the chemical connectomes of the male and hermaphrodite Caenorhabditis elegans, using recently published data. Our analysis reveals that reciprocal connections provide communication between most neurons with chemical synapses, and comprise, on average, more synapses than both unidirectional connections and the connectome as a whole. We further reveal that the C. elegans connectome is wired so that afferent connections onto neurons with large numbers of presynaptic neighbors (in-degree) comprise an above-average number of synapses (synaptic multiplicity). Notably, the larger the in-degree of a neuron, the larger the synaptic multiplicity of its afferent connections. Finally, we show that the male forms two times fewer reciprocal connections between sex-shared neurons than the hermaphrodite, but a large number of reciprocal connections with male-specific neurons. These observations provide evidence for Hebbian structural plasticity in C. elegans.
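The in-degree and synaptic-multiplicity statistics above are straightforward to compute from a connection list. A hedged sketch on a toy connectome (the data here are invented for illustration):

```python
from collections import defaultdict

# From a list of chemical connections (pre, post, n_synapses), compute each
# neuron's in-degree and the mean synaptic multiplicity of its afferent
# connections, the two quantities whose relationship the study reports.

connections = [            # hypothetical toy connectome
    ("A", "C", 1), ("B", "C", 2), ("D", "C", 3),   # C: high in-degree
    ("A", "B", 1),                                  # B: low in-degree
]

in_degree = defaultdict(int)
total_synapses = defaultdict(int)
for pre, post, n in connections:
    in_degree[post] += 1
    total_synapses[post] += n

mean_multiplicity = {v: total_synapses[v] / in_degree[v] for v in in_degree}
print(in_degree["C"], mean_multiplicity["C"])  # 3 2.0
print(in_degree["B"], mean_multiplicity["B"])  # 1 1.0
```

In this toy example the high in-degree neuron also has the larger mean multiplicity, the trend the paper reports for the real connectome.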
Collapse
Affiliation(s)
- Edgar A P Wright
- Department of Physics & I3N, University of Aveiro, Aveiro, Portugal
| | - Alexander V Goltsev
- Department of Physics & I3N, University of Aveiro, Aveiro, Portugal.,A.F. Ioffe Physico-Technical Institute, St. Petersburg, Russia
| |
Collapse
|
171
|
Bojanek K, Zhu Y, MacLean J. Cyclic transitions between higher order motifs underlie sustained asynchronous spiking in sparse recurrent networks. PLoS Comput Biol 2020; 16:e1007409. [PMID: 32997658 PMCID: PMC7549833 DOI: 10.1371/journal.pcbi.1007409] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 10/12/2020] [Accepted: 07/28/2020] [Indexed: 12/26/2022] Open
Abstract
A basic—yet nontrivial—function which neocortical circuitry must satisfy is the ability to maintain stable spiking activity over time. Stable neocortical activity is asynchronous, critical, and low rate, and these features of spiking dynamics contribute to efficient computation and optimal information propagation. However, it remains unclear how neocortex maintains this asynchronous spiking regime. Here we algorithmically construct spiking neural network models, each composed of 5000 neurons. Network construction synthesized topological statistics from neocortex with a set of objective functions identifying naturalistic low-rate, asynchronous, and critical activity. We find that simulations run on the same topology exhibit sustained asynchronous activity under certain sets of initial membrane voltages but truncated activity under others. Synchrony, rate, and criticality do not provide a full explanation of this dichotomy. Consequently, in order to achieve mechanistic understanding of sustained asynchronous activity, we summarized activity as functional graphs where edges between units are defined by pairwise spike dependencies. We then analyzed the intersection between functional edges and synaptic connectivity, i.e., recruitment networks. Higher-order patterns, such as triplet or triangle motifs, have been tied to cooperativity and integration. We find, over time in each sustained simulation, low-variance periodic transitions between isomorphic triangle motifs in the recruitment networks. We quantify the phenomenon as a Markov process and discover that if the network fails to engage this stereotyped regime of motif dominance “cycling”, spiking activity truncates early. Cycling of motif dominance generalized across manipulations of synaptic weights and topologies, demonstrating the robustness of this regime for maintenance of network activity.
Our results point to the crucial role of excitatory higher-order patterns in sustaining asynchronous activity in sparse recurrent networks. They also provide a possible explanation for why such connectivity and activity patterns have been prominently reported in neocortex.
Neocortical spiking activity tends to be low-rate and non-rhythmic, and to operate near the critical point of a phase transition. It remains unclear how this kind of spiking activity can be maintained within a neuronal network. Neurons are leaky and individual synaptic connections are sparse and weak, making the maintenance of an asynchronous regime a nontrivial problem. Higher order patterns involving more than two units abound in neocortex, and several lines of evidence suggest that they may be instrumental for brain function. For example, stable activity in vivo displays elevated clustering dominated by specific three-node (triplet) motifs. In this study we demonstrate a link between the maintenance of asynchronous activity and triplet motifs. We algorithmically build spiking neural network models to mimic the topology of neocortex and the spiking statistics that characterize wakefulness. We show that higher order coordination of synapses is always present during sustained asynchronous activity. Coordination takes the form of transitions in time between specific triangle motifs. These motifs summarize the way spikes traverse the underlying synaptic topology. The results of our model are consistent with numerous experimental observations, and their generalizability to other weakly and sparsely connected networks is predicted.
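The motif analysis above amounts to counting which three-node (triplet) patterns dominate the recruitment graph at each point in time. A minimal sketch of counting one such pattern, the feedforward triangle, in a small directed graph (graph and names are illustrative):

```python
from itertools import permutations

# Count one triangle (triplet) motif class in a directed graph: the
# feedforward triangle (i -> j, i -> k, j -> k). Tracking how counts of
# different triangle classes rise and fall over time is the basis of the
# "motif dominance cycling" described in the abstract.

edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")}  # toy recruitment graph

def count_feedforward_triangles(edges):
    nodes = {n for e in edges for n in e}
    count = 0
    for i, j, k in permutations(nodes, 3):
        if (i, j) in edges and (i, k) in edges and (j, k) in edges:
            count += 1
    return count

print(count_feedforward_triangles(edges))  # 1
```

The brute-force scan over node triples is fine for small graphs; motif-counting libraries handle the full isomorphism classes at scale.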
Collapse
Affiliation(s)
- Kyle Bojanek
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| | - Yuqing Zhu
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
| | - Jason MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, United States of America
- Department of Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, Chicago, Illinois, United States of America
| |
Collapse
|
172
|
Yakoubi R, Rollenhagen A, von Lehe M, Shao Y, Sätzler K, Lübke JHR. Quantitative Three-Dimensional Reconstructions of Excitatory Synaptic Boutons in Layer 5 of the Adult Human Temporal Lobe Neocortex: A Fine-Scale Electron Microscopic Analysis. Cereb Cortex 2020; 29:2797-2814. [PMID: 29931200 DOI: 10.1093/cercor/bhy146] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 05/22/2018] [Accepted: 05/29/2018] [Indexed: 11/14/2022] Open
Abstract
Studies of synapses are available for different brain regions of several animal species including non-human primates, but comparatively little is known about their quantitative morphology in humans. Here, synaptic boutons in Layer 5 (L5) of the human temporal lobe (TL) neocortex were investigated in biopsy tissue, using fine-scale electron microscopy, and quantitative three-dimensional reconstructions. The size and organization of the presynaptic active zones (PreAZs), postsynaptic densities (PSDs), and those of the three distinct pools of synaptic vesicles (SVs) were particularly analyzed. L5 synaptic boutons were medium-sized (~6 μm²) with a single but relatively large PreAZ (~0.3 μm²). They contained a total of ~1500 SVs/bouton, ~20 constituting the putative readily releasable pool (RRP), ~180 the recycling pool (RP), and the remainder, the resting pool. The PreAZs, PSDs, and vesicle pools are ~3-fold larger than those of CNS synapses in other species. Astrocytic processes reached the synaptic cleft and may regulate the glutamate concentration. Profound differences exist between synapses in human TL neocortex and those described in various species, particularly in the size and geometry of PreAZs and PSDs, the large RRP/RP, and the astrocytic ensheathment suggesting high synaptic efficacy, strength, and modulation of synaptic transmission at human synapses.
Collapse
Affiliation(s)
- Rachida Yakoubi
- Institute of Neuroscience and Medicine INM-10, Research Centre Jülich GmbH, Leo-Brandt Str., Jülich, Germany
| | - Astrid Rollenhagen
- Institute of Neuroscience and Medicine INM-10, Research Centre Jülich GmbH, Leo-Brandt Str., Jülich, Germany
| | - Marec von Lehe
- University Hospital/Knappschaftskrankenhaus Bochum, In der Schornau 23-25, Bochum, Germany.,Department of Neurosurgery, Ruppiner Kliniken, Medizinische Hochschule Brandenburg, Fehrbelliner Str. 38, Neuruppin, Germany
| | - Yachao Shao
- Simulation Lab Neuroscience, Research Centre Jülich GmbH, Leo-Brandt Str., Jülich, Germany.,College of Computer, National University of Defense Technology, Changsha, China
| | - Kurt Sätzler
- School of Biomedical Sciences, University of Ulster, Cromore Rd., BT52 1SA, Londonderry, UK
| | - Joachim H R Lübke
- Institute of Neuroscience and Medicine INM-10, Research Centre Jülich GmbH, Leo-Brandt Str., Jülich, Germany.,Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty/RWTH University Hospital Aachen, Pauwelsstr. 30, Aachen, Germany.,JARA Translational Brain Medicine, Germany
| |
Collapse
|
173
|
Li J, Shew WL. Tuning network dynamics from criticality to an asynchronous state. PLoS Comput Biol 2020; 16:e1008268. [PMID: 32986705 PMCID: PMC7544040 DOI: 10.1371/journal.pcbi.1008268] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 10/08/2020] [Accepted: 08/17/2020] [Indexed: 01/03/2023] Open
Abstract
According to many experimental observations, neurons in cerebral cortex tend to operate in an asynchronous regime, firing independently of each other. In contrast, many other experimental observations reveal cortical population firing dynamics that are relatively coordinated and occasionally synchronous. These discrepant observations have naturally led to competing hypotheses. A commonly hypothesized explanation of asynchronous firing is that excitatory and inhibitory synaptic inputs are precisely correlated, nearly canceling each other, sometimes referred to as 'balanced' excitation and inhibition. On the other hand, the 'criticality' hypothesis posits an explanation of the more coordinated state that also requires a certain balance of excitatory and inhibitory interactions. Both hypotheses claim the same qualitative mechanism: properly balanced excitation and inhibition. Thus, a natural question arises: how are asynchronous population dynamics and critical dynamics related, and how do they differ? Here we propose an answer to this question based on investigation of a simple, network-level computational model. We show that the strength of inhibitory synapses relative to excitatory synapses can be tuned from weak to strong to generate a family of models that spans a continuum from critical dynamics to asynchronous dynamics. Our results demonstrate that the coordinated dynamics of criticality and asynchronous dynamics can be generated by the same neural system if excitatory and inhibitory synapses are tuned appropriately.
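The qualitative mechanism can be caricatured with a branching picture (our simplification, not the paper's model): each spike triggers on average m follow-up spikes, with m set by the net effect of excitation minus scaled inhibition. Near m = 1 the system is critical and cascades are large and coordinated; for m well below 1 events stay small and asynchronous-like.

```python
# Caricature of tuning from criticality to asynchrony via inhibition.
# The linear form of the branching ratio is an assumption for illustration.

def branching_ratio(w_exc: float, w_inh: float, g: float) -> float:
    """Effective branching ratio with inhibition scaled by g (assumed form)."""
    return max(w_exc - g * w_inh, 0.0)

def mean_cascade_size(m: float) -> float:
    """Expected avalanche size for a subcritical branching process: 1/(1 - m)."""
    assert 0 <= m < 1
    return 1.0 / (1.0 - m)

weak_inh = branching_ratio(1.0, 0.2, g=0.5)    # m = 0.9, near critical
strong_inh = branching_ratio(1.0, 0.2, g=4.0)  # m = 0.2, asynchronous-like
print(mean_cascade_size(weak_inh) > mean_cascade_size(strong_inh))  # True
```

Sweeping g thus traces exactly the continuum the abstract describes: large coordinated cascades at weak inhibition, small independent events at strong inhibition.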
Collapse
Affiliation(s)
- Jingwen Li
- Department of Physics, University of Arkansas, Fayetteville, Arkansas, United States of America
| | - Woodrow L. Shew
- Department of Physics, University of Arkansas, Fayetteville, Arkansas, United States of America
| |
Collapse
|
174
|
Natale JL, Hentschel HGE, Nemenman I. Precise spatial memory in local random networks. Phys Rev E 2020; 102:022405. [PMID: 32942429 DOI: 10.1103/physreve.102.022405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 06/16/2020] [Indexed: 11/07/2022]
Abstract
Self-sustained, elevated neuronal activity persisting on timescales of 10 s or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we study numerical simulations of a geometrically embedded model with a local, but otherwise random, connectivity profile; imposing a global regulation of our system's mean firing rate produces localized, finely spaced discrete attractors that effectively span a two-dimensional manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We then measure the network's storage capacity numerically and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
Collapse
Affiliation(s)
- Joseph L Natale
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA
| | | | - Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA
| |
Collapse
|
175
|
Generation of Sharp Wave-Ripple Events by Disinhibition. J Neurosci 2020; 40:7811-7836. [PMID: 32913107 PMCID: PMC7548694 DOI: 10.1523/jneurosci.2174-19.2020] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2019] [Revised: 06/29/2020] [Accepted: 07/17/2020] [Indexed: 11/21/2022] Open
Abstract
Sharp wave-ripple complexes (SWRs) are hippocampal network phenomena involved in memory consolidation. To date, the mechanisms underlying their occurrence remain obscure. Here, we show how the interactions between pyramidal cells, parvalbumin-positive (PV+) basket cells, and an unidentified class of anti-SWR interneurons can contribute to the initiation and termination of SWRs. Using a biophysically constrained model of a network of spiking neurons and a rate-model approximation, we demonstrate that SWRs emerge as a result of the competition between two interneuron populations and the resulting disinhibition of pyramidal cells. Our models explain how the activation of pyramidal cells or PV+ cells can trigger SWRs, as shown in vitro, and suggests that PV+ cell-mediated short-term synaptic depression influences the experimentally reported dynamics of SWR events. Furthermore, we predict that the silencing of anti-SWR interneurons can trigger SWRs. These results broaden our understanding of the microcircuits supporting the generation of memory-related network dynamics. SIGNIFICANCE STATEMENT: The hippocampus is a part of the mammalian brain that is crucial for episodic memories. During periods of sleep and inactive waking, the extracellular activity of the hippocampus is dominated by sharp wave-ripple events (SWRs), which have been shown to be important for memory consolidation. The mechanisms regulating the emergence of these events are still unclear. We developed a computational model to study the emergence of SWRs and to explain the roles of different cell types in regulating them. The model accounts for several previously unexplained features of SWRs and thus advances the understanding of memory-related dynamics.
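The disinhibition mechanism can be sketched with a two-unit rate model of mutually inhibiting interneuron populations (a toy caricature with made-up parameters, not the paper's biophysical network): when drive to the anti-SWR population is withdrawn, the PV+ population is released, which in turn disinhibits pyramidal cells.

```python
import numpy as np

# Toy mutual-inhibition rate model: r_pv and r_anti suppress each other.
# Reducing the drive to the anti-SWR unit flips the competition, releasing
# the PV+ unit, the analogue of silencing anti-SWR interneurons to trigger
# an SWR.

def relu(x):
    return np.maximum(x, 0.0)

def simulate(drive_anti, steps=2000, dt=0.01, w=2.0):
    r_pv, r_anti = 0.1, 1.0
    for _ in range(steps):
        r_pv += dt * (-r_pv + relu(1.0 - w * r_anti))
        r_anti += dt * (-r_anti + relu(drive_anti - w * r_pv))
    return r_pv, r_anti

pv_on, anti_on = simulate(drive_anti=1.5)    # anti-SWR cells dominate: no SWR
pv_off, anti_off = simulate(drive_anti=0.2)  # silencing anti-SWR cells...
print(pv_off > pv_on)  # True: PV+ activity is released
```

The bistable winner-take-all behavior of this caricature is the core of the proposed initiation/termination mechanism; the paper's spiking model adds short-term depression and biophysical constraints on top of it.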
Collapse
|
176
|
Scheffer LK, Xu CS, Januszewski M, Lu Z, Takemura SY, Hayworth KJ, Huang GB, Shinomiya K, Maitlin-Shepard J, Berg S, Clements J, Hubbard PM, Katz WT, Umayam L, Zhao T, Ackerman D, Blakely T, Bogovic J, Dolafi T, Kainmueller D, Kawase T, Khairy KA, Leavitt L, Li PH, Lindsey L, Neubarth N, Olbris DJ, Otsuna H, Trautman ET, Ito M, Bates AS, Goldammer J, Wolff T, Svirskas R, Schlegel P, Neace E, Knecht CJ, Alvarado CX, Bailey DA, Ballinger S, Borycz JA, Canino BS, Cheatham N, Cook M, Dreher M, Duclos O, Eubanks B, Fairbanks K, Finley S, Forknall N, Francis A, Hopkins GP, Joyce EM, Kim S, Kirk NA, Kovalyak J, Lauchie SA, Lohff A, Maldonado C, Manley EA, McLin S, Mooney C, Ndama M, Ogundeyi O, Okeoma N, Ordish C, Padilla N, Patrick CM, Paterson T, Phillips EE, Phillips EM, Rampally N, Ribeiro C, Robertson MK, Rymer JT, Ryan SM, Sammons M, Scott AK, Scott AL, Shinomiya A, Smith C, Smith K, Smith NL, Sobeski MA, Suleiman A, Swift J, Takemura S, Talebi I, Tarnogorska D, Tenshaw E, Tokhi T, Walsh JJ, Yang T, Horne JA, Li F, Parekh R, Rivlin PK, Jayaraman V, Costa M, Jefferis GSXE, Ito K, Saalfeld S, George R, Meinertzhagen IA, Rubin GM, Hess HF, Jain V, Plaza SM. A connectome and analysis of the adult Drosophila central brain. eLife 2020; 9:e57443. [PMID: 32880371 PMCID: PMC7546738 DOI: 10.7554/elife.57443] [Citation(s) in RCA: 430] [Impact Index Per Article: 107.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 09/01/2020] [Indexed: 12/26/2022] Open
Abstract
The neural circuits responsible for animal behavior remain largely unknown. We summarize new methods and present the circuitry of a large fraction of the brain of the fruit fly Drosophila melanogaster. Improved methods include new procedures to prepare, image, align, segment, find synapses in, and proofread such large data sets. We define cell types, refine computational compartments, and provide an exhaustive atlas of cell examples and types, many of them novel. We provide detailed circuits consisting of neurons and their chemical synapses for most of the central brain. We make the data public and simplify access, reducing the effort needed to answer circuit questions, and provide procedures linking the neurons defined by our analysis with genetic reagents. Biologically, we examine distributions of connection strengths, neural motifs on different scales, electrical consequences of compartmentalization, and evidence that maximizing packing density is an important criterion in the evolution of the fly's brain.
Affiliation(s)
- Louis K Scheffer
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - C Shan Xu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Zhiyuan Lu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Life Sciences Centre, Dalhousie University, Halifax, Canada
| | - Shin-ya Takemura
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Kenneth J Hayworth
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Gary B Huang
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Kazunori Shinomiya
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Stuart Berg
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Jody Clements
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Philip M Hubbard
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - William T Katz
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Lowell Umayam
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - David Ackerman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - John Bogovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Tom Dolafi
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Dagmar Kainmueller
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Takashi Kawase
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Khaled A Khairy
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Peter H Li
- Google Research, Mountain View, United States
| | | | - Nicole Neubarth
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Donald J Olbris
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Hideo Otsuna
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Eric T Trautman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Masayoshi Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
| | | | - Jens Goldammer
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
| | - Tanya Wolff
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Robert Svirskas
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Erika Neace
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Chelsea X Alvarado
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Dennis A Bailey
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Samantha Ballinger
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Brandon S Canino
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Natasha Cheatham
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Michael Cook
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Marisa Dreher
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Octave Duclos
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Bryon Eubanks
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Kelli Fairbanks
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Samantha Finley
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Nora Forknall
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Audrey Francis
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Emily M Joyce
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - SungJin Kim
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Nicole A Kirk
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Julie Kovalyak
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Shirley A Lauchie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Alanna Lohff
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Charli Maldonado
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Emily A Manley
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Sari McLin
- Life Sciences Centre, Dalhousie University, Halifax, Canada
| | - Caroline Mooney
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Miatta Ndama
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Omotara Ogundeyi
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Nneoma Okeoma
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Christopher Ordish
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Nicholas Padilla
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Tyler Paterson
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Elliott E Phillips
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Emily M Phillips
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Neha Rampally
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Caitlin Ribeiro
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Jon Thomson Rymer
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Sean M Ryan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Megan Sammons
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Anne K Scott
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Ashley L Scott
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Aya Shinomiya
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Claire Smith
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Kelsey Smith
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Natalie L Smith
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Margaret A Sobeski
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Alia Suleiman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Jackie Swift
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Satoko Takemura
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Iris Talebi
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Emily Tenshaw
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Temour Tokhi
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - John J Walsh
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Tansy Yang
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | | | - Feng Li
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Ruchi Parekh
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Patricia K Rivlin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Vivek Jayaraman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Marta Costa
- Department of Zoology, University of Cambridge, Cambridge, United Kingdom
| | - Gregory SXE Jefferis
- MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Department of Zoology, University of Cambridge, Cambridge, United Kingdom
| | - Kei Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
- Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
| | - Stephan Saalfeld
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Reed George
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Ian A Meinertzhagen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
- Life Sciences Centre, Dalhousie University, Halifax, Canada
| | - Gerald M Rubin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Harald F Hess
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Viren Jain
- Google Research, Google LLC, Zurich, Switzerland
| | - Stephen M Plaza
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| |
|
177
|
Phillips RS, Rosner I, Gittis AH, Rubin JE. The effects of chloride dynamics on substantia nigra pars reticulata responses to pallidal and striatal inputs. eLife 2020; 9:e55592. [PMID: 32894224 PMCID: PMC7476764 DOI: 10.7554/elife.55592] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 01/29/2020] [Accepted: 08/14/2020] [Indexed: 11/20/2022] Open
Abstract
As a rodent basal ganglia (BG) output nucleus, the substantia nigra pars reticulata (SNr) is well positioned to impact behavior. SNr neurons receive GABAergic inputs from the striatum (direct pathway) and globus pallidus (GPe, indirect pathway). Dominant theories of action selection rely on these pathways' inhibitory actions. Yet, experimental results on SNr responses to these inputs are limited and include excitatory effects. Our study combines experimental and computational work to characterize, explain, and make predictions about these pathways. We observe diverse SNr responses to stimulation of SNr-projecting striatal and GPe neurons, including biphasic and excitatory effects, which our modeling shows can be explained by intracellular chloride processing. Our work predicts that ongoing GPe activity could tune the SNr operating mode, including its responses in decision-making scenarios, and GPe output may modulate synchrony and low-frequency oscillations of SNr neurons, which we confirm using optogenetic stimulation of GPe terminals within the SNr.
Affiliation(s)
- Ryan S Phillips
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
| | - Ian Rosner
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, United States
| | - Aryn H Gittis
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, United States
| | - Jonathan E Rubin
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
| |
|
178
|
Staiger JF, Petersen CCH. Neuronal Circuits in Barrel Cortex for Whisker Sensory Perception. Physiol Rev 2020; 101:353-415. [PMID: 32816652 DOI: 10.1152/physrev.00019.2019] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Indexed: 02/06/2023] Open
Abstract
The array of whiskers on the snout provides rodents with tactile sensory information relating to the size, shape and texture of objects in their immediate environment. Rodents can use their whiskers to detect stimuli, distinguish textures, locate objects and navigate. Important aspects of whisker sensation are thought to result from neuronal computations in the whisker somatosensory cortex (wS1). Each whisker is individually represented in the somatotopic map of wS1 by an anatomical unit named a 'barrel' (hence also called barrel cortex). This allows precise investigation of sensory processing in the context of a well-defined map. Here, we first review the signaling pathways from the whiskers to wS1, and then discuss current understanding of the various types of excitatory and inhibitory neurons present within wS1. Different classes of cells can be defined according to anatomical, electrophysiological and molecular features. The synaptic connectivity of neurons within local wS1 microcircuits, as well as their long-range interactions and the impact of neuromodulators, are beginning to be understood. Recent technological progress has allowed cell-type-specific connectivity to be related to cell-type-specific activity during whisker-related behaviors. An important goal for future research is to obtain a causal and mechanistic understanding of how selected aspects of tactile sensory information are processed by specific types of neurons in the synaptically connected neuronal networks of wS1 and signaled to downstream brain areas, thus contributing to sensory-guided decision-making.
Affiliation(s)
- Jochen F Staiger
- University Medical Center Göttingen, Institute for Neuroanatomy, Göttingen, Germany; and Laboratory of Sensory Processing, Faculty of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Carl C H Petersen
- University Medical Center Göttingen, Institute for Neuroanatomy, Göttingen, Germany; and Laboratory of Sensory Processing, Faculty of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| |
|
179
|
Santuy A, Tomás-Roca L, Rodríguez JR, González-Soriano J, Zhu F, Qiu Z, Grant SGN, DeFelipe J, Merchan-Perez A. Estimation of the number of synapses in the hippocampus and brain-wide by volume electron microscopy and genetic labeling. Sci Rep 2020; 10:14014. [PMID: 32814795 PMCID: PMC7438319 DOI: 10.1038/s41598-020-70859-5] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 05/07/2020] [Accepted: 07/27/2020] [Indexed: 11/29/2022] Open
Abstract
Determining the number of synapses that are present in different brain regions is crucial for understanding brain connectivity as a whole. Membrane-associated guanylate kinases (MAGUKs) are a family of scaffolding proteins that are expressed in excitatory glutamatergic synapses. We used genetic labeling of two of these proteins (PSD95 and SAP102), and spinning disc confocal microscopy (SDM), to estimate the number of fluorescent puncta in the CA1 area of the hippocampus. We also used FIB-SEM, a three-dimensional electron microscopy technique, to calculate the actual numbers of synapses in the same area. We then estimated the ratio between the three-dimensional densities obtained with FIB-SEM (synapses/µm³) and the two-dimensional densities obtained with SDM (puncta/100 µm²). Given that it is impractical to use FIB-SEM brain-wide, we used previously available SDM data from other brain regions and we applied this ratio as a conversion factor to estimate the minimum density of synapses in those regions. We found the highest densities of synapses in the isocortex, olfactory areas, hippocampal formation and cortical subplate. Low densities were found in the pallidum, hypothalamus, brainstem and cerebellum. Finally, the striatum and thalamus showed a wide range of synapse densities.
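The ratio-based extrapolation described in this abstract is simple arithmetic; a minimal Python sketch is given below. The calibration numbers are hypothetical, purely for illustration, and are not values reported in the paper.

```python
def estimated_synapse_density(puncta_per_100um2, conversion_ratio):
    """Minimum 3D synapse density (synapses/um^3) estimated from a 2D
    fluorescent-puncta density (puncta/100 um^2), using a conversion
    ratio calibrated in a region measured with both FIB-SEM and SDM."""
    return puncta_per_100um2 * conversion_ratio

# Hypothetical calibration: suppose the reference region showed
# 0.45 synapses/um^3 (FIB-SEM) and 25 puncta/100 um^2 (SDM).
ratio = 0.45 / 25.0
# The ratio is then applied to SDM counts from a region FIB-SEM cannot cover:
print(round(estimated_synapse_density(30.0, ratio), 3))  # 0.54
```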
Affiliation(s)
- Andrea Santuy
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, 28223, Pozuelo de Alarcón, Madrid, Spain
| | - Laura Tomás-Roca
- Genes to Cognition Program, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, EH16 4SB, UK
| | - José-Rodrigo Rodríguez
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, 28223, Pozuelo de Alarcón, Madrid, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce, 37, 28002, Madrid, Spain
- Centro de Investigación Biomédica en Red Sobre Enfermedades Neurodegenerativas (CIBERNED) ISCIII, Madrid, Spain
| | - Juncal González-Soriano
- Departamento de Anatomía y Embriología, Universidad Complutense de Madrid, 28040, Madrid, Spain
| | - Fei Zhu
- Genes to Cognition Program, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, EH16 4SB, UK
- UCL Institute of Neurology, Queen Square, London, WC1N 3BG, UK
| | - Zhen Qiu
- Genes to Cognition Program, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, EH16 4SB, UK
| | - Seth G N Grant
- Genes to Cognition Program, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, EH16 4SB, UK
| | - Javier DeFelipe
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, 28223, Pozuelo de Alarcón, Madrid, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce, 37, 28002, Madrid, Spain
- Centro de Investigación Biomédica en Red Sobre Enfermedades Neurodegenerativas (CIBERNED) ISCIII, Madrid, Spain
| | - Angel Merchan-Perez
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, 28223, Pozuelo de Alarcón, Madrid, Spain
- Centro de Investigación Biomédica en Red Sobre Enfermedades Neurodegenerativas (CIBERNED) ISCIII, Madrid, Spain
- Departamento de Arquitectura y Tecnología de Sistemas Informáticos, Universidad Politécnica de Madrid, 28223, Pozuelo de Alarcón, Madrid, Spain
| |
|
180
|
Ito T, Brincat SL, Siegel M, Mill RD, He BJ, Miller EK, Rotstein HG, Cole MW. Task-evoked activity quenches neural correlations and variability across cortical areas. PLoS Comput Biol 2020; 16:e1007983. [PMID: 32745096 PMCID: PMC7425988 DOI: 10.1371/journal.pcbi.1007983] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Received: 01/02/2020] [Revised: 08/13/2020] [Accepted: 05/27/2020] [Indexed: 02/06/2023] Open
Abstract
Many large-scale functional connectivity studies have emphasized the importance of communication through increased inter-region correlations during task states. In contrast, local circuit studies have demonstrated that task states primarily reduce correlations among pairs of neurons, likely enhancing their information coding by suppressing shared spontaneous activity. Here we sought to adjudicate between these conflicting perspectives, assessing whether co-active brain regions during task states tend to increase or decrease their correlations. We found that variability and correlations primarily decrease across a variety of cortical regions in two highly distinct data sets: non-human primate spiking data and human functional magnetic resonance imaging data. Moreover, this observed variability and correlation reduction was accompanied by an overall increase in dimensionality (reflecting less information redundancy) during task states, suggesting that decreased correlations increased information coding capacity. We further found in both spiking and neural mass computational models that task-evoked activity increased the stability around a stable attractor, globally quenching neural variability and correlations. Together, our results provide an integrative mechanistic account that encompasses measures of large-scale neural activity, variability, and correlations during resting and task states.
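The "increase in dimensionality" this abstract links to decorrelation is commonly quantified with the participation ratio of the covariance eigenvalue spectrum; the sketch below is an illustration of that standard measure, which may differ in detail from the definition used in the paper.

```python
def participation_ratio(eigenvalues):
    """Effective dimensionality of a covariance spectrum:
    (sum of eigenvalues)^2 / (sum of squared eigenvalues).
    Equals N when variance is spread evenly over N dimensions
    and approaches 1 when a single dimension dominates."""
    s1 = sum(eigenvalues)
    s2 = sum(lam * lam for lam in eigenvalues)
    return s1 * s1 / s2

# Correlated, "rest-like" spectrum: one dominant shared mode.
print(round(participation_ratio([4.0, 0.5, 0.5]), 2))  # 1.52
# Decorrelated, "task-like" spectrum: variance spread across modes.
print(round(participation_ratio([2.0, 1.5, 1.5]), 2))  # 2.94
```

Quenching shared variability flattens the eigenvalue spectrum, so the participation ratio rises, consistent with the reduced redundancy the authors describe.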
Affiliation(s)
- Takuya Ito
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- Behavioral and Neural Sciences PhD Program, Rutgers University, Newark, New Jersey, United States of America
| | - Scott L. Brincat
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - Markus Siegel
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- MEG Center, University of Tübingen, Tübingen, Germany
| | - Ravi D. Mill
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| | - Biyu J. He
- Neuroscience Institute, New York University, New York, New York, United States of America
- Departments of Neurology, Neuroscience and Physiology, and Radiology, New York University, New York, New York, United States of America
| | - Earl K. Miller
- The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| | - Horacio G. Rotstein
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- Federated Department of Biological Sciences, Rutgers University, Newark, New Jersey, United States of America
- Institute for Brain and Neuroscience Research, New Jersey Institute of Technology, Newark, New Jersey, United States of America
| | - Michael W. Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
| |
|
181
|
Kozachkov L, Lundqvist M, Slotine JJ, Miller EK. Achieving stable dynamics in neural circuits. PLoS Comput Biol 2020; 16:e1007659. [PMID: 32764745 PMCID: PMC7446801 DOI: 10.1371/journal.pcbi.1007659] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 01/14/2020] [Revised: 08/19/2020] [Accepted: 06/27/2020] [Indexed: 01/01/2023] Open
Abstract
The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to analyzing the stability of fixed geometric objects in state space (e.g., points, lines, planes), but rather addresses the stability of state trajectories, which may be complex and time-varying.
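In the simplest linear case, the contraction analysis mentioned in this abstract reduces to checking that the symmetric part of the Jacobian is negative definite. The two-neuron toy sketch below illustrates that criterion only; it is not the paper's network model.

```python
def sym_max_eig_2x2(J):
    """Largest eigenvalue of the symmetric part (J + J^T)/2 of a 2x2 matrix."""
    (a, b), (c, d) = J
    off = (b + c) / 2.0             # symmetric part's off-diagonal entry
    tr = a + d
    det = a * d - off * off
    return tr / 2.0 + (tr * tr / 4.0 - det) ** 0.5

def is_contracting(W):
    """For the linear rate model dx/dt = -x + W x, the Jacobian is
    J = -I + W; trajectories converge to each other (contraction)
    if the symmetric part of J is negative definite."""
    J = [[W[0][0] - 1.0, W[0][1]],
         [W[1][0], W[1][1] - 1.0]]
    return sym_max_eig_2x2(J) < 0.0

# Antisymmetric (rotation-like) coupling of any strength stays contracting...
print(is_contracting([[0.0, 5.0], [-5.0, 0.0]]))  # True
# ...while modest symmetric excitation breaks the condition.
print(is_contracting([[0.0, 2.0], [2.0, 0.0]]))   # False
```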
Affiliation(s)
- Leo Kozachkov
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
| | - Mikael Lundqvist
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Psychology, Stockholm University, Stockholm, Sweden
| | - Jean-Jacques Slotine
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
| | - Earl K. Miller
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
| |
|
182
|
Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020; 16:e1007790. [PMID: 32841234 PMCID: PMC7505475 DOI: 10.1371/journal.pcbi.1007790] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 03/26/2019] [Revised: 09/21/2020] [Accepted: 03/17/2020] [Indexed: 11/19/2022] Open
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated with synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered an indication of a reduction of computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, in good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
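The up-scaling mechanism described in this abstract can be sketched in a few lines. This is a deliberate simplification of the paper's spiking model: weights, loss fraction, and the scaling rule below are illustrative assumptions, not the study's parameters.

```python
import random

def prune_and_rescale(weights, loss_fraction, rng):
    """Delete a random fraction of excitatory synapses, then up-scale
    the survivors so the summed weight (and hence the mean recurrent
    drive that sets the firing rate) is restored -- a minimal sketch
    of firing rate homeostasis by synaptic scaling."""
    kept = [w for w in weights if rng.random() >= loss_fraction]
    if not kept:
        return []
    scale = sum(weights) / sum(kept)
    return [w * scale for w in kept]

rng = random.Random(0)
original = [0.1] * 1000                          # 1000 identical E-E synapses
pruned = prune_and_rescale(original, 0.3, rng)   # lose ~30% of them
print(len(pruned))                               # fewer synapses remain
print(abs(sum(pruned) - sum(original)) < 1e-9)   # True: total drive restored
```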
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |
|
183
|
Montero-Crespo M, Dominguez-Alvaro M, Rondon-Carrillo P, Alonso-Nanclares L, DeFelipe J, Blazquez-Llorca L. Three-dimensional synaptic organization of the human hippocampal CA1 field. eLife 2020; 9:e57013. [PMID: 32690133 PMCID: PMC7375818 DOI: 10.7554/elife.57013] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Received: 03/17/2020] [Accepted: 06/10/2020] [Indexed: 12/14/2022] Open
Abstract
The hippocampal CA1 field integrates a wide variety of subcortical and cortical inputs, but its synaptic organization in humans is still unknown due to the difficulties involved in studying the human brain with electron microscopy techniques. However, we have shown that the 3D reconstruction method using Focused Ion Beam/Scanning Electron Microscopy (FIB/SEM) can be applied to study in detail the synaptic organization of human brain tissue obtained from autopsies, yielding excellent results. Using this technology, 24,752 synapses were fully reconstructed in CA1, revealing that most of them were excitatory, targeting dendritic spines and displaying a macular shape, regardless of the layer examined. However, remarkable differences were observed between layers. These data constitute the first extensive description of the synaptic organization of the neuropil of the human CA1 region.
Affiliation(s)
- Marta Montero-Crespo
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
| | - Marta Dominguez-Alvaro
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
| | - Patricia Rondon-Carrillo
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
| | - Lidia Alonso-Nanclares
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Madrid, Spain
| | - Javier DeFelipe
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Madrid, Spain
| | - Lidia Blazquez-Llorca
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
- Departamento de Psicobiología, Facultad de Psicología, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain
| |
Collapse
|
184
|
Kuśmierz Ł, Ogawa S, Toyoizumi T. Edge of Chaos and Avalanches in Neural Networks with Heavy-Tailed Synaptic Weight Distribution. PHYSICAL REVIEW LETTERS 2020; 125:028101. [PMID: 32701351 DOI: 10.1103/physrevlett.125.028101] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/03/2020] [Accepted: 05/26/2020] [Indexed: 06/11/2023]
Abstract
We propose an analytically tractable neural connectivity model with power-law distributed synaptic strengths. When threshold neurons with a biologically plausible number of incoming connections are considered, our model features a continuous transition to chaos and can reproduce biologically relevant low activity levels and scale-free avalanches, i.e., bursts of activity with power-law distributions of sizes and lifetimes. In contrast, the Gaussian counterpart exhibits a discontinuous transition to chaos and thus cannot be poised near the edge of chaos. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results suggest that heavy-tailed synaptic distributions may form a weakly informative sparse-connectivity prior that can be useful in biological and artificial adaptive systems.
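The contrast between heavy-tailed and Gaussian weight matrices is easy to probe in a toy simulation. Below is a minimal sketch (network size, in-degree, and threshold are illustrative choices, not the paper's parameters) comparing the mean activity of binary threshold networks with Cauchy-distributed versus Gaussian weights:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, theta = 1000, 100, 1.0   # neurons, in-degree, firing threshold (illustrative)

def random_weights(heavy_tailed):
    """Sparse random weights: K incoming connections per neuron."""
    W = np.zeros((N, N))
    for i in range(N):
        idx = rng.choice(N, size=K, replace=False)
        w = rng.standard_cauchy(K) if heavy_tailed else rng.standard_normal(K)
        W[i, idx] = w / np.sqrt(K)
    return W

def mean_activity(W, steps=200, burn=50):
    """Iterate binary threshold units and return the mean firing fraction."""
    x = (rng.random(N) < 0.5).astype(float)
    rates = []
    for _ in range(steps):
        x = (W @ x > theta).astype(float)
        rates.append(x.mean())
    return float(np.mean(rates[burn:]))

r_cauchy = mean_activity(random_weights(True))
r_gauss = mean_activity(random_weights(False))
```

Scanning `theta` or the weight scale in both variants is a quick way to see how the activity level changes near the transition; the paper's analytical results characterize that transition precisely.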
Collapse
Affiliation(s)
- Łukasz Kuśmierz
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Shun Ogawa
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
| |
Collapse
|
185
|
Huang L, Kebschull JM, Fürth D, Musall S, Kaufman MT, Churchland AK, Zador AM. BRICseq Bridges Brain-wide Interregional Connectivity to Neural Activity and Gene Expression in Single Animals. Cell 2020; 182:177-188.e27. [PMID: 32619423 PMCID: PMC7771207 DOI: 10.1016/j.cell.2020.05.029] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 03/27/2020] [Accepted: 05/15/2020] [Indexed: 12/26/2022]
Abstract
Comprehensive analysis of neuronal networks requires brain-wide measurement of connectivity, activity, and gene expression. Although high-throughput methods are available for mapping brain-wide activity and transcriptomes, comparable methods for mapping region-to-region connectivity remain slow and expensive because they require averaging across hundreds of brains. Here we describe BRICseq (brain-wide individual animal connectome sequencing), which leverages DNA barcoding and sequencing to map connectivity from single individuals in a few weeks and at low cost. Applying BRICseq to the mouse neocortex, we find that region-to-region connectivity provides a simple bridge relating transcriptome to activity: the spatial expression patterns of a few genes predict region-to-region connectivity, and connectivity predicts activity correlations. We also exploited BRICseq to map the mutant BTBR mouse brain, which lacks a corpus callosum, and recapitulated its known connectopathies. BRICseq allows individual laboratories to compare how age, sex, environment, genetics, and species affect neuronal wiring and to integrate these with functional activity and gene expression.
Collapse
Affiliation(s)
- Longwen Huang
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| | - Justus M Kebschull
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA
| | - Daniel Fürth
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| | - Simon Musall
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
| | - Matthew T Kaufman
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637, USA; Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
| | | | - Anthony M Zador
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA.
| |
Collapse
|
186
|
Sherrill SP, Timme NM, Beggs JM, Newman EL. Correlated activity favors synergistic processing in local cortical networks in vitro at synaptically relevant timescales. Netw Neurosci 2020; 4:678-697. [PMID: 32885121 PMCID: PMC7462423 DOI: 10.1162/netn_a_00141] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 04/06/2020] [Indexed: 11/19/2022] Open
Abstract
Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05-14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission (of which synergistic processing is a component) that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.
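The pairwise statistic at the center of this analysis can be illustrated with a plug-in estimator on binarized spike trains (a sketch in our own notation; the study's actual pipeline uses more careful estimation and partial information decomposition rather than this bare estimator):

```python
import numpy as np

def mutual_information(a, b):
    """Plug-in mutual information (bits) between two binary sequences."""
    a, b = np.asarray(a, int), np.asarray(b, int)
    p = np.zeros((2, 2))
    for i in (0, 1):
        for j in (0, 1):
            p[i, j] = np.mean((a == i) & (b == j))
    mi = 0.0
    for i in (0, 1):
        for j in (0, 1):
            if p[i, j] > 0:
                mi += p[i, j] * np.log2(p[i, j] / (p[i, :].sum() * p[:, j].sum()))
    return mi

def binarize(spike_times, t_max, bin_width):
    """Binary sequence: does the train spike in each bin of width bin_width?"""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, edges)
    return (counts > 0).astype(int)
```

Varying `bin_width` over the synaptic (0.05-14 ms) versus extrasynaptic range is how timescale enters in this kind of analysis.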
Collapse
Affiliation(s)
- Samantha P. Sherrill
- Department of Psychological and Brain Sciences and Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| | - Nicholas M. Timme
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA
| | - John M. Beggs
- Department of Physics & Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| | - Ehren L. Newman
- Department of Psychological and Brain Sciences and Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, USA
| |
Collapse
|
187
|
Abstract
Our expanding understanding of the brain at the level of neurons and synapses, and the level of cognitive phenomena such as language, leaves a formidable gap between these two scales. Here we introduce a computational system which promises to bridge this gap: the Assembly Calculus. It encompasses operations on assemblies of neurons, such as project, associate, and merge, which appear to be implicated in cognitive phenomena, and can be shown, analytically as well as through simulations, to be plausibly realizable at the level of neurons and synapses. We demonstrate the reach of this system by proposing a brain architecture for syntactic processing in the production of language, compatible with recent experimental results.

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain which we call the Assembly Calculus, occupying a level of detail intermediate between the level of spiking neurons and synapses and that of the whole brain. The resulting computational system can be shown, under assumptions, to be, in principle, capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture based on assemblies for implementing the syntactic processing of language in cortex, which is consistent with recent experimental results.
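The project operation, the most basic of these, can be sketched as a random-graph simulation with a k-winners-take-all cap (standing in for inhibition) and Hebbian potentiation. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, p, beta = 1000, 50, 0.05, 0.1   # area size, cap, connection prob, plasticity

stim = rng.choice(n, size=k, replace=False)      # active assembly in upstream area
W_in = (rng.random((n, n)) < p).astype(float)    # stimulus area -> target area
W_rec = (rng.random((n, n)) < p).astype(float)   # recurrent within target area
np.fill_diagonal(W_rec, 0.0)

prev = np.zeros(n)
for _ in range(20):
    drive = W_in[:, stim].sum(axis=1) + W_rec @ prev
    winners = np.argsort(drive)[-k:]              # k-cap: only top-k neurons fire
    x = np.zeros(n)
    x[winners] = 1.0
    W_in[np.ix_(winners, stim)] *= 1.0 + beta     # Hebbian potentiation
    W_rec[np.ix_(winners, np.flatnonzero(prev))] *= 1.0 + beta
    prev = x

assembly = np.flatnonzero(prev)   # the projected assembly in the target area
```

In the Assembly Calculus the winner set provably stabilizes under such dynamics; in this toy version one can check convergence by comparing the winner sets of successive iterations.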
Collapse
|
188
|
Ursino M, Ricci G, Magosso E. Transfer Entropy as a Measure of Brain Connectivity: A Critical Analysis With the Help of Neural Mass Models. Front Comput Neurosci 2020; 14:45. [PMID: 32581756 PMCID: PMC7292208 DOI: 10.3389/fncom.2020.00045] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Accepted: 04/30/2020] [Indexed: 12/12/2022] Open
Abstract
Objective: Assessing brain connectivity from electrophysiological signals is of great relevance in neuroscience, but results are still debated and depend crucially on how connectivity is defined and on the mathematical instruments utilized. The aim of this work is to assess the capacity of bivariate Transfer Entropy (TE) to evaluate connectivity, using data generated from simple neural mass models of connected Regions of Interest (ROIs). Approach: Signals simulating mean field potentials were generated assuming two, three or four ROIs, connected via excitatory or bisynaptic inhibitory links. We investigated whether the presence of a statistically significant connection can be detected and whether connection strength can be quantified. Main Results: Results suggest that TE can reliably estimate the strength of connectivity if neural populations work in their linear regions and if the epoch lengths are longer than 10 s. In the case of multivariate networks, some spurious connections can emerge (i.e., a statistically significant TE even in the absence of a true connection); however, quite a good correlation between TE and synaptic strength is still preserved. Moreover, TE appears more robust for distal regions (longer delays) than for proximal regions (smaller delays): approximate a priori knowledge of this delay can improve the procedure. Finally, non-linear phenomena affect the assessment of connectivity, since they may significantly reduce TE estimation: information transmission between two ROIs may be weak, due to non-linear phenomena, even if a strong causal connection is present. Significance: Changes in functional connectivity during different tasks or brain conditions might not always reflect a true change in the connecting network, but rather a change in information transmission. A limitation of the work is the use of bivariate TE. In perspective, the use of multivariate TE can improve estimation and reduce some of the problems encountered in the present study.
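A bivariate TE estimator of the kind evaluated here can be sketched for binary-discretized signals with history length 1 (a toy plug-in version; the paper applies TE to continuous neural mass signals, with longer histories and statistical significance testing):

```python
import numpy as np

def transfer_entropy(x, y):
    """Bivariate transfer entropy (bits) from y to x, binary symbols, history 1:
    TE = sum p(x_{t+1}, x_t, y_t) log [ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    xt1, xt, yt = x[1:], x[:-1], y[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((xt1 == a) & (xt == b) & (yt == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((xt == b) & (yt == c))
                p_ab = np.mean((xt1 == a) & (xt == b))
                p_b = np.mean(xt == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

rng = np.random.default_rng(0)
y = (rng.random(5000) < 0.5).astype(int)
x = np.concatenate(([0], y[:-1]))   # x is driven by y with a one-step delay
te_driven = transfer_entropy(x, y)
te_null = transfer_entropy(x, (rng.random(5000) < 0.5).astype(int))
```

As expected, `te_driven` is large (x is fully determined by y's past) while `te_null` is near zero up to plug-in estimation bias, the same kind of bias that produces the spurious connections discussed above.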
Collapse
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Cesena, Italy
| | - Giulia Ricci
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Cesena, Italy
| | - Elisa Magosso
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Cesena, Italy
| |
Collapse
|
189
|
Bostner Ž, Knoll G, Lindner B. Information filtering by coincidence detection of synchronous population output: analytical approaches to the coherence function of a two-stage neural system. BIOLOGICAL CYBERNETICS 2020; 114:403-418. [PMID: 32583370 PMCID: PMC7326833 DOI: 10.1007/s00422-020-00838-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Accepted: 05/18/2020] [Indexed: 06/11/2023]
Abstract
Information about time-dependent sensory stimuli is encoded in the activity of neural populations; distinct aspects of the stimulus are read out by different types of neurons: while overall information is perceived by integrator cells, so-called coincidence detector cells are driven mainly by the synchronous activity in the population that encodes predominantly high-frequency content of the input signal (high-pass information filtering). Previously, an analytically accessible statistic called the partial synchronous output was introduced as a proxy for the coincidence detector cell's output in order to approximate its information transmission. In the first part of the current paper, we compare the information filtering properties (specifically, the coherence function) of this proxy to those of a simple coincidence detector neuron. We show that the latter's coherence function can indeed be well-approximated by the partial synchronous output with a time scale and threshold criterion that are related approximately linearly to the membrane time constant and firing threshold of the coincidence detector cell. In the second part of the paper, we propose an alternative theory for the spectral measures (including the coherence) of the coincidence detector cell that combines linear-response theory for shot-noise driven integrate-and-fire neurons with a novel perturbation ansatz for the spectra of spike-trains driven by colored noise. We demonstrate how the variability of the synaptic weights for connections from the population to the coincidence detector can shape the information transmission of the entire two-stage system.
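The synchrony proxy can be illustrated on surrogate spike trains: count the time bins in which at least a fraction gamma of the population fires together (all parameters below are illustrative, and this bin-based version only approximates the time-window criterion used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 10000    # population size, number of time bins
rate = 0.02          # firing probability per neuron per bin
gamma = 0.08         # synchrony threshold, as a fraction of the population

spikes = rng.random((N, T)) < rate                    # surrogate spike trains
pop_count = spikes.sum(axis=0)                        # integrator-like readout
sync_out = (pop_count >= gamma * N).astype(float)     # partial synchronous output
```

Driving the population with a common time-dependent signal (rather than the independent trains used here) and comparing the coherence of `pop_count` versus `sync_out` with that signal is the kind of comparison that exposes the high-pass filtering of the synchronous output.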
Collapse
Affiliation(s)
- Žiga Bostner
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
| | - Gregory Knoll
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
| | - Benjamin Lindner
- Physics Department, Humboldt University Berlin, Newtonstr. 15, 12489 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, 10115 Berlin, Germany
| |
Collapse
|
190
|
Laing CR, Bläsche C. The effects of within-neuron degree correlations in networks of spiking neurons. BIOLOGICAL CYBERNETICS 2020; 114:337-347. [PMID: 32124039 DOI: 10.1007/s00422-020-00822-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Accepted: 02/15/2020] [Indexed: 05/20/2023]
Abstract
We consider the effects of correlations between the in- and out-degrees of individual neurons on the dynamics of a network of neurons. By using theta neurons, we can derive a set of coupled differential equations for the expected dynamics of neurons with the same in-degree. A Gaussian copula is used to introduce correlations between a neuron's in- and out-degree, and numerical bifurcation analysis is used to determine the effects of these correlations on the network's dynamics. For excitatory coupling, we find that inducing positive correlations has a similar effect to increasing the coupling strength between neurons, while for inhibitory coupling it has the opposite effect. We also determine the propensity of various two- and three-neuron motifs to occur as correlations are varied and give a plausible explanation for the observed changes in dynamics.
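Sampling correlated in- and out-degrees with a Gaussian copula works by correlating latent normal variables, mapping them to uniforms, and pushing them through the inverse CDF of the desired degree marginal. A sketch with a Poisson marginal (our choice of marginal and parameters, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
N, rho, mean_deg = 5000, 0.7, 50   # neurons, copula correlation, mean degree

# Correlated standard normals: the Gaussian copula's latent variables.
z1 = rng.standard_normal(N)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(N)

def to_uniform(z):
    """Rank transform: empirical CDF maps each latent value into (0, 1)."""
    r = np.empty(len(z))
    r[np.argsort(z)] = np.arange(1, len(z) + 1)
    return r / (len(z) + 1)

# Inverse CDF of a Poisson(mean_deg) marginal, via the cumulative pmf.
kmax = 200
pmf = np.exp(-mean_deg) * np.cumprod(np.r_[1.0, mean_deg / np.arange(1, kmax)])
cdf = np.cumsum(pmf)

k_in = np.searchsorted(cdf, to_uniform(z1))    # in-degrees
k_out = np.searchsorted(cdf, to_uniform(z2))   # correlated out-degrees
```

Setting `rho` negative produces the anticorrelated case; both marginals stay Poisson(mean_deg), which is the point of the copula construction.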
Collapse
Affiliation(s)
- Carlo R Laing
- School of Natural and Computational Sciences, Massey University, NSMC, Private Bag 102-904, Auckland, New Zealand.
| | - Christian Bläsche
- School of Natural and Computational Sciences, Massey University, NSMC, Private Bag 102-904, Auckland, New Zealand
| |
Collapse
|
191
|
Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. [PMID: 32422832 DOI: 10.1103/physreve.101.042124] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 12/18/2019] [Indexed: 01/28/2023]
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
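As a pointer to the formalism, for the simplest one-dimensional rate equation driven by Gaussian white noise, the MSRDJ construction introduces a response field and trades the stochastic dynamics for a path integral (our notation; sign and normalization conventions differ between references, and the response field is integrated along the imaginary axis):

```latex
% Langevin dynamics with nonlinearity f and white noise of intensity D
\dot{x}(t) = -x(t) + f\big(x(t)\big) + \xi(t),
\qquad \langle \xi(t)\,\xi(t') \rangle = D\,\delta(t - t') .

% MSRDJ moment-generating functional with response field \tilde{x}
Z[j] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\;
       \exp\!\Big( S[x,\tilde{x}] + \int \mathrm{d}t\, j(t)\,x(t) \Big),
\qquad
S[x,\tilde{x}] = \int \mathrm{d}t\,
       \Big[ \tilde{x}\,\big(\dot{x} + x - f(x)\big)
             + \tfrac{D}{2}\,\tilde{x}^{2} \Big].
```

Expanding S around its stationary point yields the mean dynamics; the fluctuation expansion and fRG truncations discussed in the abstract are systematic corrections beyond this saddle point.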
Collapse
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| | - Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Thomas Luu
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
| | - Carsten Honerkamp
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany; JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
| |
Collapse
|
192
|
Sachdeva PS, Livezey JA, DeWeese MR. Heterogeneous Synaptic Weighting Improves Neural Coding in the Presence of Common Noise. Neural Comput 2020; 32:1239-1276. [PMID: 32433901 DOI: 10.1162/neco_a_01287] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Simultaneous recordings from the cortex have revealed that neural activity is highly variable and that some variability is shared across neurons in a population. Further experimental work has demonstrated that the shared component of a neuronal population's variability is typically comparable to or larger than its private component. Meanwhile, an abundance of theoretical work has assessed the impact that shared variability has on a population code. For example, shared input noise is understood to have a detrimental impact on a neural population's coding fidelity. However, other contributions to variability, such as common noise, can also play a role in shaping correlated variability. We present a network of linear-nonlinear neurons in which we introduce a common noise input to model, for instance, variability resulting from upstream action potentials that are irrelevant to the task at hand. We show that by applying a heterogeneous set of synaptic weights to the neural inputs carrying the common noise, the network can improve its coding ability as measured by both Fisher information and Shannon mutual information, even in cases where this results in amplification of the common noise. With a broad and heterogeneous distribution of synaptic weights, a population of neurons can remove the harmful effects imposed by afferents that are uninformative about a stimulus. We demonstrate that some nonlinear networks benefit from weight diversification up to a certain population size, above which the drawbacks from amplified noise dominate over the benefits of diversification. We further characterize these benefits in terms of the relative strength of shared and private variability sources. Finally, we studied the asymptotic behavior of the mutual information and Fisher information analytically in our various networks as a function of population size. We find some surprising qualitative changes in the asymptotic behavior as we make seemingly minor changes in the synaptic weight distributions.
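The comparison described above can be probed with the linear Fisher information f'ᵀC⁻¹f' under a toy covariance model with a single common-noise direction (our construction, not the paper's networks; the tuning derivatives and noise scales are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 50
fp = rng.standard_normal(N)          # tuning-curve derivatives f'(s) (toy values)
sigma_priv, sigma_comm = 1.0, 2.0    # private and common noise scales

def linear_fisher(w):
    """Linear Fisher information when common noise enters through weights w:
    C = sigma_priv^2 I + sigma_comm^2 w w^T, FI = f'^T C^{-1} f'."""
    C = sigma_priv**2 * np.eye(N) + sigma_comm**2 * np.outer(w, w)
    return fp @ np.linalg.solve(C, fp)

fi_homog = linear_fisher(np.ones(N))                 # identical synaptic weights
fi_heterog = linear_fisher(rng.standard_normal(N))   # diversified weights
```

Since C dominates sigma_priv^2 I, either value is bounded above by the private-noise-only information f'ᵀf'/sigma_priv^2; how close heterogeneous weights get to that bound, as a function of N, is the question the paper analyzes.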
Collapse
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience and Department of Physics, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.
| | - Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A., and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, U.S.A.
| | - Michael R DeWeese
- Redwood Center for Theoretical Neuroscience, Department of Physics, and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, U.S.A.
| |
Collapse
|
193
|
Gerum RC, Erpenbeck A, Krauss P, Schilling A. Sparsity through evolutionary pruning prevents neuronal networks from overfitting. Neural Netw 2020; 128:305-312. [PMID: 32454374 DOI: 10.1016/j.neunet.2020.05.007] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 01/31/2020] [Accepted: 05/04/2020] [Indexed: 11/29/2022]
Abstract
Modern machine learning techniques take advantage of the exponentially rising computational power of new-generation processor units. As a result, the number of parameters trained to solve complex tasks has increased enormously over the last decades. However, in contrast to our brain, these networks still fail to develop general intelligence in the sense of being able to solve several complex tasks with only one network architecture. This could be because the brain is not a randomly initialized neural network that has to be trained from scratch by simply investing a lot of computation, but possesses a fixed hierarchical structure from birth. To make progress in decoding the structural basis of biological neural networks, we here chose a bottom-up approach, evolutionarily training small neural networks to perform a maze task. This simple maze task requires dynamic decision making with delayed rewards. We were able to show that, during the evolutionary optimization, random severance of connections leads to better generalization performance of the networks compared to fully connected networks. We conclude that sparsity is a central property of neural networks and should be considered in modern machine learning approaches.
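The key ingredient, random severance of connections during evolutionary optimization, can be sketched with a toy evolution strategy on a sparse regression task (the task, population sizes, and mutation rates are illustrative stand-ins for the paper's maze task and network architecture):

```python
import numpy as np

rng = np.random.default_rng(5)
d, pop, gens = 20, 40, 100
X = rng.standard_normal((200, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.0, 0.5]           # sparse ground truth: only 3 useful inputs
y = X @ true_w

def fitness(w, mask):
    """Negative mean squared error of the pruned linear model."""
    return -np.mean((X @ (w * mask) - y) ** 2)

ws = rng.standard_normal((pop, d))      # weights of each individual
masks = np.ones((pop, d))               # connection masks (1 = intact, 0 = severed)
for _ in range(gens):
    scores = np.array([fitness(w, m) for w, m in zip(ws, masks)])
    elite = np.argsort(scores)[-pop // 4:]          # keep the best quarter
    ws = ws[elite].repeat(4, axis=0)
    masks = masks[elite].repeat(4, axis=0)
    ws = ws + 0.1 * rng.standard_normal(ws.shape)   # weight mutation
    sever = rng.random(masks.shape) < 0.02          # random, irreversible severance
    masks[sever] = 0.0

final_scores = [fitness(w, m) for w, m in zip(ws, masks)]
best_score = max(final_scores)
```

Because severed connections never regrow, selection must retain individuals whose surviving connections suffice for the task, which is the pruning pressure the abstract credits with improved generalization.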
Collapse
Affiliation(s)
- Richard C Gerum
- Biophysics Group, Department of Physics, Friedrich Alexander University Erlangen-Nürnberg (FAU), Germany
| | - André Erpenbeck
- The Raymond and Beverley Sackler Center for Computational Molecular and Materials Science, School of Chemistry, Tel Aviv University (TAU), Israel
| | - Patrick Krauss
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany; Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen (UMCG), The Netherlands
| | - Achim Schilling
- Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany.
| |
Collapse
|
194
|
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081 PMCID: PMC7239496 DOI: 10.1371/journal.pcbi.1007835] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 05/20/2020] [Accepted: 03/31/2020] [Indexed: 01/08/2023] Open
Abstract
Non-random connectivity can emerge without structured external input, driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike-timing-dependent plasticity (STDP), which depends on the interactions of three precisely timed spikes and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions.
For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
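A minimal trace-based implementation of the triplet rule (after the Pfister-Gerstner model on which such analyses build; the amplitudes below are illustrative rather than fitted values) shows how pair and triplet terms enter the weight update:

```python
import numpy as np

# Triplet STDP traces: r1/r2 track presynaptic spikes (fast/slow), o1/o2
# postsynaptic spikes (fast/slow). Time constants in ms; amplitudes illustrative.
tau_plus, tau_minus, tau_x, tau_y = 16.8, 33.7, 101.0, 125.0
A2p, A2m, A3p, A3m = 5e-3, 7e-3, 6e-3, 2e-4

def run_triplet(pre, post, dt=1.0, w0=0.5):
    """Evolve one synapse given binary pre/post spike trains on a common grid."""
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for s_pre, s_post in zip(pre, post):
        if s_pre:   # depression: pair term (A2m) plus triplet term (A3m * r2)
            w -= o1 * (A2m + A3m * r2)
        if s_post:  # potentiation: pair term (A2p) plus triplet term (A3p * o2)
            w += r1 * (A2p + A3p * o2)
        # traces decay and jump after the update, so updates see pre-spike values
        r1 += -dt * r1 / tau_plus + s_pre
        r2 += -dt * r2 / tau_x + s_pre
        o1 += -dt * o1 / tau_minus + s_post
        o2 += -dt * o2 / tau_y + s_post
    return w

T = 1000
pre = np.zeros(T)
pre[::100] = 1                          # presynaptic spike every 100 ms
post = np.zeros(T)
post[10::100] = 1                       # postsynaptic spike 10 ms later
w_pre_post = run_triplet(pre, post)     # causal pairing -> potentiation
w_post_pre = run_triplet(post, pre)     # anti-causal pairing -> depression
```

The triplet terms (A3p, A3m) are what make the rule frequency dependent and, in the motif expansion above, what generate the third-order correlation contributions absent from pair-based STDP.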
Collapse
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
| | - Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
| |
Collapse
|
195
|
Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 2020; 106:388-403.e18. [DOI: 10.1016/j.neuron.2020.01.040] [Citation(s) in RCA: 90] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2019] [Revised: 10/17/2019] [Accepted: 01/27/2020] [Indexed: 01/08/2023]
|
196
|
Levy M, Sporns O, MacLean JN. Network Analysis of Murine Cortical Dynamics Implicates Untuned Neurons in Visual Stimulus Coding. Cell Rep 2020; 31:107483. [PMID: 32294431 PMCID: PMC7218481 DOI: 10.1016/j.celrep.2020.03.047] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 01/22/2020] [Accepted: 03/13/2020] [Indexed: 02/02/2023] Open
Abstract
Unbiased and dense sampling of large populations of layer 2/3 pyramidal neurons in mouse primary visual cortex (V1) reveals two functional sub-populations: neurons tuned and untuned to drifting gratings. Whether functional interactions between these two groups contribute to the representation of visual stimuli is unclear. To examine these interactions, we summarize the population partial pairwise correlation structure as a directed and weighted graph. We find that tuned and untuned neurons have distinct topological properties, with untuned neurons occupying central positions in functional networks (FNs). Implementation of a decoder that utilizes the topology of these FNs yields accurate decoding of visual stimuli. We further show that decoding performance degrades comparably following manipulations of either tuned or untuned neurons. Our results demonstrate that untuned neurons are an integral component of V1 FNs and suggest that network interactions contain information about the stimulus that is accessible to downstream elements.
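The partial pairwise correlation structure underlying such functional networks can be computed from the precision matrix (a sketch; note it yields a symmetric, undirected matrix, whereas the directed graphs in the study come from time-lagged estimation on imaging data):

```python
import numpy as np

def partial_correlations(X):
    """Pairwise partial correlations (each pair conditioned on all other
    variables), obtained from the precision matrix. X: (samples, variables)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

rng = np.random.default_rng(6)
x = rng.standard_normal(20000)
y = rng.standard_normal(20000)
z = x + y + 0.1 * rng.standard_normal(20000)   # z is driven by both x and y
R = partial_correlations(np.column_stack([x, y, z]))
```

The demo shows why partial correlations differ from plain correlations: x and y are marginally uncorrelated, yet conditioning on their common effect z produces a strongly negative partial correlation between them.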
Affiliation(s)
- Maayan Levy: Committee on Computational Neuroscience, The University of Chicago, Chicago, IL 60637, USA
- Olaf Sporns: Indiana University Network Science Institute, Indiana University, Bloomington, IN 47405, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
- Jason N MacLean: Committee on Computational Neuroscience, The University of Chicago, Chicago, IL 60637, USA; Department of Neurobiology, The University of Chicago, Chicago, IL 60637, USA; Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior.
|
197
|
Hey, look over there: Distraction effects on rapid sequence recall. PLoS One 2020; 15:e0223743. [PMID: 32275703 PMCID: PMC7147745 DOI: 10.1371/journal.pone.0223743] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 03/11/2020] [Indexed: 11/19/2022] Open
Abstract
In the course of everyday life, the brain must store and recall a huge variety of stimulus representations that are presented in an ordered or sequential way. The processes by which this ordering is stored and recalled are only moderately well understood. Here, we use a computational model of a cortex-like recurrent neural network shaped by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. We then examine how different types of distractors influence the network dynamics during recall of the encoded sequence. We identify two broad categories of distractor effects, arrive at a basic understanding of why this is so, and predict which distractors will fall into each category.
|
198
|
Novelli L, Atay FM, Jost J, Lizier JT. Deriving pairwise transfer entropy from network structure and motifs. Proc Math Phys Eng Sci 2020; 476:20190779. [PMID: 32398937 PMCID: PMC7209155 DOI: 10.1098/rspa.2019.0779] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2019] [Accepted: 03/24/2020] [Indexed: 11/12/2022] Open
Abstract
Transfer entropy (TE) is an established method for quantifying directed statistical dependencies in neuroimaging and complex systems datasets. The pairwise (or bivariate) TE from a source to a target node in a network does not depend solely on the local source-target link weight, but on the wider network structure that the link is embedded in. This relationship is studied using a discrete-time linearly coupled Gaussian model, which allows us to derive the TE for each link from the network topology. It is shown analytically that the dependence on the directed link weight is only a first approximation, valid for weak coupling. More generally, the TE increases with the in-degree of the source and decreases with the in-degree of the target, indicating an asymmetry of information transfer between hubs and low-degree nodes. In addition, the TE is directly proportional to weighted motif counts involving common parents or multiple walks from the source to the target, which are more abundant in networks with a high clustering coefficient than in random networks. Our findings also apply to Granger causality, which is equivalent to TE for Gaussian variables. Moreover, similar empirical results on random Boolean networks suggest that the dependence of the TE on the in-degree extends to nonlinear dynamics.
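For Gaussian variables, the TE-Granger equivalence noted above makes the pairwise TE directly computable from the residual variances of two linear regressions. A minimal sketch under stated assumptions (a simple two-node coupled autoregressive process with history length 1, not the paper's network model; names are illustrative):

```python
import numpy as np

def gaussian_te(x, y):
    """Transfer entropy x -> y for (near-)Gaussian series, history
    length 1, using the Granger-causality equivalence:
    TE = 0.5 * ln( var(y_t | y_past) / var(y_t | y_past, x_past) ).
    Residual variances are estimated by ordinary least squares."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    # Reduced model: predict y_t from its own past only.
    A = np.column_stack([yp, np.ones_like(yp)])
    res_red = yt - A @ np.linalg.lstsq(A, yt, rcond=None)[0]
    # Full model: also include the source's past.
    B = np.column_stack([yp, xp, np.ones_like(yp)])
    res_full = yt - B @ np.linalg.lstsq(B, yt, rcond=None)[0]
    return 0.5 * np.log(res_red.var() / res_full.var())

# Toy system: x drives y with coupling 0.4; y does not drive x.
rng = np.random.default_rng(1)
n = 20000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

te_xy = gaussian_te(x, y)  # coupled direction: clearly positive
te_yx = gaussian_te(y, x)  # uncoupled direction: near zero
```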
Affiliation(s)
- Leonardo Novelli: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia
- Fatihcan M. Atay: Department of Mathematics, Bilkent University, 06800 Ankara, Turkey; Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
- Jürgen Jost: Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany; Santa Fe Institute for the Sciences of Complexity, Santa Fe, New Mexico 87501, USA
- Joseph T. Lizier: Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, Australia; Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany
|
199
|
Sumi T, Yamamoto H, Hirano-Iwata A. Suppression of hypersynchronous network activity in cultured cortical neurons using an ultrasoft silicone scaffold. SOFT MATTER 2020; 16:3195-3202. [PMID: 32096811 DOI: 10.1039/c9sm02432h] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The spontaneous activity pattern of cortical neurons in dissociated culture is characterized by burst firing that is highly synchronized across a wide population of cells. The degree of synchrony, however, is substantially higher than that in cortical tissue. Here, we employed polydimethylsiloxane (PDMS) elastomers to establish a novel system for culturing neurons on a scaffold with an elastic modulus resembling that of brain tissue, and investigated the effect of the scaffold's elasticity on network activity patterns in cultured rat cortical neurons. Using whole-cell patch clamp to assess the scaffold's effect on the development of synaptic connections, we found that the amplitude of excitatory postsynaptic currents, as well as the frequency of spontaneous transmissions, was reduced in neuronal networks grown on an ultrasoft PDMS with an elastic modulus of 0.5 kPa. Furthermore, the ultrasoft scaffold was found to suppress neural correlations in the spontaneous activity of the cultured neuronal network. The dose of GsMTx-4, an antagonist of stretch-activated cation channels (SACs), required to reduce the rate of synchronized events below 1.0 event per min was lower on PDMS substrates than on a glass substrate. This suggests that a difference in the baseline level of SAC activation is a molecular mechanism underlying the dependence of neuronal network activity on scaffold stiffness. Our results demonstrate the potential of PDMS with biomimetic elasticity as a cell-culture scaffold for bridging the in vivo-in vitro gap in neuronal systems.
Affiliation(s)
- Takuma Sumi: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
- Hideaki Yamamoto: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan; WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
- Ayumi Hirano-Iwata: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan; WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
|
200
|
Brock JA, Thomazeau A, Watanabe A, Li SSY, Sjöström PJ. A Practical Guide to Using CV Analysis for Determining the Locus of Synaptic Plasticity. Front Synaptic Neurosci 2020; 12:11. [PMID: 32292337 PMCID: PMC7118219 DOI: 10.3389/fnsyn.2020.00011] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 03/04/2020] [Indexed: 01/17/2023] Open
Abstract
Long-term synaptic plasticity is widely believed to underlie learning and memory in the brain. Whether plasticity is primarily expressed pre- or postsynaptically has been the subject of considerable debate for many decades. More recently, it is generally agreed that the locus of plasticity depends on a number of factors, such as developmental stage, induction protocol, and synapse type. Since presynaptic expression alters not just the gain but also the short-term dynamics of a synapse, whereas postsynaptic expression only modifies the gain, the locus has fundamental implications for circuit dynamics and computations in the brain. It therefore remains crucial for our understanding of neuronal circuits to know the locus of expression of long-term plasticity. One classical method for elucidating whether plasticity is pre- or postsynaptically expressed is based on analysis of the coefficient of variation (CV), which serves as a measure of the noise level of synaptic neurotransmission. Here, we provide a practical guide to using CV analysis to explore the locus of expression of long-term plasticity, aimed primarily at beginners in the field. We provide a relatively simple, intuitive background to an otherwise theoretically complex approach, as well as simple mathematical derivations for key parametric relationships. We list important pitfalls of the method, accompanied by accessible computer simulations that illustrate the problems (downloadable from GitHub), and we provide straightforward solutions for these issues.
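The core intuition behind CV analysis can be shown with a toy binomial quantal model: with n release sites, release probability p, and quantal size q, the mean response is npq and 1/CV² = np/(1-p), which is independent of q. Purely postsynaptic changes (in q) therefore scale the mean but leave 1/CV² unchanged, while presynaptic changes (in p or n) move both. A minimal sketch under that assumption (illustrative names; this is not the paper's downloadable simulation code):

```python
import numpy as np

def simulate_epsps(n_sites, p, q, n_trials, rng):
    """EPSP amplitudes under a simple binomial quantal model:
    n_sites release sites, release probability p, quantal size q."""
    return q * rng.binomial(n_sites, p, size=n_trials)

def inv_cv2(amps):
    """1/CV^2 = mean^2 / variance; for the binomial model this equals
    n*p/(1-p) and does not depend on the quantal size q."""
    return amps.mean() ** 2 / amps.var()

rng = np.random.default_rng(2)
baseline = simulate_epsps(10, 0.3, 1.0, 100_000, rng)
post_ltp = simulate_epsps(10, 0.3, 1.5, 100_000, rng)   # postsynaptic: q up
pre_ltp = simulate_epsps(10, 0.45, 1.0, 100_000, rng)   # presynaptic: p up

# Postsynaptic expression increases the mean but leaves 1/CV^2 flat;
# presynaptic expression increases 1/CV^2 along with the mean.
```

Plotting normalized 1/CV² against the normalized mean before and after induction is the classic diagnostic: points on the horizontal line suggest postsynaptic expression, points along the diagonal suggest presynaptic expression.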
Affiliation(s)
- Jennifer A Brock: Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada; Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- Aurore Thomazeau: Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
- Airi Watanabe: Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada; Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- Sally Si Ying Li: Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- P Jesper Sjöström: Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Program, Department of Medicine, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
|