1. Akif A, Staib L, Herman P, Rothman DL, Yu Y, Hyder F. In vivo neuropil density from anatomical MRI and machine learning. Cereb Cortex 2024; 34:bhae200. PMID: 38771239; PMCID: PMC11107380; DOI: 10.1093/cercor/bhae200.
Abstract
Brain energy budgets specify metabolic costs emerging from underlying mechanisms of cellular and synaptic activities. While current bottom-up energy budgets use prototypical values of cellular density and synaptic density, predicting metabolism from a person's individualized neuropil density would be ideal. We hypothesize that in vivo neuropil density can be derived from magnetic resonance imaging (MRI) data, consisting of longitudinal relaxation (T1) MRI for gray/white matter distinction and diffusion MRI for tissue cellularity (apparent diffusion coefficient, ADC) and axon directionality (fractional anisotropy, FA). We present a machine learning algorithm that predicts neuropil density from in vivo MRI scans, where ex vivo Merker staining and in vivo synaptic vesicle glycoprotein 2A Positron Emission Tomography (SV2A-PET) images were reference standards for cellular and synaptic density, respectively. We used Gaussian-smoothed T1/ADC/FA data from 10 healthy subjects to train an artificial neural network, subsequently used to predict cellular and synaptic density for 54 test subjects. While excellent histogram overlaps were observed both for synaptic density (0.93) and cellular density (0.85) maps across all subjects, the lower spatial correlations both for synaptic density (0.89) and cellular density (0.58) maps are suggestive of individualized predictions. This proof-of-concept artificial neural network may pave the way for individualized energy atlas prediction, enabling microscopic interpretations of functional neuroimaging data.
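The regression at the heart of this entry — voxelwise (T1, ADC, FA) features mapped to a density estimate by an artificial neural network — can be sketched in a few lines. This is a toy stand-in, not the authors' pipeline: the data, the 3→16→1 architecture, and the synthetic "density" target below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 3))                      # fake T1/ADC/FA
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * np.sin(3 * X[:, 2])  # fake "density"

# One hidden layer (3 -> 16 -> 1) with tanh units, trained by full-batch
# gradient descent on mean squared error.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g_out = 2.0 * err[:, None] / len(X)        # dLoss/dpred
    g_h = (g_out @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    W2 -= lr * (H.T @ g_out); b2 -= lr * g_out.sum(0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(0)
```

In the study, the targets came from Merker-stained histology and SV2A-PET reference maps; here a smooth function of the inputs plays that role so the fitting loop is self-contained.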
Affiliation(s)
- Adil Akif
- Department of Biomedical Engineering, Yale University, 55 Prospect St, New Haven, CT 06511, United States
- Lawrence Staib
- Department of Biomedical Engineering, Yale University, 55 Prospect St, New Haven, CT 06511, United States
- Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Department of Electrical Engineering, Yale University, 17 Hillhouse Ave, New Haven, CT 06511, United States
- Peter Herman
- Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Magnetic Resonance Research Center, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Douglas L Rothman
- Department of Biomedical Engineering, Yale University, 55 Prospect St, New Haven, CT 06511, United States
- Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Magnetic Resonance Research Center, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Yuguo Yu
- Research Institute of Intelligent and Complex Systems, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institute of Science and Technology for Brain-Inspired Intelligence, 220 Handan Road, Shanghai 200032, China
- Fahmeed Hyder
- Department of Biomedical Engineering, Yale University, 55 Prospect St, New Haven, CT 06511, United States
- Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar St, New Haven, CT 06520, United States
- Magnetic Resonance Research Center, Yale University, 300 Cedar St, New Haven, CT 06520, United States
2. Zhang C, Revah O, Wolf F, Neef A. Dynamic Gain Decomposition Reveals Functional Effects of Dendrites, Ion Channels, and Input Statistics in Population Coding. J Neurosci 2024; 44:e0799232023. PMID: 38286625; PMCID: PMC10977021; DOI: 10.1523/jneurosci.0799-23.2023.
Abstract
Modern, high-density neuronal recordings reveal at ever higher precision how information is represented by neural populations. Still, we lack the tools to understand these processes bottom-up, emerging from the biophysical properties of neurons, synapses, and network structure. The concept of the dynamic gain function, a spectrally resolved approximation of a population's coding capability, has the potential to link cell-level properties to network-level performance. However, the concept is not only useful but also very complex because the dynamic gain's shape is co-determined by axonal and somato-dendritic parameters and the population's operating regime. Previously, this complexity precluded an understanding of any individual parameter's impact. Here, we decomposed the dynamic gain function into three components corresponding to separate signal transformations. This allowed attribution of network-level encoding features to specific cell-level parameters. Applying the method to data from real neurons and biophysically plausible models, we found: (1) The encoding bandwidth of real neurons, approximately 400 Hz, is constrained by the voltage dependence of axonal currents during early action potential initiation. (2) State-of-the-art models only achieve encoding bandwidths around 100 Hz and are limited mainly by subthreshold processes instead. (3) Large dendrites and low-threshold potassium currents modulate the bandwidth by shaping the subthreshold stimulus-to-voltage transformation. Our decomposition provides physiological interpretations when the dynamic gain curve changes, for instance during spectrinopathies and neurodegeneration. By pinpointing shortcomings of current models, it also guides inference of neuron models best suited for large-scale network simulations.
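The central idea — decomposing the dynamic gain into factors for successive signal transformations — has a simple linear analogue. The sketch below (invented filters, not the paper's conductance-based models) passes a white "stimulus" through two cascaded low-pass stages, standing in for the stimulus-to-voltage and voltage-to-spike transformations, and checks that the empirical total gain factorises into the product of the stage gains.

```python
import numpy as np

fs = 1000.0                    # sampling rate, Hz (arbitrary demo value)
n = 2 ** 14
rng = np.random.default_rng(1)
x = rng.normal(size=n)         # white "stimulus"

def lowpass(sig, tau):
    """First-order low-pass filter with time constant tau (seconds)."""
    out = np.zeros_like(sig)
    a = (1.0 / fs) / tau
    for i in range(1, len(sig)):
        out[i] = out[i - 1] + a * (sig[i - 1] - out[i - 1])
    return out

v = lowpass(x, tau=0.01)       # stage 1: stimulus -> "somatic voltage"
s = lowpass(v, tau=0.002)      # stage 2: "voltage" -> "spike output"

def gain(inp, out):
    """Empirical transfer gain |CSD(inp, out)| / PSD(inp)."""
    I, O = np.fft.rfft(inp), np.fft.rfft(out)
    return np.abs(np.conj(I) * O) / (np.abs(I) ** 2)

f = np.fft.rfftfreq(n, 1.0 / fs)
band = (f > 1) & (f < 200)
g_total = gain(x, s)
g_stage = gain(x, v) * gain(v, s)
mismatch = float(np.max(np.abs(g_total[band] - g_stage[band])))
```

For these magnitude gains the factorisation is exact, which is the decomposition idea in its simplest possible form; the paper's contribution is doing this attribution for nonlinear spiking dynamics.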
Affiliation(s)
- Chenfei Zhang
- Institute of Science and Technology for Brain-Inspired Intelligence, Shanghai 200433, People's Republic of China
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Göttingen Campus Institute for Dynamics of Biological Networks, 37073 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, 37073 Göttingen, Germany
- Omer Revah
- Koret School of Veterinary Medicine, Hebrew University of Jerusalem, 7610001 Rehovot, Israel
- Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Göttingen Campus Institute for Dynamics of Biological Networks, 37073 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, 37073 Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, 37077 Göttingen, Germany
- Max Planck Institute of Multidisciplinary Sciences, 37077 Göttingen, Germany
- Center for Biostructural Imaging of Neurodegeneration, 37075 Göttingen, Germany
- Andreas Neef
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany
- Göttingen Campus Institute for Dynamics of Biological Networks, 37073 Göttingen, Germany
- Bernstein Center for Computational Neuroscience, 37073 Göttingen, Germany
- Institute for the Dynamics of Complex Systems, University of Göttingen, 37077 Göttingen, Germany
- Max Planck Institute of Multidisciplinary Sciences, 37077 Göttingen, Germany
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075 Göttingen, Germany
3. Yang YC, Wang GH, Chou P, Hsueh SW, Lai YC, Kuo CC. Dynamic electrical synapses rewire brain networks for persistent oscillations and epileptogenesis. Proc Natl Acad Sci U S A 2024; 121:e2313042121. PMID: 38346194; PMCID: PMC10895348; DOI: 10.1073/pnas.2313042121.
Abstract
One of the very fundamental attributes of telencephalic neural computation in mammals involves network activities that oscillate beyond the initial trigger. The continuing and automated processing of transient inputs should constitute the basis of cognition and intelligence, but may lead to neuropsychiatric disorders such as epileptic seizures if carried so far as to engross part of or the whole telencephalic system. In the conventional view of the basic design of the telencephalic local circuitry, the GABAergic interneurons (INs) and glutamatergic pyramidal neurons (PNs) form negative feedback loops that regulate neural activities back to the original state. The drive for the most intriguing self-perpetuating telencephalic activities, then, has not been identified and characterized. We found activity-dependent deployment, and delineated functional consequences, of the electrical synapses directly linking INs and PNs in the amygdala, a prototypical telencephalic circuitry. These electrical synapses endow INs with dual (a faster excitatory and a slower inhibitory) actions on PNs, providing a network-intrinsic excitatory drive that fuels the IN-PN interconnected circuitries and enables persistent oscillations with preservation of GABAergic negative feedback. Moreover, the electrical synapses between INs and PNs are engaged and disengaged in a highly dynamic way according to neural activities, which then determines the spatiotemporal scale of the recruited oscillating networks. This study uncovers a special wide-range, context-dependent plasticity for wiring/rewiring of brain networks. Epileptogenesis or a wide spectrum of clinical disorders may ensue, however, from different scales of pathological extension of this unique form of telencephalic plasticity.
Collapse
Affiliation(s)
- Ya-Chin Yang
- Department of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Graduate Institute of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Neuroscience Research Center, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- Department of Psychiatry, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- Guan-Hsun Wang
- Department of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Department of Medical Education, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan 333, Taiwan
- Ping Chou
- Department of Physiology, National Taiwan University College of Medicine, Taipei 100, Taiwan
- Shu-Wei Hsueh
- Graduate Institute of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Yi-Chen Lai
- Department of Biomedical Sciences, College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Chung-Chin Kuo
- Department of Physiology, National Taiwan University College of Medicine, Taipei 100, Taiwan
- Department of Neurology, National Taiwan University Hospital, Taipei 100, Taiwan
4. Engelken R, Ingrosso A, Khajeh R, Goedeke S, Abbott LF. Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol 2022; 18:e1010590. PMID: 36469504; PMCID: PMC9754616; DOI: 10.1371/journal.pcbi.1010590.
Abstract
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
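The chaos suppression studied here can be probed numerically with the standard two-trajectory estimate of the largest Lyapunov exponent. The sketch below uses a generic random rate network dx/dt = -x + J tanh(x) + I(t) with invented parameters (N = 100, coupling gain g = 2, per-neuron sinusoidal drive), not the paper's balanced-state model or its dynamic mean-field theory; it merely illustrates that the free network is chaotic (positive exponent) and that strong independent input pulls the exponent down.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, dt, eps = 100, 2.0, 0.05, 1e-6
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random recurrent coupling

def lle(drive_amp):
    """Largest Lyapunov exponent via two nearby trajectories."""
    freqs = rng.uniform(0.5, 1.5, N)          # independent per-neuron drive
    phases = rng.uniform(0.0, 2 * np.pi, N)
    x = rng.normal(0.0, 1.0, N)
    d = rng.normal(0.0, 1.0, N)
    d *= eps / np.linalg.norm(d)              # tiny initial separation
    log_growth = 0.0
    for step in range(4000):
        t = step * dt
        I = drive_amp * np.sin(freqs * t + phases)
        f0 = -x + J @ np.tanh(x) + I
        f1 = -(x + d) + J @ np.tanh(x + d) + I
        x = x + dt * f0
        d = d + dt * (f1 - f0)
        if step % 25 == 24:                   # renormalise the separation
            norm = np.linalg.norm(d)
            if step >= 1000:                  # skip the initial transient
                log_growth += np.log(norm / eps)
            d *= eps / norm
    return log_growth / (3000 * dt)

lle_free = lle(0.0)      # autonomous network: chaotic for g > 1
lle_driven = lle(5.0)    # strong independent drive quenches the chaos
```

The paper's point is the contrast between common and independent input at fixed amplitude, which requires more careful averaging than this demo attempts.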
Affiliation(s)
- Rainer Engelken
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Alessandro Ingrosso
- The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy
- Ramin Khajeh
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- L. F. Abbott
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
5. Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. Nat Mach Intell 2022. DOI: 10.1038/s42256-022-00498-0.
6. Xiao ZC, Lin KK, Young LS. A data-informed mean-field approach to mapping of cortical parameter landscapes. PLoS Comput Biol 2021; 17:e1009718. PMID: 34941863; PMCID: PMC8741023; DOI: 10.1371/journal.pcbi.1009718.
Abstract
Constraining the many biological parameters that govern cortical dynamics is computationally and conceptually difficult because of the curse of dimensionality. This paper addresses these challenges by proposing (1) a novel data-informed mean-field (MF) approach to efficiently map the parameter space of network models; and (2) an organizing principle for studying parameter space that enables the extraction of biologically meaningful relations from this high-dimensional data. We illustrate these ideas using a large-scale network model of the Macaque primary visual cortex. Of the 10-20 model parameters, we identify 7 that are especially poorly constrained, and use the MF algorithm in (1) to discover the firing rate contours in this 7D parameter cube. Defining a "biologically plausible" region to consist of parameters that exhibit spontaneous Excitatory and Inhibitory firing rates compatible with experimental values, we find that this region is a slightly thickened codimension-1 submanifold. An implication of this finding is that while plausible regimes depend sensitively on parameters, they are also robust and flexible provided one compensates appropriately when parameters are varied. Our organizing principle for conceptualizing parameter dependence is to focus on certain 2D parameter planes that govern lateral inhibition: Intersecting these planes with the biologically plausible region leads to very simple geometric structures which, when suitably scaled, have a universal character independent of where the intersections are taken. In addition to elucidating the geometry of the plausible region, this invariance suggests useful approximate scaling relations. Our study offers, for the first time, a complete characterization of the set of all biologically plausible parameters for a detailed cortical model, which has been out of reach due to the high dimensionality of parameter space.
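The mapping step in (1) amounts to repeated root-finding on a cheap mean-field predictor. A minimal sketch of one such step, with a made-up one-parameter surrogate standing in for the real MF model (the actual study sweeps a 7D parameter cube; `surrogate_rate` and the target value are invented for illustration):

```python
# Treat the mean-field prediction as a cheap scalar function rate(p) of one
# free parameter (all others frozen), and bisect for the parameter value at
# which the predicted rate hits a target — one point on a firing-rate contour.
def surrogate_rate(p):
    # Hypothetical saturating rate model: stronger coupling p -> higher rate.
    return 10.0 * p / (1.0 + p)

def bisect(f, lo, hi, tol=1e-10):
    """Find a root of f in [lo, hi], assuming a sign change on the interval."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = 4.0  # Hz; e.g. a plausible spontaneous excitatory rate
p_star = bisect(lambda p: surrogate_rate(p) - target, 0.0, 100.0)
```

Because each MF evaluation is cheap relative to a full network simulation, many such one-dimensional searches can be stitched together to trace out contour surfaces.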
Affiliation(s)
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Kevin K. Lin
- Department of Mathematics, University of Arizona, Tucson, Arizona, United States of America
- Lai-Sang Young
- Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America
- Institute for Advanced Study, Princeton, New Jersey, United States of America
7. Sadeh S, Clopath C. Inhibitory stabilization and cortical computation. Nat Rev Neurosci 2020; 22:21-37. PMID: 33177630; DOI: 10.1038/s41583-020-00390-z.
Abstract
Neuronal networks with strong recurrent connectivity provide the brain with a powerful means to perform complex computational tasks. However, high-gain excitatory networks are susceptible to instability, which can lead to runaway activity, as manifested in pathological regimes such as epilepsy. Inhibitory stabilization offers a dynamic, fast and flexible compensatory mechanism to balance otherwise unstable networks, thus enabling the brain to operate in its most efficient regimes. Here we review recent experimental evidence for the presence of such inhibition-stabilized dynamics in the brain and discuss their consequences for cortical computation. We show how the study of inhibition-stabilized networks in the brain has been facilitated by recent advances in the technological toolbox and perturbative techniques, as well as a concomitant development of biologically realistic computational models. By outlining future avenues, we suggest that inhibitory stabilization can offer an exemplary case of how experimental neuroscience can progress in tandem with technology and theory to advance our understanding of the brain.
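The inhibition-stabilized regime this review centres on has a compact signature in a linear two-population rate model: extra input to the inhibitory population paradoxically lowers its steady-state rate. A minimal sketch (all weights are invented illustration values, not fitted to any dataset):

```python
import numpy as np

# r = [rE, rI], dr/dt = -r + W @ r + h. In the inhibition-stabilized network
# (ISN) regime — recurrent excitation strong enough that the E subnetwork
# alone is unstable — stimulating I *reduces* the I rate at the fixed point.
def steady_state(W, h):
    # Fixed point of dr/dt = -r + W @ r + h  =>  (I - W) r = h.
    return np.linalg.solve(np.eye(2) - W, h)

h = np.array([1.0, 0.8])
stim = np.array([0.0, 0.3])                 # extra drive to the I population

W_isn = np.array([[2.0, -1.5],              # W_EE > 1: ISN regime
                  [2.5, -1.0]])
W_non = np.array([[0.5, -1.5],              # W_EE < 1: non-ISN control
                  [2.5, -1.0]])

dI_isn = steady_state(W_isn, h + stim)[1] - steady_state(W_isn, h)[1]
dI_non = steady_state(W_non, h + stim)[1] - steady_state(W_non, h)[1]
```

The sign flip of the inhibitory response between the two weight matrices is exactly the perturbative signature the review describes as evidence for inhibitory stabilization.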
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London, UK
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK
8. Dalgleish HWP, Russell LE, Packer AM, Roth A, Gauld OM, Greenstreet F, Thompson EJ, Häusser M. How many neurons are sufficient for perception of cortical activity? eLife 2020; 9:e58889. PMID: 33103656; PMCID: PMC7695456; DOI: 10.7554/elife.58889.
Abstract
Many theories of brain function propose that activity in sparse subsets of neurons underlies perception and action. To place a lower bound on the amount of neural activity that can be perceived, we used an all-optical approach to drive behaviour with targeted two-photon optogenetic activation of small ensembles of L2/3 pyramidal neurons in mouse barrel cortex while simultaneously recording local network activity with two-photon calcium imaging. By precisely titrating the number of neurons stimulated, we demonstrate that the lower bound for perception of cortical activity is ~14 pyramidal neurons. We find a steep sigmoidal relationship between the number of activated neurons and behaviour, saturating at only ~37 neurons, and show this relationship can shift with learning. Furthermore, activation of ensembles is balanced by inhibition of neighbouring neurons. This surprising perceptual sensitivity in the face of potent network suppression supports the sparse coding hypothesis, and suggests that cortical perception balances a trade-off between minimizing the impact of noise while efficiently detecting relevant signals.
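The titration result — a steep sigmoidal relation between stimulated-neuron count and detection — is the kind of curve one recovers with a standard psychometric fit. The sketch below generates synthetic detection data with a "true" threshold of ~14 neurons (the numbers are borrowed from the abstract, the data are invented) and recovers the threshold with a grid-search logistic fit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = np.arange(0, 60, 2)              # stimulated ensemble sizes

def logistic(n, n50, k):
    """Psychometric function: detection probability vs neuron count."""
    return 1.0 / (1.0 + np.exp(-(n - n50) / k))

p_true = logistic(n_neurons, 14.0, 4.0)      # "true" threshold ~14 neurons
hits = rng.binomial(50, p_true)              # 50 simulated trials per size
p_obs = hits / 50.0

# Grid-search least-squares fit of the psychometric function.
grid = [(n50, k) for n50 in np.arange(5.0, 30.0, 0.25)
                 for k in np.arange(1.0, 10.0, 0.25)]
n50_hat, k_hat = min(
    grid,
    key=lambda nk: float(np.sum((logistic(n_neurons, *nk) - p_obs) ** 2)))
```

A maximum-likelihood (binomial) fit would be the more principled choice for real behavioural data; least squares keeps the demo short.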
Affiliation(s)
- Henry WP Dalgleish
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Lloyd E Russell
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Adam M Packer
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Arnd Roth
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Oliver M Gauld
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Francesca Greenstreet
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Emmett J Thompson
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
- Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
9. Harris SS, Wolf F, De Strooper B, Busche MA. Tipping the Scales: Peptide-Dependent Dysregulation of Neural Circuit Dynamics in Alzheimer’s Disease. Neuron 2020; 107:417-435. DOI: 10.1016/j.neuron.2020.06.005.
10. Dynamic representations in networked neural systems. Nat Neurosci 2020; 23:908-917. PMID: 32541963; DOI: 10.1038/s41593-020-0653-3.
Abstract
A group of neurons can generate patterns of activity that represent information about stimuli; subsequently, the group can transform and transmit activity patterns across synapses to spatially distributed areas. Recent studies in neuroscience have begun to independently address the two components of information processing: the representation of stimuli in neural activity and the transmission of information in networks that model neural interactions. Yet only recently are studies seeking to link these two types of approaches. Here we briefly review the two separate bodies of literature; we then review the recent strides made to address this gap. We continue with a discussion of how patterns of activity evolve from one representation to another, forming dynamic representations that unfold on the underlying network. Our goal is to offer a holistic framework for understanding and describing neural information representation and transmission while revealing exciting frontiers for future research.
11. Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. PMID: 31247574; DOI: 10.1109/tnnls.2019.2919662.
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
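The "spike gradient threshold rule" is presented as a guard against the gradient-exploding problem in SNN training. Its exact form is not given in the abstract; a generic norm-threshold rule of the kind commonly used for this purpose looks like the following sketch (the rule and its threshold value here are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def threshold_gradient(grad, threshold=1.0):
    """Rescale a gradient whose norm exceeds the threshold; pass others through."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        return grad * (threshold / norm)
    return grad

g_small = np.array([0.1, -0.2])              # well-behaved gradient: unchanged
g_large = np.array([30.0, -40.0])            # "exploding" gradient, norm 50
```

Capping the norm rather than clipping each component preserves the gradient's direction, which matters when spike-time derivatives become near-singular around threshold crossings.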
12. Mahrach A, Chen G, Li N, van Vreeswijk C, Hansel D. Mechanisms underlying the response of mouse cortical networks to optogenetic manipulation. eLife 2020; 9:e49967. PMID: 31951197; PMCID: PMC7012611; DOI: 10.7554/elife.49967.
Abstract
GABAergic interneurons can be subdivided into three subclasses: parvalbumin positive (PV), somatostatin positive (SOM) and serotonin positive neurons. With principal cells (PCs) they form complex networks. We examine PCs and PV responses in mouse anterior lateral motor cortex (ALM) and barrel cortex (S1) upon PV photostimulation in vivo. In ALM layer 5 and S1, the PV response is paradoxical: photoexcitation reduces their activity. This is not the case in ALM layer 2/3. We combine analytical calculations and numerical simulations to investigate how these results constrain the architecture. Two-population models cannot explain the results. Four-population networks with V1-like architecture account for the data in ALM layer 2/3 and layer 5. Our data in S1 can be explained if SOM neurons receive inputs only from PCs and PV neurons. In both four-population models, the paradoxical effect implies not too strong recurrent excitation. It is not evidence for stabilization by inhibition.
Affiliation(s)
- Alexandre Mahrach
- CNRS-UMR 8002, Integrative Neuroscience and Cognition Center, Paris, France
- Guang Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Nuo Li
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- David Hansel
- CNRS-UMR 8002, Integrative Neuroscience and Cognition Center, Paris, France
13. Dynamic Gain Analysis Reveals Encoding Deficiencies in Cortical Neurons That Recover from Hypoxia-Induced Spreading Depolarizations. J Neurosci 2019; 39:7790-7800. PMID: 31399533; DOI: 10.1523/jneurosci.3147-18.2019.
Abstract
Cortical regions that are damaged by insults, such as ischemia, hypoxia, and trauma, frequently generate spreading depolarization (SD). At the neuronal level, SDs entail complete breakdown of ionic gradients, persisting for seconds to minutes. It is unclear whether these transient events have a more lasting influence on neuronal function. Here, we describe electrophysiological changes in cortical neurons after recovery from hypoxia-induced SD. When examined with standard measures of neuronal excitability several hours after recovery from SD, layer 5 pyramidal neurons in brain slices from mice of either sex appear surprisingly normal. However, we here introduce an additional parameter, dynamic gain, which characterizes the bandwidth of action potential encoding by a neuron, and thereby reflects its potential efficiency in a multineuronal circuit. We find that the ability of neurons that recover from SD to track high-frequency inputs is markedly curtailed; exposure to hypoxia did not have this effect when SD was prevented pharmacologically. Staining for Ankyrin G revealed at least a fourfold decrease in the number of intact axon initial segments in post-SD slices. Since this effect, along with the effect on encoding, was blocked by an inhibitor of the Ca2+-dependent enzyme calpain, we conclude that both effects were mediated by the SD-induced rise in intracellular Ca2+. Although effects of calpain activation were detected in the axon initial segment, changes in soma-dendritic compartments may also be involved. Whatever the precise molecular mechanism, our findings indicate that in the context of cortical circuit function, the effectiveness of neurons that survive SD may be limited.

SIGNIFICANCE STATEMENT: Spreading depolarization, which commonly accompanies cortical injury, entails transient massive breakdown of neuronal ionic gradients. The function of cortical neurons that recover from hypoxia-induced spreading depolarization is not obviously abnormal when tested with usual measures of neuronal excitability. However, we now demonstrate that they have a reduced bandwidth, reflecting a significant impairment of their ability to precisely encode high-frequency components of their synaptic input in output spike trains. Thus, neurons that recover from spreading depolarizations are less able to function normally as elements in the multineuronal cortical circuitry. These changes are correlated with activation of the calcium-dependent enzyme, calpain.
14. Puelma Touzel M, Wolf F. Statistical mechanics of spike events underlying phase space partitioning and sequence codes in large-scale models of neural circuits. Phys Rev E 2019; 99:052402. PMID: 31212548; DOI: 10.1103/physreve.99.052402.
Abstract
Cortical circuits operate in an inhibition-dominated regime of spiking activity. Recently, it was found that spiking circuit models in this regime can, despite disordered connectivity and asynchronous irregular activity, exhibit a locally stable dynamics that may be used for neural computation. The lack of existing mathematical tools has precluded analytical insight into this phase. Here we present analytical methods tailored to the granularity of spike-based interactions for analyzing attractor geometry in high-dimensional spiking dynamics. We apply them to reveal the properties of the complex geometry of trajectories of population spiking activity in a canonical model of locally stable spiking dynamics. We find that attractor basin boundaries are the preimages of spike-time collision events involving connected neurons. These spike-based instabilities control the divergence rate of neighboring basins and have no equivalent in rate-based models. They are located according to the disordered connectivity at a random subset of edges in a hypercube representation of the phase space. Iterating backward these edges using the stable dynamics induces a partition refinement on this space that converges to the attractor basins. We formulate a statistical theory of the locations of such events relative to attracting trajectories via a tractable representation of local trajectory ensembles. Averaging over the disorder, we derive the basin diameter distribution, whose characteristic scale emerges from the relative strengths of the stabilizing inhibitory coupling and destabilizing spike interactions. Our study provides an approach to analytically dissect how connectivity, coupling strength, and single-neuron dynamics shape the phase space geometry in the locally stable regime of spiking neural circuit dynamics.
Affiliation(s)
- Maximilian Puelma Touzel
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; and Mila, Université de Montréal, Montréal, Quebec, Canada H2S 3H1
- Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany; Faculty of Physics, Georg August University, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, 37077 Göttingen, Germany; and Kavli Institute for Theoretical Physics, University of California, Santa Barbara, Santa Barbara, California 93106-4111, USA
|
15
|
Bressloff PC. Stochastic neural field model of stimulus-dependent variability in cortical neurons. PLoS Comput Biol 2019; 15:e1006755. PMID: 30883546; PMCID: PMC6438587; DOI: 10.1371/journal.pcbi.1006755.
Abstract
We use stochastic neural field theory to analyze the stimulus-dependent tuning of neural variability in ring attractor networks. We apply perturbation methods to show how the neural field equations can be reduced to a pair of stochastic nonlinear phase equations describing the stochastic wandering of spontaneously formed tuning curves or bump solutions. These equations are analyzed using a modified version of the bivariate von Mises distribution, which is well-known in the theory of circular statistics. We first consider a single ring network and derive a simple mathematical expression that accounts for the experimentally observed bimodal (or M-shaped) tuning of neural variability. We then explore the effects of inter-network coupling on stimulus-dependent variability in a pair of ring networks. These could represent populations of cells in two different layers of a cortical hypercolumn linked via vertical synaptic connections, or two different cortical hypercolumns linked by horizontal patchy connections within the same layer. We find that neural variability can be suppressed or facilitated, depending on whether the inter-network coupling is excitatory or inhibitory, and on the relative strengths and biases of the external stimuli to the two networks. These results are consistent with the general observation that increasing the mean firing rate via external stimuli or modulatory drives tends to reduce neural variability.
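The bump (tuning-curve) solutions analyzed above can be illustrated with a minimal deterministic ring-network rate model. This is a generic sketch, not the paper's stochastic neural field model: the cosine connectivity, rectified-linear gain, and all parameter values (`J0`, `J1`, `tau`) are illustrative assumptions, and the bump here is pinned by a weakly tuned input rather than wandering stochastically.

```python
import numpy as np

# Minimal ring network: rate dynamics tau * dr/dt = -r + f(W r + I)
# with uniform inhibition (J0 < 0) plus cosine-tuned excitation (J1).
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 1.8                       # illustrative coupling values
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N
tau, dt = 10.0, 1.0                      # ms
f = lambda x: np.maximum(x, 0.0)         # rectified-linear gain

rng = np.random.default_rng(0)
r = 0.1 + 0.01 * rng.standard_normal(N)  # small random initial rates
I = 1.0 + 0.1 * np.cos(theta - np.pi)    # weakly tuned external drive
for _ in range(2000):
    r += dt / tau * (-r + f(W @ r + I))

# A localized bump of activity should settle near theta = pi,
# strongly amplifying the 10% input modulation.
peak_location = theta[np.argmax(r)]
print(peak_location, r.max())
```

With `J1` close to 2, the ring amplifies the weak input tuning roughly tenfold; adding a noise term to the update would make the bump's phase wander, which is the stochastic regime the paper analyzes.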
Affiliation(s)
- Paul C. Bressloff
- Department of Mathematics, University of Utah, Salt Lake City, Utah, USA
|
16
|
Abstract
Many fundamental neural computations, from normalization to rhythm generation, emerge from the same cortical hardware, yet they often require dedicated models to explain each phenomenon. Recently, the stabilized supralinear network (SSN) model has been used to explain a variety of nonlinear integration phenomena such as normalization, surround suppression, and contrast invariance. However, cortical circuits are also capable of implementing working memory and oscillations, which are often associated with distinct model classes. Here, we show that the SSN motif can serve as a universal circuit model that is sufficient to support not only stimulus integration phenomena but also persistent states and self-sustained network-wide oscillations, along with two coexisting stable states that have been linked with working memory.
A hallmark of cortical circuits is their versatility: they can perform multiple fundamental computations such as normalization, memory storage, and rhythm generation. Yet it is far from clear how such versatility can be achieved in a single circuit, given that specialized models are often needed to replicate each computation. Here, we show that the stabilized supralinear network (SSN) model, which was originally proposed for sensory integration phenomena such as contrast invariance, normalization, and surround suppression, can give rise to the dynamic cortical features of working memory, persistent activity, and rhythm generation. We study the SSN model analytically and uncover regimes where it can provide a substrate for working memory by supporting two stable steady states. Furthermore, we prove that the SSN model can sustain finite firing rates following input withdrawal and present an exact connectivity condition for such persistent activity. In addition, we show that the SSN model can undergo a supercritical Hopf bifurcation and generate global oscillations. Based on the SSN model, we outline the synaptic and neuronal mechanisms underlying the computational versatility of cortical circuits. Our work shows that the SSN is an exactly solvable nonlinear recurrent neural network model that could pave the way for a unified theory of cortical function.
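A minimal two-population sketch of the SSN dynamics described above: excitatory and inhibitory rates with a supralinear (power-law) transfer function, integrated to steady state. The weights, gain `k`, exponent `n`, and time constants are illustrative values in the style of published SSN studies, not parameters taken from this article.

```python
import numpy as np

# SSN rate equations:  tau * dr/dt = -r + k * relu(W @ r + h)**n
k, n = 0.04, 2.0                       # illustrative supralinear gain
W = np.array([[1.25, -0.65],           # E<-E, E<-I
              [1.20, -0.50]])          # I<-E, I<-I
tau = np.array([20.0, 10.0])           # ms; fast inhibition stabilizes
dt = 0.1

def steady_rate(h, steps=20000):
    """Euler-integrate the SSN to its steady state for constant input h."""
    r = np.zeros(2)
    for _ in range(steps):
        drive = np.maximum(W @ r + h, 0.0)
        r += dt / tau * (-r + k * drive ** n)
    return r

r_low = steady_rate(np.array([5.0, 5.0]))
r_high = steady_rate(np.array([10.0, 10.0]))
# In this input range the supralinear transfer dominates: doubling the
# input more than doubles the excitatory rate, while recurrent
# inhibition keeps the network stable.
print(r_low[0], r_high[0])
```

At still higher drives the same circuit enters the inhibition-stabilized, sublinear regime that produces normalization and surround suppression.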
|
17
|
Jedlicka P. Revisiting the Quantum Brain Hypothesis: Toward Quantum (Neuro)biology? Front Mol Neurosci 2017; 10:366. PMID: 29163041; PMCID: PMC5681944; DOI: 10.3389/fnmol.2017.00366.
Abstract
The nervous system is a non-linear dynamical complex system with many feedback loops. Conventional wisdom holds that quantum fluctuations in the brain are self-averaging and thus functionally negligible. However, this intuition might be misleading in the case of non-linear complex systems. Because of an extreme sensitivity to initial conditions, in complex systems the microscopic fluctuations may be amplified and thereby affect the system's behavior. In this way quantum dynamics might influence neuronal computations. Accumulating evidence in non-neuronal systems indicates that biological evolution is able to exploit quantum stochasticity. The recent rise of quantum biology as an emerging field at the border between quantum physics and the life sciences suggests that quantum events could also play a non-trivial role in neuronal cells. Direct experimental evidence for this is still missing, but future research should address the possibility that quantum events contribute to the extremely high complexity, variability, and computational power of neuronal dynamics.
|
18
|
Argaman T, Golomb D. Does layer 4 in the barrel cortex function as a balanced circuit when responding to whisker movements? Neuroscience 2017; 368:29-45. PMID: 28774782; DOI: 10.1016/j.neuroscience.2017.07.054.
Abstract
Neurons in one barrel in layer 4 (L4) in the mouse vibrissa somatosensory cortex are innervated mostly by neurons from the VPM nucleus and by other neurons within the same barrel. During quiet wakefulness or whisking in air, thalamic inputs vary slowly in time, and excitatory neurons rarely fire. A barrel in L4 contains a modest number of neurons; its synaptic conductances are not very strong, and its connections are not sparse. Are the dynamical properties of the L4 circuit similar to those expected from the fluctuation-dominated, balanced networks observed for large, strongly coupled, sparsely connected cortical circuits? To resolve this question, we analyze a network of 150 inhibitory parvalbumin-expressing fast-spiking interneurons innervated by the VPM thalamus with random connectivity, without or with 1600 low-firing excitatory neurons. Above threshold, the population-average firing rate of inhibitory cortical neurons increases linearly with the thalamic firing rate. The coefficient of variation (CV) is somewhat less than 1. Moderate levels of synchrony are induced by converging VPM inputs and by inhibitory interaction among neurons. The strengths of excitatory and inhibitory currents during whisking are about three times larger than threshold. We identify values of numbers of presynaptic neurons, synaptic delays between inhibitory neurons, and electrical coupling within the experimentally plausible ranges for which spike synchrony levels are low. Heterogeneity in in-degrees increases the width of the firing rate distribution to the experimentally observed value. We conclude that an L4 circuit in the low-synchrony regime exhibits qualitative dynamical properties similar to those of balanced networks.
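The coefficient of variation (CV) quoted above is defined over interspike intervals (ISIs) as std/mean. A quick illustration on synthetic ISIs (the exponential and gamma choices are illustrative stand-ins, not the paper's simulated spike trains): a Poisson process has CV = 1, while more regular, gamma-distributed ISIs have CV = 1/sqrt(shape) < 1.

```python
import numpy as np

# CV of interspike intervals: std(ISI) / mean(ISI).
rng = np.random.default_rng(1)

# Poisson-like firing: exponential ISIs, CV = 1.
isi_poisson = rng.exponential(scale=50.0, size=100_000)        # ms
# More regular firing: gamma ISIs with shape 16, CV = 1/4.
isi_regular = rng.gamma(shape=16.0, scale=50.0 / 16.0, size=100_000)

cv = lambda isi: isi.std() / isi.mean()
print(cv(isi_poisson), cv(isi_regular))
```

A CV "somewhat less than 1", as reported for the L4 model, lies between these extremes: irregular, fluctuation-driven firing with a weak refractory regularization.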
Affiliation(s)
- Tommer Argaman
- Dept. of Brain and Cognitive Sciences, Ben Gurion University, Be'er-Sheva 8410501, Israel; Zlotowski Center for Neuroscience, Ben Gurion University, Be'er-Sheva 8410501, Israel
- David Golomb
- Zlotowski Center for Neuroscience, Ben Gurion University, Be'er-Sheva 8410501, Israel; Depts. of Physiology and Cell Biology and Physics, Ben Gurion University, Be'er-Sheva 8410501, Israel
|
19
|
Angelucci A, Bijanzadeh M, Nurminen L, Federer F, Merlin S, Bressloff PC. Circuits and Mechanisms for Surround Modulation in Visual Cortex. Annu Rev Neurosci 2017; 40:425-451. PMID: 28471714; DOI: 10.1146/annurev-neuro-072116-031418.
Abstract
Surround modulation (SM) is a fundamental property of sensory neurons in many species and sensory modalities. SM is the ability of stimuli in the surround of a neuron's receptive field (RF) to modulate (typically suppress) the neuron's response to stimuli simultaneously presented inside the RF, a property thought to underlie optimal coding of sensory information and important perceptual functions. Understanding the circuits and mechanisms for SM can reveal fundamental principles of computation in sensory cortices, from mouse to human. The current debate centers on whether feedforward or intracortical circuits generate SM, and whether SM results from increased inhibition or reduced excitation. Here we present a working hypothesis, based on theoretical and experimental evidence, that SM results from feedforward, horizontal, and feedback interactions with local recurrent connections, via synaptic mechanisms involving both increased inhibition and reduced recurrent excitation. In particular, strong and balanced recurrent excitatory and inhibitory circuits play a crucial role in the computation of SM.
Affiliation(s)
- Alessandra Angelucci
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, Salt Lake City, Utah 84132
- Maryam Bijanzadeh
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, Salt Lake City, Utah 84132
- Lauri Nurminen
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, Salt Lake City, Utah 84132
- Frederick Federer
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, Salt Lake City, Utah 84132
- Sam Merlin
- Department of Ophthalmology and Visual Science, Moran Eye Institute, University of Utah, Salt Lake City, Utah 84132
- Paul C Bressloff
- Department of Mathematics, University of Utah, Salt Lake City, Utah 84132
|
20
|
Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach. Psychon Bull Rev 2017; 25:302-321. DOI: 10.3758/s13423-017-1280-1.
|
21
|
Stringer C, Pachitariu M, Steinmetz NA, Okun M, Bartho P, Harris KD, Sahani M, Lesica NA. Inhibitory control of correlated intrinsic variability in cortical networks. eLife 2016; 5:e19695. PMID: 27926356; PMCID: PMC5142814; DOI: 10.7554/eLife.19695.
Abstract
Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model directly to multi-neuron recordings from different rodent species, sensory modalities, and behavioral states. The model generated correlated variability without external noise and accurately reproduced the diverse activity patterns in our recordings. Analysis of the model parameters suggested that differences in noise correlations across recordings were due primarily to differences in the strength of feedback inhibition. Further analysis of our recordings confirmed that putative inhibitory neurons were indeed more active during desynchronized cortical states with weak noise correlations. Our results demonstrate that network models with intrinsically-generated variability can accurately reproduce the activity patterns observed in multi-neuron recordings and suggest that inhibition modulates the interactions between intrinsic dynamics and sensory inputs to control the strength of noise correlations.
Our brains contain billions of neurons, which are continually producing electrical signals to relay information around the brain. Yet most of our knowledge of how the brain works comes from studying the activity of one neuron at a time. Recently, studies of multiple neurons have shown that they tend to be active together in short bursts called “up” states, which are followed by periods in which they are less active called “down” states. When we are sleeping or under a general anesthetic, the neurons may be completely silent during down states, but when we are awake the difference in activity between the two states is usually less extreme. However, it is still not clear how the neurons generate these patterns of activity. To address this question, Stringer et al. studied the activity of neurons in the brains of awake and anesthetized rats, mice and gerbils. The experiments recorded electrical activity from many neurons at the same time and found a wide range of different activity patterns. A computational model based on these data suggests that differences in the degree to which some neurons suppress the activity of other neurons may account for this variety. Increasing the strength of these inhibitory signals in the model decreased the fluctuations in electrical activity across entire areas of the brain. Further analysis of the experimental data supported the model’s predictions by showing that inhibitory neurons – which act to reduce electrical activity in other neurons – were more active when there were fewer fluctuations in activity across the brain. The next step following on from this work would be to develop ways to build computer models that can mimic the activity of many more neurons at the same time. The models could then be used to interpret the electrical activity produced by many different kinds of neuron. This will enable researchers to test more sophisticated hypotheses about how the brain works.
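The noise correlations discussed above are correlations of trial-to-trial spike-count fluctuations at a fixed stimulus. The toy model below is not the paper's fitted spiking network; all numbers and the `inhibition` knob are illustrative. It shows the mechanism the analysis points to: a shared rate fluctuation induces positive pairwise correlations, and damping that fluctuation, as stronger feedback inhibition would, lowers them.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_noise_corr(inhibition, n_neurons=30, n_trials=400):
    """Mean pairwise spike-count correlation in a doubly stochastic toy model.

    A shared slow fluctuation drives all neurons; larger `inhibition`
    damps it, mimicking stronger feedback inhibition.
    """
    shared = rng.standard_normal(n_trials) / (1.0 + inhibition)
    rates = (10.0 + 4.0 * shared[None, :]
             + rng.standard_normal((n_neurons, n_trials)))   # private noise
    counts = rng.poisson(np.clip(rates, 0.1, None))          # spike counts
    c = np.corrcoef(counts)
    off_diag = c[~np.eye(n_neurons, dtype=bool)]
    return off_diag.mean()

weak = mean_noise_corr(inhibition=0.0)    # synchronized-like state
strong = mean_noise_corr(inhibition=4.0)  # desynchronized-like state
print(weak, strong)
```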
Affiliation(s)
- Carsen Stringer
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Marius Pachitariu
- Institute of Neurology, University College London, London, United Kingdom; Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Nicholas A Steinmetz
- Institute of Neurology, University College London, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Michael Okun
- Institute of Neurology, University College London, London, United Kingdom
- Peter Bartho
- MTA TTK NAP B Sleep Oscillations Research Group, Budapest, Hungary
- Kenneth D Harris
- Institute of Neurology, University College London, London, United Kingdom; Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
|
22
|
Rollo JL, Banihashemi N, Vafaee F, Crawford JW, Kuncic Z, Holsinger RMD. Unraveling the mechanistic complexity of Alzheimer's disease through systems biology. Alzheimers Dement 2015; 12:708-18. PMID: 26703952; DOI: 10.1016/j.jalz.2015.10.010.
Abstract
Alzheimer's disease (AD) is a complex, multifactorial disease that has reached global epidemic proportions. The challenge remains to fully identify its underlying molecular mechanisms that will enable development of accurate diagnostic tools and therapeutics. Conventional experimental approaches that target individual or small sets of genes or proteins may overlook important parts of the regulatory network, which limits the opportunity of identifying multitarget interventions. Our perspective is that a more complete insight into potential treatment options for AD will only be made possible through studying the disease as a system. We propose an integrative systems biology approach that we argue has been largely untapped in AD research. We present key publications to demonstrate the value of this approach and discuss the potential to intensify research efforts in AD through transdisciplinary collaboration. We highlight challenges and opportunities for significant breakthroughs that could be made if a systems biology approach is fully exploited.
Affiliation(s)
- Jennifer L Rollo
- Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia; Laboratory of Molecular Neuroscience, Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Department of Molecular Neuroscience, Institute of Neurology, University College of London, London, UK
- Nahid Banihashemi
- Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia
- Fatemeh Vafaee
- Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia; School of Mathematics and Statistics, University of Sydney, Sydney, NSW, Australia
- Zdenka Kuncic
- Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia; School of Physics, The University of Sydney, Sydney, NSW, Australia
- R M Damian Holsinger
- Laboratory of Molecular Neuroscience, Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Discipline of Biomedical Science, School of Medical Sciences, Sydney Medical School, The University of Sydney, Lidcombe, NSW, Australia
|
23
|
Neural Circuits: Male Mating Motifs. Neuron 2015; 87:912-4. PMID: 26335638; DOI: 10.1016/j.neuron.2015.08.017.
Abstract
Characterizing microcircuit motifs in intact nervous systems is essential to relate neural computations to behavior. In this issue of Neuron, Clowney et al. (2015) identify recurring, parallel feedforward excitatory and inhibitory pathways in male Drosophila's courtship circuitry, which might explain decisive mate choice.
|
24
|
Sadeh S, Clopath C, Rotter S. Emergence of Functional Specificity in Balanced Networks with Synaptic Plasticity. PLoS Comput Biol 2015; 11:e1004307. PMID: 26090844; PMCID: PMC4474917; DOI: 10.1371/journal.pcbi.1004307.
Abstract
In rodent visual cortex, synaptic connections between orientation-selective neurons are unspecific at the time of eye opening, and become to some degree functionally specific only later during development. An explanation for this two-stage process was proposed in terms of Hebbian plasticity based on visual experience that would eventually enhance connections between neurons with similar response features. For this to work, however, two conditions must be satisfied: First, orientation selective neuronal responses must exist before specific recurrent synaptic connections can be established. Second, Hebbian learning must be compatible with the recurrent network dynamics contributing to orientation selectivity, and the resulting specific connectivity must remain stable for unspecific background activity. Previous studies have mainly focused on very simple models, where the receptive fields of neurons were essentially determined by feedforward mechanisms, and where the recurrent network was small, lacking the complex recurrent dynamics of large-scale networks of excitatory and inhibitory neurons. Here we studied the emergence of functionally specific connectivity in large-scale recurrent networks with synaptic plasticity. Our results show that balanced random networks, which already exhibit highly selective responses at eye opening, can develop feature-specific connectivity if appropriate rules of synaptic plasticity are invoked within and between excitatory and inhibitory populations. If these conditions are met, the initial orientation selectivity guides the process of Hebbian learning and, as a result, functionally specific and a surplus of bidirectional connections emerge. 
Our results thus demonstrate the cooperation of synaptic plasticity and recurrent dynamics in large-scale functional networks with realistic receptive fields, highlight the role of inhibition as a critical element in this process, and pave the way for further computational studies of sensory processing in neocortical network models equipped with synaptic plasticity.
In primary visual cortex of mammals, neurons are selective to the orientation of contrast edges. In some species, such as cats and monkeys, neurons preferring similar orientations are adjacent on the cortical surface, leading to smooth orientation maps. In rodents, in contrast, such spatial orientation maps do not exist, and neurons of different specificities are mixed in a salt-and-pepper fashion. During development, however, a “functional” map of orientation selectivity emerges, in which connections between neurons of similar preferred orientations are selectively enhanced. Here we show how such feature-specific connectivity can arise in realistic neocortical networks of excitatory and inhibitory neurons. Our results demonstrate how recurrent dynamics can work in cooperation with synaptic plasticity to form networks in which neurons preferring similar stimulus features connect more strongly together. Such networks, in turn, are known to enhance the specificity of neuronal responses to a stimulus. Our study thus reveals how self-organizing connectivity enables neuronal networks to achieve new or enhanced functions, and it underlines the essential role of recurrent inhibition and plasticity in this process.
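A toy sketch of the feature-specific wiring described above: orientation-tuned rates drive Hebbian co-activation, and a final row normalization stands in for homeostatic scaling. This is not the paper's plastic balanced network; the tuning curve, learning scheme, and the thresholds defining "similar" and "dissimilar" pairs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 60
theta = np.linspace(0, np.pi, N, endpoint=False)   # preferred orientations
W = np.zeros((N, N))

for _ in range(3000):
    stim = rng.uniform(0, np.pi)                   # random oriented stimulus
    r = np.exp(np.cos(2.0 * (theta - stim)))       # tuned responses
    W += np.outer(r, r)                            # Hebbian co-activation

np.fill_diagonal(W, 0.0)                           # no self-connections
W /= W.sum(axis=1, keepdims=True)                  # homeostatic row scaling

# Connections between similarly tuned neurons should now dominate.
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, np.pi - d)                       # circular orientation distance
off_diag = ~np.eye(N, dtype=bool)
similar = W[(d < np.pi / 8) & off_diag].mean()
dissimilar = W[d > 3 * np.pi / 8].mean()
print(similar / dissimilar)
```

Even from an initially unspecific (here, zero) weight matrix, correlated tuned activity alone biases the learned connectivity toward like-to-like pairs; the paper's contribution is showing this survives in a full balanced E-I network with realistic dynamics.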
Affiliation(s)
- Sadra Sadeh
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany; Bioengineering Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
|
25
|
Thomas PJ. Commentary on "Structured chaos shapes spike-response noise entropy in balanced neural networks," by Lajoie, Thivierge, and Shea-Brown. Front Comput Neurosci 2015; 9:23. PMID: 25805988; PMCID: PMC4354338; DOI: 10.3389/fncom.2015.00023.
|
26
|
Abstract
Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves neural representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function?
Affiliation(s)
- Adrienne Fairhall
- Department of Physiology and Biophysics, University of Washington, 1705 NE Pacific St., HSB G424, Box 357290, Seattle, WA 98195-7290, USA
|
27
|
|